Concretization of AI competence in software asset management

14/05/2025

Monitoring usage data, detecting over- and under-licensing, creating demand forecasts, integrating data from different sources and optimizing software costs - all supported by AI. It is a prospect that promises new efficiency gains and savings, and one likely to delight many license managers.

Current figures also show that integrating AI remains at the top of many companies' priority lists: the proportion of IT decision-makers who consider AI the most important strategic task for 2025 has risen year on year from 35% to 46% - far ahead of traditional topics such as cost reduction or IT security. This makes it clear that AI is here to stay.

Behind these technological advantages, however, lie numerous legal and ethical requirements that the EU AI Act will regulate in future. These regulations affect not only traditional AI products, but also all internal tools and applications - especially those used in the context of SAM to monitor, analyze and evaluate software.

Classification and implementation of measures

Since February 2, 2025, companies have been obliged to identify and assess all AI applications in accordance with the EU AI Act and assign them to a risk category. This also applies to AI-supported systems in software asset management - such as tools for automated license analysis, contract review or demand forecasting. Depending on the area of application, data access and decision-making impact, these must be classified as anything from low to high risk.

Consider, for example, a company that uses an AI system to evaluate historical consumption data, interpret license models and make licensing recommendations. Even though this application is not safety-critical, it prepares decisions with financial implications and accesses sensitive company data - and therefore falls within the scope of the EU AI Act.
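To make the inventory and classification step more tangible, here is a minimal, purely illustrative Python sketch. The class names, the `LicenseForecaster` tool and the triage rules are all invented for this example; an actual classification must follow the criteria of the EU AI Act, not this simplification:

```python
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class SamAiTool:
    """One entry in the company-wide inventory of AI applications."""
    name: str
    purpose: str
    accesses_sensitive_data: bool
    prepares_financial_decisions: bool


def triage(tool: SamAiTool) -> RiskCategory:
    """Naive first-pass triage - a real assessment must apply the
    legal criteria of the EU AI Act, not these simplified rules."""
    if tool.accesses_sensitive_data and tool.prepares_financial_decisions:
        return RiskCategory.HIGH
    if tool.prepares_financial_decisions:
        return RiskCategory.LIMITED
    return RiskCategory.MINIMAL


forecaster = SamAiTool(
    name="LicenseForecaster",
    purpose="demand forecasts and licensing recommendations",
    accesses_sensitive_data=True,
    prepares_financial_decisions=True,
)
print(triage(forecaster).value)  # high
```

The point of such a structure is not the triage logic itself, but that every AI-supported SAM tool gets a documented entry with the attributes that drive its risk assessment.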

If such a system is classified as high-risk AI, comprehensive regulatory requirements must be met. These include not only the technical traceability and transparency of the decision-making logic, but also continuous testing processes, safety precautions, human supervision and complete documentation.

Classification alone is only the first step. Companies must also establish structural, organizational and personnel measures to ensure the compliant and responsible use of their AI systems in the long term - for example through targeted training, internal guidelines, control mechanisms and clear responsibilities. Particularly in the SAM area, where AI is increasingly preparing decisions, a conscious approach to these requirements is crucial for legal compliance and sustainable efficiency gains.

Implementation strategies

But how can companies implement these requirements? Or will AI simply train us itself in the future?

So far, the legislator has remained rather vague on this point. Article 4 of the EU AI Act merely states that providers and deployers of AI systems must take appropriate measures to ensure that their staff - as well as external service providers acting on their behalf - have a sufficient level of AI competence, taking into account their technical knowledge, experience, education and training as well as the specific context in which the systems are used. The following recommendations for action show what this can mean in concrete terms:

  • Risk management in the license context: Introduction of a structured classification process for all AI-based SAM tools - including regular risk analyses, e.g. on the dependency on training data, the predictive quality of demand forecasts or compliance with licensing requirements.
  • Technical understanding of SAM AI: License managers and IT teams need to understand how the AI works: How does it process license data? Which sources are integrated? What rules does the system use for evaluation? This is the only way to identify and minimize risks in license management.
  • Documentation and transparency of decision logic: All AI components used in the SAM should be fully documented - including the algorithms, evaluation models and decision-making logic used. This applies to automated risk classifications, license recommendations and contract interpretations.
  • Employee training and awareness-raising: Training must address SAM-specific AI topics - such as dealing with biased training data, transparency requirements for automated contract evaluation or the safe use of freely available AI tools in the licensing environment. The aim is also to recognize and classify so-called shadow AI - i.e. unofficially used AI systems.
  • External controls: A regular external audit can help demonstrate that SAM AI systems conform to regulatory requirements. This is particularly important for companies with complex license models or particularly sensitive data.
  • Internal guidelines and AI responsibility in license management: Companies should establish binding guidelines on the use of AI in SAM - including clear responsibilities. Appointing an AI officer or representative can help to coordinate monitoring, documentation and internal communication. This role can also act as a central point of contact - for IT, data protection, purchasing and external auditors, for example.

Implementation deadline

By August 2, 2025, all companies that use AI systems - including in the area of software asset management - must be able to prove that their systems are used in compliance with the regulations. This includes both organizational measures and technical checks. Those who fail to meet the requirements risk severe sanctions.

Depending on their severity, violations of the AI Act can be punished with fines of up to 35 million euros or 7% of annual global turnover, whichever is higher. Violations of documentation requirements or the misuse of high-risk AI can result in fines of up to 15 million euros or 3% of turnover, while misleading statements to supervisory authorities can lead to fines of up to 7.5 million euros or 1% of turnover.
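For companies, these ceilings work as the higher of the fixed amount and the turnover percentage. A minimal sketch of that arithmetic (the turnover figure is invented for illustration):

```python
def fine_cap(fixed_eur: float, turnover_share: float, annual_turnover_eur: float) -> float:
    """Upper limit of an AI Act fine for a company: the higher of the
    fixed amount and the given share of worldwide annual turnover."""
    return max(fixed_eur, turnover_share * annual_turnover_eur)


# Hypothetical company with EUR 1 billion in global annual turnover:
print(fine_cap(35_000_000, 0.07, 1_000_000_000))  # 70000000.0 (7% exceeds EUR 35m)
print(fine_cap(15_000_000, 0.03, 1_000_000_000))  # 30000000.0
```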

Correctly assigning SAM AI applications to the appropriate risk class is therefore already of considerable importance.

Monitoring is carried out by national authorities - in Germany, the BSI, the Federal Cartel Office and the data protection authorities are currently under discussion for this role. At EU level, a central AI Office will coordinate implementation and ensure uniform application across the member states.

Conclusion

In software asset management, too, AI brings not only new opportunities but also new obligations. Those who prepare today can not only act in a legally compliant manner, but also increase trust and efficiency in the long term. Our experts will be happy to support you with their specialist knowledge - from classification and policy development to training your license managers and other employees.

Author: Eric Löwenstein