Dealing with artificial intelligence and chatbots in DAX companies: The need for corporate guidelines
In today's business world, artificial intelligence (AI) and chatbots are no longer technical gimmicks but essential tools that significantly shape the strategies of large companies. DAX companies in particular, which are among the largest and most influential in Germany, face the challenge of integrating these technologies responsibly. Clearly defined corporate guidelines governing the use of AI and chatbots play a key role here.
Common strategies and approaches
Although each DAX company pursues its own specific strategies for integrating AI, there are also similarities. Agility and customer focus are key aspects that many companies share. Volkswagen, for example, focuses on agile product development and the systematic use of data to optimise its products and processes. Mercedes-Benz, in turn, uses AI to analyse production data and improve quality management, with the democratisation of data regarded as crucial for rapid decision-making.
Another common denominator is the focus on data protection and ethical standards. Bayer, for example, is committed to ethical standards and the protection of privacy, while Deutsche Telekom actively discusses the opportunities and risks of digitalisation. These companies recognise that the responsible use of AI must not only comply with legal requirements but also earn the trust of customers and the public.
Differences in implementation
Despite these similarities, the specific approaches vary between the companies. Volkswagen, for example, relies on outsourced, overarching innovation centres to promote new technologies and optimise production processes, while BASF focuses on the research and application of AI in individual divisions to support long-term sustainability goals.
Another example is Allianz, which sees AI as a tool for improving services while placing people at the centre of customer interaction. In contrast, BMW pursues a strongly data-driven approach that includes over 100 digital applications to improve data consistency and production processes.
Fresenius, on the other hand, focuses on medical care and has developed an ethical framework for the use of AI in order to minimise risks. This shows that the different industries in which these companies operate also lead to varying approaches to the integration of AI.
The need for corporate guidelines
Given the complexity and potential risks associated with the use of AI and chatbots, the development of clear corporate guidelines is essential. These guidelines must not only take into account technical aspects, but also ethical and social issues.
Siemens, for example, has formulated principles to ensure the responsible use of AI and to combat bias in the systems developed. The company is also striving for a cultural change that promotes an environment of trust. Similarly, SAP emphasises continuous improvement and the integration of AI into all business processes in order to increase efficiency.
Deutsche Telekom: House of Digital Responsibility
An excellent example of this is Deutsche Telekom's "House of Digital Responsibility". This concept embodies a technology- and value-orientated vision that focuses on the people behind all stakeholder groups: customers, employees, shareholders, partners and society as a whole. Telekom is committed to human-centred technology development and the responsible use of digital and analogue processes. The House of Digital Responsibility rests on a foundation of laws and regulations that are integrated into daily business activities, as well as an explicit commitment to the UN Guiding Principles on Business and Human Rights. An understanding of cultural differences and shared values forms the basis of this approach.
These basic principles are supplemented by the pillars "Data privacy and security" and "Transparency and dialogue", which ensure the protection of personal data and open communication about the opportunities and risks of digitalisation. Deutsche Telekom expresses its digital responsibility in four central areas: digital ethics promotes sovereign, people-oriented technologies; digital participation ensures access to the digital society for all; future work places people at the centre of a dynamic transformation; and climate and resource protection commits to the sustainable use of resources through digitalisation.
Siemens: Responsible AI
Siemens focuses on responsible AI in order to make artificial intelligence trustworthy and ethical. The company sees responsible AI as a combination of transparency, fairness and responsibility, with AI increasingly influencing everyday life - from voice assistants to the autonomous optimisation of industrial processes. Decisive progress in machine learning and data availability has created new opportunities, but also the challenge of avoiding unpredictable or even discriminatory outcomes due to the "black box" structure of modern AI systems.
Siemens is meeting these challenges with seven mitigation principles and is focussing specifically on explainable AI, which makes decisions comprehensible, and on active learning approaches that integrate human feedback into the training process. The aim is trustworthy AI that actively alerts users to uncertainties or errors and incorporates privacy-preserving approaches to protect personal data. Edge AI also strengthens data security through decentralised data processing at device level.
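To make the idea of uncertainty-aware, human-in-the-loop AI more concrete, the following Python sketch shows one generic pattern: predictions below a confidence threshold are routed to a human reviewer, whose corrected labels can later flow back into training. The threshold, class and function names are purely illustrative assumptions and do not describe Siemens' actual systems.

```python
from dataclasses import dataclass

# Assumed cut-off for illustration only, not a value used by Siemens.
CONFIDENCE_THRESHOLD = 0.80

@dataclass
class Prediction:
    label: str
    confidence: float  # the model's estimated probability for its label

def route_prediction(pred: Prediction) -> str:
    """Route confident predictions automatically, flag uncertain ones for review."""
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return "auto"
    # Low confidence: surface the case to a human expert; the corrected label
    # could later be fed back into the training set (the active-learning step).
    return "human_review"

if __name__ == "__main__":
    for p in [Prediction("defect", 0.95), Prediction("defect", 0.55)]:
        print(p.label, p.confidence, "->", route_prediction(p))
```

In practice, the flagged cases would feed a review queue and a retraining pipeline rather than a print statement; the sketch only illustrates the routing decision itself.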
Responsible AI is an interdisciplinary process that encompasses technical, ethical and cultural aspects. Siemens relies on diversity in its development teams to minimise bias and on continuous training to raise awareness of ethical standards in AI. This promotes a technological culture that builds trust while improving the efficiency and adaptability of AI in industrial contexts.
BMW Group: Digital Identity
The BMW Group's Digital Identity is a clearly defined framework for the responsible use of digital technologies across the organisation. This guideline is based on five principles that place people's needs at the centre and thus promote trust, data protection and cybersecurity. As a central element of digital competence, the BMW Group has created a digital identity that aligns all digital processes and innovations with sustainability, user-friendliness and transparency. In particular, the aim is to improve the customer experience by integrating data protection and cybersecurity from the outset ("Privacy, Safety, and Security by Design"). BMW also promotes the principle of open collaboration through the transparent use of digital solutions and support for open source initiatives. Training and the targeted promotion of digital skills for employees are also an integral part of the strategy to enable the democratic and responsible use of digital technologies.
Relevance of the topic
In its assessment, EY emphasises that companies must act proactively when using AI systems, particularly with regard to the upcoming EU AI Regulation. The regulation calls for comprehensive protection of the fundamental rights of natural persons in connection with AI and particularly addresses so-called high-risk AI systems. Companies should therefore immediately establish AI governance that identifies, classifies and documents all AI use cases in order to fulfil the regulatory requirements at an early stage. It is particularly important to consider data governance and data management in the development phase of such systems. Subsequent adjustments often involve considerable effort, especially if copyright infringements or quality deficiencies in data processing have already occurred. EY therefore recommends integrating basic legal requirements such as data protection, data management and IP checks into existing management systems and establishing compliance structures at an early stage in order to minimise legal risks and future-proof the use of AI.
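As a rough illustration of what such an AI governance inventory could look like, the following Python sketch models a minimal register that identifies, classifies and documents AI use cases, loosely following the risk tiers discussed around the EU AI Regulation. All field names, categories and the example entry are assumptions made for illustration and are not taken from EY or from the legal text.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List

class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AIUseCase:
    name: str
    owner: str                      # accountable business unit or role
    purpose: str
    risk_class: RiskClass
    data_sources: List[str] = field(default_factory=list)
    documented_on: date = field(default_factory=date.today)

# A simple in-memory register with one invented example entry.
register: List[AIUseCase] = [
    AIUseCase(
        name="Customer service chatbot",
        owner="Customer Care",
        purpose="Answer routine customer queries",
        risk_class=RiskClass.LIMITED,
        data_sources=["FAQ corpus", "anonymised chat logs"],
    ),
]

def high_risk_cases(cases: List[AIUseCase]) -> List[AIUseCase]:
    """Return the use cases that require the strictest controls and documentation."""
    return [c for c in cases if c.risk_class is RiskClass.HIGH]

if __name__ == "__main__":
    print(f"{len(register)} use case(s) registered, "
          f"{len(high_risk_cases(register))} classified as high-risk")
```

In practice such a register would live in a compliance or GRC system rather than in code, but the structure indicates the kind of information that would need to be captured per use case.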
In addition to EY's assessment, Munich Re also emphasises the growing importance of specialised insurance cover for AI risks, similar to the development of the cyber insurance market in the 2000s. Munich Re sees a parallel to the early days of cyber insurance in the increasing spread and regulation of AI: While the first policies were specifically tailored to certain loss scenarios, a standardised market for AI insurance is expected in the long term. This change is likely to be driven by regulatory requirements, such as the planned EU AI regulation, which will force companies to proactively manage AI-related risks and integrate compliance measures.
Munich Re points to the possibility of "Silent AI" exposure, where AI-related risks are not explicitly addressed in traditional insurance contracts and are therefore difficult to assess. Similar to cyber risks, this could lead to specific exclusions in existing policies and increase the need for specialised AI insurance. Munich Re advises companies to carefully analyse their AI applications and the associated risks and to establish appropriate governance in order to protect themselves in good time in the changing regulatory environment and minimise liability risks.
Conclusion
In summary, artificial intelligence and chatbots are not just technological tools for DAX companies but strategic components of their business models. The focus on agility, data protection, ethical standards and the promotion of innovation shows that these companies want not only to remain competitive in the digital age but also to act responsibly.
The diversity of approaches makes it clear that uniform standards are necessary in order to maximise the opportunities offered by AI while minimising the risks. Corporate guidelines play a decisive role here by creating clear framework conditions that support the responsible use of these technologies.
Author: Henri Fild