Building blocks of AI security in the company
This overview outlines the key areas of action for the safe and responsible use of artificial intelligence.
AI security governance
Companies must comply with regulatory and industry-specific requirements relating to AI security, e.g. the EU AI Act.
AI security frameworks & best practices
A catalogue of measures to mitigate AI security risks, tailored to the individual company, grounded in research and practical experience, and aligned with established best practices and leading expert guidance.
Core components
In addition to the familiar components of code, infrastructure and applications, AI introduces new challenges through data, training and model development. A well-founded assessment of the specific risks of AI development must therefore take into account how these components interact.
People, processes and technology
People, processes and technology must be considered across all dimensions of a holistic AI security approach, e.g. with regard to the safe handling of tools, the impact on those responsible for them, and quality assurance processes.