The EU AI Act comes into force today and outlines regulations for the development, marketing, implementation and use of artificial intelligence in the European Union.
The Council wrote that the law aims to “promote the adoption of human-centric and trustworthy artificial intelligence while ensuring a high degree of protection of health, safety, and society,” and that it “will safeguard fundamental rights, including democracy, the rule of law and environmental protection, protect against the harmful effects of AI systems in the EU and support innovation.”
According to the law, some high-risk uses of AI include:
- Implementation in medical devices.
- Biometric identification.
- Determination of access to services such as healthcare.
- Any form of automated processing of personal data.
- Emotion recognition for medical or safety reasons.
“Biometrics” is defined as “the automated recognition of a human’s physical, physiological, or behavioral characteristics, such as face, eye movements, body type, voice, cadence, gait, posture, heart rate, blood pressure, odor, or keystroke characteristics, for the purpose of establishing an individual’s identity, with or without the individual’s consent, by comparing that individual’s biometric data with biometric data stored in a reference database,” the regulators wrote.
The regulation excludes the use of AI for biometric authentication, i.e. verifying that an individual is who they say they are.
The law provides that special consideration must be given when AI is used to determine whether individuals can access essential public and private services, because such systems are classified as high risk. These services include healthcare, as well as benefits covering maternity, work-related injury, illness, unemployment, dependency, old age, social assistance and housing assistance.
The use of AI for the automated processing of personal data is also considered high risk.
The law states that “A European health data space will facilitate non-discriminatory access to health data and the training of AI algorithms on those datasets in a privacy-preserving, secure, timely, transparent and trustworthy manner and with appropriate institutional governance.”
“Relevant competent authorities (including sector authorities) that provide or support access to data may also support the provision of high-quality data for training, validation and testing of AI systems.”
Companies testing high-risk AI systems in real-world conditions must obtain informed consent from participants.
Organisations must also keep logs of events that occur during testing of their systems for at least six months, and must report any serious incident that occurs during testing to the market surveillance authority of the Member State where the incident occurred.
The law states that AI must not be used for emotion recognition related to “emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction or amusement.”
However, there is no prohibition on using AI to recognize emotions related to physical states such as pain or fatigue, for example in systems that detect fatigue in professional pilots or drivers to prevent accidents.
Transparency requirements, meaning traceability and explainability, apply to AI systems that interact with humans, to AI-generated or manipulated content (e.g. deepfakes), and to certain AI applications such as permitted emotion recognition and biometric categorisation systems.
Companies are also expected to eliminate or reduce the risk of bias in their AI applications, and to put mitigation measures in place if bias nonetheless occurs.
The law underlines the Council’s intention to protect EU citizens from the potential risks of AI, but also makes clear that its aim is not to stifle innovation.
“The regulation must support innovation, respect scientific freedom and not hinder research and development activities. Accordingly, it must exempt from its scope AI systems and models that have been specifically developed and are operated solely for the purposes of scientific research and development,” the regulators wrote.
“Furthermore, it must be ensured that the regulation does not affect scientific research and development activities regarding AI systems or models before they are placed on the market or put into service.”