
Health and healthcare in the AI Act 


Health and healthcare are central to the AI Act recently adopted by the European Union. In this legislation, the various uses and applications of AI are classified according to the level of risk they pose to health, safety and fundamental rights. The irecs team, having worked on the ethical issues of AI in health and healthcare, is delighted to see that a number of its conclusions and recommendations coincide with key aspects of the new legislation, concerning individual health, public health, healthcare systems and biomedical research.


Concerning individual health, the AI Act is concerned that "AI-enabled manipulative techniques", or nudging, could prove dangerous to mental health. It also stresses the importance of guaranteeing the reliability of diagnostic systems and medical decision support systems. The Act further anticipates the consequences of possible malfunctions in AI systems that could lead to health problems, and underlines the need to establish responsibilities in the event of negative impacts on health resulting from the use of AI systems.


With regard to public health, the AI Act refers in particular to AI systems that could be used to handle emergency calls, triage patients and assess systemic risks across the European Union. Here too, particular consideration is given to "biometric categorisation", defined as "assigning natural persons to specific categories on the basis of their biometric data". In these applications of AI, as in genetic engineering, which irecs is also studying from an ethical point of view, the aim is to avoid a drift towards eugenics that could result from the use of AI in healthcare. In relation to healthcare systems and health economics, the AI Act also covers AI applications in the management of welfare benefits and in risk assessment and pricing for public and private health insurance.


In terms of medical research, the Act aims to guarantee access to high-quality datasets in a secure, transparent and privacy-preserving manner, so that users of healthcare systems are not deprived of the benefits of AI applications in this field. The planned sharing of health data in the European Health Data Space is designed to facilitate EU-wide machine learning on health issues. However, the Act specifies that the development of AI systems for health in regulatory sandboxes, covering "disease detection, diagnosis, prevention, control and treatment and improvement of health care systems", shall be carried out "by a public authority or another natural or legal person governed by public law or by private law".


More generally, the Act identifies certain AI systems relevant to health and healthcare as high-risk: those designed for biometric categorisation and those dedicated to emotion recognition. For irecs, the AI Act represents a good compromise between academic freedom and the protection of individuals and their dignity in health and healthcare. Its adoption is undoubtedly an important step forward in clarifying the ethical and legal issues surrounding the deployment of AI techniques in this field.



Author:

Etienne Aucouturier (Alternative Energies and Atomic Energy Commission (CEA), France)
Etienne Aucouturier graduated with a PhD in Philosophy from Université Paris 1 Panthéon-Sorbonne. He is currently a researcher in Ethics of Science and Technology at the CEA and a member of the Université Paris-Saclay research ethics committee.
