
How the EU AI Act could guide the ethics of AI research 


In the blink of an eye, artificial intelligence has gone from a theoretical pursuit to tools embedded in software, search engines and even the healthcare systems millions of people use each day. Yet approaches to addressing the ethical aspects of AI in research are still in their infancy. AI technologies pose unique and often unforeseen ethical challenges that are not yet reflected in many research ethics review processes.


The AI field is abuzz with the upcoming EU AI Act and its far-reaching consequences for the technology. Expected to be the world's first comprehensive regulation of AI, the Act is driven by the desire to mitigate potential harms before they take root in society. Even though the Act does not apply to the research field, it will almost inevitably influence it. So how does the Act work, and how can researchers protect communities from the potential harms of AI, even without a legal obligation to do so?


Traditionally, the ethical approval of a research study focused solely on protecting study participants. In the past decade, research ethics committees have begun considering the wider implications of research and scientific studies for society, communities and the environment. The Ada Lovelace Institute, a leading research body on AI, calls the unintended societal consequences of an AI technology 'ethical debt'. For example, Facebook researchers developed an AI tool to detect a user's suicidal ideation based on their social media posts. A technology that can infer sensitive health data about a person, without being subject to the reporting and regulatory requirements that bind medical and mental health professionals, poses an enormous privacy risk.


AI holds the potential to alter society and the environment so fundamentally that many experts urge research ethics bodies to establish dedicated processes for reviewing and approving AI projects. Researchers need to identify and mitigate risks, enshrine data protection and transparency, and create accountability for the decisions made by AI.

But the novelty and complexity of AI technologies mean that many research ethics committees remain in unfamiliar territory. Ethics guidelines are already being developed, such as the ethics framework for research using AI in European Commission-funded projects. Institutions like Stanford University and UNESCO have published ethics guidelines for AI, and the Horizon 2020 project SIENNA developed ethical frameworks and recommendations for AI and robotics. But these guidelines have yet to see broad uptake across the research community.

Dr Anaïs Resseguier, an ethics researcher at Trilateral Research and a member of the irecs project, argues that even though the upcoming AI Act does not apply to the research field, it can provide the research community with resources to mitigate harms. Such guidance will also help put AI models on the path to compliance with the AI Act's obligations before they enter the market.


The draft AI Act (as of January 2024) takes a risk-based approach: the higher the risk an AI technology poses, the greater the regulatory requirements. The riskiest technologies are deemed unacceptable and prohibited outright because of the threat they pose to fundamental rights, democracy or the environment. Imagine, for example, a public social score that ranks you on your behaviour in public. A step down from unacceptable are high-risk technologies that could threaten human rights, such as AI used for law enforcement. These will be subject to repeated reviews and testing to identify and mitigate risks.

The AI Act can serve as one of many contributions to the emerging area of research ethics for AI technologies. Alongside this work, the irecs project is creating training materials and recommendations to help research ethics committees, and researchers themselves, understand AI technologies and the risks and benefits they hold.


We have a window of time before the lid comes off Pandora's box. By empowering ourselves with knowledge and proactive approaches, the research field can protect not only its participants, but the world as a whole.
