
The EU AI Act: Everything life sciences companies need to know

April 16, 2024 | Article | 10-minute read



With the passage on March 13 of the EU’s AI Act, the world gets its first comprehensive set of regulations governing the use of artificial intelligence (AI). With it, life sciences companies finally gain clarity on what constitutes safe and responsible AI, the requirements for developing and deploying AI in the EU, which AI use cases are prohibited and the penalties that will befall companies that fail to comply.

 

While the Act introduces an additional regulatory burden for life sciences companies, its provisions largely mirror the precepts of responsible AI we and others have advocated for years. Companies that deftly navigate the Act’s statutes stand to gain competitive advantage by offering demonstrably safe and reliable AI solutions; at the same time, stricter regulation should yield a higher overall standard of healthcare AI.

 

Here’s what pharmaceutical, medtech and healthtech companies need to know as they prepare for the Act to go into force.

Key provisions of the EU AI Act



The new EU AI Act seeks to regulate AI systems that could negatively impact people’s safety and fundamental rights by creating a comprehensive set of control requirements and penalties for providers and deployers of high-risk AI systems. The Act builds on existing legal frameworks, such as the General Data Protection Regulation (GDPR), the Medical Device Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR), to provide a single legal framework within the EU and to promote innovation and investment in AI while protecting individual rights and privacy.

 

Here’s what’s in it.

 

Scope. The new Act covers anyone developing or deploying AI systems within the EU (regardless of where the company is incorporated), individuals using AI systems inside the EU for anything other than personal use, and anyone building or using AI systems located outside the EU whose outputs will be used inside the EU.

 

AI risk categories. The Act takes a tiered approach to managing AI risk by designating three categories of AI systems, each of which requires its own set of actions and controls. Life sciences companies will need to ensure their AI systems are properly identified, categorized and assured against the corresponding requirements.

 

Risk category #1: Prohibited AI practices. These systems are deemed to present an unacceptable risk and are strictly forbidden under the EU AI Act. The Act defines them as AI systems that (among other uses):

  • Distort people’s behavior in any way that could cause them harm, either subliminally or by exploiting vulnerabilities based on age or mental or physical disability
  • Constitute “social credit” systems operated by or on behalf of public authorities

Risk category #2: High-risk AI systems. These systems come with a long list of assurance requirements, including conformity assessments, quality management and technical documentation. High-risk AI systems may pertain to (among other uses):

  • Both real-time and retrospective biometric identification
  • Access to essential private services and public benefits
  • Products or safety components of products that are covered by the EU’s Medical Device Regulation and In Vitro Diagnostic Medical Devices Regulation (MDR and IVDR)

Risk category #3: Other AI systems posing limited risk. AI systems designed to interact with people (think chatbots, digital nudges and the like) either must inform users they’re interacting with an AI or be built in such a way that it’s obvious given the context.

 

Penalties. Companies found to have violated the Act’s rules against prohibited AI practices will be subject to a fine of €35 million or 7% of their worldwide annual turnover, whichever is higher. Those found to be non-compliant with the rules laid out for high-risk systems will be subject to a fine of €15 million or 3% of worldwide annual turnover, again whichever is higher.
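To make the penalty arithmetic concrete, here is a minimal Python sketch of how the “whichever is higher” rule resolves for a given worldwide annual turnover. The function name and the example turnover figure are ours, purely for illustration; only the thresholds come from the Act as described above.

    def applicable_fine(worldwide_turnover_eur: float, violation: str) -> float:
        """Illustrative only: resolve the EU AI Act's 'whichever is higher' fine rule.

        Tiers mirror the figures described above: prohibited-practice violations
        (EUR 35M or 7% of worldwide annual turnover) and high-risk non-compliance
        (EUR 15M or 3%).
        """
        tiers = {
            "prohibited_practice": (35_000_000, 0.07),
            "high_risk_noncompliance": (15_000_000, 0.03),
        }
        flat_amount, pct_of_turnover = tiers[violation]
        return max(flat_amount, pct_of_turnover * worldwide_turnover_eur)

    # Example: a company with EUR 2 billion in worldwide annual turnover faces
    # up to EUR 140 million (7% of 2 billion) for a prohibited-practice violation.
    print(applicable_fine(2_000_000_000, "prohibited_practice"))  # 140000000.0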

 

Implementation timeline. The Act calls for a phased implementation timeline based on level of risk. The ban on prohibited AI practices kicks in six months after the Act goes into force, which will occur 20 days after its publication (pending as of mid-April 2024) in the Official Journal of the EU. Compliance requirements for high-risk systems, meanwhile, kick in 24 months after the Act goes into force. Enforcement for AI systems used as safety components and covered by the medical device regulations begins after 36 months.

 

R&D carveout. Given the Act’s stated goal of balancing the imperatives of innovation on the one hand and citizens’ fundamental rights and safety on the other, the Act includes a carveout for AI used exclusively for research and development. These AI systems and outputs are explicitly excluded from the Act’s provisions.

 

Requirements for providers and deployers of high-risk systems. For companies developing or deploying high-risk AI systems, the Act outlines an extensive set of compliance measures including:

  • Risk management controls covering identification, analysis and evaluation of risk and adoption of appropriate risk management measures
  • Data and governance controls to ensure data used to train and validate algorithms are high quality, representative and free from bias
  • Transparency requirements such that human users are equipped to interpret outputs
  • Accuracy and robustness controls that define and disclose accuracy standards throughout a system’s life cycle, including redundancy controls in case of technical failure and measures to address “feedback loops” of biased outputs
  • Human oversight capabilities that enable humans to effectively oversee an AI system’s use, with a clear understanding of the system’s abilities and limitations, the ability to interpret and override outputs when necessary and the means to “hit stop” when needed
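As an illustration only, the controls above could be tracked per system in a lightweight structure like the following Python sketch. The class and field names are our own shorthand for the control areas listed, not terminology from the Act.

    from dataclasses import dataclass

    @dataclass
    class HighRiskComplianceRecord:
        """Illustrative checklist for one high-risk AI system (not a legal template)."""
        system_name: str
        risk_management_plan: bool = False         # risks identified, analyzed and mitigated
        data_governance_reviewed: bool = False     # training/validation data quality and bias checks
        technical_documentation: bool = False      # documentation supporting conformity assessment
        transparency_to_users: bool = False        # users equipped to interpret outputs
        accuracy_robustness_defined: bool = False  # accuracy standards, redundancy, feedback-loop controls
        human_oversight_in_place: bool = False     # interpret, override and stop mechanisms

        def open_gaps(self) -> list[str]:
            """Return the control areas still unaddressed for this system."""
            return [name for name, done in vars(self).items()
                    if name != "system_name" and not done]

A record like this can be reviewed per system to surface open gaps well ahead of the relevant compliance deadlines.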

What the EU AI Act means for pharma, medtech and healthtech



So, will life sciences companies have to scramble to overhaul their AI governance practices to achieve compliance before the EU AI Act goes into effect, similar to what happened with GDPR a few years ago? The short answer is no.

 

Does this mean life sciences companies should carry on business as usual? Also no.

 

Classifying AI use cases by risk profile

 

Life sciences companies should immediately inventory and classify their AI systems against the risk categories outlined in the Act, focusing on identifying any in-market or in-development use cases that may fall into the category of prohibited AI.

 

While we don’t know of any AI systems life sciences companies are deploying today that would fall into the Act’s “prohibited” category, companies should nonetheless be wary. For example, the Act prohibits the use of AI for social credit scoring. Depending on how regulators interpret this provision, it could apply to algorithms used to determine access to treatments or specialists based on factors such as medication adherence or appointment attendance.

 

In addition to identifying potentially prohibited AI systems, life sciences companies will need to expeditiously identify any high-risk use cases, because these will be subject to additional compliance and transparency controls. This exercise should apply both to in-development applications and to existing ones, which may need to be retrofitted to comply with the Act.
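One way to structure this inventory exercise is sketched below, purely for illustration. The category labels map to the three tiers described earlier; the example systems and rationales are hypothetical.

    from dataclasses import dataclass
    from enum import Enum

    class RiskCategory(Enum):
        PROHIBITED = "prohibited"      # banned outright under the Act
        HIGH_RISK = "high_risk"        # subject to conformity and assurance requirements
        LIMITED_RISK = "limited_risk"  # transparency obligations only

    @dataclass
    class AISystem:
        name: str
        in_market: bool          # already deployed vs. still in development
        category: RiskCategory
        rationale: str           # why the system falls into this tier

    # Hypothetical inventory entries, for illustration only
    inventory = [
        AISystem("adherence-based access scoring", True, RiskCategory.PROHIBITED,
                 "could be read as social scoring, depending on regulatory interpretation"),
        AISystem("diagnostic image triage (SaMD)", False, RiskCategory.HIGH_RISK,
                 "safety component of a product covered by MDR/IVDR"),
        AISystem("patient support chatbot", True, RiskCategory.LIMITED_RISK,
                 "must disclose to users that they are interacting with an AI"),
    ]

    # Surface prohibited and high-risk systems for immediate review
    for system in inventory:
        if system.category in (RiskCategory.PROHIBITED, RiskCategory.HIGH_RISK):
            print(f"{system.name}: {system.category.value} ({system.rationale})")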

 

Expect increased scrutiny, compliance burden and potential for market access delays

 

Companies deploying AI systems deemed high risk should be prepared for increased regulatory scrutiny. New requirements traced to the Act will likely increase costs, in both money and time, as companies modify existing development processes and documentation and conduct audits of existing AI. Companies should also expect go-to-market delays as new regulatory hurdles potentially slow approvals for new AI-powered medical technologies.

 

Risk management and data governance will be paramount

 

Companies will need to proactively identify and mitigate risks associated with their AI systems, with an emphasis on algorithmic bias, safety and security vulnerabilities. Given the Act’s focus on safety and transparency, life sciences companies may need to invest more in risk management frameworks and comprehensive risk strategies as they deploy their AI systems.

 

Hand in hand with comprehensive and proactive risk management of AI, life sciences companies should take the opportunity to shore up their data quality and responsible data practices. This means ensuring AI systems are trained on high-quality, representative data and instituting measures to detect and mitigate potential data biases that could generate inaccurate or unfair outputs. Companies also must continue to institute and document measures to prevent unauthorized access or misuse of patient data. This includes tracking the origin and flow of data used throughout the AI development life cycle to enable better auditing and identification of potential issues.
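As a simple illustration of the kind of provenance tracking described above, the sketch below records each step a dataset takes through the development life cycle. The names and fields are assumptions of ours, not a format prescribed by the Act.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class DataLineageEvent:
        """One step in a dataset's journey through the AI development life cycle,
        recorded to support auditing and bias investigations."""
        dataset_id: str
        source: str            # where the data originated (e.g., registry, EHR extract)
        transformation: str    # what was done (de-identification, filtering, labeling)
        performed_by: str      # team or system responsible
        timestamp: str         # when the step occurred (UTC, ISO 8601)

    def record_event(log: list, **fields) -> None:
        """Append an immutable lineage event stamped with the current UTC time."""
        log.append(DataLineageEvent(timestamp=datetime.now(timezone.utc).isoformat(), **fields))

    lineage_log: list = []
    record_event(lineage_log,
                 dataset_id="train-v3",
                 source="de-identified claims extract (hypothetical)",
                 transformation="removed records with missing demographics",
                 performed_by="data-engineering")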

 

Transparency and explainability come to the fore

 

With the Act’s requirements for high-risk AI, an often underrated cornerstone of AI governance rises to the fore: transparency and explainability. To be compliant, companies will need to ensure that AI systems can be explained in a way both healthcare professionals and regulators can understand, including a given AI’s capabilities and limitations, potential sources of bias and how it arrives at its outputs. This will need to include robust documentation of training data, algorithms and decision-making processes.
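A lightweight sketch of the kind of documentation record this implies is shown below. The fields are illustrative shorthand for the elements named above; the Act prescribes the substance of such documentation, not this particular format.

    from dataclasses import dataclass

    @dataclass
    class ModelDocumentation:
        """Illustrative summary record supporting transparency and explainability
        for a high-risk AI system; not an official template."""
        model_name: str
        intended_use: str                  # capabilities and the clinical context they apply to
        known_limitations: list[str]       # where outputs should not be relied upon
        training_data_summary: str         # sources, representativeness, time period
        potential_bias_sources: list[str]  # known or suspected sources of bias
        explanation_method: str            # how outputs are made interpretable to clinicians
        human_oversight_notes: str         # how users can question or override outputs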

 

Given their long histories of regulatory compliance, pharma and medtech companies may be better positioned than healthtech companies and others to absorb the added cost and burden associated with the AI Act. Some may be tempted to outsource certain regulatory responsibilities, much as they do with contract manufacturing and life cycle management for software as a medical device (SaMD). Companies that choose this route will still need robust governance around managing their outsourced vendors.

3 recommendations for life sciences companies in response to the EU AI Act



While enforcement mechanisms for the Act don’t kick in immediately, there are steps all life sciences companies should take now to prepare.

 

#1: Closely monitor regulatory developments and seek out collaborations. While the Act is officially the law of the land, many questions about compliance expectations will likely be answered in further official clarifications. Additionally, each EU member state is required to set up its own national governing bodies for AI, so there may be regional differences in how the Act is interpreted and enforced. Engaging early with regulators, once those bodies have been established in accordance with the Act, to better understand expectations could pay dividends; the same goes for working with industry groups to proactively develop industry AI best-practice guidelines.

 

#2: Study how early movers and other interested parties are responding. In healthcare, the most prominent use of patient-facing AI today is in clinical decision support, so life sciences companies should closely monitor how companies deploying these systems act and react to the Act. However, healthcare companies generally have been slower than others to implement AI solutions. To gauge how these new regulations will play out in the real world, life sciences companies should also observe how companies outside healthcare are responding, especially digital-native and AI-native companies.

 

Relatedly, watch closely what industry groups such as AdvaMed, HIMSS, PhRMA and others are saying, especially the clarifications they’re requesting and the interpretations they’re lobbying for.

 

#3: Develop a proactive AI compliance and data governance strategy. Build a clear AI compliance roadmap by outlining the necessary steps, resources and timelines for meeting the Act’s requirements. This must include a plan for classifying AI systems by risk profile, a focus on transparency and explainability, and prioritization of robust data governance.

Life sciences companies should embrace the EU AI Act, not recoil from it



The ethos of moving fast and breaking things is, thankfully, foreign to life sciences companies, where the health, safety and welfare of real patients are always foremost. As such, the basic precepts of the EU AI Act should align well with life sciences companies’ existing, if nascent, AI governance strategies. Nevertheless, added regulatory burden is rarely welcome.

 

We strongly recommend that life sciences companies lean into the additional clarity the Act affords. Given the increasingly attractive elements of the European data, technology and policy landscape, companies should commit to European leadership in AI with an eye toward transferring these capabilities to the U.S., much as the industry has done with market access and pricing in the context of the Inflation Reduction Act. Companies should not pause AI experiments in the EU. Instead, they should use the Act’s passage as an opportunity to operationalize trustworthy and responsible AI development, both as a source of competitive advantage and as a catalyst for a higher standard of healthcare AI.

 

Across multiple reports with the World Economic Forum, we’ve charted the unprecedented potential for AI to transform health and healthcare for global populations. Yet despite this potential, the 2024 ZS Future of Health Report found that doctors and patients are withholding their full trust. Increased public and clinician trust in healthcare AI could supercharge its adoption and help usher in a new era of AI-powered diagnostics, personalized treatment recommendations, improved patient experiences and better health outcomes for all.
