
AI Act and medical devices: what manufacturers need to anticipate (Team-NB, 2025)
The AI Act (EU Regulation 2024/1689), which came into force on August 1, 2024, imposes new requirements on artificial intelligence systems, including those used in medical devices. From August 2, 2027, devices incorporating AI, either as a primary function or as a safety component, will be subject to additional obligations if they are considered "high-risk".
Against this backdrop, the Team-NB association, which brings together notified bodies in the medical sector, published an important position paper in April 2025. Its objective: to alert the industry to the concrete challenges of implementing the regulation for medical devices integrating AI (MDAI), and to propose realistic solutions.
1. What the AI Act changes for medical device manufacturers
The text applies horizontally to all sectors, but medical devices are explicitly classified as high-risk if:
- the AI system is used as a primary function or as a safety component (e.g. an image analyzer or a clinical recommendation system);
- and the device already requires a conformity assessment by a notified body under the MDR or IVDR.
Key date: from August 2, 2027, any new device, or legacy device undergoing a significant modification, will have to meet the requirements of the AI Act in addition to those of the MDR/IVDR.
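To make this two-condition test concrete, here is a minimal sketch in Python; the class, field and function names are ours for illustration, not terms from the regulation.

```python
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    """Illustrative description of an MDAI device (field names are ours, not the regulation's)."""
    ai_is_primary_function_or_safety_component: bool  # AI as primary function or safety component
    requires_nb_assessment_under_mdr_ivdr: bool       # third-party conformity assessment under MDR/IVDR

def is_high_risk_under_ai_act(device: DeviceProfile) -> bool:
    # Both conditions must hold: they are cumulative, not alternative.
    return (device.ai_is_primary_function_or_safety_component
            and device.requires_nb_assessment_under_mdr_ivdr)

# Example: an AI image analyzer in a device assessed by a notified body
analyzer = DeviceProfile(True, True)
print(is_high_risk_under_ai_act(analyzer))  # True -> AI Act high-risk obligations apply
```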
2. Key messages from the Team-NB position paper
2.1. The need for rapid designation of notified bodies
Team-NB warns that without rapid designation work at member-state level, there will be a shortage of notified bodies designated to assess MDAI in time.
Their pragmatic proposal:
- Use Article 43(3) of the AI Act to build on the existing software designation codes (Regulation (EU) 2017/2185);
- Avoid a second, overly cumbersome designation procedure, as initially provided for in Article 30 of the AI Act.
But some member states seem to want to block this approach, at the risk of creating a regulatory bottleneck in 2027.
2.2. New requirements to be integrated into the QMS
Beyond the technical requirements of the MDR, the AI Act imposes new layers to be integrated into the quality management system (QMS):
- Respect for fundamental rights: confidentiality, non-discrimination, human autonomy;
- Mandatory human oversight (Art. 14): no autonomous "black box";
- Traceability and data governance (Art. 10): quality, representativeness, documentation of data sets;
- Mandatory logs (Art. 12): systematic recording of AI operation (a logging sketch follows this list).
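As an illustration of what Article 12 logging could look like in device software, here is a minimal Python sketch; the event fields (timestamp, model version, input hash, output) are assumptions on our part, since the AI Act prescribes no log format.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

# Structured logger for AI inference events. Art. 12 requires automatic
# recording of events over the system's lifetime; the exact fields below
# are our assumption, not a format prescribed by the AI Act.
logger = logging.getLogger("mdai.audit")
logging.basicConfig(level=logging.INFO)

def log_inference(model_version: str, input_bytes: bytes, output: dict) -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # traceability of the deployed model
        "input_sha256": hashlib.sha256(input_bytes).hexdigest(),  # a reference, not raw data (GDPR)
        "output": output,  # recommendation and confidence score
    }
    logger.info(json.dumps(event))

# Example: recording one recommendation produced by the AI engine
log_inference("ctscan-model-2.3.1", b"<dicom bytes>", {"finding": "nodule", "confidence": 0.87})
```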
Notified bodies may go as far as:
- requesting access to datasets;
- performing additional tests;
- accessing the source code if required.
2.3. Key definitions to be clarified to avoid slippage
Team-NB calls for urgent clarification of several notions:
- AI system: a broad definition covering any function based on inference, at the risk of encompassing conventional software that is not "intelligent".
- Safety component: any function whose failure could impact health or safety → almost every AI module in a medical device?
- Substantial modification: interpreted as equivalent to a "significant change" under the MDR... but with no harmonized framework today.
Without clarification, software that was never meant to be covered could be pulled into scope.
3. Case studies
Case 1 - Imaging software with AI: major change to come
A manufacturer markets an MDR-certified device whose AI model was trained on initial clinical data. It now wishes to replace the model with a new algorithm trained on an enriched dataset.
Issues:
- Is this evolution a substantial modification?
- Should a new conformity assessment be triggered, including the AI Act?
- The manufacturer will need to demonstrate that its QMS and risk analysis now cover the fundamental rights and governance requirements of AI.
Case 2 - Connected patch with embedded recommendation module
A connected therapeutic patch sends recommendations to a caregiver via an app. These recommendations are generated via an AI engine trained on patient histories.
Consequences:
- Critical function = safety component → the patch falls within the high-risk scope of the AI Act;
- It will be necessary to prove that:
- datasets are well documented;
- human oversight is guaranteed (the app does not decide on its own; see the sketch after this list);
- patient autonomy is respected (no hidden or irreversible decisions).
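To illustrate the human-oversight requirement, here is a minimal sketch of a gating mechanism in which an AI recommendation is held until a caregiver explicitly validates it; the `OversightGate` class and its methods are hypothetical names, not an established API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    """AI output held for human validation (hypothetical structure)."""
    proposal: str
    confidence: float
    approved_by: Optional[str] = None  # stays None until a caregiver signs off

class OversightGate:
    """No recommendation reaches the patient without an explicit human decision (Art. 14)."""
    def __init__(self) -> None:
        self.pending: list[Recommendation] = []

    def submit(self, rec: Recommendation) -> None:
        # The AI engine can only queue a proposal; it cannot act on its own.
        self.pending.append(rec)

    def approve(self, rec: Recommendation, caregiver_id: str) -> Recommendation:
        # The caregiver's identity is recorded, keeping the decision traceable.
        rec.approved_by = caregiver_id
        self.pending.remove(rec)
        return rec

gate = OversightGate()
gate.submit(Recommendation("increase dose to 20 mg", confidence=0.91))
validated = gate.approve(gate.pending[0], caregiver_id="nurse-042")
```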
4. Mini FAQ
Does this apply to all software?
No. Only software incorporating an AI system as defined (inference, autonomy, adaptability) and covered by an MDR/IVDR assessment by an NB is included.
My software is MDR certified today. Do I have to redo everything?
Not necessarily. But from August 2027, any significant modification involving AI could trigger a reassessment incorporating the AI Act.
Can I reuse my MDR documents?
In part, yes. But you'll need to enrich them: data governance, fundamental rights, documentation of AI models, logging, etc.
What about datasets?
NBs may require access to datasets (training, validation, testing). Manufacturers should therefore anticipate data quality, traceability and GDPR aspects.
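As a starting point for that documentation effort, here is a minimal sketch of the metadata one could attach to each training, validation and test set; the fields are our suggestion, loosely inspired by Article 10, not an official template.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DatasetRecord:
    """Illustrative dataset documentation fields (our suggestion, not an official template)."""
    name: str               # e.g. "training-v3"
    role: str               # "training" | "validation" | "test"
    source: str             # provenance of the data
    collection_period: str  # when the data was gathered
    n_samples: int
    known_biases: str       # documented gaps in representativeness
    gdpr_basis: str         # legal basis / anonymization status

training_set = DatasetRecord(
    name="training-v3",
    role="training",
    source="hospital A, retrospective CT scans",
    collection_period="2019-2023",
    n_samples=12500,
    known_biases="under-representation of patients under 18",
    gdpr_basis="pseudonymized, documented legal basis under the GDPR",
)
```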
Can EHDS help?
Yes, but not in the short term. The European Health Data Space (EHDS) will not be operational for several years.
5. Conclusion
The AI Act is not just a regulatory add-on: it imposes a new layer of cross-cutting requirements on AI-enabled medical devices.
Manufacturers must:
- Anticipate the technical, legal and documentary requirements;
- Adapt their QMS to incorporate the principles of the AI Act;
- Identify now which of their devices will be affected from 2027.
At CSDmed, we support manufacturers as they move towards maturity: AI Act impact audit, technical documentation, compliance strategy, interface with notified bodies.
Are you developing AI-enabled medical software? Let's talk about it.
6. Related resources
- Characterizing the risks of medical software: what to learn from IMDRF guide N81 (2025)?
- Characterizing the risks of medical software: what should we remember from IMDRF guide N88 (2025)?
- ISO 13485 for start-ups: 3 realistic approaches
- ISO 14971 and start-ups: how to get started on risk analysis without getting lost