
EU AI Act in 2026: Does Your AI Scribe Count as "High-Risk" in a Clinic Workflow?

A plain-English decision tree for clinics navigating the EU AI Act. Learn when an AI scribe is "just documentation" vs. high-risk, what the 2026 timeline means, and which questions to ask your vendor.

Published on February 1, 2025 · 13 min read

Written by

Dya Clinical Team

Clinical Documentation Experts


The EU AI Act is no longer a proposal. It entered into force in August 2024, and the rules that matter most to clinics—high-risk obligations for AI systems—are set to apply from August 2026 onward. If your practice uses an AI scribe, or plans to, the question is straightforward: does it count as "high-risk"?

The answer isn't always obvious. An AI scribe that transcribes a session and formats a SOAP note looks harmless. But if that same tool starts suggesting diagnoses, flagging risk indicators, or feeding data into treatment decisions, the regulatory picture changes entirely.

This guide walks through the classification logic in plain English, explains the timelines that actually matter, and gives you a checklist of questions to ask any AI vendor before August 2026.

What the EU AI Act Actually Requires

The AI Act uses a risk-based framework. Not every AI system is treated the same. There are four tiers:

  • Unacceptable risk — banned outright (social scoring, manipulative AI, real-time biometric surveillance in public spaces)
  • High-risk — allowed, but subject to strict requirements around documentation, oversight, transparency, and conformity assessment
  • Limited risk — transparency obligations only (e.g., disclose that a chatbot is AI)
  • Minimal risk — no specific obligations

Most clinical AI tools land somewhere between "high-risk" and "limited risk." The distinction matters because high-risk classification triggers a full compliance regime: technical documentation, risk management systems, human oversight mechanisms, data governance requirements, and in many cases, third-party conformity assessment by a notified body.

Getting this wrong carries real consequences. Fines for misclassification can reach up to €15 million or 3% of global annual turnover, whichever is higher.
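To make the scale concrete, the penalty ceiling can be sketched as a one-line calculation (a simplification for illustration; the function name and the example turnover figures are our own, and actual fines depend on the infringement and national enforcement):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Indicative ceiling for the fines discussed above:
    EUR 15 million or 3% of worldwide annual turnover,
    whichever is higher."""
    return max(15_000_000, 0.03 * global_annual_turnover_eur)

# A vendor with EUR 100M turnover: 3% is EUR 3M, so the EUR 15M floor applies.
print(max_fine_eur(100_000_000))
# A vendor with EUR 1B turnover: 3% is EUR 30M, which exceeds the floor.
print(max_fine_eur(1_000_000_000))
```

The point of the "whichever is higher" structure is that small providers cannot shrink their exposure below the fixed floor, while large providers scale with turnover.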

The Two Paths to "High-Risk"

Under Article 6 of the AI Act, an AI system qualifies as high-risk through one of two pathways:

Path 1: It's a Medical Device (Article 6(1), Annex I)

If your AI scribe qualifies as a medical device—or is a safety component of one—under the EU Medical Devices Regulation (MDR), and requires a third-party conformity assessment (i.e., it's Class IIa or higher), it is automatically a high-risk AI system.

This is relevant for AI tools that go beyond documentation. If the software analyses patient data to suggest diagnoses, recommends treatments, or flags clinical risks, it likely qualifies as Software as a Medical Device (SaMD) under the MDR.

Key detail: The full obligations for medical-device AI systems under Article 6(1) don't apply until August 2, 2027—one year later than the general high-risk deadline. The EU's proposed Digital Omnibus package may push this further to August 2028, depending on when harmonised standards become available.

Path 2: It Falls Under an Annex III Use Case (Article 6(2))

Annex III lists specific use cases that are automatically high-risk. The ones relevant to healthcare fall under Category 5 ("access to essential services"):

  • AI systems used to evaluate eligibility for public healthcare services or to grant, reduce, or revoke benefits
  • AI systems used for risk assessment and pricing in life and health insurance
  • AI systems used for emergency call triage or dispatching emergency healthcare services

A pure documentation tool—one that transcribes speech and formats clinical notes—doesn't fit neatly into any of these categories. But the line blurs quickly if the tool starts influencing decisions about patient access, insurance, or triage.

The Decision Tree: Is Your AI Scribe "High-Risk"?

Here's the practical framework. Walk through these questions for any AI tool used in your clinic:

Step 1: Does the tool qualify as a medical device?

Ask your vendor directly: is this product CE-marked as a medical device under the MDR? If yes—and it's Class IIa or above—it's high-risk under the AI Act. Full stop.

Most pure AI scribes are not medical devices. They capture and format information but don't diagnose, recommend, or monitor. However, the line is drawn based on intended purpose, not marketing language. If the tool's documentation or promotional materials imply clinical decision support, regulators may classify it differently than the vendor intended.

Step 2: Does it fall under an Annex III use case?

Review the Annex III categories above. If the tool is used to assess healthcare eligibility, calculate insurance risk, or triage patients—even as a secondary feature—it may trigger high-risk classification.

Step 3: Does it influence clinical decisions?

This is where most AI scribes live or die in the classification. The AI Act's Article 6(3) provides an exemption for systems that:

(a) Perform a narrow procedural task — e.g., converting speech to text, formatting notes into a template

(b) Improve the result of a previously completed human activity — e.g., cleaning up a clinician's notes after they've already made their assessment

(c) Detect patterns without replacing or influencing human assessment — e.g., flagging that a clinician used different terminology than usual, without making clinical suggestions

(d) Perform a preparatory task for a human assessment — e.g., organising patient history before a clinician reviews it

If your AI scribe only transcribes and formats notes—without suggesting diagnoses, flagging symptoms, recommending actions, or ranking severity—it likely qualifies for the Article 6(3) exemption and is not high-risk.

Step 4: Even if exempt, you still have obligations

Providers who claim the Article 6(3) exemption must:

  • Document the assessment before placing the system on the market
  • Register the system in the EU database for high-risk AI systems (yes, even non-high-risk systems from Annex III categories must register)
  • Provide documentation to national authorities on request

Ask your vendor whether they've completed this assessment. If they haven't heard of Article 6(3), that's a red flag.

Step 5: Does the tool profile patients?

One critical override: the Article 6(3) exemption never applies if the system profiles individuals. Under the GDPR definition, profiling means automated processing of personal data to evaluate aspects of a person—health status, behaviour, reliability, location.

An AI scribe that tracks patient patterns across sessions, flags behavioural changes, or builds risk profiles is profiling. Even if it performs a "narrow procedural task," profiling voids the exemption and triggers high-risk classification.
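The five steps above can be condensed into a single function. This is a sketch for internal triage only, not a legal determination: the boolean inputs and the returned labels are our own simplification of the article's logic, and a real classification needs a documented assessment and legal review.

```python
def classify_ai_scribe(
    is_mdr_class_iia_or_higher: bool,   # Step 1: CE-marked medical device?
    falls_under_annex_iii: bool,        # Step 2: eligibility, insurance, triage?
    influences_clinical_decisions: bool,  # Step 3: suggests, flags, ranks?
    profiles_patients: bool,            # Step 5: tracks patterns across sessions?
) -> str:
    """Rough walk through Steps 1-5; returns an indicative label only."""
    if is_mdr_class_iia_or_higher:
        return "high-risk: medical device path (Article 6(1))"
    if profiles_patients:
        return "high-risk: profiling voids the Article 6(3) exemption"
    if influences_clinical_decisions:
        return "likely high-risk: influences clinical decision-making"
    if falls_under_annex_iii:
        # Step 4: exemption still requires documentation and registration.
        return "exempt under Article 6(3): document the assessment and register"
    return "not high-risk: pure documentation outside Annex III and the MDR"

# A scribe that only transcribes and formats notes:
print(classify_ai_scribe(False, False, False, False))
```

Note the ordering: the profiling check comes before the exemption logic, because profiling overrides any "narrow procedural task" argument.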

The Quick-Reference Classification Table

| What the tool does | Likely classification | Why |
| --- | --- | --- |
| Transcribes speech to text | Not high-risk | Narrow procedural task (Art. 6(3)(a)) |
| Formats notes into SOAP/template | Not high-risk | Narrow procedural task (Art. 6(3)(a)) |
| Summarises session content | Grey area | Could be "improving" a human activity, or could be generating new clinical content |
| Suggests ICD-10 codes | Grey area | Could be preparatory, but may influence billing decisions |
| Flags potential diagnoses or symptoms | Likely high-risk | Influences clinical decision-making |
| Recommends treatment options | High-risk | Likely qualifies as SaMD under MDR |
| Tracks patient patterns across sessions | High-risk | Profiling voids Article 6(3) exemption |
| Triages patient urgency | High-risk | Explicitly listed in Annex III |

Timeline: What Happens When

The staged rollout matters for planning. Here's what's already happened and what's coming:

Already in effect:

  • February 2, 2025 — Prohibited AI practices banned; AI literacy obligations active
  • August 2, 2025 — Governance rules and obligations for general-purpose AI (GPAI) models apply

Coming next:

  • February 2, 2026 — European Commission publishes guidelines with practical examples of high-risk and not-high-risk use cases (directly relevant to AI scribes)
  • August 2, 2026 — High-risk obligations for Annex III systems apply (this is the main deadline for stand-alone AI tools in clinics)
  • August 2, 2027 — High-risk obligations for AI systems embedded in regulated products (medical devices) apply

The Digital Omnibus Wrinkle

In November 2025, the European Commission proposed the Digital Omnibus package, which would delay high-risk deadlines until harmonised compliance standards are ready:

  • Annex III systems (stand-alone high-risk tools): delayed to the earlier of December 2, 2027, or 6 months after standards are published
  • Annex I systems (medical device AI): delayed to the earlier of August 2, 2028, or 12 months after standards are published

The Digital Omnibus is a proposal, not law. It still requires approval from EU member states and the European Parliament. But the direction is clear: the Commission acknowledges that compliance infrastructure isn't ready on schedule.

What this means for clinics: Don't use the potential delay as an excuse to ignore preparation. The compliance requirements themselves aren't changing—only the enforcement date. Clinics that prepare now will be ahead regardless of the final timeline.

10 Questions to Ask Your AI Scribe Vendor

Before August 2026, have this conversation with any vendor supplying AI tools to your clinic:

1. "Is your product classified as a medical device under the MDR?"

If yes, ask for the CE marking and the classification level (I, IIa, IIb, III). Class IIa and above means automatic high-risk under the AI Act.

2. "Have you completed an Article 6(3) assessment?"

Any vendor whose tool falls within Annex III categories should have a documented assessment explaining why their system is or isn't high-risk. If they haven't done this, they're not ready for 2026.

3. "Is your system registered in the EU AI database?"

Even non-high-risk systems from Annex III categories must register. Ask for the registration reference.

4. "Does your tool influence clinical decisions in any way?"

Push past marketing language. Does it suggest diagnoses? Recommend codes? Flag symptoms? Rank severity? Any of these could shift classification from "documentation tool" to "high-risk system."

5. "Does the system profile patients across sessions?"

If the tool tracks patterns, builds patient profiles, or flags behavioural changes over time, the Article 6(3) exemption doesn't apply—regardless of the tool's primary function.

6. "Where is patient data processed and stored?"

The AI Act sits alongside GDPR. Clinical data processed outside the EU may create additional compliance requirements. Ask about data residency, retention policies, and sub-processor arrangements. If your clinic operates in Switzerland, see our FADP compliance checklist for AI medical transcription for region-specific guidance.

7. "What human oversight mechanisms are built in?"

High-risk systems require meaningful human oversight—not just a "confirm" button. Ask how the system enables clinicians to understand, supervise, and override its outputs.

8. "Can you provide your technical documentation?"

High-risk AI providers must maintain comprehensive technical documentation covering the system's design, development, training data, testing, and performance metrics. If a vendor can't share this, question their compliance readiness.

9. "What happens if the classification changes?"

AI tools evolve. A documentation-only scribe today could add clinical decision support features tomorrow. Ask how the vendor handles reclassification and whether updates could change the risk profile.

10. "Who bears compliance responsibility—provider or deployer?"

Under the AI Act, both providers (the company building the AI) and deployers (the clinic using it) have obligations. Deployers of high-risk systems must ensure human oversight, monitor the system for risks, and report serious incidents. Clarify the division of responsibility in your contract.

What Clinics Should Do Now

You don't need to wait for final harmonised standards to start preparing. Here's a practical sequence:

Inventory your AI tools

List every AI system used in your clinic—not just the AI scribe. Include scheduling tools, triage chatbots, coding assistants, billing automation, and anything that processes patient data with AI. For each, note what it does and whether it influences clinical or administrative decisions. If you're unsure what counts as an "AI scribe" vs. dictation vs. manual notes, our comparison of AI scribes, dictation, and note-taking breaks down the differences.

Classify each tool

Use the decision tree above. For each AI system, determine whether it's high-risk, potentially high-risk, or clearly exempt. Document your reasoning.
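One lightweight way to "document your reasoning" is a structured record per tool. The field names below are our own suggestion, not an AI Act requirement; the point is that each entry captures what the tool does, the classification reached, and why:

```python
from dataclasses import dataclass

@dataclass
class AiToolRecord:
    """Illustrative inventory entry for one AI system in the clinic."""
    name: str
    vendor: str
    functions: list           # e.g. ["transcription", "SOAP formatting"]
    influences_decisions: bool
    profiles_patients: bool
    classification: str       # e.g. "exempt", "grey area", "high-risk"
    reasoning: str            # the documented rationale, per Article 6(3)

# Hypothetical example entry for a documentation-only scribe:
scribe = AiToolRecord(
    name="Example scribe",
    vendor="ExampleVendor",
    functions=["transcription", "note formatting"],
    influences_decisions=False,
    profiles_patients=False,
    classification="exempt",
    reasoning="Narrow procedural task under Art. 6(3)(a); no profiling.",
)
print(scribe.classification)
```

Keeping the inventory in a structured form also makes it trivial to re-run the classification when a vendor ships new features.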

Review vendor contracts

Check whether contracts address AI Act compliance. Look for clauses covering: conformity assessment responsibility, incident reporting, data governance obligations, and what happens if the product's risk classification changes.

Establish human oversight processes

For any tool that might be high-risk, ensure your clinical workflows include meaningful human review. "Meaningful" means clinicians understand what the AI is doing, can assess its outputs critically, and can override or disregard them. Rubber-stamping AI-generated notes doesn't qualify. For multi-practitioner clinics, standardising report structures across your team can help establish the consistent oversight processes the AI Act requires.

Monitor the February 2026 guidelines

The European Commission's practical examples—due by February 2, 2026—will be the clearest guidance yet on what counts as high-risk and what doesn't. These guidelines will likely address AI scribes directly.

The Bottom Line

Most AI scribes that stick to transcription and formatting—converting speech to structured clinical notes without influencing decisions—will likely fall outside the high-risk classification under the Article 6(3) exemption. That's the good news.

The grey area is real, though. Features like automated coding suggestions, clinical summaries that go beyond what the clinician said, or cross-session pattern tracking can push an otherwise simple tool into high-risk territory. And the obligations that follow are substantial.

The EU AI Act doesn't ban AI in clinics. It demands transparency about what AI tools do, accountability for how they're used, and meaningful human oversight over clinical processes. For clinics already practicing good documentation habits—reviewing AI outputs, maintaining clinical judgment, keeping patients informed—the gap between current practice and compliance may be smaller than expected.

Start the conversation with your vendors now. The classification question isn't going away, and the answer determines everything that follows.

Need an AI scribe built for compliance? Dya Clinical generates structured clinical notes from your sessions—transcription and formatting only, no clinical decision-making, no patient profiling. Your notes, your judgment, your oversight. Try it free for 7 days.

#regulation #compliance #ai-healthcare #documentation #best-practices
