Insights from the EU AI Act and Multidisciplinary Medical Teams

The above image was generated using DALL·E 3 on 15/04/2025 with the prompt “Generate an image highlighting Clinical AI”.

Introduction

Since ChatGPT’s public release in 2022, governments have been working to balance AI innovation with safety and trust. One major step is the EU AI Act, the world’s first comprehensive AI regulation. It came into effect in 2024 and introduces a risk-based framework where AI systems are categorized by their potential for harm. High-risk systems, like clinical AI, are subject to strict requirements.

Clinical AI has the potential to improve diagnostics and treatment planning and to reduce administrative workload, but challenges remain. The biggest is explainability: many models are black boxes, offering predictions without clear reasoning, which makes them hard to trust.

This blog is based on and references the article published in Nature at https://www.nature.com/articles/d41586-025-00618-x.

Explainability Methods

Current explainability methods fall into two categories:

The first is rules-based systems. These are built using predefined thresholds or rules. For example, an AI might be programmed to flag an X-ray as showing pneumonia if lung opacity exceeds a certain percentage. While this method is fully transparent, it’s also inflexible. Medical conditions are often subtle and complex, and strict rules can miss important nuances, especially in rare or atypical cases.
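To make that trade-off concrete, here is a minimal Python sketch of a rule of this kind; the threshold value, function name, and opacity measure are hypothetical, chosen purely for illustration rather than taken from any real clinical system.

```python
# Minimal sketch of a rules-based flag (hypothetical threshold and names).

OPACITY_THRESHOLD = 0.40  # assumed fraction of lung area showing opacity


def flag_pneumonia(lung_opacity_fraction: float) -> str:
    """Apply a single transparent rule to a chest X-ray measurement."""
    if lung_opacity_fraction > OPACITY_THRESHOLD:
        return "flag: possible pneumonia"
    return "no flag"


print(flag_pneumonia(0.55))  # -> flag: possible pneumonia
print(flag_pneumonia(0.10))  # -> no flag
```

The rule is easy to audit, but an atypical presentation that falls just below the threshold is silently missed, which is exactly the inflexibility described above.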

The second is post-hoc explainability, which tries to explain a model’s decision after it is made. A common method here is saliency mapping, where the AI highlights the part of an input, such as a region of an X-ray, that influenced its prediction. But these explanations are often indirect: they show where the model focused, not why it reached a certain conclusion. If the model highlights irrelevant areas, like image artifacts or labels, it can actually mislead clinicians or reduce trust. These interpretations also don’t use clinical language, making them harder to integrate into real medical decision-making.
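As a rough illustration of how a saliency map is typically produced, the PyTorch sketch below backpropagates the top prediction score to the input pixels; the stock ResNet-18 and the random tensor are stand-ins, not a real clinical model or image.

```python
# Sketch of gradient-based saliency mapping (placeholder model and input).
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a chest X-ray classifier
model.eval()

xray = torch.randn(1, 3, 224, 224, requires_grad=True)  # dummy image tensor

scores = model(xray)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()  # gradient of the top score w.r.t. every pixel

# Pixels with large gradient magnitude are the ones that most influenced the score.
saliency = xray.grad.abs().max(dim=1).values  # shape (1, 224, 224) heat map
```

The heat map says which pixels mattered, but nothing in it explains why those pixels support the diagnosis, which is the gap described above.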

Concept Bottleneck Models (CBMs)

The article introduces Concept Bottleneck Models (CBMs) as a compelling alternative. Unlike black-box models that go straight from raw input to a prediction, CBMs take an intermediate step: they first identify clinically meaningful concepts, like tumor stage, grade, or the presence of inflammation, and then use those concepts to make the final decision, which mirrors how human clinicians reason.
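The sketch below shows what that two-stage structure can look like in PyTorch; the concept names, layer sizes, and input dimension are invented for illustration and are not drawn from the article.

```python
# Minimal sketch of a Concept Bottleneck Model (illustrative names and sizes).
import torch
import torch.nn as nn

CONCEPTS = ["tumor_high_grade", "inflammation_present", "margin_involved"]


class ConceptBottleneckModel(nn.Module):
    def __init__(self, n_features: int = 512, n_classes: int = 2):
        super().__init__()
        # Stage 1: raw input features -> clinically meaningful concepts
        self.concept_predictor = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, len(CONCEPTS)), nn.Sigmoid(),
        )
        # Stage 2: concepts only -> final prediction (the "bottleneck")
        self.label_predictor = nn.Linear(len(CONCEPTS), n_classes)

    def forward(self, x):
        concepts = self.concept_predictor(x)      # interpretable intermediate step
        logits = self.label_predictor(concepts)   # decision made from concepts alone
        return concepts, logits


model = ConceptBottleneckModel()
concepts, logits = model(torch.randn(1, 512))     # random features as a stand-in
print(dict(zip(CONCEPTS, concepts[0].tolist())))
```

Because the final layer sees only the concept scores, every prediction can be traced back to statements a clinician can read and challenge.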

They also allow for real-time human feedback. For example, if a CBM misidentifies a tumor as low-grade when a clinician knows it’s high-grade, the clinician can correct that specific concept. The model then updates its final prediction based on the corrected input. This makes it possible for the AI to learn with the team, instead of making fixed, unchangeable decisions.
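Continuing the hypothetical model sketched above, a correction of this kind might look as follows: the clinician overrides one concept score and only the second stage is re-run, so the final prediction is recomputed from the corrected concepts.

```python
# Sketch of a concept intervention, reusing `model` and CONCEPTS from above.
with torch.no_grad():
    features = torch.randn(1, 512)               # stand-in for a new case
    concepts, logits = model(features)

    corrected = concepts.clone()
    corrected[0, CONCEPTS.index("tumor_high_grade")] = 1.0  # clinician's correction

    revised_logits = model.label_predictor(corrected)       # re-run stage 2 only

print("before intervention:", logits.softmax(dim=1))
print("after intervention: ", revised_logits.softmax(dim=1))
```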

CBMs can also flag uncertainty. If the model encounters a pattern it doesn’t recognise, one that doesn’t clearly match any known clinical concept, it can label it as “unknown.” These cases can then be sent for human review, ensuring that unclear or risky predictions don’t go unchecked.
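One simple way to approximate that behaviour, again building on the hypothetical model above, is to route a case to review whenever a concept score sits in an ambiguous band; the band itself is a made-up illustration, not a clinically validated threshold.

```python
# Sketch of an "unknown"/review flag on top of the concept predictions above.
UNCERTAIN_BAND = (0.35, 0.65)  # assumed range in which a concept is ambiguous


def needs_human_review(concept_probs) -> bool:
    """Flag the case if any concept prediction falls in the ambiguous band."""
    return any(UNCERTAIN_BAND[0] < p < UNCERTAIN_BAND[1] for p in concept_probs)


with torch.no_grad():
    concepts, _ = model(torch.randn(1, 512))

if needs_human_review(concepts[0].tolist()):
    print("Concept evidence unclear: routing case to clinician review")
else:
    print("Concepts confidently identified: automatic prediction retained")
```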

Conclusion

One question that kept resurfacing during my research is how widely AI is currently used in the health sector. AI is no longer a futuristic idea in healthcare; it is already here, being trialled or deployed in areas like radiology, pathology, triage, administrative automation, and even treatment planning. Yet, despite this growing presence, adoption is often cautious and fragmented, and many clinicians remain skeptical. Regulation like the EU AI Act isn’t a barrier to innovation; it’s a guidepost for doing AI right.

References

Banerji, C.R.S., Chakraborti, T., Ismail, A.A., Ostmann, F. and MacArthur, B.D. (2025). Train clinical AI to reason like a team of doctors. Nature, 639(8053), pp.32–34. https://doi.org/10.1038/d41586-025-00618-x.

About the Author

Intern at Research Graph Foundation