Innovating with intelligence: navigating artificial intelligence (AI) medical devices from code to care

The promise of AI in healthcare is extraordinary. From algorithms that can spot cancer earlier than human radiologists to AI systems that predict patient deterioration hours before symptoms appear, we are witnessing transformative innovations that could impact millions of patient lives. Yet many of these breakthrough technologies may struggle to reach patients at scale.

The challenge is not technical but systemic. As AI interventions increasingly become regulated as medical devices, moving these innovations from laboratory to patient bedside requires navigating three interconnected access barriers: evolving regulatory frameworks, misaligned reimbursement systems, and persistent trust deficits among practitioners and patients. Understanding how these barriers interact and how they collectively limit patient access to life-saving innovations is crucial for anyone working to bridge the gap between algorithmic potential and clinical reality.

The regulatory evolution: from software to medical device

The transformation of AI from a software tool to a regulated medical device represents one of the most significant shifts in healthcare regulation in recent decades. As of March 2025, the FDA had authorised 1,016 AI/machine learning (ML)-enabled medical devices, a dramatic acceleration from 692 as of October 2023, with more than 80% of authorisations occurring since 2019 [1, 2]. This surge reflects not only the pace of innovation but also a fundamental shift in how AI software is regulated and, consequently, in how it reaches patients. This more formal treatment is exemplified by the Software as a Medical Device (SaMD) framework outlined by the FDA in the United States, under which SaMD is defined as "software intended to be used for one or more medical purposes that perform these purposes without being part of a hardware medical device" [3]. This classification brings AI under the same rigorous oversight as traditional medical devices, fundamentally changing market access pathways and creating new hurdles for patient access.

This regulatory evolution is also unfolding globally, with divergent approaches shaping regulatory approval and subsequent patient access. The EU AI Act [4], approved on 13 June 2024 and in force since August 2024, introduces the world's first comprehensive AI regulation and classifies most medical AI as "high-risk", requiring a second certification alongside traditional medical device approval. This dual approval process creates unprecedented regulatory complexity that could delay patient access. Meanwhile, the UK's Medicines and Healthcare products Regulatory Agency (MHRA) has taken a fundamentally different approach, pursuing "light-touch" regulation through guidance rather than new legislation. The MHRA has launched an extensive "Software and AI as a Medical Device" Change Programme, with dedicated projects including "Project Glass Box" on AI explainability and "Project Ship of Theseus" on managing adaptive AI systems that continue to learn and evolve after deployment [5].

The reimbursement paradox: when innovation meets economics

The promise of AI in healthcare faces a stark economic reality that may have little to do with the technology itself. Despite the increasing number of FDA-authorised AI medical devices, insurance claims analysis reveals that AI usage accounts for only a small percentage of total billing [6], an indicator of the chasm between regulatory approval and patient access. This isn't a story of technological failure but rather of a systemic mismatch in how healthcare systems finance and deploy AI innovation.

The fundamental challenge may lie in the economic profile of AI. Traditional medical devices generate revenue through discrete, billable procedures, whilst AI systems often create value by preventing unnecessary interventions, reducing diagnostic errors, or improving workflow efficiency.
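
A toy calculation makes this mismatch concrete. The sketch below is a minimal illustration, not drawn from any cited source: every figure (the procedure fee, the system cost, the fraction of referrals an AI triage tool avoids) is a made-up assumption chosen only to show the shape of the problem.

```python
# Toy illustration of the reimbursement mismatch; every number is a hypothetical assumption.
PROCEDURE_FEE = 1200       # assumed reimbursement per imaging procedure
PROCEDURE_COST = 900       # assumed cost to the health system per procedure
BASELINE_PROCEDURES = 100  # assumed procedures per month without AI triage
AVOIDED_FRACTION = 0.20    # assumed share of referrals the AI flags as unnecessary

procedures_with_ai = BASELINE_PROCEDURES * (1 - AVOIDED_FRACTION)

provider_revenue_before = BASELINE_PROCEDURES * PROCEDURE_FEE
provider_revenue_after = procedures_with_ai * PROCEDURE_FEE
system_cost_saved = (BASELINE_PROCEDURES - procedures_with_ai) * PROCEDURE_COST

print(f"Provider revenue falls by: {provider_revenue_before - provider_revenue_after:,.0f}")
print(f"System-wide cost avoided:  {system_cost_saved:,.0f}")
# Under fee-for-service, the provider loses billings while the wider system saves money,
# so the entity asked to pay for the AI is not the one capturing its value.
```

Under these assumed numbers, the provider's billings fall by 24,000 per month even as the system avoids 18,000 in costs: fee-for-service payment actively penalises the adopter of value-creating AI.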

However, perhaps more importantly, barriers to access likely extend beyond payment structures. Healthcare systems operate on thin margins, making them risk-averse towards technologies that require significant upfront investment. AI deployment typically demands substantial infrastructure changes: staff training, workflow redesign, technical support, and ongoing maintenance. These costs are immediate and tangible, whilst the benefits (improved outcomes, reduced errors, and enhanced efficiency) are often diffuse and difficult to monetise under current payment models.

Geographic and socioeconomic disparities further complicate access. Rural hospitals and community health centres often lack the technical infrastructure needed to deploy AI systems effectively, even when reimbursement is available. The fragmented nature of healthcare procurement compounds these challenges, forcing AI companies to navigate hundreds of individual payers and procurement processes.

Recognising these barriers, stakeholders are developing innovative approaches to improve AI access. Value-based purchasing arrangements allow AI developers to share financial risk with healthcare providers, receiving payment only when demonstrable improvements are achieved. Subscription models treat AI as operational technology rather than a per-use service, enabling predictable costs whilst allowing companies to scale across entire health systems. International examples [7] illustrate a range of approaches: Germany's Digital Health Applications (DiGA) programme provides automatic reimbursement for AI-enabled digital therapeutics whilst real-world evidence is collected, France's PECAN scheme enables temporary reimbursement for digital health technologies, and the UK's Early Value Assessment supports rapid access for technologies addressing national unmet needs [8]. Together, these schemes demonstrate that systematic policy interventions can accelerate patient access without compromising value and safety standards.

The access challenge ultimately reflects a broader question about how healthcare systems adapt to technological innovation. If AI becomes essential for delivering high-quality care, unequal access could exacerbate existing health inequities. Success requires not just better algorithms but better systems for translating innovation into accessible care.

The trust deficit: explainability and acceptance

Perhaps the most fundamental access barrier is trust, which operates at multiple levels to limit patient access to AI innovations. The 'black box' nature of many AI systems is a significant barrier to their social acceptance: adopters' concerns about the lack of explainability of algorithmic predictions create a circular problem in which, without widespread adoption, AI cannot demonstrate real-world value, yet without proven value, adoption remains limited. This trust deficit means that even regulatory-approved, reimbursed AI technologies may remain unused in healthcare systems, effectively blocking patient access despite being technically and economically ready.
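
To make the explainability concern concrete, the sketch below shows one common technique a developer might use to report which inputs drive a model's predictions: permutation importance, via scikit-learn. It is a minimal illustration under stated assumptions (the dataset, feature names, and model choice are all synthetic inventions), not a description of any particular regulated product.

```python
# Minimal sketch: quantifying which inputs drive a clinical model's predictions.
# All data and feature names are synthetic assumptions for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: age, blood pressure, biomarker level, scan score.
X = rng.normal(size=(n, 4))
feature_names = ["age", "blood_pressure", "biomarker", "scan_score"]
# Synthetic outcome driven mostly by the biomarker and the scan score.
y = (0.2 * X[:, 0] + 1.5 * X[:, 2] + 1.0 * X[:, 3] + rng.normal(scale=0.5, size=n)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt test accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:>15}: {mean:.3f} +/- {std:.3f}")
```

Outputs like these do not open the black box entirely, but they give clinicians a tangible artefact to interrogate, one practical starting point for the trust-building this section describes.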

Healthcare professionals express legitimate concerns about accountability and liability when AI recommendations lead to adverse outcomes, creating another layer of access barriers. If a professional fails to use, or to abide by the advice of, an AI tool and there is a poor outcome, it is conceivable that this could be considered clinical negligence; conversely, if following an AI recommendation leads to harm, the question of who is liable becomes complex. These concerns create a professional environment in which many clinicians may choose to avoid AI tools entirely, limiting patient access regardless of technological capability.

Patient trust presents yet another dimension of the access challenge. Patients may feel uncomfortable with AI-driven decisions, particularly when they cannot understand how recommendations were generated. This patient-level resistance can create bottom-up pressure against AI adoption, as healthcare providers prioritise patient comfort and the therapeutic relationship over technological efficiency.

New standards are emerging to address algorithmic bias and explainability in healthcare AI. The UK's MHRA requires manufacturers to prove that they have appropriately identified, measured, managed and mitigated risks arising from bias, whilst the EU AI Act mandates examination of possible biases likely to affect health and safety, including assessment across the specific geographical, contextual, behavioural or functional settings in which AI systems will be used. However, regulation alone cannot bridge the trust gap that limits patient access. Successful implementations require continuous engagement through post-market surveillance systems, clinician user groups, and human-centred design that involves practitioners and patients throughout development. These approaches recognise that trust is not a technical problem to be solved but a social relationship to be cultivated, requiring ongoing effort to sustain patient access to beneficial AI innovations.
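
To ground the "measured" part of these regulatory expectations, the following sketch shows one common evidential starting point: comparing a model's sensitivity across patient subgroups. It is a minimal illustration on synthetic data; the subgroup labels, error rates, and thresholds are all assumptions invented for demonstration, not requirements taken from the MHRA or the EU AI Act.

```python
# Minimal sketch: measuring a model's performance gap across patient subgroups.
# Synthetic predictions and subgroup labels; every value is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(42)
n = 2000
# Hypothetical subgroup label (e.g., two geographic settings) and ground truth.
group = rng.integers(0, 2, size=n)           # 0 = setting A, 1 = setting B
y_true = rng.integers(0, 2, size=n)          # 1 = disease present
# Simulate a model that is slightly less sensitive in setting B.
p_correct = np.where((y_true == 1) & (group == 1), 0.75, 0.90)
y_pred = np.where(rng.random(n) < p_correct, y_true, 1 - y_true)

def sensitivity(y_t, y_p):
    """True positive rate: the share of actual positives the model catches."""
    positives = y_t == 1
    return (y_p[positives] == 1).mean()

for g, label in [(0, "setting A"), (1, "setting B")]:
    mask = group == g
    print(f"{label}: sensitivity = {sensitivity(y_true[mask], y_pred[mask]):.3f}")

gap = abs(sensitivity(y_true[group == 0], y_pred[group == 0])
          - sensitivity(y_true[group == 1], y_pred[group == 1]))
print(f"sensitivity gap between settings: {gap:.3f}")
```

Regulators' expectations go further than this (management and mitigation, not just measurement), but subgroup disparity metrics of this kind are typically where the evidence trail begins.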

From regulation to access: the path forward

The convergence of regulatory maturity, economic innovation, and trust-building represents an unprecedented opportunity to transform AI from laboratory curiosity to an essential healthcare tool. Key priorities for achieving widespread patient access include:

  • Integrated policy approaches: Regulatory frameworks, reimbursement models, and implementation support must be designed as interconnected systems rather than independent processes, with coordinated pathways that address approval, payment, and deployment simultaneously.

  • Stakeholder alignment: Success requires unprecedented collaboration between AI developers, healthcare providers, payers, and patients to create shared value propositions that reward patient outcomes rather than technological complexity.

  • Equity-centred deployment: Access strategies must explicitly address geographic, socioeconomic, and infrastructure disparities to ensure AI benefits reach underserved populations rather than exacerbating existing health inequities.

  • Joint scientific advice with AI-specific expertise: Regulatory agencies should expand collaborative scientific advice programmes to include dedicated AI expertise, enabling developers to receive coordinated guidance on complex AI validation requirements across multiple jurisdictions simultaneously.

The organisations that succeed in AI healthcare today recognise that technical excellence is just the starting point for patient access. As regulatory frameworks develop and economic models change, the real competitive advantage is not in creating better algorithms but in designing systems that consistently deliver proven AI innovations to the patients who need them most.

For any enquiries or deeper insights on these developments, feel free to reach out to our team at: enquiries@decisiveconsulting.co.uk.

References

1. McCarthy Tétrault LLP. AI-Enabled Medical Devices: Transformation and Regulation. TechLex Blog. Available at: https://www.mccarthy.ca/en/insights/blogs/techlex/ai-enabled-medical-devices-transformation-and-regulation.

2. Spyrosoft. Regulation of AI in Healthcare in 2024: EU and FDA approaches. Available at: https://spyro-soft.com/blog/healthcare/regulation-of-ai-in-healthcare-in-2024-eu-and-fda-approaches.

3. Food and Drug Administration. Software as a Medical Device (SaMD). Available at: https://www.fda.gov/medical-devices/digital-health-center-excellence/software-medical-device-samd.

4. European Parliament and Council of the European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act). Official Journal of the European Union. 2024 Jul 12;L 2024/1689:1-144. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689.

5. Medicines & Healthcare products Regulatory Agency. Software and AI as a Medical Device Change Programme roadmap. Available at: https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/software-and-ai-as-a-medical-device-change-programme-roadmap.

6. Wu K, Wu E, Theodorou B, Liang W, Mack C, Glass L, et al. Characterizing the Clinical Adoption of Medical AI Devices through U.S. Insurance Claims. NEJM AI. 2024;1(1).

7. Farah L, Borget I, Martelli N. International Market Access Strategies for Artificial Intelligence–Based Medical Devices: Can We Standardize the Process to Faster Patient Access? Mayo Clin Proc Digit Health. 2023 Aug 8;1(3):406–412.

8. National Institute for Health and Care Excellence. Early Value Assessment (EVA) for medtech. Available at: https://www.nice.org.uk/about/what-we-do/eva-for-medtech.


Written by Michael Harding

Decisive Edge 21st July 2025
