Frameworks that make digital health legible, evaluable, deployable.

Curated regulatory pathways, evaluation frameworks, AI assurance standards, and interoperability references — the documents healthcare innovators, clinicians, and policy professionals actually use to build, evaluate, and deploy.

46 frameworks & standards · 4 editorial categories · 14 updated in 2025–26 · Last updated May 2026
01 · Regulatory frameworks & pathways

The rules of market access.

FDA, EU, UK, and global medical-device regulators determine who can ship a digital health product, in what category, and with what evidence. Several of these frameworks were materially updated in early 2026.

FDA Software as a Medical Device (SaMD)
U.S. FDA · USA · Active
Foundational · Updated 2026

FDA’s risk-based framework for standalone medical software, aligned with IMDRF. Note: FDA withdrew the SaMD Clinical Evaluation guidance in January 2026.

Visit →
EU AI Act
European Union · EU · Phased rollout
EU · AI Regulation

Risk-tiered AI regulation. Most healthcare AI falls in the high-risk category, requiring conformity assessment, transparency, and human oversight.

Visit →
02 · Evaluation & evidence frameworks

Does it actually work?

The frameworks the field uses to evaluate whether a digital health product does what it says — from clinical validation to outcomes measurement to evidence-based reporting standards.

DiMe V3+ Framework
Digital Medicine Society · Global · Free
Foundational · Validation

Foundational framework for digital clinical measures: Verification, Analytical validation, Clinical validation. Extended by V3+ Usability Validation.

Visit →
DiMe Playbooks
Digital Medicine Society · Global · Free
Practitioner · Implementation

Practitioner playbooks covering Digital Clinical Measures, Digital Healthcare, Pediatric Digital Medicine, and Implementing AI in Healthcare.

Visit →
ICHOM Standard Sets
ICHOM · Global · Free
Outcomes · Value-based

Standardized outcome measurement sets across more than 40 conditions, designed to enable global benchmarking of value-based care.

Visit →
ORCHA Digital Health Library
ORCHA · UK · Active
UK · App Assessment

UK-based digital health assessment platform. Powers app libraries for the NHS and other health systems through standardized review.

Visit →
NHS Digital Technology Assessment Criteria (DTAC)
NHS England · UK · Refreshed Feb 2026
UK · Updated 2026

Five-pillar assessment for NHS-bound digital health products: clinical safety, data protection, technical assurance, interoperability, usability. Refreshed Feb 2026 with a 25 percent question reduction.

Visit →
CONSORT-AI / SPIRIT-AI
EQUATOR Network · Global · Free
Reporting · Trials

Reporting guidelines for clinical trials of AI interventions. Extensions to CONSORT (results) and SPIRIT (protocols).

Visit →
DECIDE-AI
DECIDE-AI Steering Group · Global · Free
Reporting · Decision Support

Reporting guideline for early-stage clinical evaluation of AI decision support, covering the gap between offline validation and randomized trials.

Visit →
TRIPOD+AI
TRIPOD Group · Global · Free
Reporting · Prediction Models

Transparent Reporting of a multivariable prediction model (TRIPOD), with the AI extension covering diagnostic and prognostic AI/ML model reporting.

Visit →
PRISMA-AI
EQUATOR Network · Global · Free
Reporting · Systematic Reviews

Reporting standard for systematic reviews of clinical AI studies, extending PRISMA to address AI-specific methodological considerations.

Visit →
ARPA-H Programs
ARPA-H · USA · Active funding
USA · R&D

U.S. federal R&D agency funding high-risk, high-reward biomedical and digital health programs across diagnostics, AI, and care delivery.

Visit →
03 · AI assurance, safety & responsible use

Who’s watching the AI.

Governance, assurance, and ethics frameworks specific to health AI. The layer between regulatory pathways and operational deployment — what health systems are increasingly required to demonstrate.

CHAI Responsible AI Guide (RAIG)
Coalition for Health AI · USA · Free
USA · Health AI

CHAI’s primary playbook for ethical and quality-assured deployment of AI in healthcare, paired with the Assurance Standards Guide.

Visit →
NIST AI Risk Management Framework
NIST · USA · Voluntary
USA · Foundational

Voluntary U.S. framework for governing, mapping, measuring, and managing AI risk. Widely adopted across health AI assurance programs.

Visit →
NIST AI RMF Generative AI Profile
NIST · USA · Active
USA · GenAI

Companion profile to the AI RMF specifically addressing generative AI risks, including hallucination, content provenance, and prompt injection.

Visit →
ISO/IEC 42001: AI Management Systems
ISO · Global · Certifiable
Global · Certifiable

First international certifiable standard for AI management systems. Specifies requirements for establishing, implementing, and continually improving an AIMS.

Visit →
OECD AI Principles
OECD · Global · Free
Global · Principles

International principles for trustworthy AI, adopted by more than 40 countries and informing healthcare AI policy globally.

Visit →
EQUATOR Network Reporting Guidelines
EQUATOR Network · Global · Free
Global · Library

Master library of more than 500 health research reporting guidelines. Hosts CONSORT, STROBE, PRISMA, TRIPOD, SPIRIT, and AI extensions.

Visit →
HITRUST CSF
HITRUST Alliance · USA · Certifiable
USA · Security

Common Security Framework that harmonizes HIPAA, NIST, ISO 27001, PCI, and other regulations. The de facto security certification for U.S. health-tech.

Visit →
04 · Technical standards & interoperability

The plumbing.

How digital health systems exchange data, structure information, and connect to the broader healthcare infrastructure. Without these, nothing scales.

HL7 FHIR
Health Level Seven International · Global · Free
Foundational · Interop

Fast Healthcare Interoperability Resources. The dominant modern healthcare data exchange standard, mandated for U.S. EHRs and increasingly globally.

Visit →
SMART on FHIR
SMART Health IT · Global · Free
Apps · EHR

App platform standard built on FHIR and OAuth 2.0. The standard pathway for third-party clinical apps to plug into EHRs.

Visit →
USCDI
ASTP / ONC · USA · Annual updates
USA · Data Classes

United States Core Data for Interoperability. Standardized data classes that ONC-certified health IT must support, expanded annually.

Visit →
TEFCA
Sequoia Project · USA · Active
USA · HIE

Trusted Exchange Framework and Common Agreement. National-scale health information exchange via Qualified Health Information Networks (QHINs).

Visit →
DICOM
MITA · Global · Free
Global · Imaging

Digital Imaging and Communications in Medicine. The universal standard for medical image storage, exchange, and metadata.

Visit →
SNOMED CT
SNOMED International · Global · Licensed
Global · Terminology

Comprehensive clinical terminology used across EHRs globally. The most widely deployed clinical reference terminology in the world.

Visit →
LOINC
Regenstrief Institute · Global · Free
Global · Lab Codes

Universal coding system for laboratory tests, clinical observations, and survey instruments. Required by USCDI and used by every major lab.

Visit →
OMOP Common Data Model
OHDSI · Global · Free
Global · Research

Observational Medical Outcomes Partnership Common Data Model. Standardized data model used for federated observational health research at scale.

Visit →
Common questions

Frequently asked questions.

Quick answers about which frameworks apply to which products, who needs to comply, and how the layers fit together.

What is Software as a Medical Device (SaMD) and how does the FDA regulate it?
Software as a Medical Device (SaMD) is software intended for medical purposes that performs those purposes without being part of a hardware medical device. The FDA regulates SaMD using a risk-based approach aligned with the International Medical Device Regulators Forum (IMDRF) framework. Key FDA pathways and guidance include the 510(k) and De Novo pathways, Predetermined Change Control Plans (PCCPs) for AI/ML, Good Machine Learning Practice (GMLP), and the Clinical Decision Support Software guidance updated in January 2026. The FDA had authorized over 1,350 AI-enabled medical devices by early 2026.
What frameworks govern AI assurance and safety in healthcare?
The leading AI assurance frameworks for healthcare include the CHAI Responsible AI Guide (RAIG) and Assurance Standards Guide from the Coalition for Health AI; the Joint Commission and CHAI Responsible Use of AI Guidance released in September 2025; the NIST AI Risk Management Framework with its Generative AI Profile; ISO/IEC 42001 for AI management systems; the National Academy of Medicine AI Code of Conduct; and the WHO Ethics and Governance of AI for Health. Together these frameworks shape how U.S. health systems govern, evaluate, and deploy clinical AI. See the full AI assurance & safety section.
What is the V3 framework for digital clinical measures?
The V3 framework, developed by the Digital Medicine Society (DiMe), evaluates digital clinical measures across three layers: Verification (do the sensors capture what they claim to capture), Analytical validation (does the algorithm convert sensor data accurately), and Clinical validation (does the measure correspond to the clinical concept of interest in the target population). DiMe has since published V3+ Usability Validation as an extension. The V3 framework has been accessed over 30,000 times and adopted by NIH, FDA, and the European Medicines Agency. See also the DiMe Playbooks.
Which standards make digital health systems interoperable?
The core interoperability standards are HL7 FHIR (Fast Healthcare Interoperability Resources, the dominant modern data exchange standard), SMART on FHIR (the app platform standard for plugging third-party apps into EHRs), USCDI (United States Core Data for Interoperability), and TEFCA (national-scale health information exchange via QHINs). Additional standards include DICOM for imaging, SNOMED CT for clinical terminology, LOINC for lab observations, and the OMOP Common Data Model for observational research. See the full technical & interoperability section.
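To make the layering concrete, the sketch below builds a minimal FHIR R4 Observation in Python: FHIR supplies the resource structure, LOINC codes the lab test, and UCUM expresses the unit. The LOINC code 2339-0 (Glucose [Mass/volume] in Blood) is real; the patient reference and result value are hypothetical, and a production system would typically post this JSON to an EHR's FHIR API rather than just print it.

```python
import json

# Minimal FHIR R4 Observation for a blood-glucose lab result.
# FHIR defines the resource shape; LOINC identifies the test;
# UCUM identifies the unit. Patient id and value are illustrative.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "2339-0",  # LOINC: Glucose [Mass/volume] in Blood
            "display": "Glucose [Mass/volume] in Blood",
        }]
    },
    "subject": {"reference": "Patient/example"},  # hypothetical patient
    "valueQuantity": {
        "value": 95,
        "unit": "mg/dL",
        "system": "http://unitsofmeasure.org",  # UCUM unit system
        "code": "mg/dL",
    },
}

# Any FHIR-capable system (EHR, SMART app, research pipeline) can
# exchange this resource as plain JSON over a RESTful FHIR API.
payload = json.dumps(observation, indent=2)
print(payload)
```

Because every element here points at a shared standard (a FHIR resource type, a LOINC code system URI, a UCUM unit), a receiving system can interpret the result without any bilateral agreement about field names or units — which is the whole point of the stack described above.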
What reporting guidelines apply to clinical AI studies?
The leading reporting guidelines for clinical AI studies are CONSORT-AI / SPIRIT-AI (clinical trial results and protocols), DECIDE-AI (early-stage clinical evaluation of AI decision support), TRIPOD+AI (transparent reporting of AI prediction models), and PRISMA-AI (systematic reviews of clinical AI). All are hosted by the EQUATOR Network, which maintains the master library of over 500 health research reporting guidelines.
How do U.S. and EU digital health regulations differ?
The U.S. regulates digital health primarily through the FDA’s risk-based SaMD framework, with the 21st Century Cures Act exempting certain Clinical Decision Support software. The EU regulates through the Medical Device Regulation (MDR 2017/745), which classifies software based on risk and requires CE marking through Notified Bodies. The EU AI Act adds an additional layer specifically for AI systems, classifying most healthcare AI as high-risk and requiring conformity assessment, transparency, and human oversight. The UK MHRA operates a parallel post-Brexit regime, while Health Canada and the IMDRF help harmonize across regulators.
What is the NHS DTAC and who needs to comply?
The NHS Digital Technology Assessment Criteria (DTAC) is the national baseline assessment framework for digital health technologies entering the NHS in England. It covers five areas: clinical safety, data protection, technical assurance, interoperability, and usability and accessibility. Any supplier or developer wishing to make a digital health product available to NHS organizations must complete a DTAC assessment. NHS England refreshed the DTAC in February 2026 with a 25 percent reduction in questions and clearer scope alignment with NICE.
What is the difference between ISO 14971, IEC 62304, and ISO/IEC 42001?
ISO 14971 is the foundational risk management standard for medical devices, required for nearly every regulatory submission worldwide. IEC 62304 specifies the software lifecycle processes for medical device software, including SaMD; Edition 2 expanded coverage to legacy software and security. ISO/IEC 42001 is the first international certifiable standard for AI management systems, specifying how organizations should govern, design, deploy, and continually improve AI products. Together they form the standards backbone for compliant SaMD and AI-enabled medical device development.
Which frameworks are most important for digital health startups to know?
Digital health startups should prioritize understanding the FDA SaMD framework (or the EU MDR for European markets), HL7 FHIR for interoperability, ISO 14971 for risk management, IEC 62304 for software lifecycle, the DiMe V3+ framework for evidence generation, the NIST AI Risk Management Framework or ISO/IEC 42001 for AI products, and HITRUST CSF for security. For U.S. health system buyers, the CHAI Responsible AI Guide and Joint Commission AI guidance are increasingly required. UK-bound products must additionally satisfy the NHS DTAC and the NICE Evidence Standards Framework.
Stay ahead

Get the Digital.Health newsletter.

Curated digital health news, framework updates, and platform releases — delivered to your inbox. Join 30,000+ clinicians, innovators, and health leaders.

Subscribe free →

Complete index of digital health frameworks and standards

Digital.Health curates 46 essential frameworks and standards across four categories: Regulatory frameworks & pathways (12), Evaluation & evidence frameworks (12), AI assurance, safety & responsible use (13), and Technical standards & interoperability (9).

Curated by Daniel Kraft, MD, Stanford- and Harvard-trained physician-scientist and Founder of Digital.Health.

Common questions this index answers: What is Software as a Medical Device (SaMD) and how does the FDA regulate it? What frameworks govern AI assurance and safety in healthcare? What is the V3 framework for digital clinical measures? Which standards make digital health systems interoperable? What reporting guidelines apply to clinical AI studies? How do U.S. and EU digital health regulations differ? What is the NHS DTAC and who needs to comply? What is the difference between ISO 14971, IEC 62304, and ISO/IEC 42001? Which frameworks are most important for digital health startups to know?

Topics covered: FDA Software as a Medical Device, EU MDR and AI Act, MHRA AI as a Medical Device, IMDRF harmonization, WHO AI for health, DiMe V3+ framework, NICE Evidence Standards Framework, ICHOM outcome measurement, NHS DTAC, Peterson Health Technology Institute, CONSORT-AI / SPIRIT-AI, DECIDE-AI, TRIPOD+AI, PRISMA-AI, ARPA-H, CHAI Responsible AI Guide, Joint Commission AI guidance, NIST AI Risk Management Framework, ISO/IEC 42001, ISO 14971, IEC 62304, WHO AI ethics, NAM AI Code of Conduct, HTI rules, OECD AI Principles, EQUATOR Network, HITRUST CSF, HL7 FHIR, SMART on FHIR, USCDI, TEFCA, DICOM, IEEE 11073, SNOMED CT, LOINC, OMOP Common Data Model.

Regulatory frameworks & pathways

  • FDA Software as a Medical Device (SaMD) by U.S. FDA — Risk-based framework for standalone medical software, aligned with IMDRF. SaMD Clinical Evaluation guidance withdrawn January 2026.
  • FDA Digital Health Center of Excellence by U.S. FDA — Coordinating hub for digital health policy. Houses TEMPO pilot launched with CMS in early 2026.
  • FDA Clinical Decision Support Software Guidance by U.S. FDA — Updated Jan 2026. Four-criteria test under 21st Century Cures Act for non-device CDS vs SaMD.
  • FDA Predetermined Change Control Plans (PCCPs) for AI/ML by U.S. FDA — Final guidance for managing modifications to AI/ML-enabled devices over the lifecycle.
  • FDA Good Machine Learning Practice (GMLP) by FDA, Health Canada, MHRA — Ten guiding principles for ML medical-device development.
  • FDA General Wellness: Policy for Low-Risk Devices by U.S. FDA — Updated Jan 2026. Defines what is outside FDA medical device oversight.
  • EU Medical Device Regulation (MDR 2017/745) by European Commission — Governs SaMD, digital therapeutics, and AI medical devices in the EU.
  • EU AI Act by European Union — Risk-tiered AI regulation. Most healthcare AI is high-risk and requires conformity assessment.
  • MHRA Software and AI as a Medical Device by MHRA — UK roadmap for SaMD and AIaMD post-Brexit.
  • Health Canada Software as a Medical Device by Health Canada — Risk-based pathway aligned with IMDRF.
  • IMDRF Software as a Medical Device documents by IMDRF — International harmonized definitions, risk categorization, and clinical evaluation principles.
  • WHO Regulatory Considerations on AI for Health by World Health Organization — Global guidance on AI/ML medical device regulation.

Evaluation & evidence frameworks

  • DiMe V3+ Framework by Digital Medicine Society — Verification, Analytical validation, Clinical validation. Plus V3+ Usability Validation extension.
  • DiMe Playbooks by Digital Medicine Society — Playbooks covering Digital Clinical Measures, Digital Healthcare, Pediatric, and Implementing AI in Healthcare.
  • NICE Evidence Standards Framework for Digital Health Technologies by NICE — UK evidence requirements by digital health risk tier.
  • ICHOM Standard Sets by ICHOM — Standardized outcome measurement across 40+ conditions.
  • ORCHA Digital Health Library by ORCHA — UK digital health assessment platform powering NHS app libraries.
  • Peterson Health Technology Institute (PHTI) by Peterson Center on Healthcare — Independent evidence-based assessments of digital health solutions.
  • NHS Digital Technology Assessment Criteria (DTAC) by NHS England — Five-pillar NHS assessment, refreshed February 2026.
  • CONSORT-AI / SPIRIT-AI by EQUATOR Network — Reporting guidelines for clinical trials of AI interventions.
  • DECIDE-AI by DECIDE-AI Steering Group — Reporting guideline for early-stage clinical evaluation of AI decision support.
  • TRIPOD+AI by TRIPOD Group — Transparent reporting of AI prediction models.
  • PRISMA-AI by EQUATOR Network — Reporting standard for systematic reviews of clinical AI.
  • ARPA-H Programs by ARPA-H — U.S. federal R&D funding for high-risk biomedical and digital health programs.

AI assurance, safety & responsible use

  • CHAI Responsible AI Guide (RAIG) by Coalition for Health AI — Primary playbook for ethical and quality-assured AI deployment.
  • Joint Commission and CHAI Responsible Use of AI Guidance by Joint Commission and CHAI — Sept 2025 framework with seven governance domains.
  • NIST AI Risk Management Framework by NIST — Voluntary framework for governing, mapping, measuring, and managing AI risk.
  • NIST AI RMF Generative AI Profile by NIST — Companion profile addressing GenAI-specific risks.
  • ISO/IEC 42001 — AI Management Systems by ISO — First international certifiable standard for AI management systems.
  • ISO 14971 — Risk Management for Medical Devices by ISO — Foundational risk management standard for SaMD.
  • IEC 62304 — Medical Device Software Lifecycle by IEC — Software lifecycle processes for medical-device software.
  • WHO Ethics & Governance of AI for Health by World Health Organization — Six core ethical principles plus 2024 LMM guidance.
  • National Academy of Medicine AI Code of Conduct by National Academy of Medicine — Cross-sector AI Code of Conduct shaping U.S. health-AI norms.
  • HHS / ASTP-ONC HTI-1 and HTI-2 Final Rules by ASTP / ONC — DSI transparency requirements for ONC-certified EHRs.
  • OECD AI Principles by OECD — International principles for trustworthy AI adopted by 40+ countries.
  • EQUATOR Network reporting guidelines by EQUATOR Network — Master library of 500+ health research reporting guidelines.
  • HITRUST CSF by HITRUST Alliance — Common Security Framework harmonizing HIPAA, NIST, ISO 27001 for U.S. health-tech.

Technical standards & interoperability

  • HL7 FHIR by Health Level Seven International — Dominant modern healthcare data exchange standard, mandated for U.S. EHRs.
  • SMART on FHIR by SMART Health IT — App platform standard for third-party clinical apps to plug into EHRs.
  • USCDI by ASTP / ONC — Standardized data classes that ONC-certified health IT must support.
  • TEFCA by Sequoia Project — National-scale health information exchange via QHINs.
  • DICOM by MITA — Universal standard for medical image storage and exchange.
  • IEEE 11073 Personal Health Devices by IEEE — Standards for communication between personal health devices and wearables.
  • SNOMED CT by SNOMED International — Most widely deployed clinical reference terminology globally.
  • LOINC by Regenstrief Institute — Universal coding system for laboratory tests and clinical observations.
  • OMOP Common Data Model by OHDSI — Standardized data model for federated observational health research at scale.