FDA’s risk-based framework for standalone medical software, aligned with IMDRF. Note: FDA withdrew the SaMD Clinical Evaluation guidance in January 2026.
Frameworks that make digital health legible, evaluable, deployable.
Curated regulatory pathways, evaluation frameworks, AI assurance standards, and interoperability references — the documents healthcare innovators, clinicians, and policy professionals actually use to build, evaluate, and deploy.
The rules of market access.
FDA, EU, UK, and global medical-device regulators determine who can ship a digital health product, in what category, and with what evidence. Several of these frameworks were materially updated in early 2026.
FDA’s coordinating hub for digital health policy. Houses the TEMPO pilot launched with CMS in early 2026, Pre-Cert lessons learned, and AI/ML strategy.
Defines the four-criteria test under the 21st Century Cures Act for distinguishing non-device CDS from regulated SaMD. Updated January 2026.
Final FDA guidance establishing PCCPs for managing modifications to AI/ML-enabled devices across the total product lifecycle.
Ten guiding principles for ML medical-device development, jointly authored by FDA, Health Canada, and the UK MHRA.
Updated January 2026. Defines what falls outside FDA medical device oversight, with expanded examples covering wearables and non-invasive monitoring.
Successor to the Medical Device Directive. Governs SaMD, digital therapeutics, and AI-enabled medical devices placed on the EU market.
Risk-tiered AI regulation. Most healthcare AI falls in the high-risk category, requiring conformity assessment, transparency, and human oversight.
UK regulator’s roadmap and guidance for SaMD and AI as a Medical Device, shaping the post-Brexit UK regulatory regime.
Risk-based pathway for SaMD aligned with IMDRF. Co-author of the GMLP guiding principles with FDA and MHRA.
International harmonized definitions, risk categorization, quality management, and clinical evaluation principles for SaMD.
WHO’s global guidance on regulatory considerations for AI/ML medical devices, intended to support harmonization across member states.
Does it actually work?
The frameworks the field uses to evaluate whether a digital health product does what it says — from clinical validation to outcomes measurement to evidence-based reporting standards.
Foundational framework for digital clinical measures: Verification, Analytical validation, Clinical validation. Extended by V3+ Usability Validation.
Practitioner playbooks covering Digital Clinical Measures, Digital Healthcare, Pediatric Digital Medicine, and Implementing AI in Healthcare.
UK reference for the level of evidence needed to demonstrate effectiveness and value across digital health technology risk tiers.
Standardized outcome measurement sets across more than 40 conditions, designed to enable global benchmarking of value-based care.
UK-based digital health assessment platform. Powers app libraries for the NHS and other health systems through standardized review.
Independent institute publishing evidence-based assessments of digital health solutions across value, clinical impact, and adoption.
Five-pillar assessment for NHS-bound digital health products: clinical safety, data protection, technical assurance, interoperability, usability. Refreshed February 2026 with a 25 percent question reduction.
Reporting guidelines for clinical trials of AI interventions. Extensions to CONSORT (results) and SPIRIT (protocols).
Reporting guideline for early-stage clinical evaluation of AI decision support, covering the gap between offline validation and randomized trials.
Transparent Reporting of multivariable prediction models, AI extension. Covers diagnostic and prognostic AI/ML model reporting.
Reporting standard for systematic reviews of clinical AI studies, extending PRISMA to address AI-specific methodological considerations.
U.S. federal R&D agency funding high-risk, high-reward biomedical and digital health programs across diagnostics, AI, and care delivery.
Who’s watching the AI.
Governance, assurance, and ethics frameworks specific to health AI. The layer between regulatory pathways and operational deployment — what health systems are increasingly required to demonstrate.
CHAI’s primary playbook for ethical and quality-assured deployment of AI in healthcare, paired with the Assurance Standards Guide.
September 2025 joint framework establishing seven governance domains for AI in U.S. health systems. Precursor to a voluntary AI certification program.
Voluntary U.S. framework for governing, mapping, measuring, and managing AI risk. Widely adopted across health AI assurance programs.
Companion profile to the AI RMF specifically addressing generative AI risks, including hallucination, content provenance, and prompt injection.
First international certifiable standard for AI management systems. Specifies requirements for establishing, implementing, and continually improving an AIMS.
Foundational risk management standard required for SaMD and most medical device regulatory submissions worldwide.
Software lifecycle processes for medical device software. Edition 2 expands coverage to legacy software, software of unknown provenance, and security.
Six core ethical principles for AI in health, with companion 2024 guidance on large multi-modal models (LMMs).
Cross-sector AI Code of Conduct for healthcare, shaping U.S. norms and informing the CHAI and NIST frameworks.
Health Data, Technology, and Interoperability rules. Establish transparency requirements for decision support interventions (DSIs) in ONC-certified EHRs.
International principles for trustworthy AI, adopted by more than 40 countries and informing healthcare AI policy globally.
Master library of more than 500 health research reporting guidelines. Hosts CONSORT, STROBE, PRISMA, TRIPOD, SPIRIT, and AI extensions.
Common Security Framework that harmonizes HIPAA, NIST, ISO 27001, PCI, and other regulations. The de facto security certification for U.S. health-tech.
The plumbing.
How digital health systems exchange data, structure information, and connect to the broader healthcare infrastructure. Without these, nothing scales.
Fast Healthcare Interoperability Resources. The dominant modern healthcare data exchange standard, mandated for U.S. EHRs and increasingly globally.
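FHIR resources are plain JSON with standardized field names. A minimal sketch of a Patient resource, assuming FHIR R4; the field names follow the published Patient schema, but the identifier system URL and values are hypothetical examples:

```python
import json

# Minimal sketch of a FHIR R4 Patient resource as plain JSON.
# resourceType, identifier, name, and birthDate are real fields from
# the FHIR Patient schema; the identifier "system" URL below is a
# hypothetical MRN namespace, not a real assigning authority.
patient = {
    "resourceType": "Patient",
    "identifier": [{
        "system": "https://example.org/mrn",  # hypothetical namespace
        "value": "12345",
    }],
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-01-15",
}

def resource_type(resource: dict) -> str:
    """Every FHIR resource declares its type in resourceType."""
    return resource["resourceType"]

print(resource_type(patient))  # Patient
serialized = json.dumps(patient)  # what travels over a FHIR REST API
```

In practice a server exposes these resources over REST (e.g. `GET [base]/Patient/12345`), and clients exchange exactly this JSON shape.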
App platform standard built on FHIR and OAuth 2.0. The standard pathway for third-party clinical apps to plug into EHRs.
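The first leg of a SMART App Launch is a standard OAuth 2.0 redirect to the EHR's authorization endpoint. A sketch of building that request URL; the endpoint, client_id, and redirect_uri are hypothetical, while the parameter names (`response_type`, `scope`, `state`, `aud`) come from the SMART App Launch specification:

```python
from urllib.parse import urlencode

# Hypothetical EHR authorization endpoint (discovered in practice from
# the server's .well-known/smart-configuration document).
AUTHORIZE_ENDPOINT = "https://ehr.example.org/oauth/authorize"

params = {
    "response_type": "code",                    # OAuth 2.0 authorization code flow
    "client_id": "my-clinical-app",             # hypothetical app registration
    "redirect_uri": "https://app.example.org/callback",
    "scope": "launch/patient patient/*.read",   # SMART scopes: patient context + read access
    "state": "abc123",                          # anti-CSRF token, echoed back by the EHR
    "aud": "https://ehr.example.org/fhir",      # the FHIR server the token will be used against
}

authorize_url = f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"
```

The user's browser is sent to `authorize_url`; after consent, the EHR redirects back with a code the app exchanges for an access token scoped to that patient.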
United States Core Data for Interoperability. Standardized data classes that ONC-certified health IT must support, expanded annually.
Trusted Exchange Framework and Common Agreement. National-scale health information exchange via Qualified Health Information Networks (QHINs).
Digital Imaging and Communications in Medicine. The universal standard for medical image storage, exchange, and metadata.
Family of standards for communication between personal health devices, wearables, and remote monitoring systems.
Comprehensive clinical terminology used across EHRs globally. The most widely deployed clinical reference terminology in the world.
Universal coding system for laboratory tests, clinical observations, and survey instruments. Required by USCDI and used by every major lab.
Observational Medical Outcomes Partnership Common Data Model. Standardized data model used for federated observational health research at scale.
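The point of a common data model is that the same query runs unchanged at every participating site. A sketch of the OMOP CDM `person` table with a few of its standard columns, using an in-memory SQLite database; `person_id`, `gender_concept_id`, and `year_of_birth` are real OMOP columns, while the concept IDs 8507/8532 are commonly cited as the male/female gender concepts from the OHDSI vocabularies but should be treated as illustrative here:

```python
import sqlite3

# Minimal slice of the OMOP CDM person table (the real table has more
# required columns, e.g. race_concept_id and ethnicity_concept_id).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE person (
        person_id         INTEGER PRIMARY KEY,
        gender_concept_id INTEGER NOT NULL,  -- OHDSI standard concept
        year_of_birth     INTEGER NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO person VALUES (?, ?, ?)",
    [(1, 8532, 1980), (2, 8507, 1975), (3, 8532, 1990)],
)

# A federated study ships this identical query to every site's CDM
# instance and aggregates only the counts that come back.
rows = conn.execute(
    "SELECT gender_concept_id, COUNT(*) FROM person "
    "GROUP BY gender_concept_id"
).fetchall()
```

Because every site stores data in the same shape with the same vocabularies, no per-site ETL is needed at analysis time, only at ingestion.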
Frequently asked questions.
Quick answers about which frameworks apply to which products, who needs to comply, and how the layers fit together.
What is Software as a Medical Device (SaMD) and how does the FDA regulate it?
What frameworks govern AI assurance and safety in healthcare?
What is the V3 framework for digital clinical measures?
Which standards make digital health systems interoperable?
What reporting guidelines apply to clinical AI studies?
How do U.S. and EU digital health regulations differ?
What is the NHS DTAC and who needs to comply?
What is the difference between ISO 14971, IEC 62304, and ISO/IEC 42001?
Which frameworks are most important for digital health startups to know?
Get the Digital.Health newsletter.
Curated digital health news, framework updates, and platform releases — delivered to your inbox. Join 30,000+ clinicians, innovators, and health leaders.
Complete index of digital health frameworks and standards
Digital.Health curates 46 essential frameworks and standards across four categories: Regulatory frameworks & pathways (12), Evaluation & evidence frameworks (12), AI assurance, safety & responsible use (13), and Technical standards & interoperability (9).
Curated by Daniel Kraft, MD, Stanford- and Harvard-trained physician-scientist and Founder of Digital.Health.
Topics covered: FDA Software as a Medical Device, EU MDR and AI Act, MHRA AI as a Medical Device, IMDRF harmonization, WHO AI for health, DiMe V3+ framework, NICE Evidence Standards Framework, ICHOM outcome measurement, NHS DTAC, Peterson Health Technology Institute, CONSORT-AI / SPIRIT-AI, DECIDE-AI, TRIPOD+AI, PRISMA-AI, ARPA-H, CHAI Responsible AI Guide, Joint Commission AI guidance, NIST AI Risk Management Framework, ISO/IEC 42001, ISO 14971, IEC 62304, WHO AI ethics, NAM AI Code of Conduct, HTI rules, OECD AI Principles, EQUATOR Network, HITRUST CSF, HL7 FHIR, SMART on FHIR, USCDI, TEFCA, DICOM, IEEE 11073, SNOMED CT, LOINC, OMOP Common Data Model.
Regulatory frameworks & pathways
- FDA Software as a Medical Device (SaMD) by U.S. FDA — Risk-based framework for standalone medical software, aligned with IMDRF. SaMD Clinical Evaluation guidance withdrawn January 2026.
- FDA Digital Health Center of Excellence by U.S. FDA — Coordinating hub for digital health policy. Houses TEMPO pilot launched with CMS in early 2026.
- FDA Clinical Decision Support Software Guidance by U.S. FDA — Updated Jan 2026. Four-criteria test under 21st Century Cures Act for non-device CDS vs SaMD.
- FDA Predetermined Change Control Plans (PCCPs) for AI/ML by U.S. FDA — Final guidance for managing modifications to AI/ML-enabled devices over the lifecycle.
- FDA Good Machine Learning Practice (GMLP) by FDA, Health Canada, MHRA — Ten guiding principles for ML medical-device development.
- FDA General Wellness: Policy for Low-Risk Devices by U.S. FDA — Updated Jan 2026. Defines what is outside FDA medical device oversight.
- EU Medical Device Regulation (MDR 2017/745) by European Commission — Governs SaMD, digital therapeutics, and AI medical devices in the EU.
- EU AI Act by European Union — Risk-tiered AI regulation. Most healthcare AI is high-risk and requires conformity assessment.
- MHRA Software and AI as a Medical Device by MHRA — UK roadmap for SaMD and AIaMD post-Brexit.
- Health Canada Software as a Medical Device by Health Canada — Risk-based pathway aligned with IMDRF.
- IMDRF Software as a Medical Device documents by IMDRF — International harmonized definitions, risk categorization, and clinical evaluation principles.
- WHO Regulatory Considerations on AI for Health by World Health Organization — Global guidance on AI/ML medical device regulation.
Evaluation & evidence frameworks
- DiMe V3+ Framework by Digital Medicine Society — Verification, Analytical validation, Clinical validation. Plus V3+ Usability Validation extension.
- DiMe Playbooks by Digital Medicine Society — Playbooks covering Digital Clinical Measures, Digital Healthcare, Pediatric, and Implementing AI in Healthcare.
- NICE Evidence Standards Framework for Digital Health Technologies by NICE — UK evidence requirements by digital health risk tier.
- ICHOM Standard Sets by ICHOM — Standardized outcome measurement across 40+ conditions.
- ORCHA Digital Health Library by ORCHA — UK digital health assessment platform powering NHS app libraries.
- Peterson Health Technology Institute (PHTI) by Peterson Center on Healthcare — Independent evidence-based assessments of digital health solutions.
- NHS Digital Technology Assessment Criteria (DTAC) by NHS England — Five-pillar NHS assessment, refreshed February 2026.
- CONSORT-AI / SPIRIT-AI by EQUATOR Network — Reporting guidelines for clinical trials of AI interventions.
- DECIDE-AI by DECIDE-AI Steering Group — Reporting guideline for early-stage clinical evaluation of AI decision support.
- TRIPOD+AI by TRIPOD Group — Transparent reporting of AI prediction models.
- PRISMA-AI by EQUATOR Network — Reporting standard for systematic reviews of clinical AI.
- ARPA-H Programs by ARPA-H — U.S. federal R&D funding for high-risk biomedical and digital health programs.
AI assurance, safety & responsible use
- CHAI Responsible AI Guide (RAIG) by Coalition for Health AI — Primary playbook for ethical and quality-assured AI deployment.
- Joint Commission and CHAI Responsible Use of AI Guidance by Joint Commission and CHAI — Sept 2025 framework with seven governance domains.
- NIST AI Risk Management Framework by NIST — Voluntary framework for governing, mapping, measuring, and managing AI risk.
- NIST AI RMF Generative AI Profile by NIST — Companion profile addressing GenAI-specific risks.
- ISO/IEC 42001 — AI Management Systems by ISO — First international certifiable standard for AI management systems.
- ISO 14971 — Risk Management for Medical Devices by ISO — Foundational risk management standard for SaMD.
- IEC 62304 — Medical Device Software Lifecycle by IEC — Software lifecycle processes for medical-device software.
- WHO Ethics & Governance of AI for Health by World Health Organization — Six core ethical principles plus 2024 LMM guidance.
- National Academy of Medicine AI Code of Conduct by National Academy of Medicine — Cross-sector AI Code of Conduct shaping U.S. health-AI norms.
- HHS / ASTP-ONC HTI-1 and HTI-2 Final Rules by ASTP / ONC — DSI transparency requirements for ONC-certified EHRs.
- OECD AI Principles by OECD — International principles for trustworthy AI adopted by 40+ countries.
- EQUATOR Network reporting guidelines by EQUATOR Network — Master library of 500+ health research reporting guidelines.
- HITRUST CSF by HITRUST Alliance — Common Security Framework harmonizing HIPAA, NIST, ISO 27001 for U.S. health-tech.
Technical standards & interoperability
- HL7 FHIR by Health Level Seven International — Dominant modern healthcare data exchange standard, mandated for U.S. EHRs.
- SMART on FHIR by SMART Health IT — App platform standard for third-party clinical apps to plug into EHRs.
- USCDI by ASTP / ONC — Standardized data classes that ONC-certified health IT must support.
- TEFCA by Sequoia Project — National-scale health information exchange via QHINs.
- DICOM by MITA — Universal standard for medical image storage and exchange.
- IEEE 11073 Personal Health Devices by IEEE — Standards for communication between personal health devices and wearables.
- SNOMED CT by SNOMED International — Most widely deployed clinical reference terminology globally.
- LOINC by Regenstrief Institute — Universal coding system for laboratory tests and clinical observations.
- OMOP Common Data Model by OHDSI — Standardized data model for federated observational health research at scale.