New Framework for Responsible Healthcare AI Use Drafted by CHAI

The Coalition for Health AI has drafted a detailed guide for responsible healthcare AI use, now open for public input.

The Coalition for Health AI (CHAI) has released a draft of the CHAI Assurance Standards Guide, a comprehensive framework designed to ensure the responsible use of artificial intelligence in healthcare. The draft, unveiled on June 26, represents a consensus among diverse stakeholders, including patient advocates, technology developers, clinicians, and data scientists. CHAI is seeking public input to refine the 185-page document before finalization. The guide aims to provide actionable guidance on ethics and quality assurance throughout the AI lifecycle, which it breaks down into six stages: defining the problem and planning, designing the AI system, engineering the AI solution, assessment, piloting, and deployment and monitoring. The framework emphasizes real-world applications and practical concerns, encouraging input from those directly involved in healthcare AI design, development, and deployment. Accompanying checklists offer detailed steps for stakeholders to ensure ethical and quality-oriented AI implementation in healthcare settings.