
WHO Publishes AI Considerations in Healthcare

In light of rapid technological advancement and the increasing use of artificial intelligence (AI), the World Health Organization (WHO) has issued a publication outlining key regulatory considerations on AI for healthcare. The publication highlights emerging best practices for the development and use of AI in healthcare and provides an overview of regulatory considerations across six general topic areas, discussed below.

As the publication explains in greater detail, the WHO recommends that stakeholders take the following considerations into account as they continue to develop frameworks and best practices for the use of AI in healthcare:

  1. Documentation and transparency: Developers should consider pre-specifying and documenting the AI system's intended medical purpose and its development process in a manner that allows the development steps to be traced as appropriate. A risk-based approach should also be considered when determining the level of documentation and record-keeping used for the development and validation of AI systems.
  2. Risk management and AI systems development lifecycle approaches: A total product lifecycle approach should be considered throughout all phases in the life of an AI system. Additionally, it is essential to consider a risk management approach that addresses risks associated with AI systems, such as cybersecurity threats and vulnerabilities, underfitting, and algorithmic bias.
  3. Intended use, and analytical and clinical validation: Developers should first consider providing transparent documentation of the AI system's intended use. A key consideration is demonstrating performance beyond the training and testing data through external analytical validation on an independent dataset. This external validation dataset should be representative of the population and setting in which the AI system is intended to be deployed and should be independent of the data used to develop the AI model during training and testing (a minimal illustration appears after this list). It is also important to consider a graded set of clinical validation requirements based on risk. At later stages, a period of more intense post-deployment monitoring should be considered through post-market surveillance and market surveillance of AI systems.
  4. Data quality: Developers should consider whether available data are of sufficient quality to support the development of the AI system to achieve the intended purpose. Careful design or prompt troubleshooting can help identify data quality issues early and can prevent or mitigate possible resulting harm. Stakeholders should also consider mitigating data quality issues and the associated risks that arise with healthcare data, as well as continue to work to create data ecosystems to facilitate the sharing of high-quality data sources.
  5. Privacy and data protection: Privacy and data protection should be considered during the design and deployment of AI systems. Early in the development process, developers should consider gaining a broad understanding of applicable data protection regulations and privacy laws and should ensure that the development process meets or exceeds such legal requirements. It is also important to consider implementing a compliance program that addresses risks and ensures that the privacy and cybersecurity practices take into account potential harm as well as the enforcement environment.
  6. Engagement and collaboration: During development of an AI innovation and deployment roadmap, it is important to consider developing accessible and informative platforms that facilitate engagement and collaboration among key stakeholders, where applicable and appropriate. Streamlining the regulatory oversight process for AI through such engagement and collaboration should also be considered in order to accelerate practice-changing advances in AI.
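
The third and fourth considerations, external analytical validation and data quality, lend themselves to a concrete illustration. The following is a minimal sketch, assuming scikit-learn and pandas and two illustrative CSV files: a development cohort used for training and internal testing, and an independently collected cohort representative of the intended deployment setting. All file, column, and variable names are hypothetical and are not drawn from the WHO publication.

```python
# Hypothetical sketch: basic data quality checks and external analytical
# validation of a model on an independent dataset. Assumes pandas and
# scikit-learn; file and column names are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Development data and an independently collected external cohort (hypothetical files).
dev = pd.read_csv("development_cohort.csv")
external = pd.read_csv("external_site_cohort.csv")

features = ["age", "blood_pressure", "lab_value"]  # illustrative features
target = "adverse_event"                           # illustrative label

# Basic data quality check before any modeling (consideration 4):
# report the fraction of missing values per column in each cohort.
for name, df in [("development", dev), ("external", external)]:
    missing = df[features + [target]].isna().mean()
    print(f"{name} cohort, fraction missing per column:\n{missing}\n")

# Internal train/test split; the external cohort is never used during development.
X_train, X_test, y_train, y_test = train_test_split(
    dev[features], dev[target], test_size=0.2, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Compare internal test performance with performance on the independent
# external cohort (consideration 3).
internal_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
external_auc = roc_auc_score(external[target],
                             model.predict_proba(external[features])[:, 1])
print("internal test AUC:", internal_auc)
print("external cohort AUC:", external_auc)
```

A large gap between internal test performance and performance on the independent external cohort is the kind of evidence that external analytical validation is intended to surface before an AI system is deployed.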

As artificial intelligence applications become more common, there is great potential for AI to rapidly advance research and development in healthcare. However, the growing complexity of the AI landscape will almost certainly draw greater attention to the need for safe and appropriate development and use of AI systems. Providers and suppliers should take steps to understand the regulatory and compliance considerations underlying AI applications.

For over 35 years, Wachler & Associates has represented healthcare providers and suppliers nationwide in a variety of health law matters, and our attorneys can assist providers and suppliers in understanding new developments in healthcare law and regulation. If you or your healthcare entity has any questions pertaining to healthcare compliance, please contact an experienced healthcare attorney at 248-544-0888 or wapc@wachler.com.
