Articles Posted in Artificial Intelligence


Healthcare providers are starting to see the first claims audits based on analysis and determinations made by artificial intelligence (AI). Although the technology is new, many of the issues remain the same. Especially where the companies that develop AI-based audit tools sell these tools and services to commercial insurance companies, AI-driven audits increasingly resemble audits of Medicare providers and suppliers performed by the Recovery Audit Contractors, or RACs.

RACs are Medicare contractors charged by the Centers for Medicare & Medicaid Services (CMS) with identifying overpayments and underpayments made to providers and facilitating the return of overpayments to the Medicare Trust Fund. RACs accomplish this primarily by conducting audits and issuing repayment demands. RACs differ from other types of Medicare contractors that conduct audits because RACs are paid on a contingency-fee basis. That is, RACs receive a percentage of any funds they extract from providers, giving them a significant incentive to deny claims and demand repayment even where there is no clinical or legal basis to do so.

Similarly, because few insurance carriers have developed sophisticated AI tools in house, they often contract with outside technology companies to provide the AI audit tools, and often to conduct the audits themselves. These outside contractors are motivated to deny claims and identify alleged overpayments in order to retain the insurance carrier's business. This motivation is further enhanced where the outside contractor is paid a percentage of the alleged overpayments its AI tool identifies. Providers should therefore scrutinize any such audit findings as carefully as they would the findings of a similarly motivated RAC.


In the Medicare Advantage (MA) program, overseen by the Centers for Medicare & Medicaid Services (CMS), Medicare Advantage Organizations (MAOs) – typically private insurers – receive monthly payments from CMS. The MAOs then contract with healthcare providers and suppliers to provide services pursuant to multiple MA plans offered by the MAOs. With the rise of artificial intelligence (AI), many providers have expressed concern that MAOs are using AI tools to review claims, make coverage determinations, deny claims, and conduct audits with little to no human oversight.

CMS recently released its 2024 MA Final Rule and a set of accompanying frequently asked questions (FAQs). In these, CMS cautioned MAOs that, while an algorithm or software tool can be used to assist MA plans in making coverage determinations, the MAO remains responsible for ensuring that the algorithm or AI complies with all applicable rules governing how coverage determinations by MA organizations are made. CMS emphasized that this includes the requirement that MAOs make medical necessity determinations based on the circumstances of each specific individual, including the patient’s medical history, physician recommendations, and clinical notes, and in accordance with all fully established Traditional Medicare coverage criteria (including criteria established in applicable Medicare statutes, regulations, National Coverage Determinations (NCDs), or Local Coverage Determinations (LCDs)) or with publicly accessible internal coverage criteria based on current evidence in widely used treatment guidelines or clinical literature.

CMS gave several examples of non-compliant use of AI by MAOs, mostly stemming from a lack of human, clinical oversight of the AI-driven tool and the implementation of its outputs. In an example involving a decision to terminate post-acute care services, CMS noted that an algorithm or software tool can be used to assist providers or MA plans in predicting a potential length of stay, but that prediction alone cannot serve as the basis for terminating post-acute care services. For those services to be terminated in accordance with MA regulations, the patient must no longer meet the level-of-care requirements for the post-acute care at the time the services are terminated, which can only be determined by re-assessing the individual patient’s condition prior to issuing the notice of termination of services. Additionally, for inpatient admissions, CMS noted that algorithms or AI alone cannot be used as the basis to deny an admission or downgrade it to an observation stay; the patient’s individual circumstances must be considered against the applicable permissible coverage criteria.


The integration of Artificial Intelligence (AI) into healthcare represents a frontier of innovation, offering transformative potential for patient care, diagnostic accuracy, and operational efficiency. However, as healthcare providers and technology companies rapidly adopt AI solutions, navigating the complex landscape of regulatory compliance becomes increasingly crucial. This landscape is defined by a focus on patient safety, data privacy, and ethical standards, making regulatory compliance as critical as the technological advancements themselves.

At the heart of healthcare regulation is the imperative to ensure patient safety and the efficacy of care. Regulatory bodies like the U.S. Food and Drug Administration (FDA) have been active in establishing frameworks for the approval and use of AI-driven medical devices and software. While the FDA generally has authority to regulate medical devices, there are important limits on that authority. Users of AI tools that assist practitioners in analyzing a patient’s symptomology and rendering a diagnosis may want to explore whether the tool constitutes a Clinical Decision Support (CDS) tool; CDS tools are generally beyond the scope of FDA regulation.

While AI can provide powerful tools to assist licensed healthcare practitioners, there may be significant implications where an AI tool attempts to replace a licensed healthcare practitioner. These implications include both ethical considerations for the licensed practitioner and compliance considerations for the unlicensed user of an AI-driven tool. Every state issues licenses to practice within a defined scope of practice and limits conduct within that scope to license holders. For example, generally only licensed medical doctors may practice medicine. A licensed medical practitioner who allows an AI-driven tool to dictate patient care and fails to exercise independent medical judgment may have violated ethical and legal obligations under their license. On the other hand, the unlicensed user of an AI-driven tool may face accusations of unauthorized practice where the tool performs activities reserved to licensed physicians, nurses, and other licensed professionals.
