
AI: The good, the bad, and the ugly in Medicare Advantage

When most of us think about artificial intelligence (AI) in healthcare, we don’t usually think about an algorithm determining whether our 85-year-old grandmother [1] will be discharged from a nursing home or whether our father will be approved for the scan needed to track the progression of his cancer. We can all agree that medical costs are out of control, rising faster than salaries and outpacing retirement savings, and there is no doubt that a shortage of registered nurses, general practitioners, and specialist physicians is driving the development of AI and the use of algorithms to bring operational efficiency and control health care costs.

However, the Center for Medicare Advocacy has become increasingly concerned about AI-powered decision-making tools used by Medicare Advantage (MA) plans [2]. These tools have the potential to be more restrictive than current Medicare coverage guidelines and may not take the full clinical situation of each member into consideration.

The Centers for Medicare and Medicaid Services (CMS) has also begun implementing needed changes to prior authorization. The 2024 Final Rule for Medicare Advantage [3] contains several provisions that have the potential to mitigate some of these concerns, though not all. Starting January 1, 2024, MA plans cannot use Utilization Management (UM) policies for basic or supplemental benefits unless those policies have been reviewed and approved by the plan’s UM Committee. The Final Rule also prohibits plans from denying coverage for Medicare-covered services based on their own criteria if traditional Medicare does not impose the same restrictions. This limits plans’ ability to make “black box” decisions, and plans must make their UM policies available to providers and members.

At the state level, Pennsylvania is considering requirements that insurers disclose how they use AI [4]. In an August 14, 2023, news release, PA Representative Arvind Venkat said Cigna denied large batches of members’ claims without individual review and is an “example of the danger with algorithm-driven health insurance decision making”.

In a July 27, 2023, statement, Cigna said that PxDX, the technology referenced in the ProPublica report, is more than a decade old and does not involve algorithms, artificial intelligence, or machine learning; according to Cigna, the technology is also an industry standard, with similar tools used by other commercial payers and CMS. “It is time to regulate AI in health insurance claims processes that may only accelerate such dangerous abdication of claims review responsibilities,” Mr. Venkat said.

The World Health Organization (WHO) has identified six core principles for the ethics and governance of AI for health:

  1. protect autonomy;

  2. promote human well-being, human safety, and the public interest;

  3. ensure transparency, explainability, and intelligibility;

  4. foster responsibility and accountability;

  5. ensure inclusiveness and equity;

  6. promote AI that is responsive and sustainable.

Vendors promise to decrease costs to MA plans, and risk-sharing agreements are made based upon the ability to control costs through AI. But how do we monitor, regulate, or oversee AI to prevent algorithms from becoming hard and fast rules that impede care for members? How do we ensure that a licensed, appropriately trained clinical professional is reviewing authorization requests, at least before a denial is issued?

An AI algorithm looks at patients as data and makes decisions based upon that data, removing clinical review and human assessment. How often do we see incomplete data, incomplete charts, or missing files and diagnosis codes? Without a person to review the case or the decision, a determination made by AI will only be as accurate as the information it receives. We all know the saying: “garbage in, garbage out.”
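The “garbage in, garbage out” risk can be sketched with a hypothetical rule-based check. Everything here is illustrative, not drawn from any plan’s actual system: the function, the field names, and the use of two sample ICD-10 diagnosis codes are assumptions made purely to show how a missing chart entry and a genuinely disqualifying one can produce the same automated denial.

```python
# Hypothetical sketch: an automated check that treats a missing chart field
# exactly like disqualifying data. A human reviewer would likely flag the
# incomplete record for follow-up rather than deny it outright.

QUALIFYING_CODES = {"C50.911", "C61"}  # illustrative ICD-10 diagnosis codes

def automated_decision(record: dict) -> str:
    """Approve only when a supporting diagnosis code is present and qualifying."""
    # A record whose chart was never fully uploaded has no diagnosis_code key,
    # so record.get() returns None and the request is denied by default.
    if record.get("diagnosis_code") in QUALIFYING_CODES:
        return "approved"
    return "denied"

complete = {"member": "A", "diagnosis_code": "C61"}
incomplete = {"member": "B"}  # diagnosis code missing from the chart

print(automated_decision(complete))    # approved
print(automated_decision(incomplete))  # denied, purely because data is missing
```

The denial of member B is not a clinical judgment at all; it is an artifact of an incomplete input, which is exactly the failure mode a human review step is meant to catch.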

Here are our top five recommendations for organizations considering the use of AI in clinical care:

  1. Ensure processes that keep the human element and clinical expertise in prior authorization decision making. It is best practice to develop performance indicators, monitored internally through departmental leadership and committee structures, to identify changing trends and utilization rates.

  2. Develop internal ethics and policy as an organization to build alignment on the use of technology and tools such as AI. Many organizations have a technology evaluation and oversight policy.

  3. Develop and implement a strong vendor and delegation oversight program to monitor how your members are managed and any associated technology tools. This should be included in the initial delegation audit and reviewed at least annually.

  4. Consider the implications and risks of the use of AI and member outcomes when making business decisions. Ensure there is organizational accountability for the technology. This should be built into any business continuity planning, risk management and cybersecurity programs.

  5. Promote and foster continuous learning about AI and related technology. The more your organization learns and understands the uses, risks, and benefits of AI, the more advanced it becomes.

Unsure where to start?

[1] Casey Ross and Bob Herman, “Denied by AI: How Medicare Advantage plans use algorithms to cut off care for seniors in need.”

[2] C. St. John and E. Krupa, “When Artificial Intelligence in Medicare Advantage Impedes Access to Care: A Case Study.”

[3] 2024 Medicare Advantage and Part D Final Rule (CMS-4201-F).

[4] Rylee Wilson, “Pennsylvania considers requiring insurers to disclose how they use AI.”

