AI in Benefits: Managing the Risk to Get to the Rewards

Artificial intelligence (AI) is influencing more of our lives every day. AI selects the articles that show up in our newsfeed; it helps run our homes and drive our cars. And now it’s making a difference in employer-sponsored health benefits.

Most employers share the common goal of providing employees with benefits that keep them healthy and productive. AI is already helping to save costs and add value at many benefit touchpoints – from open enrollment to HSA management to health care delivery – and it has the potential to do much more. Accenture estimates that AI applications in healthcare will save the US healthcare economy approximately $150 billion by 2026. We see big opportunities for health AI in four major areas: digital health and telehealth; member engagement; medical diagnosis and procedures; and treatment recommendation and adherence.

But how do we get there? In a webinar last month, representatives of the National Academy of Medicine and the US Government Accountability Office cautioned that we still have some important things to figure out. While AI has huge potential to positively impact healthcare delivery and health benefits, there are serious risks as well. Governance must be in place to ensure the following risks are properly managed:

  • Diversity and bias. AI works with data that has been shaped by humans for years and reflects human biases. Machine-learning programs absorb these biases, and the decisions they ultimately make will reflect them. One way to address this problem is with a more diverse field of AI experts; while more women and people of color have begun working in AI, there is still a long way to go. The data itself is also skewed: we have the most data on those who access care the most, which may widen health disparities, since people of color receive less medical care than white patients. Similarly, there is more data about people who are sick (e.g., ICU patients) than about people who are healthy, which could skew outcomes. Better data is needed to ensure unbiased AI.
  • Ethics. Ethical considerations must be paramount in all healthcare AI initiatives to ensure patient privacy, confidentiality, and safety are not compromised. Importantly, liability for AI mistakes has not yet been established – does it fall on the provider, the developer, or both? – and this open question will hamper trust in AI systems.
  • Transparency. Clinicians must have transparency into the process AI uses to make decisions and recommendations; but because AI-influenced processes are continuously evolving and learning, ongoing education is both necessary and challenging.

When considering an AI program in healthcare delivery, it is important to assess whether it can be robust, private, and fair. There’s a theme here: the risks inherent in AI revolve around data quality and transparency. Different health systems gather data differently, and data transparency is something employers struggle with today. Improving healthcare data will go a long way toward mitigating the risks of AI for employers and health systems.

When considering benefits solutions, employers need to ask vendor partners if and how AI is used, what outcomes are being realized, the quality of the data behind it, and how AI may help the employer and members in the long run. Employers can also analyze their current suite of benefits to see where AI can help fill gaps – perhaps an AI chatbot could improve the enrollment experience and foster better engagement. In these same conversations, employers should ask pointed questions about the three governance topics above – diversity and bias, ethics, and transparency – to ensure they have a well-rounded picture of how AI is influencing their members’ benefits.

by Amber Boehm

Solution Architect, Center for Health Innovation

Register for Mercer US Health News to receive weekly e-mail updates.