More healthcare organizations are adopting AI to improve efficiency. AI promises to bring significant benefits, but this rapidly evolving technology also can introduce new exposures for providers. Partnering with knowledgeable wholesale brokers can help agents and their clients better understand the risk and coverage implications of AI.
Artificial intelligence (AI), in the form of computer algorithms that can process vast amounts of data, identify patterns, and execute actions based on pre-set parameters, has existed for decades. AI is the “brain” behind robotic process automation, such as analyzing and collecting data from forms, as well as chatbots that assist website visitors in finding information or completing simple tasks. A new iteration of this technology, which is becoming increasingly popular across many industries, including healthcare, is generative AI. Generative AI, as its name implies, enhances legacy AI with a startling new capability: it can generate content on its own, remarkably quickly. In fields such as healthcare, AI can bring significant benefits to providers and patients, but it also introduces new exposures.
As an industry, healthcare is data-rich and process-intensive, making it a prime target for technologies that add both efficiency and accuracy. As a result, AI has numerous applications in healthcare settings. These include, but are not limited to:
Administrative operations. AI can automate administrative tasks such as appointment scheduling, recordkeeping, billing, and predicting hospital patient flow to better align staffing and other resources.
Drug discovery and development. Analyzing complex biological data, AI can predict the effectiveness of certain compounds and assist in designing clinical trials or monitoring patient responses.
Telemedicine and remote monitoring. AI applications can be programmed to offer remote consultations, monitor patient health, and alert healthcare providers to critical status changes that require follow-up, a hospital visit, or an in-person appointment. Behavioral health support also can be provided via AI tools such as chatbots.
Personalized treatment. Combining a patient’s health records and lifestyle data, AI can suggest personalized treatment plans and enable providers to tailor care to individual patients based on their unique genetics.
Disease diagnosis and health risk prediction. AI-enabled diagnostic tools can analyze radiology, magnetic resonance imaging (MRI), and computed tomography (CT) scans to identify diseases such as cancer or brain and musculoskeletal disorders. AI also can layer in patient data to predict the likelihood that certain conditions may develop.
AI BENEFITS & RISKS
Healthcare organizations vary in their implementation of AI, but many are already deploying it. While still in an early stage, artificial intelligence can be trained to recognize patterns in medical imaging, laboratory results, and patient records that might be difficult for human clinicians to identify, potentially enabling earlier detection of disease. Predictive analytics powered by AI can process enormous data sets to anticipate disease outbreaks, complications in patient cases, and hospital readmissions. Productivity gains that result from AI in administrative tasks could enhance the quality of patient care by freeing up physicians, nurses, and other medical staff, potentially helping to alleviate the serious industry-wide problem of burnout and staffing turnover. At the same time, utilizing AI has some drawbacks and poses risks to healthcare providers, including:
Bias. AI models are only as effective as the quality and quantity of the data on which they are trained. If AI’s human programmers use data that is incomplete or inadequately represents certain patient demographics, for example, AI can produce biased results that perpetuate healthcare inequities. Disparate impact across some groups of patients could trigger claims of medical malpractice and/or violations of federal laws such as the Civil Rights Act and the Americans with Disabilities Act.
Data breach. AI requires access to large volumes of data, much of which can involve protected health information (PHI), which is governed by state and federal data privacy regulations. For example, the Health Insurance Portability and Accountability Act (HIPAA) imposes steep penalties for failure to protect PHI, and a data breach could trigger civil litigation as well. Even though AI itself is widely used in cybersecurity monitoring, the data controls and security protocols for AI-powered devices are still developing.
Inaccuracy. Impressive as AI can be in producing accurate analyses, the technology is not infallible. For reasons not yet fully understood even by AI’s creators, AI tools sometimes deliver inaccurate or outright false results. Such instances are known as “hallucinations.”3
Adverse experience. Patients are accustomed to receiving care from human providers, and AI is obviously not human. The technology cannot relate to patients and their concerns the way a human can, with empathy and intuition. A poor experience with a healthcare provider that involves AI may influence a patient’s decision to continue treatment, seek another provider, or, in the event of an adverse outcome, file a lawsuit.
The tendency of AI, particularly generative AI tools such as ChatGPT, to hallucinate gives many people pause. Several technologists, including Elon Musk, have urged the tech industry to voluntarily postpone further development of generative AI until laws and regulations can be put in place to provide guardrails.4
LIABILITY ACROSS SEVERAL LINES
At the moment, the use of AI in healthcare raises more questions than answers. It remains to be seen how heavily providers will rely on AI tools in making diagnoses and delivering care, and how that reliance will increase or mitigate provider liability. The nature of AI means a claim may touch multiple parties and lines of coverage, a complexity that has yet to be addressed in case law.
For example, if a physician using AI misdiagnoses cancer in a patient, who is responsible for the harm that results? The doctor and nursing staff who interacted with the patient? The developer of the AI technology used in diagnosis? All of them? A situation like this might lead to claims of medical malpractice liability, product liability, and technology errors and omissions liability. In the event of a data breach or the use of sensitive information by AI without consent, cyber liability could also come into play. If AI is widely used in a hospital setting and multiple patients are adversely impacted, a class-action lawsuit against the organization’s directors and officers is also a distinct possibility.
HOW AGENTS CAN HELP
Retail agents can assist healthcare clients that are using or expect to use AI in their operations, without being technology specialists. Retailers can help insureds protect themselves by:
Understand the role of AI in the organization. Retailers should work with their clients to gather as much information as possible about how heavily providers rely on AI and where it is used. Is AI only applied to administrative functions such as scheduling and billing, or is it used in diagnosis? To what degree are clinical staff relying on AI to diagnose and develop treatment plans?
Examine existing coverages in light of AI-related exposures. Healthcare clients may have a variety of exposures depending on their locations, operations, and the services they provide. Retailers should examine existing insurance and risk management programs to determine if these are adequate. When it comes to using AI, a healthcare provider’s risk profile may change.
Seek specialist insight. A wholesale insurance specialist is a valuable ally to retail agents when it comes to understanding and finding appropriate coverage for emerging risks, including AI. Retailers should engage their wholesale partner as early as possible in preparation for renewal discussions and whenever questions arise on how insurance markets are covering evolving risks.
Artificial intelligence may offer significant advantages to healthcare, but it also introduces new risks that require careful consideration to mitigate and manage. Working with an experienced wholesale specialist, retail agents can ensure their healthcare clients obtain the broadest and most appropriate protection for emerging risks such as AI. Reach out to your local CRC Group Producer today to learn how we can help.
- “How technology will shape healthcare in 2023,” Healthcare Dive, January 17, 2023; https://www.healthcaredive.com/news/digital-health-predictions-2023-telehealth-ai-privacy-cybersecurity/638555/
- “Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices,” U.S. Food & Drug Administration; https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
- “What Makes A.I. Chatbots Go Wrong?” New York Times, April 8, 2023; https://www.nytimes.com/2023/03/29/technology/ai-chatbots-hallucinations.html?searchResultPosition=8
- “Elon Musk and others urge AI pause, citing ‘risks to society,’” Reuters, April 5, 2023; https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/
- “Almost Half of Health Systems Use AI for Workforce Issues,” Health IT Analytics, March 16, 2023; https://healthitanalytics.com/news/almost-half-of-health-systems-use-ai-for-workforce-issues