Anthropic Launches Claude for Healthcare, Days After ChatGPT Health

Anthropic has announced the launch of Claude for Healthcare, a set of HIPAA-ready tools and resources for healthcare providers and consumers.

Written by Jackie Leavitt (Editor at Large)

Reviewed by Aleksander Hougen (Chief Editor)

Anthropic’s announcement of Claude for Healthcare comes days after OpenAI launched ChatGPT Health, a new consumer-focused beta tab that lets users ask wellness questions and centralize their medical records.

The main difference between the two is that ChatGPT Health is focused on the end user, while Claude for Healthcare caters to consumers and also offers resources to healthcare providers and payers.

HIPAA-Ready Connectors and Agent Skills

Because Claude for Healthcare is HIPAA-ready, Claude can now connect to HIPAA-compliant healthcare platforms. These include the Centers for Medicare & Medicaid Services (CMS), the National Provider Identifier (NPI) Registry and the International Classification of Diseases (ICD) for diagnosis and procedure codes.

Anthropic has also added two new “agent skills.” The first is a skill in Fast Healthcare Interoperability Resources (FHIR), the modern international standard for securely exchanging data between healthcare systems.
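
To give a sense of what that standard looks like in practice, here’s a minimal Python sketch of a FHIR “read” request against the public HAPI FHIR test server. The server URL and resource ID are illustrative placeholders, not part of Claude’s actual connector.

```python
import requests

# Minimal illustration of a FHIR "read" interaction: fetch one Patient
# resource as JSON over REST. The base URL points at the public HAPI
# FHIR test server, and the resource ID is a placeholder; neither has
# anything to do with Anthropic's implementation.
BASE_URL = "https://hapi.fhir.org/baseR4"

resp = requests.get(
    f"{BASE_URL}/Patient/example",
    headers={"Accept": "application/fhir+json"},
    timeout=10,
)
resp.raise_for_status()
patient = resp.json()

# Every FHIR resource declares its resourceType; Patient resources
# carry demographics such as name and birthDate.
print(patient["resourceType"], patient.get("birthDate"))
```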

The other skill is in prior authorization, which provides a customizable template for organizations’ policies and work patterns, and helps cross-reference between coverage requirements, patient records, clinical guidelines and appeal documents.

Patient Advice

Like ChatGPT Health, Claude for Healthcare has a patient-centered function that can help users understand their health information in simple language, prepare questions for medical conversations and detect health and fitness patterns.

According to Anthropic’s announcement about Claude for Healthcare, “[t]he aim is to make patients’ conversations with doctors more productive, and to help users stay well-informed about their health.” 

Users need to opt in to enable any access, and Anthropic does not use any health data to train its models.

U.S. users on the Claude Pro and Max plans can give Claude secure access to their health records and lab results. Additionally, Apple Health and Android Health Connect integrations will be available in beta this week through the iOS and Android apps.

Health Hallucinations

Another distinction between Claude for Healthcare and ChatGPT Health is how the tools approach possible hallucinations: instances where AI chatbots generate false or misleading information, often in a confident way.

ChatGPT has recently faced scrutiny over hallucinatory healthcare recommendations it has given users, including a suggestion that one man swap out sodium chloride (table salt) for sodium bromide, a toxic compound that put him in a psychiatric ward for three weeks. ChatGPT is trained using Reinforcement Learning from Human Feedback (RLHF), and its hallucinations don’t seem to be improving with updated models.

By comparison, Anthropic’s governance model is rooted in Constitutional AI (CAI), which uses a second AI model to critique and revise the first model’s responses based on a “constitution”: a public set of rules inspired by the UN Universal Declaration of Human Rights and other ethical guidelines. In essence, Claude’s model double-checks that its responses are ethical and aligned with human values, which is one possible reason for Claude’s higher accuracy and lower rate of hallucinations compared to most other models.
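
In Anthropic’s published work, this critique-and-revise loop is applied during training rather than on every live response. The Python sketch below illustrates the basic idea only; generate() is a hypothetical stand-in for any language model call, and the one-line constitution is purely illustrative.

```python
# Conceptual sketch of a Constitutional AI critique-and-revise pass.
# generate() is a hypothetical placeholder, not Anthropic's API, and
# the constitution here is a made-up one-liner for demonstration.

CONSTITUTION = "Be helpful and honest, and avoid advice that could cause harm."

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError

def constitutional_revision(user_prompt: str) -> str:
    # Step 1: produce an initial draft answer.
    draft = generate(user_prompt)

    # Step 2: have a second pass critique the draft against the constitution.
    critique = generate(
        f"Constitution: {CONSTITUTION}\n"
        f"Response: {draft}\n"
        "Point out any way this response violates the constitution."
    )

    # Step 3: revise the draft to address the critique.
    return generate(
        f"Constitution: {CONSTITUTION}\n"
        f"Response: {draft}\n"
        f"Critique: {critique}\n"
        "Rewrite the response so it follows the constitution."
    )
```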

Even though its hallucination rates may be lower than those of other models, Claude still hallucinates, and that includes hallucinating medical advice. However, according to Anthropic’s announcement, the healthcare tool “is designed to include contextual disclaimers, acknowledge its uncertainty, and direct users to healthcare professionals for personalized guidance.”

My Take

Anthropic’s and OpenAI’s approaches to healthcare tools reflect their different markets: Anthropic has positioned itself more for professional use, while OpenAI is targeting everyday users. With that in mind, it’s not surprising that Anthropic has included more features for integrating with HIPAA-compliant healthcare systems. And with Claude’s Constitutional AI model and lower hallucination rates, it feels like the safer option for understanding your health, though hallucinations still pose real risks for users.

Nevertheless, if over 230 million people around the world ask ChatGPT for medical advice every week (and other chatbots surely field healthcare questions, too), there’s obviously a deep human need for tools like ChatGPT Health and Claude for Healthcare. What’s important is that these tools offer strong security and privacy, along with clear warnings about hallucinations that remind users to still seek professional medical advice.

What do you think of the new wave of healthcare chatbot tools? If you found this article helpful, you might be interested in receiving our newsletters; you can sign up below.
