OpenAI has introduced a new feature in the United States called ChatGPT Health, allowing users to upload their medical records and app data to receive personalized health responses. But while the tech giant promises enhanced privacy and clear boundaries, digital rights advocates warn the move could open the door to serious privacy risks.
The tool is designed to integrate health data from apps like MyFitnessPal, Peloton, and Apple Health, along with formal medical records. By processing this information, ChatGPT Health can provide tailored responses to health-related questions—though OpenAI stresses that it is not a tool for diagnosis or medical treatment.
Privacy Firestorm: Advocates Call for Airtight Safeguards
Andrew Crawford of the Center for Democracy and Technology (CDT) described the launch as a pivotal moment, highlighting the sensitive nature of health data and the importance of strict boundaries between general AI usage and medical records.
“AI health tools can empower patients, but health data is some of the most personal information people share. It must be protected with airtight safeguards,” said Crawford.
Concerns are heightened by OpenAI's exploration of advertising as a revenue stream, which raises fears of data misuse or cross-referencing between health inputs and general chatbot interactions.
OpenAI responded by stating that all ChatGPT Health conversations will be stored separately from regular chatbot chats and excluded from model training. The firm claims it has introduced enhanced privacy protections to secure user data.
A Rising Trend: AI as a Health Companion
According to OpenAI, more than 230 million people use ChatGPT weekly to ask questions about health and wellbeing. The company says its new feature is meant to support medical care, not replace it.
In a blog post, OpenAI noted that ChatGPT Health is launching first to a small group of early users in the U.S., with a waitlist available for others. The service is not yet available in the UK, Switzerland, or the European Economic Area (EEA) due to stricter data protection laws, including the GDPR.
This cautious rollout highlights the complex regulatory landscape AI companies must navigate when handling health data, especially in regions with strong consumer data protections.
A ‘Watershed Moment’ in Health AI?
Industry analysts are calling this launch a significant shift in the role of AI in healthcare. Max Sinclair, CEO of AI marketing platform Azoma, called ChatGPT Health a “watershed moment” that could reshape how people approach personal health management and influence buying behaviors related to health and wellness.
“OpenAI is positioning ChatGPT as a trusted health companion. If done right, this could be a game-changer—not just in healthcare, but in retail and consumer behavior,” Sinclair said.
Sinclair also noted that this move could give OpenAI a competitive edge in the AI race, especially as Google’s Gemini and other rivals push deeper into healthcare applications.
Regulatory Questions Still Loom
Despite the optimistic outlook from the tech sector, privacy advocates remain concerned.
“Since U.S. companies set their own rules on how health data is collected and used, we risk exposing sensitive information to inadequate safeguards,” Crawford said.
The concern is that without clear federal regulation in the U.S., health data could be collected, shared, or even sold without users’ informed consent.
What’s Next?
While ChatGPT Health is currently available only in the U.S., its launch signals a broader shift toward AI-powered personal health tools, a space likely to expand in Africa, Asia, and Latin America as digital infrastructure improves.
But experts urge caution: clear regulation, data protection, and ethical safeguards must be in place before these tools are widely adopted.