Artificial intelligence can be useful in many areas of life, including health, as long as it is treated like a reference book rather than a doctor’s orders. In a world where people already rely on AI for navigation, writing, and research, using it to look up symptoms or understand a possible condition has become increasingly common. To that end, OpenAI has announced a new tool, ChatGPT Health. Used properly, AI can help people ask better questions and find more references. But along with that added capability comes the risk that its suggestions will be mistaken for diagnoses. Would you trust artificial intelligence with your health decisions? How about your private medical records?
ChatGPT and Your Privacy
More than “230 million people globally ask health and wellness-related questions on ChatGPT every week,” according to a press release issued by OpenAI. So the next logical step, it would seem, is to dedicate a section of the AI platform solely to health, right? The service will live in a separate, and reportedly more secure, section of ChatGPT, which the company says will help keep private medical records protected.
“When helpful, ChatGPT may use context from your non-Health chats—like a recent move or lifestyle change—to make a health conversation more relevant,” OpenAI explained in its release. “However, Health information and memories never flow back into your non-Health chats, and conversations outside of Health can’t access files, conversations, or memories created within Health. You can view or delete Health memories at any time within Health or the ‘Personalization’ section of Settings.”
Does this really protect people’s medical history and records? Even if ChatGPT Health doesn’t directly share or sell your information, is it still protected from other entities? Andrew Crawford of the Center for Democracy and Technology released a statement explaining that there is no general-purpose privacy law in the United States, and that HIPAA (the Health Insurance Portability and Accountability Act) only protects data held by certain people and organizations, such as insurance companies and health care providers.
“AI companies, along with the developers and companies behind health and wellness apps, are typically not covered by HIPAA,” Crawford clarified. “The recent announcement by OpenAI introducing ChatGPT Health means that a number of companies not bound by HIPAA’s privacy protections will be collecting, sharing, and using people’s health data. And since it’s up to each company to set the rules for how health data is collected, used, shared, and stored, inadequate data protections and policies can put sensitive health information in real danger.”
What kind of information would people be offering ChatGPT Health? “You can now securely connect medical records and wellness apps—like Apple Health, Function, and MyFitnessPal—so ChatGPT can help you understand recent test results, prepare for appointments with your doctor, get advice on how to approach your diet and workout routine, or understand the tradeoffs of different insurance options based on your healthcare patterns,” the press release promised.
Large Language Models and Health Advice
Large Language Models (LLMs), as defined by IBM, are “a category of deep learning models trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.” In other words, they are trained to respond like humans. Unfortunately, they are not always accurate. Even ChatGPT has a warning: “ChatGPT can make mistakes. Check important info.” Still, a lot of people take its advice as gospel, and that has led to health issues and even death.
Sam Nelson of California was just 18 when he started asking the chatbot for advice on taking drugs. The teen asked how many drugs he could take to go “trippy hard,” and the bot replied, “Hell yes—let’s go full trippy mode,” then instructed him to take twice as much cough syrup to increase the hallucinations. It even recommended playlists for him to enjoy during the high. Tragically, the teen died of an overdose.
It’s not just teens who are susceptible to following the advice of AI. A 60-year-old man consulted the platform for diet recommendations and began following ChatGPT’s suggestions; three months later he was hospitalized with bromism, a form of bromide poisoning reportedly caused by sodium bromide he had purchased online, according to LiveScience.
One of the problems with AI is that it suffers from “hallucinations”: the answers it provides seem real, oftentimes backed up with official-looking sources that turn out to be fake. News4 found that “in some cases, AI’s answers can be deceptive. When News4 asked one chatbot for medical advice, it showed a disclaimer message but then claimed to be a real doctor and provided a California doctor’s license number. That doctor was surprised to hear a bot was using her information.”
A group of physicians decided to test AI chatbots using hundreds of different medical prompts. Although the findings were not published in a medical journal, the doctors shared them online, reporting that up to 43% of the AI responses were problematic and 13% were unsafe.
So far, that hasn’t stopped people from seeking advice. Six in ten adults have used AI for medical advice or guidance, according to the health tech company Tebra, and the number jumps to 75% among Gen Z users. Now that ChatGPT Health is becoming available, those numbers could continue to climb.
AI can give bad health advice, and it can do so convincingly and with access to deeply personal data. A wrong answer from a search engine is one thing; a confident recommendation delivered after reviewing your medical records, diet logs, prescriptions, and lab results is another. When AI gets health information wrong, the consequences are not a typo or a bad restaurant suggestion. They can be deadly. The danger is not a single bad response or a single data breach, but a system that quietly trains people to outsource judgment and trust to a tool that cannot verify truth, feel responsibility, or be held accountable when something goes wrong.






