Should you ask ChatGPT for medical advice?
Physician and AI researcher Adam Rodman says AI can be helpful, and offers tips on how and when to use it safely

Physicians noticed something unusual in the late 2000s: Patients were coming to appointments armed with sometimes-dubious medical information they had gleaned online from “Dr. Google,” according to Adam Rodman, an internist and AI researcher.
About 68 percent of adults have at some point turned to a search engine for medical advice. But Dr. Google now has a competitor. About 32 percent of adults, approximately half of those who sought advice online, have turned to AI chatbots for help.
Rodman thinks such resources, used appropriately, are an overall net good. In op-eds and online courses, Rodman, a Harvard Medical School assistant professor of medicine at Beth Israel Deaconess Medical Center, has shared advice for how to best employ Dr. Chat.
In this interview, edited for length and clarity, Rodman offers a stoplight system to figure out when it’s safe to ask a chatbot, and when you should really just ask your doctor.
How were doctors thinking about online medical information before the age of AI?
The early literature refers to this as the internet-informed patient. In the early 2000s, doctors noticed people would come into their appointments with articles they found online, but it was still only among really tech-savvy people. It certainly wasn’t a normal interaction.
Then in the late 2000s, search engines started to take advantage of neural network technology, and they were able to serve up more relevant health information. They figure out what you’re going to want to read next, and they give it to you.
That’s when we first got the phrase “Dr. Google,” often used as a pejorative, from doctors who saw patients coming in with a level of confidence that may or may not have been earned.
Of course, there are patients who know a lot about their health and are very well informed, but we also saw a lot of patients who were misinformed.
That’s where we get this concept of cyberchondria. It’s related to hypochondria: this idea that search engines can drive people to more and more extreme places until you go from googling your headache to reading about glioblastoma multiforme — and research has shown that it’s a real phenomenon.
We all have understandable and reasonable anxieties about our health. Seeking out information is something fundamental about humanity.
The problem is when that starts to interact with these recommendation algorithms that are optimized for engagement, and for showing you what you want to see even if it’s incorrect.
Now let’s bring AI into the mix. Is it any different to ask a chatbot about symptoms versus googling them?
It’s nuanced. In one sense, LLMs do exactly what Google does: They serve you up the things you unconsciously want to hear, even if those things make you anxious.
On the other hand, unlike with a Google search, some people feel they have a relationship with an LLM. LLMs speak with extreme authority and confidence no matter what they say. The extent to which that could make cyberchondria worse is under-explored.
Both Google and AI companies are now very aware that people are using their tools for health information and are trying to build in safety mechanisms. The bots will tell you to go to the emergency room or call your doctor, those sorts of things.
But at least theoretically, language models are much, much better than Google, especially the more modern reasoning models, when it comes to identifying medical conditions.
What do you mean by “theoretically”?
There was a very good paper earlier this year from a researcher named Andrew Bean who tested several LLMs and found they performed very well at identifying medical conditions on their own, but did much worse in conversation with real people.
What that shows is that user interaction matters a lot. The way people interact with the model, the clarity of their questions, matters. Those psychological phenomena we talked about are present in ways that are really hard to mitigate.
What kinds of health questions are safe to ask an LLM, and what kinds aren’t?
I would divide it into a stoplight system. Red: never safe. Yellow: sometimes safe. Green: almost always safe.
In the green light are general questions about health, where the quality of the information is not particularly context-dependent.
For example, “I have diabetes and my doctor has told me I need to eat a diabetic diet. Here are some things I like to eat. Can you help me build a diabetic meal plan?” Or “I’m trying to start a new exercise program, can you help?” Or “My doctor just prescribed me amlodipine. What are some common side effects?”
In the yellow light are questions where you want to involve a doctor in the loop. For example, prepping for your visits, understanding a visit after it happens, or understanding a test result that doesn’t entirely make sense to you.
Let’s say you just left your doctor’s visit and you’re a little bit confused about what’s going on. Log in to your patient portal, copy that note, take out your identifying information, plug it into an LLM, and then have a discussion.
With these kinds of questions, you really need to make sure you’re putting in enough health context to help the LLM give you a good response. So you need some understanding of prompt engineering to get information that’s helpful for you.
In the red light — and I should stress that this might change in the future as technology develops — are things like asking an LLM how to manage a condition, if your doctor is prescribing the right medication, or why you were prescribed drug X over drug Y. These are highly contextual questions that the models aren’t trained for.
In short, the best way to use these tools right now is not as a replacement for medical advice, but as a way to prepare for a visit or to deepen your understanding after one.
Are there privacy concerns when it comes to sharing health information with AI?
It’s not inherently riskier to share data with an AI firm than with a search engine. That said, the major companies — OpenAI, Anthropic, Microsoft — are now developing health functions specifically so that people can put in their medical information directly, and that’s quite new.
Additionally, studies have shown people do share more information with an LLM than they would with a search engine. So from a technology perspective, it’s no different, but in practice it is a much bigger security concern.