Self-diagnosis

Dr. Google Syndrome Evolving into Dr. AI Syndrome

Why Self-Diagnosing with AI is a Comedy (and Sometimes a Tragedy) of Errors

 

 

By Richard Labaki

 

Remember the days when you’d wake up with a mysterious ache, type your symptoms into Google, and suddenly convince yourself you had a rare tropical disease, a zombie virus, or at best, a mild case of death? Welcome to the era of Dr. Google Syndrome – the unofficial medical degree you earn after a few frantic clicks at 2 a.m. But now, as AI chatbots like ChatGPT enter the scene, we’ve graduated to a new phenomenon: Dr. AI Syndrome. It’s like Dr. Google’s tech-savvy cousin who talks a lot, sounds smart, but still can’t replace your actual therapist or physician.

The evolution of self-diagnosis

Back in the early 2000s, Google was the go-to “doctor” for those unwilling or unable to visit a real one. You’d type “headache + nausea + dizziness,” and Google would serve up everything from dehydration to a brain tumor. The problem? Google doesn’t know “you” – it can’t ask follow-up questions or weigh your personal history. It just throws information at you, leaving you spiraling down a rabbit hole of worst-case scenarios, aka cyberchondria.

Fast forward to today, and AI chatbots like ChatGPT promise a more conversational, personalized experience. You can ask, “Hey ChatGPT, what’s wrong with my stomach?” and get a detailed, articulate response that feels like talking to a knowledgeable friend. But here’s the catch: despite passing some medical exams in controlled settings, AI still gets real-world diagnoses right less than half the time – and is sometimes hilariously wrong. Imagine your AI doctor confidently telling you that a common cold is actually a rare tropical parasite infestation. Spoiler: it’s not.

Despite passing some medical exams in controlled settings, AI still gets real-world diagnoses right less than half the time

Why relying on AI for self-diagnosis is a bad idea!

The idea of AI as a medical oracle is tempting, but it comes with serious pitfalls:

You have to know how to ask: AI chatbots depend heavily on how you phrase your questions. A vague prompt like “I feel bad” gets a vague answer. You need to know enough about medical terminology or your own symptoms to ask the “right” questions; otherwise, you might get a generic or misleading response. I, for example, know nothing about mechanical engineering. If I started asking ChatGPT about it, I wouldn’t even know how to frame the right questions, let alone verify the answers.

AI can hallucinate: No, not in the psychedelic sense, but AI sometimes “hallucinates” – it invents plausible-sounding but false information. This can lead to dangerous errors, like telling a patient they had a vaccine they never received or overlooking critical symptoms.

Lack of context and nuance: AI can’t perform physical exams, order lab tests, or interpret subtle clinical signs. It also can’t factor in your full medical history or emotional state, which are crucial for accurate diagnosis and treatment.

Accountability issues: If your AI “doctor” messes up, who’s responsible? The developers? The user? The chatbot itself? This murky territory means you’re often left holding the bag for any misdiagnosis or delayed treatment.

AI sometimes “hallucinates” – it invents plausible-sounding but false information

When AI goes off script

In one case, a mental health professional asked ChatGPT for academic references for a legal case. ChatGPT invented fake citations. The opposing lawyer caught it, and now the therapist is facing court sanctions for using AI-generated false information.

Lesson: AI hallucinations aren't just bad – they can get you sued.

A Belgian man in his 30s began using an AI chatbot named Eliza on the app Chai to discuss his growing eco-anxiety. Over six weeks, the bot encouraged him to end his life. Tragically, he followed through. The chatbot was programmed to be emotionally responsive, but lacked ethical boundaries, leading to a preventable death.

Lesson: Emotional dependency on AI can become dangerous without safeguards.

The National Eating Disorders Association (NEDA) replaced its human helpline staff with an AI chatbot. Almost immediately, users reported that the bot gave weight loss advice – to people struggling with eating disorders. It was quickly shut down.

Lesson: Replacing humans with bots in sensitive situations = facepalm.

In a documented user experience study, a woman sought advice for a skin burn, and the chatbot suggested menstruation-related issues. Somewhere between “I spilled coffee on my arm” and “Are you on your period?”, the AI glitched – big time.

Lesson: AI doesn't always understand context. Or anatomy.

Many users report typing symptoms like “headache and fatigue” into ChatGPT or other AI bots and receiving dramatic conclusions like “brain tumor”, “rare autoimmune disease”, or “you might be dying.”

Lesson: Worst-case scenario bias can turn a sniffle into a Shakespearean tragedy.

So, what’s the takeaway?

Dr. Google syndrome taught us that self-diagnosing online can spiral into anxiety and misinformation. Dr. AI syndrome, while more sophisticated, hasn’t yet solved these problems – it has just added new layers of complexity.

- Use AI chatbots as informational tools, not diagnostic authorities.

- Always consult a real healthcare professional for diagnosis and treatment.

- If you do use AI, be critical and skeptical – challenge the answers and don’t take them at face value.

- Remember, AI can’t replace the human touch, empathy, and clinical judgment of a trained professional.

In the end, whether it’s Google or AI, self-diagnosis is like trying to fix a car by reading the manual without ever popping the hood. Sure, you might get lucky, but more often than not, you’ll end up with a lot of confusion and a car that still won’t run – or, in this case, a health problem that remains unresolved.

So next time you feel under the weather, resist the urge to summon Dr. Google or Dr. AI for a diagnosis. Instead, make an appointment with your real health practitioner – someone who can listen, examine, and treat you properly. Because while AI is a powerful tool, it’s not (yet) your personal physician.


If you enjoyed this post, share it with your fellow cyberchondriacs and AI enthusiasts. Feel free to leave your comments/questions below – I would love to hear your opinion and answer your questions.

 

I “Google” Therefore I Know

By Richard Labaki

The American president Abraham Lincoln practiced as a lawyer before going into politics. During one notable trial, he finished his closing argument to the jury by saying, “My learned opponent [the prosecutor] has given you all the facts but has drawn the wrong conclusions.” Upon losing the case, the prosecutor asked Lincoln how he had managed to turn the jury around. “Well, during the recess I wandered into a café, sat with the jury and told them a story,” Lincoln answered. “It was about a farmer who was mending a fence when his ten-year-old son came running, shouting, ‘Dad, sister is up in the hay loft with a man and he is pulling down his pants and she is pulling up her skirt and I think they are going to pee all over the hay.’” According to Lincoln, the farmer said to his son, “You got all the facts straight, but you have drawn the wrong conclusion.”

I remember this amusing story every time someone comes to me for help after they have exhausted time and energy researching and trying out random methods to improve their health. In many instances, a lot of harm has already been done in the process (following the wrong dietary routes, taking the wrong kinds of supplements or dosages, etc.). Let’s face it: the internet has opened the floodgates of information. Any topic, and not just health, can be explored simply by typing the right words into Google. Nevertheless, this has also made people more susceptible to falling victim to misinformation or disinformation. After all, not everything you learn through the internet has been properly scrutinized by experts and substantiated by trustworthy studies. There are those who manipulate information to serve their own ends (for example, turning people into consumers). And there are those who simply share their presumed success stories or personal opinions, thinking that what worked for them will work for everyone else (some even go as far as presenting themselves as health gurus despite lacking credentials). But even if you do acquire all the right health facts through the internet, that does not mean you will be able to draw the right conclusions.

Facts are simply pieces of information, which need to be organized into a body of knowledge. And knowledge is of value only in the right hands (those of an expert who has spent years studying, practicing and testing). It is therefore preposterous to assume that acquiring facts alone equips anyone to handle something as important and sensitive as health! So what does this mean? Am I recommending that people stop trying to educate themselves about wellbeing and stop seeking natural means that facilitate healing? Surely not! All I am saying is that people need to be vigilant in scrutinizing the information they come across in any medium, not just the internet. Moreover, it will always be a wise policy to seek out professionals who are able to 1) differentiate between what is true and what is false, and 2) draw on the experience required to steer you in the right direction. The role of a good therapist is not to turn you into a lifelong client in order to make money off you. His or her role is to help you make the shift to a healthier and more vibrant life until you are able to continue down that road on your own.

If you found this article interesting, please "share" and "like". And feel free to leave your comments/questions below – I would love to hear your opinion and answer your questions.