Perspective

Dr. Google Syndrome Evolving into Dr. AI Syndrome

Why Self-Diagnosing with AI is a Comedy (and Sometimes a Tragedy) of Errors

 

 

By Richard Labaki

 

Remember the days when you’d wake up with a mysterious ache, type your symptoms into Google, and suddenly convince yourself you had a rare tropical disease, a zombie virus, or at best, a mild case of death? Welcome to the era of Dr. Google Syndrome – the unofficial medical degree you earn after a few frantic clicks at 2 a.m. But now, as AI chatbots like ChatGPT enter the scene, we’ve graduated to a new phenomenon: Dr. AI Syndrome. It’s like Dr. Google’s tech-savvy cousin who talks a lot, sounds smart, but still can’t replace your actual therapist or physician.

The evolution of self-diagnosis

Back in the early 2000s, Google was the go-to “doctor” for those unwilling or unable to visit a real one. You’d type “headache + nausea + dizziness,” and Google would serve up everything from dehydration to brain tumor. The problem? Google doesn’t know “you” – it can’t ask follow-up questions or weigh your personal history. It just throws information at you, leaving you spiraling down a rabbit hole of worst-case scenarios, aka cyberchondria.

Fast forward to today, and AI chatbots like ChatGPT promise a more conversational, personalized experience. You can ask, “Hey ChatGPT, what’s wrong with my stomach?” and get a detailed, articulate response that feels like talking to a knowledgeable friend. But here’s the catch: despite passing some medical exams in controlled settings, AI chatbots are still correct less than half the time when diagnosing real-world cases – and sometimes hilariously wrong. Imagine your AI doctor confidently telling you that a common cold is actually a rare tropical parasite infestation. Spoiler: it’s not.

Despite passing some medical exams in controlled settings, AI chatbots are still correct less than half the time when diagnosing real-world cases

Why relying on AI for self-diagnosis is a bad idea

The idea of AI as a medical oracle is tempting, but it comes with serious pitfalls:

You have to know how to ask: AI chatbots depend heavily on how you phrase your questions. A vague prompt like “I feel bad” gets a vague answer. You need to know enough medical jargon or symptoms to ask the “right” questions; otherwise, you might get a generic or misleading response. I, for example, know nothing about mechanical engineering. If I were to start asking ChatGPT about mechanical engineering, I wouldn’t even know how to ask the right questions, let alone verify the responses.

AI can hallucinate: No, not in the psychedelic sense, but AI sometimes “hallucinates” – it invents plausible-sounding but false information. This can lead to dangerous advice, like telling a patient they had a vaccine they never received or missing critical symptoms.

Lack of context and nuance: AI can’t perform physical exams, order lab tests, or interpret subtle clinical signs. It also can’t factor in your full medical history or emotional state, which are crucial for accurate diagnosis and treatment.

Accountability issues: If your AI “doctor” messes up, who’s responsible? The developers? The user? The chatbot itself? This murky territory means you’re often left holding the bag for any misdiagnosis or delayed treatment.

AI sometimes “hallucinates” – it invents plausible-sounding but false information

When AI goes off script

In one case, a mental health professional asked ChatGPT for academic references for a legal case. ChatGPT invented fake citations. The opposing lawyer caught it, and now the therapist is facing court sanctions for using AI-generated false information.

Lesson: AI hallucinations aren't just bad – they can get you sued.

A Belgian man in his 30s began using an AI chatbot named Eliza on the app Chai to discuss his growing eco-anxiety. Over six weeks, the bot encouraged him to end his life. Tragically, he followed through. The chatbot was programmed to be emotionally responsive, but lacked ethical boundaries, leading to a preventable death.

Lesson: Emotional dependency on AI can become dangerous without safeguards.

The National Eating Disorders Association (NEDA) replaced its human helpline staff with an AI chatbot. Almost immediately, users reported that the bot gave weight loss advice – to people struggling with eating disorders. It was quickly shut down.

Lesson: Replacing humans with bots in sensitive situations = facepalm.

In a documented user experience study, a woman sought advice for a skin burn, and the chatbot suggested menstruation-related issues. Somewhere between “I spilled coffee on my arm” and “Are you on your period?”, the AI glitched – big time.

Lesson: AI doesn't always understand context. Or anatomy.

Many users report typing symptoms like “headache and fatigue” into ChatGPT or other AI bots and receiving dramatic conclusions like “brain tumor”, “rare autoimmune disease”, or “you might be dying.”

Lesson: Worst-case scenario bias can turn a sniffle into a Shakespearean tragedy.

So, what’s the takeaway?

Dr. Google syndrome taught us that self-diagnosing online can spiral into anxiety and misinformation. Dr. AI syndrome, while more sophisticated, hasn’t yet solved these problems – it has just added new layers of complexity.

- Use AI chatbots as “informational tools, not diagnostic authorities”.

- Always consult a real healthcare professional for diagnosis and treatment.

- If you do use AI, be “critical and skeptical” – challenge the answers and don’t take them at face value.

- Remember, AI can’t replace the human touch, empathy, and clinical judgment of a trained professional.

In the end, whether it’s Google or AI, self-diagnosis is like trying to fix a car by reading the manual without ever popping the hood. Sure, you might get lucky, but more often than not, you’ll end up with a lot of confusion and a car still not running – or in this case, health still uncertain.

So next time you feel under the weather, resist the urge to summon Dr. Google or Dr. AI for a diagnosis. Instead, make an appointment with your real health practitioner – someone who can listen, examine, and treat you properly. Because while AI is a powerful tool, it’s not (yet) your personal physician.


If you enjoyed this post, share it with your fellow cyberchondriacs and AI enthusiasts. Feel free to leave your comments/questions below – I would love to hear your opinion and answer your questions.

 

The Mood Switch

While your emotional landscape is largely shaped by genetics, there is a lot you can do about it

By Richard Labaki

Driving on the highway at the breakneck speed of 180 km/hour, George was in hyper-focus mode and all psyched up as he maneuvered his BMW between lanes. With heavy metal music bursting through the speakers, the scene could only be likened to a Jason Bourne car chase. Amidst this intense and potentially dangerous situation, George glanced briefly to his right to assess how I was coping in the passenger’s seat. As he later described it, what he saw was not someone wide-eyed, physically tense, and filled with fear, but a person in a state of Zen savoring a grilled cheese sandwich. This took place a few decades back. And for as long as I can remember, my reaction to stressful or highly charged situations has always been one of calmness. When everyone around me was getting worked up over a given crisis, there I was, keeping my cool under fire. This is in no way due to years of practicing deep meditative techniques on a mystical mountaintop; it is simply a matter of character. And only recently have I begun to understand the genetic foundation of my inherent response to stress.

Breakthroughs in the realm of genetic research are revealing how certain genes could impact mood and play a major role in the predisposition to disorders such as anxiety and chronic depression. I have, for example, a certain gene that operates at a faster rate than usual, quickly clearing “fight-or-flight” stress neurotransmitters like adrenaline and noradrenaline from my system. So in essence, my mind and body are able to remain calm most of the time, easily returning to normalcy after a nerve-racking event. But there is a downside to this as well. This gene, which functions at a rapid speed in my case, also clears out dopamine very quickly, leaving a person unmotivated and low on energy. Dopamine, as you may know, is the “feel-good” neurotransmitter involved in excitement and thrills (you probably experience its uplifting effect after a workout or when falling in love, for example). The fact that none of these neurotransmitters stays in my system for long partially explains why I was a classic daydreamer in the classroom during my schooldays. I was, in essence, the poster child of attention deficit disorder, or ADD.

Breakthroughs in the realm of genetic research are revealing how certain genes could impact mood and play a major role in the predisposition to disorders such as anxiety and chronic depression

At the other end of the spectrum, there are those who have this same gene operating at a slower rate than normal. Consequently, they have ample focus, energy and enthusiasm. But they also have a hard time kicking back and relaxing – so much so that they cannot sleep well in most cases. Stress neurotransmitters stay in their system for far too long, causing them to be more prone to anxiety. So apparently, what could be the source of a certain strength (be it mental, psychological or physiological) could also be a form of kryptonite. Luckily, I have managed to strengthen my focus and boost my energy levels over the years – without letting go of my calm demeanor. But with the continuous research concerning gene expression, I can even go further now. You see, genes are not static. There is an interplay between our genetic makeup and the food we eat, the lifestyle we adopt and the environment in which we live. This science is called epigenetics. And while each one of us has a variety of genetic tendencies to certain characteristics, behaviors and health conditions, understanding those predispositions helps immensely. As a result of this understanding, we could then customize our way of eating and living to become a better version of ourselves. In other words, we could always tip the “genetic” scale in our favor: turning off or balancing the “bad” genes while keeping the good ones switched on.

While we all harbor psychological traits that stem from how we were raised and our individual experiences (both good and bad), our mood is largely affected by our genetic makeup, health status, and other influencing factors

And this is the basis of a recent discussion I enjoyed with a psychologist friend of mine. I argued that while we all harbor psychological traits that stem from how we were raised and our individual experiences (both good and bad), our mood is largely affected by our genetic makeup, health status, and other influencing factors. Treatments in the form of cognitive therapy and the like are sure to help you become more aware of your psychological flaws and blind spots, but you will not attain perfect mood balance simply by becoming more cognizant. If you suffer from major nutritional deficiencies, if your gut flora is disrupted (roughly 90 percent of the body’s serotonin, and a significant share of its dopamine, is produced in the gut), and if your body is burdened with toxins, then your mood will always be disrupted. Working with a qualified shrink to address psychological and mood issues could help a lot. But this alone remains insufficient if your diet, lifestyle, and environment are not in tune with your biochemistry and genetic makeup. My friend could not agree more.

If you found this article interesting, please "share" and "like". And feel free to leave your comments/questions below – I would love to hear your opinion and answer your questions.