Not All Answers Are Helpful: How AI Can Reinforce OCD Reassurance-Seeking

As a therapist specializing in OCD and anxiety, and as someone who has battled these struggles personally, I’ve noticed a new trend in both my practice and my own experience. We used to warn clients: “Don’t Google your symptoms; WebMD will always convince you you’re dying!” People recognized the trap and, over time, many learned to break those cycles.

Now, however, the landscape has shifted. We put an incredible amount of trust in ChatGPT and other AI tools. We know we can ask highly specific questions and receive equally specific answers—all without “bothering” anyone else. At first glance, this seems harmless, maybe even useful. You ask a chatbot a question, it gives you a detailed response, and the spike of anxiety eases. But the relief is temporary. Doubt soon creeps back in, and the cycle begins again.

OCD thrives on uncertainty, and one of its most common compulsions is reassurance-seeking. In the past, that often meant asking loved ones the same question repeatedly or endlessly Googling symptoms and “what ifs.” With AI, reassurance is instant, polished, and seemingly limitless. Yet, as with all compulsions, what feels soothing in the moment ultimately strengthens the obsession over time.

A Personal Example

A couple of months ago, I noticed a small bug crawling across my bedspread. One of my biggest contamination fears has always been insects, so to put it lightly, I was not thrilled to have a new “guest.” Hoping to ease my fear, I took a photo of the bug and sent it to ChatGPT to identify. To my horror, it responded: “That looks like a bed bug.”

Already having a rough day, I immediately launched into a panic attack. My mind spiraled: “How could this be a bed bug? Someone would have had to bring it into my home, and I haven’t had houseguests in ages!” Eventually, I asked my actual exterminator, who calmly told me it was just a harmless beetle. Not only had ChatGPT triggered a full-blown spiral; it was also wrong.

What I See in the Therapy Room

One client, let’s call her Emily, struggles with health anxiety. Whenever she feels a twinge in her chest or a lingering headache, she turns to AI: “Could this mean I have cancer? What are the signs of a brain tumor?” The chatbot delivers a calm, medically informed response. But instead of easing her worries, it fuels her urge to check again the next time she notices a new sensation. What started as one daily question quickly escalated into dozens. Her reassurance-seeking, once limited to Google, now has a more polished, always-available source.

Another client, David, lives with contamination-related OCD. He worries about germs on doorknobs, money, even food packaging. Recently, he admitted he uses AI late at night to ask things like, “Can you get salmonella from touching a grocery bag?” or “What happens if I don’t wash my hands after touching a railing?” The chatbot typically reassures him that the risk is low. But for David, that comfort wears off within minutes. Instead of learning to tolerate uncertainty, he’s reinforcing the belief that every intrusive thought requires an answer.

The Gap AI Can’t Close

What Emily and David both discovered is that no matter how advanced AI becomes, it can never provide an answer with 100% certainty. A chatbot might say, “The risk is very low” or “It’s unlikely you’re seriously ill.” But for someone with OCD, “low risk” is not the same as no risk. That small sliver of uncertainty, the tiny gap AI can’t close, keeps the obsession alive. Emily still wonders, “But what if my symptoms are different?” David still asks, “But what if this time is the exception?” And so the cycle of questioning and reassurance-seeking continues, growing stronger with every attempt to find certainty that doesn’t exist.

Why AI Feeds the Cycle

When reassurance feels good for a moment, the brain learns to keep seeking it. Over time, the threshold for feeling “safe enough” rises, and the compulsions intensify. What starts as one late-night question can quickly snowball into hours of spiraling. Exposure and Response Prevention (ERP), the gold-standard treatment for OCD, teaches us to resist the urge to reassure and instead sit with uncertainty. AI, for all its benefits, makes reassurance-seeking easier and sneakier. It feels private, accessible, and judgment-free, which is exactly why it’s so dangerous for people with OCD or anxiety.

Moving Forward

If any of this sounds familiar, know that you are far from alone. It’s tempting to seek reassurance in any of its forms because of the temporary relief it gives us, but long-term healing comes from resisting that urge. Instead of typing your next worry into ChatGPT, pause and notice the urge itself. Remind yourself: “I don’t need to answer this thought right now.”

For those in treatment, talk to your therapist about AI reassurance-seeking. If you’re navigating OCD without support, know that help is available and you don’t have to do this alone.

AI is powerful, but for people with OCD and anxiety, not all answers are helpful. Sometimes, the most healing thing you can do is stop searching for certainty and start practicing acceptance of the unknown.

The quality of your life is in direct proportion to the amount of uncertainty you can comfortably deal with.
— Tony Robbins

Gabrielle Eichler, RMHCI