In a previous post I talked about the need for fact checking when using AI chatbots to research or write articles. Consider this a supplementary post that looks at how a slight variation in a question can change the content, tone and subtext of an answer.
I sort of stumbled into this inquiry after a typo in one of the questions I asked ChatGPT. I was shocked to see that changing just one letter in my query resulted in a very different answer. My experience certainly raised questions about whether chatbots are reliable and consistent in their replies. (Spoiler alert: they are not.)
The Research Question
In my previous article, I asked ChatGPT: “Do omega-3 fatty acids help with eye health?”
The question may seem a tad vague, but I thought it was on par with the type of wording the average person might use. You can read that article for the results of that query.
The answer to that query made reference to dry eyes, which interested me because it’s a condition I have had. I was curious about how ChatGPT would handle a more specific inquiry, so I rephrased my question.
I had intended to ask “Do omega-3 fatty acids help with dry eye syndrome?” but my initial query contained a typo, spelling “syndrome” as “sybdrome.” I didn’t realize this until after I had read the answer. I corrected the typo in a second query. Finally, I asked a slightly less specific question: “Do omega-3 fatty acids help dry eyes?”
To my surprise, I got a different answer each time. The variations were most pronounced in the sections about scientific evidence and recommendations for use, which I have included here. (Highlights are mine and are intended to show some of the more important differences between answers.)
Question 1 (the typo version): Do omega-3 fatty acids help with dry eye sybdrome?

Question 2 (the corrected version): Do omega-3 fatty acids help with dry eye syndrome?

Question 3 (related but different): Do omega-3 fatty acids help dry eyes?

Overall, the answer to the first question (the typo version) seems fairly bullish on omega-3 supplements for dry eyes, going so far as to criticize the DREAM study, which discounted their benefits, and stating that “many eye doctors” recommend omega-3s for this condition. It also includes a line about the possible benefits of omega-3s beyond eye health but does not include a recommendation to consult an eye doctor before using a supplement.
Conversely, the answer to the second question is more skeptical, saying that the DREAM study found “no evidence” that omega-3 supplements help and that other studies were “smaller” and had produced only “anecdotal evidence.” Unlike the previous answer, it recommends consulting an eye doctor before trying supplements.
The answer to the final question falls halfway between the previous two, citing “mixed results” in research and “suggested” as opposed to “proven” benefits from omega-3s. It also includes dosage recommendations, which seem beyond its purview.
Shockingly, the third answer also introduces a warning that people taking blood thinners or with underlying health conditions should talk to their eye doctor before proceeding with omega-3 supplements. (Good advice, considering it had brazenly suggested dosages, something that should really be handled by a medical professional.) The note about other health conditions seems pretty important. Why was it absent from the other answers about omega-3s? It is a significant omission that speaks volumes about the reliability and consistency of chatbots.
The Quality of ChatGPT’s Sources
I repeated the third question (do omega-3 fatty acids help dry eyes?) and added the phrase “include citations” to verify sources. Only one of the first 15 sources listed came from an academic journal, and it noted that the evidence for the effectiveness of omega-3s in treating dry eyes was inconclusive. The other sources were eye care clinics and one 8-year-old article from the Mayo Clinic that has no citations of its own. In short, these were not robust resources.
It seems like chatbots can pull all kinds of information into an answer but lack the ability to evaluate sources and choose the most reliable ones.
The Final Analysis
The variation in language and tone between the answers to the questions above conveys different levels of confidence. One seems to suggest that previous research has been wrong to dismiss omega-3s as a dry eye treatment and indicates that they are highly recommended by eye doctors. Another implies that the evidence of omega-3 benefits is shaky, while the third is the only one to raise a warning about complications for individuals with particular health conditions.
And with ChatGPT unable to gauge the quality of its sources or the accuracy of their claims, it’s not surprising that its answers are less than trustworthy.
The moral of the story: always fact check your AI sources. Better yet, if I may indulge in a bit of shameless self-promotion, consider hiring a human to research and write your content for you.
Photo of books with glasses by Anne Nygård on Unsplash.