A popular YouTuber and Instagram influencer with nearly half a million followers has issued a stark warning about the dangers of relying on AI chatbots after ChatGPT allegedly misidentified poison hemlock, a deadly toxic plant, as harmless carrot foliage. The influencer, Kristi, shared screenshots and a video detailing how her friend uploaded photos of a backyard plant to ChatGPT for identification. The AI described the leaves as "very finely divided and feathery," classic features of carrot tops, and insisted the plant was "highly unlikely" to be poison hemlock. It even listed lookalikes such as parsley, coriander, and Queen Anne's lace, and dismissed concerns when a clearer image was provided, citing the absence of smooth, hairless stems and purple blotches, features Kristi pointed out were actually visible in the photos.
Poison hemlock (Conium maculatum) is extremely toxic, according to medical sources such as the Cleveland Clinic. Ingestion can cause systemic poisoning leading to death, and no specific antidote is available. Even skin contact can be dangerous in some cases. Kristi put the stakes bluntly: "You eat it, you die. You touch it, you can die." She highlighted that ChatGPT repeatedly doubled down, asserting with "absolute certainty" that the plant was wild carrot, not hemlock.
In striking contrast, Google Lens correctly identified the same images as hemlock. Remarkably, when her friend started a fresh session with ChatGPT on her phone, the AI this time correctly flagged the plant as poison hemlock. Kristi stressed that her friend, a responsible adult, cross-checked with her, but warned of what might happen if someone less cautious trusted the first response blindly. "What if she wasn't? They would literally be dead," she said.
In her Instagram caption and video, Kristi called out large language models like ChatGPT as “not your friend,” “not to be trusted,” and potentially “awful” enough to cause severe harm. She described the incident as ChatGPT “nearly” killing her best friend by confidently confirming a lethal plant as safe. The post urged users to exercise extreme caution with AI identifications, especially for plants, food, or health-related queries.
The viral clip has reignited debates about AI hallucination risks, the limitations of image recognition in chatbots, and the need for human verification in critical situations.