Often my eyes glaze over at AI stories because they’re so full of doom and gloom but provide little tangible proof that anyone is actually using or encountering AI in their daily lives. But that’s all changed with my recent story, about an urgent care clinic using AI to create millions of confusing and inaccurate blog posts about medical conditions—and other random things—to inflate its Google rankings.
This Urgent Care Clinic Is Flooding the Internet With Nonsensical Posts. Is AI to Blame?
Experts warned us that AI would flood the web with useless content, and I got a firsthand look at some of it when I got a text from a friend who works for a prominent politician. That politician kept getting alerts because their name was being mentioned in random blog posts by a place called Nao Medical. My friend reached out to me because he had started looking into the other posts on Nao Medical’s website, and they were BIZARRE. Like ‘Derek Jeter Herpes Tree’ and ‘Where is G: Exploring the Mystery of the Letter G’ and ‘Ginger Nutts’ Christmas Circus: A Magical Holiday Extravaganza’ bizarre. Like an alien had come down from outer space and been instructed to Create Content with no actual understanding of how to do so. So, kind of like how some AI operates.
There was the post ‘Color Guard Instead of Colonoscopy: A Fun an Effective Alternative,’ which advised people who were ‘tired of the traditional colonoscopy procedure’ to try a more engaging way to protect their health: the color guard. In the several paragraphs that followed, it detailed how the color guard, “a performance art that combines elements of dance, flag spinning, and rifle work,” can provide benefits like physical fitness, mental stimulation, and emotional well-being. The post concludes that “participating in color guard can be a fun and effective alternative to traditional colonoscopy procedures,” then says, a sentence later, that color guard “cannot replace regular colonoscopies” for colorectal cancer prevention. My best guess is that the AI confused ‘Color Guard’ with ‘Cologuard,’ an at-home colon cancer screening test, and ran with it.
There are millions more — if you are looking for something amusing, just Google ‘Nao Medical’ and another random word and you will find so many weird posts about things that people are apparently Googling. I searched for Nao Medical and Sky, for instance, and was served a long list of posts about some internet personalities named Valerie and Sky. Don’t worry, Internet, Nao Medical can confirm that Valerie and Sky are not together.
These posts are amusing, sure, but they’re also troubling. That’s in part because the clinic’s strategy of using AI to boost its Google reach appeared to be working: when I published the story, this rinky-dink clinic with lots of terrible Yelp reviews was ranking on the FIRST page of Google search results for the terms “caffeine and autism.” That meant anyone who Googled “caffeine and autism” had a pretty good chance of stumbling across Nao Medical’s AI-generated, fairly unhelpful, and in some cases very inaccurate, content.
My story lists a bunch more examples of bizarro posts, as well as SEO experts’ guesses as to what’s going on. It also includes a quick interview with the president of Nao Medical, who admitted that the business was floundering post-pandemic and that it was trying to find ways to reach new patients, including through telemedicine and operating across state lines.
Some even stranger things happened after the story ran. Some sort of AI news service appears to have copied and rewritten my story; their version seems very clearly written by AI, even though it has a human byline. And some former employees contacted me to say that the president had made them all use ChatGPT before he laid them off.
And then, while I was writing this Substack, I went to Nao Medical’s website and found that the company had posted a response to my story, doubling down on its strategy. Nao, it said, was “changing how people use the Internet” and attempting to democratize information by providing people with quick and clear answers. It claimed to have saved users 21 years’ worth of time in the last month. It admitted that AI helped Nao create content and that it “slipped up on a few of these articles,” but insisted that “we’re not backing down.”
“We sincerely hope,” the post says, “no one is taking medical advice from the Internet.” That post, at least, appears to have been written by a human.
Here’s my story, once again. https://time.com/6302710/nao-medical-google-ai/
Have any other examples of AI flooding the web with useless content? Let me know.
*
Book To Fall Asleep To: The Johnstown Flood by David McCullough. Lots of bad things happen when there’s a lot of income inequality. Want a specific example? Read this book, about how a bunch of rich dudes built a private country retreat atop a dam they didn’t maintain, and the dam broke after heavy rains, killing more than 2,000 people. It includes stories from people who lived through it, among them travelers who were passing through Pennsylvania by train and had the bad luck to be stranded in Johnstown as a river of water and detritus came tumbling down the hill like an avalanche.