Fake online reviews are nothing new, but what is new is a powerful tool, AI, that can write them more convincingly and far more quickly, and it would be short-sighted not to admit the temptation many businesses will face. In 2022 alone, Tripadvisor detected 1.3 million fake reviews and Google identified 115 million.

The importance of online reviews and ratings to business success can't be overstated. Unfortunately, this has incentivized companies of all types and sizes to push for reviews that are anything but genuine. Until recently, most fake reviews were generated by online review farms, often in low-wage countries, paid to improve businesses' online ratings. If you've read any of those reviews, you likely recognized them for what they were, thanks to canned, overused phrases and imperfect English.

But today's AI changes everything with its command of multiple languages, its lightning speed, and its low cost. A Guardian journalist has already demonstrated ChatGPT's ability to crank out differentiated, well-written fake reviews. It may not be long before bad actors use tools like these to publish fake reviews at a rate with which today's detection procedures simply can't keep up.

The Current (and Ever-Evolving) State of AI-Generated Content

Scroll through popular social media outlets, and you're likely to see AI-generated images, some comically obvious and others much harder to spot. In fact, experts have issued warnings such as: "Detecting deepfakes and AI-generated content is an ongoing challenge as the technology continues to evolve. As AI models improve and become more sophisticated, identifying disinformation becomes increasingly complex."

For the purposes of AI's effects on online reviews, let's focus on AI-generated text. AI-generated copy has proliferated since the late 2022 release of ChatGPT, a powerful and commonly used generative AI tool. AI-penned content can already be found in product listings on eCommerce websites, in marketing copy, and as entire eBooks published in Amazon's Kindle store.

But will the public remain comfortable with AI's growing uses? Forbes Advisor published survey results in July 2023 that show mixed public sentiment. On one hand, more than 75% of consumers are concerned about AI's potential to spread misinformation, and a majority aren't completely comfortable with businesses using AI. That said, 65% say they would personally use ChatGPT over search engines to find information online. And as more people interact with AI, it will get even better at appearing to write as a real person, which takes us back to its potential for fake online reviews.

AI’s Infiltration of Online Reviews

As early as April 2023, there were reports of AI-generated reviews on Amazon. How could the world be sure? Well, it's a dead giveaway when a review begins: "As an AI language model…" But we all learn from our mistakes, right?

Plus, as NBC News points out, "AI-generated reviews aren't entirely against Amazon's rules. An Amazon spokesperson said the company allows customers to post AI-generated reviews as long as they are authentic and don't violate policy guidelines." But how can platforms be certain a review is based on an authentic experience? While Amazon allows users to post reviews written with help from AI, it has been cracking down on fake review providers: it has filed a lawsuit and requested assistance from the social media platforms where businesses connect with review brokers, and where those brokers create fake accounts from which to publish fake reviews.

As mainstream review sites strategize, review farms can get ever savvier with the help of AI tools, and we mere mortals may have a harder time detecting the fakes. In fact, one study found that humans can identify fake reviews only 55% of the time. Will that number go down as AI-generated reviews get sneakier?

The Federal Trade Commission’s Response

In 2022, the Federal Trade Commission (FTC) took a stand for online transparency by making an example of companies that cherry-picked reviews (publishing only positive ones), gated reviews (filtering rating results and asking only happy customers to write a review), or encouraged entirely fake reviews. This year, the FTC issued a warning about "the widespread emergence of generative AI, which is likely to make it easier for bad actors to write fake reviews."

To deter companies in its war against fake reviews, the FTC has sought a new weapon for its arsenal: a fine of up to $50,000 per infraction aimed at companies that sell, buy, or promote fake reviews and ratings. The question here is: how easy is it to detect fake reviews, and will FTC enforcement be able to keep up with perpetrators?

Proactive vs. Reactive Approaches: How ClearlyRated Blocks Fake Reviews

To promote online transparency, we don't rely on a reactive detection-and-removal process for fake reviews. While admirable in its aims, a reactive approach just won't cut it, especially with the threat of AI-generated content. Our strategy positions us to be proactive instead, verifying reviews as they are collected and before any are published. With this process in place, we prevent fake reviews from ever being solicited or published in the first place.

From the beginning, we designed our platform to make it extremely difficult for any company to collect fake reviews. Our customer intake process screens for potential bad actors, and our platform only allows reviews from contacts in our customers’ data sets. This means that only real, vetted customers can rate or review companies with a ClearlyRated profile. Not one review or rating can appear on any of our customers’ profiles without the reviewer receiving a personal invitation as a member of their customer data set.
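ClearlyRated's actual system is proprietary, so purely as an illustration, here is a minimal Python sketch (all names hypothetical) of the invitation-gated pattern described above: reviews are accepted only from contacts in a verified customer data set, each redeeming a single-use invitation token.

```python
import secrets


class ReviewCollector:
    """Toy model of invitation-gated review collection: only contacts in
    a verified customer data set receive single-use invitation tokens,
    and only a valid token can submit a review."""

    def __init__(self, customer_contacts):
        # Verified contacts supplied by the business (e.g., email addresses).
        self.contacts = set(customer_contacts)
        self.tokens = {}    # outstanding token -> contact email
        self.reviews = []   # accepted reviews

    def invite(self, email):
        """Issue a single-use token, but only for a known contact."""
        if email not in self.contacts:
            raise ValueError("not in the verified customer data set")
        token = secrets.token_urlsafe(16)
        self.tokens[token] = email
        return token

    def submit(self, token, text):
        """Accept a review only with a valid, unused invitation token."""
        email = self.tokens.pop(token, None)  # single use: consumed on redemption
        if email is None:
            return False  # unknown or already-used token: review rejected
        self.reviews.append({"reviewer": email, "text": text})
        return True
```

In this sketch an unsolicited or forged submission simply has no token to redeem, so it never reaches the published set, which mirrors the idea that no review appears without a personal invitation to a vetted contact.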

Finally, we employ a proprietary system that automatically searches for and flags various signals. This system helps us identify potential issues that might otherwise slip through the cracks, such as companies answering surveys for customers who didn't respond. In fact, we catch two to three rogue actors each year who try to submit fake reviews for their companies, and we have a remediation process that depends on the severity. Our goal has always been to help our customers gain a full understanding of their customer experience (CX) so they can improve it, and to recognize those that earn positive customer feedback. When anyone cheats the system, everyone loses.

Unfortunately, most review platforms don't have a way to verify whether each reviewer actually interacted with the company they're rating. Therefore, it falls to consumers to keep the source in mind when reading reviews. Look into and recognize the differences between review platforms before deciding whether to trust them. You may decide that not all review platforms hold the same weight, and you'll be wiser for it.

While we sincerely hope this isn't the case, it may not be long before having verified ratings, reviews, and testimonials becomes a true differentiator. Why not start promoting that now, on your website, in proposals, and during conversations with your customers? Demonstrate your commitment to transparency and your goal of delivering a great CX. Learn more about how ClearlyRated can help you achieve that goal today.

  • Nathan Goff

    Nathan Goff is the Chief Product Officer (CPO) at ClearlyRated. He manages the product strategy and development, where he gets to channel his passion for growing technology companies through continuous improvement and a focus on client and employee satisfaction.