If you, too, are so very tired of not knowing which reviews to trust on the internet, we may eventually get some peace of mind. That’s because the Federal Trade Commission now wants to penalize companies for engaging in shady review practices. Under the terms of a new rule proposed by the FTC, businesses could face fines for buying fake reviews — to the tune of up to $50,000 for each time a customer sees one.
You know those one-line reviews on Amazon listings that don’t quite seem legitimate? Like the ones that rate a product five stars and say something incredibly vague, like “This is such a great item,” without expanding on any specifics? Well, that’s just one type of fake feedback that the FTC wants to crack down on.
The FTC’s proposed rule seeks to ban several different types of disingenuous reviews and would punish not just the companies that use them but also the brokers that traffic in falsified feedback. That includes companies that buy or sell fake reviews, as well as those that buy or sell fake followers or views on social media.
Other notable provisions include a ban on “insider” reviews and testimonials, which would prohibit a company from posting reviews from managers, employees, and even the relatives of workers without proper disclosure. It addresses “review hijacking” as well, a deceptive practice that involves repurposing reviews from other products, something the FTC took action against for the first time this year.
In April, the FTC fined The Bountiful Company, the business behind Nature’s Bounty supplements, $600,000 for allegedly exploiting Amazon’s product variation feature. This feature allows sellers to group different colors, sizes, or flavors of the same item into a single listing that shares the same reviews. However, the FTC claims The Bountiful Company used this feature to lump completely different products into the same listing, with the goal of boosting the reviews of a lower-rated item by grouping it with a higher-rated one.
The FTC also wants to crack down on company-controlled review websites that claim to “provide independent opinions about a category of products or services that includes its own products or services.” For example, that would bar companies from making their own websites — that they claim not to be associated with — to recommend their own products. The FTC’s rule would also fine companies that try to suppress negative reviews through intimidation or other means.
For years, Amazon, Facebook, Google, Yelp, and other online platforms have been attempting to combat fake reviews. But with generative AI becoming more widespread, it’s bound to get worse — and much harder to get under control. The FTC mentions this in its proposal, noting that “the widespread emergence of AI chatbots is likely to make it easier for bad actors to write fake reviews.”
We’re already starting to see AI-generated reviews populating the web. As my colleague James Vincent points out, you can see just how much AI-generated junk is out there by simply Googling “as an AI language model.” That’s the disclosure AI chatbots like ChatGPT spit out when asked for their opinion on certain things, but it can also appear inside spammy content and, sometimes, fake reviews the poster didn’t bother to delete.
“Our proposed rule on fake reviews shows that we’re using all available means to attack deceptive advertising in the digital age,” said Samuel Levine, the FTC’s director of the Bureau of Consumer Protection. “The rule would trigger civil penalties for violators and should help level the playing field for honest companies.”
If the rule goes into effect, it’s still not exactly clear how the FTC plans to track down and penalize the companies that use or sell fake reviews. While the FTC has voted to approve the proposal, it’s now taking public comments that it will review as it moves forward. I’m just hoping it will at least discourage some of the low-effort fakes I’m seeing online — or maybe it’ll just inspire them to get better. Hey, if I’m going to read a fake review, at least make it good.