As you know, reviews on an Amazon page can make or break a product. That’s why the company insists that more than 99 percent of its reviews are legitimate: written by real shoppers who aren’t paid for them.
But a Washington Post examination by Elizabeth Dwoskin and Craig Timberg found that in certain product categories, a majority of reviews bear the hallmarks of fraud, such as repetitive wording that reviewers probably cut and pasted in. In other words, fake reviews.
Amazingly enough, many of these fraudulent reviews originate on Facebook, where sellers recruit shoppers through dozens of groups with names such as Amazon Review Club and Amazon Reviewers Group. Shoppers are asked to give glowing feedback in exchange for money or other compensation.
The Law of Unintended Consequences
As I’ve been reporting (Amazon Steps Up Its Antifraud Efforts, and Amazon Rewrites Review Policy), Amazon has banned paying for reviews because consumers distrust paid reviews. Every once in a while, Amazon purges shoppers who break its policies. But the ban has merely pushed underground an activity that used to take place in the open.
There, an economy of paid reviews has flourished. Merchants pledge to drop reimbursements into a reviewer’s PayPal account within minutes of posting comments for items on Amazon, often sweetening the deal with a $5 commission or a $10 Amazon gift card. Prodded by The Washington Post, Facebook deleted more than a dozen such groups this month alone and Amazon kicked Atgoin, a five-star seller, off its site.
These days it is very hard to sell anything on Amazon if you play fairly. If you want your product to be competitive, you have to somehow manufacture reviews.
A Devastating Practice
Sellers say the flood of inauthentic reviews makes it harder for them to compete legitimately. “It’s devastating, devastating,” said the owner of a baby-products company. He said his product rankings have plummeted in the past year and a half, attributing it to competitors using paid reviews. “We just can’t keep up.” And customers are no less angry. An Amazon Prime customer says he no longer trusts five-star reviews. He sees them as a marker of likely fraud rather than excellence.
Suspicious or fraudulent reviews are crowding out authentic ones in some categories. ReviewMeta, a company that analyzes Amazon reviews, looks for red flags such as an unusually large number of reviews posted over a short period of time or “sock puppet” reviewers who appear to have cut and pasted stock language.
For example, of the almost 50,000 total reviews for the first 10 products listed in an Amazon search for “bluetooth speakers,” two-thirds were problematic, based on calculations using the ReviewMeta tool.
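ReviewMeta’s exact methodology is proprietary, but two of the red flags described above — a burst of reviews in a short window and stock phrasing repeated across different reviews — can be sketched in a few lines. The review data below is hypothetical, purely for illustration:

```python
import re
from collections import Counter
from datetime import date

# Hypothetical review records (date, text) -- illustrative data only.
reviews = [
    (date(2017, 12, 1), "Great sound, comfortable to wear."),
    (date(2017, 12, 2), "Comfortable to wear, great for the gym."),
    (date(2017, 12, 3), "Comfortable to wear. Five stars!"),
    (date(2017, 12, 20), "Battery died after a week."),
]

def spike_ratio(reviews, window_days=5):
    """Fraction of all reviews that fall inside the busiest window.

    A value near 1.0 means nearly every review arrived in one short
    burst -- the kind of spike flagged as suspicious.
    """
    dates = sorted(d for d, _ in reviews)
    busiest = max(
        sum(1 for d in dates if 0 <= (d - start).days < window_days)
        for start in dates
    )
    return busiest / len(dates)

def repeated_phrases(reviews, n=3, min_reviews=2):
    """Word n-grams that recur across different reviews (stock language)."""
    counts = Counter()
    for _, text in reviews:
        words = re.findall(r"[a-z']+", text.lower())
        grams = {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
        counts.update(grams)  # each n-gram counted once per review
    return sorted(" ".join(g) for g, c in counts.items() if c >= min_reviews)

print(spike_ratio(reviews))       # 0.75: three of four reviews land in one 5-day span
print(repeated_phrases(reviews))  # ['comfortable to wear']
```

A real detector would weigh many more signals (reviewer history, verified-purchase status, rating distribution), but even these two simple checks would light up on a product that gained hundreds of near-identical five-star reviews in a few days.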
Amazon aggressively polices its platform for incentivized reviews and has filed five lawsuits since 2015 against people who write paid reviews and companies that solicit them:
“We know that millions of customers make informed buying decisions every day using Customer Reviews. We take this responsibility very seriously and defend the integrity of reviews by taking aggressive action to protect customers from dishonest parties who are abusing the reviews system. . . . We take forceful action against both reviewers and sellers by suppressing reviews that violate our guidelines and suspending, banning or pursuing legal action against these bad actors.”
Facebook vs. Amazon
Problems with the authenticity of Amazon reviews come at a moment of broad public concern over the accuracy of information on online platforms. The spread of Russian disinformation and hoaxes on YouTube and Facebook has raised questions about the role of technology platforms in displaying and amplifying falsehoods, contributing to a feeling of distrust and social division.
Against this backdrop, a Facebook spokeswoman said:
“We are committed to increasing the good and minimizing the bad across Facebook. . . . There are many legitimate groups on Facebook related to online commerce, but the groups identified misuse our platform.”
Sellers say that Amazon’s position as the top e-commerce destination has spawned a race to master — and game — the company’s systems. More than half of all online product searches start on Amazon. Landing among the first 10 results on an Amazon search can drive an explosion in sales.
To combat fake reviews, Amazon uses artificial intelligence to analyze the behavior of the “hundreds of thousands” of customers it has banned from leaving reviews, building computer models from that data to predict the techniques fraudsters will use next.
For two decades, Amazon permitted incentivized reviews, as long as reviewers disclosed that they had received a free or discounted product. But it began cracking down on the practice in 2015, acknowledging its struggles to control it.
“Despite substantial efforts to stamp out the practice,” company lawyers wrote in a lawsuit, “an unhealthy ecosystem is developing outside of Amazon to supply inauthentic reviews.”
The Atgoin Effect
Atgoin, an electronics company based in Shenzhen, China, was one seller that leapfrogged to the top of Amazon’s rankings. In November, its $30 headphones had just a handful of reviews. Then, over a five-day period in December, the product received nearly 300 reviews, almost all of which gave five stars.
ReviewMeta found that more than 90 percent of all the reviews for the Atgoin headphones were suspicious. Many featured repeated phrases, such as “I’ll be using this for my gym workout going forward” and “comfortable to wear.” By early February, the Atgoin headphones, which by then had 927 reviews, appeared at the top of non-sponsored search results.
It is unclear how Atgoin, which has now been removed as an Amazon seller, obtained the flood of positive reviews. But in February, there were nearly 100 Facebook groups, split up by geographic region and product category, in which Amazon merchants actively solicited consumers to write paid reviews. One such group had over 50,000 Facebook members until Facebook deleted it. There are also Reddit boards and YouTube tutorials that coach people on how to write reviews. Websites with names such as Slickdeals and JumpSend let merchants give out discounted products, using a loophole to get around Amazon’s ban.
Renee DiResta is policy lead for the nonprofit Data for Democracy, a group of technology researchers dedicated to promoting integrity online. She has conducted research on paid Amazon reviews by joining some of the Facebook groups. Her first act was to write “interested” next to a post describing a pair of Bluetooth headphones for $35.99. Almost immediately, a Facebook user called Li sent her a direct message, calling her “dear” and asking for a link to her Amazon profile. If she reviewed the headphones, Li said, he would reimburse her via her PayPal account.
Within an hour of getting this message, DiResta got a slew of direct messages from other sellers, asking her to review tea lights, containers, shower caddies, badge holders, sanding discs, rain ponchos, pocket-size vanity mirrors, and butterfly knives. The messages came in so quickly, she barely had time to respond.
DiResta spent three months monitoring the groups. She observed the sellers using tactics to avoid detection by Amazon, such as focusing on reviewers with a long history of writing Amazon reviews. The sellers even asked her for screenshots showing when she created her profile.
DiResta found that many of the Facebook accounts had no friends on the social network. Their only Facebook posts were about cheap products, and their profile pictures included stock photos. A reverse image search on Li’s profile photo (a man on a beach) revealed a stock photo called “seaside man” that appeared on various Chinese-language lifestyle websites.