A few months ago, I stopped in for a quick bite to eat at Dojo, a restaurant in New York City’s Greenwich Village. I had an idea of what I thought of the place. Of course I did — I ate there and experienced it for myself. The food was okay. The service was okay. On average, it was average.
So I went to rate the restaurant on Yelp with a strong idea of the star rating I would give it. I logged in, navigated to the page and clicked the button to write the review. I saw that, immediately to the right of where I would “click to rate,” a Yelp user named Shar H. was waxing poetic about Dojo’s “fresh and amazing, sweet and tart ginger dressing” — right under her bright red five-star rating.
I couldn’t help but be moved. I had thought the place deserved a three, but Shar had a point: As she put it, “the prices here are amazing!”
And I gave the place a four.
As it turns out, my behavior is not uncommon. In fact, this type of social influence is dramatically biasing online ratings — one of the most trusted sources of information guiding e-commerce decisions.
An Example of Social Influence
The Problem: Our Herd Instincts
In the digital age, we are inundated by other people’s opinions. We browse books on Amazon with awareness of how other customers liked (or disliked) a particular tome. On Expedia, we compare hotels based on user ratings. On YouTube, we can check out a video’s thumbs-up/thumbs-down score to help determine if it’s worth our time. We may even make serious decisions about medical professionals based in part on the feedback of prior patients.
For the most part, we have faith in these ratings and view them as trustworthy. A 2012 Nielsen report surveying more than 28,000 Internet users in 56 countries found that online consumer reviews are the second most-trusted source of brand information (after recommendations from friends and family).
1. “Nielsen: Global Consumers’ Trust in ‘Earned’ Advertising Grows in Importance,” April 10, 2012, www.nielsen.com.
2. S. Bikhchandani, I. Welch and D.A. Hirshleifer, “A Theory of Fads, Fashion, Custom and Cultural Change as Informational Cascades,” Journal of Political Economy 100, no. 5 (October 1992): 992-1026; and M.J. Salganik, P.S. Dodds and D.J. Watts, “Experimental Study of Inequality and Unpredictability in an Artificial Cultural Market,” Science 311, no. 5762 (February 10, 2006): 854-856.
3. S. Gordon, “Call in the Nerds — Finance Is No Place for Extroverts,” Financial Times, April 24, 2013.
4. S. Aral and D. Walker, “Identifying Influential and Susceptible Members of Social Networks,” Science 337, no. 6092 (July 20, 2012): 337-341; and S. Aral and D. Walker, “Creating Social Contagion Through Viral Product Design: A Randomized Trial of Peer Influence in Networks,” Management Science 57, no. 9 (September 2011): 1623-1639.
5. L. Muchnik, S. Aral and S.J. Taylor, “Social Influence Bias: A Randomized Experiment,” Science 341, no. 6146 (August 9, 2013): 647-651.
6. See M. Luca and G. Zervas, “Fake It Till You Make It: Reputation, Competition and Yelp Review Fraud,” Harvard Business School NOM Unit working paper no. 14-006, Boston, Massachusetts, November 8, 2013; Y. Liu, “Word-of-Mouth for Movies: Its Dynamics and Impact on Box Office Revenue,” Journal of Marketing 70, no. 3 (2006): 74-89; and J.A. Chevalier and D. Mayzlin, “The Effect of Word of Mouth on Sales: Online Book Reviews,” Journal of Marketing Research 43, no. 3 (August 2006): 345-354.
7. N. Hu, J. Zhang and P.A. Pavlou, “Overcoming the J-Shaped Distribution of Product Reviews,” Communications of the ACM 52, no. 10 (October 2009): 144-147.
8. Special thanks to Georgios Zervas of Boston University for thoughtful discussions about this particular insight.
9. D. Mayzlin, Y. Dover and J. Chevalier, “Promotional Reviews: An Empirical Investigation of Online Review Manipulation,” American Economic Review, in press.
10. Splattypus, “Why Are Comment Scores Hidden?,” June 2013, www.reddit.com; see also Deimorz, “Moderators: New Subreddit Feature — Comment Scores May Be Hidden for a Defined Time Period After Posting,” May 2013, www.reddit.com.
i. One important exception is the seminal work of Salganik, Dodds and Watts, who conducted a large-scale lab experiment in an “artificial cultural market.” See Salganik et al., “Experimental Study of Inequality and Unpredictability.” Our work takes this research one step further by examining herding effects on a live website “in the wild” and by examining both negative and positive herding.
ii. Muchnik et al., “Social Influence Bias.”