
Imagine two similar products in an online store. You can’t decide which to buy. Based on user reviews, Product A has a four-star rating (out of five), and Product B has a three-star rating. Unsure, you look further into the reviews. Product A’s four stars are based on two reviewers providing four stars each. Product B’s three-star rating is the average of ten reviews – five reviewers rate it a one, and five reviewers rate it a five. You scratch your head and look closer. One of Product A’s reviewers received the product for free, and two of Product B’s reviews are more than three years old. Another of Product B’s reviews includes comments solely related to shipping speed, while yet another review is full of typographic errors. You decide to make a sandwich.
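
One way to see why those raw averages mislead is to shrink each product’s rating toward a site-wide prior before comparing. The sketch below uses a simple Bayesian average; the prior mean (3.5) and prior weight (five pseudo-reviews) are assumptions chosen purely for illustration, not values from any real store.

```python
# Bayesian (smoothed) average: shrink each product's mean toward a site-wide
# prior so that a handful of reviews cannot dominate the comparison.
# The prior mean (3.5) and prior weight (5 pseudo-reviews) are illustrative
# assumptions, not values taken from the scenario above.

def bayesian_average(ratings, prior_mean=3.5, prior_weight=5):
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + len(ratings))

product_a = [4, 4]              # two four-star reviews
product_b = [1] * 5 + [5] * 5   # five one-star and five five-star reviews

print(round(bayesian_average(product_a), 2))  # 3.64 -- small sample, pulled toward the prior
print(round(bayesian_average(product_b), 2))  # 3.17 -- more data, less shrinkage
```

With smoothing, the two-review product no longer looks dramatically better than the ten-review one; the gap narrows to roughly half a star.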

Objective Measures

The first challenge with user feedback, and this could apply to many rating situations, is that one reviewer’s five stars is another’s four. The overall rating becomes an opinion rather than a verifiable measure. Rating numerous specific, objective measures is an improvement over a single general rating. Restaurant feedback may include a quantified serving speed, for example, to allow a fair comparison between those desiring a leisurely meal and those wanting a quick bite. Hotel reviews may include the total cost paid per night. Project reviews may include the number of hours worked, team members managed, budget variances, etc.
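
As a sketch of what such a multi-measure review could look like as a data structure, here is one possible record; the field names, units, and scales are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class StructuredReview:
    """A review split into specific, quantifiable measures alongside the
    usual subjective star rating. Field names and units are illustrative."""
    overall_stars: int                             # subjective, 1-5
    serving_speed_minutes: Optional[int] = None    # restaurant example: time until food arrived
    cost_per_night: Optional[float] = None         # hotel example: price actually paid
    hours_worked: Optional[float] = None           # project example: effort expended
    review_date: date = field(default_factory=date.today)

# A restaurant review that records how quickly the meal arrived, so readers
# wanting a quick bite and readers wanting a leisurely evening can each
# weigh that measure directly rather than guessing from the stars.
quick_lunch = StructuredReview(overall_stars=4, serving_speed_minutes=12)
```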

Reviewer Credibility

Contributors of user feedback have different motivations. Some are driven by self-fulfillment or the desire to help others. However, many product reviews are written by customers who have no purchase record, and some may be outright fraudulent. Others are written to maintain a reviewer’s status or may be skewed positive to avoid backlash. Putting mechanisms in place to increase broad-based, validated feedback over time would improve overall user reviews. Tactical steps may include providing feedback incentives to otherwise non-vocal customers, soliciting periodic or follow-up feedback, valuing a reviewer’s track record of feedback, reviewing the reviewers, or adjusting anonymity levels.
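
One way to operationalize these ideas is to weight each review by a credibility score before averaging. The factors and constants below (a one-year recency half-life, a discount for unverified purchases, a bonus for a reviewer’s track record) are assumptions chosen for illustration, not a validated model.

```python
from datetime import date

def credibility_weight(review_date, verified_purchase, reviewer_review_count,
                       today=None, half_life_days=365):
    """Combine recency, purchase validation, and reviewer track record into
    a single weight in (0, 1]. Every factor and constant is an assumption."""
    today = today or date.today()
    age_days = (today - review_date).days
    recency = 0.5 ** (age_days / half_life_days)                # halves each year
    validation = 1.0 if verified_purchase else 0.4              # unverified counts less
    track_record = min(1.0, 0.2 + 0.1 * reviewer_review_count)  # caps at 1.0
    return recency * validation * track_record

def weighted_rating(reviews):
    """reviews: iterable of (stars, weight) pairs; returns the weighted mean."""
    reviews = list(reviews)
    total = sum(weight for _, weight in reviews)
    if total == 0:
        return None
    return sum(stars * weight for stars, weight in reviews) / total

# A fresh, verified review from an experienced reviewer outweighs a
# three-year-old, unverified one from a first-time reviewer.
w_new = credibility_weight(date(2024, 1, 10), True, 8, today=date(2024, 3, 1))
w_old = credibility_weight(date(2021, 3, 1), False, 1, today=date(2024, 3, 1))
print(weighted_rating([(5, w_new), (1, w_old)]))
```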


While there’s not enough information in the mental exercise above to truly evaluate Product A or Product B, there is enough to show the underlying complications of user feedback. What seems simple in theory can be quite complex in practice. Structuring feedback mechanisms to sustain objectivity and credibility is critical. You can do better than stars.
