Upwork: Success Score UX
In search of a problem
In April 2015, one of the lead product managers at Upwork found a problem. Upwork had previously used a five-star rating system to show the quality of a freelancer on its marketplace. However, Upwork had recently switched to a percentage system, with a complicated equation generating the result. When Upwork implemented the new system, they found people didn't use it to help them choose between freelancers.
Upwork asked me to look at the success score and design something users would find useful for choosing between freelancers.
What Is Similar To A Success Score?
I started the project by looking at other rating systems. Specifically, I wanted to find similar systems to use as points of comparison.
The current concept matched services like Rotten Tomatoes or Amazon, with their percentage and star systems. These are Likert-like systems with a full range from negative to positive.
What is it not like?
The second part of my research focused on rating systems that were dissimilar to Upwork's. This included label systems such as movie ratings, and award systems like the Michelin Guide.
Defining The Problem Part 1
With the initial research done, I worked to separate the purpose of the score from the problem it was supposed to solve. The score is meant to provide clients with a meaningful way to choose between two freelancers.
However, if you think critically about a rating score in the scenario of freelancers, it doesn't work. It doesn't work because Upwork was asking clients to make a choice without enough information. What is a meaningful difference in score: 1%, 5%, 10%? There is no way to make this evaluation without showing the variation in scores.
Defining The Problem Part 2
The problem of a success score compounds in real-world examples. How should a client compare a freelancer at $10/hr with a 92% score to a freelancer at $20/hr with a 96% score? Maybe the more expensive one is twice as fast, and thus the better choice. Or perhaps the difference in score isn't meaningful, and the cheaper freelancer is a great deal. There is no way to know using the Upwork success score.
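The ambiguity can be made concrete with some hypothetical numbers (the rates and hours below are assumptions for illustration, not data from the case study):

```python
# Hypothetical comparison of two freelancers. The success score says
# nothing about speed, so total project cost cannot be derived from
# the hourly rate and the score alone.

def total_cost(hourly_rate, hours_needed):
    """Total cost of a project at a given rate and duration."""
    return hourly_rate * hours_needed

# Assume a project that takes the cheaper freelancer 40 hours.
cheap = total_cost(10, 40)   # $10/hr, 92% score
# If the pricier freelancer is twice as fast, the same project takes 20 hours.
fast = total_cost(20, 20)    # $20/hr, 96% score

print(cheap, fast)  # 400 400 -- identical total cost; the score never told us speed
```

Under these assumed numbers the two freelancers cost exactly the same, so the decision hinges on information (speed, score variance) that the success score does not expose.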
Solving the problem
The solution came quickly, after further investigation into the freelancer marketplace. It turns out Upwork removes freelancers from its system who habitually receive poor ratings. This is prudent, of course, but it means the success score is nothing like an Amazon rating or a Rotten Tomatoes percentage. Both Amazon and Rotten Tomatoes include items with poor ratings. Bad movies exist, bad products exist, and neither service removes them from the system.
This realization shows that Upwork has something much closer to a Michelin Guide system. There are average freelancers, good freelancers, and amazing freelancers. There are no bad freelancers, as they are removed from the system.
In the end the solution was a simple one: design a new good/better/best system resembling the Michelin star system, and educate Upwork clients on using it.