I'd like to do a little thought experiment. This is intended to explain to people just how complex human problems really are, and why writing good software is difficult.
Let's imagine that you have been hired by a company to write software for their new site. They are going to take on Yelp and Google Reviews to make a single, all-encompassing source for reviews on anything. There's one major flaw that they see in every system currently on the market: trust.
You see, this company is born from a group of people who are very cautious about, well, everything. They want to be able to read relevant reviews about new products, but they don't know if they can trust the reviews they see on these other sites. So they've hired you to make their offering, Trust.er.
The basics of trust
Trust (in the sense of reviews) can be defined as the amount of confidence you have that the opinion of someone else will match your own. This is important to understand because the opinion of someone else may be based on knowledge, intuition, or even a single experience. If you trust someone, then their opinions should be more important to you than those of a random stranger on the internet. A single bad review among a hundred good reviews may be more important to you because you trust the reviewer.
This is the starting point for Trust.er - building a network of people who you trust. Trust is a personal thing, so the other systems where a person might become a "trusted" reviewer don't meet the needs of this company. Just because a thousand people trust one person doesn't mean that you should too; they might be entirely different to you.
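To make this starting point concrete, here is a minimal sketch of what a personal trust network might look like in code. All names and the `rank_reviews` helper are hypothetical; the point is just that one bad review from a trusted friend can outrank a stranger's five stars:

```python
# Minimal sketch: each user keeps a personal set of trusted reviewers,
# and reviews from trusted reviewers sort ahead of everyone else's.

class User:
    def __init__(self, name):
        self.name = name
        self.trusted = set()  # users this person explicitly trusts

    def trusts(self, other):
        self.trusted.add(other)

def rank_reviews(reader, reviews):
    """Sort reviews so those from trusted reviewers come first."""
    # False sorts before True, so trusted authors float to the top
    return sorted(reviews, key=lambda r: r["author"] not in reader.trusted)

alice, bob, stranger = User("Alice"), User("Bob"), User("Stranger")
alice.trusts(bob)

reviews = [
    {"author": stranger, "stars": 5},
    {"author": bob, "stars": 1},  # one bad review from a trusted friend
]
ranked = rank_reviews(alice, reviews)
# Bob's single bad review now appears before the stranger's glowing one
```

Note that trust here is per-reader: Bob is only "trusted" inside Alice's own set, which is exactly why a global "top reviewer" badge doesn't fit this company's model.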
Building a network of trust
Now a big problem with Trust.er's model is that not everyone will be on Trust.er to begin with. You can't build a very large network of people who you trust until those people are there. There's no way that you'll get all of your family and friends on there, besides which, they tend to have different opinions about things than you, so adding them all as trusted people wouldn't hold much benefit. Luckily, there are plenty of other people on Trust.er who you don't know yet; some of them have even reviewed things that you own, and generally seem to have the same opinions as you. That's great! If you like what you see, you can trust them!
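One way to surface those like-minded strangers is to compare their ratings with yours on items you have both reviewed. Here's a rough sketch of that idea; the names, items, and the "within one star" agreement rule are all invented for illustration:

```python
# Hypothetical sketch: suggest strangers worth trusting by comparing
# their star ratings with yours on items you've both reviewed.

my_ratings = {"coffee grinder": 5, "headphones": 2, "kettle": 4}

strangers = {
    "pat":  {"coffee grinder": 5, "headphones": 1},
    "drew": {"coffee grinder": 1, "kettle": 1},
}

def agreement(mine, theirs):
    """Fraction of shared items rated within one star of each other."""
    shared = mine.keys() & theirs.keys()
    if not shared:
        return 0.0
    close = sum(abs(mine[item] - theirs[item]) <= 1 for item in shared)
    return close / len(shared)

suggestions = sorted(
    strangers,
    key=lambda s: agreement(my_ratings, strangers[s]),
    reverse=True,
)
# pat agrees with you on both shared items; drew on neither
```

A real system would need to handle users with no overlapping reviews at all, which is the cold-start problem the next sections run into.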
Now it turns out that they also have a network of friends whom they trust, and when you look through their reviews, you see that many of them are trustworthy, too. You could keep building your network based on the idea that the people who are trusted by the people that you trust are in fact worthy of your trust, but this would take a long time. Perhaps it would be easier to just assume that the people you trust may also know what they're talking about when they trust other people.
The company decides that if you don't trust these people specifically, then their opinions may still have worth to you, but not as much as those of the people who you actually trust. Now we have to search for links between the people you trust and the people that they trust, and then weight their opinions accordingly.
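A sketch of that weighting might look like the following. The graph, the names, and the specific weights (1.0 for direct trust, 0.5 for a friend-of-a-friend) are assumptions made up for illustration, not anything the company has specified:

```python
# Hypothetical trust graph: each key maps a user to the set of users
# they explicitly trust.
trust = {
    "you": {"bob"},
    "bob": {"kim"},
    "kim": set(),
}

def trust_weight(graph, reader, reviewer, direct=1.0, indirect=0.5):
    """Weight a reviewer's opinion for a given reader: full weight for
    direct trust, reduced weight for someone trusted by a person the
    reader trusts, and zero otherwise."""
    if reviewer in graph.get(reader, set()):
        return direct
    if any(reviewer in graph.get(friend, set())
           for friend in graph.get(reader, set())):
        return indirect
    return 0.0

# You trust Bob directly; Kim only through Bob; strangers not at all.
trust_weight(trust, "you", "bob")  # direct: full weight
trust_weight(trust, "you", "kim")  # one hop away: reduced weight
```

Even this two-level version already requires a graph search per review; extending it to arbitrary depth turns every page load into a graph-traversal problem.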
Now we run into another problem. You trust Bob, and Bob trusts Kim, but Kim distrusts you. Maybe your opinions on the best colour for curtains don't line up. You don't have any particular opinion on Kim, but her reviews keep showing up in your stream because you trust Bob, and Bob trusts Kim. It turns out that the reason why she doesn't trust you is because your opinions about things are actually quite different. Now you barely know Kim, and don't really want to have to distrust her just to get her to stop showing up; after all, she already distrusted you, Bob trusts her opinion, and you trust Bob to know what he's talking about.
The company decides that if you haven't made an explicit decision of trust for another person, then their opinion of you (or of the people that you both trust) should be taken into account. This means that if you want to see reviews about that new cafe that opened up down the street, it needs to check trust relationships not only in one direction, but all the way back again, too.
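The Bob-and-Kim situation can be sketched by adding a reverse-direction check to the weighting. Again, the graphs, names, and the specific discount factor are hypothetical choices made for illustration:

```python
# Sketch: weight indirect trust, then discount it if the reviewer
# distrusts the reader in the reverse direction.
trust = {"you": {"bob"}, "bob": {"kim"}}
distrust = {"kim": {"you"}}  # Kim has explicitly distrusted you

def weighted_trust(reader, reviewer, direct=1.0, indirect=0.5, penalty=0.5):
    if reviewer in trust.get(reader, set()):
        return direct
    via_friend = any(reviewer in trust.get(friend, set())
                     for friend in trust.get(reader, set()))
    if not via_friend:
        return 0.0
    score = indirect
    # The reverse direction: does the reviewer distrust the reader?
    if reader in distrust.get(reviewer, set()):
        score *= penalty
    return score

weighted_trust("you", "bob")  # direct trust: full weight
weighted_trust("you", "kim")  # indirect, discounted by Kim's distrust of you
```

Kim's reviews still appear (Bob vouches for her), but her own signal that your opinions differ quietly pushes them down, without you having to distrust her yourself.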
Fields of trust
Trust.er has had a great launch, and users are flocking in, but a common complaint starts showing up. It goes something like this:
"I trust my friend Sharon's opinion on fashion, but she also believes in homoeopathy. Every time I look up reviews for a medicine, Sharon's reviews are listed as trustworthy even though her reviews for medicine are just rants about chemtrails. My mother's doctor, on the other hand, barely gets a look in."
The company now realises that trust is even more complicated than the complex model they've already built. Some people are trustworthy in certain areas of life, and not in others. Under the current model, trusting someone means trusting their opinion on everything; without support for fields of trust, your only alternative is not to trust them at all.
The simple trust relationships have just gotten more complex, and much harder to maintain. Not only does Trust.er have to check several levels of trust relationships in both directions, but it has to ascertain if a person should be trusted on a particular topic at all.
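The structural change behind fields of trust is small to state but costly in practice: trust edges are now keyed by topic, so every lookup in the earlier sketches has to carry a topic along. A minimal illustration, with all names and topics invented:

```python
# Sketch: trust edges now carry a topic. Trusting Sharon on "fashion"
# says nothing about her reviews of "medicine".
trust = {
    ("you", "fashion"): {"sharon"},
    ("you", "medicine"): {"doctor"},
}

def trusted_reviewers(reader, topic):
    """Who does this reader trust on this particular topic?"""
    return trust.get((reader, topic), set())

"sharon" in trusted_reviewers("you", "fashion")   # trusted here
"sharon" in trusted_reviewers("you", "medicine")  # but not here
```

Every mechanism built so far (indirect trust, reverse-direction checks, opinion weighting) now has to be answered per topic, and someone still has to decide which topic a review about, say, herbal tea belongs to.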
Just trying to model how trust works, something so simple and innate in humans that we don't even have to think about it, has become a massive task. This is the problem that many clients face when they hire a developer (and the problem that many inexperienced developers face when they tackle such "simple" concepts as "time" or "names"). The simpler and more natural something is to us, the more difficult it often is to actually program.
So the next time that you have a "simple website", or want to build that one thing that no-one else seems to get right, just have a think about trust. Good software is hard to create, and simple ideas are often the hardest.