Most people believe that Google’s search results are produced solely by computer algorithms, but a little-known group of people who work from home plays a significant role in the process. How this group of manual site evaluators operated was long a mystery, but a copy of Google’s guidelines for these evaluators has now emerged publicly.
In this 160-page manual, Google gives detailed advice on how evaluators should rate search results along several dimensions, including quality, relevance, and spamminess. Evaluators judge the results for different queries and choose from a set of labels. For relevance, these include ratings such as “Vital”, “Useful”, “Relevant”, “Slightly Relevant”, “Off-Topic or Useless”, and “Unratable”. For spam, reviewers choose from labels such as “Not Spam”, “Maybe Spam”, “Spam”, “Porn”, and “Malicious”.
Google asks assessors to consider user intent (e.g., does “mountain lion” mean Mac OS X Mountain Lion or the actual predator?), not to rate sites with invalid security credentials, and to avoid rating sites that have not been updated in more than four months.
Google’s assessors must also make decisions about pornographic material. They have to infer the intent of the person doing the search from the keywords used. If a query uses keywords that could plausibly match pornographic sites but the searcher’s intent is not pornography, then pornographic results should be flagged as “Porn”. At the same time, the guide warns raters not to assign the Porn flag to sites that are not pornographic: if the landing page is not porn, it should be rated for what it actually is.
Perhaps most controversial is that Google requires its human assessors to draw conclusions about a website’s reputation. For example, one of the questions Google poses to evaluators is:
What reputation does the website have?
This is accompanied by an explanatory note from Google:
Researching reputation is very important to the Page Quality rating. A positive reputation based on a consensus of experts is often what distinguishes the highest-quality pages from merely good ones. A negative reputation should not be ignored; in such cases the Page Quality rating should be “Low” or “Very Low”.
This is controversial for several reasons. The Web is not a reliable feedback system, and a handful of anonymous complaints cannot be assumed to be representative. Those who disparage a site may be motivated by things that are not obvious to assessors, so Google’s advice to seek a “consensus of experts” is not always helpful. It depends on who the “experts” are.
Google entrusts the assessment work to agencies such as Leapforce and Lionbridge, which hire people to work from home. Lionbridge, which describes itself as a “global crowdsourcing” agency, has published job listings seeking people to work from home evaluating websites. According to a Leapforce job advertisement, the company employs approximately 1,500 assessors who work from home.
It is striking how strongly Google’s marketing presents its search results as the product of highly sophisticated algorithms, in contrast to this reality!