to the algorithm or constructed from statistics that are hard to manipulate. Then, the
algorithm assumes that the more trustworthy a site, the less likely it is to link to a
spam site [33].
The ranking of web pages is another type of reputation computation that is
partially based on link analysis algorithms. The rank computation assigns a score
of importance to each page, with the implication that higher-ranked pages are likely
to be more trustworthy. This is combined with methods that determine relevance.
Link analysis methods operate on a network shaped both by human activity (e.g.,
creating sites and linking to other sites) and by automated systems (e.g., link
farms and web site creation tools). These algorithms must therefore assume that not
all actors in the system are benevolent and must correctly identify the adversarial ones.
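To make the rank computation concrete, the following is a minimal PageRank-style sketch in Python; the toy link graph, damping factor, and iteration count are illustrative assumptions rather than values from any deployed system.

# Minimal PageRank-style sketch: a page's score is the probability that a
# "random surfer" lands on it, following links with probability `damping`
# and jumping to a random page otherwise. Graph and parameters are
# illustrative assumptions.
def pagerank(out_links, damping=0.85, iterations=50):
    pages = set(out_links) | {p for links in out_links.values() for p in links}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        nxt = {p: (1 - damping) / n for p in pages}
        for page in pages:
            links = out_links.get(page, [])
            if links:
                share = rank[page] / len(links)
                for target in links:
                    nxt[target] += damping * share
            else:
                # A dangling page spreads its score uniformly.
                for target in pages:
                    nxt[target] += damping * rank[page] / n
        rank = nxt
    return rank

# A page that many others link to ends up with the highest score.
print(pagerank({"a": ["c"], "b": ["c"], "c": ["a"]}))

A link farm attacks exactly this kind of computation: by creating many pages that all link to a target, an adversary inflates the target's score without any genuine endorsement behind the links.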
However, web ranking is not identical to traditional reputation management. Due
to the highly redundant nature of the Web, it is not important to rank all the pages
correctly, only those returned at the top. Furthermore, in reputation management
systems, there is usually no underlying network of relationships, which is crucial
for link analysis methods. Finally, it is still not clear how to distinguish a good link
from a bad link, e.g., a positive endorsement vs. a link given to criticize another
site. Adding semantics to links has been proposed as a solution; for instance, a link
may be annotated with a special "nofollow" attribute signaling that it should not be
treated as an endorsement. This is now commonly used by wikis and discussion groups.
How such annotated links should be processed by traditional network analysis methods
remains a topic of research [49].
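As a concrete illustration, a crawler building the link graph can inspect each anchor's rel attribute and skip links marked "nofollow" so that they carry no endorsement. The sketch below uses Python's standard html.parser module; the sample markup is invented for the example.

# Sketch: collect only endorsement links, skipping those marked
# rel="nofollow". The sample HTML is an illustrative assumption.
from html.parser import HTMLParser

class EndorsementLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.endorsed = []  # links that may count toward reputation
        self.ignored = []   # links the author declined to endorse

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        rel = (attrs.get("rel") or "").lower().split()
        (self.ignored if "nofollow" in rel else self.endorsed).append(href)

parser = EndorsementLinkParser()
parser.feed('See <a href="https://example.org/good">this</a> but not '
            '<a rel="nofollow" href="https://example.org/spam">this</a>.')
print(parser.endorsed)  # ['https://example.org/good']
print(parser.ignored)   # ['https://example.org/spam']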
Economic and political competition will continue to encourage individuals to find
new ways to manipulate page rankings to their benefit. Hence, trust in ranking
algorithms depends on the degree to which they can be manipulated to serve the
agendas of self-interested actors [20].
4.1.6 Crowdsourced Information
Many information services benefit from the wisdom of crowds. Due to the sheer
volume of information and interactions on the Internet, systems can make the
assumption that if a piece of information is reliable, then a lot of people will endorse
it. This is incorporated into link analysis algorithms, as we have seen in the previous
section. Crowdsourcing methods improve the trustworthiness of information by
explicitly seeking a large amount of independent human input for a problem [21, 69].
These methods are designed to seek input from individuals who do not have a
personal stake in the final answer, avoiding the problems associated with selfish
agents. For example, the Amazon Mechanical Turk system was originally developed to
cheaply carry out tasks that are hard for a computer to perform but easy for people.
It allows individuals to be paid small sums for completing simple tasks. Today,
Mechanical Turk is used widely for diverse purposes, from conducting user studies
of how users perceive various factors [43] to data curation tasks such as finding
duplicate and incorrect information [50].
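As a simple illustration of the underlying idea, redundant crowd judgments can be turned into a single answer by majority vote, flagging items where workers disagree; the worker responses and agreement threshold below are invented for the example.

# Sketch: aggregate redundant crowd answers by majority vote, flagging
# items with too much disagreement. Responses and threshold are
# illustrative assumptions, not data from any real system.
from collections import Counter

def aggregate(responses, min_agreement=0.6):
    """responses maps each item to the list of answers workers gave."""
    results = {}
    for item, answers in responses.items():
        winner, votes = Counter(answers).most_common(1)[0]
        if votes / len(answers) >= min_agreement:
            results[item] = winner  # confident consensus
        else:
            results[item] = None    # disagreement: route to expert review
    return results

responses = {
    "duplicate(record_17, record_42)?": ["yes", "yes", "yes", "no", "yes"],
    "duplicate(record_08, record_91)?": ["yes", "no", "no", "yes"],
}
print(aggregate(responses))
# First item reaches consensus ("yes"); the second is flagged as unresolved.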