concept is different from reputation, which computes how much people should trust
a specific person. The type of trust algorithm discussed here addresses how likely
people are to trust each other. Trust is not necessarily a symmetric construct; Alice
may trust Bob but Bob may not trust Alice. Most algorithms also consider when it is
possible to infer transitivity: when Alice trusts Bob and Bob trusts Charlie, is it the
case that Alice trusts Charlie as well? From a computational perspective, necessary
and sufficient conditions for transitivity are examined in [5]. However, in small
friendship circles, transitivity can be explained easily. If Alice is friends with Bob
and Bob is friends with Charlie, it is likely that Alice, Bob and Charlie all hang
out together; as a result, they are friends with each other through the transitive
closure of social relations. This type of transitivity is used frequently
in community detection algorithms. A community is generally characterized as a
(small) group of individuals that are more tightly connected to each other than to
the outside world. Note that transitivity in this case describes the component of trust
that we characterized as trustworthiness, warmth or friendliness. It does not capture
competence, which is not likely to be transitive. However, there might be external
reasons that would imply transitivity in competence. For example, Alice, Bob and
Charlie may all know each other from a prestigious college and expect that they
have certain competencies as a result.
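To make the idea concrete, the short Python sketch below computes the transitive closure of a directed trust relation represented as a set of (truster, trustee) pairs. The set-of-pairs representation and the names are illustrative choices for this example, not part of any of the cited algorithms.

```python
def transitive_closure(trust_edges):
    """Return the transitive closure of a set of (truster, trustee) pairs."""
    closure = set(trust_edges)
    while True:
        # Add (a, d) whenever (a, b) and (b, d) are already in the relation.
        new_pairs = {(a, d)
                     for (a, b) in closure
                     for (c, d) in closure
                     if b == c and (a, d) not in closure}
        if not new_pairs:
            return closure
        closure |= new_pairs

# Alice trusts Bob and Bob trusts Charlie; the closure adds Alice -> Charlie.
edges = {("Alice", "Bob"), ("Bob", "Charlie")}
print(sorted(transitive_closure(edges)))
# [('Alice', 'Bob'), ('Alice', 'Charlie'), ('Bob', 'Charlie')]
```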
Some work aims to understand which social behaviors are more likely indicators
of friendship and which are good indicators that a person is (also) trusted
for their competence [3]. Other work is based on direct evaluations of
trust [22]. Using these constructs and assumptions, trust algorithms
find a quantitative trust value for different pairs of individuals. This type of trust
computation is especially useful for developing customized services for individuals.
For example, the well-known social phenomenon of homophily implies that Alice's
friends tend to share Alice's interests [54]. In fact, Alice's friends tend to do similar
things as Alice [15] due to the social influence friends have on each other. These
findings are used for applications like targeted advertising and recommendation
systems to help people find things of interest to them.
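As a rough sketch of how such pairwise trust values can drive a customized service, the Python fragment below ranks items for a user by a trust-weighted average of her friends' ratings. The data layout and the simple weighting scheme are assumptions made for illustration; they do not reproduce any particular method from the works cited above.

```python
def recommend(trust, ratings, user, top_k=3):
    """Rank items for `user` by the trust-weighted average of friends' ratings.

    trust:   dict mapping (truster, trustee) -> trust value in [0, 1]
    ratings: dict mapping (person, item)     -> rating
    """
    scores, weights = {}, {}
    for (truster, trustee), t in trust.items():
        if truster != user or t <= 0:
            continue
        for (person, item), r in ratings.items():
            if person == trustee:
                scores[item] = scores.get(item, 0.0) + t * r
                weights[item] = weights.get(item, 0.0) + t
    # Normalize by total trust weight and sort from highest to lowest score.
    ranked = sorted(((scores[i] / weights[i], i) for i in scores), reverse=True)
    return [item for _, item in ranked[:top_k]]

trust = {("Alice", "Bob"): 0.9, ("Alice", "Charlie"): 0.4}
ratings = {("Bob", "jazz club"): 5, ("Charlie", "jazz club"): 2,
           ("Charlie", "book fair"): 5}
print(recommend(trust, ratings, "Alice"))   # ['book fair', 'jazz club']
```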
4.2.2 Trusting Agents
Trust is also used in the design of agent-based systems in which each agent evaluates
how much it trusts the other agents based on their behavior. The term “Semantic
Web” was coined by Berners-Lee, Hendler and Lassila [8], who imagined a set of
standards that would allow devices and applications to communicate with each other
at a semantic level. Trust is a foundational element of this vision, allowing agents to
decide when to trust each other. Policies must be designed to describe how agents
can broker trust relationships by verifying and requesting various types of tokens,
assuming the proper authentication methods are already in place.
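A minimal sketch of such a policy check, under the assumption that token authentication has already taken place, might look as follows; the issuer names, claim strings and field layout are hypothetical stand-ins for whatever credential format a concrete system would use.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    issuer: str
    subject: str
    claim: str            # e.g. "member-of:acme.example" (hypothetical claim)

@dataclass
class Policy:
    trusted_issuers: set
    required_claims: set

def broker_trust(policy, presented_tokens):
    """Grant trust only if every required claim is backed by a token from an
    issuer the policy already trusts. Verifying the tokens themselves
    (signatures, expiry) is assumed to have happened earlier."""
    backed = {t.claim for t in presented_tokens
              if t.issuer in policy.trusted_issuers}
    return policy.required_claims <= backed

policy = Policy(trusted_issuers={"registry.example"},
                required_claims={"member-of:acme.example"})
tokens = [Token("registry.example", "agent-b", "member-of:acme.example")]
print(broker_trust(policy, tokens))   # True
```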
The trust models of intelligent agents are often inspired by cognitive trust. Agents
have internal beliefs and act on them. They make recommendations about who