Chapter 4
Trust in Computing
Trust is a frequently used term in computer science. In this chapter, we will review some of the different contexts in which it is used and, where applicable, draw parallels with social cognitive trust constructs. The computing literature frequently uses the term trust to describe an institution that is designed to make hardware and software more reliable, so that people will choose them to communicate with each other, store personal information, find information, and perform various other actions. Such trustable systems satisfy a number of expectations: they will not lose or alter information, they will not expose information to unintended third parties, they will supply information in a timely manner, they will provide fast, high-quality answers, and so on. Some institutions are completely computational, relying on algorithms and a computational infrastructure for their proper function, e.g., traditional security mechanisms for enforcing access control protocols.
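To make the purely computational case concrete, the short Python sketch below shows one way such a mechanism might enforce an access control list; the resources, users, and permissions are invented for illustration and are not drawn from any particular system.

```python
# Minimal sketch of a purely computational trust mechanism: an access
# control list.  Resources, users, and permissions are hypothetical.
ACL = {
    "salary_records": {"alice": {"read", "write"}, "bob": {"read"}},
    "public_reports": {"alice": {"read"}, "bob": {"read"}, "carol": {"read"}},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Grant access only if the ACL explicitly lists the action for this user."""
    return action in ACL.get(resource, {}).get(user, set())

print(is_allowed("alice", "salary_records", "write"))  # True
print(is_allowed("bob", "salary_records", "write"))    # False
```

Trust in such a mechanism rests entirely on the correctness of the algorithm and its data; no human judgment enters the decision at run time.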
Alternatively, some institutions have a human component: they require explicit
human input for their function. For example, reputation management systems have
to rely on feedback from users to produce high-quality reputation values for people. Crowdsourcing systems produce high-confidence answers to problems based almost entirely on human input. In both cases, the human input is solicited as part of a computational backbone that incorporates many design decisions, e.g., how participants should be recruited, how their answers should be combined, and when answers should be disregarded. Recommendation systems are another example of institutions dependent on human input obtained from the digital footprint of their users. In this case, the human input is implicit. There are also recommendation
systems based on explicit recommendations that are aggregated over the network.
Institutions that rely heavily on the quality of human input need to take the possible impact of social and cognitive factors into consideration in their computation: human cognition, social ties, incentives for using the systems, and so on.
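As a rough illustration of the design decisions mentioned above (how answers are combined and when they are disregarded), the Python sketch below aggregates redundant crowdsourced answers by majority vote and drops contributors whose agreement with a provisional consensus falls below a threshold. The tasks, workers, and threshold are hypothetical and chosen only for illustration.

```python
from collections import Counter

# Hypothetical redundant answers: task -> {worker: answer}.
answers = {
    "task1": {"w1": "A", "w2": "A", "w3": "B"},
    "task2": {"w1": "C", "w2": "C", "w3": "D"},
    "task3": {"w1": "A", "w2": "A", "w3": "B"},
}

def majority(task_answers):
    """Combine the answers for one task by simple majority vote."""
    return Counter(task_answers.values()).most_common(1)[0][0]

# First pass: a provisional consensus answer per task.
consensus = {task: majority(a) for task, a in answers.items()}

# Estimate each worker's rate of agreement with the provisional consensus.
agreement = {}
for task, a in answers.items():
    for worker, answer in a.items():
        hits, total = agreement.get(worker, (0, 0))
        agreement[worker] = (hits + (answer == consensus[task]), total + 1)

MIN_AGREEMENT = 0.5  # illustrative threshold for disregarding a contributor

trusted = {w for w, (hits, total) in agreement.items() if hits / total >= MIN_AGREEMENT}

# Second pass: recompute the consensus from trusted contributors only.
final = {
    task: majority({w: ans for w, ans in a.items() if w in trusted})
    for task, a in answers.items()
}
print(final)  # {'task1': 'A', 'task2': 'C', 'task3': 'A'}
```

Deployed systems use far more sophisticated weighting, but the two-pass structure (estimate a consensus, then discard or re-weight inputs) captures the flavor of the design decisions such institutions must make.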
The same division between trusting actions and trusting information exists in
the design of different systems. Institutions designed to enable economic and social
activities tend to target trust for actions. In contrast, institutions designed to enable
information exchange or education target trust for information. However, these two
concepts remain closely related. Often, trust contexts are complex and involve