Some of these issues are explored as part of trust in automation. What if the
person trusts the robot blindly and stops monitoring it? The robot may signal
that it is experiencing a serious malfunction, yet the signal may not register
with the human team member. Unfortunately, grave consequences of this problem
have been encountered in aviation.
As the human and robot work together, they evaluate their trust in each other and
calibrate their actions accordingly. This is called learning to work together. In a
social context, this is how each team member learns the intentions of the others and
builds relationships. For effective teamwork, the person needs to trust the robot to
accomplish tasks and to provide correct and timely information. The robot needs to
trust the person to be capable of completing the tasks they take on, to give correct
information, and to react to problems in a timely way so as not to endanger others.
These definitions correspond to many different trust contexts. They imply different
goals, dependencies, and, in some cases, cognitive processes. The aim of this brief
is to explain and categorize these differences. In the remainder of this chapter, we
provide an overview of the distinctions before reviewing the literature in this
area. First, we describe networks and the new interdependencies they introduce.
Trust Context in Networks
Networks, especially socio-technological networks, bring the additional element
of interdependence. When a single human and robot work together in isolation, there
is no additional network context. In many realistic scenarios, however, the actions
of a person are constrained by the network environment in which they are embedded.
In many cases, the network provides new resources and enables people to take
actions that they cannot take alone. For example, social constructs help explain how
people in the same network can benefit from the existing social relationships in
that network. Social networks allow people to come together and accomplish bigger
tasks than they can accomplish alone. The individuals trust one another within the
context of a specific network that they are part of (see Fig. 2.3).
In networks, the social relationships are only part of the story. Technology allows
people to interact with many others who are not part of their social network but
who still contribute to creating value for themselves and others. Many tools and
services depend continuously on human input in one way or another. Wikipedia uses
human contributors and editors to create and curate content. Amazon Mechanical Turk
and other crowdsourcing systems allow people to come together to solve problems. In
these systems, participation is either voluntary or paid, which changes the
motivation of the participants. Any service that relies on human input must
incorporate methods to assess whether and when such input can be trusted, and how
to process the input depending on its level of trust. In addition to the motivation
of the participants, one has to consider the issues of bias and noise in the
human input.
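One common way to process input according to trust levels is a trust-weighted vote:
each contributor's answer counts in proportion to a trust score assigned to that
contributor. The sketch below is only illustrative; the function name, the trust
scores, and the labels are assumptions for the example, not part of any particular
crowdsourcing system.

```python
# Illustrative sketch: aggregating crowdsourced answers with a
# trust-weighted vote. Trust scores and labels are made-up examples.

def weighted_vote(answers, trust):
    """Return the label with the highest total trust-weighted support.

    answers: dict mapping contributor id -> submitted label
    trust:   dict mapping contributor id -> trust score in [0, 1]
    """
    scores = {}
    for contributor, label in answers.items():
        # Each vote contributes its contributor's trust score;
        # unknown contributors default to zero weight.
        scores[label] = scores.get(label, 0.0) + trust.get(contributor, 0.0)
    return max(scores, key=scores.get)

# Three contributors label the same item. The single high-trust vote
# for "dog" outweighs the two low-trust votes for "cat".
answers = {"alice": "dog", "bob": "cat", "carol": "cat"}
trust = {"alice": 0.9, "bob": 0.3, "carol": 0.3}
print(weighted_vote(answers, trust))  # → dog
```

A scheme like this also suggests how bias and noise enter: if trust scores are
miscalibrated, a noisy but trusted contributor can dominate the outcome.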