directions from a system), decision selection (e.g., using expert systems for
diagnostic help) and action implementation (e.g., sorting mail to different locations).
Such automation is quickly becoming part of daily life. Early automation systems
in manufacturing and flight control are still in existence today. On top of these, new
examples of automation span a large range from the power grid infrastructure to
mobile phones, and from intelligent information agents like Siri® to many different
types of sensor-based devices.
Research shows that people tend to attribute human-like qualities to non-human
actors, to the point that a departure from human-like behavior creates an unnerving
effect (often called the “uncanny valley,” in which a strong but imperfect resemblance
to humans becomes uncomfortable) [40]. Reeves and Nass [58] show that humans project a social
identity onto media and technologies that offer social cues. People talk to their
devices and form emotional connections to them; they try to be polite
to technology that assists them and get angry at technology that does not work
properly. As a result, many non-human entities can be considered trustees.
The human factors design literature investigates which traits are associated
with non-human trustees.
Hancock et al. [25] discuss whether robots can be trusted as teammates. Here
“robot” is a general term for any system that automates actions normally carried
out by humans. The authors argue that our understanding of what a robot is and
what it is capable of is shaped by examples from popular culture and by the
expectations of behavior those examples create. Hancock et al. [26] show that robot performance-
based factors (e.g., predictability, reliability) and robot attributes (e.g., proximity,
adaptability) are the largest contributors to trust in human-robot interactions. These
can be construed as trustworthiness attributes of robots (Fig. 3.3).
Lee and See [36] discuss how trust in automation should be calibrated to the
capability of the trusted system. For example, over-reliance on sensors can have
severe consequences if pilots fail to question the readings of faulty sensors
in time to prevent a plane crash. Lee and See identify three “traits” that affect
trust in an automation system. The first, performance, describes what the automation
does and how reliable it is. In the canonical classification of trust attributes, this
corresponds to the ability of the trustee. The second, process, describes the degree
to which the automation's algorithms are appropriate for achieving the operator's
goals. This is similar to the definition of integrity, but it is not associated with specific
values; it describes the fit of the system to the specific problem.
The final component, purpose, refers to the algorithms and operations that govern
the behavior of the automation and encode the original intent of its creators. In
popular culture, this is sometimes discussed as whether a robot can be designed
to do no harm and to what degree this can be codified in an algorithm [25].
This component resembles benevolence, but the question becomes whether the
system is perceived as having intentionality. Does the intelligent agent participate
in a shared goal with the trustor? Without a shared goal, can one talk about
intentions, or is the robot simply behaving according to a specific design? Without
these attributions, people seem to treat machines differently from other people [20].
In fact, neurological studies indicate that different parts of the brain are involved