or performing crucial infrastructure functions. Hence, such systems can be relied
on by a person or another automated agent for these functions. Security alone may
refer to different goals that require trust, from transmitting and storing information
to solving problems. In the taxonomy offered by Grandison et al. [ 31 ], the closest
term to this type of trust construct is “infrastructure trust”. It refers to one's trust
that a system will work properly. Security plays a major role here. Often, security
is concerned not with whether systems are capable of carrying out their function,
but with whether they perform that function as intended. In the canonical taxonomy,
this corresponds to integrity of systems. We discuss this distinction shortly.
Security is complicated further by considerations of privacy and ownership of
data. Information providers may ensure security (to some degree), but may choose
to mine the personal data of their users for business purposes or sell it to third
parties. In this case, data security is still compromised if the usage violates the users'
intent to keep their data private. When systems or data are controlled by entities
other than the trustor, it is also important to consider whether these entities can be
trusted to keep data secure. In the networked world, we increasingly use systems
that are controlled by entities other than ourselves. For example, more and more
data is now stored in “the cloud”. As a result, understanding whether systems are
trustable ultimately involves non-computational factors like the terms of use as well
as computational ones.
Computing infrastructure also includes the people who use it. Actions of people
may end up harming a system and compromising its security. For example, there
are still people who use “password” as their password or simply keep default
passwords unchanged. It can often be much easier to get people to give up sensitive
information than to break into a secure system to extract it. People are generally
unaware of the consequences of their actions when it comes to computer
security [ 75 ]. How can we measure how “secure” or “trustable” a system is by taking
into account the human component? For example, the push for stronger passwords
makes it even harder for people to remember them. This may lead users to employ
simple heuristics for constructing passwords, which in turn make systems even more
vulnerable.
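The problem with such heuristics can be illustrated with a short sketch: a naive checker that flags passwords built from a common base word with digits or symbols appended. The word list here is hypothetical and tiny; real checkers compare against large lists of leaked and default passwords.

```python
import re

# Hypothetical list of common base words; real checkers use large
# lists of leaked and default passwords.
COMMON_BASES = {"password", "admin", "welcome", "letmein", "qwerty"}

def follows_weak_heuristic(pw: str) -> bool:
    """Flag passwords built by a common heuristic: a well-known word,
    optionally capitalized, with digits or symbols appended."""
    core = re.sub(r"[\d!@#$%^&*]+$", "", pw)  # strip trailing digits/symbols
    return core.lower() in COMMON_BASES

print(follows_weak_heuristic("Password1!"))  # True: 'complex' yet predictable
print(follows_weak_heuristic("x7#Tq9vLp"))   # False
```

A password like "Password1!" satisfies typical complexity rules while remaining trivially guessable, which is exactly the gap between formal policy and actual security discussed above.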
One of the approaches suggested to help human users enforce security is to
introduce an appropriate mental model [ 11 ], such as an analogy to public health.
When the majority of a population is vaccinated against a disease, the whole
population is protected, a notion known as herd immunity. By analogy, the actions
required to keep a system secure, such as installing necessary patches and antivirus
software, contribute to the security of the whole infrastructure. Such a mental model may help users
to justify why certain actions are beneficial and modify their behavior. Another
analogy is market-based. Secure systems contribute to a public good, and vulnerable
machines pose a potential financial loss. This approach is very similar to some of
the methods used to compute trust. Other methods involve employing large groups
of people playing online games to identify threats, improving response time
and accuracy by relying on the high volume of data provided by the
players [ 61 ]. Such an approach trusts people to help enhance security methods.
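The crowdsourcing idea can be sketched as a simple majority vote over independent reports. The labels and player reports below are hypothetical; systems like those in [ 61 ] aggregate reports in more sophisticated ways, but the core intuition is that many independent, better-than-chance reporters tend to outperform any single one.

```python
from collections import Counter

def aggregate_reports(reports):
    """Majority vote over independent player labels.

    Returns the winning label and the fraction of reports supporting it;
    with many independent reporters, this majority label tends to be
    more accurate than any single report."""
    counts = Counter(reports)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(reports)

# Hypothetical reports from five players inspecting the same sample.
label, confidence = aggregate_reports(
    ["threat", "threat", "safe", "threat", "threat"])
print(label, confidence)  # threat 0.8
```

Note that this scheme itself embodies trust: the aggregator trusts that most players report honestly, which is why such approaches resemble trust computation.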