Designing strong tools is an important step in providing security. Unfortunately, even highly secure systems can be compromised by human error. Until users understand the ‘how’ and ‘why’ of security, they risk remaining the weak links in the security process. Fortunately, human error can be reduced when systems are designed with the user in mind. When concepts of trust and comfort are leveraged in the design process, a system can support an engaged user in making informed decisions about security. The Ten Commandments for Real People, proposed in an earlier article by the same group of researchers, can guide the development of human-focused security methods and tools (see sidebar).
Marsh et al. posit that raising the profile of trust in computing will permit users to make their own security decisions. Foreground trust integrates reasoning techniques into technology, as in the Device Comfort model. With the Device Comfort approach, the technology compiles evidence about its degree of comfort with the environment, the user and their task. The device can then present this information to the user, to advise, encourage and even warn the user about potential actions, in a way that helps the user understand the security implications. Device Comfort can recognize context, and so permits the user a full range of personas. For example, its parameters can handle the shift from personal use to professional settings, where the user might take on a new set of tasks on different networks. When decisions about security are made in collaboration with the user, rather than unilaterally and behind the scenes by the device, the user is better able to see themselves as part of the security process.
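The article does not give the mechanics of Device Comfort, but the idea of combining evidence about environment, user and task into a single comfort level that drives advice, encouragement or warnings can be sketched roughly as follows. The function names, weights and thresholds here are illustrative assumptions, not the published model.

```python
# Hypothetical sketch of a Device Comfort calculation. All inputs are
# scores in [0, 1]; the weights and thresholds are invented for
# illustration and are not part of the published model.

def comfort_score(environment: float, user: float, task: float,
                  weights: tuple = (0.4, 0.3, 0.3)) -> float:
    """Combine evidence about environment, user and task into one
    comfort value in [0, 1]."""
    w_env, w_user, w_task = weights
    return w_env * environment + w_user * user + w_task * task

def advise(comfort: float) -> str:
    """Map comfort to a user-facing stance: the device advises,
    encourages review, or warns, but never silently decides."""
    if comfort >= 0.7:
        return "proceed"           # device is comfortable
    if comfort >= 0.4:
        return "encourage-review"  # ask the user to look before acting
    return "warn"                  # surface the risk; the choice stays human
```

The key design point mirrors the text: the output is advice presented to the user, not a unilateral allow/deny decision made behind the scenes.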
The interface takes the shape of an ‘annoying technology’: when an undesirable action is attempted – one that makes the device uncomfortable – the device can (charmingly) get in the way of completing the action. Picture a confirmation step that requires a few more clicks or some input from the user. The required action can be more or less involved, depending on the level of device comfort. Rather than block or override the user’s instruction, the device asks the user to pay attention before completing the action. This momentary obstruction brings to the foreground the fact that the user is making a security decision. The device, in turn, empowers the user to judge their own level of trust.
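The escalation described above – more friction as comfort drops, but never an outright block – can be sketched as below. The thresholds and prompt wording are assumptions for illustration only.

```python
# Illustrative sketch of an 'annoying technology' confirmation step:
# the lower the device's comfort, the more deliberate the confirmation
# it demands. Thresholds and messages are invented for this example.

def confirmation_challenge(comfort: float, action: str) -> list:
    """Return the extra steps the user must complete before the action
    runs. The device adds friction; it never refuses outright."""
    if comfort >= 0.7:
        return []  # comfortable: no interruption
    if comfort >= 0.4:
        # mildly uncomfortable: a simple extra confirmation click
        return [f"Confirm: really {action}? [y/N]"]
    # uncomfortable: require a typed acknowledgement of the risk
    return [
        f"Warning: '{action}' makes this device uncomfortable here.",
        f"Type CONFIRM to continue with '{action}'.",
    ]
```

Because the challenge grows with discomfort rather than becoming a hard block, the security decision stays visible to, and owned by, the user.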
Security concerns are not going away, but neither are users. In designing and commissioning security software, consider the Ten Commandments for Real People. Rather than relegating the user to ‘hopeless’ status, foreground trust returns some authority to the human. Security is an ongoing process, and one in which people can benefit from the reasoning ability of machines to make smart decisions.
Ten Commandments for Real People
1. The model is for people.
2. The model should be understandable, not just by mathematics professors, but also by the people who are expected to use it and make decisions with or from it.
3. Allow for monitoring and intervention. Humans weigh trust and risk in ways that cannot be fully predicted. A human needs to be able to make the judgment, especially when the model is in doubt.
4. The model should not fail silently, but should prompt for and expect input on ‘failure’ or uncertainty.
5. The model should allow for a deep level of configuration. Trust and security models should not assume what is ‘best’ for the user. Only the user can make that call.
6. The model should allow for querying: a user may want to know more about a system or a context.
7. The model should cater for different time priorities.
8. The model should allow for incompleteness.
9. Beware your context.
10. Humane security decision-making assistance by technology can improve security decisions by humans.