Nov 30, 2009

Understanding scam victims: seven principles for systems security

Interesting University of Cambridge paper on how scams work and the psychological factors behind them. The authors cover common scams and the reasons they work, but they also take time to explain how administrators should account for these factors when designing system security.

For example, one of the seven principles of a successful scam is called the Dishonesty principle, whereby a scam goes unreported because the mark would have to admit some dishonest act in order to expose the fraud. The paper's authors offer some wise advice on creating corporate policy that will encourage reporting of fraud without fear of retribution.

The security engineer needs to be aware of the Dishonesty principle. A number of attacks on the system will go unreported because the victims don’t want to confess to their “evil” part in the process. When a corporate user falls prey to a Trojan horse program that purported to offer, say, free access to porn, he will have strong incentives not to cooperate with the forensic investigations of his system administrators to avoid the associated stigma, even if the incident affected the security of the whole corporate network. Executives for whom righteousness is not as important as the security of their enterprise might consider reflecting such priorities in the corporate security policy—e.g. guaranteeing discretion and immunity from “internal prosecution” for victims who cooperate with the forensic investigation.

The authors note that well-designed security should make it easy for users to "authenticate" the validity of the system into which they are entering sensitive information.

Much of systems security boils down to “allowing certain principals to perform certain actions on the system while disallowing anyone else from doing them”; as such, it relies implicitly on some form of authentication—recognizing which principals should be authorized and which ones shouldn’t. The lesson for the security engineer is that the security of the whole system often relies on the users also performing some authentication, and that they may be deceived too, in ways that are qualitatively different from those in which computer systems can be deceived. In online banking, for example, the role of verifier is not just for the web site (which clearly must authenticate its customers): to some extent, the customers themselves should also authenticate the web site before entering their credentials, otherwise they might be phished. However it is not enough just to make it “technically possible”: it must also be humanly doable by non-techies. How many banking customers check (or even understand the meaning of) the https padlock?
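The padlock question can be made concrete: the browser icon summarizes a certificate check that software can also perform explicitly. Here is a minimal sketch of that check using Python's standard `ssl` module; the hostname `www.example.com` is just an illustration, not a site the paper mentions.

```python
import socket
import ssl


def check_server_certificate(hostname: str, port: int = 443) -> dict:
    """Perform the checks the browser padlock summarizes:
    validate the server's certificate chain against trusted CAs
    and confirm the certificate actually matches the hostname."""
    # create_default_context() loads the system CA store, requires a
    # valid certificate, and enables hostname verification.
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # Reaching this point means the chain validated and the
            # hostname matched; on failure, wrap_socket raises
            # ssl.SSLCertVerificationError instead.
            return tls.getpeercert()


# Example usage (requires network access):
# cert = check_server_certificate("www.example.com")
# print(cert["subject"])
```

The point of the excerpt is that ordinary users are implicitly expected to perform the equivalent of this verification by eye, which is exactly what makes phishing effective.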

The verification must be easy enough for mortals. Likewise, any mechanism used to authenticate users should not be overly draconian, since users will simply circumvent it. An interesting example of this effect concerns e-mailbox quotas: when administrators limit attachment sizes to accommodate small mailbox quotas, they run the risk of data leakage, because users turn to consumer messaging systems that administrators have no control over, such as Gmail, to send large file attachments to co-workers and customers.

