I've been following this fantastic investigative series on the business of national security being published by The Washington Post this week. They have published three stories so far and have put together a web site with supporting information and media. Lots of food for thought, at many different levels.
For a short synopsis of the work, listen to Kai Ryssdal's interview with the author of the articles, Dana Priest, on Marketplace.
National Security Inc. | washingtonpost.com
Showing posts with label security. Show all posts
Jul 21, 2010
May 21, 2010
Here we go again...
Sigh, this is classic for anyone who's worried about data privacy when developing web-based apps. The WSJ reports today that:
So if you click on an ad from your profile page, the referring URL is sent to the advertiser without being scrubbed. Looks like steps are being/have been taken by at least Facebook, but this is a rookie mistake. To ameliorate the sting of yet another Facebook privacy smack-down, other social networks are doing the same:

The practice, which most of the companies defended, sends user names or ID numbers tied to personal profiles being viewed when users click on ads. After questions were raised by The Wall Street Journal, Facebook and MySpace moved to make changes. By Thursday morning Facebook had rewritten some of the offending computer code.
Advertising companies are receiving information that could be used to look up individual profiles, which, depending on the site and the information a user has made public, include such things as a person's real name, age, hometown and occupation.
In addition to Facebook and MySpace, LiveJournal, Hi5, Xanga and Digg also sent advertising companies the user name or ID number of the page being visited. (MySpace is owned by News Corp., which also owns The Wall Street Journal.) Twitter—which doesn't have ads on profile pages—also was found to pass Web addresses including user names of profiles being visited on Twitter.com when users clicked other links on the profiles.
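To make the leak concrete, here is a minimal sketch (using a hypothetical site and URL, not any real network's endpoints) of what an unscrubbed referrer hands an ad network, and one possible server-side scrub that strips identifying query parameters before the click-through leaves the site:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Hypothetical Referer header sent to an ad server when a user clicks
# an ad on someone's profile page -- the query string carries the ID.
referer = "http://www.example-social.com/profile.php?id=123456789"

def extract_profile_id(referer_url):
    """What an ad network can trivially do with an unscrubbed referrer."""
    query = parse_qs(urlparse(referer_url).query)
    return query.get("id", [None])[0]

def scrub_referrer(referer_url, sensitive_keys=("id", "uid", "user")):
    """One possible fix: drop identifying query parameters (e.g., by
    routing outbound ad clicks through a redirect page)."""
    parts = urlparse(referer_url)
    query = parse_qs(parts.query)
    kept = {k: v for k, v in query.items() if k not in sensitive_keys}
    return urlunparse(parts._replace(query=urlencode(kept, doseq=True)))

print(extract_profile_id(referer))   # 123456789
print(scrub_referrer(referer))       # http://www.example-social.com/profile.php
```

The redirect-page approach is roughly what the affected sites adopted: the ad server then sees only the scrubbed intermediate URL as the referrer, not the profile address.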
And don't tell me that advertisers armed with URL referrers pointing back to user profile pages are making sure they get users' consent before looking at those profiles.
Facebook said its practices are now consistent with how advertising works across the Web. The company passes the "user ID of the page but not the person who clicked on the ad," the company spokesman said. "We don't consider this personally identifiable information and our policy does not allow advertisers to collect user information without the user's consent."
A URL referrer (i.e., the user ID of the page) being "not personally identifiable" is a technicality; if it leads back to the user's profile page, then it is a breach of a policy not to divulge personally identifiable information to third parties.
I'll say it again: I'm glad all of this is happening. Social media is growing up, and it's consumers who are ensuring that things get safer out there. Apparently, when experts expose security issues, the fixes languish:
The sharing of users' personally identifiable data was first flagged in a paper by researchers at AT&T Labs and Worcester Polytechnic Institute last August. The paper, which drew little attention at the time, evaluated practices at 12 social networking sites including Facebook, Twitter and MySpace and found multiple ways that outside companies could access user data.
I know it's hip to buck the established/academic technology world in social media tech circles, but sometimes these smarty-pants can actually help prevent some embarrassing moments.
Facebook, MySpace Confront Privacy Loophole - WSJ.com
Filed under:
facebook,
MySpace,
privacy,
security,
social media,
social networks,
Twitter
Nov 30, 2009
Understanding scam victims: seven principles for systems security
Interesting University of Cambridge paper on how scams work and the psychological factors behind them. The authors essentially cover common scams and the reasons why they work but also take some time to explain how administrators need to consider these factors when designing system security.
For example, one of the seven principles of a successful scam is called the Dishonesty principle, whereby a scam goes unreported because the mark would have to admit some dishonest act in order to expose the fraud. The paper's authors offer some wise advice on creating corporate policy that will encourage reporting of fraud without fear of retribution.
The security engineer needs to be aware of the Dishonesty principle. A number of attacks on the system will go unreported because the victims don’t want to confess to their “evil” part in the process. When a corporate user falls prey to a Trojan horse program that purported to offer, say, free access to porn, he will have strong incentives not to cooperate with the forensic investigations of his system administrators to avoid the associated stigma, even if the incident affected the security of the whole corporate network. Executives for whom righteousness is not as important as the security of their enterprise might consider reflecting such priorities in the corporate security policy—e.g. guaranteeing discretion and immunity from “internal prosecution” for victims who cooperate with the forensic investigation.
The authors note that well designed security should make it easy for users to "authenticate" the validity of the system they are entering sensitive information into.
Much of systems security boils down to “allowing certain principals to perform certain actions on the system while disallowing anyone else from doing them”; as such, it relies implicitly on some form of authentication—recognizing which principals should be authorized and which ones shouldn’t. The lesson for the security engineer is that the security of the whole system often relies on the users also performing some authentication, and that they may be deceived too, in ways that are qualitatively different from those in which computer systems can be deceived. In online banking, for example, the role of verifier is not just for the web site (which clearly must authenticate its customers): to some extent, the customers themselves should also authenticate the web site before entering their credentials, otherwise they might be phished. However it is not enough just to make it “technically possible”: it must also be humanly doable by non-techies. How many banking customers check (or even understand the meaning of) the https padlock?
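The padlock check the authors mention can in principle be automated. As a rough sketch (my own illustration, not from the paper), this is the verification the https padlock represents, expressed with Python's standard `ssl` module:

```python
import socket
import ssl

def check_site_certificate(hostname, port=443):
    """Do what the browser padlock does: establish a TLS connection with
    certificate-chain and hostname verification enabled. Returns the
    server certificate's subject on success; raises ssl.SSLError (e.g.,
    ssl.SSLCertVerificationError) if verification fails."""
    context = ssl.create_default_context()  # CERT_REQUIRED + hostname check
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()["subject"]

# Usage (requires network access):
#   check_site_certificate("www.example.com")
```

The point the paper makes still stands: this machinery only protects users who actually perform (or understand) the check, which is exactly why it must be surfaced in a way non-techies can act on.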
The verification must be easy enough for mortals. Likewise, any mechanism used to authenticate users should not be overly draconian, since users will circumvent the system. An interesting example of this effect concerns e-mailbox quotas: when administrators limit attachment sizes to accommodate small mailbox quotas, they run the risk of data leakage, because users turn to consumer messaging systems that administrators have no control over, such as Gmail, to send large file attachments to co-workers and customers.
Understanding scam victims: seven principles for systems security
Aug 21, 2009
Collaborative Strategy Guild on NPR!
Collaborative Strategy Guild member Pete Lindstrom is interviewed by Bill Radke on the APM Marketplace Morning Report for August 18, 2009:
Pete Lindstrom, research director at Spire Security, talks with Bill Radke about what consumers can do to keep their credit cards secure.
Filed under:
collaborative strategy guild,
credit cards,
security