Privacy in the Age of Augmented Reality

In his talk, Alessandro Acquisti links two streams of research he is conducting at Carnegie Mellon University: the “behavioral economics of privacy,” and the study of privacy and disclosure behavior in online social networks.

First, he highlights how research in behavioral economics can help us make sense of apparent inconsistencies in privacy (and security) decision-making, and presents results from a variety of experiments he has conducted in this area.

Then, he discusses the technical feasibility and privacy implications of combining publicly available Web 2.0 images with off-the-shelf face recognition technology, for the purpose of large-scale, automated individual re-identification. Combined, the results highlight the behavioral, technological, and legal challenges raised by the convergence of new information technologies, and raise questions about the future of privacy in an augmented reality world.

Watch the talk at:

Face recognition – Anonymous no more

If your face and name are anywhere on the web, you may be recognized whenever you walk the streets—not just by cops but by any geek with a computer. That seems to be the conclusion from some new research on the limits of privacy.

For suspected miscreants, and people chasing them, face-recognition technology is old hat. Brazil, preparing for the soccer World Cup in 2014, is already trying out pairs of glasses with mini-cameras attached; policemen wearing them could snap images of faces, easy to compare with databases of criminals. More authoritarian states love such methods: photos are taken at checkpoints, and images checked against recent participants in protests.

But could such technology soon be used by anyone at all, to identify random passers-by and unearth personal details about them? A study which is to be unveiled on August 4th at Black Hat, a security conference in Las Vegas, suggests that day is close. Its authors, Alessandro Acquisti, Ralph Gross and Fred Stutzman, all at America’s Carnegie Mellon University, ran several experiments that show how three converging technologies are undermining privacy. One is face-recognition software itself, which has improved a lot. The researchers also used “cloud computing” services, which provide lots of cheap processing power. And they went to social networks like Facebook and LinkedIn, where most users post real names and photos of themselves.
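At its core, the re-identification pipeline described above reduces to nearest-neighbor matching: a face in a street photo is converted to a numeric embedding and compared against embeddings computed from named social-network photos. The sketch below illustrates only that matching step, with fabricated low-dimensional vectors standing in for real face descriptors; the embedding model, the names, and the threshold value are all assumptions for illustration, not the study's actual implementation.

```python
import math

def reidentify(probe, gallery, threshold=0.6):
    """Return the name of the closest enrolled face, or None if no match is near enough.

    gallery: list of (name, embedding) pairs, e.g. built from tagged
    social-network photos. Embeddings here are toy 4-dimensional vectors.
    """
    def dist(a, b):
        # Euclidean distance between two embeddings
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    name, d = min(((n, dist(probe, e)) for n, e in gallery), key=lambda t: t[1])
    return name if d < threshold else None

# Fabricated "enrolled" identities (names and vectors are illustrative only).
gallery = [("alice", [0.10, 0.90, 0.20, 0.40]),
           ("bob",   [0.80, 0.10, 0.70, 0.30])]

probe = [0.12, 0.88, 0.22, 0.41]       # embedding of an unlabeled street photo
print(reidentify(probe, gallery))      # close to alice's vector
```

With a real face-recognition model supplying the embeddings and a cloud service supplying the compute, the same loop scales to millions of enrolled profiles, which is precisely the convergence the researchers warn about.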


See full article at

Predicting Social Security numbers from public data

Information about an individual’s place and date of birth can be exploited to predict his or her Social Security number (SSN). Using only publicly available information, we observed a correlation between individuals’ SSNs and their birth data and found that for younger cohorts the correlation allows statistical inference of private SSNs. The inferences are made possible by the public availability of the Social Security Administration’s Death Master File and the widespread accessibility of personal information from multiple sources, such as data brokers or profiles on social networking sites. Our results highlight the unexpected privacy consequences of the complex interactions among multiple data sources in modern information economies and quantify privacy risks associated with information revelation in public forums.
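The statistical idea behind the inference can be illustrated with a toy sketch: because SSN prefixes historically correlated with the state and date of issuance, the Death Master File records of deceased people born near a target's birth date, in the same state, tend to share the target's first digits. Everything below is fabricated for illustration (the records, the numbers, the 30-day window, and the simple mode-based guess); the actual study used far more careful statistical modeling.

```python
from collections import Counter
from datetime import date

def predict_ssn_prefix(birth_state, birth_date, dmf_records, window_days=30):
    """Guess the first five SSN digits (area + group) for a target.

    dmf_records: list of (state, birth_date, ssn) tuples, standing in for
    public Death Master File entries. Returns the most common prefix among
    records from the same state born within `window_days` of the target,
    or None if no such records exist.
    """
    nearby = [ssn[:5] for state, bd, ssn in dmf_records
              if state == birth_state and abs((bd - birth_date).days) <= window_days]
    if not nearby:
        return None
    return Counter(nearby).most_common(1)[0][0]

# Fabricated DMF records: same-state, same-week births share a prefix.
dmf = [("PA", date(1990, 6, 1), "172641234"),
       ("PA", date(1990, 6, 3), "172645678"),
       ("PA", date(1990, 6, 5), "172649012"),
       ("OH", date(1990, 6, 2), "270113456")]

print(predict_ssn_prefix("PA", date(1990, 6, 4), dmf))  # -> "17264"
```

Once the first five digits are narrowed down, the remaining four-digit serial leaves only a few thousand candidates, which is why the paper frames this as a quantifiable privacy risk rather than a curiosity.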

See article by Alessandro Acquisti and Ralph Gross.