Obscurity by Design
One thinker proposes a shift in the way we think about user privacy.
The digital age once promised to eradicate privacy, turning us all into transparent, constantly erupting fonts of data. Somehow, it’s done something even stranger. We dig through Facebook’s endless privacy controls, making sure our posts appear just so, even as the company surveils all of our web activity. We carefully manicure our Instagram presences, revealing a carefully filtered version of our lives, even as we’re served ads intended to make us feel bad about those lives.
Social media has turned us all into elaborate managers of our images and reputations, at the same time that other personal information, such as social security numbers, has become harder than ever to protect. A corporate hack or accidental government data dump could leave any, or all, of us irrevocably exposed. We are at once careful guardians of our data selves and deeply vulnerable.
To make sense of this world, and to try to sift through the emerging definitions of privacy, I turned to Woodrow Hartzog. In recent years, Hartzog has emerged as an important thinker on matters of design, privacy, and power relationships between users and tech companies. A professor of law and computer science at Northeastern University, Hartzog has written for the mainstream press about these issues, sometimes in collaboration with his colleague Daniel Solove. His book, Privacy’s Blueprint: The Battle to Control the Design of New Technologies, was published this year by Harvard University Press. We spoke recently by phone about the benefits of obscurity, why facial recognition should be banned, and how to rethink our common definition of privacy.
Hartzog at first told me that it was impossible to define privacy. He then proceeded, in methodical terms, to do just that, focusing on issues of trust, obscurity, and autonomy. Quoting his collaborator Solove, Hartzog remarked that privacy is “not one thing but a collection of problems that have family resemblances.” These are problems that stem from information use, collection, disclosure, and exclusion.
The great misconception, Hartzog explained, lies in “thinking about privacy in terms of either control or consent. This is the GDPR-type framing — i.e. ‘we want to give you control of your data.’” Top tech CEOs often say the same thing, with Mark Zuckerberg, for instance, promising his users control over their data. Hartzog will have none of it: “I think that’s a really deceptive and limiting conceptualization of privacy in the modern age. And the reason why is we don’t have the resources to take advantage of the control we’re given. People could be given all the control in the universe, but they can’t meaningfully exercise it at scale. As a result, it basically turns into a trap, whereby if companies get to do their part by offering control, we don’t get to use it, and then the risk of loss is placed on us.” The emphasis on user control, he explained, “becomes a way for companies to shift the risk of loss onto users.”
An obvious example might be Facebook’s privacy controls, which claim to offer users extensive choices about how their information is shared but in reality do little to stem the flow of information Facebook collects about them. In short, you might be able to control which friends see one of your posts or which profile information is publicly visible, but Facebook-the-all-seeing-entity still observes, and harvests data on, everything you do.
Or consider the many big data breaches that have afflicted corporations in recent years. Often, as in the case of Equifax in 2017, these companies unfurl their most sincere-sounding, PR-approved apologies while promising to strengthen their systems. They may even offer a free subscription to a credit-monitoring or identity-theft prevention service (perhaps one they have a financial stake in), as Equifax did. But that just means the consumer is now even more dependent on outside services to spot potential identity thieves. Meanwhile, the original offending company might offer the customer a chance to tinker with or download a copy of the data it holds, but it’s not as if Equifax is about to delete its hundreds of millions of consumer profiles.
Some of Hartzog’s most evocative thinking comes on the issue of obscurity. Hartzog rejects the notion that information counts as public, with no expectation of privacy attached, simply because it exists somewhere — anywhere — on the internet. Instead, we — and the informational records of our lives — pass through various degrees of obscurity. As Hartzog explains: “Obscurity is when information or people are hard to find or unlikely to be found. In other words, there’s a high cost to finding it. We live most of our lives in obscurity. When you go out to eat at lunch and you gossip with your coworker about your boss, that conversation is obscure. No one’s probably going to hear it. When you go to the drug store or you drive to your oncologist’s office, that’s probably not going to be broadcast in Times Square. It’s obscure, unlikely ever to be found. We live our lives in these zones of obscurity and rely on it to shape our risk calculus every single day.”
“Empirical research demonstrates that internet users rely on obscurity perhaps more than anything else to protect their privacy,” Hartzog once wrote. We rely, as he said, on the knowledge that the things we do in public are not being recorded or disseminated, that they are as ephemeral as the air. (That’s why, for instance, this summer’s “plane bae” viral story felt like such a violation for the people whose airborne courtship was being catalogued for all to see.) Because much of what we do is only partially obscure, many privacy scandals center on information that’s already out there in some form but then gets “discovered” and amplified.
When we lack obscurity, a dangerous feeling of exposure takes hold. “The oppressive power of having to watch your every single move is really one of the reasons that surveillance is so dangerous,” Hartzog said. In practice, “obscurity is this rich valuable resource,” one that greases the wheels of social life, giving people confidence to move through the world unhindered.
That brings us to facial recognition, which Hartzog calls “a uniquely dangerous technology” and “an obscurity-eviscerating tool.” Whereas tracking people’s movements in public was once difficult and expensive, it can now potentially be done cheaply, easily, and across entire populations.
“For most of mankind’s existence, we’ve been allowed to go to and fro in relative obscurity. Facial recognition threatens that. It threatens to be a system whereby we can no longer go anywhere in public without being identified. And that to me is a terrifying prospect. Now a lot of people push back and say we shouldn’t ban specific technologies, and generally speaking I agree.” But facial recognition is different by an order of magnitude, Hartzog argued. “Our faces are so connected to our identity. And it’s so difficult to hide our faces.”
Hartzog sees some rays of light in the GDPR’s emphasis on data protection by design. He likes DuckDuckGo and some of the privacy-centric design he sees emerging from tech giants like Google, Microsoft, and Apple. Brad Smith, the President and Chief Legal Officer of Microsoft, recently published a blog post about the need for facial-recognition regulation, which was a heartening sign that top executives are taking this stuff seriously.
But Hartzog’s optimism shouldn’t, well, obscure his pointed critiques. In that vein, he warns of “the astonishingly poor security” of Internet of Things devices, which he’s previously called “a runaway train headed for catastrophe.”
“The more and more we jam chips into things that affect our everyday lives,” Hartzog said, “the more and more the specter of physical harm starts to loom.”
And then there are social-credit systems, which are being pioneered in China but may soon find their way stateside, and the mass collection of genetic data by companies like 23andMe and Ancestry.com, which seem to have no compunction about making deals to sell that data to big pharma.
“I anticipate that between credit systems and genetic privacy we’re going to see some major incidents in the coming years,” Hartzog said. “We’re running into some really difficult problems in a way that I don’t think regulators have fully thought through.”
Jacob Silverman is the author of Terms of Service: Social Media and the Price of Constant Connection, which is published by HarperCollins. His website is www.jacobsilverman.com.
Story published on Nov 7, 2018.