The Invisible Eyes
January 22, 2015
Suppose there's been a kidnapping in your neighborhood, and the police, trying to get to the bottom of the mystery, have rounded up four people from the area, one of whom they suspect may be the kidnapper. All four of these men and women own smartphones and laptops, and as with many of us, these devices hold a great deal of private information about them: photos and videos, browsing histories, social media accounts, credit card information, privately sent messages, location histories ... the list goes on. Because of the sheer number of things we can do with our pocket-sized computers, and because many of us carry them literally everywhere we go, our smartphones are the most personal devices we've ever owned -- and equally vast reservoirs of information that could serve as evidence in criminal trials. So when these devices hold nearly unlimited pockets of private information, and when a single piece of that information could save a life or lead to the capture of a criminal, what do we do? Do we collectively sacrifice our innermost secrets and privacy for the greater goal of security? Or do we conclude that privacy trumps the potential risks?
Many tech corporations have been engaged in this conversation with government intelligence agencies for a while, as I've briefly discussed before. As the risk of security breaches and infrastructure-harming hacks increases, these companies naturally take action to prevent further violations. Their responses range from increased investment in security research, as Google has done with Project Zero, to simply encrypting or password-protecting our smartphones, making it harder for a potential thief to access the private information on them. Recently, prominent players in the tech landscape like Apple and Google have taken another big step toward security by making it physically impossible for any third party, including the government, to access information on a locked smartphone without its passcode. This means, for example, that no agency can ask Apple or Google to unlock a seized smartphone so it can continue an investigation into a murder case. It also means that if the government wanted access to the private information on your devices, it couldn't use official search warrants to obtain that information through the companies -- the companies would be physically unable to help, because of the strong security measures built into the devices they produce*. The FBI has in the past described these practically unbreachable security measures as "allow[ing] people to place themselves beyond the law", continuing, "I'd hate to have people look at me and say, 'Well how come you can't save this kid?' 'How come you can't do this thing?'"
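The reason the companies are "physically unable" to help comes down to how the encryption key is made: it's derived from the user's passcode itself, so without the passcode there is simply nothing for Apple or Google to hand over. Here's a minimal sketch of passcode-based key derivation using Python's standard library -- the salt, iteration count, and function names here are illustrative assumptions on my part, not either company's actual scheme:

```python
import hashlib

def derive_key(passcode: str, salt: bytes) -> bytes:
    """Stretch a short passcode into a 256-bit encryption key.

    Uses PBKDF2-HMAC-SHA256; the iteration count and salt are
    illustrative, not a real device's parameters.
    """
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

# The salt is stored on the device; it's random but not secret.
salt = b"per-device-random-salt"

key_right = derive_key("1234", salt)
key_wrong = derive_key("1235", salt)

# Even a one-digit difference in the passcode yields a completely
# unrelated key, so the key cannot be reconstructed -- by the
# manufacturer or anyone else -- without the passcode itself.
assert key_right != key_wrong
assert len(key_right) == 32  # 256 bits
```

Because the derivation is deliberately slow (many thousands of hash iterations) and the key never leaves the device, the only practical way in is the passcode -- which is exactly the property the FBI is objecting to.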
That raises a conflict of values between privacy rights and assisting the enforcement of the law -- between asserting our right to personal information and potentially saving hundreds of lives. Barely a week ago, the Prime Minister of the United Kingdom, David Cameron, also voiced his opinion on encryption and enhanced security in products, asking, "[do] we want to allow a means of communication between two people which even in extremis with a signed warrant from the home secretary personally that we cannot read? ...My answer to that question is no, we must not. The first duty of any government is to keep our country and our people safe." He does have a point -- in extreme cases involving terrorist attacks, murders, or kidnappings, information could be critical. But when we take a step back and examine what this kind of proposal is really asking for, it calls for two big changes. The first is granting the government the authority to oversee every single piece of private or public information and communication that takes place within its jurisdiction. That would let the government and its law enforcement agencies look for possible evidence by searching through data intimate to our personal lives. Moreover, this kind of power would only be reasonable in a perfect world where no law enforcement body ever abuses its power for less just, more personal gains -- an assumption anyone with a reasonable sense of reality would reject at first glance. The second problem is that encryption -- the practice of making communication and stored data harder for third parties to read -- would essentially be banned. That doesn't just spell the end of secure e-mail and text messages.
Virtually every website on the top-100 list of 2014 uses some form of secure communication, and banning these security measures so the government could intercept their messages would, for all intents and purposes, wipe hundreds of top websites off the face of the Earth, including the entirety of Google, Facebook, YouTube, Netflix, Twitter, Weibo, and Baidu -- a selection that by itself accounts for well over three-fourths of the world's internet usage.
But at a fundamental level, the discussion of digital privacy isn't only about keeping online communication viable or the impracticality of banning the Internet itself. It's a basic discussion about our rights to privacy and free speech. It's a question of whether our individual privacy is worth more than assisting in solving crimes. When we examine the world we live in -- immersed in technology and reliant on the safe flow of thousands of terabytes of personal information each second -- opening the floodgates to that information is not only impractical and unsafe, but technically infeasible. Furthermore, I think the way we currently live requires some level of impenetrable security. We can't simply abandon the trust we place in, say, our e-mail providers or our social media sites to protect our privacy. The reason so many websites and services rely on encryption is that securely transporting information about you, or for you, is increasingly important. As one digital forensics researcher says:
"For the sake of privacy and overall security, the only logical solution is to make products as secure as possible, and let good detective work do the crime solving, rather than an easy button." - Jonathan Zdziarski, security researcherAnd I agree wholeheartedly with that. as ideal as it would be to make sure that our private information is used if and only if they're absolutely required to solve an important case, the reality is far more vague, with too many uncertainties, to allow that kind of trust to be present with such sensitive information. In other words, these security measures are there not to annoy detectives and hide crimes, but because they're genuinely necessary and beneficial to the average user in protecting their rights to privacy. And as long as that's true**, we shouldn't need to sacrifice our safety for surveillance. There are two images of the world in clash here. One is a world where all information is free to access, and all information is open for legal, justified uses as well as malicious abuse and harassment. In this world, there is no concept of privacy. The other is a world where, although some information may be kept private, crimes are still being solved with technology. While we discuss the issue of digital privacy, I think we too frequently miss the biggest truth -- that the whole reason privacy exists is for our safety, and it's simply nonsensical to take away that safety in the name of providing more (potential) protection.
* Unfortunately, this still leaves some security vulnerabilities and loopholes in software for investigators to possibly take advantage of. But in terms of using official search warrants through the companies, that previously available route to private data is now off-limits.
** More commonly known as eternity.