The Right to be Forgotten

Recently, I read “To Save Everything, Click Here: The Folly of Technological Solutionism” by Evgeny Morozov. I read it because I disagreed with the premise and hoped to learn something from the experience. In it, the author argues that technologists (i.e., people who spend their time improving or advocating technology), particularly from Silicon Valley, are overconfident in their belief that they can solve the world’s problems with technology. Pick any speech by Google chairman Eric Schmidt to see an example of the type of thinking that the author rails against.

One argument from the book has stuck with me. It goes like this: there is a popular notion that the internet is not an invention or a technology, but an inevitable ideal that must be protected from all outside influence. Perhaps this visionary stance was useful in building the internet in the early days, but now that we have the internet, it's less useful.

Most recently, for example, this ideal underlies many of the objections in the debate about the right to be forgotten. The European Court of Justice recently ruled that European citizens have the right to have damaging links about them removed from search engine results. People who believe in the idealised internet warn of the dangers of regulating the internet in this way. Here is a quote from Jennifer Granick (Stanford University) in The New Yorker (“The Solace of Oblivion”, Sept 29, 2014 issue):

[This legislation] marks the beginning of the end of the global Internet, where everyone has access to the same information, and the beginning of an Internet where there are national networks, where decisions by governments dictate which information people get access to. The Internet as a whole is being Balkanized, and Europeans are going to have a very different access to information than we have.

This warning appears designed to send shivers up the spine, to paint a dystopian future where, *gasp*, Europeans see different information from everyone else. Jonathan Zittrain (Harvard Law School) makes the danger explicit in the same article:

"[... ] what happens if the Brazilians come along and say, ‘We want only search results that are consistent with our laws’? It becomes a contest about who can exert the most muscle on Google.” Search companies might decide to tailor their search results in order to offend the fewest countries, limiting all searches according to the rules of the most restrictive country. As Zittrain put it, “Then the convoy will move only as fast as the slowest ship.”

This quote makes explicit the fear of government control and of technology companies’ harmful reactions to such control.

Underlying both quotes is the assumption that the internet is sacred: its destiny is to be global and pure. But, to repeat, the internet is a technology; it is built and maintained by us because we find it useful. Replace “internet” with “car” in the above quotes, and consider again the first quote by Jennifer Granick. Does the experience of driving have to be identical wherever you are in the world? Does the fact that the speed limit varies from country to country, or that you can turn right on a red light in some US states and not in others, keep anybody up at night? Road laws, manufacturing safety laws, and drink-driving laws are highly Balkanised across states and countries, but no-one is worrying that some authoritarian government of the future is going to take our cars away from us (except maybe some Tea Party activists).

Or consider the second quote, by Jonathan Zittrain. Do car manufacturers have a right to demand to sell identical cars globally? Is car manufacturing technology held back by the country with the most stringent safety laws? Of course not. But even if it were, it wouldn’t be the only consideration on the table. We don’t feel beholden to any visions that Henry Ford may have had a century ago about a future with open roads and rapid transit, certainly not when it comes to preventing people from dying horrifically on the roads.

Yet, when it comes to the internet, a lot of people believe that an unregulated internet trumps everything, including the grief of parents struggling to get over their daughter’s death while pictures of her dead body circulate online, or the struggles of a person falsely accused of a crime who has to live with those accusations every time someone searches their name. A balanced viewpoint would weigh freedom of speech against the harm such speech causes in broad classes of cases (e.g., criminal accusations, breaches of privacy) and make a decision. Different countries have different priorities, which will lead to different regulations on internet freedoms, and that’s OK. If a government is authoritarian, then it has already put huge restrictions on the local internet (e.g., China, North Korea). You can add that to the list of reasons why it’s unpleasant to live in authoritarian countries.

At this point, the person arguing for a global and pure internet retreats to practicalities. They cite two main practical barriers to the right to be forgotten: 1) it’s impossible to exert complete control over information - a determined person will find the restricted information anyway; 2) it’s too labour-intensive to enact the right to be forgotten. Let’s start with the first barrier. I agree that a demand for perfection is misguided, and I don’t believe anyone is making such a demand. It’s possible to take reasonable and simple steps to allow people to be forgotten that get you 95% of the way there. In the same way that a determinedly bad driver can still kill people, a determinedly assiduous employer will still be able to dig up information about a potential employee. But this was always the case, even before the internet.

The second practical barrier is the more important one, I feel, and it is a manifestation of the fact that technology enables mass automation (e.g., indexing websites) while the judgement that society requires of Google (i.e., “is this index removal request valid?”) cannot currently be automated. While this challenge is substantial, it’s ironic that the same technologists (me included) who claim they can solve the world’s societal problems throw their hands up in despair when asked to automate such judgement.
