Jonathan Zittrain is a professor of law and computer science at Harvard who examines issues of privacy and fairness in the digital world. He is co-founder of Harvard’s Berkman Center for Internet & Society and of Chilling Effects, a clearinghouse for cease-and-desist letters ordering the takedown of certain information on the Web. Author of “The Future of the Internet—and How to Stop It,” published in 2008, Zittrain recently addressed the current class of Nieman Fellows. Excerpts from the talk and the question-and-answer session that followed:
Will the right to be forgotten, now established in Europe, ever come to the United States?
The right to be forgotten is probably ill-named given that, as you know, it's a right to be de-linked [from certain results which may reflect unfavorably on a person] in a search engine—to de-link your name from the result even as the result may be legally entitled to exist there. In the canonical case of the Spanish fellow who had a foreclosure of sorts in a newspaper, if you search for a foreclosure in that state, the newspaper might produce a hit with his name. [You will not find the foreclosure if you are simply searching on his name.]
There's a certain elegance to that solution, because it's saying it shouldn't follow you around for the rest of your life. Google is so singular in Europe—90 percent market share—that those 10 results define you in a way that unduly empowers the roulette wheel of Google. My own reaction to that was that Google should have gotten out ahead of this and thought of making the first page of a search on a name kind of like about.me, allowing explicit and labeled curation by whoever owns the name.
Now we have to solve the John Smith problem—there are too many names to disambiguate—but you could let people curate a little bit and then have a learn-more button that takes you to organic search. If they had done that, it is entirely possible that—although it wouldn't have satisfied everybody—a big chunk of dissatisfied people would have been OK with it, knowing as we do that very few people go to page two of the results. Allowing somebody to take ownership of what is becoming the first thing you look up when you look for a person might have been a neat solution.
In the U.S. there is a famous vegetable defamation law in Texas. You cannot defame agricultural businesses, because of some exposés that were done on factory farming. Now there are new causes of action against those who do exposés. If there is bad stuff about Texas vegetables accessible through a Google search, would Texas demand that Google change it?
The First Amendment will be all over it if a state or federal government orders it. For a “right to be forgotten,” it'll be a tough row to hoe. Google is gingerly trying to make it so that when you have been granted the right to be forgotten, it only applies within the E.U.-specific portals for Google. It has made the Europeans unhappy, and this is a battle not yet fought. They are pushing for this to be a worldwide ban.
I wrote a paper, now 15 years ago, called, "Be Careful What You Ask For: Reconciling a Global Internet and Local Law," saying one way to reconcile this jurisdictional issue would be to have what you see—whether it's Google.com or otherwise—be shaped by the jurisdiction you are sitting in. It’s a great example of an uncontroversial application of standard international law—the sovereign gets to say what happens on its turf. But it’s bad if you think of information as different from everything else and deserving maybe of protections, especially in places that don't offer them.
It seems like Facebook daily is growing in the number of readers it sends news sites and also the amount of control it has over who gets to see news stories. I understand they're experimenting with sending readers not to an actual news website, but to a page within Facebook. It just seems like news outlets by the day are losing control over their readers and over their content. How should news organizations handle this?
Facebook can make the case that it will drive traffic. They have revenue share. It’s a win-win. But the entire ecosystem now starts to tilt to one vendor. What's the solution? Likely some intervention: some combination of the remaining news sites that aren't on life support getting together, in the form of a monopsony, to roll their own portal that will be on terms that are equally good. They will have—built into the compact that makes the portal happen—agreements about baiting and switching not being allowed, whatever that might mean.
It would help if the sites could speak with only one or two voices, rather than being plucked off as customers one at a time. Maybe there's a way to offer Facebook something that they will go for that gets everybody bought in—but that has those buying in doing so as peers of a sort, rather than customers. That's the strategic way I would try to analyze it.
China and Russia have said, "We want the United Nations to play a big role in governance of the Internet." The issue is that the United States is running the Internet, and that's not the way it's supposed to be. Then there's other people who say, "No, the Internet should be running the way it is." What do you think?
I think about this roughly similarly to the question, "Do we want Google running things? Why should they run things?” It was like, “Well, do you want those with the most money to run things by astroturfing?” But we don't want that, so who should run things? We don't trust anybody to run anything.
In Internet governance that has kind of been the problem, too. Where the rap on something like ICANN [Internet Corporation for Assigned Names and Numbers] has been, “Why should a California nonprofit in some privileged privity with the U.S. Department of Commerce be running these important things?”
We should have the governments of the world doing it. That's the U.N.; that's the ITU [International Telecommunication Union]; that's the WCIT [World Conference on International Telecommunications]. What would likely happen if you did that? What is the agenda of governments versus other stakeholders? The agenda may turn out to be surveillance, censorship, or more broadly, empowering the sovereign to say what should happen on its territory regarding the movement of bits.
When you look at what that would really mean, and don't like what you see, it may be that something like ICANN is better. I tend to be a little bit on Team ICANN in that debate, because of worry about what the Russians and the Chinese and others who do not embrace the rule of law would try to do if they had more of the levers of Internet governance in their hands.
Now we are in this era that's progressed so that the government can look at your texts—and Google or Verizon will allow it—or your Gmail or e-mail, for whatever reason they deem appropriate. What are the ways to guard against that? Or has it just gone Wild West, with not enough movement among organizers or people to push back against that?
That makes me think of roughly two overall ways of trying to figure out what the balance—if a balance it is—is going to be between state and citizen in the realm of surveillance. One way of doing it is what we call “rule of law,” and there is a grand tradition of trying to structure the legal system so that the level of intrusion is within the right realm. It doesn't unduly empower the state, but it allows crime to be solved and vital information to be collected. In law enforcement, we had a certain détente around it, which would maybe date from roughly the Warren Court. We had certain ideas about how to preserve rights against self-incrimination that were still themselves balances. So, for instance—this is thanks to the Supreme Court of the Warren era—the idea of being read your rights so you actually know your rights at the time where they might be most helpful to you.
Apart from rule of law, there's “cat and mouse.” Cat and mouse says, "I do not trust the government. You talk about rule of law and balancing stuff, but at the end of the day, if they want to listen in to something they're going to do it.” And that's, in a way, what Apple is saying with the protection for your iPhone, should it be seized—though it's all completely vulnerable in the cloud, but put that aside.
You don't want to make search and surveillance so easy that the government need expend almost no resources, break no sweat to do it. Abusing it will become too tempting. On the other hand, there will be times when you want to solve that crime, and you're willing to put your chips on it. Should we simulate the sweat needed when the technology removes it?
To do a phone tap, it used to be that you'd have to sweat it out in the truck and put the alligator clips on the phone pole. It seems weird to have them go through the Kabuki theater of attaching them to the phone pole at which point, "OK, now you can digitally access what you need."
That was a certain—at the risk of status quo-ism—level setting that felt right. Embedding that in the rule of law would make us less inclined to look for crypto-anarchist solutions, which are probably better thought of as a last resort.
In light of what we now know about the way that the U.S. has used Western technology for surveillance, does the West still hold the moral high ground? Who sets these guidelines if democracies and other tyrannical regimes are using technologies in the same way?
As John Perry Barlow said in his 1996 "A Declaration of the Independence of Cyberspace," "You cannot separate the air that chokes from the air upon which wings beat." As Larry Lessig described it, maybe more tweetably, there's “horizontal portability” to technology, and we should be aware of that. If the iPhone is being built so that with the right order to Apple it can be turned on and turned into a microphone like the OnStar system, great, now China's a big market. Do we want that? Should Apple bear some responsibility for introducing a device like that into the market? It is absolutely up to the company [to set guidelines]. There has been some movement to force companies not to facilitate censorship and surveillance. It's gone nowhere.
Of course now, post-Snowden, the U.S. is totally of two minds, which I guess it's entitled to be. Now you can either say that's hypocrisy, or you could say, "Hello, the use of any form of force may be justified when done by one country that operates according to the rule of law. That identical use of force is wrong when done by a government for other purposes, like silencing the dissidents, or without the right process."
That is, maybe, not hypocritical. Your question is really meant to say, “How horizontally portable are these technologies? If they are innately horizontally portable, do we have to decide that what's helpful for catching kidnappers is going to be helpful for catching dissidents? How do we balance the interests of the dissident there against the interest of the kidnapping victim here? Tough question. Who should do it? The company, the government?” I don't know… I agree that it's a balance that somebody ought to be taking responsibility for.
Academics ought to be writing articles about why it should be set a certain way—that are trying to persuade Apple to do something. Or, trying to see if we can regionally zone the technology the way that we regionally zone Google into its different pockets.
I just wrote an article that came out in this month's Scientific American about the case for kill switches in military weaponry. ISIS seizes three divisions' worth of brand-new, sophisticated American military hardware—Abrams tanks, stuff like that—from the Iraqi army. The hypothetical is: should the Iraqi government wish that the American arms merchants and government had given them a kill switch like OnStar? If it's good enough to turn off an iPhone if it gets stolen, hell, you should be able to turn off that tank before the turret is turned against you.
Make the tank so that it expires every six months without a renewal key. The renewal key gets sent out—either by overhead satellite, like GPS, or by a PFC [private first class] who can punch it in up to a week early, so you're not like, "Hurry up. It's about to stop."
Not just the Iraqi army, though. Perhaps the U.S. should publicly retain a kill switch so that if the Iraqi army starts engaging in what appear to be atrocities we can say, “You know what? No more tank for you.” We're throwing these weapons around the world. Why shouldn't we have to bear responsibility for how they're used? Especially when, in a new, networked world, we could stop them from being misused.
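The lease scheme he describes—hardware that goes inert unless a renewal key arrives, with the key acceptable up to a week before expiry—can be sketched in a few lines. This is purely an editor's illustration of the idea; the `Tank` class, the six-month lease, and the one-week early window are the talk's hypothetical parameters, not any real system:

```python
from datetime import date, timedelta

LEASE = timedelta(days=182)       # roughly the six-month lease he posits
EARLY_WINDOW = timedelta(days=7)  # a key may be punched in up to a week early

class Tank:
    """Illustrative expiring-lease kill switch, per the hypothetical."""

    def __init__(self, today: date):
        self.expires = today + LEASE

    def renew(self, key_valid: bool, today: date) -> bool:
        # A valid renewal key is accepted only inside the early window
        # (or after expiry); earlier attempts are refused.
        if key_valid and today >= self.expires - EARLY_WINDOW:
            self.expires = today + LEASE
            return True
        return False

    def operational(self, today: date) -> bool:
        # Without a timely renewal, the hardware simply stops working.
        return today < self.expires
```

The point of the early window is exactly the one he makes: the operator is never racing a deadline, but a seized tank whose captors cannot obtain a key goes dark within months.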