Interview: Matthew Prince with Eric Goldman

Eric Goldman: Hello, everyone. My name is Eric Goldman. I’m a professor of law at Santa Clara University School of Law, and today I’m talking with Matthew Prince, who is the CEO of Cloudflare, and I want to say thank you to Matthew for joining us today.

Matthew Prince: Thanks for having me, Eric.

Goldman: What we’re going to do is run through some scripted questions with maybe a little bit of back-and-forth. I want to start with a softball. Why don’t you just tell me what Cloudflare is and why you founded it?

Prince: Sure. Cloudflare’s mission, to start out with, is to help build a better Internet, which is why, when you approached us for this project, it seemed right up the alley of things that we think about all the time. When we started the company, Michelle Zatlyn, Lee Holloway and I, back in 2009, we realized that, first of all, the world was moving from on-premises hardware and boxes that you would buy to services that were delivered in the cloud. And all of the things that you had turned to in order to make sure that whatever you were doing online was secure no longer had a place to live: once everything moved to the cloud, there was nowhere to put a box or install software, necessarily, to help protect you.

So, what we decided to do was build out a giant network, which would allow our customers, who are everything from individual developers or small businesses to some of the largest Internet properties and companies online, to use our network in order to be three things. First and foremost, secure, by fixing some of the underlying bugs of the Internet. Secondly, reliable, ensuring that if somebody drops an anchor in the Mediterranean and it cuts a piece of fiber-optic cable, your website doesn’t go offline. Basically, we can route around all of those problems and make sure that you stay reliable and stable all the time. Lastly, fast. We wanted to make sure that you are as fast as possible.

Over time, there are a couple of other things that we’ve added to that, two other pillars. One is privacy. It’s turned out that as we have thought about our business, we realized that privacy is a key part of both security and what people are looking for in the Internet. And then also efficiency, which really translates to: How do we make sure that the Internet is available to everyone, everywhere in the world? That’s whether you’re a person who’s creating content and wants to reach a global audience, or you’re part of that global audience and you’re somewhere in Sub-Saharan Africa or Latin America, or somewhere that you might not have the best Internet access. We want to do what we can to make sure that the Internet is available to everyone.

Goldman: All right. Well, super. So, when I think about the Internet ecosystem, sometimes we talk about layers in a telecom stack. Let me talk that through a little bit. We think about a hierarchy of different activities. There are people who are sometimes called “edge providers,” the websites or online services that face the end user, and then there are hosts for those services that actually provide the servers and the cloud that are necessary for those edge providers to communicate with their users, and then there are the Internet access providers that connect the servers to the rest of the Internet, and then there’s the physical layer, the actual hardware that moves all that data that forms the Internet. Can you tell me a little bit about where Cloudflare fits into that telecom stack, or how you think about it in a different way?

Prince: Yeah. I tend to think about it from the perspective of: Where is the ultimate responsibility for the content? So, I think it maps somewhat to what you said, but I have a slightly different take on it. I think it all starts with an individual. At some level, whenever there’s content, which is created online, there’s an individual that’s creating that content. I guess in some future it might be an AI or something else, but today it’s largely the individual, and, in a perfect world, individuals would have responsibility for that content and responsibility for the laws and norms and everything else that exists around the world.

Under that individual would be the platform that is helping promote, organize and distribute the content, so it could be a Facebook or a Twitter or a WordPress that’s actually providing that infrastructure that is distributing and organizing and making available the content that’s online. Below that, and this is where it maps to what you said, is––I think of it as being the hosting provider. In some cases, like a Facebook, they are both the host and the platform itself, but in a lot of other cases you might have two different organizations that would be responsible for that. Below that, I would think of it as being the networks that connect everything together, and then below that would be some of the foundational and functionality Internet protocols and technologies that have to exist to make things work.

What’s interesting about us is I think we would most naturally say that we’re that network layer, but I think that we have certain products that are up in that hosting layer, and we have certain products that are down in that more fundamental Internet layer, so something like domain registration, which feels like it’s a very, very, very foundational aspect to how anything works. Ultimately, the registrar of your domain probably should be at the very bottom of the stack of people who are responsible for that content.

So, as we think about what our role is online, we think about it very much on a product-by-product basis because we have certain products where we’re actually the host, and I think that in those cases we have more responsibility for the content that is on those products. We have other products where we’re just the domain registrar or we’re just the DNS provider, and I think in those cases we’re farther down the stack, and I would be reticent to be taking an editorial or any kind of content control role in that. But for most of our products, I think we sit squarely in that network provider layer.

That’s my taxonomy of how I think about the world and, at one level, the way I think about it is that it’s almost like Jenga blocks stacked on top of each other, where you want to do the least harm possible, or you want to have the narrowest impact possible as you’re making content decisions. If at some point you pull out the domain registrar or you pull out the network, then everything above that is tricky. We have lots of platforms or hosts that use us as customers, so we’re always very reticent to be making policy decisions on their behalf that would affect those hosts or those platforms before they have a chance to take action themselves. So, I think, in an ideal world, you start at the top and work your way down, but again, that’s how we frame the conversation. It’s how we think through some of these policy issues.

Goldman: Actually, that’s a helpful framing for the rest of our discussion, so I’m happy to work with it. When we think about something like Section 230, it makes it very clear that the individual is the person who is accountable for their actions, and then everyone else below that individual in your hierarchy is absolved of liability for that individual’s conduct or content. But what we’re also seeing is a lot of pressure to push liability and editorial decisions deeper and deeper into the taxonomy that you described. I guess one of the unanswered questions of our day is: How deep should it go? Should it stop with the individual, with everyone else absolved, or do we say, “Maybe for those closest to the individual, even if they’re not the individual themselves, we’re going to push responsibility further onto them”?

Prince: I would love there to be a bright line. I’m not sure that there always is a bright line. I think, ideally, the individuals would take that responsibility, but there are some bad individuals in the world who do illegal things using platforms and, in that case, the platform probably has a responsibility to make sure that they are doing what they can to deal with that content. Whether or not that’s what the law says now in the United States or around the world, if you think about it from first principles, you want to have as narrow an impact as possible. But at the same time, when we all agree that there is some societal harm, and there are some things that we could totally agree are just terrible societal harms, you probably do want some layer lower in the stack to take that responsibility if the individual doesn’t.

But I think you work down that process in order. You don’t want to skip down to the lower layers. If all of a sudden we effectively yank the rug out from under one of our responsible platform providers or hosting providers, then their first phone call is, “Well, how can I trust you on this?” In fact, when we have made decisions in the past to shut down particular customers, one of the very first things that we get are very legitimate, responsible, large platforms calling us and saying, “Well, how can we be sure that you’re not going to do the same thing to us someday?”

I think that’s actually a really reasonable question to ask, and for us it’s pretty straightforward. There’s a big difference between an entire hosting provider that is dedicated to illegal content and a totally responsible host, or between a platform that takes no responsibility and actually works hard to undercut legitimate law enforcement actions and those that are, again, part of the larger social contract. So, as we think about it, it starts with the individual, but there will be some times––and actually, probably pretty frequently––that bad individuals will need to get policed by the platforms that they use.

It will become much rarer that the underlying host will have to police the platforms, maybe much rarer still that a provider like Cloudflare will have to police the host, and then hopefully we don’t get to a point, because I think there are some really significant consequences, where you deny fundamental Internet technologies for anything but the most egregious content. It should almost never get to the point where you say, “DNS is cut off for everyone,” or, “Domain registration is cut off for everyone.” That seems like the foundation on which all of the rest of the Internet is built, and it seems very dangerous if we start tinkering with that on a policy or editorial basis.

Goldman: It reminds me of the old SOPA battles, which are coming up on their decade anniversary, where, in fact, Congress was just a whisper or hair away from imposing liability at the domain and registrar level.

Prince: It would’ve changed, foundationally, how the Internet works and, again, I think that the challenge is that if you deny someone the ability to register a domain, that is a global impact, and one of the biggest realizations after starting Cloudflare, one that has been a real surprise in retrospect, is how diverse the policies are around the world. When we launched the company back in 2010, the day we launched we had customers in ten countries around the world. By the end of the first month, we had customers in literally every country on Earth, and we only had eight employees.

So, the challenges of figuring out how you think through the different policy decisions around the world are difficult. The more you can confine that to something closer to a local provider, the better: ideally the individual, but then also maybe whatever platform is servicing that local community, or the hosting provider that’s servicing a slightly larger community. By the time you start to get to us, you start to have impacts that are much broader, and I think that’s contrary to a lot of how the Internet was formed in the beginning. Then when you get down to foundational technologies like DNS or domain registration, those tend to only be able to be applied on a global basis, and it’s actually a fairly small list of content that on a global basis we can all agree should just disappear from the Internet.

Goldman: Can we go back to that almost romanticized scenario where you’ve got eight employees servicing a truly global customer base? You said that you didn’t anticipate, going in, what the policy consequences of that would be. Is there something you would do differently now, knowing that you’re going to have to navigate this morass of divergent or heterogeneous laws?

Prince: I mean, there are lots of things that I would’ve done differently in the beginning of Cloudflare, although I think that one of the things where we were better prepared than most was on understanding the complications and consequences of these really hard policy issues. I remember about fourteen months after we launched, the FBI showed up at our office with two national security letters, and you’re not allowed to talk about them, which is arguably a violation of the First Amendment, and there’s no check and balance. They’re entirely administered by the executive branch, which is arguably a due process violation, and I think that had I not. . . I often wonder whether the three years I spent in law school were worthwhile, but then I remember moments like that where I had either the courage or gall or hubris to go to our board and say, “You know what we’re going to do? We’re going to sue the federal government.”

So, I actually think that at some level the question that we would always ask ourselves when we were eight people above a nail salon in Palo Alto, California, was: If Cloudflare ran the entire Internet, what would be the right policy decision? I think that helped us from a very early time make some very difficult policy decisions and make them optimized for what was the long-term best interest of, really, the Internet as a whole. So, there are a ton of things I would’ve done differently on the technical side and go-to-market and all kinds of other things, but actually, I’m really proud of how thoughtful and long-term-oriented we’ve been on the policy side, and it’s part of what I still enjoy most about my job and about working at the company.

Goldman: Can we go back to this issue about bad individuals and what should be done about them? We could focus on your DDoS service. I think that’s the purest way the question arises. When I think about your DDoS service, I think about a service that’s designed to protect good actors from bad actors. But what if the person who Cloudflare is protecting is themself a bad actor? I guess there’s a second question there: How would Cloudflare or anyone else know that they’re a bad actor? But could you talk me through this dilemma I’m having a little bit, between protecting bad actors from other bad actors and protecting good actors from bad actors?

Prince: Yeah. I mean, there are certain cases where it’s pretty easy. So, there are certain entities that, under United States sanctions rules and requirements, we can’t do business with, and so, if somebody tries to sign up and they are a terrorist organization or have been designated by the United States government as a sanctioned party, then we don’t provide our services to them, and that’s pretty straightforward. It gets sometimes a little complicated if they hide their identity or other things, but I think that’s one of those places where we have, over time, gotten much more sophisticated about identifying that and figuring it out.

I think the harder time comes when it is an organization that is doing something that seems bad. So, an example in the DDoS space: a number of the services that you can actually hire to launch DDoS attacks deal with rivals by launching DDoS attacks against each other. So, they would all sign up for the free version of Cloudflare in order to make sure that they were protected from each other knocking each other out. I remember a security journalist, a guy named Brian Krebs, who I actually really, really admire and think highly of, but we have gotten into knock-down, drag-out fights where he basically advocates a Mad Max kind of future of the world, which is that the way you could solve the DDoS problem is that you would basically just drop protection for all the DDoS services, and they would all fight against each other and kill each other.

I think that there are some problems. If you study Nash equilibriums, what you might end up with is much, much stronger, more menacing, more evil DDoS services. But that was one of those early questions, and I remember it came up for us in early 2011 when LulzSec, the hacker group, signed up for us. Again, Cloudflare has a free version of our service, and we don’t screen users except against sanctions lists before they sign up. Back in 2011, we weren’t particularly sophisticated about that. But they signed up, and I remember all of a sudden you had thousands of people who were screaming at us, saying, “How can you possibly have this hacker group that is using our services?”

We often find ourselves on our back foot, where we’ve never even heard of the hacker group, and yet people on Twitter or on email or whatever are yelling at us about this, that, or the other. So, the challenge, and again, this isn’t an excuse, but one of the challenges, I think, is that when you’re eight people, how can you be an expert on not only every hacker group in the world, but, when popvote.hk signs up, how do you know that’s the Hong Kong democracy movement? Then, in that case, when you get an order from the Chinese government to take it down, what do you do? When Free Catalonia signs up, how do you have the expertise to know that that’s the Catalonia independence site, and all of the political ramifications that come behind that? When an independent journalist in Ukraine signs up, how can you know that they’re at the center of the conflict in Crimea?

Again, today, even though we’re still a relatively small organization, we have a lot more of that sophistication and expertise, but every one of those questions that I just outlined as hypotheticals came up for us when we had fewer than thirty employees. So, I think it is hard to wrap your head around how quickly a global technology company can take off, almost instantaneously, and leaning on it to be not only a technology expert but a policy expert, especially early in its history, is a pretty tall order and a pretty big ask.

If you do ask that, then that’s great for us. It means that there will be no disruptive new Cloudflare-like service that comes along, because we now have the resources to have the policy people and the government relations team to handle those things, and ahead of us will be the Facebooks and Googles and Microsofts of the world. So, I think one of the challenges here is: How do you start to think about the right responsibilities to put in place for larger organizations, while still allowing a new entrant in the marketplace to be able to compete? That’s above my pay grade, so hopefully you and the people watching are thinking about it.

Goldman: Well, it’s certainly a question that I think our project has to grapple with because there’s no doubt that’s where the tenor of the conversation is. I want to come back to this navigating-the-politics question in a moment, but if you’ll indulge me, I do want to talk about the Daily Stormer situation. My own personal take is that I thought that the letter you wrote when you pulled the plug on the Daily Stormer was really a great piece of transparency. You were laying bare for us that you were making a decision you didn’t want to make. You told us why you didn’t want to make the decision. You really laid out the whole story behind it, where usually in a situation like that, there’s a single positive narrative, or you say as little as possible and get the hell out of there. So, I thought it was really a helpful artifact in Internet history, to see you lay out your thinking. If you’ll indulge me, I do want to read one piece of it.

You wrote: “Law enforcement, legislators, and courts have the political legitimacy and predictability to make decisions on what content should be restricted. Companies should not.” Can you talk me through a little bit about how the Daily Stormer is an encapsulation of these dynamics that we’ve been discussing, about when you’re dealing with someone who might have gone from being an unknown actor to maybe a bad actor, and when you say, “Okay, they’re no longer eligible for our services. We need to protect against them. They are no longer the ones we should be protecting”?

Prince: Yeah. So, starting with transparency: when we think about what the core values of Cloudflare are, your core values have to be things that would differentiate you from other companies. Of course we want to be a place that’s fair and reasonable, where people can do their best work and are rewarded for the work they do, and we encourage a diverse workplace and all those things, but what are the things I think are different about Cloudflare? I think we are a relentlessly curious organization. We’re always taking on new challenges, always looking at new things. I think we’re a very principled organization. We’ll talk a little bit more about some of that, and I think we’re also just a radically transparent organization, both internally and externally.

It’s interesting because I think one of the biggest mistakes that technology companies make, and there’s almost DNA that goes back probably at least to Fairchild Semiconductor, and maybe even before that, is just relentless secrecy, which is almost pathological at most companies. If you are privileged enough to get to sit in the policy meetings at a Facebook or a Google or an Apple, it’s not like they’re surprised by how hard these issues are. They have very nuanced conversations about these things. What I think they mistakenly do is they don’t actually share why these decisions are hard. They say, “We kicked them off because they violated paragraph 13-G of our terms of service.”

Well, the terms of service aren’t automatically executed. There’s still some discretion from the company behind that, and I think what’s been missing in a lot of this is the conversation about how hard these issues are. If I go back to the Daily Stormer, first of all, the way I’d describe it at one level is that they were a bunch of Neo-Nazis. I think that, more accurately, they were a bunch of Internet trolls. If loving white kittens had been the most offensive thing you could’ve done online, they would’ve loved white kittens. They just literally were looking for whatever the most offensive thing was, in part because they were just attention-seeking, at least it seemed to us.

That doesn’t mean that it wasn’t incredibly damaging. It was incredibly distasteful. It was not an organization we were proud to have using our services. But again, if we thought about it from the perspective of, if Cloudflare ran the entire Internet, should they be on the Internet or not, it felt like that was a pretty tough call for us to be making. I think, over the years, we had continued to see more and more times where some horrible thing would be using us, and we’d get a call from a policymaker or a journalist who would say, “How are you thinking about this, and why aren’t you kicking them off?,” and 95 percent of the time we’d explain, “Hey, here’s where we sit in the stack. We’re different than Facebook or Twitter, and we think about it this way. That doesn’t mean we don’t have responsibility, but we think that it should play out in the following way.”

Ninety-five percent of the time, the policymaker or the journalist would say, “Yeah, that makes total sense,” and if it was a journalist, we’d say, “Will you write an article that says that makes total sense?” And the journalist would say, “‘Company does the right thing’ is not a story. So, we’re not going to write that.” Policymakers would generally be like, “Oh, that’s the right thing, but there’s still this other thing in the corner that I hate, and I want to figure out how to control it.” So, what we saw, especially in Europe, was that there was increasingly, painting with a broad brush, a push for tech companies to do more to control bad content online.

Largely, when those policies were being written, the companies that the policymakers were thinking about were Facebook and Twitter and YouTube, but the policies were starting to creep to a point where we were getting worried that they might create some really undesirable consequences for us. So, all of that was the conversation that we were having on one side, and then, on the other hand, internally, we said, “At some point we’re going to have to kick someone off and then talk about why that’s dangerous.” Then right around the point that we came to that conclusion, the Daily Stormer people just did some of the most repugnant things that you can imagine doing as a human being. If you’re going to fire any customer, firing Neo-Nazi customers is really fun.

So, we fired them, but then, as you said, we wrote very specifically about what the consequences were, and that’s different from what a lot of technology companies do. When journalists and policymakers across the political spectrum called us, we didn’t say, “No comment.” Instead, we engaged, and we talked about it, and I spent the greater part of a year meeting with everyone from the Southern Poverty Law Center to the Cato Institute, to European policymakers, and everyone in between, sharing a little bit about who we were, but then also talking about the various parts of the stack we’re in. I think that there were still plenty of people who disagreed with our general approach, but I also think people appreciated that at least we were transparent about it, that we engaged, that we listened, and that we thought about it.

I think, over time, those sets of conversations have evolved into that hierarchy of individual, platform, host, network, and foundational Internet technology. Again, I think we’ve gone from saying, “We would prefer to never kick anyone off,” to saying, “Listen. We’re going to be a little bit more nuanced about that.” If you have a bad individual, a bad platform, and a bad host, then sometimes it might actually fall to us to take action, but that should happen fairly rarely. And, for better or worse, as there are laws around the world that compel us to take certain actions, how can we make sure that as we take an action to satisfy the rules of China or Russia or India or Brazil or Canada, or whoever it is, the rules of that country don’t extend beyond its borders?

I think that, at the risk of having Immanuel Kant roll over in his grave, if you’re going to use someone as a means to an end, a bunch of Neo-Nazis are pretty good folks to do that with, and had we not taken that action, I don’t think we would have learned as much. It helped shape the more nuanced and technically informed regulation that’s developed since then. I don’t think we’re out of the woods. I think we’re going to have to keep talking about these things. But I feel like we have gotten much more sophisticated in how we think about these policies, in part because we went through that exercise.

Goldman: Yeah. You had mentioned earlier that when you pull the plug on a customer, it rattles the confidence or trust of other customers. Did you have that experience in this case, or did your other customers say, “Finally. Yes, please. More of that”?

Prince: So, two different stories. One story, which is probably the less principled one, was that one of the things that was the final determinant in our making the decision to kick this particular customer off was that one of our large customers said, basically, “It’s them or us.” That was not a comfortable conversation because we’re a business, and we have shareholders, and we have responsibilities to them, and at the end of the day, sacrificing someone who’s paying us nothing and is repugnant for someone who’s paying us a lot and is a good organization––we didn’t love being in that situation; it put us in a hard place.

What’s interesting is the general counsel of that organization, about six months later, called me back and said, “I owe you an apology.” I said, “What are you talking about?” and this person had just lived through another situation––this was a software company, and they had provided services in a way that a certain group of people found offensive, and it blew up in their face. The general counsel said, “I thought that this was really straightforward, but now having lived through it myself, I see that this is incredibly complicated.” I don’t envy anyone who has to live through these questions and consequences, but I do think that as we politicize who customers are, that will mean that as people want to de-platform individuals, they will look at all the different technologies that they rely on and find ways to say, “Can we get the browser to block them? Can we get their payments processor to block them? Can we get their domain registrar to block them?”

There are just so many different routes that you could get any of those things shut off, and especially if you get that shut off industry-wide, that it really will effectively kick people off the Internet. Again, you can think about whether that’s a good thing or a bad thing, but it certainly is a thing with very significant consequences that we should be thinking through. That was basically the only customer that had given us that conversation.

Almost everyone else, especially the calls from our largest customers, were, “Wait a second. Walk us through exactly how you did that, and explain to us how we can be sure that you are never going to do that to us.” Those are big, reliable, trustworthy financial institutions and big e-commerce platforms, not Internet trolls, and not anyone who you would particularly worry about. But I think that’s something that many more people were worried about. Like, “Wow. How can we be sure you won’t use that power against us?” as opposed to, “How do we make sure that you use that power more going forward?”

Goldman: I like the phrase that you used: politicization. Politicization of customers, because in the end, I think the decisions that your company makes have political consequences, whether you want them or not. They seem unavoidable. So, how do you navigate that environment when I think your preferred option would be, “Don’t involve me in the politics,” and that option isn’t available? Now what?

Prince: Well, it can get really hard in other ways. For instance, we watched with quite a bit of concern in the 2016 US election as it became more and more clear that there was foreign influence through disinformation and hacking campaigns and, internally, a bunch of our team said, “There must be something that we can do to help with these sorts of issues.” So, in early 2017 we launched something we called the Athenian Project, which provides our services at no cost to any state, local, or county official who is helping administer elections in any way. We can’t protect against everything. We don’t sit in front of the voting machines, but we can help protect the website that you use to register to vote, or the place that you go to figure out where your polling place is, or the API that district administrators use to report back the vote results.

Over the course of the four years between 2016 and 2020, more than half of US states and a majority of the so-called battleground states in the US signed up to use our services. So, one of the challenges is that if we were seen as political in any way, like we were leaning one direction or another, whoever won or whoever lost the election could say, “Wow. We don’t trust the election results because Cloudflare was secretly behind the scenes pushing one direction or another.” There are certain institutions and organizations in the world that we specifically require to be apolitical, and the voting process is one of them, and having now met with so many of the individuals who are charged on a local basis with administering elections, I can say that is an absolutely thankless, hard, but critically important job.

The fact that, again, we as a society, at least in the United States, have decided that that’s a job that should be very apolitical, for what seem like obvious reasons, suggests that there should be certain institutions and organizations for which it’s incredibly important that they remain as apolitical as possible. Whether norms develop around various services, whether that’s Facebooks or hosting providers or network providers or domain registrars, and whether those norms end up being the same or not, I think time will tell.

One of the things that’s tricky about all this is that the Internet’s still in its absolute infancy, and it took us a really long time to figure out what were the norms around the printed word, what were the norms around radio, what were the norms around television, what were the norms around the telephone system, and then once you figure out what those norms are, what then are the appropriate laws that follow that? I think we’re still so, so, so early on that it’s not clear. Because, today, if I were talking on my phone and I said something incredibly racist or offensive, and the phone operator dialed in and pulled the plug, that would be very, very strange. So, it doesn’t seem right that the underlying network operator for the telephone would be making that choice.

On the other hand, it seems totally reasonable, and in fact, it’s the norm on a global basis, at least in the time when newspapers were still thriving businesses, that in almost every major city globally, you would have two different newspapers that would represent the two political extremes that were there. I think one of the interesting questions is: What is the natural state of a new technology that comes along? Right before the 2016 election, I was talking with Julius Genachowski, the former chair of the FCC. I was over at his house in DC, and we were having a beer, and I said, “Wow. I’ve never seen anything like this in terms of this political contest, and everything that’s gone on around it. Do you think that politics is ever going to get back to how I remember it growing up, watching the Nightly News with Tom Brokaw, Peter Jennings, or whoever it was?”

Julius said something that has really stuck with me over the years, which was, “Why do you think that’s the natural order of things?” He said, “That’s a response to new technology, and the new technology trying to keep itself from getting regulated.” Television emerges in the ’40s and ’50s as this incredibly profitable new technology, and it’s limited in the United States to just three different providers: NBC, ABC, and CBS. They compete with each other a little bit, but what they were really worried about was how to stave off regulation.

So, if you were going to create that as a business strategy, what do you do? You hire every one of your anchors from the middle of the country with no accent––it’s amazing if you actually look at the data how many news anchors grew up in Kansas. You make sure that you cover every political convention from opening speech to balloon drop at the end, even though three different networks covering the exact same thing is the worst idea, and yet to this day the major networks still cover the political conventions from beginning to end with basically the same content and the same feeds. You don’t oppose equal-time rules as they get proposed, because, again, what you’re really trying to do is stave off regulation.

If you then start to think about the platforms of today that have incredible value and that are trying like crazy to stay neutral, like Google with search, it blows my mind that there isn’t a Fox News search engine. Overnight, if Fox launched a search engine, it would have, what, 20 percent market share, which on its own is a $40 billion company in the United States, and yet Google has done a great job, at least with search, of staying right down the center. And yet you can start to see them getting pulled in either direction, with arguments that the search results are inherently political.

Facebook is in the exact same place, and if you think of Facebook as the modern newspaper, it really is quite remarkable that there isn’t a conservative Facebook and a liberal Facebook, the same as is happening around the rest of the world. I’m not sure it’s the natural state, that you can stay neutral. I do think that there are certain institutions, like the voting apparatus, that we as a society have decided are so important that they have to be that way. I think there are some other technologies, like the telephone system, where the norms have evolved to be that way. But if you look at what the natural state is now of television, where it’s obviously fractured into a million different channels, or the natural state of newspapers, which have obviously fractured into different sides, it’ll be interesting to see whether the more editorial functions, whether that’s search ranking or social media, will be able to stay neutral over the long term.

Goldman: Let me ask my last question to you, which I had already previewed, and which you’ve already touched on, but knowing what you know now, twelve years after the founding of Cloudflare, what would you do differently if you were to go back in time? Is there something that you’ve learned along the way that says, “Boy, I really needed to plan for a different scenario than what I thought I was”?

Prince: So, it depends on what level. I think that one of the things that has become a priority for us now, which I wish had been a bigger priority earlier, is figuring out how to not only reduce our impact on the environment but make it literally negative. The Internet burns a huge amount of resources, especially energy, a lot of which is just wasted, and we have only recently realized how important that is. I think we’re doing a lot of things to make our carbon footprint negative. That was the first thing that sprang to mind, because it’s something that in the last eighteen months has become an internal priority, and it’s something that could’ve been a bigger priority earlier in the company’s history.

I think from the policy perspective, we did a good job of engaging in policy conversations early, and we’ve punched way above our weight, but you always can do more, and I wish that we’d done more. The place where we have not paid enough attention is India, probably the most important region in the world for the future of Internet policy. I think there used to be two poles in Internet regulation. There was the Chinese way, and for a long time the Chinese way didn’t totally make sense to me, and then somebody described it in a way that really clicked, which is, “If you are launching a radio station or a TV station, you have to go to the FCC in order to get clearance for the spectrum, and there are some somewhat arbitrary rules. There are seven words you can’t say on the air, and if you break those rules, they’ll withdraw your spectrum license.”

They said, “That’s the model for how China thinks about the Internet: you have to apply for an ICP license in order to publish content inside of China. There are some rules that you have to follow. If you don’t follow the rules, they withdraw your license, and you disappear from the Internet.” There are some reasons that doesn’t make sense to everyone, but if you just think of it as China treating the Internet the same way that the US thinks about radio or TV, it seems a little bit less foreign. But that was one policy direction.

The other was the US policy direction, which it’s hard to overstate how radically libertarian the US view of freedom of expression is, and I grew up as part of it. My dad was a journalist for part of his career, and we talked about the First Amendment and all that around the dinner table. So, it seems like it works pretty well to me, but it is a radical, radical, radical experiment, and it is not the majority opinion around the world. So, what’s been amazing about the Internet is it basically took the US approach to freedom of expression and exported it globally, which obviously has been very disruptive to a lot of businesses and a lot of institutions around the world.

I think that, unfortunately, the world is not going to continue to accept the US view of Internet regulation going forward, and even the US might not accept the US view of Internet regulation. I don’t think that most countries around the world are quite ready to go to the full Chinese version, and the horse is sort of out of the barn in most places, so it’s hard to put it back in. But I think the world is looking for what that new model is, and Europe has got a bunch of things, but it doesn’t have the cohesion to figure it out. Brazil has the gravitational mass, but it has a number of other things that it’s focused on.

India is the country that is going to set the new standard for Internet regulation. On the one hand, it’s amazing: it’s a very high-functioning democracy and has very strong freedom of expression rights. On the other hand, it’s had a lot of concerning rules and regulations over the years around encryption and other things. So, I think that’s a place where we have underinvested as an organization, and maybe as a group of people who are thinking about Internet policy. Watching what happens in India and where India goes is something that we’re spending more time thinking about, and I would encourage people who are interested in the future of Internet regulation to spend time watching what happens there.

Goldman: If you’ll indulge me, on that last point, though, is there something that Cloudflare or a broader set of Internet services could’ve done that would’ve staved off the potential crisis that we might experience in India? I’m very troubled about the rules that the government’s adopting, but was that avoidable, or was it always inevitable and we’ve just gotten here now?

Prince: Yeah. I guess a different way of asking the question would be: Is there any way that the world could’ve just continued on the path of following the US model of Internet regulation, which is largely an anything goes model? I’m not sure that that’s a stable place for this. I think that the Internet’s such an incredibly disruptive force for traditional organizations. Episode IV of Star Wars seemed pretty optimistic, but Episode V is “The Empire Strikes Back,” and I think we have lived through Episode IV, and we’re going into Episode V, and governments and regulators around the world are definitely going to strike back.

Again, that is, to some extent, a result of a number of excesses that have happened online. It is also a result of imposing what was a radically libertarian view of freedom of expression on a world that doesn’t necessarily accept it. When people want to criticize me, they’re like, “Matthew is a free speech absolutist,” and I’m like, “I’m not a free speech absolutist at all. I think I’m probably a due process absolutist.” But I do think that it’s hard. If you go to German policymakers, and they say, “We’d like you to control Neo-Nazi content in Germany,” and you say, “Well, what about the First Amendment?,” they’re polite if they don’t roll their eyes.

But they say, “Listen. We understand that is part of your tradition and part of your history, but please understand that we have a very different tradition because we had a very different history.” I think you have to respect that at some level. What I hope is that we certainly don’t swing all the way to the Chinese, FCC-like, everything-has-to-have-a-license view of Internet regulation globally. Again, I think it would be difficult to put the horse back in the barn, but there might be a lot of things that start to control that.

The question of how you might’ve avoided that is worth asking. What I’m encouraged by is that when we kicked a bunch of Neo-Nazis off in a somewhat arbitrary way, a number of prominent newspapers in Germany wrote, “Neo-Nazis are bad, but I’m not sure that Matthew Prince or this low-level network thing we’ve never heard of should be the one making that determination.” So, I think that if governments do have very transparent processes, and if they do follow what in the US we call due process, but around the rest of the world you’d call rule of law, as long as there’s transparency, accountability, and consistency, we probably come out of this in a way where the Internet looks more like it did over the last thirty years than like the worst-case scenarios that have seemed possible more recently.

Because while there have been some real challenges the Internet has created, none of us should underestimate the amount of good that it’s done, and it’s actually important for all of us to continue to remind people of that. Can you imagine how much worse this pandemic would’ve been if it had happened just ten years earlier? It was obviously still a horrible event for humanity, but a lot of people were able to continue to connect with their loved ones, get work done, go about their daily lives, in part because the Internet continued to work. I’m hopeful that maybe that’s one of the things that we’ll remember and that will help us fight to preserve what it is we’ve created.

Goldman: I always love ending on a note of optimism, and you gave more than I would’ve expected, so if there’s anything else you want to add now, that’s great. 

Prince: Well, the one thing I would ask is that whenever you write about the Internet, no matter what the AP says, capitalize it, because if I had to point to a moment in time when it all started to go wrong, it was when the AP said––I think in 2016––that you could now lowercase the Internet. I think what’s amazing about the Internet is that it’s a network of networks, and there is only one. So, I think it being a proper noun is important. So, my little, tiny crusade on the side is: if you care about the Internet, capitalize it.

Goldman: I feel like you can truly tell the Internet’s old guard by those who still capitalize the Internet. I’m one of them. Sounds like you are, too, but we are a dying breed.

Prince: So, I think, Eric, you and I have to lead that campaign––if you care about the Internet, capitalize it.

