
List of Parliamentary constituencies in Buckinghamshire

From Wikipedia, the free encyclopedia

The ceremonial county of Buckinghamshire, which includes the unitary authority of Milton Keynes, is divided into 7 Parliamentary constituencies – 1 Borough constituency and 6 County constituencies.

YouTube Encyclopedic

  • ✪ March 6: Power of Private Platforms
  • ✪ Winston Churchill | Wikipedia audio article
  • ✪ UN Presidential Visit - 6th August 2019
  • ✪ New Federalism: Returning Power to the People
  • ✪ 444 years young: a beautiful birthday celebration!


Good evening everybody, welcome back. So on a cold February evening in 1996, John Perry Barlow, a former lyricist for The Grateful Dead, sat down to write what would become one of his best-remembered pieces. It wasn't a song that he composed that night, but something grander: a declaration of independence, written on behalf of the people of cyberspace. From whom were the people of cyberspace declaring independence? From governments, all of them. Governments of the industrial world, he wrote, you weary giants of flesh and steel. I come from cyberspace. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather. Governments derive their just powers from the consent of the governed; you have neither solicited nor received ours. We did not invite you. You do not know us, nor do you know our world. Cyberspace does not lie within your borders. It's a fascinating document, one that reveals how utopian visions of the Internet flourished in its earliest days. Cyberspace was imagined as a realm beyond the reach of law or regulation, where the people of the world could come together. Central to this vision was a deep commitment to free expression. As Barlow said, we are creating a world where anyone, anywhere, may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity. In the years since, we've heard echoes of this aspiration. Reddit co-founder Alexis Ohanian once suggested that the founding fathers themselves would really like Reddit. Why? Because, as he said, it is a bastion of free speech on the world wide web. Twitter executives famously referred to their company as the free speech wing of the free speech party. This commitment to free expression, and the openness it fostered, are in part responsible for some extraordinary developments.
Activists across the political spectrum and around the world have used digital tools to build powerful movements for social change, from the Arab Spring, to the Tea Party, to Black Lives Matter. Communication channels once reserved for political and economic elites, television networks and newspapers, have been blown open, allowing billions to engage in global conversations that previous generations could only have imagined. People like John Perry Barlow believed that connecting all of humanity online would be to our benefit, and some of their optimism was warranted. Global citizens of the Internet can and sometimes will share inspiring stories, deliberate about issues of public concern, or arrive at a common understanding across significant political and cultural divides. But if the Twitter of today is any indication, that's certainly not all they're likely to do. In addition to genuine discourse, and chatter and memes that don't harm anyone, we also see content that ranges from toxic to tragic. Hate speech aimed at the most marginalized among us. Hyper-targeted political messaging used to influence elections. Coordinated efforts to silence certain groups. The promulgation of vast and complicated conspiracy theories. And disinformation campaigns that have real consequences. Odious as some of this may be, much of this content, at least in this country, is constitutionally protected speech. In a traditional public square, white supremacists are allowed to march openly. A motivated conspiracy theorist can stand on the street corner and declare that the Earth is flat, or that vaccines increase the risk of autism, without fear of being silenced. These are the protections guaranteed to them by the First Amendment, in a country that, for reasons we've discussed here before, prizes civil liberties as foundational for a democratic society. But does the fact that many of these forms of speech are protected in public mean that they should also be protected online?
Even if the founders of digital platforms champion the spirit of free speech, private companies aren't themselves bound by the First Amendment, or, for that matter, by the speech laws of the other countries in which they operate, beyond compliance with the law; they are free to define their own policies and standards. So when confronted with violent or exploitative content, what's a global platform to do? Should they take it down? Let it stay? Down-rank it in people's feeds so they're far less likely to see it? Do they need to explain to us what they've done, and how they've done it? What about instances where different countries have different norms? Such is the case with something like Holocaust denial, which is protected in the United States but illegal in many European nations. Or where cultural sensitivities differ, as around nudity or homosexuality or criticizing your government. Or where the context of something is what makes it harmful, like Pepe the Frog, a seemingly innocuous cartoon that the far right adopted as a symbol of intolerance. There have long been debates about how best to protect and govern speech. But doing this work in a digital context presents some new challenges, ones that are further complicated by the contested role that private platforms play in disseminating news and information. Perhaps digital platforms, at least those beyond a certain size, constitute a new type of public space. In the words of Twitter CEO Jack Dorsey, a lot of people come to Twitter and they don't actually see an app or service, they see what looks like a public square, and they have the same sort of expectations as a public square. Alternatively, we might think of digital platforms as just a modern version of the postman or the telephone network, the carriers of our messages but nothing more. If you and I plot a treacherous murder over the phone, we wouldn't think to implicate AT&T in that crime. Aren't these platforms just like that?
The pipes or common carriers of our messages to one another? Of course, in that case, they couldn't be held responsible for the content generated by billions of users. But wait, maybe platforms are more like newspapers or radio broadcasters. Facebook, Twitter and the rest don't just display content, they curate it, determining what's on my proverbial front page, similar in some ways to a media outlet. That would make them publishers, though, which would suggest that they should be responsible for what they host and who sees it. Each of these frameworks provides a helpful but imperfect analogy for conceptualizing digital platforms. Public spaces, common carriers and publishers are regulated in their own distinct and familiar ways. By contrast, platforms at times operate as, or claim to be akin to, each of these. And it sometimes seems we as users want an impossible combination from them. We want platforms to remove harmful content, but not under any circumstance to censor vital political speech. We want them to treat content neutrally, but not to flood us with irrelevant information. And we want them to give us control over what news and information we see, even as we happily consume the curated newsfeeds and playlists that they recommend to us. Making things even trickier, the content we see on platforms like Facebook, Twitter, YouTube, Reddit and others isn't simply the product of automated systems, but the result of human moderation done by tens of thousands of content reviewers around the world. Real people, whose job it is to apply guidelines and make judgments about what we see online. Their work is both morally complicated and psychologically taxing. Many Facebook moderators, for example, spend their days reviewing some of the most disturbing content on the web, often as contractors, with pay and benefits far below those of full-time employees. But the power of these so-called custodians of the Internet, whether human, algorithmic, or otherwise, is immense.
From Twitter's handling of Russian propaganda bots during American elections to Facebook's efforts to stem ethnic violence fueled by its platform, the decisions of private platforms can have significant geopolitical ramifications. And problems with content are only one instantiation of a larger question. Namely, what power should these private platforms, whose products now seem essential for our civic and social lives, have in a democratic society? In a lot of ways, Mark Zuckerberg once said, Facebook is more like a government than a traditional company. We have this large community of people, and more than other technology companies we're really setting policies. If he's right, then two decades after John Perry Barlow declared the independence of cyberspace, we may well live in a world where cyberspace is not independent after all. Instead, it merely has a new set of rulers: the digital platforms that, as Mark Zuckerberg explained, function as governments in their own right. In the process, we have to ask, are we losing sight of the types of regulations and accountability measures that are needed to ensure that these new powers act in the interest of online citizens? And if at some point we decide that these platforms simply have too much power, do we possess the requisite tools to check or challenge them? Tonight, as always, we're joined by a group of distinguished experts to help us sort through these questions and others. First, we have Krishna Bharat, a technologist focused on the intersection of computing and journalism. He was formerly a distinguished scientist at Google and was the founder of Google News, an automated news aggregation and search service with more than 100,000 sources and 72 editions worldwide. Prior to that he started Google Research in 1999. He serves on the boards of the Columbia School of Journalism, the John S. Knight Journalism Fellowships at Stanford, and the Committee to Protect Journalists.
And at Stanford he is one of the leads of the Journalism and Democracy Initiative. Then we have Marietje Schaake. She's a Dutch politician and has been serving as a member of the European Parliament since 2009. In addition to serving on a variety of parliamentary committees, she's the founder of the European Parliament Intergroup on the Digital Agenda for Europe. In 2017 she was chief of the European Union Election Observation Mission in Kenya. She is a member of the Transatlantic Commission on Election Integrity and the Global Commission on the Stability of Cyberspace, the chair of the Task Force on Software Vulnerability Disclosure in Europe, and an adviser to the Center for Humane Technology. And finally, we have with us Alex Stamos, a cybersecurity expert working to improve the security and safety of the Internet through his teaching and research here at Stanford, where he is an adjunct professor, and a fellow and visiting scholar at the Hoover Institution. Prior to joining Stanford, Alex served as the chief security officer of Facebook, where he led a team of engineers, researchers, investigators, and analysts charged with understanding and mitigating information security risks to the company and safety risks to the 2.5 billion people on Facebook, Instagram, and WhatsApp. During his time at Facebook, he also led the company's investigation into manipulation of the 2016 election and helped pioneer several successful protections against these new classes of abuse. Before joining Facebook, Alex was the chief information security officer at Yahoo! Please join me in welcoming our guests to the stage. >> [APPLAUSE] >> All right, as folks get seated, I'm gonna start off the questions. I wanna build, as always, on what it is that Hillary provided us in an opening framework, and to talk about the interesting evolution of the Internet or cyberspace as a zone or a realm apart from government.
And then the evolution of large companies, these big platforms, into quasi-governments themselves. I wanna use the words of a British historian named Timothy Garton Ash, who spends summers here at the Hoover Institution at Stanford. In a book about free speech in an online world, he describes whatever Google does with respect to expression online as far more important than what Germany does; whatever Facebook's content moderation policies are is far more consequential in the world than whatever France has to say about content and expression. And so what I'm curious to ask any of our panelists about is this new world in which just a small number of platforms are the sites of our digital expression, our speech, video, audio, all of the places where we now associate and express ourselves online. In the United States, at least, if we were doing it in an analog fashion, speaking in the public square or on the street corner, we'd be protected by the First Amendment. But the First Amendment does not apply to private companies; they can set their own terms of speech. And these content moderation strategies or handbooks are extraordinarily powerful. The content moderation comes in two forms: the banning and deletion of content, and the algorithmic upranking and downranking of content. Now, if these companies are like governments, as a lot of what Facebook or Jack Dorsey has to say suggests, one thing to point out here is that whereas in the United States and places in Europe the actual governments are democratic governments responsive to citizens, Facebook is like a dictatorship that's responsive to exactly one person, a beneficent technocrat named Mark Zuckerberg. You can debate the beneficent part if you want. What I wanna know is how we should think about the private superpowers that are now amongst us, that govern our speech and expression online, and that are not democratic.
And how to think about the tension between actual democratic governments and their guarantees of speech, and the private governance of speech by these quasi-governmental, non-democratic tech companies. Anyone wanna weigh in? [LAUGH] >> So- >> Easy stuff, yeah. >> [LAUGH] Go ahead. >> Go ahead, no, no. >> So I wanna challenge the notion that the companies think of themselves as governments, firstly. But even if they do, they don't have an autocracy, because their users can walk away; they can vote with their feet, right? And, as Lily pointed out, all the speech there is protected by the First Amendment. The only reason anything should be stopped is because the community wants it that way. It's not that Mark Zuckerberg particularly wants that content one way or another; it's that the community of users that uses this product wants it that way. So they're setting the norms, and of course they can't agree, right? And different countries have different communities and different sensibilities, as was articulated. So it's a big mess, because there isn't a uniform definition of what belongs and what doesn't belong. It's a lot easier when the content you're displaying is the product of a search or something, when it's sort of active consumption. When I've said I'm looking for, you know, AT&T, and the first result you get is AT&T, that's kind of straightforward, it's non-controversial. But the moment it's passive consumption, where something is suggested to you based on some profile [LAUGH], then there's this whole big question mark around what's appropriate, and then the ranking gets invoked, and they get accused of being editors and so forth. So it's not a simple answer, but the fact is, I don't think it's the companies making all the decisions, nor do they want to. It's the community that surrounds the company that wants certain standards set, and hence all the complexity.
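The two forms of moderation distinguished above, outright banning or deletion versus algorithmic downranking, can be sketched as a toy scoring pass over a feed. Everything here, the field names, the thresholds, and the 0.2 penalty, is an illustrative assumption for the sake of the sketch, not any real platform's policy or code.

```python
# Toy sketch of removal vs. down-ranking in a feed pipeline.
# All field names and thresholds are illustrative assumptions.

def moderate(posts, remove_threshold=0.9, downrank_threshold=0.5):
    """Drop posts whose policy-violation score exceeds remove_threshold;
    penalize (but keep) posts above downrank_threshold."""
    ranked = []
    for post in posts:
        score = post["relevance"]
        if post["violation"] >= remove_threshold:
            continue                 # banning/deletion: never shown at all
        if post["violation"] >= downrank_threshold:
            score *= 0.2             # down-ranking: shown, but far less often
        ranked.append((score, post["id"]))
    ranked.sort(reverse=True)        # highest adjusted score first
    return [post_id for _, post_id in ranked]

feed = [
    {"id": "a", "relevance": 0.9, "violation": 0.95},  # removed
    {"id": "b", "relevance": 0.8, "violation": 0.6},   # down-ranked
    {"id": "c", "relevance": 0.4, "violation": 0.1},   # untouched
]
print(moderate(feed))  # → ['c', 'b']
```

The sketch makes the panel's distinction concrete: post "a" disappears entirely, while post "b" stays up but falls below a less relevant post, which is exactly why downranking is the subtler and more contested lever.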
>> Yeah, I think they're acting in a quasi-governmental manner. The companies are defining what is legitimate political speech in locations around the world. They're defining what it means to be an authentic speaker, and what level of anonymity or pseudonymity you are allowed to have in different circumstances. They operate intelligence teams; I had an intelligence team that spoke a variety of languages, including experts in the intelligence activities of the Russian Federation and the People's Republic of China. They operate law enforcement teams; we had a counterterrorism team, we had a child exploitation team, we had a team specifically around fraud. So while obviously there are some significant differences, the companies are acting in a quasi-governmental manner, partially because the governments have not stepped up into this space. Almost by definition, all of the content that is being moderated on these platforms is content that has at some point been adjudicated as actually being legally protected. Almost without question, in democracies at least, they will take down any content that there's a court order to take down. And so when we argue about speech rights, the baseline is things that are legal or not legal within that jurisdiction, and all of the decisions are on top of that. And they are decisions that the governments have basically washed their hands of, or, like in the United States, are restricted from doing anything about because of the First Amendment. So the companies are stepping into the breach because the government is not acting. Since 2016, the US Congress has done nothing to redefine the laws around political advertising online. We're in 2019, and we have the exact same rules for political ads, including rules under which it is not completely clear that it is illegal for Russians to run issue ads in the United States to try to drive division.
>> And the changes there have all been made privately by the companies, without any kind of democratic guidance. >> One thing I wanna- >> [COUGH] >> I wanna get Marietje's comment here too, but let me just make sure I pick up on something you said there, Alex, which I think is important. You said something to the extent of, yeah, the companies are acting in this quasi-governmental fashion, but partly in response to a lack of clarity from actual governments in specifying what should be legal or illegal online. >> Right, but that flies in the face of a common trope in Silicon Valley, and in lots of other places, especially in the United States, which is an allergy or outright opposition to actual governmental regulation. So it's either happening by default, as it were: we don't get sufficient guidance about what should be permitted online; we'd prefer the public to decide rather than Mark Zuckerberg, but that instruction is not coming. But at the same time, they're speaking out of the other side of their mouth and saying, we're opposed to governmental regulations in general. And maybe the European Parliamentarian can comment on that tension. >> I think she can, and she will try. Good evening, it's great to be here in Silicon Valley and to discuss these sorts of tensions: the growth of the impact and responsibility of the big technology platforms, and the gap when it comes to accountability. Because you may sketch the notion that there is a community, or constituency if you wanna call it that, and I've also seen Facebook users, for example, rising up and organizing against some measure or settings that Facebook has changed. But I think it should really not be compared to any kind of checks and balances that we believe are appropriate in a democratic system. People do not have the information, and they do not have the sort of governance structures along which checks and balances and oversight are truly organized.
So I think there is a huge accountability gap, and that is definitely the result of the rapid and enormous growth of these companies, the lack of oversight and transparency, and indeed the lack of lawmaking. And it is true that the tech platforms, until very, very recently, have consistently and systematically pushed back against any regulatory effort. This has been well documented, and I've seen it with my own eyes. The arguments were: if you regulate technology, you will stifle innovation. And the other often-heard argument was, or is, that if we in liberal democracies are going to regulate technology, then China will take a cue from us and will also start regulating technology. Now, I think it's safe to say that they're not taking cues from us; I sort of wish they did. >> [LAUGH] >> But the argument that we should not regulate or else China will, I think, has sort of worn out. Instead, what I believe we need to do is make sure that the principles that define us as liberal democracies, whether we're looking at Europe or the United States, are sustainable across different technological disruptions. We have catching up to do, but I also believe it's in the tech companies' interest to get on board with this agenda and not to resist it. >> Jeremy? >> So I wanna push you a little bit further on this issue. You're the first elected legislator that we've had on our panel in the course of the discussions that we've had so far. And as an elected legislator, you respond to a different constituency than a company that's thinking about its users or its investors. So I want you to talk a little bit for us about how you think about what the public interest is, and what we need, you know, from a regulatory architecture to serve the public interest as opposed to the private interests of the companies themselves.
>> Mm-hm, so I think the interesting thing is that most consumers are also citizens, but citizens are more than just consumers or users of these platforms, though oftentimes the categories overlap. But indeed, the incentives toward the boardroom and toward the government are very, very different, and vice versa as well. So what I believe the public interest is, is to start with very principled notions that are, I believe, not very controversial. Think about non-discrimination. It's not allowed to distinguish in the treatment of people of different skin color, different ethnicity, different religion, different sexual orientation, gender, age, and so on. But to actually make sure that non-discrimination is enforceable vis-à-vis business models that span the globe, that use different algorithms with both intended and unintended consequences, is something that needs work. So it doesn't necessarily need new laws, but it needs different ways of looking at how these laws can be implemented. And the same can be said for fair competition and antitrust, or the preservation of the freedom of expression as it is encoded, as Alex said, in the Constitution in the United States, and in different laws in Europe. To have the law be leading, and not some normative notion of, you know, breastfeeding being inappropriate to show, or the mislabeling of certain news photos, for example, as child pornography. I mean, to have the benchmark be the law, I believe, should be the starting point for preserving the public interest. We can still talk about whether the laws are good enough, and there will always be disagreements about that in a democracy. But there are predictable, transparent and accessible ways to change the laws through the legislative process, and I think that that is the avenue to take. So instead of thinking, my goodness, we need all these new laws, and where do we begin, and that cyberspace is apart from the real world, as John Perry Barlow so poetically, but in some ways also I think prophetically, tried to sketch at the time: we are now more than 20 years on. And while the libertarian sort of worldview that spoke from his declaration of the independence of cyberspace still lingers in this part of the world, it is safe to say that cyberspace is not detached from the real world, and certainly not detached from real people. And I think that that should give us guidance in terms of where the public interest is and how we can safeguard it. >> Let me ask one follow-up to this, and invite Alex to respond afterward. Alex gave us the baseline. He said, look, anything that's actually against the law in any given context, you can just assume that the platforms, regardless of what country they are operating in, are taking that into account. And we can debate whether the companies should be in China or Saudi Arabia or not; that's an important decision that each of them needs to make, but they are following the law. So the issue will be- >> This is, of course, about content moderation. >> Yeah. >> Which is only, I mean, it's a very important part, but it's certainly not everything, okay. >> So the issue becomes, when we think about the public interest: the impact of the platforms, say, has some negative effects on our civic culture, our civic deliberation, the health of our democracy more broadly. How do we think about who has the obligation to mitigate those costs? Should it really be in the hands of the legislators? They need to get their heads around it, and can we even get agreement out of our political systems, as we've seen with debates in the United States with respect to content moderation, to get ahead of those things? As Alex described, we can't even get transparency on political advertising in the United States, or much progress on foreign interference.
So how do you think about the appropriate role of government with respect to dealing with those harms, and of companies themselves taking on the responsibility of self-regulation as those harms become evident? >> Uh-huh, so to clarify things a little bit, I think we're trying to distinguish between illegal speech according to the law of the land, and then the very gray area, the slippery slope, toward topics like fake news, or information that is going viral about conspiracies that can invite real-life action. You know, allegations of child molestation in basements that then have people coming to the rescue, or anti-vaccination information. I'm the daughter and sister of a doctor, and I recently saw this picture that I thought said more than a thousand words: a sign in a doctor's waiting room saying, please don't mistake your Google search for my medical degree. >> [LAUGH] >> And I think that that is unfortunately something that people need to be reminded of, and where you could say, look, is it illegal to suggest that some kind of dance or some kind of diet can cure cancer? No. But is it scientifically proven that it will? No. And with the anti-vaccination movements now, a recent article showed how on Amazon, which is very much used in this country, the algorithm was really boosting the promotion of books that challenge vaccinations. And there is now an outbreak of measles at levels that we haven't seen in a long time. So what I'm trying to sketch is that there are real harms in the distribution of information that may in and of itself not be illegal according to the law, but that can still have a damaging impact on what I would call the public interest, or public health in this example. So it needs to be addressed, and where I think we have a big challenge is making sure that we know better what the impact of these algorithms is.
>> So can I just respond? If anti-vaccine messaging is a problem, then maybe there should be a law that says let's not talk about it. Just like, you know, in certain countries you cannot deny the Holocaust, but in this country, for whatever reason, they decided not to do that. So there is this gray zone, right? Then who gets to decide, on a case-by-case basis, that each of these things is a problem? In many of these cases, like anti-vaccine content, it turns out it's a data void, right? If you search for certain kinds of things, the only people writing about it tend to be the ones pushing it, right? So for example, if you search for, you know, the truth about climate change, [CROSSTALK] many of the things that end up on YouTube, for example, end up being about, you know, climate change denial, from Fox News, right? So there is an active element writing content for these kinds of topics, and unless there's clarity that they should be banned, [LAUGH] then what you see will be a function of what's available, right? So then it comes down to this: the pressure is on the search engines to get their ranking right. >> Yes. >> But by whose definition? In some cases it's relatively clear: if the reliable sources are saying this, then they should be on top, and I'm sure they would agree with you and they will do their best to do it. But there can be contrived cases where either the reliable sources are not writing about it at all, or the reliable sources also get fooled; in this case, you know, the example I had of the truth about climate change was on Fox News. >> Yeah, so I think this is not only about what law needs to ban what. I think this is really about the responsibility of the companies that have perfected, not just a little bit but to a level of detail that none of us can truly comprehend:
The matching of your search results, of the likes of your friends, of the kind of person you are, with ads. The level of detailed tweaking has allowed these companies to know people are pregnant before their doctor told them. I mean, this is going very, very far, because they have statistics on the kinds of searches that, you know, newly pregnant women do, etc. So the idea that the companies do not have the capability to do a lot, I just don't buy. The question is, can they engage in more transparency and allow for more research, on the one hand, into the intended and unintended consequences of their algorithms and business models? Because I also believe that there are unintended consequences of the algorithms. And can we then see what is necessary, whether there is a need to ban certain information or not? But the incentives to stimulate the sharing and the high ranking of information are what I believe need to be scrutinized. Because the incentive is to keep you online long, which is, you know, probably the incentive of YouTube, for example; you could also go play soccer, cook something, watch television, you can go spend your attention and your time elsewhere. So everyone is competing for our time, including the tech platforms, so the incentive is probably to keep us online long, so that we can watch more ads, share more data, stay longer on YouTube. And if it turns out that sensation sells, and it may just well be the case, right? Then there is an incentive to keep people looking at sensationalist stories. This could be sensation about celebrities, it could be sensation about politicians, it could be sensation about health, it could be sensation about food scares and whatnot. And if it happens to three people, there may not be a problem. If it happens to 3 million people, 30 million people, then it changes.
And in Europe, when we look at competition and antitrust, there's a very important notion called significant market power, which weighs the impact of behavior. For example, if you and I have a price agreement about selling one or two products, we're under a different kind of scrutiny than if we do the same for products reaching millions of people, and if our market share is, let's say, 80% in a certain market. So the question of how big a problem is, or how big a market share is, should factor into this, because it determines how big the impact is of a certain algorithm or service. >> [INAUDIBLE] we'll take questions. >> So I think one of the reasons that tech companies are fine with regulation now is they wanna call the bluff a bit of politicians. Because there's this whole set of criticisms that ignore the fact that these are actually optimization trade-offs, that you can't have your cake and eat it too. There's a famous saying in engineering: you can have something done quickly, done cheaply, or done correctly. Pick two of three, all right? And there's a bunch of trade-offs here, and the two most obvious for the things we're talking about are the trade-off between the power of the platforms and their responsibility for curating content, and then a serious trade-off between privacy and safety, right? And so, you know, you have these criticisms that effectively boil down to: Facebook is way too powerful, and I want that power to be used to squash this stuff I do not like. Or: Facebook needs to know nothing about anybody, and they need to find all the bad guys, right? You can't have both of those things. And so, I think what's going to have to happen in this regulatory context is, all the platforms, imagine this triangle of optimization.
Between privacy, security, monetization, authenticity versus anonymity, they are picking these places in the middle, and they get criticized from all sides. And so they are basically saying, okay, that's fine, pick for us. And what's happened so far is that there have been serious mixed messages, especially in Europe, with the way the federal system works. You have a pure privacy message at the federal level, from the European Parliament and the European Commission, in the form of GDPR, the General Data Protection Regulation. And then you have lots of asks from state governments to help with the enforcement of laws. An anecdote that showed this to me, and this is before GDPR, but a lot of the same privacy laws were actually in place in Europe before GDPR. I had a couple of meetings in a European capital, I won't say which one. And the first day we met with people from the interior ministry, which in European countries is often responsible for internal security. And we had helped them catch a very bad person, and that person went to jail; their cops took credit for it, not us. But they would not have caught this very bad person without us, and they said, thank you so much for doing that, how can we work together? And on the second day, we met with the data protection commissioner, and they gave us a list of pieces of data, a list of data whose collection was a violation of the fundamental human rights of their citizens, and it included the data necessary to catch that bad guy. And the message we gave them is like, you guys are both from the same government, you should figure out what you want, right? You can't have both perfect enforcement of the rules and safety for your citizens, and all of this privacy. Privacy also goes to the bad guys. And I think that is going to be the difficult place: where do we find the optimization on this? And I'm sure we'll talk about this.
I think, to a certain extent, the big Mark Zuckerberg letter of today is calling the bluff on that, and forcing the critics to actually take a side. Because so far, for people in the tech industry, it's not comfortable to do all these things and then, no matter what you do, no matter how much you struggle with these difficult equities, people who have to take no responsibility for the outcomes get to criticize you for it, and then also imply that you are bad people because you made a difficult equity choice. And so punting that to our democracies is a good thing. The problem is that most of the countries that the tech companies operate in are not democracies, or are democracies that do not have significant human rights standards built into their constitutions. So punting it to governments is great in the Netherlands, not so great in most of the world. And that's another challenge that we're gonna really see. >> I'll be happy to talk to the people at Facebook one day to make them feel better, cuz what you're describing is sort of the position politicians are in all the time, you know? >> [LAUGH] >> Can't win, damned if you do, damned if you don't. But the example that you gave about the not-to-be-mentioned government, and the friction between intelligence and law enforcement and data protection, you could also see as a healthy part of checks and balances. I mean, it is actually normal that there's an independent authority also checking the interior ministry and keeping them sharp on data protection. So I understand that it's frustrating for a company. >> Except they shouldn't both be punishing the same company, right? They should be working that out with each other, not, we're gonna fine you and we're gonna fine you, and we're gonna sue you and we're gonna sue you. That's where it gets silly. It's just- >> Yeah. >> It's not productive, and it doesn't reflect-
It's a special dysfunction in Europe, because the European federal system does not have the national security and law enforcement responsibility. At least in the United States, if the FBI and the FTC disagree, they both report to the president. But it is unclear to me what happens when something bad happens under GDPR. There's going to be a terrorist attack, or there's going to be Russian interference in a European election, and the tech companies are actually going to be super happy, because the data is gonna be gone because of GDPR, and so they can't get blamed for it. And they're gonna say, I'm sorry, I can't investigate this for you, I don't have the data anymore. And then that's gonna be a fascinating question: how, in this European system, where you have a government in Brussels that feels very distant from the governments in some of these countries, are they gonna work out the outcomes of this? And I'm a little afraid. I would hope that people would figure it out now, while everybody is calm, instead of in the aftermath of something really bad happening. >> Yeah, no, we definitely don't hope something bad will happen. But I think the idea that there are different voices in governments is normal; it's the same in the United States. It's cumbersome, democracy can be. But if you look at the tech companies, which we've tried to, you know, lump together for the sake of this discussion, I can also give you a gazillion examples of where company A says one thing, company B says another thing, and we'll just have to navigate that. What I think is more important is that we focus the discussion, ideally, on a closer and more constructive dynamic between the private sector companies that are associated and incorporated in liberal democracies and democratic governments, about how we can actually solve some of the key problems.
What we've seen, and you mentioned they're now gonna ask for regulation and call the bluff, but if that's the case, it is very new and it is a 180-degree turn. Because for the longest time, companies have pushed back against regulation. So if that's now changing, okay, let's see what kind of opportunity that brings, and what it signals, at this moment in time, that these companies are doing it. I believe that we should rather focus together on developing a liberal democratic governance model of technology, vis-a-vis the more authoritarian, top-down, dictatorial kinds of systems that we now see developed in China, for example. And we are wasting a lot of time in the kind of back and forth between us, and we're almost risking not seeing that bigger picture. >> So I find it a false dichotomy to say that the tech companies and the governments will somehow figure out what's in the public interest, and the public will not somehow be involved. I feel like the institutions of civil society, maybe even this forum, have a huge role to play, right? Because, frankly, if I look at some of the folks in government in this country, I don't necessarily trust them, I don't think they are well educated on these issues, and they might be persuaded by lobbyists. And there have been lots of examples of lobbies getting to Congressmen and getting things done that aren't necessarily great. So I think there is a discussion to be had as to how civil society can play a role. What kind of institutions do we need? The fact-checking infrastructure is an example of that, right? The third-party fact checkers who sit between Facebook and the government, because we don't want the government telling Facebook what to get rid of, because they might have their own interests there. So I think it's important to have this third pillar of civil society that informs these discussions.
It may be about community standards, on what's acceptable and what's not. I don't want every citizen to get involved, but I think a certain representative segment needs to get involved. So I think that without that, especially in other countries where the governments are downright authoritarian, it's not gonna work. >> If I can push into this in a little more detail, to think about this notion of civil society, the public, government, and companies trying to make these choices together. One of the things I would think of, taking a technical perspective on it, is that companies want to optimize certain metrics. They have certain metrics, like revenue generation, user engagement, minutes spent on platform, whatever the case may be. Those metrics get optimized relative to some constraints, and those constraints are normally the laws: what does the government allow us to actually do? Thinking about that environment: if I want to maximize click-through rates, I'm likely to create an echo chamber, because I'm likely to show people what they would like to see. And that, I can say, has brought the public in, because it's using people's own voting power, by clicking on certain things, to say, this is what you wanted, so I'm gonna give you more of it. But then we get into this notion: say the only constraint is, for example, the First Amendment in the United States, or, in Europe, GDPR and some other rules around free speech. We enter a gray area, because we're not saying that anything allowed by the First Amendment is allowed on the platform, right? Twitter comes along and kicks off Alex Jones. Even though the stuff he says may be reprehensible, it's still protected by the First Amendment.
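The point about maximizing click-through rates producing an echo chamber can be made concrete with a deliberately toy simulation. This is purely illustrative: the topics, numbers, and greedy policy below are invented for this sketch and are not any platform's actual ranking code. A ranker that always shows whatever currently has the highest estimated click-through rate quickly collapses onto a single topic, because the feedback loop rewards whatever people already click.

```python
import random

random.seed(0)

# Hypothetical user: their true probability of clicking each topic.
user_pref = {"sports": 0.6, "politics": 0.3, "science": 0.1}

# Estimated click-through rate per topic, learned from observed clicks.
est_ctr = {t: 0.5 for t in user_pref}   # uninformed starting estimate
shown_counts = {t: 0 for t in user_pref}

def pick_topic():
    # Greedy CTR maximization: always show the topic with the
    # highest estimated click-through rate, never explore.
    return max(est_ctr, key=est_ctr.get)

for step in range(2000):
    topic = pick_topic()
    shown_counts[topic] += 1
    clicked = random.random() < user_pref[topic]
    # Running average of observed clicks for this topic.
    n = shown_counts[topic]
    est_ctr[topic] += (clicked - est_ctr[topic]) / n

# One topic ends up with nearly all of the impressions.
print(shown_counts)
```

Because the greedy policy never explores once one topic's estimate pulls ahead, the other topics are starved of impressions regardless of the user's real mix of interests. Real recommender systems add exploration and many other signals, but the incentive gradient the panelists describe, optimize the metric within whatever constraints exist, is the same.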
We now get into this case where the constraints under which the company is supposed to act, if we think about that from a regulatory framework, are different from the constraints the government has imposed, right? So we're in this world where these arbitrary decisions get made. Is that the place where you're saying regulation needs to come in, to create more of those constraints, or is that gray area where we should be operating? And the example I would give you is: rather than, let's say, Mark Zuckerberg running Facebook, what if Donald Trump were running Facebook, and decided that there was just some content he didn't like and it should be downranked, right? That's within the constraints of what could happen under the law, but should he be allowed to do that? >> Well, I use that example a lot, cuz I think when we think about the power of these companies, you have to grant them the power assuming that you do not like the people running them. And there are a lot of my friends on the left, with whom I politically agree, who are very comfortable with the idea of Facebook having these powers, cuz they assume they will only be used against the people they disagree with, right? And I always say, part of that is because they know Mark Zuckerberg is a socially liberal millennial from Long Island, right? Assume Mark decides, screw all this, I'm gonna go live in Hawaii, and Peter Thiel, the, you know, Austrian ubermensch here from Stanford, takes over. What kind of powers do you want him to have to define what is acceptable speech? And I think, in most cases, people calling for more control of other people's speech would backtrack, and that's the test you've got to apply. Cuz maybe it's not actually Facebook, but the thing that's happening right now is we're setting norms that are gonna exist for decades. The companies have almost unlimited legal capability to censor.
You know, CDA 230 both gives them liability protection for the content and protects them from the people whose content they take down. >> Can you give a little bit more background here on CDA 230? >> Yeah. [COUGH] So, CDA 230, Communications Decency Act, Section 230, was part of this big bill that was actually meant to censor the Internet. There were big parts about how you could not have, basically, porn, with a number of different definitions of stuff that's not allowed on the Internet. But then, to motivate the companies to go take that stuff down, there was a section that said: you are not liable for content, as long as you follow some other laws. You are not liable for content that's posted on your site if you're an intermediary, it's called. And then you're also protected from being sued if you take action to moderate your site. Most of the Communications Decency Act was struck down by the Supreme Court; CDA 230 survived. And so we ended up with, out of what was supposed to be a very controlling bill, out of the Newt Gingrich Congress, this very libertarian vision that gave the companies almost unlimited legal power to decide what is acceptable speech on their platforms. And so there's very little in the United States restricting them from being absolute, you know, tyrants on their platforms. Now, on smaller curated sites, maybe that's a fine thing; you want to be in something that is highly curated. I think as the companies get bigger and more powerful and kind of asymptotically approach monopoly, they have more of a responsibility to allow for more voices and to have a softer touch. And that's actually one of the weird ironies here: generally, as companies get more powerful, historically, we've treated them more like common carriers, as if they have not more of a responsibility to censor, but less.
And that's one of the kind of weird ahistorical things that we're going through here, versus the telephone companies, right? But anyway, within that context, we've got to be really careful what norms we set. Because the only things restricting the companies right now are the norms that have been set about what is expected of them and what powers they have. And, you know, one norm that continues to survive is that Facebook should be politically neutral, right? We just gotta be really careful throwing away these norms, because it won't always be these companies. 101 out here is lined with the bleached skulls of the tech companies that have come before, right? So you have to assume there will be different companies, run by different people, with different outlooks. And so the norms and laws we set right now should be, I think, longer term than people are thinking. And this is my advice, especially for people who politically agree with me: when you are thinking, I want this tech company to control the speech of somebody else, or I wanna control their consumption of information, I wanna restrict their ability to find something online or to consume something online, think about whether, if somebody you disagree with did that to you, you would want them to have that power. The crappy part of freedom is other people having it, right? >> [LAUGH] >> You having freedom, easy. Other people having freedom, hard, but that's how it works, right? They gotta have the freedom if you're gonna have it too. And I think we've kind of lost that a bit. And I think Europe, you know, I'm gonna give you the soft pitch, I think the Europeans, especially, are losing that, which is really sad. At the same time, European governments, a couple of them, are going semi-fascist. They're creating powers that are going to be very, very dangerous in the hands of folks like the current Hungarian government.
>> So maybe just to put it in a slightly more extreme context. Peter Thiel taking over is one thing. What about Alibaba buying Facebook tomorrow? How would the American people, and all of the users globally, feel? So the idea that we can have good faith expectations of a number of these companies, I think, is not gonna cut it. At least as a lawmaker, I don't consider whether I like a CEO or whether I trust them, necessarily, because you wanna draw one line that applies for everyone and that sets minimum standards. And I agree with you that those should be long term, and they should be anchored in law. But what I'm missing in a lot of these discussions is that there's extraordinary scrutiny of proposals that are being made to regulate, or of initial regulation that's taking place, and we do not scrutinize the status quo with the same rigor. Because under the First Amendment, these private companies already have the power to decide. If they don't ever wanna report about sports anymore, they can do it, no problem, even though reporting about sports is perfectly legal, and perhaps even uncontroversial, depending on who's playing whom. >> [LAUGH] >> But the fact that they are already making a number of decisions that are not anchored in law, I think, needs to be taken into account. And they have the freedom to make decisions on things that are quite blatant, like content encouraging children to take big risks, or the sharing of bullying videos, or this anti-vaccination content. The reason why companies are reluctant to take those actions is because it would creep in the direction of liability. This exemption from liability, the Section 230 in your law, is so vital for their business models that they do not want to proactively show that they're able to moderate content, to have an editorial role.
Because once it is shown in one case, something extreme, then it can creep into other areas, and then this whole exemption from liability would sort of become non-credible, or effectively not lived up to. And I think that is where the tension comes from. But companies have already made decisions, like banning the accounts of Alex Jones, right? >> Yes. >> Or people like that. They can; I mean, if I violate their terms of use and they wanna throw me off tomorrow, they can. I think they could probably kick me off even if I don't violate them. >> They don't need a legal basis. >> So it's completely [COUGH] in private governance hands. And I think we are approaching these companies, and I think this discussion shows that, with the expectations that we have of governments. And if that's the expectation we have of these very powerful companies, and you've called them quasi-governments in the kind of role that they have- >> I'm the academic, I try to use Latin and Greek. >> [LAUGH] >> It sounds smarter, but I'm not fooling anybody. >> No, but I do think that they have extraordinary power, and the notion that the expectations of fairness, of redress, of checks and balances, of a sense of justice, [COUGH] of buy-in from the constituents, I mean, all these elements that we know so well from democracy, are not translated into their governance models. So if we think that these companies are so powerful that they have a significant impact on the public debate, on democracy, on competition, on public health, on discrimination or non-discrimination, then we have to be much more rigorous in how we're gonna safeguard some standards. >> So, picking up on- >> I wanted to respond to that. >> I see both arguments, that's my problem. Do you think they should do more, be more aggressive about living up to their civic responsibility? >> Yes. >> Or should they be more hands-off? >> No, I think these companies should be more responsible towards their civic responsibility.
But we should set minimum standards in law and regulation. Meaning, we could identify principles and give them to regulators. So not governments, which lean to one political side or the other, but, in Europe at least, regulators like telecom regulators and competition regulators, who are independent and not politically appointed. They act like an extension of the judiciary; they assess whether a law is complied with. I think here the tension comes from the fact that regulators also set the standards, and they're often politically appointed, which makes them lean one way or another, like the FTC or the FCC, etc. So maybe that's something that needs to be rethought. But the point is, why can't Facebook as a company, or YouTube as a company, be part of a common-sense assessment of how its products are working? And if it turns out that there are deadly consequences, take the freedom that they have to act responsibly. >> That's what they do every day. The irony here is, if you call for them to make more decisions on their own about what is in the civic best interest, it creates a bigger distance between them and democratic accountability, right? Because governments aren't making those decisions, they are. And I mean, I'm not even sure what we're arguing about at this point, but. >> [LAUGH] >> We're pretty deep into it. But I feel like the companies are making those decisions, and we need to have some kind of animating idea of what should motivate them and what limits they should have. And I think the best thing is they should do this themselves. >> Exactly. >> One of the biggest problems that the companies have is that they make these decisions in secret. So if you're interested in this area, I strongly recommend this article that was just in Vanity Fair. It was about a single decision that Facebook had to make, a content moderation decision that turns out to be super complicated.
And they let this reporter sit in for the conversation. The question there is: there was a group on Facebook called 'men are scum,' and it was just full of women posting things about men. Is that hate speech, that term, 'men are scum'? And when you start to diagram the sentence out, and you try to come up with a rule that will cover all of the different eventualities, you start to decide: is 'men' a gender, is it a sexuality, is it something innate and biological, or is it an identity that you can take on yourself? What is 'scum'? What do you mean by 'scum'? It's an incredibly complicated thing. And the problem is that these decisions happen every single day inside the companies, and with this one exception of the Vanity Fair reporter being there, they're all secret. In the end, in this case, they left 'men are scum' up. But then maybe they take down a 'men are really scum.' >> [LAUGH] >> And all those people are super angry that they were taken down, and it seems capricious and arbitrary. And they don't realize that ten JDs, and it's not engineers making these decisions, these are all people with fancy liberal arts degrees, mostly lawyers, ten of them sitting around the table, yelled about this for eight hours before making a decision. When we do that in the legal system, people write it down and you can read about it, right? >> Yeah. >> And I think that is one of the things that has to happen: those decisions have to be visible on the outside. But the flip side is, then, if people don't like that, if they don't like the decisions, then governments are gonna have to take responsibility. And there's a little bit, and this is happening in Europe as well, where the governments want the companies to do things, but don't want to be responsible for any of the downside.
In Germany, they passed this law called NetzDG that strongly encouraged the companies to enforce German hate speech law, with all of the penalties on the side of missing a takedown, and no penalties for over-censoring. So what do you think is gonna happen? These companies are only penalized on one side, so they massively over-censor. And now, whenever this happens, when some comedian is taken down whom everybody agrees should be okay, the government's like, it wasn't our fault, it was Facebook, right? They're trying to have it both ways: we have all this censorship, but we want you to take the blame for it. I think that's cowardly. I think the companies should come out and say, these are our standards, these are our principles, this is how we make the decisions. But then governments are gonna have to say, yes, that's reasonable, or no, and then take the actual responsibility for adjudicating it, and take the heat from their own citizens. Instead of trying to create a situation that is perhaps, in some cases, not even allowed by their law, or goes well beyond what they are legally authorized to do, through yelling at these companies. In one case that I was really angry about, in the UK, a member of Parliament wrote to Facebook, threatening Facebook with regulation if they didn't take down this one specific guy's account. Which I think is completely inappropriate. >> No, that's terrible. >> For a democratically elected leader. You know what you call, in a democracy, a system that weighs people's various rights and comes up with a decision of whose rights win? That's called a court system, right? Members of Parliament should not be threatening, take down my political enemy, or you'll be regulated. >> No, that's preposterous. >> Right, you would never do that. >> But I think, for this notion-
>> [COUGH] >> No, no, but you've brought me to an interesting idea, and I'm just kidding. No, I wouldn't do that, but I just- >> [LAUGH] >> Just very briefly, because- >> I count, you're right. [LAUGH] >> No, no, no, not at all, not at all. I'm a big believer in freedom of expression. >> [COUGH] >> And I actually think that this is what needs to happen much more: an engagement in this discussion. Cuz this debate tends to be, you know, what should be banned, and what should be regulated. But more engagement from tech companies about their decisions, and also being open to pushback, because this is how you get to a better place. You try something with best effort, you get feedback, you reevaluate, you iterate, you move on. I mean, that's also the way that laws are often made. But we cannot simply reduce this to a discussion of, you know, what laws should govern what. And by the way, these companies are, as you mentioned, global. They reach a lot of people that are outside of the US jurisdiction, outside of European jurisdiction. So even if you wanted to regulate everything, it's gonna be incredibly difficult. And that's why it's very important that these companies make their own values decisions, and be transparent about it. >> First, I wanna get you in the conversation, and then over to Hillary. >> Finally, okay, I think, you know, a lot of this has focused on, I'm blanking out, finally. >> [LAUGH] >> Gosh, go ahead. >> If Alex says something, it'll come right back to you. >> [LAUGH] >> I mean, I'll say I'm experiencing a similar phenomenon, where my questions are evolving in real time in response to the conversation as it's unfolding. >> Sorry, yeah. >> Go. >> I knew it, I knew it. [LAUGH] >> [LAUGH] >> So, before the nightmare scenario where somebody truly evil takes over these companies, or a foreign government acquires them.
I think we have a window, while these companies, at least on the surface, in terms of what they're saying, you know, 'do the right thing,' says Google, and you hear what Mark is saying, he's saying, I'm trying to do the right thing too. While that window exists, we have an opportunity to ask: what is it that is going to prevent this future scenario, when somebody else takes over? At the moment, these companies have employees of all political persuasions. They have product decisions that are probably getting logged. There's probably code that's getting checked in that is auditable. So even creating a culture inside the company where the stuff that's going into the code and the product decisions being made are documented, and at least subject to some internal scrutiny, and potentially also external scrutiny over time, might create the kind of culture and norms that will later protect us when something bad happens, right? >> Yeah, Hillary? >> So, picking up on some of the comments that were brought up earlier, part of this thought exercise of, you know, imagine if Mark Zuckerberg were the person you least agreed with politically. Donald Trump, for me, is a pretty good option there, or maybe Alibaba. >> But you have to imagine they're competent, because you have to be afraid of their power. >> [LAUGH] >> Fair, even better: somebody competent, but with whom I disagree. Part of what it makes me realize is, you know, A, it's great when you happen to be in political agreement with a powerful person. But B, I actually don't want anybody to have that much power, and certainly not in that kind of position. This is an argument that's often made about executive power in government, why you wanna have checks on different branches: cuz you don't want one person to have that much power.
And that's what I'm thinking about with respect to a CEO of a private company, certainly. So I wanna talk a little bit about the tools that we actually have, whether as lawmakers, or as citizens, and perhaps as civil society members, to check that power. And one of the tools that's been brought up is antitrust, which, you know, has some real strengths, but also some real limitations. So I want you to help us understand: what can antitrust efforts get us in this conversation about checking the power of ungovernable platforms? And what problems will it actually introduce that we should anticipate, if we're trying to use it as a tool? We can start with Marietje, given that the EU has sort of led the charge in antitrust. But then, Alex, I wanna open it up to you, to hear sort of the companies' perspective. >> Yeah, so I think it's important to keep in mind the origins of antitrust and competition law, which is to enable fair competition and a healthy marketplace. The idea of free markets is also fair markets, and that's where antitrust law often comes in. But it also, in Europe at least, pertains to things like mergers and acquisitions. So what if there are four companies in the market, and two of them wanna merge? Will they acquire the significant market power that I mentioned before, leading to all kinds of risks in terms of, you know, the competition that's left, and the power that they have? And I think there's more and more assessment of how to weigh the value of data. Not only stock market value, or number of customers, but really also looking, in assessing mergers, and competition, and antitrust, at the way in which data is being dealt with. Now, normally in Europe, antitrust, or competition violations, are investigated on the basis of a complaint. So there might be someone with a search engine suggesting that another search engine has not respected competition laws.
And then the European Commission can investigate, and if they find wrongdoing, they can impose a fine. And this is not only for tech companies, this is for oil companies, truck companies. Any company can be subject to this, from anywhere in the world, operating in Europe. So the possibility would exist to assess whether there are monopolies forming, and whether there needs to be action against monopoly forming, which I hear more and more voices calling for here in the United States as well. So you could deal with the notion of too big to fail. Too big to allow for any innovation in the same sphere to come from smaller competitors. So that, you know, could solve the problem if you think that those are the problems. On the other hand, it would take a long time. And it would be kind of like a blunt hammer for addressing specific elements. Perhaps where, you know, we just talked about various sorts of problems that some of these tech companies bring forward. And I don't think that competition in and of itself can solve all of those problems. >> Can I press this just one degree further- >> Network effects. >> Go ahead. >> Sorry, I agree with you. It's impossible to build a Google today. It's impossible to build a Facebook today. Because the data strength they have, by virtue of the social graph, or by virtue of the billions of queries they get, is something no startup can possibly compete with. >> Mm-hm. >> And I think the notion that we as users are giving this data freely, and sort of fueling this monopoly, is the challenge, right? I think, ultimately, if there was a way to make that data available more generally to other companies who might wanna come up, there would be competition. But there cannot be competition unless the data that they are privileged to have is also shared more broadly. >> All right, so one approach is to try to attack the data advantage that large companies have. I'll just offer another possibility. 
Which is acknowledging this mergers and acquisitions dynamic that's also true about tech companies. So it's hard to imagine, Google enjoys something like a 90% market share in its search functionality. But of course it has a blizzard of other products as well, YouTube among lots of others. Facebook enjoys 2.5 billion people using the Facebook platform. But it also has WhatsApp and Instagram. So it's unclear to me what anti-trust would mean with respect to the search function of Google. What would it mean to break up the Google search monopoly, with 90% market share? But it's easy to imagine what it would mean to say, Google can't own YouTube as well. It's easier to imagine what you could do to Facebook by saying, you don't get to acquire WhatsApp and Instagram. Would that be a good approach? So the data concentration is hard to get at. But the acquisition tendency, that any modestly threatening competitor can get purchased and absorbed into the same company, that's what to ward against. >> I think that's the right approach. I mean, first, people throw out anti-trust as this magic solution to every problem. >> Mm-hm. >> And so you'll hear people, you know, the Alex Jones fans, the Neo-Nazis say, we're being censored by companies that are too big and powerful, we need the anti-trust. And then you'll hear progressives, these companies are too big, they're not doing enough content moderation, anti-trust. They can't both be right, right? And I think they're both wrong, in that the truth is, anti-trust is competition policy. The focus should be on creating the competitive marketplace. You're not going to get what you want out of content moderation, what you want out of privacy. Of all these things, 90% of the things that people say, I could fix with anti-trust, they can't. But what you can do is create a competitive marketplace. 
And the best way to create a competitive marketplace is to bring it back to the place that I was talking about, of all of the dead companies on 101, right? So, the Facebook main campus, that famous thumbs-up sign. If you go around and walk to the back, it's the Sun Microsystems logo. That sign used to belong to Sun Microsystems. And Mark Zuckerberg intentionally did not have the back painted, cuz he wanted to remind the employees of that company that they are working within the bones of a dinosaur that had come and died before, right? That anybody can be overturned. But the reason they're not being overturned is, well, I think the biggest one is the M&A. The big public companies have a lot of data. They're very good at spotting who is important. And so they make these decisions that seem insane to Wall Street, and then everybody thinks is fantastic afterwards. Mark bought Instagram for one billion dollars. People were like, he's insane, we need to have a shareholder revolt. That company's probably worth $150, $200 billion by itself now. Mark bought WhatsApp for $20 billion. People were like, this is an insane waste of, of shareholder value. Now everybody thinks that was a genius idea. And so I think stopping the ability for the companies, you know, Google and YouTube, for the companies to take their competitors out using their public market valuations, is a totally appropriate thing. And it probably doesn't require any more laws. It probably just requires more aggressive enforcement by the FTC, DOJ, and EU. I think the other thing is then to focus on, if you're gonna create new laws to regulate tech, to really, really think, and game out, what do these mean for startups. And the truth is, most of the regulation that has been passed in the last couple of years has been fantastic for the big American incumbents. GDPR is wonderful for Facebook. 
It is wonderful for some technical reasons, that Facebook runs their own ads. But it's mostly wonderful because nobody knows what it means. GDPR is interpreted by 28 different data protection authorities in 28 sovereign states. There will be actions against Facebook and Google, and other companies, in 20 different places; there will be lawsuits over the next ten years. Facebook can afford to have a lawyer in every single one of those cities, and to have people interpret the law for them, and help them stay compliant. A 20-person German startup that wants to compete against Facebook cannot afford that, right? And so they are in this massive gray area of not knowing whether they're compliant, and possibly have a competitor of theirs take them out. And so if we're gonna pass laws, we should be very careful to think about, what do these do to startups? Because in the long run, we create a huge amount of regulatory burden. This is exactly what happened in the United States with Sarbox. We thought we were regulating the big banks; we created a market where only big banks can survive. We do not wanna create a market where only big tech can survive, and everybody else gets squashed. That doesn't mean no regulation. It just means being smart about doing things like having safe harbors, having minimum sizes for companies before you enforce. Being really clear about your guidance, so people can make decisions without hiring lawyers. Lots of companies, the moment they've called a lawyer, they're out of money, right? That's the kind of thing that regulators need to think about. >> So I just wanted to come back to the anti-trust, because I think it's an important point you're making. Just because something is called Google Search, or Google News, or another function like Google Calendar, or whatnot, doesn't mean that they're different companies. The question is how the data is used, and how it's merged, and what their, you know, what their sort of ownership is. 
So I think, even if you're selling trucks, and you're opening 20 different stores, but you own all the trucks, and you determine the pricing, then it doesn't matter if you have 20 storefronts or one, let's say. So I think that that's important to note. And I'm sure Alex will be happy to hear that anti-trust law has been operating at a European level, so it applies to the entire single market. And it has not been created for tech companies, which is sometimes the perception, especially here in Silicon Valley. That among others, Europe is sort of creating law, after law, after law, to go after the big successful American tech companies. And competition has really been one of those, actually, uncontroversial, independently assessed laws that has really also withstood a lot of political storm, I would say. And it also means that the Competition Commissioner, who is a Danish woman by the name of Margrethe Vestager, really has to keep a very clear eye, and recently made a lot of enemies in Europe by not allowing the merger between two big rail companies who wanted to create one European champion against China, quote, unquote. She just didn't see reason for allowing that merger to happen. So this is the kind of policy that is done on the basis of very clear principles, irrespective of the case. And I hope it can stay that way. Because if competition law is going to be seen as instrumentalized to deal with tech, I think it will really lose a lot of its credibility, and power, as a result. >> Can I ask a quick follow-up related to anti-trust, which is, you know, we've kind of gone down this path before, right? So in the late 60s, early 70s, this large anti-trust case was brought against IBM; it lasted for 25 years, didn't result in anything. A large anti-trust case was brought in the mid-to-late 90s against Microsoft: it should be broken up into operating systems and applications. 
Those should be separate, they're clearly different kinds of things; it didn't lead anywhere, right? And there, the whole argument that was made is that there's more competition, there's always gonna be competition, right? Sun Microsystems dies, Facebook moves in; maybe 20 years from now, Facebook dies, someone else moves in. I'm wondering whether you think, even if someone were to bring the anti-trust hammer against these large companies, whether or not there's actually any chance of success of that, if there's a model that you think will lead to a real change. >> There is already a model, there's one model. And it is a case-by-case assessment of whether the laws are being respected, and the consequence is oftentimes a fine. And you can also wonder whether it hurts the big companies. I think that's a legitimate question, whether the fines are proportionate, or too heavy-handed, or not. But I'm not sure. I mean, maybe we should ask someone who knows Microsoft much better, but I've heard from a lot of people who've looked at that case that it has had an impact on Microsoft. And that it had a deterring impact on what they wanted to do, that they had a record fine at the time for their practices. They survived, but they're a different company now. So I don't know, I mean, you never have sort of the, you know, the alternative case, because there's only one outcome, but. >> Neither Microsoft nor IBM had a data advantage, where without the data, you can't get in the game, right? So over time, there were other competitors who built better products. So I feel like the social graph and the search history are incredibly, incredibly powerful, and they will not be replicated. And you can fine all you want, but, [LAUGH]. >> And therefore we have to look broadly at what needs to be done. So for example, one of the things could be the use of data for micro-targeting. 
So can you use the data that you have and combine it in all sorts of ways to address people in a very detailed way with your ads, right? I think that that's a question that is going to arise; maybe some categories are gonna be struck out. So that you don't also have this perpetual collection of data. Because then you have more targeted ads, people will click on them more. You get, you know, it's like a circular kind of process. So we should never think about one solution only. There are no magic wands. It's going to be complicated, and different outcomes have to be addressed with different tools. >> So Alex, then over to Jeremy. >> So when I look at the Microsoft case, I think the thing that we should look at is not what was the effect on Microsoft, but whether the case was relevant in the marketplace. So the Microsoft case was focused on whether Microsoft was using their near monopoly in desktop operating systems to benefit software that was bundled with it. And one of the outcomes was, they had to give choice for things like web browsers. While this fight was going on, we end up in a world now where this phone has hardware-based DRM, digital rights management, that requires that all the code in here be signed by Apple. And you are not allowed to change out any of the default apps, including the web browser, because Apple will not allow you to. And nobody talks about it, because for the entire marketplace, this is the absolute worst nightmare people imagined about Microsoft in the 1990s. Apple has gone well past, like, the most evil predictions of what Bill Gates would possibly do. But people don't worry about it, because the structure of the overall market has changed, and the anti-trust work was completely irrelevant. So I feel like, I think you're totally right, it's the M&A thing. Like, create competition, get lots of companies in here, don't let the big companies squash them. 
Don't let the big companies take them off the board by buying them. And you will naturally end up with, I think, a better equilibrium. The problem with the, I agree with the data stuff. The problem there is, Facebook used to have a mechanism for people to share their social graph. It was called the Graph API. You might have heard of this little company in Cambridge, United Kingdom, called Cambridge Analytica, that really loved the Graph API. Thanks to everybody's massive freak-out about Cambridge Analytica, Facebook will never again have to open up the social graph. And so there was a fundamental, another hard engineering tradeoff: openness, or data safety and data security; you can't have both of those. Either people can take their graphs with them, or their data is in the walled garden, and it's protected. And this wasn't like a legally decided thing, but the overall public reaction to one specific breach of that has pushed it all the way in one direction. And no company, again, will be dumb enough to allow for that kind of openness. >> Jeremy. >> So I wanna invite people to stand up by the mics. I see one on this side, I don't quite see one, is there one? >> There's, there's a mic over here too. >> Yeah, I think. >> There's one, yeah. >> Just over there, and we'll turn to questions from the audience in just a minute. But if I can take us in a slightly different direction than anti-trust. And I'm just gonna throw my cards on the table about something that I'm concerned about. Which is, I'm concerned about the breakdown in journalism, given that journalism is a key accountability mechanism in our democracies. And I think if we care about liberal democracy and its preservation, we have to deal with the fact that the shift in the direction of many-to-many communication that we've gotten with the platforms. 
The wide, you know, diversity of sources that people can tap into has come, potentially, at the expense of a business model that enabled our journalistic entities to invest in the kind of accountability journalism that we need to make our democracy work. And this gets at your point related to civil society. There's also fake news on top of that. There are filter bubbles. There are echo chambers, and all these other things which you may, and we had this conversation earlier today, Alex, you may believe are characteristic of society, and are just amplified potentially on the platforms. But nonetheless, we're at a moment where the very viability of our democratic institutions and the survivability of our institutions, I think, are a real and salient concern in a way that they haven't been in the past. Do you agree with that sort of laying of the cards on the table? Are you also concerned about truth and journalism, and our ability, from outside of government and outside of the companies, to have checks in society that come from this kind of truth-telling, and the reasoned debate around a common set of facts that is central to democratic deliberation? And if so, what do we do about that? >> I agree with almost everything you say, but I think this problem predates the platforms, right? Before the platforms we have today existed, there was the Internet, right? Before that, we had relatively few publishers. Those publishers had massive audiences that had built up over the last century. And at that point, because of the size of the audience, and the fact that they had access to classifieds and advertising, they could monetize the audience. And they had, in some cases, total power over a city or whatever, right? Along comes the Internet, and it's not just the audience that's massive, but the publishers. 
The ecosystems are also massive, and it's much more efficient to do your classifieds elsewhere, and the advertising networks move off, and suddenly you're left with these publishers who don't have an economic model to sustain it. That is the core of the problem. So there are two problems to be articulated. One is, how do you sustain, you know, hard news and good journalism? And how do you also, you know, deal with the fact that there's a lot of misinformation on top of it? These are very hard problems, they're not trivial to solve. They kind of happen regardless of whether the platforms existed or not, but then maybe they're getting worse because of the alternative places we can go and get content. And particularly, on WhatsApp, you can get content as a meme, right? Like it doesn't have to come from any source, and a lot of that can actually be fake. So I think there is going to be a certain amount of consolidation, and some of these sources are gonna get bigger, before things get better, before they become economically viable. But still, these are really hard questions, you know. How are you gonna finance the kinds of investigative journalism we need, especially at the local level, where there really aren't enough subscribers? And I think at some level, either government sponsorship, or some kind of patronage model, has to come in to help this, cuz I don't see any other way to pay for it. >> Yeah, I mean, so I think a lot has to do with the fact that all the advertisement revenue has been concentrated, so it links back to this question of monopolies, competition, etc. But I also wanna nuance it a little bit, because I see a mixed bag. I am concerned about the lack of shared debates about shared information, in the sense that, you know, my mother's generation would watch the 6 o'clock news or the 8 o'clock news. 
Or they would read the weekend paper, and a significant number of people would see similar main stories and would talk about it, with the baker, at a birthday party, at the street corner. I think now, because of this notion of filter bubbles or, you know, automatically recommended information, there's more fragmentation. And I think there is also a lot of crowding out of news by entertainment. And I think we don't talk about that enough. The idea that, you know, people like the Kardashians have more followers than any public figure, etc., says a little bit about where young people's minds are drawn to. I don't wanna oversimplify, but just touch on this issue of competition with entertainment, and maybe your current president exemplifies what entertainment mixed with current affairs can lead to. [LAUGH] >> What do you mean? >> [LAUGH] >> A reality show figure becoming president. >> Okay, that's never happened. >> No, no, but on the optimistic side, really, I mean, I also see a lot of very interesting new initiatives. One, I think the podcast as a phenomenon for longer listening, often uninterrupted, longer conversations, long reads, blogs. Products like Medium, Blendle, The Correspondent, which is a crowdfunded membership investigative journalism platform that originated in the Netherlands but is now also coming to the United States, are all developments to watch as well, which would have been impossible without both sort of the platform-type technologies and the widespread Internet access and access to mobile devices. So what we have to guard, and I think public resources should be dedicated to that, is a pluriform media landscape, and also local media. I think the cutting out of local media has been very costly for oversight of local government. And I would hope that there could be public resources dedicated to that, and similarly to foreign-language media services. 
It's really ironic that in the years that the Dutch, the French, the German, the American, the British governments were cutting resources from foreign-language media services, you know, the Russians and the Chinese were actually increasing their resources, and we can all see the consequences of that. So again, before stepping in, let's look at the landscape, let's look at where the vacuums are. Are those vacuums that jeopardize the public interest? If so, let's dedicate public resources, but let's also celebrate what's going well. >> I agree it's a huge problem, and I think one of the factors that really scares me is again this bifurcation of, good media costs money and crap is free, right? If you look at the top ten news sites on Facebook, they are all junk, and it's because it's all free. And I can pay for a New York Times subscription and a Washington Post subscription, but most people won't. And so the Times is fine, the Post is fine, cuz they have a large enough audience. But the vast majority of local media will not be able to support themselves with that. And I think this has actually been a huge missed opportunity for the big tech companies to solve, because, you know, movies and music went through the same problem of getting rid of the gatekeepers. Everybody wants something free, it's very hard to sell lots and lots of subscriptions, and the solution was these services. Some people have estimated that people spend way more money when they have a Netflix subscription or a Spotify subscription. They're spending more money on music, more money on movies, but they are willing to do it because it lowers the friction, and it's so incredibly convenient. But none of the tech companies have stepped forward and done that for local news. You can't do it for the New York Times or whatever; they can demand their 20 bucks a month. 
But for the Sacramento Bee, and the Las Vegas Review-Journal, and the local paper in Brussels, like, you could come up with a solution where you pay 30 bucks a month, and all of them get divvied out a little bit of money every single time. A number of small companies are trying this; they don't have the scale. Really only a Google, Facebook, Amazon, or Microsoft could, maybe Apple, probably not Apple cuz their devices are only used by rich people, and you need to get to all kinds of people. So those companies could try a solution here, and none of them have. And I think part of it is they're so afraid, they are so burned from touching the media, and then, you know, doing a poor job and being punished, that they're not willing to try again. And I think that's not the right solution; the solution is to go try again, and keep on trying till they find a solution that works. >> All right, we wanna go to the audience questions. Before the night ends, I'm gonna make sure we get the panelists to offer some comments on the announcements that Mark Zuckerberg made today about the move towards privacy and changing social networking itself. But let's get a couple of questions in, please. >> Hi, my name is Vaishnavi, and I really appreciated this conversation. It's one of the best ones I've heard so far in this series. And I think part of that is- >> How much better? Are we the best? >> [LAUGH] >> I would say- >> I don't wanna just be, like, kind of good. >> Definitely top ten. [LAUGH] >> Top ten out of five? >> [LAUGH] >> Wow, next question, sure. >> [LAUGH] And I think the reason it's been so interesting is precisely because we've had this discussion between representatives of, like, governments, as well as representatives of, like, private corporations. 
And it leads me to the big question I have, which is, there's been a lot of discussion today around, like, norm setting, and what are the norms by which we're going to design our products, and design our companies, and design our laws. But you know, while it's great that the US and Europe are represented here, there is a vast diversity of countries, my own included, that are not represented here, which have vastly differing social norms. And so I'm curious to hear, from the perspective of someone who's a member of, like, a supranational body, like the European Parliament, as well as people who have worked at companies that catered to really diverse populations around the world. What do you think about some of these decisions being up-leveled to multinational, sorry, multi-stakeholder institutions, groups like the United Nations, or, like, you know, international governing bodies? And one of the arguments that can be made there is, we want civil society to be involved, we want governments to be involved, we want corporations to be involved. These institutions have a history of engaging with a vast variety of stakeholders, and coming up with conventions or treaties, many of which our countries have signed on to. What are your thoughts on the merits of such a solution, the drawbacks, the practicality? Is that something that could ever happen? I'd love to know what you think. >> Mm-hm. >> So, I think your point is very important, that we should always think about the possible impacts beyond borders, and also be inclusive in decision-making processes, think about all the billions that are not online yet, instead of talking only about those who are wealthy enough and connected enough already. 
But then to think about sort of global norm setting, the United Nations, I'm afraid that the differences between governments at this moment in time, and the way in which the United Nations works, are not leading to very fruitful outcomes, and I'm being diplomatic. And it's unfortunate, because while technology is connecting the world, we see a sort of remanifestation of nationalism in many parts of the world. And attempts by governments to sort of cut up the Internet, to fragment the open Internet, which I think will really jeopardize much of the best potential of our technological revolution. So you also mentioned the idea of getting around the table with multiple stakeholders, civil society being very important, governments, the private sector, etc. And I think that that is a good model, but it is not a magic solution in and of itself. And the answer to many questions cannot just be multi-stakeholderism. For the people who deal with internet governance discussions, I think you know what I'm talking about. If it is not clear which stakeholder brings which weight and which responsibility to the table, and therefore is willing to accept which accountability. Because it's really hard to compare a multi-billion-dollar company with one person at the table and a small, you know, NGO dealing with women's safety online, for example. And to say, well, all the stakeholders were at the table. I mean, what does it mean? So we need to give more meaning to those processes, and then I think something can be done. I myself am involved in a number of multi-stakeholder initiatives that seek to establish norms as suggestions that then others can pick up, which have really been thought through by different experts in an attempt to bring minds together in viable concepts. But it's hard work, and it's very urgent, so I'm glad that you asked the question. >> [CROSSTALK] >> I agree with you, I think there is no alternative at some level. 
I think it's easier to start at the most objective kinds of things for which we can design norms, like fact checking, right? I mean, there's already a fair bit of collaboration between the tech platforms, you know, especially around elections and whatnot. If you go all the way to, like, what is indecent and what is not, [LAUGH] that's gonna vary tremendously, and that's not even practical. So there's gonna be a certain number of additional things you can layer on top of it. But I am also extremely skeptical, given how the United Nations works, whether it'll have teeth. But I think you gotta start somewhere; at least getting the platforms to say, we're gonna build norms together involving civil society, would be a great first step. And sort of fighting fake news is one thing, and then going beyond that would be nice. >> So if I may guess, are you talking about India when you talk about your- >> No- >> No. >> It's actually being [INAUDIBLE]. >> Yes, so I'm gonna talk about India instead. >> [LAUGH] >> Singapore is interesting, but Singapore has kind of got its act together and knows what it wants, right? I think India is by far the most interesting regulator of the Internet for the next ten years- >> Mm-hm. >> Right? So they are so economically important, especially to the tech companies that are locked out of China, which is mostly Google and all of Facebook. You can't be locked out of China and India and consider yourself, like, a 21st-century company, right? And so India has the economic wherewithal to enforce its rules on the tech companies, which Russia, for example, does not. So that makes it incredibly important. India does not have the kind of protection for individual rights and free speech that we enjoy in North America and in Western Europe. In fact, India some years is the number one requester of censorship on Facebook and other platforms. 
Cuz they have blasphemy laws, they have very strong laws against defamation. There are a lot of things you can get content taken down for in India. They have a massive problem with violence that is spread by rumors. That violence is being spread on WhatsApp. WhatsApp has no algorithmic ranking. WhatsApp is communications that are sent to you by your friends and family members. So the existence of this problem kinda puts the lie to some of the other stuff, about it's all about the algorithm. No, it's not. It's about the emergent human behavior of how humans wanna use these systems. WhatsApp has difficulties solving this problem because they encrypt all their messages. So they have decided to give 500 million Indians complete privacy from Facebook, and from the US government, and from the Indian government. But the flip side is they cannot stop violence. And the government is complaining about violence on WhatsApp while the BJP, the current ruling party, is also running at least one warehouse of people who are sending fake messages that are trying to stir up violence against Muslims. So it is a fascinating, incredibly complicated country. And what happens in India is gonna be really, really important, as India looks at the American system, the Wild West, but we're making a ton of money. The European system, highly controlled, but of the top 20 Internet companies by revenue, only two of them are European. There'll be one after Brexit, Spotify, so Europe's not really making money off of the Internet like anybody else. China, making money, economic growth, but huge totalitarian control. India is going to try to pick and choose from these models, and what they choose will become a model for Brazil, for Indonesia, for Sub-Saharan Africa. They will set the standard for what is allowable and not. And I think it is a massively under-covered story, and something that we in the West aren't paying enough attention to, because India is a very complicated place. 
>> Next question. >> Thank you, I'm Raj. I think here I'll go by the title, whatever it says, pr, power of private platforms. Just only thing I'd like to add here is along with the government. So power of platforms, there have one party in India, and just wanna call it out. They use Google Doc as a medium to distribute, what content to distribute. They use when to distribute using the Google Doc to define the timing, and they use Facebook and Twitter to post the messages what they want to com, communicate. And use Cambridge Analytica to analyze who to target and use phone number to match with Facebook, to analyze that people's profiles so that they can be targeted if they're to manipulate the voting system. So the question, and Whatsapp is under the method to distribute to. Now Mark Zuckerberg is planning to consolidate all the messaging platforms, so that's even going to get, get even worse. So my question is why we are allowing all the political parties to run on private platforms and what, what is the regulatory, body is failing to stop using private part, private platforms to manipulate people's vote and polarizing people? Is it, there is a Ted talks about automating the politician. Should we go towards that model where we can get rid of all these politicians and focus on artificial intelli- >> And focus on- >> [LAUGH] >> Wow. >> So, that's the- >> That's a destructive technology. >> That's a- >> You gotta be taxi drivers first. But first it's [LAUGH]. >> Yeah, yeah. >> So if we can get rid of the the first per, person who is causing the problem, then we can subsequently solve all the other problem. Just want to hear from the public and the private side. The reason why I said the power of platforms is viductible, so far only one platform. But combining these platforms together with the governing political parties, it's a dangerous trend which is getting set, not just in India, it's going to get transferred to other parts of the world. 
Just wanted you to share my, share your thoughts, thank you. >> Weigh in's, so not Russian interference on the platform, but the governing party interference to maintain power on the platform across a unified messaging system that's, end to end encrypted. >> Right, which is way more more common problem than the Russians. Like while we've been focused on foreign interference The truth is the vast majority of inauthentic political interference and information operations are conducted by governments and by private actors within countries against their own citizens. This makes sense, it is actually quite difficult for the Russians to fill a whole building full of people who speak well enough English to pretend to be Americans in Texas and then also pay them nothing, right? But if you're within one of these country and you have a activated group so like that BJP problem in India. They don't pay these people, they've got like a million people who are basically signed up to be part of like politically active and who will forward these messages on their behalf. And so it is a massive problem. I think there's not a lot you can do about the actual messaging, because if they're able to get hundreds of thousands of people to do something, they can do that no matter what. What we do need to focus on is the advertising because the advertising is where you can have the hyper targeting. And I totally agree I think we need to have standards that take the ability to hyper target, ads, out of the hands of politicians. I would love to see a standard between the United States and Europe, because if the U.S. and Europe came up with like, this is the definition of what a political ad is. And this is how we wanna restrict the targeting, then that would become a world-wide standard pretty quickly. But we've gotta do that now in the United States because right now, billionaires are gunning up for the 2020 election. 
Mike Bloomberg announced that he's gonna spend hundreds of millions of dollars on a data operation targeting Donald Trump. Even if you agree with that goal, which I do, that should still be terrifying. That's it's like our billionaires are better than your billionaires, so we're gonna win. We gotta stop that and have, you know, disarmament of both sides. And the only way to do that is with the law. >> Let's talk- >> I'm so glad to hear this. You know, this was really music to my, to my ears but, I think you are absolutely right. It's also the the notion that money plays such an extraordinary role here in politics and that the tech platforms have really, you know, amplified that. Because even if the platforms themselves and the algorithms are 100% neutral, still the more money you pay the more people you can reach. And, and we have a profoundly different system, in most European countries. Even though they're two, there is no transparency about party funding and we have the same problems with micro-targeted advertisements. But I think another point that you made or at least that I thought you were making is about where politicians broadcast official messages. I think there are two there is a tendency to do you know a live stream on Facebook or to go to a platform instead of doing a publicly available website. And I think that that too could be very simple agreement that political parties and politicians make together that they will share their information in a, in a public space like the website of a political party or a blog of sorts. And that they also take responsibility not to use hacked, stolen, docs information, not to use deep fakes against opponents. I have worked on a pledge across the Atlantic which is always nice, where we ask political parties and candidates to sign up. 
It's a certain kind of behavior which, which also includes educating campaign staff, for example, because there's also mockertive sort of very perverse incentives, kind of practices that we saw here in, in Alabama. Maybe you've read about it candidates that was using Kremlinesque tactics to beat the opponents. And after I think the Washington Post investigated this story, everybody is pointing to everyone. So the candidate says, I had a campaign team, campaign team says, well we hired a consultant, consultant says, well I don't know, ask the donors. And at the end of the day nobody knows but fake accounts were created, misinformation was shared, and that also is your road in democracy. And if, if that's happening in our societies, I totally agree we have to be extra mindful when hate speech incitement of violence is shared whether it's on the public or private platform. I have seen it first hand in Kenya where I was leading the election observation mission where people were sending pictures of dead bodies in closed chat groups, messenger groups. And it really led to real life action even though the pictures came from another country, another year, and had nothing to do with, with the election that was at hand. So it's very, very dangerous. >> So the upcoming India election is gonna be really scary because as you mentioned, the BGP has got for example, at they have two WhatsApp groups for every polling place that is going to be responsible for local distribution. You don't even have to do micro targeting, those people know how to customize those messages, right? So you don't need extra data. And and so memes are gonna be spread in advance for the election that is going to, end up influencing and we don't know who. Nobody is responsible because none of this is visible right, is all epic. And I think that is a big concern I have as Zack moves ahead with his you know, all of my properties gonna be completely epic. 
Is that there is gonna be zero accountability if its happening there is gonna happen here and we gonna have this replay of the 2016 election except with no editing. >> But that's where I wanted to go. So the Zuckerberg announcement for any of you that haven't seen it is that, next week we have Brian Acton coming, the co-founder of WhatsApp. And so, the approach that WhatsApp pioneered, this end-to-end, fully encrypted communication which then Zuckerberg bought and Facebook owns, and what continued in this end-to-end, encrypted way. Zuckerberg has announced, that he's going to now unify all the messaging services, Facebook Messenger, Instagram messaging, WhatsApp messaging, on the same platform in an end-to-end encrypted way. Mean exactly that there's a massive wading of privacy against security and accountability, external accountability. So I wanna ask anyone on the panel how you think about that decision, this would be the value of the national security or safety. And then secondly, we were talking about targeted advertising here. End-to-end encryption, it's not that government can't get access to the message. Neither can the company, and so there's no targeted advertising against the content of messages that are transmitted through end to end encryption. So what's the, what's the monetization strategy if all of Facebook is moving and it's messaging services at least, to end an encryption where you can't do targeted ads. Privacy and the monetization, Alex you spent some time at Facebook you wanna start? >> Yeah, so, effectively the way you think about this is like I said, Facebook is traditionally being the center of this triangle of optimization, Zuck has pulled the level all the way to the privacy side. He's ex, he's throwing over, he's trying to tweak in to fix things, and I think he's realized he has gone himself in the trap where nobody would be happy with his content moderation decisions. 
And where everybody, the same people can complain there's none of privacy and you are not invading people's privacy enough the ba, the people I don't like. You're not invading their privacy enough. And so he has flipped the table and said we're going all the way to the privacy side, and that will have economic problems for the company, but we'll figure those out later. He has, that has happened multiple times in Facebook's history, where they have done crazy changes and then somebody else figures out how to make money later. And so while he, Zack seems very confident, I'm sure there are other people, who are drinking heavily tonight, whose job it is to make the shareholders happy. So, anyway it's gonna have humongous impact, it's going to provide a huge amount of privacy to people. There will be a huge privacy uplift. He's also kind of calling everybody's bluff like you wanted privacy, now you're gonna get it. And the flip side of privacy is everybody's gonna have to be individually responsible for the information they consume and the information they send. He's putting in everybody's hands, you are responsible for what you do online, I can't stop you from doing bad things anymore, I'm not gonna stop you from doing bad things. Economically, how are they gonna do it? They're working on a block chain model for payments, you are gonna have Zuckebucks on your mobile device. And they will because other people have tried this, but because Facebook can rolled out up to 2.5 billion people, they will create a new super natural currency that will float, that will be tied to a basket of multiple currencies, not just the dollar. That is going to be able to be used for remittances and payments around the world with very very low friction. We will take a small percentage of that, I would not buy Western, Western Union stock or Visa stock, tonight, because if they are successful, they will be able to completely change all economics and they will not need ads anymore. 
But it is a humongous gamble, right? Like they are definitely, they have jumped off the bridge, and they're putting the, they're putting the, they're sewing the parachute as they fall off the bridge, right? >> [LAUGH] >> And so that's crazy, and then something I have said privately to my friends here and publicly in Twitter, nobody has made these decisions before especially at this scale. I think Facebook needs to have these discussions publicly. And needs to take the input from people, from law enforcement who is gonna freak out about this, from civil society, from privacy advocates. These discussions need to happen in the open, not just in private, because when they make decisions in private, they don't take all that input and they always will get trouble with. It is better to have a public discussion of how this trade-off of works. Because despite him having the big picture, there are tiny little decisions that were actually have a lot of privacy, and safety, and security impact. >> He, he didn't talk about, encrypting, newsfeed per se. So it is theoretically possible that he encrypts the other three, but keeps newsfeed clean. In which clear, in which, in which case you can actually profile your, and you can put ads on your WhatsApp or messenger, cuz they are all tied together. And you, there's a profile available from what's happening on your newsfeed, okay, firstly. But I think it, it should really scare us to think of more things becoming opaque. And the analogy that Zack likes to use is like it's like you're living room, what's happening in your living room is private to you, what's, what you're saying in a phone conversation is private. But these groups are 256 people. So you can, and, and in a couple of hops it can be with 10,000, 10,000 people. Now, what, what does that really mean? How can you have something that is unpoliceable that is being transmitted to 10,000 people? 
Is it a point at which you can say it's effectively in the public space and then it can be policed, right? So it, and, and, and that is the question we have to have as a society, what is the norm around what can be policed and what's really privileged? >> Mm-hm, good answer. >> Well, I was just thinking back about the Declaration of Independence of Cyber Space and all the promises that John Perry Barlow made in that, and where we are 22 years later. So, I'm always, and maybe it's my Dutch, heritage, a little bit skeptical about very, very big promises. Because, I think there'll be many factors impacting this. But, I agree with what Alex said that this is of a way of calling the bluff, and, we'll have to see what it leads to. I mean, I think this, this idea of, of getting input for many would've been a good thing to do prior, but let's see. Because it's always harder to walk back while saving face if you made a big announcement. I, I never really see leaders doing that so easily whether they're politicians or in in the private sector, but let's see, let's see. I think the big problem Facebook has, and Mark Zuckerberg personally has right now is that there is so much trust lost that it's gonna be hard for people to take this at face value. And to think well, he says A, so he must mean A, so let's, let's read it as such. I think that a lot of people are gonna read, and I already saw initial responses are gonna read, ill intention into it. >> But that, I mean, that's great, it doesn't matter though. Like, they're gonna do it or they're not gonna do it, and it's gonna be mathematically proven or not, and I think that's what he's doing. He's like, you don't have to trust me. If I mathematically build this, that everything's encrypted. And on the, so, civil society governments have an option here, and the only laws that have passed around here have been pro-privacy, right? 
None of the laws, none of the legal requirements have been put on Facebook for the last couple of years, especially from Europe. None of those requirements have been around responsibility or safety. They've all been for privacy, and so he's partially reacting to that. GDPR might destroy the entire idea of online ads in Europe. If he's reacting to that then this it it. But like, this is kind of, like I said, if you say that's all we care about, is all we care about privacy and data control. And data should never leave, because if Cambridge Analytica was the worst thing that ever happened on the planet. And running ads is immoral, then this is the kind of outcome you're going to get if these companies are gonna survive. >> Yeah but the question is whether it's gonna be allowed I think still. So if, yeah, I, I mean, of course it has to be mathematically proven. But I'm, I'm thinking about American law enforcement now because I think Americans hate their government, unless it's about national security. And I think government is pretty hands-off unless it's about national security. So if suddenly all the communications of people across different platforms are gonna be sucked out of public sight, I think we may see a first move on the American side. >> Right, it is definitely a pivot. It's a pivot away from China, because he can never go back to China now. >> Mm-hm. >> That is dead as long as the Chinese Communist Party is in charge probably, right? But it's also a pivot away from America because, you're right, for most of the world it is massively privacy enhancing. Because this data has been available to the US government under FSA and FAA 702, which is the whole point of the big fight in the privacy shield. >> Yeah. >> And that, and so, for European citizens you now have the benefit of the American government has no access to the communications of your citizens. The flip side is European governments have no access to the, the communications of their citizens. 
And so that is the weighing that's gonna have to happen. And so the privacy people are gonna be super excited. Privacy shield doesn't even matter any more because it's data is completely out of, out of, mathematically out of the network. >> But by when you think this could have been, by when you think this is realized, cuz it's not, it's been announced now, so what is the realistic timeline for this? >> Right, so, so I, I tink pretty clearly he's leaving his options open of how far they go, right? >> Mm-hm. >> So people have proven that one-to-one messaging, totally doable. Small group messaging, doable. Facebook has, WhatsApp has implemented stories, so SnapChat-like functionality, totally doable. Newsfeed, as people know it right now, I don't know how I would architect that to make it and encrypted. That is going to be very difficult. And so I think we're probably looking at 18 months to two years, at which point the name spaces are merged and you have ended encryption between Instagram, WhatsApp, and Messenger. And then now they've got a platform by which they can experiment of what other parts of Facebook can now be ported on top of this to take it to make it encrypted. And then part of that will be what is the legal pushback, what has been the privacy pushback, what has been the safety pushback on us doing these things? And so like, while he said, this is something we want to do, he's really only promised it for messaging. Which he's already done with WhatsApp, so he hasn't promise much beyond, what has been kind of built in and accepted by governments. But you're right, like just the announcement, there was I am sure an interesting phone call, from Washington DC to, to Menlo Park tonight. And I think the US government and the other five eyes, because the other anglophone countries can get access via FAA through requests from US government. 
So the Australia, New Zealand, Australia especially has been very aggressive at encryption, they're gonna be very upset about this. >> All right well we, we wind up our class next week with a final session. No one week break, we'll see you next week in the 13th, but thank our guests for, this fantastic discussion this evening. >> [APPLAUSE]



  Independent   Conservative    Labour    Liberal Democrat   UKIP   Speaker    Green Party

Constituency[nb 1] | Electorate[1] | Majority[nb 2] | Member of Parliament | Nearest opposition
Aylesbury CC | 82,546 | 14,656 | David Lidington | Mark Bateman
Beaconsfield CC | 77,524 | 24,543 | Dominic Grieve | James English
Buckingham CC | 79,615 | 25,725 | John Bercow | Michael Sheppard
Chesham and Amersham CC | 71,654 | 22,140 | Cheryl Gillan | Nina Dluzewska
Milton Keynes North CC | 89,207 | 1,915 | Mark Lancaster | Charlynne Pullen
Milton Keynes South BC | 92,417 | 1,725 | Iain Stewart | Hannah O'Neill
Wycombe CC | 77,087 | 6,578 | Steve Baker | Rafiq Raja

History of constituencies and boundaries

Prior to 1832

Since 1295, the Parliamentary County of Buckinghamshire, in common with all other English counties regardless of size or population, elected 2 MPs to the House of Commons in accordance with the freehold property franchise. The county also included six Parliamentary Boroughs, namely Amersham, Aylesbury, Buckingham, Chipping Wycombe, Great Marlow and Wendover, each returning 2 MPs.


The Great Reform Act of 1832 radically changed the representation of the House of Commons: the County's representation was increased to 3 MPs, and the Boroughs of Amersham and Wendover were abolished. Unusually, the contents of the Parliamentary Borough of Aylesbury were defined within the Act itself to include the "Three Hundreds of Aylesbury", which extended the seat to include Wendover and Princes Risborough.[2]

Under the Reform Act of 1867, the representation of the Boroughs of Buckingham, Chipping Wycombe and Great Marlow was reduced to 1 MP each.


Under the Redistribution of Seats Act 1885, the County was divided into 3 single-member constituencies, namely the Northern or Buckingham Division, the Mid or Aylesbury Division and the Southern or Wycombe Division. The remaining Parliamentary Boroughs were all abolished and absorbed into the County Divisions which took their names, with Great Marlow being absorbed into the Wycombe Division, which also included Beaconsfield and Slough.

The table shows an approximate representation of the development of constituencies in Buckinghamshire since 1885. The text below gives a more detailed description.

1885–1945 | 1945–1950 | 1950–1974 | 1974–1983 | 1983–1992 | 1992–2010 | 2010–present
Buckingham CC | Buckingham CC | Buckingham CC | Buckingham CC | Milton Keynes CC | North East Milton Keynes CC | Milton Keynes North CC
 | | | | | Milton Keynes South West BC | Milton Keynes South BC
 | | | | Buckingham CC | Buckingham CC | Buckingham CC
Aylesbury CC | Aylesbury CC | Aylesbury CC | Aylesbury CC | Aylesbury CC | Aylesbury CC | Aylesbury CC
 | | | Chesham and Amersham CC | Chesham and Amersham CC | Chesham and Amersham CC | Chesham and Amersham CC
Wycombe CC | Wycombe CC | Wycombe CC | Wycombe CC | Wycombe CC | Wycombe CC | Wycombe CC
 | Eton and Slough CC | South Buckinghamshire CC | Beaconsfield CC | Beaconsfield CC | Beaconsfield CC | Beaconsfield CC
 | | Eton and Slough BC | Transferred to Berkshire (Slough BC) | | |
BC = Borough Constituency (prior to 1950 - Parliamentary Borough or Division thereof)

CC = County Constituency (prior to 1950 - Parliamentary County or Division thereof)


Under the Representation of the People Act 1918, the three County Divisions were retained, with altered boundaries: north-eastern parts of Aylesbury, including Linslade and Wing, were transferred to Buckingham; Beaconsfield and Amersham were transferred from Wycombe to Aylesbury; and Wycombe gained Eton from the abolished Parliamentary Borough of New Windsor in Berkshire.


The House of Commons (Redistribution of Seats) Act 1944 set up Boundary Commissions to carry out periodic reviews of the distribution of parliamentary constituencies. It also authorised an initial review to subdivide abnormally large constituencies in time for the 1945 general election. This was implemented by the Redistribution of Seats Order 1945, under which Buckinghamshire was allocated an additional seat. As a consequence, the new County Constituency of Eton and Slough was formed from the Wycombe constituency, comprising the Municipal Borough of Slough, the Urban District of Eton and the parishes to the south of Beaconsfield making up the Rural District of Eton. To partly compensate for the loss of these areas, the parts of the Rural District of Wycombe lying outside the Wycombe constituency, which included Princes Risborough and Hughenden, were transferred to it from Aylesbury.


The Representation of the People Act 1948 increased the county's representation once again, from 4 to 5 MPs, with the creation of the County Constituency of South Buckinghamshire. This comprised Beaconsfield, Amersham and the Chalfonts, transferred from Aylesbury, and the Rural District of Eton, transferred from Eton and Slough (which was redesignated as a Borough Constituency). There were no changes for the 1950 general election under the First Periodic Review of Westminster Constituencies.


Under the Second Periodic Review, representation was increased to 6 MPs with the formation of the new County Constituencies of Beaconsfield and Chesham and Amersham, which largely replaced the abolished South Buckinghamshire constituency. Beaconsfield comprised the Urban District of Beaconsfield and the Rural District of Eton, while Chesham and Amersham combined Amersham and the Chalfonts with Chesham and the remaining, northern part of the Rural District of Amersham, transferred from Aylesbury. The northern parts of the Rural District of Wycombe, including Princes Risborough (but not Hughenden), were transferred back from Wycombe to Aylesbury. Buckingham lost Linslade, which had been transferred to Bedfordshire on its amalgamation with the neighbouring Urban District of Leighton Buzzard and was now included in the constituency of South Bedfordshire.


The Third Review reflected the changes to the county of Buckinghamshire arising from the Local Government Act 1972, under which Eton, Slough and some surrounding areas were transferred to Berkshire. The constituency of Eton and Slough was abolished: the area constituting the Borough of Slough formed the new Borough Constituency of Slough, while the small Urban District of Eton, which was absorbed into the Royal Borough of Windsor and Maidenhead, was included in the County Constituency of Windsor and Maidenhead. The parishes of the Rural District of Eton that were also transferred to Berkshire, which included Datchet, were removed from Beaconsfield and included in the new County Constituency of East Berkshire.

In the north of the county, the new County Constituency of Milton Keynes was formed from parts of the Buckingham constituency. This reflected the growth of the new town of Milton Keynes since its foundation in 1967. The new constituency comprised the Borough of Milton Keynes, with the exception of Stony Stratford and Wolverton, which were retained in Buckingham. In turn, Buckingham gained north-western parts of the Aylesbury constituency.

Elsewhere, Great Missenden was transferred from Chesham and Amersham to Aylesbury, Hazlemere from Wycombe to Chesham and Amersham and areas to the east of High Wycombe from Wycombe to Beaconsfield.


Uniquely, outside the normal cycle of periodic reviews by the Boundary Commission, the constituency of Milton Keynes, owing to its rapid growth, was split into two separate constituencies for the 1992 general election: the County Constituency of North East Milton Keynes and the Borough Constituency of Milton Keynes South West. Stony Stratford and Wolverton were transferred from Buckingham and included in Milton Keynes South West.[3]


The Fourth Review saw only minor changes to the Buckinghamshire constituencies, including the transfer of the District of Aylesbury Vale ward of Aston Clinton from Aylesbury to Buckingham.


In the Fifth Review the Boundary Commission for England proposed changes to realign constituency boundaries with the boundaries of current local government wards, and to reduce the electoral disparity between constituencies. The changes included the return of Great Missenden to Chesham and Amersham, Hazlemere to Wycombe and Aston Clinton to Buckingham. In addition, Marlow was transferred from Wycombe to Beaconsfield and Princes Risborough from Aylesbury to Buckingham. The boundary between the two Milton Keynes constituencies was realigned and they were renamed as Milton Keynes North and Milton Keynes South.

Name Pre-2010 boundaries
  1. Aylesbury CC
  2. Beaconsfield CC
  3. Buckingham CC
  4. Chesham and Amersham CC
  5. Milton Keynes South West BC
  6. Milton Keynes North East CC
  7. Wycombe CC
Revised name Post-2010 boundaries
  1. Aylesbury CC
  2. Beaconsfield CC
  3. Buckingham CC
  4. Chesham and Amersham CC
  5. Milton Keynes North CC
  6. Milton Keynes South BC
  7. Wycombe CC

Changes proposed for 2022

The Boundary Commission for England submitted its final proposals in respect of the Sixth Periodic Review of Westminster Constituencies (the 2018 review) in September 2018. If these proposals are approved by Parliament, they will come into effect at the next UK general election, due to take place in May 2022 under the terms of the Fixed-term Parliaments Act 2011.

Under the terms of the Parliamentary Voting System and Constituencies Act 2011, the Sixth Review was based on reducing the total number of MPs from 650 to 600 and a strict electoral parity requirement that the electorate of all constituencies should be within a range of 5% either side of the electoral quota. The review was carried out using the official UK electorate figures for 2015 and the electoral quota was set at 74,769, establishing a range of 71,031 to 78,507.[4]
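The quoted range follows directly from the quota. As a quick arithmetic check (a worked sketch, not part of the source article), it can be reproduced by rounding the 5% band inward to whole electors:

```python
import math

# Electoral quota set for the 2018 review, taken from the figures above.
quota = 74_769

# Every constituency electorate must lie within 5% either side of the
# quota, so the permitted band is rounded inward to whole electors.
lower = math.ceil(quota * 0.95)   # smallest permitted electorate
upper = math.floor(quota * 1.05)  # largest permitted electorate

print(lower, upper)  # 71031 78507, matching the range quoted above
```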

In order to meet these requirements, the Commission was able to treat Buckinghamshire (including the unitary authority of Milton Keynes) as a sub-region of the South East Region and recommended that the county retained seven seats, one of which (Beaconsfield) was unchanged. The constituency of Aylesbury would be moved northwards, gaining south-eastern parts of Buckingham, including Wing. South-eastern areas, including Stokenchurch, would be transferred to Wycombe and south-western areas, including Greater Hughenden, to Chesham and Amersham.[4]

It was proposed that Buckingham regain the parts of the Borough of Milton Keynes it had lost in 1992 (Wolverton from Milton Keynes North and Stony Stratford from Milton Keynes South) and, consequently, be renamed Buckingham and Milton Keynes West. The boundary between the two Milton Keynes constituencies would be realigned once again, with Bradwell and Stantonbury being transferred from North to South, and south-eastern parts of the Borough moving in the opposite direction. Milton Keynes North would be renamed Milton Keynes North East.[4]

Current constituencies | Electorate[5] | Proposed constituencies[6] | Electorate[6]
Aylesbury CC | 77,463 | Aylesbury CC | 77,715
Beaconsfield CC | 73,984 | Beaconsfield CC | 73,984
Buckingham CC | 74,882 | Buckingham and Milton Keynes West CC | 77,080
Chesham and Amersham CC | 68,560 | Chesham and Amersham CC | 77,089
Milton Keynes North CC | 83,348 | Milton Keynes North East CC | 78,294
Milton Keynes South BC | 86,585 | Milton Keynes South BC | 74,374
Wycombe CC | 71,712 | Wycombe CC | 77,998
Total | 536,534 | Total | 536,534


The aggregate number of votes cast for each political party, or for independent candidates, across the constituencies comprising Buckinghamshire at the 2017 general election was as follows:[7]

Party | Votes | Votes % | Seats
Conservatives | 190,111 | 47.0 | 6
Labour | 118,514 | 29.3 |
The Speaker | 34,299 | 8.5 | 1
Liberal Democrats | 25,828 | 6.4 |
Greens | 16,335 | 4.0 |
UKIP | 13,031 | 3.2 |
Independents | 6,258 | 1.6 |
Christian Peoples Alliance | 169 | 0.0 |
Total | 404,545 | 100.0 | 7
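As a sanity check on the vote table above, the total and the leading percentage shares can be recomputed (a throwaway sketch, not part of the source article):

```python
# 2017 aggregate votes across the seven Buckinghamshire constituencies,
# copied from the table above.
votes = {
    "Conservatives": 190_111,
    "Labour": 118_514,
    "The Speaker": 34_299,
    "Liberal Democrats": 25_828,
    "Greens": 16_335,
    "UKIP": 13_031,
    "Independents": 6_258,
    "Christian Peoples Alliance": 169,
}

total = sum(votes.values())
# Percentage share of the aggregate vote, to one decimal place.
shares = {party: round(100 * v / total, 1) for party, v in votes.items()}

print(total)                    # 404545, as in the table
print(shares["Conservatives"])  # 47.0
```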

Historical representation by party

A cell marked → (with a different colour background to the preceding cell) indicates that the previous MP continued to sit under a new party name.

1885 to 1945

  Conservative   Liberal   Liberal Unionist

Constituency 1885 1886 89 91 1892 1895 99 1900 1906 Jan 10 Dec 10 12 14 1918 1922 1923 1924 1929 1931 1935 37 38 43
Aylesbury F. de Rothschild W. de Rothschild L. de Rothschild Keens Burgoyne Beaumont Reed
Buckingham E. Verney Hubbard E. Verney Leon Carlile F. Verney H. Verney Bowyer Whiteley Berry
Wycombe Curzon Grenfell Herbert Cripps du Pré Woodhouse Knox

1945 to present

  Conservative   Independent   Labour   Speaker

Constituency 1945 1950 1951 52 1955 1959 1964 1966 1970 Feb 1974 Oct 1974 78 1979 82 1983 1987 1992 1997 2001 2005 09 2010 2015 2017 19
Eton and Slough Levy Brockway Meyer Lestor moved to Berkshire
Aylesbury Reed Summers Raison Lidington
Buckingham Crawley Markham Maxwell Benyon Walden Bercow
Wycombe Haire Astor Hall Whitney Goodman Baker
Buckinghamshire South / Beaconsfield (1974) Bell Smith Grieve
Chesham and Amersham Gilmour Gillan
Milton Keynes / NE Milton Keynes (1992) / MK North (2010) Benyon Butler White Lancaster
Milton Keynes SW / Milton Keynes S (2010) Legg Starkey Stewart


  1. ^ BC denotes borough constituency, CC denotes county constituency.
  2. ^ The majority is the number of votes the winning candidate receives more than their nearest rival.


  1. ^ "2017 Electorates".
  2. ^ "H.M.S.O. Boundary Commission Report 1868, Aylesbury". Retrieved 9 February 2019.
  3. ^ "The Parliamentary Constituencies (England) (Miscellaneous Changes) Order 1990". Retrieved 10 February 2019.
  4. ^ Boundary Commission for England, 2018 Review (September 2018). "Final recommendations report".
  5. ^ Boundary Commission for England, 2018 Review, Electoral data (24 February 2016). "The electorate of each region subdivided by both local authorities and each existing constituency".
  6. ^ Boundary Commission for England, 2018 Review (September 2018). "Final recommendations constituency list (with wards)".
  7. ^ "2010 Electorates".
This page was last edited on 14 October 2019, at 00:09
Basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc. WIKI 2 is an independent company and has no affiliation with Wikimedia Foundation.