
Verified Audit Circulation

From Wikipedia, the free encyclopedia

Verified Audit Circulation
Type: Private company
Industry: Publishing
Founded: Los Angeles (1951)
Founder: Geraldine Knight
Headquarters:
Area served: United States
Key people: Tim Prouty (CEO)
Products: Circulation audit reports, delivery verification reports, research
Website: Verifiedaudit.com

Verified Audit Circulation is a United States company founded by Geraldine Knight in 1951 that conducts circulation audits of both free and paid print publications and of traffic figures for websites. The company also provides custom research and verifies field delivery of products such as yellow pages, branded delivery bags, and door hangers.[1]

As an independent audit firm, Verified Audit Circulation works with circulation figures provided by its publisher clients, verifying or adjusting those numbers based on an examination of each publication's printing and financial records. Field research may also be conducted to confirm circulation figures. Circulation audit findings are compiled and released in audit reports, which advertisers use to make decisions about advertising placements.[2]

YouTube Encyclopedic

  • The Future of Computational Journalism
  • ODH Lightning Rounds 2018
  • OhHeyMatty AMA Future Of Cryptocurrency EOS ethereum analysis NEO ICON Wanchain NEX Tether

Transcription

[MUSIC] Good evening, everyone. We were a little worried that we wouldn't have a good turnout, because of the rain. But thank you all for braving that rough weather out there. I told my students today, who were all complaining, that at any other college they'd be dealing with this every day. But at Stanford, we panic when it rains. But I think we made it through the day, and thank you again for coming. I've probably introduced some of you before, because we have a series of events for alumni. So thanks to the Stanford Alumni Association for co-hosting this event. It's also sponsored by the School of Engineering and the School of Humanities and Sciences. In addition to the alumni we have here, the graduate and undergraduate alumni, we have members of the community here, and we are also live streaming to thousands of alumni and friends all over the world. So this is the second in our series called Intersections, and the point of this series is to bring together faculty from the School of Engineering and the School of Humanities and Sciences to share insights on a common theme. And tonight's theme is journalism, and particularly how journalism is being influenced by big data and new computational tools. At Stanford in general, we like to recognize and emphasize that when we work on the world's most difficult problems, we don't do it as scholars in isolation. We do it by working together across disciplines, and today's panelists are a perfect example of that philosophy. So let me go ahead and introduce the panelists. We have Maneesh Agrawala. Maneesh is also an alum of Stanford, so he'll fit in with the group. He got an undergraduate degree in math in 1994, and he cites his most memorable course as an undergraduate math major as Introduction to Computer Graphics. That set him on the course to pursue a PhD in computer science, which he did here at Stanford. He received that degree in 2002, then he went across the bay to Berkeley, where he served on the faculty in the EECS department for nearly a decade. One of the interesting things that he did during that time was spend a sabbatical in New York City, where instead of just visiting another computer science department, he split his time between the graphics department at the New York Times and the public radio program Studio 360, with a goal of learning about audio and visual storytelling. In 2015, we were really delighted to recruit him back to Stanford to join the computer science faculty, where he is the Forest Baskett professor of computer science. He is also the director of the Brown Institute for Media Innovation, which is a collaboration between Stanford Engineering and Columbia's Journalism School with a mission of developing new technology and techniques for telling stories through the media. So I'm sure you'll enjoy hearing from Maneesh. We also have Jay Hamilton. Jay is the Hearst Professor of Communication. He's a senior fellow at the Stanford Institute for Economic Policy Research, and he's the director of the Stanford Journalism Program. His bachelor's and PhD were from Harvard. >> [LAUGH] >> Before Stanford, Jay was faculty at Duke's school of public policy, where he directed the DeWitt Wallace Center for Media and Democracy. In 2015, the same year we recruited Maneesh, Jay launched Stanford's Computational Journalism Lab. Some of the questions he works on in his lab are, for example, how do we sustain the accountability function of journalism?
Or how can journalists use computational methods to benefit society? Obviously, both are extremely important topics in this area, and in this era. Jay and Maneesh are not strangers to each other. In fact, they co-teach a course. The course is called Exploring Computational Journalism, and it's offered through both the Computer Science and the Communication Departments at Stanford. Finally, we have our moderator, Janine Zacharia. She is the Carlos Kelly McClatchy visiting lecturer in Stanford's Department of Communication. From 2005 to 2009, she worked as Chief Diplomatic Correspondent for Bloomberg News, based in Washington, DC. She traveled to more than 40 countries with then US Secretary of State Condoleezza Rice and with other senior administration and military officials. From 2009 to 2011, she was the Jerusalem Bureau Chief and Middle East Correspondent for the Washington Post. She has reported widely throughout the Middle East, including Israel, the West Bank and Gaza Strip, Egypt, Jordan, Lebanon, Iraq, Bahrain, Saudi Arabia, the UAE and Turkey. Did I miss anything? >> [LAUGH] >> Not off the bat. She reported on the uprisings in Egypt and Bahrain as they began in early 2011, so that must have been quite exciting. She appears regularly on cable news shows and radio programs as a Middle East analyst. So please join me in welcoming our guests for what's sure to be a great discussion. >> [APPLAUSE] >> Thank you, Dean Widom. Can everybody hear me okay in the back? Greetings to all of you here from the Engineering School, alumni and people in the community who braved the atmospheric river to join us. And to those of you watching us on the live stream, it's a pleasure to be your moderator this evening for what I think is an extremely timely discussion of the intersections between journalism and computation. A quick housekeeping note before we get into the discussion. You all should have received some index cards. If you didn't, they'll be coming around. I'm gonna briefly try to frame the discussion and facilitate a chat between Maneesh and Jay for about 40 minutes, and then at about 7:50 I'll start integrating those questions. Someone will come around and collect your cards. So I'm looking forward to weaving those into the discussion. What a wonderful moment to be exploring this topic. As most of you have probably noticed, the news industry over the past decade or so has been experiencing quite a period of convulsion, triggered in no small part by the way the internet has transformed how people get their news, as well as which and what kind of news they get, how much they are willing to pay for it, etc. On the local level, news outlets have shuttered, leaving deficits in coverage and in accountability journalism. National coverage remains, but I'd say it's inadequate, and major news outlets, as you're quite well aware, struggle with budgets, paywalls, subscriptions, and all these ways of trying to raise enough revenue via digital advertising and other means to sustain a robust reporting staff. International reporting, which as you heard was my longtime former identity, has contracted. Take the Los Angeles Times as an example. In 1991, they had 28 full-time staff foreign correspondents. Those numbers held pretty well through to 2004. Today, there are six; at least that was the number as of May.
And investigative journalism, perhaps the most expensive and consequential of all these subgroups, you could argue remains in great peril, as Jay Hamilton writes in his award-winning book on the economics of investigative journalism, Democracy's Detectives, which I commend to you all. He's too modest, so I had to bring it to show you. Especially so you can meet the most colorful North Carolina Pulitzer Prize winner, Pat Stith, in chapter seven. As Jay writes, monitoring and analyzing and investigating can entail substantial cost, and there is a likelihood of accountability stories going untold. So we need some solutions for these problems to keep the cost of public accountability journalism low, so this element of our democracy can continue. Because, put simply, without journalists trained in credible, fact-based news reporting and data mining to hold public officials accountable, we're in deep trouble. So to that end, we're gonna discuss tonight the collaborations that we see across the School of Engineering and the Department of Communication, and throughout Stanford, on developing some of the cutting-edge solutions to some of these problems, so it won't be all doom and gloom. So let's talk about this. Jay, let's start with you. When I used to work at Bloomberg and the Washington Post, there was always the CAR reporter, the computer-assisted reporter. And all of us print journalists would freak out any time we had to do anything in Excel. And probably a lot of journalists are still freaking out about Excel, but that was a long time ago. Now new tools are being developed to simplify the data gathering and analysis process. And you've founded this computational journalism lab. What is computational journalism? >> So computational journalism is a really clunky term. And I think it's gonna be defined, like data science, by the set of tools that journalists come to use. Right now I think it's using computation to change how journalists discover, or tell, or distribute, or monetize stories. And in the realm of public affairs reporting, I tend to think of it as reporting by, through, and about algorithms. So stories by algorithm: if you think about the Associated Press, each time quarterly earnings reports come out, they write about 4,000 stories by algorithm. In the old days, when humans wrote, they could only cover 300 companies. Now with those 4,000 companies, they've expanded the set of people they can focus on. And that has actually affected the trading volume in these small companies, cuz they now get a story about them. Stories through algorithm, that's sort of like electronic tip mining, it's finding the basis of a story. This year was the first year the Pulitzer Prize had a machine learning story. The Atlanta Journal-Constitution wanted to write about doctors engaged in sexual abuse. They scraped all 50 states, the medical societies, the regulators, and they found 100,000 discipline cases. They couldn't read 100,000 cases, so they wrote a machine learning algorithm that was able to estimate the probability that a case involved sexual abuse. That took the number they had to read from 100,000 to 6,000, and then they were able to write that story about sexual abuse nationwide. So you have stories by, through, and then about algorithms. That's holding algorithms themselves accountable. So if you think about ProPublica, they have a series called Machine Bias, that was another Pulitzer finalist this year.
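To make the Journal-Constitution triage step described above concrete, here is a minimal sketch in Python: score a large pile of documents with a simple text classifier so humans only read the likely matches. It is illustrative only, not the Journal-Constitution's actual pipeline, and the example texts, labels, and threshold are all hypothetical.

    # Minimal sketch: triage a large document set with a text classifier.
    # Not the Journal-Constitution's actual code; the data is hypothetical.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # A small sample hand-labeled by reporters (1 = involves sexual abuse).
    labeled_texts = ["case text one ...", "case text two ..."]
    labels = [1, 0]
    # The full scraped corpus of roughly 100,000 discipline cases.
    all_texts = ["case text three ...", "case text four ..."]

    vectorizer = TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(vectorizer.fit_transform(labeled_texts), labels)

    # Keep only cases the model scores as likely matches; reporters then
    # read this much smaller pile (100,000 down to ~6,000 in the story).
    scores = clf.predict_proba(vectorizer.transform(all_texts))[:, 1]
    to_read = [t for t, s in zip(all_texts, scores) if s > 0.5]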
What they were able to do was look at the Princeton Review and the price discrimination the Princeton Review engages in. What they were able to do was simulate being from zip codes around the country, and then they looked at the prices that were quoted. And they found that, controlling for things like income, the higher the Asian percentage in a community, the higher the price that students taking the SAT prep course were charged. That's not something that a company's going to advertise, but reporters trying to reverse engineer algorithms could tell you what's going on. So to me, that's what computational journalism is today. >> And just as a media economist, how did you fall into this? How did this become your mission? >> It really is a mission. The reason I came to Stanford was that, if you look across the university, there are lots of people who are trying to say, I wanna take unstructured data, turn it into structured information, and tell you about a pattern. That happens in political science, where people are trying to use big data to analyze politics. Some of you are probably familiar with the Paradise Papers that came out, the investigation that came out. The International Consortium of Investigative Journalists used software that was developed in the digital humanities work at Stanford. There was a project about five years ago called Mapping the Republic of Letters: who wrote to whom in the 1700s? And that software was open sourced; it was taken up and used by journalists, first in the Panama Papers. That data visualization software that used to show who was writing whom became which offshore entities are associated with oligarchs from particular countries. So that's why I came to Stanford. There are so many people who are trying to think about how you can use data to understand and hold institutions accountable. >> And that really leads us to Maneesh, in terms of the intersections between computation, CS and communication. How do you see this relationship going and defining computational journalism's prospects for solving some of these problems? >> Yeah, so I think Jay's definition is really great. It's about how stories are told by, through, and about algorithms. And I can give a few more examples of those categories. So certainly, we've seen many, many examples of using computer-assisted reporting to gather information and analyze it, that then goes into stories. When I think of telling stories through algorithms, I think a lot about the algorithms that have been designed to synthesize information and write the stories for you. So there are companies now that have algorithmic approaches to writing stories that are published in newspapers and by the Associated Press, excuse me. And there are other companies that are designing algorithms that will take media, audio, and video, and put it together to tell the stories in a much more visual way. And you've probably seen these in your Facebook feeds and on other sites on the web. So that's synthesis and content creation through algorithm. Certainly holding algorithms accountable is a very important part of what journalism does. But when I think of computational journalism, I also think about a few other things. A lot of the ways in which journalism and stories are distributed is through the Internet. And there are algorithms at play that help determine which stories you will see. For example on your Facebook feed, on Google, the top stories are ranked by algorithm.
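As an illustration of the audit methodology described above, quoting prices from many zip codes and then testing whether price tracks demographics once income is controlled for, a hedged sketch follows. The CSV and column names are hypothetical; this is not ProPublica's actual analysis code.

    # Illustrative sketch of a zip-code price audit; the data file and
    # column names are hypothetical, not ProPublica's actual analysis.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per zip code: the quoted price, median household income,
    # and the Asian share of the population.
    quotes = pd.read_csv("zip_code_quotes.csv")

    # Regress price on demographics with income as a control. A positive,
    # significant coefficient on asian_share is the pattern the reporters
    # described: the higher the Asian percentage, the higher the quote.
    model = smf.ols("price ~ median_income + asian_share", data=quotes).fit()
    print(model.summary())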
And so, distribution is another place where computation plays a big role. And then finally, the funding. The business models behind journalism are also changing very significantly, and we've seen a big upheaval in the industry because of these changes. >> What do you mean by holding the algorithms accountable? >> So, one of the questions that we face as a society is understanding some of the algorithms that are delivering information to us. We don't know how these algorithms are making the decisions that they make. And there are groups of journalists, groups of computer scientists, that are trying to reverse engineer these algorithms, to understand what it is that they are doing under the hood, and- >> You mean the companies, what the social media companies are doing? >> Exactly. >> Okay, so I wanna come back to that, but I wanna get into this issue that I know you worked on, that relates to online deception, which is also an area where there is a lot of collaboration going on, especially between the Department of Communication and Engineering. It's weird to quote myself, but I wrote something a couple of weeks ago that kind of summed this up a little bit, the problem that keeps me up at night, and maybe some of you as well. So I wrote this in a piece for the San Francisco Chronicle. A fragmented media landscape populated by news outlets and imposter outlets that abide by different journalistic standards has transformed what was once a basic task, reading the daily news, into a major challenge. In an era of unprecedented access to information, we're experiencing an unprecedented era of noise. Today I don't only have to encourage my Stanford students, many of whom are here tonight, to read the news. I need to teach them how to do so. And this sort of gets into maybe some of these tools that we're talking about. And one of the things I pointed out in the piece was that it's not only print that we have to worry about, with the fake news and all these things. And we're not gonna talk the whole night about the fake news. But new lip-syncing technology lets researchers put words in Barack Obama's mouth. Doctored photos are flooding the Internet, and human vision is struggling to keep up. So in this scenario, we have a lot of expertise at Stanford. And I know, Maneesh, you've worked a little bit on this, right? In terms of the images. Can you talk a little bit about that? >> Yeah, so a lot of my research is on developing new, better, simpler techniques for manipulating and editing audio, video and photographs. And these techniques have many, many useful purposes, right? There are lots of instances in which you take a photograph of your family and you want to touch it up a little bit, so that a person becomes a little sharper, a little clearer. You might want to blur the background to put more emphasis on the foreground, and so on. So there are a number of reasons that editing tools are really useful. There are a number of tasks for which they're useful. But at the same time, these tools can be used for, well, for producing things that one would call fake news, right? And so in my lab we've developed tools for editing video of talking heads. So we'll have an interview of someone speaking to a camera. And we have tools that will allow you to get a text transcript of what the person is saying, time-align that with the video, and then you can do cut, copy, paste on the text and propagate that back to the video in a way that's really seamless. You can't tell that the edit has been made.
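To make the transcript-driven editing idea concrete, here is a toy sketch: given a word-aligned transcript, deleting words in the text becomes splicing the corresponding video spans back together. The research system Maneesh describes produces seamless blends at the cut points; this naive version just cuts on word boundaries, assumes the classic moviepy 1.x API, and uses hypothetical file names and timings.

    # Toy sketch of transcript-driven video editing: edits to the text are
    # propagated to the video by splicing word-aligned spans. This is a
    # naive cut, not the seamless blending of the research system.
    from moviepy.editor import VideoFileClip, concatenate_videoclips

    # Hypothetical word-aligned transcript: (word, start_sec, end_sec).
    aligned = [("So", 0.0, 0.4), ("um", 0.4, 0.9), ("we", 0.9, 1.3),
               ("started", 1.3, 1.8)]

    def cut_words(video_path, aligned, drop, out_path):
        """Remove the given words by keeping and rejoining everything else."""
        clip = VideoFileClip(video_path)
        keep = [(s, e) for word, s, e in aligned if word.lower() not in drop]
        pieces = [clip.subclip(s, e) for s, e in keep]
        concatenate_videoclips(pieces).write_videofile(out_path)

    cut_words("interview.mp4", aligned, {"um", "uh"}, "interview_clean.mp4")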
And a place where this is useful is, for example, what I would say is maybe touching up the video. So if there's an 'um' or an 'uh', or someone stutters or misspeaks a little bit, this is a place where it might be useful in certain situations to, you know, cut out those mistakes. But this can also be used for not-so-good purposes. And this is something that I think we all need to be cognizant of. One, that there are technologies that allow you to edit a video and produce something that didn't actually happen. But two, that we as journalists and authors, when we are making these edits, need to maintain a certain level of ethical responsibility in putting out what is a truthful video. >> So do you wanna talk a little bit maybe about either using algorithms to help surface what we were talking about, really quality, credible information, or maybe some of the projects that are happening in the class that I know you're co-teaching, I think with Dawn Garcia as well. [INAUDIBLE] founder of Google News, also one of your co-instructors, is here, or not. >> Yeah. I think when I look at tools, I think of them as democratizing storytelling, allowing people to tell a really engaging story, which raises the probability somebody will watch it. And one of the things that we're working on in our lab is story discovery, trying to actually find a story. And a favorite example of that is the Stanford Open Policing Project. So Cheryl Phillips, who's a lecturer in journalism, two years ago had her students file Freedom of Information Act requests with all 50 states, asking for electronic versions of state police stop data. Two years later, we have 130 million records from 31 states. And she combined with Sharad Goel in the engineering department to basically develop a set of algorithms that try to tell you what rule of thumb a police officer uses when he pulls you over, when he decides to go into your car, and what the outcome is. And I think it's a great example, because the data has all been made public. They've, on a website, allowed local reporters to download the information and look at how their state police are operating. And today that data is being used by The Economist, by NBC, by National Geographic, by The Marshall Project, and by Trevor Noah. Now, Trevor Noah didn't do the math; The Daily Show called Cheryl up and said, we'd like to look at broken tail lights and how they may be used for disparate impact across racial demographics in the US. So she and Sharad ran the numbers and were able to generate that. They've also trained over 100 journalists across the country, at journalism conventions, in how to use that data. So I think it's a good example of H&S professors and engineering professors at Stanford combining, because it not only has generated journalism; two PhD students in statistics have actually written new algorithms that try to estimate what's the rule of thumb police are using. And they've actually shown how, for African Americans and Hispanics in most states, police will pull you over with a much lower probability or expectation that they're actually gonna find something. >> So do you wanna add to that at all, Maneesh? On this topic of that project or similar projects where they're collaborating in story discovery using data. >> I think you're seeing it more and more all over the place. Computer science and the tools of algorithmic analysis and synthesis and distribution are just being used all over the place.
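A hedged sketch of the simplest version of this kind of stop-data analysis follows: compare, by driver race, how often stops lead to searches and how often those searches actually find contraband. The Open Policing team's published work uses a more sophisticated Bayesian threshold test; the file and column names here are assumptions about a stop-records extract.

    # Simplest version of the stop-data analysis: search rates and hit
    # rates by driver race. The real project uses a more sophisticated
    # threshold test; file and column names here are assumptions.
    import pandas as pd

    stops = pd.read_csv("state_patrol_stops.csv")

    # How often does a stop lead to a search, per group?
    search_rates = stops.groupby("driver_race")["search_conducted"].mean()

    # Of the searches, how often is contraband actually found? A lower hit
    # rate for a group suggests searches of that group happen on weaker
    # evidence - the "rule of thumb" question discussed in the panel.
    searched = stops[stops["search_conducted"]]
    hit_rates = searched.groupby("driver_race")["contraband_found"].mean()

    print(pd.DataFrame({"search_rate": search_rates, "hit_rate": hit_rates}))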
I think one of the better examples is this one with Sharad and Cheryl. >> Yeah, and I think Reuters also came to Cheryl and said, we'd better look at these killings that are happening in the Philippines right now. And she has some students working on that as well. But I think one of the things we're talking about is the role of the university, actually, in journalism vis-à-vis helping democracy. And I think, right, you're involved a little bit in the Stanford long-range planning proposals that are coming in on that front. Can you both talk a little, maybe, about what Stanford's role collectively is in all this? >> Sure, so if you go back to the Stanford charter, it says that we are supposed to help educate our students in part for citizenship. And if you look at Stanford's organization, we're a non-profit, in part because we're supposed to generate public goods and positive spillovers for society. And right now, you've mentioned the economics of journalism, I think that there is a real tumult in the business model. And there are really five incentives that generate news. One is, I want to sell your attention to somebody; that's advertising. One is, pay me; that's subscription. One is, I wanna change how you think about the world; that's nonprofit. One is, I want your vote; that's partisan. And one is, I just like to talk; that's expression, or social media. And we're seeing a world where Google and Facebook do a tremendous job with targeted advertising, and that has shifted revenue away from news. And it's put more of the pressure on the nonprofit and the subscription motives. If you come back to Stanford, we can do the R&D for the industry, especially related to accountability reporting. At its heart, public affairs journalism involves a market failure. We all have different information demands in our lives, as consumers, or as workers, or audience members, or as voters. And the first three markets work pretty well, because if you don't seek out the information, you don't get the benefit. That fourth demand, the voter demand, is subject to rational ignorance. Because my vote doesn't matter in a statistical sense, I often don't seek out the information to inform it. That sets up this gap between what we need to know as citizens and want to know as audience members. So I think Stanford, when it is looking at the future, we can really, through our students and through the research, the type of research that Maneesh does and the type of research that we are doing, we can actually help with that voter demand. Silicon Valley has been tremendous in our lives as workers, audience members, or consumers, but much less so on voter demand. And maybe a negative on voter demand. >> Probably, probably a negative; we will get there in a second. Maneesh, do you wanna talk about the university's role in all this, cuz you are also working with the Brown Institute, which partners Columbia and Stanford. >> Yeah, another major role of the university is to educate students, and computer science and journalism have only really gotten together in the last, I would say, 30 to 40 years. And there isn't a lot of curriculum at the university to try to bring these two disciplines together. And I think one of the things that we are really excited about, I've been working with Jay and Dawn Garcia, the director of the JSK Fellows program here, and Krishna Bharat, the founder of Google News,
to think about ways of developing a curriculum around computational journalism, so that we can really develop students that are able to do that computer-assisted reporting and analysis of data, can think about how to use computational tools to tell stories through algorithms, and also do that kind of investigative reporting: understanding how the algorithms work, and reporting on the algorithmic biases or other issues in the algorithms that are out there. >> And Janine, you asked about computational journalism? Another way to think of it is, it's like a Reese's cup. It's data journalism and storytelling together. And I think that with the students that we have in our master's program, sometimes we have people with a quantitative background. So we have two folks who were CS undergrad majors. They're really focused on learning how to tell a story. And we also often have students who have always been storytellers and writers, but they see the advantages of using data to find a story that nobody else can see. So I think it's really working across both those types of skills that will get you the future of journalism. >> So, algorithms have been in the news in the last couple of weeks, in particular what you referenced earlier, Maneesh, about the role that algorithms play in surfacing the news that we see on social media. And there were a set of hearings on Capitol Hill on November 1st, I don't know how many people saw them or paid attention a little bit, where Congressmen and Senators really grilled the general counsels from Facebook, Google and Twitter about these algorithms and the role they play, unwittingly perhaps, in helping Russia spread disinformation and propaganda. And one of the questions that came up was from Senator Kennedy, a Republican from Louisiana. He said, are you a media company or a neutral technology platform? is what he said to these companies. And they said, neutral technology platform. So I wondered if you could, not to get too controversial, but spice it up a little, in terms of how you see these companies, which are a place where, I think I read, the stat was like 60% of us are getting at least some news from them. So they're becoming a larger portion of our news diet. What are they in this realm, and what role could the social sciences and engineers play in finding technological solutions to this problem? >> Yeah, so to me the role that these companies are playing, the Googles and the Facebooks, is that they are the ones that are really distributing the information to the eyeballs of the viewers and the audiences. And in that role they hold a lot of power. [LAUGH] And their power comes from deciding which stories they are gonna surface at the top of your news feed, at the top of the ranking when they give you news stories, for example. And so one of the ways in which I think there can be some issues is that these companies don't reveal the algorithms that they're using to do this ranking, or to show you what's in the News Feed. Now, there are many good reasons for them not to share their algorithms in great detail, because as soon as they reveal their algorithm, there is the issue of gaming the algorithms and spammers trying to get their fake news sites up to the top of the list. At the same time, there's so little transparency that we don't actually know in much detail what they're doing. And so there's no way to really audit those algorithms and figure out how they're making the decisions that they're making.
And so we've put a lot of trust in these companies, and it would be, I think, helpful to get a little bit more transparency, so that we can understand better how it is that they're making the decisions that they make algorithmically. >> Yeah, Jay. >> So if a company generates content to wrap around advertising, that to me is a media company, and so to me Google and Facebook are both media companies. They're making decisions about how to engage you, what's the priority, and two things to note: they're both dual-class stock structure companies. So historically, if you go back to the 1980s, there were two industries where individual or family ownership was predominant: sports franchises and media outlets. Because in both of them, the owners took psychic income for being part of their community and contributing. If you look at, say, the Sulzberger family in New York, or the Grahams in Washington, they may have even provided more than an optimal or profit-maximizing amount of public affairs coverage because of that notion of civic duty. Fast forward to today: if you read the annual report of Alphabet, it notes that voting control, about 58% of the voting shares, is held by three people. For Facebook, it's one person. And both of their annual reports say, we may act against the interests of our shareholders. What that means, though, is that they have the freedom to incorporate democracy or participation as a goal if they choose, because they've told people ahead of time that they may not maximize profits. When I look at Facebook, I think you could think of it pre-2016 as saying, we're gonna maximize revenue, and we are gonna redistribute through biomedical research and other things like that. And so to me as an economist, they were saying, we aren't gonna think about our positive externalities; we're going to earn a lot of money and give it in philanthropy. But after 2016, I think the question became not, are you leaving some positive externalities on the table, but are you actually generating negative externalities in the way that you do this. And if you think about the scale of what they do, both Google and Facebook each have budgets that are bigger than the entire budget of Stanford University, if you pull out our medical center. >> Together or individually? >> No, individually. >> Individually? >> Individually. So- >> What about Twitter? Is Twitter in there too? >> Twitter has lost approximately $2.5 billion in ten years. >> Okay. Different. >> Yeah, contrast that with Google, which had about $19 billion in profits last year. So, different scale. So I guess what I'm trying to say is that they could take into account having an impact on democracy. And actually, if you go to the 2010 midterm elections, Facebook did a fascinating study where, for 61 million people, their news feed on election day showed you where your polling place was, and it showed you the pictures of up to six of your friends who voted. And when Facebook was still publishing social science research, what they told you was, they estimate they increased voter turnout by 400,000 people in 2010, by that experiment on 61 million people. Nobody talks about that study anymore, but if you look at 2016, the national election was decided by fewer than 120,000 votes. So I think there are areas where they should probably think about the impact that they're having. >> So you're saying that Mark Zuckerberg has the ability to do it; he's cleared it with the shareholders.
It's a question of, will he do it, given the profit motive at the core of Facebook. And Maneesh, for you, is there an algorithmic, technical solution to some of these problems, the way there is, for example, for pornography and spam, which don't come up in my feed? Whereas you could potentially see a lot of hoaxes and things like that. >> Yeah. I think that it's gonna be a combination of technology and figuring out what the ethics are of these companies. They have a lot of power to decide what you see and what you don't see. And they are gonna have to decide where they wanna put their effort. Is it profit maximization? Is it to really build a strong democracy? Where should they lie? >> And I think you have seen some changes post-election, I think for three reasons. Number one, advertisers: advertisers are experiencing some backlash for being associated with fake news or controversy, now that people are watching. Number two, employee morale: people within both those companies want to work in a way that contributes to society, and they were embarrassed by the performance of both. And then number three, the owners are human beings, and they actually started to see what the impact of fake news and disinformation is. Krishna Bharat, the person that we're co-teaching with, along with Dawn Garcia from the Knight Fellows: Krishna has a post on Medium that says, suppose you wanted to stop fake news from going viral. If you took a level like 10,000 shares, if you wanted to stop fake news at 10,000 shares, what you would wanna do is look at when things are at, like, 2,000. And if it's suspect based on machine learning, show it to a human and have them do additional fact-checking, and then based on what you see, you could slow it down. So Krishna, as a founder of Google News, I think has street cred, and I think that he has basically said that it's a matter of will; it is not necessarily an engineering problem. >> So then you're bringing the human element into this, which the companies seem a little allergic to, even though we know that there are lots of people working there that are creating this experience for us. So what you're suggesting is more of a sort of honest reckoning with this, that we actually, yes, we need humans, we are curating it. Is that part of the solution? >> I think humans are certainly part of the solution. And I think the companies have all recognized this, and in fact say it. The bigger question, to me, is how much transparency they will allow outside of the companies, so that third parties can understand a little bit, we here at Stanford could understand a little bit, what they're doing, and really, in a way, audit the algorithms and the techniques, and maybe the human-algorithm hybrid that is used to make these decisions. >> But I'd like to say, if you asked me what the biggest failure in journalism is, it wouldn't be the distribution platforms; it would be the stories that go untold at the local level, because of the collapse of the business model of local newspapers. So right now, we are properly focused on what Russia did in the 2016 election. But if you look across the country, there are city councils that don't have a reporter covering them. There are school boards voting, making decisions, and nobody is watching. So I think that's something where computational journalism can really make an impact. There's a saying: the future is here, but it's unevenly distributed. The best use of AI now is in business reporting. And actually, we've seen this; you mentioned computer-assisted reporting.
The business side of newspapers always gets the tech first, and then eventually it goes to the reporters. So right now, the Washington Post, they've got amazing software that helps put a headline on something, that helps figure out where to push it on social media, that helps match it with your interests. But so far, they haven't really used their tools to go inside, to discover the story, to find stories in different ways. So I think, if you have a strong interest in engineering and data, try to help us figure out the stories that go untold, especially at the local level. The smaller the radius of a story, the harder it is to generate eyeballs to fund it. And that's where software, that's where engineering, that's even where economics comes in, because institutions break down in predictable ways. And if we can use those signals, if we could look at even Google searches. If you go back to Flint, Michigan: before any reporter wrote about water quality in Flint, Michigan, there was a growing spike in the search, why is my water brown? So there are people who are telling us things are wrong, but there's no reporter out there, not a live body out there. So I think that if we were trying to think about projects at Stanford that could help journalism, it's telling the local journalism story and telling the stories of low-income people. Because if you go back to what creates a story, low-income people aren't the advertising target for many companies. They vote at lower levels, so actually they're less likely to be contacted. And they have a lower willingness to pay for information. So that generates news deserts across the country. I think I'm showing you why economics is a dismal science. This is a little bit of a downer. But I think there is hope, because of the part of campus we're on, because of the tools that Maneesh is working on, and because of the things that you all are probably capable of doing too with your skills. >> But say you're a community member and you're concerned about the news desert in East Palo Alto, or even in Palo Alto; there's some coverage in Palo Alto. And we try to do it through our own publishing site here, and our students work in the Peninsula Press. But there are these news deserts all around the country. So there's this, would you call it a market failure? I don't wanna get too wonky about it. But there's the absence of local news reporting that is undermining, in some ways, democracy, is what you're saying. >> Yeah, it's funny, I worked as a consultant for the Federal Communications Commission on a study entitled The Information Needs of Communities. And it was 400 pages; the phrase market failure was only used once, and it was in a footnote, cuz they forgot to take it out. >> [LAUGH] >> They didn't want the notion, but they accepted the logic of rational ignorance. They accepted the logic that the market doesn't fully reward you for telling a story that changes laws. And in my book on investigative reporting, I do case studies where I find that for a dollar invested in some investigative reporting, you can generate several hundred dollars in net policy benefits when you change public policy. So we have suboptimal levels because it's hard for the market to do that. But if you can lower the cost of finding the story, you're gonna get more of them. And so in our Exploring Computational Journalism class, one of the teams is looking at the question: if we looked at real estate data, what public policy story could you write? What anomalies could you spot?
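The Flint example suggests a simple lead-generation technique: watch a local time series (searches, complaints, 311 calls) for values that run far above their recent baseline. A minimal sketch follows, with a hypothetical data file; the weeks it flags are leads for a reporter to check, not conclusions.

    # Minimal lead-generation sketch: flag weeks where a local search or
    # complaint series spikes far above its recent baseline. The data file
    # is hypothetical; flagged weeks are leads to check, not conclusions.
    import pandas as pd

    counts = pd.read_csv("weekly_counts.csv", index_col="week",
                         parse_dates=True)["count"]

    # Rolling z-score against the previous eight weeks (shift(1) excludes
    # the current week from its own baseline).
    baseline = counts.shift(1).rolling(window=8).mean()
    spread = counts.shift(1).rolling(window=8).std()
    z = (counts - baseline) / spread

    # Weeks more than three standard deviations above the recent baseline.
    print(z[z > 3])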
There's a great website called Open Secrets that looks at campaign finance data; they have something called Anomaly Tracker. And it looks at things like, does more than half of your elected official's money come from out of state? Or is your elected official being lobbied by somebody who only cares about one bill in Congress? Because you can deduce that from lobbyist registration data. So at the national level, people are beginning to use anomaly tracking. And in the class that Maneesh and Krishna and Dawn and I are teaching, that's one of the things that the students are taking on. >> Maneesh, before we turn to the audience, can you talk a little about what's on your wish list right now in terms of projects in this area, or what are you most excited about in a positive way? >> Yeah, yeah, so one of the things that we are excited about is building tools to aid in synthesizing stories. So, one of the areas that we work in in my lab is visualization, trying to take complicated datasets and turn them into visualizations that make the information a little easier to parse and digest and understand. And visualizations do this because they provide a lot of context for the information that you're seeing. But designing really effective visuals for a given dataset is a challenging task for most people; even for people who deeply understand how to create good visualizations, it takes some time and effort to do it. And so the kinds of things that I am really excited about are building tools that will make it much, much faster and easier to produce high-quality visualizations that really tell the story, that highlight the important aspects and takeaways of the data that a reader, that a viewer, should really focus on. Another aspect of that is to make these visualizations interactive, so that an interested and engaged audience member can go and work with the visualization, and really learn more about the data by filtering it, or by transforming the data and looking at it in different ways to understand what the data is all about. We're seeing lots and lots of efforts these days at making data more publicly available. So there are websites where you can get government data, census data, all kinds of data, local data, right? And one of the problems with these sites is that yes, the data is all there, but it's very difficult to access it. And so what we need are tools that make it much more accessible to a wider range of the public, and journalists for that matter, to really understand what's going on, to find the anomalies, and then build the stories around them. >> So if you're an engineer alum, or a person concerned about journalism and this intersection that we're talking about, is there a way, before we turn to questions, and we can start bringing those up, that you can get involved in all this and these solutions here at Stanford? Jay? >> Yes. [LAUGH] >> People are here; obviously they're interested. I'll have to create a homework assignment for everybody. >> Yeah, yeah, exactly. So our students love to work on puzzles. They love to work with data. They love to have an impact on society. Many of them could be doing other things, but they are in our classes because they really care about democracy. And Maneesh and I, and Krishna and Dawn, have been working on something that we call the Stanford Journalism and Democracy Initiative. And what we're trying to do is focus more people at Stanford, and in the broader Stanford community, on the challenges that are computational,
that relate to data, that could be solved, and that would help journalism. So if you have ideas, please email me at jayth@stanford.edu; that is my mainframe computer name from 1980. >> [LAUGH] >> And I've held on to it across all the universities. And you can go to the Stanford Computational Journalism Lab and sign up for our newsletter. You could go to the Brown Institute and sign up for their newsletter. We would love ideas, we would love data, and we actually partner with people in our classes on projects too. And so, yeah, we are trying to crowdsource things here. >> And let me just add, many of you, at least in the audience here in this room, are in the Bay Area. Come visit us, we'd love to talk to you. If you're interested in any of these issues around journalism, around computation and algorithms, and the relationship to journalism, we would love to talk with you. >> And one more product placement. I have written a book called Democracy's Detectives. Because you came out tonight, if you send me an email or come up to me afterwards and give me your card, I'll give you a free copy of my book. Of course, there is the opportunity cost of your time if you read it. >> [LAUGH] >> [LAUGH] Okay. >> [APPLAUSE] >> So. >> Yeah. [LAUGH] >> So I used to run around, as you heard, the Middle East. And now I spend most of my time fretting about how we're gonna make sure that everybody gets a daily diet of credible, fact-based news, and more or less the same credible, fact-based news. And to that end, I curled up with a study out of Yale the other day that looked at, because they don't have the data from the companies, they have to try and, what is it called, reverse engineer it? So, and it was kinda distressing, because one of these ideas that's out there is labeling. Let's just label: true, false, whatever; verified, not verified, whatever language you want. And what happened was there was this unintended effect that whatever wasn't labeled, and this was done as a study, was assumed to be true. So if you label, that means that you'd have to label everything on the entire Internet for that to work. And then the other idea that's been floating around is, well, what if we just more clearly label what the news outlet is, The New York Times, Politico and whatnot. Now, it turns out that didn't matter at all for people. And then I've seen this a little bit with students that I've talked to. We ran a little, it wasn't a study, but in a bigger lecture class that I went and talked in, I asked the TA to ask the students what sources they considered the most reliable in news. And she gave me a list, and it was Apple News, Facebook, Twitter; in other words, we conflated this notion, some of us, of platforms and news. And so to that end, there are these questions here from the audience, many versions of this question: are there algorithms or other tools to discern fake news from real news, and along those lines, what role could algorithms play in fact-checking? Maneesh, do you wanna try that one? >> Yeah, let me start by talking about images and video. So, with images and video, one of the questions is how much has a photograph been manipulated, right? Every photograph that you see has likely been manipulated a bit, okay? Certainly, people make brightness adjustments, contrast adjustments, and things like this. So manipulation is happening at every level; even forming the image is a form of manipulation.
When you developed an image in the old days, in a developer bath, you were manipulating the image to some extent. You're choosing the exposure for different parts of it. So one of the questions is how much manipulation is okay, and where does it cross a line? And this is a deep ethical question that I think all of us need to answer. Now, at a technological level, we could develop technologies that track all of the manipulations that are happening. And we could store all that information in some trusted way, so that you have the full provenance information for all the manipulations that have happened. Whether that will work to get an audience member to understand that the photograph has been manipulated is far less clear. So a good example of this is advertising. The photos that you see in advertising, every single one of them, have been manipulated a lot, right? And yet many of us don't really recognize how much these things have been manipulated. So there's a technological solution here that can go part of the way. We need to also work on education, and think collectively about what is acceptable and what is not acceptable in these kinds of manipulations. >> And some good news, just today, from a former Knight Fellow, Sally Lehrman. She's been working for several years on something called The Trust Project, with Richard Gingras from Google. And it was announced today at the Newseum that they have 75 partners around the world who have agreed to label their articles with information about the reporter, so giving you a bio and background, as in, a real person; their mission statement; how they are funded, which should be interesting for RT and others if they ever choose to participate. >> [LAUGH] >> Also, what type of article it is: analysis, opinion, and those indicators. You've talked about how it can sometimes be hard for a reader to see, but the platforms have also been involved in this discussion, and they can use those indicators as priorities. There's a separate JSK Fellow, Frederic Filloux, who is here, and he's working on quality indicators for journalism. And he's working with a million stories, and he's hoping his quality score will be used by advertisers who, maybe for reasons of brand, want to be associated with truth, something like that. So his quality scoring is another example. >> Right, but when you think about The Trust Project, correct me if I'm wrong, you have to be participating though, so they're not gonna score everybody. >> Right, yeah, cuz it is- >> So Breitbart's not participating, for example. >> [LAUGH] >> Right, I don't think so, but as an example of- >> Right, right, exactly. >> The kinds of news outlets that are- >> But it will happen if you self-identify and start ranking. It provides the platforms with a way or a reason to treat your content differently. And again, Krishna has also talked about how there are lots of signals that you can use about whether something is fake or not. How long has the account been open? Does the person have a track record, things like that? So we talked about the platforms in a negative way in part, but they can also use those signals and prioritize, if they choose. The difficulty, as you pointed out, the reason Yale had to do field experiments, was that it's really hard to know. Even the people at PolitiFact, who are trying to do fact-checking for Facebook now, they've said that Facebook is not transparent. They don't know how their information is being used.
As a researcher, it's hard to collaborate if you sign a non-disclosure agreement and still don't know what's going on. >> Yeah, and this whole question of fact-checking is sticky, because there's a lot of skepticism around who the fact checkers are gonna be. And this came up, I think it was Trey Gowdy on Capitol Hill, who said, well, who are your liberal fact checkers gonna be? And that's one of the questions here: media and journalists are heavily liberally biased, mostly Democrats; it would seem that tools to create and distribute stories based on big data will result in much more liberally biased media and journalism; please comment. So how can you assuage those concerns? >> A couple of things. One is, I wrote a book called All the News That's Fit to Sell, which is about media economics and public affairs coverage, and what you see in public affairs is product differentiation. You could think of media bias as product differentiation. So if you looked at the mean ideology, if we ask you on a seven-point scale how liberal or conservative you are, each media outlet in the US has a mean ideology of its audience. If you ask what is biased, I can predict what you think is biased by the difference between your ideology and the mean ideology of that outlet. And we have more outlets now because the cost of having an outlet is lower. So people are going to have a higher probability of having their world view reflected back at them. That's all a way of saying one person's media bias is another person's nirvana. >> [LAUGH] >> And I would actually say, I wouldn't necessarily agree with the premise that the media are liberal for ideological reasons. One of the things I showed in my book was, if you look at network news, it has a liberal bias in terms of the issues that it covers because it was trying to target women in their 40s. A woman in her 40s was the marginal viewer at the time I was writing, valuable for two reasons: more likely to make purchasing decisions, based on consumer data, and more likely to be on the edge of watching or not. That meant that the network news was more likely to talk about gun control, poverty, and issues of family with children. Not because of any ideology; because of green, because it was profitable. So when people tell stories through the media, people often think of it as ideological, but it can also be driven by advertiser value. >> So along these lines, you've made a little reference, Jay, to the issue of polarization, I think. And one of the questions here is about how algorithms, this is not a good night for algorithms- >> [LAUGH]. >> It seems that algorithms are accelerating the polarization in our society, and is there a way to address this through computational methods? >> I've got something helpful. There is a job market candidate in the Political Science department this week, Kevin Munger from NYU. He's a PhD student. He did an experiment in 2016, where he looked at 100 people on Twitter who were vitriolic, who were Democrats, and 100 who were vitriolic who were Republicans. He then created fake personas on Twitter. He was able to buy a thousand followers; there is a market for followers. >> [LAUGH] >> And basically, he had a Democratic and a Republican persona, and he found that if he sent a tweet after somebody was mean on Twitter, and the tweet was moralistic, it said, hey, remember the person you're talking about is a human and has feelings, that actually caused the person to drop their level of vitriol, if the persona shared their ideology.
So Democrats responded to Democrats, and Republicans responded to Republicans. I agree that there is that tribalism. But sometimes these tools can be used for good, or social science, or tenure. >> [LAUGH] >> I can just add a little bit here. So, there are lots of people these days that are interested in engaging, in a thoughtful way, in a debate with people that hold maybe different ideologies than they do. So a Democrat wants to talk to a Republican, and vice versa, on individual issues, and really try to understand what the other side is thinking. And one of the great things about the Internet is that it provides a way to connect people. And so there are a number of groups, including one of the projects in our computational journalism class, that are really focused on trying to bring together individual people of different ideologies, and serve as a moderated platform where they can really try to engage in a debate with the other side, without leading to all the fire and vitriol that might occur if there is no moderation. So that also leaves me very hopeful. >> So one of the other things that just launched at Stanford is something called the Global Digital Policy Incubator. It launched on October 6th, and Hillary Clinton was here for the launch. There was a whole day on this topic, and Timothy Garton Ash, a thinker on some of these issues, talked about free speech being indispensable for democracy. He talked about needing freedom of expression, freedom of information, but also a certain quality of democratic discourse or debate, which, I think that's what we're talking about, is being eroded right now. So one of these questions from the audience is, I don't know if it's apropos of computational journalism, but a very, I think, important core question: should freedom of speech be reexamined in the age of Twitter bots and AI? Cuz one of the things that Sheryl Sandberg did when she went to Washington was say, if you're for free speech, you're all in. I think I'm paraphrasing what she said. And they are focused on the issue of authentication of the user, but less interested in the truth of the matter, of whatever that Russian agent is putting out, as long as they identify as a Russian agent. So this question of freedom of expression, does that need to underlie some of these questions that we're talking about? And again, could Stanford be a place, cuz we've got people who think about that issue as well? >> I think that we've been critical of the platforms, but think about the market for truth. The market for truth has always been slightly problematic, for the following reason. If I thought my car ran on sand, I wouldn't really get around much. But if I believe that Saddam Hussein was involved in 9/11, and that global warming is a Chinese hoax, I might get high fives. I might even get elected president. >> [LAUGH] >> So if you think about the market for truth, I think that we can't blame the platforms alone; I think we have to look in ourselves. And talking about free speech, if you go back to the founding, a lot of what we've been talking about tonight are imperfections in media markets. But they're really just a reflection of imperfections in ourselves. Here's a shout-out to the Federalist Papers: Jay, Hamilton, and Madison. >> [LAUGH] >> They said, if men were angels, cuz that's how they talked back then- >> [LAUGH] >> If men were angels, there'd be no need for government. And they said, we need to design our institutions for flawed people. Ambition should counteract ambition.
And we should supply by opposite rival incentives, the defect of better motives. That's all a way of saying, we should acknowledge that people are sometimes gonna try to deceive us. And we should have stand up for things like the scientific method, and free speech. And the reason that I put them together is that, in the Stanford long range planning process, when we heard from alums, when we heard from faculty, many of them said we're concerned that people are now attacking the scientific method itself. And there was a great article about behavioral economics in journalism that, essentially said, guess what? The polar bears don't care who your friends are. And by that, they meant global warming is happening. And in some areas of the country that's unpopular. And in some areas you might lose friends by saying it. But, you should acknowledge that there are some things, such as facts, such as the scientific method. And so, if you're talking about defending free speech, I would also add in defense of the scientific method. And it's not just because I'm in the engineering building right now. >> [LAUGH] >> Do you wanna add anything on that, Minish? Free speech question? >> You know, I think this is a question that we're gonna be, we're just as a country, as a world, we're gonna be grappling with this going forward. There have always been some limits to free speech. And I think figuring out where those limits are in this new age, is always gonna be something that we have to think about. >> So Jay, you mentioned the polar bears. And one of the questions here is about, how you incentivize good local news reporting. But twinned with that also is How do you incentivize, thinking mostly about broadcast, right? Which is probably driven completely, but to do stories that actually matter as opposed to political horse race reporting, or I'm always amazed that we have 24 hour news networks that can only do one story for, not a whole day, weeks, months, and there's a giant world of issues out there. How can we incentivize coverage of stories that actually impact people, and they need to know to be functioning in our democracy? >> How many people listen to KQED? Okay. >> We're not a representative audience. >> [LAUGH] >> No, no, no, no, no. How many gave last year? Okay, so the rest of you are free riders. >> [LAUGH] >> That is, in itself, something that you can do. Because if you're basically asking other people to step up and consume public affairs, one of the things you need to do is also support it. So it goes back to those five incentives. Sometimes I think it's hard. In the 1970s there were actually requirements to get your free broadcast license that you would broadcast in the public interest, convenience and necessity. What does that mean? That's actually a phrase from railroad regulation that they borrowed when they started regulating radio in the 1920s. It's always been amorphous because of the First Amendment. We would like people to broadcast in the public interest, but as soon as we became specific about it, it would violate their First Amendment. I think it's hard to expect profit oriented broadcasters to give us the spinach, in a way. And actually, we have tried to do it with children's educational programming. There's a Children's Television Act that says they have to do three hours a week of that. I sent my students in my media class to the local television station to figure out what they were claiming was educational programming for kids. Geraldo, Beverly Hills 210- >> [LAUGH] >> Yeah. 
And there was actually a federal form that one of my students found that said the Beverly Hills 90210 episode, Beach Blanket Brandon, about having sex at the prom was educational. >> [LAUGH] >> Yeah, so when you tell a profit person maximize profits and tell us something educational, they'll relabel it. That's why I'm a stronger believer in non-profit or subscription based. Advertising-supported media has always been problematic because advertisers just care whether you watched, and not how happy you were. And advertising is also biased against high quality because, again, it's just whether you saw it, not how much you enjoyed it. That's why Netflix is better than some of the other things. So when you pay for things through subscription, or when you give through philanthropy, that can generate the information. And the good news is not everybody has to see it. Facts can circulate for free. Facts can also get to legislators and staff members. >> There was a question along those lines of advertising which gets away from the broadcast of sponsored news content and whether when you go in your feed and it says sponsored news content, yeah, it takes you a second to decipher that. Should we push back as that societally, do we have a right? I mean, after all these things are free products, so what do you think, Maneesh? >> Yeah, I think that the labeling is a great start, right? We have the ability to see that this is a sponsored news product. I would love to see more labels, right? Like, why did I get surfaced this particular story in my feed? Just the way we get advertising. And on certain advertising you can actually click a button and get more information about why you got that advertising. I think you were telling me about this. >> That's right, fear of government regulation motivates self-regulation sometimes. So the industry does have this collision that they put a little i and you can click on it and you can see why you got targeted. I think labeling helps. I don't see a sponsored content necessarily as deceptive like the way the New York Times does it, because it's clearly labeled. I would say another positive thing is that the media is starting to generate revenue through events. So that's been a nice thing. It's not advertising, but the Texas Tribune, The New York Times, the Washington Post they all have events, they charge money, and that's become a separate revenue stream. >> It was a little bit controversial, though Jay, wasn't it, when they first started it? >> Yeah, so the implementation was a little bumpy when the publisher was gonna eventually have a salon in her house, and the price was $20,000 or $30,000. That was it's selling access, but as soon as it was revealed that party never happened, because of the scrutiny. I like the Times talk, again, another product placement, I like the Times talk because the tickets are like $20 or $30. So here's a kind of a fun question, a little off topic, but interesting, and I think relevant, Maneesh, especially for you. What are your thoughts on the role of crypto currencies, blockchains, and distributed applications in the next stage of content distribution and transparency? >> Yeah, good question. [LAUGH] So I think one of the things that I find very interesting about the blockchain in particular is that it's a public record of transactions, right? And with a little bit of effort, people are now starting to think about how this public record can be used to maintain a public record of other things. 
So I mentioned a little while back that one of the things that we could do is build some technology that would record every operation that you perform on a photograph for example. So every time you brighten it, every time you remove an object or move around some of the pixels. We could just write a piece of code that would record all of those operation and then store all of that information in something like the blockchain, okay? And so this would be a public record of all the manipulations that have happened to your photograph, and this is just photographs, but you could think about doing this for any form of data. And now the full providence information that gets you from the original photograph all the way out to the published piece is stored in this public record that anyone can go back and look at and check that you didn't do some nefarious manipulation to the photograph, right? And you could think about doing this for lots of different forms of data. And so one of the things that I'm really interested in is how we can use this kind of public database to just keep a record of the manipulations that have happened? And in doing that can we perhaps increase the trust in media and the data that we're actually seeing? >> How does AI fit into all this? There's a couple question on AI and sort of what we should expect if we're projecting forward? >> So one of the things that AI is doing is that it's making it easier and faster to analyze the data and also to synthesize the data into stories. So we mention these automatic story writing tools, they're using a form of AI underneath, a natural language understanding and processing, to synthesize the stories We're seeing AI being used to help generate imagery and videos. Some of our work in my lab is focused on this. So those are two places where we're directly seeing AI. One of the challenges with AI is that there can be problems with the AI results Due to biases in the data. So many of you have probably seen these examples of chat bots released by Microsoft and others that went on Twitter and very quickly learned from conversations how to be racist >> [LAUGH]. >> And other things like that. And this was really a problem of bias in the data [LAUGH] That these algorithms we're learning from. And so a deep issue for AI is understanding the biases in the data that is being used to train the algorithms. And it's a problem of unknown unknowns and so we're just starting to try and grapple with this issue. >> And it goes back to an interesting debate in social science. In the 1950s, Milton Friedman wrote an article called The Positive Methodology of Economics. Where he argued, the point of economics is not explanation, it's prediction. And you judge a model not by whether the assumptions make sense, but whether it predicts well. If you flash forward to today with AI, a lot of AI models used for prediction, the people who write them don't know why they work. It's very hard for you to develop an explanation. And then if you hand that to a journalist who's trying to write a story that can sometimes be difficult. One thing that depresses me about AI is that the best uses today are in financial journalism like there's a great startup in the Bay Area that's built on the following premise. Twitter was hacked about four years ago, and it seemed like the associated press sent out a story that there had been an explosion at the White House. 
When that happened, the stock market dropped tremendously, and then once people realized that hadn't happened, the price went up. What people realized is if you could use AI to monitor what's going on and recognize right away that AP story was false, you could make a lot of money cuz you could start realizing that price was falling for the wrong reason. So AI is being used for business stories, for business trading but not the Palo Alto's city council. So if any of you are here and no AI, hey, I'm surprised cuz you have such a high value and opportunity cost. >> [LAUGH]. >> All of the things that I read say that the price is astronomical. But if you are interested in public affairs reporting, I can guarantee you that there are many people working on story discovery related to government or politics or life in our community. >> That's a nitch? >> Yeah. >> Jay, this is just straight up, are the computational tools you're developing open source, or will they be for both of you? >> Many of the ones in my lab are, yes. >> Yeah, and if you look, I haven't mentioned our two data journalism professors much, Cheryl Phillips and Dan Nguyen. Cheryl came from the Seattle Times. Dan Nguyen came from ProPublica. Did a great series called Dollars for Docs. Dan's stuff is up on GitHub. He shares his code. And he also has great tutorials. And, again, the fact that the Stanford digital humanity software was open-sourced, that's why it was able to repurposed for the journalists who were writing about the offshore accounts. >> So we've spent a lot of time talking about Google and Facebook, but the question here is about, where does a company like Apple, and their technology, come into this conversation, since their product is not advertising? So could they be in better position than Google or Facebook to avoid the profit motivation in curating the news model? I think we're all ready seeing that, if I'm not mistaken. Jay, have you spent a little time looking at that? >> I am not a big user of Apple News, but that's just a true taste. Yeah, I think that they have been more open to helping some news publishers, especially in terms of sharing. But Google now is trying to also help people be more likely to subscribe to local news outlets. So Google is moving towards that, too. >> I thought I saw that they are making a decision that they're gonna have- This is gonna a curated model, because it is a quarter. That Apple News is not a quarter of their business model, so they have that luxury. I think that's a good point. This is a good question. How do we get people to care during a time when people allegedly don't wanna read good journalism? How do I figure out which news outlets to trust and read? Sort of a general journalism question now. So- >> [LAUGH] Well, I think. >> [INAUDIBLE]. [LAUGH] Getting people to care, I think, is a matter of writing the stories that they're gonna care about. So stories that our reporting on what's going on locally, a lot of us are interested in what's happening locally in our communities. We care a lot about that places that we live. I think stories that can have an impact on what's happening in our democracy, those are very important stories. I think telling the stories and really producing the high quality journalism should be the first and foremost goal. Trust is a very difficult issue. So, how do you build trust in a news organization? 
I think that's one of the central questions that we are facing these days as people are starting to lose trust in the, let's say, the objectivity of different news organizations. And coming to see CNN is a liberal news source or FOX News is a conservative news source. And so, we may be moving away a little bit from the notion of an objective truth that these news sources are reporting on and perhaps more to a place where the news sources just need to be transparent about the angle that they're pursuing. All ready, you can make the case but audiences are seeing new sources as lying on one side or the other of this spectrum and so they are all ready implicitly seeing the news sources as slanted in a particular way. >> And you talked about getting the people to watch or consume. Sesame Street used to say you have to reach before you teach, that you have to engage people. And I think on the demand side, if the news were personalized, and that's another project that people are working on in our exploring computational journalism. Imagine a site that knew what you knew, knew what you had read, knew that you liked graphs more than videos. That type of product differentiation would mean that there wouldn't be a good substitute for that story. You'd be more likely to subscribe, be more likely to read it. In around 2008, Google, New York Times, and Washington Post tried an experiment called Living Stories and it was a failure. It still lives on the web, but it was too hard back then to personalize with that type of data. Now, I think it is more possible. And one of the student groups in the class that we're teaching with Ton and Krishna, is really trying to say, imagine different stories, same topic, but different context, different facts because of different knowledge that people have? So I think that that would be a way to get people to read, if they was more tailored towards them. And I know that you're in a world,- we could be in a world of silos. But I'm thinking more stylistically and knowledge-based. Yeah, Jay I'm nervous about the tailor, even more. We're so tailored now that we're not all getting the same information. So if you're talking about story presentation on a certain set of stories that just appeals because that person's a visual, that person likes data, and that person likes a well-told narrative, maybe, right? But not the kinds of stories, to be clear. Not the kinds of stories. People also, because of [CROSSTALK] >> Or subjects, I mean. >> Yeah, because of the breaking of the bundle. You know, the newspaper, you used to have to read the front page in order to get to the sports. Sometimes some news outlets are also trying to program in serendipity and that's another thing that people are trying to do. To get you sometimes exposed to something that you didn't know that you wanted to know. >> So we had, in my News Reporting and Writing Fundamental class, we have David Fahrenthold Skype in from the Washington Post, who won the Pulitzer this year for national reporting for his stories about, what was it, President Trump and his [CROSSTALK] >> Lack of philanthropy [LAUGH]. >> Lack of philanthropy, he promised the veterans this, and so what made me think of it though, was what you said, Minesh, about the need for more transparency. So just to give some context, as a journalist, this notion of showing how the sausage is made was really something that journalists would push back on. A lot. 
Until very recently, but it seems that, so what he does is he takes his reporter's notebook and he'll make a list of all the charities that Trump said he gave to. And then he'll cross it off or he'll write notes like didn't get a call back, or, this is not true. And he'll show the reporting process, which seems to go to address this issue of transparency, which I think I expect that we may see more of in this effort to build that trust. But I also think we need to sort of educate people about what is credible, fact-based news reporting. And which news organizations actually do all the things that we teach. In, when we're teaching journalism. Verifying sources, has the right balance, verifies information, etc. Because I think people are in this. One of the things that I think we're all talking about is that once everybody became a publisher, and everybody was put on the same level, we created this era of noise that we're living in, and it's very hard to slough through. And there's people here worried about manipulation in general, and what kind of legal rights do people have to guard against manipulation of the news using computational tools on sort of, the flip side of some of this. Is there anything? >> I don't know the law on this. >> Yeah. [LAUGH] >> So I have to admit. But there are people that are working on what's called image forensics, for example, and the idea here is to detect if imagery has been manipulated. And there are a variety of techniques to do this. So they will go down and inspect lighting in the images. And if there are differences in the lighting, so in this part of the image, the shadows are in this direction. And in the other part of the image the shadows are in a different direction. That may be an indication that there's been some manipulation in the photograph. And there are a whole set of techniques to do this. So there are people that are actively investigating technology for uncovering manipulation. Once you have done that, then it's a mater for the court and lawyers to figure out how to deal with that and make amends. >> Which is the technology outpacing the law in this realm? >> I think it's always been true that commercial speech is more heavily regulated because he used the idea of a rational consumer, and would she be decieved? Political speech has been much less regulated because we realized that we could get it wrong, and the government might have an incentive to over-regulate, so that's why in the realm of political speech we've been much more likely to label required disclosure. But not try to make it easier to sue somebody for deception. >> So we only have about three minutes left, so I thought I'd save this last question here so everybody has just a minute or so, a minute and a half to answer. But, if you took over for Mark Zuckerberg for a month, what change would you implement right away? >> [LAUGH] >> First of all, I'd love to have hair again. >> [LAUGH] >> That would be great. What I'd probably try to do is use part of the philanthropy to sponsor research on story discovery, storytelling related to public affairs. That would be one thing that I would do. And, related to that would be actually running experiments. I know they have great social scientists who work inside, but I'd also talk more to the community around this area, and try to lower the priority of things which have a lower probability of being true. 
So, with my philanthropy I would try to stimulate the production of stories that are going untold today that have positive spillovers on the community. And then within my own business I would try to, well the phrase, don't be evil has been taken, but I would try to do something to depress the circulation of things that have a lower probability of being true. Yeah, I would do two things. So one, we've been talking a lot about transparency, and I would try to figure out as a company what we could do to be more transparent about the algorithms that we're using. You know, I recognize that we can't be fully transparent, but I think we can be better at transparency than we are today. The other thing is that I would try to support these efforts of bringing people of different ideologies together to discuss in a thoughtful way, why they differ and why they have such deeply-held beliefs. I think we could all benefit from these kinds of conversations. And even seeing other people engage in these conversations could be really useful. I think Facebook is a place where people all over the world connect. This can be another thing that a company like Facebook could provide. >> Mark, I hope you're watching. >> [LAUGH] >> And we're here to chat with you. I wanna thank everybody for joining us both on the live stream and here tonight. The school of engineering for hosting, and we hope you enjoyed the panel. >> [APPLAUSE]

History

During the mid-twentieth century, traditional paid-circulation newspapers and magazines were joined by a new publication category: free-circulation newspapers and magazines, known as trade or controlled-circulation publications. As these free publications grew in number, Geraldine Knight founded Verified Audit Circulation in 1951 as the first company dedicated to auditing them.[3]

Subsequently, Verified expanded its services to include audits of paid publications, free rack-distributed publications, and products delivered to the door.[4] The later addition of web site audits gave site publishers independent confirmation of visitor activity and assurance of ad delivery.[5]

By 2008, the company was auditing more than 1,000 free and 250 paid publications, along with more than 100 weekly alternative newspapers.[6] Its audit clients have included The Washington Post, the Chicago Tribune,[7] and Questex Media.[8]

In 2009, Verified developed and launched an expanded, integrated audit report that tracks a wider range of circulation and audience parameters for clients. The new report type addresses the growing diversification of media beyond print into electronic formats: the cross-platform audit supplements print-circulation figures with data on digital editions, events, web sites, webinars, e-newsletters, and other supplementary products.[9][10]
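To illustrate the idea of a cross-platform report, the short Python sketch below tallies audited circulation across several channels. It is a minimal illustration only: the channel labels, field names, and figures are invented for this example, and Verified's actual report format and methodology are not described in the cited sources.

    # Hypothetical sketch of cross-platform circulation aggregation.
    # Channel names and figures are invented; they do not reflect any
    # actual Verified Audit Circulation report.
    from dataclasses import dataclass

    @dataclass
    class ChannelFigure:
        channel: str        # e.g. "print", "digital edition", "e-newsletter"
        audited_count: int  # independently audited count for the channel

    def cross_platform_total(figures):
        """Supplement print circulation with audited counts from other channels."""
        return sum(f.audited_count for f in figures)

    report = [
        ChannelFigure("print", 42000),
        ChannelFigure("digital edition", 15500),
        ChannelFigure("e-newsletter", 8200),
    ]
    print(cross_platform_total(report))  # 65700

In practice the value of such a report lies less in the arithmetic than in the fact that each channel's count has been independently verified before being combined.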

In 2011, Verified expanded its circulation guidelines to allow publishers to count publications distributed at trade shows and events toward their qualified circulation figures. Before the update, such distribution counted only as unqualified circulation, as illustrated in the sketch below.[11]
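A minimal sketch of the effect of that rule change, with hypothetical channel labels (the actual guideline text is more detailed and is not reproduced in the cited source):

    # Hypothetical sketch of the 2011 guideline update. Channel labels
    # are invented; Verified's real qualification rules are more detailed.
    QUALIFIED_BEFORE_2011 = {"requested", "rack", "door-to-door"}
    QUALIFIED_AFTER_2011 = QUALIFIED_BEFORE_2011 | {"trade show", "event"}

    def is_qualified(channel, year):
        """Classify a distribution channel as qualified circulation."""
        rules = QUALIFIED_AFTER_2011 if year >= 2011 else QUALIFIED_BEFORE_2011
        return channel in rules

    print(is_qualified("trade show", 2010))  # False
    print(is_qualified("trade show", 2012))  # True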

Membership

Verified clients are known as members of the organization and have full access to the circulation-reporting and member resources on the Verified Audit Circulation web site. Media buyers, advertisers, and advertising agencies are eligible for free associate membership, which provides online access to audit reports, publisher statements, circulation-data downloads, and a Verified e-newsletter.[12]

Verified Audit Circulation is headquartered in Larkspur, California.[13]

References

  1. ^ "Resource Focus: Verified Audit Circulation". Article. MagazineLaunch. 13 June 2005. Retrieved 2 August 2011.
  2. ^ Armor, Jennifer. "How to Purchase Print Advertising Wisely". National Mail Order Association. Retrieved 4 August 2011.
  3. ^ "Company History". Verified Audit Circulation. Retrieved 3 August 2011.
  4. ^ "Company History". Verified Audit Circulation. Retrieved 29 July 2011.
  5. ^ "Resource Focus: Verified Audit Circulation". Article. MagazineLaunch.com. 13 June 2005. Retrieved 3 August 2011.
  6. ^ Hanzlik, Mark (February 2008). "Print Circulation Audit Firms Continue to Survive in the 21st Century". Article. themacwizard.com. Retrieved 5 August 2011.
  7. ^ "Circulation and City and Regional Magazine Association". Brochure. GulfShore Media, LLC.
  8. ^ "Verified Completes First Integrated Media Audits on Questex Brands". Article. Audience Development.com. 10 February 2010. Retrieved 1 September 2011.
  9. ^ "Questex Adopts New Total Audience Audit Process". Article. FOLIO. 14 April 2009. Retrieved 29 July 2011.
  10. ^ "Verified Completes First Integrated Media Audits on Questex Brands". Article. Audience Development.com. 10 February 2010. Retrieved 30 July 2011.
  11. ^ Mendolera, Katrina (14 June 2011). "Magazines & newspapers can now audit trade show circulation". Article. Vocus.com. Retrieved 1 September 2011.
  12. ^ "Verified Associate Member Qualification Form". Verified Audit Circulation. Retrieved 29 July 2011.
  13. ^ Hanzlik, Mark (1 February 2008). "Print Circulation Audit Firms Continue to Survive in the 21st Century". Article. themacwizard.com. Retrieved 5 August 2011.