Google Crisis Response

From Wikipedia, the free encyclopedia

Google Crisis Response is a team within Google.org that "seeks to make critical information more accessible around natural disasters and humanitarian crises".[1] The team has responded in the past to the 2010 Haiti earthquake, 2010 Pakistan floods, 2010–11 Queensland floods, February 2011 Christchurch earthquake, and the 2011 Tōhoku earthquake and tsunami among other events,[2] using Google resources and tools such as Google Maps, Google Earth, Google Person Finder, and Google Fusion Tables.

YouTube Encyclopedic

  • Google NYC Tech Talks: Crisis Response @ Google
  • Google.org Crisis Response and the Google Maps APIs
  • Thanking First Responders: A Moment in Search
  • Crisis Maps from Google.org Crisis Response Team
  • Inside Google Marketing: Sustainability & Crisis Response

Transcription

NIGEL SNOAD: Thanks, everybody, for coming. We're really, really happy to have you here, and to have a chance for some of my engineering colleagues to talk a bit about what we're doing on crisis response here in New York, and in our teams in Mountain View and elsewhere. As I said, I'm a product manager at Google here, and I focus on civic innovation and crisis response work, which is basically some work that we host out of google.org, the philanthropic part of Google. But really, a lot of it's a cross-company initiative where we see how Google can really contribute to making the world a better place. And the crisis work is something that Alice will talk a bit about-- the history of how we started doing that. But just a couple of things I wanted to note here. You're all here as part of this talk series that we're giving, and just to remind you-- you would have seen this-- the tech talks fill up really quickly. So for next time: you made it here, so you know how to play the game. But tell your colleagues to sign up fast. And we really want to hear what you want to talk about. So as we said, we're super psyched to have you here, to talk a bit about what our work is, and to hear what you have to say and ask of us. This is the agenda. We're just going to move quickly through my part, which is the least important bit, and then have the Q&A afterwards.

Just briefly about Google New York City. It is a fairly old office. 13 years-- this building, 10 years, I believe now, right? And Craig Nevill-Manning, who was the first eng lead for Google's crisis team, actually was the first engineer here, and the engineering director here in New York. So there's a ton of history about what we've been able to do. But it's a very large office. We've got 3,000 people here, 1,000-plus engineers. A whole range of projects-- from Drive to some Maps work. There's clearly lots of ads work, and so forth. And there's a bit about our crisis response and civic innovation team here, as well. Critically, I have to say we're hiring. We love to find really good talent, and we need more people to come and work with us on a pile of awesome stuff that we're trying to get done. The culture-- you're in a great space here. There's no doubt about that. We've got great cafeterias. I have to warn anybody who's thinking about coming: there's this thing called the Google 15. That is the 15 pounds you put on because of the free food that arrives when you join the company. And we basically have a great time, except for the fact we're all working pretty damn hard to get our projects done.

So the summary on tonight-- I did want to just briefly introduce. As I said, I'm the product manager, the product lead for this team. But Alice and Phil-- you can read their bios here, and they'll do their own intros. They're two of the, I'd like to say, really key and critical engineers on our project. And they've both been-- well, Alice certainly has been with the crisis team for a lot longer than I have. And Phil's been a sort of key component of it. And we do a mix, as Alice will talk about, between responding to disasters, like seeing how we can help with something like Hurricane Sandy, and building tools and infrastructure that are sort of ongoing enhancements to help people find information during a disaster.
There's an interesting mix in how we involve the rest of the company in our work-- volunteers from around the company-- and how we actually do straight-up, normal product and hardcore engineering work to support ongoing responses and build tools. That's enough from me. I'll hand over to Alice to get us started, I think. But thanks a lot, and we really want to hear your questions, and see what you think of us. [APPLAUSE]

ALICE BONHOMME-BIALS: So I'm going to use this mike. Can you hear me well? Also in the back? OK. OK, so Google Crisis Response, as Nigel mentioned-- oh, thank you. So as Nigel mentioned, with Google Crisis Response, our goal is to make critical information available in times of disaster. Google's mission is to organize the world's information and make it universally accessible, and in times of crisis, that's even more critical. We want to deliver accurate information-- it's very important that it's accurate-- but also as fast as possible. And we want to do that on different types of products. Mobile-- mobile is really key during a disaster. You're not always in front of your computer. And we want to do that with open standards. So we're going to talk today about some of the tools we developed, some of the lessons learned, as well as some standards that we're using. But before doing that, I wanted to give you a little bit of background about how that team was created at Google.

So it actually all started on January 12th, 2010, when a magnitude seven earthquake struck Haiti near Port-au-Prince, the main city. You all heard, and I'm sure you remember, the devastation that happened. More than 300,000 people died, and it led to huge devastation in the country. A few hours after the earthquake, some volunteers here at Google got together to see how we could help. And we did what we did in past disasters, such as Hurricane Katrina. The first thing we did was try to get satellite imagery. Satellite imagery is really key just in the aftermath of a disaster, to try to do damage assessment, or try to see what's going on on the ground. At that time, we managed to get satellite imagery available in Google Earth and Google Maps in less than 24 hours. And to give you an idea of what that looks like, here's a picture from the satellite imagery of the Petionville golf course. Petionville is really near Port-au-Prince. And that was before the earthquake. So you can actually see this big, green area. This is the exact same picture one day after the earthquake. And you can see here that people started to go onto the golf course, trying to get away from the rubble, away from places that were unsafe. This is the same picture 10 days later. That became one of the biggest camps in Haiti, where people went and lived while waiting to find another place to live. From this satellite imagery, a lot of things can be done. You can try to automate and do some damage assessment. You can try to locate which bridges still seem to be up and running. Some people located hospitals and things like this. So this is something that, in the past, Google has done, trying to update this satellite imagery as fast as we can. But the devastation in Haiti was so big that we thought there was something else we could do-- that there was devastation in terms of going there and helping people get out of the rubble, but also in trying to help all the responders and people on the ground coordinate. So some volunteers at Google got together to create some maps.
There were a lot of data sets out there on the internet, using different formats, hosted on different websites. So we worked on putting them together. A lot of this work at the time was very manual-- taking some files, shapefiles, for example, converting them into KML, to put them into some good viewers. And so a lot of these things were a little bit chaotic, but we put it together to make it available, so that a lot of people could download it, could go on site and view this data, and share it with other people. We had a list of hospitals, and a list of camps, and things like this.

Another big issue that happened just after the earthquake was people trying to find their relatives, find their friends. How do you get in contact with-- how do you know your friends are fine and safe? Cell phones didn't work. Most text messages didn't work. How do you do that? So some people started to put up different websites with lists of missing people. You could find a list of missing people on CNN. You could find one on the New York Times. And some people had set up sites specifically for that-- Haitian Quake was one. So if you were looking for someone, you had to go to all of these different websites to find information. And some of these sites also had issues with load, because so many people were searching for people that some websites just went down; they couldn't handle the load. So there were issues of organizing information spread across different places, and issues of scale. This is something that, at Google, we know how to do. So a group of volunteers got together and said, we need to be able to help here. And we started a hackathon to build a tool that we called Person Finder, that would allow us to connect all of these databases in one place and exchange information, so that if you went to any of these places, you would be able to see the list of missing people from any of the other sites. So we started this hackathon with volunteers helping from most of our engineering offices, from Australia to London to Israel to the east coast and the west coast of the US. So pretty much, we had 24-hour coding happening. And in 72 hours, we got our first version of Person Finder out there, launched in English, French, and Haitian Creole. And I'll talk a little more after that about the standard that we used, and how that worked technically.

But from this experience, a group of these volunteers that helped realized that there's so much more we can do with technology in disasters. Why don't we just have a team that works full-time on this? So I went to see Google leadership, and said, why don't we create a crisis response team? And that's how the team was created. So since spring 2010, we have engineers that work full-time on trying to solve this problem. And the first thing we did was wonder-- we're a company mainly developing internet tools, so is the internet even working during a disaster? Because if it's not working, you can build the best tools, but no one will be able to use them. So we studied that, looking at traffic going to Google from places where there was a disaster, but also looking at feedback that we got from users during crisis situations. And here's what we saw. We saw that there's always a little bit of internet. So this graph is the graph of queries going to Google from Haiti. You can see here, on January 12, there's a big drop. This is where the earthquake happened.
And actually, Haiti is a country that was not that well connected before the earthquake. And what you can see here is that it took four months for Haiti to get back to its pre-earthquake connectivity. But what is interesting is that the blue line doesn't completely go to the ground. There was still some internet available just after the earthquake. And we know that from the graph, but we also know that from experience. I've been traveling to Haiti over the past 10 years, so I had contacts there. And two days after the earthquake, I got a chat on Gmail from a student I had met at a conference prior to the earthquake. He was at an internet cafe trying to reconnect with friends. He was at an internet cafe 48 hours after the earthquake, had a connection, and could chat with people. He could not call them. He asked me to call his family here, so that I could give them some news. So he couldn't call, but he had internet.

We saw this also here. That's a similar graph for the earthquake that happened in Chile, in Concepcion, on February 27, 2010. The earthquake happened during the night. So the shape is like this because this is during the day, and this is during the night. And what you can see is that in one week, we go back to the pre-earthquake traffic. And the day after the earthquake, there are some connections. The last graph is about the earthquake that happened in Japan on March 11, 2011. And you can see here, there's also a drop at the time of the earthquake. But very quickly after, you can actually see that there's some connection. And this is just based on TCP/IP. TCP/IP has been designed to be resilient. So we can see from these graphs that there's some internet. And we can also hear that from our users. This is an email we received from one of our users in Japan, who told us that on March 11, their company email didn't work. It didn't go through. Phones and SMS didn't work, either. But Gmail connected them. It connected them to each other in the company, but also to their friends around Tokyo and around the world. And that was not only Gmail. That was also Facebook, that was Twitter, that was the internet. And people connected-- even if SMS or voice on their phone didn't go through, the internet worked.

So if people can access the internet, and turn to the internet to find information, what do they search for? This is a graph of queries that go to Google from users in Hawaii. And these are only queries about tsunamis. You can easily see the two spikes after each earthquake-- after the earthquake in Chile, and after the earthquake in Japan. If you zoom in on the day of the earthquake in Japan, this is what it looks like. In red, this is the tsunami warning that was issued at, like, 7:56 PM. And just after, you can see the queries spike. The top of the graph here represents 22%. 22% of people in Hawaii who were searching on Google at that time were searching about the tsunami. So people turn to the internet to find information. You can see here, the second big spike was when the wave hit Hawaii. So people went back and searched for information. So from there, we can say people turn to the internet. And what type of information are they looking for? When you're in a disaster situation, you're looking for three types of information. What has happened-- how bad is the event? If you felt the ground shaking a little bit, was it just a small earthquake near you? Or was it a super strong earthquake that was far away, where you just felt a little bit of the shaking?
What are the road conditions? If you're at work, is it safe to go back home? Where is your family? What are the resources? If you need to go to a shelter, or if there's no more power, how do you find information? How do you find information about the nearest hospital? These are the types of information that people are looking for. And our role, in organizing this information, is to make it available to users as fast and as reliably as we can. So for that, what we learned over the last three years is that anything we build has to be simple, standard, and open. Simple, as in a simple UI, a simple flow. A time of crisis is not the right time to learn a new UI, to sign up for a new account, to go through a CAPTCHA, all of these things. People are stressed, and people are really in despair, so you want something very simple to use. Then, any data that you produce has to be standard. Collaboration is key during a disaster. You want people to collaborate. But if everyone has data in different formats, how can you exchange this data? It's really hard, whereas if everyone produces data in the same format, then you can really work together. And finally, you want an open system-- open in terms of open source, so other people can reuse your code, but also open APIs, so you can exchange open data: having your data public, having your data available for other people to use. So we're going to talk now about different products that we think implement this simple, standard, and open approach. And Phil is going to talk about the first product. That is Crisis Map. [APPLAUSE]

PHIL COAKLEY: Yeah, thanks, Alice. Hi, everyone. I'm Phil. So the first tool that I'm going to tell you about is one that we've used very effectively to help get information out in times of crisis. Alice started to motivate the problem, but I'm going to continue before I tell you exactly what the tool is. So when a crisis strikes, we know that there are many important geographical data sets that can be vitally useful to preparation and survival, to recovery, and to response efforts. Just a few random examples-- the National Weather Service puts out forecasts of hurricane tracks and river flooding. So this is a screenshot of a map viewer on their website, but they also make the underlying data sets available for download. FEMA publishes data sets for mass evacuation routes, among many other things. This is an organization called GeoMAC that publishes maps of current wildfire boundaries. So it takes tremendous human and organizational resources to put these data sets together, but the maps are only effective if you can get the right maps in front of the right people at the right time. All of these maps are hosted on different websites. Not everybody knows where to find them. Some of them don't get the audiences that they deserve. Every website has its own map viewer. Every map viewer looks a little different. Every map viewer works a little differently. Some of them work better than others. Some don't work on mobile devices, which is really important in times of disaster. And perhaps the biggest problem-- because the data is all separate, on separate websites, it's hard to see it all in context. So here, we've got the American Red Cross publishing valuable, real-time data about which of their shelters are active, and what capacity remains. They're really good at this. In disaster relief, we're always talking about coordination. This is a great example of coordination between different Red Cross shelters.
But even if other response organizations were to follow their lead and also publish their operational data to a map, there wouldn't be an easy way to combine all of that information into a common picture so you knew what everyone was doing. To recap: one-- relevant, high-quality maps are not always easy to find. People affected by disaster, and responders on the ground, can't be expected to chase down every different website that has relevant information. Two-- map viewers vary in quality. Some are awkward to use. Some are not accessible on smartphones or tablets. And three-- data is isolated. When it's siloed on individual websites, it's hard to see in context. So these are some of the problems we're trying to solve with our team's Crisis Map tool.

So here's a screenshot. Crisis Map is a tool that lets anyone make a mash-up of multiple maps related to an event. So by way of example-- this is the Google Crisis Map for Superstorm Sandy last October. This is a screenshot of the map we published, as it looked just before the storm hit land. So each of the layers you see on the map-- and there are many, all at once-- comes from one of the sources on the right-hand side, and is being updated in real time on the map. So all of this data is constantly changing. While the National Hurricane Center is updating its outgoing feed of the hurricane path and forecast, the Red Cross is also updating its outgoing feed of open shelters. And the person curating the map-- in this case, the Crisis Response Team-- is only deciding which layers we need to promote or deprecate, and what other data to show in context with it. So any viewer of this map can alter the view by zooming or changing the viewport, by toggling layers, or by adjusting transparency on some of the layers, like the cloud layer. You can search for a location, and then you can re-share it to highlight the information that's relevant to you. So I said before that good maps are hard to find. This remains true, but with a tool like this, the task of finding, compiling, and curating map data can be handled by a small number of individuals or organizations. And you can then promote the single mash-up map to the affected population and to the responder community.

So in the case of Superstorm Sandy, there was so much additional data available that we actually created a second map just for the New York City area. Well before the storm reached New York, we published this mash-up geared toward preparation. As you can see, in context, the mandatory evacuation zones are in orange. The evacuation centers are all over the boroughs. Centers run by the Red Cross show up as pins. Centers run by the New York City Office of Emergency Management show up as red dots. Same map. After the storm passed, the needs changed. The pins you see here represent community-organized volunteer centers. And through partnerships with external organizations like NOAA, we were able to obtain updated aerial imagery for coastal areas. This proved useful both for individuals who had evacuated the area, to assess the state of their own homes, and for response organizations, to understand the extent of damage. So we think that the simple, open, and standard principles apply here. This application is based around the Google Maps interface that many internet users are already familiar with. Navigating one of these maps should be a fairly familiar experience, which is important, because as Alice said, mid-crisis is not the time to learn an entirely new application.
The interface adapts seamlessly to different display sizes-- full desktop, tablet, mobile. And we make it really easy to embed on another site through an iframe. The application is all built on open APIs and open source libraries. I'll tell you about that in a second. And by open sourcing the code, we also allow anyone to stand up an instance of the app and begin to create and publish maps-- not just us. Finally, the entire back end is built around open data formats. So we support rendering maps using a bunch of different formats. The most common one is called KML. Is anyone here familiar with that? OK, a few people. I'm going to talk about that in a second. So the first two on the list here, KML and GeoRSS, are both XML-based formats. We really like these, because they're both really easy to generate. They're within reach of essentially any developer who knows how to write code that spits out XML. And we'll take a closer look, like I said, at KML in a second. The third option on this list-- or sorry, the fourth option-- is a bit more advanced: WMS. It stands for Web Map Service. It's a protocol maintained by the Open Geospatial Consortium. This is actually a brand new addition to Crisis Map, so if there are any GIS experts in the room, please take note. Tile service is also an advanced option. This basically allows you to prepare map overlays in the form of tiled images across the globe. And supporting this is what allows us to slurp up new aerial and satellite imagery in a real hurry. The next options here, Google Fusion Tables and Google Maps Engine, are applications that allow you to author maps of your own. So Fusion Tables is available to the public free of charge. You can check it out. Google Maps Engine is an enterprise product, but there is a light version available that you can check out on the internet. So recall that Crisis Map is a mash-up tool. Since it doesn't host any geo data, it can't actually support authoring or editing map content. During a response, we'll often lean on products like Google Maps Engine and Google Fusion Tables to create maps with place marks out of tabular data. This allows us, for instance, to maintain a spreadsheet with a list of volunteer centers, maybe, each with an address or lat-long point, and then we can have each row plotted as a place mark on the map layer.

So here's a real simple example of KML. It was originally developed for use with Google Earth, formerly called the Keyhole Earth Viewer-- hence Keyhole Markup Language. Every KML file specifies a set of features. A place mark is a type of feature that we use to make a pin appear on the map. When you click on that pin, you'll see a heading with the text "Google NYC" that you see inside the name element, and the description. This text will show up in an info bubble. So you can create KML files that pinpoint locations-- think map pins. You can specify bounding polygons. You can add image overlays. You can specify timestamps, timespans, and a lot more. There's a lot of information at this link down here in case you're interested in learning how to make your own KML.

This is what our system deployment diagram looks like. Since this is a tech talk, I thought we should go into this. So top center is our map viewer client, which runs in any browser on a computer, tablet, or phone. The client is written in pure JavaScript using Closure tools. Is anyone familiar with Closure tools? A few people. Great. So Closure helps us a lot.
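[Editor's note: as a rough sketch of the "code that spits out XML" Phil mentions-- and of plotting spreadsheet rows as place marks-- here is a minimal Python example that emits KML Placemarks, including the "Google NYC" pin from his example. The rows, addresses, and coordinates below are invented for illustration; this is not Google's actual pipeline.]

```python
# Minimal sketch: turn tabular rows (like a volunteer-center spreadsheet)
# into KML Placemarks. All data here is made up for illustration.
from xml.sax.saxutils import escape

rows = [
    # (name, description, latitude, longitude) -- hypothetical data
    ("Google NYC", "76 9th Ave, New York, NY", 40.7414, -74.0033),
    ("Volunteer Center A", "Supplies and charging stations", 40.7580, -73.9855),
]

placemarks = "\n".join(
    "  <Placemark>\n"
    f"    <name>{escape(name)}</name>\n"
    f"    <description>{escape(desc)}</description>\n"
    # KML coordinates use longitude,latitude order
    f"    <Point><coordinates>{lng},{lat}</coordinates></Point>\n"
    "  </Placemark>"
    for name, desc, lat, lng in rows
)

kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    f"<Document>\n{placemarks}\n</Document>\n"
    "</kml>"
)
print(kml)
```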
Currently, our JavaScript code base is 1.2 megabytes of uncompressed JavaScript, and that's clearly too big to send out over the wire to any random tablet or smartphone over the cellular network. Post Closure Compiler, though, that gets down to about 285 kilobytes. Not bad. Still too big for a quick download over the mobile network. So with the help of Closure tools to dynamically load modules as needed, we're able to get our initial download down to about 75 kilobytes. Still a little higher than we'd like, but pretty good compared to 1.2 megabytes uncompressed. So that client code on the top leans very heavily on the Maps API, on the right-hand side of the slide. In fact, we delegate almost all of our format support to those underlying Maps API services. Now on the left side, the data model that describes one of these mash-ups-- including all of its layers, their data sources, their default properties-- is a format that we call MapRoot, which we've also open sourced. It's basically a JSON object. The client knows how to read and serialize it, and pass it along to our server, which is written in Python. The server handles things like authorization and the mappings between map IDs and published URLs, and it acts as an intermediary with the data store. So I should mention that Crisis Map runs on App Engine, which we're using primarily because it scales, and it scales really well. As you can imagine, Crisis Map is not exactly an application with a steady-state traffic pattern. When there's something going on, our traffic hockey-sticks, just like those graphs that you saw. And we've had really, really good experiences with App Engine scaling up quickly for us, which lets us focus on the task of map curation rather than chasing down production bottlenecks.

So during a response, our small team, spread across New York and Mountain View, remains engaged around the clock. Over the course of the Sandy response, we curated more than 50 layers between our two publicized maps. Just out of curiosity, did anyone here create or work on any data sets that were involved in the Sandy crisis response? A few people. Awesome. I'd love to hear about your experiences later on. So here, I'm just going to give you a few examples of layers that we included in our map thanks to the help of responders like yourselves, organizations like the MTA, and the community at large. So after Sandy rolled through here, it took a little time to restore normal subway service. In the interim, the MTA ran shuttle buses over some of the bridges. Thanks to their helpful coordination-- getting on the phone and emailing with us-- we were able to keep an up-to-date layer with those shuttle stops. Also thanks to the MTA, and data published by WNYC, we were able to keep this up-to-date layer of running subway lines. And you'll see on this day, the ACE line-- the blue line on the left here-- stopped at 34th Street. I think later that day it had started running again. A community member and doctor by the name of Wen Dombrowski helped to crowdsource and curate details on services offered specifically to senior citizens. Is Wen in the audience tonight? Maybe not. But using this form that Wen set up, you could submit information, including the category of service you had to offer, the location, contact information, and a detailed description. And this data made its way to a Google spreadsheet, from which we were able to import it as this layer on the Sandy Crisis Map.
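[Editor's note: the talk describes MapRoot only as "basically a JSON object" listing a mash-up's layers, their data sources, and default properties. The sketch below is a hypothetical illustration of that idea; the field names are invented and are not the real MapRoot schema.]

```python
# Hypothetical illustration of a MapRoot-style document. Field names are
# invented for the sketch; only the general shape (a JSON object holding
# layers, sources, and defaults) comes from the talk.
import json

map_root = {
    "title": "Example Storm Map",
    "viewport": {"lat": 40.71, "lng": -74.00, "zoom": 9},  # default view
    "layers": [
        {
            "title": "Shelters",
            "type": "KML",
            "source": "https://example.org/shelters.kml",  # placeholder URL
            "visible_by_default": True,
        },
        {
            "title": "Forecast track",
            "type": "GEORSS",
            "source": "https://example.org/track.xml",  # placeholder URL
            "visible_by_default": False,
        },
    ],
}

# Serialize it the way a client might receive it from the server.
print(json.dumps(map_root, indent=2))
```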
So next, to wrap up Crisis Map, I'm going to dive into an end-to-end story of one of our crowdsourced map layers-- the gas stations of New York and New Jersey. As many of you might remember, not long after Sandy's landfall, gas availability became a major issue. Lots of people were waiting on lines for hours. People were driving really far, and using lots of gas while they drove really far to get more gas. We heard from the NJ OEM, which is the Office of Emergency Management, that they had gas station information. This is great. Basically, they had a data set that showed which stations were open and which were closed. So this slide shows you how the OEM was making this information available to affected people on the ground. What we have is a PDF embedded in a PDF viewer on a website. It's great-- the data is out there. It's actually really great that this data is open. We'll talk about what we did with it in a minute. So they were getting this data from the All Hazards Consortium, who in turn was aggregating it basically from point-of-sale units. They were monitoring point-of-sale units, and saying, oh, if your credit card reader is off, you're probably closed, so that's a good signal that people should not go there to get gas. You know, I think that's a pretty crafty solution in a time of crisis. Our task at hand was how to get this information from this PDF onto a map. And what happened was we had this very helpful volunteer by the name of [INAUDIBLE] who manually geocoded all of these points in the PDF, and made it into a map layer on Google Fusion Tables, which ended up in this spreadsheet. So you'll notice how each row gets a unique, stable ID, which helps us to track changes of status over time, and has columns added for latitude and longitude, so we know exactly where to place pins on the map. And here's the spreadsheet information on the map. Success, right? So we've tried to do all the right things. We took data from an authoritative source. We made it more easily available. But it turns out that the data we were getting wasn't exactly up to date. The OEM was updating its PDF only once per day. And we were actually, in the end, doing a disservice by not having the most up-to-date information. I guess it kind of sucks going to a gas station that you thought had gas, only to find that it actually had gas yesterday. So here, we thought about getting help from users on the ground. Users would come to our map, and we'd ask them: does this station have gas? Does it not have gas? We took the data that the OEM published once a day, and we kept it up to date with user reports from the ground. And this was circular-- once we'd get those reports, we'd send them back to the OEM. And here are just a few of the crowdsourced responses we got from users. The first one says, "It's open. I just filled up there. All types of gas available." The second one says, "It's open." So this just pointed us at the need to do more crowdsourcing with user-generated content, in addition to authoritative content. Next, we got pointed to Mappler. This was a gas station map being maintained by a team of students led by Wonsoo Im, out of New Jersey. These students were working around the clock, manning phones. They were calling gas stations. They were keeping what became the official data that the Department of Energy was looking at to figure out where to send trucks with gas and generators. Eventually, this data became our default gas layer. But on the bottom right, notice recent feedback from Google Crisis Map.
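[Editor's note: a small sketch of the merging pattern just described-- a once-a-day authoritative snapshot refreshed by crowd reports, matched on each row's stable station ID, with the newer timestamp winning. All identifiers and data are made up; this is not the production code.]

```python
# Sketch: start from the authoritative once-a-day snapshot, then overlay
# fresher crowd reports keyed on the stable station ID. Timestamps decide
# which status wins. Everything below is invented for illustration.
from datetime import datetime

authoritative = {
    # station_id -> (status, as_of)
    "nj-0042": ("open", datetime(2012, 11, 2, 6, 0)),
    "nj-0043": ("closed", datetime(2012, 11, 2, 6, 0)),
}

crowd_reports = [
    # (station_id, status, reported_at)
    ("nj-0043", "open", datetime(2012, 11, 2, 14, 30)),  # "I just filled up there"
]

merged = dict(authoritative)
for station_id, status, reported_at in crowd_reports:
    current = merged.get(station_id)
    # A newer report beats an older snapshot; otherwise keep what we have.
    if current is None or reported_at > current[1]:
        merged[station_id] = (status, reported_at)

for station_id, (status, as_of) in sorted(merged.items()):
    print(f"{station_id}: {status} (as of {as_of:%Y-%m-%d %H:%M})")
```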
We were able to put the students' crowdsourced data on our map, and also collect comments from users and send them back, so that they could update their data again. So we were able to create a little ecosystem with these students. Here's a photo showing this great group of students helping their community. So the takeaways here for us are that authoritative data is great, but in a disaster, it can become outdated very quickly, for all kinds of reasons. And so you really need to take advantage of local expertise on the ground to bring about the best, most actionable data in a crisis. If any of you find yourself in the position of curating a data set like this, especially in a time of crisis, please keep in mind that single-source, hosted data is the easiest to keep up to date. As soon as you send out an email with a file attachment, that data begins to get stale. So for this reason, we tend to prefer hosting rapidly changing data in tools like Google Spreadsheets, where all you have to do is reload the page to see the latest version. If you're interested in learning more about Crisis Map, we've open sourced the code. Everything's available under the Google Crisis Map project on Google Code. And if you're interested in playing with Crisis Map, there's a screencast linked from this page that will walk you through more of the details. There's also a link that will let you sign in to our hosted instance with any Google account. That's all for Crisis Map. I'm going to pass you back over to Alice, who will tell you more about Person Finder. Thanks. [APPLAUSE]

ALICE BONHOMME-BIALS: Thanks. So I talked before a little bit about Person Finder, and how it was initially created at a hackathon after the earthquake in Haiti. But the situation in Haiti after the earthquake-- people searching for their loved ones and their friends-- was not specific to Haiti. After almost every earthquake, that is something that happens. And we launched Person Finder again for the earthquake in Chile in 2010, and then the earthquake in Japan in 2011. More recently, we launched it for the explosion in Boston. And right now, there's still an instance running for the floods that are happening in India. I want to give you a little walk-through of how this tool works, and then we'll go into details about the standard that we're using to exchange data. So Person Finder is very simple. As we mentioned, anything that we do has to be simple. So if you go to Person Finder-- and you can use the URL google.org/personfinder/test-nokey, which is a test instance that is always running, so people can play with it and get familiar-- you have just two buttons that you can click: I'm looking for someone, or I have information about a person that I want to share with others. So if you click on I'm looking for someone, you get a list of results. Say you're looking for your friend John Doe, whom you haven't heard from since the earthquake. You can have several results, because several people can have the same name, or the same person can be entered several times. Here, you will have the status about that person in green. If you click on one of these links, you go to a detail page that looks like this. You have information about the person-- their name, information about their location, some description. So here: roughly this height, brown hair. You can see the person entering this was very stressed, and forgot the age. So you have information about that person, and then you have notes attached to it.
So anyone can go here, and if they have information about this person, they can add a note saying: I'm also looking for that person, or I talked with that person on the phone, or I heard my brother had a discussion with that person. And so you can share information with other people, even if you don't know them. And that's the point of Person Finder-- sharing any information that you have about a person with other people that you might not know. You can also subscribe to a record if you want updates, and receive an email each time someone posts a note. If you think that this is not the John Doe that you are looking for-- that there's another one that is not in Person Finder-- then you can go back to the original page, click on Create a Record, and just fill in the fields. In that zone, you have the information about the record. And you can regularly go back and see if other people have put up notes. It's also used a lot-- and that was the case after the explosion in Boston-- for people to just tell others that they are fine. So they will enter their own name. I would enter first name Alice, last name Bonhomme-Bials, and then just put a note saying, I'm Alice Bonhomme-Bials and I'm fine. And people going there would be able to see that I'm fine.

So this code, like most of the tools that we develop, is open source. You can go to code.google.com/p/googlepersonfinder, and you can find more information about it. This is developed on App Engine. And as Phil mentioned for Crisis Map, we have exactly the same pattern on Person Finder, where most of the time there's no traffic, then something happens and it spikes. And when we say it spikes-- for the earthquake in Japan in 2011, in a few hours, it spiked to thousands of requests per second. So from zero to thousands of requests per second, and we had no issue, because App Engine just created as many instances as needed. It's Python on App Engine. We use the App Engine Datastore to store the person records. What is very important is that we have a Data API, and this Data API is key to Person Finder, because it is with this API that we can exchange data-- lists of missing people-- with the other repositories of missing people that are out there. So we implement a standard that I'll talk about in a moment-- PFIF, or Person Finder Interchange Format. PFIF is a standard that already existed before the earthquake in Haiti. It was created in 2005 for Hurricane Katrina, and it's been evolving since then. In Person Finder, we implement a Search API, a Read API, and a Write API. For this API, you need an authentication key that you have to request, just because we don't want everyone to be able to download a long list of missing people. We want to be able to track who has access to what. Again, we have the test instance running, which doesn't require any key, so people can try it and develop applications using the API without requesting a key.

So let's deep dive a little bit more into PFIF. PFIF stands for Person Finder Interchange Format, and the link here goes to that particular format. It's based on a few very basic principles. The first one is convergence. We're going to have different repositories of data that have different types of data, and we need to be able to bring them together. The standard has to support that.
Then, this data must be traceable. It's very important to know where the data comes from, because you don't want to trust information if you don't know its source. So that's very key. Then, there's no central authority. You have different repositories, and it's the aggregator of these different repositories that can decide which data sources to trust. And finally, there are going to be some duplicates. We talked about this-- in times of crisis, people are very stressed, and they get [INAUDIBLE]. They're going to enter the same person several times in different places, or you're going to have two people entering records for the same person. So the standard has to be able to support that, and be able to reconcile it.

If you go into more detail, in PFIF, pretty much, you have a list of records for persons, and you can attach a note to each record, as we saw in the UI where we implemented PFIF. Then, you can have different repositories. Each repository will have a list of person records and note records. Each repository is identified by a domain name, and that's how we know where the data comes from. And this domain name is used to identify every record and every note. That's very important, because when repositories exchange data, they're going to keep this single identifier. So let's imagine that you have foo.com. You have a record there that you want to copy into another repository. You're going to make this copy, but keep the original identifier. So now let's take an example with four repositories, and we're going to create some entries, copy them among the repositories, and see how we reconcile them. Let's assume that on January 4, we create a record for Bob. What is important is the source date. The source date corresponds to the date that this particular entry was created. Then, if this particular record is copied from foo.com to abc.net on January 5, the copy of that record keeps the original source date, and we'll have a field called the entry date. The entry date keeps track of when the entry was created in a repository, and the source date keeps track of when this record was originally created in the original repository. So now let's assume that you go back to foo.com, and you change the record. In that case, everything changes-- the source date and the entry date, because you changed the content. Now, let's make things a little more complex. Let's say that you make a copy of this updated version of the record into a third repository on January 7. As we said before, we keep the same source date of the original record, which was January 6. And then, you update the entry date to be January 7. If at the same time-- or a day later-- a copy happens from one of the old versions of the record into another repository, what happens is that the source date will still be January 4-- the out-of-date version-- and the entry date will be the new one. So now, what happens if bar.org and xyz.gov want to exchange their data? What they'll realize is that there's a conflict there. They don't have the same version. And which one wins? How do we know? For that, we rely heavily on the source date. The source date is the authoritative information, and we just keep the more recent source date.
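[Editor's note: a minimal sketch of the reconciliation rule just described, assuming PFIF-style records whose identifiers are qualified by the originating domain. When two repositories hold conflicting copies of the same record, the copy with the more recent source date wins; the entry date only records when a copy landed in a given repository.]

```python
# Sketch of source-date reconciliation between two copies of one record.
# The dates follow the example in the talk; the record fields are
# simplified for illustration.
from datetime import date

def reconcile(record_a, record_b):
    """Return the authoritative version of two copies of the same record."""
    assert record_a["id"] == record_b["id"], "must be copies of one record"
    # The most recent source_date wins; entry_date is ignored here, since
    # it only says when a copy arrived in a repository.
    return max(record_a, record_b, key=lambda r: r["source_date"])

# foo.com's record, updated January 6, then copied onward on different days.
at_xyz_gov = {"id": "foo.com/person.1", "version": "updated",
              "source_date": date(2010, 1, 6), "entry_date": date(2010, 1, 7)}
at_bar_org = {"id": "foo.com/person.1", "version": "original",
              "source_date": date(2010, 1, 4), "entry_date": date(2010, 1, 8)}

winner = reconcile(at_xyz_gov, at_bar_org)
print(winner["version"])  # "updated" -- the January 6 source date wins
```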
So in that case, xyz.gov will get the latest version of the record, because it's going to take the one that was created on January 6. We use the entry date mainly to synchronize between repositories. When people implement PFIF, they make heavy use of an API where you talk to a repository and say: give me any record that was updated since that particular date. In that case, we use the entry dates. So we have an API that takes a field for the entry date, and gives you anything that changed since that date. As I mentioned before, in the Person Finder application that we developed at Google, we implemented PFIF in our API in order to be able to exchange data. And I want to give you a concrete example of what that means.

When we launched Person Finder for the earthquake in Japan on March 11, 2011, we launched it a few hours after the earthquake. Very quickly, it spiked. A lot of records went in there, because people couldn't call and couldn't use SMS, so they entered information. And we got contacted by a lot of people who also had lists of missing people, to put those into Person Finder and exchange information. One thing in particular was that a lot of people had to run from home and go to shelters. And they didn't necessarily have time to bring everything with them. So what happened in these shelters is that people, to let the people around them know that they were in this shelter, put their names on a piece of paper, and it looked like this. Whoops, sorry. So they wrote names on pieces of paper, and put these pieces of paper on the wall of the shelter. And the few people that did bring their phone with them took pictures, and uploaded them into a Picasa album. So there were all of these pictures of lists of people in different shelters, and because of [INAUDIBLE]. Then, volunteers from all over the world who spoke Japanese would go to this Picasa album, look at the pictures, and transcribe the names. So in the comments on a photo, you had a list of names. And then from there, with an API, you could use the Person Finder Search API to see if that name was in Person Finder. If yes, you could add a note to the record saying: we saw that particular name, of that particular person, in this particular shelter. So people could have a lead. It's never certain that it's exactly the same person, but the people who had entered information in Person Finder could receive this information, and be in contact with that particular shelter. From that effort, done completely by volunteers, 9,000 pictures were taken, and we were able to update 137,000 records in Person Finder. And this type of thing is possible because we have an open API, and also because this standard, PFIF, is simple-- so when something happens, developers can quickly understand the format, quickly understand the API, and quickly develop some solutions. And we have stories like this in lots of different disasters, where people would just code something on the fly to interact with Person Finder. And that's possible because of this open API. So that's the story for Person Finder. I'm going to hand over again to Phil, to talk about the last project, which is mainly based here in New York, called Public Alerts. [APPLAUSE]

PHIL COAKLEY: Hi, again. So up until this point, we've talked mostly about tools to aid in the recovery and the aftermath of a disaster. But one type of information that's critical to mitigating the effects of disasters is official public warnings and alerts.
So for TV watchers, the US has the Emergency Alert System-- a successor to the Emergency Broadcast System-- that announces important alerts from official agencies, like FEMA or the National Weather Service, to targeted areas. So what's the equivalent for the internet age? People are able to find a deluge of unofficial information through social media, but it's really critical that they can easily find official information as well. In the past, almost every official agency that produces such alerts has used its own format and its own distribution mechanism. Some publish alerts to a website. Some have a Twitter feed. Some offer email or SMS subscriptions. Some don't have any information online at all. And even if the information was published on a website-- common theme-- people going about their daily lives aren't necessarily visiting those sites, nor are the sites necessarily organized for the right use cases. And very often, those sites are not provisioned for sudden spikes in traffic, like in the case of an emergency. So it would be much easier if agencies and organizations used the same standard ways of disseminating warning information. Luckily, there is such a standard. It's called the Common Alerting Protocol-- CAP. It's an XML-based alerting spec. It normalizes the formatting across many types of alert messages. It provides a standard and flexible way to target alerts by dimensions like language, category, and geography. It was designed in consultation with over 100 different emergency managers, and it's been adopted by multiple standards bodies as an international standard-- in particular (I'm supposed to read these out) OASIS, the Organization for the Advancement of Structured Information Standards, and the ITU, the International Telecommunication Union. Organizations like the US Geological Survey and the National Weather Service already produce feeds of warnings and notifications using CAP. Why is it so important? If data providers can provide Atom or RSS feeds of their data in this format, then the alerts can be easily exchanged across various platforms. So our team has embraced this standard, and we're doing as much as we can to drive adoption of it across the globe.

So I'm going to show you what we've done to address this need for an emergency broadcast system. There are a few different ways that we surface this data. But first, recall this slide from the beginning: we know that people come to Google to search when a major event happens. So in this case, imagine that you are a user who's aware of a wildfire nearby. You heard something about an evacuation warning, but you need more information. You search for "evacuation" on Google. People do this. So if there is an active warning at the time of your search, related to your search query and location, we'll show you a OneBox like this. Note that the snippet in this particular OneBox indicates that there is a mandatory evacuation in effect. So you would click on the More Info link, and you're taken to a details page, hosted on App Engine, with extended information about the alert you just saw. This details page is actually for a tsunami warning. I picked a different one just to keep you on your toes, because it has a little bit more information on it. Note the map on the right-hand side with the red markers. Clicking on any of those markers will show you the estimated arrival time and the expected severity of the tsunami at that location.
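[Editor's note: for readers who haven't seen CAP, here is a minimal, illustrative CAP 1.2 message built with Python's standard library. The alert/info/area structure and the urgency/severity/certainty fields follow the OASIS spec described above; every value below is invented.]

```python
# Sketch of a minimal CAP 1.2 alert. All identifiers, times, and the
# polygon are made up for illustration.
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:emergency:cap:1.2"
ET.register_namespace("", NS)

alert = ET.Element(f"{{{NS}}}alert")
for tag, text in [
    ("identifier", "example.org-2013-0001"),
    ("sender", "alerts@example.org"),
    ("sent", "2013-08-01T12:00:00-05:00"),
    ("status", "Actual"),
    ("msgType", "Alert"),
    ("scope", "Public"),
]:
    ET.SubElement(alert, f"{{{NS}}}{tag}").text = text

info = ET.SubElement(alert, f"{{{NS}}}info")
for tag, text in [
    ("category", "Met"),
    ("event", "Tornado Warning"),
    ("urgency", "Immediate"),
    ("severity", "Extreme"),
    ("certainty", "Observed"),
    ("headline", "Tornado warning for Example County"),
]:
    ET.SubElement(info, f"{{{NS}}}{tag}").text = text

area = ET.SubElement(info, f"{{{NS}}}area")
ET.SubElement(area, f"{{{NS}}}areaDesc").text = "Example County"
# A closed ring of lat,lon pairs targets the alert geographically.
ET.SubElement(area, f"{{{NS}}}polygon").text = (
    "35.0,-97.5 35.2,-97.5 35.2,-97.2 35.0,-97.2 35.0,-97.5")

print(ET.tostring(alert, encoding="unicode"))
```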
And we have similar map integrations and structured data tables for all sorts of different alerts. If anybody wants to play with this-- if you want to see the alerts active in our database at any given time-- you can go to google.org/publicalerts. So in addition to desktop web search, our team has done a lot of work to make public alerts available across a broad surface area. And these are some screenshots of alerts as they show up on mobile web search and on Google Maps. But if you haven't already gotten word that there's something going on, you probably aren't going to know to search for it. So enter Google Now. Google Now is available for both Android and iOS. And broadly speaking, it's designed to get you the right information at the right time. So it'll tell you today's weather before you start your day. It'll tell you how much traffic to expect before you leave work. And if there's a public alert near you, Google Now will show you a card very similar to this one to let you know about it.

So here's a sneak peek at our system diagram. I'm going to step through it in a little more detail, but very quickly: alerts enter our system in the red box at the upper left. They flow through the Alert Hub, which manages polling and subscriptions, and makes alert data accessible to other subscribers without adding load to publisher sites. Alerts then enter our ingest server, where they're processed and saved off to a geospatial index for quick serving via Maps, Web Search, and Google Now. We'll focus first on the Alert Hub. So we run an instance of the open source PubSubHubbub server. It's kind of a funny name. Has anybody heard of it? Yes, excellent. So "pub" stands for "publish"; "sub," "subscribe"; "hub" is a hub; and then "bub." It was developed by a couple of Googlers on their 20% time, and it's now in use across many Google and non-Google products. It essentially uses feeds and HTTPS POSTs. It's simple-- there are no complex APIs. It supports HTTPS, like I said, so you can pass through digitally signed XML, which is important for us. It's efficient. It provides an easy way for publishers to push alerts using HTTP POSTs; all the complexity is hidden in the hub. It's open-- we know this is important-- because the standard is public, and there's a bunch of open source code for both publishing and subscribing. And finally, it's scalable. So if you push alerts to a hub, not only will Google be able to subscribe to them, but so will anyone else. You can have hundreds-- you can have thousands of subscribers. Everyone will get near-instant notifications. The hub will handle the load so that you don't have to, and the hub does duplicate detection, so subscribers will see alerts only once. This is one of the things we're doing to promote CAP publishing, because if you're publishing an alert that people need to see, not everybody actually has the resources to run a server that can sustain that sort of load. So this Alert Hub is responsible for procuring updated XML feeds of alerts. Publishers can either ping the Alert Hub when they have new data, or the Alert Hub can poll them at a configurable interval. Once it's discovered new content on a feed, the Alert Hub publishes that content to all of the registered subscribers, in blue. And we allow anyone to subscribe to feeds through our hub. As I said, this is just one way that we stay true to our commitment to open data and standards. And our own ingest server is also a subscriber. Like everything we do, this runs on App Engine.
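[Editor's note: a sketch of the subscriber side of PubSubHubbub (since standardized as WebSub), using only the Python standard library. The hub, topic, and callback URLs are placeholders; a real subscriber must also serve the callback endpoint, so it can echo the hub's verification challenge and receive the POSTed feed updates.]

```python
# Sketch: register a subscription with a PubSubHubbub hub. The URLs are
# placeholders, not real endpoints.
import urllib.parse
import urllib.request

HUB = "https://pubsubhubbub.example.org/"          # placeholder hub
TOPIC = "https://alerts.example.org/cap-feed.xml"  # placeholder CAP feed
CALLBACK = "https://subscriber.example.org/push"   # our public endpoint

data = urllib.parse.urlencode({
    "hub.mode": "subscribe",
    "hub.topic": TOPIC,
    "hub.callback": CALLBACK,
    "hub.verify": "async",  # hub will GET the callback with a challenge
}).encode()

# The hub accepts the request, then verifies the subscription out of band;
# after that, new feed content arrives as POSTs to the callback URL.
with urllib.request.urlopen(urllib.request.Request(HUB, data=data)) as resp:
    print(resp.status)
```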
After receiving a new alert, the ingest service's job is to get it saved off as quickly as possible so that we can start showing it to users. The first step is a permissions check: is this publisher allowed to publish alerts in this geographical area? Does the alert expire in a reasonable amount of time? A tornado warning is typically active for under 30 minutes; we don't expect to see one that lasts 24 hours, for example. That would get really confusing to people. And then we have a number of other checks for things we've seen go wrong over time. We assign a score. This is necessary to make sure that our system has some notion of the relative severity of events. So a tsunami warning would get a very high score, while an air quality warning-- still important and interesting to some-- would get a relatively low score. Next comes the really cool part. Our geospatial index is based on tokens generated by the s2-geometry-library. I would love to tell you more about this; you can approach me afterwards, or take a picture or write down this link-- there's some more information there. And then we finally commit to the database. Part of the deployment of this geospatial index includes an in-memory cache, which we replicate widely, that keeps track of all of the active alerts in our system and is capable of answering queries in under two milliseconds. Google Maps, Google Web Search, and Google Now all communicate with this index as part of responding to a user request. So we rely on trusted partners to provide this authoritative information on emergency events. And this is just a sampling of the providers that we work with, and the types of events that they publish. And of course, we're constantly working to add support for more publishers across the globe. So if an alert provider has a well-implemented CAP feed, our job of adding support for them to Public Alerts is pretty straightforward. We've noticed, though, that even though CAP is a standard, there are a few things left unspecified that can make it a little more cumbersome. So to help new implementers of the CAP spec, we host a CAP validator tool that can help you recognize systemic issues with your CAP-- deviations from the spec, and sort of lint-style messages based on our observations. We've open sourced this CAP validator, along with a Java library, to aid in the creation of CAP. You can find more information about that and Public Alerts at these URLs. That's all for Public Alerts. Thank you. [APPLAUSE]

PHIL COAKLEY: Thanks. So as we work towards wrapping this up-- as an engineer working on these tools, I can say it feels like we're doing the right things. But how do we know that we're actually having a positive impact? There are various measures of success, but within our team, broadly, we'd say that if we've contributed to saving a life, to preserving property, to averting some sort of misery, then we've probably been successful. At Google, we like data. You know this. But these things are hard to measure. It's not a new problem. Lots of response organizations have similar challenges. So here are some of the things we look at. We start with user feedback. This sort of thing is really helpful. Sometimes it comes to us via Submit Feedback links on our products. Sometimes it comes through email or social media. The feedback that carries the most weight for us is that which tells us how we made a difference-- how we helped someone make an informed decision before or after an event. So the substantive feedback we get tends to be positive.
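[Editor's note: returning to the geospatial index Phil mentions above-- a sketch of the cell-token idea using s2sphere, a Python port of the s2-geometry-library (pip install s2sphere). The bounding box, levels, and alert ID are invented; the real index and its scoring are far more involved.]

```python
# Sketch: cover an alert's (made-up) bounding box with S2 cell tokens at
# index time; at query time, map a user's location to the same tokens.
import s2sphere

coverer = s2sphere.RegionCoverer()
coverer.min_level, coverer.max_level, coverer.max_cells = 8, 12, 20

# Bounding box of a hypothetical warning area (two opposite corners).
region = s2sphere.LatLngRect.from_point_pair(
    s2sphere.LatLng.from_degrees(40.5, -74.3),
    s2sphere.LatLng.from_degrees(40.9, -73.7),
)

index = {}  # token -> list of active alert ids
for cell_id in coverer.get_covering(region):
    index.setdefault(cell_id.to_token(), []).append("alert-123")

# Query: which alerts cover this user? Check the containing cell at each
# indexed level of the user's location.
user = s2sphere.CellId.from_lat_lng(s2sphere.LatLng.from_degrees(40.74, -74.0))
hits = set()
for level in range(8, 13):
    hits.update(index.get(user.parent(level).to_token(), []))
print(hits)
```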
PHIL COAKLEY: Thanks. So as we work towards wrapping this up, as an engineer working on these tools, I can say it feels like we're doing the right things. But how do we know that we're actually having a positive impact? There are various measures of success, but within our team broadly, we'd say that if we've contributed to saving a life, to preserving property, to averting some sort of misery, then we've probably been successful. At Google, we like data. You know this. But these things are hard to measure. It's not a new problem. Lots of response organizations have similar challenges. So here are some of the things we look at. We start with user feedback. This sort of thing is really helpful. Sometimes it comes to us via submit-feedback links on our products. Sometimes it comes through email or social media. The feedback for us that carries the most weight is that which tells us how we made a difference, how we helped someone make an informed decision before or after an event. So the substantive feedback we get tends to be positive. This is good. In fact, this is as much measurement as most organizations ever get. But we put the bar very high, and this is still not quite quantitative analysis. So we do have some hard numbers about the amount of traffic our products serve. We saw absolutely huge amounts of traffic to our Crisis Map during Hurricane Sandy. We know things about this traffic, also. On October 29th, the day of landfall in New York City, we saw that 82% of our traffic was in the non-Google referral category. So in other words, external sites embedding our Crisis Map-- sites like Huffington Post, other news sites-- and other third parties sharing the map helped to drive traffic towards us. So this is a pretty good signal that other free agents thought our tools useful, so we also watch these numbers very closely. But we're still left with this divide. How do you make that leap between page views and the numbers I'm talking about, lives saved? We don't know. We're still working on this. So our hypothesis is that there is some set of actions that are strong indicators or predictors that our tools delivered meaningful value. This is not just page views or clicks or bounce rate. This is deciding which interactions on our tools are actually substantive. So in Person Finder, if you perform a search and you find a record for a person that contains a meaningful status-- maybe that person has checked in, or left a note that they're safe-- it probably falls in that misery-averted category, so check. If you printed something or you asked for directions, you're probably going to change your offline behavior based on that content. While that doesn't necessarily translate to saving a life, it might be as close as we are able to measure it at this point in time. I'm going to pass it back to Alice for final conclusions. [APPLAUSE]
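A toy version of that "substantive interaction" classification might look like the sketch below; the event names and category labels are invented for illustration, not taken from any real logging schema.

```python
# Classify raw product events into 'substantive' signals of the kind Phil
# describes: found a meaningful status, printed, asked for directions.
SUBSTANTIVE = {
    "person_finder_search_found_status": "misery averted",
    "crisis_map_print": "offline behavior changed",
    "crisis_map_directions": "offline behavior changed",
}
SHALLOW = {"page_view", "map_pan", "bounce"}

def summarize(events):
    counts = {}
    for event in events:
        label = SUBSTANTIVE.get(event, "shallow" if event in SHALLOW else "other")
        counts[label] = counts.get(label, 0) + 1
    return counts

log = ["page_view", "map_pan", "crisis_map_directions",
       "person_finder_search_found_status", "page_view"]
print(summarize(log))
# {'shallow': 3, 'offline behavior changed': 1, 'misery averted': 1}
```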
ALICE BONHOMME-BIALS: So in conclusion, what Phil said is something that we're actively working on-- how do we know we have an impact? And how do we measure that, to also know what should be the next thing that we should be building, or how to improve the current tools that we have? So there are tons of things that remain to be done in the field. And some of them are, how could we respond to more disasters? So in the last three years, we responded to more than 30 disasters in 10 different languages, but there's much more we could do. There are some disasters that we couldn't respond to. And one of the keys for that is more automation. Public Alerts is a great example of that. During the recent tornadoes in Oklahoma, we displayed tornado warnings to millions of users without us having to do anything. Everything was automated. A tornado warning was issued, and then it just completely went through our pipeline directly to users. But that's not the case for Crisis Map or Person Finder, where when a disaster strikes, we manually need to create the Crisis Map, enable the Crisis Map, get some data sets, collect information. So we're always working on automating more. Also, trying to engage and empower citizens to respond themselves. If people can actually use the tools directly, or with Crisis Map, people can create their own data sets, and make them available to users. We also want to better engage the community. So we saw that with the Crisis Map during Sandy with the gas stations, where people would just go on-- if you're at the gas station-- and update the status. I want to do more of that. I think that that is something that really has a lot of potential, of how do you engage the community to update all of this data? Along with that goes all the social data. There are so many tweets and updates on Facebook and Google+ during a crisis. How do we make use of it? Right now it's really hard. What about if there's an earthquake, and there are 10 tweets that arrive about a person being stuck under rubble? How do you know if it's one person or 10 people that are stuck under rubble? How do you know that this is something that is still true five hours from now? How do you know whether there's already a search and rescue team that is on the ground working, trying to save these people? How do you coordinate all of these efforts? That is something that a lot of responders and agencies are struggling with. There's this overflow of information-- how to make better sense of it? That's a big challenge. And finally, offline-- what do we do for places, countries, or communities that don't have internet access, or have very low penetration? This is something that we're still thinking of, and what should be our role in this space. But I want to conclude on reaching out to you. If you're interested in getting involved, what can you do? There are a lot of things that can be done. First, there are communities you can join. Crisis Mappers is a community that leverages mobile and web-based applications and also shares lots of data sets of imagery-- geospatial platforms, visualization tools. CrisisCommons is the organization that actually started Crisis Camps. I'm not sure if some of you are familiar with Crisis Camp. So when some big crisis happens, they set up camps all over in different cities. And people go there to help. That could be developers to help develop some tools with some APIs; it could also be translators, or the group we saw that was in New Jersey mapping the gas stations. This is a little crisis camp where people go and take their phones, and update data sets. So if you go to crisiscommons.org, there is a lot of information about this. There are also lots of other tools. Ushahidi and Sahana are some examples of such tools to help during crisis management. Finally, if you just want to spend a day trying to help and hack some solution, go to some hackathons. Random Hacks of Kindness is an example of an organization that runs different hackathons all over the world at different times of the year, so you can go on their website and check when they do some hackathons. And the last call for-- I'm happy to see there are several women engineers and developers here in the room. There's the Grace Hopper conference, which is a conference for women in computing. And there's a special open source day that's focusing on humanitarian projects. And projects like Sahana and Google Crisis Map are going to be at this conference for a day of hacking in Minneapolis on October 6. So if you're interested, please go. And I think that's it. So thank you very much for your time. And we're going to take questions if you go to the mic. Thank you. [APPLAUSE] ALICE BONHOMME-BIALS: Yeah, we just want the question into the mic so that this can be recorded. AUDIENCE: Bob Gazalter. An observation and a question about the future. We have increasing amounts of broadband connectivity provided by fiber with local battery at the subscriber location. We have a known problem which occurred after Sandy where the cell sites had maybe 24 hours of battery power before they died. Any statistical data in the past with regards to your queries as to how much of the installed base, so to speak, of connectivity is vulnerable to disruption due to power?
And any thoughts on how we fix this underlying problem? Because frankly, if the mobile sites go down because the landline power is out-- armies travel on their stomachs, and the easy way to knock out an armored brigade has always been to blow up the tanks of the fuel train. And we're all hung on our umbilicals of power. PHIL COAKLEY: Yeah. I mean, the idea of trying to gain some insight about the power connectivity of a community, and how that's vulnerable to disaster, is a very interesting one. We haven't done anything with that. I know that you always have to be careful when you look too closely at any data. So I don't even know what level of that data would be available. But that's something very interesting for us to think about in the future. AUDIENCE: My apologies, but a follow-up. Your data sets may not go back far enough to September 11 of 2001. PHIL COAKLEY: No, certainly not. AUDIENCE: I'm talking about Google's generic activity. But one girder coming off the tower took out broadband for most of southern Manhattan. The AT&T switching hub next to the Trade Center got shish-kabobbed. And I had clients on 34th Street without broadband for months. ALICE BONHOMME-BIALS: There are actually some organizations-- and that was the case in Haiti-- that come on the ground after a disaster to actually set up some networks. And so, that could be through satellite. That could be through other means. AUDIENCE: That's the weak underbelly of this whole problem. ALICE BONHOMME-BIALS: Yeah. AUDIENCE: The responders can't use their devices. ALICE BONHOMME-BIALS: Yeah, so if you don't have enough access, you need to bring something. PHIL COAKLEY: Yeah. Acknowledged. AUDIENCE: Hey, I'm David, a mobile engineer. And there's been a bunch of articles about Project Loon, where Google has launched Wi-Fi balloons to help in situations like this. Can you talk about that, and how it's progressing? PHIL COAKLEY: So I saw the video, like the rest of you, and that's about all I can say. It looks pretty cool, though, right? ALICE BONHOMME-BIALS: But you can imagine it having a lot of potential in times of crisis. AUDIENCE: Yeah. Anchor one over a [INAUDIBLE] disaster, and just don't let it float. AUDIENCE: Hi, my name's Lev. I have a question in line with the whole gas situation during Hurricane Sandy. How reliable is the traffic information on Google Maps? And how real-time is it, actually? Because one of the ideas I had-- sounds simple enough-- is that if it's pretty real-time, and more people are lining up-- cars are lining up at a particular gas station-- you should be able to see that on Google Maps, and try to avoid that. So it should be easy enough to create a Google mash-up of sorts. PHIL COAKLEY: Absolutely. So our Crisis Map actually has a layer for Google Traffic, the same layer that you would see on Google Maps. And we did receive feedback from users that they were using that Traffic layer in conjunction with the Mappler gas layer to figure out which gas stations had gas, and the shortest lines. And there are definitely cases where it did work. I can't comment as to the exact accuracy, but I know that there are definitely cases where that approach has worked. AUDIENCE: OK, and also one thing that I found. I tried making a mash-up. It seemed kind of tough, because you guys don't have any direct APIs, so all you really have is just enabling the Traffic layer, and just seeing traffic for a particular address or a particular viewport.
Are there any plans in the making for an API for traffic data, and making that open source? ALICE BONHOMME-BIALS: I don't know. PHIL COAKLEY: You know, I'm actually not aware of any. We don't actually work on the Traffic team. We just get to turn on the layer. AUDIENCE: OK. Just thought it might be helpful with the whole-- PHIL COAKLEY: But that's a great idea. AUDIENCE: --topic of making everything open source for a crisis situation. PHIL COAKLEY: Absolutely. Thank you. AUDIENCE: I had a question regarding, I guess, the prevention part of Public Alerts. So I guess, are there any plans to-- once we realize they're able to, say, issue a tornado warning or something like Hurricane Sandy, would there actually be-- I guess, would there be any plans to kind of conglomerate, like, shelters you can go to, or basically any supplies we could get beforehand? I guess something like that? And I guess maybe even further, to kind of extend that to a non-crisis mode. Is there any way we can make our lives a little bit easier so that when a crisis does happen, we're not suddenly all frantic, and everything's suddenly spiking? ALICE BONHOMME-BIALS: Yeah, so you're talking about how to get prepared? And preparedness is also a big side of what we're doing. And actually, on the details page when there's a public alert-- when we're generating the warning and you click on it-- there are links to what to do just before a tornado, and what to do during, and what to do after. So there is information there on how to get prepared. We also do some work with some organizations on campaigns about raising awareness of how to get prepared in a particular region. I know in [INAUDIBLE], there were things about hurricanes, and how to get prepared for them. A tornado is a little bit different. A hurricane, you have a few days to get-- you know things are coming. And definitely here, there's information. And I think some sites like FEMA have information on how to get prepared. And so we try to link as much as we can to these places. PHIL COAKLEY: Yeah. The state of Florida has actually adopted Crisis Map to build a preparedness map for their entire state along those lines. And so we definitely help to drive that forward wherever we can. AUDIENCE: Am I allowed to ask a second question? I guess this is maybe more of a side one-- what do you guys do if you're fed false data? Or maybe someone-- let's say in Person Finder, I enter a name, and I accidentally misspell a person's name. And so I end up creating an entry for this person who I think maybe something happened to. Is there, like, a good way to deal with a case like that? ALICE BONHOMME-BIALS: Yeah, actually, we even had some spam in Person Finder, where people entered bad information on purpose, like some notes with wrong information. So there's a way to handle that on Person Finder-- you can delete a record. You can also report some notes as spam, so they would actually be hidden, and other people would not see them. So we integrate that into the tool to allow people-- so if, as you said, you make mistakes, you can delete, because in times of crisis, you're going to make mistakes. Or also, to just block people that have bad intentions to hurt other people. AUDIENCE: Hi. My name is Tokam Bohemmian. I think everything you're doing here is great. I love it. But I want to ask in terms of Africa, too-- Nigeria, Ghana, South Africa. I mean, there are some places where the internet connection is not great, but they could use help.
So I just want to know what your thoughts are on places like that-- quote, unquote, third world countries. ALICE BONHOMME-BIALS: Yeah. I mean, you're completely right-- that's a part of what I mentioned, offline. What do we do where people are not that well connected? And I think some of the things we can do here is actually help responder organizations that are on the ground by providing some of the tools. You could imagine running your own instance of Person Finder in some places, and then you have people going on the ground, talking to people, and then entering it at a few places where you have internet connections. But so far, we've mainly put most of our efforts where there's a good connection. And this is something we're thinking of. And it's actually-- working on this team, sometimes you're frustrated because we could help. There are so many other places where we could help. And so far, it's sort of hard to see what our impact would be in places like this. So if you have any suggestions, we welcome them. AUDIENCE: I was born there. AUDIENCE: Hi, my name is-- ouch. My name is Marco [INAUDIBLE], from a communications and information firm in Israel. My question is basically about the reliance on the internet. Of course everybody keeps asking the same question. Once upon a time, we had ChaCha, right? You could do a Google search through SMS. There were many times where internet availability is not anywhere near what SMS capability is. For example, there might be more support for older GSM generations, and not for broadband or any advanced [INAUDIBLE] support. The question is whether or not we would be able to put in support for SMS-based access to Person Finder, or even the Crisis Map. This is something that I find should be brought back. I don't know why ChaCha was put down. PHIL COAKLEY: Absolutely. So an SMS gateway for Person Finder is actually something that's come up many times. They have it in Japan, right? ALICE BONHOMME-BIALS: Actually, in India. PHIL COAKLEY: In India, [INAUDIBLE]. ALICE BONHOMME-BIALS: So basically, there was some flooding-- I'm not going to try-- PHIL COAKLEY: Uttarakhand. ALICE BONHOMME-BIALS: Yeah, Uttarakhand, in India. And some feedback was that a lot of people there would have access to SMS, but not smartphones. So we have an open API. So we just got an SMS gateway, and there's a number where you can actually just SMS "search" and the name of a person, and then you get some results. And that was launched as of, like, a few weeks ago. So definitely, in some countries, SMS is the way to go. For Crisis Map, I guess that would be a little bit more of-- it depends exactly what you want to support for an SMS Crisis Map. AUDIENCE: Well, right. Location services are usually available in those situations. I think it's also very wise to access that the same way-- where you can guess where someone is on Google Maps, you can certainly guess where someone-- ALICE BONHOMME-BIALS: I think that's the part-- if you have open APIs, then other people can just build some other parts that will just integrate with it. Because it's people on the ground that understand best what the needs on the ground are, and what the resources are. So if we can empower them-- which is to say, the data is there-- if you need the middle bridge between these apps working on the internet and the people here, then go and use the API, and build a bridge. AUDIENCE: OK. So basically empowering people to expand the API.
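A sketch of what such an SMS gateway might look like, bridging incoming texts to the Person Finder search API: the webhook shape follows a common Twilio-style POST with a Body field, and the search endpoint, repository ID, and API key below are assumptions based on Person Finder's public documentation, not verified values.

```python
# Hypothetical SMS-to-Person-Finder bridge. Endpoint shape, repo id, and
# API key are placeholders; the PFIF XML namespace is a known constant.
import requests
import xml.etree.ElementTree as ET
from xml.sax.saxutils import escape
from flask import Flask, request

app = Flask(__name__)

PF_SEARCH = "https://www.google.org/personfinder/{repo}/api/search"  # assumed
NS = {"pfif": "http://zesty.ca/pfif/1.4"}

def search_person(repo, name, api_key):
    resp = requests.get(PF_SEARCH.format(repo=repo),
                        params={"key": api_key, "q": name}, timeout=10)
    root = ET.fromstring(resp.content)
    # Pull full names out of the returned PFIF person records.
    return [p.findtext("pfif:full_name", namespaces=NS)
            for p in root.findall(".//pfif:person", NS)]

@app.route("/sms", methods=["POST"])
def sms_webhook():
    # Incoming text like "search Jane Doe" from a basic phone.
    body = request.form.get("Body", "")
    if body.lower().startswith("search "):
        names = search_person("example-repo", body[7:], "API_KEY")  # placeholders
        reply = "; ".join(n for n in names if n) or "No records found."
    else:
        reply = "Text: search <name>"
    # TwiML-style reply so the gateway texts the answer back to the sender.
    xml = f"<Response><Message>{escape(reply)}</Message></Response>"
    return xml, 200, {"Content-Type": "application/xml"}
```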
AUDIENCE: Hi. My name is Louis. You have talked about crisis response in cases where nature hits. And we always see that we have crises that are not related to nature but to political turmoil or war. And I was wondering if you have in your roadmap the intention to cover alternatives or options for people living in countries that are suffering that kind of crisis, and mothers looking for their kids. A lot of what you mention for natural crises resonates with these kinds of situations. I'm not talking about supporting a government or supporting any rebel forces, but bringing some tools for families to get reunited. It sounds like a lot of what you have been-- especially what you talked about. Do you have something like that in the future? Are you looking into it? ALICE BONHOMME-BIALS: I mean, that's something that we get asked very often. And so far, we've put most of our efforts into trying to respond to natural disasters. But one important part of what we do is that our tools are open source. So anyone can go and create their own instance of Crisis Map, or create their own instance of Person Finder, for example. And so, that's why we open sourced them, so that if we cannot engage and we cannot respond, then other people can. At the same time, you also need to be careful in the way that you use these tools. Think of Person Finder in places where there's a lot of political conflict and war-- it means you're going to have a list of people that you put out there on the internet. And everyone can access it. So you have to think about, is it going to help the people, or can it hurt them more, because this list will be in the hands of people that shouldn't have this list? So that's something that is very important to keep in mind. But you can use these tools directly. AUDIENCE: My question might have been covered a little bit with that. But I was wondering if you guys have thought of-- with Google having experience with, I guess, their enterprise-level drop-in search servers that run in-house-- whether you would pre-set up instances of this where they may not have internet, where you can go drop off an instance of this, and kind of bring everyone to a central place in order to link them up together easier. ALICE BONHOMME-BIALS: I know for Haiti, something like this was done. After the earthquake, Google worked with some ISPs to bring them servers, and have them get back up to speed much faster. We also did bring on the ground some laptops with Google Earth, and some data uploaded directly into them, because it was-- you say, oh, we made satellite imagery available for Haiti, but people with their little dial-up connections, how can they just get the satellite imagery? So there are a few things like this. But it was more like some one-offs. Yeah, it's-- PHIL COAKLEY: Yeah, I was just thinking, like with the size of Google Now and the research you have behind you, it'd be great for-- especially for google.org-- to have those resources in place and ready to go, partnering with the Red Cross in order to bring those to wherever they might be needed. AUDIENCE: Hi. Can you hear me? OK. So my name is Calpina. I have two questions, basically. One of them might have been covered. But the first one is basically how you are handling this whole data. Like, you have data coming from Google Maps. I'm more curious, are you considering the data coming from Facebook, Twitter? If so, how are you integrating it?
I would like to know the technology that's used behind it, maybe like the big data thing, or the NoSQL thing-- more technology-wise rather than from a business perspective. That's what I'd like to know about. And the second one, probably somebody already raised-- not just natural disasters, like the thing that happened two days ago. Suppose it's a full flight number-- one, two, four-- I'm looking for. Nobody I knew was on the flight. But I just did look for the flight, like my app said. The flight data app said that it had landed. So I looked it up on Google Now, which said that the status of this is uncertain, or something like that, basically giving me [INAUDIBLE], which is kind of a good sign. If my husband is travelling, I'd like to know if the flight landed, or if something-- at that point, you can call the airlines, you can call Norwegian cruise lines, which has like 4,000 [INAUDIBLE], but there's nobody you can reach. In that kind of situation, this is basically a group of data-- like, you know the passengers' information. Is Google going to [INAUDIBLE] that, you know what I mean? Like basically, it's not random data. You have specific data. If there's a way of handling that. Like if there's a good way of-- PHIL COAKLEY: I mean, so for the first one-- to make sure I understand the question-- is it that you're interested in knowing about the data standards we use to interoperate-- AUDIENCE: Right. Like, suppose the data is coming from a PDF, and there's a Twitter feed which is coming in JSON. How are you bringing it together? PHIL COAKLEY: Absolutely. So I think there was a slide where I went over all of the different sorts of map data types that we support. Underlying the Maps API is sort of a very large infrastructure that supports KML natively-- so that XML format-- that supports WMS, this web map service, that supports tiles, all these sorts of things. How does it all come together? AUDIENCE: What I'm trying to say is basically there's a huge amount of data coming in one [INAUDIBLE] and there are. And this can happen [INAUDIBLE] You might see data in the very first hour. You might not see it for the rest of two or three days. That's when you have to bring it together, you know what I mean? I'd like to know, not two days after. ALICE BONHOMME-BIALS: That's a big field of research. If you could just ingest all the social data, and [INAUDIBLE]-- lots of researchers are working on that right now. AUDIENCE: Right. I'm trying to understand if you're putting some machine learning algorithms together behind it. Or is there anything, perspective-wise, if you can throw out some ideas. PHIL COAKLEY: We would love that. ALICE BONHOMME-BIALS: So like everyone, these are the problems we're thinking about. But we don't have anything right now. AUDIENCE: [INAUDIBLE] ALICE BONHOMME-BIALS: Which was, how to best have, like, real-time updates about things? AUDIENCE: Right. Like you have a specific data set. You don't have so many number of people [INAUDIBLE] but you just know how many people-- ALICE BONHOMME-BIALS: Well, I think that's part of the-- after, in San Francisco, after the plane just crashed, I think the fastest information came from social networks, before, like, any official sources-- in that type of situation. So I think that goes back completely to your first question, about how you could have that type of information be a feed for, like, Public Alerts. And then you would have in Google Now a card that not just says, tornado warning, but, hey, here is the information.
And right now, some of that makes it through, like, Google News, for example. It's not in seconds, but now it's in a few minutes. And right now, if you go in-- and I heard about the crash very quickly, just after all of these things. And yeah, it would be interesting to almost have-- we have a variety of sources as input into, like, this Public Alerts tool. But you could imagine having a community feed based on magically understanding all of this social data, and making it through, and being able to tell people about it. But I think pretty much, it's just an issue of how to do it. PHIL COAKLEY: Hard problem. AUDIENCE: The fastest data-- just to follow up the last question-- the fastest data on a flight status is probably Flight Tracker, which copies the FAA's air traffic feed digitally. Airline dispatch uses that to schedule things like the baggage carts, because there's a sterile cockpit rule below 10,000 feet. ALICE BONHOMME-BIALS: That one, for a while, marked the plane as landed, or just [INAUDIBLE] AUDIENCE: Well, I believe they identified that it was a problem real fast. The comment I was going to make, to reduce load in emergencies: there are two categories of users of Person Finder-related tools. There is the incidental user, who's involved on the receiving end of a disaster. There's also what I'll call professional responders-- namely, first responders, volunteers, and the like-- who are going to be in the system chronically. A suggestion-- produce an app download-- not HTML-based and JavaScript-based-- compile it into an app for iOS and Android so it can be directly downloaded onto a tablet, and will only make the data queries upstream. In an emergency response situation where you've maybe got a truck with a mast and a Wi-Fi node on it and a satellite phone, knocking down 70k or 100k to 3 or 2 will go a long way toward increasing usability. And I bet if you look at your IP addresses and your traffic logs, there's a distinct population which is banging away on it constantly. Take them off the bandwidth, take them off everything else. Major change. Second of all, a related comment-- instead of a server, deploy a proxy. Have a proxy box. It could be literally the size of a handheld that provides the JavaScript files locally. A local ISP can resolve peoplefinder.JavaScript.google.com themselves, and answer it directly off the handheld. That will also eliminate a lot of uplink traffic chattering back and forth. ALICE BONHOMME-BIALS: I think that completely makes sense. I think the part about installing an app-- the issue is that people don't install apps in case there's a disaster. AUDIENCE: Professional responders. ALICE BONHOMME-BIALS: For professional responders. OK. AUDIENCE: Yes, they will. ALICE BONHOMME-BIALS: Yes, exactly. AUDIENCE: If you can knock off 50% of the traffic on the local net, and another large amount by proxy at the local ISP. ALICE BONHOMME-BIALS: So then you save the bandwidth-- the idea-- I'm just repeating because you don't have the mic. AUDIENCE: You're stretching scarce resources. ALICE BONHOMME-BIALS: Yeah. The idea is to try to save the bandwidth for very important traffic. That's pretty much the idea there.
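The proxy-box idea can be prototyped in a few lines: a tiny caching proxy that serves static assets from local disk so repeated requests never cross the scarce uplink. This is purely illustrative-- a hypothetical upstream origin, no TLS, no cache expiry, no Content-Type handling.

```python
# Toy caching proxy for the 'proxy box' suggestion: static assets are
# fetched once over the uplink, then served locally. Single-threaded sketch.
import hashlib
import os
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://example.org"   # hypothetical origin, not a real endpoint
CACHE_DIR = "/tmp/crisis-proxy-cache"
CACHEABLE = (".js", ".css", ".png", ".jpg")  # static files worth caching

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        key = os.path.join(CACHE_DIR,
                           hashlib.sha256(self.path.encode()).hexdigest())
        if self.path.endswith(CACHEABLE) and os.path.exists(key):
            with open(key, "rb") as f:       # cache hit: no uplink traffic
                body = f.read()
        else:
            body = urllib.request.urlopen(UPSTREAM + self.path).read()
            if self.path.endswith(CACHEABLE):
                os.makedirs(CACHE_DIR, exist_ok=True)
                with open(key, "wb") as f:
                    f.write(body)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CachingProxy).serve_forever()
```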
FEMALE SPEAKER: I think we have time for two more. ALICE BONHOMME-BIALS: Two more questions. Thank you. AUDIENCE: So for projects like this, it sounds like you're dealing with a lot of governments-- particularly, local governments. And local government often doesn't have particular sophistication when it comes to the IT technologies that they have. Have you compiled lists of recommendations beforehand that local government could use? I mean, going beyond your file formats, presumably there are some other types of data that they could be gathering in case of emergency, maybe have things ready in particular formats. And also, I'm wondering how often you end up collaborating with local charitable organizations that are active in dealing with the responses to these crises. PHIL COAKLEY: We do have several people on our team who work sort of nonstop in partnerships mode, working on preparedness with local governments and emergency agencies. So to that question, absolutely. Absolutely, we go out and we work with people. Second part? I'm sorry. What was the second part again? AUDIENCE: [INAUDIBLE] ALICE BONHOMME-BIALS: Oh, local agencies, local organizations. PHIL COAKLEY: Charitable organizations. ALICE BONHOMME-BIALS: Yeah, I mean, pretty much, I think for each crisis, we work with whoever is responding on the ground. So for example, this group of students in New Jersey that was mapping the gas stations-- we worked with them. So pretty much, there are a lot of local efforts, and some groups will get together and try to do something. And this group may not exist before the crisis. They may just spontaneously get created at the time of a crisis. And if they have valuable data, and we think that it makes sense to work together, then yes, we do. AUDIENCE: Hi. PHIL COAKLEY: Hi. AUDIENCE: Liz. PHIL COAKLEY: Hi, Liz. AUDIENCE: You had mentioned earlier that someone was asking you to make a direct connection to their relatives, because they didn't have telephone service. Have you guys considered implementing something, maybe in Person Finder, where people could do that on behalf of others? In other words, make a direct connection. Does that make sense? So people would volunteer-- people who did have phone service someplace else, in other words, could volunteer to make those direct connections on behalf of people looking to reach their family members. That make sense? ALICE BONHOMME-BIALS: So for example, that was done a little bit manually in some of these Crisis Camps that I mentioned before. So in Haiti, after the earthquake, there were some groups of volunteers that would pretty much look over Person Finder, where someone said, I want to let my family know. They're in New York, and here is their phone number. And there were some volunteers that would go through that, and make these phone calls. And then would update Person Finder back on this. So there's no, like, tool support for that right now. It's more like spontaneous groups that [INAUDIBLE]. But I think there was some discussion-- so Ushahidi is a tool that was created after the riots following the election in Kenya in 2008. And part of the tool-- the goal is to be able to map incidents. And I know that they also tried to integrate some tools that would do some things a little bit like this, where there are some volunteers. For the earthquake in Haiti, they developed a project called 4636. So people on the ground could text a message to 4636, which was a free phone number, where they would say, I'm stuck here, and I'm located in this particular place. Usually, that was sent in Haitian Creole. So that was sent to a server that Ushahidi was hosting in the US. And then you had volunteers that did speak Haitian Creole who would read that particular message and would geocode it on a map, because the location was not an address.
The location was usually, I'm near the supermarket that is near the church in this neighborhood-- which, if you're from there, you know where it is. But if you're not, you don't know. So people would go read that, put it on the map, categorize it. Is it someone stuck under rubble, someone needing water? And then based on this, it would branch out-- there was the US Coast Guard to go and help people. Or some were sending an SMS saying, let my family know I'm safe. And then they would send it through the Person Finder API. So they did something that is trying to use the crowd, crowdsourcing the information. FEMALE SPEAKER: That's our time. Thank you so much, Alice and Phil. [APPLAUSE]

About

Google Crisis Response organizes emergency alerts and news updates relating to a crisis and publishes the information on its web properties or dedicated landing pages. It also provides opportunities for donation in collaboration with agencies like UNICEF, Save the Children, International Medical Corps, and local relief-providing bodies. Google also builds and provides tools to help crisis responders and affected people communicate and stay informed, such as Google Person Finder, Google Crisis Map, Google Public Alerts, Google Maps, Google Earth, Google Fusion Tables, Google Docs, and Google Sites.

Tools

Google Person Finder

Google Person Finder helps in locating missing persons. It acts as a message board for survivors, families, and friends of those affected by a natural disaster by posting live updates about missing persons. During the 2011 Tōhoku earthquake and tsunami, several Japanese family members were able to locate each other using Google Person Finder.[3][4]

Google Maps

Google Maps supplies critical crisis information to the public through search engines. It is used to provide crisis information such as road closures, areas covered in debris, roads which are passable, and resources such as emergency medical stations. Using the My Maps feature, KPBS, a broadcast station, created a map which provided real-time updates on the San Diego wildfires in 2007. The map received more than two million views within a couple of days. Google Maps was used to track the path of Hurricane Irene, which hit the US eastern coast in August 2011. Besides mapping, Google Maps also displayed 3–5 day forecasts for Hurricane Irene, showed evacuation routes, and marked out the coastal areas that were in danger from the impending storm surge.[5][6]

Google Earth

Google Earth is a virtual globe that allows extensive customization with editing tools to draw shapes, add text, and integrate live feeds for information on earthquakes, cyclones, landslides, and oil spills as they occur. During the 2010 Haiti earthquake, International Medical Corps and Doctors Without Borders used the Google Earth application to track response efforts and visualise cholera case origins.[7]

Google Fusion Tables

Google Fusion Tables is an application which gathers, visualises, and shares data online with response organisations and constituents. It instantly visualises data ranging from shelter lists to power outages in the form of maps and charts, and it plays a crucial role in crisis decision-making by helping to identify data patterns. During the 2011 riots in London, the application was used to create maps which showed indices of deprivation and riot locations.[8]

Google Sites

Google Sites facilitates the creation and updating of a website with critical response information, available from anywhere in the world at any time. Its highlight is that a site can be created or updated without the help of web developers or any knowledge of HTML, making it easier to use. A variety of information can be put up, such as forms to collect information, videos of the crisis, photos of the devastation, and maps that help protect important natural resources and aid search and rescue operations. Save the Children, an independent organization involved in the rescue of children during natural calamities, has regularly used this application.

Donations

Google.org, the philanthropic arm of Google, has donated several million dollars to different relief organizations during natural disasters such as Hurricane Katrina and Cyclone Nargis.

References

  1. ^ "Google Crisis Response: Frequently Asked Questions". Retrieved 18 May 2011.
  2. ^ "Google Crisis Response: Response Efforts". Retrieved 18 May 2011.
  3. ^ "Archived copy" (PDF). Archived from the original (PDF) on 2011-11-25. Retrieved 2011-10-14.{{cite web}}: CS1 maint: archived copy as title (link)
  4. ^ "Google tool tracks the missing in Japan quake". CNN-IBN. 18 May 2011. Archived from the original on 13 March 2011. Retrieved 12 November 2011.
  5. ^ "Newsrooms use Google Maps to improve wildfire coverage". The Online Journalism Review. 29 October 2007. Retrieved 12 November 2011.
  6. ^ AFP (27 August 2011). "Google plots Hurricane Irene with online map". The Times of India. Retrieved 12 November 2011.
  7. ^ Google Crisis Response (January 2010). "Doctors Without Borders in the Haiti Earthquake" (PDF). Google.org. Archived from the original (PDF) on 25 November 2011. Retrieved 12 November 2011.
  8. ^ Simon Rogers (31 March 2011). "Mapping the riots with poverty". The Guardian. Retrieved 11 June 2013.
