Eye vein verification

From Wikipedia, the free encyclopedia

Eye vein verification is a method of biometric authentication that applies pattern-recognition techniques to video images of the veins in a user's eyes.[1] The complex and random patterns are unique, and modern hardware and software can detect and differentiate those patterns at some distance from the eyes.
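As a rough illustration of the final matching step only, here is a minimal sketch in Python. All names are hypothetical, and the encoding of a vein pattern as a fixed-length bit string is an assumption for the example; real systems extract far richer features from video frames before comparison.

```python
# Hypothetical sketch: template matching by Hamming distance.
# We assume vein patterns have already been encoded as equal-length
# binary strings; real feature extraction is far more involved.

def hamming_distance(a: str, b: str) -> int:
    """Count the differing bits between two equal-length binary templates."""
    if len(a) != len(b):
        raise ValueError("templates must be the same length")
    return sum(x != y for x, y in zip(a, b))

def verify(enrolled: str, candidate: str, threshold: float = 0.25) -> bool:
    """Accept the candidate if its normalized distance falls below threshold."""
    return hamming_distance(enrolled, candidate) / len(enrolled) < threshold
```

The threshold trades false accepts against false rejects; 0.25 here is an arbitrary placeholder, not a value from the literature.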

YouTube Encyclopedic

  • 2010 Google Faculty Summit: Cloud Computing and Software Security
  • Genetics, epigenetics and disease
  • Active Matter Summit: Session 6

Transcription

>> Ulfar had previously been at Microsoft, had been a professor at Reykjavík University, and had done a couple of start-ups. He's a very broadly knowledgeable security person, as you'll see, but also one that is interested in applying the newest things to make a difference in an extremely important domain, because we all know there are significant security concerns on the Internet broadly. He'll be talking about the intersection of how security and cloud computing fit together, which is exactly the bridge we want as we move from cloud computing and the main agenda into security. Ulfar? Thank you. >> ERLINGSSON: Thanks. So, maybe I'll start by talking about security research at Google, since I actually joined into Google Research, and people have asked me multiple times already since yesterday: so what's the difference between research and the rest of Google? And to be fair, really there isn't much difference. Google has a very integrated research and development sort of environment, and so in all of the product groups, the infrastructure and other groups, there is actually what one would think of as research. There's also a security group; Eric Grosse leads that and will give a talk this afternoon, and there is lots of advanced work there on security, privacy, cryptography, and so on. So maybe within Google Research, the difference would be a slightly longer-term focus and more external collaborations, although there are a lot of those already from our researchers: there are just above 100 publications on the research website in security, and a lot of those are in collaboration with external parties. So the eventual goal of trying to look to the longer term is that security is not exactly where we'd like it to be, and so it's worthwhile to try to actually attempt moving--sort of taking a slightly larger step forward.
And so on that, I'm going to talk about security and cloud computing, and I feel kind of embarrassed because I just joined Google recently, and so unlike a lot of the people here--and the people that will be giving talks this afternoon--I have not been working on this exact topic, security for cloud computing, for a long time. Unlike Andrew Fikes, I haven't actually built real massive security systems that serve millions and millions of users. But I'll try to give some perspective, at least, maybe starting with: how is this different from what came before? And, of course, there has been large infrastructure, and plans for large infrastructure, multiple times throughout the years, starting with maybe Multics but even earlier time-sharing systems. There have also been online communities, starting with systems like the PLATO teaching network in Illinois in the '70s and all the way through bulletin boards and multi-user dungeons and CompuServe--this is an ad for socializing over the Internet, or actually not the Internet, but over the CompuServe network, from 1983. It feels quite appropriate for what people like to do today. Or is cloud computing really maybe more similar to scientific computation? Large data centers, lots of machines, and lots of important data that's being processed that needs integrity guarantees, et cetera. And so in trying to resolve this in my own head and trying to figure out what cloud computing security is, I looked a little bit at the NIST definition of cloud computing and also at an ISAT workshop a couple of weeks ago where this was sort of a recurring motivation. And the NIST definition of what cloud computing is really feels quite similar to what one would think of as scientific computation: it's these very scalable network services, these massive capabilities, and then you have good access to them and the access is metered or controlled in some way.
And there's a categorization into software, platforms, and infrastructure, but overall it's very much an infrastructure-like definition, and a lot of the thinking has sort of been in that vein, although security is perceived as a top obstacle. So maybe, coming from the outside, I'm a little bit more naïve than people who have been working for a long time on the infrastructure. But to me, what seemed to be maybe more interesting were the existing very successful cloud services, what one can really think of as being cloud: things like Gmail, things like YouTube, Blogger, even the image sharing in Picasa and so on. And so I observed that there are various commonalities that seem to be more along the lines of where I would think cloud would be, and how that touches on security. The first one is the really strong focus on users and data. So cloud is really about focusing on the user and moving data that previously sat on various devices or in homes and so on into ubiquitously accessible cloud services, where it can also be more easily shared even when you have very large or complex data items; where it can be more easily maintained up to date at the very latest fresh copy--so instead of having multiple email boxes in various places that you have to synchronize and so on, there is one of them; and where you also have more reliability, because unlike any one of these devices, where you might have a copy of the data and that might somehow get lost, you actually have multiple copies and redundancy in the cloud. And, of course, this centralization of user data cannot be done without giving the users control of that data, and Google has been very good at doing that and dealing with the issues in privacy, authentication, and public policy, and we'll hear more about that this afternoon.
This logo is from the Data Liberation Front: if users give data into these cloud services, they should be able to take it out, and permanently take it out. And, of course, there is the massive scale, which we also heard about this morning. To actually utilize all the world's information--well, all the world's information is getting larger all the time now. A lot of it is video; every single device out there seems to have a video camera in it, and videos are very large, so the infrastructure needs to get larger and larger and larger. Jonathan Smith, from UPenn, in this ISAT study had a very nice sort of term for what one really would like: one has a short query and infinite data sort of in the back-end. So you have a very easy way to express this simple query you want to make, and you have this infinite data set that's infinitely available and fresh, and so on, and you want fast answers that are correct and up-to-date even when machines fail--and machines will eventually fail, without doubt. To deal with that, of course, you need redundancy, but you also need redundancy just for availability and performance as well. And since all kinds of partitions and so on can happen, you need even geo-distribution of redundancy. And this redundancy is not just on the server side; it also has to be on the client, which really means the client needs to be mostly stateless, which also helps when utilizing a new client to access your data. So clients can fail, so they have to be stateless. Now, a third property that seems to be common amongst all these popular cloud services is that there's really a new definition of software, where--well, as you just opened that webpage, you got a new copy of the software. Maybe you had some previous version of the software cached already; large properties like Gmail have a tremendous amount of JavaScript and other code that is actually used to implement them.
But through caching and various other means, you're always running the most up-to-date version of that software. And this, of course, greatly simplifies management: you don't have any applications to install or remove or maintain up-to-date. So, even on the desktop, as we'll see in a little bit, we're actually getting to this point where you always have the most up-to-date version, which, of course, helps with security vulnerabilities, making sure that any fixes for those are actually in. It also helps with all kinds of randomized experiments and instrumentation, and it helps with fail-stop error handling, because the clients are actually stateless, and even on the server you have redundancy and you also have statelessness. All of a sudden, you actually have this very draconian measure available: if you think something is wrong, you can actually just stop. You can roll out new versions incrementally, et cetera. So, finally, going back to the user, there's really a new user model of what software is, what an application is. As a user picks one of the hundreds of thousands or even millions of applications to use at this particular time on their device, they're really interacting with a single-user device that may be running multiple applications. But each of those applications on that local device is relatively isolated. They can communicate through the cloud, but within the local device, up until this point, in the different implementations of these on phones, in web browsers and so on, local communication is actually not that easy. So this is actually very different from the time-shared multi-user systems of the past. It's really a relatively fundamental change, especially when coupled with malleability--this very dynamic nature of software--and the scale of the back-end cloud services. So that's sort of my short summary of what cloud computing is.
So--a particular point of view, but I think--especially this notion of the different application model, that's really a big change that's happening, and I'll get back to that later. Now, what are the challenges here? In all of the existing large-scale infrastructures, be it bulletin boards or MUDs or even game sites, you always have security problems. And, commonly, when you have users, you have the same security problems again and again. People would hack into bulletin boards for various purposes. People would cheat at games, et cetera. So when you have users, when you have something to gain, you will have people trying to actually get those gains, and so that's one commonality. Now, actually, in this new world, the biggest new thing, because there is this emphasis on user data, is really privacy. And so there really have to be means for pervasive privacy protection. And there's actually a lot of research to be done, a lot of work to be done on that topic. And Betsy Masiello will tell a little bit about what Google has done on that in the afternoon. Also, Dirk Balfanz will talk about authentication, which is another big topic. So, to protect the user's data, you actually have to control who is accessing it. Some data might be completely public. Other data might be for your eyes only, other for families or people you have chosen to share with. And what's interesting is that this relationship has to continue for the long haul. Cloud services are something where you put the data in and you might actually keep it there for a long time. And then the final thing, which is actually where I have done most of my research, is really the problem that software has bugs. And so one actually has to deal with the fact that the software is not correct--software has bugs--and those bugs should not allow arbitrary things to happen.
Now, just looking a little bit deeper into this: if you look at the Google webpages, sort of about the company, one of the things that's very prominently displayed there is the privacy principles. And this is very nice, sort of in the spirit of what Alfred was talking about, of setting some clear objectives and then working towards them. The first one is simply that any information that's brought in is actually utilized to create value for the users. So collection of information is not just there for its own sake; it's there to bring valuable services to the users. And all of the processing and utilization of that information is in accordance with the law, the highest privacy standards, and sort of state-of-the-art practices. The user is aware of what data has been collected--so, making all of this information transparent--and they have choices. They can take the data out, they can delete it, they can move to another service provider; in particular, they can choose to have certain data not be collected, et cetera. And then finally, because the data is there, in massive datacenters and so on, a lot of effort has to be put in place to protect it. So, to the question that Ahmed asked earlier about "Is data actually lost?"--absolute emphasis has to be placed on actually protecting the user's data. So, on the next of these three points--privacy, authentication, and software security--I was struck, actually, going through making these slides, that some of the longest relationships that I've actually had in my life have been with cloud service providers, because--yeah. So, I was realizing that about 15 years ago, I opened these email accounts, and they've been accumulating information since then. And so, really, all that's protecting that information from being deleted, or from having, you know, some bad access, is somebody getting a password.
So we have to work on improving that, and we have to work on making sure that we really have strong guarantees both about not losing the data but also about who is accessing it and so on--the integrity and availability. And I might have an accident. I might not remember one of these passwords. I sometimes have difficulty remembering them when I haven't logged in for a while to some of the early cloud services that I became a member of. So I have to struggle. So, maybe one day I'll forget that password. So there has to be a means for account recovery, but that cannot then be a means for getting other people's data. So, I thought those were sort of interesting observations. But to get to the point where I actually know what I'm talking about, and where I've actually done research, and which will be most of the rest of the talk: software security. And software security really stems from the fact that computers don't obey people. Computers obey software, and you don't really know what the software is doing. You don't know how it's written. You don't know what behavior may be induced in the software for any given set of inputs. Certainly, even the best and the most benign software will have some bugs, and care has to be taken to actually constrain what can happen despite those bugs. And so even the most basic questions--"What software are we running?" "Is this a particular known-bad piece of software?" "What's going on right now?" "Is that really what should be happening?" "If something bad happens, can I get my data back?"--those are all, traditionally, extremely hard questions to answer. And so one of the things that Google has been working on is actually to try to make those questions easier to answer by various means, and I'm going to focus a lot of the rest of the talk on that. So, really, improved security, in particular for the cloud client--and I'll explain why I focus on the cloud client.
So, the datacenter, we control. We have very good people. We have lots of very good people. You'll meet many of them this afternoon. And we sort of have a handle on what's going on there. The state of affairs is relatively good, comparatively. So, these are actually some slides that I took from Brad Chan, who gave them sometime last year. The client, though, is messy, and there are a lot of sort of bad elements; the rule of law is not all that it should be, maybe. You don't really know whether your machine is part of a botnet controlled by somebody trying to send spam or for other nefarious purposes. And you don't really know where information is going; there are no real strong guarantees about privacy, et cetera. So it's worth trying to really change that. And so, going back to this notion that the way applications and software work is really changing quite rapidly: you have these locally isolated applications that are connected through the cloud. So, instead of time sharing, where you had multiple users and so on--on the Android phones, it runs Linux, so it's sort of in that legacy of multi-user operating systems, but it uses the user accounts for different applications. So it's really application accounts, not user accounts, and this is partly because applications really should have different rights: the right to access email services, and potentially update your valuable cloud information, should not be given to a particular game, et cetera. So, a lot of the work that Google has done--very impressive, at least to me when I was outside Google--is on the Chrome web browser. It's fast, it's simple, and it's secure, and it really is quite an improvement. The aim was for a fundamentally more secure web browser, and I think to a large extent that has been a success, and it has also influenced the other main browser vendors. And it's an open-source project, so everything is pretty open.
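The application-accounts idea above can be sketched very simply. This is an illustrative model, not Android's real API; the IDs, rights names, and classes are all hypothetical, but the point stands: each installed app gets its own identity and an explicit set of rights, so a game cannot exercise email permissions by default.

```python
# Hypothetical model of per-application accounts: each app has its own
# ID (analogous to a per-app Linux UID on Android) and an explicit
# grant set, checked before any sensitive action.

class App:
    def __init__(self, app_id: int, rights: set):
        self.app_id = app_id        # per-app identity, not a human user
        self.rights = set(rights)   # rights granted at install time

def can(app: App, right: str) -> bool:
    """An app may perform an action only if that right was granted to it."""
    return right in app.rights
```

Default-deny is the design choice being illustrated: absence of a grant means the action is refused, rather than inherited from a shared user account.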
Google Chrome is really the packaged instance from Google, and the basic security story of the architecture of Chrome is really to move the expectation of the user closer to reality, and actually move reality closer to the expectation of the user, by having sort of a more read-only experience of the Internet. So, when you visit a webpage, arbitrary things should not basically happen. You have this green border here around all of the activity related to going to this webpage, and all of the effects should be contained within that green-border sandbox. And there are various other technologies that are implemented in Chrome as well--phishing and malware blacklists and so on--and they've been really leading in security features. One of the things that they've been leading in is actually making software updating faster. It's interesting to note that, according to some public statistics that I saw, 10% of users of web browsers right now are running a version of a browser that really started in 2001; they might be running a recently released version, but really they're running a web browser from 2001. Chrome has a built-in auto-update functionality which downloads the latest version of the browser in the background, and then the next time you start a new browser instance, it will actually use the most up-to-date version. This is relatively seamless from the point of view of the user, and it seems to really work. So here are some various browsers, and on Google Chrome, we're seeing that more than 90% of the users are running the latest version of the program within a week, which is quite an improvement over running something that was really built in 2001, which is what 10% of users are doing right now.
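The background-update pattern just described can be sketched in a few lines. This is a simplified illustration, not Chrome's actual updater; the function name and dotted version format are assumptions. The one subtlety worth showing is that versions must be compared numerically, not as strings.

```python
# Minimal sketch of the "download in background, run newest on next
# start" pattern: the updater records fully downloaded versions, and
# each new browser start launches the highest complete one.

def latest_complete_version(installed_versions) -> str:
    """Return the highest fully-downloaded version, compared numerically.

    Dotted version strings are compared as tuples of integers, so
    "10.0.1" correctly sorts above "6.0.472" (string comparison would
    get this wrong).
    """
    def key(v: str):
        return tuple(int(part) for part in v.split("."))
    return max(installed_versions, key=key)
```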
And there was a recent announcement that they're actually moving to a six-week update cycle, so this will happen every six weeks, basically: there's a new version of the browser, and then within a week you should see a curve like this, 90% of users. Now, of course, this will help with getting fixes to bugs out there, but that's not all you have to do; you also have to deal with the bugs that you haven't fixed. And there's a wide range of mitigations and technologies you can apply there: you can randomize the address space layout, you can use various stack cookies, and so on. So there's sort of a general class of exploit mitigations. There are various things that are actually applied there, and a new one that I learned of only recently is, for instance, not allowing instructions that perform direct syscalls--system calls--from the regions of code where just-in-time compilers, like the JavaScript just-in-time compilers, put their output machine code. This raises the bar, but attacks will really still be possible. So even stronger guarantees are warranted, and that's where one of the really impressive projects, I think, that Google has spearheaded comes in: taking some ideas that actually started in the late '60s, but were really revisited and made concrete in the mid-'90s as software fault isolation, and making those ideas really a reality. And there's this Native Client project, which is again an open-source project, which aims to make machine code--the code that really runs on your computer--as safe as JavaScript, or running a new program as safe as visiting a webpage should be. There's an open-source implementation for, now, three architectures, and what's important and interesting is that this really allows for arbitrary, low-level, even optimized code that uses SIMD instructions and so on, to get really good performance--so actually making more efficient use of the world's computing infrastructure.
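The "no direct syscalls from JIT output" mitigation mentioned above amounts to refusing to make a code region executable if it contains system-call instruction encodings. Here is an illustrative sketch, not any real JIT's validator: the opcode byte values are the genuine x86 encodings, but the scan itself is naive (it ignores instruction boundaries, which a real validator must decode).

```python
# Illustrative sketch: scan a buffer of x86 machine code for direct
# system-call instruction encodings before marking it executable.

FORBIDDEN = [
    bytes([0x0F, 0x05]),  # syscall (x86-64)
    bytes([0xCD, 0x80]),  # int 0x80 (legacy 32-bit Linux syscall)
]

def contains_direct_syscall(code: bytes) -> bool:
    """Return True if any forbidden byte sequence appears in the buffer."""
    return any(code.find(pattern) != -1 for pattern in FORBIDDEN)
```

A real validator decodes instructions rather than pattern-matching raw bytes, since the same byte values can appear inside longer instructions or data; this sketch only conveys the policy.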
In particular, the guarantees that it gives come from being able to see exactly what the code is. That was one of those early questions that I said are surprisingly hard to answer: what code am I running? Native Client actually allows seeing exactly what machine instructions the hardware will actually be utilizing. And by doing that, you can actually circumscribe behavior: if there is no integer-division instruction, you know that no such operation will actually happen, or, more to the point, if there's no debugging system call, you know that no debugging system call will actually be executed at runtime. And this comes at some performance cost, but that performance cost is actually much less than the gains you get from going to low-level code. So, if you tried to implement this in JavaScript or Java, you would actually have more performance overhead. The aim here is for high assurance, so there are both academic papers--Oakland, sort of the IEEE Security and Privacy, and USENIX Security have had papers on this in the last couple of years--and there's also been a public bug competition, et cetera. So, I won't really go into the details, but suffice it to say that there are slight changes to what machine code actually executes on the system. This is where the overhead comes from, to some extent, and in this case it's really just making sure that when you're jumping around in the bytes that actually determine behavior, you have a certain alignment property: you know you're actually going to a 32-byte-aligned block. And those are the blocks that can then be stitched together in arbitrary sequence; this is a relatively weak form of something called control-flow integrity. But looking individually at each block, you can actually figure out what are the possible machine-code instructions that might execute. So why do this?
Well, web applications--we want lots and lots of apps, and users want lots and lots of apps. And at Google I/O, there was an announcement that there's actually going to be a Chrome Web Store where you will be able to get lots and lots of apps, and those apps might actually come as very rich Native Client applications, and here are some examples of applications that exist already. I'm not particularly thrilled about this LEGO character--it's a Star Wars LEGO character; I wish there had been a sort of prettier one that I could find in a screenshot. But here are three example applications that actually run at relatively native performance, with 3D graphics, in your web browser, completely safely, as if this were an image on a webpage--or at least that's the goal. The goal is a high-assurance guarantee of that. And the anatomy of the internals of such an application is basically as follows: you have some application, in this case a calculator. Behind the scenes, it's actually composed of JavaScript and a NaCl module, which is contained by the Native Client sandbox, and the NaCl module is just purely native machine code running directly on the hardware. Now, the Native Client sandbox and this module are actually sitting within a new plug-in interface that has been developed by Google, called Pepper. So there's NaCl, and Pepper sort of makes sense if you think about it in certain ways. The JavaScript component, then, would dictate high-level user interactions, mouse input, and so on. The Pepper plug-in interface, which is a new variant of the NPAPI Netscape plug-in interface, is then hosting the rest of the program, and, in fact, most of the work might happen there. All of that work happens within the sandbox, and the sandbox is really driving the execution of these bundles of aligned 32-byte sequences of machine code.
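The 32-byte-bundle rule underlying the sandbox can be stated as a tiny check. This is a simplified sketch of the idea, not Native Client's actual validator: an indirect jump target is acceptable only if it lies inside the sandboxed code region and starts a bundle, which is what lets a verifier reason about each bundle independently.

```python
# Simplified sketch of the NaCl-style alignment rule: indirect control
# transfers may only target 32-byte-aligned bundle starts inside the
# sandboxed code region.

BUNDLE_SIZE = 32

def is_valid_jump_target(addr: int, code_start: int, code_size: int) -> bool:
    """True only if addr is inside the code region and begins a bundle."""
    in_region = code_start <= addr < code_start + code_size
    aligned = (addr - code_start) % BUNDLE_SIZE == 0
    return in_region and aligned
```

Because every reachable target satisfies this predicate, the validator can disassemble each 32-byte bundle in isolation and still enumerate every instruction that could ever execute.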
So with this, you can actually imagine a platform, a whole platform, where you have lots and lots of rich applications going all the way down to the lowest layers--sort of extending this notion of trying to have low-level security--that is a very rich platform and offers good performance. Now, one of the things that you would want to know there is, again, what code am I running? And once you get into the operating-system realm of things, you actually boot computers, and you would like to know: well, what have I just booted? And so Chrome OS, which is sort of the extension of the Chrome effort into hardware, is actually going to be an implementation that, for that particular type of open consumer device, is the first one that will allow verified boot, where the very early firmware that's on the device will actually verify the next stage of the operating system, and then that will verify the final stages and the applications, et cetera. And importantly, this will allow people to have guarantees that even if they lose their device, or somebody installs the wrong operating system or something, their data won't fall into the wrong hands. The data will be accessible only when they're running the real operating system, without any rootkits, et cetera. I'm going to just speed up a little bit. I'm not going to talk about the details of this, but I'm going to go back up to the user. So all of this low-level security is not going to be very important unless you actually have secure means of communicating with the user, and that really means giving the user abstractions and ways to think about things which come somewhat naturally but also fit with what actually happens under the hood.
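The verified-boot chain described above, where each stage vouches for the next, can be sketched as chained hash checks. This is a hedged simplification: real firmware verifies signatures against keys held in read-only storage, but plain SHA-256 hashes keep the example self-contained, and all names here are hypothetical.

```python
# Hedged sketch of a verified-boot-style chain of trust: stage i runs
# only if its bytes hash to the value the previous stage expects.
import hashlib

def verify_chain(stages, expected_hashes) -> bool:
    """Verify each boot stage against its expected SHA-256 digest.

    stages: list of byte blobs (firmware, kernel, ...), in boot order.
    expected_hashes: hex digests baked into the previous stage.
    """
    if len(stages) != len(expected_hashes):
        return False
    for blob, expected in zip(stages, expected_hashes):
        if hashlib.sha256(blob).hexdigest() != expected:
            return False  # halt the boot rather than run tampered code
    return True
```

The security of the chain rests entirely on the first link: the earliest firmware and its expected values must be unmodifiable, or the whole chain can be forged.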
You have to sort of remove the impedance mismatch of the various security dialogues. Asking questions like "Do you want to allow this strange thing you don't really know what it does to do this strange thing that you also don't quite know what it does?" is the wrong way of doing things. So you have to have abstractions and mechanisms which are closely aligned with the user's mental models, and with how things actually work. And so one of the things that we have started in Google Research since I joined is actually an effort to look into those types of secure user interfaces that provide stronger security guarantees and also high usability, using capabilities as sort of the way to think about things. Capabilities have some promising sort of intuitiveness properties, and they are also principled abstractions that can be distributed--and all of these cloud computing applications are distributed--and they can go all the way to the lowest levels. So with that, I'm just going to finish with a couple of slides on what the cloud opportunities are. I'm afraid I went a little bit over time, maybe, so there's not a lot of time for questions, but we'll see. So the real opportunity here that I think is changing with cloud computing is this new application model. This is very different software from multi-user, shared-computer software, where you had a single server with multiple users connected, trying to protect against each other. Typically, in the days of the design of that software, each of those users was also a programmer. If the machine in the center actually went down, or if you actually had to terminate some processing there, that was a very big deal for potentially all of the users. In particular, if there was a shared service, like the file system, that you thought might actually be having trouble, terminating it would cause everybody to suffer. So we know about planned outages and so on.
The new model of software is also different from the model of client-server computing in the '80s, where you had relatively thick clients and, again, shared servers, and if a thick client serving some purpose actually went away, people were not all that happy. It might be difficult to reconnect back to the server, re-establish the session, et cetera. You might even lose state on the server. So the new application model is really one of logical applications that exist sort of as an emergent thing in the cloud. You have millions of applications, you have billions of users, each user has lots of devices and is running a number of applications at any given time on those devices, but none of that really matters; the applications are logical. And you have location and device transparency. So thinking of these as end-to-end distributed systems actually seems to be the right way to make progress, at least from the security point of view, as far as I can tell. And important there is the statelessness of the devices and the malleability of the software. So one can really determine what software is running on any given one of these machines. In the back-end, in the server, each of the servers in the replicated instances of this software might actually be slightly different, might be specialized to a purpose, might have some randomized instrumentation or experimentation applied to it. Again, fail-stop enforcement is actually an option across all of these machines that provide the logical application abstractions. And, just as a final point, one thing one could possibly do is actually to leverage what my colleagues here at Google have written a paper about--Peter Norvig is one of the authors of "The Unreasonable Effectiveness of Data."
So when you simply have a preponderance of evidence that something is a certain way, you can actually take that as the ground truth and say, conservatively, that will be good enough to allow all good behavior. And the example that might be useful to think about there is the Solitaire game on Windows, which was there already in Windows 1.0 and has moved forward. Now, the Solitaire game, throughout all these years and even today, can do absolutely anything you can do. It can do any networking, it can debug other applications, it can definitely grovel through your file system, and that's kind of wrong. And it certainly doesn't fit with this new model of thinking about applications and assigning rights to applications. But how do we actually figure out what the rights are? Well, throughout those 25 years of the lifetime of the Solitaire game, it actually hasn't been debugging other applications. It hasn't been groveling through file systems and sending data over the network. So that type of preponderance of evidence can certainly be applied to determine those types of models. But of course there are lots of other things, like: which parts of the program are slow and buggy? So it's not just for security. Which parts are dead code? Actually eliminating dead code is a very important security goal. And the nice thing here is that we can sample both the good executions and the bad executions and compare and contrast. So, anyway, with that, I'll take questions. Yes? >> Yes, how do you relate to security being reactive and so on? >> ERLINGSSON: So, that's a fine question. A lot of security is reactive and pattern-based and so on. Everything that we, sort of, talked about in the middle there--in particular Native Client and Chrome, and so on--tries to be proactive. It tries to actually build defenses that innately give strong guarantees. Now you may still have zero-day attacks, but you will have shifted where they might occur.
If you really have strong guarantees, then you will know that they're not zero-day attacks that are violating those particular guarantees. So that's one way of answering your question, is--try to actually have strong guarantees in some areas and at the level of the machine code, simply knowing that execution precedes via a sequence of 32 byte bundles, it's a very strong guarantee but it's a very weak guarantee at the same time. But it turns out that just that simple guarantee allows you to make other guarantees as a consequence. And so starting from, sort of, bottom up days is one approach and, of course, there will be zero-day attacks because there will be security vulnerabilities at various levels of the system. And so the--you know, I'm from the Nordic countries, so there's the joke of the Norwegian virus which is an ASCII text message, which asks you politely to forward to all of your friends and then delete all of your data. And so, now--so, yeah, it's just within the Scandinavian countries we make fun like that. But so you will have a--so the Norwegians are honest. They do--you know if you ask them politely they might do it. So the--but it sort of goes to show that you will have problems at various levels of the system. That's not a phishing attack, that's--but you actually have to take a multi-tiered approach. So, any other--yes? >> What about Multics ring-layered protection models? >> ERLINGSSON: A sort of ring-based protection model. Yes, in fact, in various--oh sorry, so the question was what about Multics ring-layered protection models. So they had seven rings of protection, maybe based on Dante or--I'm not quite sure. So where--at the core there was something very small and very secure that you've had strong guarantees from. And then, as you moved further out you had fewer guarantees and sort of more permissiveness, and I actually think that, sort of, pervasively, yes, we're going to see more and more of that. 
And so back to the, sort of, notion of this verified boot. As you go up the stack, more to its application, you're actually allowing more and more things to happen. And now Multics ring model was not really a huge success in hardware and so on, so there had been implementations like the 386, but typically I only use one or two rings. But actually I think we're seeing more and more layering of software and this goes back to the talk previously about layered storage services that each level of those layers you need guarantees, because you're allowing more permissive behavior above it, so. Yeah? >> You're talking about the [INDISTINCT]. >> ERLINGSSON: So I think, eventually, if you look at how a lot of those things are used, you actually have applications that look very much like what I was describing, running on top of those backend services. So--but I guess--there's a number of things you actually need to do when you're running backend services. In general, and there's some new things that are exposed when you tried to resell like Amazon does, those backend services, and that's sort of in the NIST definition of things. But there are not a whole lot of new security challenges there that I could identify. So I'm--I guess that, sort of, has to suffice. I think--yeah, okay. Yes? >> [INDISTINCT] >> ERLINGSSON: Right. >> [INDISTINCT] >> ERLINGSSON: So maybe back to the sort of, you know, what's the real change here, right? Having infrastructure that you can run on is a change, but its somewhat--it's a new form of the centralized multi-user computer that we actually have been using since the 70's or the Multics security model. You have a centralized service, you have lots of parties using that server with maybe, you know, different interests and you have to do this securely. So there has been a lot of work in that field. 
I think this new application model and the focus on the user actually is a bigger change, because it fundamentally redefines what is software, how does it work? In a way that hasn't happened before. So I think, we actually should go and have lunch, unless there are more questions.

Introduction

The veins in the sclera—the white part of the eye—can be imaged when a person glances to either side, providing four regions of patterns: one on each side of each eye. Verification builds digital templates from these patterns; the templates are encoded with mathematical and statistical algorithms that allow the system to confirm the identity of the enrolled user and reject anyone else.[2] Advocates of eye vein verification note that one of the technology's strengths is the stability of the pattern of eye blood vessels; the patterns do not change with age, alcohol consumption, allergies, or redness. Eye veins are clear enough that they can be reliably imaged by the cameras on most smartphones.[3] The technology works through contact lenses and glasses, though not through sunglasses. At least one version of eye vein detection uses infrared illumination as part of the imaging, allowing imaging even in low-light conditions.[4]
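The enroll-then-verify flow described above can be sketched in a few lines. This is an illustrative simplification, not EyeVerify's algorithm: it assumes the vein pattern has already been reduced to a numeric feature vector (real systems would first segment the vessels in the four sclera regions), and it uses cosine similarity with a fixed threshold as the matching rule.

```python
import numpy as np

def enroll_template(feature_vector):
    """Normalize an eye-vein feature vector into a stored template.

    The features are assumed to come from vessel segmentation of the
    four sclera regions; here the vector is simply taken as given.
    """
    v = np.asarray(feature_vector, dtype=float)
    return v / np.linalg.norm(v)

def verify(template, candidate, threshold=0.95):
    """Accept the candidate only if its cosine similarity to the
    enrolled template meets the threshold."""
    c = np.asarray(candidate, dtype=float)
    c = c / np.linalg.norm(c)
    score = float(np.dot(template, c))
    return score >= threshold, score

# Enrolled user plus two probes: a genuine reading (the enrolled
# pattern with slight sensor noise) and an impostor's unrelated pattern.
rng = np.random.default_rng(0)
enrolled = rng.random(128)
template = enroll_template(enrolled)

genuine = enrolled + rng.normal(0, 0.01, 128)
impostor = rng.random(128)

ok_genuine, score_genuine = verify(template, genuine)
ok_impostor, score_impostor = verify(template, impostor)
```

The threshold trades off false accepts against false rejects: raising it makes impostor matches less likely but increases the chance a noisy genuine reading is turned away.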

History

Dr. Reza Derakhshani at the University of Missouri–Kansas City developed the concept of using the veins in the whites of the eyes for identification. He holds several patents on the technology, including a 2008 patent covering the use of the blood vessels visible in the whites of the eye as a unique identifier.

More recent research has explored using vein patterns in both the iris and the sclera for recognition.[5]

Uses

Eye vein verification, like other methods of biometric authentication, can be used in a range of security situations, including mobile banking, government security, and healthcare environments.[1][6] EyeVerify, a Kansas City, Kansas, company, markets eye vein verification with a system called Eyeprint.[7] In 2012, EyeVerify licensed the technology developed and patented by Derakhshani, who now serves as the company's chief science officer.[8]

Advantages

  • Eye vein patterns are unique to each person[9]
  • Patterns do not change over time and remain readable even when the eye is red[9]
  • Works with contacts and glasses
  • Resistant to false matches

Disadvantages

  • Phone must be held close to face
  • Not supported on devices without cameras or on older smartphones

References

  1. ^ a b Ungerleider, Neal (22 November 2013). "Your Next Password Might Be Your Eye". Fast Company. Retrieved 20 February 2014.
  2. ^ Stacy, Michael (22 February 2012). "Kansas City startup EyeVerify sees opportunity in the whites of your eyes". Silicon Prairie News. Retrieved 20 February 2014.
  3. ^ Miller, Michael (5 March 2014). "Beyond Passwords: Log In With Your Voice, Your Eyes, or Your Face". PC Mag. Retrieved 19 March 2014.
  4. ^ Davies, Chris (24 February 2013). "EyeVerify eye-vein biometrics hands-on". PC Mag. Retrieved 19 March 2014.
  5. ^ Zhou, Zhi; Du, Eliza; Thomas, N. Luke; Delp, Edward J. (2013). "A comprehensive multimodal eye recognition". Signal, Image and Video Processing. Springer Science+Business Media New York, NY, USA. 7 (4): 619. doi:10.1007/s11760-013-0468-8. S2CID 255380965.
  6. ^ Blyskal, Jeff (2013-05-23). "CR Money Minute: Better smart phone banking security?". Consumer Reports News. Retrieved 2014-02-21.
  7. ^ "EyeVerify: Mobile Authentication Through Eye Vein Biometrics". Retrieved 2014-02-21.
  8. ^ Silicon Prairie News Team (2012-02-22). "Kansas City startup EyeVerify sees opportunity in the whites of your eyes". Silicon Prairie News. Retrieved 2023-08-15.
  9. ^ a b Derakhshani, R.; Ross, A. (2007). "A Texture-Based Neural Network Classifier for Biometric Identification using Ocular Surface Vasculature". 2007 International Joint Conference on Neural Networks. IJCNN, Orlando, FL, USA. pp. 2982–2987. doi:10.1109/IJCNN.2007.4371435. ISBN 978-1-4244-1379-9. S2CID 1042317.
This page was last edited on 15 August 2023, at 08:48