Matthew D. Green

From Wikipedia, the free encyclopedia

Matthew Daniel Green
Born: 1976
Citizenship: American
Alma mater: Johns Hopkins University; Oberlin College
Known for: Zerocoin, Zerocash, TrueCrypt Audit, Sealance
Scientific career
Fields: Computer Science; Cryptography
Institutions: Johns Hopkins University

Matthew Daniel Green (born 1976) is an American cryptographer and security technologist. Green is an Associate Professor of Computer Science at the Johns Hopkins Information Security Institute. He specializes in applied cryptography, privacy-enhanced information storage systems, anonymous cryptocurrencies, elliptic curve cryptosystems, and satellite television piracy. He is a member of the teams that developed the Zerocoin anonymous cryptocurrency[1] and Zerocash.[2] He has also been influential in the development of the Zcash system. He has been involved in the groups that exposed vulnerabilities in RSA BSAFE,[3] Speedpass, and E-ZPass.[4] Green lives in Baltimore, Maryland, with his wife, Melissa, two children, and two miniature dachshunds.

YouTube Encyclopedic

  • Cryptography is a systems problem (or) 'Should we deploy TLS' (6,125 views)
  • DEF CON 22 - Kenneth White and Matthew Green - The Open Crypto Audit Project (3,608 views)
  • Matthew Hodgson - P2P Matrix: Where we're going we don't need servers! (3,204 views)

Transcription

[ Silence ] >> It is my great pleasure to introduce Mr. Matthew Green who is going to talk about cryptography as a system's problem. Now, we've been looking in the course, we've been looking at the lower level of things starting from libraries and going down into the kernel innards but all of that exists to support applications, the so-called Layer 7 where bring-three [phonetic] as we know it. And so this is going to be a look at-- about what does-- all of those innards are really for. >> Okay, that sounds-- that sounds like an easy description. So I was not even completely aware that was an OS class. So I'm going to confess to you guys that this is probably going to have the most tenuous connection to OS of any lecture that you're going to hear in this course but it might be very interesting, anyway, and it might be helpful at least in terms of OS. So, before I get started with, in terms of the actual presentation, let me introduce myself. My name is Matt Green. I teach at Johns Hopkins. I'm a research professor there. I also, sometimes, hang out at the University of Maryland. My job is a little hard to describe because a few years ago, what I did is I started that consulting firm called Independent Security Evaluators and I spent the next few years kind of both being a grad student and I also work in a consulting firm looking at real cryptographic systems. So I got to see a lot of kind of really scary stuff, and good stuff too, but mostly scary stuff, which is neat because I was actually looking at the real world. Now, I'm a research professor, I do research most of the time. I still do some of the work. I asked, whether I should come here and give you guys a talk about research or give you a talk about real stuff, and real stuff won out. So we're going to talk today about kind of this system's aspects of crypto and, in particular, the example I'm going to give you guys is the SSL/TLS protocol which, I think, is probably the most common widely used cryptographic protocol out there. It's what everybody thinks of when they think of secure communications. And that's it. So the last thing I'm going to add is that, as there were sounds of people here, I was actually born in Hannover, so this is kind of like a kind of cool thing for me. I spent a lot of my junior high sneaking into the computer building here at Dartmouth, thinking that if I just looked really serious, nobody would recognize that I wasn't a college student. And so I would sneak in. I would use like that Tektronix terminals and stuff, it was a sad time and get kicked out. So it's kind of neat for me to actually be in the Dartmouth college computer building and not feel like I need to look over my shoulder for somebody there [inaudible]. So, anyway. So, let me start by motivating this presentation, why are we giving this presentation? If you-- before I can get started, how many of you guys have taken a crypto class? Okay, good. That's a good number, that's kind of what I was expecting. If you are not somebody who's really an expert in crypto, probably, you've seen quotes like this around. Probably your exposure to crypto, maybe it's more than this but at least you've probably been exposed to some statements along this line, which is that for people who know something about computer security and system design, crypto is kind of a problem that we know how to solve, software, total disaster, secure everything else, total mess. But cryptography, that's the one area that really shines because we know how to do that right. 
Sometimes, people put it like this, crypto was the strongest link in the chain but everything else is terrible. Let's see. This is, I don't know who said this, I was trying to find Bruce Schneier saying something bad but I couldn't get him say something stupid, so instead, what I found is this, where people concentrate too much on the cryptography of the system, which is the equivalent of the strength, I mean, I hate this strongest link in the chain crap, lots of people say this. So the basic point of this is yes, if you're not a cryptographer, this might be your view of cryptography but everything that these people are saying is wrong because crypto is just as broken as everything else. The difference is that most people don't know enough cryptography to realize that. So this presentation is going to be kind of our introduction to all the bad things that you should know about in your-- that you use in your day to day existence. So when it comes to those kind of solved problems, this is really what things look like. We do have one solved problem, cryptographic algorithms for the most part, for the minute, right now these are actually, I would say that this is a problem that we don't absolutely need to be focusing our time on. But that's it. Everything else, and this is kind of this arrow indicates the confidence that we have in these arrows, in these areas but kind of in the inverse, so it will get worse as we go down. Protocol design, not so great. But then going beyond protocol design implementation, terrible. We are just bad at implementing things. Library API design, you think that would be simple, right? It's encryption, decrypt, encrypt, right? Not very good at this stuff. Now, I'm not even going to say anything about this because, clearly, we don't know how to use this stuff, right, or else we wouldn't be hearing all these stories and the problem is that probably 80 percent of the people on the internet don't even use crypto which is just the whole problem that's outside of this presentation all together. When I was looking at these slides this morning I felt like maybe this did not give a proper view of how bad things were, so, I just illustrated a little bit with these guys but that's kind of the situation. I just-- to further illustration of kind of how bad things are right now. Looks like I didn't get all of the headlines in here, but just a couple of the headlines I managed to copy over here, lots and lots of stuff doesn't work that's why-- here you go, sorry, these are the slides I wanted. Lots and lots of attacks that had happened over the last few years, many of them implementation-related attacks, some of them protocol-related attacks, and unfortunately, this doesn't seem to be kind of an anomaly, the pace of these things is peaking up and it's becoming more serious, as more people actually start to rely on crypto. Anybody read about the flame attack of last summer? This is a-- we'll come back to that but this is a-- was a very, very serious attack on the certificate authority. Okay. We are not going to talk about all crypto today, that's just too broad a topic. We're going to talk about one specific protocol which is the secure socket layer, AKA transport layer security protocol sweep. We care about TLS because I don't think it's really wrong to say that TLS is the most important security protocol in the world, there's things like IPsec out there, there's other kinds of crypto but TLS is how most-- the least transport security gets down these days. 
Most people look at this and they kind of see a vision of how TLS works that's exactly like this diagram. What you do is you're an application, you want to talk to someone, you say, "Okay, I wouldn't normally just make a socket connection to this person but that's unsecured so I'll just throw TLS on it and now it's a secure socket connection, so it's like a tunnel and no matter what happens out here in this cloud, I'm safe. "This is not exactly what TLS really looks like, but this is how people want to view it. The other thing I should mention about TLS is not only is it widely used but it's becoming much more widely used. You've probably seen things like this, Facebook is moving to 100 percent TLS, Google, Gmail has already moved to 100 percent SSL. Lots of people are pushing, pushing, pushing, to get everybody to do this. Yes? >> Is there a reason why they didn't deploy them before? >> It's hard. It turns out to be very hard to get these things deployed. It takes more server hardware to do the decryption side. It turns out to be a nightmare when you have things like applications, like a distributed site, we have like some servers running applications and all these stuff. Getting SSL running on everything turns out to be a big mess. >> Is there a factor by which it's supposed to [inaudible]. >> It does and, actually, the biggest problem is latency. So the SSL handshake is a two-round communication. So normally, if I want to talk to you, I do talk, you know, a TCP handshake is one round. SSL added to the TCP handshake plus the SSL handshake, which, when you're doing it over like an intercontinental cable or something like that, it can have like a significant fraction of a second. So there's computation, there's resources and there's latency. A lot of people don't like that. But really the big problem is just dealing with all these other headaches that come up like your 20 servers need the same key or certificates, and so it becomes a big headache. So that kind of stuff is a problem. But the bigger companies are slowly getting over this and they seem to be kind of moving towards the universal TLS-SSL framework. The importance of this is that the people who are pushing for SSL everywhere are not nuts, they're not paranoid. We're entering an era where actually having a secure encryption is no longer a kind of an academic thing. Just this September 2011, back in 2010-2011, there was a big high profile hack of a certificate authority called DigiNotar, where somebody stole certificates or keys that is, and were able to make a certificate for Google which they then used basically for everybody or a huge number of people in the Iranian network-- internet or connection to the internet to do a man-in-the-middle-attack against Gmail where they were able to steal people's Gmail credentials. This kind of thing is actually becoming more common not just certificate authority attacks but all kinds of attacks against these systems because people want to-- now that people are finally using crypto, people are finally attacking crypto. So, in this presentation, we're going to look at kind of a whole bunch of different problems with the SSL-TLS protocol, starting not just with the way people code it up but starting with design, starting with even some like the formal analysis-- some kind of what's lacking there down to the implementation and down to the way that people use it. Before we get into all those details, I'll start with a very quick history of the protocol. 
So SSL was actually invented at a company that no longer exists, but you guys should know of Netscape Inc, way back in I think, 1995, that's kind of a rough date. It was-- they have this proprietary server product and a proprietary browser at the time which became Firefox. And they wanted a way to communicate securely so that people could do, you know, 20-dollar credit transactions over the internet. So they came up with this thing, they called it Secure Sockets Layer, they never released the first version. A year later in 1996 or thereabout, they release the actual first public release which was SSLv2. And that had some very serious bugs. And the only reason I've actually included this picture here, is that for those of us who grew up in the 90s, it doesn't seem like that long ago. But when you look at Will Smith. [laughter] You can see, the things were different then and in terms of a design of photographic protocols, our understanding of how to design these things was very, very different and very primitive compared to one of these today, yes? >> Was the SSL one or two, they add a bug where they only seeded by the-- to the second, so there are only like 60 or 20 seeds? >> There's-- I didn't even include that method in this presentation but there was a huge [inaudible] debug, what if I include that in this presentation? When I get to the implementation or stop me and say, what about the-- >> I remember, one of the first releases of the Netscape SSL, it's seeded crypto that seeded round number with the seconds-lock [phonetic] of their [inaudible]. >> -- That one I heard about, but I believe you, that's-- that's, yeah, that's not good. [laughter] Yeah, I mean it was bad. I mean, now I mean if the implementation-- you think, okay people make coding mistakes, but you can fix coding mistakes. But they put a lot of bugs into the protocol itself, and we're still fixing those today. So, yeah, that's-- I mean, it was not an open protocol at that time and then eventually it became open, they got slowly better. But it's still getting better today which is bad, okay? So they have serious bugs including apparently implementation bugs that I didn't even include here, which is fine, okay. So that got fixed, we have a new release, I don't have a date here. SSL version 3 comes along and says we're going to fix all these problems with SSLv2. But it turns out that this has some slightly less serious bugs. However, and I did find something by Bruce Schneier. He says with Kelsey, he says, we conclude that while there are still a few technical vulnerabilities, technical wrinkles to iron up, on a hole, SSL version 3.0 is a valuable contribution towards practical communication security. So this is SSL version 3. Okay. He was wrong, TLS version 1.0 comes out. Now they changed the name of the protocol to TLS. It's confusing but that's all there is to it. Instead of calling it SSL version 4, they decided to call it TLS version 1. It turns out that TLS version 1 have problems that people, even theory people haven't discovered yet. One of which is that the initialization vectors in encryption are not properly generated. I'll talk about what that means maybe later. But the upshot is that, you can, using a little bit of JavaScript and some simple arithmetic using X or you can actually decrypt cookies that are sent over secure connections. The attack that discovered this was by a guy named Greg Bard at the University of Maryland, he published a paper on this. 
I mean, this have been known since maybe 2001, but in 2006, he published a paper that said, here is a problem with SSL, you guys should fix it. He was told by his adviser that he should probably work on more interesting publishable work, first of all. And then, the paper was completely ignored by everybody. And so and I'm looking at the date of this till 2011 in September. When a couple of guys actually went out and found this paper and, you know, implemented it, and made it better in the process. But, they actually did this and they called it the BEAST attack which stands for Browser Exploit Against SSL, TLS this-- there's acronym problem, so they do this. They called it this and they did a cool video of doing it and it's really neat, you can actually take an SSL connection and watch it by-by-by pull out this cookie out of this AES encrypted connection which is very neat. So it turns out that these vulnerabilities were already baked into this back even in TLS 1.0. Later the next year, the same guys came up with a new attack called CRIME which stands for Compressed Record Information-Leak Made Easy. And this paper took another paper from 2002 and actually implemented it and obviously did a lot of cool implementations up to make it work. And they were able to do the same thing using TLS compression. So all of these flaws were in a speck and nobody really thought about it. >> What happens to these authors after-- - >> They were killed, no, I don't know. These guys, they're still out there, they're working on a new attack. I hear-- is it serious that they really have something called terror? >> There was a T of it, it might be called the timing. >> Were they joking or was that for real? [Inaudible Remark] >> My God. Oh, man, we were hoping that they didn't do a rapid password exploit or something because who knows who's going to come with that. >> Yeah, it's really going to try to escalate those stupid acronyms. >> They are, they did it longer, and they're going to get worse. So anyway, they have a new attack coming out, who knows what this is, but it could be timing-related. Anyway, so these things exist in there and there. Let me give you again, this is the, what you want to think of TLS and SSL is looking like, you know, this nice tunnel, where you don't have to think about the details, you just turn it on and it works and everything you send is encrypted and protected. The real SSL protocol and I'm still giving you a high level view but maybe a more accurate one. It looks like this, it starts with some kind of negotiation. Where you and I say, hey, we support these ciphers, we both do compression, let's talk about it, let's agree on something that we can both support, followed by key exchange protocol where we agree on a secret key using public key crypto. Followed by the actual, most important part of the protocol which is to actually encrypt data in both directions. Securely, followed by all kinds of crazy stuff that can happen when we re-negotiate at the end of it. So I'm going to take a few minutes. This is going to be the crypto-crypto part of the talk. And we're actually going to involve you guys, doing an interactive exercise. We're going to see how that goes. So this is the part where we're going to talk about crypto, I'm going to talk about these protocol design issues with TLS. The best description of the problems in SSL and TLS, the ones that are still kind of haunting us today come from Eric Rescorla. 
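The phases just outlined correspond to the message flow of a classic TLS 1.2 handshake with RSA key exchange, the variant discussed later in the talk. The listing below is a descriptive Python sketch of that flow, not a working implementation:

    # Descriptive sketch of a TLS 1.2 handshake with RSA key exchange.
    HANDSHAKE = [
        # 1. Negotiation: agree on a protocol version and cipher suite
        ("client -> server", "ClientHello (versions, cipher suites, client random)"),
        ("server -> client", "ServerHello (chosen suite, server random) + Certificate + ServerHelloDone"),
        # 2. Key exchange: client encrypts a premaster secret to the server's RSA key
        ("client -> server", "ClientKeyExchange (RSA-encrypted premaster secret)"),
        # 3. Switch to the negotiated keys and confirm the handshake wasn't tampered with
        ("client -> server", "ChangeCipherSpec + Finished"),
        ("server -> client", "ChangeCipherSpec + Finished"),
        # 4. Record protocol: encrypted application data in both directions
        ("both ways", "application data under the agreed keys"),
    ]
    for direction, message in HANDSHAKE:
        print(f"{direction:17} {message}")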
Eric Rescorla is, I think, one of the authors of some of the versions of SSL and TLS. He describes the problem here as being TLS's use of "pre-historic cryptography," meaning that 1995-era stuff. It uses a lot of bad stuff. One of the things that it does is it uses AES in something called CBC mode, which we can talk about later, using a MAC. A MAC is a message authentication code that prevents tampering with the message. It uses it in a particular construction that we call MAC-then-Encrypt. What I mean by that is we have this nice thing that, oops, we have a message which I'll call M. And we have this nice thing called a MAC. And the only thing you need to know about that is it protects the message. If anybody tries to tamper with this on the way over the wire, we'll be able to detect this at the other side by checking the MAC. If you don't know any crypto, think of it like a checksum, okay? But it's a cryptographically secured one. Then what happens is we take this whole mess and we encrypt it. This is called MAC-then-Encrypt. I'll come to why this is a problem in just a second, but this is what happens inside of a standard CBC mode of TLS. It also uses a mode of RSA encryption with a padding scheme called RSA-PKCS#1 v1.5. And I want this to go away simply because I don't want to ever have to say RSA-PKCS#1 v1.5 again. But this turns out to be extremely broken, broken to the point that you can actually completely decrypt an RSA ciphertext by sending chosen, manipulated messages to a server. You can decrypt an RSA ciphertext in anywhere from a few minutes to a few hours by doing that, so it's very bad. It uses RC4. I'm going to come back to why RC4 is bad, we'll talk about that in a second. But the biggest reason that it uses all of this old, obsolete crypto is that the people who designed these protocols wanted them to be backwards compatible. So they made a bad version in 1995. And here we are in 2013, stuck with their bad versions, because they wanted backwards compatibility and they were not willing to break compatibility because they thought it would scare people away. So this is kind of the problem. All right. So let's talk about one specific example. This is a very easy one to understand. Okay, so TLS, like I just described, MACs the record first: it puts the MAC on the end. And then it encrypts it. Now, the thing that I didn't show you up here is that the encryption that we use, which is CBC mode encryption with AES for example, has a requirement: we can only encrypt messages that are a multiple of the block size, which for AES is 16 bytes. So if I encrypt a message where this whole thing comes to a total of 23 bytes, what am I going to do? [Inaudible Remark] Good. Simple answer. That was a warm-up question, that was not a tough one, okay? So we're going to pad it, we're going to stick some bytes on the end until we get it up to 32 bytes. Now we have a good message and now we can encrypt it. And obviously, when we decrypt the message, what are we going to have to do? First, we'll strip off the encryption, and then we'll remove the pad, which means we need a padding format that we can recognize, so that we can always, you know, unambiguously take it off. That's why. Let me actually draw in what the padding format looks like here, just for fun. The padding that they designed for TLS looks like this. It ends with a length byte. So let's say 23 and 32, so that's a 9-byte pad. So we would start by putting an 09 byte.
I always get this wrong 'cause they do something a little weird. Actually, you know what they do: they have a length byte that doesn't include itself. So they would put an 08 at the end, and then they would put 8 bytes before it. You're going to hate me for doing this. I'm just putting a bar so you can see where the separation between the bytes is here. Okay, just pretend that there are bytes there 'cause I'm too lazy to draw them all. And this would be, in total, a 9-byte pad, and that would get you up to 32 bytes. So that's the padding. Now, the problem here is padding oracles. Somebody very clever back in 2002, a guy named Serge Vaudenay, discovered that you can actually use the structure of the padding to do something very useful. So, what happens when people decrypt is they take off the encryption. The first thing that they always do is they look at the padding. And in this case, this padding happens to have a very specific format, which is that the person doing the decryption first looks at the length. Then he should expect to see 8 bytes before it, and because of the way the padding is put together, all 8 of these bytes should have the value 8. So he checks for that, simple check. And if that works, he strips it off and he goes on with the rest of it, and then he checks the MAC and whatever. If the padding doesn't check out, if any of these bytes is not 8, he usually returns an error, and a typical error is something like bad padding. Now, what could you do with that, in order to take advantage of this? Well, it turns out that the CBC mode encryption that we use here is malleable. And what that means is, even if I only see the ciphertext, it turns out that I can flip bits in the ciphertext that will cause exactly the same changes to appear in the underlying plaintext after it's decrypted. And so what Vaudenay discovered is-- I want to give you a very specific example here. Let's imagine that the last byte, I'm going to draw a line here just to illustrate. Let's imagine that the last byte of the message that we want to attack, the last byte of the MAC really, is an 09 byte. Okay? So what can happen here is, if I send this message, I could, tampering with the message, flip this to an 09. Will this message, is this going to pass the padding check? No, okay. Well, let's say that I am going to change all of these values to 09s, using the same tampering. I can just flip a bit in each of these byte positions to cause all of these to read as 09s. Now, what's going to happen? [Inaudible Remark] It's going to say yes, there are 9 bytes of padding here, including this last byte. So I can go through every possible transformation. I can say, "Okay, it's not an 08." I'll flip these maybe and I'll turn them into 9s. I'll turn them into 10s. I'll turn them into 11s, and so on. So this is just a very simple example. If I'm careful in changing this padding around correctly, one of these padding values will turn out to match up correctly with the last byte of the secret encrypted message that I actually want to find out. So for all the wrong guesses, I'll get a padding error here. But for the one that actually matches this byte that I want to learn about, I'll get something else. So I get, maybe or probably, a MAC error or something. And just using that condition, I've learned one byte of encrypted information.
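A minimal Python sketch of the record layout and padding check just described: plaintext, then MAC, then TLS-style padding out to a 16-byte boundary, where the pad is a length byte that does not count itself, preceded by that many copies of the same value. The HMAC-SHA1 choice and the missing encryption step are simplifications; real TLS would now encrypt the padded record with AES-CBC.

    import hmac, hashlib, os

    BLOCK = 16  # AES block size in bytes

    def tls_style_pad(data: bytes) -> bytes:
        # pad_len copies of pad_len, followed by the length byte itself
        pad_len = (BLOCK - (len(data) + 1) % BLOCK) % BLOCK
        return data + bytes([pad_len]) * (pad_len + 1)

    def check_padding(decrypted: bytes) -> bytes:
        pad_len = decrypted[-1]
        if decrypted[-(pad_len + 1):] != bytes([pad_len]) * (pad_len + 1):
            raise ValueError("bad padding")   # this error is the oracle the attack exploits
        return decrypted[:-(pad_len + 1)]

    def build_record(mac_key: bytes, plaintext: bytes) -> bytes:
        mac = hmac.new(mac_key, plaintext, hashlib.sha1).digest()  # 20-byte MAC, appended first
        return tls_style_pad(plaintext + mac)                      # then padded; TLS would now
                                                                   # encrypt this with AES-CBC

    mac_key = os.urandom(16)
    record = build_record(mac_key, b"23 bytes of cookie data")
    assert len(record) % BLOCK == 0
    assert check_padding(record).endswith(
        hmac.new(mac_key, b"23 bytes of cookie data", hashlib.sha1).digest())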
Now, how do I take that one step further? >> You flip that one too. You go to the next [inaudible]. >> Exactly. So it turns out that I can just basically say, "Okay, well, let's try incrementing all of these." Let's try turning these into 10s, and, okay, if the last byte here is 10 then I've learned something, or I could try other bytes. So actually, what I can do is I can flip this byte too, and I can flip it until, you know, if this XOR-ed with a certain value is 10, then I'll get a correct check. And if I know the value that I XOR it with, which I do, then I can figure out what the original value is here. So this is the world's quickest explanation of kind of a complicated attack, but it's not that complicated a man-in-the-middle attack. You could work this out, I mean, this is not something you need a PhD in crypto to understand. This is a very simple attack. And it's a fast attack, because it requires 256 decryption attempts per byte, which is not that bad. I mean, that's actually pretty reasonable. It's like Hollywood-style decryption. They can actually, like, really decrypt something. [Inaudible Remark] Yes. And that's the key here. I'm not going to go into the details of CBC because I don't think that's going to add very much here, but that's exactly right. You can XOR a bit into a certain position of the ciphertext and that same exact bit will be changed, flipped, in the decryption, except it will be one block further on in the decryption of the plaintext. If you look up CBC mode on Wikipedia you'll see why that is. Yes? [ Inaudible Question ] That-- well, this is true. But it turns out, take that byte. So, for example, let's say we have a case here. Let's say this is not 15. Let's say this is-- let me think of a good one. Let's say this is FF. Right? That's definitely not a valid padding byte, okay? But let's say that I've guessed that, you know, these are all 09, and that would include this. So I can flip this also to, say, 09. And then I can XOR a value into here, by XOR-ing this with something. So let's say I'll XOR this with some value Y so it comes out to be equal to 09, which will give you valid padding. So maybe that's not the best explanation. So, FF XOR-ed with some Y equals 0x10, and we pick Y: we can try every value of Y until we get the one value of Y that causes this thing to be equal to 0x10, and once we know that the output here is 0x10, and we know the particular value we picked, then we can recover whatever the input was, okay? So basically, this is a very bad explanation, but yes, it doesn't matter that there are just 15 valid values of padding. This attack pretty much works against anything and it's a very nice attack against TLS. So, to fix this attack, there's a recommended way and there's the way that TLS did it. The recommended way is simple: change the order of the way you do your MAC. You do something called Encrypt-then-MAC. All that means is, encrypt your data first with your padding and then stick your MAC over here on the outside. That prevents anybody from doing this crazy XOR stuff, because the MAC is like a check that would catch that, and you'd be fine. They didn't do that. So they actually implemented a bunch of countermeasures, not all at the same time. Okay, so countermeasure number one. This is the-- well, I don't want to say anything rude about undergraduates.
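To see the malleability property that makes all of this work, here is a toy CBC implementation in Python. The 16-byte "block cipher" is a 4-round Feistel network built from HMAC-SHA256, purely so the sketch runs with the standard library; it is not AES and is not meant to be secure. Flipping a bit in ciphertext block i flips the same bit in plaintext block i+1 and garbles block i, exactly as described above.

    import hmac, hashlib, os

    BLOCK = 16

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def F(key: bytes, half: bytes, rnd: int) -> bytes:
        # toy round function: 8 bytes of HMAC output
        return hmac.new(key, bytes([rnd]) + half, hashlib.sha256).digest()[:8]

    def enc_block(key, block):
        L, R = block[:8], block[8:]
        for rnd in range(4):
            L, R = R, xor(L, F(key, R, rnd))
        return L + R

    def dec_block(key, block):
        L, R = block[:8], block[8:]
        for rnd in reversed(range(4)):
            L, R = xor(R, F(key, L, rnd)), L
        return L + R

    def cbc_encrypt(key, iv, plaintext):      # plaintext length must be a multiple of 16
        out, prev = b"", iv
        for i in range(0, len(plaintext), BLOCK):
            c = enc_block(key, xor(plaintext[i:i + BLOCK], prev))
            out, prev = out + c, c
        return out

    def cbc_decrypt(key, iv, ciphertext):
        out, prev = b"", iv
        for i in range(0, len(ciphertext), BLOCK):
            c = ciphertext[i:i + BLOCK]
            out, prev = out + xor(dec_block(key, c), prev), c
        return out

    key, iv = os.urandom(16), os.urandom(16)
    pt = b"0123456789abcdef" + b"SECRET-BYTE-HERE"   # two 16-byte blocks
    ct = cbc_encrypt(key, iv, pt)

    tampered = bytearray(ct)
    tampered[0] ^= 0x01                              # flip one bit in ciphertext block 0
    out = cbc_decrypt(key, iv, bytes(tampered))
    assert out[16] == pt[16] ^ 0x01                  # same bit flipped in plaintext block 1
    assert out[:16] != pt[:16]                       # block 0 decrypts to garbage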
So I'm not going to say it, but this is what you get if you assign a first-semester freshman computer science student to fix this problem without changing too much code, and they're lazy. Well, the first thing they're going to say is, "This padding error is the problem, not the encryption. Let's just make it so that no matter what happens here, we give one error." Would that fix the problem? Does that fix the problem? >> No. >> Okay, do you have any idea what might cause the problem here? [Inaudible Remark] Very good. Yeah, exactly. So what somebody realized is that-- so in theory, this would work, except for the fact that there's a timing attack. What happens in most implementations is, the first thing you do is you check the padding. If the padding is bogus, you bail. If it's not bad, then you go on and you check this MAC. That actually takes a bit of computation. So if somebody is actually timing you, they can typically tell that you've done this extra work, and then they know whether the padding check succeeded or failed, and then they just implement the attack the same way they did it before. And that actually works really well, 'cause these MACs are pretty fast, but they're still crypto, so they take a bit of computation. So this actually turned out to bring this attack back to life. Okay, we need another patch, okay? So here's another patch. All right, constant-time decryption. What we're going to do is we're going to make sure that even if the padding check fails, we're going to still compute a MAC. That should fix the problem, right? Can anyone tell me why that might still be a little tricky? >> Power. >> No, it's not even that. I'll give you a hint: it's still timing. Still a timing problem. This is in the spec, by the way. It turns out that when the padding is bad, you don't know how long the message that you're supposed to MAC is. >> Wait. >> So, if the padding is bad, right? The padding could be up to 255 bytes long, by the way, I should mention that. So if you have a valid 255-byte padding, you might have like a one-byte message, followed by a MAC, followed by a huge amount of padding. Or, if somebody totally screws up your padding, you don't know if that's a 254-byte message followed by, you know, a little bit of padding. You just don't know, because the padding is broken. So the countermeasure in the spec says, "Yeah, we know that's a problem. So what you should do is compute a MAC on something, you know. Maybe just MAC the whole message without, you know, including whatever might be here in the padding. Just MAC it all." That's probably a small enough time difference that, in general, no one's going to be able to detect it. It turns out that people can break it, okay? And then that led to this. This is my favorite headline of recent weeks, just because it includes the word buffoonery in it. I really like that. This is an attack, and I want you guys, if you can see it from back there, to look at the date at the bottom of this attack. It's February 4, 2013. This is a brand new TLS attack called Lucky 13. It just came out this week. And the attack takes advantage of this. It turns out that the people designing TLS thought that nobody could possibly detect that tiny timing channel. But by doing many, many measurements, millions of measurements, you can detect it, and that brings these padding oracle attacks back to life. And this is the current version of the spec.
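A rough sketch of the timing gap being described: one verifier bails out as soon as the padding looks wrong, the other computes the MAC regardless. The padding and MAC checks here are stand-ins, and even the "constant-ish" version is not truly constant time, for exactly the length-ambiguity reason explained above; the point is only that the two code paths take measurably different time.

    import hmac, hashlib, os, timeit

    mac_key = os.urandom(32)
    record = os.urandom(4095) + b"\xff"   # a "decrypted" record whose padding byte is wrong

    def leaky_verify(rec: bytes) -> bool:
        if rec[-1] != 0x08:               # stand-in padding check fails immediately...
            return False                  # ...so the slower MAC computation is skipped
        expected = hmac.new(mac_key, rec[:-21], hashlib.sha1).digest()
        return hmac.compare_digest(expected, rec[-21:-1])

    def constant_ish_verify(rec: bytes) -> bool:
        pad_ok = rec[-1] == 0x08
        expected = hmac.new(mac_key, rec[:-21], hashlib.sha1).digest()   # always pay for the MAC
        mac_ok = hmac.compare_digest(expected, rec[-21:-1])
        return pad_ok and mac_ok

    print("early abort:", timeit.timeit(lambda: leaky_verify(record), number=20000))
    print("always MAC :", timeit.timeit(lambda: constant_ish_verify(record), number=20000))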
So the patch there is to write even more complicated code in your decryption library to fix that. >> At least for the moment, over the internet, the latency and the [inaudible] is too much. >> For the moment. >> Yes. I wouldn't count on that as a ten-year plan. >> I wouldn't count on that as a six-week plan. Especially now with the terror attack coming out, and so on. I mean, but seriously, I wouldn't count on that, because I think that there are probably ways to amplify it. And you know, this idea that you have to be far away from things, that's kind of an old-fashioned view, right? 'Cause in the old-fashioned view, we put our servers way up here-- what's that? >> Yeah, if you're on the LAN, this is a very valid concern right now. >> But in the cloud, our server is here and we're three virtual machines over. So it's very possible that we're going to be a lot closer to the attack server than we think we are. So these kinds of things may seem theoretical, but they probably aren't in the long run. Yeah? >> So, in general, these are bad ways to fix the crypto implementation? >> Never apply band-aids, yeah. I mean, sometimes there's no good way and then you're stuck; you have to find the best bad way to pick from. This is a case where doing this really would have saved all of these problems. Because here, you check the MAC, and then no matter what somebody does to the ciphertext, none of these padding oracle attacks apply. And they actually had a chance with TLS 1.1, when they broke compatibility with the older version. They had the chance to do this. They chose not to do it, and now we're stuck with this kind of stuff today. So that's one thing. One last detail: another bug that I already mentioned is called BEAST. It has to do with the fact that this thing called an initialization vector in CBC mode is not picked correctly in TLS version 1.0. I won't talk about it too much, except just to remind you guys that it was known since 2002, described in I guess 2005, and nobody really cared about it for years, until it was found, or rediscovered, in 2011. The last thing that I want to mention in this section is that the practical solution, of course, is to use a cipher suite that doesn't have padding. The one that everybody loves today is called RC4. Does anybody here know about RC4? Anyone here used RC4? >> I know it's old. >> It's old, it was developed by Ron Rivest in like 1984 or something. [Inaudible Remark] It's not older than me, but I was young when it came out. I actually have a demo, believe it or not; this is a crypto presentation with a demo. I'm going to show you something about RC4 that won't kill your terror or anything, but, you know. I wrote a little program. So here's the basic idea: RC4 has biases in it. A proper stream cipher, basically what it does is it outputs a stream of bits and then it XORs those bits with a message. And the idea there is that that XOR-ing of random-looking data with the message should protect, should hide, the message itself. But it turns out that RC4, the algorithm, has biases. Certain bytes that come out of it occur with higher or lower probability than a real random generator would produce. Probably the biggest example is the second byte of the output, which comes out as zero with twice the probability it should. Now, that seems like a tiny, tiny little thing: 256 possible bytes.
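A small Python version of the kind of bias demo described next. Because RC4's second keystream byte is zero roughly twice as often as it should be, encrypting the same plaintext under many random keys and taking the most frequent ciphertext value at that position usually recovers the plaintext byte; like the speaker's demo, it is not 100 percent reliable.

    import os
    from collections import Counter

    def rc4_keystream(key: bytes, n: int) -> bytes:
        S = list(range(256))
        j = 0
        for i in range(256):                       # key scheduling (KSA)
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        out, i, j = [], 0, 0
        for _ in range(n):                         # keystream generation (PRGA)
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(S[(S[i] + S[j]) % 256])
        return bytes(out)

    PLAINTEXT = b"AA"                              # we try to recover PLAINTEXT[1]
    counts = Counter()
    for _ in range(5000):                          # 5,000 encryptions under fresh random keys
        ks = rc4_keystream(os.urandom(16), 2)
        counts[PLAINTEXT[1] ^ ks[1]] += 1          # ciphertext byte at the second position

    guess, _ = counts.most_common(1)[0]
    print("recovered byte:", chr(guess), "(correct)" if guess == PLAINTEXT[1] else "(missed)")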
So it produces zero with, you know, probability 1 over 128 instead of probability 1 over 256. How could that possibly be useful? Well, it turns out that if you have the same message encrypted with many different RC4 streams, you can use that bias to actually recover what's encrypted. I wrote a program to do this. This is only decrypting the second output byte. I encrypted the value 'A' in 5,000 different messages. I looked at the second output byte. And this program, which is not 100 percent accurate as you can see, is recovering what byte is encrypted at that position. Again, crypto demos suck, but this is the best crypto demo I've ever done. It almost always works. And that wouldn't be much of a problem if RC4 only had a bias at the second output byte. But that's actually not the case. RC4 has biases here at the first-- this is zero-indexed. So the second output byte, the fifth, the sixteenth, the 32nd and the 48th. And I think some other ones later on. So, little tiny biases admittedly, but biases. So RC4 is an old cipher, and these biases are going to bite us eventually. >> Oh, they already did, with WEP. >> Yes, RC4 really did that to WEP. That's why you should not use WEP. But these things are still active in TLS. RC4 is very popular, and more people today are even moving to RC4 because they are afraid of these kinds of attacks. Okay, so I've spent a lot of time on these sections, so I'll just pick up the pace, 'cause I know you guys are-- this is an OS class, and I have to say something systems-related at some point, right? So, all right, I'm going to skip the compression stuff. I just want to quickly talk about analysis. Okay, so a lot of you guys are saying, here's the problem: academic security is fine, academic cryptography is fine, but for some reason we have entrusted the design of real crypto in the real world to a bunch of people who don't really know what they're doing, and we're telling them to do it right and they're not listening. That's actually not the case. Cryptographers have been trying to analyze TLS for years, trying to come up with some kind of formal method for proving it secure. This is one person's summary of what the TLS protocol looks like in an academic paper. Anybody who's ever looked at TLS know what is missing from this diagram? It starts out with people picking random numbers and then going right to a key exchange. There is no negotiation. There is no certificate exchange. There is no exchange of any messages at all. All the stuff that actually happens in TLS-- that's been the place where it breaks. So in academic analysis right now, we're still looking at, like, cartoon versions of TLS, and we're still not making that much progress in trying to prove them secure. To give you an example, there's this negotiation phase where I say, I support AES and you support this, and I support RSA and you support Diffie-Hellman. The way it works now is both parties say, here are the cipher suites that I support. And then once we agree on them, once we agree on the algorithm, then we do a key agreement using the algorithm we just decided to negotiate on. Does anybody see any potential weaknesses or problems with this kind of design? >> If someone pretends they only know the worst possible one. >> Exactly, or if something in the middle does exactly what you described:
Edits out the messages where I say, I support great crypto system and great crypto system, and leaves only the one where I support, gimpy, you know, terrible crypto system then we end up agreeing on gimpy terrible crypto system. This is a real problem in some versions of SSL. >> Wasn't there a null cipher available in [inaudible]. >> There is a null cipher, there were export weakened ciphers that only supports a 40-bit crypto because we're afraid of exporting it to the rest of the world. Just bad stuff-- bad stuff. [Inaudible Remark] But we fixed a lot of the stuff. We fixed it in sort of kludgy, band-aidy ways that nobody has really formally analyzed they probably work. But nobody has ever actually looked at a crypto conference and said, can we prove this particular pattern to be secure? You think that we've analyzed this, right? We have all this crypto-confidence. We have hundreds of publications every year, a non-malleable zero knowledge optimal round of proofs. But there is only one recent publication I know of it, the big crypto conference that analyzed TLS with Diffie-Hellman ephemeral protocol. Nobody has ever done a formal analysis of the RSA handshake which is the one that everybody actually uses. Okay. I'm running out of time. So I'm going to move quickly, 'cause this is the fun part. So everything up until now was the good stuff. This was the stuff that we know how to do well. Now, we're going to talk about code. All right. We have these specs but obviously you can't use a spec any more than you could eat a recipe. So, when you implement TLS, you end up typically, you do it yourself if you hate yourself or you use a library. Examples here, the biggest most common one are OpenSSL, GnuTLS which nobody really uses and NSS which I think is used in Chrome and in Firefox but on a few servers but not so many. >> Pidgin uses the one you said, the one users-- >> What's that? >> Pidgin-- chat. >> Oh, okay. So-- [ Inaudible Remarks ] >> And it's like, why do you use that? It's like, well that's what our intern implemented. >> Yeah, okay. So, that's-- that deserves its own slide, that-- our intern implemented it in 1995. That is definitely a problem. I'm also going to look at OpenSSL today because OpenSSL is the most popular implementation. But you know, there are other bad ones too. How many people have ever looked at the OpenSSL code? [Inaudible Remark] I can tell, there is only a couple of lines, 'cause you don't have that wide-staring eyed look that people have really "looked to the OpenSSL code" look. Let me give you a fragment. Okay. This is a good piece of OpenSSL code. I just want to point you to a couple of nice features, okay. So I like to put curly braces on some of my ifs just because I think it's less confusing, but whatever, maybe I'm picky . I have never yet put an if zero around any code. This is a code pattern that I have never seen but they used throughout OpenSSL to keep you I guess from hitting error conditions. It's a disaster. You look like you're about to say, "I use that all the time." [ Inaudible Remark ] It's very possible. There is a talk of cleaning up OpenSSL to the point where you could run an auto-indent program on it. But nobody can get it [inaudible]. Because if you noticed, the indenting is also crazy. This is an example. But actually, I'm going to break out and I'm going to show you 'cause that example doesn't do it justice. So, let's get-- let's get the next code up here 'cause I brought my favorite piece. Okay. So, this is a function. 
A very important function in the SSL handshake. [Inaudible Remark] Well, if I zoom it, well you won't get the complete effect of awfulness. But let's see if we can-- oh God-- unfortunately, I'm not even sure how you change font sizes in [inaudible]. I probably have to go to preferences, and, there you go, fonts, oh my God. I tried to be dummy. Oh man, well I'm sorry. I just-- just-- >> You can make it full screen and-- >> Yeah, maybe I can, here-- maybe I'll make it bigger. This is going to end up being horrible. Come on, why can't I make it bigger? Can I just select all, oh man. Yes! Okay now what I'd do, here we go. Font, here we go. Let's get you get up to 18 point and close. All right. Okay. So, let's start-- here is the top of the function, SSL3 connect, and [inaudible] can scroll quickly and you guys get a shout when you see the end. Look, if open this little [inaudible]. And if you see any if zeros, let me know. Okay. We're still not at the end of the function. But we are going to get there soon, oh no we're not. Okay, we're still not there. Come on. Okay, this is the guy that should get it tiring here, let's go a little faster. There, there we go right there, that's the end. Not even intended all the way down to the end, but that's OpenSSL and that's still good code, there's worse in OpenSSL. And it's very hard to tell what code is even being used and what code is buggy because it's just a mess. >> [Inaudible] paper. No function should be longer than you were told. >> Yeah, this-- some functions in here are longer than like everybody in this room, lying down on the floor would be together. There is a bad stuff. There is also some really funny comments in there. This is my favorite, you can't read it from back there, blah-blah-blah-blah we should change this, this would obviate the ugly and illegal kludge in crypto memory-leak cb. Otherwise, the code police will come and get us. Some of the variable names, sometimes they don't even bother with the comments, they just straight up clear what their-- their variable. But these are all minor things. There are a couple of actual serious-- serious problems here. So let's talk about a real serious problem. A real implementation, what's that? [Inaudible Remark] Nobody knows, nobody will admit to being an OpenSSL developer. It's like a weird secret club. I know two people who would now work on it. But the rest of them, they don't like talk in public, they don't like it's weird. And it's actually worse for NSS. I don't even know who does NSS. Okay. So, this is the spec for TLS1.2. That's the most recent version of TLS. Here is-- this is RSA decryption by the way. This is a very important part of the RSA handshake. Step 1, generate a string of 46 random bytes, simple enough. Step 2, decrypt, this is the RSA message to recover the plain text M. Simple enough. Now, this uses a padding scheme called PKCS1 V15 padding. If the padding is not correct or if something else goes wrong, what you do is you use this random string, R, that you generate in the first step instead of the message you would have gotten now. This prevents a particular attack that is very similar to this padding article attack that we described up here about works on RSA encryption. This is an attack that can be given a few a hours-- let somebody. If they are able to find out, if the message decrypts with bad padding, they can actually decrypt any RSA plain text. So this is a very serious attack. 
Their countermeasure is to first generate a string of random bytes and then if there's a problem with the padding, just use those random bytes silently, instead of the ones that you'd ordinarily get and not produce an error. So it's very similar to what we are talking about here, where instead of producing a padding error, we kind of just go ahead with the protocol. Simple enough. How much you attack a protocol like this if you are an attacker. It's not giving you an error, but you want to find out if something went wrong. Or you might hope that what these people are doing is not generating the random string first, but rather, waiting until they get a padding error and then generating a random string. Because that might take a little extra time and that might lead to a timing attack. Nobody would be stupid enough to do that, would they-- would they? Okay, good. So here we go. This is OpenSSL version 1.01C. This was the current version up until a few weeks ago. It's still the current code as far as I know. Step 1 is decrypt. Step 2, if the decryption fails, generate a bunch of random bytes. And then finally-- do this stuff. I had a grad student who runs some experiment with the last week and he was able to get-- no, not big timings, but 20 microsecond timing, which are big enough, that with multiple samples, you can detect them over a network. You can actually probably implement a real attack on real deployed OpenSSL today using this vulnerability. And certainly, any kind of embedded device that uses this code is really in danger. And I've notified the people with this, but they don't care. And they're not going to care until somebody codes up an exploit and actually does it. >> You need a cool acronym. >> I know, what do you think? Let's have-- let's have a brain storm session about that after the presentation. It has to be something better than terror, so. So these are real bugs that-- >> Eagle. The American. >> Errors in, anyway. >> Yeah, given that these guys have been releasing those things in Argentina. >> Yeah, I know-- >> Juliano Rizzo? >> Echo parks, which I can't seem to be able to make. >> Yeah, is that a real conference or they just made it up so they can put it in Argentina? >> It is in fact a real conference. I mean, two years ago, it was absolutely fantastic. >> Okay. >> Last year, you know, it was pretty damn good. >> Okay, so it's-- you're really excited about it, like a laser and some stuff that they present in their SSL [inaudible]. >> Yeah and they have this strange like communist symbol schemes, you know, arms grabbing or red arms grabbing. [ Inaudible Remarks ] >> A couple of other things, I guess I didn't finish with this. Not only does it generate random bytes, but there's thread-locks [phonetic] all around this random number generator which means if you get two of them to hit each other on a multithreaded implementation, you could probably expand that. The good news is that NSS does not have a set of thread-locks. They have two, so this problem is not isolated actually to OpenSSL. It's in all in these libraries. This one-- I'm running out of time, just another-- this is another good how not to implement code example. Hard to read so I'll just example quickly. When you're doing a signature comparison for certain kinds of signature, the recommendation is, you put your expected value in a buffer, and then you do the RSA stuff and then you get a value and you compare the two buffers. 
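A sketch of the two orderings contrasted above for the RSA PKCS#1 v1.5 countermeasure. The function rsa_decrypt_premaster() is a hypothetical stand-in for the real RSA decryption and padding check; what matters is when the 46 random fallback bytes are generated.

    import os

    class BadPadding(Exception):
        pass

    def rsa_decrypt_premaster(ciphertext: bytes) -> bytes:
        raise BadPadding()                    # pretend the attacker sent a malformed ciphertext

    def per_spec(ciphertext: bytes) -> bytes:
        r = os.urandom(46)                    # step 1: ALWAYS generate the fallback first
        try:
            return rsa_decrypt_premaster(ciphertext)
        except BadPadding:
            return r                          # silently substitute: no error, no extra work

    def like_the_bug(ciphertext: bytes) -> bytes:
        try:
            return rsa_decrypt_premaster(ciphertext)
        except BadPadding:
            return os.urandom(46)             # fallback generated ONLY on failure:
                                              # the extra work is a timing signal

    attacker_ct = b"\x00" * 256               # a malformed ciphertext an attacker might send
    assert len(per_spec(attacker_ct)) == 46 and len(like_the_bug(attacker_ct)) == 46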
You do not do the RSA thing and then pars to see if the thing is correctly formed, not that because it's easy to get it wrong. OpenSSL, instead of doing something that would be two lines, does it with, you know, with 20. So this is the kind of thing you see throughout OpenSSL. Now, these problems are not limited solely. I promised that I was going to talk about APIs and ABIs a little tiny bit, so I'm going to try that here. These problems are not limited to the internals of the code which would at least be forgivable or understandable. The APIs OpenSSLs are still difficult to use. And there was a paper that came out sometime in the last few months called, "The most dangerous code in the world validating SSL certificate to non-browser software" from Stanford. And what they did basically is they went through a huge pile of Android and OpenSSL-using applications to look for people who are not doing SSL correctly. >> All of them? >> All of them? Essentially, they found like 99.7. >> All day at my job. >> Yes. >> No one can do this right. >> That's basically it. So a lot of people-- I don't even know-- >> Twitter does it right? >> Twitter does it right, okay. >> That is one exception. >> You think that this would be important, so the idea is, you have to use the API correctly, how to check it. If you turn on certificate verification, you have to check all these things, but people don't do it. And just by doing static analysis, they were able to find a lot of people who weren't. The reasons are two-fold, one is that people do testing, I guess. And they checked to see, you know, if you don't have an SSL certificate installed in your server, well, what you're going to do just for your testing, right? You're going to flip that off. You're going to not check the certificate and then you're going to run your test. And then inevitably, you're going to forget to turn it back on and you're going to shift the code without having it-- check the SSL search and that happens a lot. The other reason though, you can completely blame an OpenSSL because you would think that, in most places, if you had an INT enable flag on a API, there would be two values for that. One would be, one yes-- well actually, there'll be two values. One would be zero, no and then anything else will be yes. It turned out that there are three values and I'll describe. This is actually a problem with C-U-R-L CURL. CURL is an option called SSL VERIFYHOST. When at zero, it does exactly what you'd expected. It doesn't verify SSL certificates. When it's two, it verifies SSL certificates. When it's one, it does something really weird. It checks the certificate kind of like half way. It checks to see-- even to search any host names and then accepts the certificate no matter who gave it to you. So it checks the certificate, but it doesn't care if it's actually a certificate to the site you're visiting. So simply bypassing one which is the expected value, you can completely misuse this library and make yourself vulnerable to man-in-the-middle-attacks. So this is just stupid and this happens all over the place. And part of the reason that happens is that using any of these use of software, even if the API level is very complicated. Here's an example of doing encryption with OpenSSL. Mostly, this is involved with pulling an RSA public key off a disc and using it. It looks kind of like OpenSSL code does, right? It's, you know, like crazy. It's complicated. This is hard to use and people make mistakes. 
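For contrast with the misconfigurations just described, here is a short example of certificate and hostname verification using Python's standard ssl module rather than OpenSSL's C API: the default context already verifies both, and there is no half-way setting like VERIFYHOST=1 to trip over. The hostname example.com is just a placeholder.

    import socket, ssl

    ctx = ssl.create_default_context()        # verify_mode=CERT_REQUIRED, check_hostname=True
    # ctx.check_hostname = False              # the tempting "just for testing" switches
    # ctx.verify_mode = ssl.CERT_NONE         # that tend to ship to production

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])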
This is kind of a space-shuttle version of how you get things encrypted. A better way to do it would be something like the library NaCl does it, which is, don't make people think about algorithms. Give them a low, a high-level API like this one. It's called CryptoBox where you basically just pass it some strings and you say, encrypt this stuff, and it does everything for you with the right algorithms. You don't have to think about it. And so this is something that hopefully we will get better at doing in the future. Yes? >> Is that an issue in the real world, and they want to optimize, you know, Crypto and that's why they'd only use libraries? >> I mean, no. I mean, people wanted to use libraries, but-- [ Inaudible Remark ] People use the library that is the most popular one out there, and the fastest one out there. So I guess that's what you're saying is the one that has the best optimizations. They don't care if it's a terrible crappy library. If it encrypts a yes, no, 10 percent faster, they might go for that. That's a reason. That's definitely a reason. But the real reason is there are no good, well-supported libraries out there that do everything right. You have a choice of being very well-supported with lots of people working on it in bad, and not very well-supported and not, you know, deployed very much and good. And that's something that's need to change. >> And that of course, where it develop in time, would kill your company. >> Yes. You can't afford-- yeah, you can't afford to do things right if it hurts you and cost your mind. How much time do I have? Am I overtime now? >> Uh-hmm. Technically-- technically, this is it but you can probably have another few minutes. >> How many guys have to run to a class 'cause I need to-- all right. I'm going to take two more minutes-- two more minutes. The last thing I want to talk about is so simple that it'll only take a minute and then I can say thank you. This is kind of a typical experience if you ever go to a secure website. This is just human beings. After all, even if you get all the technical things right, this is what actually happens when most people go to a secured website. You don't type HTTPS American Express dot com in. You type in americanexpressdo.com. And almost always what happens is your browsers goes first to the insecure American Express dot com site where then it gets redirected to the secure site. And from there on, you're great. Now, this is a big problem. Anybody who's on the wire at this point can change this redirect. They can do anything they want to you in between so you don't go to that secure website. This is starting to change with the introduction of protocols like HSTS which are hacks that basically tell your browser, any time you go to American Express after the first time you go there. Always go to the secured site and reject any attempt to make me not go to the secure site. So, we're fixing this problem. But this impractice turned out to be more important than all of these, you know, fancy crypto attacks that I've told you about. And then here's one last really, really stupid one that I love, which is that, a lot of the time, this is what a secured website should like, right? You want to have a little something in the corner that says, "I'm secured" with a little lock. But, let me move over-- sorry. We moved already these examples that I have 'cause I'm in the wrong page. 
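The kind of high-level API being advocated here can be sketched with PyNaCl, a Python binding for NaCl/libsodium (assumed to be installed separately; this is an illustration of the crypto_box idea, not code from the talk). No cipher names, no modes, no padding: key pairs in, box in, box out.

    from nacl.public import PrivateKey, Box

    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Alice encrypts to Bob's public key; authentication is built in.
    sealed = Box(alice_sk, bob_sk.public_key).encrypt(b"attack at dawn")

    # Bob opens it with his private key and Alice's public key.
    opened = Box(bob_sk, alice_sk.public_key).decrypt(sealed)
    assert opened == b"attack at dawn"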
This is a big example of what often happens on websites today, which is that people stick a lock, you know, on the page itself and say it's a secure website. This is an actual Wachovia page, [inaudible] year old, where they didn't actually serve the page over HTTPS, but they did put a lock in the actual HTML of the page to make you feel better. And this kind of stuff is just, you know. >> I did check, the POST is HTTPS. >> But it doesn't matter. If I get the page itself, I can make the POST go anywhere I want. >> Yeah. >> So it's just, you know, this kind of usage problem turns out to trump all the good stuff that we're doing, but we're finally fixing it. And so that gives me 30 seconds to give you my last slide, which I think sums up this entire talk. Actually, what I'm not saying is that we're all good. I think there's a lot of good news. We're getting better at building these systems. We're getting better at analyzing these systems, but I don't think that cryptographers really take it seriously enough. Academic cryptographers are off doing round-optimal multi-party secure computation and zero knowledge. They don't realize that the real world-- that cryptography is not a branch of theoretical computer science. And that we-- and that means all of us, including people who are, you know, not grad students but thinking about it someday, should be thinking about these problems, because they actually matter. And so that's it. So thank you, guys. [ Applause ] >> In the class, we are going to be looking at synchronization next. And you will have fun measuring various execution [inaudible]. And implementing the-- one of these timing effects would make a good choice, so long as you would appropriately instrument-- >> Yes, yes. >> The code. [Inaudible Remarks] >> So, well, we still have [inaudible] for questions, for people who aren't-- [ Inaudible Remarks ] >> OpenSSL is so popular, I find that there is some kind of problem emerging trying to [inaudible]. I'm really sure there's going to be something more secure in the-- >> So you think that should be, or you-- >> I think it should be. >> I absolutely agree with you. I think the people who write OpenSSL don't, and I'm not joking now. I think they really don't want to be doing it anymore. They want people to be using another library. The problem is, there just isn't a good alternative to move to right now. So what people should be doing is either take it over and really fix it, or find one of the other libraries and take it over. I agree that it's a problem. [ Pause ] [ Inaudible Question ] I have no idea. I don't think that they realize-- I don't think they realize how much they're relying on OpenSSL, and I think they do rely on OpenSSL a lot more than they know. I just don't think it's even on the radar that these attacks are important. So I don't know what to say to that one. [ Inaudible Remark ] >> For the most part, it's just the awareness that actually matters, and it's still growing, 'cause we do have some customers that are really difficult and just don't understand that these things are even important, and wonder why they're even spending money on it to begin with. >> Yeah, that's-- I mean, they have the money, but-- right, this isn't where they want to spend it. Yeah. They're not shy about spending it; it's just that they want to do other things with it. >> There must be an awareness, because out of risk mitigation in terms of marketing, they do not tell us when they're being attacked.
>> A lot of our customers-- someone in the organization that has influence says, we are taking 10,000 dollars and paying for a security audit. And then the people we have to work with are like, this is dumb, I don't see it, this isn't that important, this [inaudible] can't be practically exploited, on and on and on and on. And like, we're just here because our manager decided we're doing this. >> Yup. I get a lot of that too, and it's one of the reasons I don't like to do that kind of consulting anymore. There are people who really-- yeah. First of all, there's an attitude that if you can't exploit it, it doesn't exist, which is why people have spent all this time developing, you know, BEAST and the other attacks and all this stuff, just to prove that things we knew were broken in the first place need to be fixed. And then the other problem is, yeah, people feel like their managers are pushing them-- at least their managers are pushing them into getting a security audit. At least, you know, at least you're looking at the good stuff. >> Yeah. >> The bad stuff is the stuff that you're not seeing.

Education

Green received a B.S. from Oberlin College (Computer Science), a B.M. from Oberlin College (Electronic Music), a Master's from Johns Hopkins University (Computer Science), and a PhD from Johns Hopkins University (Computer Science). His dissertation was titled "Cryptography for Secure and Private Databases: Enabling Practical Data Access without Compromising Privacy".

Blog

Green is the author of the blog "A Few Thoughts on Cryptographic Engineering". In September 2013, a blog post by Green summarizing and speculating on the NSA's programs to weaken cryptography, titled "On the NSA", was controversially taken down by Green's academic dean at Johns Hopkins for "contain[ing] a link or links to classified material and also [using] the NSA logo".[5] As Ars Technica noted, this was "a strange request on its face", as this use of the NSA logo by Green was not "reasonably calculated to convey the impression that such use is approved, endorsed, or authorized by the National Security Agency", and linking to classified information published by news organizations is legally entirely uncontroversial. The university later apologized to Green, and the blog post was restored (sans NSA logo), with a Johns Hopkins spokesman offering "I'm not saying that there was a great deal of legal analysis done" as explanation for the legally unmotivated takedown.[6]

In addition to general blog posts about the NSA, encryption, and security, Green's blog entries on the NSA's backdoor in Dual_EC_DRBG and RSA Security's use of the backdoored cryptographically secure pseudorandom number generator (CSPRNG) have been widely cited in the mainstream news media.[7][8][9][10][11]

Work

Green currently holds the position of Associate Professor at the Johns Hopkins Information Security Institute. He teaches courses pertaining to practical cryptography.

Green is part of the group which developed Zerocoin, an anonymous cryptocurrency protocol.[12][13][14][15][16] Zerocoin is a proposed extension to the Bitcoin protocol that would add anonymity to Bitcoin transactions. Zerocoin provides anonymity by introducing a separate zerocoin currency that is stored in the Bitcoin blockchain. Though originally proposed for use with the Bitcoin network, Zerocoin could be integrated into any cryptocurrency. His research team has exposed flaws in more than one third of SSL/TLS-encrypted websites, as well as vulnerabilities in encryption technologies including RSA BSAFE, Exxon/Mobil Speedpass, E-ZPass, and automotive security systems. In 2015, Green was a member of the research team that identified the Logjam vulnerability in the TLS protocol.

Green started his career in 1999 at AT&T Laboratories in Florham Park, New Jersey. At AT&T Labs he worked on a variety of projects, including audio coding and secure content distribution, streaming video, and wireless localization services. As a graduate student he co-founded Independent Security Evaluators (ISE) with two fellow students and Avi Rubin in 2005. Green served as CTO of ISE until his departure in 2011. He also co-founded the security companies Zeutro and Sealance.

Green is a member of the technical advisory board for the Linux Foundation Core Infrastructure Initiative, formed to address critical Internet security concerns in the wake of the Heartbleed security bug disclosed in April 2014 in the OpenSSL cryptography library. He sits on the technical advisory boards for CipherCloud, Overnest, and Mozilla Cybersecurity Delphi. Green co-founded and serves on the Board of Directors of the Open Crypto Audit Project (OCAP), which undertook a security audit of the TrueCrypt software.[17][18]

References

  1. ^ Miers, I.; Garman, C.; Green, M.; Rubin, A. D. (May 2013). "Zerocoin: Anonymous Distributed E-Cash from Bitcoin". 2013 IEEE Symposium on Security and Privacy (PDF). IEEE Computer Society Conference Publishing Services. pp. 397–411. doi:10.1109/SP.2013.34. ISBN 978-0-7695-4977-4. ISSN 1081-6011. S2CID 9194314.
  2. ^ "Zerocash: Decentralized Anonymous Payments from Bitcoin" (PDF). Zerocash-project.org. Retrieved 2016-05-13.
  3. ^ "On the Practical Exploitability of Dual EC in TLS Implementations" (PDF). Dualec.org. Retrieved 2016-05-13.
  4. ^ Schwartz, John (29 January 2005). "Graduate Cryptographers Unlock Code of 'Thiefproof' Car Key". The New York Times. Retrieved 2016-05-13.
  5. ^ Nate Anderson (2013-09-09). "Crypto prof asked to remove NSA-related blog post". Ars Technica. Retrieved 2016-05-13.
  6. ^ Nate Anderson (2013-09-10). "University apologizes for censoring crypto prof over anti-NSA post". Ars Technica. Retrieved 2016-05-13.
  7. ^ Fink, Erica (2013-06-07). "Prism: What the NSA could know about you - Video - Technology". Money.cnn.com. Retrieved 2016-05-13.
  8. ^ Perlroth, Nicole; Larson, Jeff; Shane, Scott (5 September 2013). "N.S.A. Able to Foil Basic Safeguards of Privacy on Web". The New York Times. Retrieved 2016-05-13.
  9. ^ "How the N.S.A. Cracked the Web". The New Yorker. 2013-09-06. Retrieved 2016-05-13.
  10. ^ "Behind iPhone's Critical Security Bug, a Single Bad 'Goto'". WIRED. 2014-02-22. Retrieved 2016-05-13.
  11. ^ Joshua Brustein (2014-04-09). "Why Heartbleed, the Latest Cybersecurity Scare, Matters - Bloomberg". Businessweek.com. Archived from the original on April 9, 2014. Retrieved 2016-05-13.
  12. ^ "Hopkins researchers are creating an alternative to Bitcoin - tribunedigital-baltimoresun". Articles.baltimoresun.com. 2014-02-01. Retrieved 2016-05-13.
  13. ^ "Bitcoin Anonymity Upgrade Zerocoin To Become An Independent Cryptocurrency". Forbes.com. Retrieved 2016-05-13.
  14. ^ "Researchers Work to Add More Anonymity to Bitcoin". The New York Times. 19 November 2013. Retrieved 2016-05-13.
  15. ^ Peck, Morgen E. (2013-10-24). "Who's Who in Bitcoin: Zerocoin Hero Matthew Green - IEEE Spectrum". Spectrum.ieee.org. Retrieved 2016-05-13.
  16. ^ "'Zerocoin' Add-on For Bitcoin Could Make It Truly Anonymous And Untraceable". Forbes.com. Retrieved 2016-05-13.
  17. ^ "Technical Advisory Board". Open Crypto Audit Project. Retrieved 30 May 2014.
  18. ^ White, Kenneth; Green, Matthew. "Is TrueCrypt Audited Yet?". Istruecryptauditedyet.com. Retrieved 30 May 2014.
