
Jonathan Koomey

From Wikipedia, the free encyclopedia

Jonathan Koomey is a researcher who identified a long-term trend in the energy efficiency of computing that has come to be known as Koomey's law. From 1984 to 2003, Dr. Koomey was at Lawrence Berkeley National Laboratory, where he founded and led the End-Use Forecasting group,[1] and has been a visiting professor at Stanford University, Yale University, and the University of California, Berkeley. He has also been a lecturer and a consulting professor at Stanford and a lecturer at UC Berkeley. He is a graduate of Harvard University (A.B.) and the University of California at Berkeley (M.S. and Ph.D.). His research focuses on the economics of greenhouse gas emissions and the effects of information technology on resource use. He has also published extensively on critical thinking skills and business analytics.

YouTube Encyclopedic

  • Computing Trends Change Everything | Jon Koomey | Talks at Google
  • ARM Techcon Keynote: Jonathan Koomey: Why Ultra-Low Power Computing Will Change Everything
  • "How green is the Internet?" summit: Internet infrastructure | Jon Koomey

Transcription

>>Luiz Barroso: My name is Luiz Barroso and thank you for coming here and helping me welcome Jonathan Koomey to Google today. Many of us here have known Jon for a very long time because Google and many of us share the passion for understanding the impact of computing technology in the world. Jon's been working on this for a very long time. For those of you who are in the audience and are from Google and, therefore, are tired of hearing this, Google has cared about energy efficiency and our impact on the environment for quite a while. I believe Google's been carbon neutral since 2007. We have, by now, invested nearly a billion dollars in renewable energy projects and we have been able to share with the industry and the academic community some of our findings in how we are able to achieve a very high efficiency in our data centers as well. Now, Jon has been a very influential figure in this field over at least the past decade or so, even though his very data-driven approach may be considered a little bit old-fashioned these days- [Laughter] >>Barroso: given how quickly some of our political candidates divorce themselves from factual data. >>Barroso: Jon really likes data and he likes principled analysis. Some of you may be familiar with some of the studies he did that are closer to what we do here at Google, estimating energy consumption of servers and data centers. Both studies, the one in 2007 as well as another one last year, are very widely cited and respected as good data points in this area. Jon is here today to talk about computing energy efficiency and you see the title, The Computing Trend That Will Change Everything. After many years at Lawrence Berkeley National Laboratory he has just joined, let me make sure I say that correctly, the Steyer-Taylor Center for Energy Policy and Finance at Stanford University as a research fellow.
He has co-authored dozens of research publications and has been the single author of at least two books, one of which is recent and is here right now: "Cold Cash, Cool Climate." We happen to have copies of the book here available for purchase after the talk and Jon will be happy to sign them. Please help me in welcoming Jon Koomey. [Applause] >>Jon Koomey: Luiz, thank you very much for that gracious introduction. So in the technology industry there's always a lot of loose talk about revolutionary change. I'm convinced that we're on the cusp of a revolution worthy of the name, and it's one driven by the energy efficiency of computing and communications. The trends in the efficiency of these devices are enabling a proliferation of gadgets that are cheap, that are smart, that are small, that are connected and that are so low-powered that oftentimes they can scavenge the energy they need from ambient energy flows. So as an example of what's becoming possible, this is a company called Proteus Biomedical. They've created a one cubic millimeter device that goes inside a pill. This pill, in its current incarnation, is a placebo pill that's taken along with other medications. When it hits your stomach, the digestive juices in your stomach allow it to generate just enough power to send a signal to a patch that you have on your skin near your stomach, and that then tells the doctor when you took your pills. Now, for most of us it's not a problem to remember when to take your pills, but there are large classes of populations where you actually need to know this, and for certain kinds of diseases this is very, very important. So this is a tiny thing, it uses very little power, it scavenges the energy it needs from your internal workings and, ultimately, as more sensors are added to this kind of device, it will transform the way we understand the human body.
So this is just one of many examples. I'm in the process of compiling case studies of different technologies and business models that are enabled by the trends that I'm gonna talk about for the rest of this discussion. So the research question that I wanted to answer was: how has the energy efficiency of computing changed over time? At the end of the talk I'll discuss what I think some of the implications of that are, and I'll ask for your help, because all of you have lots of interesting ideas that I don't know about on technologies and business models that are potentially relevant. So if you have examples for me, by all means, come up after the talk or send me an email. So everyone knows about Moore's Law. It's not a law in the physical sense; it's an empirical observation about the economics of chip production. In 1965, Gordon Moore looked at the trends in the production of chips and he found that the component density was doubling roughly every year. In 1975, he revisited that; because of changes in the economics, the trend moved to doubling every two years, and that trend in density has held for more or less the last 30-plus years. Now, it's a characterization of the economics of chip production, not physical limits, and it's often imprecisely cited. So for those of you who are interested in understanding how the law as it's precisely stated evolved over time, there's a great article by Mollick in 2006 in the IEEE Annals that I heartily recommend. So this is Moore's original graph: on the Y axis we have the relative manufacturing cost per component, on the X axis the number of components per integrated circuit. Each of these curves is a snapshot of a particular year, and the minimum point of each curve is the point of lowest-cost production. As you can see, over time the cost per component goes down, and from the progress in the minima of these curves we get what's now known as Moore's Law.
[Coughing in background] >>Jon Koomey: This has led to more popular presentations of this. This is a graph most of you have seen, number of transistors per chip over time, and this is from data from James Larus. It's from '71 to 2006; the doubling time is about every 1.8 years for transistor density. So originally, when I started looking into this question of computing efficiency, I wanted to replicate some work I had done for servers where I looked at energy use, cost and performance. In the research for this later work I actually found a great article by Bill Nordhaus, who's an economist at Yale. Most of his academic work is actually in the climate area, he's done a great deal of work on the economics of climate mitigation, but he's also an aficionado of the history of computing. And he compiled a long-term trend in performance and cost for computing that I was able to build upon. So once I found that work I realized that it didn't make sense for me to go into the cost side; instead I wanted to focus just on energy and performance. So I started looking into this. This was my calculation from two computers that I had at my house, with some data on the cost of the old IBM PC XT/AT and the cost of ENIAC. So this was kind of exploratory. It looks, more or less, like a straight line on a log plot. That's fine; that's cost, calculations per second per dollar of purchase cost. Then I made this graph, computations per kilowatt hour, a similar sort of picture. But it was this graph that really sent me down the path to understanding what was going on with the efficiency of computing. So my friends at Lawrence Berkeley Lab who run their supercomputer group, they had some estimated data on the power use and performance of their devices.
So they had a Cray-1 back in the day, and these NERSC machines are a set of gradually more powerful supercomputers that they had installed at the lab, and that really piqued my interest and I said, "Okay, maybe if I fill in data in this graph I'll learn something interesting and important." So the method that I used here was to focus on the peak performance of the computers and the energy use at peak performance. So number of computations per hour at full load divided by the measured electricity consumption at full load; so full load, full computing load, maximum performance. So this says nothing about the machine powering down; this is all about the active performance at highest output as well as the energy use at that period. So when you see computations per kilowatt hour, that's what this means. Now the NERSC data, it turns out I didn't actually end up using those data 'cause they're not measured. So every point on the graph that I will show you is measured data for electricity consumption and performance that's been normalized to Nordhaus' database or taken directly from it for purposes of the performance side. So there's a lot of different published data on power use. There's a great set of reports that was done for the Office of Naval Research by Weik in '55, '61 and '64. That was back in the day when you could actually count the number of computers there were, and there were dozens or low hundreds. I also visited various archives and people who had old computers. In general, when I was doing these measurements the computer was fully utilized. For portables, there are a few in the database, I subtracted out the screen power. Now, an important caveat, all of you know this, but actual performance trends with real software in use are not necessarily the same as performance trends as measured from benchmarks, and those are not the same as transistor trends.
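The efficiency metric described here, computations at full load divided by the kilowatt hours consumed over that same period, can be sketched as a tiny function. This is only an illustration; the function name and the sample machine's figures are hypothetical, not from Koomey's dataset.

```python
def computations_per_kwh(computations_per_hour_full_load, watts_at_full_load):
    """Efficiency metric: computations at peak output divided by the
    electricity consumed over the same period (both at full load)."""
    kwh_per_hour = watts_at_full_load / 1000.0  # watts -> kWh per hour of operation
    return computations_per_hour_full_load / kwh_per_hour

# Hypothetical machine: 3.6e9 computations/hour at a measured 150 W full load
print(computations_per_kwh(3.6e9, 150))  # ~2.4e10 computations per kWh
```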
There's often a lot of confusion in the popular discussion of this; it's important to state that up front. Now in the PC era, starting with microprocessors, performance per computer doubled more or less every year and a half. So this is something that I call the popular interpretation of Moore's Law; when you talk about Moore's Law with most folks, that's what they say. But Moore, to my knowledge, never talked about performance, just density of components. This is a graph of computations per second per computer, the maximum computing output for a variety of different devices over time. Most of these data come from the Nordhaus work. There are several dozen more that I added to the database, but you can see a clear upward trend as you'd expect. You also see this interesting result here. In 1960, around the time of the shift to transistors, you start to see a bit of a jump in performance, and we'll see that again in the efficiency curve as well. So I did a variety of measurements on different computers. I visited the Microsoft computer archive, which is a temperature and humidity controlled vault that has many old computers including an original Mac, a Fat Mac, original IBM PCs and a variety of other devices. I crawled under the desks of some of my colleagues at Lawrence Berkeley National Lab because they had dozens of computers that I could easily get access to. My in-laws are photographers; they had some high-powered computers as well, and then, as we'll see in a second, Erik Klein is a history of computing buff and he had a variety of computers in his own archives that I couldn't find anywhere else. The Computer History Museum here in Mountain View, I'm sure all of you have visited there, if you haven't it's an amazing resource, and there's still lots of online activity from people who are aficionados of old computers, so people who actually understand how vacuum tubes work were very patient in educating me as to the subtleties. So does anyone know this computer?
You recognize this? >>male #1: Altair >>Jon Koomey: Altair 8800. This was a kit computer. It appeared on the cover of "Popular Electronics" in 1975 and it sold tens of thousands of units. It was the first recognizable personal computer. It looks a lot different than what we're used to nowadays. How about this one? >>male #2: Osborne. >>Jon Koomey: Osborne, yes. This is what passed for a portable computer. [Laughter] >>Jon Koomey: I'd say luggable is more like it. The screen is about four inches diagonally, it's a monochrome screen and you have to plug it in. So it was really for people who were using it on site and had power; they needed to carry a computer with them. This one's obvious, the old Apple II. It brings back memories; I did my senior thesis in college on an Apple IIe. So this is Erik Klein, he has a garage full of old computers, some of which I just showed you. So this is the bottom line result from that analysis. The efficiency of computing, defined as the number of computations you can do at full load divided by the kilowatt hours used during that same period, has doubled more or less every year and a half since the 1940s. So longer than Moore's Law, starting with vacuum tube computers. This results in about a hundred-fold improvement every decade in the efficiency of computing and, in essence, this trend enabled the existence of laptops and smart phones. So for the statistics whizzes in the audience, a good correlation here: R squared of .98 for all computers, .97 for PCs; these statistics are good as well. Erik Brynjolfsson is a friend at MIT. I ran my regressions past him just to make sure I had done it right and he said, "Yeah, it all looks good and damn I wish my regressions looked like that, too." [Laughter] >>Jon Koomey: but the doubling time for computations per kilowatt hour, 1.6 years or so for all computers, 1.5 years for PCs, and slightly faster improvement in efficiency during the vacuum tube era.
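The hundred-fold-per-decade figure follows directly from the doubling time; a quick check of the arithmetic (illustrative only):

```python
def improvement_per_decade(doubling_time_years):
    """Factor of improvement accumulated over 10 years,
    given a fixed doubling time in years."""
    return 2 ** (10.0 / doubling_time_years)

# A 1.5-year doubling time compounds to roughly a hundredfold per decade
print(improvement_per_decade(1.5))  # ~101.6
```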
There's a big jump, as you can see, when we switch from tubes to transistors. So typically, on a per-switch basis, transistors are 10 to 20 times more efficient than tubes, but what you see here is about a two order of magnitude, maybe a little more, jump in the efficiency. What I think is going on there, it's not just having more efficient transistors but it's also innovation in the way people were using and designing these devices that led to an increase in performance. So not as many devices in the discrete transistor era but then, of course, by the time we got to microprocessors, there were more computers to measure. So here are some summary implications: the things you do to improve performance, at least over the period I analyzed, almost invariably improve computations per kilowatt hour. So for transistors, you make them smaller, you have a shorter distance from the source to the drain, you have fewer electrons in the transistor; that all reduces power use. For tubes, it's a similar story. You make them smaller, they have lower capacitance and smaller currents; that all reduces power use as well. The trends that I'm describing here make mobile and distributed computing ever more feasible. So if you think about it in terms of a device that has a battery, the battery life improves 100-fold per decade at constant computing power. So this is one example of the result of these trends. This is installed base of personal computers, laptops and desktops. In 2009, for the first time according to IDC data, laptops outsold desktops. This graph shows installed base, but just to give you the perspective here, laptops outsold desktops in 2009 and they will continue to do so going forward. So this trend toward more and more laptops in the installed base is only going to continue and increase.
This is another interesting example, this is the BigBelly trash compactor, and this is an example of both computing and communications reducing environmental impacts in other parts of a business process. So these are installed typically in parks, places where a truck needs to go around and pick up the garbage, and first off it compacts the trash five times. So right off that's five times fewer trips for the big garbage trucks, but it also sends a text message when it's full. So it's actually better than an 80 percent improvement. And it generates its own power from a photovoltaic panel on top. So a self-powered device like this would not be possible without very efficient information technology as well as efficient compacting. So this is an economic and environmental home run, clearly. What it does is it allows you to substitute bytes for atoms, and any time you can do that, almost invariably, that improves environmental performance. So here's another example. I'm going to visit these folks on the 17th of September: Josh Smith, at the University of Washington, used to be at Intel. He's designed sensors that scavenge energy from stray radio and TV signals. In active mode these are using 60 microwatts. For similar kinds of devices there are other possible power sources: light, heat, motion, bodily fluids, as we talked about. But, clearly, by the time you start getting useful computing work being conducted at the microwatt level, you can start to imagine applications that we could never ever do before, and that's ultimately what I think the source of the revolution will be.
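The truck-trip arithmetic behind the BigBelly example above can be made explicit. A toy calculation; only the five-fold compaction factor comes from the talk, and the function name is my own:

```python
def trip_reduction(compaction_factor):
    """Fraction of collection trips avoided if each pickup now
    holds compaction_factor times as much trash."""
    return 1.0 - 1.0 / compaction_factor

print(trip_reduction(5))  # 0.8, i.e. an 80% reduction from compaction alone
# The full-bin text message eliminates further trips to half-empty bins,
# which is why the talk calls it "better than an 80 percent improvement."
```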
So, other implications here. There's a lot of focus on big data, but Erik Brynjolfsson at MIT likes to talk, instead, about nano data, data that's focused on specific transactions and specific individuals, and our ability to do very detailed monitoring of transactions and characteristics of people and institutions will actually give us great insight and visibility into what's going on there in a way that we never could before. We'll have ever more precise control of processes, we'll have a better ability to do real-time analysis of what's actually happening in the world, and it will, of course, enable the internet of things to come about. So, bottom line on the implications here: we'll have better matching of energy services demanded with those supplied. So people always talk about the end of Moore's Law, and Luiz and I were talking earlier about the end of Dennard scaling. Most folks, when they talk about that, talk about it from the perspective of saying, "Here's our technology as we have it now. We're seeing real problems in how we implement this going forward, in increased density and increased performance. What are we gonna do to fix that?" I'm gonna take a different view, using a historical perspective. Back in 1985, Richard Feynman, the physicist, calculated a theoretical limit to the efficiency of computing. Now he did it by making an assumption. He said, "Let's assume a three-atom transistor. If I do that, using my computational wizardry, I can come up with the ultimate physical limit for computing." He did that. I plotted it on this curve, so here's that same graph that we saw. If you were to extrapolate the trend out to that physical limit, it would take us about three decades, so to 2041 or so. What this graph tells you, and it's the same story you're getting from the folks worried about Dennard scaling, is that sometime in the next few decades we are going to have to radically change how we do computing.
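The three-decade extrapolation to Feynman's limit is a one-line calculation. A sketch, where the remaining-headroom factor is a hypothetical value chosen only to illustrate the arithmetic, not Feynman's actual estimate:

```python
import math

def years_to_limit(remaining_factor, doubling_time_years):
    """Years until a trend with the given doubling time uses up
    the remaining headroom to a fixed physical limit."""
    return doubling_time_years * math.log2(remaining_factor)

# Hypothetical: if efficiency could still improve about a millionfold (2**20)
# before a Feynman-style limit, a 1.5-year doubling time gets there in 30 years.
print(years_to_limit(2 ** 20, 1.5))  # 30.0
```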
So this comes at it from the theoretical limit perspective rather than looking at our current technology. Now, one interesting little tidbit: earlier this year, researchers at Purdue, the university not the chicken company, and the University of New South Wales created a reliable one-atom transistor. It uses electron energy levels within one atom to do switching. Now, it runs at liquid helium temperatures. [Laughter] >>Jon Koomey: But what it says is that if we, again, can radically change how we do this computing, then there's a potential for much higher efficiencies. But, again, we're gonna have to change what we're doing. So there are some big unanswered questions here. One, to me, is: are there innovations, either in software or hardware, that will allow us to substantially exceed this historical rate of change in the efficiency of computing? Conversely, what roadblocks might get in the way and prevent the trends from continuing after the current innovation pipeline of the next five to ten years is exhausted? And then, of course, what do we do after we reach Richard Feynman's theoretical limit or the one-atom transistor limit? Obviously, you know, I don't have the answer to those questions, but I wanna summarize what I think some of the big picture implications are. So, it's not just about computing efficiency. Low power is actually more important than efficiency. You can make slightly less efficient devices that are really low power that perform a useful computing service; you can put them in places like pills, or in every device here, giving an IP address to the lights and other things that don't need to be connected by wiring. The revolution that I'm talking about is being driven by a confluence of trends. So it's the efficiency of computing but also the efficiency of communications and the efficiency of sensors.
So as we get MEMS devices that can do sensing at very low power levels, you get all sorts of interesting things that we couldn't do before, and then, of course, efficiency of controls. As soon as you start getting these really low power devices, energy storage becomes really important and energy harvesting becomes important as well. In most of these devices, the bulk of the electricity use is actually in idle mode. So the active power is not the most critical thing in many cases; it's actually how fast can you put the device to sleep and how low can you make the power use go? So the engineering challenges are different: building the fastest computer at its peak output is a very different set of engineering challenges than building one that is elegantly simple, that powers down quickly, that has very low idle power. It's a kind of elegant simplicity that's a little different from the focus on ever more powerful active mode performance. So let me just summarize the results here so we leave time for questions. In the PC era, the performance per computer doubled every year and a half. The efficiency of computing also doubled every year and a half during that period. So those things are connected. From ENIAC to the present, computations per kilowatt hour doubled, more or less, every 1.6 years. And the things you do to improve performance almost invariably also improve the energy efficiency of computing. We're far from the theoretical limits but, as all of you know, there are some technological challenges that we're facing in the coming years. The biggest implications of these trends are for mobile technologies, distributed technologies that are small and cheap and connected, and the focus now, not just in the popular press but in the engineering world, is on low power: figuring out ways to make tiny, cheap, low power devices that can be self-powered and put in places that we never could imagine putting them before.
And that's, as I said, a different set of engineering challenges. Some of the best engineering talent in the world is now focusing on that area and I think great things will come from that. So, uh, my three year old, three and a half year old twins have better Spanish pronunciation than I do but, Viva la Revolucion! This is where I wanna ask for your help. So, if any of you are aware of technologies, business models, companies doing interesting work enabled by ultra low power, ultra high efficiency, energy scavenging devices, I would like to hear about it. So come up after the talk, send me an email and let me know what you're thinking. With that I will say thanks to my funders and coauthors and ask any quest-, answer any questions that you may have. >>male #3: Have you done any comparisons between the rise in efficiency and the rise in the total energy use of computers? >>Jon Koomey: So the question is, have I looked at the relationship between the rise in efficiency and the rise in total energy use of computers? The most recent work that I've done on total energy use is in data centers alone. So I haven't done careful analysis on the whole IT picture. That's, I think, what you're getting at, right? Because as you have more and more distributed devices there's more energy associated with these different devices. So back in 2000 or so, there were a couple of guys running around talking about how the internet was gonna use half of all electricity in ten years and we did pretty careful analysis to summarize for PCs and all office equipment and data centers, everything else, the number for the US was around three percent of all electricity use. My bet is that that's a little higher now but it's probably still single digit percentage because we're, what's happening is we're at the same time as we're increasing our usage of computing but we're also starting to shift towards technologies that are more efficient. 
So a shift away from CRT screens, a shift towards laptops, more computing being done on hand-held devices. So you have countervailing trends; it's clear that we're using a little more electricity for IT than we did in 2000, but it's not exploding in the same way as the amount of information services is exploding. >>Luiz Barroso: We're looking at this from a technology standpoint, and you understand the economics better than I do. Chuck Moore, another Moore, not that one, an architect at AMD, used to talk about this virtuous cycle in computing in which new technology was created, that created devices that were totally amazing, and those devices had a market value that allowed you to get a lot of money to invest in new technology, right, and this kept going forward. At the time he first mentioned that, he was concerned, being in the PC business then, that perhaps computers are fast enough and therefore this virtuous cycle would end due to the fact that it was harder and harder to create new value on top of what they had already at that point. What do you think of that? >>Jon Koomey: So the question relates to the idea that somehow you reach kind of a saturation in the performance of, I'm sorry, the need for computing for people, and that the PC business in particular would be affected by this. That's one way to- >>Luiz Barroso: That's one example. >>Jon Koomey: That's one example. So what I think you're seeing is devices that are coming closer and closer to tasks. So the PC is a general purpose device and I think we have saturated, for most folks, you know, my mom using email, she doesn't need much more performance, right? But what we do need is distributed performance, distributed computing that is much closer to the actual tasks that we're trying to perform and can help us make decisions in real time.
You aren't gonna carry your desktop computer with you all around but you have your phone. So I think, I don't think it's going to result in a slower innovation path or less value, it's just gonna be value to different companies. Companies who are able, then, to create innovation for particular sets of users, for particular kinds of tasks and deliver that innovation to folks who need it. So, a strange example, I saw a scale that is connected via Wi-Fi to the internet. >>Luiz Barroso: I have that. >>Jon Koomey: You have that scale. [Laughter] >>Jon Koomey: Okay. So this is a very interesting thing because you, how much do you think a scale costs? Just a normal scale. >>female #1: 20 bucks. >>Jon Koomey: 20 bucks, 30 bucks. How much do you think they sell these scales for? >>Luiz Barroso: $100. >>Jon Koomey: $100. Okay, so, here is an example where, this is pretty much the same device, it has a very, you know, some chips, a few other little things that maybe cost 5 or 10 bucks but they're delivering customer value that makes you wanna pay 100 bucks for it. So you can't lie about your weight anymore. But you have, you know, automatically it comes up to the Cloud and it, you know, creates a nice graph for you of what's happening over time. So I think that's an example where the customer value that you're creating is because you're really close to their need for information and their decision and that's a different thing from creating a general purpose computing device. I think we're gonna really focus much more on that need for specific, focus on specific tasks. >>male #4: Yes, hi, I had a quick question for you. Could you comment on the trends in the chemical batteries? >>Jon Koomey: Chemical? >>male #4: Batteries, cause I can see that there's this like, on a log scale you have this straight line for computing power increasing over the decades but what's been the situation with the batteries innovation in chemical batteries? 
>>Jon Koomey: OK, so the question is on the rate of innovation in chemical batteries, and obviously the changes are much slower. That's gonna be part of the case study work that I'm doing, to show those trends over time for batteries as well. But, much, much slower. And because the computing trends improved so fast, we're now getting into the regime where current battery chemistries can allow us to create really small devices. So batteries are getting a little bit better, and as people focus more on, for example, lithium-ion batteries and other similar technologies for cars, we're starting to see real R&D going into this in a way that, you know, 40 years ago there just wasn't at the same scale. So I think there will be improvements there, but because the computing trends improved so fast, that's where most of the change is gonna happen, and it will start to bring more and more applications into the realm of feasibility for certain battery chemistries. You're also starting to see people playing around with flow batteries and fuel cells and other things like that, and so, I visited MIT about a month and a half ago and one of their scientists there showed me a device they were gonna use for a space mission. The only way they could get it to operate for 5 years was a little fuel cell. No battery technology was gonna do it, and so I think that there's certainly a trend, you know, there are ways to improve battery chemistry, but really the action is on the electronics side. >>male #5: So, I think you mentioned this a little bit already, but I was wondering if you could say more about the trends that we've seen in aggregate energy consumption, the use of electronics and computers and things like that as a relative percentage of that, and also the trade-offs in energy consumption we see as more and more things go virtual and go online and go to the Cloud.
Do you have data or research on either of those things? Like, I'm thinking, for example, in my home, you know, I've got a lot more gadgets now than I used to, but the total power I use in my house hasn't changed that much over time, and the bulk of it goes to lighting and appliances and things like that, which are completely different technologies. >>Jon Koomey: So you have a bunch of things going on there. The question relates to what the aggregate trends in total electricity use have been and how that relates to our increased use of computing technology. >>male #5: Right. >>Jon Koomey: So, my best guess now is we're talking single digit percent in the US for total direct electricity used by IT equipment; let's call it five or six percent. You've got that five or six percent of direct electricity used for computing, a couple percent of that is data centers, but we also have the other 95 percent, which is being affected, in many cases, by the IT equipment. And you also have the rest of fuel consumption, non-electricity consumption, that IT could potentially affect as well. So I think it's important to track the total electricity used by IT, something I've done for a long time, but I think it's also important to try to understand the effect of that IT on the rest, the other 95 percent. And that, to me, is the more interesting question. So one example that I worked on recently relates to the trade-off between buying music on a CD versus downloading it, and we found that even comparing the best case for sending a CD to the worst case for downloading, downloading was still a 40 percent savings in energy, and in emissions it's likely to be much more than that.
So I think it's important to do more and more case studies like that to understand these systemic effects, because direct electricity use, while important to track, is not something we should be getting very worried about; at the end of the day it's a pretty good use for the electricity, in my view. And on the data center side, let's just call it 90 percent of data center floor space is in in-house data centers at companies whose main business is not computing. It's, in general, very, very inefficient: utilization for servers of five to 15 percent, and a number of comatose servers, 20, 25, 30 percent of the servers, not doing anything but still using electricity. There are huge inefficiencies there, and as there's a shift towards Cloud implementations, there will actually be an increase in the efficiency of producing those services and a reduction in energy use. So there's a huge potential for improvements in data centers simply by shifting towards a more efficient way of delivering the services. There are a lot of moving parts in this, but you're asking the right question; I just think the focus really needs to be on the other 95 percent and what the IT can be doing for that. >>male #6: When you mentioned five to six percent usage by IT, does that count the cooling of those data centers, or is it- >>Jon Koomey: Yes, so the two percent- >>male #6: so it's the total. >>Jon Koomey: The two percent for data centers includes everything related to the infrastructure. >>male #6: Okay, thank you. >>Jon Koomey: The cooling, fans, pumps, everything like that.
>>male #7: I had a similar question about your data set that spans many decades: did you try to include in some way the cooling or other overhead outside of, you know, the ENIACs and the- >>Jon Koomey: So that's a good question. Actually, for this particular set of trends I did not include cooling, cause I wanted to get at the characteristics of the IT equipment itself. >>male #7: I wonder if the effective efficiency doubling time for some periods of history has actually been shorter. >>Jon Koomey: It's a good question. I think there have been changes in the way services have been delivered that would be hard to track, but I agree that it's something that should be looked at. >>male #8: Have you looked at delivered computation as opposed to computational efficiency? In particular, what I'm thinking of is software efficiency: 20 years ago I had a window manager that was about as responsive as the one today, but today's takes way more instructions. >>Jon Koomey: Yeah, so the question relates to looking at software efficiency as opposed to hardware, and this to me is a fascinating area, because in the past we've mainly thought of performance as a question of hardware. We said Intel or AMD will give us more performance; we just don't have to worry about our code. Well, I think that's changing, particularly with the need for parallelism: you know, you have a lot of cores, you have to change your software, and that's very important. It turns out that there are big potential gains there. There are some folks at LBL that have looked at what's called co-design of software and hardware.
This is a term that comes from the embedded systems space, but they're starting to apply it in the supercomputer space, because when they came to Steve Chu, when he was the director of the lab, and said our new supercomputer is gonna use 200 megawatts, Steve Chu said, "No, go back and redesign it," and they started to look at ways to co-design the software and hardware. They were finding, you know, orders of magnitude improvement in efficiency just because they optimized the software and the hardware together to attack certain kinds of problems. We're gonna see a lot more of that, and it's really true, I think everyone here has a story about, you know, doing the same basic thing using a whole lot more cycles. And part of what I think goes on is that the costs of inefficient code are not being brought to bear on the people who are designing it, and as soon as you can get to the place where someone selling data center services is able to give that information to the people using the services, then you're gonna start to see much more optimization. We're not there yet, but you can imagine it being done with technology that exists today, and that's where I think we need to get to, so that people actually understand the true total cost of a computing cycle when they're using it; then they'll change their behavior. Thanks to all of you. [Applause]
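The efficiency-doubling trend that runs through this Q&A is the one now known as Koomey's law. As a rough illustration (not part of the talk), the arithmetic of a fixed doubling time can be sketched in a few lines of Python; the 1.57-year figure is the published long-run estimate from Koomey et al. (2011) for 1946–2009, and the function name is ours:

```python
def efficiency_gain(years, doubling_time=1.57):
    """Multiplicative improvement in computations per kWh after `years`,
    assuming a fixed doubling time. 1.57 years is the long-run estimate
    from Koomey et al. (2011); the rate has slowed since roughly 2000."""
    return 2 ** (years / doubling_time)

# At the historical rate, a decade buys roughly an 80x efficiency gain,
# which is why fixed-battery devices keep becoming more capable even
# though battery chemistry improves slowly.
print(f"{efficiency_gain(10):.0f}x")
```

This compounding is the point Koomey makes about batteries above: even with slow chemistry improvements, exponentially improving electronics keep pulling new applications into the feasible range.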

Works

  • Koomey, Jonathan. 2017. Turning Numbers into Knowledge: Mastering the Art of Problem Solving. 3rd ed. El Dorado Hills, CA: Analytics Press.
  • Koomey, Jonathan. 2008. Worldwide electricity used in data centers. Environmental Research Letters. vol. 3, no. 034008. September 23.
  • Koomey, Jonathan G., Stephen Berard, Marla Sanchez, and Henry Wong. 2011. Implications of Historical Trends in the Electrical Efficiency of Computing. IEEE Annals of the History of Computing. vol. 33, no. 3. July–September. pp. 46–54.
  • Koomey, Jonathan G. 2012. Cold Cash, Cool Climate: Science-Based Advice for Ecological Entrepreneurs. El Dorado Hills, CA: Analytics Press.
  • Koomey, Jonathan, Zachary Schmidt, Holmes Hummel, and John Weyant. 2019. "Inside the Black Box: Understanding Key Drivers of Global Emission Scenarios." Environmental Modeling and Software. vol. 111, no. 1. January. pp. 268–281.

References

  1. ^ "EUF Staff - Jonathan Koomey". Archived from the original on 2012-01-18. Retrieved 2012-03-12.

This page was last edited on 23 August 2022, at 10:44
Basis of this page is in Wikipedia. Text is available under the CC BY-SA 3.0 Unported License. Non-text media are available under their specified licenses.