
William Lorimer (scholar)

From Wikipedia, the free encyclopedia

William Laughton Lorimer, FBA (1885–1967) was a Scottish scholar. Born at Strathmartine on the outskirts of Dundee, he was educated at the High School of Dundee, Fettes College, and Trinity College, Oxford. He is best known for the translation of the New Testament into Scots.

Lorimer spent his professional life as a scholar of Ancient Greek at various universities, ending his career as Professor of Greek at the University of St Andrews. However, he also had a lifelong interest in the Scots language and, besides the translation, was a longtime contributor to the Scottish National Dictionary. For the last ten years of his life he worked on translating the New Testament from the original Greek sources into Scots. Although he did not finish the final revision of his translation, the work was completed by his son Robin and published posthumously in 1983.

YouTube Encyclopedic

  • Mind/Brain Lecture March 2015
  • Sustainable Living, Lec 1, Environment 185A, UCLA
  • The Miller Collection, Scott Trepel, Maynard Sundman Lecture 2006

Transcription

Hello. I would like to welcome you to the nineteenth annual Swartz Mind-Brain Lecture. I first, of course, want to thank the Swartz Foundation, in particular Jerry Swartz. In fact, this is the nineteenth year of the brain lecture series. He established the Swartz Foundation in 1994 and set up this lectureship in collaboration with Stony Brook in 1997, and this is really a collaboration between the Department of Neurobiology and Behavior, of which I'm chair, which is why I'm standing here. The idea is to acquaint the community with current research in neuroscience, and really to have a mechanism for sharing the excitement of the latest discoveries in the field and our ongoing efforts to understand how all of our brains work. As the nineteenth lecturer, we are lucky to welcome Dr. William Bialek, who is the John Archibald Wheeler Battelle Professor of Physics at Princeton University. I'm just gonna call him Bill, I hope you're okay with that. You're okay with that, right? Bill is really -- he's an explorer. He's a pioneer, really. He brings a really unique intellectual sophistication and yet a totally welcoming approach to studies that are right at the interface of physics and biology. He's one of a very small but now growing group of physicists who are primarily interested in the phenomena of life. He's definitely a practitioner of Galileo's declaration that the Book of Nature is written in the language of mathematics. But as you will see, he presents this with an open mind and open arms that allow all of us, whether we're mathematically inclined or not, to really appreciate the richness that's derived from applying physicsy ways of thinking to biological problems in general and to the workings of our brains in particular. So, a few facts. Bill graduated with a BA in 1979, Phi Beta Kappa, from the University of California at Berkeley in biophysics, and earned a PhD in 1983 in the same area. After a stunning only three years of postdoctoral fellowship, he became an assistant professor at the University of California at Berkeley. He then joined the NEC Research Institute, where he was a senior research scientist and then a fellow, working there until 2001, when Princeton wooed him away into being a professor of physics. He was then given this named professorship -- the Wheeler Battelle professorship. And he is the director of the programs in applied and computational mathematics, neuroscience, and biophysics at Princeton. He has received many prizes. I'm sure he'd be delighted to receive still more. In particular, in 2013 he got the Swartz Prize for Theoretical and Computational Neuroscience, which is given each year by the largest society of neuroscientists, about 50,000 people. He is a member of the National Academy of Sciences, to which he was elected in 2012, has been a Fellow of the American Physical Society since 1996, and has received a number of other prizes. I think particularly impressive is the percentage of his CV that reflects his dedication to educating undergraduates, and students even before they're undergraduates, in quantitative biological approaches. He's got a Phi Beta Kappa award for excellence in undergraduate teaching and a President's Award for Distinguished Teaching, also at Princeton. He's extremely eclectic. His papers range from subjects like the statistical mechanics of the Supreme Court -- really -- to entropic forces in flocks of birds, to efficiency and ambiguity in an adaptive neural code. He's written two books, both of which are really fantastic. One is called Spikes, and it's really fantastic.
It's basically a Bible for computational neuroscientists. And the most recent, from 2012, is a textbook called Biophysics: Searching for Principles, and I have to say that this is the first textbook I've ever read that's like a good read -- I mean, actually amusing in spots. You have to read it to believe it. I recommend it. So, given his talents for bridging physics and biology, I'm going to get out of the way. I'm delighted to present our 19th annual Swartz Mind-Brain lecturer, Bill Bialek, and his title is Searching for Simplicity: A Physicist's Quest for Theories of Mind and Brain. Thanks, Lorna, for that very gracious introduction. It's a pleasure to be here. I think the first time I set foot on the Stony Brook campus, our son was 5 months old. We were visiting a very good friend in the Physics department, and our son will be 30 this year, so... and he's getting his PhD in the philosophy of science. I don't know about you, but one of my favorite sections of the newspaper is the obituaries. That's okay. Now, I haven't reached the age where there's any sort of a personal interest in it, but, you know. The reason I like it so much is that it's the only section of the newspaper where, in order to explain why what happened yesterday is important, they have to tell you the history that led up to it. The fact that somebody died yesterday is by itself not news. It's who they were that matters. And I think that sometimes when we try to speak to the public about science, we're so caught up in the news of the moment, and in trying to bring you to where we as the scientific community are right now, that we forget that a lot of what we bring, and a lot of what we are doing, is grounded in this incredibly rich culture. And as a physicist who's interested in biological problems, I'm very aware of this, because the cultures of physics and biology are themselves very different from one another. And so in fact, even within the scientific community, I think we often have communication problems that result from this cultural gap, which C. P. Snow famously talked about before I was born. It hasn't gotten better. The gap that he was talking about was, in some sense, between the two parts of the university: the sciences at one end and the arts and literature at the other. That gap can now be seen opening within the sciences, and of course that gap exists between the academy and the generally educated public. Especially between science and the generally educated public. So what I want to do today is a combination of two things, and I don't know whether I'll succeed in doing both. I want, on one hand, of course, to tell you some things about the brain that I think you really should know, some of which my colleagues and I had something to do with figuring out. Although mostly not. That's another thing we tend to do wrong: we tend to overemphasize the parts that we were involved in. But the other thing I want to do is give you a flavor of the perspective that physicists bring to these problems. And so that means I'm going to do my job as a physics professor for a little while. Don't worry, there's no quiz at the end. As Lorna already mentioned, physicists are the intellectual heirs of Galileo. Galileo famously made this remark, which is paraphrased in English as, "The book of Nature is written in the language of mathematics." Those of you who speak Italian, or who are familiar enough with some Romance language, will realize that it was much more elegant in the original.
I mean, there's a little bit that you can translate literally. But this phrase misses the image of the grand book that lies continually open in front of our eyes, and so on, and also misses the part where, if you don't understand the language, you're sentenced to wander in an obscure labyrinth. Now... physicists really take this seriously. We actually believe that we can describe the world around us in mathematical terms. And if you want to continue the analogy, or the metaphor, that Galileo gave us, it's not only true that the Book of Nature is written in the language of mathematics -- we also believe that the book isn't that long, and that there's one book. There are not different books for different parts of nature. And that's an extraordinary belief, which you might reasonably question. In fact, you may know that biology traditionally has not been a terribly mathematical enterprise. And one very distinguished 20th-century biologist, in describing the grand history of his subject, remarked quite directly that the only reason Galileo said this was because he didn't know any biology. And they say that we physicists have hubris. Now, I should say that this belief in the power of mathematics to describe the real world around us is something that is puzzling not only when physics confronts biology; it's puzzling in other contexts as well. And let me emphasize one of the features of believing in a mathematical description of the world, or of searching for a mathematical description of the world: I think you should not believe that Galileo was speaking as if he had seen that book. Rather, this was meant, if you will, as a call to arms, that we should be searching for this mathematical description. We should be trying to learn the language of this metaphorical Book of Nature. And part of doing that is sometimes to take the mathematics seriously, to the point where, when we see structures in the mathematics that don't immediately correspond to what we see in the world, we actually make predictions that there should be things that correspond to those structures. Maybe the most famous recent example is the discovery of the Higgs particle in Geneva a couple of years ago. Now, this is, as I say, shocking not only to people who are NOT mathematically educated; it's sometimes shocking to mathematicians. This is David Mumford, now professor emeritus at Brown and surely one of the 20th century's great mathematicians. I found myself, along with a few other physicists, at a conference with David, and he was defending a philosophical point of view that many mathematicians apparently have, about the world of Platonic ideals. So this goes back, back to the Greeks obviously, this notion that there is some perfect world in which the theorems live. And what happens when we make mathematical discoveries? When we make discoveries about nature, presumably these things were out there in the world and we discovered them. It's not so obvious what's happening when we make mathematical discoveries, because the theorems were true before we proved them -- but WHERE were they true if no one had actually thought that thought before? This is partially what this philosophical construction of Plato's is meant to solve. That's a view which, I must say, as a physicist I find very difficult to appreciate, and my fellow physicists and I were giving David a rather hard time about this. Really a rather hard time...
Um, to be honest, it seemed to us rather silly to imagine that there was this world in which these beautiful theorems just lived out there, and every once in a while you got a glimpse, and it was all very mystical. And after riding him for quite some time, he pounded his fist on the table and said this: "Well, you physicists believe in something even more ridiculous. You believe that this beautiful world describes the world out there." And, um, we had to admit that he had us. We do. We believe that the world, in its richness and complexity, is nonetheless susceptible to mathematical description. And relatively simple mathematical description at that. So why on earth would we think this? Well, there's a lot of evidence out there. This is but one example. It's kind of a spectacular one. It has to do with the fact that the equations of motion for fluids are the same no matter what the fluid is, and that motions on different scales can be related to one another by simple transformations. This is a vortex that you might generate in your kitchen sink -- this one is in water -- by just dropping something and letting it spin. And this is a tornado. And this is a vortex in a rather larger pool of water. Perhaps the ocean. And this is a vortex in the atmosphere, called a storm. And it's not a coincidence that these things look alike, because they are described by exactly the same equations, and we know what they are. So the fact that you can do an experiment in your kitchen sink, where the scale is half a meter, and you can make observations on the atmosphere, where the scale is a few hundred kilometers -- perhaps a million times larger -- and yet these look the same, is prima facie evidence that there is some underlying mathematical description that is universal across this million-fold range of scales. It's really quite astonishing. It would be astonishing even if we didn't know what the mathematical description was. Just being confronted by this comparison is striking, but it's even better, because we actually know the equations. Now, there's a less spectacular but perhaps even deeper example, which is very old but only reached its fruition in the 1970s. This is a plot of something that you all know about, which is that if you heat up water on the stove, there is steam that sits above the water. So that means that at that temperature and pressure, the liquid state and the gaseous state coexist. You can have some gas and you can have some liquid, and they are in equilibrium with each other. The liquid is more dense, so if you plot the density over here, there's a high-density part, which is the liquid, and a low-density part, which is the gas, and as you increase the temperature the densities change, and in fact they approach each other. At some critical temperature, the densities of the gas and liquid will be the same and you can't tell them apart anymore. Now, what's interesting about this plot, which is already from 1945, is the demonstration that you can do this not just with water and steam on your cooktop, but with neon, argon, krypton, xenon, nitrogen, oxygen, carbon monoxide -- not recommended at home -- and methane, and if you use the right units, all of these data fall on the same curve. And so that tells you that these very different materials -- which, you know, for many practical purposes, you would not want to confuse with each other, certainly not oxygen and carbon monoxide -- in fact behave in exactly the same way, as with the water in your sink and the atmosphere.
And actually, what happened between 1945 and the 1970s is that people learned how to blow up this region experimentally, and they discovered that the pattern here was even simpler and more beautiful than this. And in fact, it's not only liquids and gases that can be placed on the same curve, but other kinds of transitions in nature. So if you think about the liquid crystals that you used to have in watches, if you're old enough, they make a transition between being ordered, where all the little molecules line up, and being disordered. If you take a magnet and heat it, eventually it's not a magnet anymore. And there are many, many examples like this. And all of them exhibit this kind of universal behavior. And we have a quantitative theory of this. These are two icons of the development of these ideas. We have a quantitative theory that describes what happens in the neighborhood of these transitions between different phases of matter, and that description in fact has no adjustable parameters in it. So if I want to know the behavior of this curve, it can be calculated without knowing anything about what these materials are made out of. I don't need to know their molecular structure or anything else. I can write down, with pure numbers, what's happening here, which is as close to a demonstration of Mumford's complaint as I can imagine. We have this theory that lives entirely in the world of pure mathematics, and out pop numbers that can be used to describe an experiment you can do on your stovetop. A very real-world situation. Now, how am I going to get from this physicist's view of a beautiful world described by pure mathematics to something as rich and complicated as the brain, let alone the mind? Well, the turning point comes in 1952, when these two gentlemen, Hodgkin and Huxley... By the way, to close the loop with C. P. Snow's remark, you should know that this Mr. Huxley is the half-brother of Aldous Huxley. They decided that they would like to understand how it is that the nerve cells in your brain communicate with each other. And what we know is that they communicate with each other electrically, so if you put a small wire next to a nerve cell in your brain, you can actually pick up the electrical signals as they spread through the surrounding water. And the most dramatic examples of communication are the ones that happen over long distances. So when you tap here, there are receptors in your fingertip that feel the pressure, and there are individual cells -- you usually have the sense that the cells that make up your brain and the cells that make up your body are things that you need to look at under a microscope in order to see, and that is usually true, but there are some special examples where it's not true. So one end of the receptor cell is in your fingertip, and the other end is in your spinal cord, roughly one meter away. And so signals have to propagate over this very long distance, and the mechanism that's used for propagating the signals over those very long distances is the same mechanism that's used over much shorter distances for neurons inside your brain to talk to each other. But of course it's easier to study the one that happens over a big distance.
And it would be even easier if, instead of doing it in a creature like us, in which there are thousands of little, tiny-diameter nerve fibers that lead from your fingertips to your spinal cord, you found an animal in which there was one big nerve fiber, one big nerve cell. And although it doesn't appear in the title of this paper, in the previous papers you learn that that's a squid. So when a squid escapes -- many of you perhaps have seen this, right? They shoot out a jet of water, and the trigger for that comes from a signal that's propagated along a single nerve cell that's roughly the length of the squid's body, but has the diameter of a small straw. So you can see it, you can pick it up, and you can play with it, and what Hodgkin and Huxley did was to analyze the dynamics of current and voltage moving in and out of the cell, in response to voltage differences between the inside and the outside of the cell, much as you might analyze an electrical circuit that's sitting in one of your home appliances. And in one of the important counterexamples to the notion that biology proceeds without much interaction with mathematics, there is a beautiful series of papers culminating in a quantitative description, which is contained within the four Hodgkin-Huxley equations that they wrote down. Now, we could talk about exactly what's going on in these equations and what physical processes inside the cell they're describing, but that's not really the point of what I want to emphasize here. What I want to emphasize is that in trying to arrive at that description -- biology is complicated. And so the equations they wrote down don't contain just pure numbers. They actually contain constants. You know, when I say pure numbers, it would mean you're allowed to write down pi, you're allowed to write down e, and the other things should be integers, you know: 1, 2, 3, 5. That's okay, right? But you shouldn't have arbitrary numbers appearing. And you'll notice that when you look at these equations, there are all sorts of arbitrary numbers. There's a one one-hundredth, which actually has units, because this is a rate, so that's one one-hundredth per millisecond, and there's a 10, but that's added to a voltage, so that must be 10 millivolts. And you go through the paper and you discover that their description of what is one of the simplest nerve cells that anyone has ever studied has 20 parameters in it. So in order to describe this simple system, you need what is in this sense a very complicated model. Now, they were at great pains to try and estimate these parameters from independent experiments, and these experiments were done quite beautifully, where they essentially short-circuited all the flow of electricity along the length of the cell by passing a wire down the middle, so that they studied only the flow in and out of the cell. And then, both in practice and mathematically, you take the wire out and ask what happens. And the answer is that the equations predict that if you put a little current in at one end of the cell, there will be a pulse of voltage that propagates from one end to the other. And that pulse is called the action potential, and that is what actually happens: if you inject current into the receptor cells, for example by pressing on something, then you get an action potential that propagates along the length. And not only is it a pulse, it's a pulse that has a shape, and it propagates at a very particular speed.
And having measured all these 20 parameters, Hodgkin and Huxley were able to calculate that speed, and it comes out right. And this was the triumph of their work: to show that, having dissected the current flow in and out of the cell, they really could describe how it is that neurons communicate over long distances by sending these pulses, because they could predict the shape and speed of the pulse. But it takes 20 parameters. So, 5 years later -- and I should say that, you know, the speed at which the nerve impulse propagates is something that had been measured in the late 19th century, so this was a puzzle of many years, right? 50-plus years. Five years later, we have another milestone paper, now back in the core of theoretical physics. So, many of you know that if you take an ordinary piece of metal, like aluminum, and you cool it down to very, very low temperatures, something remarkable happens, which is that all the resistance to current flow disappears, and the material becomes what's called a superconductor. Among the many remarkable things: if you take a loop of superconducting wire and start a current flowing in it, it will flow forever. Because there's no resistance. So this was another puzzle of long standing. It had been discovered in the 1910s, and in this remarkable paper, John Bardeen, Leon Cooper, and Bob Schrieffer solved this problem: they developed a theory which explains all this. This is the John Bardeen who, together with Shockley and Brattain, had invented the transistor. Anyway. And there's this remarkable paragraph at the beginning of the discussion section at the end of the paper, which is a spectacular combination of modesty and triumphalism, and I didn't really know Bardeen, but I do know both Cooper and Schrieffer, and I can tell you which one was responsible for the modesty and which one was responsible for the triumphalism. But the crucial sentence is here: "Only the critical temperature involves the superconductive phase. The other two parameters are determined for the normal phase." So what does this mean? It means that the entire theory only has 3 numbers you need to know. Two of them you can measure at high temperatures -- that is to say, normal temperatures, not ultra-low temperatures -- where the thing just behaves as a normal wire, and the other one is the temperature at which the transition happens, so you can just take that out and use it as your scale of temperature. So in fact, this theory really doesn't have any parameters at all. So we have this confrontation between two milestone pieces of work: one advancing our understanding of the inanimate world, one a first foray into a mathematical description of something which is at the heart of every operation our brain does, and you see this incredible difference, where in one case we essentially don't need any parameters at all -- we have a theory that is, again, almost pure math -- and in the other case we have this tremendously complicated thing. So, for those of you who keep track of these things, all of these photographs are taken from the Nobel website. Just so you know that my claim that these are milestones is not... it's shared by others. So. What are you gonna do? Obviously, as a theoretical physicist, I'm looking for the kind of description that Bardeen, Cooper, and Schrieffer gave us. I'm looking for a description in which the mathematics does all the work, and I don't need to look up 20 different numbers in order to figure out what's going to happen.
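To see concretely what "20 parameters" means, here is a minimal sketch of the four Hodgkin-Huxley equations in Python. The rate functions and constants are the published 1952 squid-axon fits as usually quoted in textbooks today; the stimulus current and the crude Euler integrator are illustrative choices, not anything stated in the lecture.

    import numpy as np

    # Standard Hodgkin-Huxley rate functions (the 1952 squid-axon fits, in the
    # usual textbook sign convention). Note the arbitrary constants the lecture
    # points to: the 0.01 per millisecond, the 10 millivolts, and so on.
    def alpha_n(V): return 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
    def beta_n(V):  return 0.125 * np.exp(-V / 80)
    def alpha_m(V): return 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
    def beta_m(V):  return 4.0 * np.exp(-V / 18)
    def alpha_h(V): return 0.07 * np.exp(-V / 20)
    def beta_h(V):  return 1.0 / (np.exp((30 - V) / 10) + 1)

    # Maximal conductances (mS/cm^2), reversal potentials (mV, relative to rest),
    # and membrane capacitance (uF/cm^2): still more fitted constants.
    g_Na, g_K, g_L = 120.0, 36.0, 0.3
    E_Na, E_K, E_L = 115.0, -12.0, 10.6
    C_m = 1.0

    def simulate(I_ext=10.0, T=50.0, dt=0.01):
        """Euler-integrate the four HH equations for T ms; return the voltage trace."""
        V, m, h, n = 0.0, 0.05, 0.6, 0.32          # approximate resting state
        trace = []
        for _ in range(int(T / dt)):
            I_Na = g_Na * m**3 * h * (V - E_Na)    # sodium current
            I_K  = g_K * n**4 * (V - E_K)          # potassium current
            I_L  = g_L * (V - E_L)                 # leak current
            V += dt * (I_ext - I_Na - I_K - I_L) / C_m
            m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
            h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
            n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
            trace.append(V)
        return trace

    V = simulate()
    print(f"peak of the action potential: {max(V):.0f} mV above rest")

Run it and action potentials come out, pulse shape and all -- but only after roughly twenty fitted constants, spread across the rate functions, conductances, and reversal potentials, have been supplied.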
On the other hand, anything that I'm interested in describing in biology seems, on the face of it, vastly more complicated than any of the things that we've succeeded in describing in the core of physics. So what's your point of view on this? Well, there's one unspeakable possibility. And then for the others -- I think, you know, this is something that lots of us are wrestling with -- there are two different points of view. One is that somehow, although in trying to reach a description of some particular system it seems that you need to know lots of parameters, it doesn't really matter. Somehow nature has found a way of putting together the pieces of biological systems so that all of these parameters end up not being so important. It seems like they're important when you start, but somehow, a little bit magically, the really important things -- how the system actually works in ways that matter for the survival of the organism -- are going to emerge from all of the interactions among these components in a way that is, as people say, robust to variations in the parameters. And that is a really interesting idea, but it's not the one that I'm going to talk about today. The other possibility is that the parameters that nature has chosen are special. And "chosen" is a good word here, because all the systems that you see today are the byproduct of millions and millions of years of evolution. And furthermore, when you try to describe the behavior of some small piece of a living organism, the things that look to you like parameters are sometimes things that are under the control of the organism itself. So cells can decide how many copies of every protein they want to make. From one point of view, the number of copies of the protein is a parameter. But from another point of view, it's something that the organism itself can choose to adjust. And so on. There are many more examples like this. So maybe there's something about the process of evolution which has driven organisms -- the organisms that have survived and prospered -- to a very special place in their parameter space. And if that's true, and we can identify what that place is, then I don't need to measure all 20 parameters. I just need to know what it is that defines that special place. And if I can say that in mathematical terms, then I would be able to find that point without actually going out and measuring all those parameters. So that's the spirit, and I want to walk you through examples of this, and we're going to work our way from the outside of the mind and brain, deeper in. But before we start: at various times I'm going to say "we," and as Mark Twain would have it, that is neither because I'm an editor nor because I have worms, as far as I know, but rather because if I say that we have understood something, it probably was "we" and not "I." The ideas that I'm going to talk about are things that have occupied my colleagues and me for quite some time, and during that time there have been many collaborations, and a not inconsiderable number of students and post-docs who have contributed to my own understanding of these things, in addition to making it extraordinarily fun. And part of the pleasure of a life in science is to watch these young people grow up and become scholars and teachers and scientists in their own right. It is, as one physicist entitled his own autobiography, a privileged life. So with that, let's dig in.
So if you've ever looked down on the head of a fly, you may have noticed that there are lots of little lenses. Flies and other insects have compound eyes, and if you think that what one of their lenses does is like what the lens in your eye does, then you too might get confused. This is somewhat unusual, because, as most of you know, usually Gary Larson got his natural history right, or actively played with it. He did admit to the sin of putting hominids and dinosaurs in the same cartoon. But this one he just got wrong. In our eyes, there's the outside of your eye, the cornea, and then there's one lens, and then if you sort of imagine sitting outside somewhere and tracing a ray through the lens, you'll come back and find the receptor cells on the retina in the back of the eye. So another way of thinking about this is that this cell in the back of your eye, which is the one that actually detects the light -- as with the pixels in your camera, if you will -- is looking out through the lens in this direction. There's another cell which is looking out through the lens in that direction. This is fantastic. The optics of our eyes is very, very good, especially on axis. We have incredibly high spatial resolution. We're able to discern fine details. I can more or less make out what's written on the shirt of the person sitting in the back. However, it has a defect, which is that there's a lot of empty space here, between the receptor cells, or the retina, and your lens. We know that that empty space actually occasionally clouds up and is a problem, but if you were a fly, it would be an even more catastrophic problem, because this space -- if you'll pardon the image -- this space in our eye is approximately the size of a fly. So flies just can't do it this way, right? They're not big enough. So what they do instead is to build the eye the other way around, and have every one of the cells on their retina have its own private lens that looks out at the world. And so this lens looks in this direction, and this lens looks in that direction, and as a result, you don't get multiple images. You get multiple pixels, just as you do in a digital camera. Now, the problem, however, is that these lenses are very small. So we'd like to understand why nature has chosen the arrangement of lenses that she has, and at first you think that small lenses are a good idea. Now, the first approximation of the head is a sphere. Some of you may know the joke about the spherical horse and the physics student. Um... surprisingly few of you, it seems. Yeah... somebody sends their son off to MIT to study physics -- someone of modest upbringing themselves -- and after a few years wonders if perhaps their son has learned enough to be of some use. And he suggests that the problem of betting on horses would be a good application of all the physics that he's learned. And the son dutifully goes off to think about this, and months pass, and the father asks how it's going, and the son responds, "Well, I've worked out the case of the spherical horse." Which, uh... goes back to the physicist's search for descriptions of the world in terms of pure and ideal mathematical things. So, um, now, heads are more nearly spherical than horses, so this isn't so bad.
If you imagine that you have these little lenses, then, roughly speaking, in the fly's eye there's one receptor cell on the retina behind each lens, so these really are like the pixels of your digital camera. And you see that if you try to pack in more lenses by making them smaller, then you get more pixels, and you all know that that's good, because that's what you pay for when you buy a digital camera. But there's a problem. And the problem is that if you make the lenses too small, then your intuition that when you look out through the lens you only see the light that comes in this narrow cone -- that intuition breaks down, because light is a wave. And waves undergo something called diffraction. And you can demonstrate that, again, in your kitchen sink, by filling the sink up with water and putting a little barrier here. Stick your hand in the sink back here somewhere, shake it up and down, and launch a wave. And you'll notice that the wave is going fairly straight. You can see that because the wave peaks are nicely lined up. But then as it passes through this very small hole -- a hole which is comparable in size to the wavelength -- it diffracts and starts to spread out. And the same thing is going to happen in the fly's eye if he makes these lenses too small. Then the light, instead of only coming in along this axis, will spread out beyond the region subtended by the lens itself. And effectively this lens will see things that are in a much broader cone than you'd expect just from the geometry. So if you make the pixels too big, you lose details. If you make the pixels too small, you get diffraction blur. The first effect is just geometry, but the second has to do with the wave nature of light, and so the point at which these cross over is determined not only by the size of the head but also by the wavelength of the light that you're trying to see. Now, this argument, quite remarkably -- or a version of this argument -- first appeared in the 1890s, not long after the phenomenon of diffraction itself had first been understood. The argument appears again in Feynman's famous lectures on physics from the 1960s. What Feynman didn't know was that it had actually been checked ten years before by this fellow, Horace Barlow. This drawing of him is from rather later than 1952. He's still with us, doing all sorts of remarkable things about vision in his 90s. He was a little younger then. And what Horace did was not only to go through this argument, but then to realize that he could go into the drawers of the Museum of Comparative Zoology in Cambridge and find lots of insects that have different-sized eyes or different-sized heads. And if this argument is correct -- that what they've done is to try and squeeze as much angular resolution as the laws of physics allow -- then if you plotted the size of the lenses in the eye versus the square root of the size of the head, you'd get a straight line. And actually, if you worked harder, you'd get the slope right too. And that's what Horace saw. So this is beautiful, because it's a plot which is faithful to the great diversity of nature. Every point on this plot is a different kind of insect. And yet the prediction for what you should see comes from these very fundamental physical principles. And these are tied together by the hypothesis that what nature has done is to push for eyes to be as good as they can be, given the limitations of the laws of physics.
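Barlow's tradeoff fits in a few lines. A rough sketch, assuming the two blur terms simply add, with made-up but plausible numbers for the wavelength and the head size; only the scaling d* = sqrt(lambda * R) matters, and it is what makes lens size track the square root of head size:

    import numpy as np

    # Geometric pixel spacing ~ d/R shrinks as lenses shrink, while diffraction
    # blur ~ lam/d grows; the optimum sits near d* = sqrt(lam * R).
    lam = 0.5e-6                 # wavelength of visible light, ~500 nm
    R = 1.0e-3                   # head/eye radius, ~1 mm (illustrative value)

    d = np.linspace(5e-6, 100e-6, 2000)    # candidate lens diameters (m)
    blur = d / R + lam / d                 # total angular blur (radians)
    d_star = d[np.argmin(blur)]

    print(f"optimal lens diameter: {d_star * 1e6:.0f} microns")
    print(f"sqrt(lam * R):         {np.sqrt(lam * R) * 1e6:.0f} microns")

Both prints land around 22 microns, which is at least the right order of magnitude for the lenses on a fly's head.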
It didn't have to be true. It could be that there's not much pressure to see better, although that's hard to imagine. But you could imagine a world in which there wasn't much pressure to see better, and in that case there'd be no reason that all of the eyes should cluster along the line that's defined by this argument. In fact, this argument -- that you should try and get as much resolution as possible out of your camera -- is not correct if, for example, you're moving very, very fast, because then the images that you see are blurry anyway, because of your motion. And there's another point, which is that if it's very dark outside, then the images are going to be grainy, for those of you who are old-fashioned photographers. I don't think you say "grainy" about digital photographs, because there aren't any grains anymore. And so, not only do you predict that this should be true, you also predict that it should not be true if you're an insect that flies very fast or that flies when it's dark outside. And indeed, this is for big, slow insects that fly in the noonday sun. And deviations from this can actually be calculated by trying to think more carefully about what the design of the eye is really for. Let's now think not about insects, but about us. Suppose that you sit in a very dark room. Somebody tells you, "I'm going to flash a light, and your job is to tell me whether you saw it." Now, if you do this, you'll discover that, of course, if the light is very, very bright, you pretty much see it every time, and if the light is very, very dim, you never see it. And neither of those observations is very interesting. What's really interesting is that there's a region in between where the person keeps flashing the lights, and sometimes you see it and sometimes you don't. You think, "Okay, I know what's happening. I'm fading out, I'm not paying attention," and so you resolve to be more vigilant. But nonetheless, there's still a range where, for a certain brightness of the flash, you still have only a 50% chance of seeing it, and you have this sort of graded behavior in between. So why is that? Well, I told you that light is a wave and is subject to diffraction. But many of you know that one of the great discoveries of the early 20th century is quantum mechanics: light is not only a wave, but also a particle. And when it interacts with matter, it's the particle-like properties that are often more evident, and so if you try to build something that absorbs light, like your retina, then the absorption events happen one light quantum at a time. Light quanta are called photons. And the individual absorption events are random. And so that means that even if I deliver a flash of light which is of the same intensity at your cornea, the number of photons that are absorbed by your retina will vary from trial to trial. The curve here is on the assumption that what you're doing when you sit there is that if you manage to count up to 6 photons, you're willing to say, "Yes, I saw it." If not, not. And that describes the data. So this is the kind of remarkable idea. It says that the randomness of your behavior is not the result of some randomness deep inside your brain. It's the result of the physics of the light that's impinging on your retina. You'll notice that this was done quite some time ago already. The first suggestion that this might be true actually comes in 1911, which is about six years after Einstein first talked about photons and the quantum nature of light absorption.
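That threshold model is easy to write down. A minimal sketch, assuming photon absorptions are Poisson-distributed and that the observer says yes at six or more, the threshold quoted in the lecture:

    import math

    def p_see(mean_photons, threshold=6):
        """Probability of saying 'I saw it' if you answer yes whenever at least
        `threshold` photons are absorbed, with Poisson-distributed absorptions."""
        p_below = sum(math.exp(-mean_photons) * mean_photons**k / math.factorial(k)
                      for k in range(threshold))
        return 1.0 - p_below

    # The frequency-of-seeing curve is graded, not a sharp step, purely because
    # the absorbed photon count fluctuates from flash to flash:
    for mean in [1, 2, 4, 6, 9, 15]:
        print(f"mean absorbed = {mean:2d} photons  ->  P(see) = {p_see(mean):.2f}")

Note that when the mean equals the threshold, the probability of seeing is close to one half, which is exactly the graded 50% region described above.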
So this transfer of ideas, from thinking about the quantum physics of light to thinking about vision, was very fast conceptually, but it took thirty years for the first generation of experiments. Now, why this funny number 6? We'll get back to that in just a second. But if it's really true that what the brain is doing is counting up photons -- the way the experiment was set up, you were only asked "yes or no?" So maybe I can convince you not to say "yes" or "no," but to tell me how many photons you actually counted. So there's this amazing experiment by Barbara Sakitt, from about 30 years later, in which she asked people to do this. Basically, when they see a flash, they should spit out a number between 0 and 6. And it turns out that the numbers they spit out have exactly, or very accurately, the distribution you would expect from the randomness of photons being absorbed. The average number that they spit out is proportional to the intensity of the light. Although it doesn't go through 0 -- you should hold that thought for a moment. But the statistics are exactly the statistics of photon arrivals. It's quite amazing. In fact, all these experiments are done where the light falls not on one cell in your retina, but on a region that contains hundreds of cells. But if you're only counting 5 or 6 photons, then with hundreds of cells, any one cell never sees two. So it must be that every single cell can respond to one photon. And so this is a blow-up of one of those cells -- not from us, but from a salamander. The business end is out here, where the cell is packed with molecules of rhodopsin, which is the molecule that actually absorbs the light. Current flows across this membrane, and that produces a signal, which eventually finds its way to the end of the cell, where it's connected normally to another cell, and then, somewhere in the basement of the building, you find your way through several layers of circuitry in the retina, eventually to the cells that form the optic nerve and carry information to your brain. If you didn't pack the cell with billions of rhodopsins -- with on the order of a billion rhodopsin molecules -- the light would just pass through it. And it wouldn't be absorbed. So you have no choice but to put lots and lots of molecules there. Now, in order to understand how this works, you should appreciate that counting individual photons is the best that physics allows you to do. There's no sense in which you can count half a photon. So what do the retina, these cells in the retina, and the rest of your brain have to do in order to make this possible? Well, they have to get a lot of things right. So the first thing you might worry about is that with billions of rhodopsin molecules, maybe every once in a while one of those molecules does what it's supposed to do when it absorbs light, but does it at random, in the dark, because it's being bombarded by all the other molecules in the cell. And the answer is yes, it does that. And in fact, Barlow again had suggested that the reason you have to count up to 6 in those 1940s experiments is to be sure that you're seeing a flash of light from the outside, and not just, at random, a few extra of these random events. The first clean measurement, the first direct measurement of those random events was actually done by Gary Matthews, who's sitting over there,
and who, with his colleagues in the early 1980s, showed that, in fact, the rate at which these events occur in the cells of the retina is more or less what Barlow predicted. So it really is true that your ability to see in the dark is finally limited: you're counting every single photon, down to the limit that's set by these random events, which occur in the molecules at the very start of the process. Everything that happens after that is essentially perfect. And the only reason that works, right -- you have a billion molecules, and each one of them, if left to its own devices, would only make this random transition once per thousand years, which wouldn't be a problem; but with a billion of them, that corresponds to roughly once per minute, and now that's enough to limit your ability to see in the dark. It's one thing to appreciate that you can count single quanta of light. But then you realize that the way it works is that you have this cell with a billion molecules in it. And one of those molecules absorbs a quantum of light, a photon, and changes its structure. One out of a billion. And the cell responds to that and produces a change in the current flowing across its membrane which is large enough that it's eventually going to make a contribution to your perception. One molecule in a billion. So this raises the question of whether there are other examples in biology where the fundamental sensing event is so sensitive that it actually responds to single molecular events. And the answer is yes. Well, we don't have time to explore those. We'll come back to it at the end. I'm trying to give you different ways to think about how extraordinary this ability of the visual system really is. Now, there's another problem, which is that one of these cells that responds to a single quantum of light produces a signal, but all the other guys don't. And then you have to pass these signals through all the circuitry in the retina and be sure that they don't get lost. And it's worse than that, because the cells that aren't seeing photons are not completely quiet. The current that flows across their membranes is fluctuating a little bit. And so there's this background rumbling throughout the retina that's happening all the time. And these individual events somehow have to stand out against that background. And so this is really the question of how the nervous system, how the brain, manages to find meaningful signals in this rumbling, noisy background. And that's something that we encounter all the time on different scales, but these problems are a place where this is very accessible to experiment and theory. And finally, I think you all know that if you're trying to see something, it's much easier to see it if you know what to expect. And so, implicitly, when we say that the visual system can do this well at something like counting photons, that's on the hypothesis that your brain actually knows something about what to expect. If it didn't know at all what to expect, and it had to look in different places -- you know, different places in space, at different moments in time -- then you wouldn't really be able to do this well. So in order to demonstrate this performance, you actually have to sort of teach the subject in the experiment exactly what it is they're looking for. But remember, the signals that they get include all this rumbling in the background, and the arrivals of the photons themselves are random. So what's happening is very probabilistic.
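The dark-noise arithmetic a few sentences back is easy to check, taking the lecture's round numbers of a billion rhodopsin molecules per cell and one spontaneous event per molecule per thousand years:

    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    n_molecules = 1e9                                   # rhodopsins in one rod
    rate_per_molecule = 1 / (1000 * SECONDS_PER_YEAR)   # spontaneous flips, per second

    cell_rate = n_molecules * rate_per_molecule         # whole-cell rate, per second
    print(f"spontaneous events per minute: {cell_rate * 60:.1f}")   # ~2 per minute

So an individually near-perfect molecule, multiplied by a billion copies, yields roughly the once-per-minute dark event rate quoted above.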
The thing we expect is not some single, rock-solid thing. It's a distribution of events. And so somewhere in the background is the question of how we actually learn anything about probabilities. So let me give you an example of this question about signals and noise. And the example I'm going to use is watching something move. So imagine that you're trying to estimate how fast I'm moving across the stage right now, just by looking at me. And in order to get a feeling for the problem that your brain is trying to solve, imagine that you can take one slice through the image that you see. And let's put that slice at the bottom here. And then, as I walk across the stage, let's follow what happens with that slice. Now what happens is that the various pieces of the image that were over here gradually move to the other side in time. And every piece moves at the same speed, because I'm moving rigidly, right? If I start going like this, it's another story. But if I move at a constant speed, all the little pieces of the image move together. And so what that means is that all you have to do in order to estimate speed is to figure out the slope of these lines. And the way you do that is to ask: what's the difference between this point and the point right next to it in space, a neighboring pixel? And what's the difference between what you see at one moment in time and what you see at the very next moment in time? Because what's happening is that the little piece of the image that was a little bit to the left in your image becomes the next piece on the right, if you just go step by step by step in time. And so that means that if you plot the difference from one moment to the next -- technically, the derivative in time -- versus the difference between neighboring pixels in the image, you just get a straight line, and the slope of that straight line is the speed at which I'm moving. So this makes it sound like estimating motion in the visual system is really easy. But imagine that instead of looking at this image, what you actually see is this one, which looks suspiciously similar, but isn't exactly the same, because this one is a little bit noisy. Why is it noisy? Well, I told you that the photons arrive at your retina at random. So if nothing else, there has to be that source of randomness. And there's a little more. Your photoreceptors aren't perfect. Especially not if the lights are very bright. If the lights are dim, they're pretty much counting the individual photons. But as the lights get brighter, and you switch over to actually being able to do color vision, which you need in order to see this picture, then the receptor cells themselves are a bit noisier. If you try doing the same computation we did before, now you get a mess, and, well, you might think that, roughly speaking, this slope is still identifiable, but it's not clear how to do it. I mean, why shouldn't you compare these points instead of those points, or maybe this point and that point? Who knows. It's a big blob. So what this tells you is that any noise in the image sets a limit on the accuracy with which you can perceive motion. And in fact, there are experiments, some of which I was involved in designing and analyzing, which show that real brains -- not just ours, but those of flies in particular, which offer a wonderful place in which to study motion sensing -- actually get close to this limit.
So in flies, you can put electrodes in the receptor cells of the retina and measure how noisy they are, and then you can put electrodes in the part of the brain that senses motion and see how accurate the estimate of motion is, and you can show that the precision of the estimate is close to what you'd expect from the noise that's at the beginning of the process. So that means that the calculation the brain is doing must be just the right one to squash this noise down and get as accurate an estimate of velocity as it possibly can. And when we first started trying to understand this, we did it in a purely theoretical way, and then more recently my colleagues have tried to approach this in a sort of experimental way, where they actually go out in the real world and take movies with a camera that's specially calibrated and fitted with gyroscopes so they know how it's moving, so you can ask: when you see a particular pattern of light intensity falling on the camera, what velocity does that really correspond to? And what you find is that, indeed, if the differences between neighboring pixels are very big, then the pattern corresponds to just detecting those slopes, like I showed before; but deep inside -- actually during most of your walk through the woods, where they did the experiment -- the patterns don't correspond at all to these straight lines. Things that are at constant velocity are not along a slope. They actually seem to lie along a curve in some very funny way, which corresponds to what we found theoretically many years before. And there's an important part of this, which is that when I tell you that you must do a certain computation in order to be as accurate as possible, it's important that being as accurate as possible does not mean that you get everything right. You get it as right as you can. And the only way to get things as right as you can is to make a trade between being susceptible to all the randomness and doing things which are a little bit wrong on average. So a familiar example: if you're looking at something and you're trying to predict -- let's say in the arguments about climate change, for instance -- if you see the temperature moving up but fluctuating, you have to decide over what window of time you should average. If you don't average, then you see these wild fluctuations, and you don't know how to interpret them. That's noise, in effect. On the other hand, if you average for too long, then you'll obscure the effect that you're looking for, because you'll smooth it out too much. There's no way to get zero error. All you can do is trade one kind of error against another kind of error. And if you do that as best you can, you're still left with a little bit of error. And if I understand the computation that you're doing, I can exploit that error and design a visual stimulus that goes right for the mistake that you're making. And that means that I can get you to see something moving when there's actually nothing moving. And I can change the speed at which you think you see it moving. And I can stop it. And I can get it to go the other way. And I assure you that there are no moving objects in that movie. And not only do you see it moving, but the fly we were studying also sees it moving, in the sense that you can actually get it to try and turn in response, and also the neurons in the fly's brain that are responsible for reporting on motion respond to these movies exactly as you would expect.
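Here is a toy version of the gradient picture from a moment ago -- not the optimal estimator from the actual fly experiments, which, as just described, bends away from the straight-line rule at low signal-to-noise. A rigidly translating one-dimensional pattern is generated with made-up speed and noise values, and the speed is recovered from the ratio of temporal to spatial derivatives:

    import numpy as np

    rng = np.random.default_rng(0)

    def frame(shift_px, noise, n=400):
        """One slice of a rigidly translating sinusoidal pattern, plus sensor noise."""
        x = np.arange(n)
        k = 2 * np.pi / 50.0                         # spatial frequency (per pixel)
        return np.sin(k * (x - shift_px)) + noise * rng.standard_normal(n)

    def estimate_speed(noise):
        v_true = 0.3                                 # pixels per frame
        I0 = frame(0.0, noise)
        I1 = frame(v_true, noise)                    # same pattern, moved right
        It = I1 - I0                                 # temporal difference
        Ix = np.gradient(I0)                         # spatial derivative
        # Dividing It by Ix pointwise blows up wherever Ix ~ 0, so instead fit
        # the slope of the It-versus-Ix line by least squares:
        return -np.sum(It * Ix) / np.sum(Ix * Ix)

    for noise in [0.0, 0.1, 0.5]:
        print(f"noise = {noise:.1f}  ->  v_hat = {estimate_speed(noise):+.3f}  (true +0.300)")

With no noise the slope comes out right; as the noise grows, the estimate degrades, which is exactly the limit the fly experiments probe.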
So we've seen your ability to count photons and your ability to estimate motion. Let's do something that relates to your ability to understand something about probability. And now we're moving, perhaps, somewhere across the boundary between brain and mind. One of the fundamental things that we do in the world is to decide whether the thing we're looking at is a real structure or just something that happened at random. So imagine that somebody shows you a coin that he flipped 10 times, and you see one of these things. So I think that if you see this, you should ask to see the coin. But what about this? Tails heads, tails heads, tails heads. That's kind of weird, right? Why all the repetition, or alternation? So this is a prototype for a whole variety of problems that we have to solve in the real world, and it's fundamentally about our ability to learn the rules that underlie things that are still a little bit random. There's a rule, but not a perfect rule. So in fact, what's happening in these images is that this one is actually a fair coin, in which every time you flip the coin you get an independent head or tail. This one is a kind of special coin that remembers its last flip and is biased toward doing the same thing again. However, you'll notice that even though this is the coin with memory and this is the independent coin, every once in a while the independent coin has a long run, and every once in a while the coin that's trying to remember its flip and do the same thing nonetheless alternates. And that's because it's a little bit random. But what that means is that if you're faced with these things, you cannot -- it is not possible to -- distinguish perfectly between the two alternatives. But there is a "best" you can do. You'll notice that most of the time, this doesn't happen. And so you can tell the difference. Most of the time, if you see a long run, it's because the coin had a memory, and most of the time, if you see alternation, it's because the thing actually is just being random. But not all the time. So if you do the best you possibly can, then realizing that 10 heads in a row is from the biased coin, or the coin with memory, is not hard. You should be able to get that right almost all the time. And similarly, if you see perfect alternation, you should be able to get that right almost all the time. And what do people do? The performance of people is in yellow, and the theoretical limit of how well you can do in principle is in red, and you can see that they're very close, if not perfect. One of the most difficult things is that in learning this task, the way this has been set up, the random coin generates alternations. So if you see alternation, you should claim that it's random. And that's a very hard thing to learn, because if you see alternations, there ought to be some structure underneath. But it's an artifact of the way this experiment was set up that you should assign that to the random one. But people will eventually learn this. An interesting feature of this: there's an enormous literature on our failure to understand probability, whether it's in medical testing, or financial risk, or many other real-world examples that we care about a lot, and there's a lot of emphasis on people's systematic errors in dealing with probability. This shows you -- and we've known it for a long time -- that at least under some conditions, people can be nearly perfect. But an important feature of these experiments is that you can completely screw them up if you tell people what they should be looking for.
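The "best you possibly can" in this task is a likelihood-ratio comparison between the two coin models. A minimal sketch, assuming the sticky coin repeats its previous flip with probability 0.8 -- that number is an illustrative guess, since the experiment's actual bias isn't given in the lecture:

    import math

    def log_likelihood(seq, p_repeat):
        """Log-probability of a binary sequence under a Markov 'sticky coin' that
        repeats its previous outcome with probability p_repeat.
        p_repeat = 0.5 is the fair, memoryless coin."""
        ll = math.log(0.5)                      # the first flip is 50/50 either way
        for prev, cur in zip(seq, seq[1:]):
            ll += math.log(p_repeat if cur == prev else 1.0 - p_repeat)
        return ll

    def classify(seq, p_repeat=0.8):
        """Optimal guess: pick whichever model makes the observed data more likely."""
        if log_likelihood(seq, p_repeat) > log_likelihood(seq, 0.5):
            return "coin with memory"
        return "random coin"

    print(classify("HHHHHHHHHH"))   # a long run      -> coin with memory
    print(classify("HTHTHTHTHT"))   # alternation     -> random coin
    print(classify("HTTHHHTHTT"))   # a typical mix   -> random coin

Note how the alternating sequence gets assigned to the random coin: under the sticky-coin model, nine switches in a row are far less likely than under independent flips, which is exactly the counterintuitive rule subjects have to learn.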
So the best thing to do is just let people answer and tell them whether they got it right. Then they'll learn. If you start explaining to them what random means and what correlated means, they get confused; or, more precisely, 30% of them get sufficiently confused that they never learn how to do this. What this shows us, I think, is that our brains are in fact very good machines for dealing with probabilistic situations. What's not so good is our language for talking about them. If I try to instruct you on what to do in the task, I make your performance worse. It's one of the few examples of this that you can find. So. I've shown you several examples where the brain is performing at the limits of what the laws of physics allow it to do. We've talked about photon counting in vision, but there are many more examples of the same flavor. If you sit, not in a dark room, but in a quiet room, and you listen to the faintest sound that you can just barely hear, that sound is so faint that your eardrum is moving by less than the diameter of an atom. Fish have a sense that we don't have, as far as we know: they can measure electric fields in the surrounding water. The most sensitive of them is so sensitive that it can measure the equivalent of a 9V battery with one pole somewhere in the South Pacific and the other in Hawaii, the 9 volts distributed over a reasonable fraction of the circumference of the Earth. Another way of thinking about this is that a shark has electric receptors so sensitive that it can sense the electric field induced by its own swimming through the Earth's magnetic field. I talked about the fact that part and parcel of photon counting is molecule counting. The bacteria that live in your gut, E. coli, when they are out in the rest of the world (how they make the transition we won't think about at the moment), unlike in the cushy environment they find in your gut, have to swim and look for food, and in order to do that they have to tell in which direction things are getting more interesting and more tasty. The precision with which they do that is so high that they are effectively counting every single molecule of sugar, or of the other substances they like to eat, that arrives at their surface. I could keep going. I gave you the example of motion estimation. There's the problem of how bats navigate by listening for the echoes of their ultrasonic pulses. There's our human ability to recognize symmetry or discriminate complex pitches. There's the wonderful phenomenon of ventriloquism, which you might be wondering what it's doing on this list. Ventriloquism is an example where two sensory stimuli -- your vision and your hearing -- provide you with conflicting data, and you have to decide which one to trust. You might think this is a matter of natural history: we are visual animals and therefore we trust our eyes. Not true. Some years ago, people did experiments combining different sensory cues, in this case vision and touch, and they showed that if you distort the images so that the errors in your visual estimate become the same as the errors in your tactile estimate, then you give them equal weight. The reason you trust your eyes is that, on average, your eyes are more accurate. You're doing the correct statistical combination of the data. It's just that the ventriloquist has contrived to create a situation in which that fails, like in the movie that I showed you.
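The "correct statistical combination" at the end there has a standard closed form: for independent Gaussian cues, weight each estimate by the inverse of its variance. Here is a minimal Python sketch of that rule (an illustration with made-up numbers; the lecture describes only the idea, not these values).

```python
import numpy as np

def combine_cues(estimates, variances):
    """Maximum-likelihood fusion of independent Gaussian cues:
    each estimate is weighted by the inverse of its variance."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    combined = np.sum(weights * estimates) / np.sum(weights)
    combined_var = 1.0 / np.sum(weights)
    return combined, combined_var

# Vision says 10.0 with low variance; touch says 12.0 with higher variance,
# so the fused estimate sits much closer to the visual one.
pos, var = combine_cues(estimates=[10.0, 12.0], variances=[0.5, 2.0])
print(f"combined estimate = {pos:.2f}, variance = {var:.2f}")

# Blur the image until the visual variance matches touch, and the two cues
# get equal weight -- the distorted-image experiment described above.
pos_eq, _ = combine_cues(estimates=[10.0, 12.0], variances=[2.0, 2.0])
print(f"equal-reliability estimate = {pos_eq:.2f}")
```

The first case lands at 10.4, pulled toward vision; the second lands exactly halfway at 11.0, which is the equal-weighting result the experiments found once the reliabilities were matched.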
Finally, the example of learning random versus correlated sequences is about how we deal with probabilities. There's a series of beautiful things about how we deal with probability in everyday cognition. My colleagues and I spent a lot of time trying to understand the way signals get encoded into those spikes that I told you about from the work of Hodgkin and Huxley, which really is about dealing with the probability distribution of the inputs to your sensory systems. There are gorgeous experiments about how we and other animals track changing odds in various situations, and so on. All of these point to the idea that the mechanisms that have been chosen by nature in the brain and, if you will, in the mind are those which are as accurate as possible: those which reach the limits set by physical and mathematical constraints. So I've come to the end. Why should you care about this? There's a very popular view of the brain, and of biology in general, which is given to us, I think, by evolutionary biologists who, as advocates for the theory of evolution, are reacting to their critics. There are two parts to evolution: there's the random generation of variations, and then there's selection, where the things that don't work get crossed out and the things that do work prosper. The anti-evolutionary point of view is that everything is the way it is because it was put there by the creator. So if I want to oppose that view very strongly, I should emphasize the randomness in evolution. And indeed you find books with titles like "The Blind Watchmaker." You don't need to read the book in order to get the message. This view of evolution as a tinkerer arises in other places too. There's a wonderful Calvin and Hobbes cartoon. Calvin sees a load limit of ten tons and asks how they know the load limit on bridges. The father responds, "They drive bigger and bigger trucks over the bridge until it breaks. They weigh the last truck and rebuild the bridge." The mother responds, "If you don't know the answer, just tell him." And Calvin, looking as puzzled as ever, says, "I should have guessed." There's a lot going on in this cartoon; it's about gender roles and parenting. Um. But the point I want to leave you with is that while evolution is a tinkerer, in the sense that things are generated by trial and error, the errors might be expunged very, very quickly. And if that's the case, then evolution has the power to push the organisms that we see to the edge of what they are allowed to do. If you're trying to hunt in the dark of night, being more sensitive to light, counting every single photon that arrives at your retina, is an obvious advantage over your competitors. If you're listening for the rustle of the prey you're trying to catch, or for the danger of the predator, then being able to hear the quietest sound is obviously an advantage. Much more abstractly, if you're trying to decide how to expend your limited energy in searching for food, then discovering the underlying patterns in the movement of the animals around you is obviously of great benefit. And so there is an evolutionary pressure not just to detect the smallest signal that the laws of physics allow, but to be as efficient as possible in detecting very weak patterns in nature and finding the rules that underlie what otherwise might seem like almost random processes.
And if that's true, then, as I tried to indicate in the example of motion estimation, that place at the edge of what the laws of physics allow is a place where, as a theoretical physicist, I understand how to calculate what will happen. I can do that well enough that I can fool you with that movie. What we don't know is whether all these places at the edge of what physics allows really connect up, so that we have not just a collection of stories about each individual system but a perspective on the phenomena as a whole. That is the edge of our understanding. So I will leave you there. Thank you for your patience. *applause from audience* Female Speaker: Thank you very much. Dr. Bialek is willing to take some questions. If you can, please come to the microphones that are here and on the other side. Audience Member: I was going to ask whether you think that this sort of perfection we're achieving is a general property, a property across all the different aspects and modalities of processing. Dr. Bialek: So I think the problem is, you know, to get back to the very beginning: in physics, we expect some degree of universality. We expect that if we see similar things in many cases, there is really a single underlying explanation, and that there's something deeper there. In biology we don't know how many examples constitute a rule, and so I've certainly had the experience of giving lectures like this where I give more and more examples and somebody stands up and says, "Well, but of course we know that in general this can't be right." And I guess we know that in general it can't be right, but what I don't know is: is it a more productive starting point to imagine that things are near the edge of what's allowed, or is it a more productive starting point to say no, the edge is infinitely far away and where you happen to be is not special at all? I think that's really the question. Not whether everything is exactly at the edge, but whether that's the productive starting point. And however many examples we accumulate, we don't know how many is enough to start to feel convincing. *no sound* Audience Member: In one of your papers, you found, using the maximum entropy principle and data from the retina, that the entropy per neuron equals the energy per neuron. Dr. Bialek: Yes. Audience Member: I want to know if you think that this is true in other areas of the brain as well, or at least the general principle of maximum entropy, do you think it's true... Dr. Bialek: So this is a somewhat technical question, but, to translate it a little bit: what you're referring to is work that my colleagues and I have been doing trying to get a description not of what one neuron at a time is doing but of what whole populations of neurons are doing together. And what we saw was that the pattern of behavior was, again, very special. It suggested that the collective activity in this network was, again, off at the edge of where it could be. What we don't know is... in that case, we don't have the same argument from biological significance. I mean, when I tell you that you can count single photons, that's an edge whose importance for the animal I understand. Or if I tell you that you want to package information into action potentials so that you're as efficient as possible, I think I understand what that means for the function of the organism. I might be wrong, but at least I understand what I'm saying.
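For the curious, the "maximum entropy" exchange refers to models of the kind sketched below: pairwise maximum-entropy (Ising-like) models of binary neural activity. This toy Python sketch uses random fields and couplings rather than parameters fit to retinal data, so it will not reproduce the near-equality of entropy and energy per neuron that the questioner mentions (that is a property of the data-fitted models); it only shows how the two quantities are defined and computed by brute-force enumeration on a tiny population.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N = 8                                   # tiny population: 2**8 states, enumerable

# Random fields and couplings stand in for parameters fit to real recordings.
h = rng.normal(0.0, 0.3, N)
J = np.triu(rng.normal(0.0, 0.3, (N, N)), k=1)   # count each pair i<j once

def energy(sigma):
    """Ising-style energy: E = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j."""
    return -h @ sigma - sigma @ J @ sigma

states = np.array(list(itertools.product([-1, 1], repeat=N)))
E = np.array([energy(s) for s in states])
p = np.exp(-E)
p /= p.sum()                            # Boltzmann distribution P ~ exp(-E)

entropy_per_neuron = -np.sum(p * np.log(p)) / N   # in nats
energy_per_neuron = np.sum(p * E) / N
print(f"S/N = {entropy_per_neuron:.3f}, <E>/N = {energy_per_neuron:.3f}")
```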
Um, in these other examples, we see things that are extremely interesting and provocative, but I don't know how to map it, I don't know how to convincingly map it onto what I have been telling you about here. And for that reason we don't know what's general and what's particular. We're still just exploring. Audience Member: So, um... If I understand what you said, the general message is that you think you've found a pattern that evolution realizes, which is always pushing to the limit, to the critical point, where physics allows you to get. Is that right? Dr. Bialek: That's fair. Yeah. It depends a little bit on what happens next, but go ahead. Audience Member: Too late. You said it's right. Is this new input into how evolution works? Is this at odds with what we expect from evolution? Can we deduce it from evolution? Can I do some initial estimate of whether evolution can get there, or how much time it takes for evolution to get there? What is the interpretation? Dr. Bialek: I'm certainly hoping that what I'm saying is not at odds with evolution. So again, for historical reasons, many evolutionary biologists have argued against this kind of reasoning; famously, Gould and Lewontin offered a critique of the adaptationist programme in which you look at every single feature that you see in an organism and you say, "Oh, that's there because of this, and that's there because of that, and that's there because of that." And indeed, that's very dangerous reasoning, in part because, to put my physicist hat back on, it's not very mathematical. I can look at something and say, "Oh, that would be good for this reason." But I don't really know that; that's just a guess. And in particular, I don't know that there aren't 15 other things that would be better. What's interesting about these examples is that you can actually calculate what it means to do as well as possible and ask how close the organism comes. And the answer is, they come very close in this example, and this example, and this example, and this example. So that's a demonstration that evolution has managed to get there in these cases. Now, I don't know how to state the general case. This is the question that Lauren asked at the beginning: how many of these examples give you a rule, and what does it even mean to say that "in general" it does this? I think I'm starting to get a feeling for it, but I don't quite know how to do it. And we certainly don't know how to calculate in the space of all possible mechanisms. So think about the molecules that are relevant, the genes that are relevant, for photoreceptor function. Can I imagine walking around in the space of sequences, generating random variations, and asking how hard it is to find one that can count single photons? I don't even think that's the right question, because we know that there are bacteria, there are archaebacteria, that do phototaxis. They swim toward the light. And there are experiments suggesting that they are so sensitive that they can count single photons. So that doesn't show that it's easy; it shows that it's old. Some of these things are problems that apparently were serious enough that they got solved a very long time ago, and so now we see layer upon layer of this. Audience Member: First, I'm a practicing evolutionist and evolutionary psychologist, and any smackdown of Gould and Lewontin is fine by me, so thumbs up for that.
So you study communication and computation in the brain, and I was wondering if you have an opinion about whether memories are stored at synapses, as opposed to critics like Randy Gallistel, whom I'm sure you're familiar with, arguing that that's a really bad story for how memories are actually stored, at least over the long term. Other people have speculated that there might be much better storage within the cell, where DNA would actually be a much better memory medium. I wonder if you have an opinion about that debate, if you've worked on that stuff. Dr. Bialek: I don't have an opinion. No, it's really... I'm trying to think. I guess once upon a time I did one thing that might be relevant, but only peripherally, so I don't have... I can recite back to you what's in the papers that you're talking about, but I can't do more than that. Sorry. Although Randy Gallistel, by the way, is responsible for some of the nicer experiments about tracking changing odds and demonstrating the efficiency with which animals can do that, which is quite beautiful work. I don't think it's deeply connected to his concerns about the site of memory storage, but it is beautiful work characterizing the behavior. Audience Member: Just one question from the physics end of things, which you obviously emphasize. Some of the phenomena you discussed can be understood from classical physics; some of them, like the absorption of photons, are obviously quantum mechanical. And since randomness played a big role, do you agree that, to the extent that it's classical, "randomness" is fully compatible with a completely deterministic system? Obviously with quantum mechanics it's more complicated, but what is your overall point of view about the role of quantum mechanics? I mean, everything of course is fundamentally quantum, but classical often does the trick. Dr. Bialek: You can have something that for all intents and purposes is random but completely classical. Think about the trajectory of a gas molecule, if you were to follow one gas molecule through this room. Of course, if you follow all of the gas molecules, then you can predict where that one will go, because all it ever does is bump into the other ones. But if you only follow one of them, it'll look like it's moving completely at random. So classical physics gives you many ways (that's the simplest one, but there are many) of generating things that, as far as you're concerned, look random. Randomness by itself is not a signature of quantum mechanics; it's perfectly possible to have it classically. So, for instance, the random arrival of a signaling molecule, say the random arrival of a sugar molecule at the surface of a bacterium that tells it there's more food on this side than on that side, is purely classical randomness. The random absorption of photons is in some sense a quantum mechanical randomness, but its quantum nature ends within a tiny fraction of a second after the photon arrives, something like a millionth of a millionth of a second. When you ask about the role of quantum mechanics in biology, it's a very painful subject for me, because a certain amount of my wasted youth went into this. It's very hard to get... So we now understand that it's possible to have the truly mysterious parts of quantum mechanics appear on a more macroscopic scale, and not just...
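The point that deterministic dynamics can look random is easy to demonstrate in a few lines. Here is a tiny Python sketch (an illustration, not from the lecture): the logistic map is completely deterministic, yet a coarse-grained readout of its trajectory behaves statistically like fair coin flips, much like following a single gas molecule in a room full of them.

```python
import numpy as np

def logistic_bits(x0=0.37, n=10_000):
    """Iterate the deterministic logistic map x -> 4x(1-x) and coarse-grain
    each value to a single bit. No randomness enters anywhere."""
    x = x0
    bits = np.empty(n, dtype=int)
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

b = logistic_bits()
print("fraction of ones:   ", b.mean())               # close to 0.5
print("fraction of repeats:", (b[1:] == b[:-1]).mean())  # also close to 0.5
```

Both printed fractions come out near one half, exactly what a fair, memoryless coin would give, even though every bit is fixed once the initial condition is chosen.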
You can create situations in which the peculiarities of quantum mechanics are not confined to the scale of atoms but can be magnified up, and they can last not for an instant but for much longer times. But figuring out how that might happen at the temperature of our body, in molecules that are bathed in water, is very hard. What is true is that there are phenomena in biology, like the absorption of light by photosynthetic plants and bacteria, where the initial events in absorbing the light and capturing the energy (which, I should say, is the source of all the energy for life on the planet, so this is not a small effect), the very first steps, are deeply quantum mechanical. And that was a surprise, because people would have guessed that the timescale on which you had to think about quantum mechanics was a thousand times shorter than the timescale on which it actually happened. But that thousand times longer than you expected is still a million times shorter than, let's say, the time difference between a sound arriving at your two ears, just to pick something more brain-like, right? Which is itself a thousand times smaller than the width of one action potential. So what we know is that our intuition about how far up the scales of space and time the mysteries of quantum mechanics can survive, against the onslaught of classical randomness in the environment, is often wrong. But nobody has yet shown how it could be wrong by so much that quantum mechanics becomes a dominant effect in, let's say, the functioning of our brain. I wish... If I knew how to do that, I would, you know, I'd be happy to show it. But I don't. Audience Member: And so, just a final, quick comment: I used to read some of the papers from your wasted youth in the early 90s. They were very stimulating. I was working on similar theoretical aspects. Dr. Bialek: So maybe it wasn't wasted entirely. Or maybe it was! I'm sorry. Audience Member: In the beginning you listed three possibilities for how to describe biology with math, and if I'm not wrong, in the second one you mentioned that maybe there's some robustness to parameters. It really reminds me of topology. Can you say a few words about this? How people... Dr. Bialek: Look, I can articulate for everybody else the intuition that you have. So, you know, in mathematics there are different kinds of properties: there's geometry and there's topology. If I want to know how far I have to sail on the surface of the Earth in order to get across the ocean, the fact that the Earth is curved means that it's not a trivial calculation; that's geometry. The fact that if I could keep sailing I would come back to the same place, that's because the Earth is a sphere; but if it were a kind of squished-up sphere, or a slightly twisted sphere, that would all still be true, and that's topology. In particular, the statement that if you keep sailing in one direction you'll eventually come back to the same place is a statement which is, in this language, robust to all the parameters of the geometry of the Earth. So there's the idea that maybe, and this actually happens in many physical, inanimate systems, one way of making the things that are important in biology not so sensitive to parameters is to have them be topological in this sense. That's a great and beautiful idea.
Interestingly, that's not what most people are looking for: there's a large literature on this idea of robustness to parameter variation, and that isn't the path it takes, which is interesting. But that doesn't mean it isn't the right path. I cannot think, off the top of my head, of an example where this intuition is instantiated in a very particular biological example, but I don't see why not. A good place to look would be pattern formation in development. If I'm a fly, my body has, well, 14, but let's say 7 segments, and it has 7 segments no matter what: big flies have 7 segments and small flies have 7 segments, 7 stripes in the patterns of gene expression. And so you might hope that the 7 was topological, and thus robust to parameter variation. But none of the models out there actually do that. It would seem like a good place to look, but I don't think anybody's done it. Audience Member: Yeah, to go back to the experiment with the dim light, where the random distribution of photon arrivals matches how often people report seeing the flash: in your opinion, does that suggest that the limits of what we can have conscious awareness of, or, by approximation, what we can know, are set by the physical properties underlying our own cognition? Dr. Bialek: Look, the essence of that body of experiments is that the randomness of our immediate perception, at least, is limited by the physics of what happens at the very beginning. Now, if that were true of all of our perceptions, then it would be the case that our accumulated sensory knowledge of the world would necessarily be limited only by the physics of what happens at that beginning. Now, that's surely not true. There are situations in which we don't have to push that hard. You know, when the lights are bright, you don't have to count every photon; it's no problem, right? And so parts of our perception are not limited in that way. On the other hand, they may well be limited, like in the example of motion estimation, or in the example of recognizing the patterns in the heads and tails, by the statistical properties of the signals that we take in from the world. And so it's perfectly possible that we are efficient, in some deep sense, in gathering and purifying the data which are relevant, and in that sense it would be true that our understanding of the world is much more intimately tied to, and much more deeply limited by, the physical properties of the environment than we're inclined to think. Many years ago, when my student Dan Ruderman and I started looking at the statistical structure of natural images, I was coincidentally working with other colleagues on the first measurements of how much information could actually be transmitted along neurons. And what we found was that, although you have the impression that you're overwhelmed by visual data and the world is rock solid and there's no noise and so on, if you took the measured statistical properties of the images you see when you take a walk in the woods, and added the measurements that people had made of the noise in the receptor cells of your retina, what you found was that the amount of information you had in every pixel was approximately the amount of information that every nerve fiber attached to that pixel could carry. Those two numbers were very close to each other; they were not light years apart, as most people had expected.
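The dim-light experiment the questioner refers to fits naturally into a one-line model: if "seeing" requires at least K photon absorptions, and absorptions at a fixed flash intensity are Poisson-distributed, then the frequency of seeing is a cumulative Poisson probability. Here is a minimal Python sketch (an illustration; K = 6 is just a commonly quoted ballpark for these experiments, not a number from the lecture).

```python
from scipy.stats import poisson

def prob_seeing(mean_absorbed, K=6):
    """Probability of 'seeing' a dim flash if perception requires at least K
    photon absorptions and absorptions are Poisson-distributed."""
    return 1.0 - poisson.cdf(K - 1, mean_absorbed)

# The frequency-of-seeing curve: its shape is fixed entirely by photon
# statistics, which is how the classic experiments inferred the threshold K.
for mean in (1, 2, 4, 6, 8, 12):
    print(f"mean photons absorbed = {mean:2d} -> P(see) = {prob_seeing(mean):.2f}")
```

The match between such a curve and people's reports is the sense in which the randomness of perception at threshold is the randomness of the photons themselves.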
We don't know how general that kind of result is, right? But certainly I would argue that that's the direction in which you should look. Audience Member: I have a problem with the definition of randomness. Dr. Bialek: Most people do. Go ahead. Audience Member: Well, it lies at the root of everything you've been saying; you use the word a lot. If you look at a particular sequence of digits, is it really possible to tell whether it's truly random, or whether in some way you haven't selected it with some prejudice? And it's also connected with the probability of this particular instance. Dr. Bialek: Yes. This is a great question. Let me answer it by going back here. Suppose I show you this one, okay? Or this one. Tails tails heads tails heads tails heads tails tails heads. And I ask you: was that generated by a fair coin? As you point out, that is not a well-posed question. I can't answer whether this sequence of 10 things is random; that doesn't mean anything, okay? Without casting aspersions, let me note that there are many places in which discussions of human reasoning about probability essentially make that mistake. They say that people will see patterns in things that are random, without recognizing the point that you've made, which is that defining randomness is actually very difficult. In particular, if I show you one instance, there is no meaningful answer to the question, "Is it random?" That's why, in order to make this... So sometimes when I've talked about these ideas, I've tried to introduce them by giving a badly posed but plausible-sounding example and then backing my way into this. Maybe it's better this way. This is a well-posed example, because what I've done is to draw sequences from two different distributions. This one has a correlation; this one does not. These are generated by something independent; these are generated by something that's correlated. I can then ask you, on any instance, which one of these is the more likely source. That is a well-posed problem, and if people can do that, then they'll reach the performance that I showed you. Forcing you to choose between two alternatives is a way of taking these difficult questions about the meaning of randomness and turning them into something precise, because I'm asking you about two signals drawn from two different distributions. Audience Member: But you're making a statement that the list on the right is random, aren't you? Dr. Bialek: But I made them, right? So I generated the sequences, or Lopes and Oden did, in 1987; well, these I made, but the ones that they made, they generated by, you know, drawing random numbers. So you're worried about their random number generator. But look, building something that's a good random number generator for strings of 10 is not hard, right? If you want to generate something that looks like a good random number generator for, you know, a year, I agree with you that there are challenges. But this is very seat-of-the-pants. I think this escapes the... I mean, do you believe that we can generate random numbers on a computer if we need to? Audience Member: I have my own doubts that there is anything such as a truly random sequence. And also, you know, the thing about probability and randomness is that it depends upon prior knowledge. Dr.
Bialek: Again, in this case, the sequences come either from this distribution or from that one, so I can write down what these distributions are, and I then have to generate a numerical procedure for sampling from them in a fair way. And if people don't know the distributions, learning to do the task is indeed learning the things you need to know about those distributions in order to tell them apart. So the evidence that people reach near-optimal performance is evidence that they have in fact learned that. They obviously didn't know it a priori, but over the course of the experiment, they learned it. I get the sense that we're near the end, right? Female Speaker: Thank you very much. I promised one gentleman that he would have access to you at the reception. Everybody else is also invited to the reception, which is up in the gallery on the second floor. Thank you very much for coming.
