
17th Lambda Literary Awards

From Wikipedia, the free encyclopedia

The 17th Lambda Literary Awards were held in 2005 to honour works of LGBT literature published in 2004.

YouTube Encyclopedic

  • ✪ Die Riemannsche Vermutung (Weihnachtsvorlesung 2016)
  • ✪ 20. Option Price and Probability Duality
  • ✪ Art and Craft: Teaching Writing, with André Aciman, Colum McCann & William P. Kelly

Transcription

Hello everybody! It's always nice to see so many people voluntarily attending a math lecture. Math can't be that bad... The subject of today's talk, or rather the lurid title I used to lure people here, is: How to earn a million dollars without leaving bed. The bed part I'll explain in a second, the million-dollar part is on the next slide. But first let me say, see below right, the nice title isn't from me but from a Russian-American mathematician called Alex Kontorovich who used it for a talk at Yale (I think). Generally, if you spot any beautiful ideas in this talk, they are certainly not mine. The big ideas are from great mathematicians I'll mention, and the smaller ideas are most likely from someone else as well. More about this at the end of the talk. So, why not leave bed? This obviously has to do with how math works. In other sciences you need to conduct experiments, go to the lab, spend a lot of money, interview people, all sorts of things. In math, you don't have to do any of this. You can do it in bed. You need, at most, a pencil and some paper; some don't even need that. René Descartes, for example, you may have heard his name, he's the one with "cogito ergo sum" (if you want to reduce his philosophy to one sentence), he also was a great mathematician, and he apparently never got up before noon, because he lay in bed all morning thinking about math. So, you can do math in bed (which is very useful). But that's not the real title. That was just to lure you here. The real title of the talk is: "The Riemann Hypothesis". If you were here last year, I told you about a few selected unsolved mathematical problems. They are still unsolved, by the way. What have you been doing all year? Those were all very interesting and some of them have been open for a long time. But this one is THE unsolved problem, so to speak. 
If you did a poll among mathematicians, probably nine out of ten would say that this one is the most important, the biggest unsolved problem. That will be today's subject. And I'll start, because this is a Christmas lecture, with a slide called "Every Year Again" (or rather every hundred years). That's David Hilbert. At the last turn of the century, around 1900, he was one of the most famous mathematicians in the world. But I should first say, although our equal opportunity representative isn't here today, you'll only see men on the slides today. I'm sorry for that. But I can at least provide a nice anecdote about Hilbert. David Hilbert was THE great mathematician in Göttingen, which at that time was one of the world centers of math, and he had a very talented female doctoral student called Emmy Noether. That was 1915. At that time, Germany had no female professors. Back then, that was unthinkable. But Hilbert wanted Mrs. Noether to qualify as a professor. That's why he suggested this to his faculty. Or rather he suggested that the faculty should make a request to the German government that Mrs. Noether should be able to qualify as a professor. But he met fierce resistance; the conservative old farts in his faculty said: "We can't have a female professor here." Finally, Hilbert was so enraged that he pounded the table and said: "Gentlemen! Is this a public bath or a faculty?" At least this convinced the faculty to make the request, but the government declined. "Women can't be professors!" So, she had to wait a few more years, until the end of World War I. But finally she became Germany's first female professor. But I wanted to talk about something else... In 1900, one of the first International Congresses of Mathematicians took place in Paris (where the world fair was happening at the same time). And David Hilbert was asked to give one of the keynotes. 
As it was the turn of the century, his chosen subject was: "What will be the problems that keep us mathematicians occupied in the next 100 years?" In his talk he presented ten problems. And in the accompanying paper he expanded the list to 23 problems. Those, for him, were the big open questions in math. And he was rather prescient, because many of these questions really did occupy many mathematicians for most of the next century. It might have been a bit of a self-fulfilling prophecy, though, because Hilbert was famous. Just because he said a problem was important, many others would pounce on it and try to solve it. For example, the first problem on the list was the continuum hypothesis. And the eighth of the 23 problems was the Riemann hypothesis, our subject for today. Now we fast-forward 100 years. In Cambridge, in the US, a millionaire called Clay established a foundation with the aim of increasing mathematical knowledge. They carry out several nice projects, trying to make math more popular. And one of these efforts was, 100 years later, to do something similar to what Hilbert had done. "Let's ponder what the problems of the next 100 years in math will be." So the Clay Mathematics Institute (CMI) asked the world's best mathematicians to prepare such a list. They agreed on seven problems. The big difference from Hilbert: with Hilbert's problems you could only earn glory, while the CMI offers one million dollars for the solution of each of its seven problems. So, if you can solve one of these questions, not only will you be famous forever, you'll also receive a million dollars! One of the seven problems is, for example, the Birch and Swinnerton-Dyer conjecture. And so on. Number six is the Riemann hypothesis, which is still open. It's the only open question that's on both lists! This one-million-dollar prize is called the Millennium Prize, by the way. So we'll talk about the sixth Millennium Prize problem today. 
As a side note, one of the seven problems WAS solved, the so-called Poincaré conjecture, which had also been unsolved for 100 years. It was solved by a Russian called Perelman. But he turned down the prize. He said he just wanted to solve the question and hadn't done it for the money. He was also offered the Fields Medal, the equivalent of the Nobel Prize for math, but he declined that as well. He now lives a withdrawn life in Russia with his mother. He doesn't give interviews, he talks to nobody, but he solved one of these seven problems... Well, he could have taken the money if he had wanted to. We'll also forget about the million dollars now. We'll only think about the actual question. The rest of the talk will be about the Riemann hypothesis. First some background information. Why is this such a big deal? Riemann formulated this hypothesis in 1859 in a small scientific article of only eight pages. Its title was: "On the Number of Primes Less Than a Given Magnitude". It was published in the "Monatsberichte der Berliner Akademie". More details to come, but if you're dealing with prime numbers, you're in a branch of mathematics called "number theory". The funny thing is that Riemann didn't work in number theory. This article was the only one about number theory he wrote in his whole life. But it is still considered to be one of the most important and seminal number theory papers ever written. That might be a bit frustrating for other mathematicians who work in number theory their whole lives: along comes this "outsider" who writes one article which keeps everyone occupied for 150 years... So, this question, which Riemann only mentioned casually, has been unsolved since 1859. What exactly is the question? Well, you'll have to wait a bit more... Since 1859, there have been countless efforts to prove or disprove this conjecture. Many people tried in vain and found it a tough nut to crack. Until now, nobody has succeeded. 
It often was like in this New Yorker cartoon: somewhere within that "awesome proof" there was a gap, and so it wasn't a proof at all. Let me give two examples. There's an 84-year-old French-American called Louis de Branges who, it seems, spent at least 30 years of his life on this problem and became a kind of tragic hero. Every now and then he publishes long articles that nobody reads anymore because he has made so many mistakes by now. It is of course not impossible that he has proved the Riemann hypothesis by now, but it seems nobody wants to check. If you're interested, 3sat (a TV station) once aired a documentary about him. You'll find it on YouTube. It's called "Die Codeknacker". It's mostly about Mr. de Branges. But there are other funny stories as well. Just last year, 2015, many newspapers and even the BBC reported that a mathematician from Nigeria, Opeyemi Enoch, had proven the Riemann hypothesis. Nobody had heard of him before, but he was kind of famous for a few weeks, and then he vanished again... And he never published a scientific article. Let's just assume he didn't prove the Riemann hypothesis. It's still an open problem. One thing that's very special about the Riemann hypothesis: it is quoted in a lot of scientific articles in a very specific way. Most scholarly papers in math are very similar. If the Pythagorean theorem were actually really from Pythagoras, and if Pythagoras lived today, then he'd publish a paper about this theorem. The paper would start with a statement of the theorem: "In a right triangle, the square of the hypotenuse is equal to the sum of the squares of the other two sides." This would be followed by a proof. That's a typical mathematical article. A theorem followed by its proof. But there are several hundred mathematical articles which start with: "I'll prove that the following theorem is true IF... ...the Riemann hypothesis is true." 
Some people estimate that there are 500 to 1,000 articles containing proofs of facts which are only true if the Riemann hypothesis holds. There's nothing comparable to this in mathematics: lots of people relying on a "proof to come" which will automatically prove their theorems as well. This obviously shows the importance of the Riemann hypothesis. One more thing to assess the Riemann hypothesis, because there are many CS students here: the Riemann hypothesis is the first mathematical problem ever that was attacked with the help of a computer. With this computer, that is. The Manchester Mark I. It's the successor of a prototype which in turn was the first computer ever to have a von Neumann architecture. (You'll know what this is from your CS lectures.) This was shortly after World War II. You'll all have heard of Alan Turing. In 1950, Turing used the Mark I to compute 1,104 zeros of the "zeta function". This function is related to the Riemann hypothesis. You'll learn more about it in a few minutes. And then you'll understand that it's important to know where the zeros of this function are. Before one could use computers to help with the task, people had to compute these zeros manually. The record in 1950 stood at about 1,000 zeros, computed manually by an English colleague of Turing's. Turing's aim was to use this new computer, which he had helped to construct at the University of Manchester, to compute 5,000 zeros to investigate the Riemann hypothesis. Unfortunately, the machine broke down after 1,104 zeros. So his record wasn't a lot better than the old one. The purpose wasn't establishing a new record, of course. But still, this didn't pan out the way Turing had intended. By the way, already in 1939 Turing had mentioned the idea of using computers to work on the Riemann hypothesis. At that time, he wanted to utilize something we would nowadays call an analog computer. If you're interested, look up "tide-predicting machine" on Wikipedia. 
They already had machines in the 1930s which could predict the tides using analog (or mechanical) means. In 1939, Turing developed ideas to use tide-predicting machines for the Riemann hypothesis, but was subsequently "interrupted" by World War II. You'll probably know he was masterminding the project to break the Nazi codes. So he had no time left to convert tide-predicting machines into devices to compute zeros of the zeta function. For the computer scientists among us: this is a typical output of the Manchester Mark I. It looked like this because the people who constructed the Mark I utilized a teletype code which was in common usage at that time. Each of the characters represents 32 bits. These 32 bits could stand for a number which you'd have to read backwards and decode manually. Or the 32 bits could be part of an instruction. Each instruction was 20 bits long. You can imagine it wasn't easy to program this thing. There were a lot of things you had to do manually. Some data: you see on the photo that the Mark I needed a whole room of its own. If this machine were on display in a store today, you'd read about 1,280 bytes of RAM and a clock rate of about 500 Hz. For comparison, the photo above has about 65 kilobytes, so the whole main memory of the Manchester Mark I was just enough for the little stool there. So, to use such a machine to compute more than 1,000 zeros of the zeta function was actually quite an accomplishment. OK, so much for the significance of the Riemann hypothesis. Remember that Hilbert mentioned it as one of his 23 problems. He later said: "If I were to wake up after sleeping 1,000 years, my first question would be: 'Has the Riemann hypothesis been proved?'" After 100 years, the answer is still: "No". For now, we can let him sleep and maybe wake him up again after another 100 years to see if it has been solved by then. Now a few facts about Riemann himself. This is Mr. Riemann. 
If you're close enough, you see that he looks a bit grumpy and his glasses are askew. He was born in 1826 in Breselenz. That's not far from here, at the Elbe in Lower Saxony. If you jumped into your car right now, you'd be there before the talk ends. But maybe you'd better stay here. He was a doctoral student of the famous mathematician Gauß, whom we'll meet again later. Riemann probably was one of the most influential mathematicians of the 19th century. Here are a few areas he was active in. I already mentioned number theory, which was more of a sideline of his. (Although that is today's topic.) Two other things I should mention: one is the Riemann integral. Everybody who has heard about integration in school has learned the Riemann integral. That's the integral usually taught in schools. Maybe even more important: he continued and extended the work of his doctoral advisor Gauß in differential geometry. That was in the middle of the 19th century. What Riemann developed at that time, and what some others later refined, became the basis, 50 years later, of Einstein's general theory of relativity. This theory is, in a sense, a geometrical description of spacetime. And the geometry you need for this is the geometry that mathematicians like Gauß and Riemann invented. But that's not what we're interested in today. We'll talk about number theory. A sad fact at the end: Riemann died in 1866 in Italy, near Lake Maggiore. He was only 39 years old. He died of tuberculosis, which was incurable in his time. If you're attending my math lectures, you'll know that Galois died at the age of 20, and Abel died at the age of 28, also of tuberculosis. Many people died very young in those days. But later in the talk, we'll fortunately also learn that some mathematicians grew very old. One last thing about Riemann. After his death, his housekeeper apparently cleaned up very thoroughly. And in the process, she burned most of Riemann's notes. Many people would have loved to read them... 
A few were preserved and Carl Ludwig Siegel found interesting stuff in there. So, obviously those notes contained a lot more material than Riemann had published. But most of it is gone forever, because his housekeeper did such a good job. Here's the plan for today. First, I'd like to tell you who the "main protagonists" are. Then, before telling you what the Riemann hypothesis is, I'll talk about some of its consequences. And then I'll finally tell you what Riemann actually conjectured. And the last chapter has the nice title "The Music of the Primes" (The title is not mine.) Let's start with the main protagonists. But let me first warn you. We'll now be talking about mathematics, but as this is a Christmas lecture, there'll also be a lot of "hand-waving". I'll leave out a lot of details and I'll also be imprecise at times. We mathematicians usually don't like that. I'd rather explain everything in full detail. But if I did that, you'd probably run away screaming. So I'll instead concentrate on nice visualizations, I also programmed a few animations, I hope that's the right stuff for today. But as I'm recording this: If you're seeing this on YouTube or wherever and you're an expert in number theory, please accept my apologies for things that aren't correct. I know that some of this isn't quite correct and I don't like that myself, but... OK, the main protagonists. What is this all about? I found something nice. You've all heard of "Don Quixote", a famous Spanish novel from the 17th century. You might recall that Don Quixote was in love with Dulcinea. Somewhere in the book, Don Quixote meets a poet. He asks this poet to write a love poem for him. I'll read this part for you: "He then begged the bachelor, if he were a poet, to do him the favor of composing some verses for him and to see that a letter of her name was placed at the beginning of each line, so that, at the end of the verses, 'Dulcinea del Toboso' might be read by putting together the first letters." 
The poet then says: "Though he saw a great difficulty in the task, as the letters which made up the name were seventeen; so, if he made four ballad stanzas of four lines each, there would be a letter over, and if he made them of five, what they called decimas or redondillas, there were three letters short; nevertheless he would try to drop a letter as well as he could." "It must be, by some means or other," said Don Quixote. But this won't work, and you probably know why. It's because Dulcinea's full name consists of 17 letters. And 17 is a what? It's a prime number. Here's a reminder about prime numbers. Here's a list of some numbers together with their divisors. 6, for example, can be divided by 1, 2, 3, and 6. 9 can be divided by 1, 3, and 9. But some numbers can only be divided by very few numbers. 5 can only be divided by 1 or 5. 11 can only be divided by 1 or 11. 1 is kind of an "outsider". It can only be divided by 1. We now call a number which has exactly two divisors a "prime number". So we have excluded 1 and confined ourselves to the interesting numbers like 2, 3, 5, 7, and so on. Those are the prime numbers. And because 17 is also a prime number (not listed), Don Quixote's poet won't succeed. Why are prime numbers important? First of all, they are important for mathematicians, from a theoretical standpoint. They are something like the "atoms" in the world of numbers. What I won't repeat here, and most of you will have heard of it already anyway, is the so-called fundamental theorem of arithmetic, already known in Ancient Greece, which says that every other number can be "built" from the prime numbers. That's why I'm talking about "atoms". Every molecule can be built from atoms; every number can be built from prime numbers in a unique way. That alone already makes prime numbers very important. They are the Lego blocks of all of mathematics, and thus you want to know as much as possible about them. 
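[Editor's note: the lecture doesn't show code here; the following is a minimal Python sketch of the unique factorization just described, using plain trial division.]

```python
def prime_factors(n):
    """Decompose n into its prime "atoms" by trial division."""
    factors = []
    d = 2
    while d * d <= n:          # a composite n always has a factor <= sqrt(n)
        while n % d == 0:
            factors.append(d)  # d is prime here: smaller factors are already gone
            n //= d
        d += 1
    if n > 1:
        factors.append(n)      # whatever remains is itself prime
    return factors

print(prime_factors(60))  # [2, 2, 3, 5], i.e. 60 = 2 * 2 * 3 * 5
print(prime_factors(17))  # [17], so 17 is prime
```

The fundamental theorem of arithmetic guarantees that, up to ordering, this list of factors is the only one possible for each n.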
That's why they've been under investigation for thousands of years. The branch of mathematics that deals with prime numbers is called number theory. That's how the theory views prime numbers. But as this is a University of Applied Sciences, I'm used to questions like: "What does this mean in practice?" "What can this be used for?" Mathematicians usually don't like these questions. But you probably want to know the answer. The English mathematician Hardy gave a reply in 1940. By the way, if you saw the movie "The Man Who Knew Infinity", this is the guy who was played by Jeremy Irons. If not, you should probably go see it. Hardy wrote a book in the 40s. Among other things he wrote about how sad he was that so many of his colleagues, physicists, chemists, and so on, were involved in war efforts. And then he wrote: "There is one comforting conclusion which is easy for a real mathematician. Real mathematics has no effect on war. No one has yet discovered any warlike purpose to be served by the theory of numbers, and it seems very unlikely that anyone will do so." So, Hardy was sure that his branch of mathematics, number theory, was totally useless. And that's why it's unimportant for people waging war. Unfortunately, he was wrong. Because it turned out that prime numbers are a great tool for cryptography. So, everybody who wants to encrypt something, like the intelligence services, wants to know as much as possible about prime numbers. Nowadays, the NSA is the world's largest employer of PhDs in mathematics. And most of them probably are number theorists. So, if you want to know whether this can be used in practice: yes, it can. But that's not the reason I'm giving this talk. The primes are not only important, they are also very enigmatic. There are lots of questions about them, some of which I sometimes discuss in my lectures, that are unsolved. For example, are there infinitely many twin primes? Nobody knows so far. Is every even number the sum of two primes? 
Nobody knows. Do odd perfect numbers exist? Nobody knows. Are there infinitely many Fibonacci primes? Nobody knows. And there are many more questions concerning the prime numbers which are still open. Most of these have in common that they are easy to express, but sometimes very hard to answer. Sometimes one of them is solved. I crossed out one. That was the question whether Diophantine quintuples exist. This question was actually answered two months ago. A paper was published two months ago proving that such things don't exist. But the question had been open for a long time. And the biggest question concerning the prime numbers probably is the Riemann hypothesis. And as I said, we'll first look at some consequences of the Riemann hypothesis before I tell you what the hypothesis is. This is one of the situations where I should be more accurate: these aren't only consequences of the Riemann hypothesis, they are actually equivalent to it. So, if the Riemann hypothesis is true, the two things I'll tell you now are true as well. But it's also the other way around: if one of these two is true, then the Riemann hypothesis must also be true. Which means there are potentially different ways to prove it. These two consequences look contradictory at first sight. The first one can be paraphrased like this: the succession of prime numbers, 2, 3, 5, 7, 11, and so on, obeys strict rules. The second one can be paraphrased as: the prime numbers seem to appear randomly amidst the other numbers. Which somehow contradicts the part about "strict rules". Don Zagier, an American mathematician working in Bonn, once put it this way: "The prime numbers grow like weeds among the natural numbers." Every now and then a prime number raises its head, but nobody can explain this systematically. Let's start with the first part, the one about the strict rules. There's a connection between this statement and the question: "How many primes are there?" So, how many primes are there? Infinitely many. 
The ancient Greeks already knew this. It's called "Euclid's theorem". You've heard about it. I usually prove it in my lectures. I won't prove it now. But what we also know is: there might be infinitely many primes, but they become rarer and rarer. I'll first show you what is meant by "rarer". That's pretty easy to visualize. Using something you'll also know, something even CS students who never attended a math lecture will have heard about: the sieve of Eratosthenes. Eratosthenes is also one of those ancient Greeks; he was a librarian in Alexandria. At that time, about 200 BC, Alexandria hosted the world's largest library. He wasn't only doing math, though. He also was the first man who tried to compute how big Earth is. The ancient Greeks already knew that Earth is a sphere. Eratosthenes used measurements of shadows in different places to estimate the size of Earth. Using only the primitive means available at his time, he arrived at a value that was only five percent off. Funnily enough, more than 1,500 years later, when Christopher Columbus discovered America, he believed, all his life, that he had found India. Which is where he wanted to go in the first place. And the reason he believed he was in India was that he relied on newer computations that were much worse than those of Eratosthenes. With Eratosthenes's numbers, it would have been perfectly clear he couldn't be in India. Using newer and thus supposedly "better" numbers turned out to be a mistake. But Eratosthenes also invented this so-called "sieve". That's an idea for how to "sieve out" the prime numbers, for example with a computer program. It works like this: you start with the first "interesting" number. That's 2, which is a prime number. Now you cross out all multiples of 2, because those obviously can't be primes. Primes are numbers only divisible by 1 and themselves, and the multiples of 2, like 4, 6, 8, and so on, are all divisible by 2 and thus not primes. So we strike them all out and many numbers "disappear". 
Now pick the next number which is still "there". That's 3. This number must be prime. Now you do the same as before: cross out all numbers divisible by 3. Those also, for the same reason, can't be primes. Some numbers, like 6 or 12, are struck out twice. But that's not a problem. In any case, many numbers drop out. Again, you have a smallest number not crossed out yet. In this case it's 5. That's your next prime and, again, you cross out all multiples of 5. The next one is 7; strike out all multiples of 7. If you're only interested in the numbers up to 118, like here, you can already stop. Homework question: why can I stop at 7? All numbers which aren't crossed out, plus the four encircled ones at the top, are prime numbers. If you visualize this process, it looks like this: I start with a "staircase" which goes one step up for each number. I start with 2 and go one up. Then comes 3, one up, and so on. The result obviously has a slope of 45 degrees. But it's a "staircase" and not a straight line. And now I start applying the sieve of Eratosthenes. I first strike out all multiples of 2, except for 2 itself. That looks like this. The staircase drops down a bit. We now only have a step for every second number. Here's 2, where we go up one step. Here's 3, where we go up one step. Here's 4, and we don't go up a step. At 5 I go up again. Here's 6. It was crossed out, so we don't go up. Now the next step in the sieve of Eratosthenes: we cross out the multiples of 3. The staircase becomes flatter. For example, the number 9 is a place where we used to go one step up, but now we don't. So the whole thing sags more and more. In the next step the multiples of 5 are removed. The staircase sags again. Now the multiples of 7. It sags a little bit more. Now imagine extending the staircase to the right. It starts with a certain slope, but the more numbers you cross out, the flatter the staircase becomes. The curve you'll get in the end is very important in number theory. 
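[Editor's note: the sieve procedure just described translates almost line for line into Python; this sketch is not part of the original lecture.]

```python
def sieve(limit):
    """Sieve of Eratosthenes: cross out multiples, keep what survives."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False          # 1 is the "outsider", not prime
    for p in range(2, int(limit ** 0.5) + 1):  # a hint for the homework question
        if is_prime[p]:
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False     # struck out (possibly more than once)
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

For the range up to 118 used on the slide, the loop indeed stops after sieving with 7, and exactly 30 numbers survive.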
It's the prime-counting function, which is usually written as π (pi). It has nothing to do with the π you know. The number theorists apparently just couldn't come up with another letter. π(n) is defined as the number of primes up to n. For example, if I want to know how many primes there are up to 30, I'll look here and see that the number is 10. I can get the values from the steps of this staircase. It goes up one step at each prime number. It goes up at 2, 3, 5, 7, then 11. All other steps have been removed. This prime-counting function is at the core of number theory. If you know enough about this function, you know everything you need to know. If, for example, you want to know whether 17 is a prime number, and if you have a method to compute these values quickly, then you just compute π(17) and π(16). If those values differ, 17 is a prime number. If they don't differ, 17 is not prime. That's why people are investigating this function. Riemann did this. Remember the title of his paper: "On the Number of Primes Less Than a Given Magnitude". So it was about the function π. This is how π looks if you go further to the right. Note that the x axis now has a different scale than the y axis. This is the step function you get for the primes up to 500, approximately. And we'd like to know more about it. Let's repeat the slide we already saw. The ancient Greeks already knew there are infinitely many primes. We've seen that the primes become rarer over time, which means that our staircase becomes flatter. But it doesn't stop ascending, because we know there are infinitely many primes. It keeps ascending, but its slope gets smaller. A modern question, not something the ancient Greeks wondered about, is the following: can we quantify what is meant by "rarer"? Are there ways to describe the prime-counting function other than the obvious and costly one? 
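[Editor's note: a small sketch, not from the lecture, of the "staircase" π(n) and the primality test via π(17) − π(16); it reuses the sieve idea, so it is the obvious, costly way to get the values.]

```python
def prime_pi(n):
    """pi(n): the number of primes up to n (the staircase function)."""
    if n < 2:
        return 0
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False
    return sum(is_prime)

print(prime_pi(30))                 # 10, as read off the staircase in the talk
print(prime_pi(17) - prime_pi(16))  # 1, so the staircase steps up at 17: it is prime
print(prime_pi(16) - prime_pi(15))  # 0, so 16 is not prime
```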
Right now, if you want to know the value of π(100), and if you want the exact answer, you really need to compute all primes below 100. What people would like to have is some kind of method where you ask a computer to compute π(100) and it instantly answers, using some fancy formula. That's what's meant by "quantify". Or maybe we can at least approximate the values of π. One of the first mathematicians who thought about such an approximation was Riemann's doctoral advisor, Carl Friedrich Gauß. When the Euro was introduced, I was totally opposed to it. Not for political or economic reasons, but because the old mark bills had a portrait of Gauß which was about to be removed. I'm still sad about this. Many people think that Gauß was the greatest mathematician of all time. He worked in many different areas. One of them was the question of how to quantify the prime-counting function. Here's a letter from the Gauß archive in which he writes about this question and when he first thought about it. He started thinking about this problem and made a correct conjecture when he was 14 years old! That was at the end of the 18th century. How did he do it? To make such a conjecture, you should have seen quite a few primes to get a feeling for the behavior of π. Remember, we're talking about the 18th century. They didn't have computers back then. You could buy tables. This one, see below, is from 1770. It was published by the Swiss Lambert who, among other things, proved that π (the other π) is irrational. He also was a well-known mathematician. In case you can't read Gothic print, I'll read this aloud: "Supplement to the logarithmic and trigonometric tables for the facilitation and reduction of the calculations that incur when applying mathematics, produced by J.H. Lambert." Young Gauß, at 14, procured such a book. And the book is filled with tables like this. Mr. 
Lambert recorded, manually, for the numbers from 1 up to approximately 100,000 whether they are primes or not. He started with the clever observation that it is immediately apparent whether a number is divisible by 2, 3, or 5. It's divisible by 2 if the last digit is even. It's divisible by 5 if the last digit is 5 or 0. For 3, you take the digit sum. Lambert left these numbers out. He didn't write down all numbers between 1 and 100,000, but "only" those not divisible by 2, 3, or 5. The rest was written down in the following way: For example we have 811 here and 19 here. So this part of the table is for the number 81,119. The hyphen here means that this number is prime. This table cell is for the number 81,121. Here we have "23" instead of a hyphen. That means this number is not prime and its smallest factor is 23. So for the non-primes you got, as a bonus, their smallest factors. Young Gauß studied tables like this. At 14, mind you. And then, from just staring at these tables, he conjectured what the prime-counting function should look like. I marked his conjecture in the letter I showed you earlier. I will show this graphically. This is the prime-counting function we already saw. Gauß said: "If we compute this integral," the so-called integral logarithm, often abbreviated as 'Li', "then the resulting yellow curve will be a very good approximation of the prime-counting function." And he was right. But it took another 100 years before someone actually proved that Gauß was right. That's the famous prime number theorem. It says that the yellow and the blue curves we just saw are asymptotically equivalent. That means: If n grows larger and larger, i.e., if we move farther to the right, the quotient of the two values will approach 1. Remember that if you divide two numbers that are equal, then their quotient is of course 1. So, if the quotient of two numbers approaches 1, the two numbers approach each other.
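You can watch the two curves converge numerically with a small sketch (my own code, not from the talk). It approximates the integral logarithm with a simple midpoint rule, normalized as the integral from 2 to x; Gauß's version differs from this by a small constant, which doesn't matter for the asymptotics:

```python
import math

def Li(x, steps=200_000):
    """Midpoint-rule approximation of the integral from 2 to x of dt / ln(t)."""
    h = (x - 2.0) / steps
    return h * sum(1.0 / math.log(2.0 + (i + 0.5) * h) for i in range(steps))

def prime_count(limit):
    """pi(limit) via a plain sieve of Eratosthenes."""
    flags = bytearray([1]) * (limit + 1)
    flags[0] = flags[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return sum(flags)

pi_n = prime_count(10 ** 4)
li_n = Li(10 ** 4)
print(pi_n, round(li_n, 2))   # 1229 primes below 10^4; Li lies slightly above
print(li_n / pi_n)            # the quotient is already close to 1
```

Pushing the limit higher makes the quotient creep closer to 1, exactly as the prime number theorem promises.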
This theorem was proved in 1896, about 100 years after Gauß conjectured it, by the Frenchman Hadamard and the Belgian de la Vallée-Poussin. And they both used methods that were based on Riemann's ideas. I earlier said that Riemann and some other mathematicians died very young. Not these two. Hadamard lived to 97 and de la Vallée-Poussin to 95. After all, mathematics might not be THAT dangerous. You can get old doing it. Hadamard, by the way, wrote an interesting book about the role of intuition and creativity in mathematics. OK, this has been proved. Maybe we should look at some numbers. Here's a table. It starts with 10 to the 8th and 10 to the 9th. Already pretty big numbers. 10 to the 8th is 100 million, for example. I had my PC compute for me the number of primes up to n. That's on the right, in blue. For example, among the first 100 million numbers we have 5,761,455 prime numbers. The yellow curve, the one from Gauß, has approximately the value 5,762,208.33 at this point. If you divide the two, the result is 1.00013075. If n gets bigger, the quotient has more zeros after the decimal point. That's the meaning of "approaching 1". The bigger n gets, the better the blue curve is approximated by the yellow one. That's what the prime number theorem says. You don't have to remember this. If you want to remember any of this, remember what I call "the party version" of the theorem: If among the first n numbers you pick one at random, the probability of picking a prime number is approximately 1 over the logarithm of n. You can actually use this for a bet at a party. For example, how many primes are there among the first 10 million numbers? Now you need to compute the natural logarithm of 10 million. You probably can't do that using mental calculations. But you can easily compute the common logarithm of 10 million. That's just the number of zeros. So the common logarithm of 10,000,000 is 7. And then you need to know that to get the natural logarithm you need to multiply by 2.3.
7 times 2.3 is approximately 16. So, approximately a sixteenth of the numbers up to 10 million are prime numbers. Done, that's how you win your party wager. That's the easy version of the prime number theorem. Of course, we are more interested in higher mathematics. Here's our table again. Some parts are now marked in green, but please ignore this for now. Other than the color, does anything catch your eye when comparing the two columns? The number on the left is always a bit bigger than the one on the right. The left number is an approximation for the right number. It COULD be sometimes bigger and sometimes smaller than the right number. But it seems, and we also saw this graphically, the left number is always greater than the right one. And the reason some digits are green and some are black is: The part where the two numbers agree gets bigger. For example, in the last row the first six digits are equal. If you look closely, you'll notice that approximately half of the digits are the same in each row. With a bit of mathematical experience you'll conclude that the error made by the approximation has the order of the square root of n. Taking the square root is roughly equivalent to halving the number of digits. Of course, you want to know the error as precisely as possible. The prime number theorem makes a very general statement. It says that the yellow function approaches the blue one. But it doesn't say WHEN this will happen. You can be sure that the two curves meet at infinity, but if you need an error estimate for, say, one billion, the prime number theorem won't help you. So we need other means to figure out bounds for the error. This here looks like a clue. Maybe the error is always smaller than the square root of n? With our table and with the help of computers we could now conclude two things: Let's call this "numerical evidence". First, the values on the left are always greater than those on the right. 
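The party-trick arithmetic from above is easy to check against the table (the 5,761,455 is the value quoted in the talk; the code is my own sketch):

```python
import math

n = 10 ** 8
actual = 5_761_455                 # pi(10^8), from the table in the talk

# Mental-math shortcut from the talk: common log = number of zeros (8),
# natural log = common log times roughly 2.3.
estimate = n / (8 * 2.3)
print(round(estimate))             # ballpark count of primes below 10^8
print(actual / estimate)           # off by only a few percent
```

So a random pick below 100 million is prime roughly once in 18 tries, and the mental-math shortcut lands within a few percent of the true count.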
For the second conclusion, let's look at some actual error values. The error in the first row is smaller than 800, the one in the second row is smaller than 1700, etc. This is clearly always smaller than the square root of n. Judging from "numerical evidence", from everything that was EVER calculated by a computer, it seems very clear that both conclusions are right. That would mean the following: You draw the yellow curve, the integral logarithm. Then you draw, in gray, a "safety zone" around it. And the width of this zone is exactly the square root of n. Based on the two conjectures we just made and based on the numerical evidence of all computations performed so far, the blue curve, the one we're really interested in, should always stay within the safety zone and it should always stay below the yellow curve. But that's not true! Already in 1914, long before computers were invented, an English mathematician called Littlewood proved that eventually two things must happen: The blue curve must eventually cross the yellow curve. And the blue curve must eventually leave the safety zone. Littlewood could even prove that these two events would happen infinitely often. But he couldn't say WHEN this would happen. He could prove, without doubt, that it would happen, but he couldn't figure out when. He delegated this problem to his doctoral student, a South African called Skewes. He asked him to try to find an estimate for when this will happen for the first time. When, for example, will the blue curve cross the yellow curve for the first time? And Skewes at least found an estimate for this. The number he computed was so incredibly large that it was named after him: the Skewes number. At that time, it was the biggest number that actually made some sense mathematically. Of course, everybody can make up numbers as large as they want. But a number of this size with an actual purpose was new in mathematics. By now, better estimates have been found. 
The current best estimate for when this will happen for the first time is: somewhere between 10 to the 19th and 10 to the 316th. The crucial point is: This is an area that is beyond what our computers can currently calculate. The table we've seen is at the border of how far computers can help you. They will of course get faster and then we'll touch this zone. Maybe we'll reach 10 to the 19th or 10 to the 20th. But we'll NEVER reach 10 to the 316th. The moral of this story is: It helps to do experiments with computers, but that won't give you authoritative answers. Authoritative answers come from mathematical proofs. And what Littlewood proved, long before computers were invented, is that the curves will cross, no matter what the numbers say. OK, it's time to return to the Riemann hypothesis. IF the Riemann hypothesis is true, you can make the safety zone a little bit thicker (some hand-waving here), you can extend the safety zone by a very thin "double safety zone", and the blue curve is then guaranteed never to leave this zone. IF the Riemann hypothesis is true! So, if the Riemann hypothesis is true, the error estimate using the square root is pretty close to the correct estimate. And in that case you can really write down the correct estimate. But I won't do this now, it's a long formula. But in that case you can not only say that the two curves will approach each other, you can also quantify how far they'll be apart from each other. For what it's worth, here's something not directly related to the Riemann hypothesis, but to what Riemann worked on: There are other ways to approximate the blue curve. The canonical way, so to say, is the yellow curve, the integral logarithm. There's another estimate that's easier to compute. Just divide n by the natural logarithm of n. [Video is wrong!] This'll give you the green curve shown here. This is also asymptotically equivalent to π. 
Riemann himself found an even better estimate which is nowadays called Riemann's R-function. His idea was to start with uncle Gauß's integral logarithm and to then compute its value not only at n but also at the square root of n. Multiply it by one half and subtract it from what we already have. Then again at the cube root of n and subtract a third of this. And so on. The sign will in turn depend on another, somewhat complicated function called the Möbius function. If you compute this function R, you get this curve. This is obviously an even better estimate. But the practical value of this function is limited. Computing the signs and so on is costly while the yellow and the green function can be computed pretty fast. What we're really interested in is the yellow curve. OK, so much for the first consequence of the Riemann hypothesis. The error is relatively small. The second consequence was the one about the seemingly random weed-like behavior of the prime numbers. So, this one's about randomness. And now you can participate. Please get a coin out of your pocket. If you don't have any coins, ask your neighbor. Because it's almost Christmas Eve, maybe you don't have to give it back. Now please flip the coin once and let it lie on your table in front of you. Heads is 1, tails is 2. Now, could you all please, in turn, tell me your numbers? Not so fast! What we have here is what mathematicians call a "random walk". The coin flips decide whether to go up or down. And the result is a funny line like this. We just had a practical example. As we don't want to flip coins all day, I've automated this process. I've written a program which simulates what we just did using a random number generator. We can also make it run faster. It always starts anew and draws new random walks. I did this to demonstrate the power of probability theory. We'll soon need it. One thing I can do: (I'll slow it down a bit first.) I can draw a copy at the bottom using a larger scale.
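Going back a step: the R-function just described can be sketched in a few lines (my own code, not from the talk; Li is again normalized as the integral from 2 to x, which only shifts the result by a small constant):

```python
import math

def Li(x, steps=100_000):
    """Midpoint-rule approximation of the integral from 2 to x of dt / ln(t)."""
    if x <= 2:
        return 0.0
    h = (x - 2.0) / steps
    return h * sum(1.0 / math.log(2.0 + (i + 0.5) * h) for i in range(steps))

def mobius(k):
    """Moebius function: 0 if a square divides k, else (-1)^(number of prime factors)."""
    result, d = 1, 2
    while d * d <= k:
        if k % d == 0:
            k //= d
            if k % d == 0:
                return 0          # a squared factor: mu is 0
            result = -result
        d += 1
    if k > 1:
        result = -result          # one prime factor left over
    return result

def riemann_R(n, terms=40):
    """R(n) = sum over k of mu(k)/k * Li(n^(1/k)); terms with n^(1/k) <= 2 vanish."""
    return sum(mobius(k) / k * Li(n ** (1.0 / k)) for k in range(1, terms + 1))

R_estimate = riemann_R(10 ** 8)
print(round(R_estimate))   # compare with pi(10^8) = 5,761,455 from the table
```

With the crude numerics used here, R(10^8) still lands within a couple of hundred of the true prime count, much closer than Li alone.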
But at the bottom the curve only shows how much the random walk deviates from the base line. The higher it is, the farther away from the base line it is. Right now, we have a pretty large amplitude. When the walk has finished and starts anew, we draw the average of the last amplitude and this one. So, the blue line at the bottom "remembers" the last walk and computes the average of both. And now, with the third walk, it computes the average of all three. You are probably expecting that the curve will level out over time. I'll increase the speed again. We have very many random walks now. And you see the curve really levels out and approaches the orange curve. I knew this would happen because this is a consequence of the central limit theorem of probability theory. In a way you can predict what will happen. The orange curve is, by the way, the square root of 2n over π. There we have π again. This is related to the normal (or Gaussian) distribution which, by the way, was also on the 10 mark bill. And I can show you another thing. I'll remove this first. And I slow it down again. I outlined two areas here. The gray area on the left, a bit hard to see, which looks a bit like a triangle; that's the area where, at least theoretically, the random walk could be located. Imagine everyone had had heads when we flipped coins. The walk would have gone up in each step and it would have followed the thin gray line. But what really happens is: Heads and tails even out, so to say. The random walks never veer away a lot from the orange area. That's called the standard deviation in probability theory. And you can quantify how much such a curve can deviate from the standard deviation. With a bit of hand-waving: If you make the orange area a little bit wider and then divide the random walk by the orange curve, then the quotient will converge to zero with a probability of 100 percent. What does that have to do with prime numbers?
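The square-root-of-2n-over-π average can be checked with a quick seeded simulation (my own sketch, not the lecturer's program):

```python
import math
import random

random.seed(1)   # fixed seed, so this sketch is reproducible

def walk_endpoint(steps):
    """Endpoint of a coin-flip walk: +1 for heads, -1 for tails, summed up."""
    return sum(random.choice((1, -1)) for _ in range(steps))

steps, trials = 1000, 2000
mean_abs = sum(abs(walk_endpoint(steps)) for _ in range(trials)) / trials
expected = math.sqrt(2 * steps / math.pi)   # the orange curve at n = 1000
print(mean_abs, expected)                   # the two should be close
```

Averaging over more and more walks makes the agreement tighter, which is exactly the leveling-out seen in the animation.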
Let's talk about the Liouville function, invented by Joseph Liouville, a Frenchman, who lived at about the same time as Riemann. His function was meant as a tool to investigate the prime numbers. It's usually written as λ (lambda). It works like this: Take a number, for example 10, and write it as a product of its prime factors: 2 times 5. Count the number of factors, 2 in this case. If this count is even (like 2), the direction is "up". Now take another number, say 12. Again, write down the prime factors. We have 2 times 2 times 3. The number of factors is 3. If the count is odd (like 3), the direction is "down". You do this for every number: perform a prime factorization, count the factors in the product, go up or down depending on the parity (even or odd) of the count. Then you'll get the curve we already saw. I fooled you - it wasn't a random walk. It's the Liouville function. So, this curve tells us something about the prime numbers, but it looks like a random walk. And IF the Riemann hypothesis is true, then this curve not only looks like a random walk, it also behaves like one, mathematically. This means: What I just told you about the standard deviation, that you can exactly quantify how "hard" it is for a random walk to leave the orange area, applies to the Liouville function as well. In other words, in a certain well-defined way this function which describes the distribution of the prime numbers behaves exactly like a random walk. In that sense it is fair to say that the prime numbers "grow like weed among the natural numbers". You've probably heard that Einstein once said: "God doesn't play dice." That was about quantum mechanics. Einstein wasn't happy with how it was interpreted. There's a follow-up quote by a mathematician, Paul Erdős, who said: "God may not play dice with the Universe, but there's something strange going on with the prime numbers." Maybe randomness is really involved there. Let's summarize. 
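As an aside before the summary: the Liouville function just defined is easy to compute yourself (my own sketch). Counting prime factors with multiplicity is just repeated trial division:

```python
def liouville(n):
    """lambda(n): +1 if n has an even number of prime factors (counted with
    multiplicity), -1 if the number is odd."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1          # the remaining factor is itself prime
    return 1 if count % 2 == 0 else -1

print(liouville(10))        # 10 = 2 * 5, two factors: +1, "up"
print(liouville(12))        # 12 = 2 * 2 * 3, three factors: -1, "down"

# The partial sums of lambda draw the walk from the animation:
walk, height = [], 0
for n in range(1, 101):
    height += liouville(n)
    walk.append(height)
print(walk[:10])
```

Plotting `walk` for a large range produces exactly the kind of jittery line that passed as a random walk a moment ago.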
Two important consequences of the Riemann hypothesis. We've seen this slide already. The succession of prime numbers obeys strict rules. This means that if we estimate the prime counting function with the integral logarithm, the error is minimal. That'd be one consequence of the Riemann hypothesis. The estimate is as good as it possibly can be. The second consequence was: The prime numbers seem to appear randomly amidst the other numbers. That means that the Liouville function behaves in such a way that you can't distinguish it from a random walk using statistical tools. But if we know all this already, if we already have good numerical estimates, why bother with a proof? (If we forget about the million dollars for a second...) Why do mathematicians always want proofs? New insights! It looks as if, once the Riemann hypothesis is proved, there's a whole new country lurking behind it, full of new things to be discovered. It will surely be the case that a proof, once it arrives, will have to use new methods that nobody ever saw before. Typically, such "big proofs" produce lots of new questions and new theories. That's why a proof is so desirable. Now let's finally talk about what Riemann actually conjectured! The reason Riemann's eight pages were so important was that he did something completely new. He linked two branches of mathematics that so far seemed to have nothing in common. One branch is number theory, not Riemann's "home turf", which belongs to an area called "discrete mathematics", an area that deals with simple computations, with algebra, with things that consist of individual and separate units. The "staircase function" we saw is a typical example of "discrete mathematics". And then there's "continuous mathematics", Riemann's "native country", so to say. This part of mathematics deals with "stepless" phenomena like oscillations, smooth curves, derivatives, integrals, and so on. In his seminal paper, Riemann married these two branches. 
He demonstrated that one can solve number theory questions, from the left side, using methods from the right side. An idea nobody had before him. I'll try to explain in a few slides what he was doing there. He started with the so-called Euler product which originated in the 18th century. Euler was a very famous and very prolific Swiss mathematician. He wrote dozens of books and published hundreds of articles. At some point he went blind but he simply continued publishing scientific papers by dictating the contents to his secretary. In the beginning of this talk, I said you only need a pencil and paper for doing mathematics. It seems Euler didn't need that. Euler was the first to investigate this product. p runs through all primes and each factor of the product is 1 over 1 minus 1 over the square of the prime p. You need someone like Euler to come up with such a construct. The product starts as follows. The first primes are 2, 3, 5, 7, and 11. You now multiply... [see slide]. Now imagine you continue multiplying factor by factor indefinitely. Euler calculated the result... If you're already past the second semester, you'll probably see that each factor looks like the sum of a geometric series. Which means you can rewrite this factor like so. So, each factor can be written as an infinite sum. 1 plus... [see slide] In other words, I can write the Euler product like this, making it more complicated, in a way. I now have a product consisting of infinitely many factors. And each of these factors is itself an infinite sum. It doesn't seem easier now, does it? But we'll soon see that it is indeed easier now. Euler, at least, could cope with this and he found out, in the 18th century, that the product is π squared divided by 6. π again... But let's rather see how this can help us. And you can really figure this out using only simple high school math. We see here not infinitely many, but only the first three factors. And only the first three summands of each factor.
If I now expand this, I have three times three times three products, so we should have 27 products. You might want to check this at home. Forget the square in each term for now. Just concentrate on the products in the denominators. We have, more or less, all natural numbers here. Here's 1, here's 2, here's 3, here's 4, here's 5, here's 6. 7 is missing because we stopped at the third factor. But what you hopefully see is: If we continue like this, if we really expand all sums in all factors, then each number will occur once and only once in the expansion. This is of course a consequence of the fundamental theorem of arithmetic which says that each number has a unique prime factorization. This is something Euler saw. He saw that the product he started with can be written more easily as an infinite sum. Namely, as the sum of the reciprocals of all squares. That is very interesting, because on the left side we have a product that's defined using the prime numbers while the sum on the right has nothing to do with prime numbers. The sum runs through ALL numbers, not only through the primes. And in a certain sense, it turns out, it's easier to work with the sum on the right, because you don't have to worry about prime numbers. We'll now approximate this value to see what's happening. If I start writing down the summands of the series, the first term is 1 over 1 squared which is just 1. See the chart below. Then we have 1 over 2 squared which is a quarter. We've added a quarter to the chart as well. Now we have 1 over 3 squared which is a ninth. Another small chunk in the chart. Then 1 over 4 squared, i.e. 1 over 16. A tiny chunk is added to the chart. You can now imagine how we could carry on forever. But you know what to expect from your math lectures. As I already told you, the final result will be π squared divided by 6. So far, the exponent in the denominators has always been 2. But you can do this with other numbers as well. This sum is called ζ(2).
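Both sides of Euler's identity can be approximated numerically. Truncating the product at some prime bound and the sum at some index already shows the agreement (my own sketch):

```python
import math

def primes_up_to(limit):
    """Plain sieve of Eratosthenes."""
    flags = bytearray([1]) * (limit + 1)
    flags[0] = flags[1] = 0
    for p in range(2, int(limit ** 0.5) + 1):
        if flags[p]:
            flags[p * p :: p] = bytearray(len(flags[p * p :: p]))
    return [n for n, f in enumerate(flags) if f]

product = 1.0
for p in primes_up_to(10_000):
    product *= 1.0 / (1.0 - p ** -2)        # Euler's factors 1 / (1 - 1/p^2)

series = sum(1.0 / n ** 2 for n in range(1, 100_000))
print(product, series, math.pi ** 2 / 6)    # all three agree to several digits
```

The product only ever touches primes, the series touches every number, and both home in on the same value π²/6.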
And if you replace 2 with another number s, the new sum is called ζ(s) [zeta of s]. Look at the chart, this is the value I get for s=3, this is the value I get for s=4. You can even take it one step further and compute the values in between. You know from your math lectures that exponents don't necessarily have to be integers. So you end up with a curve like this. There's one catch, though. You can see it here, at the left margin. The curve ascends and we have a so-called pole there. The reason is that you can't apply ζ to 1 because that will give you the series 1 plus one half plus one third plus one quarter, and so on. And you might remember that this is the so-called harmonic series which doesn't converge. The result is "infinity", so to say, and that's why we have this pole. But all the values to the right can be computed. That's how far Euler already got. Riemann went one step further. Riemann knew a lot about complex analysis. He asked himself: "What happens if I apply this function ζ to a complex number s?" If you have never heard of complex numbers, I can only tell you that they live in the plane. And they have some funny properties. For example, multiplication is essentially rotation. I'll try to visualize this again. What happens, for example, if we apply ζ to 2+i? We compute the sum 1 over 1 to the 2+i,... [see slide] But what does "to the 2+i" mean? It's actually quite simple. The first summand is obvious. 1 to whatever is always 1. I can now pull apart the second term. It's one quarter times one half to the i. A quarter is like before. And one half to the i is a rotation in the complex plane. We have the same green chunk as before, but this time it's rotated. The rotation is due to the factor one half to the i. Raising a number to the power i means rotation. And when the number gets smaller, the angle of rotation gets bigger. So the next summand is a smaller chunk which has been rotated a bit more.
And the next one is an even smaller chunk which has been rotated even more. And if I continue like this I end up with some kind of spiral which looks like this. Again a number we can actually compute. So, it actually makes sense to apply the function ζ to complex values. But this only works as long as we stay on the right side of the "magical barrier" 1 we already encountered. We can't compute values to its left. This is what Riemann was working on. By the way, let me mention here that this graphical idea is from "3Blue1Brown" who creates nice math videos on YouTube. A week ago, when I had already prepared this talk, he released a video about the same subject that I liked so much that I had to program it myself. It describes the situation Riemann was in. This is the part of the complex plane where he was able to compute values of ζ. With a lot of effort, without computers. Here's 1. And to the right of 1, he could, in theory, compute all values. Here's a grid of some values to which ζ can be applied. And I'll now show you an animation of what ζ does to them. The problem always is that complex-valued functions can't really be visualized because you'd need four dimensions for that. But you can do it if you use time as the fourth dimension. So, again, these are the values to which ζ will be applied. And I'll now "start the clock" so we can see how the animation transforms them and moves them to the function values. This looks pretty nice, actually. Watch! What does ζ do to these values? That's what happens. I'll show it again. The lines are more densely arranged near 1. This is on purpose. That's because 1 is the critical value. And the nearer the values are to 1, the more they are pushed away from each other. That's because I'm nearer to the pole. You might want to concentrate on those lines. The lines which are farther away will be pulled together instead. Again, this is where Riemann was in his investigations. 
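The spiral for ζ(2+i) is just the sequence of partial sums of the series. Since the real part 2 is bigger than 1, the series converges absolutely, and Python's built-in complex numbers do the rotating for us (my own sketch):

```python
import math

def zeta_partial(s, terms):
    """Partial sum of 1/1^s + 1/2^s + ...; usable as long as Re(s) > 1."""
    return sum(1.0 / n ** s for n in range(1, terms + 1))

value = zeta_partial(2 + 1j, 100_000)
print(value)                                # the endpoint of the spiral

# Sanity check at s = 2, where we know Euler's value:
print(zeta_partial(2, 100_000), math.pi ** 2 / 6)
```

Printing the intermediate partial sums instead of just the last one traces out exactly the spiral from the animation.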
And somehow, this beautiful transformation looks as if it WANTS to be extended to the whole complex plane. You definitely want to see what's happening on the left side, how this would look. And complex analysis has methods to do something like this. It's not easy, though. If you have a function that's defined on a part of the complex plane, and if it "behaves properly" there, for example, it should be differentiable there, then it might be possible to extend this function to the whole complex plane. That's called "analytic continuation". And IF this is possible, then there's only one unique way to do it. And of course Riemann knew this. That means that this function, which so far is only defined on the right side of 1, already contains all the information about its (potential) behavior on the left side. Riemann figured this out. He was able to compute the analytic continuation of ζ. So, ζ is now defined everywhere, except for this one pole at 1. And if we now run the animation again, it looks like this. We now have a beautiful symmetric transformation. This is what Riemann's ζ function does on the set of all complex numbers. So, one of the main achievements of Riemann was to find, with rather complicated means, this analytic continuation of ζ. Maybe one thing to demonstrate that he really did the right thing. According to the theory of complex analysis, the analytic continuation of ζ must be "holomorphic". Which means it must be differentiable everywhere. In the complex plane that implies that angles must be preserved. Note that the horizontal lines all have reddish colors. And the vertical lines have "cold" colors like green and blue. And they are of course perpendicular. If I now apply ζ again, the red curves must afterwards still be perpendicular to the green and blue curves. And you can see this is really the case. If you pick an intersection somewhere, then the tangents are perpendicular at that point.
This demonstrates that this is a holomorphic function. This is how ζ looks if you watch the complex plane from "above". Because we are restricted to three dimensions, we can't show everything. But at least we can see the absolute value. One of the things you see here are the two green points. Those are the first two values we computed, this is ζ(2) and this is ζ(3). The values Euler calculated. And this black line here is the real axis. Here we see the pole at 1 which is cut off because it has infinite height. The other black line is the imaginary axis. But actually this isn't the most interesting part. That's why I'll switch to "bird's eye view" now and move further away. Let's see what's happening over there. It looks like this. The small part at the bottom is what we saw earlier. If you go "higher up" in the direction of the imaginary axis, you'll notice some "holes" there. Those are zeros (roots). And they are what the hypothesis is about. They are the values Turing wanted to compute in 1950. He wanted to find out their exact positions. In the announcement to this talk, I promised to only present formulas to show off. Here's one way to compute Riemann's ζ function everywhere, not only to the right of 1. Looks a bit complicated at first sight, but it ain't that bad. It's actually quite easy to convert this formula into a computer program. This is how it would look in Python. The formula isn't well-suited to compute ζ fast and efficiently. But one can use it to calculate some example values. I'll do this now. Here's ζ(2), for example. Euler said this should be π squared divided by 6. Let's see: pi * pi / 6. As you can see, this is really the same value, apart from rounding errors. We can compute, say, ζ(1). The result is "infinity". That's the pole. What I actually wanted to show you: If you compute ζ(-1), the result is, expressed as a fraction, negative one twelfth (-1/12).
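The exact formula and program from the slides aren't reproduced here, so as a stand-in here is one well-known globally convergent series for ζ, due to Hasse, which also works everywhere except at the pole (my own sketch, not the lecturer's program):

```python
import math

def zeta(s, terms=60):
    """Hasse's globally convergent series for the Riemann zeta function;
    valid for every complex s except the pole at s = 1."""
    if s == 1:
        raise ValueError("pole at s = 1")
    total = 0.0
    for n in range(terms):
        # inner alternating binomial sum, the n-th "finite difference"
        inner = sum((-1) ** k * math.comb(n, k) * (k + 1) ** (-s)
                    for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total / (1 - 2 ** (1 - s))

print(zeta(2), math.pi ** 2 / 6)   # Euler's value, pi^2 / 6
print(zeta(-1))                    # the analytic continuation gives -1/12
print(zeta(-2))                    # one of the "trivial" zeros
```

Like the formula in the talk, this is not a fast way to evaluate ζ, but it happily produces the example values, including the continuation to the left of 1.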
You'll sometimes find videos on YouTube which claim that if you compute 1 plus 2 plus 3 plus 4 and so on the result will be negative one twelfth (-1/12). This is what these videos are based on. But what they claim is not true. If you apply the ORIGINAL ζ function to -1, then you will indeed compute 1+2+3+4+... But this series doesn't converge and the original ζ isn't defined at -1. What we are really doing is: We apply the analytic continuation of ζ to -1 and get -1/12. But that's not the sum 1+2+3+4+... anymore. You can't just say that 1+2+3+4+... IS -1/12 without explaining the context and without admitting that you're not really computing a series. Just in case you come across one of those videos. Don't believe what's on the Internet, trust me instead... :) There are other ways to compute values of ζ. But I won't delve into that. There are many things here you might not have seen yet, contour integration, the gamma function, etc. I just wanted to show that there are ways to compute the values of ζ. But what is actually important about ζ? We just looked at its absolute values from above. We'll now look at its arguments from above. The argument (phase) of a complex number is its angle. It looks like this. If viewed from above, these are the angles of ζ's values. The function looks quite boring on the right. The interesting stuff happens on the left. You can spot a few things if you know where to look. Here's the pole, the smokestack we saw earlier. It's at 1. One thing that's relatively easy to compute (if you know the math, that is) is that all of these are zeros of ζ. That's at -2, -4, -6, -8, and so on. Mathematicians call these the "trivial zeros". Mathematicians call things "trivial" once they've understood them. It's pretty easy to compute that ζ is zero for negative even integers. And it turns out these zeros aren't that important. But... there are more zeros. Up here. Those are the interesting zeros. And there are more of these.
That's why we'll have a closer look at this part where three of them are located. I'll zoom in a bit. Now it looks like this. There we have the three zeros. That's what Riemann was interested in. By now we know, because it was proven: All zeros of the ζ function which are not trivial lie inside this strip. It's called the "critical strip". Its width is 1, i.e. it extends from 0 to 1 and it runs along the imaginary axis. Again, all nontrivial zeros lie inside this strip. That's a fact, it has been proved. And there's more we know. There are infinitely many nontrivial zeros. And they are symmetric with respect to the real axis. All of this is well-known. And here's Riemann's conjecture: He conjectured that all these zeros lie on this line. In other words, all nontrivial zeros lie exactly in the middle of the critical strip. That's the Riemann hypothesis! And if it's really true that all nontrivial zeros lie on this "critical line", then all the stuff I told you follows. It's just this "little fact" that nobody could prove so far. What has been done: Computers have been utilized to compute many millions of zeros. And they really all lie on the critical line. But remember the Littlewood story I told you earlier. That you have millions of values "confirming" something doesn't necessarily imply that it's true. There's a lot of evidence in favor of the Riemann hypothesis, but nobody knows for sure. But at least you now know what Riemann conjectured. Another visualization. To demonstrate the beauty lurking in this function I'll now "walk" along this critical line. The blue point on the left is on the critical line and will slowly move upwards. The point is not at the origin, but 1/2 to the right of it. At the same time, the coordinate system on the right will show you the value of ζ applied to this point. Every now and then this value will go through the origin, in which case it will briefly flash up. That always means we've met one of the zeros.
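One small piece of this is cheap to verify yourself: the symmetry with respect to the real axis comes from the fact that ζ applied to a conjugate gives the conjugate value. A simple (though slowly converging) way to evaluate ζ inside the critical strip is via the alternating Dirichlet eta series; the code is my own sketch, and the truncation error is clearly visible, but it is identical on both sides of the axis:

```python
def zeta_via_eta(s, terms=20_000):
    """zeta(s) = eta(s) / (1 - 2^(1-s)), with eta the alternating series
    1 - 1/2^s + 1/3^s - ...; converges (slowly) for Re(s) > 0, s != 1."""
    eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

a = zeta_via_eta(0.5 + 5j)
b = zeta_via_eta(0.5 - 5j)
print(a, b)                    # mirror images of each other
print(abs(a - b.conjugate()))  # essentially zero
```

So if ζ vanishes somewhere above the real axis, it must vanish at the mirrored point below it as well, which is exactly the symmetry of the zeros mentioned above.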
And so you can see the whole rigmarole the function goes through. Remember: s on the left, ζ(s) on the right. Here it went through the origin. That was the first zero. And again. And again. And... again. And while we're watching this, I can tell you some more fun facts about ζ. A Russian named Voronin proved in the 1970s that the Riemann ζ function is, in a sense, maximally chaotic. That means, with some hand-waving, that the critical strip contains a copy of every other holomorphic function. So, the critical strip is very entertaining. It's not really predictable. With minimal restrictions, every holomorphic function you can imagine appears in the critical strip as an approximation. Watching the point here, you can already see that it seems to have a lot of fun. Each time we pass the origin, we mark one of those famous zeros of ζ, the first 1,000 of which Turing computed back then. We're almost through. In the last section, I'd like to talk about how to interpret this and why the zeros are important. What do they mean? The Euler product demonstrated their connection to the primes, but at a certain point one tends to forget that the ζ function originated from this formula. And one just stares at this complicated and chaotic complex-valued function. To understand this, I'll give you a two-minute crash course in Fourier analysis. Some of you know this already. Simple stuff. Well, not simple at all, but I'll try to make it simple. Fourier, a Frenchman, started with this a few decades before Riemann's work. Fourier developed the theory to investigate thermal conduction. Fourier, by the way, was the first person to call attention to the so-called greenhouse effect. That was in the 19th century and based on purely theoretical considerations. Although Fourier analysis was originally intended to investigate other physical phenomena, I think it can be best explained in the context of music. If you hear music, that's a wave. Something generates a sound. 
This "something" oscillates and thus displaces air molecules. Those air molecules pass the pressure on to their neighbors. And because the source of the sound oscillates, the pressure varies. This varying pressure wave arrives at your ear. The frequency of this wave, the speed at which it oscillates, determines the tones you hear. You could, for example, use a tuning fork. Tuning forks are used because they have almost no overtones. They generate almost pure tones. There are no really pure tones in nature. A pure tone would be like this sine wave. What you usually hear are fundamental frequencies together with their overtones. That's the reason why different musical instruments make different sounds. If you play the same note on an oboe or a piano, they sound totally different although it's the same note. This is because the different constructions and the different ways of sound generation stimulate different overtones. That's what we call timbre or tone color. Here's an example of an overtone. In addition to the original, orange oscillation, you could have a curve with twice the frequency. That's called an octave: you have one tone and another one which oscillates with twice the frequency. You could also have a third tone oscillating with three times the base frequency. That gives you a ratio of 2:3, called a fifth. Here we have all three tones together, octave and fifth. To summarize, a sound that we hear is usually composed of several oscillations. It's not as easy as it looks, though. The individual oscillations don't do us the favor of arriving individually at our ear. We hear just ONE sound. So, in reality, the oscillations add up. What arrives at your ear looks like this, IF there are only two tones. With three tones, you'd end up with a sum of three waves, looking like this. If you have good ears, you can nevertheless "sound out" the individual tones. If you're a trained musician, you can say something like: "I hear a fifth." 
You were able to sound out two tones from the mixture of sounds. Fourier analysis does essentially the same thing, but with mathematical methods. It works like this: You start with a "mixed curve" as input. And the Fourier analysis will extract the individual "tones". You'll get the so-called "spectrum". I'll skip the mathematical details, but with the input on the left, you'd get the result on the right, showing three frequencies. That's what Fourier analysis does for you. And you can go back and forth. You can, on the one hand, disassemble a sound into its spectrum. But on the other hand, you can also reassemble the sound from its spectrum. The spectrum on the right is kind of a "construction manual" for the sound. That was the two-minute crash course. What happens if your oscillations are more complicated? It becomes difficult to create a spectrum if your input is a function which isn't smooth. Here's a function with spikes. A so-called sawtooth wave. If you've done Fourier analysis before, you will know that you can still compute a spectrum for this function. The catch is, though, that you will need infinitely many frequencies. That's what the dots stand for. So you can only reassemble the original curve if you use infinitely many "building blocks". If you only use a finite number of blocks, you end up with an approximation. We'll look at examples. Note the frequencies on the right. The 1st, 2nd, and 3rd; the 4th is almost invisible. Imagine this going on forever. I'll use just the first two frequencies now. And I'll "reassemble" them. That'll give me this curve. It tries to look like the original, but fails. With a finite number of components, I'll always get a smooth result. But I want a "spiky" curve. Now I'll use three frequencies. I get this curve as a result. Better than before, but still smooth. It'll never be like the original. But you will be able to imagine: the more frequencies I use, the better my approximation becomes. 
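[Editor's note: the "extract the tones from the mixture" step can be shown in a few lines. This is my own standard-library sketch, not the talk's: a naive discrete Fourier transform applied to a made-up mixture of a tone and its octave, with the frequencies chosen to fall exactly on DFT bins so there is no leakage.]

```python
import cmath
import math

def dft(samples):
    # Naive discrete Fourier transform, O(N^2) -- fine for a demo.
    N = len(samples)
    return [sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# One "second" of a mixed curve: a 5 Hz tone plus its octave at 10 Hz
# (half as loud), sampled 256 times.
N = 256
signal = [math.sin(2 * math.pi * 5 * n / N)
          + 0.5 * math.sin(2 * math.pi * 10 * n / N)
          for n in range(N)]

spectrum = dft(signal)
# Peaks in the first half of the spectrum reveal the ingredients:
peaks = [k for k in range(N // 2) if abs(spectrum[k]) > 10]
print(peaks)  # [5, 10]
```

That list of peak frequencies is exactly the "construction manual": feeding it back into an inverse transform reassembles the original curve.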
But I need ALL of them to really have a match. Now the primes must make an appearance again. This is the last mathematician to be introduced today. Chebyshev, a Russian, another contemporary of Riemann. He invented another way to investigate the primes, the (second) Chebyshev function ψ (psi). I'll explain it using an example. You want ψ(20). You look at the numbers up to 20 and take only those which are primes or powers of primes. Let's start with the prime number 2. Below 20 is 2 itself, but also 2 squared, 2 cubed, and 2 to the 4th. So, 4 powers of 2 below 20. To compute ψ(20), we thus have to take the (natural) logarithm of 2 and multiply it by 4. The logarithm is used for certain reasons related to "weighting" the summands. Now the prime 3 which appears twice, itself and its square. So we add the logarithm of 3 twice. Then we have the primes 5, 7, 11, 13, 17, and 19, which all appear only once. So we add the corresponding logarithms. Then you compute the sum, and the result is approximately 19.2657. In case you're interested in some homework, you could calculate ψ in another way. I won't prove this here, but you could first calculate the least common multiple of all integers up to x and then take the logarithm. The result must be the same. And it is. But let's draw Chebyshev's ψ function. Again, one of those "staircase functions". Like with the staircase we've already seen, this one too contains everything you need to know about the prime numbers. I'll show you an intimidating formula now, but don't worry. The formula will show a connection between Chebyshev's ψ and the zeros of Riemann's ζ. That's this formula here. On the left-hand side is Chebyshev's staircase which we just saw. The right side starts with x itself. That's not really surprising as the staircase looks almost like the identity function. And then we have "correction terms". First you subtract a constant term, log(2π). 
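[Editor's note: both routes to ψ(20) described above fit in a short standard-library sketch of my own; the function names are mine. The first sums log(p) over all prime powers up to x, the second takes the logarithm of lcm(1, ..., x) as in the "homework".]

```python
import math

def psi(x):
    # Second Chebyshev function: add log(p) once for every prime power
    # p, p^2, p^3, ... that is <= x.
    total = 0.0
    for p in range(2, x + 1):
        if all(p % d != 0 for d in range(2, math.isqrt(p) + 1)):  # p prime?
            power = p
            while power <= x:
                total += math.log(p)
                power *= p
    return total

def psi_via_lcm(x):
    # The "homework" route: psi(x) = log(lcm(1, 2, ..., x)).
    l = 1
    for n in range(2, x + 1):
        l = l * n // math.gcd(l, n)
    return math.log(l)

print(round(psi(20), 4))          # 19.2657
print(round(psi_via_lcm(20), 4))  # 19.2657
```

Note that the sum only reaches 19.2657 if the prime 19 is included alongside 5, 7, 11, 13, and 17.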
Then you subtract this strange sum, where ρ (rho) runs through all zeros of the ζ function. The result will be Chebyshev's function. So, IF you knew all zeros of Riemann's ζ function, you could calculate Chebyshev's ψ function exactly. We can make this sum seemingly more complicated to clarify what's happening. You do this by remembering that there are two kinds of zeros: the "boring" (trivial) ones and the interesting zeros in the critical strip. So we split the sum into two parts, one for the trivial and one for the nontrivial zeros. The result looks like this. The first terms didn't change. The second logarithm comes from the trivial zeros. It's also trivial, and pretty small too. The sum for the nontrivial zeros looks a bit more demanding. The most important part, for us, is: There's a cosine function. Which means we could interpret the last sum as some kind of spectrum. Like we did in Fourier analysis. Imagine that Chebyshev's staircase is something like the sawtooth wave we saw earlier. And we want to approximate it using frequencies coming from the zeros of Riemann's ζ function. I'll show you a picture. There's a "DC component", like in Fourier analysis. Then there's the "first tone". The first summand in this sum, the one for k=1 (the first zero), will give us this "first tone". "DC component", "first tone". Here's the "second tone", coming from the second zero. And so on. And now you add up all these waves. Then you'll get a curve as a result. It's like the "construction manual" in Fourier analysis. And it'll look like this: Here's Chebyshev's function which we want to approximate. The "DC component" is the zeroth approximation. It looks like so. Now the "DC component" plus the "first tone". Looks like this. Now you add the "second tone". Which gives you this curve. Continuing up to the "fifth tone", it'll look like this. And like this for the "tenth tone". It looks even better with "higher tones", so I've animated this. Here's ψ again. 
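[Editor's note: the truncated explicit formula can be tried out directly. This standard-library sketch is mine, not the talk's; it hard-codes the imaginary parts of the first ten nontrivial zeros, which are well-known tabulated constants, truncated to six decimals. With only ten "tones" the match to ψ(20) ≈ 19.2657 is rough, exactly as in the animation.]

```python
import math

# Imaginary parts of the first ten nontrivial zeros (known constants,
# truncated); all have real part 1/2 if the Riemann hypothesis holds.
ZERO_ORDINATES = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
                  37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def psi_explicit(x, n_zeros=10):
    # Truncated explicit formula: x, minus log(2*pi), minus the (tiny)
    # trivial-zero term, minus one "tone" per nontrivial zero
    # rho = 1/2 + i*t. Zeros come in conjugate pairs, hence 2 * Re(...).
    value = x - math.log(2 * math.pi) - 0.5 * math.log(1 - x ** -2)
    for t in ZERO_ORDINATES[:n_zeros]:
        rho = complex(0.5, t)
        value -= 2 * (x ** rho / rho).real
    return value

print(psi_explicit(20))  # compare with the exact psi(20) ≈ 19.2657
```

Adding more zeros tightens the fit; with hundreds of them the smooth sum of "tones" hugs the staircase, up to the Gibbs-style wiggles at the jumps.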
I'll just let this run, adding more and more "tones" in the process. Please watch and see how this approximates our staircase. At the bottom you see how many zeros of ζ we've used so far. I'll increase the speed a bit. The curve really huddles against the staircase. We're still seeing some spikes here. This is the so-called Gibbs phenomenon, in case you're familiar with Fourier analysis. But other than that, we are really approaching the staircase. We're at 200 zeros already. This is further evidence for the importance of the zeros of Riemann's ζ function for the prime numbers, as this is once again one of those "prime number staircases". And obviously this staircase can be approximated using the "oscillations" coming from the roots. This is the formula we just had. This one's correct and has been proven. IF the Riemann hypothesis is true, all zeros appearing in this formula will lie on the critical line. So, the exponent here would always be one half. So we'd have the square root of x here. Remember our error estimate from the beginning of the talk. That's the reason for the error being so small if the Riemann hypothesis is true. But there's a "musical interpretation" as well. The "tones" we just saw, look like this (with a different scale). This is the "first tone". If the Riemann hypothesis is true, the wave can't leave the gray area. What does this "tone" do? The frequency gets smaller and smaller, so it starts with a high pitch and then gets lower. At the same time it gets louder: the amplitude is increasing. But if the Riemann hypothesis holds, it only gets louder in a controlled fashion. It has to stay within the gray "square root sound level". The same holds for the "second tone". Here too the pitch gets lower. And the volume increases. Of course this holds for all "tones". So, if the Riemann hypothesis holds, all these "tones" behave in the same way. They'll have decreasing frequencies and increasing volume even without the Riemann hypothesis. 
But if the hypothesis is true, none of them can "break ranks" and be louder than the others. The Riemann hypothesis imposes the same "volume rules" on all "tones". A poetic way to express this is: If Riemann was right, the music of the primes is harmonic. And because mathematicians care a lot about their results being esthetic, I think this is a nice conclusion. This is certainly one of the reasons all mathematicians hope that someone will eventually prove this. Because we'll then see how everything is connected in the most beautiful way. If you want to know more about this: (And this includes some ideas I used for this talk.) There are many good books about the Riemann hypothesis. I've selected three for this slide. I'll start at the bottom. The third one is of the "popular science" category. Marcus du Sautoy does great math documentaries for the BBC. The book's title is where I stole one of my section titles from. The book contains some mathematics, but it's mostly about the history and the persons involved. The other two books, both pretty new, are intended for readers who want to "think along" and do their own experiments. But they don't expect you to have specific math knowledge. High school math should suffice. Maybe after this talk you're up for more details. Any questions?

Special awards

Independent Press Award: Suspect Thoughts Press, Bella Books

Nominees and winners

Category Winner Nominated
Anthologies/Fiction
Edmund White and Donald Weise, eds., Fresh Men: New Voices in Gay Fiction
  • Angela Brown, Best Lesbian Love Stories 2004
  • Peter Burton, Serendipity: The Gay Times Book of New Stories
  • Clint Catalyst and Michelle Tea, Pills, Thrills, Chills, and Heartache: Adventures in the First Person
  • Lori L. Lake, The Milk of Human Kindness: Lesbian Authors Write About Mothers & Daughters
Anthologies/Non-Fiction
Greg Wharton and Ian Philips, eds., I Do/I Don't: Queers on Marriage
Autobiography/Memoir
Alison Smith, Name All the Animals
Biography
Alexis De Veaux, Warrior Poet: A Biography of Audre Lorde
Children's/Young Adult
Alex Sánchez, So Hard to Say
Drama
Doug Wright, I Am My Own Wife
  • Donald Reuter, Fabulous!
  • David Gere, How to Make Dances in an Epidemic
  • Sharon Bridgeforth, love conjure/blues
  • Claude J. Summers, The Queer Encyclopedia of Music, Dance and Musical Theater
Erotica
Richard Labonté, Best Gay Erotica 2005
Gay Debut Fiction
Blair Mastbaum, Clay's Way
Gay Fiction
Colm Tóibín, The Master
Gay Mystery
Anthony Bidulka, Flight of Aquavit
  • Greg Herren, Jackson Square Jazz
  • John Morgan Wilson, Moth and Flame
  • Gary Zebrun, Someone You Know
  • Dorien Grey, The Role Players
Gay Poetry
Luis Cernuda, Written in Water
Humor
David Sedaris, Dress Your Family in Corduroy and Denim
Lesbian Debut Fiction
Judith Frank, Crybaby Butch
  • Mary Vermillion, Death by Discount
  • Kristie Helms, Dish It Up, Baby!
  • Laurinda D. Brown, Fire & Brimstone
  • Bridget Bufford, Minus One: A Twelve-Step Journey
Lesbian Fiction
Stacey D'Erasmo, A Seahorse Year
Lesbian Mystery
Katherine V. Forrest, Hancock Park
  • Ellen Hart, An Intimate Ghost
  • Jennifer Jordan, Commitment to Die
  • Mary Vermillion, Death by Discount
  • Claire McNab, The Wombat Strategy
Lesbian Poetry
Beverly Burch, Sweet to Burn
LGBT Studies
Elisabeth Kirtsoglou, For the Love of Women: Gender, Identity and Same-Sex Relations in a Greek Provincial Town
Photography/Visual Arts
Evan Bachner and Harry Abrams, At Ease: Navy Men of World War II
Romance
Steve Kluger, Almost Like Being in Love
  • Karin Kallmaker, All the Wrong Places
  • Chris Kenry, Confessions of a Casanova
  • Gerri Hill, Gulf Breeze
  • Marianne Martin, Under the Witness Tree
Religion/Spirituality
Will Roscoe, Jesus and the Shamanic Tradition of Same-Sex Love
  • Randy Conner and David Hatfield Sparks, Queering Creole Spiritual Traditions
  • Marvin M. Ellison, Same-Sex Marriage: A Christian Ethical Analysis
  • Donald L. Boisvert, Sanctity and Male Desire
  • Steven Greenberg, Wrestling with God & Men
Science Fiction/Fantasy/Horror
Jim Grimsley, The Ordinary
  • Michael Jensen, Firelands
  • Greg Herren, Shadows of the Night: Queer Tales of the Uncanny and Unusual
  • Jean Stewart, The Wizard of Isis
  • Nicola Griffith, With Her Body
Transgender
Mariette Pathy Allen, The Gender Frontier


This page was last edited on 8 December 2018, at 01:14