
National Register of Historic Places listings in Manhattan from 14th to 59th Streets

From Wikipedia, the free encyclopedia


This is intended to be a complete list of properties and districts listed on the National Register of Historic Places on Manhattan Island, the primary portion of the New York City borough of Manhattan (also designated as New York County, New York), from 14th to 59th Streets. For properties and districts in other parts of Manhattan, whether on Manhattan Island, other islands within the borough, or the neighborhood of Marble Hill on the North American mainland, see National Register of Historic Places listings in Manhattan. The locations of National Register properties and districts (at least for all showing latitude and longitude coordinates below) may be seen in an online map by clicking on "Map of all coordinates".[1]

This National Park Service list is complete through NPS recent listings posted July 2, 2021.[2]


YouTube Encyclopedic

  • 7. Value At Risk (VAR) Models
  • Grand Central Station: How a Train Transformed America

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu.

KENNETH ABBOTT: As I said, my name is Ken Abbott. I'm the operating officer for Firm Risk Management at Morgan Stanley, which means I'm the everything else guy. I'm like the normal stuff with a bar over it. The complement of normal-- I get all the odd stuff. I consider myself the Harvey Keitel character. You know, the fixer? And so I get a lot of interesting stuff to do. I've covered commodities, I've covered fixed income, I've covered equities, I've covered credit derivatives, I've covered mortgages. Now I'm also the Chief Risk Officer for the buy side of Morgan Stanley: the investment management business and the private equity holdings that we have. And I look after a lot of that stuff, and I sit on probably 40 different committees, because it's become very, very, very bureaucratic. But that's the way it goes. What I want to talk about today is some of the core approaches we use to measure risk in a market risk setting. This is part of a larger course I teach at a couple of places. I'm a triple alum at NYU-- no, I'm a double alum, and now I'm on their faculty [INAUDIBLE]. I have a masters in economics from their arts and sciences program. I have a masters in statistics from Stern, when Stern used to have a stat program. And now I teach at [INAUDIBLE]. I also teach at Claremont, and I teach at [INAUDIBLE], part of that program. So I've been through this material many times. So what I want to do is lay the foundation for this notion that we call risk, this idea of VaR. [INAUDIBLE] put this back on. Got it. I'll make it work.
I'll talk about it from a mathematical standpoint and from a statistical standpoint, but also give you some of the intuition behind what it is that we're trying to do when we measure this thing. First, a couple of words about risk management. What does risk management do? 25 years ago, maybe three firms had risk management groups. I was part of the first risk management group at Bankers Trust in 1986. No one else had a risk management group as far as I know. Market risk management really came to be in the late '80s. Credit risk management had obviously been around in large financial institutions the whole time. So our job is to make sure that management knows what's on the books. So step one is, what is the risk profile of the firm? How do I make sure that management is informed about this? So it requires two things. One, I have to know what the risk profile is, because I have to know it in order to be able to communicate it. But the second thing, equally important, particularly important for you guys and girls, is that you need to be able to express relatively complex concepts in simple words and pretty pictures. All right? Chances are if you go to work for a big firm, your boss won't be a quant. My boss happens to have a degree from Carnegie Mellon. He can count to 11 with his shoes on. His boss is a lawyer. His boss is the chairman. Commonly, the most senior people are very, very intelligent, very, very articulate, very, very learned. But not necessarily quants. Many of them have had a year or two of calculus, maybe even linear algebra. You can't show them-- look, when you and I chat and we talk about regression analysis, I could say x transpose x inverse x transpose y. And those of you that have taken a refresher course think, ah, that's beta hat. And we can just stop it there. I can just put this form up there and you may recognize it. I would have to spend 45 minutes explaining this to people on the top floor, because this is not what they're studying.
So we can talk the code amongst ourselves, but when we go outside our little group-- getting bigger-- we have to make sure that we can express ourselves clearly. That's done in clear, effective prose, and in graphs. And I'll show you some of that stuff as we go on. So step one, make sure management knows what the risk profile is. Step two, protect the firm against unacceptably large concentrations. This is the subjective part. I can know the risk, but how big is big? How much is too much? How much is too concentrated? If I have $1 million of sensitivity per basis point-- that's a 1/100th of 1% move in a rate-- is that big? Is that small? How do I know how much? How much of a particular stock issue should I own? How much of a bond issue? How much futures open interest? How big a limit should I have on this type of risk? That's where intuition and experience come into play. So the second part of our job is to protect against unacceptably large losses. Step three: no surprises. You can liken the trading business to taking calculated risks. Sometimes you're going to lose. Many times you're going to lose. In fact, if you win 51% of the time, life is pretty good. So what you want to do is make sure you have the right information so you can estimate, if things get bad, how bad will they get? And to do that, we leverage a lot of relatively simple notions that we see in statistics. And so I should use a coloring mask here, not a spotlight. We do a couple of things. Just like the way they talk about the press in your course about journalism, we can shine a light anywhere we want, and we do all the time. You know what? I'm going to think about this particular kind of risk. I'm going to point out that this is really important. You need to pay attention to it. And then I could shade it. I can make it blue, I can make it red, I can make it green. I'd say this is good, this is bad, this is too big, this is too small, this is perfectly fine.
So that's just a little bit of quick background on what we do. So I'm going to go through as much of this as I can. I'm going to fly through the first part, and I want to hit these, because these are the ways that we actually estimate risk: variance-covariance as a quadratic form; Monte Carlo simulation, which the way I'll show you is based on a quadratic form; and historical simulation, which is Monte Carlo simulation without the Monte Carlo part. It's using historical data. And I'll go through that fairly quickly. Questions, comments? No? Excellent. Stop me-- look, if any one of you doesn't understand something I say, probably many of you don't understand it. I don't know you guys, so I don't know what you know and what you don't know. So if there's a term that comes up and you're not sure, just say, Ken, I don't have a PhD. I work for a living. I make fun of academics. I know you work for a living too. All right. There's a guy I tease at Claremont [INAUDIBLE] in this class; I say, who is this pointy-headed academic [INAUDIBLE]. Only kidding. All right, so I'm going to talk about one-asset value at risk. First I'm going to introduce the notion of value at risk. I'm going to talk about one asset. I'm going to talk about price based instruments. We're going to go into yield space, so we'll talk about the conversions we have to do there. One thing I'll do after this class is over, since I know I'm going to fly through some of the material-- and since this is MIT, I'm sure you're used to just flying through material, and there's a lot of this, the proof of which is left to the reader as an exercise; I'm sure you get a fair amount of that-- I will give you papers. If you have questions, my email is on the first page. I welcome your questions. I tell my students that every year. I'm OK with you sending me an email asking me for a reference, a citation, something. I'm perfectly fine with that. Don't worry, oh, he's too busy. I'm fine.
If you've got a question, something is not clear, I've got access to thousands of papers. And I've screened them. I've read thousands of papers; I can say this is a good one, that's a waste of time. But I can give you background material on regulation, on bond pricing, on derivative algorithms. Let me know. I'm happy to provide that at any point in time. You get that free with your tuition. A couple of key metrics. I don't want to spend too much time on this. Interest rate exposure-- how sensitive am I to changes in interest rates-- equity exposure, commodity exposure, credit spread exposure. We'll talk about linearity; we won't talk too much about regularity of cash flow. We won't really get into that here. And we need to know correlation across different asset classes. And I'll show you what that means. At the heart of this notion of value at risk is the idea of an order statistic. Who here has heard of order statistics? All right, I'm going to give you 30 seconds. The best simple description of an order statistic.

PROFESSOR: The maximum or the minimum of a set of observations.

KENNETH ABBOTT: All right? When we talk about value at risk, I want to know the worst 1% of the outcomes. And what's cool about order statistics is they're well established in the literature. Pretty well understood. And so people are familiar with it. Once we put our toe into the academic water and we start talking about this notion, there's a vast body of literature that says this is how this thing is. This is how it pays. This is what the distribution looks like. And so we can estimate these things. And so what we're looking at in value at risk is my distribution of returns-- how much I make. In particular, if I look historically, I have a position. How much would this position have earned me over the last n days, n weeks, n months? If I look at a frequency distribution of that, I'm likely-- it doesn't have to be-- I'm likely to get something that's symmetric.
I'm likely to get something that's unimodal. It may or may not have fat tails. We'll talk about that a little later. If my return distribution were beautifully symmetric and beautifully normal and independent, then the risk-- I could measure this 1% order statistic. What's the 1% likely worst case outcome tomorrow? I might do that by integrating the normal function from negative infinity-- for all intents and purposes, five or six standard deviations-- anyway, from negative infinity to negative 2.33 standard deviations. Why? Because the area under the curve there is 0.01. Now this is a one sided confidence interval as opposed to a two sided confidence interval. And this is one of these things that as an undergrad you learn two sided, and then the first time someone shows you one sided you're like, wait a minute. What is this? Then you say, oh, I get it. You're just looking at the area. I could build a gazillion two sided confidence intervals. One sided, it's got to stop at one place. All right, so this set of outcomes-- and this is standardized, this is in standard deviation space-- is negative infinity to negative 2.33. If I want 95%, or a 5% likely loss-- so I could say, tomorrow there's a 5% chance my loss is going to be x or greater-- I would go to negative 1.645 standard deviations. Because the integral from negative infinity to negative 1.645 standard deviations is about 0.05. It's not just a good idea, it's the law. Does that make sense? And again, I'm going to say assuming the normal. That's like the old economist joke, assume a can opener when he's on a desert island. You guys don't know that one. I got lots of economics jokes. I'll tell them later on maybe-- or after class. If I'm assuming a normal distribution, and that's what I'm going to do, I'm going to set this thing up in a normal distribution framework. Now, doing this approach and assuming normal distributions, I liken it to using Latin. Nobody really uses it anymore, but everything we do is based upon it.
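The one-sided cutoffs quoted here (2.33 standard deviations for the 1% tail, 1.645 for the 5% tail) are easy to check numerically. Here is a minimal sketch using Python's standard-library `statistics.NormalDist`; the variable names are mine, not the lecture's:

```python
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, standard deviation 1

# One-sided cutoffs: the z-value with exactly p probability mass below it.
z_99 = z.inv_cdf(0.01)   # 1% of outcomes lie below this point
z_95 = z.inv_cdf(0.05)   # 5% of outcomes lie below this point

# Check the areas by integrating back (cdf gives area from -inf to x).
print(round(z_99, 3), round(z.cdf(z_99), 2))   # -2.326 0.01
print(round(z_95, 3), round(z.cdf(z_95), 2))   # -1.645 0.05

# The two-sided 95% interval stops at the familiar +/- 1.96 instead:
print(round(z.inv_cdf(0.975), 2))              # 1.96
```

This also shows the one-sided versus two-sided distinction: 1.645 leaves 5% in one tail, while 1.96 leaves 2.5% in each of two tails.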
So that's our starting point. And it's really easy to teach it this way, and then we relax the assumptions, like so many things in life. I teach you the strict case, then we relax the assumptions to get to the way it's done now. So this makes sense? All right. So let's get there. This is way oversimplified-- but let's say I have something like this. Who has taken intermediate statistics? We have the notion of stationarity that we talk about all the time. The mean and variance being constant is one simplistic way of thinking about this. Do you have a better way for me to put that to them? Because you know what their background would be.

PROFESSOR: No.

KENNETH ABBOTT: All right. Just, mean and variance are constant. When I look at the time series itself, the time series mean and the time series variance are not constant. And there could also be other time series stuff going on. There could be seasonality, there could be autocorrelation. This looks something like a random walk, but it's not stationary. It's hard for me to draw inference by looking at that alone. So if we want to try to predict what's going to happen in the future, it's kind of hard. And the game here that we're playing is, we want to know how much money do I need to hold to support that position? Now, who here has taken an accounting course? All right, word to the wise-- there are two things I tell students in quant finance programs. First of all, I know you have to take a time series course-- I'm sure-- this is MIT. If you don't get a time series course, get your money back, because you've got to take time series. Accounting is important. Accounting is important because so much of what we do, the way we think about things, is predicated on the dollars. And you need to know how the dollars are recorded. Quick aside. Balance sheet. I'll give you a 30 second accounting lecture. Assets: what we own. Everything we own-- we have stuff, it's assets. We came to that stuff one of two ways.
We either paid for it out of our pocket, or we borrowed money. There's no third way. So everything we own, we either paid for out of our pocket or borrowed money. The amount we paid for out of our pocket is the equity. The ratio of this to this is called leverage, among other things. All right? If I'm this company, I have this much stuff, and I bought it with this much debt and this much equity. Again, that's a gross oversimplification. When this gets down to zero, it's game over. Belly up. All right? Does that make sense? Now you've taken a semester of accounting. No, only kidding. But it's actually important to have a grip on how that works. Because if we're going to take this position and hold it, we need to make sure, with some level of certainty-- every time we lose money, this gets reduced. When this goes down to zero, I go bankrupt. So that's what we're trying to do. We need to protect this, and we do it by knowing how much of this could move against us. Everybody with me? Anybody not with me? It's OK to have questions, it really is. Excellent. All right, so if I do a frequency distribution of this time series-- I just say, show me the frequency with which this thing shows up-- I get this thing, it's kind of trimodal. It's all over the place. It doesn't tell me anything. If I look at the levels-- the frequency distribution, the relative frequency distribution of the levels themselves-- I don't get a whole lot of intuition. If I go into return space, which is either looking at the log differences from day to day, or the percentage changes from day to day, or perhaps the absolute changes from day to day-- it varies from market to market. Oh, look, now we're in familiar territory. So what I'm doing here-- and this is why I started out with a normal distribution-- is because this thing is unimodal. It's more or less symmetric. Right? Now is it a perfect measure? No, because it's probably got fat tails.
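The move from price levels into return space described above can be sketched in a few lines. The price series below is invented for illustration; `pct_returns` and `log_returns` are the two conventions the lecture mentions (percentage changes and log differences):

```python
import math

# A hypothetical short price series (levels); a real series would be daily closes.
prices = [100.0, 101.2, 99.8, 100.5, 102.0]

# Day-over-day percentage changes and log differences.
pct_returns = [p1 / p0 - 1.0 for p0, p1 in zip(prices, prices[1:])]
log_returns = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

# For small daily moves the two conventions are nearly identical,
# which is why either is common in practice.
for r, lr in zip(pct_returns, log_returns):
    print(f"{r:+.4f}  {lr:+.4f}")
```

It is this differenced series, not the levels, whose frequency distribution comes out unimodal and roughly symmetric.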
So it's a little bit like looking for the glasses you lost up on 67th Street down on 59th Street, because there's more light there. But it's a starting point. So what I'm saying to you is, once I difference it-- no, I won't talk about [INAUDIBLE]. Once I difference the time series-- once I take the time series and look at the percentage changes, and I look at the frequency distribution of those changes-- I get this, which is far more unimodal. And I can draw inference from that. I can say, ah, now if this thing is normal, then I know that x% of my observations will take place over here. Now I can start drawing inferences. And a thing to keep in mind here-- one thing we do constantly in statistics is parameter estimates. And remember, every time you estimate something, you estimate it with error. I think that's maybe the single most important thing I learned when I got my statistics degree. Everything you estimate, you estimate with error. People do means, they say, oh, it's x. No, that's the average, and that's an unbiased estimator, but guess what, there's a huge amount of noise. And there's a certain probability that you're wrong by x%. So every time we come up with a number-- when somebody tells me the risk is 10, that means it's probably not 10,000, and it's probably not zero. Just keep that in mind. Just sort of throw that in on the side for nothing. All right, so when I take the returns of this same time series, I get something that's unimodal, symmetric, may or may not have fat tails. That has important implications for whether or not my normal distribution underestimates the amount of risk I'm taking. Everybody with me on that, more or less? Questions? Now would be the time. Good enough? He's lived this. All right. So once I have my time series of returns, which I just plotted there, I can gauge their dispersion with this measure called variance. And you guys probably know this. Variance is the expected value of x i minus x bar-- I love these thick chalks-- squared.
And it's the sum of x i minus x bar squared over n minus 1. It's a measure of dispersion. Variance has [INAUDIBLE]. Now, I should say that this is sigma squared hat. Right? Estimate-- parameter estimate. Parameter. Parameter estimate. This is measured with error. Anybody here know what the distribution of this is? Anyone? $5. Close. m chi squared. Worth $2. Talk to me after class. It's a chi squared distribution. What does that mean? That means that we know it can't be 0 or less than 0. If you figure out a way to get variances less than zero, let's talk. And it's got a long right tail, but that's because this is squared. [INAUDIBLE] one point can move it up. Anyway, once I have my returns, I have a measure of the dispersion of these returns called variance. I take the square root of the variance, which is the standard deviation, or the volatility. When I'm doing it with a data set, I usually refer to it as the standard deviation. When I'm referring to the standard deviation of the distribution, I usually call it the standard error. Is that a law, or is that just common parlance?

PROFESSOR: Both. The standard error is typically for something that's random, like an estimate. Whereas the standard deviation is more like for a sample--

KENNETH ABBOTT: Empirical. See, it's important, because when you first learn this, they don't tell you that. And they flip them back and forth. And then when you take the intermediate courses, they say, no, don't say standard deviation when you mean standard error. And you'll get points off on your exam for that, right? All right, so, the standard deviation is the square root of the variance, also called the volatility. In a normal distribution, 1% of the observations lies below negative 2.33 standard deviations. For 95%, it's out past negative 1.64-- 1.645 standard deviations. Now you're saying, wait a minute, where did my 1.96 go that I learned as an undergrad? Two sided.
So if I go from the mean to 1.96 standard deviations on either side, that encompasses 95% of the total area of the integral from negative infinity to positive infinity. Everybody with me on that? Does that make sense? The two sided versus one sided-- that confused me. When I was your age, it confused me a lot. But I got there. All right, so this is how we do it. The Excel functions are var and-- you don't need to know that. All right, so in this case, I'm estimating the variance of this particular time series. I took the standard deviation by taking the square root of the variance. It's in percentages. When you do this, I tell you, it's like physics: your units will screw you up every time. What am I measuring? What are my units? I still make units mistakes. I want you to know that. And I've been in this business 30 years. I still make units mistakes. Just like physics. I'm in percentage change space, so I want to talk in terms of percentage changes. The standard deviation is 1.8% for that time series I showed you. So 2.33 times the standard deviation is about 4.2%. What that says, given this data set-- one time series-- is that on any given day, if I have that position, 99% of the time I'm going to lose 4.2% of it or less. Very important. Think about that. Is that clear? That's how I get there. I'm making a statement about the probability of loss. I'm saying there's a 1% probability, for that particular time series-- which is-- all right? If this is my historical data set and it's my only historical data set, and I own this, tomorrow I may be 4.2% lighter than I was today, because the market could move against me. And I'm 99% sure, if the future's like the past, that my loss tomorrow is going to be 4.2% or less. That's VaR. Simplest case: assuming a normal distribution, single asset, not fixed income. Yes, no? Questions, comments?

AUDIENCE: Yes, [INAUDIBLE] positive and [INAUDIBLE].

KENNETH ABBOTT: Yes, yes.
Assuming my distribution is symmetric. Now, that's the right assumption to point out. Because in the real world, it may not be symmetric. And when we go into historical simulation, we use empirical distributions, where we don't care if it's symmetric, because we're only looking at the downside. And whether I'm long or short, I might care about the downside or the upside. Because if I'm short, I care about how much it's going to move up. Make sense? That's the right question to ask. Yes?

AUDIENCE: [INAUDIBLE] if you're doing it for upside as well?

KENNETH ABBOTT: Yes.

AUDIENCE: Could it just be the same thing?

KENNETH ABBOTT: Yes. In fact, in this case, in what we're doing here-- variance-covariance, or closed-form VaR-- it works for long or short. But getting your signs right, I'm telling you, it's like physics. I still make that mistake. Yes?

AUDIENCE: [INAUDIBLE] symmetric. Do you guys still use this process to say, OK--

KENNETH ABBOTT: I use it all the time as a heuristic. All right? Because let's say I've got-- and that's a very good question-- let's say I've got five years' worth of data and I don't have time to do an empirical estimate. It could be lopsided. If you tell me a two standard deviation move is x, that means something to me. Now, there's a problem with that. And the problem is that people extrapolate that. Sometimes people talk to me and say, oh, it's an eight standard deviation move. Eight standard deviation moves don't happen. I don't think we've seen an eight standard deviation move in the Cenozoic era. It just doesn't happen. Three standard deviations-- you will see a three standard deviation move once every 10,000 observations. Now, I learned this the hard way, by just, see how many times do I have to do this? And then I looked it up in the table-- oh, I was right. When we oversimplify, and start to talk about everything in terms of that normal distribution, we really just lose our grip on reality. But I use it as a heuristic all the time.
I'll do it even now, and I know better. But I'll go, what's two standard deviations? What's three standard deviations? Because by and large-- and I still do this-- I get my data and I line it up and I do frequency distributions. Hold on, I do this all the time with my data. Is it symmetric? Is it fat tailed? Is it unimodal? So that's a very good question. Any other questions?

AUDIENCE: [INAUDIBLE] have we talked about the Student t distribution?

PROFESSOR: We introduced it in the last lecture. And the problem set this week does relate to that.

KENNETH ABBOTT: All right, perfect lead in. So the statement I made is, 1% of the time I'd expect to lose more than 4.2 pesos on a 100 peso position. That's my inferential statement. In fact, over the same time period, I lost 4.2% about 1.5% of the time instead of 1% of the time. What that tells me, what that suggests to me, is that my data set has fat tails. What that means-- a simple way of thinking about it, [INAUDIBLE] care what that means in a metaphysical sense, a way to interpret it-- is that the likelihood of a loss is greater than would be implied by the normal distribution. All right? So when you hear people say fat tails, generally, that's what they're talking about. There are different ways you could interpret that statement, but that's what it means when somebody says a financial time series has fat tails. Roughly 3/4 of your financial time series will have fat tails. They will also have time series properties; they won't be true random walks. A true random walk says that I don't know whether it's going to go up or down based on the data I have. The time series has no memory. When we start introducing time series properties, which many financial time series have-- then there's seasonality, there's mean reversion, there's all kinds of other stuff-- there are other ways that we have to think about modeling the data. Make sense?

AUDIENCE: [INAUDIBLE] higher standard deviation than [INAUDIBLE].
KENNETH ABBOTT: Say it once again.

AUDIENCE: Better yield, does it mean that we have a higher standard deviation than [INAUDIBLE]?

KENNETH ABBOTT: No. The standard deviation is the standard deviation. No matter what I do, this is the standard deviation, that's it. It doesn't have a higher standard deviation. But the likelihood of-- put it this way-- the likelihood of a move of 2.33 standard deviations is more than 1%. That's the way I think of it. Make sense?

AUDIENCE: Is there any way for you to [INAUDIBLE] to--

KENNETH ABBOTT: What?

AUDIENCE: Sorry, is there any way to put into that graph what a fatter tail looks like?

KENNETH ABBOTT: Oh, well, be patient. If we have time. In fact, we do that all the time. And one of our techniques doesn't care. It goes to the empirical distribution. So it captures the fat tails completely. In fact, the homework assignment which I usually precede this lecture with has people graphing all kinds of distributions to see what these things look like. We won't have time for that. But if you have questions, send them to me. I'll send you some stuff to read about this. All right, so now you know one-asset VaR; now you're qualified to go work for a big bank. All right? Get your data, calculate returns. Now, I usually put in a step 2b: graph your data and look at it. All right? Because everybody's data has dirt in it. Don't trust anyone else. If you're going to get fired, get fired for being incompetent; don't get fired for using someone else's bad data. Don't trust anyone. My mother gives me data-- Mom, I'm graphing it. Because I think you let some poop slip into my data. Mother Teresa could come to me with a thumb drive [INAUDIBLE] S&P 500. Sorry, Mother Teresa. I'm graphing it before I use it. All right? And I should say that this step is usually in here. We do extensive error testing. Because there could be bad data, there could be missing data. And missing data is a whole other lecture that I give. You might be shocked at [INAUDIBLE].
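The steps above (compute returns, take the standard deviation, multiply by 2.33) and the fat-tail check (the observed 1.5% exceedance rate versus the normal model's 1%) can be sketched end to end. The return series here is synthetic, a mixture of two normals standing in for real market data, so all numbers are illustrative only:

```python
import random
import statistics

random.seed(7)

# Synthetic fat-tailed daily returns: mostly calm days with 1% volatility,
# occasionally a high-volatility day with 3%. This stands in for a real
# historical return series, which we don't have here.
returns = [random.gauss(0.0, 0.03 if random.random() < 0.1 else 0.01)
           for _ in range(100_000)]

# One-asset 99% VaR under the normal assumption:
# standard deviation of the returns times 2.33.
sd = statistics.stdev(returns)
var_99 = 2.33 * sd
print(f"sd = {sd:.4f}, normal 99% VaR = {var_99:.4f}")

# How often did we actually lose more than the normal model's cutoff?
exceedances = sum(1 for r in returns if r < -var_99)
frequency = exceedances / len(returns)
print(f"losses beyond -VaR: {frequency:.2%}")  # above 1% means fat tails

# The purely empirical alternative (an order statistic): sort the
# returns and read off the 1st-percentile loss directly.
empirical_var = -sorted(returns)[len(returns) // 100]
print(f"empirical 99% VaR = {empirical_var:.4f}")
```

On data like this, losses beyond the normal 2.33 sigma cutoff occur noticeably more often than 1%, and the empirical 1% order statistic sits further out than the normal-model VaR: the same pattern as the 1.5% versus 1% observation in the lecture.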
So for one-asset VaR: get my data, create my return series-- percentage changes or log changes; sometimes that's where the difference is-- take the variance, take the square root of the variance, multiply by 2.33. Done and dusted. Go home, take your shoes off, relax. OK. Percentage changes versus log changes. For all intents and purposes, it doesn't really matter, and I will often use one or the other. The way I think about this-- all right, there'll be a little bit of bias at the ends. But for the overwhelming bulk of the observations, whether you use percentage changes or log changes doesn't matter. Generally, even though I know the data is closer to log normally distributed than normally distributed, I'll use percentage changes just because it's easier. Why would we use a log normal distribution? Well, when we're doing simulation, the log normal distribution has this very nifty property of keeping your yields from going negative. But even that I can call into question, because there are instances of yields going negative. It's happened. It doesn't happen a lot, but it happens. All right. So I talked about bad data, talked about one sided versus two sided. I'll talk about longs and shorts a little bit later, when we're talking multi asset. I'm going to cover a fixed income piece. We use this thing called a PV01, because what I measure in fixed income markets isn't a price. I measure a yield. I have to get from a change of yield to a change of price. Hm, sounds like a Jacobian, right? It's kind of a poor man's Jacobian. It's a measure that captures the fact that my price yield relationship-- price, yield-- is non-linear. For any small move, I look at the tangent as an approximation. And I use my PV01, which is a similar notion to duration, but PV01 is a little more practical. The slope of that tells me how much my price will change for a given change of yield. See, there it is. You knew you were going to use the calculus, right? You're always using the calculus.
You can't escape it. But the price yield line is non-linear. For all intents and purposes, what I'm doing is shifting my yield change into a price change by multiplying my yield change by my PV01, which is my price sensitivity to a 1/100th of a percent move in yields. Think about that for a second. We don't have time to-- I would love to spend an hour on this, and on trading strategies, and on bull steepeners and bear steepeners and barbell trades, but we don't have time for that. Suffice it to say, if I'm measuring yields, the thing is going to trade at a 789 or a 622 or a 401 yield. How do I get that into a change in price? Because I can't tell my boss, hey, I had a good day, I bought it at 402 and sold it at 401. No-- how much money did you make? Yield to coffee break, yield to lunch time, yield to go home at the end of the day. How do I get from change in yield to change in price? Usually PV01. I could use duration. Bond traders who think in terms of yield to coffee break, yield to lunch time, yield to go home at the end of the day typically think in terms of PV01. Do you agree with that statement?

AUDIENCE: [INAUDIBLE]

KENNETH ABBOTT: How often on the fixed income [? desk ?] did you use duration measures?

AUDIENCE: Well, actually, [INAUDIBLE].

KENNETH ABBOTT: Because of the investor horizon? OK, the insurance companies. Very important point I want to reach here as a quick aside. You're going to hear this notion of PV01, which is also called PVBP or DV01. That's the price sensitivity to a one basis point move. One basis point is 1/100th of a percent in yield. Duration is the half life, essentially, of my cash flow. What's the weighted expected time to all my cash flows? If my duration is 7.9 years, my PV01 is probably about $790 per million. In terms of significant digits, they're roughly the same, but they have different meanings and the units are different. Duration is measured in years; PV01 is measured in dollars.
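The rule of thumb just quoted (a duration of 7.9 years on $1 million implies a PV01 of about $790) can be sketched, along with a bump-and-reprice DV01 for a hypothetical annual-coupon bond. The bond and all its parameters are invented for illustration:

```python
# Rule of thumb: for a given position, PV01 is roughly
# duration x notional x 0.0001 (one basis point).
duration = 7.9           # years (assumed, as in the lecture's example)
notional = 1_000_000     # dollars
pv01_approx = duration * notional * 0.0001
print(round(pv01_approx, 2))   # 790.0 dollars per basis point

# Bump-and-reprice DV01 for a hypothetical 10-year, 5% annual-coupon bond.
def bond_price(face, coupon_rate, yield_rate, years):
    """Present value of an annual-pay bond discounted at a flat yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + yield_rate) ** t for t in range(1, years + 1))
    pv_face = face / (1 + yield_rate) ** years
    return pv_coupons + pv_face

y = 0.05
p0 = bond_price(100, 0.05, y, 10)            # coupon equals yield: a par bond
p1 = bond_price(100, 0.05, y + 0.0001, 10)   # yield bumped up one basis point
dv01 = p0 - p1                               # price drop per 1 bp rise in yield
print(round(p0, 4), round(dv01, 4))
```

The bump-and-reprice number is the tangent-slope idea from the lecture made concrete: a one-basis-point move is small enough that the straight-line approximation is essentially exact.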
In bond space I typically think in PV01. If I'm selling to long-term investors, they have particular demands because they've got cash flow payments they have to hedge. So they may think of it in terms of duration. For our purposes, we're talking DV01 or PV01 or PVBP; those three terms are more or less equal. Make sense? Yes? AUDIENCE: [INAUDIBLE] in terms of [INAUDIBLE] versus [INAUDIBLE]? KENNETH ABBOTT: We could. In some instances, in some areas, like options, we might look at an overall 1% move. But we have to look at what trades in the market. What trades in the market is the yield. When we quote the yield, I'm going to quote it going from 702 to 701. I'm not going to have the calculator handy to say, a 702 move to a 701. What's 702 minus 701 divided by 702? Make sense? It's the path of least resistance. What's the difference between a bond and a bond trader? A bond matures. A little fixed income humor for you. Apparently very little. I don't want to spend too much time on this because we just don't have the time. I provide an example here. If you guys want examples, contact me. I'll send you the spreadsheets I use for other classes if you just want to play around with it. When I talk about PV01, when I talk about yields, I usually have some kind of risk-free rate. Although this whole notion of the risk-free rate, which is-- so much of modern finance is predicated on this assumption that there is a risk-free rate, which used to be considered the US Treasury. It used to be considered risk-free. Well, there's a credit spread out there for the US Treasury. I don't mean to throw a monkey wrench into the works. But there's no such thing. I'm not going to question 75 years of academic finance. But it's troublesome. Just like when I was taking economics 30 years ago, inflation just mucked with everything. All of the models fell apart. There were appendices to every chapter on how you have to change this model to address inflation. 
And then inflation went away and everything was better. But this may not go away. I've got two components here. If the yield is 6%, I might have a 450 Treasury rate and a 150 basis point credit spread. The credit spread reflects the probability of default. And I don't want to get into measures of risk neutrality here. But if I'm an issuer and I have a chance of default, I have to pay my investors more. Usually when we measure sensitivity we talk about that credit spread sensitivity and the risk-free sensitivity. We say, well, how could they possibly be different? And I don't want to get into detail here, but the notion is, when credit spreads start getting high, it implies a higher probability of default. You have to think about credit spread sensitivity a little differently. Because when you get to 1,000 basis points, 1,500 basis points of credit spread, it's a high probability of default. And your credit models will think differently. Your credit models will say, ah, that means I'm not going to get my next three payments. There's an expected loss, there's a probability of default, there's a loss given default, and there's recovery. A bunch of other stochastic measures come into play. I don't want to spend any more time on it because it's just going to confuse you now. Suffice to say, we have these yields, and yields are composed of risk-free rates and credit spreads. And I apologize for rushing through that, but we don't have time to do it. Typically you have more than one asset. So in this framework, I take 2.33 standard deviations times my dollar investment, or my renminbi investment, or my sterling investment. That example was with one asset. If I want to expand this, I can expand this using this notion of covariance and correlation. You guys covered correlation and covariance at some point in your careers? Yes, no? All right? Both of them measure the way one asset moves vis-a-vis another asset. Correlation is scaled between negative 1 and positive 1. 
So I think of correlation as an index of linearity. Covariance is not scaled. I'll give you an example of the difference between covariance and correlation. What if I have 50 years of data on crop yields and that same 50 years of data on tons of fertilizer used? I would expect a positive correlation between tons of fertilizer used and crop yields. So the correlation would exist between negative 1 and positive 1. The covariance could be any number, and that covariance will change depending on whether I measure my fertilizer in tons, or in pounds, or in ounces, or in kilos. The correlation will always be exactly the same. The linear relationship is captured by the correlation. But the units-- in covariance, the units count. If I have covariance-- here it is. Covariance matrices are symmetric. They have the variances along the diagonal. And the covariances are on the off-diagonal. Which is to say that the variance is the covariance of an item with itself. The correlation matrix, also symmetric, is the same thing scaled with correlations, where the diagonal is 1.0. If I have covariance-- because correlation is covariance divided by the product of the standard deviations. Gets me-- sorry-- correlation hat. This is like the apostrophe in French. You forget it all the time. But the one time you really need it, you won't do it and you'll be in trouble. If you have the covariances, you can get to the correlations. If you have the correlations, you can't get to the covariances unless you know the variances. That's a classic midterm question. I give that almost-- not every year, maybe every other year. Don't have time to spend much more time on it. Suffice to say, this measure of covariance says, when x is a certain distance from its mean, how far is y from its mean and in what direction? Yes? Now this is just a little empirical stuff because I'm not as clever as you guys. And I don't trust anyone. Even if I read it in the textbook, I don't trust anyone. 
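The fertilizer example can be checked directly; the data below are simulated stand-ins for the 50 years of crop and fertilizer figures:

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated stand-in for 50 years of fertilizer usage (tons) and crop yields,
# built so the two are positively related.
fert_tons = rng.uniform(10, 100, size=50)
crop = 2.0 * fert_tons + rng.normal(0, 5, size=50)

# The same fertilizer data in different units.
fert_pounds = fert_tons * 2000.0

cov_tons = np.cov(fert_tons, crop)[0, 1]
cov_pounds = np.cov(fert_pounds, crop)[0, 1]     # scales with the units
corr_tons = np.corrcoef(fert_tons, crop)[0, 1]
corr_pounds = np.corrcoef(fert_pounds, crop)[0, 1]  # exactly the same
```

Switching tons to pounds multiplies the covariance by 2,000 but leaves the correlation untouched, which is the whole point: in covariance, the units count.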
a, b, here's a plus b. Variance of a plus b is variance of a plus variance of b plus 2 times covariance a b. It's not just a good idea, it's the law. I saw it in a thousand statistics textbooks, I tested it anyway. Because if I want to get fired, I'm going to get fired for making my own mistake, not making someone else's mistake. I do this all the time. And I just prove it empirically here. The proof of which will be left to the reader as an exercise. I hated when books said that. PROFESSOR: I actually kind of think that's a proven point, that you really should never trust output from computer programs or packages-- KENNETH ABBOTT: Or your mother, or Mother Teresa. PROFESSOR: It's good to check them. Check all the calculations. KENNETH ABBOTT: Mother Teresa will slip you some bad data if she can. I'm telling you, she will. She's tricky that way. Don't trust anyone. I've caught mistakes in software, all right? I had a programmer-- it's one of my favorite stories-- we're doing one of our first Monte Carlo simulations, and we're factoring a matrix. If we have time, we'll get-- so I factor a covariance matrix into E transpose lambda E. It's our friend the quadratic form. We're going to see this again. And this is a diagonal matrix of eigenvalues. And I take the square root of that. So I can say this is E transpose lambda to the 1/2 lambda to the 1/2 E. And so my programmer had gotten this, and I said, do me a favor. I said, take this, and transpose and multiply by itself. So take the square root and multiply it by the other square root, and show me that you get this. Just show me. He said I got it. I said you got it? He said out to 16 decimals. I said stop. On my block, the square root of 2 times the square root of 2 equals 2.0. All right? 2.0000000 what do you mean out to 16 decimal places? What planet are you on? And I scratched the surface, and I dug, and I asked a bunch of questions. And it turned out in this code he was passing a float to a [? fixed. ?] All right? 
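The empirical check described above, proving Var(a + b) = Var(a) + Var(b) + 2 Cov(a, b) to yourself instead of trusting the textbook, might look like this with simulated data:

```python
import numpy as np

# Don't trust anyone: verify Var(a + b) = Var(a) + Var(b) + 2 Cov(a, b)
# on simulated data rather than taking the textbook's word for it.
rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 10_000)
b = 0.5 * a + rng.normal(0.0, 1.0, 10_000)  # built to be correlated with a

lhs = np.var(a + b, ddof=1)
rhs = np.var(a, ddof=1) + np.var(b, ddof=1) + 2.0 * np.cov(a, b)[0, 1]
```

With matching degrees-of-freedom conventions the identity holds exactly, down to floating-point error, on any sample.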
Don't trust anyone's software. Check it yourself. Someday when I'm dead and you guys are in my position, you'll be thanking me for that. Put a stone on my grave or something. All right, so covariance. Covariance tells me some measure of, when x moves, how far does y move? [? Or ?] for any other asset? Could I have a piece of your cookie? I hardly had lunch. You want me to have a piece of this, right? It's just looking very good there. Thank you. It's foraging. I'm convinced 10 million years ago, my ape ancestors were the first ones at the dead antelope on the plains. All right. So we're talking about correlation and covariance. Covariance is not unit-free. I can use either, but I have to make sure I get my units right. Units screw me up every time. They still screw me up. That was a good cookie. All right. So more facts. Variance of xA plus yB is x squared times variance of A, plus y squared times variance of B, plus 2xy times covariance of A and B. You guys seen this before? I assume you have. Now I can get pretty silly with this if I want. xA, yB-- you get the picture, right? But what you should be thinking: this is a covariance matrix, sigma squared, sigma squared, sigma squared. It's the sum of the variances plus 2 times the sum of the covariances. So if I have one unit of every asset, I've got n assets, all I have to do to get the portfolio variance is sum up the whole covariance matrix. Now, you never get only one unit, but just saying. But you notice that this is kind of a regular pattern that we see here. And so what I can do is I can use a combination of my correlation matrix and a little bit of linear algebra legerdemain to do some very convenient calculations. And here I just give an example of a covariance matrix and a correlation matrix. Note the correlation matrix entries are between negative 1 and positive 1. All right. Let me cut to the chase here. I'll draw it here because I really want to get into some of the other stuff. What this means: if I have a covariance structure, sigma. 
And I have a vector of positions: x dollars in dollar-yen, y dollars in gold, z dollars in oil. And let's say I've got a position vector, x1, x2, x3, xn. If I have all my positions recorded as a vector-- this is asset one, asset two, and this is in dollars-- and I have the covariance structure, the variance of this portfolio that has these assets and this covariance structure-- this is where the magic happens-- is x transpose sigma x equals sigma squared hat portfolio. Now you really could go work for a bank. This is how portfolio variance, using the variance-covariance method, is done. In fact, when we were doing it this way 20 years ago, spreadsheets only had 256 columns. So we tried to simplify everything into 256-- or sometimes you had to sum it up using two different spreadsheets. We didn't have multitab spreadsheets. That was a dream, multitab spreadsheets. This was Lotus 1-2-3 we're talking about here, OK? You guys don't even know what Lotus 1-2-3 is. It's like an abacus, but on the screen. Yes? AUDIENCE: What's x again in this? KENNETH ABBOTT: Position vector. Let's say I tell you that you've got dollar-yen, gold, and oil. You've got $100 of dollar-yen, $50 of oil, and $25 of gold. It would be 100, 50, 25. Now, I should say, with $100 of dollar-yen, your position vector would actually show up as negative 100, 50, 25. Why is that? Because if I'm measuring my dollar-yen-- and this is just a little aside-- typically, I measure dollar-yen in yen per dollar. So dollar-yen might be 95. If I own yen and I'm a dollar investor and I own yen, and yen go from 95 per dollar to 100 per dollar, do I make or lose money? I lose money. Negative 100. Just store that. You won't be tested on that, but we think about that all the time. Same thing with yields. Typically, when I record my PV01-- and I'll record some version, something like my PV01, in that vector, my interest rate sensitivity-- I'm going to record it as a negative. 
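The x-transpose-sigma-x calculation, with the lecture's sign convention for dollar-yen and a covariance matrix whose numbers are made up for illustration:

```python
import numpy as np

# Position vector in dollars, per the lecture's example: $100 of dollar-yen
# (recorded as negative because yen is quoted in yen per dollar), $50 of oil,
# $25 of gold.
x = np.array([-100.0, 50.0, 25.0])

# Hypothetical daily-return covariance matrix for (dollar-yen, oil, gold);
# these numbers are invented for illustration only.
sigma = np.array([[4.0e-5, 1.0e-5, 5.0e-6],
                  [1.0e-5, 9.0e-5, 2.0e-5],
                  [5.0e-6, 2.0e-5, 1.6e-4]])

# This is where the magic happens: portfolio variance is x' Sigma x.
portfolio_var = x @ sigma @ x
var_99 = 2.33 * np.sqrt(portfolio_var)  # dollar VaR at the 1% level
```

One matrix product replaces summing every variance and covariance term by hand, which is what the 256-column spreadsheets were doing.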
Because when yields go up and I own the bond, I lose money. Signs, very important. And, again, we've covered-- usually I do this in a two-hour lecture. And we've covered it in less than an hour, so pretty good. All right. I spent a lot more time on the fixed income. [STUDENT COUGHING] Are you taking something for that? That does not sound healthy. I don't mean to embarrass you. But I just want to make sure that you're taking care of yourself, because grad students don't-- I was a grad student, I didn't take care of myself very well. I worry. All right. Big picture, variance-covariance. Collect data, calculate returns, test the data, matrix construction, get my position vector, multiply my matrices. All right? Quick and dirty, that's how we do it. That's the simplified approach to measuring this order statistic called value at risk using this particular technique. Questions, comments? Anyone? Anything you think I need to elucidate on that? And this is, in fact, how we did this up until the late '90s. Firms used variance-covariance. I heard a statistic in Europe in 1996 that 80% of the European banks were using this technique to do their value at risk. It was no more complicated than this. I use a little flow diagram. Get your data returns, graph your data to make sure you don't screw it up. Get your covariance matrix, multiply your matrices out. x transpose sigma x. Using the position vectors, and then you can do your analysis. Normally I would spend some more time on that bottom row and different things you can do with it, but that will have to suffice for now. A couple of points I want to make before we move on about the assumptions. Actually, I'll fly through this here so we can get into Monte Carlo simulation. Where am I going to get my data? Where do I get my data? I often get a lot of my data from Bloomberg, I get it from public sources, I get it from the internet. Especially when you get it from-- look, if it says so on the internet, it must be true. Right? 
Didn't Abe Lincoln say, don't believe everything you read on the internet? That was a quote, I saw that some place. You get data from people, you check it. There are some sources that are very reliable. If you're looking for yield data or foreign exchange data, the Federal Reserve has it. And they have it back 20 years, daily data. It's the H.15 and the H.10. It's there, it's free, it's easy to download, just be aware of it. Exchange-- PROFESSOR: [INAUDIBLE] study posted on the website that goes through computations for regression analysis and asset pricing models and the data that's used there is from the Federal Reserve for yields. KENNETH ABBOTT: If it's for yields, it's probably from the H.15. [INTERPOSING VOICES] PROFESSOR: Those files, you can see how to actually get that data for yourselves. KENNETH ABBOTT: Now, another great source of data is Bloomberg. Now, the good thing about Bloomberg data is everybody uses it, so it's clean. Relatively clean. I still find errors in it from time to time. But what happens is, when you find an error in your Bloomberg data, you get on the phone to Bloomberg right away and say, I found an error in your data. They say, oh, what date? June 14, you know, 2012. And they'll say, OK, we'll fix it. All right? So everybody does that, and the data set is pretty clean. I found consistently that Bloomberg data is the cleanest in my experience. How much data do we use in doing this? I could use one year of data, I could use two weeks of data. Now, for time series, we usually want 100 observations. That's always been my rule of thumb. I can use one year of data. There are regulators that require you to use at least a year of data. You could use two years of data. In fact, some firms use one year of data. There's one firm that uses five years of data. And there, we could say, well, am I going to weight it? Am I going to weight my more recent data more heavily? I could do that with exponential smoothing, which we won't have time to talk about. 
It's a technique I can use to lend more credence to the more recent data. Now, I'm a relatively simple guy. I tend to use equally weighted data because I believe in Occam's razor, which is, the simplest explanation is usually the best. I think we get too clever by half when we try to parameterize. How much more of an impact does last week's data have than data from two weeks ago, three weeks ago? I'm not saying that it doesn't, what I am saying is, I'm not smart enough to know exactly how much it does. And assuming that everything's equally weighted throughout time is just as strong an assumption. But it's a very simple assumption, and I love simple. Yes? AUDIENCE: [INAUDIBLE] calculate covariance matrix? KENNETH ABBOTT: Yes. All right, quickly. Actually, I think I have some slides on that. Let me just finish this and I'll get to that. Gaps in data. Missing data is a problem. How do I fill in missing data? I can do a linear interpolation, I can use the prior day's data. I can do a Brownian bridge, which is, I just do a Monte Carlo between them. I can do a regression-based fill, using regression to project changes in one series onto changes in another. That's usually a whole other lecture I give on how to handle missing data. Now you've got that lecture for free. That's all you need to know. It's not only a lecture, it's a very hard homework assignment. But how frequently do I update my data? Some people update their covariance structures daily. I think that's overkill. We update our data set weekly. That's what we do. And I think that's overkill, but tell that to my regulators. And we use daily data, weekly data, monthly data. We typically use daily data. Some firms may do it differently. All right. Here's your exponential smoothing. Remember, I usually measure covariance as the sum of (xi minus x bar) times (yi minus y bar), divided by n minus 1. What if I stuck an omega in there? 
And I use this calculation instead, where the denominator is the sum of all the omegas-- you should be thinking finite series. You have to realize, I was a decent math student, I wasn't a great math student. And what I found when I was studying this, I was like, wow, all that stuff that I learned, it actually-- finite series, who knew? Who knew that I'd actually use it? So I take this, and let's say I'm working backwards in time. So today's observation is t0. Yesterday's observation is t1, then t2, t3. And let's assume for the time being that this omega is on the order of 0.95. It could be anything. So today would get 0.95 to the 0 divided by the sum of all the omegas. Yesterday would get 0.95 to the 1 divided by the sum of the omegas. The next would be 0.95 squared divided by the sum of the omegas. Then 0.95 cubed, getting smaller and smaller. For example, if you use 0.94, 99% of your weight will be in the last 76 days. 76 observations, I shouldn't say 76 days. 76 observations. So there's this notion that the impact declines exponentially. Does that make sense? People use this pretty commonly, but what scares me about it-- somebody stuck these fancy transitions in between these slides. Anyway-- is that here's my standard deviation [INAUDIBLE] the rolling six [INAUDIBLE] window. And here's my standard deviation using different weights. The point I want to make here, and it's an important point: my assumption about my weighting coefficient has a material impact on the size of my measured volatility. Now when I see this, and this is just me. There's no finance or statistics theory behind this. Any time the choice-- any time an assumption has this material an impact, bells and whistles go off, and sirens. All right, and red lights flash. Be very, very careful. Now, lies, damn lies, and statistics. You tell me the outcome you want, and I'll tell you what statistics to use. That's where this could be abused. Oh, you want to show high volatility? 
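The weighting scheme, and the claim that with omega = 0.94 about 99% of the weight sits in the most recent 76 observations, can be checked with a few lines:

```python
# Exponentially declining weights omega^0, omega^1, ..., normalized by their
# sum, as in the smoothing scheme described above. Today (t0) gets omega^0,
# yesterday (t1) gets omega^1, and so on backwards in time.
omega = 0.94
n = 1000  # a long history; weights far back are negligible
weights = [omega ** i for i in range(n)]
total = sum(weights)
norm = [w / total for w in weights]

# Share of the total weight carried by the most recent 76 observations.
share_76 = sum(norm[:76])
```

Since the weights form a geometric series, the share of the first k terms is essentially 1 minus omega to the k, which for omega = 0.94 and k = 76 is just over 99%.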
Well, let's use this. You want to show low volatility? Let's use this. See, I choose to just take the simplest approach. And that's me. That's not a terribly scientific opinion, but that's what I think. Daily versus weekly, percentage changes versus log changes. Units. Just like dollar-yen, interest rates. Am I long or am I short? If I'm long gold, I show it as a positive number. And if I'm short gold, in my position vector, I show it as a negative number. If I'm long yen, and yen is measured in yen per dollar, then I show it as a negative number. If I'm long yen, but my covariance matrix measures yen as dollars per yen-- 0.000094, whatever-- then I show it as a positive number. It's just like physics, only worse, because it'll cost you real-- no, I guess physics would be worse, because if you get the units wrong, you blow up, right? This will just cost you money. I've made this mistake. I've made the units mistake. All right, we talked about fixed income. So that's what I want to cover from the bare-bones setup for VaR. Now I'm going to skip the historical simulation and go right to the Monte Carlo because I want to show you another way we can use covariance structures. [POWERPOINT SOUND EFFECT] That's going to happen two or three more times. Somebody did this, somebody made my presentation cute some years ago. And I just-- I apologize. All right, see, there's a lot of meat in this presentation that we don't have time to get to. Another approach to doing value at risk, rather than use this parametric approach, is to simulate the outcomes. Simulate the outcomes 100 times, 1,000 times, 10,000 times, a million times, and say, these are all the possible outcomes based on my simulation assumptions. And let's say I simulate 10,000 times, and I have 10,000 possible outcomes for tomorrow. And I wanted to measure my value at risk at the 1% significance level. All I would do is take my 10,000 outcomes and I would sort them and take my hundredth worst. Put it in your pocket, go home. 
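The sort-and-count version of the order statistic, sketched with a simulated set of 10,000 outcomes (the normal P&L model here is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
# 10,000 simulated one-day P&L outcomes -- here from a hypothetical normal
# model with a $1,000 daily standard deviation.
outcomes = rng.normal(loc=0.0, scale=1_000.0, size=10_000)

# 1% VaR, the sort-and-count way: sort the outcomes and take the 100th worst.
sorted_outcomes = np.sort(outcomes)   # ascending, so the worst losses come first
var_1pct = -sorted_outcomes[99]       # 100th worst outcome, as a positive loss
```

For a normal model this lands near 2.33 standard deviations, so the simulated and parametric answers should agree up to sampling error; the payoff of simulation is that nothing forces the distribution to be normal.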
That's it. This is a different way of getting to that order statistic. It lends a lot more flexibility. So I can go and I can tweak the way I do that simulation, I can relax my assumptions of normality. I don't have to use the normal distribution, I could use a t distribution, I could do lots, I could tweak my distribution, I could customize it. I could put mean reversion in there, I could do all kinds of stuff. So another way we do value at risk is we simulate possible outcomes. We rank the outcomes, and we just count them. If I've got the 10,000 observations and I want my 5% order statistic, well, I just take my 500th. Make sense? It's that simple. Well, I don't want to make it seem like it's that simple, because it actually gets a little messy in here. But when we do Monte Carlo simulation, we're simulating what we think is going to happen, all subject to our assumptions. And we run through this Monte Carlo simulation. It's a simulation method using sequences of random numbers. The term was coined during the Manhattan Project, similar to games of chance. You need to describe your system in terms of probability density functions. What type of distribution? Is this normal? Is it t? Is it chi squared? Is it F? All right? That's the way we do it. So quickly, how do I do that? I have to have random numbers. Now, there are truly random numbers. Somewhere at MIT you could buy-- I used to say tape, but people don't use tape. They'll give you a website where you can get the atomic decay. That's random. All right? Anything else is pseudo-random. What you see when you go into MATLAB, you have a random number generator, it's an algorithm. It probably takes some number and takes the square root of that number, and then goes 54 decimal places to the right and takes the 55th decimal place to the right, multiplies those two numbers together, and then takes the fifth root, and then goes 16 decimal places to the right to get that-- it's some algorithm. 
True story, before I came to appreciate that these were all highly algorithmically driven, I was in my 20s, I was taking a computer class, I saw two computers, they were both running random number generators, and they were generating the same random numbers. And I thought I was at the event horizon. I thought that light was bending and the world was coming to an end, all right? Because this stuff can't happen, all right? It was happening right in front of me. It was a pseudo-random number generator. I didn't know, I was 24. Anyway, quasi-random numbers-- it's sort of a way of imposing some order on your random numbers. With random numbers, one particular set of draws may not have enough draws in a particular area to give you the numbers you want. I can impose some conditions upon that. I don't want to get into a discussion of random numbers. How do I get from random uniform-- most random number generators give you a random uniform number between 0 and 1. What you'll typically do is you'll take that random uniform number, you'll map it over to the cumulative density function, and map it down. So this gets you from random uniform space into standard deviation space. We used to worry about how we did this, now your software does it for you. I've gotten comfortable enough, truth be told. I usually trust my random number generators in Excel, in MATLAB. So I kind of violate my own rules, I don't check. But I think most of your standard random number generators are decent enough now. And you can go straight to normal, you don't have to do random uniform and back into random normal. You can get it distributed in any way you want. What I do when I do a Monte Carlo simulation-- and this is going to be rushed because we've only got like 20 minutes. If I take a covariance matrix-- you're going to have to trust me on this because, again, I'm covering like eight hours of lecture in an hour and a half. You guys go to MIT, so I have no doubt you're going to be all over this. 
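The uniform-to-normal mapping described above, going from random uniform space into standard deviation space via the inverse of the cumulative density function, can be sketched with the Python standard library:

```python
from statistics import NormalDist
import random

random.seed(42)
nd = NormalDist()  # standard normal, mean 0, standard deviation 1

# Draw uniforms on (0, 1) and push each one through the inverse CDF --
# the classic inverse-transform step from uniform space into
# "standard deviation space".
uniforms = [random.random() for _ in range(10_000)]
normals = [nd.inv_cdf(u) for u in uniforms]

# Sanity check: the mapped draws should look standard normal.
mean = sum(normals) / len(normals)
var = sum((z - mean) ** 2 for z in normals) / (len(normals) - 1)
```

As the lecture notes, modern packages will hand you normals directly, but this is what the software is doing under the hood.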
Let's take this out of here for a second. I can factor my covariance structure. I can factor my covariance structure like this. And this is the transpose of this. I didn't realize that-- the first time we did this commercially, I saw this instead of this and I thought we had sent bad data to the customer. I got physically sick. And then I remembered, AB transpose equals B transpose A transpose. These things keep happening. My high school math keeps coming back to me. But I had forgotten this, and I got physically sick because I thought we'd sent bad data, when I was just looking at the transpose of this. Anyway, I can factor this into this, where this is a matrix of eigenvectors. This is a diagonal matrix of the eigenvalues. All right? This is the vaunted Gaussian copula. This is it. Most people view it as a black box. If you've had any more than introductory statistics, this should be a glass box to you. That's why I wanted to go through this, even though I'd love to spend another hour and a half and do about 50 examples. Because this is how I learned this. I didn't learn it from looking at this equation and saying, oh, I get it. I learned it from actually doing it about 1,000 times in a spreadsheet, and it sank in like water into a stone. So I factor this matrix, and then I take this, which is the square root matrix, which is the transpose of my eigenvector matrix and a diagonal matrix containing the square roots of my eigenvalues. Now, could this ever be negative and take me into imaginary root land? Well, as long as my eigenvalues are all positive or zero, that won't be a problem. So here we get into this-- remember you guys studied positive semidefinite, positive definite. Once again, it's another one of these high school math things. Like, here it is. I had to know this. Suddenly I care whether it's positive semidefinite. Covariance structures have to be positive semidefinite. 
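The factorization and the square-root check (the one the programmer in the earlier story was asked to run) can be sketched like this; the covariance matrix is made up for illustration:

```python
import numpy as np

# A hypothetical 3x3 covariance matrix -- symmetric and, by diagonal
# dominance, positive definite.
sigma = np.array([[4.0, 1.2, 0.8],
                  [1.2, 9.0, 2.0],
                  [0.8, 2.0, 16.0]])

# Factor Sigma into eigenvectors E and a diagonal matrix of eigenvalues
# Lambda, then form the "square root" T = E Lambda^(1/2).
eigvals, eigvecs = np.linalg.eigh(sigma)
assert eigvals.min() > 0  # positive semidefiniteness keeps us out of imaginary root land
T = eigvecs @ np.diag(np.sqrt(eigvals))

# Multiplying the square root by its own transpose should recover Sigma.
recon = T @ T.T
```

A negative eigenvalue would put a negative number under the square root, which is exactly why the positive-semidefinite condition matters here.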
If you don't have a complete data set-- let's say you've got 100 observations, 100 observations, 100 observations, 25 observations, 100 observations-- you may have a negative eigenvalue if you just measure the covariance with the amount of data that you have. My intuition-- and I doubt this is the [INAUDIBLE]-- is that you're measuring with error, and when you have fewer observations you measure with more error. So it's possible, if some of your covariance measures have 25 observations and some of them have 100 observations, that there's more error in some than in others. And so there's the theoretical possibility for negative variance. True story, we didn't know this in the '90s. I took this problem to the chairman of the statistics department at NYU and said, I'm getting negative eigenvalues. And he didn't know. He had no idea, and he's a smart guy. You have to fill in your missing data. You have to fill in your missing data. If you've got 1,000 observations, 1,000 observations, 1,000 observations, 200 observations, and you want to make sure you won't have a negative eigenvalue, you've got to fill in those observations. Which is why missing data is a whole other thing we talk about. Again, I could spend a lot of time on that. And I learned that the hard way. But anyway, so I take this square root matrix, and if I premultiply that square root matrix by row after row of normals, I will get out an array that has the same covariance structure as that with which I started. Another story here-- I've been using the same eigenvalue-- I believe in full attribution. I'm not a clever guy. I have not an original thought in my head. And whenever I use someone else's stuff, I give them credit for it. And the guy who wrote the code that did the eigenvalue [? decomposition-- ?] this is something that was translated from Fortran IV. It wasn't even [INAUDIBLE]. There's a dichotomy in the world. There are people that have written Fortran, and people that haven't. 
I'm guessing that there are two people in this room that have ever written a line of Fortran. Anyone here? Just saying. Yeah, with cards or without cards? PROFESSOR: [INAUDIBLE]. KENNETH ABBOTT: I didn't use cards. See, you're an old-timer because you used cards. The punch line is, I've been using this guy's code. And I could show you the code. It's like the Lone Ranger, I didn't even get a chance to thank him. Because he didn't put his name on the code. On the internet now, if you do something clever on the quant newsgroups, you're going to post your name all over it. I've been wanting to thank this guy for like 20 years and I haven't been able to. Anyway, [INAUDIBLE] code that's been translated. Let me show you what this means. Here's some source data. Here are some percentage changes. Just like we talked about. Here is the empirical correlation of those percentage changes. So the correlation of my government 10-year to my AAA 10-year is 0.83. To my AA, 0.84. All right, you see this. And I have this covariance matrix, which is the-- the correlation matrix is a scaled version of the covariance matrix. And I do a little bit of statistical legerdemain. Eigenvalues and eigenvectors. Take the square root of that. And again, I'd love to spend a lot more time on this, but we just don't-- suffice to say, I call this a transformation matrix. That's my term. This matrix here is this. If we had another hour and a half, I'd take you step by step to get you there. The proof of which is left to the reader as an exercise. I'll leave this spreadsheet for you, I'll send it to you. I have this matrix. This matrix is like a prism. I'm going to pass white light through it, I'm going to get a beautiful rainbow. Let me show you what I mean. So remember that matrix, this matrix I'm calling T. Remember my matrix is 10 by 10. One, two, three, four, five, six, seven, eight, nine, ten. 10 columns of data. 10 by 10 correlation matrix. Let's check. 
Now I've got row vectors of-- sorry-- uncorrelated random normals. So what I'm doing then is I'm premultiplying that transformation matrix row by row by each row of uncorrelated random normals. And what I get is correlated random normals. So what I'm telling you here is, this array happens to be 10 wide and 1,000 long. And I'm telling you that I started with my historical data-- let me see how much data I have there. A couple hundred observations of historical data. And what I've done is, once I have that covariance structure, I can create a data set here which has the same statistical properties as this. Not quite the same. It can have the same means and the same variances. This is what Monte Carlo simulation is about. I wish we had another hour because I'd like to spend time and-- this is one of these things, and again, when I first saw this, I was like, oh my god. I felt like I got the keys to the kingdom. And I did it manually, did it all on a spreadsheet. Didn't believe anyone else's code, did it all on a spreadsheet. But what that means, quickly-- let me just go back over here for a second. I happen to have about 800 observations here. Historical observations. What I did was, I happened to generate 1,000 samples here. But I could generate 10,000 or 100,000, or a million or 10 million or a billion, just by doing more random normals. I could generate-- in effect, what I'm generating here is synthetic time series that have properties similar to my underlying data. That's what Monte Carlo simulation is about. The means and the variances and the covariances of this data set are just like that. Now, again, true story, when somebody first showed me this, I did not believe them. So I developed a bunch of little tests. And I said, let me just look at the correlation of my Monte Carlo data versus my original correlation matrix. So 0.83, 0.84, 0.85, 0.85, 0.67, 0.81. You look at the corresponding ones of the random numbers I just generated: 0.81, 0.82, 0.84, 0.84, 0.64, 0.52. 
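The prism step, and the little don't-trust-anyone test of comparing the simulated correlation back against the target, can be sketched in two dimensions; the 0.83 target is borrowed from the lecture's government-to-AAA example, and everything else is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical target correlation structure for two series, using the 0.83
# government-to-AAA figure from the lecture as the off-diagonal entry.
target_corr = np.array([[1.00, 0.83],
                        [0.83, 1.00]])

# Build the "transformation matrix" from the eigendecomposition.
vals, vecs = np.linalg.eigh(target_corr)
T = vecs @ np.diag(np.sqrt(vals))

# White light in, rainbow out: premultiply rows of uncorrelated standard
# normals by the transformation matrix to get correlated normals.
z = rng.standard_normal((100_000, 2))
correlated = z @ T.T

# The empirical check: does the output carry the target correlation?
sample_corr = np.corrcoef(correlated.T)[0, 1]
```

The sample correlation won't be spot on; it misses the target by sampling error, which shrinks as the number of simulated rows grows.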
0.54 versus 0.52. 0.18 versus 0.12. 0.51 versus 0.47. Somebody want to tell me why they're not spot on? Sampling error. The more data I use, the closer it will get to that. If I do 1 million, I'd better get right on top of that. Does that make sense?

So what I'm telling you here is that I can generate synthetic time series. Now, why would I generate so many? Well, because, remember, I care what's going on out in that tail. If I only have 100 observations and I'm looking empirically at my tail, I've only got one observation out in the 1% tail. And that doesn't tell me a whole lot about what's going on. If I can simulate that distribution exactly, I can say, you know what, I want a billion observations in that tail. Now we can look at that tail. If I have 1 billion observations-- let's say I'm looking at some kind of normal distribution-- I can really dig in and see what the properties of this thing are. In fact, this can really only take two distributions, and really, it's only one. But that's another story. So what I do in Monte Carlo simulation is simulate these outcomes so we can get a lot more meat in this tail, to understand what's happening out there. Does it drop off quickly? Does it not drop off quickly? That's kind of what it's about.

So we're about out of time. We just covered like four weeks of material, all right? But you guys are from MIT. I have complete confidence in you. I say that to the people who work for me: I have complete confidence in your ability to get that done by tomorrow morning. Questions or comments? I know you're sipping from the fire hose here. I fully appreciate that.

So those are examples. When I do this with historical simulation, I won't generate these Monte Carlo trials; I'll just use historical data, and my fat tails are built into it. But what I've shown you today: we developed a one-asset VaR model, then we developed a multi-asset variance-covariance model.
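The sampling-error point (0.81 versus 0.83 with 1,000 draws, converging as the draw count grows) can be checked directly: estimate the correlation of simulated pairs at increasing sample sizes and watch the error shrink. The seed, the sizes, and the 0.83 target below are all illustrative choices, not values from the lecture's spreadsheet.

```python
import math
import random

def simulated_corr(rho, n, seed):
    """Draw n correlated normal pairs with target correlation rho
    (via a 2x2 Cholesky factor) and return their sample correlation."""
    rng = random.Random(seed)
    t = math.sqrt(1.0 - rho ** 2)
    xs, ys = [], []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        xs.append(z1)
        ys.append(rho * z1 + t * z2)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

small = simulated_corr(0.83, 1_000, seed=1)     # noticeable sampling error
large = simulated_corr(0.83, 200_000, seed=1)   # lands "right on top of" 0.83
```

The standard error of a sample correlation shrinks roughly like one over the square root of the sample size, which is why a million draws pins the estimate down while a thousand leaves visible wobble.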
And then I showed you quickly-- in far less time than I would have liked-- how I can use another statistical technique, called the Gaussian copula, to generate data sets that will have the same properties as my source historical data. All right? There you have it.

[APPLAUSE]

Oh, you don't have to-- please, please, please. And I'll tell you, for me, one of the coolest things was actually being able to apply so much of the math I learned in high school and in college and never thought I'd apply again. One of my best moments was actually finding a use for trigonometry. If you're not an engineer, where are you going to use it? Where do you use it? Seasonals. You do seasonal estimation. And what you do is a fast Fourier transform, because I can describe any seasonal pattern with a linear combination of sine and cosine functions. And it actually works. I have my students do it as an exercise every year. I say, go get New York City temperature data, and show me some linear combination of sine and cosine functions that will show me the seasonal pattern of temperature data. And when I first realized I could use trigonometry-- yes! It wasn't a waste of time. Polar coordinates, I still haven't found a use for that one. But it's there. I know it's there. All right? Go home.
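The trigonometry exercise can be sketched directly: project a monthly series onto one sine and one cosine cycle per year (the first Fourier harmonic, which is what a fast Fourier transform would hand you) and check how much of the seasonal pattern that linear combination captures. The temperature numbers below are made up, roughly New-York-shaped monthly means, not real data.

```python
import math

# Hypothetical monthly mean temperatures (deg F), one value per month,
# shaped like a New York seasonal cycle -- illustrative, not real data.
temps = [33, 35, 42, 53, 63, 72, 77, 76, 68, 57, 47, 37]

n = len(temps)
mean = sum(temps) / n

# Fourier coefficients of the first harmonic: one full sine/cosine
# cycle per year. This is the "linear combination of sine and cosine
# functions" used to describe the seasonal pattern.
b = (2.0 / n) * sum((t - mean) * math.sin(2 * math.pi * m / n)
                    for m, t in enumerate(temps))
c = (2.0 / n) * sum((t - mean) * math.cos(2 * math.pi * m / n)
                    for m, t in enumerate(temps))

fitted = [mean + b * math.sin(2 * math.pi * m / n)
               + c * math.cos(2 * math.pi * m / n) for m in range(n)]

resid_ss = sum((t - f) ** 2 for t, f in zip(temps, fitted))
total_ss = sum((t - mean) ** 2 for t in temps)
```

Because sine and cosine at whole-cycle frequencies are orthogonal over a full year of months, the coefficients are simple projections; for a smooth seasonal series the single harmonic explains nearly all of the month-to-month variance.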

Current listings from 14th to 59th Streets

[3] Name on the Register Image Date listed[4] Location Neighborhood Description
1 14th Street-Union Square Subway Station (IRT; Dual System BMT) July 6, 2005
Broadway, 4th Ave., and E. 14th St.
40°44′07″N 73°59′28″W / 40.735278°N 73.991111°W / 40.735278; -73.991111 (14th Street-Union Square Subway Station (IRT; Dual System BMT))
Union Square Subway station (4, 5, 6, <6>, L, N, Q, R, and W trains)
2 240 Central Park South May 12, 2009
240 Central Park South
40°46′04″N 73°58′52″W / 40.767664°N 73.980983°W / 40.767664; -73.980983 (240 Central Park South)
Columbus Circle
3 28th Street Subway Station (IRT) March 30, 2005
Under Park Ave. S, between E. 27th and E. 29th Sts.
40°44′36″N 73°59′04″W / 40.743333°N 73.984444°W / 40.743333; -73.984444 (28th Street Subway Station (IRT))
Rose Hill Subway station (4, 6, and <6> trains)
4 33rd Street Subway Station (IRT) September 17, 2004
33rd St. and Park Ave.
40°44′53″N 73°58′55″W / 40.748056°N 73.981944°W / 40.748056; -73.981944 (33rd Street Subway Station (IRT))
Murray Hill Subway station (4, 6, and <6> trains)
5 59th Street-Columbus Circle Subway Station (IRT) September 17, 2004
Junction of Broadway and Central Park South
40°46′05″N 73°58′57″W / 40.768056°N 73.9825°W / 40.768056; -73.9825 (59th Street-Columbus Circle Subway Station (IRT))
Columbus Circle Subway station (1 and 2 trains)
6 69th Regiment Armory January 28, 1994
68 Lexington Ave.
40°44′28″N 73°59′05″W / 40.741111°N 73.984722°W / 40.741111; -73.984722 (69th Regiment Armory)
Rose Hill
7 Actors Temple May 19, 2005
339 W. 47th St.
40°45′40″N 73°59′22″W / 40.761111°N 73.989444°W / 40.761111; -73.989444 (Actors Temple)
Hell's Kitchen
8 Alwyn Court Apartments December 26, 1979
180 W. 58th St.
40°45′57″N 73°58′48″W / 40.765833°N 73.98°W / 40.765833; -73.98 (Alwyn Court Apartments)
Midtown Manhattan
9 American Fine Arts Society May 6, 1980
215 W. 57th St.
40°45′57″N 73°58′52″W / 40.765833°N 73.981111°W / 40.765833; -73.981111 (American Fine Arts Society)
Midtown Manhattan
10 American Radiator Building May 7, 1980
40–52 W. 40th St.
40°45′10″N 73°59′04″W / 40.752778°N 73.984444°W / 40.752778; -73.984444 (American Radiator Building)
Midtown Manhattan Black-and-gold building, also known as the American Standard Building and more recently the Bryant Park Hotel
11 Appellate Division Courthouse of New York State July 26, 1982
27 Madison Ave.
40°44′32″N 73°59′13″W / 40.742222°N 73.986944°W / 40.742222; -73.986944 (Appellate Division Courthouse of New York State)
Madison Square
12 Chester A. Arthur House October 15, 1966
123 Lexington Ave.
40°44′34″N 73°58′57″W / 40.742778°N 73.9825°W / 40.742778; -73.9825 (Chester A. Arthur House)
Kips Bay Home of former US president Chester A. Arthur
13 Association of the Bar of the City of New York January 3, 1980
42 W. 44th St.
40°45′18″N 73°58′57″W / 40.755°N 73.9825°W / 40.755; -73.9825 (Association of the Bar of the City of New York)
Midtown Manhattan
14 Bank of the Metropolis November 15, 2003
31 Union Square West
40°44′12″N 73°59′29″W / 40.736667°N 73.991389°W / 40.736667; -73.991389 (Bank of the Metropolis)
Union Square
15 Biltmore Theater October 27, 2004
261–265 W. 47th St.
40°45′37″N 73°59′14″W / 40.760278°N 73.987222°W / 40.760278; -73.987222 (Biltmore Theater)
Theater District
16 Building at 304 Park Avenue South March 15, 2005
304 Park Ave. S
40°44′24″N 73°59′14″W / 40.74°N 73.987222°W / 40.74; -73.987222 (Building at 304 Park Avenue South)
Flatiron District
17 Building at 315–325 West 36th Street May 27, 2004
315–325 W. 36th St.
40°45′15″N 73°59′39″W / 40.754167°N 73.994167°W / 40.754167; -73.994167 (Building at 315–325 West 36th Street)
18 Candler Building July 8, 1982
220 West 42nd St. and 221 West 41st St.
40°45′22″N 73°59′18″W / 40.756111°N 73.988333°W / 40.756111; -73.988333 (Candler Building)
Times Square
19 Carnegie Hall October 15, 1966
7th Ave., 56th to 57th Sts.
40°45′55″N 73°58′48″W / 40.765278°N 73.98°W / 40.765278; -73.98 (Carnegie Hall)
Theater District Internationally known classical music venue
20 Central IND Substation February 9, 2006
136 W. 53rd St. (btwn 6th & 7th)
40°45′44″N 73°58′52″W / 40.762222°N 73.981111°W / 40.762222; -73.981111 (Central IND Substation)
Midtown Manhattan
21 Central Synagogue October 9, 1970
646–652 Lexington Ave.
40°45′34″N 73°58′16″W / 40.759444°N 73.971111°W / 40.759444; -73.971111 (Central Synagogue)
Midtown Manhattan
22 Century Association Building July 15, 1982
5–7 W. 43rd St.
40°45′16″N 73°58′52″W / 40.754444°N 73.981111°W / 40.754444; -73.981111 (Century Association Building)
Midtown Manhattan
23 Century Building September 18, 1997
33 E. 17th St.
40°44′13″N 73°59′25″W / 40.736944°N 73.990278°W / 40.736944; -73.990278 (Century Building)
Union Square
24 Chanin Building April 23, 1980
122 E. 42nd St.
40°45′04″N 73°58′32″W / 40.751111°N 73.975556°W / 40.751111; -73.975556 (Chanin Building)
Murray Hill
25 Chelsea Historic District December 6, 1977
Roughly bounded by 19th and 22nd Sts., 9th and 10th Aves.
40°44′43″N 74°00′15″W / 40.745278°N 74.004167°W / 40.745278; -74.004167 (Chelsea Historic District)
26 Chrysler Building December 8, 1976
405 Lexington Ave.
40°45′05″N 73°58′31″W / 40.751389°N 73.975278°W / 40.751389; -73.975278 (Chrysler Building)
Midtown East
27 Church of St. Mary the Virgin Complex April 16, 1990
145 W. 46th St.
40°45′30″N 73°59′02″W / 40.758333°N 73.983889°W / 40.758333; -73.983889 (Church of St. Mary the Virgin Complex)
Times Square
28 Church of the Holy Apostles April 26, 1972
300 9th Ave.
40°44′57″N 73°59′57″W / 40.749167°N 73.999167°W / 40.749167; -73.999167 (Church of the Holy Apostles)
29 Church of the Holy Communion and Buildings April 17, 1980
656–662 6th Ave.
40°44′28″N 73°59′40″W / 40.741111°N 73.994444°W / 40.741111; -73.994444 (Church of the Holy Communion and Buildings)
Flatiron District Location of The Limelight nightclub
30 Church of the Immaculate Conception and Clergy Houses March 28, 1980
406–414 E. 14th St.
40°43′52″N 73°58′56″W / 40.731111°N 73.982222°W / 40.731111; -73.982222 (Church of the Immaculate Conception and Clergy Houses)
East Village
31 Church of the Incarnation and Parish House July 8, 1982
205–209 Madison Ave.
40°44′54″N 73°58′57″W / 40.748333°N 73.9825°W / 40.748333; -73.9825 (Church of the Incarnation and Parish House)
Murray Hill
32 Church of the Transfiguration and Rectory June 4, 1973
1 E. 29th St.
40°44′44″N 73°59′14″W / 40.745556°N 73.987222°W / 40.745556; -73.987222 (Church of the Transfiguration and Rectory)
33 Church Missions House June 3, 1982
281 Park Ave., S.
40°44′21″N 73°59′14″W / 40.739167°N 73.987222°W / 40.739167; -73.987222 (Church Missions House)
Gramercy Park
34 CIRCLE LINE X (sightseeing vessel) September 22, 2014
Pier 83 & West 42nd St.
40°45′46″N 74°00′06″W / 40.762897°N 74.001780°W / 40.762897; -74.001780 (CIRCLE LINE X (sightseeing vessel))
Hell's Kitchen Former World War II landing craft later used for sightseeing; being converted into a museum.[5]
35 Civic Club September 16, 1982
243 E. 34th St.
40°44′41″N 73°58′34″W / 40.744722°N 73.976111°W / 40.744722; -73.976111 (Civic Club)
Murray Hill Now the New York Estonian House.
36 Colony Arcade Building September 10, 2014
63–67 W. 38th St.
40°45′08″N 73°59′07″W / 40.7522°N 73.9854°W / 40.7522; -73.9854 (Colony Arcade Building)
Garment District This Neo-Gothic 1912 building was crucial to the city's developing fashion industry in the early 20th century; now redeveloped into a hotel.
37 Columbus Monument November 20, 2018
Columbus Circle
40°46′05″N 73°58′55″W / 40.7681°N 73.9819°W / 40.7681; -73.9819 (Columbus Monument)
Midtown Manhattan Italian sculptor Gaetano Russo's only work in the U.S.; erected in 1892 for the 400th anniversary of Columbus's voyage.
38 Daily News Building November 14, 1982
220 E. 42nd St.
40°44′58″N 73°58′25″W / 40.749444°N 73.973611°W / 40.749444; -73.973611 (Daily News Building)
Turtle Bay
39 Decker Building November 21, 2003
33 Union Square W.
40°44′12″N 73°59′29″W / 40.736667°N 73.991389°W / 40.736667; -73.991389 (Decker Building)
Union Square
40 DeLamar Mansion August 25, 1983
233 Madison Ave.
40°44′59″N 73°58′54″W / 40.749722°N 73.981667°W / 40.749722; -73.981667 (DeLamar Mansion)
Murray Hill
41 Adelaide L. T. Douglas House July 15, 1982
57 Park Ave. (btwn East 37 & 38)
40°44′57″N 73°58′48″W / 40.749167°N 73.98°W / 40.749167; -73.98 (Adelaide L. T. Douglas House)
Murray Hill now the Guatemalan Mission to the UN
42 The Emerson August 20, 2009
554 W. 53rd St.
40°46′01″N 73°59′30″W / 40.767017°N 73.991528°W / 40.767017; -73.991528 (The Emerson)
Hell's Kitchen
43 Empire State Building November 17, 1982
350 Fifth Ave.
40°44′53″N 73°59′10″W / 40.748056°N 73.986111°W / 40.748056; -73.986111 (Empire State Building)
Koreatown Seventh tallest building in Manhattan and an international symbol of New York City
44 Engineering Societies' Building and Engineers' Club August 30, 2007
23 and 25-33 W. 39th St. and 28, 32-34, and 36 W. 40th St.
40°45′09″N 73°59′04″W / 40.7525°N 73.984444°W / 40.7525; -73.984444 (Engineering Societies' Building and Engineers' Club)
Midtown Manhattan
45 ENTERPRISE (space shuttle) March 13, 2013
Pier 86, W. 46th St. and 12th Ave.
40°45′56″N 74°00′07″W / 40.765693°N 74.001874°W / 40.765693; -74.001874 (ENTERPRISE (space shuttle))
Hell's Kitchen First space shuttle built; never flown in orbit, but it contributed immensely to the development of later vehicles that were. Now on exhibit at the Intrepid Sea, Air & Space Museum.
46 Father Francis D. Duffy Statue and Duffy Square March 12, 2001
Triangle bounded by Broadway, Seventh Ave., W. 47th. and W. 46th St.
40°45′32″N 73°59′07″W / 40.758889°N 73.985278°W / 40.758889; -73.985278 (Father Francis D. Duffy Statue and Duffy Square)
Times Square
47 Film Center Building September 7, 1984
630 9th Ave.
40°45′35″N 73°59′30″W / 40.759722°N 73.991667°W / 40.759722; -73.991667 (Film Center Building)
Hell's Kitchen
48 Flatiron Building November 20, 1979
5th Ave. and Broadway
40°44′29″N 73°59′27″W / 40.7414°N 73.9908°W / 40.7414; -73.9908 (Flatiron Building)
Flatiron District
49 Fred F. French Building January 28, 2004
551 Fifth Ave.
40°45′20″N 73°58′47″W / 40.755556°N 73.979722°W / 40.755556; -73.979722 (Fred F. French Building)
Midtown Manhattan
50 FRYING PAN SHOALS LIGHTSHIP NO. 115 (lightship) January 28, 1999
Pier 66, W. 26th and West Side Highway.
40°45′00″N 74°00′37″W / 40.75°N 74.010278°W / 40.75; -74.010278 (FRYING PAN SHOALS LIGHTSHIP NO. 115 (lightship))
51 Garment Center Historic District November 5, 2008
Roughly bounded by Sixth Ave. on the E., Ninth Ave. on the W., W. 35th St. on the S., and W. 41st St. on the N.
40°45′14″N 73°59′25″W / 40.753758°N 73.990392°W / 40.753758; -73.990392 (Garment Center Historic District)
Garment District
52 General Electric Building January 28, 2004
570 Lexington Ave.
40°45′26″N 73°58′38″W / 40.757222°N 73.977222°W / 40.757222; -73.977222 (General Electric Building)
Rockefeller Center
53 General Society of Mechanics and Tradesmen November 12, 2008
20 W. 44th St
40°45′19″N 73°58′53″W / 40.755328°N 73.981317°W / 40.755328; -73.981317 (General Society of Mechanics and Tradesmen)
Midtown Manhattan
54 George Washington Hotel May 20, 2019
23 Lexington Ave.
40°44′23″N 73°59′03″W / 40.7397°N 73.9842°W / 40.7397; -73.9842 (George Washington Hotel)
Rose Hill This 1930 Renaissance Revival apartment hotel, recently restored to that use, is the last major one surviving from that era.
55 Germania Life Insurance Company Building May 25, 2001
50 Union Sq. E
40°44′12″N 73°59′21″W / 40.736667°N 73.989167°W / 40.736667; -73.989167 (Germania Life Insurance Company Building)
Union Square
56 Gilsey Hotel December 14, 1978
1200 Broadway
40°44′45″N 73°59′19″W / 40.745833°N 73.988611°W / 40.745833; -73.988611 (Gilsey Hotel)
57 Gramercy Park Historic District January 23, 1980
Roughly bounded by 3rd and Park Aves., S., E. 18th and 22nd Sts.
40°44′16″N 73°59′10″W / 40.737778°N 73.986111°W / 40.737778; -73.986111 (Gramercy Park Historic District)
Gramercy Park
58 Grand Central Terminal January 17, 1975
71–105 E. 42nd St.
40°45′10″N 73°58′35″W / 40.752778°N 73.976389°W / 40.752778; -73.976389 (Grand Central Terminal)
Midtown Manhattan 1913 Beaux Arts landmark still used by Metro-North Railroad. Construction helped trigger development of Park Avenue.
59 Grand Central Terminal Park Avenue Viaduct August 11, 1983
71–105 E. 42nd St., Park Ave. between E. 40th and E. 42nd Sts.
40°44′59″N 73°58′40″W / 40.749722°N 73.977778°W / 40.749722; -73.977778 (Grand Central Terminal Park Avenue Viaduct)
Pershing Square Elevated roadway bringing Park Avenue around Grand Central Terminal
60 Grand Hotel September 15, 1983
1232–1238 Broadway
40°44′50″N 73°59′19″W / 40.747222°N 73.988611°W / 40.747222; -73.988611 (Grand Hotel)
61 Greenacre Park February 2, 2018
217 E 51st St.
40°45′23″N 73°58′09″W / 40.75626°N 73.96930°W / 40.75626; -73.96930 (Greenacre Park)
Midtown East Small mid-block park with a modernist design, built in 1971 on land donated by Abby Rockefeller Mauzé; an early pocket park
62 Greenwich Savings Bank November 16, 2005
1352–1362 Broadway
40°45′05″N 73°59′15″W / 40.751389°N 73.9875°W / 40.751389; -73.9875 (Greenwich Savings Bank)
Near Herald Square
63 Harvard Club of New York City March 28, 1980
27 W. 44th St.
40°45′18″N 73°58′53″W / 40.755°N 73.981389°W / 40.755; -73.981389 (Harvard Club of New York City)
Midtown Manhattan
64 Hotel Chelsea December 27, 1977
222 W. 23rd St.
40°44′35″N 73°59′39″W / 40.743056°N 73.994167°W / 40.743056; -73.994167 (Hotel Chelsea)
65 Hotel Gerard February 10, 1983
123 W. 44th St.
40°45′48″N 73°59′34″W / 40.763333°N 73.992778°W / 40.763333; -73.992778 (Hotel Gerard)
Midtown Manhattan
66 House at 146 East 38th Street May 21, 2008
145 E. 38th St.
40°44′55″N 73°58′37″W / 40.748567°N 73.977°W / 40.748567; -73.977 (House at 146 East 38th Street)
Murray Hill
67 House at 17 West 16th Street May 26, 1983
17 W. 16th St.
40°44′16″N 73°59′38″W / 40.737778°N 73.993889°W / 40.737778; -73.993889 (House at 17 West 16th Street)
68 House at 20 West 16th St May 30, 2007
20 W. 16th St.
40°44′15″N 73°59′40″W / 40.7375°N 73.994444°W / 40.7375; -73.994444 (House at 20 West 16th St)
69 House at 203 East 29 Street July 8, 1982
203 E. 29th St.
40°44′33″N 73°58′49″W / 40.7425°N 73.980278°W / 40.7425; -73.980278 (House at 203 East 29 Street)
Rose Hill
70 Houses at 311 and 313 East 58th Street November 14, 1982
311–313 E. 58th St.
40°45′35″N 73°57′53″W / 40.759722°N 73.964722°W / 40.759722; -73.964722 (Houses at 311 and 313 East 58th Street)
Midtown East
71 Houses at 326, 328 and 330 East 18th Street September 30, 1982
326–330 E. 18th St.
40°44′03″N 73°58′57″W / 40.734167°N 73.9825°W / 40.734167; -73.9825 (Houses at 326, 328 and 330 East 18th Street)
Kips Bay
72 Houses at 437–459 West 24th Street October 29, 1982
437–459 W. 24th St.
40°44′53″N 74°00′11″W / 40.748056°N 74.003056°W / 40.748056; -74.003056 (Houses at 437–459 West 24th Street)
73 Houses at 647, 651–53 Fifth Avenue and 4 East 52nd Street September 8, 1983
647, 651–53 5th Ave. and 4 E. 52nd St.
40°45′34″N 73°58′36″W / 40.759444°N 73.976667°W / 40.759444; -73.976667 (Houses at 647, 651–53 Fifth Avenue and 4 East 52nd Street)
Midtown Manhattan Consists of the Morton F. Plant House (Cartier Building) and 647 Fifth Avenue (Versace Building)
74 JOHN J. HARVEY (fireboat) June 15, 2000
Pier 63, North River
40°45′00″N 74°00′39″W / 40.75°N 74.010833°W / 40.75; -74.010833 (JOHN J. HARVEY (fireboat))
75 Hudson Theatre November 15, 2016
139–141 W. 44th St.
40°45′25″N 73°59′05″W / 40.756944°N 73.984722°W / 40.756944; -73.984722 (Hudson Theatre)
Theater District Israels & Harder-designed theater, built in 1903; home to many important Broadway shows
76 Knickerbocker Hotel April 11, 1980
142 W. 42nd St.
40°45′19″N 73°59′12″W / 40.755278°N 73.986667°W / 40.755278; -73.986667 (Knickerbocker Hotel)
Times Square
77 Knox Building June 3, 1982
452 5th Ave.
40°45′08″N 73°58′57″W / 40.752222°N 73.9825°W / 40.752222; -73.9825 (Knox Building)
78 Lamb's Club June 3, 1982
128 W. 44th St.
40°45′23″N 73°59′07″W / 40.756389°N 73.985278°W / 40.756389; -73.985278 (Lamb's Club)
Near Rockefeller Center Former home of the Lamb's Club; being gut-renovated into a hotel as of 2009.
79 James F. D. Lanier Residence June 3, 1982
123 E. 35th
40°44′51″N 73°58′49″W / 40.7475°N 73.980278°W / 40.7475; -73.980278 (James F. D. Lanier Residence)
Murray Hill
80 Lescaze House May 19, 1980
211 E. 48th St.
40°45′15″N 73°58′17″W / 40.754167°N 73.971389°W / 40.754167; -73.971389 (Lescaze House)
Midtown East
81 Lever House October 2, 1983
390 Park Ave.
40°45′34″N 73°58′23″W / 40.759444°N 73.973056°W / 40.759444; -73.973056 (Lever House)
Midtown Manhattan
82 Lincoln Building September 8, 1983
1 Union Sq. W.
40°44′08″N 73°59′32″W / 40.735556°N 73.992222°W / 40.735556; -73.992222 (Lincoln Building)
Union Square
83 Look Building February 24, 2005
488 Madison Ave.
40°45′31″N 73°58′33″W / 40.758611°N 73.975833°W / 40.758611; -73.975833 (Look Building)
Midtown East
84 R. H. Macy and Company Store June 2, 1978
151 W. 34th St.
40°45′41″N 73°58′22″W / 40.761389°N 73.972778°W / 40.761389; -73.972778 (R. H. Macy and Company Store)
Herald Square Home of the first major American department store.
85 Marble Collegiate Reformed Church April 9, 1980
275 5th Ave.
40°44′44″N 73°59′15″W / 40.745556°N 73.9875°W / 40.745556; -73.9875 (Marble Collegiate Reformed Church)
86 McGraw-Hill Building March 28, 1980
326 W. 42nd St.
40°45′25″N 73°59′31″W / 40.756944°N 73.991944°W / 40.756944; -73.991944 (McGraw-Hill Building)
Hell's Kitchen
87 Mecca Temple September 7, 1984
131 W. 55th St.
40°45′50″N 73°58′48″W / 40.763889°N 73.98°W / 40.763889; -73.98 (Mecca Temple)
Midtown Manhattan Known as New York City Center Theater. Moorish-revival interior and exterior details. Harry P. Knowles, architect.
88 Merchants Refrigerating Company Warehouse May 31, 1985
501 W. 16th St.
40°44′38″N 74°00′28″W / 40.743889°N 74.007778°W / 40.743889; -74.007778 (Merchants Refrigerating Company Warehouse)
89 Metropolitan Life Home Office Complex January 19, 1996
Roughly bounded by Madison Ave., E. 23rd St., Park Ave. S. and E. 25th St.
40°44′28″N 73°59′14″W / 40.741111°N 73.987222°W / 40.741111; -73.987222 (Metropolitan Life Home Office Complex)
Flatiron District
90 Metropolitan Life Insurance Company June 2, 1978
1 Madison Ave.
40°44′29″N 73°59′16″W / 40.741389°N 73.987778°W / 40.741389; -73.987778 (Metropolitan Life Insurance Company)
Flatiron District
91 William H. Moore House March 16, 1972
4 E. 54th St.
40°45′38″N 73°58′31″W / 40.760556°N 73.975278°W / 40.760556; -73.975278 (William H. Moore House)
Midtown Manhattan
92 Pierpont Morgan Library November 13, 1966
33 E. 36th St.
40°44′56″N 73°58′54″W / 40.748889°N 73.981667°W / 40.748889; -73.981667 (Pierpont Morgan Library)
Murray Hill Now known as "The Morgan Library & Museum"
93 Murray Hill Historic District October 5, 2003
E. 34th, 35th, 36th, 37th, 38th & 39th Sts., Lexington, Madison & Park Aves.
40°44′54″N 73°58′47″W / 40.748333°N 73.979722°W / 40.748333; -73.979722 (Murray Hill Historic District)
Murray Hill One of the last intact 19th-century residential districts in Manhattan; many buildings by prominent architects. Boundary increase on February 27, 2013.
94 New Amsterdam Theater January 10, 1980
214 W. 42nd St.
40°45′21″N 73°59′18″W / 40.755833°N 73.988333°W / 40.755833; -73.988333 (New Amsterdam Theater)
Theater District
95 New York Bible Society February 5, 2014
5 E. 48th St.
40°45′26″N 73°58′39″W / 40.757213°N 73.977398°W / 40.757213; -73.977398 (New York Bible Society)
Midtown Manhattan Since 1978 in use by the Swedish Seamen's Church[6]
96 New York Life Building June 2, 1978
51 Madison Ave.
40°44′34″N 73°59′09″W / 40.742778°N 73.985833°W / 40.742778; -73.985833 (New York Life Building)
Flatiron District
97 New York Public Library October 15, 1966
5th Ave. and 42nd St.
40°45′12″N 73°58′56″W / 40.753333°N 73.982222°W / 40.753333; -73.982222 (New York Public Library)
Midtown Manhattan Main branch of New York Public Library system; one of the world's largest public libraries.
98 New York Public Library and Bryant Park October 15, 1966
Avenue of the Americas, 5th Ave., 40th and 42nd Sts.
40°45′12″N 73°58′56″W / 40.753333°N 73.982222°W / 40.753333; -73.982222 (New York Public Library and Bryant Park)
Midtown Manhattan
99 New York Savings Bank January 7, 2000
81 Eighth Ave.
40°44′24″N 74°00′11″W / 40.74°N 74.003056°W / 40.74; -74.003056 (New York Savings Bank)
100 New York School of Applied Design December 16, 1982
160 Lexington Ave.
40°44′38″N 73°58′56″W / 40.743889°N 73.982222°W / 40.743889; -73.982222 (New York School of Applied Design)
Murray Hill
101 New York Yacht Club October 29, 1982
37 W. 44th St.
40°45′19″N 73°58′54″W / 40.755278°N 73.981667°W / 40.755278; -73.981667 (New York Yacht Club)
102 Andrew Norwood House July 9, 1979
241 W. 14th St.
40°44′23″N 74°00′07″W / 40.739722°N 74.001944°W / 40.739722; -74.001944 (Andrew Norwood House)
103 Old Colony Club April 23, 1980
120 Madison Ave.
40°44′43″N 73°59′06″W / 40.745278°N 73.985°W / 40.745278; -73.985 (Old Colony Club)
Rose Hill
104 Old Grolier Club April 23, 1980
29 E. 32nd St.
40°44′47″N 73°59′02″W / 40.746389°N 73.983889°W / 40.746389; -73.983889 (Old Grolier Club)
Rose Hill
105 Osborne Apartments April 22, 1993
205 W. 57th St.
40°45′56″N 73°58′51″W / 40.765556°N 73.980833°W / 40.765556; -73.980833 (Osborne Apartments)
Midtown Manhattan
106 Pier 57 August 11, 2004
Eleventh Ave. at end of W. 15th St.
40°44′37″N 74°00′40″W / 40.743611°N 74.011111°W / 40.743611; -74.011111 (Pier 57)
107 The Players October 15, 1966
16 Gramercy Park
40°44′15″N 73°59′13″W / 40.7375°N 73.986944°W / 40.7375; -73.986944 (The Players)
Gramercy Park
108 Plaza Hotel November 29, 1978
Fifth Ave. and Fifty-ninth St.
40°45′51″N 73°58′27″W / 40.764167°N 73.974167°W / 40.764167; -73.974167 (Plaza Hotel)
Grand Army Plaza Famous Upper East Side hotel being remodeled into condos. Setting for the Eloise books.
109 Prince George Hotel February 12, 1999
10–20 E. 28th and 17–19 E. 27th Sts.
40°44′41″N 73°59′12″W / 40.744722°N 73.986667°W / 40.744722; -73.986667 (Prince George Hotel)
110 Public Baths April 23, 1980
Asser Levy Pl. and E. 23rd St.
40°44′09″N 73°58′35″W / 40.735833°N 73.976389°W / 40.735833; -73.976389 (Public Baths)
Kips Bay
111 Public School 35 October 27, 1980
931 1st Ave.
40°45′17″N 73°57′57″W / 40.754722°N 73.965833°W / 40.754722; -73.965833 (Public School 35)
Turtle Bay
112 Queensboro Bridge December 20, 1978
59th St.
40°45′26″N 73°57′22″W / 40.757222°N 73.956111°W / 40.757222; -73.956111 (Queensboro Bridge)
Lenox Hill and Roosevelt Island Also listed in the Borough of Queens
113 R & S Building September 22, 1986
492 First Ave.
40°44′26″N 73°58′31″W / 40.740556°N 73.975278°W / 40.740556; -73.975278 (R & S Building)
Kips Bay
114 Racquet and Tennis Club Building July 13, 1983
370 Park Ave.
40°45′31″N 73°58′25″W / 40.758611°N 73.973611°W / 40.758611; -73.973611 (Racquet and Tennis Club Building)
Midtown Manhattan Ornate private club; now a foil
115 Radio City Music Hall May 8, 1978
1260 Avenue of the Americas (50th and 6th)
40°45′36″N 73°59′03″W / 40.76°N 73.984167°W / 40.76; -73.984167 (Radio City Music Hall)
Rockefeller Center Major live-entertainment venue since the 1930s; home to the Rockettes
116 Residences at 5-15 West 54th Street January 4, 1990
5–15 W. 54th St.
40°45′42″N 73°58′35″W / 40.761667°N 73.976389°W / 40.761667; -73.976389 (Residences at 5-15 West 54th Street)
Midtown Manhattan Consists of five residences at 5, 7, 9–11, 13, and 15 West 54th Street.
117 Rockefeller Center December 23, 1987
Bounded by Fifth Ave., W. Forty-eighth St., Seventh Ave., & W. Fifty-first St.
40°45′32″N 73°58′46″W / 40.758889°N 73.979444°W / 40.758889; -73.979444 (Rockefeller Center)
Midtown Manhattan Trend-setting urban office complex. Home to many NBC broadcasts. Setting and location for 2000s sitcom 30 Rock.
118 Bayard Rustin Residence March 8, 2016
340 W. 28th St. (Building 7B Apartment 9J of Penn South)
40°44′56″N 73°59′52″W / 40.74887°N 73.99772°W / 40.74887; -73.99772 (Bayard Rustin Residence)
Chelsea Bayard Rustin, a civil rights activist who later came out and fought for gay rights, lived in this apartment building for most of his later life.
119 St. Bartholomew's Church and Community House April 16, 1980
109 E. 50th St.
40°45′26″N 73°58′25″W / 40.757222°N 73.973611°W / 40.757222; -73.973611 (St. Bartholomew's Church and Community House)
Midtown East
120 St. George's Episcopal Church December 8, 1976
E. 16th St. and Rutherford Place
40°44′04″N 73°59′06″W / 40.734444°N 73.985°W / 40.734444; -73.985 (St. George's Episcopal Church)
Stuyvesant Square
121 St. Luke's Evangelical Lutheran Church June 1, 2007
208 W. 46th St.
40°45′35″N 73°59′21″W / 40.759722°N 73.989167°W / 40.759722; -73.989167 (St. Luke's Evangelical Lutheran Church)
122 St. Patrick's Cathedral Complex December 8, 1976
Bounded by 5th and Madison Aves., E. 50th and E. 51st Sts.
40°45′31″N 73°58′35″W / 40.758611°N 73.976389°W / 40.758611; -73.976389 (St. Patrick's Cathedral Complex)
123 St. Thomas Church and Parish House April 9, 1980
1–3 W. 53rd St.
40°45′39″N 73°58′36″W / 40.760833°N 73.976667°W / 40.760833; -73.976667 (St. Thomas Church and Parish House)
124 Salmagundi Club July 25, 1974
47 5th Ave.
40°44′03″N 73°59′00″W / 40.734167°N 73.983333°W / 40.734167; -73.983333 (Salmagundi Club)
Kips Bay
125 Margaret Sanger Clinic September 14, 1993
17 W. 16th St.
40°44′17″N 73°59′39″W / 40.738056°N 73.994167°W / 40.738056; -73.994167 (Margaret Sanger Clinic)
Chelsea Workplace of birth control pioneer Margaret Sanger.
126 Scribner Building May 6, 1980
153–157 5th Ave.
40°44′25″N 73°59′27″W / 40.740278°N 73.990833°W / 40.740278; -73.990833 (Scribner Building)
Flatiron District Also known as the Old Scribner Building; not to be confused with the Charles Scribner Building.
127 Seagram Building February 24, 2006
375 Park Ave.
40°45′30″N 73°58′22″W / 40.758333°N 73.972778°W / 40.758333; -73.972778 (Seagram Building)
Midtown Manhattan Milestone modernist building by Ludwig Mies van der Rohe
128 Seville Hotel February 24, 2005
22 East 29th St.
40°44′40″N 73°59′10″W / 40.744444°N 73.986111°W / 40.744444; -73.986111 (Seville Hotel)
129 Sidewalk Clock at 200 5th Avenue, Manhattan April 18, 1985
200 5th Ave.
40°44′30″N 73°59′24″W / 40.741667°N 73.99°W / 40.741667; -73.99 (Sidewalk Clock at 200 5th Avenue, Manhattan)
Flatiron District
130 Sidewalk Clock at 519 3rd Avenue, Manhattan April 18, 1985
519 3rd Ave.
40°44′46″N 73°58′41″W / 40.746111°N 73.978056°W / 40.746111; -73.978056 (Sidewalk Clock at 519 3rd Avenue, Manhattan)
Kips Bay No longer at this location.
131 Sidewalk Clock at 522 5th Avenue, Manhattan April 18, 1985
522 5th Ave.
40°45′16″N 73°58′50″W / 40.754444°N 73.980556°W / 40.754444; -73.980556 (Sidewalk Clock at 522 5th Avenue, Manhattan)
Midtown Manhattan
132 Sniffen Court Historic District November 28, 1973
E. 36th St., between Lexington and 3rd Aves.
40°44′51″N 73°58′40″W / 40.7475°N 73.977778°W / 40.7475; -73.977778 (Sniffen Court Historic District)
Murray Hill
133 Society for the Lying-In Hospital September 1, 1983
305 2nd Ave.
40°44′05″N 73°59′03″W / 40.734722°N 73.98416°W / 40.734722; -73.98416 (Society for the Lying-In Hospital)
Gramercy Park
134 Stuyvesant Square Historic District November 21, 1980
Roughly bounded by Nathan D. Perleman Pl., 3rd Ave., E. 18th and E. 15th Sts.
40°44′02″N 73°59′06″W / 40.733889°N 73.985°W / 40.733889; -73.985 (Stuyvesant Square Historic District)
Gramercy Park
135 Substation 13 February 9, 2006
225 W 53rd St.
40°45′50″N 73°59′03″W / 40.763889°N 73.984167°W / 40.763889; -73.984167 (Substation 13)
Midtown Manhattan
136 Substation 42 February 9, 2006
154 E. 57th St.
40°45′37″N 73°58′07″W / 40.760278°N 73.9685°W / 40.760278; -73.9685 (Substation 42)
Midtown Manhattan
137 Ed Sullivan Theater November 17, 1997
1697–1699 Broadway
40°45′49″N 73°59′00″W / 40.763611°N 73.983333°W / 40.763611; -73.983333 (Ed Sullivan Theater)
Theater District
138 Sutton Place Historic District September 12, 1985
1–21 Sutton Pl. & 4–16 Sutton Sq.
40°45′28″N 73°57′57″W / 40.757778°N 73.965833°W / 40.757778; -73.965833 (Sutton Place Historic District)
Sutton Place
139 Theodore Roosevelt Birthplace National Historic Site October 15, 1966
28 E. 20th St.
40°44′18″N 73°59′21″W / 40.738333°N 73.989167°W / 40.738333; -73.989167 (Theodore Roosevelt Birthplace National Historic Site)
Flatiron District Birthplace of Theodore Roosevelt
140 Tiffany and Company Building June 2, 1978
401 5th Ave., at 36th
40°45′00″N 73°58′53″W / 40.75°N 73.981389°W / 40.75; -73.981389 (Tiffany and Company Building)
Midtown Manhattan Former Tiffany's building
141 Samuel J. Tilden House May 11, 1976
14–15 Gramercy Park South
40°44′15″N 73°59′14″W / 40.7375°N 73.987222°W / 40.7375; -73.987222 (Samuel J. Tilden House)
Gramercy Park Also known as the "National Arts Club". Home of Samuel J. Tilden, winner of the popular vote in the disputed 1876 presidential election
142 Times Square Hotel May 4, 1995
255 W. 43rd St.
40°45′28″N 73°59′22″W / 40.757778°N 73.989444°W / 40.757778; -73.989444 (Times Square Hotel)
Times Square
143 Times Square-42nd Street Subway Station September 17, 2004
Junction of W. 42nd St. and Broadway/7th Ave.
40°45′19″N 73°59′15″W / 40.755278°N 73.987500°W / 40.755278; -73.987500 (Times Square-42nd Street Subway Station)
Times Square Subway station (1, 2, 3, 7, <7>, N, Q, R, W, and S trains)
144 Town Hall April 23, 1980
113–123 W. 43rd St.
40°45′21″N 73°59′05″W / 40.755833°N 73.984722°W / 40.755833; -73.984722 (Town Hall)
Midtown Manhattan Public-affairs broadcasting developed here with the "America's Town Hall of the Air" radio program in the 1930s; the building has since hosted many major artists as a concert venue.
145 Trinity Chapel Complex December 16, 1982
15 W. 25th St.
40°44′37″N 73°59′25″W / 40.743611°N 73.990278°W / 40.743611; -73.990278 (Trinity Chapel Complex)
146 Tudor City Historic District September 11, 1986
Roughly bounded by Forty-third St., First Ave., Forty-first St., and Second Ave.
40°44′56″N 73°58′17″W / 40.748889°N 73.971389°W / 40.748889; -73.971389 (Tudor City Historic District)
Turtle Bay
147 Turtle Bay Gardens Historic District July 21, 1983
226–246 E. 49th St. and 227–245 E. 48th St.
40°45′15″N 73°58′13″W / 40.754167°N 73.970278°W / 40.754167; -73.970278 (Turtle Bay Gardens Historic District)
Turtle Bay
148 U.S. General Post Office January 29, 1973
8th Ave. between 31st and 33rd Sts.
40°45′37″N 73°59′03″W / 40.760278°N 73.984167°W / 40.760278; -73.984167 (U.S. General Post Office)
Midtown Manhattan
149 Union Square December 9, 1997
Bounded by E. 14th & E. 17th Sts. and Union Square East & Union Square West
40°44′10″N 73°59′25″W / 40.736111°N 73.990278°W / 40.736111; -73.990278 (Union Square)
Union Square Site of many political demonstrations over the years
150 United Charities Building Complex March 28, 1985
105 E. 22nd St., 289 Park Ave. S., and 111–113 E. 22nd St.
40°44′22″N 73°59′14″W / 40.739444°N 73.987222°W / 40.739444; -73.987222 (United Charities Building Complex)
Gramercy Park
151 University Club April 16, 1980
1 W. 54th St.
40°45′40″N 73°58′34″W / 40.761111°N 73.976111°W / 40.761111; -73.976111 (University Club)
Midtown Manhattan
152 US Post Office-Madison Square Station May 11, 1989
149–153 E. 23rd St.
40°44′23″N 73°59′04″W / 40.739722°N 73.984444°W / 40.739722; -73.984444 (US Post Office-Madison Square Station)
Gramercy Park
153 US Post Office-Old Chelsea Station May 11, 1989
217 W. 18th St.
40°46′07″N 73°59′56″W / 40.768611°N 73.998889°W / 40.768611; -73.998889 (US Post Office-Old Chelsea Station)
154 USS INTREPID (aircraft carrier) January 14, 1986
Intrepid Sq., 45th and West Side Highway
40°45′51″N 73°59′59″W / 40.764167°N 73.999722°W / 40.764167; -73.999722 (USS INTREPID (aircraft carrier))
Hell's Kitchen
155 Villard Houses September 2, 1975
29½ E. 50th St., 24–26 E. 51st St., and 451, 453, 455, and 457 Madison Ave.
40°45′29″N 73°58′31″W / 40.758056°N 73.975278°W / 40.758056; -73.975278 (Villard Houses)
Midtown Manhattan
156 Webster Hotel September 7, 1984
40 W. 45th St.
40°45′22″N 73°58′56″W / 40.756111°N 73.982222°W / 40.756111; -73.982222 (Webster Hotel)
Midtown Manhattan
157 West 28th Street Subway Station (Dual System IRT) March 30, 2005
Seventh Ave. bet. West 26th and West 29th Sts.
40°44′48″N 73°59′39″W / 40.746667°N 73.994167°W / 40.746667; -73.994167 (West 28th Street Subway Station (Dual System IRT))
Chelsea Subway station (1 and 2 trains)
158 The Wilbraham May 4, 2018
284 5th Ave.
40°44′46″N 73°59′12″W / 40.7462°N 73.9867°W / 40.7462; -73.9867 (The Wilbraham)
NoMad 1888 Romanesque Revival building originally built as bachelor flats, complete with a communal dining room
159 R. C. Williams Warehouse February 24, 2005
259–273 Tenth Ave.
40°44′57″N 74°00′14″W / 40.749167°N 74.003889°W / 40.749167; -74.003889 (R. C. Williams Warehouse)
160 Women's Liberation Center May 17, 2021
243 West 20th St.
40°44′35″N 73°59′56″W / 40.7430°N 73.9988°W / 40.7430; -73.9988 (Women's Liberation Center)
161 Women's National Republican Club February 27, 2013
3 W. 51st St.
40°45′34″N 73°58′39″W / 40.759562°N 73.97744°W / 40.759562; -73.97744 (Women's National Republican Club)
Midtown Manhattan Georgian home of early women's political organization



  1. ^ The latitude and longitude information provided in this table was derived originally from the National Register Information System, which has been found to be fairly accurate for about 99% of listings. Some locations in this table may have been corrected to current GPS standards.
  2. ^ National Park Service, United States Department of the Interior, "National Register of Historic Places: Weekly List Actions", retrieved July 2, 2021.
  3. ^ Numbers represent an alphabetical ordering by significant words. Various colorings, defined here, differentiate National Historic Landmarks and historic districts from other NRHP buildings, structures, sites or objects.
  4. ^ The eight-digit number below each date is the number assigned to each location in the National Register Information System database, which can be viewed by clicking the number.
  5. ^ Circle Line X National Park Service
  6. ^ "Swedish Seaman's Church - New York City". Retrieved 2017-07-26.

This page was last edited on 31 May 2021, at 13:52