
Order statistic

From Wikipedia, the free encyclopedia

Probability density functions of the order statistics for a sample of size n = 5 from an exponential distribution with unit scale parameter.

In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value.[1] Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.

Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.

When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.



Notation and examples

For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are

6, 9, 3, 8,

they will usually be denoted

x1 = 6,   x2 = 9,   x3 = 3,   x4 = 8,

where the subscript i in xi indicates simply the order in which the observations were recorded and is usually assumed not to be significant. A case when the order is significant is when the observations are part of a time series.

The order statistics would be denoted

x(1) = 3,   x(2) = 6,   x(3) = 8,   x(4) = 9,

where the subscript (i) enclosed in parentheses indicates the ith order statistic of the sample.

The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,

X(1) = min{X1, ..., Xn},

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size n, the nth order statistic (or largest order statistic) is the maximum, that is,

X(n) = max{X1, ..., Xn}.

The sample range is the difference between the maximum and minimum. It is a function of the order statistics:

Range{X1, ..., Xn} = X(n) − X(1).

A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.

The sample median may or may not be an order statistic, since there is a single middle value only when the number n of observations is odd. More precisely, if n = 2m + 1 for some integer m, then the sample median is X(m+1) and so is an order statistic. On the other hand, when n is even, n = 2m and there are two middle values, X(m) and X(m+1), and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
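
As a quick illustration of the notation above, the following small Python sketch (illustrative, not part of the original text) computes the order statistics, minimum, maximum, range, and the usual even-n sample median of the observed sample 6, 9, 3, 8:

    sample = [6, 9, 3, 8]
    order_stats = sorted(sample)            # x_(1), x_(2), x_(3), x_(4) = 3, 6, 8, 9
    x_min, x_max = order_stats[0], order_stats[-1]
    sample_range = x_max - x_min            # 9 - 3 = 6
    m = len(sample) // 2
    sample_median = (order_stats[m - 1] + order_stats[m]) / 2   # (6 + 8) / 2 = 7.0
    print(order_stats, x_min, x_max, sample_range, sample_median)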

Probabilistic analysis

Given any random variables X1, X2, ..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order.

When the random variables X1, X2, ..., Xn form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables X1, ..., Xn can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.

From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.

Cumulative distribution function of order statistics

For a random sample as above, with cumulative distribution function FX(x), the order statistics for that sample have distributions as follows (where r specifies which order statistic):

FX(r)(x) = P(X(r) ≤ x) = Σ_{j=r}^{n} C(n, j) [FX(x)]^j [1 − FX(x)]^(n−j),

where C(n, j) denotes the binomial coefficient.

This follows the pattern of a binomial distribution: it can be derived by treating each observation that is less than or equal to x as a success, and each observation greater than x as a failure.[2]

Moreover, there are two special cases whose CDFs are easy to compute:

FX(n)(x) = P(max{X1, ..., Xn} ≤ x) = [FX(x)]^n,

FX(1)(x) = P(min{X1, ..., Xn} ≤ x) = 1 − [1 − FX(x)]^n.

Both can be derived by careful consideration of probabilities: the maximum is at most x exactly when every observation is at most x, and the minimum exceeds x exactly when every observation exceeds x.
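
The binomial-sum CDF above is easy to evaluate numerically. The following sketch (illustrative, not from the source) uses SciPy with a standard normal parent distribution, which is an assumption made here only for the example, and checks the two special cases:

    from scipy.stats import binom, norm

    def order_statistic_cdf(x, n, r, cdf=norm.cdf):
        """P(X_(r) <= x) = sum_{j=r..n} C(n, j) F(x)^j (1 - F(x))^(n - j)."""
        p = cdf(x)
        return sum(binom.pmf(j, n, p) for j in range(r, n + 1))

    # Sanity checks against the two special cases above:
    n, x = 5, 0.3
    p = norm.cdf(x)
    assert abs(order_statistic_cdf(x, n, n) - p**n) < 1e-12              # maximum
    assert abs(order_statistic_cdf(x, n, 1) - (1 - (1 - p)**n)) < 1e-12  # minimum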

Probability distributions of order statistics

Order statistics sampled from a uniform distribution

In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the Beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.

We assume throughout this section that X1, X2, ..., Xn is a random sample drawn from a continuous distribution with cdf FX. Denoting Ui = FX(Xi) we obtain the corresponding random sample U1, ..., Un from the standard uniform distribution. Note that the order statistics also satisfy U(i) = FX(X(i)).

The probability density function of the order statistic U(k) is equal to[3]

fU(k)(u) = n! / ((k − 1)!(n − k)!) · u^(k−1) · (1 − u)^(n−k)   for 0 ≤ u ≤ 1,

that is, the kth order statistic of the uniform distribution is a beta-distributed random variable,[3][4]

U(k) ~ Beta(k, n + 1 − k).

The proof of these statements is as follows. For U(k) to be between u and u + du, it is necessary that exactly k − 1 elements of the sample are smaller than u, and that at least one is between u and u + du. The probability that more than one is in this latter interval is already O(du²), so we have to calculate the probability that exactly k − 1, 1 and n − k observations fall in the intervals (0, u), (u, u + du) and (u + du, 1) respectively. This equals (refer to multinomial distribution for details)

n! / ((k − 1)! 1! (n − k)!) · u^(k−1) · du · (1 − u)^(n−k),

and the result follows.

The mean of this distribution is k / (n + 1).
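
A quick simulation (illustrative, not part of the original text) can confirm the beta marginal: sort uniform samples, pick the kth value, and compare its empirical mean and variance with the Beta(k, n + 1 − k) values:

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, trials = 5, 2, 200_000

    u = np.sort(rng.uniform(size=(trials, n)), axis=1)
    kth = u[:, k - 1]                      # kth order statistic (1-indexed)

    print(kth.mean(), k / (n + 1))         # empirical mean vs. k/(n+1) = 1/3
    print(kth.var(), k * (n + 1 - k) / ((n + 1)**2 * (n + 2)))  # Beta variance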

The joint distribution of the order statistics of the uniform distribution

Similarly, for i < j, the joint probability density function of the two order statistics U(i) < U(j) can be shown to be

fU(i),U(j)(u, v) = n! / ((i − 1)!(j − i − 1)!(n − j)!) · u^(i−1) · (v − u)^(j−i−1) · (1 − v)^(n−j)   for 0 ≤ u < v ≤ 1,

which is (up to terms of higher order than du dv) the probability that i − 1, 1, j − 1 − i, 1 and n − j sample elements fall in the intervals (0, u), (u, u + du), (u + du, v), (v, v + dv), (v + dv, 1), respectively.

One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant:

fU(1), ..., U(n)(u1, u2, ..., un) = n!   for 0 ≤ u1 ≤ u2 ≤ ... ≤ un ≤ 1.

One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/n! is the volume of the region 0 ≤ u1 ≤ u2 ≤ ... ≤ un ≤ 1.
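
The volume fact has a simple simulation check (a sketch, not from the source): the probability that n independent uniforms happen to come out already sorted is 1/n!.

    from math import factorial
    import numpy as np

    rng = np.random.default_rng(1)
    n, trials = 4, 500_000
    u = rng.uniform(size=(trials, n))
    frac_sorted = np.all(np.diff(u, axis=1) > 0, axis=1).mean()
    print(frac_sorted, 1 / factorial(n))   # both about 0.0417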


Using the above formulas, one can derive the distribution of the range of the order statistics, that is the distribution of U(n) − U(1), i.e. maximum minus the minimum. More generally, for i < j, the difference U(j) − U(i) also has a Beta distribution:

U(j) − U(i) ~ Beta(j − i, n − (j − i) + 1).

From these formulas we can derive the covariance between two order statistics:

Cov(U(i), U(j)) = i(n − j + 1) / ((n + 1)²(n + 2))   for i ≤ j.

The formula follows from noting that

Var(U(j) − U(i)) = Var(U(i)) + Var(U(j)) − 2 Cov(U(i), U(j))

and comparing that with

Var(U) = (j − i)(n − (j − i) + 1) / ((n + 1)²(n + 2)),

where U ~ Beta(j − i, n − (j − i) + 1), which is the actual distribution of the difference.
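
A short simulation sketch (illustrative, not part of the original text) checking the covariance formula above:

    import numpy as np

    rng = np.random.default_rng(2)
    n, i, j, trials = 6, 2, 5, 500_000

    u = np.sort(rng.uniform(size=(trials, n)), axis=1)
    empirical = np.cov(u[:, i - 1], u[:, j - 1])[0, 1]
    theoretical = i * (n - j + 1) / ((n + 1)**2 * (n + 2))
    print(empirical, theoretical)          # the two values should be close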

Order statistics sampled from an exponential distribution

For random samples from an exponential distribution with parameter λ, the order statistics X(i) for i = 1, 2, 3, ..., n each have distribution

X(i) =d (1/λ) · ( Z1/n + Z2/(n − 1) + ... + Zi/(n − i + 1) )   (equality in distribution),

where the Zj are iid standard exponential random variables (i.e. with rate parameter 1). This result was first published by Alfréd Rényi.[5][6]
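
The representation is easy to check by simulation (a sketch, not from the source): sort exponential samples directly, build the weighted sum of standard exponentials, and compare moments.

    import numpy as np

    rng = np.random.default_rng(3)
    lam, n, i, trials = 2.0, 5, 3, 300_000

    direct = np.sort(rng.exponential(scale=1 / lam, size=(trials, n)), axis=1)[:, i - 1]

    z = rng.exponential(size=(trials, i))             # iid standard exponentials
    weights = 1.0 / (n - np.arange(1, i + 1) + 1)     # 1/n, 1/(n-1), ..., 1/(n-i+1)
    representation = (z * weights).sum(axis=1) / lam

    print(direct.mean(), representation.mean())       # should agree closely
    print(direct.var(), representation.var())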

Order statistics sampled from an Erlang distribution

The Laplace transforms of order statistics sampled from an Erlang distribution may be obtained via a path-counting method[clarification needed].[7]

The joint distribution of the order statistics of an absolutely continuous distribution

If FX is absolutely continuous, it has a density fX such that dFX(x) = fX(x) dx, and we can use the substitutions

u = FX(x)

and

du = fX(x) dx

to derive the following probability density functions for the order statistics of a sample of size n drawn from the distribution of X:

fX(k)(x) = n! / ((k − 1)!(n − k)!) · [FX(x)]^(k−1) · [1 − FX(x)]^(n−k) · fX(x),

fX(j),X(k)(x, y) = n! / ((j − 1)!(k − j − 1)!(n − k)!) · [FX(x)]^(j−1) · [FX(y) − FX(x)]^(k−j−1) · [1 − FX(y)]^(n−k) · fX(x) fX(y),   where x ≤ y and j < k,

fX(1), ..., X(n)(x1, ..., xn) = n! · fX(x1) fX(x2) ⋯ fX(xn),   where x1 ≤ x2 ≤ ... ≤ xn.
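
The marginal density formula is straightforward to evaluate. The sketch below (illustrative, not from the source) uses a standard normal parent, which is an assumption made only for the example, and checks that the density integrates to 1:

    from math import comb
    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    def order_statistic_pdf(x, n, k, cdf=norm.cdf, pdf=norm.pdf):
        """f_X(k)(x) = n!/((k-1)!(n-k)!) F(x)^(k-1) (1 - F(x))^(n-k) f(x)."""
        c = k * comb(n, k)                 # n!/((k-1)!(n-k)!) = k * C(n, k)
        F = cdf(x)
        return c * F**(k - 1) * (1 - F)**(n - k) * pdf(x)

    n, k = 7, 3
    total, _ = quad(order_statistic_pdf, -np.inf, np.inf, args=(n, k))
    print(total)                           # very close to 1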

Application: confidence intervals for quantiles

An interesting question is how well the order statistics perform as estimators of the quantiles of the underlying distribution.

A small-sample-size example

The simplest case to consider is how well the sample median estimates the population median.

As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. However, we know from the preceding discussion that the probability that this interval actually contains the population median m is

P(X(3) < m < X(4)) = C(6, 3) · (1/2)^6 = 5/16 ≈ 31%.

Although the sample median is probably among the best distribution-independent point estimates of the population median, what this example illustrates is that it is not a particularly good one in absolute terms. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability

P(X(2) < m < X(5)) = [C(6, 2) + C(6, 3) + C(6, 4)] · (1/2)^6 = 25/32 ≈ 78%.

With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median.
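
These coverage probabilities follow from the binomial argument used earlier, and a small helper (illustrative, not part of the original text) reproduces them:

    from math import comb

    def median_coverage(n, j, k):
        """P(X_(j) < population median < X_(k)) for a continuous parent."""
        return sum(comb(n, i) for i in range(j, k)) / 2**n

    print(median_coverage(6, 3, 4))   # 5/16  = 0.3125
    print(median_coverage(6, 2, 5))   # 25/32 = 0.78125
    print(median_coverage(6, 1, 6))   # 31/32 = 0.96875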

Large sample sizes

For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by

U(⌈np⌉) ≈ N( p, p(1 − p)/n )   for large n.

For a general distribution F with a continuous non-zero density at F^−1(p), a similar asymptotic normality applies:

X(⌈np⌉) ≈ N( F^−1(p), p(1 − p) / (n [f(F^−1(p))]²) )   for large n,

where f is the density function, and F^−1 is the quantile function associated with F. One of the first people to mention and prove this result was Frederick Mosteller in his seminal paper in 1946.[8] Further research in the 1960s led to the Bahadur representation, which provides information about the error bounds.

An interesting observation can be made in the case where the distribution is symmetric and the population median equals the population mean. In this case, the sample mean, by the central limit theorem, is also asymptotically normally distributed, but with variance σ²/n instead. This asymptotic analysis suggests that the mean outperforms the median in cases of low kurtosis, and vice versa. For example, the median achieves better confidence intervals for the Laplace distribution, while the mean performs better for X that are normally distributed.
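
Setting p = 1/2 in the formula above gives the sample median an asymptotic variance of 1/(4n f(m)²), which can be compared with the sample mean's σ²/n. The sketch below (illustrative, not from the source) does this for unit-variance normal and Laplace parents:

    import numpy as np
    from scipy.stats import norm, laplace

    n = 1000
    for name, dist in [("normal", norm()), ("laplace", laplace(scale=1 / np.sqrt(2)))]:
        m = dist.median()
        var_median = 1 / (4 * n * dist.pdf(m)**2)   # asymptotic variance of the median
        var_mean = dist.var() / n                   # sigma^2 / n (both parents have variance 1)
        print(name, var_median, var_mean)
    # The mean wins for the normal parent (ratio pi/2); the median wins for the Laplace.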

Proof

It can be shown that

U(⌈np⌉) =d X / (X + Y)   (equality in distribution),

where

X = Z1 + Z2 + ... + Z⌈np⌉   and   Y = Z⌈np⌉+1 + Z⌈np⌉+2 + ... + Zn+1,

with Zi being independent identically distributed exponential random variables with rate 1. Since X/n and Y/n are asymptotically normally distributed by the CLT, our results follow by application of the delta method.
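
A quick simulation sketch (illustrative, not part of the original text) of the representation used in the proof, comparing X/(X + Y) with the order statistic obtained by direct sorting:

    import numpy as np

    rng = np.random.default_rng(4)
    n, p, trials = 9, 0.5, 200_000
    k = int(np.ceil(n * p))

    x = rng.gamma(shape=k, size=trials)            # sum of k iid rate-1 exponentials
    y = rng.gamma(shape=n + 1 - k, size=trials)    # sum of the remaining n + 1 - k
    ratio = x / (x + y)

    u_k = np.sort(rng.uniform(size=(trials, n)), axis=1)[:, k - 1]
    print(ratio.mean(), u_k.mean())                # both near k/(n+1)
    print(ratio.var(), u_k.var())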

Application: non-parametric density estimation

Moments of the distribution for the first order statistic can be used to develop a non-parametric density estimator.[9] Suppose we want to estimate the density fX at a point x*. Consider the random variables Yi = |Xi − x*|, which are i.i.d. with cumulative distribution function FY(y) = FX(x* + y) − FX(x* − y) and density fY(y) = fX(x* + y) + fX(x* − y). In particular, fY(0) = 2 fX(x*).

The expected value of the first order statistic Y(1) based on a sample of n of the Yi is

E(Y(1)) = ∫_0^1 (1 − z)^n Q′(z) dz,

where Q = FY^−1 is the quantile function associated with the distribution FY and Q′ is its derivative. To leading order for large n this is approximately 1 / ((n + 1) fY(0)) = 1 / (2(n + 1) fX(x*)). This equation, in combination with a jackknifing technique, becomes the basis for the following density estimation algorithm:

  Input: n samples, m points of density evaluation, tuning parameter a (usually 1/3).
  Output: the estimated density at the m points of evaluation.
  1: Set k, the subset size, from n and the tuning parameter a
  2: Set M ← ⌊n / k⌋, the number of subsets
  3: Create an M × k matrix which holds the M subsets with k samples each.
  4: Create a vector of length m to hold the density evaluations.
  5: for each evaluation point x_i, i = 1, ..., m do
  6:     for each subset j = 1, ..., M do
  7:         Find the nearest distance d_ij to the current point x_i within the jth subset
  8:     end for
  9:     Compute the subset average of distances to x_i:  d_i ← (1/M) Σ_j d_ij
 10:     Compute the density estimate at x_i from d_i and the subset size k
 11: end for
 12: return the vector of density estimates

In contrast to the bandwidth/length based tuning parameters for histogram and kernel based approaches, the tuning parameter for the order statistic based density estimator is the size of the sample subsets. Such an estimator is more robust than histogram and kernel based approaches; for example, densities like the Cauchy distribution (which lack finite moments) can be inferred without the need for specialized modifications such as IQR based bandwidths. This is because the first moment of the order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true.[10]
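
The sketch below is a rough Python implementation of the estimator described above, not the authors' reference code. The conversion from the average nearest distance d to a density value uses the first-order-statistic approximation E(Y(1)) ≈ 1/(2(k + 1) fX(x*)) discussed earlier, and the default number of subsets is an assumption made here only for illustration:

    import numpy as np

    def order_statistic_density(samples, eval_points, n_subsets=None, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(samples, dtype=float)
        n = x.size
        M = n_subsets if n_subsets else max(1, round(n ** (2 / 3)))  # assumed default
        k = n // M                                     # samples per subset
        subsets = rng.permutation(x)[: M * k].reshape(M, k)

        estimates = []
        for x_star in eval_points:
            # nearest distance to x_star within each subset, averaged over subsets
            d = np.abs(subsets - x_star).min(axis=1).mean()
            estimates.append(1.0 / (2 * (k + 1) * d))  # assumed conversion formula
        return np.array(estimates)

    # Example: estimate a standard Cauchy density at a few points and compare with
    # the true values 1/(pi (1 + x^2)).
    rng = np.random.default_rng(5)
    data = rng.standard_cauchy(10_000)
    points = [0.0, 1.0, 3.0]
    print(order_statistic_density(data, points, rng=rng))
    print([1 / (np.pi * (1 + t**2)) for t in points])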

Dealing with discrete variables

Suppose X1, X2, ..., Xn are i.i.d. random variables from a discrete distribution with cumulative distribution function F(x) and probability mass function f(x). To find the probabilities of the order statistics, three values are first needed, namely

p1 = P(X < x) = F(x) − f(x),   p2 = P(X = x) = f(x),   p3 = P(X > x) = 1 − F(x).

The cumulative distribution function of the order statistic X(k) can be computed by noting that X(k) ≤ x exactly when at most n − k of the observations exceed x:

P(X(k) ≤ x) = Σ_{j=0}^{n−k} C(n, j) · p3^j · (p1 + p2)^(n−j).

Similarly, P(X(k) < x) is given by

P(X(k) < x) = Σ_{j=0}^{n−k} C(n, j) · (p2 + p3)^j · p1^(n−j).

Note that the probability mass function of X(k) is just the difference of these values, that is to say

P(X(k) = x) = P(X(k) ≤ x) − P(X(k) < x) = Σ_{j=0}^{n−k} C(n, j) · [ p3^j (p1 + p2)^(n−j) − (p2 + p3)^j p1^(n−j) ].
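
A small sketch (illustrative, not from the source) evaluating this pmf exactly for the maximum of three rolls of a fair die, using rational arithmetic:

    from math import comb
    from fractions import Fraction

    def discrete_order_statistic_pmf(x, n, k, pmf, support):
        """P(X_(k) = x) using p1 = P(X < x), p2 = P(X = x), p3 = P(X > x)."""
        p1 = sum(pmf(v) for v in support if v < x)
        p2 = pmf(x)
        p3 = 1 - p1 - p2
        le = sum(comb(n, j) * p3**j * (p1 + p2)**(n - j) for j in range(n - k + 1))
        lt = sum(comb(n, j) * (p2 + p3)**j * p1**(n - j) for j in range(n - k + 1))
        return le - lt

    def die(v):
        return Fraction(1, 6)

    support = range(1, 7)
    n, k = 3, 3                            # the maximum of three dice
    print([discrete_order_statistic_pmf(x, n, k, die, support) for x in support])
    # Check: P(max = x) should equal (x/6)^3 - ((x-1)/6)^3.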

Computing order statistics

The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log n). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n).
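
A minimal quickselect sketch (illustrative, not part of the original text) that finds the kth smallest element in expected linear time:

    import random

    def kth_smallest(values, k):
        """Return the kth smallest element of `values` (1 <= k <= len(values))."""
        pivot = random.choice(values)
        below = [v for v in values if v < pivot]
        equal = [v for v in values if v == pivot]
        above = [v for v in values if v > pivot]
        if k <= len(below):
            return kth_smallest(below, k)
        if k <= len(below) + len(equal):
            return pivot
        return kth_smallest(above, k - len(below) - len(equal))

    print(kth_smallest([6, 9, 3, 8], 2))   # 6, the 2nd order statistic of the earlier sample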

See also

Examples of order statistics

References

  1. ^ David, H. A.; Nagaraja, H. N. (2003). Order Statistics. Wiley Series in Probability and Statistics. doi:10.1002/0471722162. ISBN 9780471722168.
  2. ^ Casella, George; Berger, Roger. Statistical Inference (2nd ed.). Cengage Learning. p. 228. ISBN 9788131503942.
  3. ^ a b Gentle, James E. (2009), Computational Statistics, Springer, p. 63, ISBN 9780387981444.
  4. ^ Jones, M. C. (2009), "Kumaraswamy's distribution: A beta-type distribution with some tractability advantages", Statistical Methodology, 6 (1): 70–81, doi:10.1016/j.stamet.2008.04.001, As is well known, the beta distribution is the distribution of the m’th order statistic from a random sample of size n from the uniform distribution (on (0,1)).
  5. ^ David, H. A.; Nagaraja, H. N. (2003), "Chapter 2. Basic Distribution Theory", Order Statistics, Wiley Series in Probability and Statistics, p. 9, doi:10.1002/0471722162.ch2, ISBN 9780471722168
  6. ^ Rényi, Alfréd (1953). "On the theory of order statistics" (PDF). Acta Mathematica Hungarica. 4 (3): 191–231. doi:10.1007/BF02127580. Archived from the original (PDF) on 2016-10-09.
  7. ^ Hlynka, M.; Brill, P. H.; Horn, W. (2010). "A method for obtaining Laplace transforms of order statistics of Erlang random variables". Statistics & Probability Letters. 80: 9–18. doi:10.1016/j.spl.2009.09.006.
  8. ^ Mosteller, Frederick (1946). "On Some Useful "Inefficient" Statistics". Annals of Mathematical Statistics. 17 (4): 377–408. doi:10.1214/aoms/1177730881. Retrieved February 26, 2015.
  9. ^ Garg, Vikram V.; Tenorio, Luis; Willcox, Karen (2017). "Minimum local distance density estimation". Communications in Statistics-Theory and Methods. 46 (1): 148–164. arXiv:1412.2851. doi:10.1080/03610926.2014.988260.
  10. ^ David, H. A.; Nagaraja, H. N. (2003), "Chapter 3. Expected Values and Moments", Order Statistics, Wiley Series in Probability and Statistics, p. 34, doi:10.1002/0471722162.ch3, ISBN 9780471722168
  • Serfling, R. J. (1980). Approximation Theorems of Mathematical Statistics. New York: Wiley. ISBN 978-0-471-02403-3.
