In statistics, Spearman's rank correlation coefficient or Spearman's rho, named after Charles Spearman and often denoted by the Greek letter ρ (rho) or as r_s, is a nonparametric measure of rank correlation (statistical dependence between the rankings of two variables). It assesses how well the relationship between two variables can be described using a monotonic function.
The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.
Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of −1) rank between the two variables.
Spearman's coefficient is appropriate for both continuous and discrete ordinal variables.^{[1]}^{[2]} Both Spearman's ρ and Kendall's τ can be formulated as special cases of a more general correlation coefficient.
Definition and calculation
The Spearman correlation coefficient is defined as the Pearson correlation coefficient between the rank variables.^{[3]}
For a sample of size n, the n raw scores X_i, Y_i are converted to ranks rg X_i, rg Y_i, and r_s is computed as

 r_s = ρ(rg_X, rg_Y) = cov(rg_X, rg_Y) / (σ_rg_X · σ_rg_Y)

 where
 ρ denotes the usual Pearson correlation coefficient, but applied to the rank variables,
 cov(rg_X, rg_Y) is the covariance of the rank variables,
 σ_rg_X and σ_rg_Y are the standard deviations of the rank variables.
Only if all n ranks are distinct integers can it be computed using the popular formula

 r_s = 1 − 6 Σ d_i² / (n(n² − 1))

 where
 d_i = rg(X_i) − rg(Y_i) is the difference between the two ranks of each observation,
 n is the number of observations.
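As an illustrative sketch (not part of the original article), the two equivalent computations above can be coded in a few lines of standard-library Python for tie-free data; the helper names `ranks`, `pearson`, and `spearman_simple` are chosen for this example:

```python
from math import sqrt

def ranks(values):
    """Rank each value 1..n (assumes all values are distinct)."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

def pearson(x, y):
    """Plain Pearson correlation: cov(x, y) / (sigma_x * sigma_y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's r_s as the Pearson correlation of the rank variables."""
    return pearson(ranks(x), ranks(y))

def spearman_simple(x, y):
    """Popular formula 1 - 6*sum(d_i^2)/(n(n^2-1)); valid only without ties."""
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

x = [3, 1, 4, 15, 9, 2, 6]
y = [10, 2, 30, 500, 90, 4, 60]   # monotone in x, but far from linear
print(spearman(x, y))
print(spearman_simple(x, y))
```

Because y is a monotone increasing function of x, both routes give a perfect r_s of 1 even though the Pearson correlation of the raw values would be well below 1.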
Identical values are usually^{[4]} each assigned fractional ranks equal to the average of their positions in the ascending order of the values, which is equivalent to averaging over all possible permutations.
If ties are present in the data set, the simplified formula above yields incorrect results: only if in both variables all ranks are distinct does σ_rg_X σ_rg_Y = Var(rg_X) = Var(rg_Y) = (n² − 1)/12 hold (calculated according to the biased variance). The first equation, normalizing by the standard deviations, may be used even when ranks are normalized to [0, 1] ("relative ranks") because it is insensitive both to translation and to linear scaling.
The simplified method should also not be used in cases where the data set is truncated; that is, when the Spearman correlation coefficient is desired for the top X records (whether by pre-change rank or post-change rank, or both), the user should use the Pearson correlation formula given above.^{[5]}
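The fractional ("average") rank assignment for identical values described above can be sketched as follows (an illustrative standard-library example, not from the article; `fractional_ranks` is a hypothetical helper name):

```python
def fractional_ranks(values):
    """Assign 1..n ranks; tied values share the average of their positions."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])  # indices in sorted order
    rank = [0.0] * n
    i = 0
    while i < n:
        # find the run of positions i..j holding equal values
        j = i
        while j + 1 < n and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + 1 + j + 1) / 2          # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            rank[order[k]] = avg
        i = j + 1
    return rank

print(fractional_ranks([1.0, 2.0, 2.0, 3.0]))   # the tied 2.0s get rank 2.5
```

With ties ranked this way, r_s is then computed via the Pearson-on-ranks formula rather than the simplified d_i² formula.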
The standard error of the coefficient (σ) was determined by Pearson in 1907 and Gosset in 1920. It is

 σ_r_s = 0.6325 / √(n − 1)
Related quantities
There are several other numerical measures that quantify the extent of statistical dependence between pairs of observations. The most common of these is the Pearson product-moment correlation coefficient, which is a similar correlation method to Spearman's rank, but measures the "linear" relationship between the raw numbers rather than between their ranks.
An alternative name for the Spearman rank correlation is the "grade correlation";^{[6]} in this, the "rank" of an observation is replaced by the "grade". In continuous distributions, the grade of an observation is, by convention, always one half less than the rank, and hence the grade and rank correlations are the same in this case. More generally, the "grade" of an observation is proportional to an estimate of the fraction of a population less than a given value, with the half-observation adjustment at observed values. Thus this corresponds to one possible treatment of tied ranks. While unusual, the term "grade correlation" is still in use.^{[7]}
Interpretation
The sign of the Spearman correlation indicates the direction of association between X (the independent variable) and Y (the dependent variable). If Y tends to increase when X increases, the Spearman correlation coefficient is positive. If Y tends to decrease when X increases, the Spearman correlation coefficient is negative. A Spearman correlation of zero indicates that there is no tendency for Y to either increase or decrease when X increases. The Spearman correlation increases in magnitude as X and Y become closer to being perfect monotone functions of each other. When X and Y are perfectly monotonically related, the Spearman correlation coefficient becomes 1. A perfect monotone increasing relationship implies that for any two pairs of data values X_{i}, Y_{i} and X_{j}, Y_{j}, that X_{i} − X_{j} and Y_{i} − Y_{j} always have the same sign. A perfect monotone decreasing relationship implies that these differences always have opposite signs.
The Spearman correlation coefficient is often described as being "nonparametric". This can have two meanings. First, a perfect Spearman correlation results when X and Y are related by any monotonic function. Contrast this with the Pearson correlation, which only gives a perfect value when X and Y are related by a linear function. The other sense in which the Spearman correlation is nonparametric is that its exact sampling distribution can be obtained without requiring knowledge (i.e., knowing the parameters) of the joint probability distribution of X and Y.
Example
In this example, the raw data in the table below are used to calculate the correlation between the IQ of a person and the number of hours spent in front of the TV per week.^{[citation needed]}
IQ, X_i  Hours of TV per week, Y_i

106  7 
86  0 
100  27 
101  50 
99  28 
103  29 
97  20 
113  12 
112  6 
110  17 
Firstly, evaluate d_i². To do so, use the following steps, reflected in the table below.
 Sort the data by the first column (X_i). Create a new column rank X_i and assign it the ranked values 1, 2, 3, ..., n.
 Next, sort the data by the second column (Y_i). Create a fourth column rank Y_i and similarly assign it the ranked values 1, 2, 3, ..., n.
 Create a fifth column d_i to hold the differences between the two rank columns (rank X_i and rank Y_i).
 Create one final column d_i² to hold the value of column d_i squared.
IQ, X_i  Hours of TV per week, Y_i  rank X_i  rank Y_i  d_i  d_i²

86  0  1  1  0  0 
97  20  2  6  −4  16 
99  28  3  8  −5  25 
100  27  4  7  −3  9 
101  50  5  10  −5  25 
103  29  6  9  −3  9 
106  7  7  3  4  16 
110  17  8  5  3  9 
112  6  9  2  7  49 
113  12  10  4  6  36 
With the d_i² found, add them to find Σ d_i² = 194. The value of n is 10. These values can now be substituted back into the equation

 ρ = 1 − 6 Σ d_i² / (n(n² − 1)) = 1 − (6 × 194) / (10 × (10² − 1))

which evaluates to ρ = −29/165 = −0.175757575... with a P-value = 0.627188 (using the t-distribution).
This low value shows that the correlation between IQ and hours spent watching TV is very low, although the negative value suggests that the longer the time spent watching television, the lower the IQ. In the case of ties in the original values, this formula should not be used; instead, the Pearson correlation coefficient should be calculated on the ranks (where ties are given fractional ranks, as described above).
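The worked example above can be verified with a short standard-library Python script (an illustrative sketch; the data have no ties, so the simplified d_i² formula applies):

```python
iq    = [106, 86, 100, 101, 99, 103, 97, 113, 112, 110]
hours = [7, 0, 27, 50, 28, 29, 20, 12, 6, 17]

def rank(values):
    """1..n ranks; all values in this data set are distinct."""
    order = sorted(values)
    return [order.index(v) + 1 for v in values]

# squared rank differences d_i^2, then the simplified formula
d2 = sum((rx - ry) ** 2 for rx, ry in zip(rank(iq), rank(hours)))
n = len(iq)
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
print(d2)    # 194
print(rho)   # -29/165 = -0.17575...
```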
Determining significance
One approach to test whether an observed value of ρ is significantly different from zero (r will always maintain −1 ≤ r ≤ 1) is to calculate the probability that it would be greater than or equal to the observed r, given the null hypothesis, by using a permutation test. An advantage of this approach is that it automatically takes into account the number of tied data values there are in the sample, and the way they are treated in computing the rank correlation.
Another approach parallels the use of the Fisher transformation in the case of the Pearson product-moment correlation coefficient. That is, confidence intervals and hypothesis tests relating to the population value ρ can be carried out using the Fisher transformation:

 F(r) = (1/2) ln((1 + r)/(1 − r)) = artanh(r)

If F(r) is the Fisher transformation of r, the sample Spearman rank correlation coefficient, and n is the sample size, then

 z = √((n − 3)/1.06) · F(r)

is a z-score for r which approximately follows a standard normal distribution under the null hypothesis of statistical independence (ρ = 0).^{[8]}^{[9]}
One can also test for significance using

 t = r √((n − 2)/(1 − r²))

which is distributed approximately as Student's t-distribution with n − 2 degrees of freedom under the null hypothesis.^{[10]} A justification for this result relies on a permutation argument.^{[11]}
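As an illustrative sketch (not from the article), both approximate test statistics can be evaluated for the IQ/TV example (r = −29/165, n = 10) with the standard library:

```python
from math import atanh, sqrt

r, n = -29 / 165, 10     # sample Spearman coefficient and sample size from the example

z = sqrt((n - 3) / 1.06) * atanh(r)     # Fisher-transform z-score
t = r * sqrt((n - 2) / (1 - r ** 2))    # Student's t with n - 2 degrees of freedom

print(z)
print(t)
```

Both statistics are small in magnitude here, consistent with the non-significant P-value of about 0.63 reported above.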
A generalization of the Spearman coefficient is useful in the situation where there are three or more conditions, a number of subjects are all observed in each of them, and it is predicted that the observations will have a particular order. For example, a number of subjects might each be given three trials at the same task, and it is predicted that performance will improve from trial to trial. A test of the significance of the trend between conditions in this situation was developed by E. B. Page^{[12]} and is usually referred to as Page's trend test for ordered alternatives.
Correspondence analysis based on Spearman's rho
Classic correspondence analysis is a statistical method that gives a score to every value of two nominal variables. In this way the Pearson correlation coefficient between them is maximized.
There exists an equivalent of this method, called grade correspondence analysis, which maximizes Spearman's rho or Kendall's tau.^{[13]}
See also
 Kendall tau rank correlation coefficient
 Chebyshev's sum inequality, rearrangement inequality (These two articles may shed light on the mathematical properties of Spearman's ρ.)
 Distance correlation
 Polychoric correlation
References
 ^ Scale types
 ^ Lehman, Ann (2005). Jmp For Basic Univariate And Multivariate Statistics: A Step-by-step Guide. Cary, NC: SAS Press. p. 123. ISBN 9781590475768.
 ^ Myers, Jerome L.; Well, Arnold D. (2003). Research Design and Statistical Analysis (2nd ed.). Lawrence Erlbaum. p. 508. ISBN 9780805840377.
 ^ Dodge, Yadolah (2010). The Concise Encyclopedia of Statistics. Springer-Verlag New York. p. 502. ISBN 9780387317427.
 ^ Al Jaber, Ahmed Odeh; Elayyan, Haifaa Omar (2018). Toward Quality Assurance and Excellence in Higher Education. River Publishers. p. 284. ISBN 9788793609549.
 ^ Yule, G. U.; Kendall, M. G. (1968) [1950]. An Introduction to the Theory of Statistics (14th ed.). Charles Griffin & Co. p. 268.
 ^ Piantadosi, J.; Howlett, P.; Boland, J. (2007). "Matching the grade correlation coefficient using a copula with maximum disorder". Journal of Industrial and Management Optimization. 3 (2): 305–312. doi:10.3934/jimo.2007.3.305.
 ^ Choi, S. C. (1977). "Tests of Equality of Dependent Correlation Coefficients". Biometrika. 64 (3): 645–647. doi:10.1093/biomet/64.3.645.
 ^ Fieller, E. C.; Hartley, H. O.; Pearson, E. S. (1957). "Tests for rank correlation coefficients. I". Biometrika. 44 (3–4): 470–481. CiteSeerX 10.1.1.474.9634. doi:10.1093/biomet/44.3-4.470.
 ^ Press; Vettering; Teukolsky; Flannery (1992). Numerical Recipes in C: The Art of Scientific Computing (2nd ed.). p. 640.
 ^ Kendall, M. G.; Stuart, A. (1973). The Advanced Theory of Statistics, Volume 2: Inference and Relationship. Griffin. ISBN 9780852642153. (Sections 31.19, 31.21)
 ^ Page, E. B. (1963). "Ordered hypotheses for multiple treatments: A significance test for linear ranks". Journal of the American Statistical Association. 58 (301): 216–230. doi:10.2307/2282965. JSTOR 2282965.
 ^ Kowalczyk, T.; Pleszczyńska, E.; Ruland, F., eds. (2004). Grade Models and Methods for Data Analysis with Applications for the Analysis of Data Populations. Studies in Fuzziness and Soft Computing. 151. Berlin Heidelberg New York: Springer Verlag. ISBN 9783540211204.
Further reading
 Corder, G.W. & Foreman, D.I. (2014). Nonparametric Statistics: A Step-by-Step Approach, Wiley. ISBN 9781118840313.
 Daniel, Wayne W. (1990). "Spearman rank correlation coefficient". Applied Nonparametric Statistics (2nd ed.). Boston: PWS-Kent. pp. 358–365. ISBN 9780534919764.
 Spearman C (1904). "The proof and measurement of association between two things". American Journal of Psychology. 15 (1): 72–101. doi:10.2307/1412159. JSTOR 1412159.
 Bonett DG, Wright TA (2000). "Sample size requirements for Pearson, Kendall, and Spearman correlations". Psychometrika. 65: 23–28. doi:10.1007/bf02294183.
 Kendall MG (1970). Rank correlation methods (4th ed.). London: Griffin. ISBN 9780852641996. OCLC 136868.
 Hollander M, Wolfe DA (1973). Nonparametric statistical methods. New York: Wiley. ISBN 9780471406358. OCLC 520735.
 Caruso JC, Cliff N (1997). "Empirical size, coverage, and power of confidence intervals for Spearman's Rho". Educational and Psychological Measurement. 57 (4): 637–654. doi:10.1177/0013164497057004009.
External links
Wikiversity has learning resources about Spearman's rank correlation coefficient 
 Table of critical values of ρ for significance with small samples
 Spearman’s Rank Correlation Coefficient – Excel Guide: sample data and formulae for Excel, developed by the Royal Geographical Society.