Clustered standard errors

Clustered standard errors (or Liang–Zeger standard errors)[1] are measurements that estimate the standard error of a regression parameter in settings where observations may be subdivided into smaller-sized groups ("clusters") and where the sampling and/or treatment assignment is correlated within each group.[2][3] Clustered standard errors are widely used in a variety of applied econometric settings, including difference-in-differences[4] and experiments.[5]

Analogous to how Huber–White standard errors are consistent in the presence of heteroscedasticity and Newey–West standard errors are consistent in the presence of accurately modeled autocorrelation, clustered standard errors are consistent in the presence of cluster-based sampling or treatment assignment. Clustered standard errors are often justified by possible correlation in modeling residuals within each cluster; while recent work suggests that this is not the precise justification behind clustering,[6] it may be pedagogically useful.

Intuitive motivation

Clustered standard errors are often useful when treatment is assigned at the level of a cluster instead of at the individual level. For example, suppose that an educational researcher wants to discover whether a new teaching technique improves student test scores. She therefore assigns teachers in "treated" classrooms to try this new technique, while leaving "control" classrooms unaffected. When analyzing her results, she may want to keep the data at the student level (for example, to control for student-level observable characteristics). However, when estimating the standard error or confidence interval of her statistical model, she realizes that classical or even heteroscedasticity-robust standard errors are inappropriate because student test scores within each class are not independently distributed. Instead, students in classes with better teachers have especially high test scores (regardless of whether they receive the experimental treatment) while students in classes with worse teachers have especially low test scores. The researcher can cluster her standard errors at the level of a classroom to account for this aspect of her experiment.[7]
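
As a concrete sketch of this workflow (assuming Python with pandas and statsmodels, and hypothetical variable names score, treated, and classroom), one might fit a student-level OLS regression and then request standard errors clustered by classroom:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical student-level data: test scores, a classroom-level treatment
# indicator, and the classroom each student belongs to.
rng = np.random.default_rng(0)
n_classes, n_students = 40, 25
classroom = np.repeat(np.arange(n_classes), n_students)
treated = (classroom % 2 == 0).astype(int)               # treatment assigned by classroom
class_effect = rng.normal(0, 2, n_classes)[classroom]    # shared "teacher quality" shock
score = 50 + 3 * treated + class_effect + rng.normal(0, 5, n_classes * n_students)
df = pd.DataFrame({"score": score, "treated": treated, "classroom": classroom})

# Fit OLS at the student level, then cluster standard errors by classroom.
model = smf.ols("score ~ treated", data=df)
clustered = model.fit(cov_type="cluster", cov_kwds={"groups": df["classroom"]})
robust = model.fit(cov_type="HC1")  # heteroscedasticity-robust, for comparison

print(clustered.bse["treated"], robust.bse["treated"])
```

Because treatment is assigned by classroom and test scores share a classroom-level component, the clustered standard error on the treatment coefficient will typically be noticeably larger than the heteroscedasticity-robust one.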

While this example is very specific, similar issues arise in a wide variety of settings. For example, in many panel data settings (such as difference-in-differences) clustering often offers a simple and effective way to account for non-independence between periods within each unit (sometimes referred to as "autocorrelation in residuals").[4] Another common and logically distinct justification for clustering arises when a full population cannot be randomly sampled, and so instead clusters are sampled and then units are randomized within cluster. In this case, clustered standard errors account for the uncertainty driven by the fact that the researcher does not observe large parts of the population of interest.[8]

Mathematical motivation

A useful mathematical illustration comes from the case of one-way clustering in an ordinary least squares (OLS) model. Consider a simple model with $N$ observations that are subdivided into $C$ clusters. Let $Y$ be an $N \times 1$ vector of outcomes, $X$ an $N \times m$ matrix of covariates, $\beta$ an $m \times 1$ vector of unknown parameters, and $e$ an $N \times 1$ vector of unexplained residuals:

$$Y = X\beta + e$$

As is standard with OLS models, we minimize the sum of squared residuals to get an estimate $\hat{\beta}$:

$$\min_{\beta}\,(Y - X\beta)'(Y - X\beta) \quad \Rightarrow \quad \hat{\beta} = (X'X)^{-1}X'Y$$

From there, we can derive the classic "sandwich" estimator:

$$V(\hat{\beta}) = V\big((X'X)^{-1}X'Y\big) = V\big(\beta + (X'X)^{-1}X'e\big) = (X'X)^{-1}X'\,ee'\,X(X'X)^{-1}$$

Denoting $\Omega \equiv ee'$ yields a potentially more familiar form:

$$V(\hat{\beta}) = (X'X)^{-1}X'\Omega X(X'X)^{-1}$$

While one can develop a plug-in estimator by defining $\hat{e} \equiv Y - X\hat{\beta}$ and letting $\hat{\Omega} \equiv \hat{e}\hat{e}'$, this completely flexible estimator will not converge to $V(\hat{\beta})$ as $N \to \infty$. Given the assumptions that a practitioner deems as reasonable, different types of standard errors solve this problem in different ways. For example, classic homoskedastic standard errors assume that $\Omega$ is diagonal with identical elements $\sigma^2$, which simplifies the expression for $V(\hat{\beta})$. Huber–White standard errors assume that $\Omega$ is diagonal but that the diagonal value varies, while other types of standard errors (e.g. Newey–West, Moulton SEs, Conley spatial SEs) make other restrictions on the form of this matrix to reduce the number of parameters that the practitioner needs to estimate.
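
To make the preceding formulas concrete, here is a minimal NumPy sketch (simulated data, illustrative names) that computes $\hat{\beta} = (X'X)^{-1}X'Y$ and then compares the classic homoskedastic variance estimate with the Huber–White (HC0) sandwich, which plugs a diagonal matrix of squared residuals in for $\Omega$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
X = np.column_stack([np.ones(N), rng.normal(size=N)])     # N x m design with intercept
beta = np.array([1.0, 2.0])
e = rng.normal(scale=1 + np.abs(X[:, 1]), size=N)         # heteroscedastic errors
Y = X @ beta + e

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y                              # (X'X)^{-1} X'Y
e_hat = Y - X @ beta_hat

# Classic homoskedastic estimate: Omega = sigma^2 * I, so V = sigma^2 (X'X)^{-1}
sigma2 = e_hat @ e_hat / (N - X.shape[1])
V_classic = sigma2 * XtX_inv

# Huber-White (HC0): Omega approximated by diag(e_hat_i^2)
meat = X.T @ (e_hat[:, None] ** 2 * X)                    # X' diag(e_hat^2) X
V_hc0 = XtX_inv @ meat @ XtX_inv

print(np.sqrt(np.diag(V_classic)), np.sqrt(np.diag(V_hc0)))
```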

Clustered standard errors assume that $\Omega$ is block-diagonal according to the clusters in the sample, with unrestricted values in each block but zeros elsewhere. In this case, one can define $X_c$ and $\Omega_c$ as the within-block analogues of $X$ and $\Omega$ and derive the following mathematical fact:

$$V(\hat{\beta}) = (X'X)^{-1}\left(\sum_{c} X_c'\Omega_c X_c\right)(X'X)^{-1}$$

By constructing plug-in matrices $\hat{\Omega}_c$, one can form an estimator for $V(\hat{\beta})$ that is consistent as the number of clusters $C$ becomes large. While no specific number of clusters is statistically proven to be sufficient, practitioners often cite a number in the range of 30–50 and are comfortable using clustered standard errors when the number of clusters exceeds that threshold.
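
Continuing in the same spirit, a minimal sketch of the cluster-robust plug-in estimator sums the within-cluster terms $X_c'\hat{e}_c\hat{e}_c'X_c$ to form the middle of the sandwich; note that this omits the finite-sample corrections most statistical packages apply by default, so it is illustrative rather than a reproduction of any package's exact output:

```python
import numpy as np

def cluster_robust_cov(X, e_hat, groups):
    """Basic cluster-robust covariance: sum of within-cluster blocks, no small-sample correction."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        idx = groups == g
        Xc, ec = X[idx], e_hat[idx]
        s = Xc.T @ ec                      # X_c' e_hat_c
        meat += np.outer(s, s)             # X_c' e_hat_c e_hat_c' X_c
    return XtX_inv @ meat @ XtX_inv

# Example with cluster-level shocks (names are illustrative)
rng = np.random.default_rng(2)
C, n_per = 40, 25
groups = np.repeat(np.arange(C), n_per)
X = np.column_stack([np.ones(C * n_per), (groups % 2 == 0).astype(float)])
e = rng.normal(0, 2, C)[groups] + rng.normal(0, 5, C * n_per)
Y = X @ np.array([50.0, 3.0]) + e

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
e_hat = Y - X @ beta_hat
V_cluster = cluster_robust_cov(X, e_hat, groups)
print(np.sqrt(np.diag(V_cluster)))        # clustered standard errors
```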

References

  1. ^ Liang, Kung-Yee; Zeger, Scott L. (1986-04-01). "Longitudinal data analysis using generalized linear models". Biometrika. 73 (1): 13–22. doi:10.1093/biomet/73.1.13. ISSN 0006-3444.
  2. ^ Cameron, A. Colin; Miller, Douglas L. (2015-03-31). "A Practitioner's Guide to Cluster-Robust Inference". Journal of Human Resources. 50 (2): 317–372. CiteSeerX 10.1.1.703.724. doi:10.3368/jhr.50.2.317. ISSN 0022-166X. S2CID 1296789.
  3. ^ "ARE 212". Fiona Burlig. Retrieved 2020-07-05.
  4. ^ a b Bertrand, Marianne; Duflo, Esther; Mullainathan, Sendhil (2004-02-01). "How Much Should We Trust Differences-In-Differences Estimates?". The Quarterly Journal of Economics. 119 (1): 249–275. doi:10.1162/003355304772839588. hdl:1721.1/63690. ISSN 0033-5533. S2CID 470667.
  5. ^ Yixin Tang (2019-09-11). "Analyzing Switchback Experiments by Cluster Robust Standard Error to prevent false positive results". DoorDash Engineering Blog. Retrieved 2020-07-05.
  6. ^ Abadie, Alberto; Athey, Susan; Imbens, Guido; Wooldridge, Jeffrey (2017-10-24). "When Should You Adjust Standard Errors for Clustering?". arXiv:1710.02926 [math.ST].
  7. ^ "CLUSTERED STANDARD ERRORS". Economic Theory Blog. 2016. Archived from the original on 2016-11-06. Retrieved 28 September 2021.
  8. ^ "When should you cluster standard errors? New wisdom from the econometrics oracle". blogs.worldbank.org. Retrieved 2020-07-05.