Mixed model

A mixed model, mixed-effects model or mixed error-component model is a statistical model containing both fixed effects and random effects.[1] These models are useful in a wide variety of disciplines in the physical, biological and social sciences. They are particularly useful in settings where repeated measurements are made on the same statistical units (longitudinal study), or where measurements are made on clusters of related statistical units. Because of their advantage in dealing with missing values, mixed effects models are often preferred over more traditional approaches such as repeated measures ANOVA.

History and current status

Ronald Fisher introduced random effects models to study the correlations of trait values between relatives.[2] In the 1950s, Charles Roy Henderson provided best linear unbiased estimates (BLUE) of fixed effects and best linear unbiased predictions (BLUP) of random effects.[3][4][5][6] Subsequently, mixed modeling has become a major area of statistical research, including work on computation of maximum likelihood estimates, non-linear mixed effects models, missing data in mixed effects models, and Bayesian estimation of mixed effects models. Mixed models are applied in many disciplines where multiple correlated measurements are made on each unit of interest. They are prominently used in research involving human and animal subjects in fields ranging from genetics to marketing, and have also been used in baseball[7] and industrial statistics.[8]

Definition

In matrix notation, a linear mixed model can be represented as

${\displaystyle {\boldsymbol {y}}=X{\boldsymbol {\beta }}+Z{\boldsymbol {u}}+{\boldsymbol {\epsilon }}}$

where

• ${\displaystyle {\boldsymbol {y}}}$ is a known vector of observations, with mean ${\displaystyle E({\boldsymbol {y}})=X{\boldsymbol {\beta }}}$;
• ${\displaystyle {\boldsymbol {\beta }}}$ is an unknown vector of fixed effects;
• ${\displaystyle {\boldsymbol {u}}}$ is an unknown vector of random effects, with mean ${\displaystyle E({\boldsymbol {u}})={\boldsymbol {0}}}$ and variance–covariance matrix ${\displaystyle \operatorname {var} ({\boldsymbol {u}})=G}$;
• ${\displaystyle {\boldsymbol {\epsilon }}}$ is an unknown vector of random errors, with mean ${\displaystyle E({\boldsymbol {\epsilon }})={\boldsymbol {0}}}$ and variance ${\displaystyle \operatorname {var} ({\boldsymbol {\epsilon }})=R}$;
• ${\displaystyle X}$ and ${\displaystyle Z}$ are known design matrices relating the observations ${\displaystyle {\boldsymbol {y}}}$ to ${\displaystyle {\boldsymbol {\beta }}}$ and ${\displaystyle {\boldsymbol {u}}}$, respectively.
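As a concrete illustration, these quantities can be simulated with NumPy. The dimensions, the random-intercept structure of Z, and the particular values of β, G, and R below are hypothetical choices for the sketch, not part of the definition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: n observations, p fixed effects, q groups.
n, p, q = 200, 2, 5

# Fixed-effects design matrix X (intercept plus one covariate).
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])          # "true" fixed effects for the simulation

# Random-effects design matrix Z: group-membership indicators,
# i.e. a random intercept for each of the q groups.
groups = rng.integers(0, q, size=n)
Z = np.zeros((n, q))
Z[np.arange(n), groups] = 1.0

G = 0.5 * np.eye(q)                  # var(u) = G
R = 0.1 * np.eye(n)                  # var(eps) = R

u = rng.multivariate_normal(np.zeros(q), G)      # random effects
eps = rng.multivariate_normal(np.zeros(n), R)    # random errors

y = X @ beta + Z @ u + eps           # the linear mixed model
```

Note that each row of Z contains a single 1, picking out the group that observation belongs to; richer random-effects structures (random slopes, crossed effects) simply add columns to Z.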

Estimation

The joint density of ${\displaystyle {\boldsymbol {y}}}$ and ${\displaystyle {\boldsymbol {u}}}$ can be written as ${\displaystyle f({\boldsymbol {y}},{\boldsymbol {u}})=f({\boldsymbol {y}}|{\boldsymbol {u}})\,f({\boldsymbol {u}})}$. Assuming normality, ${\displaystyle {\boldsymbol {u}}\sim {\mathcal {N}}({\boldsymbol {0}},G)}$, ${\displaystyle {\boldsymbol {\epsilon }}\sim {\mathcal {N}}({\boldsymbol {0}},R)}$ and ${\displaystyle \mathrm {Cov} ({\boldsymbol {u}},{\boldsymbol {\epsilon }})={\boldsymbol {0}}}$, maximizing the joint density over ${\displaystyle {\boldsymbol {\beta }}}$ and ${\displaystyle {\boldsymbol {u}}}$ gives Henderson's "mixed model equations" (MME) for linear mixed models:[3][5][9]

${\displaystyle {\begin{pmatrix}X'R^{-1}X&X'R^{-1}Z\\Z'R^{-1}X&Z'R^{-1}Z+G^{-1}\end{pmatrix}}{\begin{pmatrix}{\hat {\boldsymbol {\beta }}}\\{\hat {\boldsymbol {u}}}\end{pmatrix}}={\begin{pmatrix}X'R^{-1}{\boldsymbol {y}}\\Z'R^{-1}{\boldsymbol {y}}\end{pmatrix}}}$

The solutions to the MME, ${\displaystyle \textstyle {\hat {\boldsymbol {\beta }}}}$ and ${\displaystyle \textstyle {\hat {\boldsymbol {u}}}}$, are the best linear unbiased estimate (BLUE) of ${\displaystyle {\boldsymbol {\beta }}}$ and the best linear unbiased predictor (BLUP) of ${\displaystyle {\boldsymbol {u}}}$, respectively. This is a consequence of the Gauss–Markov theorem in the case where the conditional variance of the outcome is not proportional to the identity matrix. When the conditional variance is known, the inverse-variance-weighted least squares estimate is BLUE. However, the conditional variance is rarely, if ever, known, so it is desirable to estimate the variance components jointly with the weighted parameter estimates when solving the MME.
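Given X, Z, G, R, and y, Henderson's equations can be assembled as one block linear system and solved directly. The toy inputs below are hypothetical; the point is the block structure of the system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical small example: 6 observations, 2 fixed effects, 3 groups.
n, p, q = 6, 2, 3
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = np.kron(np.eye(q), np.ones((n // q, 1)))   # random intercept per group
G = 0.5 * np.eye(q)
R = 0.2 * np.eye(n)
y = rng.normal(size=n)

Rinv, Ginv = np.linalg.inv(R), np.linalg.inv(G)

# Henderson's mixed model equations as a (p+q) x (p+q) block system.
A = np.block([
    [X.T @ Rinv @ X, X.T @ Rinv @ Z],
    [Z.T @ Rinv @ X, Z.T @ Rinv @ Z + Ginv],
])
b = np.concatenate([X.T @ Rinv @ y, Z.T @ Rinv @ y])

sol = np.linalg.solve(A, b)
beta_hat, u_hat = sol[:p], sol[p:]   # BLUE of beta, BLUP of u
```

An equivalent route is generalized least squares on the marginal model, with covariance V = ZGZ′ + R; the block system above yields the same β̂ while avoiding the inversion of the n × n matrix V, which is why the MME form is preferred when n is large.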

One method used to fit such mixed models is the EM algorithm, in which the variance components are treated as unobserved nuisance parameters in the joint likelihood.[10] Mixed-model fitting is implemented in the major statistical software packages: R (lme in the nlme package, or lmer in the lme4 package), Python (the statsmodels package), Julia (the MixedModels.jl package), and SAS (proc mixed). When the distribution of the errors is normal, the solution to the mixed model equations is a maximum likelihood estimate.[11][12]
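The EM iteration can be sketched for the simplest case, a random-intercept model with u ~ N(0, σ²ᵤI) and ε ~ N(0, σ²ₑI). Everything below (data, dimensions, starting values, iteration count) is a hypothetical choice for illustration, and β is refreshed by generalized least squares within each iteration rather than profiled out exactly:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a hypothetical random-intercept data set.
n_groups, per_group = 30, 8
n = n_groups * per_group
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = np.kron(np.eye(n_groups), np.ones((per_group, 1)))
beta_true, s2u_true, s2e_true = np.array([1.0, -0.5]), 1.0, 0.5
y = (X @ beta_true
     + Z @ rng.normal(scale=np.sqrt(s2u_true), size=n_groups)
     + rng.normal(scale=np.sqrt(s2e_true), size=n))

# EM iterations treating u as the missing data and the variance
# components (s2u, s2e) as the parameters of interest.
s2u, s2e = 1.0, 1.0                     # arbitrary starting values
for _ in range(100):
    V = s2u * Z @ Z.T + s2e * np.eye(n)
    Vinv = np.linalg.inv(V)
    beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)   # GLS update
    resid = y - X @ beta
    # E-step: posterior mean m and covariance C of u given y.
    C = np.linalg.inv(Z.T @ Z / s2e + np.eye(n_groups) / s2u)
    m = C @ Z.T @ resid / s2e
    # M-step: update the variance components from the expected
    # complete-data sufficient statistics.
    s2u = (m @ m + np.trace(C)) / n_groups
    r = resid - Z @ m
    s2e = (r @ r + np.trace(Z @ C @ Z.T)) / n
```

Production implementations differ in important ways: they typically maximize the restricted likelihood (REML) rather than the full likelihood, exploit sparse or factored representations of Z instead of inverting V, and often switch to Newton-type updates for speed.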

References

1. ^ Baltagi, Badi H. (2008). Econometric Analysis of Panel Data (Fourth ed.). New York: Wiley. pp. 54–55. ISBN 978-0-470-51886-1.
2. ^ Fisher, RA (1918). "The correlation between relatives on the supposition of Mendelian inheritance". Transactions of the Royal Society of Edinburgh. 52 (2): 399–433. doi:10.1017/S0080456800012163.
3. ^ a b Robinson, G.K. (1991). "That BLUP is a Good Thing: The Estimation of Random Effects". Statistical Science. 6 (1): 15–32. doi:10.1214/ss/1177011926. JSTOR 2245695.
4. ^ C. R. Henderson; Oscar Kempthorne; S. R. Searle; C. M. von Krosigk (1959). "The Estimation of Environmental and Genetic Trends from Records Subject to Culling". Biometrics. International Biometric Society. 15 (2): 192–218. doi:10.2307/2527669. JSTOR 2527669.
5. ^ a b L. Dale Van Vleck. "Charles Roy Henderson, April 1, 1911 – March 14, 1989" (PDF). United States National Academy of Sciences.
6. ^ McLean, Robert A.; Sanders, William L.; Stroup, Walter W. (1991). "A Unified Approach to Mixed Linear Models". The American Statistician. American Statistical Association. 45 (1): 54–64. doi:10.2307/2685241. JSTOR 2685241.
7. ^ analytics guru and mixed model
8. ^ Mixed models in industry
9. ^ Henderson, C R (1973). "Sire evaluation and genetic trends" (PDF). Journal of Animal Science. American Society of Animal Science. 1973: 10–41. doi:10.1093/ansci/1973.Symposium.10. Retrieved 17 August 2014.
10. ^ Lindstrom, ML; Bates, DM (1988). "Newton–Raphson and EM algorithms for linear mixed-effects models for repeated-measures data". JASA. 83 (404): 1014–1021. doi:10.1080/01621459.1988.10478693.
11. ^ Laird, Nan M.; Ware, James H. (1982). "Random-Effects Models for Longitudinal Data". Biometrics. International Biometric Society. 38 (4): 963–974. doi:10.2307/2529876. JSTOR 2529876. PMID 7168798.
12. ^ Fitzmaurice, Garrett M.; Laird, Nan M.; Ware, James H. (2004). Applied Longitudinal Analysis. John Wiley & Sons. pp. 326–328.