L1-norm principal component analysis

L1-PCA compared with PCA. Nominal data (blue points); outlier (red point); PC (black line); L1-PC (red line); nominal maximum-variance line (dotted line).

L1-norm principal component analysis (L1-PCA) is a general method for multivariate data analysis.[1] L1-PCA is often preferred over standard L2-norm principal component analysis (PCA) when the analyzed data may contain outliers (faulty values or corruptions).[2][3][4]

Both L1-PCA and standard PCA seek a collection of orthogonal directions (principal components) that define a subspace wherein data representation is maximized according to the selected criterion.[5][6][7] Standard PCA quantifies data representation as the aggregate of the squared L2-norms of the data-point projections onto the subspace; maximizing this is equivalent to minimizing the aggregate squared Euclidean distance of the original points from their subspace-projected representations. L1-PCA uses instead the aggregate of the L1-norms of the data-point projections onto the subspace.[8] In PCA and L1-PCA, the number of principal components (PCs) is lower than the rank of the analyzed matrix, which coincides with the dimensionality of the space spanned by the original data points. Therefore, PCA and L1-PCA are commonly employed for dimensionality reduction, for the purpose of data denoising or compression. Among the advantages of standard PCA that contributed to its high popularity are its low-cost computational implementation by means of singular-value decomposition (SVD)[9] and its statistical optimality when the data set is generated by a true multivariate normal data source.
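For reference, the following is a minimal NumPy sketch of standard PCA computed via SVD (the function and variable names are illustrative, not drawn from the cited sources):

import numpy as np

def pca(X, K):
    # Center the D x N data matrix so that each coordinate has zero mean.
    Xc = X - X.mean(axis=1, keepdims=True)
    # The K dominant left singular vectors of Xc are the standard PCs.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :K]

# Example: reduce 5-dimensional points to a 2-dimensional representation.
X = np.random.randn(5, 100)
Q = pca(X, 2)          # 5 x 2 matrix with orthonormal columns
Y = Q.T @ X            # 2 x 100 reduced-dimension representation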

However, in modern big data sets, data often include corrupted, faulty points, commonly referred to as outliers.[10] Standard PCA is known to be sensitive to outliers, even when they appear as a small fraction of the processed data.[11] The reason is that the L2-norm formulation of L2-PCA places squared emphasis on the magnitude of each coordinate of each data point, ultimately overemphasizing peripheral points, such as outliers. On the other hand, following an L1-norm formulation, L1-PCA places linear emphasis on the coordinates of each data point, effectively restraining outliers.[12]

Formulation

Consider any matrix $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N] \in \mathbb{R}^{D \times N}$ consisting of $N$ $D$-dimensional data points. Define $r = \operatorname{rank}(\mathbf{X})$. For integer $K$ such that $1 \leq K < r$, L1-PCA is formulated as:[1]

$$
\max_{\mathbf{Q} \in \mathbb{R}^{D \times K},\; \mathbf{Q}^\top \mathbf{Q} = \mathbf{I}_K} \left\| \mathbf{Q}^\top \mathbf{X} \right\|_1
\qquad (1)
$$

For $K = 1$, (1) simplifies to finding the L1-norm principal component (L1-PC) of $\mathbf{X}$ by

$$
\max_{\mathbf{q} \in \mathbb{R}^{D},\; \|\mathbf{q}\|_2 = 1} \left\| \mathbf{X}^\top \mathbf{q} \right\|_1
\qquad (2)
$$

In (1)-(2), the L1-norm $\|\cdot\|_1$ returns the sum of the absolute entries of its argument, and the L2-norm $\|\cdot\|_2$ returns the square root of the sum of the squared entries of its argument. If one substitutes the L1-norm in (1) by the Frobenius norm $\|\cdot\|_F$, the problem becomes standard PCA and is solved by the matrix $\mathbf{Q}$ that contains the $K$ dominant left singular vectors of $\mathbf{X}$ (i.e., the singular vectors that correspond to the $K$ highest singular values).

The maximization metric in (2) can be expanded as

$$
\left\| \mathbf{X}^\top \mathbf{q} \right\|_1
= \sum_{n=1}^{N} \left| \mathbf{x}_n^\top \mathbf{q} \right|
= \max_{\mathbf{b} \in \{\pm 1\}^{N}} \mathbf{b}^\top \mathbf{X}^\top \mathbf{q},
\qquad (3)
$$

where the inner maximum is attained at $\mathbf{b} = \operatorname{sgn}(\mathbf{X}^\top \mathbf{q})$.
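The identity in (3) is straightforward to check numerically. A small illustrative NumPy sketch:

import numpy as np

np.random.seed(0)
X = np.random.randn(4, 6)              # D = 4, N = 6
q = np.random.randn(4)
q /= np.linalg.norm(q)                 # unit-norm direction, as in (2)

lhs = np.abs(X.T @ q).sum()            # ||X^T q||_1
b = np.sign(X.T @ q)                   # maximizing antipodal vector
rhs = b @ (X.T @ q)                    # b^T X^T q
assert np.isclose(lhs, rhs)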

Solution

For any matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ with $m \geq n$, define $\Phi(\mathbf{A})$ as the nearest (in the Frobenius/L2-norm sense) matrix to $\mathbf{A}$ that has orthonormal columns. That is, define

$$
\Phi(\mathbf{A}) = \underset{\mathbf{Q}:\; \mathbf{Q}^\top \mathbf{Q} = \mathbf{I}_n}{\operatorname{argmin}} \left\| \mathbf{A} - \mathbf{Q} \right\|_F.
\qquad (4)
$$

The Procrustes theorem[13][14] states that, if $\mathbf{A}$ has SVD $\mathbf{A} = \mathbf{U}_{m \times n} \boldsymbol{\Sigma}_{n \times n} \mathbf{V}_{n \times n}^\top$, then $\Phi(\mathbf{A}) = \mathbf{U}\mathbf{V}^\top$.
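In code, $\Phi(\cdot)$ is essentially a one-liner; a hedged NumPy sketch (the function name phi is an arbitrary choice):

import numpy as np

def phi(A):
    # Nearest matrix with orthonormal columns (Frobenius sense): Phi(A) = U V^T.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

A = np.random.randn(6, 3)
Q = phi(A)
assert np.allclose(Q.T @ Q, np.eye(3))   # orthonormal columns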

Markopoulos, Karystinos, and Pados[1] showed that, if $\mathbf{B}_{\text{BNM}}$ is the exact solution to the binary nuclear-norm maximization (BNM) problem

$$
\max_{\mathbf{B} \in \{\pm 1\}^{N \times K}} \left\| \mathbf{X}\mathbf{B} \right\|_*
\qquad (5)
$$

then

$$
\mathbf{Q}_{L1} = \Phi(\mathbf{X}\mathbf{B}_{\text{BNM}})
\qquad (6)
$$

is the exact solution to L1-PCA in (1). The nuclear norm $\|\cdot\|_*$ in (5) returns the sum of the singular values of its matrix argument and can be calculated by means of standard SVD. Moreover, it holds that, given the solution to L1-PCA, $\mathbf{Q}_{L1}$, the solution to BNM can be obtained as

$$
\mathbf{B}_{\text{BNM}} = \operatorname{sgn}\!\left(\mathbf{X}^\top \mathbf{Q}_{L1}\right),
\qquad (7)
$$

where $\operatorname{sgn}(\cdot)$ returns the $\{\pm 1\}$-sign matrix of its matrix argument (with no loss of generality, one can set $\operatorname{sgn}(0) = 1$). In addition, it follows that $\|\mathbf{X}^\top \mathbf{Q}_{L1}\|_1 = \|\mathbf{X}\mathbf{B}_{\text{BNM}}\|_*$. BNM in (5) is a combinatorial problem over antipodal binary variables. Therefore, its exact solution can be found through exhaustive evaluation of all $2^{NK}$ elements of its feasibility set, with asymptotic cost $\mathcal{O}(2^{NK})$. Accordingly, L1-PCA can also be solved, through BNM, with cost $\mathcal{O}(2^{NK})$ (exponential in the product of the number of data points and the number of sought-after components). It turns out that L1-PCA can be solved optimally (exactly) with complexity that is polynomial in $N$ for fixed data dimension $D$, specifically $\mathcal{O}(N^{rK - K + 1})$.[1]

For the special case of $K = 1$ (single L1-PC of $\mathbf{X}$), BNM takes the binary-quadratic-maximization (BQM) form

$$
\max_{\mathbf{b} \in \{\pm 1\}^{N}} \mathbf{b}^\top \mathbf{X}^\top \mathbf{X} \mathbf{b}.
\qquad (8)
$$

The transition from (5) to (8) for $K = 1$ holds true because the unique singular value of $\mathbf{X}\mathbf{b}$ is equal to $\|\mathbf{X}\mathbf{b}\|_2 = \sqrt{\mathbf{b}^\top \mathbf{X}^\top \mathbf{X} \mathbf{b}}$, for every $\mathbf{b}$. Then, if $\mathbf{b}_{\text{BNM}}$ is the solution to BQM in (8), it holds that

$$
\mathbf{q}_{L1} = \frac{\mathbf{X}\mathbf{b}_{\text{BNM}}}{\left\|\mathbf{X}\mathbf{b}_{\text{BNM}}\right\|_2}
\qquad (9)
$$

is the exact L1-PC of $\mathbf{X}$, as defined in (2). In addition, it holds that $\mathbf{b}_{\text{BNM}} = \operatorname{sgn}(\mathbf{X}^\top \mathbf{q}_{L1})$ and $\|\mathbf{X}^\top \mathbf{q}_{L1}\|_1 = \|\mathbf{X}\mathbf{b}_{\text{BNM}}\|_2$.
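These relations are easy to verify on a toy example by exhaustive search over $\mathbf{b}$; the following NumPy sketch (illustrative names, feasible only for small $N$) solves (8) by enumeration and checks (9) and the sign relation:

import itertools
import numpy as np

np.random.seed(2)
X = np.random.randn(3, 8)                        # D = 3, N = 8 (2^8 candidates)

# Solve BQM (8) exhaustively over b in {+1, -1}^N.
best_b, best_val = None, -np.inf
for bits in itertools.product([-1.0, 1.0], repeat=8):
    b = np.array(bits)
    val = b @ (X.T @ X) @ b
    if val > best_val:
        best_val, best_b = val, b

q = (X @ best_b) / np.linalg.norm(X @ best_b)    # eq. (9): exact L1-PC
assert np.allclose(best_b, np.sign(X.T @ q))     # b_BNM = sgn(X^T q)
assert np.isclose(np.abs(X.T @ q).sum(),         # ||X^T q||_1 = ||X b_BNM||_2
                  np.linalg.norm(X @ best_b))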

Algorithms

Exact solution of exponential complexity

As shown above, the exact solution to L1-PCA can be obtained by the following two-step process:

1. Solve the problem in (5) to obtain $\mathbf{B}_{\text{BNM}}$.
2. Apply SVD on $\mathbf{X}\mathbf{B}_{\text{BNM}}$ to obtain $\mathbf{Q}_{L1} = \Phi(\mathbf{X}\mathbf{B}_{\text{BNM}})$, per (6).

BNM in (5) can be solved by exhaustive search over the domain of $\mathbf{B}$ with cost $\mathcal{O}(2^{NK})$, as in the sketch below.
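A hedged brute-force NumPy sketch of this two-step procedure follows (function names are illustrative; it is practical only for very small $N$ and $K$ because of the $2^{NK}$ enumerations):

import itertools
import numpy as np

def phi(A):
    # Procrustes step (4): Phi(A) = U V^T from the SVD of A.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

def l1pca_exhaustive(X, K):
    # Step 1: solve BNM (5) by enumerating all B in {+1, -1}^{N x K}.
    N = X.shape[1]
    best_B, best_val = None, -np.inf
    for bits in itertools.product([-1.0, 1.0], repeat=N * K):
        B = np.asarray(bits).reshape(N, K)
        val = np.linalg.svd(X @ B, compute_uv=False).sum()   # nuclear norm of XB
        if val > best_val:
            best_val, best_B = val, B
    # Step 2: recover the L1-PCs via the Procrustes step, eq. (6).
    return phi(X @ best_B)

# Example: two L1-PCs of six 3-dimensional points (2^12 candidates).
Q = l1pca_exhaustive(np.random.randn(3, 6), 2)   # 3 x 2, orthonormal columns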

Exact solution of polynomial complexity

Also, L1-PCA can be solved optimally with cost $\mathcal{O}(N^{rK - K + 1})$ when $r = \operatorname{rank}(\mathbf{X})$ is constant with respect to $N$ (which always holds for finite data dimension $D$, since $r \leq D$).[1][15]

Approximate efficient solvers

In 2008, Kwak[12] proposed an iterative algorithm for the approximate solution of L1-PCA for $K = 1$. This iterative method was later generalized for $K > 1$ components.[16] Another approximate efficient solver was proposed by McCoy and Tropp[17] by means of semidefinite programming (SDP). More recently, L1-PCA (and BNM in (5)) has been solved efficiently by means of bit-flipping iterations (the L1-BF algorithm).[8][18]

L1-BF algorithm

function L1BF(X, K):
    Initialize B ∈ {±1}^{N×K} arbitrarily and set the index list L ← {1, 2, …, NK}
    Set t ← 0 and ω ← ‖XB‖*
    Until termination (or T iterations)
        t ← t + 1, flipped ← false
        For each x ∈ L
            n ← ⌈x/K⌉, k ← x − (n − 1)K
            [B]n,k ← −[B]n,k              // flip bit
            a ← ‖XB‖*                     // calculated by SVD or faster (see [8])
            if a > ω
                ω ← a, flipped ← true
                L ← L \ {x}
            else
                [B]n,k ← −[B]n,k          // flip the bit back
            end
        if not flipped                    // no bit was flipped
            if L = {1, 2, …, NK}
                terminate
            else
                L ← {1, 2, …, NK}         // restore the full index list and continue

The computational cost of L1-BF is polynomial in $N$, $D$, and $K$.[8]
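For intuition, the following is a hedged NumPy sketch of greedy bit flipping for the single-component case ($K = 1$); it follows the spirit of L1-BF but is not the authors' optimized implementation (which handles $K > 1$ and uses faster metric updates than a fresh evaluation per flip):

import numpy as np

def l1pc_bitflip(X, max_iter=1000, seed=0):
    # Greedy single-bit flipping on the BQM objective b^T (X^T X) b of (8).
    N = X.shape[1]
    rng = np.random.default_rng(seed)
    b = np.where(rng.standard_normal(N) >= 0, 1.0, -1.0)   # random antipodal start
    G = X.T @ X                                            # N x N Gram matrix
    for _ in range(max_iter):
        Gb = G @ b
        # Objective change from flipping bit n: -4 b_n (Gb)_n + 4 G_nn.
        deltas = -4.0 * b * Gb + 4.0 * np.diag(G)
        n = int(np.argmax(deltas))
        if deltas[n] <= 1e-12:                             # no improving flip: local optimum
            break
        b[n] = -b[n]
    q = X @ b
    return q / np.linalg.norm(q)                           # normalize as in (9)

Each candidate flip is scored in closed form through the Gram matrix, so an iteration costs $\mathcal{O}(N^2)$ after the one-time $\mathcal{O}(N^2 D)$ computation of $\mathbf{G}$; this loosely mirrors the efficiency argument of [8].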

Complex data

L1-PCA has also been generalized to process complex data. For complex L1-PCA, two efficient algorithms were proposed in 2018.[19]

Tensor data

L1-PCA has also been extended for the analysis of tensor data, in the form of L1-Tucker, the L1-norm robust analogue of standard Tucker decomposition.[20] Two algorithms for the solution of L1-Tucker are L1-HOSVD and L1-HOOI.[20][21][22]

Code

MATLAB code for L1-PCA is available at MathWorks[23] and other repositories.[18]

References

  1. ^ a b c d e Markopoulos, Panos P.; Karystinos, George N.; Pados, Dimitris A. (October 2014). "Optimal Algorithms for L1-subspace Signal Processing". IEEE Transactions on Signal Processing. 62 (19): 5046–5058. arXiv:1405.6785. Bibcode:2014ITSP...62.5046M. doi:10.1109/TSP.2014.2338077. S2CID 1494171.
  2. ^ Barrodale, I. (1968). "L1 Approximation and the Analysis of Data". Applied Statistics. 17 (1): 51–57. doi:10.2307/2985267. JSTOR 2985267.
  3. ^ Barnett, Vic; Lewis, Toby (1994). Outliers in statistical data (3. ed.). Chichester [u.a.]: Wiley. ISBN 978-0471930945.
  4. ^ Kanade, T.; Ke, Qifa (June 2005). "Robust L₁ Norm Factorization in the Presence of Outliers and Missing Data by Alternative Convex Programming". 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05). Vol. 1. IEEE. pp. 739–746. CiteSeerX 10.1.1.63.4605. doi:10.1109/CVPR.2005.309. ISBN 978-0-7695-2372-9. S2CID 17144854.
  5. ^ Jolliffe, I.T. (2004). Principal component analysis (2nd ed.). New York: Springer. ISBN 978-0387954424.
  6. ^ Bishop, Christopher M. (2007). Pattern recognition and machine learning (Corr. printing. ed.). New York: Springer. ISBN 978-0-387-31073-2.
  7. ^ Pearson, Karl (8 June 2010). "On Lines and Planes of Closest Fit to Systems of Points in Space". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 2 (11): 559–572. doi:10.1080/14786440109462720. S2CID 125037489.
  8. ^ a b c d Markopoulos, Panos P.; Kundu, Sandipan; Chamadia, Shubham; Pados, Dimitris A. (15 August 2017). "Efficient L1-Norm Principal-Component Analysis via Bit Flipping". IEEE Transactions on Signal Processing. 65 (16): 4252–4264. arXiv:1610.01959. Bibcode:2017ITSP...65.4252M. doi:10.1109/TSP.2017.2708023. S2CID 7931130.
  9. ^ Golub, Gene H. (April 1973). "Some Modified Matrix Eigenvalue Problems". SIAM Review. 15 (2): 318–334. CiteSeerX 10.1.1.454.9868. doi:10.1137/1015032.
  10. ^ Barnett, Vic; Lewis, Toby (1994). Outliers in statistical data (3. ed.). Chichester [u.a.]: Wiley. ISBN 978-0471930945.
  11. ^ Candès, Emmanuel J.; Li, Xiaodong; Ma, Yi; Wright, John (1 May 2011). "Robust principal component analysis?". Journal of the ACM. 58 (3): 1–37. arXiv:0912.3599. doi:10.1145/1970392.1970395. S2CID 7128002.
  12. ^ a b Kwak, N. (September 2008). "Principal Component Analysis Based on L1-Norm Maximization". IEEE Transactions on Pattern Analysis and Machine Intelligence. 30 (9): 1672–1680. CiteSeerX 10.1.1.333.1176. doi:10.1109/TPAMI.2008.114. PMID 18617723. S2CID 11882870.
  13. ^ Eldén, Lars; Park, Haesun (1 June 1999). "A Procrustes problem on the Stiefel manifold". Numerische Mathematik. 82 (4): 599–619. CiteSeerX 10.1.1.54.3580. doi:10.1007/s002110050432. S2CID 206895591.
  14. ^ Schönemann, Peter H. (March 1966). "A generalized solution of the orthogonal procrustes problem". Psychometrika. 31 (1): 1–10. doi:10.1007/BF02289451. hdl:10338.dmlcz/103138. S2CID 121676935.
  15. ^ Markopoulos, PP; Kundu, S; Chamadia, S; Tsagkarakis, N; Pados, DA (2018). "Outlier-Resistant Data Processing with L1-Norm Principal Component Analysis". Advances in Principal Component Analysis. pp. 121–135. doi:10.1007/978-981-10-6704-4_6. ISBN 978-981-10-6703-7.
  16. ^ Nie, F; Huang, H; Ding, C; Luo, Dijun; Wang, H (July 2011). "Robust principal component analysis with non-greedy l1-norm maximization". 22nd International Joint Conference on Artificial Intelligence: 1433–1438.
  17. ^ McCoy, Michael; Tropp, Joel A. (2011). "Two proposals for robust PCA using semidefinite programming". Electronic Journal of Statistics. 5: 1123–1160. arXiv:1012.1086. doi:10.1214/11-EJS636. S2CID 14102421.
  18. ^ a b Markopoulos, PP. "Software Repository". Retrieved May 21, 2018.[permanent dead link]
  19. ^ Tsagkarakis, Nicholas; Markopoulos, Panos P.; Sklivanitis, George; Pados, Dimitris A. (15 June 2018). "L1-Norm Principal-Component Analysis of Complex Data". IEEE Transactions on Signal Processing. 66 (12): 3256–3267. arXiv:1708.01249. Bibcode:2018ITSP...66.3256T. doi:10.1109/TSP.2018.2821641. S2CID 21011653.
  20. ^ a b Chachlakis, Dimitris G.; Prater-Bennette, Ashley; Markopoulos, Panos P. (22 November 2019). "L1-norm Tucker Tensor Decomposition". IEEE Access. 7: 178454–178465. arXiv:1904.06455. doi:10.1109/ACCESS.2019.2955134.
  21. ^ Markopoulos, Panos P.; Chachlakis, Dimitris G.; Prater-Bennette, Ashley (21 February 2019). "L1-Norm Higher-Order Singular-Value Decomposition". 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP). pp. 1353–1357. doi:10.1109/GlobalSIP.2018.8646385. ISBN 978-1-7281-1295-4. S2CID 67874182.
  22. ^ Markopoulos, Panos P.; Chachlakis, Dimitris G.; Papalexakis, Evangelos (April 2018). "The Exact Solution to Rank-1 L1-Norm TUCKER2 Decomposition". IEEE Signal Processing Letters. 25 (4): 511–515. arXiv:1710.11306. Bibcode:2018ISPL...25..511M. doi:10.1109/LSP.2018.2790901. S2CID 3693326.
  23. ^ "L1-PCA TOOLBOX". Retrieved May 21, 2018.