Canonical Correlation Analysis Pdf

flexyellow
17 min read · Jun 28, 2021


The maximum canonical correlation is the maximum of $\rho$ with respect to the weight vectors $w_x$ and $w_y$. Canonical correlation analysis (CCA) is a means of assessing the relationship between two sets of variables: it finds linear combinations of the variables in one set and linear combinations of the variables in the other set that are maximally correlated. A typical purpose is data reduction, explaining the covariation between two sets of variables using a small number of linear combinations. Both CCA and its kernelised variant (KCCA) can be formulated as a generalised eigenproblem.

In statistics, canonical-correlation analysis (CCA), also called canonical variates analysis, is a way of inferring information from cross-covariance matrices. If we have two vectors $X = (X_1, \dots, X_n)$ and $Y = (Y_1, \dots, Y_m)$ of random variables, and there are correlations among the variables, then canonical-correlation analysis will find linear combinations of $X$ and $Y$ which have maximum correlation with each other.[1] T. R. Knapp notes that 'virtually all of the commonly encountered parametric tests of significance can be treated as special cases of canonical-correlation analysis, which is the general procedure for investigating the relationships between two sets of variables.'[2] The method was first introduced by Harold Hotelling in 1936,[3] although in the context of angles between flats the mathematical concept was published by Jordan in 1875.[4]

Definition

Given two column vectors $X = (x_1, \dots, x_n)'$ and $Y = (y_1, \dots, y_m)'$ of random variables with finite second moments, one may define the cross-covariance $\Sigma_{XY} = \operatorname{cov}(X, Y)$ to be the $n \times m$ matrix whose $(i, j)$ entry is the covariance $\operatorname{cov}(x_i, y_j)$. In practice, we would estimate the covariance matrix based on sampled data from $X$ and $Y$ (i.e. from a pair of data matrices).

Canonical-correlation analysis seeks vectors $a \in \mathbb{R}^n$ and $b \in \mathbb{R}^m$ such that the random variables $a^T X$ and $b^T Y$ maximize the correlation $\rho = \operatorname{corr}(a^T X, b^T Y)$. The random variables $U = a^T X$ and $V = b^T Y$ are the first pair of canonical variables. One then seeks vectors maximizing the same correlation subject to the constraint that they be uncorrelated with the first pair of canonical variables; this gives the second pair of canonical variables. This procedure may be continued up to $\min\{m, n\}$ times.

$$(a', b') = \underset{a, b}{\operatorname{argmax}} \ \operatorname{corr}(a^T X, b^T Y)$$

Computation

Derivation

Let $\Sigma_{XX} = \operatorname{cov}(X, X)$ and $\Sigma_{YY} = \operatorname{cov}(Y, Y)$. The quantity to maximize is

$$\rho = \frac{a^T \Sigma_{XY} b}{\sqrt{a^T \Sigma_{XX} a}\,\sqrt{b^T \Sigma_{YY} b}}.$$

The first step is a change of basis: define

$$c = \Sigma_{XX}^{1/2} a, \qquad d = \Sigma_{YY}^{1/2} b.$$

Thus we have

$$\rho = \frac{c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2} d}{\sqrt{c^T c}\,\sqrt{d^T d}}.$$

By the Cauchy–Schwarz inequality, we have

$$\left(c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2}\right) d \le \left(c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2} \Sigma_{YY}^{-1/2} \Sigma_{YX} \Sigma_{XX}^{-1/2} c\right)^{1/2} \left(d^T d\right)^{1/2},$$

$$\rho \le \frac{\left(c^T \Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX} \Sigma_{XX}^{-1/2} c\right)^{1/2}}{\left(c^T c\right)^{1/2}}.$$

There is equality if the vectors $d$ and $\Sigma_{YY}^{-1/2} \Sigma_{YX} \Sigma_{XX}^{-1/2} c$ are collinear. In addition, the maximum correlation is attained if $c$ is the eigenvector with the maximum eigenvalue of the matrix $\Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX} \Sigma_{XX}^{-1/2}$ (see Rayleigh quotient). The subsequent pairs are found using eigenvalues of decreasing magnitude. Orthogonality is guaranteed by the symmetry of the correlation matrices.

Another way of viewing this computation is that $c$ and $d$ are the left and right singular vectors of the correlation matrix of $X$ and $Y$ corresponding to the largest singular value.

Solution

The solution is therefore:

  • $c$ is an eigenvector of $\Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX} \Sigma_{XX}^{-1/2}$
  • $d$ is proportional to $\Sigma_{YY}^{-1/2} \Sigma_{YX} \Sigma_{XX}^{-1/2} c$

Reciprocally, there is also:

  • $d$ is an eigenvector of $\Sigma_{YY}^{-1/2} \Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY} \Sigma_{YY}^{-1/2}$
  • $c$ is proportional to $\Sigma_{XX}^{-1/2} \Sigma_{XY} \Sigma_{YY}^{-1/2} d$

Reversing the change of coordinates, we have that

  • $a$ is an eigenvector of $\Sigma_{XX}^{-1} \Sigma_{XY} \Sigma_{YY}^{-1} \Sigma_{YX}$,
  • $b$ is proportional to $\Sigma_{YY}^{-1} \Sigma_{YX} a$;
  • $b$ is an eigenvector of $\Sigma_{YY}^{-1} \Sigma_{YX} \Sigma_{XX}^{-1} \Sigma_{XY}$,
  • $a$ is proportional to $\Sigma_{XX}^{-1} \Sigma_{XY} b$.

The canonical variables are defined by:

$$U = c' \Sigma_{XX}^{-1/2} X = a' X, \qquad V = d' \Sigma_{YY}^{-1/2} Y = b' Y.$$
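The solution above translates directly into a few lines of linear algebra. Below is a minimal NumPy/SciPy sketch (the helper name cca_from_data is our own, and rows are assumed to be observations), a sanity check on the derivation rather than a production implementation:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def cca_from_data(X, Y):
    """Minimal CCA from data matrices X (p x n) and Y (p x m); rows are observations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    p = X.shape[0]
    Sxx = X.T @ X / (p - 1)
    Syy = Y.T @ Y / (p - 1)
    Sxy = X.T @ Y / (p - 1)
    # Change of basis: the SVD of Sxx^{-1/2} Sxy Syy^{-1/2} yields c, rho, d.
    Wx = fractional_matrix_power(Sxx, -0.5)
    Wy = fractional_matrix_power(Syy, -0.5)
    C, rho, Dt = np.linalg.svd(Wx @ Sxy @ Wy)
    # Reverse the change of basis: a = Sxx^{-1/2} c, b = Syy^{-1/2} d.
    A = Wx @ C
    B = Wy @ Dt.T
    return rho, A, B
```

The singular values rho are the canonical correlations, and the columns of A and B hold the weight vectors $a$ and $b$ for successive canonical pairs.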

Implementation

CCA can be computed using a singular value decomposition on a correlation matrix.[5] It is available as a function in:[6]

  • MATLAB as canoncorr (also in Octave)
  • R as the standard function cancor and several other packages; the CCP package provides statistical hypothesis testing for canonical correlation analysis
  • SAS as proc cancorr
  • Python in the library scikit-learn (cross decomposition module; see the usage sketch after this list) and in statsmodels as CanCorr
  • SPSS as the macro CanCorr shipped with the main software
  • Julia in the MultivariateStats.jl package
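As an illustration, here is a short scikit-learn sketch on synthetic data (the data and the choice of two components are ours, for demonstration only):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                        # 500 observations, 5 X-variables
Y = X @ rng.normal(size=(5, 3)) + 0.5 * rng.normal(size=(500, 3))

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)                   # canonical variates U and V
# Empirical correlation of each fitted canonical pair:
print([np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1] for i in range(2)])
```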

CCA computation using singular value decomposition on a correlation matrix is related to the cosine of the angles between flats. The cosine function is ill-conditioned for small angles, leading to very inaccurate computation of highly correlated principal vectors in finite-precision computer arithmetic. To address this, alternative algorithms[7] are available in:

  • SciPy as the linear-algebra function subspace_angles (see the sketch after this list)
  • MATLAB as the FileExchange function subspacea
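For example, a short SciPy sketch (random matrices for illustration; the columns of each matrix span one of the two subspaces):

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(1)
A = rng.normal(size=(100, 4))       # columns span the first subspace
B = rng.normal(size=(100, 3))       # columns span the second subspace

angles = subspace_angles(A, B)      # principal angles, largest first
print(np.cos(angles))               # cosines of the principal angles
```

For centered data matrices, these cosines coincide with the canonical correlations (see the connection to principal angles below).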

Hypothesis testing

Each row can be tested for significance with the following method. Since the correlations are sorted in decreasing order, saying that row $i$ is zero implies that all further correlations are also zero. Suppose we have $p$ independent observations in a sample and $\widehat{\rho}_i$ is the estimated correlation for $i = 1, \dots, \min\{m, n\}$. For the $i$th row, the test statistic is:

$$\chi^2 = -\left(p - 1 - \tfrac{1}{2}(m + n + 1)\right) \ln \prod_{j=i}^{\min\{m, n\}} \left(1 - \widehat{\rho}_j^{\,2}\right),$$

which is asymptotically distributed as a chi-squared with $(m - i + 1)(n - i + 1)$ degrees of freedom for large $p$.[8] Since all the correlations from $\min\{m, n\}$ to $p$ are logically zero (and estimated that way as well), the product over the terms after this point is irrelevant.

Note that in the small-sample limit with $p < n + m$, the top $m + n - p$ correlations are guaranteed to be identically 1, and hence the test is meaningless.[9]
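A minimal sketch of this test in Python (the helper name bartlett_cca_test is ours; it assumes the estimated correlations are sorted in decreasing order):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_cca_test(rho_hat, p, n, m):
    """Chi-squared tests that canonical correlations i, i+1, ... are all zero.

    rho_hat: estimated canonical correlations, sorted in decreasing order
    p: number of independent observations; n, m: dimensions of X and Y.
    Returns one (statistic, dof, p_value) triple per row i (0-based).
    """
    k = min(m, n)
    results = []
    for i in range(k):
        # log of the product = sum of the logs, for numerical stability
        stat = -(p - 1 - 0.5 * (m + n + 1)) * np.sum(np.log(1.0 - rho_hat[i:k] ** 2))
        dof = (m - i) * (n - i)   # equals (m - i + 1)(n - i + 1) with 1-based i
        results.append((stat, dof, chi2.sf(stat, dof)))
    return results
```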

Practical uses

A typical use for canonical correlation in the experimental context is to take two sets of variables and see what is common among the two sets.[10] For example, in psychological testing, one could take two well established multidimensional personality tests such as the Minnesota Multiphasic Personality Inventory (MMPI-2) and the NEO. By seeing how the MMPI-2 factors relate to the NEO factors, one could gain insight into what dimensions were common between the tests and how much variance was shared. For example, one might find that an extraversion or neuroticism dimension accounted for a substantial amount of shared variance between the two tests.

One can also use canonical-correlation analysis to produce a model equation which relates two sets of variables, for example a set of performance measures and a set of explanatory variables, or a set of outputs and set of inputs. Constraint restrictions can be imposed on such a model to ensure it reflects theoretical requirements or intuitively obvious conditions. This type of model is known as a maximum correlation model.[11]

The results of canonical correlation are usually visualized through bar plots of the coefficients of the two sets of variables for the pairs of canonical variates showing significant correlation. Some authors suggest that they are best visualized by plotting them as heliographs, a circular format with ray-like bars, with each half representing the two sets of variables.[12]
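A minimal matplotlib sketch of such a bar-plot display (the coefficient values here are hypothetical, purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt

a = np.array([0.8, -0.3, 0.5])   # hypothetical X-set coefficients, first pair
b = np.array([0.6, 0.7])         # hypothetical Y-set coefficients, first pair

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(7, 3))
ax1.bar(["x1", "x2", "x3"], a)
ax1.set_title("X-set coefficients")
ax2.bar(["y1", "y2"], b)
ax2.set_title("Y-set coefficients")
plt.tight_layout()
plt.show()
```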

Examples

Let $X = x_1$ with zero expected value, i.e., $\operatorname{E}(X) = 0$. If $Y = X$, i.e., $X$ and $Y$ are perfectly correlated, then, e.g., $a = 1$ and $b = 1$, so that the first (and only, in this example) pair of canonical variables is $U = X$ and $V = Y = X$. If $Y = -X$, i.e., $X$ and $Y$ are perfectly anticorrelated, then, e.g., $a = 1$ and $b = -1$, so that the first (and only, in this example) pair of canonical variables is $U = X$ and $V = -Y = X$. We notice that in both cases $U = V$, which illustrates that canonical-correlation analysis treats correlated and anticorrelated variables similarly.
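A quick numerical check of the anticorrelated case (synthetic data for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1000)
y = -x                           # perfectly anticorrelated

# With one variable per set, a and b are scalars; taking a = 1, b = -1:
U, V = 1.0 * x, -1.0 * y         # V = -y = x, so U equals V exactly
print(np.corrcoef(U, V)[0, 1])   # 1.0: the canonical correlation is 1
```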

Connection to principal angles

Assuming that $X = (x_1, \dots, x_n)'$ and $Y = (y_1, \dots, y_m)'$ have zero expected values, i.e., $\operatorname{E}(X) = \operatorname{E}(Y) = 0$, their covariance matrices $\Sigma_{XX} = \operatorname{Cov}(X, X) = \operatorname{E}[XX']$ and $\Sigma_{YY} = \operatorname{Cov}(Y, Y) = \operatorname{E}[YY']$ can be viewed as Gram matrices in an inner product for the entries of $X$ and $Y$, respectively. In this interpretation, the random variables (the entries $x_i$ of $X$ and $y_j$ of $Y$) are treated as elements of a vector space with an inner product given by the covariance $\operatorname{cov}(x_i, y_j)$; see Covariance#Relationship to inner products.

The definition of the canonical variables $U$ and $V$ is then equivalent to the definition of principal vectors for the pair of subspaces spanned by the entries of $X$ and $Y$ with respect to this inner product. The canonical correlations $\operatorname{corr}(U, V)$ are equal to the cosines of the principal angles.
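This equivalence also underlies a numerically stable way to compute canonical correlations from data: orthonormalize each centered data matrix with a QR decomposition and take the singular values of the product of the two orthonormal bases. A minimal sketch (rows are observations; the helper name is ours):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations as cosines of principal angles (QR + SVD)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)   # orthonormal basis for the column space of X
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)   # cosines of the principal angles, largest first
```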

Whitening and probabilistic canonical correlation analysis

CCA can also be viewed as a special whitening transformation where the random vectors $X$ and $Y$ are simultaneously transformed in such a way that the cross-correlation between the whitened vectors $X^{\mathrm{CCA}}$ and $Y^{\mathrm{CCA}}$ is diagonal.[13] The canonical correlations are then interpreted as regression coefficients linking $X^{\mathrm{CCA}}$ and $Y^{\mathrm{CCA}}$, and may also be negative. The regression view of CCA also provides a way to construct a latent-variable probabilistic generative model for CCA, with uncorrelated hidden variables representing shared and non-shared variability.
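A short numerical check of the diagonal cross-correlation property (synthetic data; the whitening matrices are computed explicitly for clarity):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

rng = np.random.default_rng(3)
X = rng.normal(size=(2000, 4))
Y = X @ rng.normal(size=(4, 3)) + rng.normal(size=(2000, 3))
X, Y = X - X.mean(axis=0), Y - Y.mean(axis=0)

Sxx, Syy, Sxy = X.T @ X / 1999, Y.T @ Y / 1999, X.T @ Y / 1999
Wx = fractional_matrix_power(Sxx, -0.5)      # whitens X
Wy = fractional_matrix_power(Syy, -0.5)      # whitens Y
U, rho, Vt = np.linalg.svd(Wx @ Sxy @ Wy)

X_cca = X @ Wx @ U          # simultaneously whitened and rotated coordinates
Y_cca = Y @ Wy @ Vt.T
C = X_cca.T @ Y_cca / 1999  # cross-covariance of the whitened vectors
print(np.round(C, 2))       # approximately diag(rho): diagonal as claimed
```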

References

  1. Härdle, Wolfgang; Simar, Léopold (2007). 'Canonical Correlation Analysis'. Applied Multivariate Statistical Analysis. pp. 321–330. CiteSeerX 10.1.1.324.403. doi:10.1007/978-3-540-72244-1_14. ISBN 978-3-540-72243-4.
  2. Knapp, T. R. (1978). 'Canonical correlation analysis: A general parametric significance-testing system'. Psychological Bulletin. 85 (2): 410–416. doi:10.1037/0033-2909.85.2.410.
  3. Hotelling, H. (1936). 'Relations Between Two Sets of Variates'. Biometrika. 28 (3–4): 321–377. doi:10.1093/biomet/28.3-4.321. JSTOR 2333955.
  4. Jordan, C. (1875). 'Essai sur la géométrie à $n$ dimensions'. Bull. Soc. Math. France. 3: 103.
  5. Hsu, D.; Kakade, S. M.; Zhang, T. (2012). 'A spectral algorithm for learning Hidden Markov Models' (PDF). Journal of Computer and System Sciences. 78 (5): 1460. arXiv:0811.4413. doi:10.1016/j.jcss.2011.12.025.
  6. Huang, S. Y.; Lee, M. H.; Hsiao, C. K. (2009). 'Nonlinear measures of association with kernel canonical correlation analysis and applications' (PDF). Journal of Statistical Planning and Inference. 139 (7): 2162. doi:10.1016/j.jspi.2008.10.011.
  7. Knyazev, A. V.; Argentati, M. E. (2002). 'Principal Angles between Subspaces in an A-Based Scalar Product: Algorithms and Perturbation Estimates'. SIAM Journal on Scientific Computing. 23 (6): 2009–2041. CiteSeerX 10.1.1.73.2914. doi:10.1137/S1064827500377332.
  8. Mardia, Kanti V.; Kent, J. T.; Bibby, J. M. (1979). Multivariate Analysis. Academic Press.
  9. Song, Yang; Schreier, Peter J.; Ramírez, David; Hasija, Tanuj. 'Canonical correlation analysis of high-dimensional data with very small sample support'. https://arxiv.org/pdf/1604.02047.pdf
  10. Sieranoja, S.; Sahidullah, Md; Kinnunen, T.; Komulainen, J.; Hadid, A. (July 2018). 'Audiovisual Synchrony Detection with Optimized Audio Features' (PDF). IEEE 3rd Int. Conference on Signal and Image Processing (ICSIP 2018).
  11. Tofallis, C. (1999). 'Model Building with Multiple Dependent Variables and Constraints'. Journal of the Royal Statistical Society, Series D. 48 (3): 371–378. arXiv:1109.0725. doi:10.1111/1467-9884.00195.
  12. Degani, A.; Shafto, M.; Olson, L. (2006). 'Canonical Correlation Analysis: Use of Composite Heliographs for Representing Multiple Patterns' (PDF). Diagrammatic Representation and Inference. Lecture Notes in Computer Science. 4045. p. 93. CiteSeerX 10.1.1.538.5217. doi:10.1007/11783183_11. ISBN 978-3-540-35623-3.
  13. Jendoubi, T.; Strimmer, K. (2018). 'A whitening approach to probabilistic canonical correlation analysis for omics data integration'. BMC Bioinformatics. 20 (1): 15. arXiv:1802.03490. doi:10.1186/s12859-018-2572-9. PMC 6327589. PMID 30626338.

External links

  • Discriminant Correlation Analysis (DCA)[1] (MATLAB)
  • Hardoon, D. R.; Szedmak, S.; Shawe-Taylor, J. (2004). 'Canonical Correlation Analysis: An Overview with Application to Learning Methods'. Neural Computation. 16 (12): 2639–2664. CiteSeerX 10.1.1.14.6452. doi:10.1162/0899766042321814. PMID 15516276.
  • A note on the ordinal canonical-correlation analysis of two sets of ranking scores (also provides a FORTRAN program), in Journal of Quantitative Economics 7(2), 2009, pp. 173–199.
  • Representation-Constrained Canonical Correlation Analysis: A Hybridization of Canonical Correlation and Principal Component Analyses (also provides a FORTRAN program), in Journal of Applied Economic Sciences 4(1), 2009, pp. 115–124.
  1. Haghighat, Mohammad; Abdel-Mottaleb, Mohamed; Alhalabi, Wadee (2016). 'Discriminant Correlation Analysis: Real-Time Feature Level Fusion for Multimodal Biometric Recognition'. IEEE Transactions on Information Forensics and Security. 11 (9): 1984–1996. doi:10.1109/TIFS.2016.2569061.

Retrieved from ‘https://en.wikipedia.org/w/index.php?title=Canonical_correlation&oldid=906579811'

