In mathematics, the Rayleigh quotient[1] (/ˈreɪ.li/) for a given complex Hermitian matrix $M$ and nonzero vector $x$ is defined as:[2][3]

$$R(M,x) = \frac{x^* M x}{x^* x}.$$

For real matrices and vectors, the condition of being Hermitian reduces to that of being symmetric, and the conjugate transpose $x^*$ to the usual transpose $x'$. Note that $R(M, cx) = R(M, x)$ for any non-zero scalar $c$. Recall that a Hermitian (or real symmetric) matrix is diagonalizable with only real eigenvalues. It can be shown that, for a given matrix, the Rayleigh quotient reaches its minimum value $\lambda_{\min}$ (the smallest eigenvalue of $M$) when $x$ is $v_{\min}$ (the corresponding eigenvector).[4] Similarly, $R(M,x) \leq \lambda_{\max}$ and $R(M, v_{\max}) = \lambda_{\max}$.
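As a quick numerical illustration of the definition and of these two properties (scale invariance and the eigenvalue bounds), here is a minimal sketch in Python, assuming NumPy; the matrix and vector are arbitrary examples:

```python
import numpy as np

def rayleigh_quotient(M, x):
    """R(M, x) = x* M x / x* x for a Hermitian matrix M and nonzero x."""
    x = np.asarray(x, dtype=complex)
    return (x.conj() @ M @ x).real / (x.conj() @ x).real

# A small Hermitian matrix: equal to its own conjugate transpose.
M = np.array([[2.0, 1.0 + 1.0j],
              [1.0 - 1.0j, 3.0]])
x = np.array([1.0, 2.0 - 1.0j])

r = rayleigh_quotient(M, x)

# Invariance under scaling: R(M, cx) = R(M, x) for any nonzero scalar c.
assert np.isclose(rayleigh_quotient(M, (3.0 - 2.0j) * x), r)

# The quotient always lies between the smallest and largest eigenvalues.
lams = np.linalg.eigvalsh(M)
assert lams[0] - 1e-12 <= r <= lams[-1] + 1e-12
```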
The Rayleigh quotient is used in the min-max theorem to get exact values of all eigenvalues. It is also used in eigenvalue algorithms (such as Rayleigh quotient iteration) to obtain an eigenvalue approximation from an eigenvector approximation.
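To sketch how the quotient feeds back into eigenvalue algorithms, here is a minimal Rayleigh quotient iteration in Python (NumPy assumed; the stopping rule is simplified for illustration):

```python
import numpy as np

def rayleigh_quotient_iteration(M, x0, iters=10):
    """Refine an approximate eigenvector x0 of a Hermitian matrix M.

    Each step solves (M - mu*I) y = x with mu the current Rayleigh
    quotient; convergence is locally cubic for Hermitian matrices.
    """
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        mu = x.conj() @ M @ x
        try:
            y = np.linalg.solve(M - mu * np.eye(len(x)), x)
        except np.linalg.LinAlgError:
            break  # mu is (numerically) an exact eigenvalue
        x = y / np.linalg.norm(y)
    return (x.conj() @ M @ x).real, x

# Starting near the eigenvector for eigenvalue 9, the iteration locks on.
M = np.diag([1.0, 4.0, 9.0])
lam, v = rayleigh_quotient_iteration(M, np.array([0.1, 0.2, 1.0]))
assert np.isclose(lam, 9.0)
```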
The range of the Rayleigh quotient (for any matrix, not necessarily Hermitian) is called the numerical range and contains its spectrum. When the matrix is Hermitian, the numerical radius is equal to the spectral norm. Still in functional analysis, $\lambda_{\max}$ is known as the spectral radius. In the context of $C^*$-algebras or algebraic quantum mechanics, the function that to $M$ associates the Rayleigh–Ritz quotient $R(M,x)$ for a fixed $x$ and $M$ varying through the algebra would be referred to as a vector state of the algebra.

In quantum mechanics, the Rayleigh quotient gives the expectation value of the observable corresponding to the operator $M$ for a system whose state is given by $x$.
If we fix the complex matrix $M$, then the resulting Rayleigh quotient map (considered as a function of $x$) completely determines $M$ via the polarization identity; indeed, this remains true even if we allow $M$ to be non-Hermitian. However, if we restrict the field of scalars to the real numbers, then the Rayleigh quotient only determines the symmetric part of $M$.
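The real-scalar case is easy to check numerically: the quadratic form $x' M x$ kills the antisymmetric part of $M$, so $M$ and its symmetric part have identical Rayleigh quotients on real vectors. A quick sketch (NumPy assumed; the matrices are arbitrary random examples):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))          # generic non-symmetric real matrix
S = (M + M.T) / 2                        # its symmetric part

# Over the reals, x' M x = x' S x for every x, so the Rayleigh quotients
# of M and of its symmetric part coincide on all real vectors.
for _ in range(5):
    x = rng.standard_normal(4)
    rM = (x @ M @ x) / (x @ x)
    rS = (x @ S @ x) / (x @ x)
    assert np.isclose(rM, rS)
```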
Bounds for Hermitian M
As stated in the introduction, for any vector x, one has $R(M,x) \in [\lambda_{\min}, \lambda_{\max}]$, where $\lambda_{\min}, \lambda_{\max}$ are respectively the smallest and largest eigenvalues of $M$. This is immediate after observing that the Rayleigh quotient is a weighted average of eigenvalues of M:

$$R(M,x) = \frac{x^* M x}{x^* x} = \frac{\sum_{i=1}^{n} \lambda_i y_i^2}{\sum_{i=1}^{n} y_i^2}$$

where $(\lambda_i, v_i)$ is the $i$-th eigenpair after orthonormalization and $y_i = v_i^* x$ is the $i$-th coordinate of x in the eigenbasis. It is then easy to verify that the bounds are attained at the corresponding eigenvectors $v_{\min}, v_{\max}$.
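The weighted-average identity above can be verified directly: expand $x$ in the orthonormal eigenbasis and compare the two expressions. A minimal sketch, assuming NumPy (the matrix is a random symmetric example):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
M = (A + A.T) / 2                        # a real symmetric matrix

lams, V = np.linalg.eigh(M)              # orthonormal eigenvectors as columns
x = rng.standard_normal(5)
y = V.T @ x                              # coordinates of x in the eigenbasis

# R(M, x) equals the eigenvalue average weighted by the squared coordinates.
weighted = (lams @ y**2) / (y @ y)
direct = (x @ M @ x) / (x @ x)
assert np.isclose(weighted, direct)

# Hence the quotient is bounded by the extreme eigenvalues.
assert lams[0] <= direct <= lams[-1]
```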
The fact that the quotient is a weighted average of the eigenvalues can be used to identify the second, the third, ..., largest eigenvalues. Let $\lambda_{\max} = \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n = \lambda_{\min}$ be the eigenvalues in decreasing order. If $\lambda_1 > \lambda_2$ and $x$ is constrained to be orthogonal to $v_1$, in which case $y_1 = v_1^* x = 0$, then $R(M,x)$ has maximum value $\lambda_2$, which is achieved when $x = v_2$.
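This deflation idea can be sketched numerically: project trial vectors onto the orthogonal complement of $v_1$ and observe that the constrained quotient never exceeds $\lambda_2$. A toy example in Python (NumPy assumed; random search stands in for a proper constrained optimizer):

```python
import numpy as np

M = np.diag([5.0, 3.0, 1.0])             # eigenvalues 5 > 3 > 1
v1 = np.array([1.0, 0.0, 0.0])           # top eigenvector

# Maximize R(M, x) over x orthogonal to v1 by projecting random trial
# vectors onto the orthogonal complement of v1.
rng = np.random.default_rng(2)
best = -np.inf
for _ in range(2000):
    x = rng.standard_normal(3)
    x -= (x @ v1) * v1                   # enforce x orthogonal to v1
    best = max(best, (x @ M @ x) / (x @ x))

# The constrained maximum is the second-largest eigenvalue, 3.
assert best <= 3.0 + 1e-12
assert best > 2.9                        # random search gets close to it
```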
Special case of covariance matrices
An empirical covariance matrix $M$ can be represented as the product $A' A$ of the data matrix $A$ pre-multiplied by its transpose $A'$. Being a positive semi-definite matrix, $M$ has non-negative eigenvalues, and orthogonal (or orthogonalisable) eigenvectors, which can be demonstrated as follows.

Firstly, that the eigenvalues $\lambda_i$ are non-negative:

$$\begin{aligned}
&Mv_i = A'Av_i = \lambda_i v_i \\
\Rightarrow{} & v_i'A'Av_i = v_i'\lambda_i v_i \\
\Rightarrow{} & \left\|Av_i\right\|^2 = \lambda_i \left\|v_i\right\|^2 \\
\Rightarrow{} & \lambda_i = \frac{\left\|Av_i\right\|^2}{\left\|v_i\right\|^2} \geq 0.
\end{aligned}$$

Secondly, that the eigenvectors $v_i$ are orthogonal to one another:

$$\begin{aligned}
&Mv_i = \lambda_i v_i \\
\Rightarrow{} & v_j'Mv_i = v_j'\lambda_i v_i \\
\Rightarrow{} & \left(Mv_j\right)'v_i = \lambda_j v_j'v_i \\
\Rightarrow{} & \lambda_j v_j'v_i = \lambda_i v_j'v_i \\
\Rightarrow{} & \left(\lambda_j - \lambda_i\right) v_j'v_i = 0 \\
\Rightarrow{} & v_j'v_i = 0
\end{aligned}$$

if the eigenvalues are different – in the case of multiplicity, the basis can be orthogonalized.
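Both properties are easy to confirm numerically for a concrete data matrix. A minimal sketch in Python (NumPy assumed; normalization of the covariance by the sample count is omitted since it does not affect signs or orthogonality):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((10, 4))         # data matrix (observations x variables)
M = A.T @ A                              # Gram / unscaled covariance matrix

lams, V = np.linalg.eigh(M)

# Non-negative eigenvalues: each lambda_i = ||A v_i||^2 / ||v_i||^2 >= 0
# (here ||v_i|| = 1, since eigh returns orthonormal eigenvectors).
assert np.all(lams >= -1e-10)
for i in range(4):
    assert np.isclose(lams[i], np.linalg.norm(A @ V[:, i])**2)

# Orthogonal eigenvectors: V' V = I.
assert np.allclose(V.T @ V, np.eye(4))
```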
To now establish that the Rayleigh quotient is maximized by the eigenvector with the largest eigenvalue, consider decomposing an arbitrary vector $x$ on the basis of the eigenvectors $v_i$:

$$x = \sum_{i=1}^{n} \alpha_i v_i,$$

where

$$\alpha_i = \frac{x'v_i}{v_i'v_i} = \frac{\langle x, v_i\rangle}{\left\|v_i\right\|^2}$$

is the coordinate of $x$ orthogonally projected onto $v_i$. Therefore, we have:

$$\begin{aligned}
R(M,x) &= \frac{x'A'Ax}{x'x} \\
&= \frac{\Bigl(\sum_{j=1}^{n} \alpha_j v_j\Bigr)'\left(A'A\right)\Bigl(\sum_{i=1}^{n} \alpha_i v_i\Bigr)}{\Bigl(\sum_{j=1}^{n} \alpha_j v_j\Bigr)'\Bigl(\sum_{i=1}^{n} \alpha_i v_i\Bigr)} \\
&= \frac{\Bigl(\sum_{j=1}^{n} \alpha_j v_j\Bigr)'\Bigl(\sum_{i=1}^{n} \alpha_i (A'A) v_i\Bigr)}{\Bigl(\sum_{i=1}^{n} \alpha_i^2 v_i'v_i\Bigr)} \\
&= \frac{\Bigl(\sum_{j=1}^{n} \alpha_j v_j\Bigr)'\Bigl(\sum_{i=1}^{n} \alpha_i \lambda_i v_i\Bigr)}{\Bigl(\sum_{i=1}^{n} \alpha_i^2 \|v_i\|^2\Bigr)}
\end{aligned}$$

which, by orthonormality of the eigenvectors, becomes:

$$\begin{aligned}
R(M,x) &= \frac{\sum_{i=1}^{n} \alpha_i^2 \lambda_i}{\sum_{i=1}^{n} \alpha_i^2} \\
&= \sum_{i=1}^{n} \lambda_i \frac{(x'v_i)^2}{(x'x)(v_i'v_i)^2} \\
&= \sum_{i=1}^{n} \lambda_i \frac{(x'v_i)^2}{x'x}
\end{aligned}$$

The last representation establishes that the Rayleigh quotient is the sum of the squared cosines of the angles formed by the vector $x$ and each eigenvector $v_i$, weighted by the corresponding eigenvalues.
If a vector $x$ maximizes $R(M,x)$, then any non-zero scalar multiple $cx$ also maximizes it, so the problem can be reduced to the Lagrange problem of maximizing $\sum_{i=1}^{n} \alpha_i^2 \lambda_i$ under the constraint that $\sum_{i=1}^{n} \alpha_i^2 = 1$.

Define $\beta_i = \alpha_i^2$. This then becomes a linear program, which always attains its maximum at one of the corners of the domain. A maximum point will have $\alpha_1 = \pm 1$ and $\alpha_i = 0$ for all $i > 1$ (when the eigenvalues are ordered by decreasing magnitude).

Thus, the Rayleigh quotient is maximized by the eigenvector with the largest eigenvalue.
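The conclusion can be spot-checked numerically: the top eigenvector attains the top eigenvalue, and no other direction does better. A minimal sketch in Python (NumPy assumed; random directions stand in for an exhaustive search):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 8))
M = A.T @ A                              # positive semi-definite, like A'A above

lams, V = np.linalg.eigh(M)
v_max = V[:, -1]                         # eigenvector of the largest eigenvalue

# R at the top eigenvector (unit norm) equals the largest eigenvalue.
r_max = v_max @ M @ v_max
assert np.isclose(r_max, lams[-1])

# No randomly sampled direction exceeds it.
for _ in range(1000):
    x = rng.standard_normal(8)
    assert (x @ M @ x) / (x @ x) <= r_max + 1e-10
```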
Formulation using Lagrange multipliers
Alternatively, this result can be arrived at by the method of Lagrange multipliers. The first part is to show that the quotient is constant under scaling $x \to cx$, where $c$ is a scalar:

$$R(M, cx) = \frac{(cx)^* M (cx)}{(cx)^* (cx)} = \frac{c^* c}{c^* c} \cdot \frac{x^* M x}{x^* x} = R(M,x).$$

Because of this invariance, it is sufficient to study the special case $\|x\|^2 = x^{\mathsf{T}} x = 1$. The problem is then to find the critical points of the function

$$R(M,x) = x^{\mathsf{T}} M x,$$

subject to the constraint

$$\|x\|^2 = x^{\mathsf{T}} x = 1.$$

In other words, it is to find the critical points of

$$\mathcal{L}(x) = x^{\mathsf{T}} M x - \lambda\left(x^{\mathsf{T}} x - 1\right),$$

where $\lambda$ is a Lagrange multiplier. The stationary points of $\mathcal{L}(x)$ occur at

$$\begin{aligned}
&\frac{d\mathcal{L}(x)}{dx} = 0 \\
\Rightarrow{} & 2x^{\mathsf{T}} M - 2\lambda x^{\mathsf{T}} = 0 \\
\Rightarrow{} & 2Mx - 2\lambda x = 0 \text{ (taking the transpose of both sides and noting that } M \text{ is Hermitian)} \\
\Rightarrow{} & Mx = \lambda x
\end{aligned}$$

and

$$\therefore R(M,x) = \frac{x^{\mathsf{T}} M x}{x^{\mathsf{T}} x} = \lambda \frac{x^{\mathsf{T}} x}{x^{\mathsf{T}} x} = \lambda.$$
Therefore, the eigenvectors $v_1, \ldots, v_n$ of $M$ are the critical points of the Rayleigh quotient and their corresponding eigenvalues $\lambda_1, \ldots, \lambda_n$ are the stationary values of $R$. This property is the basis for principal components analysis and canonical correlation.
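The PCA connection can be made concrete: the first principal component is the direction maximizing the Rayleigh quotient of the sample covariance matrix, and the variance captured along it is the top eigenvalue. A minimal sketch (NumPy assumed; the two-variable data set is a made-up example):

```python
import numpy as np

rng = np.random.default_rng(5)
base = rng.standard_normal(200)
# Two correlated variables, centered column-wise.
data = np.column_stack([base, 0.5 * base + 0.1 * rng.standard_normal(200)])
data -= data.mean(axis=0)

cov = data.T @ data / (len(data) - 1)    # sample covariance matrix
lams, V = np.linalg.eigh(cov)
pc1 = V[:, -1]                           # top eigenvector = first PC

# Variance of the data projected onto pc1 equals the top eigenvalue,
# i.e. the maximal value of the Rayleigh quotient of cov.
proj_var = np.var(data @ pc1, ddof=1)
assert np.isclose(proj_var, lams[-1])
```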
Use in Sturm–Liouville theory
Sturm–Liouville theory concerns the action of the linear operator

$$L(y) = \frac{1}{w(x)}\left(-\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right] + q(x)y\right)$$

on the inner product space defined by

$$\langle y_1, y_2\rangle = \int_a^b w(x)y_1(x)y_2(x)\,dx$$

of functions satisfying some specified boundary conditions at $a$ and $b$. In this case the Rayleigh quotient is

$$\frac{\langle y, Ly\rangle}{\langle y, y\rangle} = \frac{\int_a^b y(x)\left(-\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right] + q(x)y(x)\right)dx}{\int_a^b w(x)y(x)^2\,dx}.$$

This is sometimes presented in an equivalent form, obtained by separating the integral in the numerator and using integration by parts:

$$\begin{aligned}
\frac{\langle y, Ly\rangle}{\langle y, y\rangle} &= \frac{\left\{\int_a^b y(x)\left(-\frac{d}{dx}\left[p(x)y'(x)\right]\right)dx\right\} + \left\{\int_a^b q(x)y(x)^2\,dx\right\}}{\int_a^b w(x)y(x)^2\,dx} \\
&= \frac{\left\{\left.-y(x)\left[p(x)y'(x)\right]\right|_a^b\right\} + \left\{\int_a^b y'(x)\left[p(x)y'(x)\right]dx\right\} + \left\{\int_a^b q(x)y(x)^2\,dx\right\}}{\int_a^b w(x)y(x)^2\,dx} \\
&= \frac{\left\{\left.-p(x)y(x)y'(x)\right|_a^b\right\} + \left\{\int_a^b \left[p(x)y'(x)^2 + q(x)y(x)^2\right]dx\right\}}{\int_a^b w(x)y(x)^2\,dx}.
\end{aligned}$$
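As a numerical sketch of the integration-by-parts form, consider the simplest case $p = w = 1$, $q = 0$: the problem $-y'' = \lambda y$ on $[0, \pi]$ with $y(0) = y(\pi) = 0$, whose lowest eigenvalue is 1 with eigenfunction $\sin x$. With the boundary term zero, the quotient reduces to $\int y'^2\,dx / \int y^2\,dx$ (NumPy assumed; the grid resolution is arbitrary):

```python
import numpy as np

x = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]

def integrate(f):
    """Composite trapezoidal rule on the grid x."""
    return (0.5 * (f[0] + f[-1]) + f[1:-1].sum()) * dx

def R(y, dy):
    """Rayleigh quotient in the integrated-by-parts form (p = w = 1, q = 0)."""
    return integrate(dy**2) / integrate(y**2)

# The exact eigenfunction sin(x) attains the lowest eigenvalue, 1.
assert abs(R(np.sin(x), np.cos(x)) - 1.0) < 1e-4

# Any admissible trial function gives an upper bound: the parabola
# y = x*(pi - x) yields 10/pi^2, slightly above 1.
assert R(x * (np.pi - x), np.pi - 2 * x) > 1.0
```

This is the basis of Rayleigh–Ritz estimates: trial functions satisfying the boundary conditions bound the lowest eigenvalue from above.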
Generalizations
- For a given pair (A, B) of matrices, and a given non-zero vector x, the generalized Rayleigh quotient is defined as:

$$R(A,B;x) := \frac{x^* A x}{x^* B x}.$$

  The generalized Rayleigh quotient can be reduced to the Rayleigh quotient $R(D, C^* x)$ through the transformation $D = C^{-1} A (C^*)^{-1}$, where $CC^*$ is the Cholesky decomposition of the Hermitian positive-definite matrix B.

- For a given pair (x, y) of non-zero vectors, and a given Hermitian matrix H, the generalized Rayleigh quotient can be defined as:

$$R(H; x, y) := \frac{y^* H x}{\sqrt{y^* y \cdot x^* x}}$$

  which coincides with R(H, x) when x = y. In quantum mechanics, this quantity is called a "matrix element" or sometimes a "transition amplitude".
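The Cholesky reduction in the first generalization is easy to verify numerically: with $B = CC^*$, $D = C^{-1}A(C^*)^{-1}$, and $y = C^* x$, one has $y^* D y / y^* y = x^* A x / x^* B x$. A real-valued sketch (NumPy assumed; the matrices are random examples):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
A0 = rng.standard_normal((n, n))
A = (A0 + A0.T) / 2                      # Hermitian (here: real symmetric)
B0 = rng.standard_normal((n, n))
B = B0 @ B0.T + n * np.eye(n)            # Hermitian positive-definite

x = rng.standard_normal(n)
gen = (x @ A @ x) / (x @ B @ x)          # R(A, B; x)

# Reduce via the Cholesky factor B = C C' (real case, so C* = C').
C = np.linalg.cholesky(B)
W = np.linalg.solve(C, A)                # C^{-1} A
D = np.linalg.solve(C, W.T).T            # C^{-1} A (C')^{-1}
y = C.T @ x                              # transformed vector y = C' x

# The ordinary Rayleigh quotient of D at y equals the generalized one.
assert np.isclose((y @ D @ y) / (y @ y), gen)
```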
References
- ^ Also known as the Rayleigh–Ritz ratio; named after Walther Ritz and Lord Rayleigh.
- ^ Horn, R. A.; Johnson, C. R. (1985). Matrix Analysis. Cambridge University Press. pp. 176–180. ISBN 0-521-30586-1.
- ^ Parlett, B. N. (1998). The Symmetric Eigenvalue Problem. Classics in Applied Mathematics. SIAM. ISBN 0-89871-402-8.
- ^ Costin, Rodica D. (2013). "Midterm notes" (PDF). Mathematics 5102 Linear Mathematics in Infinite Dimensions, lecture notes. The Ohio State University.
Further reading
- Shi Yu, Léon-Charles Tranchevent, Bart Moor, Yves Moreau, Kernel-based Data Fusion for Machine Learning: Methods and Applications in Bioinformatics and Text Mining, Ch. 2, Springer, 2011.