4.2: Properties of Eigenvalues and Eigenvectors


    Learning Objectives
    • T/F: \(A\) and \(A^{T}\) have the same eigenvectors.
    • T/F: \(A\) and \(A^{−1}\) have the same eigenvalues.
    • T/F: Marie Ennemond Camille Jordan was a guy.
    • T/F: Matrices with a trace of \(0\) are important, although we haven’t seen why.
    • T/F: A matrix \(A\) is invertible only if \(1\) is an eigenvalue of \(A\).

    In this section we’ll explore how the eigenvalues and eigenvectors of a matrix relate to other properties of that matrix. This section is essentially a hodgepodge of interesting facts about eigenvalues; the goal here is not to memorize various facts about matrix algebra, but to again be amazed at the many connections between mathematical concepts.

    We’ll begin our investigations with an example that will give a foundation for other discoveries.

    Example \(\PageIndex{1}\)

    Let \(A=\left[\begin{array}{ccc}{1}&{2}&{3}\\{0}&{4}&{5}\\{0}&{0}&{6}\end{array}\right].\) Find the eigenvalues of \(A\).

    Solution

    To find the eigenvalues, we compute \(\text{det}(A-\lambda I)\):

    \[\begin{align}\begin{aligned}\text{det}(A-\lambda I)&=\left|\begin{array}{ccc}{1-\lambda}&{2}&{3}\\{0}&{4-\lambda}&{5}\\{0}&{0}&{6-\lambda}\end{array}\right| \\ &=(1-\lambda)(4-\lambda)(6-\lambda)\end{aligned}\end{align} \nonumber \]

    Since our matrix is triangular, the determinant is easy to compute; it is just the product of the diagonal elements. Therefore, we found (and factored) our characteristic polynomial very easily, and we see that we have eigenvalues of \(\lambda = 1, 4\), and \(6\).

    This example demonstrates a wonderful fact for us: the eigenvalues of a triangular matrix are simply the entries on the diagonal. Finding the corresponding eigenvectors still takes some work, but finding the eigenvalues is easy.
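    This fact is easy to confirm numerically. Here is a quick check using Python with NumPy (an illustrative sketch, not part of the development above):

    ```python
    import numpy as np

    # The triangular matrix from Example 1.
    A = np.array([[1., 2., 3.],
                  [0., 4., 5.],
                  [0., 0., 6.]])

    # For a triangular matrix, the computed eigenvalues should
    # match the diagonal entries 1, 4, and 6.
    print(np.sort(np.linalg.eigvals(A)))  # [1. 4. 6.]
    print(np.diag(A))                     # [1. 4. 6.]
    ```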

    With that fact in the backs of our minds, let us proceed to the next example where we will come across some more interesting facts about eigenvalues and eigenvectors.

    Example \(\PageIndex{2}\)

    Let \(A=\left[\begin{array}{cc}{-3}&{15}\\{3}&{9}\end{array}\right]\) and let \(B=\left[\begin{array}{ccc}{-7}&{-2}&{10}\\{-3}&{2}&{3}\\{-6}&{-2}&{9}\end{array}\right]\) (as used in Examples 4.1.3 and 4.1.5 respectively). Find the following:

    1. eigenvalues and eigenvectors of \(A\) and \(B\)
    2. eigenvalues and eigenvectors of \(A^{-1}\) and \(B^{-1}\)
    3. eigenvalues and eigenvectors of \(A^{T}\) and \(B^{T}\)
    4. the trace of \(A\) and \(B\)
    5. the determinant of \(A\) and \(B\)

    Solution

    We’ll answer each in turn; a numerical check of all five parts appears after the list.

    1. We already know the answers to these, for we did this work in previous examples. Therefore we just list the answers.
      For \(A\), we have eigenvalues \(\lambda = -6\) and \(12\), with eigenvectors
      \[\vec{x}=x_{2}\left[\begin{array}{c}{-5}\\{1}\end{array}\right]\text{ and }x_{2}\left[\begin{array}{c}{1}\\{1}\end{array}\right],\text{ respectively.} \nonumber \]
      For \(B\), we have eigenvalues \(\lambda = -1,\ 2,\) and \(3\) with eigenvectors
      \[\vec{x}=x_{3}\left[\begin{array}{c}{3}\\{1}\\{2}\end{array}\right],\: x_{3}\left[\begin{array}{c}{2}\\{1}\\{2}\end{array}\right]\text{ and }x_{3}\left[\begin{array}{c}{1}\\{0}\\{1}\end{array}\right],\text{ respectively}. \nonumber \]
    2. We first compute the inverses of \(A\) and \(B\). They are:
      \[A^{-1}=\left[\begin{array}{cc}{-1/8}&{5/24}\\{1/24}&{1/24}\end{array}\right]\quad\text{and}\quad B^{-1}=\left[\begin{array}{ccc}{-4}&{1/3}&{13/3}\\{-3/2}&{1/2}&{3/2}\\{-3}&{1/3}&{10/3}\end{array}\right]. \nonumber \]
      Finding the eigenvalues and eigenvectors of these matrices is not terribly hard, but it is not “easy,” either. Therefore, we omit showing the intermediate steps and go right to the conclusions.
      For \(A^{-1}\), we have eigenvalues \(\lambda = -1/6\) and \(1/12\), with eigenvectors
      \[\vec{x}=x_{2}\left[\begin{array}{c}{-5}\\{1}\end{array}\right]\text{ and }x_{2}\left[\begin{array}{c}{1}\\{1}\end{array}\right],\text{ respectively.} \nonumber \]
      For \(B^{-1}\), we have eigenvalues \(\lambda = -1\), \(1/2\) and \(1/3\) with eigenvectors
      \[\vec{x}=x_{3}\left[\begin{array}{c}{3}\\{1}\\{2}\end{array}\right],\: x_{3}\left[\begin{array}{c}{2}\\{1}\\{2}\end{array}\right]\text{ and }x_{3}\left[\begin{array}{c}{1}\\{0}\\{1}\end{array}\right],\text{ respectively.} \nonumber \]
    3. Of course, computing the transpose of \(A\) and \(B\) is easy; computing their eigenvalues and eigenvectors takes more work. Again, we omit the intermediate steps.
      For \(A^{T}\), we have eigenvalues \(\lambda = -6\) and \(12\) with eigenvectors
      \[\vec{x}=x_{2}\left[\begin{array}{c}{-1}\\{1}\end{array}\right]\text{ and }x_{2}\left[\begin{array}{c}{1/5}\\{1}\end{array}\right],\text{ respectively.} \nonumber \]
      For \(B^{T}\), we have eigenvalues \(\lambda = -1,\ 2\) and \(3\) with eigenvectors
      \[\vec{x}=x_{3}\left[\begin{array}{c}{-1}\\{0}\\{1}\end{array}\right],\: x_{3}\left[\begin{array}{c}{-1}\\{1}\\{1}\end{array}\right]\text{ and }x_{3}\left[\begin{array}{c}{0}\\{-2}\\{1}\end{array}\right],\text{ respectively.} \nonumber \]
    4. The trace of \(A\) is \(6\); the trace of \(B\) is \(4\).
    5. The determinant of \(A\) is \(-72\); the determinant of \(B\) is \(-6\).
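
    All of these computations are tedious by hand. The following NumPy sketch (illustrative only; the matrices are the ones above) reproduces every number in this example:

    ```python
    import numpy as np

    A = np.array([[-3., 15.],
                  [ 3.,  9.]])
    B = np.array([[-7., -2., 10.],
                  [-3.,  2.,  3.],
                  [-6., -2.,  9.]])

    for M in (A, B):
        w = np.linalg.eigvals(M)
        print("eigenvalues of M:   ", np.sort(w).round(6))
        print("eigenvalues of M^-1:", np.sort(np.linalg.eigvals(np.linalg.inv(M))).round(6))
        print("eigenvalues of M^T: ", np.sort(np.linalg.eigvals(M.T)).round(6))
        print("trace:", np.trace(M), "  sum of eigenvalues:", w.sum().round(6))
        print("det:  ", np.linalg.det(M).round(6), "  product of eigenvalues:", w.prod().round(6))
    ```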

    Now that we have completed the “grunt work,” let’s analyze the results of the previous example. We are looking for any patterns or relationships that we can find.

    The eigenvalues and eigenvectors of \(A\) and \(A^{-1}\).

    In our example, we found that the eigenvalues of \(A\) are \(-6\) and \(12\); the eigenvalues of \(A^{-1}\) are \(-1/6\) and \(1/12\). Also, the eigenvalues of \(B\) are \(-1\), \(2\) and \(3\), whereas the eigenvalues of \(B^{-1}\) are \(-1\), \(1/2\) and \(1/3\). There is an obvious relationship here; it seems that if \(\lambda\) is an eigenvalue of \(A\), then \(1/\lambda\) will be an eigenvalue of \(A^{-1}\). We can also note that the corresponding eigenvectors matched, too.

    Why is this the case? Consider an invertible matrix \(A\) with eigenvalue \(\lambda\) and eigenvector \(\vec{x}\). Then, by definition, we know that \(A\vec{x}=\lambda\vec{x}\). Now multiply both sides by \(A^{-1}\):

    \[\begin{align}\begin{aligned} A\vec{x}&=\lambda\vec{x} \\ A^{-1}A\vec{x}&=A^{-1}\lambda\vec{x} \\ \vec{x}&=\lambda A^{-1}\vec{x} \\ \frac{1}{\lambda}\vec{x}&=A^{-1}\vec{x}\end{aligned}\end{align} \nonumber \]

    We have just shown that \(A^{-1}\vec{x}=\frac{1}{\lambda}\vec{x}\); this, by definition, shows that \(\vec{x}\) is an eigenvector of \(A^{-1}\) with eigenvalue \(\frac{1}{\lambda}\). This explains the result we saw above.
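
    We can see this in action with the matrix \(A\) from Example \(\PageIndex{2}\). A short NumPy check (an illustration, using the eigenvector \((-5,1)\) for \(\lambda=-6\)):

    ```python
    import numpy as np

    A = np.array([[-3., 15.],
                  [ 3.,  9.]])
    x = np.array([-5., 1.])      # an eigenvector of A for lambda = -6

    # A^{-1} x should equal (1/lambda) x = (-1/6) x.
    print(np.linalg.inv(A) @ x)  # [ 0.833333... -0.166666...]
    print((-1 / 6) * x)          # the same vector
    ```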

    The eigenvalues and eigenvectors of \(A\) and \(A^{T}\).

    Our example showed that \(A\) and \(A^{T}\) had the same eigenvalues but different (but somehow similar) eigenvectors; it also showed that \(B\) and \(B^{T}\) had the same eigenvalues but unrelated eigenvectors. Why is this?

    We can answer the eigenvalue question relatively easily; it follows from the properties of the determinant and the transpose. Recall the following two facts:

    1. \((A+B)^{T}=A^{T}+B^{T}\) (Theorem 3.1.1) and
    2. \(\text{det}(A)=\text{det}(A^{T})\) (Theorem 3.4.3).

    We find the eigenvalues of a matrix by computing the characteristic polynomial; that is, we find \(\text{det}(A-\lambda I)\). What is the characteristic polynomial of \(A^{T}\)? Consider:

    \[\begin{align}\begin{aligned}\text{det}(A^{T}-\lambda I)&=\text{det}(A^{T}-\lambda I^{T}) &\text{since }I=I^{T} \\ &=\text{det}((A-\lambda I)^{T}) &\text{Theorem 3.1.1} \\ &=\text{det}(A-\lambda I) &\text{Theorem 3.4.3}\end{aligned}\end{align} \nonumber \]

    So we see that the characteristic polynomial of \(A^{T}\) is the same as that for \(A\). Therefore they have the same eigenvalues.
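
    Assuming NumPy is available, we can watch this happen: np.poly returns the coefficients of the characteristic polynomial, and they come out identical for \(B\) and \(B^{T}\) (a numerical sketch only):

    ```python
    import numpy as np

    B = np.array([[-7., -2., 10.],
                  [-3.,  2.,  3.],
                  [-6., -2.,  9.]])

    # Monic characteristic polynomial coefficients (the roots are the
    # eigenvalues), highest power first.
    print(np.poly(B).round(6))    # [ 1. -4.  1.  6.]
    print(np.poly(B.T).round(6))  # the same coefficients
    ```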

    What about their respective eigenvectors? Is there any relationship? The simple answer is “No."\(^{1}\)

    The eigenvalues and eigenvectors of \(A\) and The Trace.

    Note that the eigenvalues of \(A\) are \(-6\) and \(12\), and the trace is \(6\); the eigenvalues of \(B\) are \(-1\), \(2\) and \(3\), and the trace of \(B\) is \(4\). Do we notice any relationship?

    It seems that the sum of the eigenvalues is the trace! Why is this the case?

    The answer to this is a bit out of the scope of this text; we can justify part of this fact, and another part we’ll just state as being true without justification.

    First, recall from Theorem 3.2.1 that \(\text{tr}(AB)=\text{tr}(BA)\). Secondly, we state without justification that given a square matrix \(A\), we can find a square matrix \(P\) such that \(P^{-1}AP\) is an upper triangular matrix with the eigenvalues of \(A\) on the diagonal.\(^{2}\) Thus \(\text{tr}(P^{-1}AP)\) is the sum of the eigenvalues; also, using our Theorem 3.2.1, we know that \(\text{tr}(P^{-1}AP)=\text{tr}(P^{-1}PA)=\text{tr}(A)\). Thus the trace of \(A\) is the sum of the eigenvalues.
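
    The matrix \(P\) is not hypothetical: the Schur decomposition computes exactly such a factorization, with an orthogonal \(P\) (so \(P^{-1}=P^{T}\)). A sketch assuming SciPy is available:

    ```python
    import numpy as np
    from scipy.linalg import schur

    B = np.array([[-7., -2., 10.],
                  [-3.,  2.,  3.],
                  [-6., -2.,  9.]])

    # schur returns an upper triangular T (B's eigenvalues are all real)
    # and an orthogonal Z with B = Z T Z^T, i.e. Z^T B Z = T.
    T, Z = schur(B)
    print(np.diag(T).round(6))   # the eigenvalues of B, in some order
    print(np.trace(T).round(6))  # equals tr(B) = 4
    ```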

    The eigenvalues and eigenvectors of \(A\) and The Determinant.

    Again, the eigenvalues of \(A\) are \(-6\) and \(12\), and the determinant of \(A\) is \(-72\). The eigenvalues of \(B\) are \(-1\), \(2\) and \(3\); the determinant of \(B\) is \(-6\). It seems as though the product of the eigenvalues is the determinant.

    This is indeed true; we defend this with our argument from above. We know that the determinant of a triangular matrix is the product of the diagonal elements. Therefore, given a matrix \(A\), we can find \(P\) such that \(P^{-1}AP\) is upper triangular with the eigenvalues of \(A\) on the diagonal. Thus \(\text{det}(P^{-1}AP)\) is the product of the eigenvalues. Since the determinant of a product is the product of the determinants, we know that \(\text{det}(P^{-1}AP)=\text{det}(P^{-1})\text{det}(A)\text{det}(P)=\text{det}(P^{-1})\text{det}(P)\text{det}(A)=\text{det}(A)\). Thus the determinant of \(A\) is the product of the eigenvalues.

    We summarize the results of our example with the following theorem.

    Theorem \(\PageIndex{1}\)

    Properties of Eigenvalues and Eigenvectors

    Let \(A\) be an \(n\times n\) invertible matrix. The following are true:

    1. If \(A\) is triangular, then the diagonal elements of \(A\) are the eigenvalues of \(A\).
    2. If \(\lambda\) is an eigenvalue of \(A\) with eigenvector \(\vec{x}\), then \(\frac{1}{\lambda}\) is an eigenvalue of \(A^{−1}\) with eigenvector \(\vec{x}\).
    3. If \(\lambda\) is an eigenvalue of \(A\) then \(\lambda\) is an eigenvalue of \(A^{T}\).
    4. The sum of the eigenvalues of \(A\) is equal to \(\text{tr}(A)\), the trace of \(A\).
    5. The product of the eigenvalues of \(A\) is equal to \(\text{det}(A)\), the determinant of \(A\).

    There is one more concept concerning eigenvalues and eigenvectors that we will explore. We do so in the context of an example.

    Example \(\PageIndex{3}\)

    Find the eigenvalues and eigenvectors of the matrix \(A=\left[\begin{array}{cc}{1}&{2}\\{1}&{2}\end{array}\right]\).

    Solution

    To find the eigenvalues, we compute \(\text{det}(A-\lambda I)\):

    \[\begin{align}\begin{aligned}\text{det}(A-\lambda I)&=\left|\begin{array}{cc}{1-\lambda}&{2}\\{1}&{2-\lambda}\end{array}\right| \\ &=(1-\lambda)(2-\lambda)-2 \\ &=\lambda^{2}-3\lambda \\ &=\lambda (\lambda -3)\end{aligned}\end{align} \nonumber \]

    Our eigenvalues are therefore \(\lambda = 0, 3\).

    For \(\lambda = 0\), we find the eigenvectors:

    \[\left[\begin{array}{ccc}{1}&{2}&{0}\\{1}&{2}&{0}\end{array}\right]\quad\overrightarrow{\text{rref}}\quad\left[\begin{array}{ccc}{1}&{2}&{0}\\{0}&{0}&{0}\end{array}\right] \nonumber \]

    This shows that \(x_1 = -2x_2\), and so our eigenvectors \(\vec{x}\) are

    \[\vec{x}=x_{2}\left[\begin{array}{c}{-2}\\{1}\end{array}\right]. \nonumber \]

    For \(\lambda = 3\), we find the eigenvectors:

    \[\left[\begin{array}{ccc}{-2}&{2}&{0}\\{1}&{-1}&{0}\end{array}\right]\quad\overrightarrow{\text{rref}}\quad\left[\begin{array}{ccc}{1}&{-1}&{0}\\{0}&{0}&{0}\end{array}\right] \nonumber \]

    This shows that \(x_1 = x_2\), and so our eigenvectors \(\vec{x}\) are

    \[\vec{x}=x_{2}\left[\begin{array}{c}{1}\\{1}\end{array}\right]. \nonumber \]

    One interesting thing about the above example is that we see that \(0\) is an eigenvalue of \(A\); we have not officially encountered this before. Does this mean anything significant?\(^{3}\)

    Think about what an eigenvalue of \(0\) means: there exists a nonzero vector \(\vec{x}\) where \(A\vec{x}=0\vec{x}=\vec{0}\). That is, we have a nontrivial solution to \(A\vec{x}=\vec{0}\). We know this only happens when \(A\) is not invertible.
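
    Our matrix from Example \(\PageIndex{3}\) confirms this (a quick NumPy sketch, for illustration):

    ```python
    import numpy as np

    A = np.array([[1., 2.],
                  [1., 2.]])

    print(np.sort(np.linalg.eigvals(A)))  # [0. 3.]: 0 is an eigenvalue
    print(np.linalg.det(A))               # 0.0: A is not invertible
    ```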

    So if \(A\) is invertible, there is no nontrivial solution to \(A\vec{x}=\vec{0}\), and hence \(0\) is not an eigenvalue of \(A\). If \(A\) is not invertible, then there is a nontrivial solution to \(A\vec{x}=\vec{0}\), and hence \(0\) is an eigenvalue of \(A\). This leads us to our final addition to the Invertible Matrix Theorem.

    Theorem \(\PageIndex{2}\)

    Invertible Matrix Theorem

    Let \(A\) be an \(n\times n\) matrix. The following statements are equivalent.

    1. \(A\) is invertible.
    2. \(A\) does not have an eigenvalue of \(0\).

    This section is about the properties of eigenvalues and eigenvectors. Of course, we have not investigated all of the numerous properties of eigenvalues and eigenvectors; we have just surveyed some of the most common (and most important) concepts. Here are four quick examples of the many things that still exist to be explored.

    First, recall the matrix

    \[A=\left[\begin{array}{cc}{1}&{4}\\{2}&{3}\end{array}\right] \nonumber \]

    that we used in Example 4.1.1. Its characteristic polynomial is \(p(\lambda)=\lambda^2-4\lambda-5\). Compute \(p(A)\); that is, compute \(A^2-4A-5I\). You should get something “interesting,” and you should wonder “does this always work?"\(^{4}\)
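
    If you would rather let a machine do the arithmetic, here is a NumPy check (the fact that this always works is known as the Cayley–Hamilton theorem):

    ```python
    import numpy as np

    A = np.array([[1., 4.],
                  [2., 3.]])

    # p(A) = A^2 - 4A - 5I evaluates to the zero matrix.
    print(A @ A - 4 * A - 5 * np.eye(2))  # [[0. 0.]
                                          #  [0. 0.]]
    ```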

    Second, in all of our examples, we have considered matrices where eigenvalues “appeared only once.” Since we know that the eigenvalues of a triangular matrix appear on the diagonal, we know that the eigenvalues of

    \[A=\left[\begin{array}{cc}{1}&{1}\\{0}&{1}\end{array}\right] \nonumber \]

    are “1 and 1;” that is, the eigenvalue \(\lambda = 1\) appears twice. What does that mean when we consider the eigenvectors of \(\lambda = 1\)? Compare the result of this to the matrix

    \[A=\left[\begin{array}{cc}{1}&{0}\\{0}&{1}\end{array}\right], \nonumber \]

    which also has the eigenvalue \(\lambda =1\) appearing twice.\(^{5}\)
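
    If you want a numerical hint before working this out, compare what NumPy reports for the two matrices (an illustrative sketch; look at the eigenvector columns):

    ```python
    import numpy as np

    J = np.array([[1., 1.],
                  [0., 1.]])   # lambda = 1 twice; how many eigenvectors?
    I2 = np.eye(2)             # lambda = 1 twice here as well

    wJ, vJ = np.linalg.eig(J)
    _, vI = np.linalg.eig(I2)
    print(wJ)  # [1. 1.]: lambda = 1 with multiplicity 2
    print(vJ)  # the two columns are (numerically) parallel: one direction
    print(vI)  # two genuinely independent eigenvector directions
    ```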

    Third, consider the matrix

    \[A=\left[\begin{array}{cc}{0}&{-1}\\{1}&{0}\end{array}\right]. \nonumber \]

    What are the eigenvalues?\(^{6}\) We quickly compute the characteristic polynomial to be \(p(\lambda) = \lambda^2 + 1\). Therefore the eigenvalues are \(\pm \sqrt{-1} = \pm i\). What does this mean?
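
    One way to interpret this: the matrix rotates the plane by \(90^{\circ}\), so no nonzero real vector is mapped to a multiple of itself, and the eigenvalues (and eigenvectors) are forced to be complex. NumPy reports them without complaint (a quick sketch):

    ```python
    import numpy as np

    A = np.array([[0., -1.],
                  [1.,  0.]])

    print(np.linalg.eigvals(A))  # [0.+1.j 0.-1.j]
    ```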

    Finally, we have found the eigenvalues of matrices by finding the roots of the characteristic polynomial. We have limited our examples to quadratic and cubic polynomials; one would expect that for larger matrices, a computer would be used to factor the characteristic polynomials. However, in general, this is not how eigenvalues are found. Factoring high-order polynomials is too unreliable, even with a computer; round-off errors can cause unpredictable results. Also, to even compute the characteristic polynomial, one needs to compute the determinant, which is also expensive (as discussed in the previous chapter).

    So how are eigenvalues found? There are iterative processes that can progressively transform a matrix \(A\) into another matrix that is almost an upper triangular matrix (the entries below the diagonal are almost zero) where the entries on the diagonal are the eigenvalues. The more iterations one performs, the better the approximation is.

    These methods are so fast and reliable that some computer programs convert polynomial root finding problems into eigenvalue problems!
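
    To make the idea concrete, here is a minimal sketch of the unshifted QR iteration, the classic example of such a process. (Production algorithms add shifts and a Hessenberg reduction for speed and reliability; this bare-bones version is only an illustration, and it assumes the eigenvalues are real with distinct absolute values.)

    ```python
    import numpy as np

    def qr_eigenvalues(A, iterations=200):
        """Unshifted QR iteration: repeatedly factor A = QR and form RQ.
        Since RQ = Q^T A Q, every iterate has the same eigenvalues as A;
        the iterates approach an upper triangular matrix whose diagonal
        holds the eigenvalues."""
        A = np.array(A, dtype=float)
        for _ in range(iterations):
            Q, R = np.linalg.qr(A)
            A = R @ Q
        return np.diag(A)

    B = [[-7, -2, 10],
         [-3,  2,  3],
         [-6, -2,  9]]
    print(qr_eigenvalues(B).round(4))  # approximately [ 3.  2. -1.]
    ```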

    Most textbooks on Linear Algebra will provide direction on exploring the above topics and give further insight to what is going on. We have mentioned all the eigenvalue and eigenvector properties in this section for the same reasons we gave in the previous section. First, knowing these properties helps us solve numerous real world problems, and second, it is fascinating to see how rich and deep the theory of matrices is.

    Footnotes

    [1] We have defined an eigenvector to be a column vector. Some mathematicians prefer to use row vectors instead; in that case, the typical eigenvalue/eigenvector equation looks like \(\vec{x}A=\lambda\vec{x}\). It turns out that doing things this way will give you the same eigenvalues as our method. What is more, take the transpose of the above equation: you get \((\vec{x}A)^{T}=(\lambda\vec{x})^{T}\), which is also \(A^{T}\vec{x}^{T}=\lambda\vec{x}^{T}\). The transpose of a row vector is a column vector, so this equation is actually the kind we are used to, and we can say that \(\vec{x}^{T}\) is an eigenvector of \(A^{T}\).

    In short, what we find is that the eigenvectors of \(A^{T}\) are the “row” eigenvectors of \(A\), and vice versa.

    [2] Who in the world thinks up this stuff? It seems that the answer is Marie Ennemond Camille Jordan, who, despite having at least two girl names, was a guy.

    [3] Since \(0\) is a “special” number, we might think so – after all, we found that having a determinant of \(0\) is important. Then again, a matrix with a trace of \(0\) isn’t all that important. (Well, as far as we have seen; it actually is). So, having an eigenvalue of \(0\) may or may not be significant, but we would be doing well if we recognized the possibility of significance and decided to investigate further.

    [4] Yes.

    [5] To direct further study, it helps to know that mathematicians refer to this as the multiplicity of an eigenvalue. In each of these two examples, the eigenvalue \(\lambda=1\) has multiplicity \(2\).

    [6] Be careful; this matrix is not triangular.
