Function scales eigenvalues, but what happens to eigenvectors?

In summary, if p(x) is a polynomial, then every eigenvector v of A with eigenvalue ##\lambda## is also an eigenvector of p(A), with eigenvalue ##p(\lambda)##: the eigenvalues transform by the rule ##\lambda \to p(\lambda)## while the eigenvectors remain the same.
  • #1
johnpjust
Statement: I can prove that if I apply a function to my matrix (let's call it) "A"...whatever that function does to A, it will do the same thing to the eigenvalues (I can prove this with a similarity transformation, I think), so long as the function is basically a linear combination of the powers of "A" or something like that.

Question: How do I prove what this function does to the eigenvectors, though? Do they remain the same? Do they change? Thanks!
 
  • #2
You have a matrix A and you have eigenvalues so that you know ##Av_i = a_iv_i## where ##a_i## is the ith eigenvalue with ##v_i## as the corresponding eigenvector. (I take it A is nxn and there are n different eigenvalues?)

You have some function f(A) ... which is a matrix equation ... so B=f(A) - is that correct?
Can you represent the effect of f on A by a matrix so that B=FA?

Then, when you find the eigenvalues ... you get ##Bu_i = b_iu_i## (##b_i## and ##u_i## are the eigenvalues and eigenvectors for B) ... and you find that, when they are ordered a certain way, ##b_i=sa_i##, where s is a constant scale factor between "corresponding" eigenvalues?

So, to show the relationship between the eigenvectors (if any), you need to be explicit about how you proved the relationship for the eigenvalues.
 
  • #3
Thank you for the response. A is indeed an nxn matrix, but we cannot assume n different eigenvalues. Also, f(A) is more like a polynomial function or power series, and so cannot be assumed to be a matrix transformation equation. I incorrectly used the word "scales" in the question title, which is probably why you assumed these things above (my fault!). The similarity transformation that I mention involves the Jordan normal form, which allows us to prove that whatever the function does to A, it also does to the eigenvalues of A.

I was trying to think about how to use the Av=av equation you show above to prove that the eigenvectors do not change, but I'm not sure how...
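A sketch of that similarity-transformation argument in the special case where A is diagonalizable (an extra assumption made only for this sketch; the Jordan-form version replaces the diagonal matrix by a Jordan matrix): writing ##A = S\Lambda S^{-1}## with ##\Lambda## diagonal,
$$p(A)=\sum_k a_k A^k=\sum_k a_k S\Lambda^k S^{-1}=S\,p(\Lambda)\,S^{-1},$$
and ##p(\Lambda)## is again diagonal with entries ##p(\lambda_i)##. So the eigenvalues become ##p(\lambda_i)##, and the columns of ##S##, i.e. the eigenvectors of ##A##, are still eigenvectors of ##p(A)##.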
 
  • #4
Conjugation-invariant polynomials will work, for instance the trace and the determinant.
 
  • #5
Hi lavinia - can you please elaborate for me regarding your response? Thank you!
 
  • #6
johnpjust said:
Hi lavinia - can you please elaborate for me regarding your response? Thank you!
Hi johnpjust
If I understood you right, a function like the trace depends only on the eigenvalues of the matrix. While it is the sum of the diagonal entries of the matrix, it is also the sum of the eigenvalues. By conjugation any matrix can be put in Jordan canonical form, and then the diagonal entries are the eigenvalues with multiplicity.

I think - but correct me if I am wrong - that any conjugation invariant function will depend only on the eigenvalues because in Jordan canonical form the only entries are the eigenvalues on the diagonal and 1's on the super diagonal. This is certainly true of symmetric invariant polynomials such as the trace and determinant.
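For instance, writing ##\lambda_1,\dots,\lambda_n## for the eigenvalues of A counted with multiplicity, the standard identities
$$\operatorname{tr}(A)=\sum_{i=1}^n\lambda_i,\qquad \det(A)=\prod_{i=1}^n\lambda_i,\qquad \operatorname{tr}(SAS^{-1})=\operatorname{tr}(A),\qquad \det(SAS^{-1})=\det(A)$$
show that both functions are conjugation invariant and depend only on the eigenvalues (these identities are added here purely for illustration).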

Your statement of your question confused me a little, so I interpreted it to mean functions that are determined by the eigenvalues.
 
  • #7
johnpjust said:
Statement: I can prove that if I apply a function to my matrix (let's call it) "A"...whatever that function does to A, it will do the same thing to the eigenvalues (I can prove this with a similarity transformation, I think), so long as the function is basically a linear combination of the powers of "A" or something like that.

Question: How do I prove what this function does to the eigenvectors, though? Do they remain the same? Do they change? Thanks!
This is a special case of a theorem called "the spectral mapping theorem" in books on functional analysis. Note that if there's a non-zero vector x such that ##Ax=\lambda x##, then ##(A-\lambda I)x=0##, and ##A-\lambda I## isn't invertible (because if it were, we would get x=0, a contradiction). Because of this, the spectrum of A is defined as the set of all ##\lambda\in\mathbb C## such that ##A-\lambda I## is not invertible. The spectrum of A is denoted by ##\sigma(A)##. One version of the spectral mapping theorem says that if f is a polynomial, then we have ##\sigma(f(A))=f(\sigma(A))##. The right-hand side is (by definition of the notation) equal to ##\{f(\lambda)|\lambda\in\sigma(A)\}##.
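A small worked example of the polynomial case (my own, not from the post above): take
$$A=\begin{pmatrix}2&1\\0&3\end{pmatrix},\qquad f(x)=x^2,\qquad f(A)=A^2=\begin{pmatrix}4&5\\0&9\end{pmatrix}.$$
Then ##\sigma(A)=\{2,3\}## and ##\sigma(f(A))=\{4,9\}=\{f(2),f(3)\}=f(\sigma(A))##, as the theorem predicts.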

I don't know if there's a simple proof for the case where A is a linear operator on a finite-dimensional vector space.
 
  • #8
If ##A## is a square matrix, ##\lambda## is an eigenvalue of ##A## with a corresponding eigenvector ##v##, and ##p(x)= \sum_{k=0}^n a_k x^k## is a polynomial, then $$p(A) v = p(\lambda) v, $$ i.e. the eigenvalues change by the rule ##\lambda\to p(\lambda)## and the eigenvectors remain the same.
That is a very easy statement; it is almost trivial for ##p(x)=x^k##, and then you just take the corresponding linear combination of the identities ##A^k v= \lambda^k v##.
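Spelled out (just expanding the previous sentence): ##Av=\lambda v## gives ##A^k v=\lambda^k v## by induction, so
$$p(A)v=\sum_{k=0}^n a_kA^kv=\sum_{k=0}^n a_k\lambda^kv=p(\lambda)\,v.$$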

A more non-trivial statement is the so-called spectral mapping theorem, which says that if ##\mu## is an eigenvalue of ##p(A)##, then there exists ##\lambda\in\sigma(A)## (i.e. ##\lambda## is an eigenvalue of ##A##) such that ##\mu=p(\lambda)##. You can find a proof in any more or less advanced linear algebra text; see for example "Linear algebra done wrong", Ch. 9, s. 2.

As for eigenvectors, if $$p(A) v = \mu v,$$ then $$v= \sum_{k=1}^r \alpha_k v_k,$$ where ##\alpha_k## are some scalars, ##Av_k = \lambda_k v_k##, and ##\lambda_k##, ##k=1, 2, \ldots, r## are all the eigenvalues of ##A## such that ##p(\lambda_k)=\mu##. This statement can be seen easily from the Jordan decomposition of a matrix. I am not aware of any "elementary" proof.
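A concrete illustration (my own example, not from the post): take ##A=\begin{pmatrix}1&0\\0&-1\end{pmatrix}## and ##p(x)=x^2##, so ##p(A)=I## and every non-zero vector ##v## satisfies ##p(A)v=v##. Such a ##v## is in general not an eigenvector of ##A##, but it is of the form ##\alpha_1v_1+\alpha_2v_2## with ##Av_1=v_1##, ##Av_2=-v_2##, and ##p(1)=p(-1)=1##, exactly as described above.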
 

Related to Function scales eigenvalues, but what happens to eigenvectors?

1. What does "function scales eigenvalues" mean here?

In this thread, the "function" is a polynomial (or power series) applied to a square matrix A, giving a new matrix ##p(A)=a_0 I + a_1 A + \cdots + a_n A^n##. The question is how forming p(A) changes the eigenvalues and eigenvectors of A.

2. What are eigenvalues?

Eigenvalues are the scalars ##\lambda## associated with a square matrix A for which there is a non-zero vector v with ##Av=\lambda v##. Such a v is called an eigenvector: it is a direction that the matrix only stretches or compresses, and the eigenvalue is the corresponding scaling factor. Eigenvalues play a crucial role in understanding the behavior of linear transformations.

3. How are the eigenvalues affected when a polynomial is applied to the matrix?

If ##\lambda## is an eigenvalue of A and p is a polynomial, then ##p(\lambda)## is an eigenvalue of p(A); by the spectral mapping theorem, every eigenvalue of p(A) arises this way. In other words, the eigenvalues transform by the rule ##\lambda \to p(\lambda)##.

4. What happens to the eigenvectors?

Every eigenvector v of A remains an eigenvector of p(A): if ##Av=\lambda v##, then ##p(A)v=p(\lambda)v##. However, p(A) can have additional eigenvectors, namely linear combinations of eigenvectors of A whose eigenvalues are mapped by p to the same value.
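A quick numerical check of both points (an illustrative example, not from the thread): with ##A=\begin{pmatrix}2&0\\0&3\end{pmatrix}## and ##p(x)=x^2+1##, we get ##p(A)=\begin{pmatrix}5&0\\0&10\end{pmatrix}##, so the eigenvectors ##e_1,e_2## are unchanged while the eigenvalues ##2,3## become ##p(2)=5## and ##p(3)=10##.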

5. Can eigenvectors be used to understand the behavior of p(A)?

Yes. The eigenvectors of a matrix represent the directions in which the matrix stretches or compresses, and the corresponding eigenvalues give the amount of stretching or compression. Since applying a polynomial p keeps those directions and replaces each eigenvalue ##\lambda## by ##p(\lambda)##, looking at the eigenvectors and eigenvalues tells you exactly how p(A) acts.
