What Are Eigenvectors and Eigenvalues? - Comments

In summary, Mark44 provides a brief overview of the concept of eigenvalues and eigenvectors, and how they can be used to solve problems in linear algebra.
  • #1
Mark44 submitted a new PF Insights post

What Are Eigenvectors and Eigenvalues?

Continue reading the Original PF Insights Post.
 
  • Like
Likes FactChecker, kaushikquanta, Samy_A and 2 others
  • #3
Excellent information, Mark44!
 
  • #4
QuantumQuest said:
Great job Mark!

RJLiberator said:
Excellent information, Mark44!
Thanks! My Insights article isn't anything groundbreaking -- just about every linear algebra text will cover this topic. My intent was to write something for this site that was short and sweet, on why we care about eigenvectors and eigenvalues, and how we find them in a couple of examples.
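For readers who want to experiment numerically, here is a minimal NumPy sketch (the matrix is my own example, not one from the article):

[CODE=python]
# Illustrative example (not from the article): computing eigenvalues
# and eigenvectors of a 2x2 matrix with NumPy.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are the corresponding unit eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)  # [3. 1.] for this matrix (order may vary)

# Verify the defining property A x = lambda x for the first pair.
x = eigenvectors[:, 0]
print(np.allclose(A @ x, eigenvalues[0] * x))  # True
[/CODE]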
 
  • Like
Likes RJLiberator
  • #5
I learned about eigenvalues and eigenvectors in quantum mechanics first, so I can't help but think of wavefunctions and energy levels of some Hamiltonian.

Other cases are (to me) just different analogs to wave functions and energy levels.

Nice article. I eventually took linear algebra and learned it the way you are presenting it, but I was a senior in college taking the course by correspondence.
 
  • #6
So the red/blue arrows on the image are eigenvectors?
 
  • #8
Haruki Chou said:
So the red/blue arrows on the image are eigenvectors?
What image are you talking about? The article doesn't have any images in it.
 
  • #9
Mark44 said:
What image are you talking about? The article doesn't have any images in it.
It's a mystery challenging the basic foundations of Physics: he seems to refer to the image mentioned in the post following his own post. :oldsmile:
If that is the case, the blue and violet arrows are the eigenvectors, not the red.
 
  • #10
Mark44 said:
What image are you talking about? The article doesn't have any images in it.
Could it be a reference to Mona Lisa at the top of the Insight?
 
  • #11
Samy_A said:
It's a mystery challenging the basic foundations of Physics: he seems to refer to the image mentioned in the post following his own post. :oldsmile:
If that is the case, the blue and violet arrows are the eigenvectors, not the red.
How about asking Haruki to write an Insight on that one -- time travel??
 
  • Like
Likes Samy_A
  • #12
I do not understand how ##\det(A - \lambda I) = 0## follows.
Since ##\vec x## is not a square matrix, we cannot write ##\det((A - \lambda I)\vec x) = \det(A - \lambda I)\det(\vec x)##.
 
  • #13
2nafish117 said:
I do not understand how ##\det(A - \lambda I) = 0## follows.
Since ##\vec x## is not a square matrix, we cannot write ##\det((A - \lambda I)\vec x) = \det(A - \lambda I)\det(\vec x)##.
Correct, you can't write that. Note that Mark44 doesn't write ##|A - \lambda I|\,|\vec x| = 0##. He correctly writes ##|A - \lambda I| = 0##.

In general, if for a square matrix ##B## there exists a nonzero vector ##\vec x## satisfying ##B\vec x = \vec 0##, then the determinant of ##B## must be 0.
That's how ##(A - \lambda I)\vec{x} = \vec{0}## implies ##|A - \lambda I| = 0##.
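A quick numerical sanity check of this fact, with a toy matrix of my own choosing:

[CODE=python]
# If B has a nonzero null vector x (B x = 0), then det(B) must be 0.
import numpy as np

x = np.array([1.0, 2.0])
B = np.array([[2.0, -1.0],   # each row is orthogonal to x,
              [4.0, -2.0]])  # so B x = 0

print(B @ x)             # [0. 0.] -- x is a nonzero null vector
print(np.linalg.det(B))  # 0.0 (up to rounding) -- det is forced to vanish
[/CODE]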
 
  • #14
Samy_A said:
Correct, you can't write that. Note that Mark44 doesn't write ##|A - \lambda I|\,|\vec x| = 0##. He correctly writes ##|A - \lambda I| = 0##.

In general, if for a square matrix ##B## there exists a nonzero vector ##\vec x## satisfying ##B\vec x = \vec 0##, then the determinant of ##B## must be 0.
That's how ##(A - \lambda I)\vec{x} = \vec{0}## implies ##|A - \lambda I| = 0##.
But how?
Ah, I got it just as I was writing this.
Let ##\vec x## be a nonzero vector and suppose ##\det(A) \neq 0##.
Then premultiplying ##A\vec x = \vec 0## by ##A^{-1}## we get
##(A^{-1}A)\vec x = A^{-1}\vec 0##,
which leads to the contradiction ##\vec x = \vec 0##.
Am I right?
I'm sorry that I don't know how to use LaTeX.
 
  • #15
2nafish117 said:
But how?
Ah, I got it just as I was writing this.
Let ##\vec x## be a nonzero vector and suppose ##\det(A) \neq 0##.
Then premultiplying ##A\vec x = \vec 0## by ##A^{-1}## we get
##(A^{-1}A)\vec x = A^{-1}\vec 0##,
which leads to the contradiction ##\vec x = \vec 0##.
Am I right?
I'm sorry that I don't know how to use LaTeX.
Yes, that's basically it.
 
  • #16
Nice Insight!
If you like, I have an example application of your post to Euclidean geometry. You could explain how eigenvalues and eigenvectors are helpful for carrying out a full description of the isometries in dimension 3, and conclude that they are rotations, reflections, and compositions of a rotation with a reflection about the plane orthogonal to the axis of rotation.
 
  • #17
geoffrey159 said:
Nice Insight!
If you like, I have an example application of your post to Euclidean geometry. You could explain how eigenvalues and eigenvectors are helpful for carrying out a full description of the isometries in dimension 3, and conclude that they are rotations, reflections, and compositions of a rotation with a reflection about the plane orthogonal to the axis of rotation.
Thank you for the offer, but I think that I will decline. All I wanted to say in the article was a bit about what they (eigenvectors and eigenvalues) are, and a brief bit on how to find them. Adding what you suggested would go well beyond the main thrust of the article.
 
  • #18
As for how this helps physicists: the eigenvalues reduce the components of a large tensor to only as many components as the order of the matrix, and these reduced components have the same effect as all the tensor components combined. As for eigenvectors, I'm not sure how they are applied.
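One standard example of both, offered as my own illustration rather than the poster's: for a symmetric stress tensor, the eigenvalues are the principal stresses and the eigenvectors are the principal directions.

[CODE=python]
# Sketch: principal stresses/directions of a symmetric stress tensor.
import numpy as np

stress = np.array([[50.0, 30.0,  0.0],
                   [30.0, -20.0, 0.0],
                   [ 0.0,  0.0, 10.0]])  # made-up values

# eigh is the right routine for symmetric matrices; it returns real
# eigenvalues in ascending order.
principal_stresses, principal_axes = np.linalg.eigh(stress)
print(principal_stresses)  # eigenvalues = principal stresses
print(principal_axes)      # columns = principal directions
[/CODE]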
 
  • #19
Eigenvalues/vectors are something I've often wanted to learn more about, so I really appreciate the effort that went into writing this article, Mark. The problem is that I feel like I've been shown a beautiful piece of abstract art with lots of carefully thought out splatters but the engineer in me cries out... "But what is it for?" :)

"Here is an awesome tool that is very useful to a long list of disciplines. It's called a screwdriver. To make use of it you grasp it with your hand and turn it. The end." Nooooo! Don't stop there - I don't have any insight yet into why this tool is so useful, nor intuition into the types of problems I might encounter where I would be glad I had brought my trusty screwdriver with me.

I would truly love to know these things, so I hope you will consider adding some additional exposition that offers insight into why eigenstuff is so handy.
 
  • #20
ibkev said:
Eigenvalues/vectors are something I've often wanted to learn more about, so I really appreciate the effort that went into writing this article, Mark. The problem is that I feel like I've been shown a beautiful piece of abstract art with lots of carefully thought out splatters but the engineer in me cries out... "But what is it for?" :)
My background isn't in engineering, so I'm not aware of how eigenvalues and eigenvectors are applicable to engineering disciplines, if at all. An important application of these ideas is in diagonalizing square matrices to solve a system of differential equations (see the sketch after this list). A few other applications, as listed in this Wikipedia article (https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors), are:
  • Schrödinger equation in quantum mechanics
  • Molecular orbitals
  • Vibration analysis
  • Geology and glaciology (to study the orientation of components of glacial till)
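Here is a minimal sketch of the diagonalization application mentioned before the list, solving the linear system ##\vec{x}'(t) = A\vec{x}(t)## with made-up numbers:

[CODE=python]
# Solving x'(t) = A x(t): if A = P D P^{-1} with D diagonal, then
# x(t) = P exp(D t) P^{-1} x(0). Example numbers are my own.
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])     # initial condition
t = 1.5

eigenvalues, P = np.linalg.eig(A)
expDt = np.diag(np.exp(eigenvalues * t))  # exp(D t) is diagonal
x_t = P @ expDt @ np.linalg.inv(P) @ x0

print(x_t)  # the solution vector at time t
[/CODE]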
 
  • Like
Likes ibkev
  • #21
"All we can be sure of is that det(A–λ) must be zero". How do you arrive at this?
Are you saying that ##\vec{x}## cannot be ##\vec{0}## and ##A – \lambda## may not be ##\vec{0}##.
=>##A – \lambda## does not have an inverse.
=>##det(A – \lambda)## must be 0. Is that the reasoning?
 
  • #22
smodak said:
"All we can be sure of is that the determinant of IA–λI must be zero". How do you arrive at this?
Are you saying that vec{x} cannot be vec{0}. And A – lambda may not be vec{0}.

=>A – lambda does not have an inverse.
=>det|A – lambda| must be 0.
Is that the reasoning?
Sorry, my LaTeX was all messed up. Here is what I meant to say...

"All we can be sure of is that det(A - λ) must be zero." How do you arrive at this?
Are you saying that ##\vec{x}## cannot be ##\vec{0}## and ##A - \lambda## may not be ##\vec{0}##?
=> ##A - \lambda## does not have an inverse.
=> ##\det(A - \lambda)## must be 0. Is that the reasoning?
 
  • #23
smodak said:
Sorry my LaTex was all messed up. Here is what I meant to say...

"All we can be sure of is that det(A–λ) must be zero". How do you arrive at this?
Are you saying that ##\vec{x}## cannot be ##\vec{0}## and ##A – \lambda## may not be ##\vec{0}##.
=>##A – \lambda## does not have an inverse.
=>##det(A – \lambda)## must be 0. Is that the reasoning?
The full quote near the beginning of the article is this:
In the last equation above, one solution would be ##\vec{x} = \vec{0}##, but we don't allow this possibility, because an eigenvector has to be nonzero. Another solution would be ##A - \lambda I = 0## (the zero matrix), but because of the way matrix multiplication is defined, a matrix times a vector can result in zero even if neither the matrix nor the vector is zero. All we can be sure of is that the determinant ##|A - \lambda I|## must be zero.

The "last equation" in the quoted text above is ##(A - \lambda I)\vec{x} = \vec{0}##. My statement about ##|A - \lambda I|## being zero doesn't follow from ##A - \lambda I## not being invertible; it's really the other way around (i.e., since ##|A - \lambda I| = 0##, ##A - \lambda I## doesn't have an inverse).

Note that what you wrote, A - λ, isn't correct. Subtracting a scalar (λ) from a matrix is not defined.
 
  • #24
A good explanation, but please explain the statement: "In other words, when a matrix A multiplies an eigenvector, the result is a vector in the same (or possibly opposite) direction."
 
  • #25
Excellent, but please explain:
"when a matrix A multiplies an eigenvector, the result is a vector in the same (or possibly opposite) direction"
 
  • #26
kaushikquanta said:
Excellent, but please explain:
"when a matrix A multiplies an eigenvector, the result is a vector in the same (or possibly opposite) direction"
This is from the basic definition of an eigenvector.

An eigenvector ##\vec{x}## for a matrix ##A## is a nonzero vector for which ##A\vec{x} = \lambda\vec{x}## for some scalar ##\lambda##. From this definition, it can be seen that ##A\vec{x}## results in a vector that is a multiple of ##\vec{x}##, so it has either the same direction or the opposite direction. Any time you multiply a vector by a scalar, the new vector is in the same direction or the opposite direction.
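A tiny numeric illustration of this (my example):

[CODE=python]
# An eigenvector is only stretched, possibly into the opposite direction.
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, -3.0]])
x = np.array([0.0, 1.0])  # eigenvector of A with eigenvalue -3

print(A @ x)  # [ 0. -3.] == -3 * x: same line, opposite direction
[/CODE]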
 
  • #27
I recently stumbled across a great "intuition-oriented" supplement to Mark44's Insight article. It has some nice animations that help visualize it from a geometric perspective.
 
  • #28
ibkev said:
but the engineer in me cries out... "But what is it for?" :)

An unsophisticated indication:

Suppose a problem involves a given matrix ##M## and many different column vectors ##v_1,v_2,v_3,...## and that you must compute the products ##Mv_1, Mv_2,Mv_3,...##.

Further suppose you have your choice about what basis to use in representing the vectors and that ##M## has two eigenvectors ##e_1, e_2## with respective eigenvalues ##\lambda_1, \lambda_2##.

In the happy event that each vector ##v_i## can be represented as a linear combination of ##e_1, e_2##, you could do all the multiplications without actually multiplying a matrix times a vector in detail. For example, if ##v_1 = 3e_1 + 4e_2## then
##Mv_1 = 3Me_1 + 4Me_2 = 3\lambda_1 e_1 + 4 \lambda_2 e_2##.

The coordinate representation of that would be ##M \begin{pmatrix} 3 \\4 \end{pmatrix} = \begin{pmatrix} 3 \lambda_1 \\ 4\lambda_2 \end{pmatrix}##, provided the coordinates represent the vectors ##v_i## in the basis ##\{e_1, e_2\}##.

Of course, you might say "But I'd have to change all the ##v_i## to be in the ##\{e_1, e_2\}## basis." However, in practical data collection, raw data is reduced to some final form, so at least you know that the "eigenbasis" would be a good format for the reduced data. Also, in theoretical reasoning, it is often simpler to imagine that a set of vectors is represented in the eigenbasis of a particular matrix.

A randomly chosen set of vectors can't necessarily be represented using the eigenvectors of a given matrix as a basis. However, there are many situations where the vectors involved in a physical situation can be represented using the eigenvectors of a matrix involved in that situation. Why Nature causes this to happen varies from case to case. When it does happen in Nature, we don't want to overlook it because it offers a great simplification.
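A short NumPy sketch of the computation described above (the numbers are mine):

[CODE=python]
# Once v is expressed in the eigenbasis, applying M is just a
# component-wise scaling by the eigenvalues.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lambdas, E = np.linalg.eig(M)      # columns of E are e_1, e_2

v = 3.0 * E[:, 0] + 4.0 * E[:, 1]  # v = 3 e_1 + 4 e_2
coords = np.array([3.0, 4.0])      # coordinates of v in the eigenbasis

Mv_coords = lambdas * coords       # (3 lambda_1, 4 lambda_2)
print(np.allclose(M @ v, E @ Mv_coords))  # True
[/CODE]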
 
  • Like
Likes ibkev
  • #29
This is not *at all* a rigorous, well-thought-out definition, but...

I personally like to think of it this way: whenever a linear operator acts on some vector space, it transforms the vector subspaces inside, right? There might be some subspaces that aren't rotated or otherwise changed in direction, only scaled by some factor. Those invariant subspaces contain the eigenvectors, and the scaling factor is the corresponding eigenvalue. It's not hard to extend this general idea of "invariance" to the idea of, say, finding the allowed states of a system in quantum mechanics, especially when you remember that the Hamiltonian is a linear operator. Linear algebra in general is the art of taking algebraic, abstract concepts and putting them into concrete matrices and systems of equations.
 
  • Like
Likes ibkev

Related to What Are Eigenvectors and Eigenvalues? - Comments

1. What are eigenvectors and eigenvalues?

Eigenvectors and eigenvalues are concepts in linear algebra that are used to analyze the behavior of linear transformations. Eigenvectors are special vectors that do not change direction when a linear transformation is applied to them. Eigenvalues are scalars that represent how much an eigenvector is scaled when it undergoes a linear transformation.

2. How are eigenvectors and eigenvalues used in science?

Eigenvectors and eigenvalues have various applications in science, including physics, engineering, and computer science. They are used to analyze the behavior of physical systems, such as quantum mechanical systems, and to solve problems in data analysis and machine learning.

3. What is the significance of eigenvectors and eigenvalues?

Eigenvectors and eigenvalues are important in linear algebra because they provide a way to simplify complex systems and understand their underlying structure. They also have practical applications in fields like physics and computer science, making them essential concepts for scientists to understand.

4. How do you find eigenvectors and eigenvalues?

To find the eigenvalues of a square matrix ##A##, you solve the characteristic equation ##\det(A - \lambda I) = 0##; for each eigenvalue ##\lambda##, the eigenvectors are the nonzero solutions ##\vec{x}## of the system ##(A - \lambda I)\vec{x} = \vec{0}##. This can be done by hand via the characteristic polynomial, or numerically, for example with the power iteration method.
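As a concrete illustration of the power iteration method mentioned above (the matrix is an arbitrary example):

[CODE=python]
# Power iteration: repeatedly applying A and renormalizing converges,
# for suitable starting vectors, to an eigenvector of the eigenvalue
# of largest magnitude.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
x = np.array([1.0, 0.3])  # arbitrary nonzero starting vector

for _ in range(50):
    x = A @ x
    x = x / np.linalg.norm(x)  # renormalize to avoid overflow

# The Rayleigh quotient estimates the dominant eigenvalue.
print(x @ A @ x)  # approximately 3.0 for this matrix
[/CODE]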

5. Can eigenvectors and eigenvalues have complex values?

Yes, eigenvectors and eigenvalues can have complex values, and in certain cases complex values are necessary to fully describe a system; a real rotation matrix, for example, has complex eigenvalues. In quantum mechanics, the eigenvectors (state vectors) are generally complex, although the energy eigenvalues of a Hermitian Hamiltonian are always real.
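For example (my illustration), a rotation in the plane leaves no real vector pointing in its original direction, so its eigenvalues must be complex:

[CODE=python]
# A 2D rotation by theta has eigenvalues exp(+i theta) and exp(-i theta).
import numpy as np

theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigenvalues, _ = np.linalg.eig(R)
print(eigenvalues)  # [0.7071+0.7071j  0.7071-0.7071j]
[/CODE]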
