Mark44 submitted a new PF Insights post
What Are Eigenvectors and Eigenvalues?
Continue reading the Original PF Insights Post.
QuantumQuest said: Great job Mark!

Thanks! My Insights article isn't anything groundbreaking -- just about every linear algebra text will cover this topic. My intent was to write something short and sweet for this site on why we care about eigenvectors and eigenvalues, and how we find them in a couple of examples.

RJLiberator said: Excellent information, Mark44!
Haruki Chou said: So the red/blue arrows on the image are eigenvectors?

What image are you talking about? The article doesn't have any images in it.
Mark44 said: What image are you talking about? The article doesn't have any images in it.

It's a mystery challenging the basic foundations of Physics: he seems to refer to the image mentioned in the post following his own post.
Mark44 said: What image are you talking about? The article doesn't have any images in it.

Could it be a reference to the Mona Lisa at the top of the Insight?
Samy_A said: It's a mystery challenging the basic foundations of Physics: he seems to refer to the image mentioned in the post following his own post.

How about asking Haruki to write an Insight on that one -- time travel?
If that is the case, the blue and violet arrows are the eigenvectors, not the red.
2nafish117 said: i do not understand how ##\det(A - \lambda I) = 0##. Since ##\vec x## is not a square matrix, we cannot write ##\det((A - \lambda I)\vec x) = \det(A - \lambda I)\det(\vec x)##.

Correct, you can't write that. Note that Mark44 doesn't write ##|A – \lambda I||\vec x| = 0##. He correctly writes ##|A – \lambda I| = 0##.
In general, if for a square matrix ##B## there exists a nonzero vector ##\vec x## satisfying ##B\vec x = \vec 0##, then the determinant of ##B## must be 0.
That's how ##(A – \lambda I)\vec{x} = \vec{0}## implies ##|A – \lambda I| = 0##.
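Samy_A's criterion can be checked numerically. A minimal sketch, assuming numpy is available; the matrix ##B## and vector ##\vec x## below are made up for illustration:

```python
import numpy as np

# B is singular by construction: its second column is twice its first,
# so B @ x = 0 has the nonzero solution x = (2, -1).
B = np.array([[1.0, 2.0],
              [3.0, 6.0]])
x = np.array([2.0, -1.0])

print(B @ x)             # the zero vector
print(np.linalg.det(B))  # 0, up to floating-point rounding
```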
2nafish117 said: but how? Ah, I got it just as I was writing this: let x be a nonzero vector and let det(A) ≠ 0. Then premultiplying Ax = 0 with A-inverse we get (A-inverse·A)x = A-inverse·0, which leads to the contradiction x = 0. Am I right? (I'm sorry that I don't know how to use LaTeX.)

Yes, that's basically it.
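The contrapositive of that argument can also be seen with numpy: when ##\det(A) \neq 0##, ##A## is invertible, so the only solution of ##A\vec x = \vec 0## is the zero vector. The matrix below is a made-up example:

```python
import numpy as np

# det(A) = 1*4 - 2*3 = -2, so A is invertible.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Solving A x = 0 is the same as premultiplying by A^{-1}:
# the unique solution is x = 0.
x = np.linalg.solve(A, np.zeros(2))
print(x)  # [0. 0.]
```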
geoffrey159 said: Nice insight! If you like it, I have an example of an application of your post to Euclidean geometry. You could explain how eigenvalues and eigenvectors are helpful in carrying out a full description of isometries in dimension 3, and conclude that they are rotations, reflections, and the composition of a rotation and a reflection about the plane orthogonal to the axis of rotation.

Thank you for the offer, but I think I will decline. All I wanted to say in the article was a bit about what eigenvectors and eigenvalues are, and a brief bit on how to find them. Adding what you suggested would go well beyond the main thrust of the article.
ibkev said: Eigenvalues/vectors is something I've often wanted to learn more about, so I really appreciate the effort that went into writing this article, Mark. The problem is that I feel like I've been shown a beautiful piece of abstract art with lots of carefully thought-out splatters, but the engineer in me cries out... "But what is it for?" :)

My background isn't in engineering, so I'm not aware of how eigenvalues and eigenvectors are applicable to engineering disciplines, if at all. An important application of these ideas is in diagonalizing square matrices to solve systems of differential equations. A few other applications are listed in this Wikipedia article: https://en.wikipedia.org/wiki/Eigenvalues_and_eigenvectors
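The diagonalization route mentioned above can be sketched with numpy; the matrix here is a made-up example, and `np.linalg.eig` returns the eigenvalues together with a matrix whose columns are eigenvectors:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors of A
D = np.diag(eigvals)

# Diagonalization: A = P D P^{-1}.  For the system x' = A x, the change
# of variables y = P^{-1} x decouples it into scalar equations
# y_i' = lambda_i * y_i, each solved by y_i(t) = y_i(0) * exp(lambda_i * t).
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
print(eigvals)  # 3 and 1, in some order
```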
smodak said: Sorry, my LaTeX was all messed up. Here is what I meant to say... "All we can be sure of is that ##\det(A - \lambda I)## must be zero." How do you arrive at this? Are you saying that ##\vec{x}## cannot be ##\vec{0}## and ##A - \lambda I## may not be the zero matrix?
=> ##A - \lambda I## does not have an inverse.
=> ##\det(A - \lambda I)## must be 0.
Is that the reasoning?

The full quote near the beginning of the article is this:
In the last equation above, one solution would be ##\vec{x} = \vec{0}##, but we don’t allow this possibility, because an eigenvector has to be nonzero. Another solution would be ##A – λI = \vec{0}##, but because of the way matrix multiplication is defined, a matrix times a vector can result in zero even if neither the matrix nor the vector are zero. All we can be sure of is that the determinant of |A – λI| must be zero.
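For a concrete 2x2 instance of the condition ##|A - \lambda I| = 0##, here is a sketch with numpy (the matrix is made up for illustration); expanding the determinant gives the characteristic polynomial, whose roots are the eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# det(A - tI) = (4 - t)(3 - t) - 1*2 = t^2 - 7t + 10 = (t - 5)(t - 2)
coeffs = np.poly(A)      # characteristic polynomial coefficients
print(coeffs)            # [ 1. -7. 10.]
print(np.roots(coeffs))  # eigenvalues 5 and 2
```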
kaushikquanta said: Excellent, but please explain: "when a matrix A multiplies an eigenvector, the result is a vector in the same (or possibly opposite) direction."

This is from the basic definition of an eigenvector: if ##A\vec v = \lambda \vec v## with ##\lambda## a nonzero real number, then ##A\vec v## is just ##\vec v## scaled by ##\lambda##, so it lies along the same line as ##\vec v##, pointing the opposite way when ##\lambda < 0##.
ibkev said: but the engineer in me cries out... "But what is it for?" :)

Eigenvectors and eigenvalues describe the behavior of a linear transformation: an eigenvector is a nonzero vector whose direction is unchanged (or exactly reversed) by the transformation, and its eigenvalue is the scalar by which it is stretched or shrunk. They matter because they simplify complex systems and expose their underlying structure, which is why they appear across physics, engineering, and computer science: in analyzing physical systems such as quantum mechanical ones, and in data analysis and machine learning.

To find them, solve the characteristic equation ##\det(A - \lambda I) = 0## for the eigenvalues, then for each eigenvalue ##\lambda## find the nonzero solutions of ##(A - \lambda I)\vec x = \vec 0##; iterative methods such as power iteration can also be used. Eigenvalues and eigenvectors can be complex, and in some cases complex values are needed to describe a system fully. In quantum mechanics, for example, the energy levels of a system are the (real) eigenvalues of a Hermitian operator, while genuinely complex eigenvalues arise for transformations such as rotations.
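Both points above can be illustrated in a few lines of numpy: a minimal power-iteration sketch (the matrix is made up, and `power_iteration` is a hypothetical helper name), plus a real rotation matrix whose eigenvalues come out complex:

```python
import numpy as np

def power_iteration(A, iters=200):
    """Estimate the dominant eigenvalue/eigenvector of A by
    repeatedly multiplying by A and normalizing."""
    x = np.arange(1.0, A.shape[0] + 1)  # arbitrary nonzero start
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x @ A @ x, x  # Rayleigh quotient, eigenvector estimate

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, v = power_iteration(A)
print(lam)  # close to 3.0, the dominant eigenvalue of A

# A real matrix can have complex eigenvalues: a 90-degree rotation
# moves every nonzero real vector off its own line.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
print(np.linalg.eig(R)[0])  # +1j and -1j
```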