Gram-Schmidt Orthonormalization ... Remarks by Garling, Section 11.4

  • #1
Math Amateur
I am reading D. J. H. Garling's book: "A Course in Mathematical Analysis: Volume II: Metric and Topological Spaces, Functions of a Vector Variable" ... ...

I am focused on Chapter 11: Metric Spaces and Normed Spaces ... ...

I need some help in order to understand the meaning of, and the reason for, some remarks made by Garling after Theorem 11.4.1, Gram-Schmidt Orthonormalization ...

Theorem 11.4.1 and its proof, followed by remarks by Garling, read as follows:

View attachment 8968
View attachment 8969

In his remarks just after the proof of Theorem 11.4.1, Garling writes the following:

" ... ... Note that if \(\displaystyle ( e_1, \ ... \ ... , e_k)\) is an orthonormal sequence and \(\displaystyle \sum_{ j = 1 }^k x_j e_j = 0 \text{ then } x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle = 0 \) for \(\displaystyle 1 \le i \le k\); ... ... "Can someone please explain how/why \(\displaystyle x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle\) ...

... where did this expression come from ... ?

Peter

NOTE: There are some typos on this page concerning the dimension of the space ... but they are pretty obvious and harmless, I think ...

EDIT ... on reflection ... it may be worth expanding the term \(\displaystyle \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle\) ... but then how do we deal with the \(\displaystyle x_j e_j\) terms? We would of course look to exploit \(\displaystyle \langle e_j, e_i \rangle = 0\) when \(\displaystyle j \neq i\) ... and \(\displaystyle \langle e_i, e_i \rangle = 1\) ...
 

Attachments

  • Garling - 1 - Theorem 11.4.1 ... G_S Orthonormalisation plus Remarks ... PART 1 .png
  • Garling - 2 - Theorem 11.4.1 ... G_S Orthonormalisation plus Remarks ... PART 2 .png
  • #2
Peter said:
Can someone please explain how/why \(\displaystyle x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle\) ...

... where did this expression come from ... ?
The linearity and scalar multiplication properties of the inner product show that $ \bigl\langle \sum_{ j = 1 }^k x_j e_j , e_i \bigr\rangle = \sum_{ j = 1 }^k\langle x_j e_j , e_i \rangle = \sum_{ j = 1 }^k x_j\langle e_j , e_i \rangle.$ The orthonormal sequence has the property that $ \langle e_j , e_i \rangle $ is zero if $j\ne i$, and is $1$ when $j=i$. So that last sum reduces to the single term $x_i$.
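As a quick numerical illustration of this identity, here is a minimal NumPy sketch (the orthonormal vectors are taken to be the columns of a matrix $Q$ from a QR factorisation of a random matrix; all names and sizes are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 4

# The columns of Q form an orthonormal sequence e_1, ..., e_k.
Q, _ = np.linalg.qr(rng.standard_normal((k, k)))

# Pick arbitrary scalar coefficients x_1, ..., x_k and form v = sum_j x_j e_j.
x = rng.standard_normal(k)
v = Q @ x

# <v, e_i> recovers the coefficient x_i for each i.
recovered = np.array([np.dot(v, Q[:, i]) for i in range(k)])
print(np.allclose(recovered, x))  # True
```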

Peter said:
NOTE: There are some typos on this page concerning the dimension of the space ... but they are pretty obvious and harmless I think ...
Garling starts the section by announcing that his spaces will all have dimension $n$. But then he immediately states a theorem in which the space has dimension $d$. He then sticks with $d$ consistently as the dimension of the space. But at one point he writes "Thus $(e_1,\ldots,e_n)$ is a basis for $W_j$." In that sentence, the $n$ should be $j$.

I did not spot any other mistakes.
 
  • #3
Opalg said:
The linearity and scalar multiplication properties of the inner product show that $ \bigl\langle \sum_{ j = 1 }^k x_j e_j , e_i \bigr\rangle = \sum_{ j = 1 }^k\langle x_j e_j , e_i \rangle = \sum_{ j = 1 }^k x_j\langle e_j , e_i \rangle.$ The orthonormal sequence has the property that $ \langle e_j , e_i \rangle $ is zero if $j\ne i$, and is $1$ when $j=i$. So that last sum reduces to the single term $x_i$.Garling starts the section by announcing that his spaces will all have dimension $n$. But then he immediately states a theorem in which the space has dimension $d$. He then sticks with $d$ consistently as the dimension of the space. But at one point he writes "Thus $(e_1,\ldots,e_n)$ is a basis for $W_j$." In that sentence, the $n$ should be $j$.

I did not spot any other mistakes.
Thanks for the help Opalg ...

But ... just a clarification ...

You write:

" ... ... The linearity and scalar multiplication properties of the inner product show that $ \bigl\langle \sum_{ j = 1 }^k x_j e_j , e_i \bigr\rangle = \sum_{ j = 1 }^k\langle x_j e_j , e_i \rangle = \sum_{ j = 1 }^k x_j\langle e_j , e_i \rangle.$ ... ..."

However, you seem to be treating \(\displaystyle x_j\) as a scalar ... but isn't \(\displaystyle x_j\) a vector?

By the way, I agree with you on the typos ... I find that they can be slightly disconcerting ...

Thanks again ...

Peter
 
  • #4
Peter said:
you seem to be treating \(\displaystyle x_j\) as a scalar ... but isn't \(\displaystyle x_j\) a vector ...
No. In the expression $x_je_j$, $x_j$ is a scalar and $e_j$ is a vector. Garling typically writes $x = \sum_{ j = 1 }^k x_j e_j$ to express a vector $x$ as a sum of components $x_je_j$, where $x_j$ is the scalar component (or coordinate) of $x$ in the direction of the basis vector $e_j$.
 
  • #5
Opalg said:
No. In the expression $x_je_j$, $x_j$ is a scalar and $e_j$ is a vector. Garling typically writes $x = \sum_{ j = 1 }^k x_j e_j$ to express a vector $x$ as a sum of components $x_je_j$, where $x_j$ is the scalar component (or coordinate) of $x$ in the direction of the basis vector $e_j$.
Oh Lord ... how are we supposed to tell what's a scalar and what's a vector when just above in the proof \(\displaystyle x_1, \ ... \ , x_d\) are basis vectors ... ! ... how confusing ...

So ... the \(\displaystyle x_j\) in \(\displaystyle \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle\) have nothing to do with the \(\displaystyle x_j\) in the basis \(\displaystyle (x_1, \ ... \ ... , x_d)\) ... ?

... ... why not use \(\displaystyle \lambda_j\) instead of \(\displaystyle x_j\) in \(\displaystyle \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle\) ... ... ?

Just another clarification ... I can see that \(\displaystyle x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle\) ... but why is the expression equal to \(\displaystyle 0\) ...?
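(For reference, the chain of equalities in Garling's remark, written out in full, is

\(\displaystyle x_i = \left\langle \sum_{ j = 1 }^k x_j e_j , e_i \right\rangle = \langle 0 , e_i \rangle = 0 ,\)

where the middle step just substitutes the hypothesis \(\displaystyle \sum_{ j = 1 }^k x_j e_j = 0\), and the inner product of the zero vector with any vector is \(\displaystyle 0\).)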

Peter
 
  • #6
Peter said:
Oh Lord ... how are we supposed to tell what's a scalar and what's a vector when just above in the proof \(\displaystyle x_1, \ ... \ , x_d\) are basis vectors ... ! ... how confusing ...
I agree, it's shockingly bad notation. Next time I see Ben Garling I'll have a word with him. (Wait)

One thing you can be sure of is that in a vector space, the only multiplication that can occur is scalar multiplication. There is no vector multiplication. In a product of the form $xe$ the scalar will (almost?) invariably come before the vector. So if it hasn't already been made clear, you can expect that $x$ must be the scalar and $e$ the vector.
 
  • #7
Opalg said:
I agree, it's shockingly bad notation. Next time I see Ben Garling I'll have a word with him. (Wait)

One thing you can be sure of is that in a vector space, the only multiplication that can occur is scalar multiplication. There is no vector multiplication. In a product of the form $xe$ the scalar will (almost?) invariably come before the vector. So if it hasn't already been made clear, you can expect that $x$ must be the scalar and $e$ the vector.
Thanks for all your help, Opalg ...

It has helped me no end ... !

Wonderful that you know Ben Garling ... his 3 volumes on mathematical analysis are comprehensive and inspiring ...

Thanks again ...

Peter
 

Related to Gram-Schmidt Orthonormalization ... Remarks by Garling, Section 11.4

1. What is Gram-Schmidt Orthonormalization?

Gram-Schmidt Orthonormalization is a mathematical process used to convert a set of linearly independent vectors into a set of orthonormal vectors. This process is commonly used in linear algebra and functional analysis to simplify calculations and solve problems involving vectors.

2. What is the purpose of Gram-Schmidt Orthonormalization?

The main purpose of Gram-Schmidt Orthonormalization is to create a set of orthonormal vectors, which have a length of 1 and are perpendicular to each other. This makes it easier to perform calculations involving these vectors, as well as simplifying proofs and solving problems in linear algebra and functional analysis.

3. How does Gram-Schmidt Orthonormalization work?

The Gram-Schmidt Orthonormalization process takes a set of linearly independent vectors and builds a new set of orthonormal vectors one at a time: from each vector it subtracts its projections onto the previously constructed orthonormal vectors, and then normalises what remains to have length 1. This is repeated until every vector in the original set has been converted.
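A minimal Python sketch of the procedure just described (using NumPy; the function name, tolerance, and example vectors are illustrative only):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalise a list of linearly independent vectors
    by the classical Gram-Schmidt procedure described above."""
    basis = []
    for v in vectors:
        v = np.asarray(v, dtype=float)
        # Subtract from v its projections onto the previously built vectors ...
        w = v - sum(np.dot(v, e) * e for e in basis)
        # ... then normalise what is left to unit length.
        norm = np.linalg.norm(w)
        if norm < 1e-12:
            raise ValueError("vectors appear to be linearly dependent")
        basis.append(w / norm)
    return basis

# Example: orthonormalise two vectors in R^3.
e1, e2 = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
print(np.dot(e1, e2), np.linalg.norm(e1))  # approximately 0.0 and 1.0
```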

4. What are the benefits of using Gram-Schmidt Orthonormalization?

One of the main benefits of using Gram-Schmidt Orthonormalization is that it simplifies calculations involving vectors. It also makes it easier to prove theorems and solve problems in linear algebra and functional analysis. Additionally, orthonormal vectors have many useful properties, such as being orthogonal and having a length of 1, which can be used to simplify and solve equations.

5. Are there any limitations to using Gram-Schmidt Orthonormalization?

While Gram-Schmidt Orthonormalization is a useful process, it does have some limitations. If the original set of vectors is not linearly independent, the process breaks down: at some step the subtraction of projections produces the zero vector, which cannot be normalised (in practice such vectors are simply discarded). In addition, the classical form of the process can lose orthogonality when carried out in floating-point arithmetic, and it can be computationally expensive for large sets of vectors, so it is not always the most efficient or most stable method for problems involving many vectors.
