Existence of basis for P_2 with no polynomial of degree 1

In summary, we discussed the question of whether there is a basis for the vector space of polynomials of degree 2 or less consisting of three polynomial vectors ##\{ p_1, p_2, p_3 \}##, none of which is a polynomial of degree 1. We determined that the standard basis for this vector space, ##\{1, t, t^2\}##, does not meet the criteria because it contains a polynomial of degree 1, but that other bases do. We also discussed how to construct other bases from a standard basis and what it takes for a vector space to have only a finite number of bases. Finally, we examined the notation ##\mathbb{Z}_p##, which denotes the set of possible remainders under integer division by a prime ##p##.
  • #1
Mr Davis 97
I have the following question: Is there a basis for the vector space of polynomials of degree 2 or less consisting of three polynomial vectors ##\{ p_1, p_2, p_3 \}##, where none is a polynomial of degree 1?

We know that the standard basis for the vector space is ##\{1, t, t^2\}##. However, this wouldn't be allowed because there is a polynomial of degree 1 in this basis.

I'm thinking that there is not a basis without a polynomial of degree 1, but can't seem to formalize it.
 
  • #2
Mr Davis 97 said:
I have the following question: Is there a basis for the vector space of polynomials of degree 2 or less consisting of three polynomial vectors ##\{ p_1, p_2, p_3 \}##, where none is a polynomial of degree 1?

We know that the standard basis for the vector space is ##\{1, t, t^2\}##. However, this wouldn't be allowed because there is a polynomial of degree 1 in this basis.

I'm thinking that there is not a basis without a polynomial of degree 1, but can't seem to formalize it.
Assume there is a basis, ##\{p_1(t),p_2(t),p_3(t)\}##. Since ##q(t)=t## is in the vector space, what does this mean?
 
  • #3
How about ##\{t^2,\ t^2+1,\ t^2+t\}##?
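One way to verify this candidate (a minimal numpy sketch, not part of the original post) is to write each polynomial's coefficients in the standard basis ##\{1, t, t^2\}## and check that the resulting matrix is invertible:

```python
import numpy as np

# Coefficients of t^2, t^2 + 1, t^2 + t in the standard basis {1, t, t^2}.
# Each row is one candidate basis polynomial: [constant, t, t^2].
M = np.array([
    [0, 0, 1],   # t^2
    [1, 0, 1],   # t^2 + 1
    [0, 1, 1],   # t^2 + t
])

# Three vectors in a 3-dimensional space form a basis iff this matrix is invertible.
print(np.linalg.det(M))           # 1.0 (nonzero), so it is a basis
print(np.linalg.matrix_rank(M))   # 3
```

Since each of the three polynomials has degree 2, none has degree 1, which answers the original question affirmatively.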
 
  • #4
The usual basis for ##\mathbb{R}^3## is (1,0,0), (0,1,0), (0,0,1), but isn't (1,1,1), (0,1,1), (0,0,1) also a basis? What does that translate into for polynomials, where (a,b,c) represents ##a + bX + cX^2##?
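Checking this hint (an illustrative numpy sketch, assuming the identification ##(a,b,c) \leftrightarrow a + bX + cX^2## stated above):

```python
import numpy as np

# Rows are the proposed basis vectors (1,1,1), (0,1,1), (0,0,1) of R^3.
B = np.array([
    [1, 1, 1],
    [0, 1, 1],
    [0, 0, 1],
])

# A triangular matrix with nonzero diagonal entries is invertible, so this is a basis.
print(np.linalg.det(B))   # 1.0

# Under (a, b, c) <-> a + b*X + c*X^2 these rows correspond to
# 1 + X + X^2,  X + X^2,  X^2 -- each of degree 2, so none is a polynomial of degree 1.
```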
 
  • #5
Thanks, it makes sense now. In general, if I know what the standard basis is for a vector space, how could I construct other bases from that standard basis?

Also, are there any examples of vector spaces where there are a small finite number of bases?
 
  • #6
Mr Davis 97 said:
Thanks, it makes sense now. In general, if I know what the standard basis is for a vector space, how could I construct other bases from that standard basis?
You can take any regular (i.e. invertible) square matrix. Its row or column vectors, interpreted in your "standard" basis, form a new basis.
Moreover, the regular matrices over the reals are dense in the set of all matrices, which means: given any matrix, an arbitrarily small change of its entries can make it regular, so almost any matrix you write down will do.
Also, are there any examples of vector spaces where there are a small finite number of bases?
To have only a finite number of bases, you first of all need a finite field, for otherwise, given a basis ##\{b_1,\dots ,b_n\}## and a nonzero scalar ##\beta## of your field, ##\{\beta \cdot b_1,\dots ,b_n\}## is also a basis, and infinitely many choices of ##\beta## give infinitely many bases.
So, e.g., ##\mathbb{Z}_p^n## has only a finite number of possible bases over ##\mathbb{Z}_p##.
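As a concrete illustration of this recipe (a minimal numpy sketch, not part of the original post, assuming numpy is available): draw a random square matrix, verify it is regular, and read off its rows as coordinates of new basis vectors in the standard basis ##\{1, t, t^2\}##:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 3x3 matrix; with probability 1 it is regular (invertible).
A = rng.random((3, 3))
assert abs(np.linalg.det(A)) > 1e-12, "unlucky draw: matrix is (numerically) singular"

# Each row (a, b, c) is read as the polynomial a*1 + b*t + c*t^2 in the
# standard basis {1, t, t^2}; the three rows then form a new basis of P_2.
for a, b, c in A:
    print(f"{a:.3f} + {b:.3f} t + {c:.3f} t^2")
```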
 
  • #7
fresh_42 said:
You can take any regular (i.e. invertible) square matrix. Its row or column vectors, interpreted in your "standard" basis, form a new basis.
Moreover, the regular matrices over the reals are dense in the set of all matrices, which means: given any matrix, an arbitrarily small change of its entries can make it regular, so almost any matrix you write down will do.

To have only a finite number of bases, you first of all need a finite field, for otherwise, given a basis ##\{b_1,\dots ,b_n\}## and a nonzero scalar ##\beta## of your field, ##\{\beta \cdot b_1,\dots ,b_n\}## is also a basis, and infinitely many choices of ##\beta## give infinitely many bases.
So, e.g., ##\mathbb{Z}_p^n## has only a finite number of possible bases over ##\mathbb{Z}_p##.

What does the notation with the ##\mathbb{Z}## stand for?
 
  • #8
Math_QED said:
What does the notation with the ##\mathbb{Z}## stand for?
##\mathbb{Z}_p## denotes the set ##\{0,1,2, \dots ,p-1\}## for a prime number ##p##; its elements are the possible remainders under integer division by ##p##. If ##p## is a prime, this set carries an addition and a multiplication that satisfy all the laws of a field, such as associativity, distributivity, and so on.

An alternative description is:
Call two integers equivalent if and only if they leave the same remainder on division by ##p##. This splits ##\mathbb{Z}## into the ##p## subsets ##0+p \mathbb{Z}, 1+p \mathbb{Z}, 2+ p \mathbb{Z}, \dots , (p-1)+p \mathbb{Z}##. These sets are called cosets with respect to ##p\mathbb{Z}##, or equivalence classes. They carry a field structure, i.e. we can add, subtract, multiply and divide them.

Everyday examples are ##\mathbb{Z}_2 = \{0,1\}##, with which our computers and the light switch on the wall work, and ##\mathbb{Z}_{12}##, which shows us the hours on the clock. But ##12## isn't prime, which means we cannot divide by every nonzero number: ##3 \cdot 4 = 0## (three times ##4## hours brings us back to the zero position), so division by ##4## is not possible in this case. That is why ##p## has to be prime.
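A small Python sketch (not part of the original post) that illustrates the difference between ##\mathbb{Z}_{12}## and a prime modulus such as ##\mathbb{Z}_5##:

```python
# In Z_12, 3 * 4 wraps around to 0, so 4 has no multiplicative inverse.
print((3 * 4) % 12)                                   # 0
print([x for x in range(12) if (4 * x) % 12 == 1])    # [] -- no inverse exists

# In Z_5 (prime modulus), every nonzero element has an inverse.
for a in range(1, 5):
    inv = next(x for x in range(1, 5) if (a * x) % 5 == 1)
    print(f"inverse of {a} mod 5 is {inv}")
```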
 
  • #9
fresh_42 said:
##\mathbb{Z}_p## denotes the set ##\{0,1,2, \dots ,p-1\}## for a prime number ##p##; its elements are the possible remainders under integer division by ##p##. If ##p## is a prime, this set carries an addition and a multiplication that satisfy all the laws of a field, such as associativity, distributivity, and so on.

An alternative description is:
Call two integers equivalent if and only if they leave the same remainder on division by ##p##. This splits ##\mathbb{Z}## into the ##p## subsets ##0+p \mathbb{Z}, 1+p \mathbb{Z}, 2+ p \mathbb{Z}, \dots , (p-1)+p \mathbb{Z}##. These sets are called cosets with respect to ##p\mathbb{Z}##, or equivalence classes. They carry a field structure, i.e. we can add, subtract, multiply and divide them.

Everyday examples are ##\mathbb{Z}_2 = \{0,1\}##, with which our computers and the light switch on the wall work, and ##\mathbb{Z}_{12}##, which shows us the hours on the clock. But ##12## isn't prime, which means we cannot divide by every nonzero number: ##3 \cdot 4 = 0## (three times ##4## hours brings us back to the zero position), so division by ##4## is not possible in this case. That is why ##p## has to be prime.

I might be missing something, but don't we need multiplicative inverses for a field?
 
  • #10
Math_QED said:
I might be missing something, but don't we need multiplicative inverses for a field?
Yes. Therefore ##\mathbb{Z}_{12}## isn't a field (only a ring). But all ##\mathbb{Z}_p## with ##p## prime are fields.
If you have a number ##a \neq 0## with ##a < p##, then ##a## and ##p## are coprime, so you can find integers ##\alpha \, , \, \beta## such that ##1=\alpha a + \beta p## (Bézout's identity). Reducing this equation modulo ##p## removes the ##\beta p## term, so ##\alpha a \equiv 1 \pmod p##, i.e. ##\alpha a = 1## in ##\mathbb{Z}_p##, or ##1/a = \alpha##.
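A short Python sketch (an illustration, not part of the original post) that finds the inverse this way with the extended Euclidean algorithm, and cross-checks it with Python's built-in pow:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and g == a*x + b*y."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(a, p):
    """Inverse of a in Z_p, assuming gcd(a, p) == 1 (e.g. p prime, 0 < a < p)."""
    g, alpha, _ = extended_gcd(a, p)
    assert g == 1, "a and p must be coprime"
    return alpha % p

print(inverse_mod(3, 7))   # 5, since 3 * 5 = 15 = 2*7 + 1
print(pow(3, -1, 7))       # 5 as well (Python 3.8+)
```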
 
  • #11
Mr Davis 97 said:
Thanks, it makes sense now. In general, if I know what the standard basis is for a vector space, how could I construct other bases from that standard basis?

For example, if we take the standard basis for polynomials of degree at most 2 with real coefficients to be ##\{1, x, x^2\}##, then ##\{1, 2.33\ x, x^2\}## is a different basis. So a "cheap" way to construct a different basis is to multiply some of the vectors in the standard basis by nonzero scalars.

If you want to do something more interesting, you need more complicated algorithms. The general idea would be to replace some of the vectors in the standard basis by linear combinations of themselves and other vectors in the standard basis without introducing any dependence in the new basis. For example, we could replace ##\{1,x,x^2\}## by ##\{1 + x, 1 - x, x^2\}## by using the observation that ## 1 = (1/2)((1+x) + (1-x)) ## and ## x = (1/2)( (1+x) - (1-x) ) ##.
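To make this concrete (a minimal sympy sketch, not from the original post), one can check that ##\{1+x,\, 1-x,\, x^2\}## is linearly independent and that ##1## and ##x## are recovered exactly as described:

```python
import sympy as sp

x = sp.symbols('x')
p1, p2, p3 = 1 + x, 1 - x, x**2

# Coordinates of p1, p2, p3 in the standard basis {1, x, x^2}, one row per polynomial.
M = sp.Matrix([[sp.expand(p).coeff(x, k) for k in range(3)] for p in (p1, p2, p3)])
print(M.det())   # -2, nonzero, so {1 + x, 1 - x, x^2} is a basis

# Recover the standard basis vectors 1 and x as in the post.
print(sp.expand(sp.Rational(1, 2) * (p1 + p2)))   # 1
print(sp.expand(sp.Rational(1, 2) * (p1 - p2)))   # x
```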

Also, are there any examples of vector spaces where there are a small finite number of bases?

That's an interesting question!

If you are studying a text where the field of scalars is assumed to be the real numbers or the complex numbers, the answer is "no", because of what I mentioned above about creating a new basis by taking a nonzero scalar multiple of a basis vector. However, as others have mentioned, there are examples of vector spaces where the field of scalars is finite. For example, we can try the set of polynomials of degree at most 2 with coefficients in the integers mod 3. Does that work? It has only a finite number of elements in it.
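For the suggested example (a small Python sketch, not part of the original post): the space of polynomials of degree at most 2 with coefficients in the integers mod 3 has ##3^3 = 27## elements, and one can count its ordered bases by brute force and compare with the standard formula ##(3^3-1)(3^3-3)(3^3-3^2)##:

```python
from itertools import product

p = 3
# Vectors in Z_3^3, i.e. coefficient triples (a, b, c) of a + b*x + c*x^2 mod 3.
vectors = list(product(range(p), repeat=3))

def det3(u, v, w):
    """Determinant of the 3x3 matrix with rows u, v, w (over the integers)."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

# An ordered triple is a basis over Z_3 iff its determinant is nonzero mod 3.
count = sum(1 for u, v, w in product(vectors, repeat=3) if det3(u, v, w) % p != 0)

print(count)                                      # 11232
print((p**3 - 1) * (p**3 - p) * (p**3 - p**2))    # 11232, the standard count
```

So yes, it works: the space has only finitely many bases.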

The topic of a "standard basis" highlights the distinction between a "vector" and a "representation of a vector". For example, it is tempting to declare that {(1,0,0), (0,1,0), (0,0,1)} is "the standard basis" for a 3-dimensional vector space. But what "(0,1,0)" represents is ambiguous. For example, if we consider the vector space of polynomials of degree at most 2 with real-valued coefficients and take its standard basis to be ##\{1, x, x^2\}##, then "(0,1,0)" represents the polynomial ##(0)(1) + (1)(x) + (0)(x^2)##. But in the context of a book on physics, the author would probably assume "(0,1,0)" represents a vector in the y-direction in a Cartesian coordinate system.

So, in an abstract vector space, declaring that a set like {(1,0,0), (0,1,0), (0,0,1)} is "the standard basis" doesn't really define what the standard basis is. A representation like "(0,1,0)" only has meaning after we have specified what the vectors in the "standard basis" are.

Furthermore, a basis is defined as a set of vectors with certain properties. In a set, the order of the elements doesn't matter. So ##\{1,x,x^2\}## is the same basis as ##\{1,x^2,x\}##. For a representation like "(0,1,0)" to have a definite meaning, we must not only specify the set of basis vectors, we must also specify that they are listed in a definite order.
 
  • #12
Thank you guys for the very detailed discussion. I have a few more questions.

If every real vector space is isomorphic to ##\mathbb{R}^n##, would finding a new (non-trivial) basis from some given basis for any real vector space be only as complicated as it is to find a new basis from a given basis in ##\mathbb{R}^n##? For example, would finding a new basis in ##\mathbb{P}^2## be only as hard as finding a new basis in ##\mathbb{R}^3##?

Also, to what extent is it possible to describe a vector without any reference to a basis? For example, if I were to describe a vector in ##\mathbb{R}^n##, it seems that I would always need to implicitly refer to the standard basis of ##\mathbb{R}^n##.
 
  • #13
Mr Davis 97 said:
Thank you guys for the very detailed discussion. I have a few more questions

If every real vector space is isomorphic to ##\mathbb{R}^n##, ...
Every n-dimensional real vector space. There are also vector spaces of infinite dimension, e.g. the space of continuous functions.
... would finding a new (non-trivial) basis from some given basis for any real vector space be only as complicated as it is to find a new basis from a given basis in ##\mathbb{R}^n##? For example, would finding a new basis in ##\mathbb{P}^2## be only as hard as finding a new basis in ##\mathbb{R}^3##?
I'm not sure what you mean by "as complicated" or "as hard".
If you take a die and roll some numbers, preferably with a few digits after the decimal point, then the chances are high that the numbers, arranged as vectors, form a basis. Not so hard.
Also, to what extent is it possible to describe a vector without any reference to a basis? For example, if I were to describe a vector in ##\mathbb{R}^n##, it seems that I would always need to implicitly refer to the standard basis of ##\mathbb{R}^n##.
That's an interesting question. A vector is basically an arrow: it starts somewhere, has a length, and points in a direction. How would you communicate such an arrow to me? "Somewhere" needs a map, "length" a measure or scale, and "direction" an orientation. Unless we sit together and talk about our drawings, something to measure all of this will be needed.

Another system than the one you probably have in mind when you say "standard basis" is polar coordinates, a measurement system based on lengths and angles.
 
  • #14
Mr Davis 97 said:
If every real vector space is isomorphic to ##\mathbb{R}^n##, would finding a new (non-trivial) basis from some given basis for any real vector space be only as complicated as it is to find a new basis from a given basis in ##\mathbb{R}^n##?

Every finite dimensional real vector space is isomorphic to the ##\mathbb{R}^n## of that dimension. However, keep in mind that the definition of "vector space" does not include the concept of the "norm" or "length" of a vector, nor the concept of an inner product or the angle between vectors. So an isomorphism between vector spaces does not imply that two real vector spaces of the same dimension are isomorphic with respect to the properties that come from a norm.

For example, would finding a new basis in ##\mathbb{P}^2## be only as hard as finding a new basis in ##\mathbb{R}^3##?
The two problems are essentially the same. The numerical calculations would be the same.

Also, to what extent is it possible to describe a vector without any reference to a basis? For example, if I were to describe a vector in ##\mathbb{R}^n##, it seems that I would always need to implicitly refer to the standard basis of ##\mathbb{R}^n##.

For example, if you have a physical situation where a vector is described as "the force of the ladder against the wall", there isn't any reference to a basis until you impose a coordinate system on the problem. For example "The force of the ladder against the wall" isn't necessarily "in the x-direction" or "in the y-direction".

Furthermore, a "coordinate system" is a distinct concept from that of representing a vector in terms of basis vectors. For example, the polar coordinate representation of a vector involves two numbers, but they are not coefficients of basis vectors for ##\mathbb{R}^2##. The representation of vectors in a form like (0,1,0) is a particular coordinate system for vectors, but not the only possible one.

The general concept of a "coordinate system" permits the same thing to be represented by two different sets of coordinates. For example, in ##\mathbb{R}^2## we can take the 3 vectors {(1,0), (0,1), (1,1)} and represent the vector (2,2) as 2(1,0) + 2(0,1) + 0(1,1) or as 0(1,0) + 0(0,1) + 2(1,1). So we can give (2,2) the coordinates (2,2,0) or (0,0,2). This is an inconvenient coordinate system for most purposes, but it does satisfy the definition of a coordinate system.
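A quick check of those two representations (an illustrative numpy sketch, not part of the original post):

```python
import numpy as np

e1, e2, d = np.array([1, 0]), np.array([0, 1]), np.array([1, 1])

# Two different coordinate tuples for the same vector (2, 2) in the spanning set {e1, e2, d}.
print(2 * e1 + 2 * e2 + 0 * d)   # [2 2]
print(0 * e1 + 0 * e2 + 2 * d)   # [2 2]
```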
 

Related to Existence of basis for P_2 with no polynomial of degree 1

1. What is the "existence of basis" for P_2?

The existence of a basis for P_2 refers to whether there exists a set of polynomials that is linearly independent and spans the entire space of polynomials of degree 2 or lower. In other words, can we find a finite set of polynomials whose linear combinations produce every polynomial of degree 2 or lower, with none of them being redundant?

2. Why is it important to know if there exists a basis for P_2?

Understanding the existence of a basis for P_2 is important in many areas of mathematics and science. It can help us solve problems involving polynomials, understand the structure of vector spaces, and even provide insight into the nature of functions and equations.

3. What does it mean for there to be "no polynomial of degree 1" in the basis for P_2?

If there is no polynomial of degree 1 in the basis for P_2, it means that no element of the basis has degree exactly 1, i.e. none of the basis polynomials is of the form ##a + bt## with ##b \neq 0##. A basis polynomial may still contain a linear term as long as its degree is 2 (for example ##t^2 + t##), and the basis still spans all of P_2, so every polynomial of degree 2 or lower can be represented with it.

4. How can we prove the existence of basis for P_2 with no polynomial of degree 1?

To prove the existence of a basis for P_2 with no polynomial of degree 1, it suffices to exhibit one, such as ##\{t^2,\ t^2+1,\ t^2+t\}## from the discussion above, and to verify with basic linear algebra that it is linearly independent. Since it consists of three independent vectors in a three-dimensional space, it is a basis, and none of its elements has degree 1.

5. Are there any real-world applications for the existence of basis for P_2 with no polynomial of degree 1?

Yes, there are real-world settings, such as signal processing, data compression, and the approximation of functions, where one works with spaces of polynomials and must choose a basis. Since a basis with no degree-1 polynomial still spans all of P_2, such a choice is always available when it happens to be convenient for the problem at hand.
