    Is it a basis? An example

    In geometry, a vector is understood as a directed segment, and vectors obtained from one another by parallel translation are considered equal. All equal vectors are treated as the same vector. The origin of a vector can be placed anywhere in space or on a plane.

    If the coordinates of the endpoints of a vector in space are given, A(x1, y1, z1) and B(x2, y2, z2), then

    AB = (x2 - x1, y2 - y1, z2 - z1). (1)

    A similar formula holds in the plane. This means that a vector can be written as a row of coordinates. Operations on vectors, namely addition and multiplication by a number, are performed on rows componentwise. This makes it possible to broaden the concept of a vector by treating any row of numbers as a vector. For example, a solution of a system of linear equations, as well as any set of values of the variables of the system, can be viewed as a vector.

    On strings of the same length, the addition operation is performed according to the rule

    (a1, a2, ..., an) + (b1, b2, ..., bn) = (a1 + b1, a2 + b2, ..., an + bn). (2)

    Multiplication of a string by a number is performed according to the rule

    λ(a1, a2, ..., an) = (λa1, λa2, ..., λan). (3)

    The set of row vectors of a given length n, with the indicated operations of vector addition and multiplication by a number, forms an algebraic structure called an n-dimensional linear space.

    A linear combination of vectors a1, ..., am is a vector λ1 a1 + λ2 a2 + ... + λm am, where λ1, ..., λm are arbitrary coefficients.

    A system of vectors is called linearly dependent if there is a linear combination of it equal to the zero vector in which at least one coefficient is nonzero.

    A system of vectors is called linearly independent if, in every linear combination of it that equals the zero vector, all coefficients are zero.

    Thus, the question of the linear dependence of a system of vectors a1, ..., am reduces to solving the equation

    x1 a1 + x2 a2 + ... + xm am = 0. (4)

    If this equation has nonzero solutions, then the vector system is linearly dependent. If the zero solution is unique, then the vector system is linearly independent.

    To solve equation (4), for clarity, the vectors can be written not as rows but as columns.

    Then, performing the transformations on the left-hand side, we arrive at a system of linear equations equivalent to equation (4). The coefficient matrix of this system is formed by the coordinates of the original vectors arranged in columns. The column of constant terms is not needed here, since the system is homogeneous.
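    The column-matrix test described above can be sketched numerically. The following is a minimal illustration, assuming NumPy is available; the vectors v1, v2, v3 are made up for the demonstration and are not from the text.

```python
import numpy as np

# Hypothetical row vectors to test for linear dependence.
v1 = np.array([1, 2, 3])
v2 = np.array([2, 4, 6])   # v2 = 2*v1, so the system is dependent
v3 = np.array([0, 1, 1])

# Arrange the vectors as COLUMNS: the homogeneous system A x = 0
# then matches equation (4): x1*v1 + x2*v2 + x3*v3 = 0.
A = np.column_stack([v1, v2, v3])

rank = np.linalg.matrix_rank(A)
n_vectors = A.shape[1]

# rank < number of vectors  <=>  A x = 0 has a nonzero solution
# <=>  the vectors are linearly dependent.
print(rank < n_vectors)  # True here, since v2 = 2*v1
```

    A rank equal to the number of vectors would instead mean that only the zero solution exists, i.e. the system is linearly independent.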

    A basis of a system of vectors (finite or infinite; in particular, of the entire linear space) is a non-empty linearly independent subsystem through which every vector of the system can be expressed.

    Example 1.5.2. Find a basis of the system of vectors a1 = (1, 2, 2, 4), a2 = (2, 3, 5, 1), a3 = (3, 4, 8, -2), a4 = (2, 5, 0, 3) and express the other vectors through the basis.

    Solution. We build a matrix in which the coordinates of these vectors are arranged in columns: it is the matrix of the equation x1 a1 + x2 a2 + x3 a3 + x4 a4 = 0. We bring the matrix to row echelon form:

    1  2  3  2       1  2   3  2       1  2  3  2
    2  3  4  5   ~   0 -1  -2  1   ~   0 -1 -2  1
    2  5  8  0       0  1   2 -4       0  0  0 -3
    4  1 -2  3       0 -7 -14 -5       0  0  0  0

    A basis of this system of vectors is formed by the vectors a1, a2, a4, which correspond to the pivot (leading) columns. To express the vector a3, we solve the equation x1 a1 + x2 a2 + x4 a4 = a3. It reduces to a system of linear equations whose matrix is obtained from the initial one by moving the corresponding column to the place of the column of constant terms. Therefore, in reducing it to echelon form, the same transformations as above are performed. This means that we may use the echelon matrix already obtained, making the necessary permutation of columns in it: we place the pivot columns to the left of the vertical line, and the column corresponding to a3 to the right of it.

    We successively find:

    -3x4 = 0, so x4 = 0;

    -x2 = -2, so x2 = 2;

    x1 + 4 = 3, so x1 = -1;

    hence a3 = -a1 + 2a2.

    Comment. If several vectors have to be expressed through the basis, a corresponding system of linear equations is constructed for each of them. These systems differ only in their columns of constant terms, and each system is solved independently of the others.
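    The calculation of Example 1.5.2 can be checked numerically. This is an illustrative sketch, assuming NumPy is available; it uses the data of the example and the basis {a1, a2, a4}.

```python
import numpy as np

# Vectors from Example 1.5.2, used as columns of a matrix.
a1 = np.array([1, 2, 2, 4])
a2 = np.array([2, 3, 5, 1])
a3 = np.array([3, 4, 8, -2])
a4 = np.array([2, 5, 0, 3])

# Basis found in the example: a1, a2, a4 (the pivot columns).
B = np.column_stack([a1, a2, a4])

# Express a3 through the basis: solve B @ x = a3 in the least-squares
# sense; for a consistent system this returns the exact coefficients.
x, residual, rank, _ = np.linalg.lstsq(B, a3, rcond=None)
print(np.round(x, 10))  # coefficients x1, x2, x4 of a3 = x1*a1 + x2*a2 + x4*a4
```

    The result reproduces the expansion found by hand, a3 = -a1 + 2a2 + 0·a4.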

    Exercise 1.4. Find a basis of the system of vectors and express the remaining vectors through the basis:

    a) a1 = (1, 3, 2, 0), a2 = (3, 4, 2, 1), a3 = (1, -2, -2, 1), a4 = (3, 5, 1, 2);

    b) a1 = (2, 1, 2, 3), a2 = (1, 2, 2, 3), a3 = (3, -1, 2, 2), a4 = (4, -2, 2, 2);

    c) a1 = (1, 2, 3), a2 = (2, 4, 3), a3 = (3, 6, 6), a4 = (4, -2, 1), a5 = (2, -6, -2).

    In a given system of vectors a basis can usually be chosen in different ways, but all bases contain the same number of vectors. The number of vectors in a basis of a linear space is called the dimension of the space. For the n-dimensional linear space, n is the dimension, since this space has the standard basis e1 = (1, 0, ..., 0), e2 = (0, 1, ..., 0), ..., en = (0, 0, ..., 1). Through this basis any vector a = (a1, a2, ..., an) is expressed as follows:

    a = (a1, 0, ..., 0) + (0, a2, ..., 0) + ... + (0, 0, ..., an) =

    = a1 (1, 0, ..., 0) + a2 (0, 1, ..., 0) + ... + an (0, 0, ..., 1) = a1 e1 + a2 e2 + ... + an en.

    Thus, the components of the row vector a = (a1, a2, ..., an) are its coefficients in the expansion with respect to the standard basis.

    Straight lines on a plane

    The task of analytic geometry is the application of the coordinate method to geometric problems: a problem is translated into algebraic form and solved by means of algebra.

    An expression of the form λ1 A1 + λ2 A2 + ... + λn An is called a linear combination of the vectors A1, A2, ..., An with coefficients λ1, λ2, ..., λn.

    Definition of linear dependence of a system of vectors

    A system of vectors A1, A2, ..., An is called linearly dependent if there is a nonzero set of numbers λ1, λ2, ..., λn for which the linear combination λ1 A1 + λ2 A2 + ... + λn An equals the zero vector, that is, if the system of equations A1 x1 + A2 x2 + ... + An xn = Θ has a nonzero solution.
    A set of numbers λ1, λ2, ..., λn is nonzero if at least one of the numbers λ1, λ2, ..., λn is nonzero.

    Definition of linear independence of a system of vectors

    A system of vectors A1, A2, ..., An is called linearly independent if the linear combination λ1 A1 + λ2 A2 + ... + λn An equals the zero vector only for the zero set of numbers λ1, λ2, ..., λn, that is, if the system of equations A1 x1 + A2 x2 + ... + An xn = Θ has only the zero solution.

    Example 29.1

    Check whether the system of vectors A1, A2, A3 is linearly dependent.

    Solution:

    1. We compose the system of equations A1 x1 + A2 x2 + A3 x3 = Θ.

    2. We solve it by the Gauss method. The Jordan transformations of the system are shown in Table 29.1. The right-hand sides of the system are not written out in the calculation, since they are equal to zero and do not change under Jordan transformations.

    3. From the last three rows of the table we write down the resolved system, which is equivalent to the original one:

    4. We get the general solution of the system:

    5. Setting the free variable x3 = 1, we obtain a particular nonzero solution X = (-3, 2, 1).

    Answer: for the nonzero set of numbers (-3, 2, 1) the linear combination of the vectors equals the zero vector: -3A1 + 2A2 + 1A3 = Θ. Consequently, the system of vectors is linearly dependent.
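    The same dependence test can be sketched with NumPy. The vectors of Example 29.1 themselves are not reproduced in the text, so the vectors below are stand-ins constructed so that -3A1 + 2A2 + 1A3 = Θ holds, mirroring the answer.

```python
import numpy as np

# Stand-in vectors (not from the text), chosen so that
# -3*A1 + 2*A2 + 1*A3 = 0 by construction.
A1 = np.array([1, 2, 0, 1])
A2 = np.array([0, 1, 1, 2])
A3 = 3 * A1 - 2 * A2          # = (3, 4, -2, -1)

M = np.column_stack([A1, A2, A3])

# Dependence test: rank less than the number of vectors.
print(np.linalg.matrix_rank(M) < 3)               # True: dependent
# Verify the particular nonzero solution X = (-3, 2, 1):
print(np.allclose(-3 * A1 + 2 * A2 + 1 * A3, 0))  # True
```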

    Vector system properties

    Property (1)
    If the system of vectors is linearly dependent, then at least one of the vectors is expanded in terms of the rest and, conversely, if at least one of the vectors of the system is expanded in terms of the rest, then the system of vectors is linearly dependent.

    Property (2)
    If any subsystem of vectors is linearly dependent, then the whole system is linearly dependent.

    Property (3)
    If a system of vectors is linearly independent, then any of its subsystems is linearly independent.

    Property (4)
    Any system of vectors containing a zero vector is linearly dependent.

    Property (5)
    A system of n m-dimensional vectors is always linearly dependent if the number of vectors n is greater than their dimension (n > m).
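    Property (5) can be demonstrated numerically: the rank of a matrix of n rows of length m is at most m, so for n > m the rows cannot be independent. A small sketch, assuming NumPy; the random vectors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

m = 3            # dimension of the vectors
n = 4            # number of vectors, n > m
vectors = rng.integers(-5, 5, size=(n, m))  # rows are m-dimensional vectors

# The rank of an n-by-m matrix is at most min(n, m) = m < n,
# so n vectors in m-dimensional space are always dependent.
rank = np.linalg.matrix_rank(vectors)
print(rank < n)  # True, regardless of the random draw
```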

    Vector system basis

    A basis of the system of vectors A1, A2, ..., An is a subsystem B1, B2, ..., Br (each of the vectors B1, B2, ..., Br is one of the vectors A1, A2, ..., An) that satisfies the following conditions:
    1. B1, B2, ..., Br is a linearly independent system of vectors;
    2. any vector Aj of the system A1, A2, ..., An is linearly expressed in terms of the vectors B1, B2, ..., Br.

    Here r is the number of vectors in the basis.

    Theorem 29.1 On the unit basis of a system of vectors.

    If a system of m-dimensional vectors contains m distinct unit vectors E1, E2, ..., Em, then these vectors form a basis of the system.

    Algorithm for finding the basis of a system of vectors

    To find a basis of the system of vectors A1, A2, ..., An it is necessary to:

    • compose the homogeneous system of equations corresponding to the system of vectors: A1 x1 + A2 x2 + ... + An xn = Θ;
    • reduce this system to row echelon form; the vectors corresponding to the pivot columns form a basis, and the remaining vectors are expressed through them.

    In the article on n-dimensional vectors, we came to the concept of a linear space generated by a set of n-dimensional vectors. Now we consider the no less important concepts of the dimension and basis of a vector space. They are directly related to the concept of a linearly independent system of vectors, so it is recommended to recall the basics of that topic as well.

    Let's introduce some definitions.

    Definition 1

    Dimension of vector space - the number corresponding to the maximum number of linearly independent vectors in this space.

    Definition 2

    Vector space basis - a set of linearly independent vectors, ordered and equal in number to the dimension of space.

    Consider the space of n-dimensional vectors. Its dimension is correspondingly equal to n. Let us take the system of n unit vectors:

    e(1) = (1, 0, ..., 0), e(2) = (0, 1, ..., 0), ..., e(n) = (0, 0, ..., 1)

    We use these vectors as the rows of a matrix A: it is the identity matrix of size n by n. The rank of this matrix is n. Therefore the system of vectors e(1), e(2), ..., e(n) is linearly independent, and not a single vector can be added to the system without violating its linear independence.

    Since the number of vectors in the system is n, the dimension of the space of n-dimensional vectors is n, and the unit vectors e(1), e(2), ..., e(n) are a basis of this space.
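    These two facts, independence of the n unit vectors and the impossibility of enlarging the system, can be checked in a short NumPy sketch (n = 4 is chosen for illustration).

```python
import numpy as np

n = 4
# Rows of the identity matrix are the unit vectors e(1), ..., e(n).
E = np.eye(n)
rank_e = np.linalg.matrix_rank(E)
print(rank_e)  # 4: the n unit vectors are linearly independent

# Appending any further n-dimensional vector leaves the rank at n,
# so the enlarged system of n + 1 vectors is linearly dependent.
extra = np.array([1.0, 2.0, 3.0, 4.0])
rank_extended = np.linalg.matrix_rank(np.vstack([E, extra]))
print(rank_extended)  # still 4
```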

    From the definition obtained we conclude: any system of n-dimensional vectors in which the number of vectors is less than n is not a basis of the space.

    If we swap the first and second vectors, we get the system of vectors e(2), e(1), ..., e(n). It will also be a basis of the n-dimensional vector space. Let us compose a matrix, taking the vectors of this system as its rows. The matrix can be obtained from the identity matrix by permuting the first two rows; its rank equals n. The system e(2), e(1), ..., e(n) is linearly independent and is a basis of the n-dimensional vector space.

    By rearranging other vectors in the original system, we obtain one more basis.

    We can also take a linearly independent system of non-unit vectors, and it too will be a basis of the n-dimensional vector space.

    Definition 3

    A vector space of dimension n has as many bases as there are linearly independent systems of n vectors of this space.

    The plane is a two-dimensional space - its basis will be any two non-collinear vectors. Any three non-coplanar vectors will serve as the basis of the three-dimensional space.

    Let's consider the application of this theory with specific examples.

    Example 1

    Initial data: vectors

    a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2)

    It is necessary to determine whether the indicated vectors are the basis of a three-dimensional vector space.

    Solution

    To solve the problem, we investigate the given system of vectors for linear dependence. Let's compose a matrix where the rows are the coordinates of the vectors. Let us determine the rank of the matrix.

    A = 3 -2  1
        2  1  2
        3 -1 -2

    det A = 3·1·(-2) + (-2)·2·3 + 1·2·(-1) - 1·1·3 - (-2)·2·(-2) - 3·2·(-1) = -25 ≠ 0 ⇒ Rank(A) = 3

    Consequently, the vectors specified by the condition of the problem are linearly independent, and their number is equal to the dimension of the vector space - they are the basis of the vector space.

    Answer: these vectors are the basis of the vector space.
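    The determinant test of Example 1 is easy to reproduce numerically. A sketch, assuming NumPy is available, with the data of the example:

```python
import numpy as np

# Rows are the coordinates of the vectors a, b, c from Example 1.
A = np.array([[3, -2, 1],
              [2, 1, 2],
              [3, -1, -2]], dtype=float)

det = np.linalg.det(A)
print(round(det))                      # -25: nonzero determinant
print(np.linalg.matrix_rank(A) == 3)   # True: the vectors form a basis of R^3
```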

    Example 2

    Initial data: vectors

    a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2), d = (0, 1, 2)

    It is necessary to determine whether the indicated system of vectors can be the basis of three-dimensional space.

    Solution

    The system of vectors in the problem statement is linearly dependent, since the maximum number of linearly independent three-dimensional vectors is 3. Thus, this system of vectors cannot serve as a basis of the three-dimensional vector space. Note, however, that the subsystem a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2) is a basis.

    Answer: the specified system of vectors is not a basis.

    Example 3

    Initial data: vectors

    a = (1, 2, 3, 3), b = (2, 5, 6, 8), c = (1, 3, 2, 4), d = (2, 5, 4, 7)

    Can they be the basis of the four-dimensional space?

    Solution

    Let's compose the matrix using the coordinates of the given vectors as rows

    A = 1 2 3 3
        2 5 6 8
        1 3 2 4
        2 5 4 7

    Using the Gauss method, we determine the rank of the matrix:

    1 2 3 3       1 2  3 3       1 2  3  3       1 2  3  3
    2 5 6 8   ~   0 1  0 2   ~   0 0 -1 -1   ~   0 0 -1 -1
    1 3 2 4       0 1 -1 1       0 0 -2 -1       0 0  0  1
    2 5 4 7       0 1 -2 1       (rows swapped to echelon order)

    ⇒ Rank(A) = 4

    Consequently, the system of given vectors is linearly independent and their number is equal to the dimension of the vector space - they are the basis of the four-dimensional vector space.

    Answer: the given vectors are the basis of the four-dimensional space.
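    The rank computation of Example 3 can be checked with NumPy; this is a sketch using the data of the example.

```python
import numpy as np

A = np.array([
    [1, 2, 3, 3],
    [2, 5, 6, 8],
    [1, 3, 2, 4],
    [2, 5, 4, 7],
])

# Rank 4 = number of vectors = dimension of the space: a basis of R^4.
r = np.linalg.matrix_rank(A)
print(r)  # 4
```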

    Example 4

    Initial data: vectors

    a(1) = (1, 2, -1, -2), a(2) = (0, 2, 1, -3), a(3) = (1, 0, 0, 5)

    Do they form a basis for a 4-dimensional space?

    Solution

    The original system of vectors is linearly independent, but the number of vectors in it is insufficient to become the basis of a four-dimensional space.

    Answer: no, they don't.

    Expansion of a vector in basis

    Let us assume that the vectors e(1), e(2), ..., e(n) form a basis of an n-dimensional vector space. Let us add to them some n-dimensional vector x: the resulting system of vectors becomes linearly dependent. The properties of linear dependence state that at least one vector of such a system can be linearly expressed in terms of the others; reformulating, at least one vector of a linearly dependent system can be expanded in terms of the remaining vectors.

    Thus, we came to the formulation of the most important theorem:

    Theorem

    Any vector of an n-dimensional vector space is uniquely expanded in a basis.

    Proof 1

    Let us prove this theorem:

    Take a basis of the n-dimensional vector space, e(1), e(2), ..., e(n). Make the system linearly dependent by adding the n-dimensional vector x to it. This vector can be linearly expressed in terms of the basis vectors:

    x = x1 e(1) + x2 e(2) + ... + xn e(n), where x1, x2, ..., xn are some numbers.

    Now let us prove that such an expansion is unique. Suppose this is not the case and there is another similar expansion:

    x = x~1 e(1) + x~2 e(2) + ... + x~n e(n), where x~1, x~2, ..., x~n are some numbers.

    Subtracting from the left and right sides of this equality, respectively, the left and right sides of the equality x = x1 e(1) + x2 e(2) + ... + xn e(n), we get:

    0 = (x~1 - x1) e(1) + (x~2 - x2) e(2) + ... + (x~n - xn) e(n)

    The system of basis vectors e(1), e(2), ..., e(n) is linearly independent; by the definition of linear independence of a system of vectors, the above equality is possible only if all the coefficients (x~1 - x1), (x~2 - x2), ..., (x~n - xn) equal zero. Hence x1 = x~1, x2 = x~2, ..., xn = x~n, and this proves that the expansion of the vector in the basis is unique.

    In this case, the coefficients x1, x2, ..., xn are called the coordinates of the vector x in the basis e(1), e(2), ..., e(n).

    The theorem proved above makes clear the expression "an n-dimensional vector x = (x1, x2, ..., xn) is given": a vector x of an n-dimensional vector space is considered, and its coordinates are given in a certain basis. The same vector will have different coordinates in a different basis of the n-dimensional space.

    Consider the following example: suppose that in some basis of an n-dimensional vector space a system of n linearly independent vectors

    e(1) = (e1(1), e2(1), ..., en(1)), e(2) = (e1(2), e2(2), ..., en(2)), ..., e(n) = (e1(n), e2(n), ..., en(n))

    is given, and also a vector x = (x1, x2, ..., xn).

    The vectors e(1), e(2), ..., e(n) in this case are themselves a basis of this vector space.

    Suppose it is necessary to determine the coordinates of the vector x in the basis e(1), e(2), ..., e(n); denote them x~1, x~2, ..., x~n.

    The vector x is then represented as follows:

    x = x~1 e(1) + x~2 e(2) + ... + x~n e(n)

    Let's write this expression in coordinate form:

    (x1, x2, ..., xn) = x~1 (e1(1), e2(1), ..., en(1)) + x~2 (e1(2), e2(2), ..., en(2)) + ... + x~n (e1(n), e2(n), ..., en(n)) =
    = (x~1 e1(1) + x~2 e1(2) + ... + x~n e1(n), x~1 e2(1) + x~2 e2(2) + ... + x~n e2(n), ..., x~1 en(1) + x~2 en(2) + ... + x~n en(n))

    The resulting equality is equivalent to a system of n linear algebraic expressions with n unknown linear variables x ~ 1, x ~ 2,. ... ... , x ~ n:

    x1 = x~1 e1(1) + x~2 e1(2) + ... + x~n e1(n)
    x2 = x~1 e2(1) + x~2 e2(2) + ... + x~n e2(n)
    ⋮
    xn = x~1 en(1) + x~2 en(2) + ... + x~n en(n)

    The matrix of this system will be as follows:

    e1(1) e1(2) ⋯ e1(n)
    e2(1) e2(2) ⋯ e2(n)
    ⋮     ⋮        ⋮
    en(1) en(2) ⋯ en(n)

    Denote this matrix by A; its columns are the vectors of the linearly independent system e(1), e(2), ..., e(n). The rank of the matrix is n and its determinant is nonzero, so the system of equations has a unique solution, which can be found in any convenient way: for example, by Cramer's rule or by the matrix method. In this way we determine the coordinates x~1, x~2, ..., x~n of the vector x in the basis e(1), e(2), ..., e(n).
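    In matrix form the task is simply to solve A x~ = x, where the columns of A are the basis vectors. A small sketch, assuming NumPy; the basis e1, e2, e3 and the vector x here are made-up illustrative values, not from the text.

```python
import numpy as np

# Assumed basis of R^3 (columns of A) and a vector x given in the
# standard basis; both are illustrative.
e1 = np.array([1.0, 0.0, 1.0])
e2 = np.array([0.0, 1.0, 1.0])
e3 = np.array([1.0, 1.0, 0.0])
A = np.column_stack([e1, e2, e3])

x = np.array([3.0, 5.0, 4.0])

# The coordinates x~ in the basis satisfy A @ x_tilde = x.
x_tilde = np.linalg.solve(A, x)
print(x_tilde)

# Check the expansion x = x~1*e1 + x~2*e2 + x~3*e3:
print(np.allclose(A @ x_tilde, x))  # True
```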

    Let's apply the considered theory to a specific example.

    Example 6

    Initial data: in a basis of three-dimensional space, the vectors

    e(1) = (1, -1, 1), e(2) = (3, 2, -5), e(3) = (2, 1, -3), x = (6, 2, -7)

    It is necessary to confirm that the system of vectors e(1), e(2), e(3) also serves as a basis of the given space, and to determine the coordinates of the vector x in this basis.

    Solution

    The system of vectors e(1), e(2), e(3) is a basis of three-dimensional space if it is linearly independent. We check this by determining the rank of the matrix A whose rows are the given vectors e(1), e(2), e(3).

    We use the Gauss method:

    A = 1 -1  1       1 -1  1       1 -1    1
        3  2 -5   ~   0  5 -8   ~   0  5   -8
        2  1 -3       0  3 -5       0  0 -1/5

    Rank(A) = 3. Thus, the system of vectors e(1), e(2), e(3) is linearly independent and forms a basis.

    Let the vector x have coordinates x~1, x~2, x~3 in this basis. The relationship between these coordinates is given by the system:

    x1 = x~1 e1(1) + x~2 e1(2) + x~3 e1(3)
    x2 = x~1 e2(1) + x~2 e2(2) + x~3 e2(3)
    x3 = x~1 e3(1) + x~2 e3(2) + x~3 e3(3)

    Substituting the values from the conditions of the problem:

    x~1 + 3x~2 + 2x~3 = 6
    -x~1 + 2x~2 + x~3 = 2
    x~1 - 5x~2 - 3x~3 = -7

    Let's solve the system of equations by the Cramer method:

    Δ = |1 3 2; -1 2 1; 1 -5 -3| = -1
    Δx~1 = |6 3 2; 2 2 1; -7 -5 -3| = -1,  x~1 = Δx~1 / Δ = -1 / -1 = 1
    Δx~2 = |1 6 2; -1 2 1; 1 -7 -3| = -1,  x~2 = Δx~2 / Δ = -1 / -1 = 1
    Δx~3 = |1 3 6; -1 2 2; 1 -5 -7| = -1,  x~3 = Δx~3 / Δ = -1 / -1 = 1

    So, the vector x in the basis e(1), e(2), e(3) has coordinates x~1 = 1, x~2 = 1, x~3 = 1.

    Answer: x = (1, 1, 1)
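    The Cramer's-rule computation of Example 6 can be cross-checked by solving the same linear system directly. A sketch, assuming NumPy, with the data of the example:

```python
import numpy as np

e1 = np.array([1.0, -1.0, 1.0])
e2 = np.array([3.0, 2.0, -5.0])
e3 = np.array([2.0, 1.0, -3.0])
x = np.array([6.0, 2.0, -7.0])

# Columns of A are the basis vectors, so A @ x_tilde = x.
A = np.column_stack([e1, e2, e3])

# Basis check: nonzero determinant (equivalently, rank 3).
print(abs(np.linalg.det(A)) > 1e-12)  # True

# Coordinates of x in the basis, the same system Cramer's rule solved:
x_tilde = np.linalg.solve(A, x)
print(np.round(x_tilde))              # [1. 1. 1.]
```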

    Relationship between bases

    Suppose that two linearly independent systems of vectors are given in some basis of an n-dimensional vector space:

    c(1) = (c1(1), c2(1), ..., cn(1)), c(2) = (c1(2), c2(2), ..., cn(2)), ..., c(n) = (c1(n), c2(n), ..., cn(n))

    e(1) = (e1(1), e2(1), ..., en(1)), e(2) = (e1(2), e2(2), ..., en(2)), ..., e(n) = (e1(n), e2(n), ..., en(n))

    These systems are also bases of a given space.

    Let c~1(1), c~2(1), ..., c~n(1) be the coordinates of the vector c(1) in the basis e(1), e(2), ..., e(n); then the relationship of the coordinates is given by the system of linear equations:

    c1(1) = c~1(1) e1(1) + c~2(1) e1(2) + ... + c~n(1) e1(n)
    c2(1) = c~1(1) e2(1) + c~2(1) e2(2) + ... + c~n(1) e2(n)
    ⋮
    cn(1) = c~1(1) en(1) + c~2(1) en(2) + ... + c~n(1) en(n)

    In the form of a matrix, the system can be displayed as follows:

    (c1(1), c2(1), ..., cn(1)) = (c~1(1), c~2(1), ..., c~n(1)) · e1(1) e2(1) ... en(1)
                                                                e1(2) e2(2) ... en(2)
                                                                ⋮
                                                                e1(n) e2(n) ... en(n)

    We write the same relation for the vector c(2) by analogy, and so on up to c(n):

    (c1(2), c2(2), ..., cn(2)) = (c~1(2), c~2(2), ..., c~n(2)) · e1(1) e2(1) ... en(1)
                                                                ⋮
                                                                e1(n) e2(n) ... en(n)

    (c1(n), c2(n), ..., cn(n)) = (c~1(n), c~2(n), ..., c~n(n)) · e1(1) e2(1) ... en(1)
                                                                ⋮
                                                                e1(n) e2(n) ... en(n)

    Let's combine matrix equalities into one expression:

    c1(1) c2(1) ⋯ cn(1)       c~1(1) c~2(1) ⋯ c~n(1)     e1(1) e2(1) ⋯ en(1)
    c1(2) c2(2) ⋯ cn(2)   =   c~1(2) c~2(2) ⋯ c~n(2)  ·  e1(2) e2(2) ⋯ en(2)
    ⋮                         ⋮                          ⋮
    c1(n) c2(n) ⋯ cn(n)       c~1(n) c~2(n) ⋯ c~n(n)     e1(n) e2(n) ⋯ en(n)

    It will determine the relationship between vectors of two different bases.

    Using the same principle, one can express all the vectors of the basis e(1), e(2), ..., e(n) through the basis c(1), c(2), ..., c(n):

    e1(1) e2(1) ⋯ en(1)       e~1(1) e~2(1) ⋯ e~n(1)     c1(1) c2(1) ⋯ cn(1)
    e1(2) e2(2) ⋯ en(2)   =   e~1(2) e~2(2) ⋯ e~n(2)  ·  c1(2) c2(2) ⋯ cn(2)
    ⋮                         ⋮                          ⋮
    e1(n) e2(n) ⋯ en(n)       e~1(n) e~2(n) ⋯ e~n(n)     c1(n) c2(n) ⋯ cn(n)

    Let's give the following definitions:

    Definition 5

    The matrix

    c~1(1) c~2(1) ⋯ c~n(1)
    c~1(2) c~2(2) ⋯ c~n(2)
    ⋮
    c~1(n) c~2(n) ⋯ c~n(n)

    is called the transition matrix from the basis e(1), e(2), ..., e(n) to the basis c(1), c(2), ..., c(n).

    Definition 6

    The matrix

    e~1(1) e~2(1) ⋯ e~n(1)
    e~1(2) e~2(2) ⋯ e~n(2)
    ⋮
    e~1(n) e~2(n) ⋯ e~n(n)

    is called the transition matrix from the basis c(1), c(2), ..., c(n) to the basis e(1), e(2), ..., e(n).

    It is obvious from these equalities that the product of the transition matrix from e to c and the transition matrix from c to e is the identity matrix, and the same holds for the product taken in the opposite order; i.e., the transition matrices are mutually inverse.

    Let's consider the theory with a specific example.

    Example 7

    Initial data: it is necessary to find the transition matrix from the basis

    c(1) = (1, 2, 1), c(2) = (2, 3, 3), c(3) = (3, 7, 1)

    to the basis

    e(1) = (3, 1, 4), e(2) = (5, 2, 1), e(3) = (1, 1, -6).

    It is also necessary to give the relationship between the coordinates of an arbitrary vector x in the given bases.

    Decision

    1. Let T be the transition matrix; then the following equality holds:

    3 1  4       1 2 1
    5 2  1 = T · 2 3 3
    1 1 -6       3 7 1

    We multiply both sides of the equality on the right by the inverse of the matrix

    1 2 1
    2 3 3
    3 7 1

    and get:

    T = 3 1  4   1 2 1 ⁻¹
        5 2  1 · 2 3 3
        1 1 -6   3 7 1

    2. We compute the transition matrix:

    T = 3 1  4   -18  5  3   -27  9  4
        5 2  1 ·   7 -2 -1 = -71 20 12
        1 1 -6     5 -1 -1   -41  9  8

    3. We determine the relationship between the coordinates of the vector x.

    Suppose that in the basis c(1), c(2), c(3) the vector x has coordinates x1, x2, x3; then:

    x = (x1, x2, x3) · 1 2 1
                       2 3 3
                       3 7 1

    and in the basis e(1), e(2), e(3) it has coordinates x~1, x~2, x~3; then:

    x = (x~1, x~2, x~3) · 3 1  4
                          5 2  1
                          1 1 -6

    Since the left-hand sides of these equalities are equal, we can equate the right-hand sides:

    (x1, x2, x3) · 1 2 1    = (x~1, x~2, x~3) · 3 1  4
                   2 3 3                        5 2  1
                   3 7 1                        1 1 -6

    Multiplying both sides on the right by the inverse of the matrix on the left, we get:

    (x1, x2, x3) = (x~1, x~2, x~3) · T = (x~1, x~2, x~3) · -27  9  4
                                                           -71 20 12
                                                           -41  9  8

    On the other hand,

    (x~1, x~2, x~3) = (x1, x2, x3) · T⁻¹

    The last equalities show the connection between the coordinates of the vector x → in both bases.

    Answer: the transition matrix is

    -27  9  4
    -71 20 12
    -41  9  8

    The coordinates of the vector x in the given bases are related by

    (x1, x2, x3) = (x~1, x~2, x~3) · -27  9  4
                                     -71 20 12
                                     -41  9  8

    (x~1, x~2, x~3) = (x1, x2, x3) · T⁻¹, where T is the transition matrix above.
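    The transition-matrix computation of Example 7 can be reproduced numerically. A sketch, assuming NumPy, with the matrices of the example (rows are the basis vectors):

```python
import numpy as np

C = np.array([[1, 2, 1],
              [2, 3, 3],
              [3, 7, 1]], dtype=float)   # basis c(1), c(2), c(3)
E = np.array([[3, 1, 4],
              [5, 2, 1],
              [1, 1, -6]], dtype=float)  # basis e(1), e(2), e(3)

# E = T @ C  =>  T = E @ C^{-1}
T = E @ np.linalg.inv(C)
print(np.round(T).astype(int))
# [[-27   9   4]
#  [-71  20  12]
#  [-41   9   8]]

# The two transition matrices are mutually inverse:
T_back = C @ np.linalg.inv(E)
print(np.allclose(T @ T_back, np.eye(3)))  # True
```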


    Lectures on algebra and geometry. Semester 1.

    Lecture 9. Basis of vector space.

    Abstract: a system of vectors; a linear combination of a system of vectors; the coefficients of a linear combination of a system of vectors; a basis on a line, in a plane and in space; the dimensions of the vector spaces on a line, in a plane and in space; the expansion of a vector in a basis; the coordinates of a vector with respect to a basis; the theorem on the equality of two vectors; linear operations with vectors in coordinate notation; an orthonormal triple of vectors; right and left triples of vectors; an orthonormal basis; the main theorem of vector algebra.

    Chapter 9. Basis of vector space and decomposition of a vector in basis.

    item 1. Basis on a straight line, on a plane and in space.

    Definition. Any finite set of vectors is called a vector system.

    Definition. An expression of the form α1 a1 + α2 a2 + ... + αn an, where α1, ..., αn are numbers, is called a linear combination of the system of vectors a1, a2, ..., an, and the numbers α1, ..., αn are called the coefficients of this linear combination.

    Let L, P and S be a line, a plane and the space of points, respectively, and let V1, V2 and V3 be the vector spaces of vectors regarded as directed segments on the line L, in the plane P and in the space S, respectively.

    Definition. A basis of the vector space V1 is any nonzero vector of this space, i.e. any nonzero vector collinear with the line L.

    Definition. A basis of the vector space V2 is any ordered pair of noncollinear vectors of the space V2.

    Definition. A basis of the vector space V3 is any ordered triple of non-coplanar vectors (i.e., vectors not lying in one plane) of the space V3.

    Comment. A basis of a vector space cannot contain the zero vector: in the space V1 by definition; in the space V2, two vectors are collinear if at least one of them is zero; in the space V3, three vectors are coplanar, that is, lie in one plane, if at least one of the three vectors is zero.

    item 2. Expansion of a vector in basis.

    Definition. Let x be an arbitrary vector and a1, a2, ..., an an arbitrary system of vectors. If the equality

    x = α1 a1 + α2 a2 + ... + αn an (1)

    holds, then we say that the vector x is represented as a linear combination of the given system of vectors. If the given system of vectors is a basis of the vector space, then equality (1) is called the expansion of the vector x in the basis, and the coefficients α1, ..., αn of the linear combination are called the coordinates of the vector x with respect to the basis.

    Theorem. (On the expansion of a vector in terms of a basis.)

    Any vector of a vector space can be expanded in its basis, and in a unique way.

    Proof. 1) Let L be an arbitrary line (or axis) and let e be a basis of V1. Take an arbitrary vector x of V1. Since both vectors x and e are collinear with the same line L, the vector x is collinear with e. We use the theorem on the collinearity of two vectors: since e is nonzero, there exists a number α such that x = α e, and thus we have obtained the expansion of the vector x in the basis e of the vector space V1.

    Now let us prove the uniqueness of such an expansion. Suppose the opposite: let there be two expansions of the vector x in the basis e of the vector space V1,

    x = α e and x = β e, where α ≠ β. Then, using the law of distributivity, we get:

    0 = x - x = α e - β e = (α - β) e.

    Since e ≠ 0, it follows from the last equality that α - β = 0, i.e. α = β, q.e.d.

    2) Now let P be an arbitrary plane and $\{\vec{e}_1, \vec{e}_2\}$ a basis of the space $V_2$ of vectors of this plane. Let $\vec{x}$ be an arbitrary vector of this plane. Lay off all three vectors from any one point of this plane and construct 4 lines: the line on which the vector $\vec{e}_1$ lies and the line on which the vector $\vec{e}_2$ lies; then, through the end of the vector $\vec{x}$, draw a line parallel to the vector $\vec{e}_1$ and a line parallel to the vector $\vec{e}_2$. These 4 lines cut out a parallelogram (see Fig. 3 below). By the parallelogram rule, $\vec{x} = \vec{x}_1 + \vec{x}_2$, where $\vec{x}_1 \parallel \vec{e}_1$ and $\vec{x}_2 \parallel \vec{e}_2$; here $\{\vec{e}_1\}$ is a basis of the line carrying $\vec{x}_1$ and $\{\vec{e}_2\}$ is a basis of the line carrying $\vec{x}_2$.

    Now, by what has already been proved in the first part of this proof, there exist numbers $\alpha_1, \alpha_2$ such that

    $\vec{x}_1 = \alpha_1 \vec{e}_1$ and $\vec{x}_2 = \alpha_2 \vec{e}_2$. From here we get:

    $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2$,

    and the possibility of expansion in the basis is proved.

    We now prove the uniqueness of the expansion in the basis. Suppose the opposite: let there be two expansions of the vector $\vec{x}$ in the basis $\{\vec{e}_1, \vec{e}_2\}$ of the vector space $V_2$:

    $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2$ and $\vec{x} = \beta_1 \vec{e}_1 + \beta_2 \vec{e}_2$. Subtracting, we get the equality

    $(\alpha_1 - \beta_1)\,\vec{e}_1 + (\alpha_2 - \beta_2)\,\vec{e}_2 = \vec{0}$,

    from which it follows that $(\alpha_2 - \beta_2)\,\vec{e}_2 = (\beta_1 - \alpha_1)\,\vec{e}_1$. If $\alpha_1 = \beta_1$, then $(\alpha_2 - \beta_2)\,\vec{e}_2 = \vec{0}$, and since $\vec{e}_2 \neq \vec{0}$, we get $\alpha_2 = \beta_2$, so the expansion coefficients coincide. Now let $\alpha_1 \neq \beta_1$. Then $\vec{e}_1 = \dfrac{\alpha_2 - \beta_2}{\beta_1 - \alpha_1}\,\vec{e}_2$. By the collinearity theorem for two vectors, this implies $\vec{e}_1 \parallel \vec{e}_2$. This contradicts the hypothesis of the theorem (the basis vectors of a plane are non-collinear). Consequently, $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$, q.e.d.

    3) Let $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ be a basis of the space $V_3$ and let $\vec{x}$ be an arbitrary vector. Let us carry out the following constructions.

    Lay off all three basis vectors $\vec{e}_1, \vec{e}_2, \vec{e}_3$ and the vector $\vec{x}$ from one point and construct 6 planes: the plane in which the basis vectors $\vec{e}_1, \vec{e}_2$ lie, the plane of $\vec{e}_1, \vec{e}_3$ and the plane of $\vec{e}_2, \vec{e}_3$; further, through the end of the vector $\vec{x}$ draw three planes parallel to the three planes just constructed. These 6 planes cut out a parallelepiped.

    By the rule of addition of vectors, we obtain the equality:

    $\vec{x} = \vec{x}_1 + \vec{x}_2 + \vec{x}_3$. (1)

    By construction, $\vec{x}_1 \parallel \vec{e}_1$. Hence, by the collinearity theorem for two vectors, there exists a number $\alpha_1$ such that $\vec{x}_1 = \alpha_1 \vec{e}_1$. Similarly, $\vec{x}_2 = \alpha_2 \vec{e}_2$ and $\vec{x}_3 = \alpha_3 \vec{e}_3$, where $\alpha_2, \alpha_3 \in \mathbb{R}$. Now, substituting these equalities into (1), we get:

    $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2 + \alpha_3 \vec{e}_3$, (2)

    and the possibility of expansion in the basis is proved.

    Let us prove the uniqueness of this expansion. Suppose the opposite: let there be two expansions of the vector $\vec{x}$ in the basis $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$:

    $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2 + \alpha_3 \vec{e}_3$ and $\vec{x} = \beta_1 \vec{e}_1 + \beta_2 \vec{e}_2 + \beta_3 \vec{e}_3$. Then

    $(\alpha_1 - \beta_1)\,\vec{e}_1 + (\alpha_2 - \beta_2)\,\vec{e}_2 + (\alpha_3 - \beta_3)\,\vec{e}_3 = \vec{0}$. (3)

    Note that by hypothesis the vectors $\vec{e}_1, \vec{e}_2, \vec{e}_3$ are non-coplanar and, therefore, pairwise non-collinear.

    Two cases are possible: $\alpha_3 \neq \beta_3$ or $\alpha_3 = \beta_3$.

    a) Let $\alpha_3 \neq \beta_3$; then equality (3) implies:

    $\vec{e}_3 = \dfrac{\beta_1 - \alpha_1}{\alpha_3 - \beta_3}\,\vec{e}_1 + \dfrac{\beta_2 - \alpha_2}{\alpha_3 - \beta_3}\,\vec{e}_2$. (4)

    It follows from equality (4) that the vector $\vec{e}_3$ expands in the basis $\{\vec{e}_1, \vec{e}_2\}$, i.e. the vector $\vec{e}_3$ lies in the plane of the vectors $\vec{e}_1, \vec{e}_2$; therefore the vectors $\vec{e}_1, \vec{e}_2, \vec{e}_3$ are coplanar, which contradicts the condition.

    b) The case $\alpha_3 = \beta_3$ remains, i.e. $\alpha_3 - \beta_3 = 0$. Then from equality (3) we obtain

    $(\alpha_1 - \beta_1)\,\vec{e}_1 + (\alpha_2 - \beta_2)\,\vec{e}_2 = \vec{0}$. (5)

    Since $\{\vec{e}_1, \vec{e}_2\}$ is a basis of the space of vectors lying in their plane, and we have already proved the uniqueness of the expansion in a basis of vectors of a plane, it follows from equality (5) that $\alpha_1 = \beta_1$ and $\alpha_2 = \beta_2$, q.e.d.

    The theorem is proved.

    Corollary.

    1) There is a one-to-one correspondence between the set of vectors of the vector space $V_1$ and the set of real numbers $\mathbb{R}$.

    2) There is a one-to-one correspondence between the set of vectors of the vector space $V_2$ and the Cartesian square $\mathbb{R}^2$ of the set of real numbers $\mathbb{R}$.

    3) There is a one-to-one correspondence between the set of vectors of the vector space $V_3$ and the Cartesian cube $\mathbb{R}^3$ of the set of real numbers $\mathbb{R}$.

    Proof. Let us prove the third statement; the first two are proved similarly.

    Select and fix in the space $V_3$ some basis $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ and define a mapping $f: V_3 \to \mathbb{R}^3$ according to the following rule:

    $f(\vec{x}) = (\alpha_1, \alpha_2, \alpha_3)$, where $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2 + \alpha_3 \vec{e}_3$, (6)

    i.e. each vector is associated with the ordered set of its coordinates.

    Since for a fixed basis each vector has a single set of coordinates, the correspondence given by rule (6) is indeed a mapping.

    It follows from the proof of the theorem that different vectors have different coordinates with respect to the same basis, i.e. mapping (6) is an injection.

    Let $(\alpha_1, \alpha_2, \alpha_3)$ be an arbitrary ordered set of real numbers. Consider the vector $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2 + \alpha_3 \vec{e}_3$. By construction, this vector has coordinates $(\alpha_1, \alpha_2, \alpha_3)$. Consequently, mapping (6) is a surjection.

    A mapping that is both injective and surjective is bijective, i.e. one-to-one, q.e.d.

    The corollary is proved.

    Theorem. (On the equality of two vectors.)

    Two vectors are equal if and only if their coordinates are equal relative to the same basis.

    The proof immediately follows from the previous corollary.

    item 3. Dimension of a vector space.

    Definition. The number of vectors in a basis of a vector space is called its dimension.

    Notation: $\dim V$ is the dimension of the vector space V.

    Thus, in accordance with this and the previous definitions, we have:

    1) $\dim V_1 = 1$, where $V_1$ is the vector space of vectors of the line L: $\{\vec{e}_1\}$ is a basis of $V_1$, $\vec{x} \in V_1$, $\vec{x} = \alpha \vec{e}_1$ is the expansion of the vector $\vec{x}$ in the basis $\{\vec{e}_1\}$, and $\alpha$ is the coordinate of the vector $\vec{x}$ in this basis.

    2) $\dim V_2 = 2$, where $V_2$ is the vector space of vectors of the plane P: $\{\vec{e}_1, \vec{e}_2\}$ is a basis of $V_2$, $\vec{x} \in V_2$, $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2$ is the expansion of the vector $\vec{x}$ in this basis, and $(\alpha_1, \alpha_2)$ are the coordinates of the vector $\vec{x}$ in it.

    3) $\dim V_3 = 3$, where $V_3$ is the vector space of vectors in the space of points S: $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ is a basis of $V_3$, $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2 + \alpha_3 \vec{e}_3$ is the expansion of the vector $\vec{x}$ in this basis, and $(\alpha_1, \alpha_2, \alpha_3)$ are its coordinates.

    Comment. If $L \subset P \subset S$, then $V_1 \subset V_2 \subset V_3$, and the basis $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ of the space $V_3$ can be chosen so that $\{\vec{e}_1\}$ is a basis of $V_1$ and $\{\vec{e}_1, \vec{e}_2\}$ is a basis of $V_2$. Then a vector of the line has the form $\vec{x} = \alpha_1 \vec{e}_1 = \alpha_1 \vec{e}_1 + 0 \cdot \vec{e}_2 + 0 \cdot \vec{e}_3$, and a vector of the plane the form $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2 + 0 \cdot \vec{e}_3$.

    Thus, any vector of the line L, of the plane P and of the space S can be expanded in the basis $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$.

    Notation. By virtue of the theorem on the equality of vectors, we can identify any vector with the ordered triple of its coordinates and write:

    $\vec{x} = (\alpha_1, \alpha_2, \alpha_3)$.

    This is possible only when the basis $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ is fixed and there is no danger of confusion.

    Definition. Writing a vector as an ordered triple of real numbers is called the coordinate form of writing a vector: $\vec{x} = (x_1, x_2, x_3)$.

    item 4. Linear operations on vectors in coordinate notation.

    Let $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ be a basis of the space $V_3$, and let $\vec{x}$ and $\vec{y}$ be two of its arbitrary vectors. Let $\vec{x} = (x_1, x_2, x_3)$ and $\vec{y} = (y_1, y_2, y_3)$ be the coordinate form of these vectors. Let, further, $\lambda \in \mathbb{R}$ be an arbitrary real number. In this notation, the following theorem holds.

    Theorem. (On linear operations on vectors in coordinate form.)

    1) $\vec{x} + \vec{y} = (x_1 + y_1, x_2 + y_2, x_3 + y_3)$;
    2) $\lambda \vec{x} = (\lambda x_1, \lambda x_2, \lambda x_3)$.

    In other words, to add two vectors you add their corresponding coordinates, and to multiply a vector by a number you multiply each coordinate of the vector by that number.

    Proof. Since, by the condition of the theorem, $\vec{x} = x_1 \vec{e}_1 + x_2 \vec{e}_2 + x_3 \vec{e}_3$ and $\vec{y} = y_1 \vec{e}_1 + y_2 \vec{e}_2 + y_3 \vec{e}_3$, then, using the axioms of the vector space that govern the operations of addition of vectors and multiplication of a vector by a number, we obtain:

    $\vec{x} + \vec{y} = (x_1 + y_1)\,\vec{e}_1 + (x_2 + y_2)\,\vec{e}_2 + (x_3 + y_3)\,\vec{e}_3$.

    This implies $\vec{x} + \vec{y} = (x_1 + y_1, x_2 + y_2, x_3 + y_3)$.

    The second equality is proved similarly.

    The theorem is proved.
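    As a quick numerical illustration of this theorem, componentwise addition and scalar multiplication can be sketched in a few lines. The vectors below are made-up sample data, not taken from the text:

```python
# Sketch of the theorem: in coordinate form, vector addition and
# multiplication by a number are performed coordinate by coordinate.

def add(x, y):
    """Add two vectors given by coordinates in the same basis."""
    return tuple(xi + yi for xi, yi in zip(x, y))

def scale(lam, x):
    """Multiply a vector by the number lam, coordinate by coordinate."""
    return tuple(lam * xi for xi in x)

x = (1, -2, 3)   # illustrative coordinates
y = (4, 0, -1)

print(add(x, y))    # (5, -2, 2)
print(scale(2, x))  # (2, -4, 6)
```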

    item 5. Orthogonal vectors. Orthonormal basis.

    Definition. Two vectors are called orthogonal if the angle between them is a right angle, i.e. equals $90^\circ$.

    Notation: $\vec{a} \perp \vec{b}$ means that the vectors $\vec{a}$ and $\vec{b}$ are orthogonal.

    Definition. A triple of vectors $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ is called orthogonal if these vectors are pairwise orthogonal to each other, i.e. $\vec{e}_1 \perp \vec{e}_2$, $\vec{e}_1 \perp \vec{e}_3$, $\vec{e}_2 \perp \vec{e}_3$.

    Definition. A triple of vectors $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$ is called orthonormal if it is orthogonal and the lengths of all the vectors are equal to one: $|\vec{e}_1| = |\vec{e}_2| = |\vec{e}_3| = 1$.

    Comment. It follows from the definition that an orthogonal, and hence an orthonormal, triple of vectors is non-coplanar.

    Definition. An ordered non-coplanar triple of vectors, laid off from one point, is called right (right-oriented) if, when viewed from the end of the third vector onto the plane in which the first two vectors lie, the shortest rotation from the first vector to the second goes counterclockwise. Otherwise, the triple of vectors is called left (left-oriented).

    Fig. 6 shows a right triple of vectors, and Fig. 7 a left one.

    Definition. A basis $\{\vec{i}, \vec{j}, \vec{k}\}$ of the vector space $V_3$ is called orthonormal if $\{\vec{i}, \vec{j}, \vec{k}\}$ is an orthonormal triple of vectors.

    Notation. In what follows, we will use a right orthonormal basis $\{\vec{i}, \vec{j}, \vec{k}\}$; see the following figure.
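    The orthonormality conditions above are easy to verify numerically: all pairwise dot products must vanish and every length must equal one. A minimal sketch (the function names and sample vectors are my own, not from the text), using the standard basis as the example:

```python
import math

def dot(a, b):
    """Dot product of two coordinate vectors."""
    return sum(ai * bi for ai, bi in zip(a, b))

def is_orthonormal(e1, e2, e3, tol=1e-12):
    """Check pairwise orthogonality and unit length of a triple."""
    pairwise_orthogonal = all(
        abs(dot(a, b)) < tol for a, b in [(e1, e2), (e1, e3), (e2, e3)]
    )
    unit_length = all(abs(math.sqrt(dot(v, v)) - 1) < tol for v in (e1, e2, e3))
    return pairwise_orthogonal and unit_length

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(is_orthonormal(i, j, k))          # True
print(is_orthonormal(i, j, (0, 0, 2)))  # False: third vector has length 2
```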

    Linear dependence and linear independence of vectors.
    The basis of vectors. Affine coordinate system

    There is a cart of chocolates waiting in the audience, and every visitor today gets a sweet pair: analytic geometry with linear algebra. This article touches on two branches of higher mathematics at once, and we will see how they get along in one wrapper. Take a pause, eat a Twix! ...okay, that was nonsense. Fine, I won't grumble; in the end, what matters is a positive attitude to study.

    Linear dependence of vectors, linear independence of vectors, a basis of vectors and other terms have not only a geometric interpretation but, above all, an algebraic meaning. The very concept of a "vector", from the point of view of linear algebra, is by no means always the "ordinary" vector that we can draw on a plane or in space. You need not look far for proof: try to draw a vector of five-dimensional space... Or the weather vector I just looked up on Gismeteo: temperature and atmospheric pressure, respectively. The example is, of course, incorrect from the point of view of the properties of a vector space, but nevertheless no one forbids formalizing these parameters as a vector. A breath of autumn...

    No, I am not going to load you down with the theory of linear vector spaces; the task is to understand the definitions and theorems. The new terms (linear dependence, independence, linear combination, basis, etc.) apply to all vectors from the algebraic point of view, but the examples will be geometric. Thus, everything is simple, accessible and clear. Besides problems of analytic geometry, we will also consider some typical problems of algebra. To master the material, it is advisable to be familiar with the lessons Vectors for dummies and How to calculate the determinant?

    Linear dependence and independence of plane vectors.
    Plane basis and affine coordinate system

    Consider the plane of your computer desk (just a table, bedside table, floor, ceiling, who likes what). The task will consist of the following actions:

    1) Select a basis of the plane. Roughly speaking, a tabletop has a length and a width, so it is intuitively clear that two vectors are required to construct a basis. One vector is clearly not enough; three vectors are too many.

    2) Based on the selected basis, set a coordinate system (a coordinate grid) in order to assign coordinates to all objects on the table.

    Do not be surprised: at first the explanations will be on fingers. Moreover, on yours. Please place your left index finger on the edge of the tabletop so that it looks at the monitor. This will be one vector. Now place your right little finger on the edge of the table in the same way, so that it is directed at the monitor screen. This will be a second vector. Smile, you look great! What can we say about these vectors? They are collinear, which means each is linearly expressed through the other: $\vec{b} = \lambda \vec{a}$, or vice versa, $\vec{a} = \mu \vec{b}$, where $\lambda$ and $\mu$ are some numbers other than zero.

    A picture of this action can be seen in the lesson Vectors for dummies, where I explained the rule for multiplying a vector by a number.

    Will your fingers set a basis on the plane of the computer desk? Obviously not. Collinear vectors travel back and forth along a single direction, while a plane has both length and width.

    Such vectors are called linearly dependent.

    Reference: The words "linear", "linearly" denote the fact that the mathematical equations and expressions contain no squares, cubes, other powers, logarithms, sines, etc. There are only linear (1st degree) expressions and dependencies.

    Two plane vectors linearly dependent if and only if they are collinear.

    Cross your fingers on the table so that there is any angle between them other than 0 or 180 degrees. Two plane vectors are linearly independent if and only if they are not collinear. So, the basis is obtained. There is no need to be embarrassed that the basis turned out "oblique", with non-perpendicular vectors of different lengths. Very soon we will see that not only a 90-degree angle is suitable for its construction, and not only unit vectors of equal length.

    Any vector of the plane is expanded in a unique way in the basis: $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2$, where $\alpha_1, \alpha_2$ are real numbers. The numbers $\alpha_1, \alpha_2$ are called the coordinates of the vector in this basis.

    It is also said that the vector $\vec{x}$ is represented as a linear combination of the basis vectors. That is, the expression $\alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2$ is called the expansion of the vector $\vec{x}$ in the basis, or a linear combination of the basis vectors.

    For example, we can say that a vector is expanded in an orthonormal basis of the plane, or we can say that it is represented as a linear combination of those basis vectors.

    Let us formulate the definition of a basis formally: a basis of the plane is a pair of linearly independent (non-collinear) vectors $\{\vec{e}_1, \vec{e}_2\}$, taken in a certain order, such that any vector of the plane is a linear combination of the basis vectors.

    An essential point of the definition is the fact that the vectors are taken in a certain order. The bases $\{\vec{e}_1, \vec{e}_2\}$ and $\{\vec{e}_2, \vec{e}_1\}$ are two completely different bases! As the saying goes, the little finger of your left hand cannot be swapped for the little finger of your right hand.

    We have figured out the basis, but it is not enough to set a coordinate grid and assign coordinates to every item on your computer desk. Why not enough? Vectors are free and wander over the entire plane. So how do you assign coordinates to those dirty little spots on the table left over from your wild weekend? A reference point is needed. And such a reference point is a point familiar to everyone: the origin of coordinates. Now let us understand the coordinate system:

    I'll start with the "school" system. Already in the introductory lesson Vectors for dummies I have highlighted some of the differences between a rectangular coordinate system and an orthonormal basis. Here's a typical picture:

    When talking about rectangular coordinate system, then most often they mean the origin, coordinate axes and scale along the axes. Try to type in the search engine "rectangular coordinate system", and you will see that many sources will tell you about the coordinate axes familiar from the 5-6th grade and how to lay points on the plane.

    On the other hand, one gets the impression that a rectangular coordinate system is quite possible to define in terms of an orthonormal basis. And this is almost the case. The wording is as follows:

    The origin and an orthonormal basis define a Cartesian rectangular coordinate system of the plane. That is, a rectangular coordinate system is unequivocally defined by a single point and two unit orthogonal vectors. That is why you see the drawing I gave above: in geometric problems, both the vectors and the coordinate axes are often (but far from always) drawn.

    I think everyone understands that, using a point (the origin) and an orthonormal basis, ANY POINT of the plane and ANY VECTOR of the plane can be assigned coordinates. Figuratively speaking, "everything on the plane can be numbered."

    Do the coordinate vectors have to be unit vectors? No, they can be of arbitrary nonzero length. Consider a point and two orthogonal vectors of arbitrary nonzero length:


    Such a basis is called orthogonal. The origin together with the coordinate vectors defines a coordinate grid, and any point of the plane and any vector have their coordinates in this basis. The obvious inconvenience is that the coordinate vectors in general have different lengths other than one. If the lengths equal one, the usual orthonormal basis is obtained.

    ! Note: in an orthogonal basis, as well as below in the affine bases of the plane and space, the units along the axes are CONDITIONAL. For example, one unit along the abscissa may contain 4 cm, and one unit along the ordinate 2 cm. This information is enough to convert "non-standard" coordinates into "our usual centimeters" if necessary.

    And the second question, which has actually already been answered: must the angle between the basis vectors equal 90 degrees? No! As the definition says, the basis vectors must only be non-collinear. Accordingly, the angle can be anything other than 0 and 180 degrees.

    A point of the plane called the origin together with two non-collinear vectors define an affine coordinate system of the plane:


    Sometimes this coordinate system is called an oblique system. The drawing shows examples of points and vectors:

    As you understand, the affine coordinate system is even less convenient: the formulas for the lengths of vectors and segments that we considered in the second part of the lesson Vectors for dummies do not work in it, nor do many delicious formulas associated with the dot product of vectors. But the rules for adding vectors and multiplying a vector by a number, the formulas for dividing a segment in a given ratio, as well as some other types of problems that we will soon consider, remain valid.

    And the conclusion is that the most convenient particular case of the affine coordinate system is the Cartesian rectangular system. That is why you most often have to contemplate it, my dear. ...However, everything in this life is relative: there are many situations in which an oblique (or some other, for example, polar) coordinate system is appropriate. And humanoids might even like such systems =)

    Let's move on to the practical part. All objectives of this lesson are valid for both rectangular coordinate systems and the general affine case. There is nothing complicated here, all the material is available even to a student.

    How to determine collinearity of plane vectors?

    A typical task. For two plane vectors to be collinear, it is necessary and sufficient that their corresponding coordinates be proportional. Essentially, this is a coordinate-by-coordinate restatement of the obvious relationship between collinear vectors.

    Example 1

    a) Check whether the vectors are collinear.
    b) Do the vectors form a basis?

    Solution:
    a) Let us find out whether there exists a proportionality coefficient for the vectors such that the corresponding equalities are satisfied:

    I will also show the "simplified" version of applying this rule, which works quite well in practice. The idea is to write down the proportion right away and see whether it is correct:

    Let us compose the proportion from the ratios of the corresponding coordinates of the vectors:

    After reduction, the corresponding coordinates turn out to be proportional; therefore, the vectors are collinear.

    The ratio could also be composed the other way around; this is an equivalent option:

    For a self-check, one can use the fact that collinear vectors are linearly expressed through each other. The validity of the corresponding equalities is easily verified through elementary operations with vectors:

    b) Two plane vectors form a basis if they are not collinear (linearly independent). Let us examine the vectors for collinearity. Let us set up a system:

    The first equation gives one value of the coefficient, the second a different one; therefore, the system is inconsistent (has no solutions). Thus, the corresponding coordinates of the vectors are not proportional.

    Conclusion: the vectors are linearly independent and form a basis.

    A simplified version of the solution looks like this:

    Compose the proportion from the corresponding coordinates of the vectors: the proportion fails; therefore, these vectors are linearly independent and form a basis.

    Usually reviewers do not reject this option, but a problem arises in cases where some coordinates are equal to zero. How do you act through a proportion then? (Indeed, you cannot divide by zero.) That is why I regard the simplified solution as informal.

    Answer: a) the vectors are collinear; b) the vectors form a basis.

    A small creative example for an independent solution:

    Example 2

    At what value of the parameter will the vectors be collinear?

    In the solution sample, the parameter is found through proportion.

    There is also an elegant algebraic way of checking vectors for collinearity. Let us systematize our knowledge and add it as the fifth point:

    For two plane vectors, the following statements are equivalent:

    1) the vectors are linearly independent;
    2) the vectors form a basis;
    3) the vectors are not collinear;
    4) the vectors cannot be linearly expressed through each other;
    + 5) the determinant composed of the coordinates of these vectors is nonzero.

    Accordingly, the following opposite statements are equivalent:
    1) the vectors are linearly dependent;
    2) the vectors do not form a basis;
    3) the vectors are collinear;
    4) the vectors can be linearly expressed through each other;
    + 5) the determinant composed of the coordinates of these vectors is equal to zero.

    I really, really hope that by now you understand all the terms and statements encountered so far.

    Let us take a closer look at the new, fifth point: two plane vectors are collinear if and only if the determinant composed of the coordinates of these vectors is equal to zero. To use this criterion, of course, you need to be able to find determinants.

    Let us solve Example 1 in the second way:

    a) Calculate the determinant composed of the coordinates of the vectors: it equals zero; hence, these vectors are collinear.

    b) Two plane vectors form a basis if they are not collinear (linearly independent). Calculate the determinant composed of the coordinates of the vectors: it is nonzero, so the vectors are linearly independent and form a basis.

    Answer: a) the vectors are collinear; b) the vectors form a basis.

    This looks much more compact and prettier than the solution with proportions.
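    The fifth criterion is easy to sketch in code: build the 2x2 determinant from the coordinates and compare it with zero. The sample vectors below are my own, not those of Example 1:

```python
# Criterion 5: two plane vectors are collinear exactly when the 2x2
# determinant of their coordinates is zero.

def det2(u, v):
    """Determinant with u and v as columns: u[0]*v[1] - u[1]*v[0]."""
    return u[0] * v[1] - u[1] * v[0]

def are_collinear(u, v):
    return det2(u, v) == 0

print(are_collinear((2, 4), (3, 6)))  # True: coordinates proportional
print(are_collinear((2, 4), (3, 5)))  # False: these two form a basis
```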

    With the help of the material considered, it is possible to establish not only the collinearity of vectors, but also to prove the parallelism of line segments. Consider a couple of problems with specific geometric shapes.

    Example 3

    The vertices of a quadrilateral are given. Prove that the quadrilateral is a parallelogram.

    Proof: There is no need to build a drawing for this problem, since the solution will be purely analytical. Let us recall the definition of a parallelogram:
    A parallelogram is a quadrilateral whose opposite sides are pairwise parallel.

    Thus, it is necessary to prove:
    1) the parallelism of the first pair of opposite sides;
    2) the parallelism of the second pair of opposite sides.

    We prove it:

    1) Find the vectors along the first pair of opposite sides:


    2) Find the vectors along the second pair of opposite sides:

    The result is one and the same vector ("according to school" — equal vectors). Collinearity is quite obvious, but it is still better to write the solution out properly. Calculate the determinant composed of the coordinates of the vectors: it is zero, so these vectors are collinear and the sides are parallel.

    Conclusion: the opposite sides of the quadrilateral are pairwise parallel, which means that it is a parallelogram by definition. Q.E.D.
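    The scheme of Example 3 can be sketched numerically: compute the side vectors from the vertex coordinates and compare opposite sides. The vertices below are invented for illustration and are not the ones given in the example:

```python
# Parallelogram check in coordinates: a quadrilateral ABCD is a
# parallelogram when the vectors of opposite sides coincide.

def vec(p, q):
    """Vector from point p to point q."""
    return (q[0] - p[0], q[1] - p[1])

A, B, C, D = (0, 0), (3, 1), (4, 4), (1, 3)  # illustrative vertices

print(vec(A, B) == vec(D, C))  # True: AB is parallel and equal to DC
print(vec(B, C) == vec(A, D))  # True: BC is parallel and equal to AD
```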

    More good and different shapes:

    Example 4

    The vertices of a quadrilateral are given. Prove that the quadrilateral is a trapezoid.

    For a more rigorous formulation of the proof it is better, of course, to look up the definition of a trapezoid, but it is enough simply to remember what one looks like.

    This is a task to solve on your own. The complete solution is at the end of the lesson.

    And now it's time to quietly move from plane to space:

    How to determine collinearity of space vectors?

    The rule is very similar. For two space vectors to be collinear, it is necessary and sufficient that their corresponding coordinates be proportional.

    Example 5

    Find out if the following space vectors are collinear:

    a) ;
    b) ;
    c) .

    Solution:
    a) Check whether a proportionality coefficient exists for the corresponding coordinates of the vectors:

    The system has no solution, so the vectors are not collinear.

    The "simplified" version is written up as a check of the proportion. In this case the corresponding coordinates are not proportional, which means that the vectors are not collinear.

    Answer: the vectors are not collinear.

    b-c) These are items to solve on your own. Try to write each of them up in both ways.

    There is also a method for checking spatial vectors for collinearity via a third-order determinant; this method is covered in the article Vector product of vectors.

    As in the plane case, the considered tools can be used to study the parallelism of spatial segments and straight lines.
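    As noted above, collinearity in space can also be tested without proportions, which is handy when some coordinate equals zero. A hedged sketch (helper names and sample vectors are invented for illustration): two space vectors are collinear exactly when their cross product is the zero vector.

```python
# Collinearity test in space: the cross product of two collinear
# vectors is the zero vector, and this works even with zero coordinates.

def cross(a, b):
    """Cross product of two 3D coordinate vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def are_collinear_3d(a, b):
    return cross(a, b) == (0, 0, 0)

print(are_collinear_3d((1, 2, 3), (2, 4, 6)))  # True
print(are_collinear_3d((1, 0, 3), (2, 4, 6)))  # False
```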

    Welcome to the second section:

    Linear dependence and independence of vectors of three-dimensional space.
    Spatial basis and affine coordinate system

    Many of the patterns that we considered on the plane will also hold for space. I have tried to keep the theoretical summary to a minimum, since the lion's share of the information has already been chewed through. Nevertheless, I recommend reading the introductory part carefully, as new terms and concepts will appear.

    Now, instead of the plane of the computer desk, let us explore three-dimensional space. First, let us build its basis. Someone is now indoors, someone is outdoors, but in any case we cannot escape three dimensions: width, length and height. Therefore, three spatial vectors are required to construct a basis. One or two vectors are not enough, and a fourth is superfluous.

    And again we warm up on our fingers. Please raise one hand up and spread apart the thumb, forefinger and middle finger. These will be our vectors: they look in different directions, have different lengths and form different angles with each other. Congratulations, the basis of three-dimensional space is ready! By the way, there is no need to demonstrate this to your teachers: however you twist your fingers, there is no escaping the definitions =)

    Next, let us ask an important question: do any three vectors form a basis of three-dimensional space? Please press three fingers firmly against the desktop. What happened? The three vectors ended up in the same plane and, roughly speaking, one of our dimensions disappeared: the height. Such vectors are coplanar, and it is quite obvious that they do not create a basis of three-dimensional space.

    It should be noted that coplanar vectors do not have to lie in the same plane; they can lie in parallel planes (just don't try this with your fingers — only Salvador Dali could pull that off =)).

    Definition: vectors are called coplanar if there exists a plane to which they are parallel. It is logical to add that if no such plane exists, then the vectors are not coplanar.

    Three coplanar vectors are always linearly dependent, that is, linearly expressed through each other. For simplicity, again imagine that they lie in the same plane. First, besides being coplanar, the vectors may also be collinear; then any of them can be expressed through any other. In the second case, if, for example, two of the vectors are not collinear, then the third is expressed through them in a unique way (why — it is easy to guess from the materials of the previous section).

    The converse is also true: three non-coplanar vectors are always linearly independent, that is, they are in no way expressed through each other. And, obviously, only such vectors can form the basis of three-dimensional space.

    Definition: A basis of three-dimensional space is a triple of linearly independent (non-coplanar) vectors $\{\vec{e}_1, \vec{e}_2, \vec{e}_3\}$, taken in a certain order; any vector of the space is expanded in a unique way in the given basis: $\vec{x} = \alpha_1 \vec{e}_1 + \alpha_2 \vec{e}_2 + \alpha_3 \vec{e}_3$, where $\alpha_1, \alpha_2, \alpha_3$ are the coordinates of the vector in the given basis.

    Let me remind you that we can also say that the vector is represented as linear combination basis vectors.

    The concept of a coordinate system is introduced in the same way as for the plane case, one point and any three linearly independent vectors are sufficient:

    The origin and three non-coplanar vectors, taken in a certain order, define an affine coordinate system of three-dimensional space:

    Of course, the coordinate grid is "oblique" and inconvenient, but, nevertheless, the constructed coordinate system allows us unequivocally determine the coordinates of any vector and coordinates of any point in space. Similarly to the plane, some formulas that I have already mentioned will not work in the affine coordinate system of space.

    The most familiar and convenient special case of the affine coordinate system, as everyone guesses, is rectangular space coordinate system:

    A point of space called the origin together with an orthonormal basis define a Cartesian rectangular coordinate system of space. A familiar picture:

    Before moving on to practical tasks, we re-organize the information:

    For three space vectors, the following statements are equivalent:
    1) vectors are linearly independent;
    2) vectors form a basis;
    3) vectors are not coplanar;
    4) vectors cannot be linearly expressed through each other;
    5) the determinant composed of the coordinates of these vectors is nonzero.

    Opposite statements, I think, are understandable.

    Linear dependence / independence of space vectors is traditionally checked using a determinant (item 5). The remaining practical tasks will be of a pronounced algebraic character. It's time to hang a geometric stick on a nail and wield a linear algebra baseball bat:

    Three space vectors are coplanar if and only if the determinant composed of the coordinates of these vectors is equal to zero.

    I draw your attention to a small technical nuance: the coordinates of vectors can be written not only in columns but also in rows (the value of the determinant does not change from this — see the properties of determinants). But columns are much better, since they are more convenient for solving some practical problems.

    For those readers who have forgotten the methods of calculating determinants a little, or maybe even poorly guided by them, I recommend one of my oldest lessons: How to calculate the determinant?

    Example 6

    Check whether the following vectors form a basis of three-dimensional space:

    Solution: In fact, the whole solution reduces to calculating a determinant.

    a) Let us calculate the determinant composed of the coordinates of the vectors (expanding along the first row):

    Since the determinant is nonzero, the vectors are linearly independent (non-coplanar) and form a basis of three-dimensional space.

    Answer: these vectors form a basis.

    b) This is a point to solve on your own. The complete solution and answer are at the end of the lesson.
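    The determinant test of Example 6 can be sketched in a few lines of code. The vectors below are made up for illustration, not the ones from the example:

```python
# Basis test in space: three vectors form a basis iff the 3x3
# determinant of their coordinates (written as columns) is nonzero.

def det3(u, v, w):
    """3x3 determinant with u, v, w as columns, expanded along the first row."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - v[0] * (u[1] * w[2] - u[2] * w[1])
            + w[0] * (u[1] * v[2] - u[2] * v[1]))

def forms_basis(u, v, w):
    return det3(u, v, w) != 0

print(forms_basis((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # True
print(forms_basis((1, 2, 3), (2, 4, 6), (0, 1, 1)))  # False: first two collinear
```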

    There are also creative tasks:

    Example 7

    At what value of the parameter will the vectors be coplanar?

    Solution: The vectors are coplanar if and only if the determinant composed of the coordinates of these vectors is zero:

    Essentially, we need to solve an equation involving a determinant. We swoop down on the zeros like a kite on jerboas: it is most profitable to expand the determinant along the second row, which immediately gets rid of the minus signs:

    Further simplifications reduce the matter to the simplest linear equation:

    Answer: at the value of the parameter obtained from this equation.

    A check is easy here: substitute the resulting value into the original determinant and make sure that it vanishes when expanded again.

    In conclusion, let us consider another typical problem, more algebraic in nature, which is traditionally included in a linear algebra course. It is so widespread that it deserves a separate topic:

    Prove that 3 vectors form a basis of three-dimensional space
    and find the coordinates of the 4th vector in this basis

    Example 8

    Given vectors. Show that the vectors form a basis of three-dimensional space and find the coordinates of the vector in this basis.

    Solution: First, let us deal with the condition. By the condition, four vectors are given and, as you can see, they already have coordinates in some basis. What that basis is does not concern us. The following is of interest: three of the vectors may well form a new basis. The first stage completely coincides with the solution of Example 6: we need to check whether the vectors are really linearly independent:

    Let us calculate the determinant composed of the coordinates of the vectors:

    Since it is nonzero, the vectors are linearly independent and form a basis of three-dimensional space.

    ! Important: be sure to write the coordinates of the vectors into the columns of the determinant, not into the rows. Otherwise there will be confusion in the further solution algorithm.
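    The remaining step of this kind of problem, finding the coordinates of the fourth vector, amounts to solving a 3x3 linear system whose columns are the basis vectors. Below is a hedged sketch via Cramer's rule; all vectors here are invented for illustration and are not the ones from Example 8:

```python
# Coordinates of x in the basis {e1, e2, e3}: solve the system whose
# column matrix is (e1 | e2 | e3) using Cramer's rule.

def det3(u, v, w):
    """3x3 determinant with u, v, w as columns."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - v[0] * (u[1] * w[2] - u[2] * w[1])
            + w[0] * (u[1] * v[2] - u[2] * v[1]))

def coordinates_in_basis(x, e1, e2, e3):
    d = det3(e1, e2, e3)
    assert d != 0, "the three vectors do not form a basis"
    # Replace one column at a time by x, as Cramer's rule prescribes.
    return (det3(x, e2, e3) / d,
            det3(e1, x, e3) / d,
            det3(e1, e2, x) / d)

e1, e2, e3 = (1, 0, 0), (1, 1, 0), (1, 1, 1)  # illustrative basis
x = (3, 2, 1)
print(coordinates_in_basis(x, e1, e2, e3))  # (1.0, 1.0, 1.0): x = e1 + e2 + e3
```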