    Find the rank and basis of the vector system online. How to find the basis of a given vector system

    In the article on n-dimensional vectors we arrived at the concept of a linear space generated by a set of n-dimensional vectors. Now we consider the equally important concepts of the dimension and the basis of a vector space. They are directly related to the concept of a linearly independent system of vectors, so it is recommended to first recall the basics of that topic.

    Let's introduce some definitions.

    Definition 1

    The dimension of a vector space is the number equal to the maximum number of linearly independent vectors in this space.

    Definition 2

    A basis of a vector space is an ordered set of linearly independent vectors whose number equals the dimension of the space.

    Consider the space of n-dimensional vectors. Its dimension is accordingly equal to n. Take the system of n unit vectors:

    e(1) = (1, 0, ..., 0)
    e(2) = (0, 1, ..., 0)
    ...
    e(n) = (0, 0, ..., 1)

    Use these vectors as the rows of a matrix A: it is the identity matrix of size n × n. The rank of this matrix is n. Therefore, the system of vectors e(1), e(2), ..., e(n) is linearly independent, and no vector can be added to the system without destroying its linear independence.

    Since the number of vectors in the system is n, the dimension of the space of n-dimensional vectors is n, and the unit vectors e(1), e(2), ..., e(n) form a basis of this space.

    From this definition we conclude: any system of n-dimensional vectors containing fewer than n vectors is not a basis of the space.

    If we swap the first and second vectors, we get the system e(2), e(1), ..., e(n). It is also a basis of the n-dimensional vector space. Compose a matrix taking the vectors of this system as its rows. It is obtained from the identity matrix by permuting the first two rows, so its rank equals n. The system e(2), e(1), ..., e(n) is linearly independent and is a basis of the n-dimensional vector space.

    By rearranging other vectors in the original system, we obtain one more basis.

    We could also take a linearly independent system of non-unit vectors; it too would be a basis of the n-dimensional vector space.

    Definition 3

    A vector space of dimension n has as many bases as there are linearly independent systems of n vectors of dimension n.

    The plane is a two-dimensional space - its basis will be any two non-collinear vectors. Any three non-coplanar vectors will serve as the basis of the three-dimensional space.

    Let's consider the application of this theory with specific examples.

    Example 1

    Initial data: vectors

    a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2)

    It is necessary to determine whether the indicated vectors are the basis of a three-dimensional vector space.

    Solution

    To solve the problem, we examine the given system of vectors for linear dependence. Compose a matrix whose rows are the coordinates of the vectors, and determine its rank.

    A = | 3 -2  1 |
        | 2  1  2 |
        | 3 -1 -2 |

    det A = 3·1·(-2) + (-2)·2·3 + 1·2·(-1) - 1·1·3 - (-2)·2·(-2) - 3·2·(-1) = -25 ≠ 0 ⇒ Rank(A) = 3

    Consequently, the vectors given in the problem are linearly independent, and their number equals the dimension of the vector space, so they form a basis of that space.

    Answer: these vectors are the basis of the vector space.
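
    For a quick numerical cross-check of such rank computations, here is a minimal sketch, assuming NumPy is available; the data are the vectors of Example 1:

        import numpy as np

        # Rows are the coordinate vectors a, b, c from Example 1.
        A = np.array([[3, -2,  1],
                      [2,  1,  2],
                      [3, -1, -2]])

        print(np.linalg.det(A))          # -25.0: nonzero, so the rows are independent
        rank = np.linalg.matrix_rank(A)  # 3: equals the dimension of the space
        print("basis" if rank == 3 else "not a basis")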

    Example 2

    Initial data: vectors

    a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2), d = (0, 1, 2)

    It is necessary to determine whether the indicated system of vectors can be the basis of three-dimensional space.

    Solution

    The system of vectors given in the problem statement is linearly dependent, since the maximum number of linearly independent three-dimensional vectors is 3. Thus, this system cannot serve as a basis of a three-dimensional vector space. Note, however, that the subsystem a = (3, -2, 1), b = (2, 1, 2), c = (3, -1, -2) of the original system is a basis.

    Answer: the specified system of vectors is not a basis.

    Example 3

    Initial data: vectors

    a = (1, 2, 3, 3), b = (2, 5, 6, 8), c = (1, 3, 2, 4), d = (2, 5, 4, 7)

    Can they be the basis of the four-dimensional space?

    Solution

    Compose the matrix using the coordinates of the given vectors as rows:

    A = | 1 2 3 3 |
        | 2 5 6 8 |
        | 1 3 2 4 |
        | 2 5 4 7 |

    Using the Gauss method, we determine the rank of the matrix:

    A = | 1 2 3 3 |     | 1 2  3 3 |     | 1 2  3  3 |     | 1 2  3  3 |
        | 2 5 6 8 |  ~  | 0 1  0 2 |  ~  | 0 1  0  2 |  ~  | 0 1  0  2 |  ⇒  Rank(A) = 4
        | 1 3 2 4 |     | 0 1 -1 1 |     | 0 0 -1 -1 |     | 0 0 -1 -1 |
        | 2 5 4 7 |     | 0 1 -2 1 |     | 0 0 -2 -1 |     | 0 0  0  1 |

    Consequently, the system of given vectors is linearly independent, and its number of vectors equals the dimension of the vector space, so the vectors form a basis of the four-dimensional vector space.

    Answer: the given vectors are the basis of the four-dimensional space.
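
    The same row reduction can be scripted directly; a sketch, assuming NumPy, where gauss_rank is a hypothetical helper that mirrors the elimination steps above:

        import numpy as np

        def gauss_rank(M, tol=1e-10):
            """Rank via forward Gaussian elimination with row swaps."""
            A = np.array(M, dtype=float)
            rows, cols = A.shape
            rank = 0
            for col in range(cols):
                # look for a usable pivot at or below row `rank`
                pivot = next((r for r in range(rank, rows) if abs(A[r, col]) > tol), None)
                if pivot is None:
                    continue
                A[[rank, pivot]] = A[[pivot, rank]]  # move the pivot row up
                # eliminate everything below the pivot
                A[rank + 1:] -= np.outer(A[rank + 1:, col] / A[rank, col], A[rank])
                rank += 1
            return rank

        M = [[1, 2, 3, 3], [2, 5, 6, 8], [1, 3, 2, 4], [2, 5, 4, 7]]
        print(gauss_rank(M))  # 4: the four vectors form a basis of the 4-dimensional space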

    Example 4

    Initial data: vectors

    a(1) = (1, 2, -1, -2), a(2) = (0, 2, 1, -3), a(3) = (1, 0, 0, 5)

    Do they form a basis for a 4-dimensional space?

    Solution

    The given system of vectors is linearly independent, but three vectors are too few to form a basis of a four-dimensional space.

    Answer: no, they don't.

    Expansion of a vector in a basis

    Suppose arbitrary vectors e(1), e(2), ..., e(n) form a basis of an n-dimensional vector space. Add to them some n-dimensional vector x: the resulting system of vectors becomes linearly dependent. The properties of linear dependence state that at least one vector of such a system can be expressed linearly in terms of the others; reformulating, at least one vector of a linearly dependent system can be expanded in terms of the remaining vectors.

    Thus we arrive at a most important theorem:

    Theorem 1

    Any vector of an n-dimensional vector space expands uniquely in a given basis.

    Proof 1

    Let us prove this theorem:

    let e(1), e(2), ..., e(n) be a basis of the n-dimensional vector space. Make the system linearly dependent by adding the n-dimensional vector x to it. Then x can be expressed linearly in terms of the vectors e:

    x = x1 e(1) + x2 e(2) + ... + xn e(n), where x1, x2, ..., xn are some numbers.

    Now let us prove that this expansion is unique. Suppose it is not, and there is another similar expansion:

    x = x̃1 e(1) + x̃2 e(2) + ... + x̃n e(n), where x̃1, x̃2, ..., x̃n are some numbers.

    Subtract from the left and right sides of this equality, respectively, the left and right sides of the equality x = x1 e(1) + x2 e(2) + ... + xn e(n). We get:

    0 = (x̃1 - x1) e(1) + (x̃2 - x2) e(2) + ... + (x̃n - xn) e(n)

    The system of basis vectors e(1), e(2), ..., e(n) is linearly independent; by the definition of linear independence, the above equality is possible only if all the coefficients (x̃1 - x1), (x̃2 - x2), ..., (x̃n - xn) equal zero. Hence x1 = x̃1, x2 = x̃2, ..., xn = x̃n, which proves that the expansion of a vector in a basis is unique.

    The coefficients x1, x2, ..., xn are called the coordinates of the vector x in the basis e(1), e(2), ..., e(n).

    The proven theorem clarifies the phrase "given an n-dimensional vector x = (x1, x2, ..., xn)": it means that a vector x of an n-dimensional vector space is considered, with its coordinates given in some basis. The same vector in a different basis of the n-dimensional space will have different coordinates.

    Consider the following example: suppose that in some basis of an n-dimensional vector space a system of n linearly independent vectors

    e(1) = (e1(1), e2(1), ..., en(1))
    e(2) = (e1(2), e2(2), ..., en(2))
    ⋮
    e(n) = (e1(n), e2(n), ..., en(n))

    is given, together with a vector x = (x1, x2, ..., xn).

    The vectors e(1), e(2), ..., e(n) are in this case themselves a basis of this vector space.

    Suppose it is necessary to determine the coordinates of the vector x in the basis e(1), e(2), ..., e(n); denote them x̃1, x̃2, ..., x̃n.

    The vector x is then represented as follows:

    x = x̃1 e(1) + x̃2 e(2) + ... + x̃n e(n)

    Let's write this expression in coordinate form:

    (x1, x2, ..., xn) = x̃1 (e1(1), e2(1), ..., en(1)) + x̃2 (e1(2), e2(2), ..., en(2)) + ... + x̃n (e1(n), e2(n), ..., en(n)) =
    = (x̃1 e1(1) + x̃2 e1(2) + ... + x̃n e1(n), x̃1 e2(1) + x̃2 e2(2) + ... + x̃n e2(n), ..., x̃1 en(1) + x̃2 en(2) + ... + x̃n en(n))

    The resulting equality is equivalent to a system of n linear algebraic equations with n unknowns x̃1, x̃2, ..., x̃n:

    x1 = x̃1 e1(1) + x̃2 e1(2) + ... + x̃n e1(n)
    x2 = x̃1 e2(1) + x̃2 e2(2) + ... + x̃n e2(n)
    ⋮
    xn = x̃1 en(1) + x̃2 en(2) + ... + x̃n en(n)

    The matrix of this system is:

    | e1(1) e1(2) ⋯ e1(n) |
    | e2(1) e2(2) ⋯ e2(n) |
    |  ⋮     ⋮        ⋮   |
    | en(1) en(2) ⋯ en(n) |

    Denote it by A. The columns of A are the vectors of the linearly independent system e(1), e(2), ..., e(n), so the rank of the matrix is n and its determinant is nonzero. Therefore the system of equations has a unique solution, which can be found by any convenient method, for example the Cramer method or the matrix method. In this way we determine the coordinates x̃1, x̃2, ..., x̃n of the vector x in the basis e(1), e(2), ..., e(n).
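
    In computational terms this is a single linear solve. A minimal sketch, assuming NumPy; the numbers anticipate Example 6 below, with the basis vectors e(1), e(2), e(3) placed as the columns of A:

        import numpy as np

        A = np.array([[ 1,  3,  2],
                      [-1,  2,  1],
                      [ 1, -5, -3]], dtype=float)  # columns: e(1), e(2), e(3)
        x = np.array([6, 2, -7], dtype=float)      # coordinates in the old basis

        x_new = np.linalg.solve(A, x)  # unique, since det(A) != 0
        print(x_new)                   # [1. 1. 1.]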

    Let's apply the considered theory to a specific example.

    Example 6

    Initial data: in a basis of three-dimensional space, vectors

    e(1) = (1, -1, 1), e(2) = (3, 2, -5), e(3) = (2, 1, -3), x = (6, 2, -7)

    It is necessary to confirm that the system of vectors e(1), e(2), e(3) serves as a basis of the given space, and also to determine the coordinates of the vector x in this basis.

    Solution

    The system of vectors e(1), e(2), e(3) is a basis of the three-dimensional space if it is linearly independent. We check this by determining the rank of the matrix A whose rows are the given vectors e(1), e(2), e(3).

    We use the Gauss method:

    A = | 1 -1  1 |     | 1 -1  1 |     | 1 -1    1 |
        | 3  2 -5 |  ~  | 0  5 -8 |  ~  | 0  5   -8 |
        | 2  1 -3 |     | 0  3 -5 |     | 0  0 -1/5 |

    Rank(A) = 3. Thus, the system of vectors e(1), e(2), e(3) is linearly independent and is a basis.

    Let the vector x have coordinates x̃1, x̃2, x̃3 in this basis. These coordinates are related by the equations:

    x1 = x̃1 e1(1) + x̃2 e1(2) + x̃3 e1(3)
    x2 = x̃1 e2(1) + x̃2 e2(2) + x̃3 e2(3)
    x3 = x̃1 e3(1) + x̃2 e3(2) + x̃3 e3(3)

    Substitute the values given in the problem:

    x̃1 + 3x̃2 + 2x̃3 = 6
    -x̃1 + 2x̃2 + x̃3 = 2
    x̃1 - 5x̃2 - 3x̃3 = -7

    Let's solve the system of equations by the Cramer method:

    ∆ = |  1  3  2 |
        | -1  2  1 |
        |  1 -5 -3 | = -1

    ∆x̃1 = |  6  3  2 |
          |  2  2  1 |
          | -7 -5 -3 | = -1,   x̃1 = ∆x̃1 / ∆ = -1 / -1 = 1

    ∆x̃2 = |  1  6  2 |
          | -1  2  1 |
          |  1 -7 -3 | = -1,   x̃2 = ∆x̃2 / ∆ = -1 / -1 = 1

    ∆x̃3 = |  1  3  6 |
          | -1  2  2 |
          |  1 -5 -7 | = -1,   x̃3 = ∆x̃3 / ∆ = -1 / -1 = 1

    So, in the basis e(1), e(2), e(3) the vector x has coordinates x̃1 = 1, x̃2 = 1, x̃3 = 1.

    Answer: x = (1, 1, 1)
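
    The Cramer computation above is easy to transcribe into code; a sketch, assuming NumPy, where cramer is a hypothetical helper, applied here to the system of Example 6:

        import numpy as np

        def cramer(A, b):
            """Solve A x = b by the Cramer method (square A, det A != 0)."""
            A, b = np.asarray(A, float), np.asarray(b, float)
            d = np.linalg.det(A)
            x = []
            for j in range(A.shape[1]):
                Aj = A.copy()
                Aj[:, j] = b  # replace column j by the right-hand side
                x.append(np.linalg.det(Aj) / d)
            return np.array(x)

        A = [[1, 3, 2], [-1, 2, 1], [1, -5, -3]]
        b = [6, 2, -7]
        print(cramer(A, b))  # [1. 1. 1.]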

    Relationship between bases

    Suppose that two linearly independent systems of vectors are given in some basis of an n-dimensional vector space:

    c(1) = (c1(1), c2(1), ..., cn(1))
    c(2) = (c1(2), c2(2), ..., cn(2))
    ⋮
    c(n) = (c1(n), c2(n), ..., cn(n))

    e(1) = (e1(1), e2(1), ..., en(1))
    e(2) = (e1(2), e2(2), ..., en(2))
    ⋮
    e(n) = (e1(n), e2(n), ..., en(n))

    Each of these systems is also a basis of the given space.

    Let c̃1(1), c̃2(1), ..., c̃n(1) be the coordinates of the vector c(1) in the basis e(1), e(2), ..., e(n); then the coordinates are related by the system of linear equations:

    c1(1) = c̃1(1) e1(1) + c̃2(1) e1(2) + ... + c̃n(1) e1(n)
    c2(1) = c̃1(1) e2(1) + c̃2(1) e2(2) + ... + c̃n(1) e2(n)
    ⋮
    cn(1) = c̃1(1) en(1) + c̃2(1) en(2) + ... + c̃n(1) en(n)

    In matrix form the system can be written as follows:

    (c1(1), c2(1), ..., cn(1)) = (c̃1(1), c̃2(1), ..., c̃n(1)) · | e1(1) e2(1) ⋯ en(1) |
                                                                | e1(2) e2(2) ⋯ en(2) |
                                                                |  ⋮     ⋮        ⋮   |
                                                                | e1(n) e2(n) ⋯ en(n) |

    We write the same notation for the vectors c(2), ..., c(n) by analogy:

    (c1(2), c2(2), ..., cn(2)) = (c̃1(2), c̃2(2), ..., c̃n(2)) · | e1(1) e2(1) ⋯ en(1) |
                                                                | e1(2) e2(2) ⋯ en(2) |
                                                                |  ⋮     ⋮        ⋮   |
                                                                | e1(n) e2(n) ⋯ en(n) |

    ⋮

    (c1(n), c2(n), ..., cn(n)) = (c̃1(n), c̃2(n), ..., c̃n(n)) · | e1(1) e2(1) ⋯ en(1) |
                                                                | e1(2) e2(2) ⋯ en(2) |
                                                                |  ⋮     ⋮        ⋮   |
                                                                | e1(n) e2(n) ⋯ en(n) |

    Combining the matrix equalities into one expression:

    | c1(1) c2(1) ⋯ cn(1) |   | c̃1(1) c̃2(1) ⋯ c̃n(1) |   | e1(1) e2(1) ⋯ en(1) |
    | c1(2) c2(2) ⋯ cn(2) | = | c̃1(2) c̃2(2) ⋯ c̃n(2) | · | e1(2) e2(2) ⋯ en(2) |
    |  ⋮     ⋮        ⋮   |   |  ⋮      ⋮         ⋮   |   |  ⋮     ⋮        ⋮   |
    | c1(n) c2(n) ⋯ cn(n) |   | c̃1(n) c̃2(n) ⋯ c̃n(n) |   | e1(n) e2(n) ⋯ en(n) |

    This equality determines the relationship between the vectors of the two bases.

    By the same principle, all vectors of the basis e(1), e(2), ..., e(n) can be expressed through the basis c(1), c(2), ..., c(n):

    | e1(1) e2(1) ⋯ en(1) |   | ẽ1(1) ẽ2(1) ⋯ ẽn(1) |   | c1(1) c2(1) ⋯ cn(1) |
    | e1(2) e2(2) ⋯ en(2) | = | ẽ1(2) ẽ2(2) ⋯ ẽn(2) | · | c1(2) c2(2) ⋯ cn(2) |
    |  ⋮     ⋮        ⋮   |   |  ⋮     ⋮         ⋮  |   |  ⋮     ⋮        ⋮   |
    | e1(n) e2(n) ⋯ en(n) |   | ẽ1(n) ẽ2(n) ⋯ ẽn(n) |   | c1(n) c2(n) ⋯ cn(n) |

    Let's give the following definitions:

    Definition 5

    The matrix

    | c̃1(1) c̃2(1) ⋯ c̃n(1) |
    | c̃1(2) c̃2(2) ⋯ c̃n(2) |
    |  ⋮      ⋮         ⋮   |
    | c̃1(n) c̃2(n) ⋯ c̃n(n) |

    is called the transition matrix from the basis e(1), e(2), ..., e(n) to the basis c(1), c(2), ..., c(n).

    Definition 6

    The matrix

    | ẽ1(1) ẽ2(1) ⋯ ẽn(1) |
    | ẽ1(2) ẽ2(2) ⋯ ẽn(2) |
    |  ⋮     ⋮         ⋮  |
    | ẽ1(n) ẽ2(n) ⋯ ẽn(n) |

    is called the transition matrix from the basis c(1), c(2), ..., c(n) to the basis e(1), e(2), ..., e(n).

    From these equalities it is evident that

    | c̃1(1) c̃2(1) ⋯ c̃n(1) |   | ẽ1(1) ẽ2(1) ⋯ ẽn(1) |   | 1 0 ⋯ 0 |
    | c̃1(2) c̃2(2) ⋯ c̃n(2) | · | ẽ1(2) ẽ2(2) ⋯ ẽn(2) | = | 0 1 ⋯ 0 |
    |  ⋮      ⋮         ⋮   |   |  ⋮     ⋮         ⋮  |   | ⋮ ⋮    ⋮ |
    | c̃1(n) c̃2(n) ⋯ c̃n(n) |   | ẽ1(n) ẽ2(n) ⋯ ẽn(n) |   | 0 0 ⋯ 1 |

    and

    | ẽ1(1) ẽ2(1) ⋯ ẽn(1) |   | c̃1(1) c̃2(1) ⋯ c̃n(1) |   | 1 0 ⋯ 0 |
    | ẽ1(2) ẽ2(2) ⋯ ẽn(2) | · | c̃1(2) c̃2(2) ⋯ c̃n(2) | = | 0 1 ⋯ 0 |
    |  ⋮     ⋮         ⋮  |   |  ⋮      ⋮         ⋮   |   | ⋮ ⋮    ⋮ |
    | ẽ1(n) ẽ2(n) ⋯ ẽn(n) |   | c̃1(n) c̃2(n) ⋯ c̃n(n) |   | 0 0 ⋯ 1 |

    i.e., the two transition matrices are mutually inverse.

    Let's consider the theory with a specific example.

    Example 7

    Initial data: it is necessary to find the transition matrix from the basis

    c(1) = (1, 2, 1), c(2) = (2, 3, 3), c(3) = (3, 7, 1)

    to the basis

    e(1) = (3, 1, 4), e(2) = (5, 2, 1), e(3) = (1, 1, -6)

    It is also necessary to give the relationship between the coordinates of an arbitrary vector x in the two bases.

    Solution

    1. Let T be the transition matrix; then the following equality holds:

    | 3 1  4 |       | 1 2 1 |
    | 5 2  1 | = T · | 2 3 3 |
    | 1 1 -6 |       | 3 7 1 |

    Multiply both sides of the equality on the right by

    | 1 2 1 |⁻¹
    | 2 3 3 |
    | 3 7 1 |

    and get:

    T = | 3 1  4 |   | 1 2 1 |⁻¹
        | 5 2  1 | · | 2 3 3 |
        | 1 1 -6 |   | 3 7 1 |

    2. Compute the transition matrix:

    T = | 3 1  4 |   | 1 2 1 |⁻¹   | 3 1  4 |   | -18  5  3 |   | -27  9  4 |
        | 5 2  1 | · | 2 3 3 |   = | 5 2  1 | · |   7 -2 -1 | = | -71 20 12 |
        | 1 1 -6 |   | 3 7 1 |     | 1 1 -6 |   |   5 -1 -1 |   | -41  9  8 |

    3. Establish the relationship between the coordinates of the vector x.

    Assume that in the basis c(1), c(2), c(3) the vector x has coordinates x1, x2, x3; then:

    x = (x1, x2, x3) · | 1 2 1 |
                       | 2 3 3 |
                       | 3 7 1 |

    and in the basis e(1), e(2), e(3) it has coordinates x̃1, x̃2, x̃3; then:

    x = (x̃1, x̃2, x̃3) · | 3 1  4 |
                         | 5 2  1 |
                         | 1 1 -6 |

    Since the left-hand sides of these equalities are equal, we can equate the right-hand sides:

    (x1, x2, x3) · | 1 2 1 |   =   (x̃1, x̃2, x̃3) · | 3 1  4 |
                   | 2 3 3 |                        | 5 2  1 |
                   | 3 7 1 |                        | 1 1 -6 |

    Multiply both sides on the right by

    | 1 2 1 |⁻¹
    | 2 3 3 |
    | 3 7 1 |

    and get:

    (x1, x2, x3) = (x̃1, x̃2, x̃3) · | 3 1  4 |   | 1 2 1 |⁻¹
                                    | 5 2  1 | · | 2 3 3 |
                                    | 1 1 -6 |   | 3 7 1 |

    ⇔ (x1, x2, x3) = (x̃1, x̃2, x̃3) · T ⇔

    (x1, x2, x3) = (x̃1, x̃2, x̃3) · | -27  9  4 |
                                    | -71 20 12 |
                                    | -41  9  8 |

    On the other hand,

    (x̃1, x̃2, x̃3) = (x1, x2, x3) · | -27  9  4 |⁻¹
                                    | -71 20 12 |
                                    | -41  9  8 |

    The last equalities show the relationship between the coordinates of the vector x in the two bases.

    Answer: the transition matrix is

    | -27  9  4 |
    | -71 20 12 |
    | -41  9  8 |

    The coordinates of the vector x in the given bases are related by:

    (x1, x2, x3) = (x̃1, x̃2, x̃3) · | -27  9  4 |
                                    | -71 20 12 |
                                    | -41  9  8 |

    (x̃1, x̃2, x̃3) = (x1, x2, x3) · | -27  9  4 |⁻¹
                                    | -71 20 12 |
                                    | -41  9  8 |
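
    The computation in Example 7 is easy to cross-check numerically. A minimal sketch, assuming NumPy; the test coordinates x_new are hypothetical sample values:

        import numpy as np

        C = np.array([[1, 2, 1], [2, 3, 3], [3, 7, 1]], dtype=float)   # rows: c(1), c(2), c(3)
        E = np.array([[3, 1, 4], [5, 2, 1], [1, 1, -6]], dtype=float)  # rows: e(1), e(2), e(3)

        T = E @ np.linalg.inv(C)  # from E = T C
        print(np.round(T))        # [[-27. 9. 4.] [-71. 20. 12.] [-41. 9. 8.]]

        # Coordinate transfer: (x1, x2, x3) = (x~1, x~2, x~3) T
        x_new = np.array([1.0, 2.0, 3.0])         # hypothetical coordinates in the basis e
        x_old = x_new @ T                         # coordinates of the same vector in the basis c
        assert np.allclose(x_new @ E, x_old @ C)  # both sides name the same space vector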


    Lectures on algebra and geometry. Semester 1.

    Lecture 9. Basis of vector space.

    Abstract: system of vectors, linear combination of a system of vectors, coefficients of a linear combination of a system of vectors, basis on a line, in a plane and in space, dimensions of the vector spaces on a line, in a plane and in space, expansion of a vector in a basis, coordinates of a vector with respect to a basis, theorem on the equality of two vectors, linear operations with vectors in coordinate form, orthonormal triple of vectors, right and left triples of vectors, orthonormal basis, the main theorem of vector algebra.

    Chapter 9. Basis of a vector space and expansion of a vector in a basis.

    item 1. Basis on a line, in a plane and in space.

    Definition. Any finite set of vectors is called a vector system.

    Definition. An expression of the form α1 a1 + α2 a2 + ... + αn an, where α1, α2, ..., αn are numbers, is called a linear combination of the system of vectors a1, a2, ..., an, and the numbers α1, α2, ..., αn are called the coefficients of this linear combination.

    Let L, P and S be a line, a plane and the space of points, respectively, and let V1, V2 and V3 denote the corresponding vector spaces of vectors as directed segments on the line L, in the plane P and in the space S.


    Definition. A basis of the vector space V1 is any nonzero vector of this space, i.e., any nonzero vector collinear with the line L.

    Notation: e1 is a basis of V1.

    Definition. A basis of the vector space V2 is any ordered pair of noncollinear vectors of the space V2.

    Notation: (e1, e2), where e1 and e2 are noncollinear, is a basis of V2.

    Definition. A basis of the vector space V3 is any ordered triple of non-coplanar vectors (i.e., vectors not lying in one plane) of the space V3.

    Notation: (e1, e2, e3) is a basis of V3.

    Comment. A basis of a vector space cannot contain the zero vector: in the space V1 this holds by definition; in the space V2 two vectors are collinear if at least one of them is zero; in the space V3 three vectors are coplanar, i.e., lie in one plane, if at least one of the three is zero.

    item 2. Expansion of a vector in a basis.

    Definition. Let x be an arbitrary vector and a1, a2, ..., an an arbitrary system of vectors. If the equality

    x = α1 a1 + α2 a2 + ... + αn an (1)

    holds, then we say that the vector x is represented as a linear combination of the given system of vectors. If the system a1, a2, ..., an is a basis of the vector space, then equality (1) is called the expansion of the vector x in this basis, and the coefficients α1, α2, ..., αn of the linear combination are called the coordinates of the vector x with respect to the basis.

    Theorem. (On the expansion of a vector in terms of a basis.)

    Any vector of a vector space can be expanded in its basis, and in a unique way.

    Proof. 1) Let L be an arbitrary line (or axis) and e1 a basis of V1. Take an arbitrary vector x of V1. Since both x and e1 are collinear with the same line L, we have x ∥ e1. We use the theorem on the collinearity of two vectors: since e1 ≠ 0, there exists a number α such that x = α e1, and thus we obtain the expansion of the vector x in the basis e1 of the vector space V1.

    Now let us prove the uniqueness of such an expansion. Suppose the opposite: let there be two expansions of the vector x in the basis e1 of the vector space V1,

    x = α e1 and x = β e1.

    Then, subtracting one from the other and using the law of distributivity, we get

    (α − β) e1 = 0.

    Since e1 ≠ 0, it follows from the last equality that α = β, q.e.d.

    2) Now let P be an arbitrary plane and (e1, e2) a basis of V2. Let x be an arbitrary vector of this plane. Lay off all three vectors from any one point of this plane and construct 4 lines: the line on which the vector e1 lies, the line on which the vector e2 lies, and, through the end of the vector x, a line parallel to the vector e1 and a line parallel to the vector e2. These 4 lines cut out a parallelogram (see Fig. 3 below). By the parallelogram rule, x = a + b, where the vector a lies on the line of e1 and the vector b lies on the line of e2, so that a ∥ e1 and b ∥ e2.

    Now, by what has already been proved in the first part of this proof, there exist numbers α and β such that

    a = α e1 and b = β e2. From here we get:

    x = a + b = α e1 + β e2,

    and the possibility of expansion in the basis is proved.

    Now let us prove the uniqueness of the expansion in terms of the basis. Suppose the opposite: let there be two expansions of the vector x in the basis (e1, e2) of the vector space V2,

    x = α1 e1 + α2 e2 and x = β1 e1 + β2 e2. We get the equality

    (α1 − β1) e1 + (α2 − β2) e2 = 0.

    If α1 = β1, then (α2 − β2) e2 = 0, and since e2 ≠ 0, also α2 = β2, so the expansion coefficients coincide: α1 = β1, α2 = β2. Let now α1 ≠ β1. Then

    e1 = λ e2, where λ = −(α2 − β2) / (α1 − β1).

    By the theorem on the collinearity of two vectors, this implies that e1 ∥ e2. This contradicts the hypothesis of the theorem. Consequently, α1 = β1 and α2 = β2, q.e.d.

    3) Let (e1, e2, e3) be a basis of V3 and let x be an arbitrary vector. Let us carry out the following constructions.

    Lay off all three basis vectors e1, e2, e3 and the vector x from one point and construct 6 planes: the plane in which the basis vectors e1 and e2 lie, the plane of the vectors e1 and e3, and the plane of the vectors e2 and e3; further, through the end of the vector x draw three planes parallel to the three planes just constructed. These 6 planes cut out a parallelepiped.

    By the rule of addition of vectors we obtain the equality

    x = a + b + c, (1)

    where the vectors a, b, c lie on the lines of the basis vectors e1, e2, e3, respectively. By construction a ∥ e1. Hence, by the theorem on the collinearity of two vectors, there exists a number α such that a = α e1. Similarly, b = β e2 and c = γ e3, where β and γ are numbers. Now, substituting these equalities into (1), we get

    x = α e1 + β e2 + γ e3,

    and the possibility of expansion in the basis is proved.

    Let us prove the uniqueness of such an expansion. Suppose the opposite: let there be two expansions of the vector x in the basis (e1, e2, e3):

    x = α1 e1 + α2 e2 + α3 e3 and x = β1 e1 + β2 e2 + β3 e3. Then

    (α1 − β1) e1 + (α2 − β2) e2 + (α3 − β3) e3 = 0. (3)

    Note that by hypothesis the vectors e1, e2, e3 are non-coplanar and, therefore, pairwise noncollinear.

    Two cases are possible: α3 ≠ β3 or α3 = β3.

    a) Let α3 ≠ β3; then equality (3) implies:

    e3 = −((α1 − β1) / (α3 − β3)) e1 − ((α2 − β2) / (α3 − β3)) e2. (4)

    It follows from equality (4) that the vector e3 expands in the basis (e1, e2), i.e., the vector e3 lies in the plane of the vectors e1 and e2; therefore the vectors e1, e2, e3 are coplanar, which contradicts the condition.

    b) The case α3 = β3 remains. Then from equality (3) we obtain

    (α1 − β1) e1 + (α2 − β2) e2 = 0. (5)

    Since (e1, e2) is a basis of the space of vectors lying in the plane, and we have already proved the uniqueness of the expansion in a basis for vectors of the plane, it follows from equality (5) that α1 = β1 and α2 = β2, q.e.d.

    The theorem is proved.

    Corollary.

    1) There is a one-to-one correspondence between the set of vectors of the vector space V1 and the set of real numbers R.

    2) There is a one-to-one correspondence between the set of vectors of the vector space V2 and the Cartesian square R² of the set of real numbers.

    3) There is a one-to-one correspondence between the set of vectors of the vector space V3 and the Cartesian cube R³ of the set of real numbers R.

    Proof. Let us prove the third statement; the first two are proved similarly.

    Select and fix in the space V3 some basis (e1, e2, e3) and set up a mapping of V3 into R³ according to the following rule:

    x ↦ (x1, x2, x3), (6)

    i.e., each vector is associated with the ordered triple of its coordinates.

    Since for a fixed basis each vector has a single triple of coordinates, the correspondence given by rule (6) is indeed a mapping.

    It follows from the proof of the theorem that different vectors have different coordinates with respect to the same basis, i.e., mapping (6) is an injection.

    Let (x1, x2, x3) be an arbitrary ordered triple of real numbers. Consider the vector x = x1 e1 + x2 e2 + x3 e3. By construction, this vector has coordinates (x1, x2, x3). Consequently, mapping (6) is a surjection.

    A mapping that is both injective and surjective is bijective, i.e., one-to-one, q.e.d.

    The corollary is proved.

    Theorem. (On the equality of two vectors.)

    Two vectors are equal if and only if their coordinates are equal relative to the same basis.

    The proof immediately follows from the previous corollary.

    item 3. Dimension of a vector space.

    Definition. The number of vectors in a basis of a vector space is called its dimension.

    Notation: dim V is the dimension of the vector space V.

    Thus, in accordance with this and the previous definitions, we have:

    1) dim V1 = 1, where V1 is the vector space of vectors of the line L;

    e1 is a basis of V1; x = α e1 is the expansion of a vector x in the basis e1; α is the coordinate of the vector x in the basis e1.

    2) dim V2 = 2, where V2 is the vector space of vectors of the plane P;

    (e1, e2) is a basis of V2; x = α1 e1 + α2 e2 is the expansion of a vector x in the basis (e1, e2); (α1, α2) are the coordinates of the vector x in this basis.

    3) dim V3 = 3, where V3 is the vector space of vectors in the space of points S;

    (e1, e2, e3) is a basis of V3; x = α1 e1 + α2 e2 + α3 e3 is the expansion of a vector x in the basis (e1, e2, e3); (α1, α2, α3) are the coordinates of the vector x in this basis.

    Comment. If L ⊂ P ⊂ S, then V1 ⊂ V2 ⊂ V3, and the basis of the space V3 can be chosen so that e1 is a basis of V1 and (e1, e2) is a basis of V2. Then vectors of the line and of the plane can be regarded as vectors of space whose remaining coordinates are zero.

    Thus, any vector of the line L, of the plane P and of the space S can be expanded in the basis (e1, e2, e3).

    Notation. By virtue of the theorem on the equality of vectors, we can identify any vector with the ordered triple of its coordinates and write:

    x = (x1, x2, x3).

    This is possible only when the basis is fixed and there is no danger of confusion.

    Definition. Writing a vector in the form of an ordered triple of real numbers is called the coordinate form of writing a vector: x = (x1, x2, x3).

    item 4. Linear operations with vectors in coordinate notation.

    Let (e1, e2, e3) be a basis of the space V3 and let x and y be two of its arbitrary vectors. Let x = (x1, x2, x3) and y = (y1, y2, y3) be these vectors written in coordinate form. Let, further, λ be an arbitrary real number. In this notation the following theorem holds.

    Theorem. (On linear operations on vectors in coordinate form.)

    1) x + y = (x1 + y1, x2 + y2, x3 + y3);
    2) λ x = (λ x1, λ x2, λ x3).

    In other words, in order to add two vectors you need to add their corresponding coordinates, and to multiply a vector by a number you need to multiply each coordinate of the vector by that number.

    Proof. Since, by the condition of the theorem, x = x1 e1 + x2 e2 + x3 e3 and y = y1 e1 + y2 e2 + y3 e3, then, using the axioms of a vector space that govern the operations of vector addition and multiplication of a vector by a number, we obtain:

    x + y = (x1 e1 + x2 e2 + x3 e3) + (y1 e1 + y2 e2 + y3 e3) = (x1 + y1) e1 + (x2 + y2) e2 + (x3 + y3) e3.

    This implies x + y = (x1 + y1, x2 + y2, x3 + y3). The second equality is proved similarly.

    The theorem is proved.

    item 5. Orthogonal vectors. Orthonormal basis.

    Definition. Two vectors are called orthogonal if the angle between them is a right angle, i.e., equals 90°.

    Notation: a ⊥ b means the vectors a and b are orthogonal.

    Definition. A triple of vectors (e1, e2, e3) is called orthogonal if these vectors are pairwise orthogonal, i.e., e1 ⊥ e2, e1 ⊥ e3, e2 ⊥ e3.

    Definition. A triple of vectors (e1, e2, e3) is called orthonormal if it is orthogonal and the lengths of all its vectors are equal to one: |e1| = |e2| = |e3| = 1.

    Comment. It follows from the definition that an orthogonal, and hence an orthonormal, triple of vectors is non-coplanar.

    Definition. An ordered non-coplanar triple of vectors (e1, e2, e3), laid off from one point, is called right (right-oriented) if, when viewed from the end of the third vector e3 onto the plane in which the first two vectors e1 and e2 lie, the shortest rotation from the first vector to the second goes counterclockwise. Otherwise the triple of vectors is called left (left-oriented).

    Fig. 6 shows a right triple of vectors; Fig. 7 shows a left triple of vectors.

    Definition. A basis (e1, e2, e3) of the vector space V3 is called orthonormal if (e1, e2, e3) is an orthonormal triple of vectors.

    Notation: in what follows we will use a right orthonormal basis.

    An expression of the form λ1 A1 + λ2 A2 + ... + λn An is called a linear combination of the vectors A1, A2, ..., An with coefficients λ1, λ2, ..., λn.

    Definition of the linear dependence of a system of vectors

    A system of vectors A1, A2, ..., An is called linearly dependent if there is a nonzero set of numbers λ1, λ2, ..., λn for which the linear combination λ1 A1 + λ2 A2 + ... + λn An equals the zero vector, that is, the system of equations A1 x1 + A2 x2 + ... + An xn = Θ has a nonzero solution.
    A set of numbers λ1, λ2, ..., λn is nonzero if at least one of the numbers λ1, λ2, ..., λn is nonzero.

    Definition of the linear independence of a system of vectors

    A system of vectors A1, A2, ..., An is called linearly independent if the linear combination of these vectors λ1 A1 + λ2 A2 + ... + λn An equals the zero vector only for the zero set of numbers λ1, λ2, ..., λn, that is, the system of equations A1 x1 + A2 x2 + ... + An xn = Θ has only the zero solution.

    Example 29.1

    Check if the vector system is linearly dependent

    Solution:

    1. We compose a system of equations:

    2. We solve it using the Gauss method. The Jordan transformations of the system are shown in Table 29.1. In the calculation the right-hand sides of the system are not written out, since they are equal to zero and do not change under Jordan transformations.

    3. From the last three rows of the table we write down the resolved system, which is equivalent to the original system:

    4. We get the general solution of the system:

    5. Setting the value of the free variable x3 = 1 at our discretion, we obtain the particular nonzero solution X = (-3, 2, 1).

    Answer: Thus, for the nonzero set of numbers (-3, 2, 1) the linear combination of the vectors equals the zero vector: -3A1 + 2A2 + 1A3 = Θ. Consequently, the system of vectors is linearly dependent.
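
    In code, testing linear dependence amounts to computing the null space of the matrix whose columns are the given vectors. The vectors of Example 29.1 were given in the original as images and are lost, so the three vectors below are hypothetical stand-ins, chosen only so that the stated answer -3A1 + 2A2 + 1A3 = Θ still holds; a sketch with SymPy:

        from sympy import Matrix

        # Hypothetical stand-ins for the lost vectors of Example 29.1.
        A1, A2, A3 = [1, 0, 1], [1, 1, 2], [1, -2, -1]

        M = Matrix.hstack(Matrix(A1), Matrix(A2), Matrix(A3))  # vectors as columns
        print(M.nullspace())  # [Matrix([[-3], [2], [1]])]: a nonzero solution exists,
                              # so the system is linearly dependent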

    Vector system properties

    Property (1)
    If the system of vectors is linearly dependent, then at least one of the vectors is expanded in terms of the rest and, conversely, if at least one of the vectors of the system is expanded in terms of the rest, then the system of vectors is linearly dependent.

    Property (2)
    If any subsystem of vectors is linearly dependent, then the whole system is linearly dependent.

    Property (3)
    If a system of vectors is linearly independent, then any of its subsystems is linearly independent.

    Property (4)
    Any system of vectors containing a zero vector is linearly dependent.

    Property (5)
    A system of m-dimensional vectors is always linearly dependent if the number of vectors n is greater than their dimension (n > m).

    Vector system basis

    The basis of a system of vectors A1, A2, ..., An is a subsystem B1, B2, ..., Br (each of the vectors B1, B2, ..., Br is one of the vectors A1, A2, ..., An) that satisfies the following conditions:
    1. B1, B2, ..., Br is a linearly independent system of vectors;
    2. any vector Aj of the system A1, A2, ..., An is linearly expressed in terms of the vectors B1, B2, ..., Br.

    r is the number of vectors included in the basis.

    Theorem 29.1 On the unit basis of a system of vectors.

    If a system of m-dimensional vectors contains m different unit vectors E1, E2, ..., Em, then they form a basis of the system.

    Algorithm for finding the basis of a system of vectors

    In order to find the basis of the system of vectors A1, A2, ..., An it is necessary to:

    • compose the homogeneous system of equations corresponding to the system of vectors: A1 x1 + A2 x2 + ... + An xn = Θ;
    • reduce this system by the Gauss method; the vectors corresponding to the basic (allowed) unknowns then form a basis of the system, as the sketch after this list illustrates.
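
    A compact way to carry out this algorithm is to row-reduce the matrix whose columns are the given vectors: the pivot columns mark a basis subsystem. A sketch with SymPy; system_basis is a hypothetical helper, and the sample vectors anticipate Example 1.5.2 below:

        from sympy import Matrix

        def system_basis(vectors):
            """Indices of the vectors forming a basis of the system."""
            M = Matrix.hstack(*[Matrix(v) for v in vectors])
            _, pivots = M.rref()  # reduced row echelon form, pivot column indices
            return pivots

        vs = [[1, 2, 2, 4], [2, 3, 5, 1], [3, 4, 8, -2], [2, 5, 0, 3]]
        print(system_basis(vs))  # (0, 1, 3): the 1st, 2nd and 4th vectors form a basis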

    In geometry, a vector is understood as a directed segment, and vectors obtained from one another by parallel translation are considered equal. All equal vectors are treated as the same vector. The origin of a vector can be placed anywhere in space or on a plane.

    If the coordinates of the endpoints of a vector are given in space, A(x 1 , y 1 , z 1) and B(x 2 , y 2 , z 2), then

    AB = (x 2 – x 1 , y 2 – y 1 , z 2 – z 1). (1)

    A similar formula holds on a plane. This means that a vector can be written as a coordinate row. Operations on vectors (addition and multiplication by a number) are performed on rows componentwise. This makes it possible to broaden the concept of a vector, understanding any row of numbers as a vector. For example, the solution of a system of linear equations, as well as any set of values of the variables of the system, can be viewed as a vector.

    On strings of the same length, the addition operation is performed according to the rule

    (a 1, a 2, ..., a n) + (b 1, b 2, ..., b n) = (a 1 + b 1, a 2 + b 2, ..., a n + b n). (2)

    Multiplication of a string by a number is performed according to the rule

    λ (a 1, a 2, ..., a n) = (λ a 1, λ a 2, ..., λ a n). (3)

    The set of row vectors of a given length n, with the indicated operations of vector addition and multiplication by a number, forms an algebraic structure called an n-dimensional linear space.

    A linear combination of vectors a 1, ..., a m is a vector of the form λ 1 a 1 + ... + λ m a m, where λ 1, ..., λ m are arbitrary coefficients.

    A system of vectors is called linearly dependent if there is a linear combination of it equal to the zero vector θ in which at least one coefficient is nonzero.

    A system of vectors is called linearly independent if in any linear combination of it equal to θ all coefficients are zero.

    Thus, the question of the linear dependence of a system of vectors reduces to solving the equation

    x 1 a 1 + x 2 a 2 + … + x m a m = θ. (4)

    If this equation has nonzero solutions, then the system of vectors is linearly dependent. If the zero solution is unique, then the system of vectors is linearly independent.

    To solve system (4), for clarity, the vectors can be written not in the form of rows but in the form of columns.

    Then, performing transformations on the left-hand side, we arrive at a system of linear equations equivalent to equation (4). The main matrix of this system is formed by the coordinates of the original vectors arranged in columns. A column of free terms is not needed here, since the system is homogeneous.

    The basis of a system of vectors (finite or infinite, in particular, of the entire linear space) is a non-empty linearly independent subsystem of it through which any vector of the system can be expressed.

    Example 1.5.2. Find the basis of the system of vectors a1 = (1, 2, 2, 4), a2 = (2, 3, 5, 1), a3 = (3, 4, 8, –2), a4 = (2, 5, 0, 3) and express the other vectors through the basis.

    Solution. We build a matrix in which the coordinates of these vectors are arranged in columns. This is the matrix of the system x1 a1 + x2 a2 + x3 a3 + x4 a4 = θ. We bring the matrix to stepped form:

    | 1 2  3 2 |     | 1  2   3  2 |     | 1  2  3  2 |
    | 2 3  4 5 |  ~  | 0 -1  -2  1 |  ~  | 0 -1 -2  1 |
    | 2 5  8 0 |     | 0  1   2 -4 |     | 0  0  0 -3 |
    | 4 1 -2 3 |     | 0 -7 -14 -5 |     | 0  0  0  0 |

    The basis of this system of vectors is formed by the vectors a1, a2, a4, which correspond to the leading elements of the rows (columns 1, 2 and 4). To express the vector a3, we solve the equation x1 a1 + x2 a2 + x4 a4 = a3. It reduces to a system of linear equations whose matrix is obtained from the original one by moving the column corresponding to a3 to the place of the column of free terms; therefore, when reducing to stepped form, the same transformations as above will be made over the matrix. This means that we can use the matrix already obtained in stepped form, making the necessary permutations of its columns: we place the columns corresponding to a1, a2, a4 to the left of the bar, and the column corresponding to a3 to the right of the bar:

    | 1  2  2 |  3 |
    | 0 -1  1 | -2 |
    | 0  0 -3 |  0 |

    We successively find:

    x4 = 0;
    x2 = 2;
    x1 + 4 = 3, x1 = –1;

    hence a3 = –a1 + 2 a2.

    Comment. If several vectors need to be expressed through the basis, then for each of them a corresponding system of linear equations is constructed. These systems differ only in the column of free terms. Moreover, each system is solved independently of the others.
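
    For Example 1.5.2 this comment translates into one linear solve per non-basis vector; a minimal sketch with SymPy:

        from sympy import Matrix, linsolve, symbols

        a1, a2 = Matrix([1, 2, 2, 4]), Matrix([2, 3, 5, 1])
        a3, a4 = Matrix([3, 4, 8, -2]), Matrix([2, 5, 0, 3])

        x1, x2, x4 = symbols('x1 x2 x4')
        # Solve x1*a1 + x2*a2 + x4*a4 = a3 for the coefficients of the basis vectors.
        print(linsolve((Matrix.hstack(a1, a2, a4), a3), [x1, x2, x4]))
        # {(-1, 2, 0)}: a3 = -a1 + 2*a2, matching the computation above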

    Exercise 1.4. Find the basis of the system of vectors and express the remaining vectors through the basis:

    a) a1 = (1, 3, 2, 0), a2 = (3, 4, 2, 1), a3 = (1, –2, –2, 1), a4 = (3, 5, 1, 2);

    b) a1 = (2, 1, 2, 3), a2 = (1, 2, 2, 3), a3 = (3, –1, 2, 2), a4 = (4, –2, 2, 2);

    c) a1 = (1, 2, 3), a2 = (2, 4, 3), a3 = (3, 6, 6), a4 = (4, –2, 1), a5 = (2, –6, –2).

    In a given system of vectors a basis can usually be chosen in different ways, but all bases have the same number of vectors. The number of vectors in the basis of a linear space is called the dimension of the space. For the n-dimensional linear space, n is the dimension, since this space has the standard basis e1 = (1, 0, …, 0), e2 = (0, 1, …, 0), …, en = (0, 0, …, 1). Through this basis any vector a = (a1, a2, …, an) is expressed as follows:

    a = (a1, 0, …, 0) + (0, a2, …, 0) + … + (0, 0, …, an) = a1 (1, 0, …, 0) + a2 (0, 1, …, 0) + … + an (0, 0, …, 1) = a1 e1 + a2 e2 + … + an en.

    Thus, the components in the vector row a = (a1, a2, …, an) are its coefficients in the expansion in terms of the standard basis.

    Straight lines on a plane

    The task of analytic geometry is the application of the coordinate method to geometric problems: a problem is translated into algebraic form and solved by means of algebra.

    Definition of a basis. A system of vectors forms a basis if:

    1) it is linearly independent,

    2) any vector of space through it is linearly expressed.

    Example 1. A basis of the space: .

    2. In the system of vectors, the vectors form a basis: the remaining vector is linearly expressed in terms of the basis vectors.

    Comment. To find the basis of a given system of vectors, you need to:

    1) write the coordinates of the vectors into a matrix,

    2) bring the matrix to triangular form using elementary transformations,

    3) the nonzero rows of the matrix will form the basis of the system,

    4) the number of vectors in the basis equals the rank of the matrix.

    Kronecker-Capelli theorem

    The Kronecker–Capelli theorem gives an exhaustive answer to the question of the consistency of an arbitrary system of linear equations with unknowns.

    The Kronecker–Capelli theorem. A system of linear algebraic equations is consistent if and only if the rank of the extended matrix of the system is equal to the rank of its main matrix, Rank(A̅) = Rank(A).

    The algorithm for finding all solutions of a consistent system of linear equations follows from the Kronecker–Capelli theorem and the following theorems.

    Theorem. If the rank of a consistent system is equal to the number of unknowns, then the system has a unique solution.

    Theorem. If the rank of a consistent system is less than the number of unknowns, then the system has an infinite number of solutions.

    Algorithm for solving an arbitrary system of linear equations:

    1. Find the ranks of the main and extended matrices of the system. If they are not equal (Rank(A) ≠ Rank(A̅)), then the system is inconsistent (has no solutions). If the ranks are equal (Rank(A) = Rank(A̅)), then the system is consistent.

    2. For a consistent system, find some minor whose order determines the rank of the matrix (such a minor is called basic). Compose a new system of equations in which the coefficients of the unknowns enter the basic minor (these unknowns are called the principal unknowns), discarding the rest of the equations. Leave the principal unknowns with their coefficients on the left, and move the remaining unknowns (called the free unknowns) to the right-hand side of the equations.

    3. Find expressions for the principal unknowns in terms of the free ones. This gives the general solution of the system.

    4. By assigning arbitrary values to the free unknowns, we obtain the corresponding values of the principal unknowns. In this way we find particular solutions of the original system of equations.
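
    Steps 1 and 2 of this algorithm mechanize well: it is enough to compare the ranks of the main and extended matrices. A sketch with SymPy; the system itself is hypothetical sample data:

        from sympy import Matrix

        A = Matrix([[1, 1, 1], [1, -1, 2]])  # hypothetical main matrix
        b = Matrix([6, 5])                   # hypothetical column of free terms
        Ab = A.row_join(b)                   # extended matrix

        rA, rAb, n = A.rank(), Ab.rank(), A.cols
        if rA != rAb:
            print("inconsistent: no solutions")
        elif rA == n:
            print("unique solution")
        else:
            print(f"consistent, {n - rA} free unknown(s): infinitely many solutions")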

    Linear programming. Basic concepts

    Linear programming is a branch of mathematical programming that studies methods for solving extremum problems characterized by a linear relationship between the variables and a linear criterion.

    A necessary condition for posing a linear programming problem is the presence of constraints on the availability of resources, the amount of demand, the production capacity of the enterprise and other production factors.

    The essence of linear programming is to find the points of the largest or smallest value of a certain function under a certain set of constraints imposed on the arguments and forming the system of constraints, which, as a rule, has an infinite number of solutions. Each set of values of the variables (arguments of the function F) that satisfies the system of constraints is called a feasible plan of the linear programming problem. The function F whose maximum or minimum is sought is called the objective function of the problem. A feasible plan on which the maximum or minimum of the function F is attained is called an optimal plan of the problem.

    The system of constraints defining the set of plans is dictated by the conditions of production. The linear programming problem (ZLP) consists in choosing the most profitable (optimal) plan from the set of feasible plans.

    In general, a linear programming problem looks like this: there are variables x = (x1, x2, …, xn) and a function of these variables f(x) = f(x1, x2, …, xn), called the objective function. The task is posed: to find the extremum (maximum or minimum) of the objective function f(x) provided that the variables x belong to some region G.

    Depending on the type of the function f(x) and of the region G, different branches of mathematical programming are distinguished: quadratic programming, convex programming, integer programming, and so on. Linear programming is characterized by the fact that
    a) the function f(x) is a linear function of the variables x1, x2, …, xn;
    b) the region G is determined by a system of linear equalities or inequalities.
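
    As a minimal illustration of conditions a) and b), here is a sketch of a small linear programming problem solved with SciPy's linprog; the objective and constraints are hypothetical sample data (linprog minimizes, so a maximization objective is passed with its sign reversed):

        from scipy.optimize import linprog

        # maximize F = 3 x1 + 2 x2  subject to  x1 + x2 <= 4,  x1 + 3 x2 <= 6,  x1, x2 >= 0
        res = linprog(c=[-3, -2],
                      A_ub=[[1, 1], [1, 3]],
                      b_ub=[4, 6],
                      bounds=[(0, None), (0, None)])
        print(res.x, -res.fun)  # optimal plan [4. 0.] and the maximum F = 12.0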