    Linear dependence and independence of vectors of a linear space

    Let the functions y1(x), y2(x), ..., yn(x) have derivatives up to order (n - 1) on the interval (a, b).

    Consider the determinant

        W(x) = | y1(x)         y2(x)         ...  yn(x)         |
               | y1'(x)        y2'(x)        ...  yn'(x)        |
               | ...           ...           ...  ...           |
               | y1^(n-1)(x)   y2^(n-1)(x)   ...  yn^(n-1)(x)   |     (1)

    W(x) is called the Wronski determinant (Wronskian) of the functions y1, ..., yn.

    Theorem 1. If the functions y1, ..., yn are linearly dependent on the interval (a, b), then their Wronskian W(x) is identically equal to zero on this interval.

    Proof. By the hypothesis of the theorem, the relation

        α1 y1(x) + α2 y2(x) + ... + αn yn(x) = 0     (2)

    holds on (a, b), where not all αi are equal to zero. Suppose, for definiteness, that αn ≠ 0. Then

        yn(x) = -(α1/αn) y1(x) - ... - (α(n-1)/αn) y(n-1)(x).     (3)

    We differentiate this identity n - 1 times and, substituting the obtained values of yn, yn', ..., yn^(n-1) into the Wronskian, we get:

        W(x) = 0.     (4)

    Indeed, in the Wronskian the last column is then a linear combination of the previous n - 1 columns, and therefore the determinant is equal to zero at all points of the interval (a, b).

    Theorem 2. If the functions y1, ..., yn are linearly independent solutions of the equation L[y] = 0, all coefficients of which are continuous on the interval (a, b), then the Wronskian of these solutions is nonzero at every point of the interval (a, b).

    Proof. Suppose the contrary: there is a point x0 in (a, b) where W(x0) = 0. Consider the following system of n equations in the unknowns C1, ..., Cn:

        C1 y1(x0)       + ... + Cn yn(x0)       = 0
        C1 y1'(x0)      + ... + Cn yn'(x0)      = 0
        ...
        C1 y1^(n-1)(x0) + ... + Cn yn^(n-1)(x0) = 0     (5)

    The determinant of system (5) is W(x0) = 0, so the system has a nonzero solution C1, ..., Cn, not all Ci equal to zero.     (6)

    Compose the linear combination of the solutions y1, ..., yn:

        Y(x) = C1 y1(x) + ... + Cn yn(x).

    Y(x) is a solution of the equation L[y] = 0. In addition, by (5), it satisfies zero initial conditions at x0: Y(x0) = Y'(x0) = ... = Y^(n-1)(x0) = 0. By the uniqueness theorem, the solution of L[y] = 0 with zero initial conditions can only be the zero solution, that is, Y(x) ≡ 0 on (a, b).

    We obtain an identity C1 y1(x) + ... + Cn yn(x) ≡ 0 in which not all Ci are zero, which means that y1, ..., yn are linearly dependent, contradicting the hypothesis of the theorem. Therefore, there is no point where W(x0) = 0.

    Based on Theorems 1 and 2, we can formulate the following statement: for n solutions of the equation L[y] = 0 to be linearly independent on the interval (a, b), it is necessary and sufficient that their Wronskian does not vanish at any point of this interval.

    The theorems proved also imply the following obvious properties of the Wronskian.

    1. If the Wronskian of n solutions of the equation L[y] = 0 equals zero at one point x = x0 of the interval (a, b), on which all the coefficients pi(x) are continuous, then it equals zero at all points of this interval.
    2. If the Wronskian of n solutions of the equation L[y] = 0 is nonzero at one point x = x0 of the interval (a, b), then it is nonzero at all points of this interval.

    Thus, for n solutions of the equation L[y] = 0 to be linearly independent on the interval (a, b), on which the coefficients pi(x) are continuous, it is necessary and sufficient that their Wronskian be nonzero at least at one point of this interval.
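    As a quick numerical check (a Python sketch, not part of the original notes): y1 = cos x and y2 = sin x are linearly independent solutions of the illustrative equation y'' + y = 0, and their Wronskian equals cos²x + sin²x = 1 at every point, in agreement with Theorem 2.

```python
import math

def wronskian(x: float) -> float:
    """W(x) = y1*y2' - y1'*y2 for y1 = cos, y2 = sin (solutions of y'' + y = 0)."""
    y1, dy1 = math.cos(x), -math.sin(x)
    y2, dy2 = math.sin(x), math.cos(x)
    return y1 * dy2 - dy1 * y2

# The Wronskian is identically 1, hence nonzero at every point,
# so cos x and sin x are linearly independent on any interval.
for x in (0.0, 1.0, 2.5):
    print(wronskian(x))   # 1.0 up to rounding
```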

    The following theorems give several criteria for linear dependence and, accordingly, linear independence of systems of vectors.

    Theorem. (A necessary and sufficient condition for the linear dependence of vectors.)

    A system of vectors is linearly dependent if and only if one of the vectors of the system is linearly expressed through the other vectors of this system.

    Proof. Necessity. Let the system a1, ..., an be linearly dependent. Then, by definition, it represents the zero vector nontrivially, i.e. there is a nontrivial linear combination of this system of vectors equal to the zero vector:

        α1 a1 + α2 a2 + ... + αn an = 0,

    where at least one of the coefficients of this linear combination is not zero. Suppose, for definiteness, that αk ≠ 0.

    We divide both sides of the previous equality by this nonzero coefficient (i.e. multiply by 1/αk):

        ak = -(α1/αk) a1 - ... - (α(k-1)/αk) a(k-1) - (α(k+1)/αk) a(k+1) - ... - (αn/αk) an.

    Denoting βi = -αi/αk, we get ak = β1 a1 + ... + β(k-1) a(k-1) + β(k+1) a(k+1) + ... + βn an,

    i.e. one of the vectors of the system is linearly expressed through the others of this system, Q.E.D.

    Sufficiency. Let one of the vectors of the system be linearly expressed in terms of the other vectors of this system:

        ak = β1 a1 + ... + β(k-1) a(k-1) + β(k+1) a(k+1) + ... + βn an.

    We move the vector ak to the right-hand side of this equality:

        β1 a1 + ... + β(k-1) a(k-1) - ak + β(k+1) a(k+1) + ... + βn an = 0.

    Since the coefficient at the vector ak equals -1 ≠ 0, we have a nontrivial representation of zero by the system of vectors, which means that this system of vectors is linearly dependent, Q.E.D.

    The theorem is proved.

    Corollary.

    1. A system of vectors of a vector space is linearly independent if and only if none of the vectors of the system is linearly expressed in terms of other vectors of this system.

    2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.

    Proof.

    1) Necessity. Let the system be linearly independent. Suppose the contrary: there is a vector of the system that is linearly expressed in terms of the other vectors of this system. Then, by the theorem, the system is linearly dependent, and we arrive at a contradiction.

    Sufficiency. Let none of the vectors of the system be expressed in terms of the others. Suppose the contrary: let the system be linearly dependent. Then it follows from the theorem that there is a vector of the system that is linearly expressed in terms of the other vectors of this system, and we again arrive at a contradiction.

    2a) Let the system contain the zero vector. Assume for definiteness that a1 = 0. Then the equality

        a1 = 0 · a2 + ... + 0 · an

    holds, i.e. one of the vectors of the system is linearly expressed in terms of the other vectors of this system. It follows from the theorem that such a system of vectors is linearly dependent, Q.E.D.

    Note that this fact can also be proved directly from the definition of a linearly dependent system of vectors.

    Since 1 ≠ 0, the following equality is obvious:

        1 · a1 + 0 · a2 + ... + 0 · an = 0.

    This is a nontrivial representation of the zero vector, which means the system is linearly dependent.

    2b) Let the system contain two equal vectors. Assume for definiteness that a1 = a2. Then the equality

        a1 = 1 · a2 + 0 · a3 + ... + 0 · an

    holds, i.e. the first vector is linearly expressed in terms of the remaining vectors of the same system. It follows from the theorem that this system is linearly dependent, Q.E.D.

    Similarly, this statement can also be proved directly from the definition of a linearly dependent system: the system represents the zero vector nontrivially,

        1 · a1 + (-1) · a2 + 0 · a3 + ... + 0 · an = 0,

    whence the linear dependence of the system follows.

    The corollary is proved.

    Corollary. A system consisting of one vector is linearly independent if and only if this vector is nonzero.
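    The corollary on zero and repeated vectors can be checked computationally. The sketch below (plain Python; the function names are my own, not from the text) uses the standard fact that a system of vectors in ℝ^n is linearly dependent if and only if the matrix formed from them has rank smaller than the number of vectors.

```python
def rank(rows):
    """Rank of a matrix (list of rows) by Gaussian elimination with pivoting."""
    m = [list(map(float, r)) for r in rows]
    nrows = len(m)
    ncols = len(m[0]) if m else 0
    r = 0
    for c in range(ncols):
        pivot = next((i for i in range(r, nrows) if abs(m[i][c]) > 1e-12), None)
        if pivot is None:
            continue                      # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]   # swap pivot row up
        for i in range(nrows):
            if i != r and abs(m[i][c]) > 1e-12:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def linearly_dependent(vectors):
    """A system is dependent iff its rank is below the number of vectors."""
    return rank(vectors) < len(vectors)

print(linearly_dependent([[1, 2], [0, 0]]))                   # contains zero vector -> True
print(linearly_dependent([[1, 2, 3], [1, 2, 3], [0, 1, 0]]))  # two equal vectors   -> True
print(linearly_dependent([[1, 0], [0, 1]]))                   # standard basis      -> False
```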

    Linear (vector) spaces.

    Definition: A set L is called a linear (vector) space if two operations are defined on it:

    1) addition: for any x, y ∈ L the sum (x + y) ∈ L;

    2) multiplication by a number: for any x ∈ L and any number λ the product λx ∈ L,

    which satisfy 8 axioms:

    1) x + y = y + x, where x, y ∈ L;

    2) (x + y) + z = x + (y + z), where x, y, z ∈ L;

    3) there exists a zero element θ such that θ + x = x, where x ∈ L;

    4) for any x ∈ L there is exactly one opposite element (-x) such that x + (-x) = θ;

    5) 1 · x = x, where x ∈ L;

    6) α(βx) = (αβ)x, where x ∈ L, α and β are numbers;

    7) α(x + y) = αx + αy, where x, y ∈ L, α is a number;

    8) (α + β)x = αx + βx, where x ∈ L, α and β are numbers.

    Comment: Elements of a linear (vector) space are called vectors.

    Examples:

    The set of real numbers is a linear space.

    The sets of all vectors in the plane and in space are linear spaces.

    The set of all matrices of the same size is a linear space.

    Let a system of vectors a1, a2, a3, ..., an ∈ L be given in a linear space L.

    Definition: A vector α1 a1 + α2 a2 + ... + αn an ∈ L, where αi (i = 1, ..., n) are numbers, is called a linear combination (LC) of the vectors a1, a2, a3, ..., an.

    Definition: A system of vectors a1, a2, a3, ..., an ∈ L of a linear space is called linearly independent (LI) if the linear combination

        α1 a1 + α2 a2 + α3 a3 + ... + αn an = 0

    holds if and only if the coefficients

        α1 = α2 = α3 = ... = αn = 0.

    Definition: A system of vectors a1, a2, a3, ..., an ∈ L is called linearly dependent (LD) if there is a set of numbers α1, α2, α3, ..., αn, not all of which are equal to 0, such that the linear combination α1 a1 + α2 a2 + ... + αn an = 0.
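    A small numeric illustration of the definition (the vectors a1, a2, a3 below are invented for this example, not taken from the text): if a3 = 2 a1 + 3 a2, then 2 a1 + 3 a2 + (-1) a3 is a nontrivial representation of the zero vector, so the system is linearly dependent.

```python
def lin_comb(coeffs, vectors):
    """Componentwise linear combination sum(c_i * a_i) of tuples."""
    dim = len(vectors[0])
    return tuple(sum(c * v[j] for c, v in zip(coeffs, vectors)) for j in range(dim))

a1, a2 = (1, 0), (0, 1)
a3 = lin_comb((2, 3), (a1, a2))           # a3 = 2*a1 + 3*a2 = (2, 3)

# Moving a3 to the other side gives a nontrivial combination equal to zero:
zero = lin_comb((2, 3, -1), (a1, a2, a3))
print(zero)   # (0, 0): the system {a1, a2, a3} is linearly dependent
```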

    Examples:

    Two vectors are called collinear if they are parallel to one straight line or lie on one straight line.

    1) Consider two nonzero collinear vectors a1 and a2 in the plane. Then a2 = λ a1 for some number λ, so λ a1 - a2 = 0. This linear combination equals zero while not all of its coefficients are zero; therefore, two collinear vectors in the plane are linearly dependent.

    Theorem 1. Necessary and sufficient condition for linear dependence.

    In order for a system of vectors of a linear space to be linearly dependent, it is necessary and sufficient that at least one vector of this system be a linear combination of the others.



    Proof: Necessity (⟹).

    An LD system is given; it is necessary to prove that one of its vectors is an LC of all the others.

    Let a1, a2, a3, ..., an be an LD system of vectors, i.e. among α1, α2, α3, ..., αn there is a nonzero number such that the LC α1 a1 + α2 a2 + α3 a3 + ... + αn an = 0.

    For definiteness, assume that the coefficient α1 ≠ 0. We divide both sides of the last equality by α1 ≠ 0:

        a1 = -(α2/α1) a2 - (α3/α1) a3 - ... - (αn/α1) an.

    Hence it follows that a1 is an LC of the other vectors.

    The necessity is proved.

    Sufficiency (⟸).

    Let one vector be a linear combination of the others; it is necessary to prove that the system of vectors is LD.

    Let an = α1 a1 + α2 a2 + α3 a3 + ... + α(n-1) a(n-1). Then

        α1 a1 + α2 a2 + α3 a3 + ... + α(n-1) a(n-1) - 1 · an = 0.

    Since there is a nonzero coefficient (-1 at an), the system of vectors a1, a2, a3, ..., an is linearly dependent.

    Theorem 2. A system containing the zero vector is linearly dependent.

    Proof: Consider a system of vectors containing the zero vector: a1, a2, a3, ..., an, θ, where θ is the zero vector. Obviously, the following equality holds: 0 · a1 + 0 · a2 + 0 · a3 + ... + 5 · θ = 0.

    There is a nonzero coefficient, equal to 5, and the linear combination equals 0; hence it follows that the system of vectors is LD.

    Theorem 3. A system containing a linearly dependent subsystem is itself linearly dependent.

    Proof: Consider a system of vectors a1, a2, ..., ak, a(k+1), ..., an, where a1, a2, ..., ak is a linearly dependent subsystem: α1 a1 + α2 a2 + ... + αk ak = 0 with some nonzero coefficient.

    Obviously, with the same coefficients, the equality

        α1 a1 + α2 a2 + ... + αk ak + 0 · a(k+1) + ... + 0 · an = 0

    also holds. Hence it follows that the system of vectors is LD.

    Def. A system of elements x1, ..., xm of a linear space V is called linearly dependent if ∃ λ1, ..., λm ∈ ℝ (|λ1| + ... + |λm| ≠ 0) such that λ1 x1 + ... + λm xm = θ.

    Def. A system of elements x1, ..., xm ∈ V is called linearly independent if the equality λ1 x1 + ... + λm xm = θ implies λ1 = ... = λm = 0.

    Def. An element x ∈ V is called a linear combination of elements x1, ..., xm ∈ V if ∃ λ1, ..., λm ∈ ℝ such that x = λ1 x1 + ... + λm xm.

    Theorem (criterion of linear dependence): A system of vectors x1, ..., xm ∈ V is linearly dependent if and only if at least one vector of the system is linearly expressed in terms of the others.

    Proof. Necessity: Let x1, ..., xm be linearly dependent ⟹ ∃ λ1, ..., λm ∈ ℝ (|λ1| + ... + |λm| ≠ 0) such that λ1 x1 + ... + λ(m-1) x(m-1) + λm xm = θ. Suppose λm ≠ 0; then

        xm = (-λ1/λm) x1 + ... + (-λ(m-1)/λm) x(m-1).

    Sufficiency: Let at least one of the vectors be linearly expressed in terms of the rest: xm = λ1 x1 + ... + λ(m-1) x(m-1) (λ1, ..., λ(m-1) ∈ ℝ). Then λ1 x1 + ... + λ(m-1) x(m-1) + (-1) xm = θ, and the coefficient at xm is -1 ≠ 0 ⟹ x1, ..., xm are linearly dependent.

    Sufficient conditions for linear dependence:

    If a system contains the zero element or a linearly dependent subsystem, then it is linearly dependent.

    Proof. In each case we exhibit a nontrivial combination λ1 x1 + ... + λm xm = θ.

    1) Let x1 = θ. Then the equality holds for λ1 = 1 and λ2 = ... = λm = 0.

    2) Let x1, ..., xk (k < m) be a linearly dependent subsystem: λ1 x1 + ... + λk xk = θ with |λ1| + ... + |λk| ≠ 0. Then, taking λ(k+1) = ... = λm = 0, we still have |λ1| + ... + |λm| ≠ 0 and λ1 x1 + ... + λm xm = θ ⟹ the system is linearly dependent.

    Linear space basis. The coordinates of the vector in the given basis. The coordinates of the sums of vectors and the product of a vector by a number. A necessary and sufficient condition for the linear dependence of a system of vectors.

    Definition: An ordered system of elements e1, ..., en of a linear space V is called a basis of this space if:

    a) e1, ..., en are linearly independent;

    b) ∀ x ∈ V ∃ α1, ..., αn such that x = α1 e1 + ... + αn en.

    x = α1 e1 + ... + αn en is the expansion of the element x in the basis e1, ..., en;

    α1, ..., αn ∈ ℝ are the coordinates of the element x in the basis e1, ..., en.
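    A sketch of computing coordinates in practice (the basis and vector below are illustrative, not from the text): in ℝ² the coordinates α1, α2 of x in a basis e1, e2 solve the system α1 e1 + α2 e2 = x, which a 2×2 Cramer's rule handles directly.

```python
def coordinates(e1, e2, x):
    """Return (alpha1, alpha2) with alpha1*e1 + alpha2*e2 == x, via Cramer's rule."""
    det = e1[0] * e2[1] - e2[0] * e1[1]
    if det == 0:
        raise ValueError("e1, e2 are linearly dependent: not a basis")
    alpha1 = (x[0] * e2[1] - e2[0] * x[1]) / det
    alpha2 = (e1[0] * x[1] - x[0] * e1[1]) / det
    return alpha1, alpha2

e1, e2 = (1, 1), (1, -1)       # a basis of R^2 (nonzero determinant)
x = (3, 1)
a1, a2 = coordinates(e1, e2, x)
print(a1, a2)   # 2.0 1.0, since 2*(1,1) + 1*(1,-1) = (3,1)
```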

    Theorem: If a basis e1, ..., en is given in a linear space V, then ∀ x ∈ V the column of coordinates of x in the basis e1, ..., en is uniquely determined (the coordinates are determined uniquely).

    Proof: Let x = α1 e1 + ... + αn en and x = β1 e1 + ... + βn en. Subtracting one expansion from the other, we get

        (α1 - β1) e1 + ... + (αn - βn) en = θ.

    Since e1, ..., en are linearly independent, αi - βi = 0 ∀ i = 1, ..., n ⇔ αi = βi ∀ i = 1, ..., n, Q.E.D.

    Theorem: Let e1, ..., en be a basis of the linear space V; let x, y be arbitrary elements of V and λ ∈ ℝ an arbitrary number. When x and y are added, their coordinates are added; when x is multiplied by λ, the coordinates of x are also multiplied by λ.

    Proof: Let x = ξ1 e1 + ... + ξn en and y = η1 e1 + ... + ηn en. Then

        x + y = (ξ1 + η1) e1 + ... + (ξn + ηn) en,

        λx = (λξ1) e1 + ... + (λξn) en.

    By the uniqueness of the expansion, these are exactly the coordinates of x + y and λx.

    Lemma 1: (a necessary and sufficient condition for the linear dependence of a system of vectors)

    Let e1, ..., en be a basis of the space V. A system of elements f1, ..., fk ∈ V is linearly dependent if and only if the columns of coordinates of these elements in the basis e1, ..., en are linearly dependent.

    Proof: Expand f1, ..., fk in the basis e1, ..., en:

        fm = c1m e1 + ... + cnm en,   m = 1, ..., k.

    Then λ1 f1 + ... + λk fk = (λ1 c11 + ... + λk c1k) e1 + ... + (λ1 cn1 + ... + λk cnk) en, so, by the uniqueness of the expansion of θ,

        λ1 f1 + ... + λk fk = θ ⇔ λ1 C1 + ... + λk Ck = 0,

    where Cm denotes the column of coordinates of fm, as required.

    13. Dimension of linear space. A theorem on the connection between dimension and basis.
    Definition: A linear space V is called an n-dimensional space if there are n linearly independent elements in V, and a system of any n + 1 elements of the space V is linearly dependent. In this case, n is called the dimension of the linear space V and is denoted by dimV \u003d n.

    A linear space is called infinite-dimensional if ∀N ∈ ℕ in the space V there is a linearly independent system containing N elements.

    Theorem: 1) If V is an n-dimensional linear space, then any ordered system of n linearly independent elements of this space forms a basis. 2) If in the linear space V there is a basis consisting of n elements, then the dimension of V is equal to n (dimV \u003d n).

    Proof: 1) Let dimV = n ⇒ in V there exist n linearly independent elements e1, ..., en. Let us prove that these elements form a basis, that is, that every x ∈ V can be expanded in terms of e1, ..., en. Add x to them: the system e1, ..., en, x contains n + 1 vectors, which means it is linearly dependent. Since e1, ..., en is linearly independent, by Theorem 2 x is linearly expressed through e1, ..., en, i.e. ∃ α1, ..., αn such that x = α1 e1 + ... + αn en. So e1, ..., en is a basis of the space V.

    2) Let e1, ..., en be a basis of V, so there are n linearly independent elements in V. Take arbitrary f1, ..., fn, f(n+1) ∈ V, i.e. n + 1 elements, and show their linear dependence. Expand them in the basis:

        fm = c1m e1 + ... + cnm en,   m = 1, ..., n + 1,

    and compose the matrix A from the columns of coordinates. The matrix A contains n rows ⇒ Rg A ≤ n. The number of columns is n + 1 > n ≥ Rg A ⇒ the columns of the matrix A (that is, the columns of coordinates of f1, ..., fn, f(n+1)) are linearly dependent. By Lemma 1, f1, ..., fn, f(n+1) are linearly dependent ⇒ dimV = n.

    Corollary: If some basis contains n elements, then any other basis of this space also contains n elements.
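    A concrete instance of part 2 of the theorem (the vectors are invented for the illustration): in the 2-dimensional space ℝ² any three vectors are linearly dependent. For f1, f2 independent, we can solve f3 = c1 f1 + c2 f2 by Cramer's rule and exhibit a nontrivial combination equal to zero.

```python
f1, f2, f3 = (1, 0), (1, 1), (4, 3)

det = f1[0] * f2[1] - f2[0] * f1[1]           # = 1, nonzero, so f1, f2 are independent
c1 = (f3[0] * f2[1] - f2[0] * f3[1]) / det    # Cramer's rule for f3 = c1*f1 + c2*f2
c2 = (f1[0] * f3[1] - f3[0] * f1[1]) / det
print(c1, c2)   # 1.0 3.0, i.e. f3 = 1*f1 + 3*f2

# c1*f1 + c2*f2 + (-1)*f3 is then a nontrivial representation of zero:
zero = tuple(c1 * a + c2 * b - c for a, b, c in zip(f1, f2, f3))
print(zero)   # (0.0, 0.0): the three vectors are linearly dependent
```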

    Theorem 2: If a system of vectors x1, ..., x(m-1), xm is linearly dependent, and its subsystem x1, ..., x(m-1) is linearly independent, then xm is linearly expressed through x1, ..., x(m-1).

    Proof: Since x1, ..., x(m-1), xm is linearly dependent, ∃ λ1, ..., λ(m-1), λm with |λ1| + ... + |λm| ≠ 0 such that λ1 x1 + ... + λ(m-1) x(m-1) + λm xm = θ. If λm = 0, then λ1 x1 + ... + λ(m-1) x(m-1) = θ with not all λi equal to zero ⟹ x1, ..., x(m-1) are linearly dependent, which cannot be. So λm ≠ 0 and

        xm = (-λ1/λm) x1 + ... + (-λ(m-1)/λm) x(m-1).