
    Operations on matrices and their properties, in brief. Math for Dummies

    Lecture 1. Matrices and basic operations on them. Determinants.

    Definition. A matrix of size m×n, where m is the number of rows and n is the number of columns, is a table of numbers arranged in a certain order. These numbers are called the elements of the matrix. The location of each element is uniquely determined by the number of the row and the column at whose intersection it is located. The elements of a matrix are denoted a_ij, where i is the row number and j is the column number.

    A = (a_ij), i = 1, 2, ..., m; j = 1, 2, ..., n.

    Basic operations on matrices.

    The matrix can consist of one row or one column. Generally speaking, a matrix can even consist of one element.

    Definition. If the number of columns of the matrix is equal to the number of rows (m = n), then the matrix is called square.

    Definition. A square matrix with ones on the main diagonal and zeros everywhere else,

    E = diag(1, 1, ..., 1),

    is called the identity (unit) matrix.

    Definition. If a_ij = a_ji, then the matrix is called symmetric.

    Example: any matrix that coincides with its transpose is a symmetric matrix.

    Definition. A square matrix whose elements outside the main diagonal are all zero is called a diagonal matrix.

    Addition and subtraction of matrices reduce to the corresponding operations on their elements. The most important property of these operations is that they are defined only for matrices of the same size. Thus, the operations of addition and subtraction of matrices can be defined as follows:

    Definition. The sum (difference) of matrices is the matrix whose elements are, respectively, the sums (differences) of the elements of the original matrices:


    c_ij = a_ij ± b_ij,

    C = A + B = B + A.

    The operation of multiplying (dividing) a matrix of any size by an arbitrary number reduces to multiplying (dividing) each element of the matrix by that number:

    α(A ± B) = αA ± αB,   (α ± β)A = αA ± βA.
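    As an illustration of these rules, here is a minimal NumPy sketch; the matrices and the number alpha are arbitrary test data, not the ones from the lecture:

        import numpy as np

        # two matrices of the same size -- a requirement for addition and subtraction
        A = np.array([[1, 2],
                      [3, 4]])
        B = np.array([[5, 0],
                      [2, 1]])
        alpha = 3

        print(A + B)      # c_ij = a_ij + b_ij
        print(A - B)      # c_ij = a_ij - b_ij
        print(alpha * A)  # every element is multiplied by alpha
        # distributivity: alpha*(A + B) == alpha*A + alpha*B
        print(np.array_equal(alpha * (A + B), alpha * A + alpha * B))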

    Example. Given matrices A and B of the same size, find 2A + B: first form 2A by doubling every element of A, then add the corresponding elements of B.

    Matrix multiplication operation.

    Definition. The product of matrices A and B is the matrix C = AB whose elements are calculated by the formula:

    c_ij = Σ (k = 1 to n) a_ik · b_kj.

    It can be seen from the above definition that the operation of matrix multiplication is defined only for matrices in which the number of columns of the first equals the number of rows of the second.
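    The formula for c_ij can be coded directly with three nested loops; the sketch below is illustrative only and compares the result with NumPy's built-in matrix product (the test matrices are arbitrary):

        import numpy as np

        def matmul(A, B):
            """Product C = AB with c_ij = sum over k of a_ik * b_kj."""
            m, n = A.shape
            n2, p = B.shape
            assert n == n2, "columns of A must equal rows of B"
            C = np.zeros((m, p))
            for i in range(m):
                for j in range(p):
                    for k in range(n):
                        C[i, j] += A[i, k] * B[k, j]
            return C

        A = np.array([[1.0, 2.0, 3.0],
                      [4.0, 5.0, 6.0]])    # 2 x 3
        B = np.array([[7.0, 8.0],
                      [9.0, 1.0],
                      [2.0, 3.0]])         # 3 x 2
        print(matmul(A, B))
        print(np.allclose(matmul(A, B), A @ B))   # True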

    Properties of the operation of matrix multiplication.

    1) Matrix multiplication is not commutative, i.e. in general AB ≠ BA even when both products are defined. However, if for particular matrices the relation AB = BA holds, such matrices are called permutable (commuting).

    The most typical example is the identity matrix, which commutes with any other matrix of the same size.

    Only square matrices of the same order can commute.

    А Е \u003d Е А \u003d А

    Obviously, the following property holds for any matrices:

    AO = O; OA = O,

    where O is the zero matrix.

    2) Matrix multiplication is associative, i.e. if the products AB and (AB)C are defined, then BC and A(BC) are also defined, and the following equality holds:

    (AB)C = A(BC).

    3) Matrix multiplication is distributive with respect to addition, i.e. if the expressions A(B + C) and (A + B)C make sense, then, respectively:

    A(B + C) = AB + AC,

    (A + B)C = AC + BC.

    4) If the product AB is defined, then for any number α the following relation holds:

    α(AB) = (αA)B = A(αB).

    5) If the product AB is defined, then the product B^T A^T is also defined and the following equality holds:

    (AB)^T = B^T A^T,

    where the superscript T denotes the transposed matrix.

    6) Note also that for any square matrices of the same order, det(AB) = det A · det B.

    What det (the determinant) is will be discussed below.

    Definition. Matrix B is called the transpose of matrix A, and the transition from A to B is called transposition, if the elements of each row of matrix A are written, in the same order, into the corresponding columns of matrix B.

    A = (a_ij);   B = A^T = (b_ij);   in other words, b_ji = a_ij.

    As a consequence of the previous property (5), we can write that:

    (ABC)^T = C^T B^T A^T,

    provided that the product of matrices ABC is defined.
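    These transposition rules are easy to verify numerically; a small NumPy sketch with arbitrary test matrices (not the ones from the lecture):

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.integers(-5, 5, (2, 3))
        B = rng.integers(-5, 5, (3, 4))
        C = rng.integers(-5, 5, (4, 2))

        print(np.array_equal((A @ B).T, B.T @ A.T))            # (AB)^T = B^T A^T
        print(np.array_equal((A @ B @ C).T, C.T @ B.T @ A.T))  # (ABC)^T = C^T B^T A^T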

    Example. Given matrices A, B, C and the number λ = 2, find A^T·B + λC.

    Solution outline: transpose A to obtain A^T, compute the product A^T·B by the multiplication rule above, multiply every element of C by λ = 2, and add the two resulting matrices element by element.

    Example. Find the products of matrices A and B, where A is a column of three elements and B a row of three elements.

    The product AB (a column times a row) is a 3×3 matrix, each of whose elements is the product of one element of the column and one element of the row, while the product BA (a row times a column) is a single number:

    BA = 2·1 + 4·4 + 1·3 = 2 + 16 + 3 = 21.

    Example. Find the product AB of two given matrices: each element of AB is obtained by multiplying a row of A by the corresponding column of B according to the rule above.

    Determinants.

    Definition. The determinant of a square matrix A = (a_ij) of order n is the number that can be calculated from the elements of the matrix by the formula:

    det A = Σ (k = 1 to n) (-1)^(k+1) · a_1k · M_1k,   (1)

    where M_1k is the determinant of the matrix obtained from the original one by deleting the first row and the k-th column. It should be noted that only square matrices have determinants, i.e. matrices whose number of rows equals the number of columns.

    Formula (1) calculates the determinant by expansion along the first row; the analogous formula for expansion along the first column is also valid:

    det A = Σ (k = 1 to n) (-1)^(k+1) · a_k1 · M_k1.   (2)

    Generally speaking, the determinant can be calculated along any row or column of the matrix, i.e. the following formula is valid:

    det A = Σ (k = 1 to n) (-1)^(i+k) · a_ik · M_ik,   i = 1, 2, ..., n.   (3)
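    Formula (1) translates directly into a recursive function; a sketch (NumPy is used here only for checking, and the test matrix is arbitrary):

        import numpy as np

        def det(M):
            """Determinant by expansion along the first row, formula (1)."""
            n = len(M)
            if n == 1:
                return M[0][0]
            total = 0
            for k in range(n):
                # minor M_1k: delete the first row and the k-th column
                minor = [row[:k] + row[k + 1:] for row in M[1:]]
                total += (-1) ** k * M[0][k] * det(minor)
            return total

        A = [[1, 2, 3],
             [4, 5, 6],
             [7, 8, 10]]
        print(det(A))                          # -3 (exact integer arithmetic)
        print(np.linalg.det(np.array(A)))      # approximately -3.0 (floating point)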

    Obviously, different matrices can have the same determinants.

    The determinant of the identity matrix is 1.

    For the matrix A above, the number M_1k is called the complementary (additional) minor of the element a_1k. Thus, every element of the matrix has its own complementary minor. Complementary minors exist only for square matrices.

    Definition. The complementary minor of an arbitrary element a_ij of a square matrix is the determinant of the matrix obtained from the original one by deleting the i-th row and the j-th column.

    Property 1. An important property of determinants is the relation:

    det A = det A^T;

    Property 2. det(A·B) = det A · det B.

    Property 3. det(AB) = det A · det B.

    Property 4. If any two rows (or columns) are swapped in a square matrix, the determinant of the matrix will change sign without changing its absolute value.

    Property 5. When a column (or row) of a matrix is multiplied by a number, its determinant is multiplied by that number.

    Property 6. If in matrix A the rows or columns are linearly dependent, then its determinant is zero.

    Definition. The columns (rows) of a matrix are called linearly dependent if some nontrivial (not all coefficients equal to zero) linear combination of them equals zero.

    Property 7. If the matrix contains a zero column or a zero row, then its determinant is zero. (This statement is obvious, since the determinant can be expanded along precisely that zero row or column.)

    Property 8. The determinant of a matrix does not change if the elements of another row (column), multiplied by some nonzero number, are added to (subtracted from) the elements of one of its rows (columns).

    Property 9. If the elements of some row (or column) of the matrix can be written as sums, for example d = d_1 + d_2, e = e_1 + e_2, f = f_1 + f_2, then its determinant equals the sum of the two determinants obtained by splitting that row (column) into its summands.

    Example. Find det(AB) for two given 2×2 matrices A and B.

    1st way: det A = 4 - 6 = -2; det B = 15 - 2 = 13; det(AB) = det A · det B = -26.

    2nd way: compute AB first; then det(AB) = 7·18 - 8·19 = 126 - 152 = -26.
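    The two ways can be reproduced numerically. Since the original matrices are not preserved in the text, the sketch below uses its own 2×2 matrices (chosen so that their determinants happen to be -2 and 13, as in the example):

        import numpy as np

        A = np.array([[1.0, 2.0],
                      [3.0, 4.0]])    # det A = 1*4 - 2*3 = -2
        B = np.array([[5.0, 1.0],
                      [2.0, 3.0]])    # det B = 5*3 - 1*2 = 13

        way1 = np.linalg.det(A) * np.linalg.det(B)   # det A * det B
        way2 = np.linalg.det(A @ B)                  # det(AB)
        print(round(way1), round(way2))              # -26 -26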

    Linear algebra problems. Matrix concept. Types of matrices. Operations with matrices. Solving matrix transformation problems.

    When solving various problems of mathematics, very often you have to deal with tables of numbers called matrices. Using matrices, it is convenient to solve systems of linear equations, perform many operations with vectors, solve various computer graphics problems and other engineering problems.

    A matrix is a rectangular table of numbers containing m rows and n columns. The numbers m and n are called the orders of the matrix. If m = n, the matrix is called square, and the number m = n is its order.

    In what follows, either double vertical bars or parentheses will be used to write matrices:

    || a_ij ||   or   ( a_ij )

    For a short designation of a matrix, either a single capital letter (for example, A) or the symbol || a_ij || is used, sometimes with a clarification: A = || a_ij || = (a_ij), where i = 1, 2, ..., m; j = 1, 2, ..., n.

    The numbers a_ij appearing in this matrix are called its elements. In the notation a_ij the first index i denotes the row number and the second index j the column number. In the case of a square matrix

    A = || a_ij || ,  i, j = 1, 2, ..., n,   (1.1)

    the concepts of the main and secondary diagonals are introduced. The main diagonal of matrix (1.1) is the diagonal a_11, a_22, ..., a_nn running from the upper left corner of the matrix to its lower right corner. The secondary (side) diagonal of the same matrix is the diagonal a_n1, a_(n-1)2, ..., a_1n running from the lower left corner to the upper right corner.

    Basic operations on matrices and their properties.

    Let's move on to defining the basic operations on matrices.

    Addition of matrices. The sum of two matrices A = || a_ij || and B = || b_ij || of the same orders m and n is the matrix C = || c_ij || (i = 1, 2, ..., m; j = 1, 2, ..., n) of the same orders m and n, whose elements c_ij are determined by the formula

    c_ij = a_ij + b_ij,  i = 1, 2, ..., m; j = 1, 2, ..., n.   (1.2)

    To denote the sum of two matrices, the notation C = A + B is used. The operation of forming the sum of matrices is called their addition; by definition, it is performed element by element.

    From the definition of the sum of matrices, or more precisely from formula (1.2), it immediately follows that the operation of matrix addition has the same properties as the addition of real numbers, namely:

    1) the commutativity property: A + B = B + A,

    2) the associativity property: (A + B) + C = A + (B + C).

    These properties make it possible not to worry about the order of the matrix terms when adding two or more matrices.

    Multiplying a matrix by a number. The product of the matrix A = || a_ij || (i = 1, 2, ..., m; j = 1, 2, ..., n) and a real number λ is the matrix C = || c_ij || (i = 1, 2, ..., m; j = 1, 2, ..., n) whose elements are determined by the formula

    c_ij = λ·a_ij,  i = 1, 2, ..., m; j = 1, 2, ..., n.   (1.3)

    To denote the product of a matrix and a number, the notation C = λA or C = Aλ is used. The operation of forming the product of a matrix and a number is called multiplying the matrix by that number.

    It is clear from formula (1.3) that multiplication of a matrix by a number has the following properties:

    1) the associativity property with respect to a numerical factor: (λμ)A = λ(μA);

    2) the distributivity property with respect to the sum of matrices: λ(A + B) = λA + λB;

    3) the distributivity property with respect to the sum of numbers: (λ + μ)A = λA + μA.

    Remark. The difference of two matrices A and B of the same orders m and n is naturally defined as the matrix C of the same orders m and n which, together with the matrix B, gives the matrix A. To denote the difference of two matrices, the natural notation C = A - B is used.

    It is very easy to verify that the difference C of two matrices A and B can be obtained by the rule C = A + (-1)·B.

    Product of matrices, or multiplication of matrices.

    The product of the matrix A = || a_ij || with orders respectively equal to m and n and the matrix B = || b_ij || with orders respectively equal to n and p is the matrix C = || c_ij || (i = 1, 2, ..., m; j = 1, 2, ..., p) with orders respectively equal to m and p, whose elements are determined by the formula

    c_ij = Σ (k = 1 to n) a_ik · b_kj,  i = 1, 2, ..., m; j = 1, 2, ..., p.   (1.4)

    To denote the product of matrix A and matrix B, the notation C = A×B is used. The operation of forming the product of matrix A and matrix B is called the multiplication of these matrices.

    The definition formulated above implies that matrix A cannot be multiplied by every matrix B: it is necessary that the number of columns of matrix A equal the number of rows of matrix B.

    Formula (1.4) is a rule for composing the elements of the matrix C, the product of matrix A and matrix B. This rule can also be stated verbally: the element c_ij standing at the intersection of the i-th row and the j-th column of the matrix C = A·B equals the sum of the pairwise products of the corresponding elements of the i-th row of matrix A and the j-th column of matrix B.

    As an example of the application of this rule, we present the formula for multiplying square matrices of the second order:

    ( a_11 a_12 )   ( b_11 b_12 )   ( a_11·b_11 + a_12·b_21   a_11·b_12 + a_12·b_22 )
    ( a_21 a_22 ) × ( b_21 b_22 ) = ( a_21·b_11 + a_22·b_21   a_21·b_12 + a_22·b_22 )

    Formula (1.4) implies the following properties of the product of matrix A and matrix B:

    1) the associativity property: (A·B)·C = A·(B·C);

    2) the distributivity property with respect to the sum of matrices:

    (A + B)·C = A·C + B·C  or  A·(B + C) = A·B + A·C.

    It makes sense to ask about the permutation (commutativity) property of the product of matrix A and matrix B only for square matrices A and B of the same order.

    Let us give important special cases of matrices for which the permutation property is also valid. Two matrices for whose product the permutation property is valid are usually called commuting.

    Among the square matrices we single out the class of so-called diagonal matrices, in each of which all elements located outside the main diagonal are equal to zero. Every diagonal matrix of order n has the form

    D = diag(d_1, d_2, ..., d_n),   (1.5)

    where d_1, d_2, ..., d_n are arbitrary numbers. It is easy to see that if all these numbers are equal to one another, i.e. d_1 = d_2 = ... = d_n, then for any square matrix A of order n the equality A·D = D·A holds.

    Among all diagonal matrices (1.5) with coinciding elements d_1 = d_2 = ... = d_n = d, two matrices play a particularly important role. The first of them is obtained for d = 1 and is called the identity matrix of order n, denoted E. The second is obtained for d = 0 and is called the zero matrix of order n, denoted O. Thus,

    E = diag(1, 1, ..., 1),   O = diag(0, 0, ..., 0) (the matrix of all zeros).

    By virtue of what was proved above, A·E = E·A and A·O = O·A. Moreover, it is easy to show that

    A·E = E·A = A,   A·O = O·A = O.   (1.6)

    The first of formulas (1.6) characterizes the special role of the identity matrix E, analogous to the role the number 1 plays in the multiplication of real numbers. As for the special role of the zero matrix O, it is revealed not only by the second of formulas (1.6) but also by the elementarily verifiable equality

    A + O = O + A = A.

    In conclusion, we note that the concept of a zero matrix can also be introduced for non-square matrices (any matrix all of whose elements equal zero is called a zero matrix).
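    A short NumPy illustration of formulas (1.6) and of the fact that a scalar diagonal matrix commutes with any square matrix; the test matrix is chosen arbitrarily:

        import numpy as np

        n = 3
        A = np.arange(1, n * n + 1).reshape(n, n)   # arbitrary 3x3 matrix
        E = np.eye(n, dtype=int)                    # identity matrix
        O = np.zeros((n, n), dtype=int)             # zero matrix
        D = 5 * E                                   # diagonal matrix with d_1 = ... = d_n = 5

        print(np.array_equal(A @ E, A) and np.array_equal(E @ A, A))  # A E = E A = A
        print(np.array_equal(A @ O, O) and np.array_equal(O @ A, O))  # A O = O A = O
        print(np.array_equal(A + O, A))                               # A + O = A
        print(np.array_equal(A @ D, D @ A))                           # scalar diagonal commutes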

    Block matrices

    Suppose that some matrix A = || a_ij || is divided, by means of horizontal and vertical lines, into separate rectangular cells, each of which is a matrix of smaller dimensions and is called a block of the original matrix. In this case the original matrix A can be regarded as a new (so-called block) matrix A = || A_αβ || whose elements are the indicated blocks. We denote these elements with a capital Latin letter to emphasize that they are, generally speaking, matrices rather than numbers, and (like ordinary numerical elements) we supply them with two indices, the first of which indicates the number of the block row and the second the number of the block column.

    For example, a matrix partitioned in this way can be thought of as a block matrix whose elements are the resulting blocks.

    It is a remarkable fact that the basic operations on block matrices are performed by the same rules as for ordinary numerical matrices, only with blocks acting as elements.
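    A sketch of this idea in NumPy: a 4×4 matrix is assembled from four 2×2 blocks, and the product of two such matrices is computed block by block with the ordinary multiplication rule (the blocks are arbitrary examples):

        import numpy as np

        # blocks of the first matrix
        A11, A12 = np.array([[1, 2], [3, 4]]), np.array([[0, 1], [1, 0]])
        A21, A22 = np.array([[2, 0], [0, 2]]), np.array([[1, 1], [1, 1]])
        # blocks of the second matrix
        B11, B12 = np.array([[1, 0], [0, 1]]), np.array([[2, 3], [4, 5]])
        B21, B22 = np.array([[1, 1], [0, 1]]), np.array([[0, 2], [2, 0]])

        A = np.block([[A11, A12], [A21, A22]])
        B = np.block([[B11, B12], [B21, B22]])

        # block-wise product, using the usual rule with blocks as "elements"
        C_blocks = np.block([
            [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
            [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
        ])
        print(np.array_equal(C_blocks, A @ B))   # True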

    Determinant concept.

    Consider an arbitrary square matrix of order n:

    A = || a_ij || ,  i, j = 1, 2, ..., n.   (1.7)

    With each such matrix, we associate a well-defined numerical characteristic called the determinant corresponding to this matrix.

    If the order n of matrix (1.7) equals one, then the matrix consists of a single element a_11, and the first-order determinant corresponding to such a matrix is defined as the value of this element.

    If the order n of the matrix equals two, then the second-order determinant corresponding to such a matrix is the number a_11·a_22 - a_12·a_21, denoted by one of the symbols:

    So, by definition,

    det A = | a_11  a_12 |  = a_11·a_22 - a_12·a_21.   (1.9)
            | a_21  a_22 |

    Formula (1.9) is a rule for compiling a second-order determinant from the elements of the corresponding matrix. The verbal formulation of this rule is as follows: the second-order determinant corresponding to matrix (1.8) is equal to the difference between the product of elements on the main diagonal of this matrix and the product of elements on its secondary diagonal. Determinants of the second and higher orders are widely used in solving systems of linear equations.

    Let us consider how operations with matrices are performed in MathCad. The simplest operations of matrix algebra are implemented in MathCad as operators, and the notation of the operators is as close as possible to their mathematical meaning; each operator is expressed by a corresponding symbol. Consider the matrix and vector operations of MathCad 2001. Vectors are a special case of matrices of dimension n×1, so all the operations valid for matrices are valid for them unless restrictions are specifically stipulated (for example, some operations apply only to square n×n matrices). Some actions are allowed only for vectors (for example, the dot product), and some, despite identical notation, act differently on vectors and on matrices.


    In the dialog that appears, specify the number of rows and columns of the matrix.

    After pressing the OK button, a field for entering the matrix elements opens. To enter a matrix element, place the cursor in the marked position and type a number or an expression from the keyboard.

    In order to perform any operation using the toolbar, you need to:

    select the matrix and click the operation button on the panel,

    or click the button on the panel and enter the matrix name in the marked position.

    The Symbols menu contains three operations - transpose, invert, determinant.

    This means, for example, that you can calculate the determinant of a matrix by running the command Symbols / Matrices / Determinant.

    MathCAD stores the number of the first row (and first column) of a matrix in the ORIGIN variable. By default, numbering starts from zero. In mathematical notation it is more common to count from 1; to make MathCAD count row and column numbers from 1, set the variable ORIGIN := 1.

    Functions intended for working with linear algebra problems are collected in the "Vectors and Matrices" section of the "Insert Function" dialog (recall that it is called by the button on the "Standard" panel). The main functions of this group will be described later.

    Transposition (Fig. 2: matrix transposition in MathCAD).

    In MathCAD you can both add matrices and subtract them from each other. These operators use the symbols <+> and <-> respectively. The matrices must have the same dimensions, otherwise an error message is displayed. Each element of the sum of two matrices equals the sum of the corresponding elements of the matrix addends (example in Fig. 3).
    In addition to matrix addition, MathCAD supports adding a matrix and a scalar value, i.e. a number (example in Fig. 4). Each element of the resulting matrix equals the sum of the corresponding element of the original matrix and the scalar.
    To enter the multiplication symbol, press the asterisk key <*> or use the Matrix toolbar by pressing its Dot Product (Multiplication) button (Fig. 1). By default matrix multiplication is denoted by a dot, as shown in the example in Fig. 6. The matrix multiplication symbol can be chosen in the same way as in scalar expressions.
    Another example, concerning the multiplication of a vector by a row matrix and, conversely, of a row by a vector, is shown in Fig. 7. The second line of this example shows how the formula looks when the display of the multiplication operator is set to No Space (Together). However, the same multiplication operator acts differently on two vectors.



    Definition. A matrix of size m×n, where m is the number of rows and n is the number of columns, is a table of numbers arranged in a certain order. These numbers are called the elements of the matrix. The location of each element is uniquely determined by the number of the row and the column at whose intersection it is located. The elements of a matrix are denoted a_ij, where i is the row number and j is the column number.

    Basic operations on matrices.

    The matrix can consist of one row or one column. Generally speaking, a matrix can even consist of one element.

    Definition. If the number of columns of the matrix is equal to the number of rows (m = n), then the matrix is called square.

    Definition. If a_ij = a_ji, then the matrix is called symmetric.

    Example: a matrix that coincides with its transpose is a symmetric matrix.

    Definition. A square matrix whose elements outside the main diagonal are all zero is called a diagonal matrix.

    Definition. A diagonal matrix with only ones on the main diagonal is called the identity (unit) matrix and is denoted E.

    Definition. A matrix with only zero elements below the main diagonal is called an upper triangular matrix. If the matrix has only zero elements above the main diagonal, it is called a lower triangular matrix.

    Definition. Two matrices are called equal if they have the same dimensions and the equality a_ij = b_ij holds for all of their elements.

    · Addition and subtraction of matrices reduce to the corresponding operations on their elements. The most important property of these operations is that they are defined only for matrices of the same size. Thus, the operations of addition and subtraction of matrices can be defined as follows:

    Definition. The sum (difference) of matrices is the matrix whose elements are, respectively, the sums (differences) of the elements of the original matrices: c_ij = a_ij ± b_ij.

    C = A + B = B + A.

    · The operation of multiplying (dividing) a matrix of any size by an arbitrary number reduces to multiplying (dividing) each element of the matrix by that number:

    α(A ± B) = αA ± αB,

    (α ± β)A = αA ± βA.

    Example. Given matrices A and B of the same size, find 2A + B: double every element of A and then add the corresponding elements of B.

    · Definition. The product of matrices is the matrix whose elements are calculated by the formula c_ij = Σ (k) a_ik · b_kj.

    It can be seen from the above definition that the operation of matrix multiplication is defined only for matrices in which the number of columns of the first equals the number of rows of the second.

    Example.

    · Definition. Matrix B is called the transpose of matrix A, and the transition from A to B is called transposition, if the elements of each row of matrix A are written, in the same order, into the corresponding columns of matrix B.

    A = (a_ij);   B = A^T = (b_ij);   in other words, b_ji = a_ij.

    Inverse matrix.

    Definition. If square matrices X and A of the same order satisfy the condition

    X·A = A·X = E,

    where E is the identity matrix of the same order as the matrix A, then the matrix X is called the inverse of the matrix A and is denoted A^(-1).

    Each square matrix with a determinant that is not equal to zero has an inverse matrix and, moreover, only one.

    The inverse matrix can be built according to the following scheme: compute det A, compute the cofactor (algebraic complement) of every element, transpose the resulting matrix of cofactors, and divide it by det A.

    If det A ≠ 0, the matrix is called non-degenerate (non-singular); otherwise it is degenerate (singular).

    An inverse matrix can only be constructed for non-degenerate matrices.

    Properties of inverse matrices.

    1) (A^(-1))^(-1) = A;

    2) (AB)^(-1) = B^(-1)·A^(-1);

    3) (A^T)^(-1) = (A^(-1))^T.
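    A sketch of the scheme above in NumPy (cofactor matrix, transpose, division by the determinant), together with a check of property 2; the test matrices are arbitrary:

        import numpy as np

        def inverse(A):
            """Inverse via the transposed cofactor (adjugate) matrix divided by det A."""
            n = A.shape[0]
            d = np.linalg.det(A)
            assert abs(d) > 1e-12, "degenerate matrix: no inverse"
            C = np.zeros_like(A, dtype=float)
            for i in range(n):
                for j in range(n):
                    minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                    C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
            return C.T / d

        A = np.array([[2.0, 1.0, 0.0],
                      [1.0, 3.0, 1.0],
                      [0.0, 1.0, 2.0]])
        B = np.array([[1.0, 0.0, 1.0],
                      [2.0, 1.0, 0.0],
                      [0.0, 1.0, 1.0]])

        print(np.allclose(inverse(A) @ A, np.eye(3)))                 # A^-1 A = E
        print(np.allclose(inverse(A @ B), inverse(B) @ inverse(A)))   # (AB)^-1 = B^-1 A^-1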

    The rank of a matrix is the highest order of the nonzero minors of this matrix.

    In a matrix of order m×n, a minor of order r is called basic if it is nonzero while all minors of order r + 1 and higher are equal to zero or do not exist at all (the latter happens when r equals the smaller of m and n).

    The columns and rows of the matrix on which the base minor stands are also called basic.

    The matrix can have several different basic minors of the same order.

    A highly important property of elementary matrix transformations is that they do not change the rank of the matrix.

    Definition. Matrices obtained from one another by elementary transformations are called equivalent.

    It should be noted that equal matrices and equivalent matrices are completely different concepts.

    Theorem. The largest number of linearly independent columns in a matrix is equal to the number of linearly independent rows.

    Because elementary transformations do not change the rank of a matrix, the process of finding the rank can be significantly simplified.

    Example. Determine the rank of the matrix.
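    A sketch of rank computation by Gaussian elimination (elementary row transformations do not change the rank), checked against NumPy's built-in routine. The matrix from the lecture's example is not preserved, so an arbitrary one is used:

        import numpy as np

        def rank(A, eps=1e-10):
            """Rank via row reduction: count the pivot rows after elimination."""
            M = A.astype(float).copy()
            rows, cols = M.shape
            r = 0                      # index of the current pivot row
            for c in range(cols):
                pivot = max(range(r, rows), key=lambda i: abs(M[i, c]))
                if abs(M[pivot, c]) < eps:
                    continue
                M[[r, pivot]] = M[[pivot, r]]          # swap rows
                M[r] = M[r] / M[r, c]                  # normalize the pivot row
                for i in range(rows):
                    if i != r:
                        M[i] = M[i] - M[i, c] * M[r]   # eliminate the column
                r += 1
                if r == rows:
                    break
            return r

        A = np.array([[1, 2, 3],
                      [2, 4, 6],
                      [1, 0, 1]])
        print(rank(A), np.linalg.matrix_rank(A))   # 2 2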

    Addition of matrices:

    Subtraction and addition of matrices reduce to the corresponding operations on their elements. The matrix addition operation is introduced only for matrices of the same size, i.e. matrices whose numbers of rows and of columns coincide, respectively. The sum of matrices A and B is the matrix C whose elements equal the sums of the corresponding elements: C = A + B, c_ij = a_ij + b_ij. The difference of matrices is defined similarly.

    Multiplying a matrix by a number:

    The operation of multiplying (dividing) a matrix of any size by an arbitrary number reduces to multiplying (dividing) each element of the matrix by that number. The product of the matrix A and the number k is the matrix B such that

    b_ij = k·a_ij, i.e. B = k·A. The matrix -A = (-1)·A is called the opposite of the matrix A.

    Properties of matrix addition and of multiplication of a matrix by a number:

    The operations of matrix addition and multiplication of a matrix by a number have the following properties: 1. A + B = B + A; 2. A + (B + C) = (A + B) + C; 3. A + O = A; 4. A - A = O; 5. 1·A = A; 6. α·(A + B) = αA + αB; 7. (α + β)·A = αA + βA; 8. α·(βA) = (αβ)·A, where A, B and C are matrices and α, β are numbers.

    Matrix multiplication (Matrix product):

    The operation of multiplying two matrices is introduced only for the case when the number of columns of the first matrix equals the number of rows of the second matrix. The product of an m×n matrix A and an n×p matrix B is the m×p matrix C such that c_ik = a_i1·b_1k + a_i2·b_2k + ... + a_in·b_nk, i.e. the sum of the products of the elements of the i-th row of matrix A with the corresponding elements of the k-th column of matrix B. If matrices A and B are square and of the same size, then the products AB and BA always exist. It is easy to show that A·E = E·A = A, where A is a square matrix and E is the identity matrix of the same size.

    Matrix multiplication properties:

    Matrix multiplication is not commutative, i.e. in general AB ≠ BA even when both products are defined. However, if for particular matrices the relation AB = BA is satisfied, such matrices are called permutable (commuting). The most typical example is the identity matrix, which commutes with any other matrix of the same size. Only square matrices of the same order can commute. A·E = E·A = A.

    Matrix multiplication has the following properties: 1. A·(B·C) = (A·B)·C; 2. A·(B + C) = AB + AC; 3. (A + B)·C = AC + BC; 4. α·(AB) = (αA)·B; 5. A·O = O, O·A = O; 6. (AB)^T = B^T·A^T; 7. (ABC)^T = C^T·B^T·A^T; 8. (A + B)^T = A^T + B^T.

    2. Determinants of the 2nd and 3rd orders. Determinant properties.

    The determinant of a second-order matrix, or a second-order determinant, is the number calculated by the formula: det A = a_11·a_22 - a_12·a_21.

    The determinant of a third-order matrix, or a third-order determinant, is the number calculated by the formula: det A = a_11·a_22·a_33 + a_12·a_23·a_31 + a_13·a_21·a_32 - a_13·a_22·a_31 - a_11·a_23·a_32 - a_12·a_21·a_33.

    This number is an algebraic sum of six terms. Each term contains exactly one element from each row and each column of the matrix, and each term is a product of three factors.

    The signs with which the terms enter the formula for the third-order determinant can be determined using the scheme known as the rule of triangles, or Sarrus' rule: the first three terms are taken with a plus sign (the left diagram), and the other three terms with a minus sign (the right diagram).

    The number of terms in the algebraic sum for a determinant can be found as the factorial of its order: 2! = 1·2 = 2, 3! = 1·2·3 = 6.
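    Sarrus' rule can be written out directly; a small sketch (checked against NumPy) for an arbitrary 3×3 matrix:

        import numpy as np

        def det3_sarrus(m):
            """Third-order determinant by the rule of triangles (Sarrus' rule)."""
            (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = m
            plus  = a11 * a22 * a33 + a12 * a23 * a31 + a13 * a21 * a32
            minus = a13 * a22 * a31 + a11 * a23 * a32 + a12 * a21 * a33
            return plus - minus

        M = [[2, 1, 3],
             [0, 4, 1],
             [5, 2, 2]]
        print(det3_sarrus(M))                    # -43
        print(round(np.linalg.det(np.array(M)))) # -43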

    Properties of matrix determinants:

    Property # 1:

    The determinant of a matrix does not change if its rows are replaced by columns, each row by the column with the same number, and vice versa (transposition): |A| = |A^T|.

    Corollary:

    The rows and columns of a determinant are equivalent; therefore the properties stated for rows also hold for columns.

    Property # 2:

    When two rows or two columns are swapped, the determinant of the matrix reverses its sign while keeping its absolute value.

    Property # 3:

    A determinant having two identical rows is zero.

    Property # 4:

    A common factor of the elements of any row of a determinant can be taken outside the determinant sign.

    Corollary of properties # 3 and # 4:

    If all elements of a certain row (or column) are proportional to the corresponding elements of a parallel row (column), then the determinant is zero.

    Property # 5:

    If all elements of some row (or column) of a determinant are equal to zero, then the determinant itself is zero.

    Property # 6:

    If all elements of some row or column of a determinant are presented as sums of two terms, then the determinant can be represented as the sum of two determinants, in each of which that row (column) is replaced by one of the two groups of terms.

    Property # 7:

    If the corresponding elements of another row (or column), multiplied by the same number, are added to any row (or column) of a determinant, then the value of the determinant does not change.

    An example of applying these properties to the calculation of a determinant is sketched below.
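    Properties # 1, # 2, # 4 and # 7 are easy to verify numerically; the sketch below uses an arbitrary matrix (the original worked example is not preserved in the text):

        import numpy as np

        A = np.array([[2.0, 1.0, 0.0],
                      [4.0, 3.0, 1.0],
                      [1.0, 5.0, 2.0]])
        d = np.linalg.det(A)

        # 1: transposition does not change the determinant
        print(np.isclose(d, np.linalg.det(A.T)))
        # 2: swapping two rows reverses the sign
        print(np.isclose(np.linalg.det(A[[1, 0, 2]]), -d))
        # 4: a common factor of a row can be taken outside the determinant
        B = A.copy(); B[0] *= 5
        print(np.isclose(np.linalg.det(B), 5 * d))
        # 7: adding a multiple of another row does not change the determinant
        C = A.copy(); C[1] += 3 * C[0]
        print(np.isclose(np.linalg.det(C), d))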

    Let us turn to the definition of operations on matrices.

    1) Matrix addition. The sum of two matrices A = (a_ij) and B = (b_ij) of the same size m×n is the matrix C = (c_ij) of the same size m×n whose elements are

    c_ij = a_ij + b_ij   (i = 1, 2, ..., m; j = 1, 2, ..., n).   (1)

    To denote the sum of matrices, the notation C = A + B is used.

    2) Multiplication of a matrix by a number. The product of an (m×n) matrix A and a number λ is the (m×n) matrix C = (c_ij) whose elements are

    c_ij = λ·a_ij   (i = 1, 2, ..., m; j = 1, 2, ..., n).   (2)

    To denote the product of a matrix and a number, the notation C = λ·A is used.

    It is clear from formulas (1) and (2) that the two introduced operations have the following properties:

    a) A + B = B + A - commutativity of addition;

    b) (A + B) + C = A + (B + C) - associativity of addition;

    c) (λμ)A = λ(μA) - associativity of multiplication by a number;

    d) λ(A + B) = λA + λB - distributivity of multiplication with respect to addition.

    Remark 1. The difference of two matrices can be defined as follows:

    A - B = A + (-1)·B.

    In short, addition, subtraction of matrices, and multiplication of a matrix by a number are done element by element.

    Example:

    3) Matrix multiplication. The product of an (m×n) matrix A = (a_ij) and an (n×p) matrix B = (b_ij) is the (m×p) matrix C = (c_ij) whose elements are calculated by the formula

    c_ij = a_i1·b_1j + a_i2·b_2j + ... + a_in·b_nj,

    which, using the summation symbol, can be written as

    c_ij = Σ (k = 1 to n) a_ik·b_kj   (i = 1, 2, ..., m; j = 1, 2, ..., p).   (3)

    To denote the product of matrix A and matrix B, the notation C = A·B is used.

    Note right away that matrix A cannot be multiplied by every matrix B: it is necessary that the number of columns of matrix A equal the number of rows of matrix B.

    Formula (3) is the rule for finding the elements of the matrix A·B. Let us state this rule verbally: the element c_ij standing in the i-th row and the j-th column of the matrix A·B equals the sum of the pairwise products of the corresponding elements of the i-th row of matrix A and the j-th column of matrix B.

    Let us give an example of the multiplication of square matrices of the second order, following the same rule.

    Matrix multiplication has the following properties:

    a) (AB)C = A(BC) - associativity;

    b) (A + B)C = AC + BC or A(B + C) = AB + AC - distributivity of multiplication with respect to addition.

    It makes sense to pose the question of the commutativity of multiplication only for square matrices of the same order, because only for such matrices A and B are both products AB and BA defined and of the same order. Elementary examples show that matrix multiplication is, generally speaking, non-commutative: there exist matrices A and B for which AB ≠ BA.

    Example. For the given matrix A, find all matrices B such that

    AB = BA.

    Solution. Introduce notation for the elements of B and compute both products AB and BA. The equality AB = BA is equivalent to a system of equations for the elements of B, which in turn reduces to a simpler system. As a result, the required matrix has the form B = zA + (x - z)E, where x and z are arbitrary numbers.

    Comment. The identity and zero matrices of order n commute with any square matrix of the same order, and AE = EA = A, A·O = O·A = O.

    Using the multiplication operation, we give the most concise - matrix - form of writing a system of linear equations. Introduce the notation: A = (a_ij) is the (m×n) matrix of coefficients of the system, B is the m-dimensional column of free terms, and X is the n-dimensional column of unknowns. By definition, the product A·X is an m-dimensional column; its element standing in the i-th row has the form

    a_i1·x_1 + a_i2·x_2 + ... + a_in·x_n.

    But this sum is nothing other than the left-hand side of the i-th equation of the system, and by assumption it equals b_i, i.e. the element standing in the i-th row of the column B. From here we get A·X = B. This is the matrix notation of a system of linear equations. Here A is the matrix of coefficients of the system, B is the column of free terms, and X is the column of unknowns.
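    The matrix notation A·X = B is exactly what numerical solvers consume; a small sketch with a 2×2 system whose coefficients are an arbitrary example:

        import numpy as np

        # system:  x + 2y = 5
        #         3x + 4y = 11
        A = np.array([[1.0, 2.0],
                      [3.0, 4.0]])   # coefficient matrix
        B = np.array([5.0, 11.0])    # column of free terms

        X = np.linalg.solve(A, B)    # column of unknowns
        print(X)                     # [1. 2.]
        print(np.allclose(A @ X, B)) # A X = B holds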

    4) Matrix transposition. Transposition of any matrix is the operation that interchanges its rows and columns while preserving their order. As a result of transposing an (m×n) matrix A, one obtains an (n×m) matrix denoted A′ and called the transpose of A.

    Example. For A = (a_1 a_2 a_3), find A·A′ and A′·A.

    Solution. The transpose of a row is a column. Therefore A·A′ = a_1² + a_2² + a_3² is a square matrix of order 1 (a single number), while A′·A is a square matrix of order 3 whose (i, j) element equals a_i·a_j.
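    The same computation in NumPy: a 1×3 row times a 3×1 column gives a 1×1 matrix, while the column times the row gives a 3×3 matrix (the numbers a_1, a_2, a_3 below are arbitrary):

        import numpy as np

        A = np.array([[1, 4, 3]])   # row matrix (1 x 3)
        At = A.T                    # its transpose, a column (3 x 1)

        print(A @ At)               # [[26]]  -- a_1^2 + a_2^2 + a_3^2 = 1 + 16 + 9
        print(At @ A)               # 3 x 3 matrix with (i, j) element a_i * a_j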