
    Continuity of a function of several variables. Continuity of a function of two variables at a point

    2. Limit and continuity of a function of two variables

    The notions of limit and continuity of a function of two variables are analogous to the case of one variable.

    Let M0(x0; y0) be an arbitrary point of the plane. A δ-neighborhood of the point M0 is the set of all points M(x; y) whose coordinates satisfy the inequality √((x − x0)² + (y − y0)²) < δ. In other words, the δ-neighborhood of M0 is the set of all interior points of the circle with center M0 and radius δ.

    Definition 2. A number A is called the limit of the function z = f(x; y) as M(x; y) → M0(x0; y0) (or at the point M0) if for any arbitrarily small positive number ε there exists δ > 0 (depending on ε) such that the inequality |f(x; y) − A| < ε holds for all x and y satisfying 0 < √((x − x0)² + (y − y0)²) < δ.

    The limit is denoted as follows:

    lim f(x; y) = A  (x → x0, y → y0).

    Example 1. Find the limit.

    Solution. Introducing a suitable change of variables reduces the expression to a one-variable limit, which is then evaluated directly.

    Definition 3. A function z = f(x; y) is called continuous at a point M0(x0; y0) if: 1) it is defined at the point M0 and in a neighborhood of it; 2) it has a finite limit as (x; y) → (x0; y0); 3) this limit equals the value of the function at the point, i.e. lim f(x; y) = f(x0; y0) as (x; y) → (x0; y0).

    A function is called continuous in a region if it is continuous at every point of that region.

    The points at which the continuity condition is not satisfied are called the discontinuity points of the function. For some functions the discontinuity points form entire lines of discontinuity. For example, a function may have two lines of discontinuity: the axis Ox (y = 0) and the axis Oy (x = 0).

    Example 2. Find the discontinuity points of the function.

    Solution. The function is undefined at the points where its denominator vanishes; these points form a circle centered at the origin. The line of discontinuity of the original function is therefore this circle.
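    As a numerical illustration of Example 2 (the original formula did not survive in this copy, so the function below is an assumed example with the same behavior), a function whose denominator vanishes on the unit circle is discontinuous exactly on that circle, and its values grow without bound as the circle is approached:

    ```python
    import math

    def f(x, y):
        # Hypothetical example function: the denominator vanishes on the
        # circle x^2 + y^2 = 1, so that circle is the line of discontinuity.
        return 1.0 / (x**2 + y**2 - 1.0)

    # A point on the unit circle, where f is undefined: the denominator is 0 there.
    theta = 1.0
    x, y = math.cos(theta), math.sin(theta)
    print(abs(x**2 + y**2 - 1.0) < 1e-12)

    # Approaching the circle from outside, |f| grows without bound.
    values = [abs(f(1.0 + eps, 0.0)) for eps in (1e-1, 1e-2, 1e-3)]
    print(values[0] < values[1] < values[2])
    ```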


    A function z = ƒ(x; y) (or ƒ(M)) is called continuous at a point M0(x0; y0) if it:

    a) is defined at this point and in some neighborhood of it,

    b) has a limit as M → M0,

    c) and this limit equals the value of the function z at the point M0, i.e.

    lim ƒ(M) = ƒ(M0) as M → M0.

    A function that is continuous at every point of a region is called continuous in that region. The points at which continuity fails (at least one of the conditions for continuity of a function at a point is not satisfied) are called points of discontinuity of the function.

    71. Derivatives and differentials of functions of several variables. Let a function z = ƒ(x; y) be given. Since x and y are independent variables, one of them can change while the other keeps its value. Give the independent variable x an increment Δx, keeping the value of y unchanged. Then z receives an increment, called the partial increment of z with respect to x and denoted by Δxz. Thus, Δxz = ƒ(x + Δx; y) − ƒ(x; y). Similarly we obtain the partial increment of z with respect to y: Δyz = ƒ(x; y + Δy) − ƒ(x; y). The total increment Δz of the function z is defined by the equality Δz = ƒ(x + Δx; y + Δy) − ƒ(x; y).

    If the limit

    lim (Δxz/Δx) as Δx → 0

    exists, it is called the partial derivative of the function z = ƒ(x; y) at the point M(x; y) with respect to the variable x and is denoted by one of the symbols z′x, ∂z/∂x, ƒ′x(x; y), ∂ƒ/∂x. The partial derivative of z = ƒ(x; y) with respect to the variable y is defined and denoted similarly. Thus, the partial derivative of a function of several (two, three or more) variables is defined as the derivative of the function with respect to one of these variables, provided that the values of the remaining independent variables are held constant. Therefore, the partial derivatives of the function ƒ(x; y) are found by the formulas and rules for calculating the derivatives of a function of one variable (with, respectively, y or x treated as a constant).
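    The definition above can be checked numerically: a difference quotient with one variable held fixed approximates the corresponding partial derivative. This is a minimal sketch; the function f below is an assumed example, not one taken from the text:

    ```python
    # Numerical illustration of partial derivatives: hold one variable fixed
    # and form the difference quotient of the partial increment.
    # The function f is an assumed example for illustration only.

    def f(x, y):
        return x**2 * y + y**3

    def partial_x(f, x, y, h=1e-6):
        # Difference quotient of the partial increment Δx z = f(x+Δx, y) - f(x, y).
        return (f(x + h, y) - f(x, y)) / h

    def partial_y(f, x, y, h=1e-6):
        # Difference quotient of the partial increment Δy z = f(x, y+Δy) - f(x, y).
        return (f(x, y + h) - f(x, y)) / h

    # Analytically: ∂f/∂x = 2xy, ∂f/∂y = x^2 + 3y^2.
    print(round(partial_x(f, 2.0, 3.0), 3))  # close to 2*2*3 = 12
    print(round(partial_y(f, 2.0, 3.0), 3))  # close to 4 + 27 = 31
    ```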

    72. Application of the differential of a function of several (two) variables to approximate calculations. The total differential of a function of several variables can be used for approximate calculations. For a differentiable function, the total increment is expressed by the formula

    Δz = (∂z/∂x)Δx + (∂z/∂y)Δy + αΔx + βΔy,   (1)

    where α and β tend to 0 as ρ = √(Δx² + Δy²) → 0. Therefore, for small ρ, i.e. for small Δx and Δy, the terms αΔx + βΔy can be neglected, and

    Δz ≈ dz = (∂z/∂x)Δx + (∂z/∂y)Δy,

    i.e. the increment of the function can be approximately replaced by its total differential. Since Δz = ƒ(x + Δx; y + Δy) − ƒ(x; y), substituting this expression for Δz into formula (1) gives

    ƒ(x + Δx; y + Δy) ≈ ƒ(x; y) + (∂z/∂x)Δx + (∂z/∂y)Δy.   (2)

    Formula (2) can be used to approximate the values of a function of two variables at a point close to a point P(x; y), if the values of the function and of its partial derivatives at the point P(x; y) itself are known.
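    A short sketch of this approximation, with an assumed function and point chosen so that the exact partial derivatives are easy to write down:

    ```python
    import math

    # Approximate f(x+Δx, y+Δy) by f(x, y) + f'_x Δx + f'_y Δy.
    # The function and the point are assumed for illustration.

    def f(x, y):
        return math.sqrt(x) * y**2

    x0, y0 = 4.0, 3.0          # point where f and its partials are easy to evaluate
    dx, dy = 0.02, -0.01       # small increments

    fx = y0**2 / (2 * math.sqrt(x0))   # ∂f/∂x = y^2 / (2√x)
    fy = 2 * math.sqrt(x0) * y0        # ∂f/∂y = 2√x · y

    approx = f(x0, y0) + fx * dx + fy * dy
    exact = f(x0 + dx, y0 + dy)
    print(round(approx, 4), round(exact, 4))
    print(abs(approx - exact) < 1e-3)   # the differential is a good local estimate
    ```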



    73. Partial derivatives of the first order. Definition. If the ratio of the partial increment with respect to x of the function u = f(x, y, z) at the point M0(x0, y0, z0) to the increment Δx that produced it has a finite limit as Δx → 0, then this limit is called the partial derivative with respect to x of the function u = f(x, y, z) at the point M0 and is denoted by one of the symbols u′x, ∂u/∂x, f′x(M0). By definition,

    f′x(M0) = lim (Δxf(M0)/Δx) as Δx → 0.

    Partial derivatives with respect to y and with respect to z are defined in the same way. The derivatives f′x, f′y, f′z are also called first-order partial derivatives of the function f(x, y, z), or first partial derivatives. Since the partial increment Δxf(M0) is produced only by the increment of the independent variable x, with the values of the other independent variables fixed, the partial derivative f′x(M0) can be regarded as the derivative of the function f(x, y0, z0) of the single variable x. Therefore, to find the derivative with respect to x, all the other independent variables are treated as constants and the derivative with respect to x is calculated as for a function of one independent variable x. Partial derivatives with respect to the other independent variables are calculated similarly. If the partial derivatives exist at every point of the domain V, they are themselves functions of the same independent variables as the function itself.

    74. Directional derivative. Gradient. Let a function u = u(x, y, z) and a point M(x, y, z) be given in some domain D. Draw from the point M a vector l whose direction cosines are cos α, cos β, cos γ, and on this vector consider a point at distance Δl from its origin. We assume that the function u = u(x, y, z) and its first-order partial derivatives are continuous in D. The limit of the ratio Δu/Δl as Δl → 0 is called the derivative of the function u = u(x, y, z) at the point M(x, y, z) in the direction of the vector l and is denoted ∂u/∂l. To find the derivative of u = u(x, y, z) at a given point in the direction of the vector l, one uses the formula

    ∂u/∂l = (∂u/∂x)cos α + (∂u/∂y)cos β + (∂u/∂z)cos γ,

    where cos α, cos β, cos γ are the direction cosines of the vector l. Let a function u = u(x, y, z) be given at each point of some domain D. The vector whose projections on the coordinate axes are the values of the partial derivatives of this function at the corresponding point is called the gradient of the function u = u(x, y, z) and is denoted grad u or ∇u (read "nabla u"):

    grad u = (∂u/∂x, ∂u/∂y, ∂u/∂z).

    One says that a vector field of gradients is defined in the region D.

    Gradient properties. 1. The derivative at a given point in the direction of a vector l has its greatest value when the direction of l coincides with the direction of the gradient; this greatest value of the derivative is |grad u|. 2. The derivative in the direction of any vector perpendicular to the vector grad u is equal to zero.
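    Gradient property 1 can be verified numerically: taking the direction cosines of the gradient itself, the directional derivative equals |grad u|. The function u below is an assumed example:

    ```python
    import math

    # Numerical sketch of the gradient and a directional derivative for an
    # assumed function u(x, y, z), following the text's formula
    # ∂u/∂l = u_x cos α + u_y cos β + u_z cos γ.

    def u(x, y, z):
        return x * y + z**2

    def grad(x, y, z, h=1e-6):
        # Forward-difference approximations of the three partial derivatives.
        return ((u(x + h, y, z) - u(x, y, z)) / h,
                (u(x, y + h, z) - u(x, y, z)) / h,
                (u(x, y, z + h) - u(x, y, z)) / h)

    g = grad(1.0, 2.0, 3.0)          # analytically (y, x, 2z) = (2, 1, 6)
    norm = math.sqrt(sum(c * c for c in g))

    # Direction cosines of the gradient direction itself.
    cosines = [c / norm for c in g]
    d_along_grad = sum(gc * c for gc, c in zip(g, cosines))

    # Property 1: the derivative in the gradient direction equals |grad u|.
    print(abs(d_along_grad - norm) < 1e-6)
    ```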



    75. Extremum of a function of several variables. The concepts of maximum, minimum and extremum of a function of two variables are analogous to the corresponding concepts for a function of one independent variable. Let the function z = f(x; y) be defined in some region D, and let the point N(x0; y0) ∈ D. The point (x0; y0) is called a maximum point of the function z = f(x; y) if there is a δ-neighborhood of the point (x0; y0) such that for each point (x; y) of this neighborhood other than (x0; y0) the inequality f(x; y) < f(x0; y0) holds. A minimum point of the function is defined similarly: for all points (x; y) other than (x0; y0) from a δ-neighborhood of the point (x0; y0), the inequality f(x; y) > f(x0; y0) holds. Figure 6: N1 is a maximum point, and N2 is a minimum point, of the function z = f(x; y). The value of the function at a maximum (minimum) point is called a maximum (minimum) of the function. The maximum and minimum of a function are together called its extrema.

    Necessary conditions for an extremum: if the function z = f(x, y) has an extremum at the point M0(x0, y0), then each first-order partial derivative of z at this point either equals zero or does not exist. The points at which the partial derivatives of the function z = f(x, y) are zero or do not exist are called critical points of the function. Note that, by definition, an extremum point lies inside the domain of the function; a maximum or minimum is local in character: the value of the function at the point (x0; y0) is compared with its values at points sufficiently close to (x0; y0). In a region D a function may have several extrema or none at all.

    76. Conditional extremum. Lagrange multiplier method. The function z = f(x, y) has a conditional minimum (maximum) at an interior point M0(x0, y0) if for all points M(x, y) from some neighborhood O(M0) satisfying the constraint equation ϕ(x, y) = 0 the condition Δf(x0, y0) = f(x, y) − f(x0, y0) ≥ 0 (respectively Δf(x0, y0) ≤ 0) holds. In the general case the problem reduces to finding an ordinary extremum of the Lagrange function L(x, y, λ) = f(x, y) + λϕ(x, y) with an unknown Lagrange multiplier λ. The necessary condition for an extremum of the Lagrange function L(x, y, λ) is a system of three equations in the three unknowns x, y, λ:

    ∂L/∂x = 0, ∂L/∂y = 0, ϕ(x, y) = 0.

    A sufficient condition for the extremum of the Lagrange function is given by the following statement: if Δ > 0, the function z = f(x, y) has a conditional minimum at the point M0(x0, y0); if Δ < 0, a conditional maximum.
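    A brute-force sketch of a conditional extremum (the objective and the constraint are assumed examples, not the text's): maximize f(x, y) = x + y subject to x² + y² = 1. The Lagrange conditions give the analytic answer x = y = 1/√2 with f = √2, and a dense search over the parameterized constraint should approach that value:

    ```python
    import math

    # Conditional maximum of f(x, y) = x + y on the circle x^2 + y^2 = 1,
    # found by scanning the parameterization (cos t, sin t).  The Lagrange
    # multiplier method gives the exact answer f = √2 at x = y = 1/√2.

    def f(x, y):
        return x + y

    best = max(
        f(math.cos(t), math.sin(t))
        for t in (2 * math.pi * k / 100000 for k in range(100000))
    )
    print(round(best, 4))            # close to √2 ≈ 1.4142
    print(abs(best - math.sqrt(2)) < 1e-4)
    ```

    The grid search is only a check on the analytic result; in practice the system ∂L/∂x = 0, ∂L/∂y = 0, ϕ = 0 is solved directly.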

    77. Number series. Basic concepts. Convergence of a series. A number series is an expression of the form u1 + u2 + … + un + …, where u1, u2, …, un, … are real or complex numbers called the terms of the series; un is the common term of the series. A series is considered given if its common term un is known as a function of its index n: un = f(n). The sum of the first n terms of the series is called the n-th partial sum of the series and is denoted Sn, i.e. Sn = u1 + u2 + … + un. If the sequence of partial sums of the series has a finite limit, this limit is called the sum of the series, and the series is said to converge.

    78. A necessary criterion for convergence. The harmonic series. Theorem: let the number series u1 + u2 + … + un + …, (1) converge, and let S be its sum. Then, as the number n of terms grows without bound, the common term un of the series tends to 0. This criterion is necessary but not sufficient for convergence, since one can exhibit a divergent series whose common term nevertheless tends to zero.

    Indeed, if the series converged, its common term would have to tend to 0. Thus the theorem just proved sometimes allows us, without computing the sums Sn, to conclude that a particular series diverges: if un does not tend to 0, the series diverges. The harmonic series is the sum of infinitely many terms that are the reciprocals of the consecutive natural numbers:

    1 + 1/2 + 1/3 + … + 1/n + …

    The series is called harmonic because it is composed of "harmonics": the k-th harmonic that can be extracted from a violin string is the fundamental tone produced by a string whose length is 1/k of the length of the original string.
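    Partial sums make the point concrete: the harmonic series has terms 1/n → 0 yet its partial sums keep growing, while a geometric series with terms 1/2ⁿ (an assumed comparison example) settles near its sum 1:

    ```python
    # Partial sums illustrate that the necessary criterion is not sufficient:
    # 1/n → 0 but the harmonic partial sums grow without bound (roughly ln n),
    # whereas the geometric series with terms 1/2^n converges to 1.

    def partial_sum(term, n):
        return sum(term(k) for k in range(1, n + 1))

    harmonic = [partial_sum(lambda k: 1.0 / k, n) for n in (10, 100, 1000, 10000)]
    geometric = [partial_sum(lambda k: 0.5**k, n) for n in (10, 100, 1000, 10000)]

    print([round(s, 3) for s in harmonic])    # keeps increasing
    print([round(s, 3) for s in geometric])   # settles near 1
    print(harmonic[-1] - harmonic[0] > 2.0)
    print(abs(geometric[-1] - 1.0) < 1e-3)
    ```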

    Definition 1

    If to each pair $(x, y)$ of values of two independent variables from a certain domain there corresponds a certain value of $z$, then $z$ is called a function of the two variables $(x, y)$ in this domain.

    Notation: $z = f(x, y)$.

    Let a function $z = f(x, y)$ of two independent variables $(x, y)$ be given.

    Remark 1

    Since the variables $(x, y)$ are independent, one of them can be changed while the other remains constant.

    Give the variable $x$ an increment $\Delta x$, keeping the value of the variable $y$ unchanged.

    Then the function $z = f(x, y)$ receives an increment, which is called the partial increment of the function $z = f(x, y)$ with respect to the variable $x$. Notation:

    \[\Delta _{x} z = f(x + \Delta x, y) - f(x, y).\]

    Definition 2

    The partial derivative with respect to the variable $x$ of the function $z = f(x, y)$ is the limit of the ratio of the partial increment $\Delta _{x} z$ of the function to the increment $\Delta x$ as $\Delta x \to 0$, provided this limit exists. Notation: $z'_{x}$, $f'_{x} (x, y)$, $\frac{\partial z}{\partial x}$, $\frac{\partial f}{\partial x}$.

    Remark 2

    \[\frac{\partial z}{\partial x} = \mathop{\lim }\limits_{\Delta x \to 0} \frac{\Delta _{x} z}{\Delta x} = \mathop{\lim }\limits_{\Delta x \to 0} \frac{f(x + \Delta x, y) - f(x, y)}{\Delta x}.\]

    Give the variable $y$ an increment $\Delta y$, keeping the value of the variable $x$ unchanged.

    Then the function $z = f(x, y)$ receives an increment, which is called the partial increment of the function $z = f(x, y)$ with respect to the variable $y$. Notation:

    \[\Delta _{y} z = f(x, y + \Delta y) - f(x, y).\]

    Definition 3

    The partial derivative with respect to the variable $y$ of the function $z = f(x, y)$ is the limit of the ratio of the partial increment $\Delta _{y} z$ of the function to the increment $\Delta y$ as $\Delta y \to 0$, provided this limit exists. Notation: $z'_{y}$, $f'_{y} (x, y)$, $\frac{\partial z}{\partial y}$, $\frac{\partial f}{\partial y}$.

    Remark 3

    By the definition of a partial derivative, we have:

    \[\frac{\partial z}{\partial y} = \mathop{\lim }\limits_{\Delta y \to 0} \frac{\Delta _{y} z}{\Delta y} = \mathop{\lim }\limits_{\Delta y \to 0} \frac{f(x, y + \Delta y) - f(x, y)}{\Delta y}.\]

    Note that the rules for calculating partial derivatives coincide with the rules for calculating derivatives of a function of one variable. When calculating a partial derivative, however, one must keep in mind with respect to which variable the differentiation is carried out.

    Example 1

    Find the partial derivatives of the function $z = x + y^{2}$.

    Solution:

    $\frac{\partial z}{\partial x} = (x + y^{2})'_{x} = 1$ (with respect to the variable $x$),

    $\frac{\partial z}{\partial y} = (x + y^{2})'_{y} = 2y$ (with respect to the variable $y$).

    Example 2

    Determine the partial derivatives of the function $z = x^{2} + y^{3}$ at the point $(1; 2)$.

    By the definition of partial derivatives, we get:

    $\frac{\partial z}{\partial x} = (x^{2} + y^{3})'_{x} = 2x$ (with respect to the variable $x$),

    $\frac{\partial z}{\partial y} = (x^{2} + y^{3})'_{y} = 3y^{2}$ (with respect to the variable $y$).

    The values of the partial derivatives at the point $(1; 2)$:

    \[\left. \frac{\partial z}{\partial x} \right|_{(1;2)} = 2 \cdot 1 = 2, \qquad \left. \frac{\partial z}{\partial y} \right|_{(1;2)} = 3 \cdot 2^{2} = 12.\]
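    The values obtained in Example 2 can be cross-checked with difference quotients:

    ```python
    # Numerical cross-check of Example 2: for z = x^2 + y^3 the text obtains
    # ∂z/∂x = 2x and ∂z/∂y = 3y^2, so at the point (1; 2) the values are 2 and 12.

    def z(x, y):
        return x**2 + y**3

    h = 1e-6
    dz_dx = (z(1 + h, 2) - z(1, 2)) / h
    dz_dy = (z(1, 2 + h) - z(1, 2)) / h

    print(round(dz_dx, 3), round(dz_dy, 3))   # close to 2.0 and 12.0
    ```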

    Definition 4

    If to each triple $(x, y, z)$ of values of three independent variables from a certain region there corresponds a certain value of $w$, then $w$ is called a function of the three variables $(x, y, z)$ in this region.

    Notation: $w = f(x, y, z)$.

    Definition 5

    If to each collection $(x, y, z, ..., t)$ of values of independent variables from a certain region there corresponds a certain value of $w$, then $w$ is called a function of the variables $(x, y, z, ..., t)$ in this domain.

    Notation: $w = f(x, y, z, ..., t)$.

    For a function of three or more variables, in the same way as for a function of two variables, partial derivatives with respect to each of the variables are determined:

      $\frac{\partial w}{\partial z} = \mathop{\lim }\limits_{\Delta z \to 0} \frac{\Delta _{z} w}{\Delta z} = \mathop{\lim }\limits_{\Delta z \to 0} \frac{f(x, y, z + \Delta z) - f(x, y, z)}{\Delta z}$;

      $\frac{\partial w}{\partial t} = \mathop{\lim }\limits_{\Delta t \to 0} \frac{\Delta _{t} w}{\Delta t} = \mathop{\lim }\limits_{\Delta t \to 0} \frac{f(x, y, z, ..., t + \Delta t) - f(x, y, z, ..., t)}{\Delta t}$.

    Example 3

    Find the partial derivatives of the function $w = x + y^{2} + 2z$.

    By the definition of partial derivatives, we get:

    $\frac{\partial w}{\partial x} = (x + y^{2} + 2z)'_{x} = 1$ (with respect to the variable $x$),

    $\frac{\partial w}{\partial y} = (x + y^{2} + 2z)'_{y} = 2y$ (with respect to the variable $y$),

    $\frac{\partial w}{\partial z} = (x + y^{2} + 2z)'_{z} = 2$ (with respect to the variable $z$).

    Example 4

    Find the partial derivatives of the function $w = x + y^{2} + 2\ln z$ at the point $(1; 2; 1)$.

    By the definition of partial derivatives, we get:

    $\frac{\partial w}{\partial x} = (x + y^{2} + 2\ln z)'_{x} = 1$ (with respect to the variable $x$),

    $\frac{\partial w}{\partial y} = (x + y^{2} + 2\ln z)'_{y} = 2y$ (with respect to the variable $y$),

    $\frac{\partial w}{\partial z} = (x + y^{2} + 2\ln z)'_{z} = \frac{2}{z}$ (with respect to the variable $z$).

    The values of the partial derivatives at the given point:

    \[\left. \frac{\partial w}{\partial x} \right|_{(1;2;1)} = 1, \quad \left. \frac{\partial w}{\partial y} \right|_{(1;2;1)} = 2 \cdot 2 = 4, \quad \left. \frac{\partial w}{\partial z} \right|_{(1;2;1)} = \frac{2}{1} = 2.\]

    Example 5

    Find the partial derivatives of the function $w = 3\ln x + y^{2} + 2z + ... + t^{2}$.

    By the definition of partial derivatives, we get:

    $\frac{\partial w}{\partial x} = (3\ln x + y^{2} + 2z + ... + t^{2})'_{x} = \frac{3}{x}$ (with respect to the variable $x$),

    $\frac{\partial w}{\partial y} = (3\ln x + y^{2} + 2z + ... + t^{2})'_{y} = 2y$ (with respect to the variable $y$),

    $\frac{\partial w}{\partial z} = (3\ln x + y^{2} + 2z + ... + t^{2})'_{z} = 2$ (with respect to the variable $z$),

    $\frac{\partial w}{\partial t} = (3\ln x + y^{2} + 2z + ... + t^{2})'_{t} = 2t$ (with respect to the variable $t$).

    Let us prove (7), for example.

    Let (xk, yk) → (x0, y0), with (xk, yk) ≠ (x0, y0); then

    (9)

    Thus the limit on the left side of (9) exists and equals the right side of (9), and since the sequence (xk, yk) tends to (x0, y0) in an arbitrary manner, this limit equals the limit of the function f(x, y)·φ(x, y) at the point (x0, y0).

    Theorem. If the function f(x, y) has a limit A that is not equal to zero at the point (x0, y0), i.e.

    lim f(x, y) = A ≠ 0 as (x, y) → (x0, y0),

    then there exists δ > 0 such that for all (x, y) satisfying

    0 < √((x − x0)² + (y − y0)²) < δ,   (10)

    the inequality

    |f(x, y)| > |A|/2   (11)

    holds. Indeed, for such (x, y)

    |f(x, y) − A| < |A|/2,   (12)

    and therefore |f(x, y)| ≥ |A| − |f(x, y) − A| > |A|/2, i.e. inequality (11) holds. From inequality (12), for the indicated (x, y), it follows that

    f(x, y) > A/2 for A > 0 and f(x, y) < A/2 for A < 0

    (preservation of sign).

    By definition, the function f(x) = f(x1, …, xn) has at the point x0 a limit equal to the number A, denoted

    lim f(x) = A as x → x0

    (one also writes f(x) → A (x → x0)), if the function is defined on some neighborhood of the point x0, except perhaps at x0 itself, and if

    lim f(xk) = A

    for every sequence of points xk (k = 1, 2, …) from the indicated neighborhood, distinct from x0 and tending to x0, whatever that sequence may be.

    Another, equivalent, definition: the function f has at the point x0 a limit equal to A if it is defined in some neighborhood of x0, with the possible exception of x0 itself, and if for any ε > 0 there is δ > 0 such that

    |f(x) − A| < ε   (13)

    for all x satisfying the inequalities

    0 < |x − x0| < δ.

    This definition, in turn, is equivalent to the following: for any ε > 0 there is a neighborhood U(x0) of the point x0 such that inequality (13) holds for all x ∈ U(x0), x ≠ x0.

    Obviously, if the number A is the limit of f(x) at x0, then A is the limit of the function f(x0 + h) of h at the zero point:

    lim f(x0 + h) = A as h → 0,

    and vice versa.

    Consider a function f given at all points of a neighborhood of a point x0, except perhaps the point x0 itself; let ω = (ω1, …, ωn) be an arbitrary vector of length one (|ω| = 1) and t > 0 a scalar. The points of the form x0 + tω (0 < t) form a ray issuing from x0 in the direction of the vector ω. For each ω we can consider the function

    f(x0 + tω)   (0 < t < δω)

    of the scalar variable t, where δω is a number depending on ω. The limit of this function (of the one variable t) as t → 0, if it exists, is naturally called the limit of f at the point x0 in the direction of the vector ω.

    We write

    lim f(x) = ∞ as x → x0

    if the function f is defined in some neighborhood of x0, except perhaps x0 itself, and if for every N > 0 there is δ > 0 such that |f(x)| > N whenever 0 < |x − x0| < δ.

    One can also speak of the limit of f as x → ∞:

    lim f(x) = A as x → ∞.   (14)

    For example, in the case of a finite number A, equality (14) is to be understood in the sense that for any ε > 0 one can specify N > 0 such that for all points x with |x| > N the function f is defined and the inequality

    |f(x) − A| < ε

    holds.

    Thus, the limit of a function f(x) = f(x1, …, xn) of n variables is defined by analogy with the definition for a function of two variables.

    Thus, we turn to the definition of the limit of a function of several variables.

    A number A is called the limit of the function f(M) as M → M0 if for any number ε > 0 there is always a number δ > 0 such that the inequality |f(M) − A| < ε holds for any point M other than M0 satisfying the condition |MM0| < δ.

    The limit is denoted

    In the case of a function of two variables,

    Limit theorems. If the functions f1(M) and f2(M) each tend to a finite limit as M → M0, then:

    Example 1. Find the limit of the function:

    Solution. We transform the limit as follows:

    Let y = kx; then
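    The substitution y = kx is the standard way to show that a two-variable limit can depend on the path of approach. The original formula did not survive in this copy, so the sketch below uses the classic assumed example f(x, y) = xy/(x² + y²), which is constant along each line y = kx with a value that depends on k, so its limit at the origin does not exist:

    ```python
    # Along y = kx the assumed function f(x, y) = x*y / (x^2 + y^2) equals
    # k / (1 + k^2), which depends on k: the limit at the origin is path-dependent
    # and therefore does not exist.

    def f(x, y):
        return x * y / (x**2 + y**2)

    along = {k: [f(x, k * x) for x in (0.1, 0.01, 0.001)]
             for k in (0.0, 1.0, 2.0)}

    for k, values in along.items():
        print(k, [round(v, 6) for v in values])   # constant along each line
    ```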

    Example 2. Find the limit of a function:

    Solution. We use the first remarkable limit, lim (sin t)/t = 1 as t → 0.

    Then

    Example 3. Find the limit of a function:

    Solution. We use the second remarkable limit, lim (1 + 1/t)^t = e as t → ∞.

    Then

    Continuity of a function of several variables

    By definition, the function f(x, y) is continuous at the point (x0, y0) if it is defined in some neighborhood of that point, including the point (x0, y0) itself, and if the limit of f(x, y) at this point is equal to its value there:

    lim f(x, y) = f(x0, y0) as (x, y) → (x0, y0).   (1)

    The continuity condition (1) can be restated: the function f is continuous at the point (x0, y0) if the function f(x0 + Δx, y0 + Δy) of the variables Δx, Δy is continuous at Δx = Δy = 0.

    One can introduce the increment Δu of the function u = f(x, y) at the point (x, y) corresponding to increments Δx, Δy of the arguments,

    Δu = f(x + Δx, y + Δy) − f(x, y),

    and define continuity of f at (x, y) in this language: the function f is continuous at the point (x, y) if

    Δu → 0 as (Δx, Δy) → (0, 0).   (1″)

    Theorem. The sum, difference, product and quotient of functions f and φ continuous at a point (x0, y0) is a function continuous at this point, provided, in the case of the quotient, that φ(x0, y0) ≠ 0.

    A constant c can be viewed as a function f(x, y) = c of the variables x, y. It is continuous in these variables, because

    |f(x, y) − f(x0, y0)| = |c − c| = 0 → 0.

    The next simplest functions are f(x, y) = x and f(x, y) = y. They, too, can be viewed as functions of (x, y), and they are continuous. For example, the function f(x, y) = x assigns to each point (x, y) the number equal to x; the continuity of this function at an arbitrary point (x, y) can be proved accordingly.

    Continuity of function

    A function of two variables f(x, y), defined at the point (x0, y0) and in some neighborhood of it, is called continuous at the point (x0, y0) if the limit of this function at the point (x0, y0) is equal to the value of the function f(x0, y0) there, i.e. if

    A function that is continuous at every point of a region is called continuous in that region. Continuous functions of two variables have properties similar to those of continuous functions of one variable.

    If at some point (x 0, y 0) the continuity condition is not satisfied, then the function f (x, y) is said to be discontinuous at the point (x 0, y 0).

    Differentiation of a function of two variables

    Partial derivatives of the first order

    Even more important characteristics of the change of the function are the following limits:

    Ratio limit

    is called the first-order partial derivative of the function z = f(x, y) with respect to the argument x (briefly, the partial derivative with respect to x) and is denoted by the symbols ∂z/∂x, z′x or f′x(x, y).

    Similarly, the limit

    is called the partial derivative of the function z = f(x, y) with respect to the argument y and is denoted by the symbols ∂z/∂y, z′y or f′y(x, y).

    Finding partial derivatives is called partial differentiation.

    From the definition of a partial derivative it follows that, when the derivative is taken with respect to one argument, the other argument is held constant. After the differentiation has been performed, both arguments are again regarded as variable. In other words, the partial derivatives ∂z/∂x and ∂z/∂y are themselves functions of the two variables x and y.

    Partial differentials

    The quantity

    is called the principal linear part of the increment Δxf (linear with respect to the increment Δx of the argument x). This quantity is called the partial differential and is denoted by the symbol dxf.

    Similarly

    Total differential of a function of two variables

    By definition, the total differential of a function of two variables, denoted by the symbol d f, is the principal linear part of the total increment of the function:

    The total differential is thus equal to the sum of the partial differentials, and the formula for the total differential can be rewritten as follows:

    df = (∂f/∂x)dx + (∂f/∂y)dy.

    We emphasize that the formula for the total differential is obtained under the assumption that the first-order partial derivatives are continuous in some neighborhood of the point (x, y).

    A function that has a total differential at a point is called differentiable at that point.

    For a function of two variables to be differentiable at a point, it is not enough that it has all the partial derivatives at this point. It is necessary that all these partial derivatives be continuous in some neighborhood of the point under consideration.
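    The defining property of differentiability can be illustrated numerically: the difference between the total increment Δz and the total differential dz is small relative to ρ = √(dx² + dy²), so the ratio |Δz − dz|/ρ shrinks as ρ → 0. The function below is an assumed smooth example:

    ```python
    import math

    # For a smooth assumed function, Δz and dz = z_x dx + z_y dy differ by a
    # quantity that is o(ρ): the ratio |Δz − dz| / ρ tends to 0 as ρ → 0.

    def z(x, y):
        return math.exp(x) * math.sin(y)

    x0, y0 = 0.5, 1.0
    zx = math.exp(x0) * math.sin(y0)   # ∂z/∂x
    zy = math.exp(x0) * math.cos(y0)   # ∂z/∂y

    ratios = []
    for scale in (1e-1, 1e-2, 1e-3):
        dx = dy = scale
        dz = zx * dx + zy * dy
        delta = z(x0 + dx, y0 + dy) - z(x0, y0)
        rho = math.hypot(dx, dy)
        ratios.append(abs(delta - dz) / rho)

    print(ratios[0] > ratios[1] > ratios[2])   # the relative error shrinks with ρ
    ```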

    Higher-order derivatives and differentials

    Consider a function of two variables z = f(x, y). As already noted above, the first-order partial derivatives

    are themselves functions of two variables, and they can in turn be differentiated with respect to x and with respect to y. This yields the derivatives of higher (second) order:

    There are four second-order partial derivatives in all. We state without proof: if the mixed second-order partial derivatives are continuous, then they are equal:

    ∂²z/∂x∂y = ∂²z/∂y∂x.
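    The equality of the mixed partials can be checked numerically with a symmetric second difference (the function below is an assumed smooth example):

    ```python
    # Numerical sketch of the equality of mixed second-order partials
    # (Schwarz's theorem) for a smooth assumed function.

    def f(x, y):
        return x**3 * y**2 + x * y

    def mixed_xy(x, y, h=1e-4):
        # Central second difference approximating ∂²f/∂x∂y; it is symmetric
        # in x and y, mirroring the equality of the mixed partials.
        return (f(x + h, y + h) - f(x + h, y - h)
                - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

    # Analytically f_xy = f_yx = 6x²y + 1.
    print(round(mixed_xy(1.0, 2.0), 3))   # close to 6*1*2 + 1 = 13
    ```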

    Consider now the first-order differential

    It is a function of four arguments: x, y, dx, dy, which can take different values.

    The second-order differential is calculated as the differential of the first-order differential, under the assumption that the differentials of the independent variables dx and dy are constant:
