The law of distribution of a discrete random variable. Examples of problem solving

Definition 3. X has a normal (Gaussian) distribution law if its probability density has the form

f(x) = 1/(σ√(2π)) · e^(−(x − m)²/(2σ²)),

where m = M(X), σ² = D(X), σ > 0.

The graph of the normal distribution density is called the normal, or Gaussian, curve (Fig. 6.7).

The normal curve is symmetric about the straight line x = m and has a maximum at the point x = m equal to 1/(σ√(2π)).

The distribution function of a random variable X distributed according to the normal law is expressed through the Laplace function Ф(x) by the formula

F(x) = 1/2 + Ф((x − m)/σ),

where Ф(x) = (1/√(2π)) ∫_0^x e^(−t²/2) dt is the Laplace function.

Comment. The function Ф(x) is odd (Ф(−x) = −Ф(x)); in addition, for x > 5 one can take Ф(x) ≈ 1/2.

A table of values of the function Ф(x) is given in the appendix (Table P 2.2).

The graph of the distribution function F(x) is shown in Fig. 6.8.

The probability that the random variable X takes a value belonging to the interval (a; b) is calculated by the formula

P(a < X < b) = Ф((b − m)/σ) − Ф((a − m)/σ).

The probability that the absolute deviation of the random variable from its mathematical expectation is less than a positive number δ is calculated by the formula

P(|X − m| < δ) = 2Ф(δ/σ).

In particular, for m = 0 the following equality holds:

P(|X| < δ) = 2Ф(δ/σ).

"Three Sigma Rule"

If the random variable X has a normal distribution law with parameters m and σ, then it is practically certain that its values lie in the interval (m − 3σ; m + 3σ), because P(|X − m| < 3σ) = 2Ф(3) ≈ 0.9973.
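To make these formulas concrete, here is a minimal computational sketch using only Python's standard library; expressing the Laplace function through math.erf is my convenience choice, not part of the text above:

```python
import math

def laplace(x):
    # Laplace function Ф(x) = (1/sqrt(2*pi)) * integral from 0 to x of exp(-t**2/2) dt,
    # expressed through the error function: Ф(x) = 0.5 * erf(x / sqrt(2))
    return 0.5 * math.erf(x / math.sqrt(2))

def prob_interval(a, b, m, sigma):
    # P(a < X < b) = Ф((b - m)/sigma) - Ф((a - m)/sigma)
    return laplace((b - m) / sigma) - laplace((a - m) / sigma)

def prob_deviation(delta, m, sigma):
    # P(|X - m| < delta) = 2 * Ф(delta / sigma)
    return 2 * laplace(delta / sigma)

# "three sigma rule": the probability is about 0.9973 for any m and sigma
print(round(prob_deviation(3 * 2.0, 0.0, 2.0), 4))  # 0.9973
```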

Problem 6.3. The random variable X is normally distributed with mean 32 and variance 16. Find: a) the probability density f(x); b) the probability that X will take a value from the interval (28; 38).

Solution: by the condition m = 32, σ² = 16, hence σ = 4; then

a) f(x) = 1/(4√(2π)) · e^(−(x − 32)²/32).

b) Let's use the formula

P(a < X < b) = Ф((b − m)/σ) − Ф((a − m)/σ).

Substituting a = 28, b = 38, m = 32, σ = 4, we get

P(28 < X < 38) = Ф(1.5) − Ф(−1) = Ф(1.5) + Ф(1).

From the table of values of the function Ф(x) we find Ф(1.5) = 0.4332, Ф(1) = 0.3413.

So the desired probability is

P(28 < X < 38) = 0.4332 + 0.3413 = 0.7745.
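For readers who prefer to check such answers numerically, here is a small sketch using scipy.stats; the library call is standard, but using it is my addition, not part of the original solution:

```python
from scipy.stats import norm

# Problem 6.3: X ~ N(m = 32, sigma = 4); compute P(28 < X < 38)
p = norm.cdf(38, loc=32, scale=4) - norm.cdf(28, loc=32, scale=4)
print(round(p, 4))  # ≈ 0.7745, matching Ф(1.5) + Ф(1)
```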

Tasks

6.1. The random variable X is uniformly distributed on the interval (−3; 5). Find:

a) distribution density f(x);

b) the distribution function F(x);

c) numerical characteristics;

d) the probability P(4 < X < 6).

6.2. The random variable X is uniformly distributed over a segment. Find:

a) distribution density f(x);

b) distribution function F(x);

c) numerical characteristics;

d) the probability P(3 ≤ X ≤ 6).

6.3. An automatic traffic light is installed on a highway: the green light is on for 2 minutes, yellow for 3 seconds, red for 30 seconds, then green again, and so on. A car drives along the highway at a random moment of time. Find the probability that the car passes the traffic light without stopping.


6.4. Subway trains run regularly at intervals of 2 minutes. A passenger enters the platform at a random moment of time. What is the probability that the passenger will have to wait more than 50 seconds for a train? Find the mathematical expectation of the random variable X, the waiting time for the train.

6.5. Find the variance and standard deviation of the exponential distribution given by the distribution function:

6.6. A continuous random variable X is given by the probability density:

a) Name the law of distribution of the considered random variable.

b) Find the distribution function F(x) and numerical characteristics of the random variable X.

6.7. The random variable X is distributed according to the exponential law and is given by the probability density:

Find the probability that as a result of a trial X will take a value from the interval (2.5; 5).

6.8. A continuous random variable X is distributed according to the exponential law and is given by the distribution function:

Find the probability that as a result of a trial X will take a value from the interval [2; 5].

6.9. The mathematical expectation and standard deviation of a normally distributed random variable are 8 and 2, respectively. Find:

a) the probability density f(x);

b) the probability that as a result of a trial X will take a value from the interval (10; 14).

6.10. The random variable X is normally distributed with mean 3.5 and variance 0.04. Find:

a) the probability density f(x);

b) the probability that as a result of a trial X will take a value from the interval (3.1; 3.7).

6.11. The random variable X is normally distributed with M(X) = 0 and D(X) = 1. Which of the events, |X| ≤ 0.6 or |X| ≥ 0.6, has the higher probability?

6.12. The random variable X is normally distributed with M(X) = 0 and D(X) = 1. From which interval, (−0.5; −0.1) or (1; 2), is it more likely to take a value in a single trial?

6.13. The current share price can be modeled by the normal distribution with M(X) = 10 monetary units and σ(X) = 0.3 monetary units. Find:

a) the probability that the current share price is between 9.8 and 10.4 monetary units;

b) using the "three sigma rule", the limits within which the current share price will lie.

6.14. A substance is weighed without systematic errors. Random weighing errors follow the normal law with standard deviation σ = 5 g. Find the probability that in four independent weighings the error of three of them will not exceed 3 g in absolute value.

6.15. The random variable X is normally distributed with M(X) = 12.6. The probability that the random variable falls in the interval (11.4; 13.8) is 0.6826. Find the standard deviation σ.

6.16. The random variable X is normally distributed with M(X) = 12 and D(X) = 36. Find the interval into which the random variable X will fall as a result of a trial with probability 0.9973.

6.17. A part produced by an automatic machine is considered defective if the deviation X of its controlled parameter from the nominal value exceeds 2 units of measurement in absolute value. It is assumed that the random variable X is normally distributed with M(X) = 0 and σ(X) = 0.7. What percentage of defective parts does the machine produce?

6.18. The parameter X of a part is normally distributed with mathematical expectation equal to the nominal value of 2 and standard deviation 0.014. Find the probability that the deviation of X from the nominal value will not exceed 1% of the nominal value in absolute value.

Answers

6.1. c) M(X) = 1, D(X) = 16/3, σ(X) = 4/√3; d) 1/8.



6.2. c) M(X) = 4.5, D(X) = 25/12, σ(X) = 5/(2√3); d) 3/5.


6.3. 40/51.

6.4. 7/12, M(X)=1.


6.5. D(X) = 1/64, σ(X) = 1/8.

6.6. M(X) = 1, D(X) = 2, σ(X) = 1.


6.7. P(2.5 < X < 5) = e⁻¹ − e⁻² ≈ 0.2325. 6.8. P(2 ≤ X ≤ 5) = 0.252.


6.9. b) P(10 < X < 14) ≈ 0.1574.

6.10. b) P(3.1 ≤ X ≤ 3.7) ≈ 0.8185.


6.11. |X| ≥ 0.6.

6.12. (−0.5; −0.1).


6.13. a) P(9.8 ≤ X ≤ 10.4) ≈ 0.6562; b) (9.1; 10.9).

6.14. 0.111.


6.15. σ = 1.2.

6.16. (-6; 30).

6.17. 0.4%.

- the number of boys among 10 newborns.

It is quite clear that this number is not known in advance, and among the next ten children born there may be:

either 0, 1, 2, …, 9 or 10 boys - one and only one of the listed options.

And, in order to keep in shape, a little physical education:

- long jump distance (in some units).

Even the master of sports is not able to predict it :)

However, what are your hypotheses?

2) A continuous random variable takes all numerical values from some finite or infinite interval.

Note: the abbreviations DSV (discrete) and NSV (continuous random variable) are popular in the educational literature.

First, let's analyze a discrete random variable, then - continuous.

Distribution law of a discrete random variable

is the correspondence between the possible values of this variable and their probabilities. Most often the law is written as a table:

The term distribution series is also quite common, but in some situations it sounds ambiguous, so I will stick with the "law".

And now a very important point: since the random variable will necessarily take one of the values x₁, x₂, …, xₙ, the corresponding events form a complete group, and the sum of the probabilities of their occurrence is equal to one:

p₁ + p₂ + … + pₙ = 1,

or, written in compact form: Σpᵢ = 1.

So, for example, the law of the distribution of probabilities of points on a die has the following form:

No comment.

You may be under the impression that a discrete random variable can only take on "good" integer values. Let's dispel the illusion - they can be anything:

Example 1

Some game has the following payoff distribution law:

…probably you have been dreaming about such tasks for a long time :) Let me tell you a secret - me too. Especially after finishing work on field theory.

Solution: since the random variable can take only one of three values, the corresponding events form a complete group, which means that the sum of their probabilities is equal to one:

We "expose the partisan", i.e. find the unknown probability:

– thus, the probability of winning the corresponding number of conventional units is 0.4.

Check: – which is what we needed to make sure of.

Answer:

It is not uncommon for the distribution law to have to be compiled independently. For this, one uses the classical definition of probability, the multiplication/addition theorems for probabilities of events, and other tools of probability theory:

Example 2

There are 50 lottery tickets in the box, 12 of which are winning, and 2 of them win 1000 rubles each, and the rest - 100 rubles each. Draw up a law of distribution of a random variable - the size of the winnings, if one ticket is randomly drawn from the box.

Solution: as you have noticed, it is customary to arrange the values of a random variable in ascending order. Therefore we start with the smallest winnings, namely 0 rubles (a non-winning ticket).

In total there are 50 − 12 = 38 such tickets, so, according to the classical definition,
P(X = 0) = 38/50 = 0.76 is the probability that a randomly drawn ticket will not win.

The remaining cases are simple. The probabilities of winning 100 and 1000 rubles are:
P(X = 100) = 10/50 = 0.2, P(X = 1000) = 2/50 = 0.04.

Check: 0.76 + 0.2 + 0.04 = 1 - and this is a particularly pleasant moment of such problems!

Answer: the required payoff distribution law:
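A small sketch of how this law can be represented and checked programmatically; the dictionary layout is an illustrative choice, not something prescribed by the problem:

```python
# distribution law of the winnings X from Example 2
law = {0: 38/50, 100: 10/50, 1000: 2/50}

# the probabilities must form a complete group
assert abs(sum(law.values()) - 1.0) < 1e-12

# a preview of the numerical characteristic discussed below: the mean winnings per ticket
print(sum(x * p for x, p in law.items()))  # 60.0 rubles
```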

The following problem is for you to solve on your own:

Example 3

The probability that the shooter hits the target is . Compose the distribution law of the random variable, the number of hits after 2 shots.

... I knew that you had missed it :) We recall the multiplication and addition theorems. The solution and answer are at the end of the lesson.

The distribution law completely describes a random variable, but in practice it is often useful (and sometimes even more useful) to know only some of its numerical characteristics.

Mathematical expectation of a discrete random variable

In simple terms, this is the average expected value over many repeated trials. Let the random variable take the values x₁, x₂, …, xₙ with probabilities p₁, p₂, …, pₙ respectively. Then the mathematical expectation of this random variable is equal to the sum of the products of all its values and the corresponding probabilities:

M(X) = x₁p₁ + x₂p₂ + … + xₙpₙ,

or, in compact form: M(X) = Σ xᵢpᵢ.

Let's calculate, for example, the mathematical expectation of the random variable, the number of points rolled on a die:

M(X) = 1·(1/6) + 2·(1/6) + 3·(1/6) + 4·(1/6) + 5·(1/6) + 6·(1/6) = 3.5
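The same computation in code, as a minimal sketch of the general formula M(X) = Σ xᵢpᵢ:

```python
values = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6                      # fair die: all faces equally likely

m = sum(x * p for x, p in zip(values, probs))
print(m)  # 3.5 - the expected number of points
```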

Now let's recall our hypothetical game:

The question arises: is it even profitable to play this game? ...does anyone have an impression? You can't say "offhand"! But this question is easily answered by calculating the mathematical expectation, which is essentially the weighted average of the winnings by their probabilities:

Thus, the mathematical expectation of this game is negative, i.e. the game is losing.

Don't trust impressions - trust numbers!

Yes, here you can win 10 or even 20-30 times in a row, but in the long run we will inevitably be ruined. And I would not advise you to play such games :) Well, maybe only for fun.

From all of the above, it follows that the mathematical expectation is NOT a RANDOM value.

Creative task for independent research:

Example 4

Mr. X plays European roulette according to the following system: he constantly bets 100 rubles on "red". Compose the distribution law of the random variable, his winnings. Calculate the mathematical expectation of the winnings and round it to kopecks. How much, on average, does the player lose for every hundred rubles bet?

Reference: European roulette contains 18 red, 18 black and 1 green sector ("zero"). If "red" comes up, the player is paid double his bet; otherwise the bet goes to the casino's income.

There are many other roulette systems, for which you can create your own probability tables. But this is the case when we do not need any distribution laws or tables, because it has been established for certain that the player's mathematical expectation will be exactly the same. Only the dispersion changes from system to system.
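If you want to compare your own solution to Example 4 with a direct computation, the following sketch evaluates the expectation from the rules stated in the reference; the exact-fraction arithmetic is simply a stylistic choice:

```python
from fractions import Fraction

p_red = Fraction(18, 37)      # 18 red sectors out of 18 + 18 + 1 = 37
p_not_red = Fraction(19, 37)  # 18 black sectors plus "zero"

# net winnings per spin with a 100-ruble bet on red
m = 100 * p_red + (-100) * p_not_red
print(float(m))  # ≈ -2.70 rubles on average per hundred bet
```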

1.2.4. Random variables and their distributions

Distributions of random variables and distribution functions. The distribution of a numerical random variable is a function that uniquely determines the probability that a random variable takes a given value or belongs to some given interval.

The first case is when the random variable takes a finite number of values. Then the distribution is given by the function P(X = x), which assigns to each possible value x of the random variable X the probability that X = x.

The second case is when the random variable takes infinitely many values. This is possible only when the probability space on which the random variable is defined consists of an infinite number of elementary events. Then the distribution is given by the set of probabilities P(a < X < b) for all pairs of numbers a, b such that a < b. The distribution can be specified using the so-called distribution function F(x) = P(X < x), which defines for every real x the probability that the random variable X takes values less than x. It is clear that

P(a < X < b) = F(b) − F(a).

This relationship shows that just as the distribution can be calculated from the distribution function, so, conversely, the distribution function can be calculated from the distribution.

Distribution functions used in probabilistic-statistical decision-making methods and other applied research are either discrete or continuous, or combinations thereof.

Discrete distribution functions correspond to discrete random variables that take a finite number of values ​​or values ​​from a set whose elements can be renumbered by natural numbers (such sets are called countable in mathematics). Their graph looks like a step ladder (Fig. 1).

Example 1. The number X of defective items in a batch takes the value 0 with probability 0.3, the value 1 with probability 0.4, the value 2 with probability 0.2 and the value 3 with probability 0.1. The graph of the distribution function of the random variable X is shown in Fig. 1.

Fig.1. Graph of the distribution function of the number of defective products.

Continuous distribution functions have no jumps. They increase monotonically as the argument increases, from 0 as x → −∞ to 1 as x → +∞. Random variables with continuous distribution functions are called continuous.

Continuous distribution functions used in probabilistic-statistical decision-making methods have derivatives. The first derivative f(x) of the distribution function F(x) is called the probability density,

f(x) = dF(x)/dx.

The distribution function can be recovered from the probability density:

F(x) = ∫_{−∞}^{x} f(t) dt.

For any distribution function F(−∞) = 0 and F(+∞) = 1, and for any probability density

∫_{−∞}^{+∞} f(x) dx = 1.

The listed properties of distribution functions are constantly used in probabilistic-statistical decision-making methods. In particular, the last equality implies a specific form of the constants in the formulas for the probability densities considered below.

Example 2. The following distribution function is often used:

F(x) = 0 for x ≤ a, F(x) = (x − a)/(b − a) for a < x ≤ b, F(x) = 1 for x > b, (1)

where a and b are some numbers, a < b. Let's find the probability density of this distribution function:

f(x) = 1/(b − a) for a < x < b, and f(x) = 0 otherwise

(at the points x = a and x = b the derivative of the function F(x) does not exist).

A random variable with distribution function (1) is said to be "uniformly distributed on the segment [a; b]".
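A minimal sketch of the distribution function (1) and its density in code; the specific interval (−3; 5) is borrowed from Task 6.1 above purely for illustration:

```python
def uniform_cdf(x, a, b):
    # distribution function (1) of the uniform distribution on [a; b]
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def uniform_pdf(x, a, b):
    # density: 1/(b - a) inside (a; b), zero outside
    return 1.0 / (b - a) if a < x < b else 0.0

# e.g. for X uniform on (-3; 5): P(4 < X < 6) = 1 - F(4) = 1/8
print(1 - uniform_cdf(4, -3, 5))  # 0.125
```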

Mixed distribution functions occur, in particular, when observations stop at some point. For example, when analyzing statistical data obtained using reliability test plans that provide for the termination of tests after a certain period of time. Or when analyzing data on technical products that required warranty repairs.

Example 3. Let the service life of an electric light bulb be a random variable with distribution function F(t), and let the test be carried out until the bulb fails, if this occurs within less than 100 hours from the start of the test, or until the moment t₀ = 100 hours. Let G(t) be the distribution function of the time of failure-free operation of the bulb in this test. Then

G(t) = F(t) for t < t₀ and G(t) = 1 for t ≥ t₀.

The function G(t) has a jump at the point t₀, since the corresponding random variable takes the value t₀ with probability 1 − F(t₀) > 0.

Characteristics of random variables. In probabilistic-statistical decision-making methods, a number of characteristics of random variables are used, expressed through distribution functions and probability density.

When describing income differentiation, when finding confidence limits for the parameters of distributions of random variables, and in many other cases, the concept of the "quantile of order p" is used, where 0 < p < 1 (denoted x_p). The quantile of order p is the value of the random variable for which the distribution function takes the value p or there is a "jump" from a value less than p to a value greater than p (Fig. 2). It may happen that this condition is satisfied for all values of x belonging to some interval (i.e., the distribution function is constant on this interval and equals p). Then each such value is called a "quantile of order p". For continuous distribution functions there is, as a rule, a unique quantile x_p of order p (Fig. 2), and

F(x p) = p. (2)

Fig. 2. Definition of the quantile x_p of order p.

Example 4. Let's find the quantile x_p of order p for the distribution function F(x) from (1).

For 0 < p < 1 the quantile x_p is found from the equation F(x_p) = p, i.e. (x_p − a)/(b − a) = p,

i.e. x_p = a + p(b − a) = a(1 − p) + bp. For p = 0, any x < a is a quantile of order p = 0; a quantile of order p = 1 is any number x > b.
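The quantile formula just derived is easy to turn into a small helper; a sketch in which the parameter names are mine:

```python
def uniform_quantile(p, a, b):
    # solution of F(x_p) = p for the uniform distribution (1): x_p = a + p*(b - a)
    return a + p * (b - a)

print(uniform_quantile(0.5, 2, 7))   # the median of a uniform distribution on [2; 7] is 4.5
```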

For discrete distributions, as a rule, there is no x_p satisfying equation (2). More precisely, if the distribution of a random variable is given in Table 1, where x₁ < x₂ < … < x_k, then equality (2), considered as an equation with respect to x_p, has solutions only for k values of p, namely

p = p₁,

p = p₁ + p₂,

p = p₁ + p₂ + p₃,

…

p = p₁ + p₂ + … + p_m, 3 < m < k,

…

p = p₁ + p₂ + … + p_k.     (3)

Table 1.

Distribution of a discrete random variable

For the listed k values of p the solution x_p of equation (2) is not unique; namely,

F(x) = p₁ + p₂ + … + p_m

for all x such that x_m < x ≤ x_{m+1}. That is, x_p is any number from the interval (x_m; x_{m+1}]. For all other p from the interval (0; 1) not included in list (3), there is a "jump" from a value less than p to a value greater than p. Namely, if

p₁ + p₂ + … + p_m < p < p₁ + p₂ + … + p_{m+1},

then x_p = x_{m+1}.
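The "jump" rule described above can be implemented directly. The sketch below returns the smallest value at which the cumulative probability reaches p, which is one admissible choice at the non-unique points; the sample distribution is the one from Example 1 of this subsection:

```python
def discrete_quantile(p, values, probs):
    # walk through the cumulative sums p1, p1 + p2, ... and return the first
    # value at which the cumulative probability reaches p
    cumulative = 0.0
    for x, q in zip(values, probs):
        cumulative += q
        if cumulative >= p:
            return x
    return values[-1]

# number of defective items from Example 1: values 0..3 with probabilities 0.3, 0.4, 0.2, 0.1
print(discrete_quantile(0.5, [0, 1, 2, 3], [0.3, 0.4, 0.2, 0.1]))  # 1 (the median)
```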

The considered property of discrete distributions creates significant difficulties in tabulating and using such distributions, since it turns out to be impossible to accurately maintain the typical numerical values ​​of the distribution characteristics. In particular, this is true for the critical values ​​and significance levels of nonparametric statistical tests (see below), since the distributions of the statistics of these tests are discrete.

The quantile of order p = ½ is of particular importance in statistics. It is called the median (of the random variable X or of its distribution function F(x)) and is denoted Me(X). In geometry there is the concept of a "median", a straight line passing through a vertex of a triangle and dividing the opposite side in half. In mathematical statistics, the median bisects not the side of a triangle but the distribution of a random variable: the equality F(x_0.5) = 0.5 means that the probability of falling to the left of x_0.5 and the probability of falling to the right of x_0.5 (or exactly at x_0.5) are equal to each other and equal to ½, i.e.

P(X < x_0.5) = P(X > x_0.5) = ½.

The median indicates the "center" of the distribution. From the point of view of one of the modern concepts - the theory of stable statistical procedures - the median is a better characteristic of a random variable than the mathematical expectation. When processing measurement results in an ordinal scale (see the chapter on measurement theory), the median can be used, but the mathematical expectation cannot.

Such a characteristic of a random variable as a mode has a clear meaning - the value (or values) of a random variable corresponding to a local maximum of the probability density for a continuous random variable or a local maximum of the probability for a discrete random variable.

If x₀ is the mode of a random variable with density f(x), then, as is known from differential calculus, f′(x₀) = 0.

A random variable can have many modes. Thus, for the uniform distribution (1), every point x such that a < x < b is a mode. However, this is an exception. Most random variables used in probabilistic-statistical decision-making methods and other applied research have one mode. Random variables, densities and distributions that have one mode are called unimodal.

The mathematical expectation for discrete random variables with a finite number of values is considered in the chapter "Events and Probabilities". For a continuous random variable X the mathematical expectation M(X) satisfies the equality

M(X) = ∫_{−∞}^{+∞} x f(x) dx,

which is an analogue of formula (5) from statement 2 of the chapter "Events and Probabilities".

Example 5. The mathematical expectation of a uniformly distributed random variable X equals

M(X) = ∫_a^b x/(b − a) dx = (a + b)/2.

For the random variables considered in this chapter, all those properties of mathematical expectations and variances that were considered earlier for discrete random variables with a finite number of values ​​are true. However, we do not provide proofs of these properties, since they require deepening into mathematical subtleties, which is not necessary for understanding and qualified application of probabilistic-statistical decision-making methods.

Comment. In this textbook, mathematical subtleties are deliberately avoided, connected, in particular, with the concepts of measurable sets and measurable functions, the σ-algebra of events, and so on. Those wishing to master these concepts should turn to the specialized literature, in particular to the encyclopedia.

Each of the three characteristics - mathematical expectation, median, mode - describes the "center" of the probability distribution. The concept of "center" can be defined in different ways - hence the three different characteristics. However, for an important class of distributions - symmetric unimodal - all three characteristics coincide.

A distribution density f(x) is the density of a symmetric distribution if there is a number x₀ such that

f(x₀ − x) = f(x₀ + x) for any x. (3)

Equality (3) means that the graph of the function y = f(x) is symmetric about the vertical line passing through the center of symmetry x = x₀. From (3) it follows that the symmetric distribution function satisfies the relation

F(x₀ − x) + F(x₀ + x) = 1. (4)

For a symmetrical distribution with one mode, the mean, median, and mode are the same and equal x 0.

The most important case is symmetry with respect to 0, i.e. x₀ = 0. Then (3) and (4) become the equalities

f(−x) = f(x) (5)

and

F(−x) = 1 − F(x) (6)

respectively. The above relations show that there is no need to tabulate symmetric distributions for all x; it suffices to have tables for x ≥ x₀ = 0.

We note one more property of symmetric distributions, which is constantly used in probabilistic-statistical decision-making methods and other applied research. For a continuous distribution function

P(|X| < a) = P(-a < X < a) = F(a) – F(-a),

where F is the distribution function of the random variable X. If the distribution function F is symmetric with respect to 0, i.e. formula (6) is valid for it, then

P(|X| < a) = 2F(a) – 1.

Another formulation of the statement under consideration is often used: if the distribution function F is symmetric with respect to 0 and a > 0, then P(|X| > a) = 2(1 − F(a)).

If x_p and x_{1−p} are quantiles of orders p and 1 − p respectively (see (2)) of a distribution function symmetric with respect to 0, then it follows from (6) that x_{1−p} = −x_p.

From the characteristics of position, the mathematical expectation, the median and the mode, let us move on to the characteristics of the spread of the random variable X: the variance D(X), the standard deviation σ and the coefficient of variation v. The definition and properties of the variance for discrete random variables were considered in the previous chapter. For continuous random variables

D(X) = M[(X − M(X))²] = ∫_{−∞}^{+∞} (x − M(X))² f(x) dx.

The standard deviation is the non-negative square root of the variance: σ = √D(X).

The coefficient of variation is the ratio of the standard deviation to the mathematical expectation: v = σ/M(X).

The coefficient of variation is applied when M(X)> 0. It measures the spread in relative units, while the standard deviation is in absolute units.

Example 6. For a uniformly distributed random variable X let us find the variance, the standard deviation and the coefficient of variation. The variance is:

D(X) = ∫_a^b (x − (a + b)/2)² · 1/(b − a) dx.

The substitution y = x − (a + b)/2 makes it possible to write:

D(X) = ∫_{−c}^{c} y²/(b − a) dy = (b − a)²/12,

where c = (b − a)/2. Therefore, the standard deviation equals σ = (b − a)/(2√3), and the coefficient of variation is v = (b − a)/(√3·(a + b)).
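A short sketch collecting the characteristics of the uniform distribution derived in Example 6; the numbers a = 2, b = 8 are arbitrary illustrative values:

```python
import math

def uniform_characteristics(a, b):
    m = (a + b) / 2                     # mathematical expectation
    d = (b - a) ** 2 / 12               # variance
    sigma = math.sqrt(d)                # standard deviation, (b - a)/(2*sqrt(3))
    v = sigma / m if m > 0 else None    # coefficient of variation (defined for M(X) > 0)
    return m, d, sigma, v

print(uniform_characteristics(2, 8))    # (5.0, 3.0, 1.732..., 0.346...)
```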

For each random variable X three more quantities are determined: the centered variable Y, the normalized variable V and the reduced variable U. The centered random variable Y is the difference between the given random variable X and its mathematical expectation M(X), i.e. Y = X − M(X). The mathematical expectation of the centered random variable Y is 0, and its variance is the variance of the original random variable: M(Y) = 0, D(Y) = D(X). The distribution function F_Y(x) of the centered random variable Y is related to the distribution function F(x) of the original random variable X by the relation:

F Y(x) = F(x + M(X)).

For the densities of these random variables, the equality

f Y(x) = f(x + M(X)).

The normalized random variable V is the ratio of the given random variable X to its standard deviation σ, i.e. V = X/σ. The mathematical expectation and the variance of the normalized random variable V are expressed through the characteristics of X as follows:

M(V) = M(X)/σ = 1/v, D(V) = 1,

where v is the coefficient of variation of the original random variable X. For the distribution function F_V(x) and the density f_V(x) of the normalized random variable V we have:

F_V(x) = F(σx), f_V(x) = σ·f(σx),

where F(x) is the distribution function of the original random variable X and f(x) is its probability density.

The reduced random variable U is the centered and normalized random variable:

U = (X − M(X))/σ.

For the reduced random variable

M(U) = 0, D(U) = 1. (7)

Normalized, centered and reduced random variables are constantly used both in theoretical research and in algorithms, software products, regulatory, technical and instructive-methodological documentation. In particular, the equalities M(U) = 0, D(U) = 1 make it possible to simplify the substantiation of methods, the formulation of theorems and computational formulas.
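A quick empirical sketch of the properties M(U) = 0, D(U) = 1 of the reduced variable, using a hypothetical X uniform on (2; 8) as the original random variable:

```python
import random
import statistics

m = 5.0                                # M(X) for the uniform distribution on (2; 8)
sigma = (8 - 2) / (2 * 3 ** 0.5)       # sigma = (b - a)/(2*sqrt(3))

xs = [random.uniform(2, 8) for _ in range(100_000)]
us = [(x - m) / sigma for x in xs]     # reduced values U = (X - M(X))/sigma

print(round(statistics.mean(us), 3), round(statistics.variance(us), 3))  # ≈ 0.0 and ≈ 1.0
```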

Transformations of random variables of a more general form are also used. Thus, if Y = aX + b, where a and b are some numbers, then

M(Y) = a·M(X) + b, D(Y) = a²·D(X). (8)

Example 7. If a = 1/σ and b = −M(X)/σ, then Y is the reduced random variable U, and formulas (8) turn into formulas (7).

With each random variable X one can associate a whole set of random variables Y given by the formula Y = aX + b for various a > 0 and b. This set is called the scale-shift family generated by the random variable X. The distribution functions F_Y(x) form a scale-shift family of distributions generated by the distribution function F(x). Instead of Y = aX + b, the notation (9) is frequently used.

The number c is called the shift parameter and the number d the scale parameter. Formula (9) shows that X, the result of measuring a certain quantity, turns into Y, the result of measuring the same quantity, if the origin of measurement is moved to the point c and a new unit of measurement is then used, d times larger than the old one.

For the scale-shift family (9), the distribution of X is called standard. In probabilistic-statistical decision-making methods and other applied research, the standard normal distribution, the standard Weibull-Gnedenko distribution, the standard gamma distribution, etc. are used (see below).

Other transformations of random variables are also used. For example, for a positive random variable X one considers Y = lg X, where lg X is the decimal logarithm of the number X. The chain of equalities

F_Y(x) = P(lg X < x) = P(X < 10^x) = F(10^x)

relates the distribution functions of X and Y.

When processing data, one uses such characteristics of the random variable X as the moments of order q, i.e. the mathematical expectations of the random variable X^q, q = 1, 2, … Thus, the mathematical expectation itself is the moment of order 1. For a discrete random variable the moment of order q can be calculated as

M(X^q) = Σ xᵢ^q·pᵢ.

For a continuous random variable

M(X^q) = ∫_{−∞}^{+∞} x^q f(x) dx.

The moments of order q are also called the initial moments of order q, in contrast to the related characteristics, the central moments of order q, given by the formula

μ_q = M[(X − M(X))^q].

Thus, dispersion is a central moment of order 2.

Normal distribution and the central limit theorem. In probabilistic-statistical decision-making methods, we often talk about a normal distribution. Sometimes they try to use it to model the distribution of the initial data (these attempts are not always justified - see below). More importantly, many data processing methods are based on the fact that the calculated values ​​have distributions that are close to normal.

Let X₁, X₂, …, Xₙ, … be independent identically distributed random variables with mathematical expectations M(Xᵢ) = m and variances D(Xᵢ) = σ², i = 1, 2, …, n, … As follows from the results of the previous chapter,

M(X₁ + X₂ + … + Xₙ) = nm, D(X₁ + X₂ + … + Xₙ) = nσ².

Consider the reduced random variable Uₙ for the sum X₁ + X₂ + … + Xₙ, namely

Uₙ = (X₁ + X₂ + … + Xₙ − nm)/(σ√n).

As follows from formulas (7), M(Uₙ) = 0, D(Uₙ) = 1.

Central limit theorem (for identically distributed terms). Let X₁, X₂, …, Xₙ, … be independent identically distributed random variables with mathematical expectations M(Xᵢ) = m and variances D(Xᵢ) = σ², i = 1, 2, …, n, … Then for any x there exists the limit

lim (n→∞) P(Uₙ < x) = Φ(x),

where Φ(x) is the standard normal distribution function.

More about the function Φ(x) below (it is read "phi of x", since Φ is the Greek capital letter "phi").
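A simulation sketch of the theorem: the reduced sum of n independent uniform(0, 1) terms is compared with Φ(1) ≈ 0.8413; the choice of the uniform distribution and of n = 30 is mine, purely for illustration:

```python
import math
import random

n, trials = 30, 50_000
m, sigma = 0.5, math.sqrt(1 / 12)   # mean and standard deviation of one uniform(0, 1) term

count = 0
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    u_n = (s - n * m) / (sigma * math.sqrt(n))   # reduced sum U_n
    if u_n < 1.0:
        count += 1

print(count / trials)   # close to Φ(1) ≈ 0.8413
```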

The Central Limit Theorem (CLT) takes its name from the fact that it is the central, most frequently used mathematical result of probability theory and mathematical statistics. The history of the CLT takes about 200 years - from 1730, when the English mathematician A. De Moivre (1667-1754) published the first result related to the CLT (see below about the Moivre-Laplace theorem), until the twenties - thirties of the twentieth century, when Finn J.W. Lindeberg, Frenchman Paul Levy (1886-1971), Yugoslav V. Feller (1906-1970), Russian A.Ya. Khinchin (1894-1959) and other scientists obtained necessary and sufficient conditions for the validity of the classical central limit theorem.

The development of the subject did not stop there: researchers studied random variables that do not have a variance, i.e. those for which D(X) = ∞ (Academician B.V. Gnedenko and others), as well as the situation when random variables (more precisely, random elements) of a more complex nature than numbers are summed (Academicians Yu.V. Prokhorov, A.A. Borovkov and their colleagues), etc.

The standard normal distribution function Φ(x) is given by the equality

Φ(x) = ∫_{−∞}^{x} φ(t) dt,

where φ(x) is the density of the standard normal distribution, which has a rather complicated expression:

φ(x) = (1/√(2π)) · e^(−x²/2).

Here π = 3.14159265… is the number known from geometry, equal to the ratio of the circumference of a circle to its diameter, and e = 2.718281828… is the base of natural logarithms (to remember this number, note that 1828 is the year of birth of the writer Leo Tolstoy). As is known from mathematical analysis,

∫_{−∞}^{+∞} φ(x) dx = 1.

When processing the results of observations, the normal distribution function is not calculated according to the above formulas, but is found using special tables or computer programs. The best in Russian “Tables of Mathematical Statistics” were compiled by Corresponding Members of the USSR Academy of Sciences L.N. Bolshev and N.V. Smirnov.

The form of the density of the standard normal distribution follows from the mathematical theory, which we cannot consider here, as well as the proof of the CLT.

For illustration, we present small tables of the distribution function Φ(x) (Table 2) and of its quantiles (Table 3). The function Φ(x) satisfies the symmetry relation Φ(−x) = 1 − Φ(x), which is reflected in Tables 2-3.

Table 2.

Function of the standard normal distribution.

If the random variable X has the distribution function Φ(x), then M(X) = 0, D(X) = 1. This statement is proved in probability theory on the basis of the form of the probability density φ(x). It agrees with the analogous statement for the characteristics of the reduced random variable Uₙ, which is quite natural, since the CLT states that as the number of terms increases without bound, the distribution function of Uₙ tends to the standard normal distribution function Φ(x), for any x.

Table 3

Quantiles of the standard normal distribution.


Let us introduce the concept of a family of normal distributions. By definition, a normal distribution is the distribution of a random variable X for which the distribution of the reduced random variable is Φ(x). As follows from the general properties of scale-shift families of distributions (see above), a normal distribution is the distribution of a random variable

Y = m + σX,

where X is a random variable with distribution Φ(x), and m = M(Y), σ² = D(Y). A normal distribution with shift parameter m and scale parameter σ is usually denoted N(m, σ) (sometimes the notation N(m, σ²) is used).

As follows from (8), the probability density of the normal distribution N(m, σ) is

f(x) = (1/(σ√(2π))) · e^(−(x − m)²/(2σ²)).

Normal distributions form a scale-shift family. Here the scale parameter is d = 1/σ and the shift parameter is c = −m/σ.

For the central moments of the third and fourth order of the normal distribution, the following equalities hold:

μ₃ = 0, μ₄ = 3σ⁴.

These equalities underlie the classical methods of checking that observation results follow a normal distribution. At present, normality is usually recommended to be checked by the Shapiro-Wilk W test. The problem of checking normality is discussed below.

If the random variables X₁ and X₂ have the distribution functions N(m₁, σ₁) and N(m₂, σ₂) respectively, then X₁ + X₂ has the distribution N(m₁ + m₂, √(σ₁² + σ₂²)). Therefore, if the random variables X₁, X₂, …, Xₙ are independent and have the same distribution N(m, σ), then their arithmetic mean

(X₁ + X₂ + … + Xₙ)/n

has the distribution N(m, σ/√n). These properties of the normal distribution are constantly used in various probabilistic-statistical decision-making methods, in particular in the statistical control of technological processes and in statistical acceptance control by a quantitative attribute.
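A sketch of the last property: for hypothetical values m = 12, σ = 6 and n = 9 observations, the arithmetic mean is N(12, 2), and probabilities for it are computed exactly as for any normal variable (scipy.stats is used for convenience):

```python
import math
from scipy.stats import norm

m, sigma, n = 12, 6, 9
sigma_mean = sigma / math.sqrt(n)   # standard deviation of the arithmetic mean: 2.0

# probability that the mean of the 9 observations falls in (10; 14)
p = norm.cdf(14, loc=m, scale=sigma_mean) - norm.cdf(10, loc=m, scale=sigma_mean)
print(round(p, 4))   # ≈ 0.6827, the "one sigma" probability
```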

The normal distribution defines three distributions that are now commonly used in statistical data processing.

The chi-square distribution is the distribution of the random variable

X = X₁² + X₂² + … + Xₙ²,

where the random variables X₁, X₂, …, Xₙ are independent and have the same distribution N(0, 1). In this case the number of terms, i.e. n, is called the "number of degrees of freedom" of the chi-square distribution.

Student's t distribution is the distribution of the random variable

t = U / √(X/n),

where the random variables U and X are independent, U has the standard normal distribution N(0, 1) and X has the chi-square distribution with n degrees of freedom. Here n is called the "number of degrees of freedom" of the Student distribution. This distribution was introduced in 1908 by the English statistician W. Gosset, who worked at a beer factory. Probabilistic-statistical methods were used for making economic and technical decisions at this factory, so its management forbade Gosset to publish scientific articles under his own name. In this way the trade secret, the "know-how" in the form of the probabilistic-statistical methods developed by Gosset, was protected. However, he was able to publish under the pseudonym "Student". The Gosset-Student story shows that already a hundred years ago the great economic efficiency of probabilistic-statistical decision-making methods was obvious to British managers.

The Fisher distribution is the distribution of the random variable

F = (X₁/k₁) / (X₂/k₂),

where the random variables X₁ and X₂ are independent and have chi-square distributions with k₁ and k₂ degrees of freedom respectively. Here the pair (k₁, k₂) is the pair of "numbers of degrees of freedom" of the Fisher distribution: k₁ is the number of degrees of freedom of the numerator and k₂ is the number of degrees of freedom of the denominator. The distribution of the random variable F is named after the great English statistician R. Fisher (1890-1962), who actively used it in his work.

Expressions for the distribution functions of the chi-square, Student and Fisher distributions, their densities and characteristics, as well as tables, can be found in the special literature (see, for example,).
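Instead of printed tables, these three distributions are nowadays usually queried from software; a hedged sketch with scipy.stats and arbitrary degrees of freedom:

```python
from scipy.stats import chi2, t, f

print(chi2.ppf(0.95, df=10))        # quantile of order 0.95, chi-square with 10 degrees of freedom
print(t.ppf(0.95, df=10))           # quantile of order 0.95, Student's t with 10 degrees of freedom
print(f.ppf(0.95, dfn=5, dfd=10))   # quantile of order 0.95, Fisher with (5, 10) degrees of freedom
```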

As already noted, normal distributions are currently often used in probabilistic models in various applied fields. Why is this two-parameter family of distributions so widespread? It is clarified by the following theorem.

Central limit theorem (for differently distributed terms). Let X₁, X₂, …, Xₙ, … be independent random variables with mathematical expectations M(X₁), M(X₂), …, M(Xₙ), … and variances D(X₁), D(X₂), …, D(Xₙ), … respectively. Let

Uₙ = (X₁ + X₂ + … + Xₙ − (M(X₁) + M(X₂) + … + M(Xₙ))) / √(D(X₁) + D(X₂) + … + D(Xₙ)).

Then, provided certain conditions hold that ensure the smallness of the contribution of any of the terms to Uₙ,

lim (n→∞) P(Uₙ < x) = Φ(x)

for any x.

The conditions in question will not be formulated here. They can be found in the specialized literature (see, for example,). "Clarifying the conditions under which the CLT operates is the merit of the outstanding Russian scientists A.A. Markov (1857-1922) and, in particular, A.M. Lyapunov (1857-1918)".

The central limit theorem shows that when the result of a measurement (observation) is formed under the influence of many causes, each of which makes only a small contribution, and the cumulative result is determined additively, i.e. by addition, then the distribution of the measurement (observation) result is close to normal.

It is sometimes believed that for the distribution to be normal it is sufficient that the result of the measurement (observation) X is formed under the influence of many causes, each of which has a small effect. This is not true. What matters is how these causes act. If additively, then X has an approximately normal distribution. If multiplicatively (that is, the effects of the individual causes are multiplied, not added), then the distribution of X is close not to normal but to the so-called log-normal distribution, i.e. it is not X but lg X that has an approximately normal distribution. If there are no grounds to believe that one of these two mechanisms of formation of the final result (or some other well-defined mechanism) is at work, then nothing definite can be said about the distribution of X.

From what has been said, it follows that in a specific applied problem, the normality of the results of measurements (observations), as a rule, cannot be established from general considerations, it should be checked using statistical criteria. Or use non-parametric statistical methods that are not based on assumptions about the membership of the distribution functions of the results of measurements (observations) to one or another parametric family.

Continuous distributions used in probabilistic-statistical decision-making methods. In addition to the scale-shift family of normal distributions, a number of other distribution families are widely used - logarithmically normal, exponential, Weibull-Gnedenko, gamma distributions. Let's take a look at these families.

A random variable X has a log-normal distribution if the random variable Y = lg X has a normal distribution. Then Z = ln X = 2.3026…·Y also has a normal distribution N(a₁, σ₁), where ln X is the natural logarithm of X. The density of the log-normal distribution is:

f(x) = (1/(x·σ₁·√(2π))) · e^(−(ln x − a₁)²/(2σ₁²)) for x > 0, and f(x) = 0 for x ≤ 0.

It follows from the central limit theorem that the product X = X₁·X₂·…·Xₙ of independent positive random variables Xᵢ, i = 1, 2, …, n, can for large n be approximated by a log-normal distribution. In particular, the multiplicative model of the formation of wages or incomes leads to the recommendation to approximate the distributions of wages and incomes by log-normal laws. For Russia this recommendation turned out to be justified: the statistics confirm it.

There are other probabilistic models that lead to the log-normal law. A classical example of such a model was given by A.N. Kolmogorov: the sizes of particles obtained by crushing material in ball mills have a log-normal distribution.

Let's move on to another family of distributions, widely used in various probabilistic-statistical decision-making methods and other applied research, the family of exponential distributions. Let's start with a probabilistic model that leads to such distributions. To do this, consider the "stream of events", i.e. a sequence of events occurring one after the other at some point in time. Examples are: call flow at the telephone exchange; the flow of equipment failures in the technological chain; flow of product failures during product testing; the flow of customer requests to the bank branch; the flow of buyers applying for goods and services, etc. In the theory of event flows, a theorem similar to the central limit theorem is valid, but it does not deal with the summation of random variables, but with the summation of event flows. We consider a total flow composed of a large number of independent flows, none of which has a predominant effect on the total flow. For example, the flow of calls arriving at the telephone exchange is made up of a large number of independent call flows originating from individual subscribers. It is proved that in the case when the characteristics of the flows do not depend on time, the total flow is completely described by one number - the intensity of the flow. For the total flow, consider a random variable X- the length of the time interval between successive events. Its distribution function has the form

F(x) = 1 − e^(−λx) for x ≥ 0, and F(x) = 0 for x < 0. (10)

This distribution is called the exponential distribution because formula (10) involves the exponential function e^(−λx). The quantity 1/λ is a scale parameter. Sometimes a shift parameter c is also introduced: exponential is then the distribution of the random variable X + c, where the distribution of X is given by formula (10).
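A minimal sketch of formula (10); the intensity λ = 0.5 events per minute is a hypothetical value chosen only to show the call:

```python
import math

def exponential_cdf(x, lam):
    # F(x) = 1 - exp(-lam * x) for x >= 0, and 0 for x < 0   (formula (10))
    return 1 - math.exp(-lam * x) if x >= 0 else 0.0

# probability that the gap between successive events is shorter than 3 minutes
print(round(exponential_cdf(3, 0.5), 4))   # ≈ 0.7769
```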

Exponential distributions are a special case of the so-called Weibull-Gnedenko distributions. They are named after the engineer W. Weibull, who introduced these distributions into the practice of analyzing the results of fatigue tests, and the mathematician B.V. Gnedenko (1912-1995), who obtained such distributions as limiting ones when studying the maximum of test results. Let X be a random variable characterizing the duration of operation of a product, a complex system or an element (i.e. resource, operating time to the limit state, etc.), the duration of operation of an enterprise or the life of a living being, etc. An important role is played by the failure rate

λ(x) = f(x) / (1 − F(x)), (11)

where F(x) and f(x) are the distribution function and the density of the random variable X.

Let us describe the typical behavior of the failure rate. The entire time interval can be divided into three periods. In the first of them, the function λ(x) has high values and a clear tendency to decrease (most often it decreases monotonically). This can be explained by the presence in the batch under consideration of product units with obvious and latent defects, which lead to a relatively quick failure of these product units. The first period is called the "burn-in" (or "running-in") period. It is usually covered by the warranty period.

Then comes the period of normal operation, characterized by an approximately constant and relatively low failure rate. The nature of failures during this period is of a sudden nature (accidents, errors of operating personnel, etc.) and does not depend on the duration of operation of a product unit.

Finally, the last period of operation is the period of aging and wear. The nature of failures during this period is in irreversible physical, mechanical and chemical changes in materials, leading to a progressive deterioration in the quality of a unit of production and its final failure.

Each period has its own type of function λ(x). Consider the class of power dependencies

λ(x) = λ₀·b·x^(b−1), (12)

where λ₀ > 0 and b > 0 are some numerical parameters. The values b < 1, b = 1 and b > 1 correspond to the type of failure rate during the periods of burn-in, normal operation and aging, respectively.

Relation (11) for a given failure rate λ(x) is a differential equation with respect to the function F(x). It follows from the theory of differential equations that

F(x) = 1 − exp(−∫_0^x λ(t) dt). (13)

Substituting (12) into (13), we obtain

F(x) = 1 − exp(−λ₀·x^b). (14)

The distribution given by formula (14) is called the Weibull-Gnedenko distribution. Since

λ₀·x^b = (x/a)^b, where a = λ₀^(−1/b), (15)

it follows from formula (14) that the quantity a, given by formula (15), is a scale parameter. Sometimes a shift parameter c is also introduced, i.e. Weibull-Gnedenko distribution functions are those of the form F(x − c), where F(x) is given by formula (14) for some λ₀ and b.

The density of the Weibull-Gnedenko distribution has the form

f(x) = (b/a)·((x − c)/a)^(b−1)·exp(−((x − c)/a)^b) for x ≥ c, and f(x) = 0 for x < c, (16)

where a > 0 is the scale parameter, b > 0 is the shape parameter and c is the shift parameter. The parameter a in formula (16) is related to the parameter λ₀ in formula (14) by the relation indicated in formula (15).

The exponential distribution is a very special case of the Weibull - Gnedenko distribution, corresponding to the value of the shape parameter b = 1.
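A sketch of formula (14) that also illustrates the remark about b = 1; the parameter values are arbitrary:

```python
import math

def weibull_gnedenko_cdf(x, lam0, b):
    # F(x) = 1 - exp(-lam0 * x**b) for x >= 0   (formula (14))
    return 1 - math.exp(-lam0 * x ** b) if x >= 0 else 0.0

# with b = 1 the formula reduces to the exponential distribution
print(weibull_gnedenko_cdf(2.0, 0.5, 1.0))   # ≈ 0.6321
print(1 - math.exp(-0.5 * 2.0))              # the same value from formula (10)
```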

The Weibull-Gnedenko distribution is also used in the construction of probabilistic models of situations in which the behavior of an object is determined by the "weakest link". An analogy with a chain is implied, whose safety is determined by the link that has the lowest strength. In other words, let X₁, X₂, …, Xₙ be independent identically distributed random variables, and let

X(1)=min( X 1 , X 2 ,…, X n), X(n)=max( X 1 , X 2 ,…, X n).

In a number of applied problems an important role is played by X(1) and X(n), in particular when studying the maximum possible values ("records") of certain quantities, for example insurance payments or losses due to commercial risks, when studying the limits of elasticity and endurance of steel, a number of reliability characteristics, etc. It is shown that for large n the distributions of X(1) and X(n) are, as a rule, well described by Weibull-Gnedenko distributions. A foundational contribution to the study of the distributions of X(1) and X(n) was made by the Soviet mathematician B.V. Gnedenko; the works of W. Weibull, E. Gumbel, V.B. Nevzorov, E.M. Kudlaev and many other specialists are also devoted to them.

Let's move on to the family of gamma distributions. They are widely used in economics and management, in the theory and practice of reliability and testing, in various fields of technology, in meteorology, etc. In particular, in many situations the gamma distribution governs such quantities as the total service life of a product, the length of a chain of conductive dust particles, the time for a product to reach the limit state under corrosion, the operating time up to the k-th failure, k = 1, 2, …, etc. The life expectancy of patients with chronic diseases and the time to achieve a certain effect in treatment in some cases have a gamma distribution. This distribution is the most adequate for describing demand in economic-mathematical models of inventory management (logistics).

The density of the gamma distribution has the form

f(x) = (1/(Γ(a)·b^a))·(x − c)^(a−1)·exp(−(x − c)/b) for x > c, and f(x) = 0 for x ≤ c. (17)

The probability density in formula (17) is determined by three parameters a, b, c, where a > 0, b > 0. Here a is the shape parameter, b is the scale parameter and c is the shift parameter. The factor 1/Γ(a) is a normalizing factor; it is introduced so that

∫_{−∞}^{+∞} f(x) dx = 1.

Here Γ(a) is one of the special functions used in mathematics, the so-called "gamma function", after which the distribution given by formula (17) is also named:

Γ(a) = ∫_0^{+∞} x^(a−1)·e^(−x) dx.

For fixed a, formula (17) defines a scale-shift family of distributions generated by the distribution with density

f(x) = (1/Γ(a))·x^(a−1)·e^(−x) for x > 0, and f(x) = 0 for x ≤ 0. (18)

A distribution of the form (18) is called the standard gamma distribution. It is obtained from formula (17) with b = 1 and c = 0.

A special case of the gamma distributions with a = 1 are the exponential distributions (with λ = 1/b). For natural a and c = 0 the gamma distributions are called Erlang distributions. The theory of queuing originated with the works of the Danish scientist K.A. Erlang (1878-1929), an employee of the Copenhagen telephone company, who studied the functioning of telephone networks in 1908-1922. This theory is engaged in the probabilistic-statistical modeling of systems in which a flow of requests is serviced, in order to make optimal decisions. Erlang distributions are used in the same application areas as exponential distributions. This is based on the following mathematical fact: the sum of k independent random variables, exponentially distributed with the same parameters λ and c, has a gamma distribution with shape parameter a = k, scale parameter b = 1/λ and shift parameter kc. For c = 0 we obtain the Erlang distribution.

If the random variable X has a gamma distribution with shape parameter a such that d = 2a is an integer, b = 1 and c = 0, then 2X has a chi-square distribution with d degrees of freedom.

A random variable X with a gamma distribution has the following characteristics:

mathematical expectation M(X) = ab + c,

variance D(X) = σ² = ab²,

coefficient of variation v = (b√a)/(ab + c),

skewness 2/√a,

kurtosis (excess) 6/a.

The normal distribution is a limiting case of the gamma distribution. More precisely, let Z be a random variable with the standard gamma distribution given by formula (18). Then

lim (a→∞) P((Z − a)/√a < x) = Φ(x)

for any real number x, where Φ(x) is the standard normal distribution function N(0, 1).
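A brief scipy.stats sketch of the characteristics listed above, for hypothetical parameters a = 3, b = 2, c = 0 (scipy's `scale` corresponds to b here):

```python
from scipy.stats import gamma

a, b = 3, 2   # shape and scale parameters; shift c = 0

print(gamma.mean(a, scale=b), gamma.var(a, scale=b))     # ab = 6.0 and ab**2 = 12.0
print(float(gamma.stats(a, scale=b, moments='s')))       # skewness 2/sqrt(a) ≈ 1.1547
```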

In applied research, other parametric families of distributions are also used, of which the Pearson curve system, Edgeworth and Charlier series are the most well-known. They are not considered here.

Discrete distributions used in probabilistic-statistical decision-making methods. Most often, three families of discrete distributions are used - binomial, hypergeometric and Poisson, as well as some other families - geometric, negative binomial, multinomial, negative hypergeometric, etc.

As already mentioned, the binomial distribution arises in independent trials, in each of which the event A appears with probability p. If the total number of trials n is given, then the number of trials Y in which the event A appeared has a binomial distribution. For the binomial distribution, the probability that the random variable Y takes the value y is determined by the formula

P(Y = y) = C(n, y)·p^y·(1 − p)^(n−y), y = 0, 1, 2, …, n, (19)

where C(n, y) is the number of combinations of n elements taken y at a time, known from combinatorics. For all y other than 0, 1, 2, …, n we have P(Y = y) = 0. The binomial distribution with a fixed sample size n is specified by the parameter p, i.e. binomial distributions form a one-parameter family. They are used in the analysis of sample survey data, in particular in the study of consumer preferences, in selective control of product quality according to single-stage control plans, in testing populations of individuals in demography, sociology, medicine, biology, etc.

If Y₁ and Y₂ are independent binomial random variables with the same parameter p₀, determined from samples of sizes n₁ and n₂ respectively, then Y₁ + Y₂ is a binomial random variable with distribution (19) with p = p₀ and n = n₁ + n₂. This remark expands the applicability of the binomial distribution, allowing the results of several groups of trials to be combined when there is reason to believe that the same parameter corresponds to all these groups.

The characteristics of the binomial distribution were calculated earlier:

M(Y) = np, D(Y) = np( 1- p).
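A sketch of formula (19) and of these characteristics with hypothetical numbers n = 10, p = 0.3; scipy.stats.binom is used for brevity:

```python
from scipy.stats import binom

n, p = 10, 0.3

print(binom.pmf(3, n, p))                 # P(Y = 3) by formula (19)
print(binom.mean(n, p), binom.var(n, p))  # np = 3.0 and np(1 - p) = 2.1
```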

In the section "Events and Probabilities", the law of large numbers was proved for a binomial random variable:

lim (n→∞) P(|Y/n − p| < ε) = 1

for any ε > 0. With the help of the central limit theorem, the law of large numbers can be refined by indicating how much Y/n differs from p.

De Moivre-Laplace theorem. For any numbers a and b, a < b, we have

lim (n→∞) P(a < (Y − np)/√(np(1 − p)) < b) = Φ(b) − Φ(a),

where Φ(x) is the standard normal distribution function with mean 0 and variance 1.

To prove it, it suffices to use the representation Y as a sum of independent random variables corresponding to the outcomes of individual trials, formulas for M(Y) and D(Y) and the central limit theorem.

This theorem for the case p = ½ was proved by the English mathematician A. de Moivre (1667-1754) in 1730. In the above formulation it was proved in 1810 by the French mathematician Pierre-Simon Laplace (1749-1827).

The hypergeometric distribution arises in selective control of a finite set of objects of size N according to an alternative attribute. Each controlled object is classified either as possessing the attribute A or as not possessing this attribute. The hypergeometric distribution is possessed by the random variable Y equal to the number of objects possessing the attribute A in a random sample of size n, where n < N. For example, the number Y of defective units of product in a random sample of size n from a batch of size N has a hypergeometric distribution if n < N. Another example is a lottery. Let the attribute A of a ticket be the attribute of "being a winning ticket". Let there be N tickets in total, and let some person have acquired n of them. Then the number of winning tickets of this person has a hypergeometric distribution.

For the hypergeometric distribution, the probability that the random variable Y takes the value y has the form

P(Y = y) = C(D, y)·C(N − D, n − y) / C(N, n), (20)

where D is the number of objects possessing the attribute A in the considered set of size N. Here y takes values from max(0, n − (N − D)) to min(n, D); for other y the probability in formula (20) equals 0. Thus, the hypergeometric distribution is determined by three parameters: the size of the general population N, the number of objects D in it possessing the attribute A under consideration, and the sample size n.

Simple random sampling n from the total volume N is called a sample obtained as a result of random selection, in which any of the sets from n objects has the same probability of being selected. Methods for random selection of samples of respondents (interviewees) or units of piece products are considered in the instructive-methodical and normative-technical documents. One of the selection methods is as follows: objects are selected one from the other, and at each step each of the remaining objects in the set has the same chance of being selected. In the literature, for the type of samples under consideration, the terms “random sample”, “random sample without replacement” are also used.

Since the sizes of the general population (batch) N and of the sample n are usually known, the hypergeometric distribution parameter to be estimated is D. In statistical methods of product quality management, D is usually the number of defective units in the batch. The characteristic D/N of the distribution, the defect level, is also of interest.

For the hypergeometric distribution

M(Y) = n·D/N, D(Y) = n·(D/N)·(1 − D/N)·(N − n)/(N − 1).

The last factor in the variance expression is close to 1 if N > 10n. If, in addition, we make the substitution p = D/N, then the expressions for the mathematical expectation and variance of the hypergeometric distribution turn into the expressions for the mathematical expectation and variance of the binomial distribution. This is no coincidence. It can be shown that

P(Y = y) ≈ C(n, y)·p^y·(1 − p)^(n−y)

for N > 10n, where p = D/N. The limiting relation

lim (N→∞, D/N→p) P(Y = y) = C(n, y)·p^y·(1 − p)^(n−y)

is valid, and this limiting relation can be used for N > 10n.
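A numerical sketch of this approximation; N = 1000, D = 100, n = 20 are hypothetical values satisfying N > 10n (note that scipy's hypergeom takes arguments in the order population size, successes in the population, sample size):

```python
from scipy.stats import hypergeom, binom

N, D, n = 1000, 100, 20
p = D / N

for y in range(4):
    p_hyper = hypergeom.pmf(y, N, D, n)   # hypergeometric probability (20)
    p_binom = binom.pmf(y, n, p)          # binomial approximation (19)
    print(y, round(p_hyper, 4), round(p_binom, 4))
```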

The third widely used discrete distribution is the Poisson distribution. A random variable Y has a Poisson distribution if

P(Y = y) = λ^y·e^(−λ)/y!, y = 0, 1, 2, …,

where λ is the parameter of the Poisson distribution, and P(Y = y) = 0 for all other y (for y = 0 one sets 0! = 1). For the Poisson distribution

M(Y) = λ, D(Y) = λ.

This distribution is named after the French mathematician S.D. Poisson (1781-1840), who first derived it in 1837. The Poisson distribution is a limiting case of the binomial distribution when the probability p of the occurrence of the event is small but the number of trials n is large, with np = λ. More precisely, the limiting relation

lim (n→∞, np=λ) C(n, y)·p^y·(1 − p)^(n−y) = λ^y·e^(−λ)/y!

is valid.

Therefore, the Poisson distribution (in the old terminology "distribution law") is often also called the "law of rare events".

The Poisson distribution arises in the theory of event flows (see above). It is proved that for the simplest flow with constant intensity Λ, the number of events (calls) occurring during time t has a Poisson distribution with parameter λ = Λt. Therefore, the probability that no event occurs during time t equals e^(−Λt), i.e. the distribution function of the length of the interval between events is exponential.

The Poisson distribution is used in the analysis of the results of selective marketing surveys of consumers, the calculation of the operational characteristics of statistical acceptance control plans in the case of small values ​​of the acceptance level of defectiveness, to describe the number of breakdowns of a statistically controlled technological process per unit of time, the number of "requirements for service" arriving per unit of time in queuing system, statistical patterns of accidents and rare diseases, etc.
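A sketch of the "law of rare events": for large n and small p with np = λ, binomial probabilities are close to Poisson ones; λ = 2 and n = 1000 are illustrative values:

```python
from scipy.stats import binom, poisson

lam, n = 2.0, 1000
p = lam / n

for y in range(4):
    print(y, round(binom.pmf(y, n, p), 4), round(poisson.pmf(y, lam), 4))
```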

The description of other parametric families of discrete distributions and the possibility of their practical use are considered in the literature.


In some cases, for example, when studying prices, output volumes or total time between failures in reliability problems, the distribution functions are constant on certain intervals in which the values ​​of the random variables under study cannot fall.


Examples of random variables distributed according to the normal law are the height of a person and the mass of caught fish of the same species. Normal distribution means the following: there are values of human height and of the mass of fish of a given species that are intuitively perceived as "normal" (in fact, average), and in a sufficiently large sample they are much more common than values that deviate upward or downward.

The normal probability distribution of a continuous random variable (sometimes called the Gaussian distribution) can be called bell-shaped because the density function of this distribution, which is symmetric about the mean, closely resembles the cross-section of a bell (the red curve in the figure above).

The probability of encountering particular values in the sample is equal to the area of the figure under the curve, and in the case of a normal distribution we see that under the top of the "bell", which corresponds to values close to the mean, the area, and therefore the probability, is greater than under the edges. Thus, we obtain what has already been said: the probability of meeting a person of "normal" height or of catching a fish of "normal" weight is higher than for values that deviate upward or downward. In a great many practical cases, measurement errors are distributed according to a law close to normal.

Let us return to the figure at the beginning of the lesson, which shows the density function of the normal distribution. Its graph was obtained by processing a certain data sample in the STATISTICA software package. In it, the histogram columns represent intervals of sample values whose distribution is close to (or, as statisticians say, does not differ significantly from) the graph of the normal distribution density function itself, which is the red curve. The graph shows that this curve is indeed bell-shaped.

The normal distribution is valuable in many ways because knowing only the mean of a continuous random variable and the standard deviation, you can calculate any probability associated with that variable.

An added benefit of the normal distribution is that one of the easiest-to-use statistical criteria for testing statistical hypotheses, Student's t-test, can be applied only when the sample data obey the normal distribution law.

The density function of the normal distribution of a continuous random variable can be found using the formula:

f(x) = (1 / (σ·√(2π))) · e^(−(x − μ)² / (2σ²)) ,

where x is the value of the variable, μ is the mean value, σ is the standard deviation, e = 2.71828… is the base of the natural logarithm, and π = 3.1416…
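As a hedged sketch (the values x = 36, μ = 32, σ = 4 are assumed purely for illustration), the formula can be evaluated directly and compared with scipy.stats.norm.pdf:

```python
# Sketch: evaluating the normal density by the formula and via scipy.stats.norm.
# Assumed illustration values: x = 36, mu = 32, sigma = 4.
import math
from scipy.stats import norm

x, mu, sigma = 36.0, 32.0, 4.0
f_manual = math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
f_scipy = norm.pdf(x, loc=mu, scale=sigma)
print(f_manual, f_scipy)  # the two values coincide
```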

Properties of the normal distribution density function

Changes in the mean shift the bell-shaped curve along the Ox axis: if the mean increases, the curve moves to the right; if it decreases, it moves to the left.

If the standard deviation changes, the height of the curve's peak changes. When the standard deviation increases, the peak of the curve becomes lower and the curve wider; when it decreases, the peak becomes higher and the curve narrower.

The probability that the value of a normally distributed random variable will fall within a given interval

In this section we begin to solve practical problems of the kind indicated in the title. Let us examine what possibilities the theory provides for solving them. The starting concept for calculating the probability that a normally distributed random variable falls into a given interval is the cumulative (integral) normal distribution function.

Integral normal distribution function:

F(x) = (1 / (σ·√(2π))) · ∫ from −∞ to x of e^(−(t − μ)² / (2σ²)) dt .

However, it is problematic to obtain tables for every possible combination of mean and standard deviation. Therefore, one of the simple ways to calculate the probability of a normally distributed random variable falling into a given interval is to use probability tables for a standardized normal distribution.

A normal distribution whose mean value is 0 and whose standard deviation is 1 is called the standardized (or normalized) normal distribution.

Density function of the standardized normal distribution:

φ(z) = (1 / √(2π)) · e^(−z² / 2) .

Cumulative function of the standardized normal distribution:

Φ(z) = (1 / √(2π)) · ∫ from −∞ to z of e^(−t² / 2) dt .

The figure below shows the cumulative (integral) function of the standardized normal distribution; its graph was obtained by processing a certain data sample in the STATISTICA software package. The graph itself is the red curve, and the sample values cluster around it.



Standardizing a random variable means passing from the original units used in the problem to standardized units. Standardization is performed according to the formula

z = (x − μ) / σ .
In practice, all possible values of the random variable are often not known, so the mean and the standard deviation cannot be determined exactly. They are replaced by the arithmetic mean of the observations and the sample standard deviation s. The value z expresses the deviation of a value of the random variable from the arithmetic mean, measured in standard deviations.
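A minimal sketch of such standardization (the observations below are hypothetical) using the sample mean and sample standard deviation s:

```python
# Sketch: standardizing observations, z = (x - mean) / s.
# The data array is hypothetical.
import numpy as np

sample = np.array([980.0, 1050.0, 1210.0, 870.0, 995.0])
z = (sample - sample.mean()) / sample.std(ddof=1)  # ddof=1 gives the sample standard deviation s
print(np.round(z, 3))
```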

Open interval

The probability table for the standardized normal distribution, which is available in almost any statistics book, contains the probabilities that a random variable Z having the standard normal distribution takes a value less than a certain number z, that is, falls into the open interval from minus infinity to z. For example, the probability that the value of Z is less than 1.5 equals 0.93319.
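This table value can be cross-checked, for example, with scipy (a sketch, not part of the source):

```python
# Sketch: P(Z < 1.5) for the standard normal distribution.
from scipy.stats import norm

print(norm.cdf(1.5))  # approximately 0.93319
```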

Example 1. A company manufactures parts whose service life is normally distributed with a mean of 1000 hours and a standard deviation of 200 hours.

For a randomly selected part, calculate the probability that its service life will be at least 900 hours.

Solution. Let us introduce the notation:

P(X ≥ 900) is the desired probability.

The values of the random variable lie in an open interval. We can calculate the probability that the random variable takes a value less than a given one, whereas the problem requires the probability of a value equal to or greater than the given one, which is the other part of the area under the bell-shaped curve. Therefore, to find the desired probability, we must subtract from one the probability that the random variable takes a value less than 900:

P(X ≥ 900) = 1 − P(X < 900) .

Now the random variable needs to be standardized.

We continue to introduce the notation:

z = (x − μ)/σ is the standardized value;

x= 900 - given value of a random variable;

μ = 1000 - average value;

σ = 200 - standard deviation.

Based on these data, we obtain:

z = (900 − 1000) / 200 = −0.5 ,  so  P(X < 900) = P(Z < −0.5) .

According to the table for the standardized random variable, the boundary value z = −0.5 corresponds to the probability 0.30854. Subtracting it from one, we obtain what is required in the problem:

P(X ≥ 900) = 1 − 0.30854 = 0.69146 ≈ 0.69 .

So, the probability that the life of the part will be at least 900 hours is 69%.

This probability can also be obtained with the MS Excel function NORM.DIST (with the cumulative argument set to 1):

P(X≥900) = 1 - P(X≤900) = 1 - NORM.DIST(900; 1000; 200; 1) = 1 - 0.3085 = 0.6915.

About calculations in MS Excel - in one of the subsequent paragraphs of this lesson.
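The same result can be reproduced outside Excel; a short Python sketch (scipy is assumed to be available):

```python
# Sketch: P(X >= 900) for X ~ N(mu = 1000, sigma = 200), cf. Example 1.
from scipy.stats import norm

p = 1 - norm.cdf(900, loc=1000, scale=200)
print(round(p, 4))  # approximately 0.6915
```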

Example 2. In a certain city, the average annual family income is a normally distributed random variable with a mean of 300,000 and a standard deviation of 50,000. It is known that the income of 40% of families is less than the value A. Find the value A.

Solution. In this problem, 40% is nothing other than the probability that the random variable takes a value from the open interval (−∞; A), i.e. a value less than the quantity denoted by the letter A.

To find the value A, we first write the cumulative (integral) distribution function:

F(x) = P(X < x) = Φ((x − μ) / σ) .
According to the task

μ = 300000 - average value;

σ = 50000 - standard deviation;

x = A is the value to be found.

We set up the equality:

Φ((A − 300000) / 50000) = 0.40 .

According to the statistical tables (interpolating between adjacent entries), the probability 0.40 corresponds to the boundary value z ≈ −0.2533.

Therefore, we set up the equality

(A − 300000) / 50000 ≈ −0.2533

and find its solution:

A ≈ 300000 − 0.2533·50000 ≈ 287300 .

Answer: the income of 40% of families is less than approximately 287300.
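As a cross-check (a sketch, not from the source), the 40% quantile can be computed directly; small differences from the table-based answer are due to rounding:

```python
# Sketch: find A such that P(X < A) = 0.40 for X ~ N(300000, 50000), cf. Example 2.
from scipy.stats import norm

A = norm.ppf(0.40, loc=300000, scale=50000)
print(round(A))  # approximately 287335, close to the tabulated answer of about 287300
```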

Closed interval

In many problems it is required to find the probability that a normally distributed random variable takes a value in the interval from z1 to z2, that is, falls into a closed interval. To solve such problems, find in the table the probabilities corresponding to the boundaries of the interval and then take the difference between them, subtracting the smaller value from the larger one:

P(z1 ≤ Z ≤ z2) = Φ(z2) − Φ(z1) .

Examples of such common problems are given below; it is suggested that you solve them yourself, after which you can check the solutions and answers.
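The same recipe can be wrapped in a small helper; the sketch below (illustrative only, with assumed interval values) standardizes the boundaries and takes the difference of the cumulative probabilities:

```python
# Sketch: probability that a normal random variable falls into a closed interval [a, b].
from scipy.stats import norm

def prob_in_interval(a, b, mu, sigma):
    """P(a <= X <= b) = Phi((b - mu)/sigma) - Phi((a - mu)/sigma)."""
    return norm.cdf((b - mu) / sigma) - norm.cdf((a - mu) / sigma)

# Assumed illustrative values only (the parameters of Example 1).
print(round(prob_in_interval(900, 1100, 1000, 200), 4))
```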

Example 3. The profit of an enterprise for a certain period is a random variable obeying the normal distribution law with a mean of 0.5 million c.u. and a standard deviation of 0.354 million c.u. Determine, to two decimal places, the probability that the enterprise's profit will be between 0.4 and 0.6 million c.u.

Example 4. The length of a manufactured part is a random variable distributed according to the normal law with parameters μ = 10 and σ = 0.071. Find, to two decimal places, the probability of producing a defective part if the allowable dimensions of the part are 10 ± 0.05.

Hint: in this problem, in addition to finding the probability of a random variable falling into a closed interval (the probability of obtaining a non-defective part), one more action is required.

The relation P(−z ≤ Z ≤ +z) = 2Φ(z) − 1 allows you to determine the probability that the standardized value Z is not less than −z and not greater than +z, where z is an arbitrarily chosen value of the standardized random variable.
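A one-line sketch (z = 1 is an assumed illustration value) confirms this symmetric-interval relation:

```python
# Sketch: P(-z <= Z <= z) = Phi(z) - Phi(-z) = 2*Phi(z) - 1 for the standard normal law.
from scipy.stats import norm

z = 1.0
print(norm.cdf(z) - norm.cdf(-z), 2 * norm.cdf(z) - 1)  # both approximately 0.6827
```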

An Approximate Method for Checking the Normality of a Distribution

An approximate method for checking the normality of the distribution of sample values is based on the following property of the normal distribution: its skewness coefficient β1 and kurtosis coefficient β2 are equal to zero.

The skewness (asymmetry) coefficient β1 numerically characterizes the symmetry of the empirical distribution with respect to the mean. If the skewness is zero, then the arithmetic mean, median and mode are equal (x̄ = Me = Mo) and the distribution density curve is symmetric about the mean. If the skewness coefficient is less than zero (β1 < 0), then the arithmetic mean is less than the median, which in turn is less than the mode (x̄ < Me < Mo), and the curve is shifted to the right (compared with the normal distribution). If the skewness coefficient is greater than zero (β1 > 0), then the arithmetic mean is greater than the median, which in turn is greater than the mode (x̄ > Me > Mo), and the curve is shifted to the left (compared with the normal distribution).

The kurtosis coefficient β2 characterizes the concentration of the empirical distribution around the arithmetic mean along the Oy axis and the degree of peakedness of the distribution density curve. If the kurtosis coefficient is greater than zero, the curve is more stretched (compared with the normal distribution) along the Oy axis (the graph is more peaked). If the kurtosis coefficient is less than zero, the curve is more flattened (compared with the normal distribution) along the Oy axis (the graph is more blunt).

The skewness coefficient can be calculated with the MS Excel function SKEW. When checking a single data array, it is enough to enter the data range in one "Number" box.


The kurtosis coefficient can be calculated with the MS Excel function KURT. When checking a single data array, it is likewise enough to enter the data range in one "Number" box.
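Outside Excel the same sample coefficients can be computed, for example, with scipy (a sketch with hypothetical data; the bias-corrected options are chosen so that the results should be close to Excel's SKEW and KURT):

```python
# Sketch: sample skewness and (excess) kurtosis for a hypothetical data array.
import numpy as np
from scipy.stats import skew, kurtosis

data = np.array([4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6, 5.4, 4.8, 5.2])
beta1 = skew(data, bias=False)                   # sample skewness, close to Excel's SKEW
beta2 = kurtosis(data, fisher=True, bias=False)  # sample excess kurtosis, close to Excel's KURT
print(round(beta1, 3), round(beta2, 3))  # for approximately normal data both are near zero
```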


So, as we already know, for a normal distribution the skewness and kurtosis coefficients are equal to zero. But what if we obtained skewness coefficients equal to −0.14, 0.22, 0.43 and kurtosis coefficients equal to 0.17, −0.31, 0.55? The question is quite fair, since in practice we deal only with approximate, sample values of skewness and kurtosis, which are subject to some inevitable, uncontrollable scatter. Therefore, one cannot require these coefficients to be strictly equal to zero; they need only be sufficiently close to zero. But what does "sufficiently" mean?

The empirical values obtained must be compared with admissible values. To do this, check the following inequalities (compare the absolute values of the coefficients with the critical values, i.e. the boundaries of the hypothesis-testing region).

For the asymmetry coefficient β 1 .
