
In the previous section, we gave a number of formulas that allow us to find the numerical characteristics of functions when the distribution laws of the arguments are known. However, in many cases, to find the numerical characteristics of functions, one does not even need to know the distribution laws of the arguments: it is enough to know only some of their numerical characteristics, and we can then dispense with distribution laws altogether. Determining the numerical characteristics of functions from given numerical characteristics of the arguments is widely used in probability theory and makes it possible to significantly simplify the solution of a number of problems. For the most part, these simplified methods apply to linear functions; however, certain elementary nonlinear functions also admit a similar approach.

In the present section we present a number of theorems on the numerical characteristics of functions, which together constitute a very simple apparatus for calculating these characteristics, applicable in a wide range of conditions.

1. The mathematical expectation of a non-random variable

If c is a non-random variable, then

M[c] = c.

The formulated property is obvious enough; it can be proved by considering the non-random variable as a particular case of a random one, with a single possible value c taken with probability one; then, by the general formula for the mathematical expectation,

M[c] = c · 1 = c.

2. Variance of a non-random variable

If c is a non-random variable, then

D[c] = 0,

since a non-random variable never deviates from its mathematical expectation, which is equal to the variable itself.

3. Taking a non-random factor outside the sign of the mathematical expectation

If c is a non-random variable and X is a random variable, then

M[cX] = c M[X], (10.2.1)

that is, a non-random factor can be taken outside the sign of the mathematical expectation.

Proof.

a) For a discrete variable X

M[cX] = Σi c xi pi = c Σi xi pi = c M[X].

b) For a continuous variable X

M[cX] = ∫ c x f(x) dx = c ∫ x f(x) dx = c M[X].

4. Taking a non-random factor outside the sign of the variance and standard deviation

If c is a non-random variable and X is a random variable, then

D[cX] = c² D[X], (10.2.2)

that is, a non-random factor can be taken outside the sign of the variance by squaring it.

Proof. By the definition of variance,

D[cX] = M[(cX − M[cX])²] = M[c²(X − M[X])²] = c² M[(X − M[X])²] = c² D[X].

Consequence:

σ[cX] = |c| σ[X],

that is, a non-random factor can be taken outside the sign of the standard deviation by its absolute value. We obtain the proof by taking the square root of formula (10.2.2) and taking into account that the standard deviation is an essentially non-negative quantity.
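These elementary properties are easy to check numerically. Below is a minimal sketch in Python; the discrete distribution and the constant c are arbitrary illustrative choices, and exact Fraction arithmetic is used so the identities hold exactly.

```python
from fractions import Fraction

# Hypothetical discrete distribution, used only for illustration.
xs = [Fraction(-1), Fraction(0), Fraction(2)]
ps = [Fraction(1, 4), Fraction(1, 4), Fraction(1, 2)]

def mean(values, probs):
    """Mathematical expectation: sum of value * probability."""
    return sum(v * p for v, p in zip(values, probs))

def variance(values, probs):
    """Variance: expectation of the squared deviation from the mean."""
    m = mean(values, probs)
    return sum((v - m) ** 2 * p for v, p in zip(values, probs))

c = Fraction(-3)

# M[c] = c and D[c] = 0: a constant viewed as a one-point distribution.
assert mean([c], [Fraction(1)]) == c
assert variance([c], [Fraction(1)]) == 0

# M[cX] = c*M[X]: the constant factor comes out of the expectation sign.
assert mean([c * x for x in xs], ps) == c * mean(xs, ps)

# D[cX] = c^2 * D[X]: it comes out of the variance sign squared.
assert variance([c * x for x in xs], ps) == c ** 2 * variance(xs, ps)
```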

5. Mathematical expectation of the sum of random variables

Let us prove that for any two random variables X and Y

M[X + Y] = M[X] + M[Y], (10.2.3)

that is, the mathematical expectation of the sum of two random variables is equal to the sum of their mathematical expectations.

This property is known as the theorem of addition of mathematical expectations.

Proof.

a) Let (X, Y) be a system of discrete random variables. Apply to the sum of the random variables the general formula (10.1.6) for the mathematical expectation of a function of two arguments:

M[X + Y] = Σi Σj (xi + yj) pij = Σi Σj xi pij + Σi Σj yj pij.

But Σj pij represents nothing more than the total probability that the variable X will take the value xi:

Σj pij = pi;

hence,

Σi Σj xi pij = Σi xi pi = M[X].

Similarly it is proved that

Σi Σj yj pij = M[Y],

and the theorem is proved.

b) Let (X, Y) be a system of continuous random variables. According to formula (10.1.7),

M[X + Y] = ∫∫ (x + y) f(x, y) dx dy = ∫∫ x f(x, y) dx dy + ∫∫ y f(x, y) dx dy. (10.2.4)

We transform the first of the integrals in (10.2.4):

∫∫ x f(x, y) dx dy = ∫ x [∫ f(x, y) dy] dx = ∫ x f1(x) dx = M[X];

similarly,

∫∫ y f(x, y) dx dy = M[Y],

and the theorem is proved.

It should be specially noted that the theorem of addition of mathematical expectations is valid for any random variables, both dependent and independent.

The theorem of addition of mathematical expectations generalizes to an arbitrary number of terms:

M[X1 + X2 + … + Xn] = M[X1] + M[X2] + … + M[Xn], (10.2.5)

that is, the mathematical expectation of the sum of several random variables is equal to the sum of their mathematical expectations.
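Since the addition theorem holds for dependent variables as well, it can be checked exactly on a small joint distribution that makes no independence assumption. A sketch in Python; the joint table is an arbitrary illustrative choice.

```python
from fractions import Fraction

# Hypothetical joint distribution P(X=x, Y=y) of two DEPENDENT discrete
# variables; the dependence does not matter for the addition theorem.
joint = {
    (0, 0): Fraction(1, 2),
    (0, 1): Fraction(1, 10),
    (1, 1): Fraction(2, 5),
}

mx = sum(x * p for (x, y), p in joint.items())
my = sum(y * p for (x, y), p in joint.items())
m_sum = sum((x + y) * p for (x, y), p in joint.items())

# M[X + Y] = M[X] + M[Y] holds even though X and Y are dependent here:
# P(X=0) = 3/5 and P(Y=0) = 1/2, but P(X=0, Y=0) = 1/2 != 3/10.
assert m_sum == mx + my
```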

For the proof, it suffices to apply the full induction method.

6. Mathematical expectation of a linear function

Consider a linear function of several random arguments X1, X2, …, Xn:

Y = a1 X1 + a2 X2 + … + an Xn + b,

where a1, a2, …, an, b are non-random coefficients. Let us prove that

M[Y] = a1 M[X1] + a2 M[X2] + … + an M[Xn] + b, (10.2.6)

that is, the mathematical expectation of a linear function is equal to the same linear function of the mathematical expectations of the arguments.

Proof. Using the addition theorem for mathematical expectations and the rule for taking a non-random factor outside the sign of the mathematical expectation, we get:

M[Y] = M[a1 X1 + a2 X2 + … + an Xn + b] = a1 M[X1] + a2 M[X2] + … + an M[Xn] + b.
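A quick Monte Carlo check of formula (10.2.6), with illustrative coefficients and two arbitrary distributions whose expectations are known in closed form (assumptions of this sketch: X normal with mean 1, Z exponential with mean 2).

```python
import random

random.seed(1)

# Illustrative linear function a*X + b*Z + c of two random arguments.
a, b, c = 2.0, -3.0, 5.0

# X ~ Normal(mean=1, sd=2) and Z ~ Exponential(rate=0.5), so M[X] = 1, M[Z] = 2.
samples = [(random.gauss(1.0, 2.0), random.expovariate(0.5))
           for _ in range(200_000)]

lhs = sum(a * x + b * z + c for x, z in samples) / len(samples)
rhs = a * 1.0 + b * 2.0 + c  # the same linear function of the expectations

assert abs(lhs - rhs) < 0.1  # agreement up to sampling error
```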

7. Variance of the sum of random variables

The variance of the sum of two random variables is equal to the sum of their variances plus twice the correlation moment:

D[X + Y] = D[X] + D[Y] + 2 Kxy. (10.2.7)

Proof. We denote

Z = X + Y. (10.2.8)

By the theorem of addition of mathematical expectations,

mz = mx + my. (10.2.9)

Let us pass from the random variables to the corresponding centered variables. Subtracting equality (10.2.9) term by term from equality (10.2.8), we have:

Z° = X° + Y°.

By the definition of variance,

D[Z] = M[Z°²] = M[X°² + 2 X° Y° + Y°²] = D[X] + 2 Kxy + D[Y],

q.e.d.

Formula (10.2.7) for the variance of the sum can be generalized to any number of terms:

D[X1 + … + Xn] = Σi D[Xi] + 2 Σ(i<j) Kij, (10.2.10)

where Kij is the correlation moment of the variables Xi and Xj; the sign i < j under the sum means that the summation extends over all possible pairwise combinations of the random variables X1, …, Xn.

The proof is similar to the previous one and follows from the formula for the square of a polynomial.

Formula (10.2.10) can be written in another form:

D[X1 + … + Xn] = Σi Σj Kij, (10.2.11)

where the double sum extends over all elements of the correlation matrix of the system (X1, …, Xn), which contains both the correlation moments and the variances (Kii = D[Xi]).

If all the random variables of the system are uncorrelated (i.e., Kij = 0 for i ≠ j), formula (10.2.10) takes the form:

D[X1 + … + Xn] = Σi D[Xi], (10.2.12)

that is, the variance of the sum of uncorrelated random variables is equal to the sum of the variances of the terms.

This statement is known as the variance addition theorem.
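Formula (10.2.7) can be verified exactly on a small joint table. The joint distribution below is an arbitrary illustrative choice; note that X and Y are correlated here, so the 2Kxy term is genuinely nonzero.

```python
from fractions import Fraction

# Illustrative joint distribution P(X=x, Y=y) of two correlated variables.
joint = {
    (0, 0): Fraction(1, 2),
    (0, 1): Fraction(1, 10),
    (1, 1): Fraction(2, 5),
}

def m(f):
    """Expectation of f(X, Y) over the joint distribution."""
    return sum(f(x, y) * p for (x, y), p in joint.items())

mx, my = m(lambda x, y: x), m(lambda x, y: y)
dx = m(lambda x, y: (x - mx) ** 2)
dy = m(lambda x, y: (y - my) ** 2)
kxy = m(lambda x, y: (x - mx) * (y - my))   # correlation moment Kxy
d_sum = m(lambda x, y: (x + y - mx - my) ** 2)

# D[X + Y] = D[X] + D[Y] + 2*Kxy  (formula 10.2.7)
assert d_sum == dx + dy + 2 * kxy
assert kxy != 0  # the variables are indeed correlated here
```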

8. Variance of a linear function

Consider a linear function of several random variables

Y = a1 X1 + a2 X2 + … + an Xn + b,

where a1, …, an, b are non-random coefficients.

Let us prove that the variance of this linear function is expressed by the formula

D[Y] = Σi ai² D[Xi] + 2 Σ(i<j) ai aj Kij, (10.2.13)

where Kij is the correlation moment of the variables Xi, Xj.

Proof. Let us introduce the notation:

ai Xi = Yi. (10.2.14)

Then

Y = Σi Yi + b.

Applying to the right side of expression (10.2.14) formula (10.2.10) for the variance of a sum, and taking into account that the non-random term b does not affect the variance, we get:

D[Y] = Σi D[Yi] + 2 Σ(i<j) K(Yi, Yj), (10.2.15)

where K(Yi, Yj) is the correlation moment of the variables Yi, Yj:

K(Yi, Yj) = M[Y°i Y°j].

Let us calculate this moment. We have:

K(Yi, Yj) = M[ai X°i · aj X°j] = ai aj Kij;

similarly,

D[Yi] = D[ai Xi] = ai² D[Xi].

Substituting these expressions into (10.2.15), we arrive at formula (10.2.13).

In the special case when all the variables X1, …, Xn are uncorrelated, formula (10.2.13) takes the form:

D[Y] = Σi ai² D[Xi], (10.2.16)

that is, the variance of the linear function of uncorrelated random variables is equal to the sum of the products of the squared coefficients by the variances of the corresponding arguments.
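Formula (10.2.13) for two arguments, D[aX + bY] = a²D[X] + b²D[Y] + 2ab·Kxy, can be checked exactly; the joint distribution and the coefficients below are arbitrary illustrative choices.

```python
from fractions import Fraction

# Hypothetical joint distribution P(X=x, Y=y) of two correlated variables.
joint = {
    (0, 0): Fraction(1, 2),
    (0, 1): Fraction(1, 10),
    (1, 1): Fraction(2, 5),
}

def m(f):
    """Expectation of f(X, Y) over the joint distribution."""
    return sum(f(x, y) * p for (x, y), p in joint.items())

mx, my = m(lambda x, y: x), m(lambda x, y: y)
dx = m(lambda x, y: (x - mx) ** 2)
dy = m(lambda x, y: (y - my) ** 2)
kxy = m(lambda x, y: (x - mx) * (y - my))

a, b = Fraction(3), Fraction(-2)  # arbitrary non-random coefficients

d_lin = m(lambda x, y: (a * x + b * y - (a * mx + b * my)) ** 2)

# D[aX + bY] = a^2 D[X] + b^2 D[Y] + 2ab Kxy  (formula 10.2.13 for n = 2)
assert d_lin == a ** 2 * dx + b ** 2 * dy + 2 * a * b * kxy
```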

9. Mathematical expectation of the product of random variables

The mathematical expectation of the product of two random variables is equal to the product of their mathematical expectations plus the correlation moment:

M[XY] = M[X] M[Y] + Kxy. (10.2.17)

Proof. We proceed from the definition of the correlation moment:

Kxy = M[X° Y°] = M[(X − mx)(Y − my)].

We transform this expression using the properties of the mathematical expectation:

Kxy = M[XY] − mx M[Y] − my M[X] + mx my = M[XY] − mx my,

which is obviously equivalent to formula (10.2.17).

If the random variables (X, Y) are uncorrelated (Kxy = 0), then formula (10.2.17) takes the form:

M[XY] = M[X] M[Y], (10.2.18)

that is, the mathematical expectation of the product of two uncorrelated random variables is equal to the product of their mathematical expectations.

This statement is known as the expectation multiplication theorem.

Formula (10.2.17) is nothing more than an expression of the second mixed central moment of the system through the second mixed initial moment and the mathematical expectations:

Kxy = M[XY] − mx my. (10.2.19)

This expression is often used in practice to calculate the correlation moment, in the same way as, for a single random variable, the variance is often calculated through the second initial moment and the mathematical expectation: D[X] = M[X²] − mx².
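Formula (10.2.19) is an identity and can be checked exactly on any joint table; the one below is an arbitrary illustrative choice.

```python
from fractions import Fraction

# Illustrative joint distribution P(X=x, Y=y) of two dependent variables.
joint = {
    (1, 2): Fraction(1, 4),
    (1, 4): Fraction(1, 4),
    (3, 4): Fraction(1, 2),
}

def m(f):
    """Expectation of f(X, Y) over the joint distribution."""
    return sum(f(x, y) * p for (x, y), p in joint.items())

mx, my = m(lambda x, y: x), m(lambda x, y: y)
m_xy = m(lambda x, y: x * y)                  # second mixed initial moment
kxy = m(lambda x, y: (x - mx) * (y - my))     # second mixed central moment

# Kxy = M[XY] - mx*my  (formula 10.2.19), i.e. M[XY] = M[X]M[Y] + Kxy
assert kxy == m_xy - mx * my
```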

The theorem of multiplication of mathematical expectations generalizes to an arbitrary number of factors; only in this case, for its application, it is not enough that the variables be uncorrelated: it is required that certain higher mixed moments, the number of which depends on the number of factors in the product, also vanish. These conditions are certainly satisfied when the random variables in the product are independent. In this case

M[X1 X2 … Xn] = M[X1] M[X2] … M[Xn], (10.2.20)

that is, the mathematical expectation of the product of independent random variables is equal to the product of their mathematical expectations.

This statement is easily proved by the method of complete induction.

10. Variance of the product of independent random variables

Let us prove that for independent variables X and Y

D[XY] = D[X] D[Y] + mx² D[Y] + my² D[X]. (10.2.21)

Proof. Let us denote XY = Z. By the definition of variance,

D[Z] = M[Z²] − mz². (10.2.22)

Since the variables are independent, mz = mx my, and

D[Z] = M[X² Y²] − mx² my².

For independent X and Y, the variables X² and Y² are also independent; hence,

M[X² Y²] = M[X²] M[Y²].

But M[X²] is nothing more than the second initial moment of the variable X and, therefore, is expressed through the variance:

M[X²] = D[X] + mx²;

similarly,

M[Y²] = D[Y] + my².

Substituting these expressions into formula (10.2.22) and collecting like terms, we arrive at formula (10.2.21).

In the case when centered random variables are multiplied (variables with mathematical expectations equal to zero), formula (10.2.21) takes the form:

D[XY] = D[X] D[Y], (10.2.23)

that is, the variance of the product of independent centered random variables is equal to the product of their variances.
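The centered case (10.2.23) can be checked exactly by direct enumeration. The two zero-mean distributions below are arbitrary illustrative choices; independence lets us build the product's distribution as P(X=x)·P(Y=y).

```python
from fractions import Fraction
from itertools import product

# Two independent, centered (zero-mean) discrete variables.
X = [(-2, Fraction(1, 3)), (1, Fraction(2, 3))]
Y = [(-1, Fraction(3, 4)), (3, Fraction(1, 4))]

def mean(dist):
    return sum(v * p for v, p in dist)

def var(dist):
    mu = mean(dist)
    return sum((v - mu) ** 2 * p for v, p in dist)

assert mean(X) == 0 and mean(Y) == 0  # both variables are centered

# Independence: P(X=x, Y=y) = P(X=x) * P(Y=y), so the distribution of
# the product XY is obtained by direct enumeration of all value pairs.
XY = [(x * y, px * py) for (x, px), (y, py) in product(X, Y)]

# D[XY] = D[X] * D[Y]  (formula 10.2.23, centered independent case)
assert var(XY) == var(X) * var(Y)
```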

11. Higher moments of the sum of random variables

In some cases it is necessary to calculate the higher moments of the sum of independent random variables. Let us prove some relations related to this.

1) If the variables X and Y are independent, then

μ3[X + Y] = μ3[X] + μ3[Y]. (10.2.24)

Proof.

μ3[X + Y] = M[(X° + Y°)³] = M[X°³] + 3 M[X°² Y°] + 3 M[X° Y°²] + M[Y°³],

whence, by the theorem of multiplication of mathematical expectations,

μ3[X + Y] = μ3[X] + 3 μ2[X] M[Y°] + 3 M[X°] μ2[Y] + μ3[Y].

But the first central moment of any variable is zero; the two middle terms vanish, and formula (10.2.24) is proved.

Relation (10.2.24) is easily generalized by induction to an arbitrary number of independent terms:

μ3[X1 + … + Xn] = Σi μ3[Xi]. (10.2.25)

2) The fourth central moment of the sum of two independent random variables is expressed by the formula

μ4[X + Y] = μ4[X] + μ4[Y] + 6 Dx Dy, (10.2.26)

where Dx and Dy are the variances of X and Y.

The proof is completely similar to the previous one.

By the method of complete induction, it is easy to prove a generalization of formula (10.2.26) to an arbitrary number of independent terms.

The mathematical expectation and the probability distribution of a random variable


The mathematical expectation: definition

One of the most important concepts in mathematical statistics and probability theory, characterizing the distribution of the values or probabilities of a random variable. It is usually expressed as a weighted average of all possible values of the random variable. It is widely used in technical analysis, the study of number series, and the study of continuous and discrete processes. It is important in assessing risks and predicting price indicators when trading in financial markets, and it is used in developing strategies and methods of gaming tactics in the theory of gambling.

The mathematical expectation is the mean value of a random variable, considered in probability theory with respect to the probability distribution of that variable.

The mathematical expectation is a measure of the mean value of a random variable in probability theory. The mathematical expectation of a random variable x is denoted M(x).

The mathematical expectation is, in probability theory, the weighted average of all possible values that the random variable can take.

The mathematical expectation is the sum of the products of all possible values of a random variable by the probabilities of these values.

The mathematical expectation is the average benefit from one decision or another, provided that such a decision can be considered within the framework of the law of large numbers and over a long run.

The mathematical expectation is, in gambling theory, the amount of the winnings that a player can earn or lose, on average, on each bet. In the language of gamblers, this is sometimes called the "player's edge" (if it is positive for the player) or the "house edge" (if it is negative for the player).

The mathematical expectation is the probability of a win multiplied by the average win, minus the probability of a loss multiplied by the average loss.


The mathematical expectation of a random variable in mathematical theory

One of the important numerical characteristics of a random variable is the mathematical expectation. Let us introduce the concept of a system of random variables. Consider a collection of random variables that are the results of the same random experiment. To each possible value of the system there corresponds a certain probability satisfying the Kolmogorov axioms. A function defined for all possible values of the random variables is called the joint distribution law. This function makes it possible to calculate the probabilities of any events determined by the system. In particular, the joint distribution law of two random variables taking values from finite sets is given by the probabilities of all pairs of their values.


The term "mathematical expectation" was introduced by Pierre-Simon, Marquis de Laplace (1795) and originated from the concept of the "expected value of a payoff", which first appeared in the 17th century in the theory of gambling in the works of Blaise Pascal and Christiaan Huygens. However, the first complete theoretical understanding and assessment of this concept was given by Pafnuty Lvovich Chebyshev (mid-19th century).


The distribution law of a random variable (the distribution function and the distribution series, or the probability density) completely describes the behavior of a random variable. But in a number of problems it is enough to know some numerical characteristics of the quantity under study (for example, its average value and the possible deviation from it) in order to answer the question posed. The main numerical characteristics of random variables are the mathematical expectation, variance, mode and median.

The mathematical expectation of a discrete random variable is the sum of the products of its possible values by the corresponding probabilities. Sometimes the mathematical expectation is called the weighted average, since it is approximately equal to the arithmetic mean of the observed values of the random variable over a large number of experiments. From the definition of the mathematical expectation it follows that its value is not less than the smallest possible value of the random variable and not more than the largest. The mathematical expectation of a random variable is a non-random (constant) quantity.


The mathematical expectation has a simple physical meaning: if a unit mass is distributed along a straight line, either by placing masses at individual points (for a discrete distribution) or by "smearing" it with a certain density (for an absolutely continuous distribution), then the point corresponding to the mathematical expectation will be the coordinate of the "center of gravity" of the line.


The average value of a random variable is a certain number which is, as it were, its "representative" and replaces it in rough approximate calculations. When we say "the average lamp operating time is 100 hours" or "the mean point of impact is displaced 2 m to the right of the target", we are indicating a certain numerical characteristic of a random variable that describes its location on the numerical axis, i.e., a "characteristic of position".

From the characteristics of the position in the theory of probability, the most important role is played by the mathematical expectation of a random variable, which is sometimes called simply the mean value of a random variable.


Consider a random variable X with possible values x1, x2, …, xn and probabilities p1, p2, …, pn. We need to characterize by some number the position of the values of the random variable on the abscissa axis, taking into account the fact that these values have different probabilities. For this purpose it is natural to use the so-called "weighted average" of the values xi, in which each value xi is taken into account during averaging with a "weight" proportional to the probability of that value. The mean of the random variable X computed in this way we will denote M[X]:

M[X] = (x1 p1 + x2 p2 + … + xn pn) / (p1 + p2 + … + pn) = Σ xi pi,

since the sum of the probabilities p1 + p2 + … + pn = 1.


This weighted average is called the mathematical expectation of the random variable. Thus we have introduced one of the most important concepts of probability theory: the concept of the mathematical expectation. The mathematical expectation of a random variable is the sum of the products of all possible values of the random variable by the probabilities of these values.
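The defining formula takes a couple of lines of Python; the fair die used here anticipates the dice example discussed later in the text.

```python
# M(X) = x1*p1 + x2*p2 + ... + xn*pn, applied to a fair six-sided die.
faces = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

m = sum(x * p for x, p in zip(faces, probs))
assert abs(m - 3.5) < 1e-12  # the expectation of a fair die is 3.5
```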

The mathematical expectation M[X] is related in a peculiar way to the arithmetic mean of the observed values of the random variable over a large number of experiments. This relationship is of the same type as the relationship between frequency and probability, namely: with a large number of experiments, the arithmetic mean of the observed values of a random variable approaches (converges in probability to) its mathematical expectation. From the presence of a relationship between frequency and probability one can deduce, as a consequence, the presence of a similar relationship between the arithmetic mean and the mathematical expectation. Indeed, consider a random variable X characterized by a distribution series:


Suppose N independent experiments are performed, in each of which the variable X takes a certain value. Suppose the value x1 appeared m1 times, the value x2 appeared m2 times, and in general the value xi appeared mi times. Let us calculate the arithmetic mean of the observed values of X, which, in contrast to the mathematical expectation M[X], we denote M*[X]:

M*[X] = (x1 m1 + x2 m2 + … + xn mn) / N = Σ xi (mi / N).

With an increase in the number of experiments N, the frequencies mi / N will approach (converge in probability to) the corresponding probabilities pi. Consequently, the arithmetic mean M*[X] of the observed values of the random variable will, with an increase in the number of experiments, approach (converge in probability to) its mathematical expectation. The above connection between the arithmetic mean and the mathematical expectation constitutes the content of one of the forms of the law of large numbers.

We already know that all forms of the law of large numbers state the fact that certain averages are stable over a large number of experiments. Here we are talking about the stability of the arithmetic mean of a series of observations of the same quantity. With a small number of experiments, the arithmetic mean of their results is random; with a sufficient increase in the number of experiments, it becomes "almost non-random" and, stabilizing, approaches a constant value, the mathematical expectation.


The property of stability of averages over a large number of experiments is easy to verify experimentally. For example, when weighing a body in a laboratory on an accurate balance, we get a new value each time as a result of the weighing; to reduce the observation error, we weigh the body several times and use the arithmetic mean of the values obtained. It is easy to see that with a further increase in the number of experiments (weighings), the arithmetic mean reacts less and less to this increase, and with a sufficiently large number of experiments it practically stops changing.

It should be noted that the most important characteristic of the position of a random variable, the mathematical expectation, does not exist for all random variables. It is possible to construct examples of random variables for which the mathematical expectation does not exist, since the corresponding sum or integral diverges. However, such cases are not of significant interest for practice. Usually, the random variables we deal with have a limited range of possible values and, of course, have a mathematical expectation.


In addition to the most important of the characteristics of the position of a random variable - the mathematical expectation - other characteristics of the position are sometimes used in practice, in particular, the mode and median of a random variable.


The mode of a random variable is its most probable value. The term "most probable value", strictly speaking, applies only to discrete variables; for a continuous variable, the mode is the value at which the probability density is maximal. The figures show the mode for discrete and continuous random variables, respectively.


If the distribution polygon (distribution curve) has more than one maximum, the distribution is called "polymodal".



Sometimes there are distributions that have a minimum, not a maximum, in the middle. Such distributions are called "anti-modal".


In the general case, the mode and the mathematical expectation of a random variable do not coincide. In the special case when the distribution is symmetric and modal (i.e., has a mode) and there is a mathematical expectation, then it coincides with the mode and the center of symmetry of the distribution.

Another characteristic of position is often used: the so-called median of a random variable. This characteristic is usually used only for continuous random variables, although it can formally be defined for a discrete variable as well. Geometrically, the median is the abscissa of the point at which the area bounded by the distribution curve is bisected.


In the case of a symmetric modal distribution, the median coincides with the mathematical expectation and mode.

The mathematical expectation is the mean value of a random variable, a numerical characteristic of the probability distribution of the random variable. In the most general way, the mathematical expectation of a random variable X(w) is defined as the Lebesgue integral with respect to the probability measure P on the original probability space:


The mathematical expectation can also be calculated as the Lebesgue integral of x with respect to the probability distribution PX of the variable X:


In a natural way, one can define the concept of a random variable with infinite mathematical expectation. Typical examples are the return times in some random walks.

Using the mathematical expectation, many numerical and functional characteristics of the distribution are determined (as the mathematical expectation of the corresponding functions of a random variable), for example, a generating function, a characteristic function, moments of any order, in particular, variance, covariance.

The mathematical expectation is a characteristic of the location of the values of a random variable (the average value of its distribution). In this capacity, the mathematical expectation serves as some "typical" distribution parameter, and its role is similar to the role of the static moment, the coordinate of the center of gravity of a mass distribution, in mechanics. The mathematical expectation differs from the other location characteristics used to describe the distribution in general terms (medians, modes) by the greater importance that it, together with the corresponding scattering characteristic, the variance, has in the limit theorems of probability theory. The meaning of the mathematical expectation is revealed most fully by the law of large numbers (Chebyshev's inequality) and the strong law of large numbers.

The mathematical expectation of a discrete random variable

Let there be some random variable that can take one of several numerical values (for example, the number of points in a throw of a die can be 1, 2, 3, 4, 5, or 6). In practice the question often arises for such a variable: what value does it take "on average" over a large number of tests? What will be our average income (or loss) from each of the risky operations?


Let's say there is some kind of lottery. We want to understand whether or not it is profitable to participate in it (or even to participate repeatedly, regularly). Let's say every fourth ticket is a winner, the prize is 300 rubles, and the price of any ticket is 100 rubles. With an infinitely large number of participations, this is what happens. In three quarters of cases we lose, and every three losses cost 300 rubles. In every fourth case we win 200 rubles (prize minus ticket cost). That is, over four participations we lose 100 rubles on average, and over one, 25 rubles on average. In total, the average rate of our ruin will be 25 rubles per ticket.
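The same arithmetic can be written out directly; a sketch of the lottery calculation with the net gains and probabilities as stated above:

```python
# Net gain per ticket: lose 100 rubles with probability 3/4,
# win 300 - 100 = 200 rubles with probability 1/4.
outcomes = [(-100, 0.75), (300 - 100, 0.25)]

ev = sum(gain * p for gain, p in outcomes)
assert ev == -25.0  # on average we lose 25 rubles per ticket
```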

We throw a die. If it is not a cheating one (no shift of the center of gravity, etc.), then how many points will we get on average per throw? Since each option is equally probable, we simply take the arithmetic mean and get 3.5. Since this is an AVERAGE, there is no need to be indignant that no specific throw will give 3.5 points: the die has no face with such a number!

Now let's summarize our examples:


Let's look at the picture just shown. On the left is the distribution table of a random variable. The value X can take one of n possible values (shown in the top row). There can be no other values. Below each possible value is its probability. On the right is the formula, where M(X) is called the mathematical expectation. The meaning of this value is that with a large number of tests (a large sample), the average value will tend to this very mathematical expectation.

Let's go back to the same die. The mathematical expectation of the number of points in a throw is 3.5 (calculate it yourself using the formula if you don't believe it). Say you threw it a couple of times. The results were 4 and 6. The average is 5, which is far from 3.5. You threw it one more time and got 3, that is, on average (4 + 6 + 3) / 3 = 4.333... Somehow far from the mathematical expectation. Now do this crazy experiment: roll the die 1000 times! Even if the average is not exactly 3.5, it will be close to it.
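That "crazy experiment" takes a few lines to simulate; a seeded sketch (the seed and the number of rolls are arbitrary choices):

```python
import random

random.seed(42)

# Roll a fair die 100,000 times; the average approaches M(X) = 3.5.
rolls = [random.randint(1, 6) for _ in range(100_000)]
average = sum(rolls) / len(rolls)

assert abs(average - 3.5) < 0.05  # close to the expectation, as promised
```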

Let's calculate the mathematical expectation for the lottery described above. The table will look like this:

Then the mathematical expectation will be, as we established above:


Another thing is that doing the same "on one's fingers", without a formula, would be difficult if there were more options. Let's say there were 75% losing tickets, 20% winning tickets, and 5% extra-winning tickets.

Now some properties of the mathematical expectation.

The mathematical expectation of a constant is equal to that constant itself. Proving this is simple:


The constant factor is allowed to be taken outside the mathematical expectation sign, that is:


This is a special case of the linearity property of the mathematical expectation.

Another consequence of the linearity of the mathematical expectation:

that is, the mathematical expectation of the sum of random variables is equal to the sum of the mathematical expectations of the random variables.

Let X, Y be independent random variables; then M(XY) = M(X) · M(Y).

This is also not difficult to prove: XY is itself a random variable, and if the initial variables could take n and m values respectively, then XY can take nm values. The probability of each value is calculated on the basis that the probabilities of independent events are multiplied. As a result, we get this:


The mathematical expectation of a continuous random variable

Continuous random variables have a characteristic called the distribution density (probability density). It characterizes, in essence, the fact that a random variable takes some values from the set of real numbers more often and others less often. For example, consider the following graph:


Here X is the random variable itself, and f(x) is its distribution density. Judging by this graph, in experiments the value of X will often be a number close to zero; the chances of exceeding 3 or falling below -3 are rather purely theoretical.


For example, suppose there is a uniform distribution:



This is quite consistent with intuitive understanding. Say, if we draw many random real numbers with a uniform distribution, each from the segment [0, 1], then the arithmetic mean should be about 0.5.
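This intuition is easy to check with a seeded simulation (the seed and the sample size are arbitrary choices):

```python
import random

random.seed(0)

# Draw many uniform numbers from [0, 1]; the mean should be near 0.5.
draws = [random.random() for _ in range(100_000)]
average = sum(draws) / len(draws)

assert abs(average - 0.5) < 0.01
```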

The properties of the mathematical expectation - linearity, etc., applicable for discrete random variables, apply here as well.

Relationship between mathematical expectation and other statistical indicators

In statistical analysis, along with the mathematical expectation, there is a system of interdependent indicators reflecting the homogeneity of phenomena and the stability of processes. Variation indicators often have no independent meaning and are used for further data analysis. The exception is the coefficient of variation, which characterizes the homogeneity of the data and is a valuable statistic.


The degree of variability or stability of processes in statistical science can be measured using several indicators.

The most important indicator characterizing the variability of a random variable is the variance, which is most closely and directly related to the mathematical expectation. This parameter is actively used in other types of statistical analysis (hypothesis testing, analysis of cause-and-effect relationships, etc.). Like the mean linear deviation, the variance also reflects the measure of the spread of the data around the mean.


It is useful to translate the language of symbols into the language of words. It turns out that the variance is the mean square of the deviations. That is, the average value is calculated first, then the difference between each original value and the average is taken, squared, summed, and then divided by the number of values in the population. The difference between an individual value and the mean reflects the measure of deviation. It is squared so that all deviations become exclusively positive numbers and to avoid mutual cancellation of positive and negative deviations when they are summed. Then, given the squared deviations, we simply calculate the arithmetic mean. Average. Squared. Deviations. The answer to the magic word "variance" lies in just these three words.
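The "average of squared deviations" recipe, step by step, on an arbitrary illustrative sample; the result agrees with the standard library's population variance.

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # illustrative sample

mean = sum(data) / len(data)                          # 1) the average
squared_deviations = [(x - mean) ** 2 for x in data]  # 2) squared deviations
variance = sum(squared_deviations) / len(data)        # 3) their average

# Same as the library's population variance.
assert variance == statistics.pvariance(data)
```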

However, in its pure form, unlike the arithmetic mean or an index, the variance is not used. It is rather an auxiliary and intermediate indicator used for other types of statistical analysis. It does not even have a normal unit of measurement: judging by the formula, it is the square of the unit of measurement of the original data.

Let us measure a random variable N times; for example, we measure the wind speed ten times and want to find the average value. How is the average value related to the distribution function?

Or we will roll a die a large number of times. The number of points that comes up on each roll is a random variable and can take any natural value from 1 to 6. The arithmetic mean of the points, calculated over all rolls, is also a random variable, but for large N it tends to a quite specific number, the mathematical expectation Mx. In this case, Mx = 3.5.

How did this value come about? Suppose that in N trials 1 point came up n1 times, 2 points came up n2 times, and so on. Then the fraction of outcomes in which one point fell is n1/N, and likewise for the outcomes in which 2, 3, 4, 5 and 6 points came up.


Suppose now that we know the distribution law of the random variable x, that is, we know that the random variable x can take the values x1, x2, …, xk with probabilities p1, p2, …, pk.

The mathematical expectation Mx of the random variable x is:

Mx = x1 p1 + x2 p2 + … + xk pk.


The mathematical expectation is not always a reasonable estimate of a random variable. Thus, to estimate the average wage it is more reasonable to use the concept of the median, that is, the value such that the number of people receiving less than the median wage and the number receiving more coincide.

The probability p1 that the random variable x will be less than the median x1/2, and the probability p2 that it will be greater than x1/2, are the same and equal to 1/2. The median is not determined uniquely for all distributions.


The standard deviation (root mean square deviation) in statistics measures the degree to which observations deviate from the mean. It is denoted by the letter σ (or s). A small standard deviation indicates that the data cluster around the mean, while a large one indicates that the data lie far from it. The standard deviation is the square root of a quantity called the variance, which is the mean of the squared deviations of the data from their mean. Thus the root mean square deviation of a random variable is the square root of its variance:


Example. Under test conditions when shooting at a target, calculate the variance and standard deviation of a random variable:
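The numerical data for this example did not survive extraction; as a hedged sketch, assume five shots with the scores below:

```python
import math

# Hypothetical scores (the example's original data were lost in formatting).
scores = [8, 9, 10, 7, 6]

mean = sum(scores) / len(scores)                               # arithmetic mean
variance = sum((x - mean) ** 2 for x in scores) / len(scores)  # mean squared deviation
std_dev = math.sqrt(variance)                                  # root mean square deviation

print(variance, round(std_dev, 3))  # 2.0 1.414
```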


Variation is the variability of the value of a trait across the units of a population. The individual numerical values of the trait found in the studied population are called variants. The insufficiency of the average value for a complete characterization of the population makes it necessary to supplement averages with indicators that assess how typical those averages are, by measuring the variability (variation) of the trait under study. The coefficient of variation is calculated by the formula:


The range of variation (R) is the difference between the maximum and minimum values of the trait in the studied population. This indicator gives only the most general idea of the variability of the trait, since it shows the difference only between the extreme values of the variants. This dependence on the extreme values gives the range of variation an unstable, random character.


The mean linear deviation is the arithmetic mean of the absolute (modulo) deviations of all values of the analyzed population from their average:
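The formulas themselves did not survive extraction; a sketch combining the range, mean linear deviation, and coefficient of variation on an assumed sample:

```python
# Range, mean linear deviation, and coefficient of variation for a
# small hypothetical sample (the values are illustrative assumptions).
data = [8, 9, 10, 7, 6]
n = len(data)

mean = sum(data) / n
r = max(data) - min(data)                        # range of variation R
mld = sum(abs(x - mean) for x in data) / n       # mean linear deviation
std = (sum((x - mean) ** 2 for x in data) / n) ** 0.5
cv = std / mean * 100                            # coefficient of variation, %

print(r, mld, round(cv, 2))  # 4 1.2 17.68
```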


Expectation in gambling theory

The mathematical expectation is the average amount of money a gambler stands to win or lose on a given bet. This is a very important concept for the player, because it is fundamental to the assessment of most game situations. Expectation is also the basic tool for analyzing card plays and game situations.

Let's say you are flipping a coin with a friend, betting $1 evenly each time regardless of what comes up. Tails, you win; heads, you lose. The odds of tails coming up are one to one, and you bet $1 against $1. Thus your mathematical expectation is zero, because mathematically you cannot know whether you will be ahead or behind after two flips or after 200.


Your hourly gain is zero. The hourly win is the amount of money you expect to win in an hour. You could flip a coin 500 times within an hour, but you would neither win nor lose, because your odds are neither positive nor negative. From the point of view of a serious player, such a betting system is not bad; it is simply a waste of time.

But suppose someone wants to bet $2 against your $1 in the same game. Then you immediately have a positive expectation of 50 cents on each bet. Why 50 cents? On average you win one bet and lose the next: you bet the first dollar and lose $1, bet the second and win $2. You have bet $1 twice and are $1 ahead, so each of your one-dollar bets gained you 50 cents.
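A minimal sketch of this calculation:

```python
# EV of a fair coin flip paying $2 on a win against your $1 stake.
p_win, p_lose = 0.5, 0.5
win_amount, lose_amount = 2.0, 1.0

ev = p_win * win_amount - p_lose * lose_amount
print(ev)  # 0.5, i.e. 50 cents per $1 bet
```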


If the coin is flipped 500 times in an hour, your hourly winnings will be $250, because on average you lose $1 on 250 flips and win $2 on the other 250. $500 minus $250 equals $250, the total winnings. Note that the expected value, the amount you win on average per bet, is 50 cents: you won $250 by placing a dollar bet 500 times, which equals 50 cents per bet.

Expectation has nothing to do with short-term results. Your opponent, who decided to bet $2 against you, could beat you on the first ten tosses in a row, but you, having a 2 to 1 betting advantage, all other things being equal, earn 50 cents from every $1 bet under any circumstances. It makes no difference whether you win or lose one bet or several, as long as you have enough cash to absorb the swings calmly. If you continue to bet in the same way, then over a long period of time your winnings will approach the sum of your expectations on the separate flips.


Every time you make a bet with the best outcome (a bet that can turn out to be profitable over the long run), when the odds are in your favor, you will definitely win something on it, and it does not matter if you lose it or not in this hand. Conversely, if you make a bet with the worst outcome (a bet that is not profitable in the long run) when the odds are not in your favor, you are losing something, regardless of whether you win or lose in the given hand.

You bet with the best outcome when your expectation is positive, and it is positive when the odds are on your side. When you bet with the worst outcome, you have negative expectation, which happens when the odds are against you. Serious gamblers bet only with the best outcome; with the worst outcome, they fold. What does it mean for the odds to be in your favor? You may end up winning more than the true odds warrant. The true odds of tails coming up are 1 to 1, but you are getting 2 to 1 because of the ratio of the bets. In this case the odds are in your favor. You definitely have the best outcome, with a positive expectation of 50 cents per bet.


Here's a more complex example of expected value. Your buddy writes the numbers from one to five and bets $ 5 against your $ 1 that you will not determine the hidden number. Should you agree to such a bet? What is the expectation here?

On average, you will guess wrong four times out of five, so the odds against your guessing the number are 4 to 1: on any single try you are likely to lose a dollar. However, you are being paid 5 to 1 while losing only 4 to 1, so the odds are in your favor; you can take the bet and expect the better outcome. If you make this bet five times, on average you will lose $1 four times and win $5 once. Over all five tries you will earn $1, a positive expectation of 20 cents per bet.
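The guessing-game expectation can be verified directly (a sketch using the stakes from the example):

```python
# Guess a number from 1..5: win $5 with probability 1/5, lose $1 otherwise.
p_win = 1 / 5
ev = p_win * 5 - (1 - p_win) * 1
print(round(ev, 2))  # 0.2 -> a positive expectation of 20 cents per bet
```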


A player who is going to win more than he bets, as in the example above, catches the odds. Conversely, he ruins the odds when he expects to win less than he bets. A player making a bet can have either positive or negative expectation, which depends on whether he catches or ruins the odds.

If you bet $50 to win $10 with a 4 to 1 probability of winning, you have a negative expectation of $2, because on average you win $10 four times and lose $50 once, a net loss of $10 over five bets, or $2 per bet. But if you bet $30 to win $10 with the same 4 to 1 chances of winning, you have a positive expectation of $2, because you again win $10 four times and lose $30 once, a net profit of $10 over five bets. These examples show that the first bet is bad and the second is good.


The mathematical expectation is at the center of any game situation. When a bookmaker asks football fans to bet $11 to win $10, the bookmaker has a positive expectation of 50 cents on every $10. If a casino pays even money on the pass line at craps, the casino's positive expectation is approximately $1.40 per $100, because the game is arranged so that everyone betting on that line loses 50.7% of the time and wins 49.3% of the time. It is this seemingly minimal positive expectation that brings colossal profits to casino owners around the world. As Vegas World casino owner Bob Stupak put it, "a one-thousandth of one percent negative expectation, over a long enough distance, will ruin the richest man in the world."


Mathematical expectation when playing Poker

The game of poker is the most illustrative example of the use of the theory and properties of mathematical expectation.


The mathematical expectation (English: Expected Value) in poker is the average benefit from a particular decision, provided that the decision is considered within the framework of the law of large numbers over the long run. Successful poker consists in always accepting the moves with positive expectation.

The significance of mathematical expectation in poker is that decisions are constantly made under randomness (we do not know which cards are in our opponent's hand or which cards will come on subsequent betting rounds). Each decision must be considered from the point of view of the law of large numbers, which states that with a sufficiently large sample the average value of a random variable tends to its mathematical expectation.


Among the particular formulas for calculating the mathematical expectation, the following is most applicable in poker:

When playing poker, the expected value can be calculated for both bets and calls. In the first case fold equity should be taken into account, in the second the pot odds. When assessing the mathematical expectation of a move, remember that a fold always has zero expectation. Thus folding will always be more profitable than any move with negative expectation.

Expectation tells you what you can expect (profit or loss) for every dollar you risk. Casinos make money because the mathematical expectation of all the games practiced in them is in the casino's favor. Over a sufficiently long series of games one can expect the client to lose his money, since the odds are in the casino's favor. However, professional casino players limit their games to short periods of time, thereby stacking the odds in their favor. The same goes for investing. If your expectation is positive, you can earn more money by making many trades in a short period of time. Expectation is your probability of profit multiplied by your average profit, minus your probability of loss multiplied by your average loss.


Poker can also be viewed in terms of mathematical expectation. You may assume that a certain move is profitable, but in some cases it turns out not to be the best, because another move is more profitable. Say you hit a full house in five-card draw poker. Your opponent bets. You know that if you raise, he will call, so raising looks like the best tactic. But if you raise, the two remaining players will certainly fold, whereas if you just call, you are quite sure that the two players behind you will do the same. When you raise you win one unit; when you simply call, two. Thus calling gives you the higher positive mathematical expectation and is the better tactic.

Mathematical expectation can also show which poker tactics are less profitable and which are more. For example, if when playing a certain hand you believe your losses will average 75 cents including the ante, then that hand should be played, because this is better than folding when the ante is $1.


Another important reason for understanding the essence of mathematical expectation is that it gives you equanimity over whether you won a particular bet or not: if you made a good bet or folded at the right time, you will know that you have earned or saved a certain amount of money that a weaker player could not have saved. It is much harder to fold if you are upset that your opponent drew a stronger hand. The money that you saved by not betting is added to your winnings for the night or the month.

Just remember that if you swapped hands, your opponent would call you, and (as you will see in the article "The Fundamental Theorem of Poker") this is just one of your advantages. You should be happy when this happens. You can even learn to enjoy losing a hand, because you know that other players in your place would have lost much more.


As stated in the coin-game example at the beginning, the hourly rate of profit is related to the expected value, and this concept is especially important for professional players. When you are going to play poker, you must mentally estimate how much you can win in an hour of play. In most cases you will need to rely on intuition and experience, but some mathematics also helps. For example, you are playing draw lowball and see three players bet $10 and then draw two cards, which is very bad tactics; you might figure that every time they bet $10 they lose about $2. Each of them does this eight times an hour, which means all three lose about $48 per hour. You are one of the remaining four players, who are roughly equal, so these four players (you among them) divide the $48, for a profit of $12 per hour each. Your hourly rate in this case is simply your share of the money lost by the three bad players in an hour.
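The hourly-rate arithmetic above can be sketched as:

```python
# Three weak players each lose about $2 on each of 8 such bets per hour;
# four roughly equal players share what they give up.
losers, bets_per_hour, loss_per_bet = 3, 8, 2.0

pot_per_hour = losers * bets_per_hour * loss_per_bet  # $48 lost per hour
hourly_rate = pot_per_hour / 4                        # your share
print(hourly_rate)  # 12.0 dollars per hour
```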

Over a long period of time, the player's total payoff is the sum of his mathematical expectations in individual hands. The more you play with positive expectation, the more you win, and vice versa, the more hands with negative expectation you play, the more you will lose. As a consequence, you should choose a game that can maximize your positive expectations or negate negative ones so that you can maximize your hourly winnings.


Positive mathematical expectation in game strategy

If you know how to count cards, you may have an edge over the casino, provided they do not notice and throw you out. Casinos love drunken gamblers and cannot stand card counters. An edge allows you to win more times than you lose over the long run. Good money management based on expectation calculations can help you extract more from your edge and cut your losses. Without an edge you would be better off donating the money to charity. In exchange trading, the edge is given by the trading system, which creates more profit than is consumed by losses, price spreads and commissions. No amount of money management will save a bad trading system.

A positive expectation is any value greater than zero, and the larger the number, the stronger the expectation. If the value is less than zero, the expectation is negative, and the larger its modulus, the worse the situation. If the result is zero, the expectation is break-even. You can only win when you have a positive mathematical expectation and a reasonable system of play; playing by intuition leads to disaster.


Expectation and exchange trading

The mathematical expectation is a widely used statistical indicator in exchange trading in financial markets. It is applied first of all to analyze the success of trading: the higher its value, the more reason to consider the trading under study successful. Of course, a trader's performance cannot be analyzed with this parameter alone, but in combination with other measures of quality it can significantly improve the accuracy of the analysis.


The mathematical expectation is often calculated in account-monitoring services, which makes it possible to quickly assess the work done on a deposit. An exception is strategies that "sit out" losing trades. A trader may be lucky for some time, and his record may contain no losses at all; in that case expectation alone is not a reliable guide, because the risks taken in the work are not reflected in it.

In trading on the market, expectation is most often used when predicting the profitability of a trading strategy or when predicting a trader's income based on the statistical data of his previous trades.

In terms of money management, it is very important to understand that when making trades with negative expectation, there is no money management scheme that can definitely bring high profits. If you continue to play on the stock exchange under these conditions, then no matter how you manage your money, you will lose your entire account, no matter how large it was in the beginning.

This axiom is not only true for games or trades with negative expectation, it is also true for games with equal odds. Therefore, the only case where you have a chance to benefit in the long term is when you enter into trades with positive expected value.


The difference between negative expectation and positive expectation is the difference between life and death. It doesn't matter how positive or how negative the expectation is; what matters is whether it is positive or negative. Therefore, before considering money management issues, you must find a game with positive expectation.

If you don't have such a game, then no amount of money management in the world will save you. On the other hand, if you have a positive expectation, then you can, through good money management, turn it into an exponential growth function. It doesn't matter how little that positive expectation is! In other words, it doesn't matter how profitable a single contract trading system is. If you have a system that wins $ 10 per contract on a single trade (after deducting commissions and slippage), you can use money management techniques to make it more profitable than a system that shows an average profit of $ 1000 per trade (after deduction of commissions and slippage).


What matters is not how profitable the system was, but how certain it can be said that the system will show at least the minimum profit in the future. Therefore, the most important preparation a trader can make is to make sure that the system will show a positive mathematical expectation in the future.

In order to have a positive mathematical expectation in the future, it is very important not to restrict the degrees of freedom of your system. This is achieved not only by eliminating or reducing the number of parameters to be optimized, but also by reducing as many system rules as possible. Every parameter you add, every rule you make, every tiny change you make to the system, reduces the number of degrees of freedom. Ideally, you need to build a fairly primitive and simple system that will consistently generate small profits in almost any market. Again, it is important that you understand that it does not matter how profitable the system is, as long as it is profitable. The money you earn in trading will be earned through effective money management.

A trading system is simply a tool that gives you a positive mathematical expectation so that money management can be used. Systems that work (show at least minimal profit) in only one or a few markets, or have different rules or parameters for different markets, most likely will not work in real time for long enough. The problem with most tech-savvy traders is that they spend too much time and effort optimizing the various rules and parameter values \u200b\u200bof the trading system. This gives completely opposite results. Instead of wasting energy and computer time increasing the profits of the trading system, focus your energy on increasing the level of reliability of making the minimum profit.

Knowing that money management is just a numerical game that requires the use of positive expectations, a trader can stop looking for the “holy grail” of stock trading. Instead, he can start checking his trading method, find out how logically this method is, whether it gives positive expectations. The right money management methods applied to any, even mediocre trading methods, will do the rest of the work themselves.


For any trader to succeed in his work, it is necessary to solve three key tasks: ensure that the number of successful deals exceeds the inevitable mistakes and miscalculations; set up the trading system so that the opportunity to earn arises as often as possible; achieve a stable positive result of operations.

And here the mathematical expectation can help us working traders. This term is one of the key ones in probability theory. With its help one can give an average estimate of a random value: the mathematical expectation of a random variable is like its center of gravity, if all possible values are imagined as points with masses equal to their probabilities.


In relation to a trading strategy, the mathematical expectation of profit (or loss) is most often used to assess its effectiveness. This parameter is defined as the sum of the products of the profit and loss levels and the probabilities of their occurrence. For example, suppose a trading strategy assumes that 37% of all transactions will bring profit and the remaining 63% will be unprofitable, while the average income from a successful deal is $7 and the average loss is $1.40. Then the expectation of trading by this system is: MO = 0.37 × $7 − 0.63 × $1.4 = $1.708.

What does this number mean? It says that, following the rules of this system, we will on average receive $1.708 from each closed trade. Since this effectiveness estimate is greater than zero, the system can be used for real work. If the calculated expectation turns out negative, this indicates an average loss, and such trading leads to ruin.
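The system's expectation can be checked with a one-line calculation (figures taken from the example above):

```python
# 37% of trades win $7 on average, 63% lose $1.40 on average.
ev = 0.37 * 7.0 - 0.63 * 1.4
print(round(ev, 3))  # 1.708 dollars per closed trade
```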

The amount of profit per trade can also be expressed as a relative value, in percent. For example:

- percentage of income per 1 deal - 5%;

- percentage of successful trading operations - 62%;

- percentage of loss per 1 deal - 3%;

- percentage of unsuccessful transactions - 38%;

That is, the average trade will generate 1.96%.
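The same arithmetic with the relative values listed above, as a quick check:

```python
# 62% of trades gain 5%, 38% of trades lose 3%.
ev_pct = 0.62 * 5 - 0.38 * 3
print(round(ev_pct, 2))  # 1.96 (% per trade)
```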

It is possible to develop a system that, despite a predominance of unprofitable trades, gives a positive result, since its MO > 0.

However, expectation alone is not enough. It is difficult to make money if the system gives very few trading signals; in that case its profitability will be comparable to bank interest. Let each trade yield on average only $0.50, but what if the system assumes 1000 trades per year? That is a very serious amount in a relatively short time. It logically follows that another distinguishing feature of a good trading system is a short period of holding positions.



The mathematical expectation: definitions

The mathematical expectation is one of the most important concepts in mathematical statistics and probability theory, characterizing the distribution of values or probabilities of a random variable. It is usually expressed as the weighted average of all possible values of the random variable. It is widely used in technical analysis, the study of numerical series, and the study of continuous and long-term processes. It is important in risk assessment and in predicting price indicators when trading in financial markets, and it is used in developing strategies and methods of game tactics in gambling theory.

The mathematical expectation is the mean value of a random variable; in probability theory it is considered with respect to the probability distribution of that random variable.

The mathematical expectation is a measure of the mean value of a random variable in probability theory. The mathematical expectation of a random variable x is denoted M(x).


The mathematical expectation is, in probability theory, the weighted average of all possible values that a random variable can take.

The mathematical expectation is the sum of the products of all possible values of a random variable and the probabilities of those values.


The mathematical expectation is the average benefit from one decision or another, provided that the decision can be considered within the framework of the law of large numbers over a long distance.

The mathematical expectation is, in gambling theory, the amount of winnings a bettor can win or lose, on average, on each bet. In gamblers' language this is sometimes called the "player's edge" (if it is positive for the bettor) or the "house edge" (if it is negative for the bettor).



Every random variable is completely determined by its distribution function. However, for solving practical problems it is enough to know several numerical characteristics, which make it possible to present the main features of a random variable in concise form.

These quantities include first of all the expected value and the variance.

Expected value is the average value of a random variable in probability theory; it is denoted M(X).

In the simplest formulation, the mathematical expectation of a random variable X(ω) is found as the Lebesgue integral with respect to the probability measure P on the original probability space:

The mathematical expectation can also be found as the Lebesgue integral of x with respect to the probability distribution P_X of the variable X:

where the integration extends over the set of all possible values of X.

The mathematical expectation of a function of a random variable X is expressed through the distribution P_X. For example, if X is a random variable with values in ℝ and f(x) is a single-valued Borel function of x, then:

If F(x) is the distribution function of X, then the mathematical expectation is representable as the Lebesgue–Stieltjes (or Riemann–Stieltjes) integral:

moreover, the integrability of X in the sense of (*) corresponds to the finiteness of the integral

In specific cases, if X has a discrete distribution with possible values x_k, k = 1, 2, ..., and probabilities p_k, then

if X has an absolutely continuous distribution with probability density p(x), then

in this case, the existence of a mathematical expectation is equivalent to the absolute convergence of the corresponding series or integral.

The properties of the mathematical expectation of a random variable.

  • The mathematical expectation of a constant value is equal to that value: M[C] = C, where C is a constant.
  • A constant factor can be taken out of the expectation sign: M[CX] = C·M[X].
  • The mathematical expectation of the sum of random variables is equal to the sum of their mathematical expectations: M[X + Y] = M[X] + M[Y].
  • The mathematical expectation of the product of independent random variables is equal to the product of their mathematical expectations: M[XY] = M[X]·M[Y], if X and Y are independent.
  • For a discrete random variable the expectation exists if the series Σ x_i p_i converges absolutely.

Algorithm for calculating the mathematical expectation.

Properties of discrete random variables: all their values can be numbered by natural numbers, and each value is associated with a nonzero probability.

1. Multiply each pair in turn: x_i by p_i.

2. Sum the products x_i p_i over all pairs.

For example, for n = 4:

The distribution function of a discrete random variable is stepwise; it increases abruptly at those points whose probabilities are positive.

Example: find the expected value by this formula.
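As a worked example, here is the two-step algorithm for n = 4; the value/probability pairs are illustrative assumptions, not taken from the text:

```python
# Step 1: multiply each pair x_i * p_i; step 2: sum the products.
# The four pairs below are made up for illustration (probabilities sum to 1).
xs = [1, 2, 3, 4]
ps = [0.1, 0.2, 0.3, 0.4]

m = sum(x * p for x, p in zip(xs, ps))
print(round(m, 10))  # 3.0
```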

Expected value is the average value of a random variable (of the probability distribution of a stationary random variable) when the number of samples or measurements (sometimes called the number of trials) tends to infinity.

The arithmetic mean of a one-dimensional random variable over a finite number of trials is usually called an estimate of the mathematical expectation. As the number of trials of a stationary random process tends to infinity, this estimate tends to the mathematical expectation.

Expectation is one of the basic concepts in probability theory.


Definition

Let a probability space (Ω, 𝔄, P) be given and a random variable X defined on it; that is, by definition, X: Ω → ℝ is a measurable function. If the Lebesgue integral of X over the space Ω exists, then it is called the mathematical expectation, or mean (expected) value, and is denoted M[X] or E[X]:

M[X] = \int\limits_{\Omega} X(\omega)\, \mathbb{P}(d\omega).

Basic formulas for mathematical expectation

M[X] = \int\limits_{-\infty}^{\infty} x\, dF_X(x), \quad x \in \mathbb{R}.

The mathematical expectation of a discrete distribution

If X has a discrete distribution,

P(X = x_i) = p_i, \quad \sum_{i=1}^{\infty} p_i = 1,

then it follows directly from the definition of the Lebesgue integral that

M[X] = \sum_{i=1}^{\infty} x_i\, p_i.

The expected value of an integer-valued random variable

If X is a random variable taking non-negative integer values,

P(X = j) = p_j, \quad j = 0, 1, \ldots; \quad \sum_{j=0}^{\infty} p_j = 1,

then its mathematical expectation can be expressed in terms of the generating function of the sequence (p_j),

P(s) = \sum_{k=0}^{\infty} p_k s^k,

as the value of the first derivative at unity: M[X] = P'(1). If the mathematical expectation of X is infinite, then \lim_{s \to 1} P'(s) = \infty, and we write P'(1) = M[X] = \infty.

Now take the generating function Q(s) of the tail sequence (q_k) of the distribution:

q_k = P(X > k) = \sum_{j=k+1}^{\infty} p_j; \quad Q(s) = \sum_{k=0}^{\infty} q_k s^k.

This generating function is related to the previously defined P(s) by the property Q(s) = \frac{1 - P(s)}{1 - s} for |s| < 1. From this it follows that the mathematical expectation is simply equal to the value of this function at unity:

M[X] = P'(1) = Q(1).
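As a sketch, both routes to M[X] can be checked numerically for a fair die (the die is my example, not the text's):

```python
# Expectation of a fair die via its generating function:
# M[X] = P'(1) = sum(k * p_k), and equivalently
# M[X] = Q(1) = sum over k >= 0 of P(X > k).
probs = {k: 1 / 6 for k in range(1, 7)}

p_prime_at_1 = sum(k * p for k, p in probs.items())                 # P'(1)
q_at_1 = sum(sum(p for j, p in probs.items() if j > k) for k in range(6))

print(round(p_prime_at_1, 10), round(q_at_1, 10))  # 3.5 3.5
```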

The expectation of an absolutely continuous distribution

M[X] = \int\limits_{-\infty}^{\infty} x\, f_X(x)\, dx.

The mathematical expectation of a random vector

Let X = (X_1, \ldots, X_n)^{\top} \colon \Omega \to \mathbb{R}^n be a random vector. Then, by definition,

M[X] = (M[X_1], \ldots, M[X_n])^{\top},

that is, the mathematical expectation of a vector is determined componentwise.
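The componentwise definition can be checked directly on a toy discrete joint distribution; the point masses below are made up purely for illustration.

```python
# Hedged sketch: the expectation of a random vector is computed
# componentwise. The two-dimensional point masses below are illustrative.
points = [((0, 1), 0.1), ((1, 1), 0.2), ((2, 3), 0.3), ((4, 0), 0.4)]

# M[X] = (M[X_1], M[X_2]); each component is its own weighted sum
mean_vec = tuple(sum(p * x[i] for x, p in points) for i in range(2))
# here mean_vec is approximately (2.4, 1.2)
```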

The mathematical expectation of the transformation of a random variable

Let $g \colon \mathbb{R} \to \mathbb{R}$ be a Borel function such that the random variable $Y = g(X)$ has a finite mathematical expectation. Then the following formulas are valid:

$$M[g(X)] = \sum_{i=1}^{\infty} g(x_i)\,p_i,$$

if $X$ has a discrete distribution;

$$M[g(X)] = \int_{-\infty}^{\infty} g(x)\,f_X(x)\,dx,$$

if $X$ has an absolutely continuous distribution.

If the distribution $\mathbb{P}^X$ of the random variable $X$ is of general form, then

$$M[g(X)] = \int_{-\infty}^{\infty} g(x)\,\mathbb{P}^X(dx).$$

In the special case when $g(X) = X^k$, the expected value $M[g(X)] = M[X^k]$ is called the $k$-th moment of the random variable $X$.
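As a small worked instance of the discrete formula, one can compute the first and second moments of a fair six-sided die exactly; the die itself is just an illustrative choice.

```python
# Hedged sketch: M[g(X)] = sum_i g(x_i) p_i for a fair six-sided die with
# g(x) = x**2, i.e. the second moment M[X^2]. Exact arithmetic via Fraction.
from fractions import Fraction

p = Fraction(1, 6)  # each face is equally likely
mean = sum(Fraction(x) * p for x in range(1, 7))              # M[X] = 7/2
second_moment = sum(Fraction(x**2) * p for x in range(1, 7))  # M[X^2] = 91/6
```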

Simplest properties of mathematical expectation

  • The mathematical expectation of a number is the number itself: $M[a] = a$ for any constant $a \in \mathbb{R}$.
  • The mathematical expectation is linear:
$$M[aX + bY] = a\,M[X] + b\,M[Y],$$
where $X$ and $Y$ are random variables with finite mathematical expectation and $a, b \in \mathbb{R}$ are arbitrary constants.
  • The mathematical expectation is monotone: if $0 \leqslant X \leqslant Y$ almost surely, then $0 \leqslant M[X] \leqslant M[Y]$; in particular, if $X = Y$ almost surely, then $M[X] = M[Y]$.
  • If $X$ and $Y$ are independent random variables with finite mathematical expectation, then $M[XY] = M[X]\,M[Y]$.
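These properties can be verified exhaustively on a small example, say two independent fair dice; the constants $a$ and $b$ below are arbitrary illustrative choices.

```python
# Hedged sketch: checking linearity and M[XY] = M[X] M[Y] on two
# independent fair dice; a and b are arbitrary illustrative constants.
from fractions import Fraction
from itertools import product

pairs = list(product(range(1, 7), repeat=2))
p = Fraction(1, 36)   # each of the 36 outcomes is equally likely
a, b = 3, -2

E_X = sum(Fraction(x) * p for x, y in pairs)
E_Y = sum(Fraction(y) * p for x, y in pairs)
E_lin = sum(Fraction(a * x + b * y) * p for x, y in pairs)
E_XY = sum(Fraction(x * y) * p for x, y in pairs)

# E_lin equals a*E_X + b*E_Y, and E_XY equals E_X*E_Y by independence
```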