

From Wikipedia

Random variable

In probability and statistics, a random variable or stochastic variable is a variable whose value is not known. Its possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the potential values of a quantity whose already-existing value is uncertain (e.g., as a result of incomplete information or imprecise measurements). Intuitively, a random variable can be thought of as a quantity whose value is not fixed, but which can take on different values; a probability distribution is used to describe the probabilities of different values occurring. Realizations of a random variable are called random variates.

Random variables are usually real-valued, but one can consider arbitrary types such as boolean values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, functions, and processes. The term random element is used to encompass all such related concepts. A related concept is the stochastic process, a set of indexed random variables (typically indexed by time or space).


Real-valued random variables (those whose range is the real numbers) are used in the sciences to make predictions based on data obtained from scientific experiments. In addition to scientific applications, random variables were developed for the analysis of games of chance and stochastic events. In such instances, the function that maps the outcome to a real number is often the identity function or similarly trivial function, and not explicitly described. In many cases, however, it is useful to consider random variables that are functions of other random variables, and then the mapping function included in the definition of a random variable becomes important. As an example, the square of a random variable distributed according to a standard normal distribution is itself a random variable, with a chi-square distribution. One way to think of this is to imagine generating a large number of samples from a standard normal distribution, squaring each one, and plotting a histogram of the values observed. With enough samples, the graph of the histogram will approximate the density function of a chi-square distribution with one degree of freedom.
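The squared-normal example above can be checked numerically. This is a minimal sketch: it draws standard-normal samples, squares them, and compares the empirical distribution against the chi-square distribution with one degree of freedom, whose CDF has the closed form P(X ≤ x) = erf(√(x/2)).

```python
import random
import math

# Square standard-normal samples and check that the resulting values
# follow a chi-square distribution with 1 degree of freedom.
random.seed(0)
n = 100_000
squares = [random.gauss(0, 1) ** 2 for _ in range(n)]

# The chi-square(1) CDF in closed form: P(X <= x) = erf(sqrt(x / 2)).
def chi2_1_cdf(x):
    return math.erf(math.sqrt(x / 2))

# Compare the empirical CDF with the theoretical one at a few points.
for x in (0.5, 1.0, 2.0):
    empirical = sum(1 for s in squares if s <= x) / n
    print(f"x={x}: empirical={empirical:.3f}, theoretical={chi2_1_cdf(x):.3f}")
```

With 100,000 samples the empirical and theoretical values agree to about two decimal places, which is the histogram comparison described above expressed as a CDF check.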

Another example is the sample mean, which is the average of a number of samples. When these samples are independent observations of the same random event they can be called independent identically distributed random variables. Since each sample is a random variable, the sample mean is a function of random variables and hence a random variable itself, whose distribution can be computed and properties determined.
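The sample mean being a random variable in its own right can also be illustrated by simulation. The sketch below repeats an experiment (averaging 30 i.i.d. uniform(0, 1) draws) many times; the resulting collection of means has its own computable distribution, with expected value 0.5 and variance Var(X)/n = (1/12)/30.

```python
import random
import statistics

# The sample mean of n i.i.d. uniform(0, 1) draws is itself a random
# variable; repeating the experiment reveals its distribution.
random.seed(1)
n, trials = 30, 10_000
means = [statistics.fmean(random.random() for _ in range(n))
         for _ in range(trials)]

# Theory: E[mean] = 0.5 and Var(mean) = Var(X) / n = (1/12) / 30 = 1/360.
print(statistics.fmean(means))      # close to 0.5
print(statistics.pvariance(means))  # close to 1/360
```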

One of the reasons that real-valued random variables are so commonly considered is that the expected value (a type of average) and variance (a measure of the "spread", or extent to which the values are dispersed) of the variable can be computed.

The two most common types of random variables are discrete and continuous. A discrete random variable maps outcomes to values of a countable set (e.g., the integers), with each value in the range having probability greater than or equal to zero. A continuous random variable maps outcomes to values of an uncountable set (e.g., the real numbers). For a continuous random variable, the probability of any specific value is zero, whereas the probability of some infinite set of values (such as an interval of non-zero length) may be positive. A random variable can also be "mixed", with part of its probability spread out over an interval like a typical continuous variable, and part of it concentrated on particular values like a discrete variable. These classifications are equivalent to the categorization of probability distributions.

The expected value of random vectors, random matrices, and similar aggregates of fixed structure is defined as the aggregation of the expected value computed over each individual element. The concept of "variance of a random vector" is normally expressed through a covariance matrix. No generally-agreed-upon definition of expected value or variance exists for cases other than those just discussed.


The possible outcomes for one coin toss can be described by the state space \Omega = \{\text{heads}, \text{tails}\}. We can introduce a real-valued random variable Y as follows:

Y(\omega) = \begin{cases} 1, & \text{if} \ \ \omega = \text{heads} ,\\ 0, & \text{if} \ \ \omega = \text{tails} . \end{cases}

If the coin is equally likely to land on either side then it has a probability mass function given by:

\rho_Y(y) = \begin{cases} \frac{1}{2}, & \text{if } y = 1, \\ \frac{1}{2}, & \text{if } y = 0. \end{cases}
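The coin-toss random variable and its probability mass function translate directly into code. This sketch defines Y on the state space {heads, tails}, records its pmf, and checks the pmf against simulated tosses of a fair coin.

```python
import random

# The random variable Y maps "heads" to 1 and "tails" to 0; for a fair
# coin its probability mass function assigns 1/2 to each value.
def Y(omega):
    return 1 if omega == "heads" else 0

pmf = {1: 0.5, 0: 0.5}

random.seed(2)
tosses = [random.choice(["heads", "tails"]) for _ in range(10_000)]
freq_1 = sum(Y(w) for w in tosses) / len(tosses)
print(freq_1)  # empirical frequency of Y = 1, close to pmf[1] = 0.5
```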

Discrete probability distribution

In probability theory and statistics, a discrete probability distribution is a probability distribution characterized by a probability mass function. Thus, the distribution of a random variable X is discrete, and X is then called a discrete random variable, if

\sum_u \Pr(X=u) = 1

as u runs through the set of all possible values of X. It follows that such a random variable can assume only a finite or countably infinite number of values. That is, the possible values might be listed, although the list might be infinite. For example, count observations such as the numbers of birds in flocks comprise only natural number values {0, 1, 2, ...}. By contrast, continuous observations such as the weights of birds comprise real number values and would typically be modeled by a continuous probability distribution such as the normal.
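The normalization condition above can be verified numerically for a concrete count distribution. The sketch below uses the Poisson pmf (a standard model for count observations like the flock sizes mentioned above) and checks that its probabilities sum to 1 over the countably infinite support, truncating the sum at a large value of k.

```python
import math

# Poisson probability mass function: Pr(X = k) = e^{-lam} * lam^k / k!
def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

# Sum Pr(X = u) as u runs through the possible values; truncating the
# countably infinite sum at k = 99 leaves only negligible tail mass.
lam = 4.0
total = sum(poisson_pmf(k, lam) for k in range(100))
print(total)  # approximately 1.0, up to truncation error
```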

In the cases most frequently considered, this set of possible values is a topologically discrete set in the sense that all its points are isolated points. But there are discrete random variables for which this countable set is dense on the real line (for example, a distribution over the rational numbers).

Among the most well-known discrete probability distributions that are used for statistical modeling are the Poisson distribution, the Bernoulli distribution, the binomial distribution, the geometric distribution, and the negative binomial distribution. In addition, the discrete uniform distribution is commonly used in computer programs that make equal-probability random selections between a number of choices.
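The equal-probability selection described in the last sentence is a one-liner in most languages. This sketch uses Python's `random.randrange` to draw from a discrete uniform distribution over six choices and confirms that each choice appears with frequency close to 1/6.

```python
import random
from collections import Counter

# random.randrange(6) is a discrete uniform selection among the six
# equally likely values 0..5, as commonly used in computer programs.
random.seed(3)
draws = [random.randrange(6) for _ in range(60_000)]
counts = Counter(draws)
for face in range(6):
    print(face, counts[face] / len(draws))  # each close to 1/6 ~ 0.167
```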

Alternative description

Equivalently to the above, a discrete random variable can be defined as a random variable whose cumulative distribution function (cdf) increases only by jump discontinuities—that is, its cdf increases only where it "jumps" to a higher value, and is constant between those jumps. The points where jumps occur are precisely the values which the random variable may take. The number of such jumps may be finite or countably infinite. The set of locations of such jumps need not be topologically discrete; for example, the cdf might jump at each rational number.

Consequently, a discrete probability distribution is often represented as a generalized probability density function involving Dirac delta functions, which substantially unifies the treatment of continuous and discrete distributions. This is especially useful when dealing with probability distributions involving both a continuous and a discrete part.

Representation in terms of indicator functions

For a discrete random variable X, let u_0, u_1, \dots be the values it can take with non-zero probability. Denote

\Omega_i=\{\omega: X(\omega)=u_i\},\, i=0, 1, 2, \dots

These are disjoint sets, and by the normalization formula above

\Pr\left(\bigcup_i \Omega_i\right)=\sum_i \Pr(\Omega_i)=\sum_i\Pr(X=u_i)=1.

It follows that the probability that X takes any value other than u_0, u_1, \dots is zero, and thus one can write X as

X=\sum_i u_i 1_{\Omega_i}

except on a set of probability zero, where 1_A is the indicator function of A. This may serve as an alternative definition of discrete random variables.
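The indicator-function representation above can be made concrete. This is a minimal sketch using a fair six-sided die as the sample space: X(ω) is reconstructed as the sum over the values u_i of u_i times the indicator of the event Ω_i = {ω : X(ω) = u_i}.

```python
# The values u_i that X can take with non-zero probability: here the
# outcomes of a six-sided die, with X the identity map on outcomes.
values = [1, 2, 3, 4, 5, 6]

def indicator(omega, u):
    # 1_{Omega_i}(omega): 1 if omega lies in the event {X = u}, else 0
    return 1 if omega == u else 0

def X(omega):
    # X(omega) = sum_i u_i * 1_{Omega_i}(omega)
    return sum(u * indicator(omega, u) for u in values)

# The representation reproduces X exactly on every outcome.
for omega in values:
    assert X(omega) == omega
print("indicator representation verified")
```

Because the events Ω_i are disjoint, exactly one indicator is non-zero for each ω, so the sum picks out the single value u_i that X actually takes.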

From Yahoo Answers

Question:A) # of petals on a randomly chosen daisy. B) Stem length in cm of a random daisy C) # of daisies found in a randomly chosen grassy area 1 sq meter in size. D) average number of petals per daisy computed from all the daisies found in a randomly chosen grassy area 1 sq meter in size. I think A) is discrete and B) is continuous, but what about the rest and why? Thank you.

Answers:You were correct with A and B. C is discrete (you can't have a fraction of a daisy, only a whole number). D is continuous (the average could be any number).

Question:Consider a die with the numbers 1, 2, 3, 4, 5, and 6 that is constructed so that the probability of each odd face is three times the probability of each even face. Find the probability distribution of the random variable X which denotes the value showing on the up face. Please explain your answer so I can learn this material. Thanks!

Answers:I'm sure you already understand that the probabilities in a distribution must add up to one. Since this is a die with uneven probabilities, let's drop the die analogy and pretend it's a spinner with 12 equally likely sections, labeled 1, 1, 1, 2, 3, 3, 3, 4, 5, 5, 5, 6 (three copies of each odd number, to give odd numbers three times the chance). The probability distribution gives the chance of each value turning up. With 12 sections in all: P(1) = 3/12 = 1/4, P(2) = 1/12, P(3) = 3/12 = 1/4, P(4) = 1/12, P(5) = 3/12 = 1/4, P(6) = 1/12. Total: 12/12 = 1. This satisfies both requirements: each odd face has three times the probability of each even face, and the total probability adds up to one.
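The weighted-die distribution from this answer is easy to sanity-check by simulation. The sketch below encodes the pmf (odd faces 1/4 each, even faces 1/12 each), verifies it sums to one, and draws weighted samples to compare frequencies against the pmf.

```python
import random
from collections import Counter

# Each odd face is three times as likely as each even face:
# P(odd face) = 3/12 = 1/4, P(even face) = 1/12.
pmf = {1: 3/12, 2: 1/12, 3: 3/12, 4: 1/12, 5: 3/12, 6: 1/12}
assert abs(sum(pmf.values()) - 1.0) < 1e-12  # probabilities sum to one

random.seed(4)
rolls = random.choices(list(pmf), weights=list(pmf.values()), k=120_000)
counts = Counter(rolls)
print(counts[1] / len(rolls))  # close to 1/4
print(counts[2] / len(rolls))  # close to 1/12
```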

Question:I got this question wrong on a test that I took for an online statistics class I am taking. Since the class is online, the teacher does not explain why the answers are wrong unless you go to her office during office hours, which I would prefer not to do for one simple question. I know the definition of a discrete variable, but when given real-life examples I get so confused. Can someone please tell me what they believe the answer to be and why? The choices are as follows (there can be more than one answer): a. the weight of a randomly selected bag of onions b. the number of dogs in a dog show c. the volume of gas in my car d. the number of animals in the zoo Thanks!

Answers:a) continuous b) discrete c) continuous d) discrete. In real life, discrete is something you can have a whole number of, like how it's not possible to have half a dog. Discrete is something you can count one by one, whereas continuous has infinite possibilities, like 1 kg, 1.1 kg, 1.2 kg, etc.

Question:Is a discrete random variable defined by a probability density function? And is a continuous random variable defined by just a probability distribution? Thanks Vikram!

Answers:A discrete random variable is one which may take on only a countable number of distinct values such as 0, 1, 2, 3, 4, ... Discrete random variables are usually (but not necessarily) counts. If a random variable can take only a finite number of distinct values, then it must be discrete. Examples of discrete random variables include the number of children in a family, the Friday night attendance at a cinema, the number of patients in a doctor's surgery, and the number of defective light bulbs in a box of ten. A continuous random variable is one which takes an infinite number of possible values. Continuous random variables are usually measurements. Examples include height, weight, the amount of sugar in an orange, and the time required to run a mile. If you want a more detailed answer, then visit http://en.wikipedia.org/wiki/Random_variable

From Youtube

Statistics: Random Variables (Discrete or Continuous): Watch more free lectures and examples of Statistics at www.educator.com. Other subjects include Algebra, Trigonometry, Calculus, Biology, Chemistry, Physics, and Computer Science. All lectures are broken down by individual topics, so you can search and jump directly to the answer.

Statistics: Probability Distribution of a Discrete Random Variable: Watch more free lectures and examples of Statistics at www.educator.com.