From Wikipedia
In probability and statistics, a random variable or stochastic variable is a variable whose value is not known. Its possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the potential values of a quantity whose already-existing value is uncertain (e.g., as a result of incomplete information or imprecise measurements). Intuitively, a random variable can be thought of as a quantity whose value is not fixed, but which can take on different values; a probability distribution is used to describe the probabilities of different values occurring. Realizations of a random variable are called random variates.
Random variables are usually real-valued, but one can consider arbitrary types such as boolean values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, functions, and processes. The term random element is used to encompass all such related concepts. A related concept is the stochastic process, a set of indexed random variables (typically indexed by time or space).
Introduction
Real-valued random variables (those whose range is the real numbers) are used in the sciences to make predictions based on data obtained from scientific experiments. In addition to scientific applications, random variables were developed for the analysis of games of chance and stochastic events. In such instances, the function that maps the outcome to a real number is often the identity function or a similarly trivial function, and not explicitly described. In many cases, however, it is useful to consider random variables that are functions of other random variables, and then the mapping function included in the definition of a random variable becomes important. As an example, the square of a random variable distributed according to a standard normal distribution is itself a random variable, with a chi-square distribution. One way to think of this is to imagine generating a large number of samples from a standard normal distribution, squaring each one, and plotting a histogram of the values observed. With enough samples, the graph of the histogram will approximate the density function of a chi-square distribution with one degree of freedom.
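The squaring experiment described above is easy to reproduce. The following is a minimal sketch in Python (the sample size and seed are arbitrary choices): draw standard-normal samples, square them, and check that the sample mean and variance match the chi-square distribution with one degree of freedom (mean 1, variance 2).

```python
import random

random.seed(0)

# Square many draws from a standard normal; the squares follow a
# chi-square distribution with 1 degree of freedom, whose mean is 1
# and whose variance is 2.
n = 100_000
squared = [random.gauss(0.0, 1.0) ** 2 for _ in range(n)]

mean = sum(squared) / n
var = sum((x - mean) ** 2 for x in squared) / n

print(round(mean, 2))  # close to 1.0
print(round(var, 2))   # close to 2.0
```

A histogram of `squared` would likewise approximate the chi-square density, as the text describes.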
Another example is the sample mean, which is the average of a number of samples. When these samples are independent observations of the same random event they can be called independent identically distributed random variables. Since each sample is a random variable, the sample mean is a function of random variables and hence a random variable itself, whose distribution can be computed and properties determined.
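The sample mean can be simulated the same way. Here is a small sketch (the die-roll setup and sizes are arbitrary choices) showing that the mean of 30 fair die rolls is itself a random variable concentrated around the population mean 3.5:

```python
import random

random.seed(1)

# The average of 30 independent die rolls is itself a random variable.
def sample_mean(n_rolls: int) -> float:
    return sum(random.randint(1, 6) for _ in range(n_rolls)) / n_rolls

# Realize that random variable many times; the realizations cluster
# around the population mean 3.5.
means = [sample_mean(30) for _ in range(10_000)]
grand_mean = sum(means) / len(means)
print(round(grand_mean, 2))  # close to 3.5
```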
One of the reasons that real-valued random variables are so commonly considered is that the expected value (a type of average) and variance (a measure of the "spread", or extent to which the values are dispersed) of the variable can be computed.
There are two types of random variables: discrete and continuous. A discrete random variable maps outcomes to values of a countable set (e.g., the integers), with each value in the range having probability greater than or equal to zero. A continuous random variable maps outcomes to values of an uncountable set (e.g., the real numbers). For a continuous random variable, the probability of any specific value is zero, whereas the probability of some infinite set of values (such as an interval of nonzero length) may be positive. A random variable can be "mixed", with part of its probability spread out over an interval like a typical continuous variable, and part of it concentrated on particular values like a discrete variable. These classifications are equivalent to the categorization of probability distributions.
The expected value of random vectors, random matrices, and similar aggregates of fixed structure is defined as the aggregation of the expected value computed over each individual element. The concept of "variance of a random vector" is normally expressed through a covariance matrix. No generally agreed-upon definition of expected value or variance exists for cases other than those just discussed.
Examples
The possible outcomes for one coin toss can be described by the state space \Omega = {heads, tails}. We can introduce a realvalued random variable Y as follows:
Y(\omega) = \begin{cases} 1, & \text{if} \ \ \omega = \text{heads} ,\\ 0, & \text{if} \ \ \omega = \text{tails} . \end{cases}
If the coin is equally likely to land on either side then it has a probability mass function given by:
 \rho_Y(y) = \begin{cases}\frac{1}{2},& \text{if }y=1,\\
\frac{1}{2},& \text{if }y=0.\end{cases}
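The coin-toss random variable Y and its probability mass function can be checked by simulation; this is a sketch with an arbitrary seed and sample count:

```python
import random

random.seed(42)

# Y maps the outcome space {heads, tails} to the real numbers.
def Y(outcome: str) -> int:
    return 1 if outcome == "heads" else 0

# For a fair coin, the estimated mass at y = 1 should be near 1/2.
n = 100_000
values = [Y(random.choice(["heads", "tails"])) for _ in range(n)]
p_heads = sum(values) / n
print(round(p_heads, 2))  # near 0.5
```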
In probability theory, a continuous probability distribution is a probability distribution which possesses a probability density function. Mathematicians also call such distributions absolutely continuous, since the cumulative distribution function is absolutely continuous with respect to the Lebesgue measure λ. If the distribution of X is continuous, then X is called a continuous random variable. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others.
Intuitively, a continuous random variable is one which can take a continuous range of values, as opposed to a discrete distribution, where the set of possible values for the random variable is at most countable. While for a discrete distribution an event with probability zero is impossible (e.g. rolling 3½ on a standard die is impossible, and has probability zero), this is not so in the case of a continuous random variable. For example, if one measures the width of an oak leaf, a result of 3½ cm is possible; however, it has probability zero because there are infinitely many other potential values even between 3 cm and 4 cm. Each of these individual outcomes has probability zero, yet the probability that the outcome will fall into some interval is nonzero. This apparent paradox is resolved by the fact that the probability that X attains some value within an infinite set, such as an interval, cannot be found by naively adding the probabilities for individual values. Formally, each value has an infinitesimally small probability, which statistically is equivalent to zero.
Formally, if X is a continuous random variable, then it has a probability density function f(x), and therefore its probability to fall into a given interval, say [a, b], is given by the integral
\Pr[a\le X\le b] = \int_a^b f(x) \, dx
In particular, the probability for X to take any single value a (that is, \Pr[X = a]) is zero, because an integral with coinciding upper and lower limits is always equal to zero.
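The integral above can be evaluated numerically. Here is a sketch for the standard normal density (the midpoint rule and step count are arbitrary choices), which also shows that an interval of zero length has probability zero:

```python
import math

# Standard normal density.
def normal_pdf(x: float) -> float:
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Approximate Pr[a <= X <= b] with the midpoint rule.
def prob_interval(a: float, b: float, steps: int = 10_000) -> float:
    h = (b - a) / steps
    return sum(normal_pdf(a + (i + 0.5) * h) for i in range(steps)) * h

approx = prob_interval(-1.0, 1.0)
exact = math.erf(1 / math.sqrt(2))  # closed form for Pr[-1 <= X <= 1]
print(round(approx, 4))  # ~0.6827

# A single point is an interval with coinciding limits: probability 0.
print(prob_interval(1.0, 1.0))  # 0.0
```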
The definition states that a continuous probability distribution must possess a density, or, equivalently, that its cumulative distribution function must be absolutely continuous. This requirement is stronger than simple continuity of the cdf, and there is a special class of distributions, singular distributions, which are neither continuous nor discrete nor a mixture of the two. An example is given by the Cantor distribution. Such singular distributions, however, are never encountered in practice.
Note on terminology: some authors use the term "continuous distribution" to denote distributions with a continuous cdf. Thus, their definition includes both the (absolutely) continuous and singular distributions.
From Yahoo Answers
Answers: Personally, I would show a bar only where you have data (at 1, 2 and 5), make those bars 1 wide, and show nothing where you have no data.
Answers: Assuming X is exponential with mean θ, so f(x) = (1/θ)e^(-x/θ): we want k with P{0 <= θ <= kX} = 1 - α, i.e. P{θ/k <= X} = 1 - α. Now P{θ/k <= X} is the integral from θ/k to infinity of f(x). Do the u-substitution u = x/θ and we get the integral from 1/k to infinity of e^(-u) du = e^(-1/k). That means e^(-1/k) = 1 - α => -1/k = ln(1 - α) => k = -1/ln(1 - α).
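Under the assumption that the symbols dropped from the answer above stand for an exponential X with mean θ and a confidence level 1 - α, the claimed constant k = -1/ln(1 - α) can be checked by simulation (θ, α, seed, and sample count below are all arbitrary choices):

```python
import math
import random

random.seed(7)

alpha = 0.05
# Claimed solution of P(theta <= k*X) = 1 - alpha.
k = -1.0 / math.log(1.0 - alpha)

# Monte Carlo check: draw X ~ Exponential(mean = theta) and count how
# often theta <= k*X; the fraction should be near 1 - alpha.
theta = 2.0  # hypothetical true mean, used only for the simulation
n = 100_000
covered = sum(1 for _ in range(n)
              if theta <= k * random.expovariate(1.0 / theta))
print(round(covered / n, 2))  # near 0.95
```

Analytically, P(theta <= k*X) = e^(-1/k) = e^(ln(1 - alpha)) = 1 - alpha, so the simulated fraction matches the chosen confidence level regardless of theta.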
Answers:
Q: The new Twinkle bulb has a standard deviation hours. A random sample of 50 light bulbs is selected from inventory. The sample mean was found to be 500 hours. Find the margin of error E for a 95% confidence interval.
ANSWER: Indeterminate. Why? The margin of error is a function of three variables:
Margin of Error = (z-factor) * Sample Standard Deviation / SQRT(Sample Size)
The z-factor is a table lookup for the specified confidence level [95%] and the sample size [50] is known, but the third variable, the sample (or population) standard deviation, is not stated and cannot be assumed. The computation of the margin of error is therefore indeterminate.
3. ANSWER: Sample Size = 43. Why? Small sample, stated level of confidence, normal population distribution.
- Margin of error (half the width of the confidence interval): 3
- Level of confidence: 95%
- s = sample standard deviation: 10
- z critical value from lookup table for 95%: 1.96
- Margin of Error = (z critical value) * s / SQRT(n); solving for n gives Sample Size n = 43.
4.a. ANSWER: Margin of error is 9.6. Why? The margin of error is defined as the "radius" (half the width) of the confidence interval [180.42, 199.59]. Small sample, confidence interval, normal population distribution.
- xbar = sample mean: 190
- s = sample standard deviation: 18
- n = number of samples: 16
- df = degrees of freedom: 15
- Confidence level: 95%
- t critical value from lookup table (central two-sided area = 95%, df = 15): 2.13. Another lookup method is the Microsoft Excel function TINV(probability, degrees_freedom), which returns the inverse of the Student's t-distribution.
- Resulting confidence interval for the true mean: xbar +/- (t critical value) * s / SQRT(n) = 190 +/- 2.13 * 18 / SQRT(16) = [180.42, 199.59]
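The arithmetic of parts 3 and 4a above can be reproduced directly; a sketch (the critical values 1.96 and 2.13 are the table lookups quoted in the answer):

```python
import math

# Part 3: sample size for margin of error E = 3 at 95% confidence,
# with s = 10 and z critical value 1.96: n = ceil((z*s/E)^2).
z, s, E = 1.96, 10.0, 3.0
n_required = math.ceil((z * s / E) ** 2)
print(n_required)  # 43

# Part 4a: 95% CI for the mean with xbar = 190, s = 18, n = 16,
# and t critical value 2.13 (df = 15).
t, xbar, s2, n2 = 2.13, 190.0, 18.0, 16
margin = t * s2 / math.sqrt(n2)
print(round(margin, 1))  # 9.6
print(xbar - margin, xbar + margin)  # close to [180.42, 199.59]
```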
Answers: There are three requirements necessary for continuity to hold for a function f(x) at x = c:
1) lim_{x -> c} f(x) exists.
2) f(c) exists.
3) The numbers in 1) and 2) are equal.
Condition 1) is equivalent to saying that the left- and right-hand limits exist and are equal. So, for
f(x) = |x| + 3 if x < 0; 2x + 1 if x = 0; (x+2)^2 - 1 if x > 0,
at c = 0 we have
lim_{x -> 0-} f(x) = lim_{x -> 0-} (|x| + 3) = 0 + 3 = 3,
lim_{x -> 0+} f(x) = lim_{x -> 0+} ((x+2)^2 - 1) = (0+2)^2 - 1 = 4 - 1 = 3.
They are equal, thus lim_{x -> 0} f(x) exists and equals 3. But f(0) = 2(0) + 1 = 1, so lim_{x -> 0} f(x) does not equal f(0), and f is not continuous there. Try the same process with your second one. There, observe that if x is not pi/4, the function is continuous, as cos^2 x is continuous everywhere, and hence on the interval x < pi/4. Ditto for the sine squared. Hope this helps.
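The worked piecewise example can be checked numerically by evaluating f just to the left and right of 0 and at 0 itself; a minimal sketch:

```python
# The piecewise function from the worked example.
def f(x: float) -> float:
    if x < 0:
        return abs(x) + 3
    if x == 0:
        return 2 * x + 1
    return (x + 2) ** 2 - 1

# Both one-sided limits at 0 are 3, but f(0) = 1, so f is not
# continuous at 0.
eps = 1e-9
print(round(f(-eps), 6))  # 3.0
print(round(f(eps), 6))   # 3.0
print(f(0))               # 1
```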