

From Wikipedia

Random variable

In probability and statistics, a random variable or stochastic variable is a variable whose value is not known. Its possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the potential values of a quantity whose already-existing value is uncertain (e.g., as a result of incomplete information or imprecise measurements). Intuitively, a random variable can be thought of as a quantity whose value is not fixed, but which can take on different values; a probability distribution is used to describe the probabilities of different values occurring. Realizations of a random variable are called random variates.

Random variables are usually real-valued, but one can consider arbitrary types such as Boolean values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, functions, and processes. The term random element is used to encompass all such related concepts. A related concept is the stochastic process, a set of indexed random variables (typically indexed by time or space).

Introduction

Real-valued random variables (those whose range is the real numbers) are used in the sciences to make predictions based on data obtained from scientific experiments. In addition to scientific applications, random variables were developed for the analysis of games of chance and stochastic events. In such instances, the function that maps the outcome to a real number is often the identity function or similarly trivial function, and not explicitly described. In many cases, however, it is useful to consider random variables that are functions of other random variables, and then the mapping function included in the definition of a random variable becomes important. As an example, the square of a random variable distributed according to a standard normal distribution is itself a random variable, with a chi-square distribution. One way to think of this is to imagine generating a large number of samples from a standard normal distribution, squaring each one, and plotting a histogram of the values observed. With enough samples, the graph of the histogram will approximate the density function of a chi-square distribution with one degree of freedom.
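To make this concrete, here is a minimal simulation sketch (not from the original text; it assumes NumPy, SciPy, and Matplotlib): square a large number of standard-normal draws and compare the histogram to the chi-square density with one degree of freedom.

```python
# Sketch: Z^2 for Z ~ N(0, 1) follows a chi-square(1) distribution.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
samples = rng.standard_normal(100_000) ** 2   # square each standard-normal draw

x = np.linspace(0.01, 6, 400)
plt.hist(samples, bins=200, range=(0, 6), density=True, alpha=0.5,
         label="histogram of Z^2")
plt.plot(x, stats.chi2.pdf(x, df=1), label="chi-square(1) density")
plt.legend()
plt.show()
```

With enough samples the histogram closely traces the chi-square(1) density, just as the paragraph above describes.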

Another example is the sample mean, which is the average of a number of samples. When these samples are independent observations of the same random event they can be called independent identically distributed random variables. Since each sample is a random variable, the sample mean is a function of random variables and hence a random variable itself, whose distribution can be computed and properties determined.
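A small sketch of this idea (the exponential population below is chosen purely for illustration; NumPy is assumed): repeating the sampling many times traces out the distribution of the sample mean, whose variance shrinks like sigma^2/n.

```python
# Sketch: the sample mean of n i.i.d. draws is itself a random variable.
import numpy as np

rng = np.random.default_rng(1)
n, trials = 30, 10_000
# each row is one experiment: n i.i.d. exponential observations (mean 2, variance 4)
data = rng.exponential(scale=2.0, size=(trials, n))
means = data.mean(axis=1)      # 10,000 realizations of the sample mean

print(means.mean())            # close to the population mean, 2.0
print(means.var())             # close to sigma^2 / n = 4 / 30 ≈ 0.133
```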

One of the reasons that real-valued random variables are so commonly considered is that the expected value (a type of average) and variance (a measure of the "spread", or extent to which the values are dispersed) of the variable can be computed.

There are two types of random variables: discrete and continuous. A discrete random variable maps outcomes to values of a countable set (e.g., the integers), with each value in the range having probability greater than or equal to zero. A continuous random variable maps outcomes to values of an uncountable set (e.g., the real numbers). For a continuous random variable, the probability of any specific value is zero, whereas the probability of some infinite set of values (such as an interval of non-zero length) may be positive. A random variable can be "mixed", with part of its probability spread out over an interval like a typical continuous variable, and part of it concentrated on particular values like a discrete variable. These classifications are equivalent to the categorization of probability distributions.

The expected value of random vectors, random matrices, and similar aggregates of fixed structure is defined as the aggregation of the expected value computed over each individual element. The concept of the "variance of a random vector" is normally expressed through a covariance matrix. No generally agreed-upon definition of expected value or variance exists for cases beyond those just discussed.

Examples

The possible outcomes for one coin toss can be described by the state space \Omega = \{\text{heads}, \text{tails}\}. We can introduce a real-valued random variable Y as follows:

Y(\omega) = \begin{cases} 1, & \text{if} \ \ \omega = \text{heads} ,\\ 0, & \text{if} \ \ \omega = \text{tails} . \end{cases}

If the coin is equally likely to land on either side then it has a probability mass function given by:

\rho_Y(y) = \begin{cases} \frac{1}{2}, & \text{if} \ \ y = 1,\\ \frac{1}{2}, & \text{if} \ \ y = 0. \end{cases}
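A brief sketch of this example in Python (the outcome labels and simulation size are illustrative): map simulated outcomes through Y and estimate the mass function empirically.

```python
# Sketch: map outcomes in Omega = {heads, tails} through Y and
# estimate the probability mass function from repeated tosses.
import random

random.seed(0)

def Y(omega):                       # the random variable defined above
    return 1 if omega == "heads" else 0

outcomes = [random.choice(["heads", "tails"]) for _ in range(100_000)]
values = [Y(w) for w in outcomes]
for y in (0, 1):
    print(y, values.count(y) / len(values))   # both frequencies close to 1/2
```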

Continuous probability distribution

In probability theory, a continuous probability distribution is a probability distribution that possesses a probability density function. Mathematicians also call such a distribution absolutely continuous, since its cumulative distribution function is absolutely continuous with respect to the Lebesgue measure λ. If the distribution of X is continuous, then X is called a continuous random variable. There are many examples of continuous probability distributions: normal, uniform, chi-squared, and others.

Intuitively, a continuous random variable is one that can take a continuous range of values, as opposed to a discrete distribution, where the set of possible values for the random variable is at most countable. While for a discrete distribution an event with probability zero is impossible (e.g., rolling 3½ on a standard die is impossible and has probability zero), this is not so for a continuous random variable. For example, if one measures the width of an oak leaf, a result of 3½ cm is possible; however, it has probability zero because there are uncountably many other potential values even between 3 cm and 4 cm. Each of these individual outcomes has probability zero, yet the probability that the outcome falls into a given interval is nonzero. This apparent paradox is resolved by the fact that the probability that X attains some value within an infinite set, such as an interval, cannot be found by naively adding the probabilities of individual values. Formally, each value has an infinitesimally small probability, which statistically is equivalent to zero.

Formally, if X is a continuous random variable, then it has a probability density function f(x), and the probability that it falls into a given interval, say [a, b], is given by the integral

\Pr[a\le X\le b] = \int_a^b f(x) \, dx

In particular, the probability for X to take any single value a (that is, \Pr[X = a]) is zero, because an integral with coinciding upper and lower limits is always equal to zero.
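As an illustration (a sketch assuming SciPy; the standard normal density is chosen arbitrarily), the interval probability can be computed by numerical quadrature of the density, and a single point contributes zero:

```python
# Sketch: Pr[a <= X <= b] as the integral of the density over [a, b],
# here for a standard normal X.
from scipy import stats
from scipy.integrate import quad

a, b = -1.0, 1.0
prob, _ = quad(stats.norm.pdf, a, b)           # numerical quadrature of f
print(prob)                                    # ≈ 0.6827
print(quad(stats.norm.pdf, a, a)[0])           # coinciding limits: exactly 0.0
print(stats.norm.cdf(b) - stats.norm.cdf(a))   # same probability via the cdf
```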

The definition states that a continuous probability distribution must possess a density or, equivalently, that its cumulative distribution function must be absolutely continuous. This requirement is stronger than simple continuity of the cdf, and there is a special class of distributions, the singular distributions, which are neither absolutely continuous nor discrete nor a mixture of the two. An example is given by the Cantor distribution. Such singular distributions, however, are never encountered in practice.

Note on terminology: some authors use the term "continuous distribution" to denote any distribution with a continuous cdf. Thus, their definition includes both the (absolutely) continuous and the singular distributions.



From Yahoo Answers

Question: I am making a histogram from data that was not taken at equal intervals. For example, on the X axis I have 0, 1, 2, 3, 4, 5, 6 but only have data for, say, 1, 2, and 5. How wide should the rectangles be? I thought the middle of the rectangle goes where the point is, but I'm not sure how wide to make them.

Answers: Personally, I would show a bar only where you have data (at 1, 2, and 5), make each bar 1 wide, and show nothing where you have no data.
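A minimal Matplotlib sketch of that suggestion (the counts below are hypothetical placeholders, since the question does not give frequencies):

```python
# Sketch: bars of width 1 centered only on the x values that have data.
import matplotlib.pyplot as plt

x_with_data = [1, 2, 5]          # positions that actually have data
counts = [4, 7, 2]               # hypothetical frequencies
plt.bar(x_with_data, counts, width=1.0, align="center", edgecolor="black")
plt.xticks(range(0, 7))          # keep the full 0..6 axis for context
plt.show()
```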

Question: Let X be a value of a random variable having an exponential distribution with density f(x) = (1/θ)e^(-x/θ) for x > 0, and 0 elsewhere. Find k such that the interval from 0 to kX is a (1-α)100% confidence interval for the parameter θ. Hint: Start with the following statement about θ and kX: P{0 <= θ <= kX} = 1-α. Manipulate algebraically what is inside the probability statement to transform it into a probability statement about X. Now use the density to calculate that probability directly (through integration). The result of your integral will depend on k; then solve for k from the fact that the integral equals 1-α.

Answers: P{0 <= θ <= kX} = 1-α => P{θ/k <= X} = 1-α. So now we integrate: P{θ/k <= X} is the integral from θ/k to infinity of f(x) dx. Do the u-substitution u = x/θ and we get the integral from 1/k to infinity of e^(-u) du = e^(-1/k). That means e^(-1/k) = 1-α => -1/k = ln(1-α) => k = -1/ln(1-α).
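A quick simulation check of this result (a sketch, not part of the original answer; NumPy assumed): with k = -1/ln(1-α), the interval [0, kX] should cover the true θ in about (1-α)·100% of repetitions.

```python
# Sketch: empirical coverage of the interval [0, kX] for an exponential
# parameter theta, with k = -1/ln(1 - alpha).
import math
import numpy as np

alpha, theta = 0.05, 3.0                         # theta chosen arbitrarily
k = -1.0 / math.log(1.0 - alpha)                 # ≈ 19.50 for alpha = 0.05

rng = np.random.default_rng(2)
X = rng.exponential(scale=theta, size=100_000)   # one draw per repetition
coverage = np.mean(theta <= k * X)               # fraction of intervals covering theta
print(coverage)                                  # ≈ 0.95
```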

Question: Hi - Can someone please review my homework and tell me if I did this right?

The new Twinkle bulb has a standard deviation of ___ hours. A random sample of 50 light bulbs is selected from inventory. The sample mean was found to be 500 hours. Find the margin of error E for a 95% confidence interval. (5 points) Round your answer to the nearest hundredth. (References: definition of margin of error on page 312 and example 2 on page 312.)

My answer: E = 1.96 x 35/sqrt(50) = 9.702

Construct a 95% confidence interval for the mean life, m, of all Twinkle bulbs. (5 points) (References: example 5 page 315, end-of-section exercises 51-56, pages 319-320.)

My answer: 500 - 9.7
Answers: Q: The new Twinkle bulb has a standard deviation of ___ hours. A random sample of 50 light bulbs is selected from inventory. The sample mean was found to be 500 hours. Find the margin of error E for a 95% confidence interval.

ANSWER: Indeterminate. Why? The margin of error is a function of three variables:

Margin of Error = (z critical value) * (sample standard deviation) / SQRT(sample size)

The z critical value is a table look-up for the specified confidence level (95%), and the sample size (50) is known, but the sample standard deviation (or population standard deviation) is not stated and cannot be assumed. The computation of the margin of error is therefore indeterminate.

3. ANSWER: Sample size = 43. Why? Small sample, given level of confidence, normal population distribution. The margin of error is defined as the "radius" (or half the width) of a confidence interval for a particular statistic. Level of confidence: 95%. Sample standard deviation: s = 10. z critical value for 95%: 1.96. Desired margin of error: 3. Solving Margin of Error = (z critical value) * s / SQRT(n) for n gives n = 43.

4.a. ANSWER: Margin of error is 9.6. Why? The margin of error is half the confidence interval [180.42, 199.59]. Small sample, confidence interval, normal population distribution. Sample mean: x-bar = 190. Sample standard deviation: s = 18. Number of samples: n = 16. Degrees of freedom: df = 15. Confidence level: 95%. Look-up table t critical value: 2.13 (central two-sided area = 95% with df = 15; another look-up method is the Microsoft Excel function TINV(probability, degrees_freedom), which returns the inverse of Student's t-distribution). Resulting confidence interval for the true mean: x-bar +/- (t critical value) * s / SQRT(n) = 190 +/- 2.13 * 18/SQRT(16) = [180.42, 199.59].
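For reference, both computations in this answer can be reproduced with SciPy; the following is a sketch that mirrors the numbers quoted above (43 for the sample size, roughly 9.59 for the margin of error).

```python
# Sketch: required sample size and a small-sample t-interval.
import math
from scipy import stats

# Required sample size for margin of error E = 3 with s = 10, 95% level:
z = stats.norm.ppf(0.975)                  # ≈ 1.96
n = math.ceil((z * 10 / 3) ** 2)           # -> 43

# 95% t-interval for x-bar = 190, s = 18, n = 16:
t = stats.t.ppf(0.975, df=15)              # ≈ 2.13
E = t * 18 / math.sqrt(16)                 # ≈ 9.59, the margin of error
print(n, (190 - E, 190 + E))               # -> 43 (180.41..., 199.59...)
```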

Question: Would someone mind giving me some guidance with regards to these two?

Use the definition of continuity to determine if f(x) is continuous at x = 0:
f(x) = |x + 3| if x < 0; 2x + 1 if x = 0; (x + 2)^2 - 1 if x > 0.

This one is even worse... Use the definition of continuity to determine over what intervals f(x) is continuous:
f(x) = cos^2 x if x < pi/4; (sin x)(cos x) if x = pi/4; sin^2 x if x > pi/4.

I've consulted all my reference books but have no idea where to start with solving them, so I am hoping by asking I can at least get a working example of the proper method to approach this. This is a beginner course, so we have not covered differentiation yet. Thanks in advance, I really appreciate it.

Answers: There are three requirements for continuity of a function f(x) at x = c:
1) lim_{x -> c} f(x) exists.
2) f(c) exists.
3) The numbers in 1) and 2) are equal.

Condition 1) is equivalent to saying that the left- and right-hand limits exist and are equal. So, for
f(x) = |x + 3| if x < 0; 2x + 1 if x = 0; (x + 2)^2 - 1 if x > 0
at c = 0 we have

lim_{x -> 0-} f(x) = lim_{x -> 0-} |x + 3| = lim_{x -> 0-} (x + 3) = 0 + 3 = 3.
lim_{x -> 0+} f(x) = lim_{x -> 0+} (x + 2)^2 - 1 = (0 + 2)^2 - 1 = 4 - 1 = 3.

They are equal, so lim_{x -> 0} f(x) exists and equals 3. But f(0) = 2(0) + 1 = 1, so lim_{x -> 0} f(x) does not equal f(0), and f is not continuous there.

Try the same process with your second one. There, observe that if x is not pi/4, the function is continuous, since cos^2 x is continuous everywhere and hence on the interval x < pi/4; ditto for sin^2 x. Hope this helps.
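A numerical sketch of that limit check (illustrative Python, not part of the original answer): evaluating the first piecewise function just left and right of 0 shows both one-sided limits near 3, while f(0) = 1.

```python
# Sketch: probe the one-sided limits of the first piecewise function at 0.
def f(x):
    if x < 0:
        return abs(x + 3)
    elif x == 0:
        return 2 * x + 1
    else:
        return (x + 2) ** 2 - 1

eps = 1e-9
print(f(-eps))   # ≈ 3, approximates the left-hand limit
print(f(eps))    # ≈ 3, approximates the right-hand limit
print(f(0))      # 1, so f is not continuous at 0
```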

From Youtube

95% Confidence Interval for Two Means: Worked Example - a worked example of the 95% confidence interval for the difference between two means (assuming equal variances).