equal interval variable
Best Results From Wikipedia and Yahoo Answers
From Wikipedia
In probability and statistics, a random variable or stochastic variable is a variable whose value is not known. Its possible values might represent the possible outcomes of a yet-to-be-performed experiment, or the potential values of a quantity whose already-existing value is uncertain (e.g., as a result of incomplete information or imprecise measurements). Intuitively, a random variable can be thought of as a quantity whose value is not fixed, but which can take on different values; a probability distribution is used to describe the probabilities of different values occurring. Realizations of a random variable are called random variates.
Random variables are usually real-valued, but one can consider arbitrary types such as Boolean values, complex numbers, vectors, matrices, sequences, trees, sets, shapes, manifolds, functions, and processes. The term random element is used to encompass all such related concepts. A related concept is the stochastic process, a set of indexed random variables (typically indexed by time or space).
Introduction
Real-valued random variables (those whose range is the real numbers) are used in the sciences to make predictions based on data obtained from scientific experiments. In addition to scientific applications, random variables were developed for the analysis of games of chance and stochastic events. In such instances, the function that maps the outcome to a real number is often the identity function or similarly trivial function, and not explicitly described. In many cases, however, it is useful to consider random variables that are functions of other random variables, and then the mapping function included in the definition of a random variable becomes important. As an example, the square of a random variable distributed according to a standard normal distribution is itself a random variable, with a chi-square distribution. One way to think of this is to imagine generating a large number of samples from a standard normal distribution, squaring each one, and plotting a histogram of the values observed. With enough samples, the graph of the histogram will approximate the density function of a chi-square distribution with one degree of freedom.
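The squaring experiment described above can be sketched in a few lines of Python (a minimal illustration, not from the original article; the sample size is an arbitrary choice):

```python
import random
import statistics

# Squaring draws from a standard normal gives draws from a chi-square
# distribution with 1 degree of freedom, whose mean is 1 and variance is 2.
random.seed(0)
squared = [random.gauss(0, 1) ** 2 for _ in range(100_000)]

mean = statistics.fmean(squared)
var = statistics.pvariance(squared)
print(mean, var)  # both should land close to 1 and 2 respectively
```

A histogram of `squared` would trace out the chi-square density with one degree of freedom.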
Another example is the sample mean, which is the average of a number of samples. When these samples are independent observations of the same random event they can be called independent identically distributed random variables. Since each sample is a random variable, the sample mean is a function of random variables and hence a random variable itself, whose distribution can be computed and properties determined.
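A minimal sketch of the sample mean as a random variable, assuming Uniform(0, 1) draws stand in for the iid observations (the distribution and sample sizes are illustrative choices):

```python
import random
import statistics

random.seed(1)

def sample_mean(n: int) -> float:
    """Average of n independent Uniform(0, 1) draws: itself a random variable."""
    return statistics.fmean(random.random() for _ in range(n))

# Realize the sample-mean random variable many times for two sample sizes.
means_10 = [sample_mean(10) for _ in range(5_000)]
means_100 = [sample_mean(100) for _ in range(5_000)]

# The spread of the sample mean shrinks like sigma / sqrt(n),
# so going from n=10 to n=100 should cut the standard deviation ~sqrt(10)-fold.
print(statistics.stdev(means_10), statistics.stdev(means_100))
```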
One of the reasons that realvalued random variables are so commonly considered is that the expected value (a type of average) and variance (a measure of the "spread", or extent to which the values are dispersed) of the variable can be computed.
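As a small illustration of computing these two quantities, here is the expected value and variance of a discrete random variable given by its probability mass function; the fair six-sided die is an assumed example, not from the text:

```python
from fractions import Fraction

# PMF of a fair six-sided die: each face has probability 1/6.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

# E[X] = sum of x * P(X = x)
expected = sum(x * p for x, p in pmf.items())
# Var[X] = E[(X - E[X])^2]
variance = sum((x - expected) ** 2 * p for x, p in pmf.items())

print(expected, variance)  # 7/2 and 35/12
```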
There are two types of random variables: discrete and continuous. A discrete random variable maps outcomes to values of a countable set (e.g., the integers), with each value in the range having positive probability. A continuous random variable maps outcomes to values of an uncountable set (e.g., the real numbers). For a continuous random variable, the probability of any specific value is zero, whereas the probability of some infinite set of values (such as an interval of nonzero length) may be positive. A random variable can be "mixed", with part of its probability spread out over an interval like a typical continuous variable, and part of it concentrated on particular values like a discrete variable. These classifications are equivalent to the categorization of probability distributions.
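The discrete/continuous distinction can be sketched numerically; here Uniform(0, 1) is an assumed stand-in for a continuous random variable:

```python
import random

random.seed(7)
draws = [random.random() for _ in range(100_000)]

# An interval of nonzero length has positive probability...
in_interval = sum(0.2 <= x <= 0.5 for x in draws) / len(draws)
# ...while any single exact value has probability zero.
exactly_half = sum(x == 0.5 for x in draws) / len(draws)

print(in_interval)   # close to the interval length, 0.3
print(exactly_half)  # 0.0 in practice: only a point mass would make this positive
```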
The expected value of random vectors, random matrices, and similar aggregates of fixed structure is defined as the aggregation of the expected value computed over each individual element. The concept of "variance of a random vector" is normally expressed through a covariance matrix. No generally agreed-upon definition of expected value or variance exists for cases other than those just discussed.
Examples
The possible outcomes for one coin toss can be described by the state space \Omega = \{\text{heads}, \text{tails}\}. We can introduce a real-valued random variable Y as follows:
Y(\omega) = \begin{cases} 1, & \text{if } \omega = \text{heads}, \\ 0, & \text{if } \omega = \text{tails}. \end{cases}
If the coin is equally likely to land on either side, then Y has a probability mass function given by:
\rho_Y(y) = \begin{cases} \frac{1}{2}, & \text{if } y = 1, \\ \frac{1}{2}, & \text{if } y = 0. \end{cases}
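The coin-toss example above can be simulated directly; this sketch checks that the empirical frequency of Y = 1 approaches the pmf value 1/2 (the number of tosses is an arbitrary choice):

```python
import random

random.seed(42)

def Y(omega: str) -> int:
    """The random variable Y from the text: heads -> 1, tails -> 0."""
    return 1 if omega == "heads" else 0

# Draw outcomes from the state space Omega = {heads, tails} with equal probability.
tosses = [random.choice(["heads", "tails"]) for _ in range(10_000)]
values = [Y(omega) for omega in tosses]

# The empirical frequency of Y = 1 approximates rho_Y(1) = 1/2.
print(sum(values) / len(values))
```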
From Yahoo Answers
Answers: P{0 <= theta <= kX} = 1 - alpha => P{theta/k <= X} = 1 - alpha. So now we integrate: P{theta/k <= X} = integral from theta/k to infinity of f(x) dx. Doing the u-substitution u = x/theta, we get the integral from 1/k to infinity of e^(-u) du = e^(-1/k). That means e^(-1/k) = 1 - alpha => -1/k = ln(1 - alpha) => k = -1/ln(1 - alpha).
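The result can be checked by simulation; this sketch assumes the setup implied by the derivation (X exponential with mean theta; the values of theta and alpha are arbitrary illustrative choices):

```python
import math
import random

random.seed(3)
theta, alpha = 2.0, 0.1

# Candidate constant from the derivation: k = -1 / ln(1 - alpha).
k = -1 / math.log(1 - alpha)

# Draw from an exponential with mean theta (expovariate takes the rate 1/theta)
# and estimate P{theta <= k X}, which should come out to 1 - alpha.
draws = [random.expovariate(1 / theta) for _ in range(200_000)]
coverage = sum(theta <= k * x for x in draws) / len(draws)
print(coverage)  # should be close to 1 - alpha = 0.9
```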
Answers: ANSWER: the 85% resulting confidence interval for the true proportion of water specimens exceeding 5 ppm is [0.01, 0.11].

Why? This is a population-proportion confidence interval using the normal distribution.

n: number of samples = 50
k: number of successes = 3
p: sample proportion [3/50] = 0.06
significant digits: 2
Confidence level = 85%
"Lookup" table z-critical value = 1.440

The z-critical value is a lookup from the table of the standard normal distribution, which is organized as a cumulative area from the left corresponding to the standardized variable z. Because the standard normal distribution is symmetric (the "bell curve"), the lookup is an interpretive procedure: for a confidence level of 85%, there is a left area outside the interval and, by symmetry, an equal right area outside. For the standardized variable z = 1.44, the area in each tail is half of the total area outside: 0.5 * (1 - 85/100) = 0.075. Alternatively, use the Excel function NORMSINV(probability), which returns the inverse of the standard normal cumulative distribution (mean of zero, standard deviation of one, symmetric).

85% resulting confidence interval for the true proportion:
p +/- (z-critical value) * SQRT[p * (1 - p)/n] = 0.06 +/- 1.44 * SQRT[0.06 * (1 - 0.06)/50] = [0.01, 0.11]
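The lookup-and-interval arithmetic above can be reproduced without a printed table; this sketch uses Python's statistics.NormalDist in place of the Excel NORMSINV function:

```python
from math import sqrt
from statistics import NormalDist

# Normal-approximation confidence interval for a population proportion:
# n = 50 samples, k = 3 successes, 85% confidence level.
n, k, level = 50, 3, 0.85
p_hat = k / n

# z critical value: the quantile leaving (1 - level)/2 in each tail (about 1.44).
z = NormalDist().inv_cdf(1 - (1 - level) / 2)

half_width = z * sqrt(p_hat * (1 - p_hat) / n)
print(round(p_hat - half_width, 2), round(p_hat + half_width, 2))  # 0.01 0.11
```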
Answers: a) ANSWER: the 95% resulting confidence interval for the true proportion is [0.56, 0.64].

Why? This is a population-proportion confidence interval using the normal distribution.

n: number of samples = 500
k: number of successes = 300
p: sample proportion [300/500] = 0.6
significant digits: 2
Confidence level = 95%
"Lookup" table z-critical value = 1.960

The z-critical value is a lookup from the table of the standard normal distribution, organized as a cumulative area from the left corresponding to the standardized variable z. The standard normal distribution is symmetric (the "bell curve"), so for a confidence level of 95% there is a left area outside the interval and, by symmetry, an equal right area outside. For the standardized variable z = 1.96, the area in each tail is 0.5 * (1 - 95/100) = 0.025. Alternatively, use the Excel function NORMSINV(probability), which returns the inverse of the standard normal cumulative distribution (mean of zero, standard deviation of one, symmetric).

95% resulting confidence interval for the true proportion:
p +/- (z-critical value) * SQRT[p * (1 - p)/n] = 0.6 +/- 1.96 * SQRT[0.6 * (1 - 0.6)/500] = [0.56, 0.64]

b) Yes; there is ample evidence to refute the claim that the proportion of customers who want ATM machines to give stamps is 0.50.

Why? This is a two-tailed sample-statistics test, carried out as a 6-step procedure, ending with 95% confidence in the conclusion that H1 is true. The simplest and most commonly used formula for a binomial confidence interval relies on approximating the binomial distribution with a normal distribution; this approximation is justified by the central limit theorem.

Step 1: Determine the hypotheses to be tested: H0: p = 0.5; H1: p != 0.5.
Step 2: Determine a planning value for the level of significance: alpha = 0.05.
Step 3: From the sample data determine p and n; then compute the standardized test statistic z = (p - P)/sigma, where p = sample proportion = 0.6, n = number of individuals in the sample = 500, P = hypothesized value of the population proportion in the null hypothesis = 0.5, and sigma = SQRT[P * (1 - P)/n] = 0.022 is the standard deviation of the sampling distribution (2 significant digits). Standardized test statistic: z = (0.6 - 0.5)/(0.022) = 4.55.
Step 4: Using the normal distribution, look up the area to the left of z, from the normal distribution table or the Excel function NORMSDIST(z).
Step 5: One minus the area in Step 4 gives the tail probability, which here is essentially 0. The P-value is the probability of observing a sample statistic as extreme as the test statistic.
Step 6: For P >= alpha/2, fail to reject H0; for P < alpha/2, reject H0. Here P < alpha/2, so reject H0 with 95% confidence in the conclusion that H1 is true.

Note: the level of significance alpha is the maximum level of risk an experimenter is willing to take in making a "reject H0" (i.e., "conclude H1") conclusion; that is, it is the maximum risk of making a Type I error.
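The six-step calculation can be condensed into a short script; this sketch uses Python's statistics.NormalDist in place of the normal table / NORMSDIST (note that z comes out as about 4.47 when the standard error is not rounded to 0.022):

```python
from math import sqrt
from statistics import NormalDist

# Two-tailed test of H0: p = 0.5 vs H1: p != 0.5,
# with n = 500 customers and 300 successes, at alpha = 0.05.
n, k, p0, alpha = 500, 300, 0.5, 0.05
p_hat = k / n

se = sqrt(p0 * (1 - p0) / n)       # std dev of sampling distribution under H0
z = (p_hat - p0) / se              # standardized test statistic, about 4.47
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed P-value

print(round(z, 2), p_value < alpha)  # the P-value is tiny, so H0 is rejected
```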
Answers: Yes, it is. If n equaled 2, then you would be saying that 2 is equal to itself; that is the reflexive property of equality.
4n + 6 - 2n = 2(n + 3)
2n + 6 = 2n + 6
2n = 2n
n = n
So for any value of n, as long as the same value is used on each side, the equation is true: it is an identity.
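As a quick sanity check (a sketch, not part of the original answer), the identity can be verified numerically for a few values of n:

```python
# Both sides of 4n + 6 - 2n = 2(n + 3) reduce to 2n + 6, so the equation
# holds for every n; spot-check a handful of values.
for n in [-3, 0, 2, 7.5, 100]:
    assert 4 * n + 6 - 2 * n == 2 * (n + 3)
print("4n + 6 - 2n == 2(n + 3) for all tested n")
```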