Explore Related Concepts

population parameter vs sample statistic

Best Results From Wikipedia, Yahoo Answers, and Youtube


From Wikipedia

Sample size

The sample size of a statistical sample is the number of observations that constitute it. It is typically denoted n, a positive integer (natural number).

Typically, all else being equal, a larger sample size leads to increased precision in estimates of various properties of the population, though the results will become less accurate if there is a systematic error in the experiment. This can be seen in such statistical rules as the law of large numbers and the central limit theorem. Repeated measurements and replication of independent samples are often required in measurement and experiments to reach a desired precision.

A typical example would be when a statistician wishes to estimate the arithmetic mean of a quantitative random variable (for example, the height of a person). Assuming that they have a random sample with independent observations, and also that the variability of the population (as measured by the standard deviation σ) is known, the standard error of the sample mean is given by the formula:

\sigma/\sqrt{n}.

It is easy to show that as n becomes very large, this variability becomes small. This leads to more sensitive hypothesis tests with greater statistical power and smaller confidence intervals.
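As a quick illustration of how the standard error shrinks with n, here is a minimal sketch (assuming Python with NumPy; the value of σ is made up):

```python
import numpy as np

sigma = 6.0  # assumed (made-up) known population standard deviation

# Standard error of the sample mean, sigma / sqrt(n), for increasing n.
for n in [10, 100, 1000, 10000]:
    se = sigma / np.sqrt(n)
    print(f"n = {n:5d}  standard error = {se:.3f}")

# Each tenfold increase in n shrinks the standard error by a factor of sqrt(10).
```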

Implications of sample size

Central limit theorem

The central limit theorem states that, as the size of a sample of independent observations approaches infinity, provided the data come from a distribution with finite variance, the sampling distribution of the sample mean approaches a normal distribution.
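A small simulation can make this concrete. The sketch below (assuming Python with NumPy; the exponential distribution and sample size are arbitrary choices) draws many samples from a skewed distribution with finite variance and checks that the sample means cluster the way the theorem predicts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw many samples from a decidedly non-normal distribution (exponential,
# which has finite variance) and look at the distribution of their means.
n = 50                                            # observations per sample
samples = rng.exponential(scale=2.0, size=(100_000, n))
means = samples.mean(axis=1)

# By the central limit theorem, the sample means should be approximately
# normal with mean 2.0 and standard deviation 2.0 / sqrt(n).
print(means.mean(), means.std(), 2.0 / np.sqrt(n))
```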

Estimating proportions

A typical statistical aim is to demonstrate with 95% certainty that the true value of a parameter is within a distance B of the estimate: B is an error range that decreases with increasing sample size (n). Typically, B is generated in such a way that the range of values within a distance B of the estimated parameter value is a 95% confidence interval, at least in an approximate sense.

For example, a simple situation is estimating a proportion in a population. To do so, a statistician will estimate the bounds of a 95% confidence interval for an unknown proportion.

The rule of thumb for (a maximum or 'conservative') B for a proportion derives from the fact that the estimator of a proportion, \hat p = X/n, (where X is the number of 'positive' observations) has a (scaled) binomial distribution and is also a form of sample mean (from a Bernoulli distribution [0,1], which has a maximum variance of 0.25 for parameter p = 0.5). So, the sample mean X/n has maximum variance 0.25/n. For sufficiently large n (usually this means that we need to have observed at least 10 positive and 10 negative responses), this distribution will be closely approximated by a normal distribution with the same mean and variance.

Using this approximation, it can be shown that the confidence interval (the estimate +/- the margin of error) is:

At 99% confidence: (\hat p - 1.29/\sqrt{n} ,~~ \hat p + 1.29/\sqrt{n})

At 95% confidence: (\hat p - 0.98/\sqrt{n} ,~~ \hat p + 0.98/\sqrt{n})

At 90% confidence: (\hat p - 0.82/\sqrt{n} ,~~ \hat p + 0.82/\sqrt{n})

One sees these numbers quoted often in news reports of opinion polls and other sample surveys.
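The constants 0.82, 0.98 and 1.29 are simply the normal critical value times the maximum standard deviation 0.5 (the square root of the maximum variance 0.25). A short check, assuming Python with SciPy and a made-up poll size n:

```python
from scipy.stats import norm

# Conservative margin of error for a proportion: B = z * sqrt(0.25 / n),
# i.e. B = (z * 0.5) / sqrt(n), using the maximum variance 0.25 at p = 0.5.
n = 1000  # hypothetical survey size
for confidence in (0.90, 0.95, 0.99):
    z = norm.ppf(1 - (1 - confidence) / 2)   # two-sided critical value
    constant = z * 0.5                       # the 0.82 / 0.98 / 1.29 factors
    print(f"{confidence:.0%}: constant = {constant:.2f}, "
          f"margin of error at n = {n}: {constant / n ** 0.5:.3f}")
```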

Extension to other cases

In general, if a population mean is estimated using the sample mean from n observations from a distribution with variance σ², then if n is large enough (typically >30) the central limit theorem can be applied to obtain an approximate 95% confidence interval of the form

(\bar x - B,\bar x + B), B=2\sigma/\sqrt{n}

If the sampling error ε is required to be no larger than bound B, as above, then

4\sigma^2/\varepsilon^2 \approx 4\sigma^2/B^2=n
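In practice this is used as a sample-size formula: pick the largest margin of error B you can tolerate and solve for n. A minimal sketch (plain Python; the σ and B values are made up):

```python
import math

# Sample size needed so that the approximate 95% margin of error
# B = 2 * sigma / sqrt(n) does not exceed a chosen bound,
# i.e. n = 4 * sigma**2 / B**2 (values below are hypothetical).
sigma = 15.0   # assumed population standard deviation
B = 3.0        # desired maximum margin of error
n = math.ceil(4 * sigma ** 2 / B ** 2)
print(n)  # 100 observations
```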



From Yahoo Answers

Question: Please help me with this question, if someone could please explain it to me! "A sample of 50 patients is selected from among the patients admitted to the ER at a hospital, and it is found that 28% have no health insurance." Is this a: A: Population parameter, OR B: Sample statistic? I think that it is B. Please help me understand the difference between the two; if someone could use examples, that would be great! Thanks! BTW: will give 10 points to best answer ; )

Answers: Yep, it's B. A "sample statistic" is just a measurement associated with a sample, like x bar (sample mean) or s (sample standard deviation). A "population parameter" is the same kind of thing, except for a population (like "mu" for the population mean, and "sigma" for the population standard deviation). If it's a value derived from a sample, it's a sample statistic; if it's a value derived from the whole population, it's a population parameter.
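To make the distinction concrete, here is a minimal sketch (assuming Python with NumPy; the population and numbers are entirely made up) that computes the parameters mu and sigma from a full population and the statistics x bar and s from a sample drawn from it:

```python
import numpy as np

rng = np.random.default_rng(1)

# A hypothetical full population of 10,000 measurements (e.g. heights in cm).
population = rng.normal(loc=170, scale=10, size=10_000)

# Population parameters: computed from every member of the population.
mu = population.mean()        # "mu",    population mean
sigma = population.std()      # "sigma", population standard deviation

# Sample statistics: the same kinds of quantities, computed from a sample.
sample = rng.choice(population, size=50, replace=False)
x_bar = sample.mean()         # "x bar", sample mean
s = sample.std(ddof=1)        # "s",     sample standard deviation

print(f"parameters: mu = {mu:.2f}, sigma = {sigma:.2f}")
print(f"statistics: x bar = {x_bar:.2f}, s = {s:.2f}")
```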

Question: I have both definitions but I'm still pretty confused. Can somebody please give me an example of how to use each one so I understand the difference?

Answers: Random Sampling vs. Simple Random Sampling: the two have negligibly different definitions; they're essentially the same thing. Random sampling is the practice concerned with selecting individuals in a way intended to yield "some knowledge" about the population. Simple random sampling is the practice of selecting individuals entirely by chance, such that each individual is equally likely to be chosen from the larger population.

Question: If the population mean is 35 and the standard deviation (SD) is 6, what is the probability that the sample variance is less than 19? (Sample mean is 35, sample SD = 1.342.) Forgot to mention the sample size! It is n = 20. Here's the question in its original context: "A forest has an average diameter of 35 cm and a standard deviation of 6 cm. What is the probability that a sample variance is less than 19 cm squared if 20 trees are measured?"

Answers: Perhaps you want a chi-square test for the standard deviation (link below)? Also, aren't you really interested in the population parameters? After all, the sample variance is what it is - there's no inference to be made. OK. You have a sample sd = 6. You also need the sample size (the mean is irrelevant). H0: sigma > sqrt(19); H1: sigma < sqrt(19). Test statistic: T = (n-1)(s/sqrt(19))^2. Compare to a chi-square(1-alpha, n-1) critical value.
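For what the question literally asks (the probability that the sample variance is below 19, given the population sigma = 6 from the problem), one can use the fact that (n-1)S²/σ² follows a chi-square distribution with n-1 degrees of freedom when the data are roughly normal. A sketch of that direct calculation, assuming Python with SciPy (a different framing from the hypothesis test above):

```python
from scipy.stats import chi2

# P(S^2 < 19) when the population is approximately normal with sigma = 6
# and n = 20: (n - 1) * S^2 / sigma^2 ~ chi-square with n - 1 df.
n, sigma2, bound = 20, 6.0 ** 2, 19.0
prob = chi2.cdf((n - 1) * bound / sigma2, df=n - 1)
print(f"P(S^2 < 19) = {prob:.3f}")
```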

Question:

Answers: Actually, you don't do point estimates for statistics. Statistics are values that are taken from a sample and are actual data. A point estimate refers to parameters, i.e. information about the population. Typically the statistic is a point estimate (a single-value estimate) for the parameter. This is different from a confidence interval, which gives you a range of values for the parameter.
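A minimal sketch of that distinction (assuming Python with NumPy and SciPy; the sample data are made up), computing a point estimate and the corresponding large-sample 95% confidence interval for the population mean:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
sample = rng.normal(loc=50, scale=8, size=40)   # hypothetical sample data

# Point estimate: a single-value estimate of the population mean.
x_bar = sample.mean()

# Confidence interval: a range of plausible values for the parameter,
# here the large-sample 95% interval x_bar +/- z * s / sqrt(n).
s, n = sample.std(ddof=1), len(sample)
z = norm.ppf(0.975)
half_width = z * s / np.sqrt(n)

print(f"point estimate: {x_bar:.2f}")
print(f"95% confidence interval: ({x_bar - half_width:.2f}, {x_bar + half_width:.2f})")
```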

From Youtube

Statistics: Sample vs. Population Mean: The difference between the mean of a sample and the mean of a population.

Statistics - 1 - Terms - 2 - Population and Sample: This video develops the concept of a population and a sample.