
# Different ways of describing relations

From Wikipedia

Recurrence relation

In mathematics, a recurrence relation is an equation that recursively defines a sequence: each term of the sequence is defined as a function of the preceding terms.

The term difference equation sometimes (and for the purposes of this article) refers to a specific type of recurrence relation. Note, however, that "difference equation" is frequently used to refer to any recurrence relation.

An example of a recurrence relation is the logistic map:

x_{n+1} = r x_n (1 - x_n) \,

Some simply defined recurrence relations can have very complex (chaotic) behaviours, and they are a part of the field of mathematics known as nonlinear analysis.

Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.

## Example: Fibonacci numbers

The Fibonacci numbers are defined using the linear recurrence relation

F_n = F_{n-1}+F_{n-2} \,

with seed values:

F_0 = 0 \,
F_1 = 1 \,

Explicitly, the recurrence yields the equations:

F_2 = F_1 + F_0 \,
F_3 = F_2 + F_1 \,
F_4 = F_3 + F_2 \,

etc.

We obtain the sequence of Fibonacci numbers which begins:

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
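A minimal Python sketch that iterates the recurrence from the seed values (the function name `fibonacci` is mine):

```python
# Iterate the Fibonacci recurrence F_n = F_{n-1} + F_{n-2}
# starting from the seed values F_0 = 0, F_1 = 1.
def fibonacci(count):
    """Return the first `count` Fibonacci numbers."""
    seq = [0, 1]
    while len(seq) < count:
        seq.append(seq[-1] + seq[-2])
    return seq[:count]

print(fibonacci(12))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```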

It can be solved by methods described below, yielding a closed-form expression that involves powers of the two roots of the characteristic polynomial t^2 = t + 1; the generating function of the sequence is the rational function

\frac{t}{1-t-t^2}.
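The two roots of t^2 = t + 1 are (1 ± √5)/2, and the resulting closed form (Binet's formula) can be checked numerically; `fib_closed` is an illustrative name, and the `round` call absorbs floating-point error:

```python
import math

# Closed form built from the roots of t^2 = t + 1 (Binet's formula):
# F_n = (phi**n - psi**n) / sqrt(5).
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

def fib_closed(n):
    return round((phi**n - psi**n) / math.sqrt(5))

print([fib_closed(n) for n in range(12)])
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```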

## Structure

### Linear homogeneous recurrence relations with constant coefficients

An order d linear homogeneous recurrence relation with constant coefficients is an equation of the form:

a_n = c_1a_{n-1} + c_2a_{n-2}+\cdots+c_da_{n-d} \,

where the d coefficients c_i (for all i) are constants.

More precisely, this is an infinite list of simultaneous linear equations, one for each n > d − 1. A sequence which satisfies a relation of this form is called a linear recursive sequence or LRS. There are d degrees of freedom for an LRS: the initial values a_0,\dots,a_{d-1} can be taken to be any values, but then the linear recurrence determines the sequence uniquely.

The same coefficients yield the characteristic polynomial (also "auxiliary polynomial")

p(t)= t^d - c_1t^{d-1} - c_2t^{d-2}-\cdots-c_{d}\,

whose d roots play a crucial role in finding and understanding the sequences satisfying the recurrence. If the roots r1, r2, ... are all distinct, then the solution to the recurrence takes the form

a_n = k_1 r_1^n + k_2 r_2^n + \cdots + k_d r_d^n,

where the coefficients k_i are determined in order to fit the initial conditions of the recurrence. When the same roots occur multiple times, the terms in this formula corresponding to the second and later occurrences of the same root are multiplied by increasing powers of n. For instance, if the characteristic polynomial can be factored as (t − r)^3, with the same root r occurring three times, then the solution would take the form

a_n = k_1 r^n + k_2 n r^n + k_3 n^2 r^n.\,
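As an illustrative check of the repeated-root case (the recurrence and constants below are my own example, not from the text): expanding (t − 2)^3 = t^3 − 6t^2 + 12t − 8 gives the recurrence a_n = 6a_{n−1} − 12a_{n−2} + 8a_{n−3}, whose solutions should take the form (k_1 + k_2 n + k_3 n^2) 2^n:

```python
# Characteristic polynomial (t - 2)^3 = t^3 - 6 t^2 + 12 t - 8, so the
# recurrence is a_n = 6 a_{n-1} - 12 a_{n-2} + 8 a_{n-3}; the root r = 2
# is triple, so solutions are (k1 + k2*n + k3*n^2) * 2^n.
def closed(n, k1=1, k2=1, k3=1):
    return (k1 + k2 * n + k3 * n**2) * 2**n

seq = [closed(0), closed(1), closed(2)]        # seeds taken from the closed form
for n in range(3, 10):
    seq.append(6 * seq[-1] - 12 * seq[-2] + 8 * seq[-3])

assert all(seq[n] == closed(n) for n in range(10))
print(seq[:5])  # [1, 6, 28, 104, 336]
```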

### Rational generating function

Linear recursive sequences are precisely the sequences whose generating function is a rational function: the denominator is the auxiliary polynomial (up to a transform), and the numerator is obtained from the seed values.

The simplest cases are periodic sequences, a_n = a_{n-d}, n\geq d, which have sequence a_0,a_1,\dots,a_{d-1},a_0,\dots and generating function a sum of geometric series:

\begin{align} & \frac{a_0 + a_1 x^1 + \cdots + a_{d-1}x^{d-1}}{1-x^d} \\[6pt] & = \left(a_0 + a_1 x^1 + \cdots + a_{d-1}x^{d-1}\right) \\[3pt] & {} \quad + \left(a_0 + a_1 x^1 + \cdots + a_{d-1}x^{d-1}\right)x^d \\[3pt] & {} \quad + \left(a_0 + a_1 x^1 + \cdots + a_{d-1}x^{d-1}\right)x^{2d} + \cdots. \end{align}

More generally, given the recurrence relation:

a_n = c_1a_{n-1} + c_2a_{n-2}+\cdots+c_da_{n-d} \,

with generating function

a_0 + a_1x^1 + a_2 x^2 + \cdots,

the series is annihilated at a_d and above by the polynomial:

1- c_1x^1 - c_2 x^2 - \cdots - c_dx^d. \,

That is, multiplying the generating function by the polynomial yields

b_n = a_n - c_1 a_{n-1} - c_2 a_{n-2} - \cdots - c_d a_{n-d} \,

as the coefficient on x^n, which vanishes (by the recurrence relation) for n \geq d. Thus

(a_0 + a_1x^1 + a_2 x^2 + \cdots {} ) (1- c_1x^1 - c_2 x^2 - \cdots - c_dx^d) = (b_0 + b_1x^1 + b_2 x^2 + \cdots + b_{d-1} x^{d-1})

so dividing yields

a_0 + a_1x^1 + a_2 x^2 + \cdots =

\frac{b_0 + b_1x^1 + b_2 x^2 + \cdots + b_{d-1} x^{d-1}}{1- c_1x^1 - c_2 x^2 - \cdots - c_dx^d},

expressing the generating function as a rational function.

The denominator is x^d p\left(x^{-1}\right), a transform of the auxiliary polynomial (equivalently, reversing the order of coefficients); one could also use any multiple of this, but this normalization is chosen both because of the simple relation to the auxiliary polynomial, and so that b_0 = a_0.
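The division above can be carried out as a power-series expansion: the coefficient relation a_n = b_n + Σ_i c_i a_{n−i} falls straight out of the product formula. A small sketch (function and variable names are mine), checked against t/(1 − t − t^2):

```python
# Expand a rational generating function b(x) / (1 - c_1 x - ... - c_d x^d)
# as a power series: the coefficients satisfy a_n = b_n + sum_i c_i a_{n-i}.
def series(num, c, terms):
    """num: numerator coefficients b_0, b_1, ...; c: [c_1, ..., c_d]."""
    a = []
    for n in range(terms):
        b_n = num[n] if n < len(num) else 0
        a.append(b_n + sum(ci * a[n - i]
                           for i, ci in enumerate(c, start=1) if n - i >= 0))
    return a

# t / (1 - t - t^2) should reproduce the Fibonacci numbers.
print(series([0, 1], [1, 1], 10))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```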

### Relationship to difference equations narrowly defined

Given an ordered sequence \left\{a_n\right\}_{n=1}^\infty of real numbers: the first difference \Delta(a_n)\, is defined as

\Delta(a_n) = a_{n+1} - a_n\,.

The second difference \Delta^2(a_n)\, is defined as

\Delta^2(a_n) = \Delta(a_{n+1}) - \Delta(a_n)\,,

which can be simplified to

\Delta^2(a_n) = a_{n+2} - 2a_{n+1} + a_n\,.

More generally: the kth difference of the sequence a_n, written as \Delta^k(a_n)\,, is defined recursively as

\Delta^k(a_n) = \Delta^{k-1}(a_{n+1}) - \Delta^{k-1}(a_n)\,.
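A short sketch of the forward-difference operator (the helper name `difference` is mine); the kth difference of a degree-k polynomial sequence is constant, which makes a convenient check:

```python
# Forward differences of a finite sequence: Delta(a)_n = a_{n+1} - a_n,
# applied k times for the kth difference.
def difference(seq, k=1):
    for _ in range(k):
        seq = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
    return seq

squares = [n**2 for n in range(8)]     # 0, 1, 4, 9, 16, 25, 36, 49
print(difference(squares, 1))          # the odd numbers 1, 3, 5, ...
print(difference(squares, 2))          # constant: every entry is 2
```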

The more restrictive definition of difference equation is an equation composed of a_n and its kth differences. (A widely used broader definition treats "difference equation" as synonymous with "recurrence relation".)

Mean difference

The mean difference is a measure of statistical dispersion equal to the average absolute difference of two independent values drawn from a probability distribution. A related statistic is the relative mean difference, which is the mean difference divided by the arithmetic mean. An important relationship is that the relative mean difference is equal to twice the Gini coefficient, which is defined in terms of the Lorenz curve.

The mean difference is also known as the absolute mean difference and the Gini mean difference. The mean difference is sometimes denoted by Δ or by MD. The mean deviation is a different measure of dispersion.

## Calculation

For a population of size n, with a sequence of values yi, i = 1 to n:

MD = \frac{1}{n(n-1)} \sum_{i=1}^n \sum_{j=1}^n | y_i - y_j | .

For a discrete probability function f(y), where y_i, i = 1 to n, are the values with nonzero probabilities:

MD = \sum_{i=1}^n \sum_{j=1}^n f(y_i) f(y_j) | y_i - y_j | .

For a probability density function f(x):

MD = \int_{-\infty}^\infty \int_{-\infty}^\infty f(x)\,f(y)\,|x-y|\,dx\,dy .

For a cumulative distribution function F(x) with quantile function x(F):

MD = \int_0^1 \int_0^1 |x(F_1)-x(F_2)|\,dF_1\,dF_2 .
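The finite-population formula can be evaluated directly; `mean_difference` below is a straightforward transcription (names are mine):

```python
from itertools import product

# Mean difference of a finite population, per
# MD = (1 / (n(n-1))) * sum_{i,j} |y_i - y_j|.
def mean_difference(values):
    n = len(values)
    total = sum(abs(a - b) for a, b in product(values, repeat=2))
    return total / (n * (n - 1))

print(mean_difference([1, 2, 3, 4]))  # 20 / 12, about 1.667
```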

## Relative mean difference

When the probability distribution has a finite and nonzero arithmetic mean, the relative mean difference, sometimes denoted by ∇ or RMD, is defined by

RMD = \frac{MD}{\mbox{arithmetic mean}}.

The relative mean difference quantifies the mean difference in comparison to the size of the mean and is a dimensionless quantity. The relative mean difference is equal to twice the Gini coefficient which is defined in terms of the Lorenz curve. This relationship gives complementary perspectives to both the relative mean difference and the Gini coefficient, including alternative ways of calculating their values.
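A small sketch computing the RMD and the implied Gini coefficient (RMD/2) for a finite sample, using the n(n − 1) estimator from the Calculation section; the function name is mine:

```python
from itertools import product

# Relative mean difference RMD = MD / mean; the text's identity
# RMD = 2 * Gini gives the Gini coefficient as RMD / 2.
def rmd(values):
    n = len(values)
    md = sum(abs(a - b) for a, b in product(values, repeat=2)) / (n * (n - 1))
    mean = sum(values) / n
    return md / mean

values = [1, 2, 3, 4]
print(rmd(values))      # MD = 5/3, mean = 2.5, so RMD is about 0.667
print(rmd(values) / 2)  # implied Gini coefficient, about 0.333
```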

## Properties

The mean difference is invariant to translations and negation, and varies proportionally to positive scaling. That is to say, if X is a random variable and c is a constant:

• MD(X + c) = MD(X),
• MD(-X) = MD(X), and
• MD(cX) = |c| MD(X).
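These properties can be spot-checked numerically on a random sample (a sketch, not a proof; the tolerances allow for floating-point rounding):

```python
import random
from itertools import product

# Spot-check the invariance properties of the mean difference:
# MD(X + c) = MD(X), MD(-X) = MD(X), MD(cX) = |c| MD(X).
def md(values):
    n = len(values)
    return sum(abs(a - b) for a, b in product(values, repeat=2)) / (n * (n - 1))

x = [random.gauss(0, 1) for _ in range(50)]
c = 3.7
assert abs(md([v + c for v in x]) - md(x)) < 1e-9          # translation
assert abs(md([-v for v in x]) - md(x)) < 1e-9             # negation
assert abs(md([c * v for v in x]) - abs(c) * md(x)) < 1e-9 # scaling
print("properties hold on this sample")
```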

The relative mean difference is invariant to positive scaling, commutes with negation, and varies under translation in proportion to the ratio of the original and translated arithmetic means. That is to say, if X is a random variable and c is a constant:

• RMD(X + c) = RMD(X) · mean(X)/(mean(X) + c) = RMD(X) / (1 + c/mean(X)) for c ≠ -mean(X),
• RMD(-X) = -RMD(X), and
• RMD(cX) = RMD(X) for c > 0.

If a random variable has a positive mean, then its relative mean difference will always be greater than or equal to zero. If, additionally, the random variable can only take on values that are greater than or equal to zero, then its relative mean difference will be less than 2.

## Compared to standard deviation

Both the standard deviation and the mean difference measure dispersion: how spread out the values of a population or the probabilities of a distribution are. The mean difference is not defined in terms of a specific measure of central tendency, whereas the standard deviation is defined in terms of the deviation from the arithmetic mean. Because the standard deviation squares its differences, it gives more weight to larger differences and less weight to smaller differences than the mean difference does. When the arithmetic mean is finite, the mean difference is also finite, even when the standard deviation is infinite. See the examples for some specific comparisons. The recently introduced distance standard deviation plays a similar role to the mean difference, but works with centered distances. See also E-statistics.

## Sample estimators

For a random sample S from a random variable X, consisting of n values yi, the statistic

MD(S) = \frac{\sum_{i=1}^n \sum_{j=1}^n | y_i - y_j |}{n(n-1)}

is a consistent and unbiased estimator of MD(X). The statistic:

RMD(S) = \frac{\sum_{i=1}^n \sum_{j=1}^n | y_i - y_j |}{(n-1)\sum_{i=1}^n y_i}

is a consistent estimator of RMD(X), but is not, in general, unbiased.

Confidence intervals for RMD(X) can be calculated using bootstrap sampling techniques.
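A hedged sketch of a percentile bootstrap interval for RMD (the resampling scheme and the 95% percentile cutoffs are illustrative choices, not a prescribed procedure; all names are mine):

```python
import random
from itertools import product

# RMD of a sample, using the n(n-1) estimator from the text.
def rmd(values):
    n = len(values)
    md = sum(abs(a - b) for a, b in product(values, repeat=2)) / (n * (n - 1))
    return md / (sum(values) / n)

# Percentile bootstrap: resample with replacement, compute RMD each time,
# and take the alpha/2 and 1 - alpha/2 quantiles of the bootstrap statistics.
def bootstrap_ci(values, reps=1000, alpha=0.05):
    stats = sorted(
        rmd([random.choice(values) for _ in values]) for _ in range(reps)
    )
    return stats[int(reps * alpha / 2)], stats[int(reps * (1 - alpha / 2))]

random.seed(0)
data = [random.expovariate(1.0) for _ in range(40)]
print(bootstrap_ci(data))
```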

There does not exist, in general, an unbiased estimator for RMD(X), in part because of the difficulty of finding an unbiased estimate of the reciprocal of the mean. For example, even where the sample is known to be taken from a random variable X(p) for an unknown p, and X(p) - 1 has the Bernoulli distribution, so that Pr(X(p) = 1) = 1 - p and Pr(X(p) = 2) = p, then

RMD(X(p)) = 2p(1 - p)/(1 + p).
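This closed form can be checked against the discrete MD formula, assuming the two-point distribution Pr(X(p) = 1) = 1 − p, Pr(X(p) = 2) = p that is consistent with the quoted result:

```python
# Two-point distribution: X takes value 1 with probability 1 - p and
# value 2 with probability p. Then MD = 2p(1-p), mean = 1 + p, and
# RMD should equal 2p(1-p)/(1+p).
def rmd_two_point(p):
    values = [1, 2]
    probs = [1 - p, p]
    md = sum(probs[i] * probs[j] * abs(values[i] - values[j])
             for i in range(2) for j in range(2))
    mean = sum(pr * v for pr, v in zip(probs, values))
    return md / mean

for p in (0.1, 0.3, 0.5):
    assert abs(rmd_two_point(p) - 2 * p * (1 - p) / (1 + p)) < 1e-12
print("matches the closed form")
```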

But the expected value of any estimator R(S) of RMD(X(p)) will be of the form:

From Encyclopedia

Einstein's Special Theory of Relativity

Question:

Answers: If serious and formal, we should refer to the definitions accepted by ISO (1993) and related terms such as uncertainty and precision. Accuracy describes how close to the truth something is. The accuracy of a measurement refers to how close the measured value is to the true or accepted (reference) value. The classical way of expressing accuracy is the error of measurement, Δ(X) = X(M) - X(S), where X(M) is the measured value and X(S) is the true (correct) value (a problem: if X(S) is not known, a so-called conventionally true (reference) value is used). Various influences acting on the measured value result in a difference between the measured and true value of the measured quantity (systematic errors; random errors). Accuracy is related only to the systematic ones. Therefore, there are three ways to improve accuracy:

• by choosing the proper instrument or equipment (calibrated most closely to the reference value),
• by eliminating the possible sources of systematic error (if they are known and controllable), and
• by correcting the result with the value of all known systematic errors (if they are estimated).

Accuracy has no direct connection with the repeatability, the uncertainty, or the precision of the instrument used. It makes little sense to quote values of high precision (low random error) beyond the expected accuracy of the measurement (the method). Without stating the estimated accuracy, such a reading cannot be used in serious computations. It sounds like a joke, but it is true: "If you have only one watch, you always know exactly what time it is. If you have two watches, you are never quite sure..., so refer to GMT." Good luck.

Question: And why would the absolute humidity be more than the relative humidity in summer?

Answers: Absolute humidity is the amount of water vapor in the air, often expressed in grams per cubic meter. Relative humidity is the amount of water vapor in the air *expressed as a percentage of the maximum amount possible at a given temperature.* Since warm air can hold more water vapor than cold air, if the absolute humidity remains constant, then the relative humidity will decrease as the temperature increases. Since absolute and relative humidity are simply different ways of describing the same amount of water vapor in the air, one cannot be larger than the other regardless of temperature. The numerical value for the absolute humidity may be greater than the numerical relative-humidity percentage in hot weather, but because the units of measurement are different this doesn't mean that one is "larger" than the other.
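To make the point concrete, here is an illustrative sketch converting absolute humidity to relative humidity; the Magnus approximation used for saturation vapor pressure is one common choice among several (its constants vary by source), and the function names are mine:

```python
import math

# Saturation water-vapor density via the Magnus approximation for
# saturation vapor pressure (roughly valid for -40 to 50 degrees C),
# converted to density with the ideal gas law for water vapor.
def saturation_vapor_density(temp_c):
    """Saturation water-vapor density in g/m^3 at temperature temp_c."""
    e_s = 611.2 * math.exp(17.62 * temp_c / (243.12 + temp_c))  # Pa
    r_v = 461.5                     # J/(kg K), gas constant for water vapor
    return e_s / (r_v * (temp_c + 273.15)) * 1000  # kg/m^3 -> g/m^3

def relative_humidity(abs_humidity_g_m3, temp_c):
    return 100 * abs_humidity_g_m3 / saturation_vapor_density(temp_c)

# The same 10 g/m^3 of vapor gives a lower RH at the higher temperature.
print(relative_humidity(10, 20))  # about 58%
print(relative_humidity(10, 30))  # about 33%
```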