Best Results From Wikipedia, Yahoo Answers, and YouTube


From Wikipedia

Root-finding algorithm

A root-finding algorithm is a numerical method, or algorithm, for finding a value x such that f(x) = 0, for a given function f. Such an x is called a root of the function f.

This article is concerned with finding scalar, real or complex roots, approximated as floating point numbers. Finding integer roots or exact algebraic roots are separate problems, whose algorithms have little in common with those discussed here. (See Diophantine equation for integer roots.)

Finding a root of f(x) − g(x) = 0 is the same as solving the equation f(x) = g(x). Here, x is called the unknown in the equation. Conversely, any equation can take the canonical form f(x) = 0, so equation solving is the same thing as computing (or finding) a root of a function.

Numerical root-finding methods use iteration, producing a sequence of numbers that hopefully converges towards a limit (the so-called "fixed point") which is a root. The first values of this sequence are initial guesses. The method computes subsequent values based on the old ones and the function f.

The behaviour of root-finding algorithms is studied in numerical analysis. Algorithms perform best when they take advantage of known characteristics of the given function. Thus an algorithm to find isolated real roots of a low-degree polynomial in one variable may bear little resemblance to an algorithm for complex roots of a "black-box" function which is not even known to be differentiable. Questions include ability to separate close roots, robustness in achieving reliable answers despite inevitable numerical errors, and rate of convergence.

Specific algorithms

The simplest root-finding algorithm is the bisection method. It works when f is a continuous function and it requires previous knowledge of two initial guesses, a and b, such that f(a) and f(b) have opposite signs. Although it is reliable, it converges slowly, gaining one bit of accuracy with each iteration.
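
As a rough sketch of the bisection idea (the test function, the bracket [1, 2] and the tolerance below are illustrative assumptions, not taken from the article):

    #include <cmath>
    #include <cstdio>

    // Bisection sketch: repeatedly halve a bracket [a, b] on which f changes
    // sign, keeping the half that still brackets the root.
    double bisect(double (*f)(double), double a, double b, double tol) {
        double fa = f(a);
        while (b - a > tol) {
            double m = 0.5 * (a + b);
            double fm = f(m);
            if (fa * fm <= 0.0) {        // root lies in [a, m]
                b = m;
            } else {                     // root lies in [m, b]
                a = m;
                fa = fm;
            }
        }
        return 0.5 * (a + b);
    }

    int main() {
        // Example: the positive root of x^2 - 2 (i.e. sqrt(2)), bracketed by [1, 2].
        auto f = [](double x) { return x * x - 2.0; };
        printf("%.10f\n", bisect(f, 1.0, 2.0, 1e-10));
    }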

Newton's method assumes the function f to have a continuous derivative. Newton's method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method; its convergence is usually quadratic. Newton's method is also important because it readily generalizes to higher-dimensional problems. Newton-like methods with higher orders of convergence are Householder's methods. The first one after Newton's method is Halley's method, with cubic order of convergence.
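
A minimal Newton's-method sketch; the test function, its derivative, the starting guess and the stopping rule are assumptions made for illustration:

    #include <cmath>
    #include <cstdio>

    // Newton's iteration: x_{n+1} = x_n - f(x_n) / f'(x_n).
    double newton(double (*f)(double), double (*df)(double),
                  double x0, double tol, int max_iter) {
        double x = x0;
        for (int i = 0; i < max_iter; ++i) {
            double step = f(x) / df(x);   // assumes df(x) != 0 near the root
            x -= step;
            if (std::fabs(step) < tol) break;
        }
        return x;
    }

    int main() {
        // Example: root of x^2 - 2 starting from x0 = 1.5.
        auto f  = [](double x) { return x * x - 2.0; };
        auto df = [](double x) { return 2.0 * x; };
        printf("%.10f\n", newton(f, df, 1.5, 1e-12, 50));
    }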

Replacing the derivative in Newton's method with a finite difference, we get the secant method. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order is approximately 1.6). A generalization of the secant method in higher dimensions is Broyden's method.
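
A sketch of that replacement (the two starting points and the tolerance are illustrative assumptions): the derivative in Newton's step is swapped for the finite difference (f(x1) - f(x0)) / (x1 - x0).

    #include <cmath>
    #include <cstdio>

    // Secant-method sketch: Newton's step with a finite-difference slope.
    double secant(double (*f)(double), double x0, double x1,
                  double tol, int max_iter) {
        double f0 = f(x0), f1 = f(x1);
        for (int i = 0; i < max_iter && std::fabs(x1 - x0) > tol; ++i) {
            double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);  // secant step
            x0 = x1;  f0 = f1;
            x1 = x2;  f1 = f(x2);
        }
        return x1;
    }

    int main() {
        auto f = [](double x) { return x * x - 2.0; };    // root: sqrt(2)
        printf("%.10f\n", secant(f, 1.0, 2.0, 1e-12, 50));
    }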

The false position method, also called the regula falsi method, is like the secant method. However, instead of retaining the last two points, it makes sure to keep one point on either side of the root. The false position method is faster than the bisection method and more robust than the secant method, but requires the two starting points to bracket the root.
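
A false-position sketch under the same illustrative assumptions as above (bracket [1, 2], simple tolerance): the interpolated point always replaces the endpoint whose sign matches, so the bracket keeps straddling the root.

    #include <cmath>
    #include <cstdio>

    // Regula falsi sketch: secant-style interpolation that preserves the bracket.
    double false_position(double (*f)(double), double a, double b,
                          double tol, int max_iter) {
        double fa = f(a), fb = f(b);
        double c = a;
        for (int i = 0; i < max_iter; ++i) {
            c = b - fb * (b - a) / (fb - fa);    // interpolated point
            double fc = f(c);
            if (std::fabs(fc) < tol) break;
            if (fa * fc < 0.0) { b = c; fb = fc; }   // root in [a, c]
            else               { a = c; fa = fc; }   // root in [c, b]
        }
        return c;
    }

    int main() {
        auto f = [](double x) { return x * x - 2.0; };
        printf("%.10f\n", false_position(f, 1.0, 2.0, 1e-12, 100));
    }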

The secant method also arises if one approximates the unknown function f by linear interpolation. When quadratic interpolation is used instead, one arrives at Müller's method. It converges faster than the secant method. A particular feature of this method is that the iterates x_n may become complex.

This can be avoided by interpolating the inverse of f, resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
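
A minimal sketch of a single inverse-quadratic-interpolation step (the three sample points below are illustrative assumptions): fit x as a quadratic function of y through three points and evaluate it at y = 0.

    #include <cstdio>

    // One inverse-quadratic-interpolation step via Lagrange interpolation of x(y).
    double iqi_step(double a, double fa, double b, double fb, double c, double fc) {
        return a * fb * fc / ((fa - fb) * (fa - fc))
             + b * fa * fc / ((fb - fa) * (fb - fc))
             + c * fa * fb / ((fc - fa) * (fc - fb));
    }

    int main() {
        // Example: f(x) = x^2 - 2 sampled at 1, 1.5 and 2.
        double a = 1.0, b = 1.5, c = 2.0;
        double fa = a * a - 2.0, fb = b * b - 2.0, fc = c * c - 2.0;
        printf("%.10f\n", iqi_step(a, fa, b, fb, c, fc));  // close to sqrt(2)
    }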

Finally, Brent's method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.

Finding roots of polynomials

Much attention has been given to the special case that the function f is a polynomial; there exist root-finding algorithms exploiting the polynomial nature of f. For a univariate polynomial of degree less than five, we have closed-form solutions such as the quadratic formula. However, even this degree-two solution should be used with care to ensure numerical stability. The degree-four solution is unwieldy and troublesome. Higher-degree polynomials have no such general solution, according to the Abel–Ruffini theorem (Ruffini 1799, Abel 1824).
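
As a hedged illustration of the numerical-stability point (the helper name and the test coefficients are my own choices): the naive quadratic formula cancels badly when b^2 is much larger than 4ac, so a common remedy computes the larger-magnitude root without cancellation and recovers the other from the product of the roots, c/a.

    #include <cmath>
    #include <cstdio>

    // Careful quadratic solver for a*x^2 + b*x + c = 0 with real roots:
    // avoid subtracting nearly equal quantities, then use x1 * x2 = c / a.
    void solve_quadratic(double a, double b, double c, double &x1, double &x2) {
        double d = std::sqrt(b * b - 4.0 * a * c);   // assumes a != 0, real roots
        double q = (b >= 0.0) ? -0.5 * (b + d) : -0.5 * (b - d);
        x1 = q / a;
        x2 = c / q;
    }

    int main() {
        double x1, x2;
        solve_quadratic(1.0, 1e8, 1.0, x1, x2);   // naive formula would lose x2
        printf("%.12g %.12g\n", x1, x2);
    }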

Birge-Vieta's method combines Newton's method with Horner's scheme of synthetic division: at each Newton step the polynomial and its derivative are evaluated by synthetic division, and once a root is found the same division deflates the polynomial so the remaining roots can be sought.


From Yahoo Answers

Question:Please help with an algorithm (method) to find a square root. I have seen and understood how to find the square root of a value through this method, but I still cannot work out which divisor I should start dividing the value with. For example: find the square root of 0.204304. After skipping the 0, I can divide the first pair, 20, by 2 or 4 or 5 or 10. However, the correct square root is only found if the division is begun with 4. My question is: how do I know whether 4 or 5 or 10 will bring the correct result? A detailed and easy-to-understand response will be highly appreciated and selected best.

Answers:Find the square root of 0.204304.

Grouping: always start at the decimal point and move in pairs of digits (two digits together), to the right for the fractional part and to the left for the integer part. So 0.204304 is grouped as 20 | 43 | 04: 20 is group 1, 43 is group 2, and 04 is group 3 (normally this is shown by putting a bar or hyphen above each group). Write 0. 20 43 04 in the layout you would use for ordinary long division, with the divisor to the left of the vertical bar.

Step 1: consider 20 (group 1). Since 2^2 = 4, 3^2 = 9, 4^2 = 16 and 5^2 = 25, always take the largest perfect square not exceeding the group, here 16. Write 4 as the first quotient digit (above the group 20) and 4 as the divisor. Write 16 below 20 and subtract: 20 - 16 = 4.

Step 2: bring down the next group, 43, beside the remainder 4, giving 443. In this method the divisor keeps changing: the new divisor is the old divisor 4 plus the old quotient digit 4, so 4 + 4 = 8. Now find the digit that can be appended to 8 and multiplied by itself without exceeding 443 (note that the units digit of the trial divisor must match the multiplier): 81 * 1 = 81, 82 * 2 = 164, 83 * 3 = 249, 84 * 4 = 336, 85 * 5 = 425, 86 * 6 = 516 (greater than 443, so not suitable). Since 425 is the nearest product not exceeding 443, write 5 as the next quotient digit (above group 43) and append 5 to the divisor, making it 85. The quotient so far is 45, the new remainder is 443 - 425 = 18, and the next divisor base is 85 + 5 = 90.

Step 3: bring down the last group, 04, beside the remainder 18, giving 1804, to be handled with 90 in the same way: 901 * 1 = 901, 902 * 2 = 1804 (exactly our remainder). So write 2 as the next quotient digit (4 stands over group 1, 5 over group 2, and 2 over group 3), and the remainder is 1804 - 1804 = 0.

The quotient digits are 4, 5, 2 and there is no remainder, hence sqrt(0.204304) = 0.452.
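
For reference, a small C++ sketch of the same digit-by-digit procedure; the function name and the scaling of 0.204304 to the integer 204304 are my own illustrative choices, not part of the answer above:

    #include <cstdio>
    #include <vector>
    #include <cstdint>

    // Digit-by-digit square root: take the digits in pairs; at each step the next
    // root digit d is the largest one with (20*q + d) * d <= remainder, where q is
    // the root found so far (this is the "add the quotient to the divisor" rule).
    uint64_t digit_sqrt(uint64_t n) {
        std::vector<int> pairs;                 // base-100 "digits", most significant first
        if (n == 0) pairs.push_back(0);
        for (uint64_t m = n; m > 0; m /= 100) pairs.insert(pairs.begin(), m % 100);

        uint64_t q = 0, r = 0;
        for (int p : pairs) {
            r = r * 100 + p;                    // bring down the next pair
            int d = 0;
            while ((20 * q + d + 1) * (d + 1) <= r) ++d;
            r -= (20 * q + d) * d;
            q = q * 10 + d;
        }
        return q;                               // floor of the square root
    }

    int main() {
        // 0.204304 scaled by 10^6 is 204304; its root 452 gives sqrt = 0.452.
        printf("%llu\n", (unsigned long long)digit_sqrt(204304));
    }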

Question:1) Use the Chinese square root algorithm to find the square root of 142,884. 2) Use the Chinese cube root algorithm to find the cube root of 12,812,904.

Answers:There are different editions of the Chinese algorithm for getting the square root or cube root. For example: the Nine Chapters algorithm, which gives a numerical result; Jia Xian's method, which gives a numerical result; and Qin Jiushao's generalization, which is equivalent to the "Horner scheme" of 600 years later. So just use the Horner scheme (Qin Jiushao's method), because it is equivalent to what you call the Chinese root algorithm.
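
As a rough illustration (not the historical Chinese procedure itself), here is a sketch of the Horner scheme the answer points to: evaluate a polynomial and its derivative by synthetic division, then drive Newton's iteration toward sqrt(142884), i.e. the positive root of x^2 - 142884. The starting guess and iteration count are assumptions.

    #include <cstdio>
    #include <vector>

    // Horner scheme: evaluate p(x) and p'(x) in one pass of synthetic division.
    // c holds coefficients from the highest power down to the constant term.
    double horner(const std::vector<double> &c, double x, double &dp) {
        double p = c[0];
        dp = 0.0;
        for (size_t i = 1; i < c.size(); ++i) {
            dp = dp * x + p;      // derivative runs one synthetic division behind
            p  = p * x + c[i];
        }
        return p;
    }

    int main() {
        std::vector<double> c = {1.0, 0.0, -142884.0};   // x^2 - 142884
        double x = 400.0;                                // rough starting guess
        for (int i = 0; i < 20; ++i) {
            double dp, p = horner(c, x, dp);
            x -= p / dp;                                 // Newton step
        }
        printf("%.6f\n", x);                             // expect 378
    }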

Question:When finding the square root using the long-division kind of method, why do we multiply the initial number by 20? For example, for 264:

        1  6
       ------
    1 | 2 64
        1
       ------
    2*10+6 | 1 64    <------ Why are we multiplying by 2?
             1 56
            ------
                 8

Answers:I do not know this algorithm, but let me quote Richard Feynman from his lecture on algebra: "There is a definite arithmetic procedure, but the easiest way to find the square root of any number N is to choose some a fairly close, find N/a, average a' = 1/2[a + (N/a)], and use this average a' for the next choice of a. The convergence is very rapid -- the number of significant figures doubles each time." In this case 16^2 = 256, so 16 would be a logical first a.
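
A minimal C++ sketch of the averaging iteration Feynman describes; the starting guess of 16 and the iteration count are illustrative assumptions:

    #include <cstdio>

    // Babylonian (Newton) iteration: repeatedly replace a by the average of a and N/a.
    double babylonian_sqrt(double N, double a, int iterations) {
        for (int i = 0; i < iterations; ++i)
            a = 0.5 * (a + N / a);     // correct digits roughly double each pass
        return a;
    }

    int main() {
        // N = 264 with first guess a = 16 (since 16^2 = 256), as in the answer above.
        printf("%.10f\n", babylonian_sqrt(264.0, 16.0, 5));
    }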

Question:Hi everyone, I am doing a project about different methods to compute (without the use of a calculator) square roots of positive real numbers. One of these is the so-called "high school method", which looks kind of like doing long division (see link below for further details). I know and understand how it works, but I would like to know WHY it works, in mathematical terms of course. http://www.homeschoolmath.net/teaching/square-root-algorithm.php Any ideas? Thank you

Answers:It uses the idea that (a+h)^2 = a^2 + 2ah + h^2. See how this applies to the algorithm, particularly the doubling. This should be an interesting project. Good luck with it.
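
A small worked illustration of that identity, using numbers from the 0.204304 example earlier on this page (the specific values are only for illustration): if a = 0.4 is the part of the root already found and h is the next digit's contribution, then the leftover 0.204304 - a^2 = 0.044304 must absorb 2ah + h^2 = (2a + h)h. Trying h = 0.05 gives (0.8 + 0.05) * 0.05 = 0.0425, which, scaled up, is exactly the "85 * 5 = 425" trial-divisor step in the worked answer above. The 2a term is where the doubling comes from, and writing it against the next digit position gives the factor 20 in the integer layout.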

From YouTube

Square Root Algorithm (C++) :Learn the Babylonian Method for estimating the square root of positive numbers. Includes an introduction to algorithms, recursion, and a programming example in C++.

MF49: How to find a square root :We consider three methods, or algorithms, for finding the square root of a natural number we know to be a square. One is trial-and-error estimation, another is the Babylonian method, equivalent to Newton's method, and the third we call the Vedic method, since it goes back to the Hindus. It is completely feasible to do by hand. This video belongs to Wildberger's MathFoundations series, which sets out a coherent and logical framework for modern mathematics.