Consider $f(x) = x(1+\frac{1}{1+x}) - 3$. Clearly $f(0) = -3$, which is negative; equally clearly, $f(3) = 0.75$ is positive. (More generally, for $f(x) = x(1+\sum_j\frac{k_j\eta_j}{1+k_jx}) - \xi$ with $\xi$ and all of the $k_j$ and $\eta_j$ positive, $f(\xi) = \xi\sum_j\frac{k_j\eta_j}{1+k_j\xi} > 0$, since the factor $1+\sum_j\frac{k_j\eta_j}{1+k_jx}$ is greater than 1 for any nonnegative $x$.) Since $f(x)$ is a composition of continuous functions (division being continuous except where the denominator is 0, which $1+x$ never is on $[0,3]$), $f(x)$ is continuous on $[0,3]$; by the intermediate value theorem, a root therefore exists in $[0,3]$. Furthermore, $\frac{df}{dx} = 1+\frac{1}{1+x} - \frac{x}{(1+x)^2} = 1 + \frac{1}{(1+x)^2}$, which is clearly always positive on $[0,3]$; hence $f(x)$ is strictly increasing on this interval, and only one root exists on $[0,3]$.
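For concreteness, the family of functions under discussion can be written down directly. The following minimal C sketch (illustrative only, not the submitted program; all names here are made up) evaluates $f$ and confirms the sign change for the particular case above:

    #include <stdio.h>

    /* f(x) = x*(1 + sum_j k_j*eta_j/(1 + k_j*x)) - xi */
    static double f(double x, int M, const double *k, const double *eta, double xi)
    {
        double s = 0.0;
        for (int j = 0; j < M; j++)
            s += k[j] * eta[j] / (1.0 + k[j] * x);
        return x * (1.0 + s) - xi;
    }

    int main(void)
    {
        double k[] = {1.0}, eta[] = {1.0};  /* the particular case: M = 1, xi = 3 */
        printf("f(0) = %g\n", f(0.0, 1, k, eta, 3.0));  /* prints -3 */
        printf("f(3) = %g\n", f(3.0, 1, k, eta, 3.0));  /* prints 0.75 */
        return 0;
    }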
(Above and below, $\sum_j$ is sometimes used as shorthand for $\sum^M_{j=1}$.)
Note, as above, that for any sets of $k$, $\eta$, and $\xi$ where all the listed parameters are positive, the root will be bracketed by $x=0$ and $x=\xi$, so these values can be used to construct an initial interval for bisection.
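A minimal bisection sketch using exactly this bracketing (again illustrative C, not the submitted test.exe; it assumes all parameters positive, so that $f(0) < 0 < f(\xi)$):

    #include <stdio.h>

    static double f(double x, int M, const double *k, const double *eta, double xi)
    {
        double s = 0.0;
        for (int j = 0; j < M; j++)
            s += k[j] * eta[j] / (1.0 + k[j] * x);
        return x * (1.0 + s) - xi;
    }

    /* Bisect on [0, xi]: f(0) = -xi < 0 and f(xi) > 0, as argued above. */
    static double bisect(int M, const double *k, const double *eta, double xi, double tol)
    {
        double lo = 0.0, hi = xi;
        while (hi - lo > tol) {
            double mid = 0.5 * (lo + hi);
            if (f(mid, M, k, eta, xi) < 0.0)
                lo = mid;   /* root lies in the upper half */
            else
                hi = mid;   /* root lies in the lower half */
        }
        return 0.5 * (lo + hi);
    }

    int main(void)
    {
        double k[] = {1.0}, eta[] = {1.0};
        printf("root ~ %.10f\n", bisect(1, k, eta, 3.0, 1e-12));  /* ~2.3027756 */
        return 0;
    }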
Using Picard iteration in the above case, where $x_{n+1} = \xi - x_n\sum_j\frac{k_j\eta_j}{1+k_jx_n}$: in the particular case examined above, where $x_{n+1} = 3 - \frac{x_n}{1+x_n}$, the iteration clearly converges, as the derivative of the iteration function, $\frac{-1}{(1+x)^2}$, has absolute value strictly less than one for all positive $x$, and thus for all $x$ in the interval between any reasonable guess and the true root; as shown in class, Picard iteration will therefore contract that interval. In the case where $x_{n+1} = 3 - \frac{5x_n}{1+x_n}$, the iteration does not converge, as near the true root (about 0.791) the derivative of the iteration function is greater than one in magnitude, so Picard iteration from a value near the true root tends to expand the interval between the approximation and the root.
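A sketch of the plain Picard iteration (illustrative C; the converging case $\eta = [1]$ is wired in, and substituting eta[] = {5.0} reproduces the non-convergence to the desired root described above):

    #include <stdio.h>
    #include <math.h>

    static double sum_term(double x, int M, const double *k, const double *eta)
    {
        double s = 0.0;
        for (int j = 0; j < M; j++)
            s += k[j] * eta[j] / (1.0 + k[j] * x);
        return s;
    }

    /* Picard iteration x_{n+1} = xi - x_n * sum_j k_j*eta_j/(1 + k_j*x_n) */
    int main(void)
    {
        double k[] = {1.0}, eta[] = {1.0};
        double xi = 3.0, x = 1.0;
        for (int n = 0; n < 100; n++) {
            double next = xi - x * sum_term(x, 1, k, eta);
            if (fabs(next - x) < 1e-12) { x = next; break; }
            x = next;
        }
        printf("x ~ %.10f\n", x);  /* ~2.3027756 for eta = [1] */
        return 0;
    }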
These results can be verified using the program test.exe provided, with the inputs "test.exe 1 3 1 1 picard" and "test.exe 1 3 1 5 picard" for the two cases respectively. Note that the iteration in the second case converges to an undesired negative root.
Supposing instead $x_{n+1} = \frac{\xi + x_n(\alpha - \sum_j\frac{k_j\eta_j}{1+k_jx_n})}{1+\alpha}$ (whose fixed points are exactly the roots of $f$, since $x(1+\alpha) = \xi + \alpha x - x\sum_j\frac{k_j\eta_j}{1+k_jx}$ reduces to $f(x) = 0$), and substituting $k=[1]$, $\eta = [5]$, $\xi = 3$, experiment shows that two sample values of $\alpha$ for which this converges to the proper root are 5 and 2. In fact, simple analysis of the derivative of the iteration function, as explained above, at $x=0.791$ shows that any $\alpha$ value greater than roughly 0.28 should suffice: the derivative there is $\frac{\alpha - 1.558}{1+\alpha}$, which lies in $(-1,1)$ precisely when $\alpha > 0.279$.
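A sketch of this damped iteration (illustrative C; $\alpha = 2$ is one of the two convergent sample values above):

    #include <stdio.h>
    #include <math.h>

    static double sum_term(double x, int M, const double *k, const double *eta)
    {
        double s = 0.0;
        for (int j = 0; j < M; j++)
            s += k[j] * eta[j] / (1.0 + k[j] * x);
        return s;
    }

    /* Damped iteration x_{n+1} = (xi + x_n*(alpha - sum)) / (1 + alpha);
       a fixed point satisfies x*(1 + sum) = xi, i.e. f(x) = 0. */
    int main(void)
    {
        double k[] = {1.0}, eta[] = {5.0};
        double xi = 3.0, alpha = 2.0, x = 1.0;
        for (int n = 0; n < 500; n++) {
            double next = (xi + x * (alpha - sum_term(x, 1, k, eta))) / (1.0 + alpha);
            if (fabs(next - x) < 1e-12) { x = next; break; }
            x = next;
        }
        printf("x ~ %.10f\n", x);  /* ~0.7912878 for alpha = 2 */
        return 0;
    }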
...
Consider Newton's method for the case in which $M = 1$, $\xi = 1$, $k = [1]$, $\eta = [10]$.
Then, using the program pl2.exe with guess 1.6 (instruction: "pl2.exe 1 1 1 10 n 1.6"), it is clear that this method does not converge to the desired root, but to a negative root elsewhere. Using guess 1.5, the program converges to a fictitious root at $-1$ (in fact a pole of $f$, where the term $\frac{1}{1+x}$ is singular, rather than a true root); using guess 1.4999, it converges to the proper root. The failure observed derives from the fact that near $x = 1.5$ the slope of the function is shallow enough that the Newton step carries the iterate past the desired root and into the negative domain (from exactly 1.5, the first step lands exactly on the pole at $-1$). (Using 1.0, good results are observed.) It is of course possible to programmatically exclude negative guesses in any of a number of ways, such as by including a bisection step to produce a better guess whenever such a failure condition is encountered; however, in this class of problems, an initial guess of 0 should be enough to prevent this sort of error. This class of functions is always concave over the positive domain: the second derivative is of the form $2(x\sum_j\frac{k^3_j\eta_j}{(1+k_jx)^3} - \sum_j\frac{k^2_j\eta_j}{(1+k_jx)^2})$, and trivially, for any $j$ (all variables assumed positive), $x\frac{k^3_j\eta_j}{(1+k_jx)^3} < \frac{k^2_j\eta_j}{(1+k_jx)^2}$, since $1 + k_jx > k_jx$; hence $f''(x)$ (where $f(x)$ is the equation whose root is sought) is negative. As shown in class, Newton's method applied to a concave increasing function from a guess smaller than the root never overshoots, and 0 is smaller than the positive root (which must exist, as shown above).
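A Newton sketch for this family (illustrative C, not the submitted pl2.exe; it uses the per-term simplification $\frac{d}{dx}[\frac{k\eta x}{1+kx}] = \frac{k\eta}{(1+kx)^2}$, so that $f'(x) = 1+\sum_j\frac{k_j\eta_j}{(1+k_jx)^2}$, and the safe initial guess of 0):

    #include <stdio.h>
    #include <math.h>

    /* f(x) = x*(1 + sum_j k_j*eta_j/(1 + k_j*x)) - xi and its derivative
       f'(x) = 1 + sum_j k_j*eta_j/(1 + k_j*x)^2 */
    static void eval(double x, int M, const double *k, const double *eta,
                     double xi, double *fx, double *dfx)
    {
        double s = 0.0, ds = 0.0;
        for (int j = 0; j < M; j++) {
            double d = 1.0 + k[j] * x;
            s  += k[j] * eta[j] / d;
            ds += k[j] * eta[j] / (d * d);
        }
        *fx  = x * (1.0 + s) - xi;
        *dfx = 1.0 + ds;
    }

    int main(void)
    {
        double k[] = {1.0}, eta[] = {10.0};
        double xi = 1.0, x = 0.0;  /* guess 0: no overshoot on a concave increasing f */
        for (int n = 0; n < 50; n++) {
            double fx, dfx;
            eval(x, 1, k, eta, xi, &fx, &dfx);
            double next = x - fx / dfx;
            if (fabs(next - x) < 1e-14) { x = next; break; }
            x = next;
        }
        printf("root ~ %.10f\n", x);  /* ~0.0990195, i.e. sqrt(26) - 5 */
        return 0;
    }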
Using the case $M = 3$, $\xi = 9$, $k = [1\ 2\ 6]$, and $\eta = [2\ 3\ 1]$, running Newton's method ("pl2.exe 3 9 1 2 6 2 3 1 n") yields a result accurate to a relative error (using the computed root to approximate the actual root) of 1 in $10^{10}$ in 7 iterations. A meaningful set of second-order error ratios (here, the magnitude of the quotient of each iteration's error and the square of the previous iteration's error) is the four values (to three figures) 0.158, 0.0676, 0.0359, 0.0335; thus, though there are not really enough values to be certain, the observed asymptotic error constant appears close to the predicted value (derived in class and in any number of textbooks) of $\left|\frac{f''(x)}{2f'(x)}\right|$ evaluated at the root found, which is approximately 0.0335 (to three figures).
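As a hand check, computed directly from the formulas above rather than taken from the program's output: the root found is near $x \approx 3.806$, where $f'(x) = 1+\sum_j\frac{k_j\eta_j}{(1+k_jx)^2} \approx 1.178$ and, from the expression for $f''(x)$ given above, $f''(x) \approx -0.0789$, so

$\left|\frac{f''(x)}{2f'(x)}\right| \approx \frac{0.0789}{2 \times 1.178} \approx 0.0335,$

in agreement with the trailing observed ratios.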
Similar analysis of the Picard iteration solution for the same problem, using the $\alpha$-value above, gives a first-order asymptotic error ratio around 0.9 (ignoring the last ten or so iterations, over which convergence notably speeds up, probably an artefact), which is fairly close, as one might expect, to the derivative of the iteration function at the root, which is 0.921 (to three figures).
Similar analysis of the bisection algorithm for the same problem yields, as one might expect from the fact that the maximum error is halved at each step, first-order error ratios whose geometric mean is 0.497, which is very close to 0.5.
Clearly, for this class of problems, Newton's method is by far the fastest of these three ways to solve a given problem, and it is also reliable (given an appropriate initial guess). For this reason the program submitted will, if given no choice of method (e.g. "pl2.exe 3 9 1 2 6 2 3 1"), apply Newton's method with an initial guess of 0.
An explanation of the Excel files attached: these were used to estimate the asymptotic error constants. In the first purely numeric column will be found the successive estimates of the root; in the next column, the error of the previous iteration; then that of the current iteration; then the ratio of the two (or the ratio of the current error to the square of the previous, in the case of Newton's method).
Running pl2.exe will generate a great deal of noise by printing the intermediate iterates on the way to the root. This was useful for this exercise but will not generally be so. To stop it, recompile pl2.c, having first undefined the preprocessor constant "NOISY".
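The guard in question is presumably the usual preprocessor idiom; a minimal sketch of the pattern (illustrative only; the actual arrangement in pl2.c may differ):

    #include <stdio.h>

    #define NOISY  /* remove this line and recompile to suppress per-iteration output */

    int main(void)
    {
        double x = 1.0;
        for (int n = 0; n < 5; n++) {
            x = 0.5 * x;  /* stand-in for one solver iteration */
    #ifdef NOISY
            printf("iter %d: x = %g\n", n, x);
    #endif
        }
        printf("final: x = %g\n", x);
        return 0;
    }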