superman22 said:

Mad Merlin, lol, I know that.

But you're mistaken here. This is a math problem I have to do for IB (anyone in that?) for Calculus BC and on, so I dunno if anyone can really help. (I'm hoping someone can, though.)

I'm just trying to figure out why the iteration* of cos(x) converges.

It has to do with the slope of the equation

x_(n+1) = a*cos(x_n), with a being the slope.

I figured out that for it to converge to a point, it has to be -1.3 < a < 1.3, but I'm trying to figure out the reasoning behind this.

Anyone know?

This is a case treated by one of the simpler Fixed Point Theorems, which describe the conditions under which you can find a point x0 such that f(x0) = x0 for a given function f. In particular, for your question, it also tells you when the sequence generated by x_n = f(x_(n-1)) converges to the fixed point.

Let's say you look at a function f(x) on the interval [a,b] such that for every point x in the interval, f(x) also takes values only in [a,b]. One example is f(x) = x^2 on [0,1]. Another is sin(x) on [-2Pi, 2Pi]. As you might be able to tell, this is really useful: while it doesn't guarantee that starting at a point x0 and iterating will converge to a fixed point, it does say that the iterates will never leave the interval [a,b].
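Here's a quick numerical sketch (just an illustration, not part of the proof) checking this self-mapping property for f(x) = cos(x) on [0,1], which is the relevant interval for the original cos iteration: cos is decreasing on [0,1], so its range there is [cos(1), 1], which sits inside [0,1].

```python
import math

# Sample f(x) = cos(x) on a grid over [0, 1] and check the values stay in [0, 1].
lo, hi = 0.0, 1.0
samples = [lo + (hi - lo) * i / 1000 for i in range(1001)]
values = [math.cos(x) for x in samples]

print(min(values), max(values))  # roughly 0.5403 (= cos 1) and 1.0, both inside [0, 1]
assert all(lo <= v <= hi for v in values)
```

So once an iterate lands in [0,1], every later iterate stays there, which is exactly the kind of trapping interval the argument below needs.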

Next, let's think about what conditions would guarantee a fixed point. You're in IB Calc, right? Do you know about derivatives and the Mean Value Theorem? I'll state a fairly restrictive condition that always gives you a convergent fixed-point iteration, and try to point you in the right direction for a proof.

Suppose that |f'(x)| < A < 1 everywhere on the interval [a,b] (with f still mapping [a,b] into itself, as above). Then iterating from any starting point in [a,b] converges to a fixed point, say z, i.e. f(z) = z. Say you start off with a point x0, and denote x_n = f(x_(n-1)) for positive integers n, e.g. x1 = f(x0).

The Mean Value Theorem states that for a function f(x) that is differentiable on the interval [a,b], there exists a point c inside the interval such that (f(b) - f(a))/(b - a) = f'(c). Now let's apply this theorem to our situation. Both z and x0 lie in [a,b], so by the theorem there exists a point c0 between z and x0 such that

f(z) - f(x0) = (z - x0) f'(c0)

Taking absolute values, and using f(z) = z and f(x0) = x1:

|z - x1| = |f(z) - f(x0)| = |f'(c0)| |z - x0| < A |z - x0| < |z - x0|

As you can see, this says that the distance between x1 and z is smaller than the distance between x0 and z; in fact, it is smaller by at least the factor A. Repeating the same argument at each step gives |z - x_n| < A^n |z - x0|, and since A < 1 this goes to 0. The rest of the proof that the sequence of x_n's converges to z should be pretty easy to produce from here.
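The geometric shrinking is easy to watch numerically. Here's a sketch (my own setup, with a = 0.8 chosen so that |f'(x)| = |a*sin(x)| <= 0.8 < 1 everywhere) tracking the ratio of successive distances to the fixed point:

```python
import math

a = 0.8  # contraction factor bound: |d/dx (a*cos x)| = |a*sin x| <= a

# First pin down the fixed point z by iterating many times.
z = 2.0
for _ in range(500):
    z = a * math.cos(z)

# Now restart from x0 = 2.0 and record distances to z at each step.
x = 2.0
dists = []
for _ in range(10):
    dists.append(abs(x - z))
    x = a * math.cos(x)

ratios = [dists[k + 1] / dists[k] for k in range(len(dists) - 1)]
print(ratios)  # every ratio stays below A = 0.8, as the Mean Value Theorem predicts
```

Each ratio is |f'(c0)| for some intermediate point c0, so the Mean Value Theorem bound above says none of them can reach 0.8.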

The conditions stated above are pretty restrictive when you think about it. The function has to be differentiable on an entire interval, and the absolute value of the derivative has to be bounded by a constant less than 1. Are there other cases of f where you can iterate and still get convergence? What happens when f isn't differentiable over the entire interval? Can you survive if it's not differentiable at just a single point? What if the interval you look at f on isn't bounded, but goes to infinity? Can you treat infinity as a "fixed point" too? Anyway, there are lots of things you can ask about this stuff... hope this helps!