# math help. math iteration

#### superman22

##### Diabloii.Net Member
if you have a function and you keep plugging the output back in as an iteration, why does it sometimes converge, and why does it sometimes diverge?

what makes it converge/diverge?

for example... a sin graph, or a tan graph.

if it has
a*sin(x), and a = 1 while x = 1,
it would be

but if you change the exponent in front, to say,

if you repeat this iteration, it diverges, why?

#### Corneo

##### Diabloii.Net Member
Errr... wait till tarnok gets here.
But that thing you changed is called a coefficient, not an exponent.

#### AeroJonesy

##### Diabloii.Net Member
Yeah, I don't get your question either. Perhaps you can clarify?

#### Mad Merlin

##### Diabloii.Net Member
Not quite sure what you're asking here, but I can tell you that none of the trigonometric functions converge; sin and cos are periodic over 2pi radians (tan actually repeats every pi). The graph of tan x has vertical asymptotes at pi/2 and 3pi/2 radians, where you could take a left/right-hand limit of positive or negative infinity if you wanted, but that isn't really what I think you mean by convergence.

#### superman22

##### Diabloii.Net Member
Mad Merlin, lol, I know that.

but you're mistaken here. this is a math problem I have to do for IB (anyone in that?) for Calculus BC and on, so I dunno if anyone can really help. (I'm hoping someone can though)

I'm just trying to figure out why the iteration* of cos(x) converges.
it has to do with the slope of the equation

x(n+1) = a*cos(x(n)), with a being the slope

I figured out that for it to converge to a point, it has to be -1.3 < a < 1.3

but I'm trying to figure out the reasoning behind this.
anyone know?
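To see concretely what superman22 is describing, here is a quick Python sketch (mine, not from the thread; the helper name `iterate` is made up) that runs x(n+1) = a*cos(x(n)) for a convergent and a divergent choice of a:

```python
import math

def iterate(a, x0=1.0, n=200):
    """Run x_{k+1} = a*cos(x_k) for n steps and return the last few values."""
    x = x0
    history = []
    for _ in range(n):
        x = a * math.cos(x)
        history.append(x)
    return history[-4:]

print(iterate(1.0))  # settles to a single value near 0.739
print(iterate(1.4))  # never settles: keeps alternating between two values
```

With a = 1.0 the tail values all agree to many decimal places; with a = 1.4 they bounce between two distinct levels instead of converging to a point.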

##### Banned
you will find that other numbers converge also.

The slope is the gradient of the line tangent to the graph. Where that tangent line is horizontal (or is it vertical?), that marks spots of non-convergence.

As you know from looking at the cos/sin graphs, they repeat themselves; that's why I think you will find more ranges of convergence than just the one you listed.

#### Suicidal Zebra

##### Diabloii.Net Member
Superman...

Try analysing the gradient of the tangent to the curve (for example sin(x)) at x(0), x(1)... x(n) etc. Essentially, what you should find (in a general sense) is that convergent iterations have a gradient of -1 < m < 1 at the values x(n), while divergent ones have gradients of magnitude greater than 1.

Often, for complex functions, the key to finding a convergent path is using a 'well picked' x(0); this can be essential when analysing chaotic behaviour.
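Zebra's gradient criterion is easy to check numerically. The sketch below (an illustration of mine, with a made-up helper name) iterates a function toward its fixed point and then reports the derivative magnitude there; for cos(x) the fixed point is near 0.739, and the slope there is about 0.67, comfortably inside -1 < m < 1:

```python
import math

def gradient_at_fixed_point(f, df, x0=1.0, n=100):
    """Iterate x = f(x) to (approximately) its fixed point,
    then return |f'| evaluated at that point."""
    x = x0
    for _ in range(n):
        x = f(x)
    return abs(df(x))

m = gradient_at_fixed_point(math.cos, lambda x: -math.sin(x))
print(m)  # about 0.67, i.e. inside (-1, 1), so the iteration converges
```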

#### superman22

##### Diabloii.Net Member
nm... I'm not sure what you guys are talking about, but...

to find why it converges and diverges at certain points:

you have to take the derivative of cos(x).
the derivative of the function, as we know, gives you the slope.

the derivative of cos(x) = -sin(x)

now, like Zebra mentioned, the iteration converges when the slope is between -1 and 1.

the values of a that satisfy this condition range from -1.3 to 1.3. that is why. now...

all I need is the program Maple 9, to graph it...

anyone have it?
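If Maple isn't handy, the boundary can also be hunted down numerically. This is my own Python sketch (not Maple code, and `converges` is a made-up helper): it bisects on a between a value where the iteration x(n+1) = a*cos(x(n)) settles and one where it doesn't, and lands near the -1.3 < a < 1.3 range found by trial; the exact stability boundary works out to about 1.32, where the slope at the fixed point reaches magnitude 1.

```python
import math

def converges(a, x0=1.0, n=2000, tol=1e-9):
    """True if x_{k+1} = a*cos(x_k) settles to a single value."""
    x = x0
    for _ in range(n):
        x_next = a * math.cos(x)
        if abs(x_next - x) < tol:
            return True
        x = x_next
    return False

# bisect on a between a known-convergent and a known-divergent value
lo, hi = 1.0, 2.0
for _ in range(40):
    mid = (lo + hi) / 2
    if converges(mid):
        lo = mid
    else:
        hi = mid
print(lo)  # close to 1.3, matching the range found by trial and error
```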

#### Wuhan_Clan

##### Diabloii.Net Member
superman22 said:
if you have a function and you keep plugging the output back in as an iteration, why does it sometimes converge, and why does it sometimes diverge?

what makes it converge/diverge?

for example... a sin graph, or a tan graph.

if it has
a*sin(x), and a = 1 while x = 1,
it would be

but if you change the exponent in front, to say,

if you repeat this iteration, it diverges, why?
I'm not sure if you are talking about functions or series. The logic behind both is similar, but for infinite series there are certain tests you can do (such as the Integral Test, Limit Test, Ratio Test, etc.) to check whether they converge. You should be able to look these up in a calculus textbook.

To test functions, there is a systematic way to find convergence by following the general rules of limits. For any function f(x), you let x tend towards infinity.
Some basic rules are like: 1/x tends to 0 as x tends to infinity.

You can build up to more complicated rules but most math classes will require you to prove them (or the teacher will prove it and show you).
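For the series side of Wuhan_Clan's answer, the Ratio Test can be sketched numerically. This illustration is mine (the helper name `ratio_probe` is made up), and a single probe at large n only hints at the limit rather than proving it:

```python
def ratio_probe(term, n=1000):
    """Probe |a_{n+1} / a_n| at a large n as a hint about the limit."""
    return abs(term(n + 1) / term(n))

print(ratio_probe(lambda n: 0.5 ** n))  # 0.5 < 1: geometric series converges
print(ratio_probe(lambda n: 1.0 / n))   # ratio tends to 1: test is inconclusive
                                        # (the harmonic series in fact diverges)
```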

#### Corneo

##### Diabloii.Net Member
*Whoosh*
This thing went right over my head.

#### zarikdon

##### Diabloii.Net Member
superman22 said:
Mad Merlin, lol, I know that.

but you're mistaken here. this is a math problem I have to do for IB (anyone in that?) for Calculus BC and on, so I dunno if anyone can really help. (I'm hoping someone can though)

I'm just trying to figure out why the iteration* of cos(x) converges.
it has to do with the slope of the equation

x(n+1) = a*cos(x(n)), with a being the slope

I figured out that for it to converge to a point, it has to be -1.3 < a < 1.3

but I'm trying to figure out the reasoning behind this.
anyone know?
This is a case treated by one of the simpler Fixed Point Theorems, which discuss under what conditions you can find a point x0 such that f(x0) = x0 for specific functions f. In particular, for your question, it also tells you when the sequence generated by x_n = f(x_{n-1}) converges to the fixed point.

Let's say you look at a function f(x) on the interval [a,b], where for every point x in the interval, f(x) also only takes on values in the interval [a,b]. One example would be f(x) = x^2 on [0,1]. Another example is sin(x) on [-2pi, 2pi]. As you might be able to tell, this is really useful, because while it doesn't guarantee that starting at a point x0 and iterating will converge to a fixed point, it does say that at least your function won't leave the interval [a,b].

Next, let's think about what conditions would guarantee a fixed point. You're in IB Calc right? Do you know about derivatives and the Mean Value Theorem? I'll state a fairly restrictive condition that'll always give you a converging fixed point, and try to point you in the correct direction for a proof.

Suppose that |f'(x)| < A < 1 everywhere on the interval [a,b]. If you start iterating with any point in [a,b], this will give you a fixed point, say z, i.e. f(z) = z. Say you start off with a point x0, and let's denote x_n = f(x_{n-1}) for positive integers n, e.g. x1 = f(x0).

The Mean Value Theorem states that for a differentiable function f(x), on the interval [a,b], there exists a point c inside the interval such that (f(b)-f(a))/(b-a) = f'(c). Now, let's try to apply this theorem to our situation. z and x0 are both contained in [a,b], hence, by the theorem, we can conclude that there exists a point b0 between z and x0 such that

f(z) - f(x0) = (z - x0) f'(b0)
|f(z) - f(x0)| = |z - x1| = |f'(b0)| |z - x0| < A |z - x0| < |z - x0|

As you can see, this states that the distance between x1 and z is smaller than the distance between x0 and z. The rest of the proof to show that this sequence of x_n's converges should be pretty easy to produce from here.

The conditions stated above are pretty restrictive when you think about it. The function has to be differentiable on an entire interval, and the derivative has to be bounded above by 1. Are there other cases for f where you can do iterations and still get it to converge? What happens when f isn't differentiable over the entire interval? Can you survive if it's not differentiable only at a single point? What if the interval you look at f on isn't bounded, but goes to infinity? Can you treat infinity as a "fixed point" too? Anyway, there are lots of things you can ask about this stuff... hope this helps!
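zarikdon's Mean Value Theorem argument predicts that the distance to the fixed point shrinks at every single step. Here is a quick numeric check of that claim (my own sketch, using cos as the example contraction; the constant below is the known root of cos(z) = z):

```python
import math

def distances_to_fixed_point(f, x0, z, steps=10):
    """Track |x_n - z| along the iteration x_n = f(x_{n-1})."""
    errs = []
    x = x0
    for _ in range(steps):
        x = f(x)
        errs.append(abs(x - z))
    return errs

z = 0.7390851332151607  # solves cos(z) = z
errs = distances_to_fixed_point(math.cos, 0.2, z)
print(errs)  # each entry is strictly smaller than the last,
             # exactly as the MVT bound |z - x1| < A|z - x0| predicts
```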

#### Steel_Avatar

##### Diabloii.Net Member
See? SEE?

THIS is why physicists hate mathematicians

#### zarikdon

##### Diabloii.Net Member
Steel_Avatar said:
See? SEE?

THIS is why physicists hate mathematicians
Ha! Are you implying I'm a mathematician? I'll have you know that I'm actually an unrepentant physicist. :grrr: