Fixed-point algorithms
Objective:
We want to compute numerically a minimizer of a convex function
$f : X \to \,]-\infty, +\infty]$. By Fermat's rule, one must find a point $x$
such that $0 \in \partial f(x)$. When $f$ is differentiable, this is equivalent to finding
a zero of the gradient, $0 = \nabla f(x)$. In this case a popular algorithm is the
gradient algorithm, which consists in generating a sequence $(x_k : k = 0, 1, \ldots)$ defined
by
$$x_{k+1} = x_k - \gamma \nabla f(x_k).$$
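To make the iteration concrete, here is a minimal Python sketch of the gradient algorithm on a toy least-squares objective. The helper name `gradient_descent`, the matrix `A`, the vector `b`, the step size `gamma`, and the iteration count are all illustrative assumptions, not taken from the text.

```python
import numpy as np

def gradient_descent(grad_f, x0, gamma=0.1, n_iter=100):
    """Iterate x <- x - gamma * grad_f(x) for n_iter steps."""
    x = x0
    for _ in range(n_iter):
        x = x - gamma * grad_f(x)
    return x

# Toy objective f(x) = 0.5 * ||A x - b||^2, whose gradient is A^T (A x - b).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
grad_f = lambda x: A.T @ (A @ x - b)

x_star = gradient_descent(grad_f, x0=np.zeros(2))
print(x_star)                 # approaches the minimizer
print(np.linalg.solve(A, b))  # exact minimizer, for comparison
```

For this quadratic the iteration converges as long as $\gamma$ is below $2/L$, where $L$ is the largest eigenvalue of $A^\top A$; the value $0.1$ above satisfies this for the chosen $A$.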
Writing $T(x) = x - \gamma \nabla f(x)$, the algorithm takes the form
$x_{k+1} = T(x_k)$, and any fixed point of $T$ is a minimizer of $f$. Many
optimization algorithms can be written in this form, where $T$ is an
operator chosen so that its fixed points are solutions of the problem. It
is therefore necessary to identify conditions on $T$ that guarantee the
convergence of the algorithm to a fixed point.
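The generic scheme $x_{k+1} = T(x_k)$ can be sketched once and reused for any operator $T$. Below is a minimal, self-contained Python version; the stopping tolerance, the iteration cap, and the one-dimensional example $f(x) = \tfrac{1}{2}(x-2)^2$ are illustrative assumptions.

```python
import numpy as np

def fixed_point(T, x0, tol=1e-8, max_iter=10_000):
    """Iterate x <- T(x) until ||T(x) - x|| <= tol or max_iter is reached."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if np.linalg.norm(x_next - x) <= tol:
            return x_next
        x = x_next
    return x

# Gradient-step operator T(x) = x - gamma * grad f(x) for f(x) = 0.5*(x-2)^2:
# its unique fixed point x* = 2 is the minimizer of f.
gamma = 0.5
T = lambda x: x - gamma * (x - 2.0)
print(fixed_point(T, np.array([0.0])))  # -> [2.]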