Numerical continuation

Numerical continuation is the computation, using software, of solution families of systems of equations as these solutions depend on parameters. For example, given a system of ordinary differential equations

\[\tag{1} \dot x = f(x, p), \quad x\in \mathbb{R}^n, \quad p\in \mathbb{R}, \]

where \(f(x, p)\) is sufficiently smooth, there could be an equilibrium solution \(x_0\) at \(p_0\ ,\) for which

\[\tag{2} f(x_0,p_0)=0. \]

As \(p\) varies, generically, the equilibrium solution \(x\) changes too. If, during the continuation, certain properties are monitored, bifurcations and changes of stability can be detected. Similarly, it is possible to continue solutions that vary with time, such as periodic, homoclinic and heteroclinic orbits, or in general, any solution that satisfies a boundary value problem.


Continuation methods

For simplicity, first assume that the solution to be continued is an equilibrium point. Continuation is possible if there exists a branch \(x(p)\) for which \(x(p_0)=x_0\) and \(f(x(p),p)=0\ .\) Alternatively, and more generally in the case of folds, there exists a branch \(u(s)\ ,\) parametrized by \(s\in\mathbb{R}\ ,\) where \(u=(x,p)\ ,\) \(u(0)=u_0\) and \(f(u(s))=0\ .\)

Definition: a solution \(u_0\) is regular if the associated Jacobian matrix \(f_u^0=f_u(u(0))=f_u(u_0)\) has maximal rank, that is, if \(\mathrm{Rank}(f_u^0)=n\ .\)

Theorem: let \(u_0\) be a regular solution. Then, in a neighborhood of \(u_0\ ,\) there exists a unique one-dimensional continuation solution branch \(u(s)\) with \(u(0)=u_0\ .\)

Proof: if \(\mathrm{Rank}(f_u^0)=\mathrm{Rank}(f_x^0|f_p^0)=n\) then either \(f_x^0\) is nonsingular and the Implicit Function Theorem implies that there exists a unique solution branch \(x(p)\) near \(p_0\ ,\) or else it is possible to interchange columns in \(f_u^0\) so that the branch can be locally parametrized by one of the components of \(x\ .\)

The latter possibility in the proof corresponds to a simple fold. If \(u_0\) is not a regular solution no unique branch is guaranteed and generically there exists a bifurcation point or branch point from which multiple solution branches emanate.

Core continuation procedures generally consist of three parts:

  • prediction
  • correction
  • step-size control

Given the point \(u_i\ ,\) the predictor estimates the next point in the continuation as \(u_{i+1}^{(0)}=u_i+hv_i\ ,\) where \(v_i\) is a normalized tangent vector and \(h\) is the step size. The tangent vector can be computed by solving an extended system involving the Jacobian matrix, or, if two previous points are available, by the secant estimate \(v_i=(u_{i}-u_{i-1})/\|u_{i}-u_{i-1}\|\ .\)
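The secant predictor is straightforward to implement. A minimal sketch (illustrative function names, points stored as NumPy arrays):

```python
import numpy as np

def secant_predictor(u_prev, u_curr, h):
    """Predict the next continuation point from the last two points.

    The approximate tangent is the normalized secant direction; the
    predictor steps a distance h along it from the current point.
    """
    v = (u_curr - u_prev) / np.linalg.norm(u_curr - u_prev)
    return u_curr + h * v, v

# Example: two points on the unit circle, a typical branch shape near a fold.
u_prev = np.array([1.0, 0.0])
u_curr = np.array([np.cos(0.1), np.sin(0.1)])
u_pred, v = secant_predictor(u_prev, u_curr, h=0.1)
```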

The corrector then improves \(u_{i+1}^{(0)}\) so that it satisfies \(f(u_{i+1})=0\) up to a user-defined accuracy. If the corrector fails, the step can be repeated with a smaller step size, until a user-specified minimum is reached. Between steps the step size may also be increased if the corrector converges quickly.

A corrector can apply Newton's method to solve \(f(u_{i+1})=0\ .\) However, since this equation is underdetermined, an additional constraint \(g(u_{i+1})=0\) must be added, where \(g:\mathbb{R}^{n+1}\to\mathbb{R}\ .\) Newton's method is then applied by solving the following linear system: \[ \begin{pmatrix} f_u(u_{i+1}^{(\nu)}) \\ g_u(u_{i+1}^{(\nu)}) \end{pmatrix} \Delta u_{i+1}^{(\nu)} = \begin{pmatrix} -f(u_{i+1}^{(\nu)}) \\ -g(u_{i+1}^{(\nu)}) \end{pmatrix},\quad u_{i+1}^{(\nu+1)}=u_{i+1}^{(\nu)}+\Delta u_{i+1}^{(\nu)}, \quad \nu=0,1,2,\ldots \] As soon as \(\Delta u_{i+1}^{(\nu)}\) is small enough, the algorithm has converged. If \(\Delta u_{i+1}^{(\nu)}\) is still too large after a user-specified number of iterations, the corrector has failed.

There are several ways to implement the constraint \(g(u)=0\ :\)

Natural continuation

Here one of the coordinates or the parameter, say the \(j\)-th component of \(u\ ,\) is fixed: \(g_i(u)=u_{j}-u^{(0)}_{i+1,j}\ .\) The index \(j\) can be chosen to correspond to the coordinate or parameter that changed the most in magnitude in the previous step, to avoid problems at folds. This algorithm is easy to implement, but it is less robust and less elegant than the more sophisticated methods below.
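A minimal sketch of one natural continuation step, for the illustrative problem \(x^2+p^2-1=0\) (all names here are hypothetical; `scipy.optimize.fsolve` performs the one-dimensional correction):

```python
import numpy as np
from scipy.optimize import fsolve

def f(u):
    # Illustrative scalar problem: the unit circle x**2 + p**2 = 1.
    x, p = u
    return x**2 + p**2 - 1.0

def natural_step(u_prev, u_curr, h):
    du = np.abs(u_curr - u_prev)
    j = int(np.argmax(du))            # component that changed the most
    sign = 1.0 if u_curr[j] >= u_prev[j] else -1.0
    target = u_curr[j] + sign * h     # fix u[j] at its predicted value
    k = 1 - j                         # the remaining unknown (n = 1 here)

    def g(z):
        u = u_curr.copy()
        u[j] = target
        u[k] = z[0]
        return f(u)

    z = fsolve(g, [u_curr[k]])[0]
    u_new = u_curr.copy()
    u_new[j], u_new[k] = target, z
    return u_new

u_prev = np.array([1.0, 0.0])
u_curr = np.array([np.cos(0.1), np.sin(0.1)])
u_new = natural_step(u_prev, u_curr, h=0.1)
```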

Pseudo-arclength continuation

Instead of choosing one particular coordinate, Newton iterations are now restricted to the hyperplane at \(u^{(0)}_{i+1}\) that is perpendicular to \(v_i\ :\) \[ g_i(u)=\langle u-u^{(0)}_{i+1},v_i\rangle=\langle u-u_i-hv_i,v_i\rangle=\langle u-u_i,v_i\rangle-h. \]
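The method is easiest to see in a small worked example. The following sketch (illustrative code, not from any particular package) continues the scalar problem \(f(x,p)=x^2+p-1=0\) through its fold at \(p=1\ ,\) where natural continuation in \(p\) would fail:

```python
import numpy as np

def F(u):                     # u = (x, p); residual of x**2 + p - 1 = 0
    return np.array([u[0]**2 + u[1] - 1.0])

def J(u):                     # 1-by-2 Jacobian [f_x, f_p]
    return np.array([[2.0 * u[0], 1.0]])

def tangent(u):
    """Unit null vector of the Jacobian: the branch tangent."""
    fx, fp = J(u)[0]
    v = np.array([-fp, fx])
    return v / np.linalg.norm(v)

def continue_branch(u0, h=0.05, steps=60, tol=1e-10):
    us = [u0.copy()]
    v = tangent(u0)
    for _ in range(steps):
        u = us[-1] + h * v            # predictor
        for _ in range(20):           # Newton on the bordered system
            A = np.vstack([J(u), v])
            rhs = -np.concatenate([F(u), [v @ (u - us[-1]) - h]])
            du = np.linalg.solve(A, rhs)
            u = u + du
            if np.linalg.norm(du) < tol:
                break
        v_new = tangent(u)
        if v_new @ v < 0:             # keep a consistent orientation
            v_new = -v_new
        v = v_new
        us.append(u)
    return np.array(us)

branch = continue_branch(np.array([1.0, 0.0]))
```

Note that the bordered matrix remains nonsingular at the fold, which is exactly why pseudo-arclength continuation passes folds where natural continuation in \(p\) breaks down.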

Moore-Penrose continuation

Here the function \(g_i\) is adapted in every Newton iteration, so that it depends on the iteration number \(\nu\ .\) After every iteration a new direction vector \(v_{i+1}^{(\nu)}\) is computed for \(u_{i+1}^{(\nu)}\ ,\) and then used in a similar fashion as in pseudo-arclength continuation: \[ g^{(\nu)}_i(u)=\langle u-u_{i+1}^{(\nu)},v_{i+1}^{(\nu)}\rangle, \] where initially \(v_{i+1}^{(0)}=v_i\ .\) Note that the first Newton iteration in this procedure is then exactly the same as the first step of pseudo-arclength continuation.



For equilibrium points of ordinary differential equations the procedure proceeds as explained above. For fixed points of maps the same procedure can be applied by continuing solutions of \(G(x,p)-x=0\ ,\) where \(G(x,p)\) denotes the map.
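As an illustrative example, the nontrivial fixed point \(x=1-1/a\) of the logistic map can be followed as the parameter \(a\) varies (a simple natural continuation in \(a\ ,\) with each corrected point seeding the next):

```python
import numpy as np
from scipy.optimize import fsolve

def fixed_point_residual(x, a):
    # Fixed points of the logistic map: a*x*(1 - x) - x = 0.
    return a * x * (1.0 - x) - x

alphas = np.linspace(1.5, 2.8, 14)
x = 1.0 - 1.0 / alphas[0]        # known fixed point at the first parameter
branch = []
for a in alphas:
    # Previous solution serves as the predictor for the next one.
    x = fsolve(fixed_point_residual, x, args=(a,))[0]
    branch.append(x)
```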

Boundary value problems

Solutions to boundary value problems must first be discretized on a mesh. This gives rise to a high-dimensional algebraic problem that can be solved using the algorithm above. There are several ways to do this; the most widely used, and a very effective one, is orthogonal collocation with piecewise polynomials. The resulting Jacobian matrix then has a special banded structure that can be exploited for numerical efficiency. The mesh can be adapted on demand after each continuation step.
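As a simple illustration of the discretization step (using central finite differences instead of collocation, but showing the same banded Jacobian structure), the classical Bratu problem \(u''+\lambda e^u=0\ ,\) \(u(0)=u(1)=0\ ,\) reduces to a finite-dimensional system solvable by Newton's method:

```python
import numpy as np

def bratu_residual(u, lam, h):
    # Central differences for u'' + lam*exp(u) = 0 with u(0) = u(1) = 0.
    ub = np.concatenate([[0.0], u, [0.0]])
    return (ub[:-2] - 2.0 * ub[1:-1] + ub[2:]) / h**2 + lam * np.exp(u)

def solve_bratu(lam, m=100):
    h = 1.0 / m
    u = np.zeros(m - 1)                  # interior mesh unknowns
    off = np.ones(m - 2) / h**2
    for _ in range(30):
        # Tridiagonal Jacobian: the band structure a sparse solver exploits.
        Jac = (np.diag(-2.0 / h**2 + lam * np.exp(u))
               + np.diag(off, 1) + np.diag(off, -1))
        du = np.linalg.solve(Jac, -bratu_residual(u, lam, h))
        u += du
        if np.linalg.norm(du) < 1e-12:
            break
    return u

u = solve_bratu(1.0)
```

Replacing the fixed \(\lambda\) by a continuation loop as above would trace the branch of the Bratu problem, which itself has a well-known fold.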

Periodic orbits

Periodic orbits \(x(t)\ ,\) where \(x(0)=x(T)\ ,\) can be treated as boundary value problems. First, to deal with the unknown period \(T\ ,\) it is advantageous to rescale time so that solutions have period 1: \[ \dot x = Tf(x(t),p), \] where now \(x(0)=x(1)\ .\) To avoid phase shifts of the periodic orbit, which would make the continuation problem non-unique, another condition must be added. Here an integral phase condition is effective, such as \[ \int_0^1 \langle x(t),\dot{x}_{\mathrm{old}}(t) \rangle dt = 0, \] where \(\dot{x}_{\mathrm{old}}(t)\) denotes the time derivative of the previous solution in the continuation procedure.
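A quick numerical check of the phase condition (illustrative, with a circular orbit and trapezoidal quadrature on a uniform mesh): for the shifted copy \(x(t)=x_{\mathrm{old}}(t+\varphi)\) the integral equals \(2\pi\sin(2\pi\varphi)\ ,\) so it vanishes exactly when the phase shift is zero (mod half a period).

```python
import numpy as np

def phase_condition(phi, m=256):
    # int_0^1 <x(t), x_old'(t)> dt for x(t) = x_old(t + phi), where
    # x_old(t) = (cos 2*pi*t, sin 2*pi*t); trapezoidal rule on m intervals.
    t = np.linspace(0.0, 1.0, m + 1)
    dx_old = 2 * np.pi * np.stack([-np.sin(2 * np.pi * t),
                                   np.cos(2 * np.pi * t)], axis=1)
    x = np.stack([np.cos(2 * np.pi * (t + phi)),
                  np.sin(2 * np.pi * (t + phi))], axis=1)
    integrand = np.sum(x * dx_old, axis=1)
    return (t[1] - t[0]) * (0.5 * integrand[0]
                            + integrand[1:-1].sum() + 0.5 * integrand[-1])
```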

Homoclinic and heteroclinic orbits

Homoclinic and heteroclinic orbits can be effectively continued using truncated orbits on a finite time interval that satisfy projection boundary conditions. These techniques were first developed for point-to-point connections but have recently been extended to also apply to point-to-cycle and cycle-to-cycle connections.


Bifurcations

Continuation methods are often used to detect and continue bifurcations in dynamical systems. Using special techniques it is also possible to switch branches at bifurcation points.

Detecting bifurcations

Bifurcations are generally detected by locating a zero of a test function. For instance, to detect a Hopf bifurcation one can monitor the real part of the complex eigenvalue pair closest to the imaginary axis. If the sign of a test function changes during a continuation step, its zero can be located by stepping backwards and forwards, for example with the secant method or Müller's method.
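A minimal sketch of this idea (illustrative names): for the Hopf normal form with equilibrium at the origin, the test function is the largest real part among the Jacobian eigenvalues, and a secant iteration locates its zero between two continuation steps.

```python
import numpy as np

def hopf_test(p):
    # Jacobian of the Hopf normal form at the equilibrium (0, 0);
    # eigenvalues are p +/- i, so the test function equals p.
    J = np.array([[p, -1.0], [1.0, p]])
    return np.max(np.linalg.eigvals(J).real)

def secant_root(phi, p0, p1, tol=1e-12, max_iter=50):
    """Locate a zero of the test function between two continuation steps."""
    f0, f1 = phi(p0), phi(p1)
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        p0, f0, p1, f1 = p1, f1, p2, phi(p2)
        if abs(f1) < tol:
            break
    return p1

p_hopf = secant_root(hopf_test, -0.5, 0.3)
```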

Continuing bifurcations

A bifurcation can generically be continued by adding an extra free continuation parameter. A basic way to perform such a continuation is to add the constraint that the relevant test function must remain zero during the continuation, thereby creating a minimally augmented system. However, since test functions can be complicated expressions, computing the Jacobian matrix can be expensive and may require numerical differentiation. Therefore the test function constraint is often modified, using a bordered system, to make it possible to obtain analytically expressed derivatives.

Furthermore, bifurcations can be continued using standard augmented systems, where the dimension of the problem is increased by more than one, that is, more than one constraint is added. In that case the constraints are often simple expressions that are easy to differentiate; however, the extra dimensions increase the cost of the \(O(n^3)\) linear solves in the Newton iterations.

Branch switching

When the Jacobian matrix becomes singular and the singularity is not a simple fold, the continuation point is not regular, and multiple branches emanate from it. In that case, to switch to a branch other than the one following the direction \(v_i\) (as defined above), the continuation must be perturbed. The algebraic bifurcation equation gives the necessary direction. Alternatively, one can perturb the system and then try to converge parallel to the original branch; typically the second branch is then found.

Switching from an equilibrium point to a periodic orbit at a Hopf bifurcation can also be considered as a form of branch switching. In that case, the eigenvalues of the equilibrium point can be used to construct an initial low-amplitude cycle that satisfies the differential equations close to the Hopf bifurcation point.
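This construction can be sketched numerically (illustrative, for a Jacobian with eigenvalues \(\pm i\omega\) and eigenvector \(q\)): the linearized solution \(x(t)=x_0+\varepsilon\,(\mathrm{Re}\,q\cos\omega t-\mathrm{Im}\,q\sin\omega t)\) gives a small-amplitude starting cycle.

```python
import numpy as np

A = np.array([[0.0, -2.0], [2.0, 0.0]])   # Jacobian at a Hopf point
w, V = np.linalg.eig(A)
k = np.argmax(w.imag)                      # pick the eigenvalue i*omega
omega, q = w[k].imag, V[:, k]

eps = 1e-2                                 # small cycle amplitude
t = np.linspace(0.0, 2 * np.pi / omega, 101)
# x(t) = eps * Re(q * exp(i*omega*t)), a solution of the linearization.
cycle = eps * (np.outer(np.cos(omega * t), q.real)
               - np.outer(np.sin(omega * t), q.imag))
```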



Software

AUTO is one of the oldest continuation packages, originally dating from 1980. It is written in Fortran and, in its most recent incarnation AUTO-07P, is accompanied by a Python interface that allows scripting.


XPPAUT is a general phase plane analysis tool and incorporates a subset of an older release of AUTO.


CONTENT is a multi-platform interactive environment to study dynamical systems. It is designed to perform simulation, continuation, and normal form analysis of dynamical systems appearing in research and engineering. The current version supports bifurcation analysis of ODEs, iterated maps, and evolution PDEs in the unit interval.


MATCONT is a tool for MATLAB that continues equilibrium solutions, periodic orbits, and connecting orbits with the most comprehensive coverage of bifurcations to date.


PyDSTool is a sophisticated and integrated simulation and analysis environment for dynamical systems models of physical systems (ODEs, DAEs, maps, and hybrid systems). It includes PyCont, a Python implementation of algorithms that continue fixed points combined with an interface to AUTO to continue periodic orbits.

