# Averaging


Curator: Jan A. Sanders

Figure 1: Solution to the perturbed logistic equation $$\dot{x}=\varepsilon\left( x(1-x) +\sin t \right)$$ (blue) and the averaged equation $$\dot{z}=\varepsilon z(1-z)$$ (red) with $$\varepsilon=0.05\ .$$

Averaging is the procedure of replacing a vector field by its average (over time or an angular variable) with the goal of obtaining asymptotic approximations to the solutions of the original system and of finding periodic solutions.

## Basic definitions, the periodic case

Consider an ordinary differential equation of the type $\tag{1} \dot{x}=\varepsilon f(x,t,\varepsilon), \quad x(0)=x_0,\quad x,\ x_0\in D\subset\mathbb{R}^n,$

where $$D$$ is an open set with compact (that is, closed and bounded) closure, on which $$f$$ is defined. The parameter $$\varepsilon$$ is assumed to be small. The equation often arises by expansion in the neighborhood of an equilibrium. The vector field $$f$$ is assumed to be differentiable with respect to all variables, but this can be relaxed.

Since $$f$$ depends explicitly on time $$t\ ,$$ equation (1) is a nonautonomous differential equation. This type of equation is usually very difficult to analyze, so one is interested in finding an autonomous system whose solutions approximate those of the original system, where the accuracy of the approximation is a function of $$\varepsilon\ .$$

Putting $$\varepsilon=0$$ is not going to do much good: it will give us an approximation that is valid on the interval $$0\leq t\leq L$$ for some constant $$L\ ,$$ that is, on the time scale $$1 \ .$$ On a longer time scale, for instance $$1/\varepsilon\ ,$$ this is a singular perturbation problem, that is to say, the solution of the unperturbed problem (with $$\varepsilon=0$$) is not an approximation of the solution of the full problem (1).

On this longer time scale $$1/\varepsilon$$ another natural idea works better: average the right hand side over the time $$t\ .$$ Assume, for simplicity, that $$f$$ is periodic in $$t$$ with period $$T\ .$$ Then define the average $\tag{2} \bar{f}(x)=\frac{1}{T}\int_{0}^T f(x,s,0)\, d s.$
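As a quick numerical sanity check on definition (2), the following sketch (a Python illustration, using the right-hand side $$x(1-x)+\sin t$$ of the equation in Figure 1) approximates the average by the trapezoidal rule and recovers $$\bar{f}(x)=x(1-x)\ :$$

```python
# Numerical check of definition (2) for the right-hand side of the
# perturbed logistic equation of Figure 1: f(x,t) = x(1-x) + sin(t),
# which is 2*pi-periodic in t.  Its average should be x(1-x).
import math

def f(x, t):
    return x * (1 - x) + math.sin(t)

def f_bar(x, T=2 * math.pi, n=10000):
    """Approximate (1/T) * integral_0^T f(x,s) ds by the trapezoidal rule."""
    h = T / n
    total = 0.5 * (f(x, 0) + f(x, T))
    for k in range(1, n):
        total += f(x, k * h)
    return total * h / T

print(f_bar(0.5))   # close to 0.5*(1-0.5) = 0.25: the sin(t) term averages out
```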

### Averaging: the periodic case

One now considers the first order averaged equation $\tag{3} \dot{z}=\varepsilon \bar{f}(z), \quad z(0)=x_0.$

Let $$z(t)$$ be the solution of (3) and let $$L\ ,$$ independent of $$\varepsilon\ ,$$ be such that $$z(t)\in D$$ for $$0 \leq \varepsilon t \leq L\ .$$ Then there exists an $$\varepsilon$$-independent constant $$C$$ such that $\tag{4} ||x(t)-z(t)||\leq C \varepsilon$

for $$0 \leq \varepsilon t \leq L\ ,$$


where $$x(t)$$ is the solution of (1). We say that $$x(t)=z(t)+O(\varepsilon)$$ on the time scale $$1/\varepsilon\ .$$

Observe that we only require $$z(t)\in D\ .$$ Since $$x(t)$$ is $$\varepsilon$$-close to $$z(t)$$ and $$D$$ is an open set, we can always choose $$\varepsilon$$ small enough for $$x(t)$$ to remain in $$D$$ as well.

### Remark on the method of proof

There are two methods of proof: a direct method and a formal transformation method.

• The direct method needs fewer differentiability assumptions and can be generalized to more complicated situations, for instance delay equations.
• The transformation method can be used to obtain higher order approximations.

## Example: perturbed logistic growth

Consider the equation $\dot{x}=\varepsilon\left( x(1-x) +\sin t \right),\quad x(0)=x_0,$ which models slow logistic growth (the $$x(1-x)$$ term) with a seasonal influence (the $$\sin t$$ term). The averaged equation is $\dot{z}=\varepsilon z(1-z),$ and this can be integrated by separation of variables.

To apply the periodic averaging theorem one needs to fix the domains $$D\subset K \subset \mathbb{R}\ .$$ A good choice would be $$D=(0,1), K=[0,1] \ .$$ In general, this choice will determine $$L$$ and $$C\ .$$ It then follows that in this model the seasonal influence on the solutions is $$O(\varepsilon)\ ,$$ as one can see in Figure 1.
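The comparison in Figure 1 can be reproduced numerically. The following sketch (a minimal Python illustration with a hand-rolled RK4 integrator) solves both the perturbed and the averaged equation on the time scale $$1/\varepsilon$$ and records the largest deviation, which stays of order $$\varepsilon\ :$$

```python
# A minimal numerical illustration of estimate (4) for the perturbed
# logistic equation: integrate (1) and (3) with RK4 on 0 <= t <= 1/eps
# and record the largest deviation |x(t) - z(t)|.
import math

def rk4(rhs, x0, t_end, dt):
    """Integrate dx/dt = rhs(x, t) from x(0) = x0; return the trajectory."""
    x, t, traj = x0, 0.0, [x0]
    for _ in range(int(t_end / dt)):
        k1 = rhs(x, t)
        k2 = rhs(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = rhs(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = rhs(x + dt * k3, t + dt)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += dt
        traj.append(x)
    return traj

eps, x0 = 0.05, 0.1
full = rk4(lambda x, t: eps * (x * (1 - x) + math.sin(t)), x0, 1 / eps, 0.01)
avgd = rk4(lambda z, t: eps * z * (1 - z), x0, 1 / eps, 0.01)
max_dev = max(abs(a - b) for a, b in zip(full, avgd))
print(max_dev)  # stays bounded by a few multiples of eps
```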

## Example: the van der Pol oscillator

The van der Pol oscillator equation is (see van der Pol, 1926) $\tag{5} \ddot{V} +V=\varepsilon (1-V^2)\dot{V},$

where $$V$$ is the voltage in an electrical circuit. This equation is not of the form (1), but one can put it in this form by the method of variation of constants. First we write (5) as a system: $\tag{6} \begin{matrix} \dot{V}&=&I&& \\ \dot{I}&=&-V &+&\varepsilon (1-V^2) I. \end{matrix}$

$$I$$ is the current in the circuit. The change of variables $\tag{7} \left(\begin{matrix} X\\ Y\end{matrix}\right)= \left(\begin{matrix} \cos t & -\sin t\\ \sin t &\cos t \end{matrix}\right) \left(\begin{matrix} V \\ I \end{matrix}\right)$

transforms the system to the form (1), with $\tag{8} x=\left(\begin{matrix} X \\ Y \end{matrix}\right)$

and $\tag{9} f(X,Y,t,\varepsilon)=\left(\begin{matrix} - (1-(X \cos t + Y \sin t)^2) (-X \sin t +Y \cos t) \sin t\\ (1-(X \cos t + Y \sin t)^2) (-X \sin t +Y \cos t) \cos t\end{matrix}\right).$

One can now compute the average equation using (2) with $$T=2\pi$$ to obtain $\tag{10} \bar{f}(X,Y)=1/8 \left( 4-({X}^{2}+{Y}^{2}) \right) \left(\begin{matrix}X \\ Y \end{matrix}\right).$
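The average (10) can be checked symbolically; the following sketch (assuming the sympy computer algebra library is available) applies definition (2) with $$T=2\pi$$ to (9):

```python
# Verify the average (10) symbolically: apply definition (2) with
# T = 2*pi to the components of (9).  Requires sympy.
import sympy as sp

X, Y, t = sp.symbols('X Y t', real=True)
V = X * sp.cos(t) + Y * sp.sin(t)    # inverse of the rotation (7)
I = -X * sp.sin(t) + Y * sp.cos(t)

f1 = -(1 - V**2) * I * sp.sin(t)
f2 = (1 - V**2) * I * sp.cos(t)

fbar1 = sp.integrate(f1, (t, 0, 2 * sp.pi)) / (2 * sp.pi)
fbar2 = sp.integrate(f2, (t, 0, 2 * sp.pi)) / (2 * sp.pi)

expected = sp.Rational(1, 8) * (4 - X**2 - Y**2)
print(sp.simplify(fbar1 - expected * X))  # 0
print(sp.simplify(fbar2 - expected * Y))  # 0
```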

One sees that $$\bar{f}(X,Y)=0$$ if $$V^2+I^2=X^2+Y^2=4\ .$$ This corresponds to the famous limit cycle of the van der Pol oscillator. The expression $$\tau=X^2+Y^2$$ is an invariant of the flow of the linear part of the equation. Dropping all information on the phase, one can reduce the averaged equation to $\tag{11} \dot{\tau}=\varepsilon\left(1-\frac{\tau}{4}\right)\tau.$

This equation has two equilibria: one unstable at $$\tau=0\ ,$$ one stable at $$\tau=4\ .$$ Equation (11) can be integrated explicitly. The fact that the averaged equation is simpler is an important aspect. Even if the system is high dimensional, and the averaged system is still difficult to analyze, there is a gain: the time scale of the original equation is $$1\ ,$$ and that of the averaged equation is $$1/\varepsilon\ ,$$ which makes numerical methods much more efficient, since it increases the step size by a factor $$1/\varepsilon\ .$$
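A numerical illustration of the limit cycle (a sketch, not part of the original exposition): integrating system (6) from a small initial condition, $$\tau=V^2+I^2$$ settles near the stable equilibrium $$\tau=4$$ of the averaged dynamics, up to $$O(\varepsilon)\ :$$

```python
# Sanity check on the limit cycle: integrate system (6) with RK4 from a
# small initial condition and watch tau = V^2 + I^2 settle near 4.
import math

def vdp_step(V, I, dt, eps):
    """One RK4 step for system (6)."""
    def rhs(V, I):
        return I, -V + eps * (1 - V**2) * I
    k1 = rhs(V, I)
    k2 = rhs(V + 0.5 * dt * k1[0], I + 0.5 * dt * k1[1])
    k3 = rhs(V + 0.5 * dt * k2[0], I + 0.5 * dt * k2[1])
    k4 = rhs(V + dt * k3[0], I + dt * k3[1])
    return (V + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
            I + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)

eps, dt = 0.05, 0.01
V, I = 0.2, 0.0
for _ in range(30000):      # integrate to t = 300, many multiples of 1/eps = 20
    V, I = vdp_step(V, I, dt, eps)
tau = V**2 + I**2
print(tau)  # close to 4, the stable equilibrium of (11)
```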

## The non-periodic case

When one drops the assumption of periodicity, one can still try to define the averaged equation by

$\bar{f}(x) = \lim_{T \rightarrow \infty} \frac{1}{T} \int_0^T f(x, s,0) \, ds.$ If this expression exists, and the limit is uniform in $$x$$ on compact sets $$E\subset D\ ,$$ one has to compute a suitable order function, defined by $\delta( \varepsilon ) =\sup_{{x} \in D } \sup_{t \in [ 0 , L/\varepsilon ) } \varepsilon \left| \int_0^t [ f (x,s ,0) - \bar{f}( {x} ) ] \,d{ s}\right|.$

### Averaging: the general case

Assume $$f$$ to be differentiable or at least Lipschitz continuous. Let $$z(t)$$ be the solution of $\dot{z}=\varepsilon\bar{f}(z),\quad z(0)=x_0.$ Then $$x(t)=z(t)+O(\sqrt{\delta(\varepsilon)})$$ on $$1/\varepsilon\ .$$
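As an illustration of the general case, take the deliberately simple quasi-periodic right-hand side $$f(x,t,\varepsilon)=\sin t+\sin \sqrt{2}\,t$$ (our choice, not from the text above): its average is $$0\ ,$$ so $$z(t)=x_0\ ,$$ and here one can check in closed form that the deviation is even $$O(\varepsilon)\ ,$$ better than the general $$O(\sqrt{\delta(\varepsilon)})$$ guarantee:

```python
# General (non-periodic) averaging for the illustrative right-hand side
# f(x,t) = sin(t) + sin(sqrt(2)*t), which is quasi-periodic, not periodic.
# Its average is 0, so the averaged solution is z(t) = x0; x(t) should
# stay close to x0 on the time scale 1/eps.
import math

eps, x0, dt = 0.01, 1.0, 0.001
x, t, max_dev = x0, 0.0, 0.0
while t < 1 / eps:
    # explicit midpoint step for dx/dt = eps*(sin t + sin(sqrt(2) t))
    tm = t + 0.5 * dt
    x += dt * eps * (math.sin(tm) + math.sin(math.sqrt(2) * tm))
    t += dt
    max_dev = max(max_dev, abs(x - x0))
print(max_dev)  # bounded by a small multiple of eps
```

(In this example $$x(t)-x_0=\varepsilon\left[(1-\cos t)+(1-\cos\sqrt{2}\,t)/\sqrt{2}\right]\ ,$$ so the deviation is bounded by $$(2+\sqrt{2})\,\varepsilon\ .$$)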

### Remarks

• The same remark as in the periodic case applies here.
• One assumes quantities to be independent of $$\varepsilon$$ unless the dependence is explicitly given. One can construct counterexamples to the general averaging theorem if $$x_0$$ is allowed to be $$\varepsilon$$-dependent.
• A good example is

$\ddot{V} + \varepsilon ( 2 - F ( t ) ) \dot{V} + V = 0 ,$ where one can vary $$F$$ to see what the theory predicts (see Sanders, Verhulst and Murdock (2007)).
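For a concrete illustration (our choice of $$F\ ,$$ not prescribed above), take $$F(t)=8\cos 2t\ .$$ The damping coefficient $$2-F(t)$$ then has positive mean $$2\ ,$$ yet a first order averaging computation in rotating coordinates, as in the van der Pol example, predicts exponential growth in one direction, so the trivial solution is unstable; with $$F=0$$ the oscillator simply decays:

```python
# Compare F = 0 (plain damping, decay) with F(t) = 8*cos(2t): the mean
# damping is the same positive number 2 in both cases, but the resonant
# part of F destabilizes the origin.
import math

def final_amplitude(F, eps=0.1, t_end=60.0, dt=0.001):
    """Integrate V'' + eps*(2 - F(t))*V' + V = 0 with RK4; return sqrt(V^2+I^2)."""
    V, I, t = 0.0, 1.0, 0.0
    def rhs(V, I, t):
        return I, -V - eps * (2 - F(t)) * I
    while t < t_end:
        k1 = rhs(V, I, t)
        k2 = rhs(V + 0.5 * dt * k1[0], I + 0.5 * dt * k1[1], t + 0.5 * dt)
        k3 = rhs(V + 0.5 * dt * k2[0], I + 0.5 * dt * k2[1], t + 0.5 * dt)
        k4 = rhs(V + dt * k3[0], I + dt * k3[1], t + dt)
        V += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        I += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        t += dt
    return math.hypot(V, I)

print(final_amplitude(lambda t: 0.0))               # decays
print(final_amplitude(lambda t: 8 * math.cos(2 * t)))  # grows, despite mean damping 2
```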

## Averaging over angles

Consider the system $\begin{matrix} \dot{x}&=&&\varepsilon f(x,\phi,\varepsilon),&x(0)&=&x_0,\quad & x, x_0\in\mathbb{R}^n\\\dot{\phi}&=&\Omega(x)&&\phi(0)&=&\phi_0,\quad&\phi, \phi_0\in \mathbb{T}^1\end{matrix}$ If $$\Omega(x)$$ is bounded away from zero (in an $$\varepsilon$$-independent fashion), then $$x(t)=z(t)+O(\varepsilon)$$ on $$1/\varepsilon\ ,$$ where $$z(t)$$ is the solution of $\begin{matrix} \dot{z}&=&&\varepsilon \bar{f}(z),&z(0)&=&x_0,&\quad z\in\mathbb{R}^n\\\dot{\psi}&=&\Omega(z)&&\psi(0)&=&\phi_0,&\quad\psi\in \mathbb{T}^1\end{matrix},$ with $\bar{f}(x)=\frac{1}{2\pi}\int_{0}^{2\pi} f(x,\theta,0) d\theta.$

Warning: the condition that $$\Omega(x)$$ is bounded away from zero is not apparent from the averaged equation; it only shows up in the proof. When $$\Omega(x)$$ is not bounded away from zero, one has to study the problem of Passage through resonance. This is done by splitting the problem into two parts: a boundary layer around $$\Omega(x)=0$$ and an outer region, where averaging is permitted, but where the magnitude of $$\Omega(x)$$ is dynamically taken into account. An extensive discussion can be found in Sanders, Verhulst and Murdock (2007).
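A sketch with a hypothetical system of this form (chosen purely for illustration): $$\dot{x}=\varepsilon\sin^2\phi\ ,$$ $$\dot{\phi}=1+x^2\ ,$$ so that $$\Omega(x)=1+x^2$$ is bounded away from zero and the averaged equation is $$\dot{z}=\varepsilon/2\ :$$

```python
# Averaging over an angle for the illustrative system
#   x' = eps * sin(phi)**2,   phi' = 1 + x**2.
# The average of sin(phi)**2 over the angle is 1/2, so z' = eps/2 and
# z(t) = x0 + eps*t/2; the full solution stays O(eps)-close on 1/eps.
import math

eps, dt = 0.01, 0.001
x, phi, t, max_dev = 0.0, 0.0, 0.0, 0.0
while t < 1 / eps:
    # Euler steps suffice for an O(eps) comparison at this step size
    x_new = x + dt * eps * math.sin(phi) ** 2
    phi += dt * (1 + x * x)
    x = x_new
    t += dt
    max_dev = max(max_dev, abs(x - eps * t / 2))  # compare with z(t), x0 = 0
print(max_dev)  # O(eps)
```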

Notice that situations with more than one angle can also be treated by this method. Often there are $$\mathbb{Z}$$-linear combinations of angles that are slowly varying (in resonance) and one combination that is fast. This fast varying combination is then called $$\phi$$ and the others are incorporated in the $$x$$ variable. However, when there is more than one fast angle, the application of averaging (for instance over a higher dimensional torus) and the corresponding proof of the invariance of tori become much more difficult. One has to make Diophantine assumptions to avoid small divisors. In the Hamiltonian case this is the subject of Kolmogorov-Arnold-Moser (KAM) theory.

## Higher order approximations

When the averaged equation is zero (or one simply needs higher accuracy), one may want to do a higher order calculation in order to obtain asymptotic information from the system. In the periodic case this is a quite straightforward procedure, although there are different ways of doing it. In the non-periodic case this is a more subtle process, since it depends on the existence of the higher order averages and of the corresponding order functions $$\delta(\varepsilon)\ .$$ The actual computation of higher order averaged equations is best left to a computer algebra system.

## Related Theories

The theory of periodic averaging is closely related to normal form theory. In normal form theory one has to solve the homological equation; averaging does this in a smart way, avoiding explicit linear algebra. It is possible to translate the procedure of solving the homological equation with respect to a semisimple linear part of a given vector field into averaging terms. Notice that when the eigenvalues are not purely imaginary conjugate pairs, one is strictly speaking not averaging over a real period, but the formula for solving the homological equation is the same.