Spectral methods

 Sigal Gottlieb and David Gottlieb (2009), Scholarpedia, 4(9):7504. doi:10.4249/scholarpedia.7504 revision #91796

Curator: Sigal Gottlieb

Spectral methods are powerful methods used for the solution of partial differential equations. Unlike finite difference methods, spectral methods are global methods, where the computation at any given point depends not only on information at neighboring points, but on information from the entire domain. For smooth solutions, spectral methods converge exponentially, which makes them more accurate than local methods. Global methods are preferable to local methods when the solution varies considerably in time or space, when very high spatial resolution is required, and also when long time integration is needed.

Finite Differences vs. Spectral Methods:

Finite difference methods are obtained by approximating a function u(x) by a local polynomial interpolant, and the derivatives of u(x) are then approximated by differentiating this local polynomial. Here local refers to the use of nearby grid points to approximate the function or its derivative at a given point. For slowly varying functions, the use of local polynomial interpolants based on a small number of interpolating grid points is very reasonable. Indeed, it seems to make little sense to include function values far away from the point of interest in approximating the derivative. However, using low-degree local polynomials to approximate solutions containing very significant spatial or temporal variation requires a very fine grid in order to accurately resolve the function. Clearly, the use of fine grids requires significant computational resources in simulations of interest to science and engineering. In the face of such limitations we seek alternative schemes that will allow coarser grids, and therefore fewer computational resources. Spectral methods are such methods; they use all available function values to construct the necessary approximations.
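The contrast between a local and a global derivative approximation can be seen in a small numerical experiment. The sketch below compares a second-order central difference (which uses only the two neighboring grid points) with an FFT-based spectral derivative (which uses all the grid values) on a periodic grid; the test function $$e^{\sin x}$$ is a hypothetical choice, not taken from the article.

```python
# Local (finite difference) vs. global (spectral) differentiation of a
# smooth periodic function on the same coarse grid.
import numpy as np

N = 32                                   # number of grid points
x = 2 * np.pi * np.arange(N) / N         # periodic grid on [0, 2*pi)
f = np.exp(np.sin(x))
df_exact = np.cos(x) * np.exp(np.sin(x))

# Local: 2nd-order central difference using only neighboring points.
h = 2 * np.pi / N
df_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * h)

# Global: differentiate in Fourier space using all N function values.
k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers
df_sp = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

err_fd = np.max(np.abs(df_fd - df_exact))
err_sp = np.max(np.abs(df_sp - df_exact))
# For this smooth function, the spectral error is smaller by many
# orders of magnitude on the same 32-point grid.
```

On a grid this coarse, the finite difference error is roughly of size $$h^2$$, while the spectral error is near machine precision, which is exactly the trade-off the paragraph above describes.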

Fourier spectral methods for time-dependent PDEs:

Global methods begin with the fact that if the function $$u(x,t)$$ is sufficiently smooth (typically we require $$u$$ and $$u'$$ to be piecewise continuous) then it has a series expansion $$u(x,t) = \sum_{j=-\infty}^\infty \hat{u}_j(t) \phi_j(x) ,$$ where $$\phi_j(x)$$ are some complete set of functions (preferably orthogonal under some inner product). The coefficients $$\hat{u}_j(t)$$ are calculated based on the global behavior of the function.

For example, it is well known that if $$f(x)$$ is a $$2 \pi$$ periodic function on $$(-\pi, \pi)$$ and is square integrable, then it has a Fourier series expansion $$f(x) = \sum_{j=-\infty}^\infty \hat{f}_j e^{i j x} ,$$ where the Fourier coefficients are calculated by $$\hat{f}_j = \frac{1}{2 \pi} \int_{-\pi}^\pi f(x) e^{-i j x} dx,$$ and $$i = \sqrt{-1}\ .$$ This formula tells us that at any point $$x$$ the function's value has a representation that depends on the entire domain. This is what we call a global representation of the function.
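The coefficient formula can be checked numerically. The sketch below approximates the integral for $$\hat{f}_j$$ with the trapezoidal rule on a periodic grid (which is spectrally accurate for periodic integrands); the test function $$\cos 2x$$, whose only nonzero coefficients are $$\hat{f}_{\pm 2} = 1/2\ ,$$ is an illustrative choice.

```python
# Numerical check of f_hat_j = (1/2*pi) * integral of f(x) e^{-ijx} dx.
import numpy as np

def fourier_coefficient(f, j, M=256):
    """Trapezoidal-rule approximation of the j-th Fourier coefficient
    of a 2*pi-periodic function f over (-pi, pi)."""
    x = -np.pi + 2 * np.pi * np.arange(M) / M   # periodic quadrature nodes
    # Each node carries weight 2*pi/M; dividing by 2*pi leaves 1/M.
    return np.sum(f(x) * np.exp(-1j * j * x)) / M

# For f(x) = cos(2x), the only nonzero coefficients are f_hat(+-2) = 1/2.
f = lambda x: np.cos(2 * x)
c2 = fourier_coefficient(f, 2)    # approximately 0.5
c1 = fourier_coefficient(f, 1)    # approximately 0.0
```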

Spectral methods use the idea of global representations to find high order approximations. If we wish to find an approximation $$u_N(x,t)$$ to a function $$u(x,t)$$ which is a solution to a given PDE we have two approaches:

Galerkin methods:

In the Galerkin approach we find the approximation so that the residual is orthogonal to the space from which $$u_N(x,t)$$ comes. This is accomplished by ensuring that the residual is orthogonal to each of the basis functions. Consider the simple PDE $$\frac{\partial u(x,t)}{\partial t} = \frac{\partial u(x,t)}{\partial x} \ .$$ We wish to find an approximate solution $$u_N(x,t) = \sum_{j=-N}^N a_j(t) e^{ijx}\ .$$

We define the residual $$R_N = \frac{\partial u_N(x,t)}{\partial t} - \frac{\partial u_N(x,t)}{\partial x}$$ and require that $$\int_{-\pi}^\pi R_N e^{-i j x} dx = \int_{-\pi}^\pi \left(\frac{\partial u_N(x,t)}{\partial t} - \frac{\partial u_N(x,t)}{\partial x} \right) e^{-ijx} dx =0$$ for all $$-N \leq j \leq N \ .$$

Since $$\frac{\partial u_N(x,t)}{\partial t} = \sum_{j=-N}^N a_j'(t) e^{ijx}$$ and $$\frac{\partial u_N(x,t)}{\partial x} = \sum_{j=-N}^N ij a_j(t) e^{ijx} ,$$ the coefficients are defined by $$\int_{-\pi}^\pi \left(\frac{\partial u_N(x,t)}{\partial t} - \frac{\partial u_N(x,t)}{\partial x} \right) e^{-ijx} dx = \int_{-\pi}^\pi \sum_{k=-N}^N \left( a_k'(t) - ik a_k(t) \right) e^{ikx} e^{-ijx} dx = 2 \pi \left(a_j'(t) - ij a_j(t) \right) = 0$$ for $$-N \leq j \leq N \ .$$ Here $$a_j'(t)$$ denotes the derivative of $$a_j(t)$$ with respect to time $$t\ .$$ This means that to obtain the Galerkin approximation $$u_N(x,t)$$ we need to solve the ODEs $$\frac{da_j(t)}{dt} = i j a_j(t)$$ for the coefficients $$a_j(t) \ .$$ In this case we can do this analytically, of course, but in general we do this using some ODE solver.
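For this linear problem the ODEs have the closed-form solution $$a_j(t) = a_j(0) e^{ijt}\ ,$$ so that $$u_N(x,t) = \sum_j a_j(0) e^{ij(x+t)} = u_N(x+t,0)\ :$$ the approximation simply advects the initial profile, as it should for $$u_t = u_x\ .$$ A minimal sketch of this check (the random initial coefficients are purely illustrative):

```python
# Exact evolution of the Fourier Galerkin coefficients for u_t = u_x:
# a_j(t) = a_j(0) * exp(i*j*t), so u_N(x,t) = u_N(x + t, 0).
import numpy as np

N = 8
j = np.arange(-N, N + 1)                 # retained Fourier modes
rng = np.random.default_rng(0)
a0 = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)

def u_N(a, x):
    """Evaluate sum_j a_j e^{ijx} at the points x."""
    return np.exp(1j * np.outer(x, j)) @ a

t = 0.7
a_t = a0 * np.exp(1j * j * t)            # exact solution of the ODEs

x = np.linspace(-np.pi, np.pi, 50)
# u_N(x, t) should match the shifted initial profile u_N(x + t, 0).
diff = np.max(np.abs(u_N(a_t, x) - u_N(a0, x + t)))
```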

This simple approach becomes complicated when the PDE is highly nonlinear. For example, if we want to solve the PDE $$\frac{\partial u(x,t)}{\partial t} = e^{u(x,t)} \frac{\partial u(x,t)}{\partial x} \ ,$$ we have the residual $$R_N = \frac{\partial u_N(x,t)}{\partial t} - e^{u_N(x,t)} \frac{\partial u_N(x,t)}{\partial x}$$ and we require it to be orthogonal to each of the Fourier basis functions: $$\int_{-\pi}^\pi \left( \frac{\partial u_N(x,t)}{\partial t} - e^{u_N(x,t)} \frac{\partial u_N(x,t)}{\partial x} \right) e^{-ijx} dx = 0 \ .$$ These integrals are not easy to evaluate, and it is not clear that they can be computed in closed form at all!

For such complicated problems, we'd like to avoid the need for inner products. This leads us to the second approach.

Collocation methods:

In this approach, we require the residual to vanish at a given set of points. For example, consider the hyperbolic PDE $$\frac{\partial u(x,t)}{\partial t} = \frac{\partial f( u(x,t))}{\partial x} \ ,$$ for which we wish to find an approximate solution $$u_N(x,t) = \sum_{j=-N}^N a_j(t) e^{ijx}\ .$$ We define the residual $$R_N = \frac{\partial u_N(x,t)}{\partial t} - \frac{\partial f(u_N(x,t))}{\partial x}$$ and require that $$\left. \left( \frac{\partial u_N(x,t)}{\partial t} - \frac{\partial f(u_N(x,t))}{\partial x}\right) \right|_{x=x_j} =0$$ at a specified set of points $$\{x_j \}_{j=-N}^N \ .$$ This describes a system of ODEs which can be evolved in time.

To see how this works, it's convenient to consider the fully discrete case. Assume that we have the initial condition $$u(x,0)=g(x)\ ,$$ so that we assign $$u_N(x_k,0)=g(x_k)\ .$$ We define a sequence of vectors which contain the approximations $$U^n_k =u_N(x_k, t^n)\ ,$$ where $$t^n = n \Delta t\ ,$$ $$n = 0, 1, \cdots,$$ and so $$U^0_k$$ is known. Our goal is to step forward from $$U^n_k$$ to $$U^{n+1}_k \ .$$

We start with the values of $$U^n_k$$ and use them to compute $$f(U^n_k)\ .$$ For each time-level $$n$$ we can compute the discrete Fourier coefficients $$\lambda_j$$ such that $$f(U^n_k) = \sum_{j=-N}^N \lambda_j e^{ijx_k}$$ for the set of points $$\{ x_k \}\ .$$ This is accomplished by solving the linear system $$\Phi \lambda = \mathbf{f}^n \ ,$$ where the matrix $$\Phi$$ contains the basis functions evaluated at the collocation points, the vector $$\mathbf{f}^n$$ contains the function values to be interpolated, and the vector $$\lambda$$ contains the coefficients. Once we know the Fourier coefficients of the function, we can easily differentiate it, because $$\left. \frac{\partial f}{\partial x} \right|_{x=x_k}= \sum_{j=-N}^N ij \lambda_j e^{ijx_k} \ .$$ In matrix form, we can write $$\mathbf{f}^n_x = \Phi' \lambda = \Phi' \Phi^{-1} \mathbf{f}^n = D \mathbf{f}^n ,$$ where we refer to the matrix $$D$$ as the differentiation matrix. Now to step forward in time we use an ODE solver; for simplicity, let's use Euler's method here (but never in real life!): $$U^{n+1}=U^{n} + \Delta t \, D f(U^n)\ .$$

To summarize this procedure, $$U^n \rightarrow \mathbf{f}^n = f(U^n) \rightarrow \mathbf{f}^n _x = D \mathbf{f}^n \rightarrow U^{n+1}\ .$$
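This procedure can be sketched directly in code. The example below builds $$\Phi$$ for the Fourier basis on $$2N+1$$ equispaced collocation points, forms $$D = \Phi' \Phi^{-1}$$ explicitly, and takes one Euler step for the linear case $$f(u)=u\ ,$$ i.e. $$u_t = u_x$$ (Euler only for illustration, as the text warns; the initial condition $$e^{\sin x}$$ is an assumption).

```python
# Collocation differentiation matrix D = Phi' Phi^{-1} for the Fourier
# basis e^{ijx}, j = -N..N, at 2N+1 equispaced points.
import numpy as np

N = 8
M = 2 * N + 1
x = 2 * np.pi * np.arange(M) / M         # collocation points x_k
j = np.arange(-N, N + 1)                 # Fourier modes

Phi = np.exp(1j * np.outer(x, j))        # Phi[k, m] = e^{i j_m x_k}
dPhi = Phi * (1j * j)                    # derivative of each basis function
D = np.real(dPhi @ np.linalg.inv(Phi))   # differentiation matrix (real)

u0 = np.exp(np.sin(x))                   # initial grid values g(x_k)
dt = 1e-3
u1 = u0 + dt * (D @ u0)                  # one Euler step for u_t = u_x

# Sanity check: sin(x) lies in the span of the basis, so D applied to
# its grid values reproduces cos(x) to machine precision.
err = np.max(np.abs(D @ np.sin(x) - np.cos(x)))
```

In practice $$D$$ is never formed by explicit inversion; for the Fourier basis the same derivative is computed with the FFT in $$O(M \log M)$$ operations, but the dense matrix makes the structure of the method transparent.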

For the collocation method, a nonlinearity does not complicate matters. For example, the PDE $$\frac{\partial u(x,t)}{\partial t} = e^{u(x,t)} \frac{\partial u(x,t)}{\partial x}$$ becomes the semi-discrete system $$\frac{\partial u_N(x_k,t)}{\partial t} = e^{u_N(x_k,t)} \left( D u_N(\cdot,t) \right)_k \ ,$$ which can be solved numerically by any ODE solver.
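A minimal sketch of this nonlinear collocation system, with two assumptions not fixed by the article: the spectral derivative is applied via the FFT rather than the dense matrix $$\Phi' \Phi^{-1}\ ,$$ and a hand-rolled classical RK4 step stands in for "any ODE solver"; the initial data are illustrative.

```python
# Collocation for u_t = e^u u_x: the nonlinearity is applied pointwise
# at the collocation points, so no inner products are needed.
import numpy as np

M = 64
x = 2 * np.pi * np.arange(M) / M
k = np.fft.fftfreq(M, d=1.0 / M)         # integer wavenumbers

def Du(u):
    """FFT-based spectral derivative of the periodic grid function u."""
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

def rhs(u):
    return np.exp(u) * Du(u)             # pointwise evaluation of e^u u_x

def rk4_step(u, dt):
    """One classical 4th-order Runge-Kutta step."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

u = 0.1 * np.sin(x)                      # small, smooth initial data
dt = 1e-3
for _ in range(100):                     # evolve to t = 0.1
    u = rk4_step(u, dt)
```

A quick consistency check: constant initial data is an exact steady state (its spatial derivative vanishes), and the scheme preserves it.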

Stability of spectral methods

The Fourier Galerkin method is stable for the linear problem $$\frac{\partial u}{\partial t} = L u$$ where the linear operator $$L$$ is semi-bounded in the $$L^2$$ norm. Stability is harder to show for the Fourier collocation method for hyperbolic problems: the presence of aliasing errors complicates the analysis. To counteract the aliasing terms, a viscosity term or a filter must be added to stabilize the method.

A good review of the research on the stability of Fourier collocation methods for linear problems is given by Tadmor in his paper Stability analysis of finite-difference, pseudospectral, and Fourier-Galerkin approximations for time dependent problems, SIAM Review 29 (1987) pp. 525-555. The stabilization of these methods by the spectral vanishing viscosity method was developed in a series of papers:

• Maday and Tadmor, Analysis of the spectral vanishing viscosity method for periodic conservation laws. SIAM Journal on Numerical Analysis 26 (1989) pp. 854-870.
• Tadmor, Convergence of spectral methods for nonlinear conservation laws. SIAM Journal on Numerical Analysis 26 (1989) pp. 30-44.
• Tadmor, Shock capturing by the spectral viscosity method. Computer Methods in Applied Mechanics and Engineering 80 (1990) pp. 197-208.
• Schochet, The rate of convergence of spectral viscosity methods for periodic scalar conservation laws. SIAM Journal on Numerical Analysis 27 (1990) pp. 1142-1159.

The alternative (equivalent) stabilization by filters can be found in Gottlieb and Hesthaven, Spectral methods for hyperbolic problems. Journal of Computational and Applied Mathematics 128 (2001).

The situation for nonlinear problems is much more difficult; some results can be found in Tadmor's paper, Superviscosity and spectral approximations of nonlinear conservation laws, in Numerical Methods for Fluid Dynamics IV (edited by M. J. Baines and K. W. Morton), Oxford University Press, London, 1993, pp. 69-82.

Spectral methods with other basis functions

Spectral methods can be constructed with basis functions other than the Fourier basis, most commonly orthogonal polynomials. If the problem considered is periodic, the Fourier basis functions are usually used. If the problem is defined on a bounded domain but is not periodic, then the usual choice of basis functions are the Chebyshev or Legendre polynomials, which are defined on the interval $$x \in [-1,1] \ .$$ If the domain is semi-bounded, $$x\in [0, \infty)\ ,$$ the Laguerre polynomials are used, and if the domain is unbounded, $$x \in (-\infty, \infty) \ ,$$ the Hermite polynomials are used; these last two families appear in particular in kinetic theory. The Galerkin and collocation methods are defined in the same way as described above for the Fourier spectral methods.
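For the non-periodic case, the same interpolate-then-differentiate procedure works with Chebyshev polynomials at the Chebyshev points. The sketch below uses NumPy's Chebyshev module; the test function $$e^x$$ on $$[-1,1]$$ is an illustrative assumption.

```python
# Chebyshev collocation derivative on [-1, 1]: interpolate at the
# Chebyshev-Gauss-Lobatto points, then differentiate the interpolant.
import numpy as np
from numpy.polynomial import chebyshev as C

N = 16
x = np.cos(np.pi * np.arange(N + 1) / N)    # Chebyshev-Gauss-Lobatto points

f = np.exp(x)                                # smooth, non-periodic test function
c = C.chebfit(x, f, N)                       # interpolating Chebyshev coefficients
dc = C.chebder(c)                            # coefficients of the derivative
df = C.chebval(x, dc)                        # derivative values at the points

# The derivative of exp is exp, so the error measures spectral accuracy.
err = np.max(np.abs(df - np.exp(x)))
```

Because the degree equals the number of points minus one, `chebfit` interpolates rather than fitting in a least-squares sense, so this is the polynomial analogue of the Fourier collocation construction above.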