Control of partial differential equations - Scholarpedia

# Control of partial differential equations


Curator: Jean-Michel Coron

A control system is a dynamical system on which one can act by using suitable controls. In this article, the dynamical system is modeled by a differential equation of the following type $\tag{1} \dot y=f(y,u).$

The variable $$y$$ is the state and belongs to some space $$\mathcal{Y}\ .$$ The variable $$u$$ is the control and belongs to some space $$\mathcal{U}\ .$$ In this article, the space $$\mathcal{Y}$$ is of infinite dimension and the differential equation (1) is a partial differential equation.

There are many problems that arise when studying a control system, but the most common one is the controllability problem, which is, roughly speaking, the following one. Given two states, is it possible to steer the control system from the first one to the second one? In the framework of (1), this means that, given the state $$a\in \mathcal{Y}$$ and the state $$b\in \mathcal{Y}\ ,$$ does there exist a map $$u:[0,T]\rightarrow \mathcal{U}$$ such that the solution of the Cauchy problem $$\dot y=f(y,u(t)), \, y(0)=a,$$ satisfies $$y(T)=b\ ?$$ If the answer is yes whatever the given states are, the control system is said to be controllable. If $$T>0$$ can be arbitrarily small, one speaks of small-time controllability. If the two given states and the control are restricted to be close to an equilibrium, one speaks of local controllability at this equilibrium. (An equilibrium of the control system is a point $$(y_e,u_e)\in \mathcal{Y}\times \mathcal{U}$$ such that $$f(y_e,u_e)=0\ .$$) If, moreover, the time $$T$$ is small, one speaks of small-time local controllability.
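Before turning to the infinite-dimensional setting, it may help to recall the finite-dimensional picture: for $$\dot y = Ay + Bu$$ with matrices $$A$$ and $$B\ ,$$ controllability is characterized by the Kalman rank condition. The following sketch (an illustration added here, not part of the theory above; it assumes NumPy, and the helper name `is_controllable` is ours) checks this condition numerically.

```python
import numpy as np

def is_controllable(A, B, tol=1e-10):
    """Kalman rank test: the pair (A, B) is controllable iff
    rank [B, AB, ..., A^{n-1} B] = n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    K = np.hstack(blocks)
    return np.linalg.matrix_rank(K, tol=tol) == n

# Controlled harmonic oscillator: y1' = y2, y2' = -y1 + u
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_controllable(A, B))  # True
```

In infinite dimension no such simple algebraic test exists, which is what motivates the duality methods described below.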

## A general framework for control systems modeled by linear PDE's

### The framework

For two normed linear spaces $$H_1$$ and $$H_2 \ ,$$ we denote by $$\mathcal{L}(H_1;H_2)$$ the set of continuous linear maps from $$H_1$$ into $$H_2$$ and denote by $$\|\cdot\|_{\mathcal{L}(H_1;H_2)}$$ the usual norm in this space.

Let $$H$$ and $$U$$ be two Hilbert spaces. Just to simplify the notations, these Hilbert spaces are assumed to be real Hilbert spaces (the case of complex Hilbert spaces follows directly from the case of real Hilbert spaces). The space $$H$$ is the state space and the space $$U$$ is the control space. We denote by $$(\cdot, \cdot)_H$$ the scalar product in $$H\ ,$$ by $$(\cdot,\cdot)_U$$ the scalar product in $$U\ ,$$ by $$\|\cdot\|_H$$ the norm in $$H$$ and by $$\|\cdot\|_U$$ the norm in $$U\ .$$

Let $$S(t),\, t\in[0,+\infty)\ ,$$ be a strongly continuous semigroup of continuous linear operators on $$H\ .$$ Let $$A$$ be the infinitesimal generator of the semigroup $$S(t), \, t\in[0,+\infty)\ .$$ As usual, we denote by $$S(t)^*$$ the adjoint of $$S(t)\ .$$ Then $$S(t)^*, \ t\in[0,+\infty),$$ is a strongly continuous semigroup of continuous linear operators and the infinitesimal generator of this semigroup is the adjoint $$A^*$$ of $$A\ .$$ The domain $$D(A^*)$$ is equipped with the usual graph norm $$\|\cdot\|_{D(A^*)}$$ of the unbounded operator $$A^*\ :$$

$$\|z\|_{D(A^*)}:=\|z\|_{H}+\|A^*z\|_{H}, \, \forall z\in D(A^*).$$

This norm is associated to the scalar product in $$D(A^*)$$ defined by

$$(z_1,z_2)_{D(A^*)}:=(z_1,z_2)_{H}+(A^*z_1,A^*z_2)_H, \, \forall (z_1,z_2) \in D(A^*)^2.$$

With this scalar product, $$D(A^*)$$ is a Hilbert space. Let $$D(A^*)'$$ be the dual of $$D(A^*)$$ with respect to the pivot space $$H\ .$$ In particular,

$$D(A^*)\subset H\subset D(A^*)'.$$

Let

$\tag{2} B\in \mathcal{L}(U;D(A^*)').$

In other words, $$B$$ is a linear map from $$U$$ into the set of linear functions from $$D(A^*)$$ into $$\mathbb{R}$$ such that, for some $$C>0\ ,$$

$$|(Bu)z|\leqslant C \|u\|_{U}\|z\|_{D(A^*)},\, \forall u \in U, \, \forall z\in D(A^*).$$

We also assume the following regularity property (also called admissibility condition):

$\tag{3} \forall T>0, \exists C_T>0 \text{ such that } \int_0^T\|B^*S(t)^* z\|_{U}^2dt \leqslant C_T \|z\|^2_H, \, \forall z\in D(A^*).$

In (3) and in the following, $$B^*\in \mathcal{L}(D(A^*);U)$$ is the adjoint of $$B\ .$$ It follows from (3) that the operators

$$(z\in D(A^*)) \mapsto ((t\mapsto B^*S(t)^* z)\in C^0([0,T];U)),$$

$$(z\in D(A^*)) \mapsto ((t\mapsto B^*S(T-t)^* z)\in C^0([0,T];U))$$

can be extended in a unique way as continuous linear maps from $$H$$ into $$L^2((0,T);U)\ .$$ We use the same symbols to denote these extensions.

Note that, using the fact that $$S(t)^*\ ,$$ $$t\in[0,+\infty)\ ,$$ is a strongly continuous semigroup of continuous linear operators on $$H\ ,$$ it is not hard to check that (3) is equivalent to

$$\exists T>0, \exists C_T>0 \text{ such that } \int_0^T\|B^*S(t)^* z\|_{U}^2dt \leqslant C_T \|z\|^2_H, \, \forall z\in D(A^*).$$

The control system we consider here is

$\tag{4} \dot y =Ay +Bu, \, t\in(0,T),$

where, at time $$t\ ,$$ the control is $$u(t)\in U$$ and the state is $$y(t)\in H\ .$$

Let $$T>0\ ,$$ $$y^0\in H$$ and $$u\in L^2((0,T);U)\ .$$ We are interested in the Cauchy problem

$\tag{5} \dot y =Ay +Bu(t) , \, t\in (0,T),$

$\tag{6} y(0)=y^0.$

We first give the definition of a solution to (5)-(6). Let us first motivate our definition. Let $$\tau \in [0,T]$$ and $$\varphi :[0,\tau]\rightarrow H\ .$$ We take the scalar product in $$H$$ of (5) with $$\varphi$$ and integrate on$$[0,\tau]\ .$$ At least formally, we get, using an integration by parts together with (6),

$$(y(\tau),\varphi(\tau))_{H}-(y^0,\varphi(0))_{H}-\int_0^\tau (y(t),\dot \varphi (t) +A^*\varphi (t))_{H}dt=\int_0^\tau (u(t),B^*\varphi (t))_U dt.$$

Taking $$\varphi(t)=S(\tau-t)^*z^\tau\ ,$$ for every given $$z^\tau \in H\ ,$$ we have formally $$\dot \varphi (t) +A^*\varphi (t)=0\ ,$$ which leads to the following definition.

### Definition (solution of the Cauchy problem)

Let $$T>0\ ,$$ $$y^0\in H$$ and $$u\in L^2((0,T);U)\ .$$ A solution of the Cauchy problem (5)-(6) is a function $$y\in C^0([0,T];H)$$ such that $\tag{7} (y(\tau),z^\tau)_H-(y^0,S(\tau)^*z^\tau)_H = \int_0^\tau (u(t),B^*S(\tau-t)^*z^\tau)_U dt, \, \forall \tau \in[0,T], \, \forall z^\tau \in H.$

Note that, by the regularity property (3), the right hand side of (7) is well defined.

With this definition one has the following theorem.

### Theorem 1 (well posedness of the Cauchy problem)

Let $$T>0\ .$$ Then, for every $$y^0\in H$$ and for every $$u\in L^2((0,T);U)\ ,$$ the Cauchy problem (5)-(6) has a unique solution $$y\ .$$ Moreover, there exists $$C=C(T)>0\ ,$$ independent of $$y^0\in H$$ and $$u\in L^2((0,T);U)\ ,$$ such that

$\tag{8} \|y(\tau)\|_{H}\leqslant C (\|y^0\|_{H}+\|u\|_{L^2((0,T);U)}), \, \forall \tau \in [0,T].$

For a proof of this theorem, see, for example, (Jean-Michel Coron, 2007, pages 53-54).
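In finite dimension, where $$B$$ is a bounded operator, the definition (7) reduces to the classical variation-of-constants (Duhamel) formula $$y(\tau)=S(\tau)y^0+\int_0^\tau S(\tau-t)Bu(t)\,dt\ .$$ The following sketch (an illustration added here; it assumes NumPy, and the matrices, the control, and the helper names are ours) checks this formula against a direct time integration for the rotation group generated by the harmonic oscillator.

```python
import numpy as np

# Finite-dimensional illustration of the variation-of-constants formula
# y(T) = S(T) y0 + \int_0^T S(T-t) B u(t) dt,
# with A = [[0, 1], [-1, 0]] generating the rotation group S(t) = exp(tA).
def S(t):
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
u = lambda t: np.array([np.sin(2 * t)])   # an arbitrary control

def duhamel(y0, T, n=2000):
    """Trapezoid-rule approximation of the Duhamel integral."""
    s = np.linspace(0.0, T, n + 1)
    vals = np.stack([S(T - si) @ (B @ u(si)) for si in s])  # (n+1, 2)
    integral = 0.5 * ((vals[:-1] + vals[1:]).T @ np.diff(s))
    return S(T) @ y0 + integral

def rk4(y0, T, n=2000):
    """Direct RK4 integration of y' = Ay + Bu(t), for cross-checking."""
    h, y, t = T / n, y0.copy(), 0.0
    f = lambda t, y: A @ y + (B @ u(t))
    for _ in range(n):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4); t += h
    return y

y0 = np.array([1.0, 0.0])
print(np.allclose(duhamel(y0, 3.0), rk4(y0, 3.0), atol=1e-4))  # True
```

For a genuinely unbounded $$B$$ (boundary control), only the weak formulation (7) makes sense, which is why the admissibility condition (3) is needed.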

## Controllability of linear control systems

### Different types of controllability

In this section we are interested in the controllability of the control system (4). In contrast to the case of linear finite-dimensional control systems, many types of controllability are possible and interesting. We define here three types of controllability.

### Definition (exact controllability)

Let $$T>0\ .$$ The control system (4) is exactly controllable in time $$T$$ if, for every $$y^0\in H$$ and for every $$y^1\in H\ ,$$ there exists $$u \in L^2((0,T);U)$$ such that the solution $$y$$ of the Cauchy problem

$\tag{9} \dot y =Ay + Bu(t), \, y(0)=y^0,$

satisfies $$y(T)=y^1\ .$$

### Definition (null controllability)

Let $$T>0\ .$$ The control system (4) is null controllable in time $$T$$ if, for every $$y^0\in H$$ and for every $$\tilde y^0\in H\ ,$$ there exists $$u \in L^2((0,T);U)$$ such that the solution of the Cauchy problem (9) satisfies $$y(T)=S(T)\tilde y^0\ .$$

Let us point out that, by linearity, we get an equivalent definition of "null controllable in time $$T$$" if, in the definition above, one assumes that $$\tilde y^0=0\ .$$ This explains the usual terminology "null controllability".

### Definition (approximate controllability)

Let $$T>0\ .$$ The control system (4) is approximately controllable in time $$T$$ if, for every $$y^0\in H\ ,$$ for every $$y^1\in H\ ,$$ and for every $$\varepsilon>0\ ,$$ there exists $$u \in L^2((0,T);U)$$ such that the solution $$y$$ of the Cauchy problem (9) satisfies $$\|y(T)-y^1\|_H\leqslant \varepsilon\ .$$

Clearly

(exact controllability) $$\Rightarrow$$ (null controllability and approximate controllability).

The converse is false in general (see, for example, the case of the controlled heat equation). However, the converse holds if $$S$$ is a strongly continuous group of linear operators. More precisely, one has the following theorem.

### Theorem 2 (null controllability/exact controllability)

Assume that $$S(t)\ ,$$ $$t\in \mathbb{R}\ ,$$ is a strongly continuous group of linear operators. Let $$T>0\ .$$ Assume that the control system (4) is null controllable in time $$T\ .$$ Then the control system (4) is exactly controllable in time $$T\ .$$

Proof of Theorem. Let $$y^0\in H$$ and $$y^1\in H\ .$$ From the null controllability assumption applied to the initial data $$y^0-S(-T)y^1\ ,$$ there exists $$u\in L^2((0,T);U)$$ such that the solution $$\tilde y$$ of the Cauchy problem

$$\dot {\tilde y}=A\tilde y +Bu(t), \,\tilde y (0)=y^0-S(-T)y^1,$$

satisfies

$\tag{10} \tilde y(T)=0.$

One easily sees that the solution $$y$$ of the Cauchy problem

$$\dot y=A y +Bu(t), \, y (0)=y^0,$$

is given by

$\tag{11} y(t)=\tilde y(t)+S(t-T)y^1, \, \forall t \in [0,T].$

In particular, from (10) and (11),

$$y(T)=y^1.$$

This concludes the proof of the theorem.

## Methods to study controllability

Roughly speaking, there are essentially two types of methods to study the controllability of linear PDEs, namely direct methods and duality methods.

### Direct methods

Among these methods, let us mention in particular

### Duality methods

Let us now introduce some "optimal control maps". Let us first deal with the case where the control system (4) is exactly controllable in time $$T\ .$$ Then, for every $$y^1\ ,$$ the set $$U^T(y^1)$$ of $$u\in L^2((0,T);U)$$ such that

$$(\dot y=Ay+Bu(t), \, y(0)=0)\Rightarrow (y(T)=y^1)$$

is nonempty. Clearly the set $$U^T(y^1)$$ is a closed affine subspace of $$L^2((0,T);U)\ .$$ Let us denote by $$\mathcal{U}^T(y^1)$$ the projection of $$0$$ on this closed affine subspace, i.e., the element of $$U^T(y^1)$$ of the smallest $$L^2((0,T);U)$$-norm. Then it is not hard to see that the map

$$\begin{array}{rrcl} \mathcal{U}^T:&H&\rightarrow&L^2((0,T);U) \\ &y^1&\mapsto&\mathcal{U}^T(y^1) \end{array}$$

is a linear map. Moreover, using the closed graph theorem (see, for example, Theorem 2.15 on page 50 in (Rudin, 1973)) one readily checks that this linear map is continuous.

Let us now deal with the case where the control system (4) is null controllable in time $$T\ .$$ Then, for every $$y^0\ ,$$ the set $$U_T(y^0)$$ of $$u\in L^2((0,T);U)$$ such that

$$(\dot y=Ay+Bu(t), \, y(0)=y^0)\Rightarrow (y(T)=0)$$

is nonempty. Clearly the set $$U_T(y^0)$$ is a closed affine subspace of $$L^2((0,T);U)\ .$$ Let us denote by $$\mathcal{U}_T(y^0)$$ the projection of $$0$$ on this closed affine subspace, i.e., the element of $$U_T(y^0)$$ of the smallest $$L^2((0,T);U)$$-norm. Then, again, it is not hard to see that the map

$$\begin{array}{rrcl} \mathcal{U}_T:&H&\rightarrow&L^2((0,T);U) \\ &y^0&\mapsto&\mathcal{U}_T(y^0) \end{array}$$

is a continuous linear map.
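The projection construction above has a concrete finite-dimensional analogue: the minimal $$L^2$$-norm control steering $$0$$ to $$y^1$$ is $$u(t)=B^*S(T-t)^*G^{-1}y^1\ ,$$ where $$G=\int_0^T S(T-t)BB^*S(T-t)^*dt$$ is the controllability Gramian. The following sketch (an illustration added here; it assumes NumPy, and the matrices are ours) computes this control and verifies that it reaches the target.

```python
import numpy as np

# Minimal L^2-norm control in finite dimension: u(t) = B^T S(T-t)^T G^{-1} y1,
# with G the controllability Gramian, is the projection of 0 on the affine
# space of controls steering 0 to y1 in time T.
def S(t):  # exp(tA) for A = [[0, 1], [-1, 0]]
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

B = np.array([[0.0], [1.0]])
T, n = 2.0, 4000
t, dt = np.linspace(0.0, T, n + 1), T / n
w = np.full(n + 1, dt); w[0] = w[-1] = dt / 2   # trapezoid weights

# Gramian G = \int_0^T S(T-t) B B^T S(T-t)^T dt
Ms = np.stack([S(T - ti) @ B for ti in t])       # shape (n+1, 2, 1)
G = np.einsum('t,tij,tkj->ik', w, Ms, Ms)

y1 = np.array([1.0, -0.5])
z = np.linalg.solve(G, y1)
u = np.array([(B.T @ S(T - ti).T @ z)[0] for ti in t])  # minimal-norm control

# Verify: y(T) = \int_0^T S(T-t) B u(t) dt equals the target y1
yT = np.einsum('t,tij,t->i', w, Ms, u)
print(np.allclose(yT, y1))  # True
```

This is exactly the mechanism behind the Hilbert Uniqueness Method mentioned later: the control is sought in the range of $$B^*S(T-\cdot)^*\ ,$$ which singles out the minimal-norm element.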

The main results of this section are the following ones.

### Theorem 3 (exact controllability)

Let $$T>0\ .$$ The control system (4) is exactly controllable in time $$T$$ if and only if there exists $$c>0$$ such that

$\tag{12} \int_0^T\|B^*S(t)^* z\|_U^2dt \geqslant c \|z\|_{H}^2, \, \forall z \in D(A^*).$

Moreover, if such a $$c>0$$ exists and if $$c^T$$ is the maximum of the set of $$c>0$$ such that (12) holds, one has

$\tag{13} \left\|\mathcal{U}^T\right\|_{\mathcal{L}(H;L^2((0,T);U))}=\frac{1}{\sqrt{c^T}}.$
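In finite dimension the best constant in (12) is simply the smallest eigenvalue of the Gramian $$\int_0^T S(t)BB^*S(t)^*dt\ ,$$ since $$\|B^*S(t)^*z\|_U^2=(S(t)BB^*S(t)^*z,z)_H\ .$$ The following sketch (an illustration added here; it assumes NumPy, and the example is ours) computes this constant, and hence by (13) the norm of the optimal control map, for a rotation group.

```python
import numpy as np

# The best constant c^T in the observability inequality (12) is the smallest
# eigenvalue of the Gramian G = \int_0^T S(t) B B^T S(t)^T dt; by (13) the
# norm of the optimal control map is 1 / sqrt(c^T).
def S(t):  # exp(tA) for the rotation generator A = [[0, 1], [-1, 0]]
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

B = np.array([[0.0], [1.0]])
T, n = 2 * np.pi, 4000
t, dt = np.linspace(0.0, T, n + 1), T / n
w = np.full(n + 1, dt); w[0] = w[-1] = dt / 2   # trapezoid weights

Ms = np.stack([S(ti) @ B for ti in t])           # S(t) B, shape (n+1, 2, 1)
G = np.einsum('t,tij,tkj->ik', w, Ms, Ms)        # observability Gramian

c_T = np.linalg.eigvalsh(G).min()
print(c_T > 0)            # True: (12) holds, hence exact controllability
# For this example G = pi * I, so c_T = pi and the norm in (13) is 1/sqrt(pi)
print(1 / np.sqrt(c_T))
```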

### Theorem 4 (approximate controllability)

The control system (4) is approximately controllable in time $$T$$ if and only if, for every $$z\in H\ ,$$

$\tag{14} (B^*S(\cdot)^*z =0 \text{ in }L^2((0,T);U))\Rightarrow (z=0).$

### Theorem 5 (null controllability)

Let $$T>0\ .$$ The control system (4) is null controllable in time $$T$$ if and only if there exists $$c>0$$ such that $\tag{15} \int_0^T\|B^*S(t)^* z\|_U^2dt \geqslant c \|S(T)^*z\|^2_{H}, \, \forall z \in D(A^*).$

Moreover, if such a $$c>0$$ exists and if $$c_T$$ is the maximum of the set of $$c>0$$ such that (15) holds, then $\tag{16} \left\|\mathcal{U}_T\right\|_{\mathcal{L}(H;L^2((0,T);U))}=\frac{1}{\sqrt{c_T}}.$

### Theorem 6 (null controllability/approximate controllability)

Assume that, for every $$T>0\ ,$$ the control system (4) is null controllable in time $$T\ .$$ Then, for every $$T>0\ ,$$ the control system (4) is approximately controllable in time $$T\ .$$

For a proof of these theorems, see, for example Section 2.3.2 in (Jean-Michel Coron, 2007). Inequalities (12) and (15) are usually called observability inequalities for the abstract linear control system $$\dot y =Ay +Bu\ .$$ The difficulty is to prove them! For this purpose, there are many methods available (but still many open problems). Among these methods, let us mention in particular

Remark. In contrast with Theorem 6, note that, for a given $$T>0\ ,$$ the null controllability in time $$T$$ does not imply the approximate controllability in time $$T\ .$$ For example, let $$L>0$$ and let us take $$H:=L^2(0,L)$$ and $$U:=\{0\}\ .$$ We consider the linear control system

$\tag{17} y_t+y_x=0,\, t\in (0,T),\, x\in (0,L),$

$\tag{18} y(t,0)=u(t)=0,\, t\in (0,T).$

Through examples in the next section, we shall see how to put this control system in the abstract framework $$\dot y =Ay +Bu\ .$$ Whatever $$y^0\in L^2(0,L)$$ is, the solution to the Cauchy problem

$$y_t+y_x=0,\, t\in (0,T),\, x\in (0,L),$$

$$y(t,0)=u(t)=0,\, t\in (0,T),$$

$$y(0,x)=y^0(x),\, x\in (0,L),$$

satisfies

$$y(T,\cdot)=0, \text{ if } T\geqslant L.$$

In particular, if $$T\geqslant L\ ,$$ the linear control system (17)-(18) is null controllable but is not approximately controllable.
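This extinction property follows from the explicit solution given by the method of characteristics: $$y(t,x)=y^0(x-t)$$ for $$x>t$$ and $$y(t,x)=0$$ otherwise. A brief numerical illustration (added here; it assumes NumPy, and the initial datum is an arbitrary choice of ours):

```python
import numpy as np

# Transport equation y_t + y_x = 0 on (0, L) with y(t, 0) = 0: the solution
# is y(t, x) = y0(x - t) if x > t, and 0 otherwise, so y(T, .) = 0 for T >= L
# whatever the initial datum y0 is.
L = 1.0
y0 = lambda x: np.sin(5 * np.pi * x) + x**2   # an arbitrary initial datum

def solution(t, x):
    # clip avoids evaluating y0 at negative arguments masked out by `where`
    return np.where(x - t > 0, y0(np.clip(x - t, 0, None)), 0.0)

x = np.linspace(0, L, 201)
print(np.max(np.abs(solution(0.5 * L, x))) > 0)   # True: not yet extinct
print(np.allclose(solution(L, x), 0))             # True: y(L, .) = 0
```

Since every solution vanishes identically at time $$T\geqslant L$$ with the trivial control space $$U=\{0\}\ ,$$ null controllability holds while targets $$y^1\neq 0$$ are clearly unreachable, even approximately.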

## Numerical methods

Again, there are two possibilities to study numerically the controllability of a linear control system: direct methods and duality methods. The most popular ones use duality methods, and in particular the Hilbert Uniqueness Method (HUM) introduced in (Jacques-Louis Lions, 1988). For the numerical approximation, one often uses discretization by finite difference methods. However, a new problem appears: the control for the discretized model does not necessarily lead to a good approximation of the control for the original continuous problem. In particular, the classical convergence requirements, namely stability and consistency of the numerical scheme used, do not suffice to guarantee good approximations of the controls that one wants to compute. Observability/controllability may be lost under numerical discretization as the mesh size tends to zero. To overcome this problem, several remedies have been used, in particular filtering, Tychonoff regularization, multigrid methods, and mixed finite element methods. For precise information and references, we refer to the survey papers (Enrique Zuazua, 2005; 2006).
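One standard explanation of this loss of observability, for the 1-D wave equation semi-discretized by centered finite differences, is that spurious high-frequency numerical waves travel with vanishing group velocity and thus never reach the observation region in finite time. A sketch of the computation (added here as an illustration; it assumes NumPy, and uses the standard discrete dispersion relation $$\omega(\xi)=(2/h)\sin(\xi h/2)$$):

```python
import numpy as np

# Semi-discrete 1-D wave equation u_tt = (u_{j+1} - 2u_j + u_{j-1}) / h^2:
# plane waves exp(i(xi*x - omega*t)) satisfy omega(xi) = (2/h) sin(xi*h/2),
# so the group velocity omega'(xi) = cos(xi*h/2) vanishes as xi*h -> pi.
# High-frequency grid waves thus travel arbitrarily slowly and escape
# boundary observation in finite time: uniform observability is lost as h -> 0.
def group_velocity(xi, h):
    return np.cos(xi * h / 2)

h = 0.01
xi = np.linspace(0, np.pi / h, 6)   # frequencies up to the grid cutoff pi/h
print(np.round(group_velocity(xi, h), 3))
# decays from 1 (the continuous wave speed) down to 0 at the cutoff

print(abs(group_velocity(np.pi / h, h)) < 1e-12)  # True: zero at the cutoff
```

Filtering out the frequencies near the cutoff $$\pi/h$$ (one of the remedies mentioned above) restores a uniform lower bound on the group velocity, and with it uniform observability.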

## Complements

On the controllability of linear PDEs, we have already given references to books and papers, but there are of course many other references which should also be mentioned. Restricting ourselves to books or surveys, we would like to add in particular (this is a very incomplete list):

• The books (Alain Bensoussan, Giuseppe Da Prato, Michel Delfour and Sanjoy Mitter, 1992; 1993) which deal, in particular, with differential control systems with delays and partial differential control systems with specific emphasis on controllability, stabilizability and the Riccati equations.
• The book (Ruth Curtain and Hans Zwart, 1995) which deals with general infinite-dimensional linear control systems theory. It includes the usual classical topics in linear control theory such as controllability, observability, stabilizability, and the linear-quadratic optimal problem. For a more advanced level on this general approach, one can look at the book (Olof Staffans, 2005).
• The book (René Dáger and Enrique Zuazua, 2006) on partial differential equations on planar graphs modeling networked flexible mechanical structures (with extensions to the heat, beam and Schrödinger equations on planar graphs).
• The books (Hector Fattorini, 1999; 2005) on optimal control for infinite-dimensional control problems (linear or nonlinear, including partial differential equations).
• The book (Andrei Fursikov, 2000) on the study of optimal control problems for infinite-dimensional control systems with many examples coming from physical systems governed by partial differential equations (including the Navier-Stokes equations).
• The books (Irena Lasiecka and Roberto Triggiani, 2000a; 2000b) which deal with finite horizon quadratic regulator problems and related differential Riccati equations for general parabolic and hyperbolic equations with numerous important specific examples.
• The survey (David Russell, 1978) which deals with hyperbolic and parabolic equations, quadratic optimal control for linear PDEs, moments and duality methods, controllability and stabilizability.
• The survey (Enrique Zuazua, 2006) on recent results on the controllability of linear partial differential equations. It includes the study of the controllability of wave equations and of heat equations, in particular with low regularity coefficients (which is important to treat semilinear equations), as well as fluid-structure interaction models.

There are many important problems which are not discussed in this paper. Perhaps the most fundamental ones are optimal control theory and the stabilization problem. For optimal control theory, see the references already mentioned above. The stabilization problem is the following one. We have an equilibrium which is unstable (or not stable enough) without the use of the control. Let us give a concrete example: a stick placed vertically on one of one's fingers. In principle, if the stick is exactly vertical with a speed exactly equal to 0, it should remain vertical. But, due to various small errors (the stick is not exactly vertical, for example), in practice the stick falls down. In order to avoid this, one moves the finger in a suitable way, depending on the position and speed of the stick; one uses a feedback law (or closed-loop control) which stabilizes the equilibrium. The stabilization problem is the existence and construction of such stabilizing feedback laws for a given control system. More precisely, let us consider the control system (1) and let us assume that $$f(0,0)=0\ .$$ The stabilization problem is to find a feedback law $$y\rightarrow u(y)$$ such that 0 is asymptotically stable for the closed-loop system $$\dot y = f(y,u(y))\ .$$

Again, as for the controllability, the first step to study the stabilization problem is to look at the linearized control system at the equilibrium. Roughly speaking, one expects that a linear feedback which stabilizes (exponentially) the linearized control system stabilizes (locally) the nonlinear control system. This is indeed the case in many important situations. For example, for the Navier-Stokes control system mentioned in the section Controllability of nonlinear control systems, see in particular (Viorel Barbu, 2003), (Viorel Barbu and Roberto Triggiani, 2004), (Viorel Barbu, Irena Lasiecka and Roberto Triggiani, 2006), (Andrei Fursikov, 2004), (Jean-Pierre Raymond, 2006; 2007) and (Rafael Vázquez, Emmanuel Trélat and Jean-Michel Coron, 2008).

When the linearized control system cannot be stabilized, it may still happen that the nonlinearity helps. This is for example the case for the Euler control system of incompressible fluids. See (Jean-Michel Coron, 1999) and (Olivier Glass, 2005).

The most popular approach to construct stabilizing feedbacks relies on Lyapunov functions. See Chapter 12 in (Jean-Michel Coron, 2007) for various methods to design Lyapunov functions.