Optimal control

From Scholarpedia
Victor M. Becerra (2008), Scholarpedia, 3(1):5354. doi:10.4249/scholarpedia.5354
Post-publication activity

Curator: Victor M. Becerra

Optimal control is the process of determining control and state trajectories for a dynamic system over a period of time to minimise a performance index.

Origins and applications

Optimal control is closely related in its origins to the theory of calculus of variations. Some important contributors to the early theory of optimal control and calculus of variations include Johann Bernoulli (1667-1748), Isaac Newton (1642-1727), Leonhard Euler (1707-1783), Joseph-Louis Lagrange (1736-1813), Adrien-Marie Legendre (1752-1833), Carl Jacobi (1804-1851), William Hamilton (1805-1865), Karl Weierstrass (1815-1897), Adolph Mayer (1839-1907), and Oskar Bolza (1857-1942). Some important milestones in the development of optimal control in the 20th century include the formulation of dynamic programming by Richard Bellman (1920-1984) in the 1950s, the development of the minimum principle by Lev Pontryagin (1908-1988) and co-workers also in the 1950s, and the formulation of the linear quadratic regulator and the Kalman filter by Rudolf Kalman (b. 1930) in the 1960s. See the review papers by Sussmann and Willems (1997) and Bryson (1996) for further historical details.

Optimal control and its ramifications have found applications in many different fields, including aerospace, process control, robotics, bioengineering, economics, finance, and management science, and it continues to be an active research area within control theory. Before the arrival of the digital computer in the 1950s, only fairly simple optimal control problems could be solved. The arrival of the digital computer has enabled the application of optimal control theory and methods to many complex problems.

Formulation of optimal control problems

There are various types of optimal control problems, depending on the performance index, the type of time domain (continuous, discrete), the presence of different types of constraints, and what variables are free to be chosen. The formulation of an optimal control problem requires the following:

  • a mathematical model of the system to be controlled,
  • a specification of the performance index,
  • a specification of all boundary conditions on states, and constraints to be satisfied by states and controls,
  • a statement of what variables are free.

Continuous time optimal control using the variational approach

General case with fixed final time and no terminal or path constraints

If there are no path constraints on the states or the control variables, and if the initial and final times are fixed, a fairly general continuous time optimal control problem can be defined as follows:

Problem 1: Find the control vector trajectory \(\mathbf{u}: [t_0,t_f]\subset \mathbb{R} \mapsto \mathbb{R}^{n_u} \) to minimize the performance index: \[\tag{1} J= \varphi(\mathbf{x}(t_f)) + \int_{t_0}^{t_f} L(\mathbf{x}(t),\mathbf{u}(t),t) dt \]

subject to: \[\tag{2} \dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t),\mathbf{u}(t),t), \,\, \mathbf{x}(t_0)=\mathbf{x}_0 \]

where \( [t_0, t_f] \) is the time interval of interest, \(\mathbf{x}: [t_0,t_f] \mapsto \mathbb{R}^{n_x}\) is the state vector, \(\varphi: \mathbb{R}^{n_x} \times \mathbb{R} \mapsto \mathbb{R} \) is a terminal cost function, \(L: \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \times \mathbb{R} \mapsto \mathbb{R} \) is an intermediate cost function, and \(\mathbf{f}: \mathbb{R}^{n_x}\times \mathbb{R}^{n_u}\times \mathbb{R} \mapsto \mathbb{R}^{n_x} \) is a vector field. Note that equation (2) represents the dynamics of the system and its initial state condition. Problem 1 as defined above is known as the Bolza problem. If \(L(\mathbf{x},\mathbf{u},t)=0\ ,\) the problem is known as the Mayer problem; if \(\varphi(\mathbf{x}(t_f))=0\ ,\) it is known as the Lagrange problem. Note that the performance index \(J=J(\mathbf{u})\) is a functional, that is, a rule of correspondence that assigns a real value to each function \(\mathbf{u}\) in a class. Calculus of variations (Gelfand and Fomin, 2003) is concerned with the optimisation of functionals, and it is the tool used in this section to derive necessary optimality conditions for the minimisation of \(J(\mathbf{u})\ .\)
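The three formulations are interchangeable. For instance, a Bolza or Lagrange problem can be converted into a Mayer problem by augmenting the state vector with an additional coordinate that accumulates the running cost:

\[ \dot{x}_{n_x+1}(t) = L(\mathbf{x}(t),\mathbf{u}(t),t), \quad x_{n_x+1}(t_0)=0, \qquad J = \varphi(\mathbf{x}(t_f)) + x_{n_x+1}(t_f), \]

so that the performance index becomes a purely terminal function of the augmented state.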

Adjoin the constraints to the performance index with a time-varying Lagrange multiplier vector function \(\lambda: [t_0,t_f] \mapsto \mathbb{R}^{n_x}\) (also known as the co-state), to define an augmented performance index \(\bar{J}\ :\)

\[\tag{3} \bar{J}=\varphi(\mathbf{x}(t_{f}))+\int_{t_{o}}^{t_{f}}\left\{L(\mathbf{x},\mathbf{u},t) +\lambda^{T}(t)\left[\mathbf{f}(\mathbf{x},\mathbf{u},t)-\dot{\mathbf{x}}\right]\right\}dt \]


Define the Hamiltonian function H as follows:

\[\tag{4} H(\mathbf{x}(t),\mathbf{u}(t),\mathbf{\lambda}(t),t)= L(\mathbf{x}(t),\mathbf{u}(t),t) + \mathbf{\lambda}(t)^T \mathbf{f}(\mathbf{x}(t),\mathbf{u}(t),t), \]

such that \(\bar{J}\) can be written as:

\[ \bar{J}=\varphi(\mathbf{x}(t_{f}))+\int_{t_{o}}^{t_{f}}\left\{H(\mathbf{x}(t),\mathbf{u}(t),\lambda(t),t)-\lambda^{T}(t)\dot{\mathbf{x}}\right\} dt \]

Assume that \( t_0 \) and \(t_f\) are fixed. Now consider an infinitesimal variation in \(\mathbf{u}(t)\ ,\) that is denoted as \(\delta \mathbf{u}(t)\ .\) Such a variation will produce variations in the state history \(\delta \mathbf{x}(t)\ ,\) and a variation in the performance index \(\delta \bar{J}\ :\) \[ \delta\bar{J}=\left[\left(\frac{\partial{\varphi}}{\partial{\mathbf{x}}}-\lambda^{T}\right)\delta \mathbf{x}\right]_{t=t_{f}} + \left[\lambda^{T}\delta \mathbf{x}\right]_{t=t_{0}}+\int_{t_{o}}^{t_{f}}\left\{\left(\frac{\partial{H}}{\partial{\mathbf{x}}}+\dot{\lambda}^{T}\right)\delta \mathbf{x} + \left(\frac {\partial{H}}{\partial{\mathbf{u}}}\right) \delta \mathbf{u}\right\}dt \]

Since the Lagrange multipliers are arbitrary, they can be selected to make the coefficients of \(\delta \mathbf{x}(t)\) and \(\delta \mathbf{x}(t_f)\) equal to zero, as follows:

\[\tag{5} \dot{\lambda}(t)^T = -\frac{\partial H}{\partial \mathbf{x}}, \]


\[\tag{6} \lambda(t_f)^T = \left. \frac{\partial \varphi}{\partial \mathbf{x}} \right|_{t=t_f}. \]


This choice of \(\lambda(t)\) results in the following expression for the variation \(\delta\bar{J}\ ,\) assuming that the initial state is fixed, so that \(\delta \mathbf{x}(t_0) =0\ :\) \[ \delta\bar{J}=\int_{t_{o}}^{t_{f}}\left\{ \left(\frac {\partial{H}}{\partial{\mathbf{u}}}\right) \delta \mathbf{u}\right\}dt \]

For a minimum, it is necessary that \(\delta \bar{J}=0\) for arbitrary \(\delta \mathbf{u}(t)\ .\) This gives the stationarity condition:

\[\tag{7} \left(\frac{\partial H}{\partial \mathbf{u}}\right)^T = \mathbf{0} \ .\]


Equations (2), (5), (6), and (7) are the first-order necessary conditions for a minimum of J. Equation (5) is known as the co-state (or adjoint) equation. Equation (6) and the initial state condition represent the boundary (or transversality) conditions. These necessary optimality conditions, which define a two-point boundary value problem, are very useful, as they allow analytical solutions to be found for special types of optimal control problems, and numerical algorithms to be defined for searching for solutions in general cases. Moreover, they are useful for checking the extremality of solutions found by computational methods. Sufficient conditions for general nonlinear problems have also been established. Distinctions are made between sufficient conditions for weak local, strong local, and strong global minima. Sufficient conditions are useful to check whether an extremal solution satisfying the necessary optimality conditions actually yields a minimum, and the type of minimum that is achieved. See (Gelfand and Fomin, 2003), (Wan, 1995) and (Leitmann, 1981) for further details.

The theory presented above does not deal with the existence of an optimal control that minimises the performance index J. See the book by Cesari (1983) which covers theoretical issues on the existence of optimal controls. Moreover, a key point in the mathematical theory of optimal control is the existence of the Lagrange multiplier function \(\lambda(t)\ .\) See the book by Luenberger (1997) for details on this issue.

The linear quadratic regulator

A special case of optimal control problem which is of particular importance arises when the objective function is a quadratic function of x and u, and the dynamic equations are linear. The resulting feedback law in this case is known as the linear quadratic regulator (LQR). The performance index is given by:

\[\tag{8} J=\frac{1}{2}\mathbf{x}(t_{f})^T \mathbf{S}_f \mathbf{x}(t_f) +\frac{1}{2}\int_{t_{o}}^{t_{f}} (\mathbf{x}(t)^T\mathbf{Q}\mathbf{x}(t) + \mathbf{u}(t)^T\mathbf{R}\mathbf{u}(t)) dt \]


where \(\mathbf{S}_f\) and \(\mathbf{Q}\) are positive semidefinite matrices, and \(\mathbf{R}\) is a positive definite matrix, while the system dynamics obey:

\[\tag{9} \dot{\mathbf{x}}(t) = \mathbf{A} \mathbf{x}(t) + \mathbf{B} \mathbf{u}(t), \,\, \mathbf{x}(t_0)=\mathbf{x}_0 \]

where A is the system matrix and B is the input matrix.

In this case, using the optimality conditions given above, it is possible to find that the optimal control law can be expressed as a linear state feedback:

\[\tag{10} \mathbf{u}(t) = -\mathbf{K}(t) \mathbf{x}(t) \]


where the state feedback gain is given by:

\[\tag{11} \mathbf{K}(t) = \mathbf{R}^{-1}\mathbf{B}^T \mathbf{S}(t), \]


and S(t) is the solution of the differential Riccati equation: \[\tag{12} -\dot{\mathbf{S}} = \mathbf{A}^T\mathbf{S} + \mathbf{S}\mathbf{A} - \mathbf{S}\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T\mathbf{S}+\mathbf{Q},\, \mathbf{S}(t_f)=\mathbf{S}_f \]


In the particular case where \( t_f \rightarrow \infty \ ,\) and provided the pair (A,B) is stabilizable, the Riccati differential equation converges to a limiting solution S, and it is possible to express the optimal control law as a state feedback as in (10) but with a constant gain K, which is given by \[ \mathbf{K}= \mathbf{R}^{-1} \mathbf{B}^T \mathbf{S} \] where S is the positive definite solution to the algebraic Riccati equation: \[\tag{13} \mathbf{A}^T\mathbf{S} + \mathbf{S}\mathbf{A} - \mathbf{S}\mathbf{B}\mathbf{R}^{-1}\mathbf{B}^T\mathbf{S}+\mathbf{Q} = \mathbf{0} \]


Moreover, if the pair (A,C) is observable, where \( \mathbf{C}^T \mathbf{C} = \mathbf{Q} \ ,\) then the closed loop system \[\tag{14} \dot{\mathbf{x}} = (\mathbf{A}-\mathbf{B}\mathbf{K})\mathbf{x} \]


is asymptotically stable. This is an important result, as the linear quadratic regulator provides a way of stabilizing any linear system that is stabilizable. It is worth pointing out that there are well-established methods and software for solving the algebraic Riccati equation (13). This facilitates the design of linear quadratic regulators. A useful extension of the linear quadratic regulator ideas involves modifying the performance index (8) to allow for a reference signal that the output of the system should track. Moreover, an extension of the LQR concept to systems with Gaussian additive noise, which is known as the linear quadratic Gaussian (LQG) controller, has been widely applied. The LQG controller involves coupling the linear quadratic regulator with the Kalman filter using the separation principle. See (Lewis and Syrmos, 1995) for further details.
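As an illustration of the infinite-horizon case, the following minimal Python sketch (assuming NumPy and SciPy are available; the plant and weighting matrices are illustrative choices rather than values from this article) solves the algebraic Riccati equation (13) for a double-integrator plant and forms the constant gain K:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Illustrative double-integrator plant (assumed values)
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])
    Q = np.eye(2)          # state weighting (positive semidefinite)
    R = np.array([[1.0]])  # control weighting (positive definite)

    # Solve A'S + S A - S B R^{-1} B'S + Q = 0, as in equation (13)
    S = solve_continuous_are(A, B, Q, R)

    # Constant state-feedback gain K = R^{-1} B' S
    K = np.linalg.solve(R, B.T @ S)

    # The closed-loop matrix A - B K should have eigenvalues in the open left half-plane
    print("K =", K)
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))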

Case with terminal constraints

If Problem 1 is also subject to a set of terminal constraints of the form:

\[\tag{15} \psi( \mathbf{x}(t_f), t_f) = \mathbf{0} \]


where \(\psi:\mathbb{R}^{n_x} \times \mathbb{R} \mapsto \mathbb{R}^{n_{\psi}} \) is a vector function, variational analysis (Lewis and Syrmos, 1995) shows that the necessary conditions for a minimum of J are (7), (5), (2), and the following terminal condition:

\[\tag{16} \left. \left(\frac{\partial \varphi}{\partial \mathbf{x}}^T + \frac{\partial{\psi}}{\partial \mathbf{x}}^T \nu - \lambda \right)^T\right|_{t_f} \delta \mathbf{x}(t_f)+ \left. \left( \frac{\partial \varphi}{\partial t} + \frac{\partial \psi}{\partial t}^T \nu + H \right) \right|_{t_f} \delta t_f = 0 \]


where \(\nu \in \mathbb{R}^{n_{\psi}}\) is the Lagrange multiplier associated with the terminal constraint, \(\delta t_f\) is the variation of the final time, and \(\delta \mathbf{x}(t_f) \) is the variation of the final state. Note that if the final time is fixed, then \(\delta t_f = 0\) and the second term vanishes. Also, if the terminal constraint is such that element j of x is fixed at the final time, then element j of \(\delta \mathbf{x}(t_f) \) vanishes.

Case with input constraints - the minimum principle

Realistic optimal control problems often have inequality constraints associated with the input variables, so that the input variable u is restricted to be within an admissible compact region \(\Omega\ ,\) such that: \[ \mathbf{u}(t) \in \Omega \ .\]

It was shown by Pontryagin and co-workers (Pontryagin, 1987) that in this case, the necessary conditions (2), (5) and (6) still hold, but the stationarity condition (7), has to be replaced by: \[ H(\mathbf{x}^*(t),\mathbf{u}^*(t),\lambda^*(t),t) \le H(\mathbf{x}^*(t),\mathbf{u}(t),\lambda^*(t),t) \] for all admissible u, where * denotes optimal variables. This condition is known as Pontryagin's minimum principle. According to this principle, the Hamiltonian must be minimised over all admissible u for optimal values of the state and costate variables.
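For example, in a minimum-time problem for the double integrator \(\dot x_1 = x_2,\ \dot x_2 = u\) with the input constraint \(|u(t)| \le 1\ ,\) the Hamiltonian \( H = 1 + \lambda_1 x_2 + \lambda_2 u \) is linear in u, so the stationarity condition (7) cannot be used to determine the control; minimising H over the admissible set instead gives the bang-bang law \[ u^*(t) = -\,\mathrm{sign}\left(\lambda_2(t)\right), \] with the switching instants determined by the zeros of \(\lambda_2(t)\ .\)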

Minimum time problems

One special class of optimal control problem involves finding the optimal input u(t) to reach a terminal constraint in minimum time. This kind of problem is defined as follows.

Problem 2: Find \( t_f \) and \( \mathbf{u}(t)\, (t\in[t_0,t_f]) \) to minimise: \[ J = \int_{t_0}^{t_f} 1 \, dt = t_f-t_0 \] subject to: \[ \dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t),\mathbf{u}(t),t), \quad \mathbf{x}(t_0)=\mathbf{x}_0 \]

\[ \psi(\mathbf{x}(t_f),t_f) = \mathbf{0} \quad \]

\[ \mathbf{u}(t) \in \Omega \]

See (Lewis and Syrmos, 1995) and (Naidu, 2003) for further details on minimum time problems.

Problems with path constraints

Sometimes it is necessary to restrict state and control trajectories such that a set of constraints is satisfied within the interval of interest \([t_0, t_f]\ :\)

\[ \mathbf{c}( \mathbf{x}(t), \mathbf{u}(t), t) \le \mathbf{0} \]

where \(\mathbf{c}: \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \times [t_0, t_f] \mapsto \mathbb{R}^{n_c} \ .\) Moreover, in some problems it may be required that the state satisfies equality constraints at some intermediate point in time \( t_1, \, t_0 \le t_1 \le t_f \ .\) These are known as interior point constraints and can be expressed as follows: \[ \mathbf{q}(\mathbf{x}(t_1), t_1) = \mathbf{0} \] where \(\mathbf{q}: \mathbb{R}^{n_x} \times \mathbb{R}\mapsto\mathbb{R}^{n_q}\ .\) See Bryson and Ho (1975) for a detailed treatment of optimal control problems with path constraints.

Singular arcs

In some optimal control problems, extremal arcs satisfying (7) occur where the matrix \( \partial^2 H/\partial \mathbf{u}^2 \) is singular. These are called singular arcs. Additional tests are required to verify whether a singular arc is optimal. A particular case of practical relevance occurs when the Hamiltonian function is linear in at least one of the control variables. In such cases, the control is not determined in terms of the state and co-state by the stationarity condition (7). Instead, the control is determined by the condition that the time derivatives of \(\partial H/\partial \mathbf{u}\) must be zero along the singular arc. In the case of a single control u, once the control is obtained by setting the time derivative of \(\partial H/\partial {u}\) to zero, additional necessary conditions known as the generalized Legendre-Clebsch conditions must be checked:

\[ (-1)^k \frac{\partial}{\partial u}\left[ \frac{d^{(2k)}}{dt^{2k}} \frac{\partial H}{\partial {u}} \right] \ge 0, \, \, k=0, 1, 2, \ldots \]

The presence of singular arcs may make it difficult for computational optimal control methods to find accurate solutions if the appropriate conditions are not enforced a priori. See (Bryson and Ho, 1975) and (Sethi and Thompson, 2000) for further details on the handling of singular arcs.

Computational optimal control

The solutions to many optimal control problems cannot be found by analytical means. Over the years, many numerical procedures have been developed to solve general optimal control problems. With direct methods, optimal control problems are discretised and converted into nonlinear programming problems of the form:

Problem 3: Find a decision vector \( \mathbf{y} \in \mathbb{R}^{n_y} \) to minimise \(F(\mathbf{y})\) subject to \(\mathbf{g}(\mathbf{y}) \le \mathbf{0}\ ,\) \(\mathbf{h}(\mathbf{y}) = \mathbf{0}\ ,\) and simple bounds \(\mathbf{y}_l \le \mathbf{y} \le \mathbf{y}_u,\) where \( F:\mathbb{R}^{n_y} \mapsto \mathbb{R} \) is a differentiable scalar function, and \(\mathbf{g}:\mathbb{R}^{n_y} \mapsto \mathbb{R}^{n_g} \) and \(\mathbf{h}:\mathbb{R}^{n_y} \mapsto \mathbb{R}^{n_h} \) are differentiable vector functions.

Some methods discretise the differential equations using, for example, Euler, trapezoidal, or Runge-Kutta schemes on a grid of N points covering the time interval \( [t_0, t_f] \ ,\) \( t_0=t_1<t_2<\ldots<t_N=t_f \ .\) In this way, the differential equations become equality constraints of the nonlinear programming problem, and the decision vector y contains the control and state variables at the grid points. Other direct methods use a decision vector y that contains only the control variables at the grid points, with the differential equations solved by integration and their gradients found by integrating the co-state equations or by finite differences. Still other direct methods approximate the controls and states using basis functions, such as splines or Lagrange polynomials. There are well-established numerical techniques for solving nonlinear programming problems with constraints, such as sequential quadratic programming (Bazaraa et al., 1993). Direct methods using nonlinear programming are known to deal efficiently with problems involving path constraints. See Betts (2001) for more details on computational optimal control using nonlinear programming. See also (Becerra, 2004) for a straightforward way of combining a dynamic simulation tool with nonlinear programming code to solve optimal control problems with constraints.
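As a concrete illustration of direct transcription (a minimal sketch assuming NumPy and SciPy; the grid size and solver settings are arbitrary choices), the double-integrator minimum-energy problem from the Examples section can be discretised with an Euler scheme and solved with sequential quadratic programming:

    import numpy as np
    from scipy.optimize import minimize

    # Direct transcription of: min (1/2) int_0^1 u^2 dt, subject to
    # x1' = x2, x2' = u, x(0) = (1, 1), x(1) = (0, 0)
    N = 51                       # number of grid points (illustrative choice)
    t = np.linspace(0.0, 1.0, N)
    h = t[1] - t[0]

    def unpack(y):
        # decision vector y = [x1 values, x2 values, u values] at the grid points
        return y[:N], y[N:2*N], y[2*N:]

    def objective(y):
        _, _, u = unpack(y)
        return 0.5 * h * np.sum(u**2)        # rectangle rule for the integral

    def defects(y):
        # Euler discretisation of the dynamics, imposed as equality constraints
        x1, x2, u = unpack(y)
        d1 = x1[1:] - x1[:-1] - h * x2[:-1]
        d2 = x2[1:] - x2[:-1] - h * u[:-1]
        return np.concatenate([d1, d2])

    def boundary(y):
        x1, x2, _ = unpack(y)
        return np.array([x1[0] - 1.0, x2[0] - 1.0, x1[-1], x2[-1]])

    res = minimize(objective, np.zeros(3 * N), method="SLSQP",
                   constraints=[{"type": "eq", "fun": defects},
                                {"type": "eq", "fun": boundary}],
                   options={"maxiter": 500})

    x1, x2, u = unpack(res.x)
    print("u(0) ~", u[0])   # the analytical solution of the example gives u(0) = -10

The same pattern carries over to problems with path constraints, which simply become inequality constraints of the nonlinear program.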

Indirect methods involve iterating on the necessary optimality conditions to seek their satisfaction. This usually involves attempting to solve nonlinear two-point boundary value problems, through the forward integration of the plant equations and the backward integration of the co-state equations. Examples of indirect methods include the gradient method and the multiple shooting method, both of which are described in detail in the book by Bryson (1999).
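The following sketch illustrates the indirect approach by single shooting on the same double-integrator example (again an illustrative sketch assuming SciPy, not a general-purpose solver): the state and co-state equations are integrated forward from a guessed initial co-state, and a root finder adjusts \(\lambda(t_0)\) until the terminal conditions are satisfied.

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import root

    # Indirect single shooting for: min (1/2) int u^2 dt, x1' = x2, x2' = u,
    # x(0) = (1, 1), x(1) = (0, 0).  Co-state equations: lam1' = 0, lam2' = -lam1.
    def odes(t, z):
        x1, x2, lam1, lam2 = z
        u = -lam2                    # from the stationarity condition (7)
        return [x2, u, 0.0, -lam1]   # state and co-state dynamics

    def terminal_defect(lam0):
        # integrate forward with guessed initial co-states, return terminal errors
        sol = solve_ivp(odes, (0.0, 1.0), [1.0, 1.0, lam0[0], lam0[1]],
                        rtol=1e-10, atol=1e-10)
        return [sol.y[0, -1], sol.y[1, -1]]   # want x1(1) = x2(1) = 0

    sol = root(terminal_defect, x0=[0.0, 0.0])
    print("lambda(t_0) =", sol.x)   # analytical values: lam1(0) = 18, lam2(0) = 10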

Dynamic programming

Dynamic programming is an alternative to the variational approach to optimal control. It was proposed by Bellman in the 1950s, and is an extension of Hamilton-Jacobi theory. Bellman's principle of optimality is stated as follows: "An optimal policy has the property that regardless of what the previous decisions have been, the remaining decisions must be optimal with regard to the state resulting from those previous decisions". This principle serves to limit the number of potentially optimal control strategies that must be investigated. It also shows that the optimal strategy must be determined by working backward from the final time.

Consider Problem 1 with the addition of a terminal state constraint (15). Using Bellman's principle of optimality, it is possible to derive the Hamilton-Jacobi-Bellman (HJB) equation: \[\tag{17} -\frac{\partial J^*}{\partial t} = \min_{\mathbf{u}} \left( L + \frac{\partial J^*}{\partial \mathbf{x}}\mathbf{f} \right) \]


where J* is the optimal performance index. In some cases, the HJB equation can be used to find analytical solutions to optimal control problems.
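For the linear quadratic problem (8)-(9), for instance, the HJB equation can be solved with the quadratic ansatz \(J^*(\mathbf{x},t) = \tfrac{1}{2}\mathbf{x}^T\mathbf{S}(t)\mathbf{x}\ :\) \[ -\tfrac{1}{2}\mathbf{x}^T\dot{\mathbf{S}}\mathbf{x} = \min_{\mathbf{u}}\left\{ \tfrac{1}{2}\mathbf{x}^T\mathbf{Q}\mathbf{x} + \tfrac{1}{2}\mathbf{u}^T\mathbf{R}\mathbf{u} + \mathbf{x}^T\mathbf{S}(\mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{u}) \right\} \] The minimising control is \(\mathbf{u} = -\mathbf{R}^{-1}\mathbf{B}^T\mathbf{S}\mathbf{x}\ ,\) which is the feedback law (10)-(11), and substituting it back and matching quadratic forms recovers the Riccati equation (12).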

Dynamic programming includes formulations for discrete time systems as well as combinatorial systems, which are discrete systems with quantized states and controls. Discrete dynamic programming, however, suffers from the 'curse of dimensionality', which causes the computations and memory requirements to grow dramatically with the problem size. See the books (Lewis and Syrmos, 1995), (Kirk, 1970), and (Bryson and Ho, 1975) for further details on dynamic programming.

Discrete-time optimal control

Most of the problems defined above have discrete-time counterparts. These formulations are useful when the dynamics are discrete (for example, a multistage system), or when dealing with computer controlled systems. In discrete time, the dynamics can be expressed as a difference equation:

\[ \mathbf{x}(k+1) = \mathbf{f}( \mathbf{x}(k), \mathbf{u}(k), k), \, \mathbf{x}(N_0)=\mathbf{x}_0 \]

where k is an integer index, x(k) is the state vector, u(k) is the control vector, and f is a vector function. The objective is to find a control sequence \( \{\mathbf{u}(k)\}, \,k=N_0,\ldots,N_f-1, \) to minimise a performance index of the form: \[ J = \varphi(\mathbf{x}(N_f)) + \sum\limits_{k=N_0}^{N_f-1} L(\mathbf{x}(k),\mathbf{u}(k),k) \]

See, for example, (Lewis and Syrmos, 1995), (Bryson and Ho, 1975), and (Bryson, 1999) for further details.
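In the discrete-time linear quadratic case, the necessary conditions lead to a backward Riccati recursion analogous to (12). The sketch below (illustrative matrices and horizon, assuming NumPy) computes the time-varying gains by backward induction and simulates the resulting closed loop:

    import numpy as np

    # Discrete-time LQR by backward Riccati recursion (illustrative data)
    A = np.array([[1.0, 0.1],
                  [0.0, 1.0]])    # e.g. a crudely discretised double integrator
    B = np.array([[0.005],
                  [0.1]])
    Q = np.eye(2)
    R = np.array([[1.0]])
    Sf = np.eye(2)
    N = 50                        # horizon length

    S = Sf.copy()
    gains = []
    for k in reversed(range(N)):
        # K_k = (R + B'SB)^{-1} B'SA,   S <- Q + A'S(A - B K_k)
        K = np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        S = Q + A.T @ S @ (A - B @ K)
        gains.append(K)
    gains.reverse()

    # simulate x(k+1) = A x(k) + B u(k) with u(k) = -K_k x(k)
    x = np.array([1.0, 1.0])
    for k in range(N):
        x = A @ x + B @ (-gains[k] @ x)
    print("final state:", x)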

Examples

Minimum energy control of a double integrator with terminal constraint

Consider the following optimal control problem.

Figure 1: Optimal control and state histories for the double integrator example

\[ \min\limits_{u(t)} \, J= \frac{1}{2}\int_0^{1} u(t)^2 dt \] subject to \[\tag{18} \dot x_1(t) = x_2(t), \, \dot x_2(t) = u(t), \]

\[ x_1(0)=1, \,\, x_2(0)=1,\,\,x_1(1)=0, \,\, x_2(1) = 0 \]

The Hamiltonian function (4) is given by: \[ H = \frac{1}{2} u^2 + \lambda_1 x_2 + \lambda_2 u \] The stationarity condition (7) yields: \[\tag{19} u+ \lambda_2 = 0 \implies u = -\lambda_2 \]

The co-state equation (5) gives: \[ \dot{\lambda}_1 = 0, \,\, \dot{\lambda}_2 = - \lambda_1, \] so that \[\tag{20} \lambda_1(t) = a, \,\, \lambda_2(t) = -a t + b, \]

where a and b are constants to be found. Replacing (20) in (19) gives \[\tag{21} u(t) = a t - b. \]

In this case, the terminal constraint function is \( \psi(\mathbf{x}(1)) = [x_1(1), x_2(1)]^T = [0,\, 0]^T\ ,\) so that the final value of the state vector is fixed, which implies that \(\delta \mathbf{x}(t_f) = 0\ .\) Noting that \(\delta t_f=0\) since the final time is fixed, then the terminal condition (16) is satisfied. Replacing (21) into the state equation (18), and integrating both states gives: \[\tag{22} x_1(t) = \frac{1}{6} a t^3 - \frac{1}{2} b t^2 + c t + d, \,\, x_2(t) = \frac{1}{2} a t^2 - b t + c. \]

Evaluating (22) at t=0 and using the initial conditions gives the values c=1 and d=1. Evaluating (22) at the terminal time t=1 gives two simultaneous equations: \[ \frac{1}{6} a - \frac{1}{2}b + 2 = 0, \,\, \frac{1}{2}a - b + 1 = 0. \] This yields a=18, and b=10. Therefore, the optimal control is given by: \[ u = 18 t - 10. \] The resulting optimal control and state histories are shown in Fig 1.
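As a quick numerical check (a convenience sketch assuming SciPy, not part of the original example), the state equations can be integrated under the control u(t) = 18t - 10 to confirm that the terminal constraints are met:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Verify the analytical solution u(t) = 18 t - 10 of the double integrator example
    def dynamics(t, x):
        u = 18.0 * t - 10.0
        return [x[1], u]             # x1' = x2, x2' = u

    sol = solve_ivp(dynamics, (0.0, 1.0), [1.0, 1.0], rtol=1e-10, atol=1e-10)
    print("x(1) =", sol.y[:, -1])    # should be close to (0, 0)

    # cost (1/2) int_0^1 u^2 dt evaluated with the trapezoidal rule
    t = np.linspace(0.0, 1.0, 1001)
    u = 18.0 * t - 10.0
    print("J =", 0.5 * np.sum(0.5 * (u[1:]**2 + u[:-1]**2) * np.diff(t)))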

Computational optimal control: B-727 maximum altitude climbing turn manoeuvre

This example is solved using a gradient method in (Bryson, 1999). Here, a path constraint is considered and the solution is sought by using a direct method and nonlinear programming. It is desired to find the optimal control histories to maximise the altitude of a B-727 aircraft in a given time \(t_f\ ,\) with terminal constraints that the aircraft path be turned 60 degrees and the velocity be slightly above the stall velocity. Such a flight path may be of interest to reduce engine noise over populated areas located ahead of an airport runway. This manoeuvre can be formulated as an optimal control problem, as follows.

\[ \min\limits_{u(t), \alpha(t)} \, J= - h(t_f ) \]

subject to: \[ \dot V = T(V)\cos (\alpha + \varepsilon ) - C_D (\alpha )V^2 - \sin \gamma , \] \[ \dot \gamma = (1/V)[T(V)\sin (\alpha + \varepsilon ) + C_L (\alpha )V^2 ]\cos \sigma - (1/V) \cos \gamma , \] \[ \;\dot \psi = (1/(V\cos \gamma)) [T(V)\sin (\alpha + \varepsilon ) + C_L (\alpha )V^2 ]\sin \sigma , \] \[ \dot h = V\sin \gamma , \] \[ \dot x = V\cos \gamma \cos \psi , \] \[ \dot y = V\cos \gamma \sin \psi . \]

with initial conditions given by:

Figure 2: 3D plot of optimal B-727 aircraft trajectory

\[ V(0) = 1.0 \] \[ \gamma (0) = \psi (0) = h(0) = x(0) = y(0) = 0 \]

the terminal constraints:

\[ V(t_f ) = 0.60, \,\, \psi (t_f ) = \frac{\pi}{3} \]

and the path constraint:

\[ h(t) \ge 0, \,\, t\in[0,t_f] \]

where h is the altitude, x is the horizontal distance in the initial direction, y is the horizontal distance perpendicular to the initial direction, V is the aircraft velocity, γ is the climb angle, ψ is the heading angle, and \(t_f=2.4 \) units. The distance and time units in the above equations are normalised. To obtain meters and seconds, the corresponding variables need to be multiplied by 10.0542, and 992.0288, respectively. There are two controls: the angle of attack α and the bank angle σ. The functions T(V), CD(α) and CL(α) are given by: \[ T(V) = 0.2476 -0.04312V + 0.008392V^2 \] \[ C_D(\alpha) = 0.07351 -0.08617\alpha + 1.996 \alpha^2 \] \[ C_L(\alpha) = \left \{ \begin{matrix} 0.1667+6.231\alpha, & \mbox{if } \alpha\le 12\pi/180 \\ 0.1667+6.231\alpha + 21.65(\alpha-12\pi/180)^2 & \mbox{if } \alpha>12 \pi/180 \end{matrix} \right. \]

The solution shown in Fig 2 was obtained by using sequential quadratic programming, where the decision vector consisted of the control values at the grid points. The differential equations were integrated using 5th order Runge-Kutta steps with size Δt= 0.01 units, and the gradients required by the nonlinear programming code were found by finite differences.
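For readers wishing to reproduce this result, a Python sketch of the aircraft model is given below, following the equations above. The thrust-offset angle ε is not specified in this article, so it appears as a placeholder whose value must be taken from the original source (Bryson, 1999); everything else follows the expressions given here.

    import numpy as np

    EPSILON = 0.0   # thrust offset angle: NOT given in this article; placeholder only,
                    # the actual value should be taken from Bryson (1999)

    def thrust(V):
        return 0.2476 - 0.04312 * V + 0.008392 * V**2

    def C_D(alpha):
        return 0.07351 - 0.08617 * alpha + 1.996 * alpha**2

    def C_L(alpha):
        cl = 0.1667 + 6.231 * alpha
        if alpha > 12.0 * np.pi / 180.0:
            cl += 21.65 * (alpha - 12.0 * np.pi / 180.0)**2
        return cl

    def aircraft_dynamics(state, alpha, sigma):
        # normalised point-mass climbing-turn model with states (V, gamma, psi, h, x, y)
        V, gamma, psi, h, x, y = state
        T = thrust(V)
        lift = T * np.sin(alpha + EPSILON) + C_L(alpha) * V**2
        Vdot     = T * np.cos(alpha + EPSILON) - C_D(alpha) * V**2 - np.sin(gamma)
        gammadot = (lift * np.cos(sigma) - np.cos(gamma)) / V
        psidot   = lift * np.sin(sigma) / (V * np.cos(gamma))
        hdot     = V * np.sin(gamma)
        xdot     = V * np.cos(gamma) * np.cos(psi)
        ydot     = V * np.cos(gamma) * np.sin(psi)
        return np.array([Vdot, gammadot, psidot, hdot, xdot, ydot])

These dynamics can then be integrated with fixed Runge-Kutta steps and wrapped in a sequential quadratic programming solver over the control values at the grid points, as in the direct-method sketch given earlier.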

References

  • Bazaraa M.S., Sherali H.D. and Shetty C.M. (1993). Nonlinear Programming. Wiley. ISBN 0471557935.
  • Becerra, V.M. (2004) Solving optimal control problems with state constraints using nonlinear programming and simulation tools. IEEE Transactions on Education, 47(3):377-384.
  • Betts J.T. (2001) Practical Methods for Optimal Control Using Nonlinear Programming. SIAM. ISBN 0-89871-488-5.
  • Bryson A.E. (1996) Optimal control: 1950 to 1985. IEEE Control Systems Magazine, pp. 26-33 (June).
  • Bryson A.E. (Jr) and Ho Y. (1975) Applied Optimal Control. Halsted Press. ISBN 0-470-11481-9.
  • Cesari, L. (1983) Optimization-Theory and Applications: Problems With Ordinary Differential Equations. Springer. ISBN 3540906762.
  • Gelfand I.M. and Fomin S.V. (2003) Calculus of Variations. Dover Publications. ISBN 0486414485.
  • Lewis F.L. and Syrmos V.L. (1995) Optimal Control. John Wiley & Sons. ISBN 0-471-03378-2.
  • Leitmann G. (1981) The Calculus of Variations and Optimal Control. Springer. ISBN 0306407078.
  • Luenberger D.G. (1997) Optimization by Vector Space Methods. Wiley. ISBN 0471-18117-X.
  • Pontryagin L.S. (1987) The Mathematical Theory of Optimal Processes (Classics of Soviet Mathematics). CRC Press. ISBN 2881240771.
  • Sethi S and Thompson G.L. (2000) Optimal Control Theory: Applications to Management Science and Economics. Kluwer. ISBN 0792386086.
  • Sussmann H.J. and Willems J.C. (1997) 300 Years of Optimal Control: from the Brachystochrone to the Maximum Principle. IEEE Control Systems Magazine, pp. 32-44 (June).
  • Wan F.Y.M. (1995) Introduction to the Calculus of Variations and its Applications. Chapman & Hall. ISBN 0412051419.

Further reading

  • Athans M. and Falb P. L. (2006) Optimal Control: An Introduction to the Theory and Its Applications. Dover Publications. ISBN 0486453286.
  • Hull D. G. (2003) Optimal Control Theory for Applications. ISBN 0387400702
  • Sargent R.W.H. (2000) Optimal Control. Journal of Computational and Applied Mathematics. Vol. 124, pp. 361-371.
  • Seierstad A. and Sydsaeter K. (1987) Optimal Control Theory with Economic Applications. North Holland. ISBN 0444879234.

See also

Boundary Value Problem, Predictive Control, Robust Control, Stochastic Control Theory, Variational methods
