Hamilton-Jacobi equation

Robert L. Warnock (2010), Scholarpedia, 5(7):8330. doi:10.4249/scholarpedia.8330, revision #91338

Curator: Robert L. Warnock

The Hamilton-Jacobi Equation is a first-order nonlinear partial differential equation of the form \( H(x,u_x(x,\alpha,t),t)+u_t(x,\alpha,t)=K(\alpha,t)\) with independent variables \((x,t)\in {\mathbb R}^n\times{\mathbb R}\) and parameters \( \alpha\in {\mathbb R}^n\ .\) It has wide applications in optics, mechanics, and semi-classical quantum theory. Its solutions determine infinite families of solutions of Hamilton's ordinary differential equations, which are the equations of motion of a mechanical system or an optical system in the ray approximation.



Sir William Rowan Hamilton (1805-1865) carried out one of the earliest studies of geometrical optics in an arbitrary medium with varying index of refraction (Hamilton (1830-1832), Synge (1937), Carathéodory (1937)). He found a powerful expression of the topic in a characteristic function, which is the optical path length of a ray, regarded as a function of initial and final positions and times of the ray. This and related functions satisfy partial differential equations, and directly determine infinite families of rays. Following an analogy between rays and trajectories of a mechanical system, Hamilton soon extended his concepts to mechanics, incorporating ideas of Lagrange and others concerning generalized coordinates. The resulting Hamiltonian mechanics, notable for its invariance under coordinate transformations, is a cornerstone of theoretical physics.

With an emphasis on mechanics, Carl Gustav Jacob Jacobi (1804-1851) sharpened Hamilton's formulation, clarified mathematical issues, and made significant applications (Jacobi (1842-1843)). The resultant Hamilton-Jacobi theory and later developments are presented in several famous texts: Arnol'd (1974), Landau & Lifshitz (1969), Gantmacher (1970), Born & Wolf (1965), Lanczos (1949), Carathéodory (1982), Courant & Hilbert (1962). For studies using modern PDE theory see Lions (1982), Evans (2008), and Benton (1977). The theory embodies a wave-particle duality, which figured in the advent of the de Broglie - Schrödinger wave mechanics (Jammer (1966)). Hamilton-Jacobi theory also played an important role in development of the theory of first order partial differential equations and the calculus of variations (Courant & Hilbert (1962), Carathéodory (1982)).

In a view broader than that of the original work, a solution of the Hamilton-Jacobi equation is the generator of a canonical transformation, a symplectic change of variables intended to simplify the equations of motion. In this framework (as applied to mechanics) there are solutions of a type different from that of Hamilton, which determine not only orbits but also invariant tori in phase space on which the orbits lie. These solutions, which are known to exist only under special circumstances, are the subject of the celebrated work of Kolmogorov, Arnol'd, and Moser; see Gallavotti (1983). Even approximate invariants, constructed by approximate solutions of the Hamilton-Jacobi equation, have implications for stability of motion over finite times (Nekhoroshev (1977), Warnock & Ruth (1992)). Approximate invariants also find applications in the Einstein-Brillouin-Keller quantization of semi-classical quantum theory (Keller (1958), Percival (1977), Chapman et al. (1976), Martens & Ezra (1987)). Various forms and generalizations of the Hamilton-Jacobi equation occur widely in contemporary applied mathematics, for instance in optimal control theory (Fleming & Rishel (1975)).

Canonical Transformations

Canonical transformations (equivalently, symplectic transformations) are of crucial importance in classical mechanics, as they are the chief means of solving a mechanical system or clarifying the structure of the system when it cannot be solved. The Hamilton-Jacobi equation is used to generate particular canonical transformations that simplify the equations of motion.

A mechanical system with \(n\) degrees of freedom is described by generalized coordinates \(q=(q_1,\cdots, q_n)\) and corresponding generalized momenta \( p=(p_1,\cdots,p_n)\ ;\) we write \(z=(q,p)\ .\) The motion of the system is governed by Hamilton's canonical equations of motion, i.e., the ordinary differential equations

\[\tag{1} \dot q= H_p(z,t)\ ,\quad \dot p=-H_q(z,t)\ , \]

where \( \dot{}\ \) denotes the time derivative and subscripts indicate vectors of partial derivatives; thus \(H_q=(\partial H/\partial q_1,\cdots,\partial H/\partial q_n)\ .\) The Hamiltonian function \(H:\mathbb{R}^{2n}\times\mathbb{R}\rightarrow\mathbb{R}\) is here assumed to be \(C^2\) in \(z\) and continuous in \(t\ .\) The solution of the initial value problem (or flow) for the Hamiltonian system (1) is denoted by \({\mathbf z}(t,z_0)=({\mathbf q}(t,z_0),{\mathbf p}(t,z_0))\) for initial value \(z_0={\mathbf z}(0,z_0)\ .\) This solution, denoted by the bold faced letter \(\mathbf z\) to distinguish it from a general point \(z\) in phase space, will be called an orbit. If \(H\) depends on the time, specification of an orbit requires the initial time \(t_0\) (not just the elapsed time) as well as the initial condition \(z_0\ ;\) for convenience the origin of time is chosen so that \(t_0=0\ .\)
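
As a concrete illustration (a numerical sketch, not part of the original exposition; the pendulum Hamiltonian and step count are arbitrary choices), the flow \({\mathbf z}(t,z_0)\) can be computed by integrating (1) with a standard Runge-Kutta step; conservation of a time-independent \(H\) along the orbit provides a check:

```python
import math

# Pendulum (unit mass and length): H(q, p) = p**2/2 - cos(q), an illustrative choice.
def rhs(q, p):
    """Right-hand side of Hamilton's equations (1): qdot = H_p, pdot = -H_q."""
    return p, -math.sin(q)

def rk4_orbit(q0, p0, t, n_steps=1000):
    """The flow z(t, z0), computed with the classical Runge-Kutta step."""
    h, q, p = t/n_steps, q0, p0
    for _ in range(n_steps):
        k1 = rhs(q, p)
        k2 = rhs(q + 0.5*h*k1[0], p + 0.5*h*k1[1])
        k3 = rhs(q + 0.5*h*k2[0], p + 0.5*h*k2[1])
        k4 = rhs(q + h*k3[0], p + h*k3[1])
        q += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        p += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
    return q, p

H = lambda q, p: 0.5*p*p - math.cos(q)
q0, p0 = 1.0, 0.0
qT, pT = rk4_orbit(q0, p0, t=10.0)
print(abs(H(qT, pT) - H(q0, p0)))   # a time-independent H is conserved on orbits
```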

One seeks a transformation of coordinates, \(Z=(Q,P)=\Phi(z,t)=(\Phi_1(z,t),\Phi_2(z,t))\ ,\) so that the equations of motion retain their form but with a new Hamiltonian \(K\ ,\) namely

\[\tag{2} \dot Q= K_P(Z,t)\ ,\quad \dot P=- K_Q(Z,t)\ . \]

If \(K\) can be made independent of \(Q\ ,\) then \(\mathbf P\) is constant and the solution of (2) is given simply as

\[\tag{3} {\mathbf Q}(t,Z_0)=Q_0+\int_0^t K_P(P_0,\tau)d\tau\ ,\quad {\mathbf P}(t,Z_0)=P_0\ . \]

The solution of (1) is retrieved by the inverse transformation \(z=\Psi(Z,t) \equiv \Phi^{-1}(Z,t)\ .\)

Write \({\mathbf Z}(t,Z_0)=({\mathbf Q}(t,Z_0),{\mathbf P}(t,Z_0))=\Phi({\mathbf z}(t,z_0),t)\) for an orbit in the new coordinates, where \(Z_0=\Phi(z_0,0)\ .\) Reference to initial conditions will often be suppressed. A canonical transformation will be implicitly determined through the equation

\[\tag{4} {\mathbf p}(t)\cdot\dot{\mathbf q}(t)-H({\mathbf z}(t),t)=-{\mathbf Q}(t)\cdot\dot{\mathbf P}(t)-K({\mathbf Z}(t),t) +\frac{d}{dt}F({\mathbf q}(t),{\mathbf P}(t),t)\ , \]

where \(\cdot\) indicates the scalar product and the given function \(F(q,P,t)\) is \(C^2\) in its first two arguments, \(C^1\) in \(t\ ,\) and such that

\[\tag{5} \det F_{qP}=\det\{\partial^2 F/\partial q_i\partial P_j\}\ne 0\ , \]

in some open region \(\Omega\subset \mathbb{R}^{2n+1}\) of \((q,P,t)\)-space. This function \(F\) is called the generator or generating function of the transformation. By writing out \(dF/dt\ ,\) one sees that (4) is satisfied if

\[\tag{6} {\mathbf p}(t)=F_q({\mathbf q}(t),{\mathbf P}(t),t)\ , \]

\[\tag{7} {\mathbf Q}(t)=F_P({\mathbf q}(t),{\mathbf P}(t),t)\ , \]

\[\tag{8} K({\mathbf Z}(t),t)=H({\mathbf z}(t),t)+ F_t({\mathbf q}(t),{\mathbf P}(t),t)\ . \]

This suggests defining the canonical transformation by the equations

\[\tag{9} p=F_q(q,P,t)\ , \]

\[\tag{10} Q=F_P(q,P,t)\ . \]

Owing to condition (5) and the inverse function theorem, (9) can be solved for \(P=\Phi_2(z,t)\) (at least locally in \(\Omega\)). Substitution of the solution in (10) gives \(Q=\Phi_1(z,t)\) as well. To get the inverse transformation \(z=\Psi(Z,t)\ ,\) solve (10) for \(q=\Psi_1(Z,t)\ ,\) then substitute in (9) to find \(p=\Psi_2(Z,t)\ .\) Then the new Hamiltonian is defined by

\[\tag{11} K(Z,t)=H(z,t)+F_t(q,P,t)=H(\Psi(Z,t),t)+F_t(\Psi_1(Z,t),P,t)\ . \]

Textbooks usually apply a variational principle to show that the equations of motion are invariant in form under the transformation just defined. The advantage of the variational argument lies in its geometrical foundation, which provides motivation for the starting equation (4), but is too long a story for this brief account; see Arnol'd (1974) for the geometric viewpoint. By generalizing an idea in Jacobi's 20th lecture (Jacobi (1842-1843), pp.158-159), the proof may be carried out instead by direct calculation. Substitution of (9) and (10) in (11) gives

\[\tag{12} H(q,F_q(q,P,t),t)+F_t(q,P,t)=K(F_P(q,P,t),P,t)\ . \]

Take \(\partial/\partial P\) of (12), evaluate along orbits, and then subtract \(d/dt\) of (7). Similarly, take \(\partial/\partial q\) of (12), evaluate on orbits, and add \(d/dt\) of (6). This leads to the informative equations

\[\tag{13} F_{qP}(\dot{\mathbf q}-H_p)-(\dot{\mathbf Q}-K_P)+F_{PP}(\dot{\mathbf P}+K_Q)=0\ , \]

\[\tag{14} F_{qP}(\dot{\mathbf P}+K_Q)-(\dot{\mathbf p}+H_q)+F_{qq}(\dot{\mathbf q}-H_p)=0\ . \]

In view of (5), this shows that (1) implies (2) and vice versa, as long as \((q,Q,t)\) lies in \(\Omega\ .\)

There are other possible choices of the old and new variables on which the generator may depend. In general the condition (5) on \(F(q,P,t)\) will not hold globally, in which case one might alternatively try to use a function \(F_1(q,Q,t)\) with \(\det F_{1qQ}\ne 0\ .\) Then the equations analogous to (4), (9), (10), and (11) are

\[\tag{15} p\dot q-H=P\dot Q-K+dF_1/dt\ , \qquad p=F_{1q}\ ,\qquad P=-F_{1Q}\ ,\qquad H+F_{1t}=K\ . \]

A frequently used notation follows Goldstein (1981), who writes \(F_2(q,P,t)\) for the first \(F\) discussed above, and gives equations for the four functions \(F_1(q,Q,t), F_2(q,P,t), F_3(p,Q,t), F_4(p,P,t)\ .\) These are far from being the only possible choices; see Feng et al. (1989) and Erdelyi & Berz (2001) for a broader view. According to a theorem in Section 48B of Arnol'd (1974) there is always a generator that can represent locally a given canonical transformation. It may depend on \(q=(q_1,\cdots,q_n)\) and \(n\) new variables \((P_{i_1},\cdots, P_{i_k},Q_{j_1},\cdots,Q_{j_{n-k}})\ .\)

One can show that the transformation induced by any generator with requisite smoothness is symplectic, which means that its Jacobian matrix \(M=\{ \partial \Phi_i(z,t)/\partial z_j \}\) is symplectic for all \(z\ .\) Written in terms of \(n\times n\) blocks this condition is

\[\tag{16} MJM^T=J\ ,\quad M=\begin{bmatrix}\partial Q/\partial q&\partial Q/\partial p\\ \partial P/\partial q&\partial P/\partial p\end{bmatrix}\ ,\quad J=\begin{bmatrix}0& -I\\I& 0\end{bmatrix}\ , \]

where \(T\) denotes transpose. For \(n=1\) the symplectic condition reduces to \(\det M=1\ .\) A symplectic transformation preserves volumes in phase space and areas on appropriate surfaces of even dimension. The conserved quantities are known as Poincaré invariants (Arnol'd, 1974, Gantmacher, 1970).

To prove (16) for the transformation induced by \(F\ ,\) differentiate (9) and (10) with respect to \(q\) and \(p\ .\) Thanks to (5), the resulting equations can be solved for \(M\ ;\) some calculation then shows that the solution obeys (16). An alternative viewpoint is to take symplecticity as the defining property of a canonical transformation (Meyer et al., 2008).
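
A small numerical illustration (a toy generator, not from the original treatment) makes the construction concrete: build the transformation from a type-2 generator via (9) and (10), then check \(\det M=1\), the \(n=1\) form of (16), by finite differences.

```python
# A toy type-2 generator (illustrative, n = 1): F(q, P) = q*P + eps*q**2*P,
# valid where 1 + 2*eps*q != 0 so that condition (5) holds.
eps = 0.3

def transform(q, p):
    """Solve p = F_q(q, P) for P, then evaluate Q = F_P(q, P)  (eqs. (9), (10))."""
    P = p/(1.0 + 2.0*eps*q)          # F_q = P*(1 + 2*eps*q), solved explicitly
    Q = q + eps*q*q                  # F_P = q + eps*q**2
    return Q, P

def jacobian_det(q, p, h=1e-6):
    """Central-difference det M; for n = 1 the symplectic condition (16)
    reduces to det M = 1."""
    dQdq = (transform(q + h, p)[0] - transform(q - h, p)[0])/(2*h)
    dQdp = (transform(q, p + h)[0] - transform(q, p - h)[0])/(2*h)
    dPdq = (transform(q + h, p)[1] - transform(q - h, p)[1])/(2*h)
    dPdp = (transform(q, p + h)[1] - transform(q, p - h)[1])/(2*h)
    return dQdq*dPdp - dQdp*dPdq

print(jacobian_det(0.7, -1.2))       # close to 1 at any point of the domain
```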

Hamilton-Jacobi Equation and Invariant Tori

To produce a useful transformation the generator \(F\) must be determined so that \(K\) is indeed independent of \(Q\ ,\) thus giving (3) as the solution of the transformed equations. With this form of \(K\ ,\) substitution of (9) in (11) yields

\[\tag{17} H(q, F_q(q,P,t),t)+ F_t(q,P,t)=K(P,t)\ , \]

which is the Hamilton-Jacobi equation for the type-2 generator. Here \(P\) is regarded as a (vector) parameter; the independent variables of the PDE are \(q\) and \(t\ .\) A solution of (17) depending on \(n\) parameters \(P_i\) and such that \(\det F_{qP}\ne 0\) was called a complete solution (Vollständige Lösung) by Jacobi. As was shown above, it determines a canonical transformation.

A case of great interest, that considered by Hamilton and Jacobi, is obtained by requiring \(K=0\ .\) Then \(Q\) and \(P\) are constant and the orbit \(z(t)\) determined by a complete solution through (9) and (10) satisfies the original Hamilton equations (1). That is seen from (13) and (14) which now reduce to

\[\tag{18} F_{qP}(\dot{\mathbf q}-H_p)=0\ ,\quad \dot{\mathbf p}+H_q-F_{qq}(\dot{\mathbf q}-H_p)=0\ . \]

By completeness the first equation implies \(\dot{\mathbf q}=H_p\ ,\) then the second equation gives \(\dot{\mathbf p}=-H_q\ .\) Thus we have determined an infinite family of orbits through a complete solution of the Hamilton-Jacobi equation with \(K=0\ ,\) the initial conditions for each orbit being fixed by a choice of the \(2n\) parameters \(P_i,Q_i\ .\) The parameters need not be interpreted as new momenta and coordinates (and in fact were not in Jacobi's original treatment), and the \(P_i\) may enter the solution \(F\) in any way, perhaps through an Ansatz for the form of the solution. This will be illustrated below for the case of a central potential. A frequently used notation, due to Jacobi, is \(P=\alpha,\ Q=\beta\ .\)

In the following section it will be shown that a knowledge of the orbits is sufficient to construct explicitly a complete solution of the Hamilton-Jacobi equation with \(K=0\ .\) Thus the question of existence of the complete solution can be referred to the standard existence theory for ordinary differential equations. The situation is far different with \(K\ne 0\ ,\) in which case solutions of (17) exist only under special circumstances and are not continuous in \(P\ .\) A complete discussion of this case is too much for a short article, so the goal will be to give some idea of the character of the problem, and a method of solution for a truncated version of the problem. In particular, it will be seen how the construction of \(K\) must be part of the solution procedure.

To illustrate the situation with non-zero \(K\ ,\) take the case of a time-independent Hamiltonian \(H(z)\) and look for a solution in which \(K\) and \(F\) are also time-independent. Take polar (angle-action) coordinates \((q,p)=(\phi,I),\ \ (Q,P)=(\psi,J)\ ,\) where the angles satisfy \(\phi,\psi\in [0,2\pi]^n\) and the actions \(I,J\in [0,\infty)^n\ .\) Also, define \(G\) so that \(F(\phi,J)=\phi\cdot J+G(\phi,J)\ ,\) where the first term on the right gives the identity transformation. Then the Hamilton-Jacobi equation to solve for \(G\) is

\[\tag{19} H(\phi,J+G_\phi(\phi,J))=K(J)\ , \]

and the equations (9) and (10) defining the transformation are

\[\tag{20} I=J+G_\phi(\phi,J)\ , \]

\[\tag{21} \psi=\phi+G_J(\phi,J)\ . \]

If \(G\) satisfies (19) for some function \(K(J)\ ,\) then \(J\) is constant and (20) represents an invariant torus in phase space. The new angle variable \(\psi\) advances linearly in time, according to (3).

Now consider a perturbed integrable system with Hamiltonian

\[\tag{22} H(\phi,I)=H_0(I)+\epsilon V(\phi,I)\ , \]

which satisfies a condition of non-degeneracy

\[\tag{23} \det\ \nu_I(I)\ne 0\ ,\quad \nu(I)= H_{0I}(I)\ . \]

Next rearrange (19) to subtract the first terms of the Taylor series of \(H_0(J+G_\phi)\ :\)

\[\tag{24} -\nu(J)\cdot G_\phi=\epsilon V(\phi,J+G_\phi)+\big[ H_0(J+G_\phi)-H_0(J)-\nu(J)\cdot G_\phi\big] +\big[ H_0(J)-K(J)\big] \ . \]

The sum of the terms in the first square bracket is \(\mathcal{O}(G_\phi^2)\) and therefore small if the transformation induced by (20),(21) is close to the identity. Introduce the (multiple) Fourier series

\[\tag{25} G(\phi,J)= \sum_{m\in Z^n} g_m(J)\exp(im\cdot\phi)\ \]

so that

\[\tag{26} G_\phi(\phi,J)= \sum_{m\in Z^n} im\ g_m(J)\exp(im\cdot\phi)\ , \]

and take the Fourier transform of (24) to obtain

\[\tag{27} g_m(J)=\frac{i}{m\cdot\nu(J)}\frac{1}{(2\pi)^n}\int_{T^n} \exp(-im\cdot\phi)\big[\epsilon V(\phi,J+G_\phi) + H_0(J+G_\phi)-H_0(J)-\nu(J)\cdot G_\phi\big]d\phi,\quad m\ne {\mathbf 0}\ . \]

Since \(G_\phi\) does not contain the zero mode, the set of equations (26) and (27) is a closed system for the Fourier coefficients \(g_m,\ m\ne{\mathbf 0}\ .\) If a solution of this system is known for some \(J\ ,\) then the projection of (19) onto every mode except the zero mode has been solved. The zero mode projection is solved as well simply by defining \(K\) as the average of the left-hand side:

\[\tag{28} K(J)=\frac{1}{(2\pi)^n}\int_{T^n}d\phi\big[H_0(J+G_\phi)+ \epsilon V(\phi,J+G_\phi)\big]\ . \]

This gives some understanding of how the PDE (19) could be solved without prior knowledge of its right-hand side. The zero mode amplitude \(g_{\mathbf 0}\) can be chosen arbitrarily, for instance put equal to zero.

At first sight Eq.(27) would seem to be a straightforward fixed point problem that might be solved for the \(g_m\) by some kind of iteration, provided that the divisor \(m\cdot\nu(J)\) could be bounded away from zero through an appropriate choice of \(J\ .\) Thanks to (23) the value of \(\nu\) can be controlled by varying \(J\ .\) The iteration might be started by keeping only the term \(\epsilon V\ ,\) which gives lowest order perturbation theory. If the series (25) is truncated, then the problem can indeed be approached in that way, and (27) provides a practical method for computing approximate invariant tori (Warnock & Ruth (1987)). The exact problem requires the refined method of KAM theory to control small divisors \(m\cdot\nu \) for large \(m\) (Gallavotti (1983), Pöschel (1982)). The theory ensures the existence of invariant tori for sufficiently small \(\epsilon\ ,\) but they are not continuous functions of \(J\ .\) Rather, they exist only on a Cantor set in \(J\)-space, and the concept of complete solution does not apply in the classical sense (it is nevertheless possible to construct a smooth function which solves the Hamilton-Jacobi equation on the above-mentioned Cantor set; see Pöschel (1982)).
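
The truncated scheme can be made concrete in one degree of freedom (an illustrative sketch under simplifying assumptions, not the method of the cited works): for \(H_0(I)=I^2/2\) and \(V=\cos\phi\) the frequency \(\nu(J)=J\) is a scalar, there are no small divisors, and the mode-by-mode update (27) collapses to a real-space fixed-point iteration on a \(\phi\)-grid, with \(K(J)\) defined by (28) as the zero-mode average.

```python
import math

# Toy 1-DOF case (illustrative): H(phi, I) = I**2/2 + eps*cos(phi),
# so H0 = I**2/2 and nu(J) = H0'(J) = J; there are no small divisors here.
eps, J, N = 0.05, 1.0, 256
phi = [2.0*math.pi*k/N for k in range(N)]
Gphi = [0.0]*N                        # G_phi on the grid (zero mean)

for _ in range(200):
    # Right-hand side of (24): eps*V(phi, J+G_phi) + [H0(J+G_phi) - H0(J) - nu*G_phi],
    # which for H0 = I**2/2 is eps*cos(phi) + G_phi**2/2.
    rhs = [eps*math.cos(phi[k]) + 0.5*Gphi[k]**2 for k in range(N)]
    mean = sum(rhs)/N                 # the zero mode is absorbed into K(J), eq. (28)
    new = [-(r - mean)/J for r in rhs]
    done = max(abs(new[k] - Gphi[k]) for k in range(N)) < 1e-14
    Gphi = new
    if done:
        break

# K(J) as in (28); then H(phi, J + G_phi) - K(J) is the residual of (19).
K = sum(0.5*(J + Gphi[k])**2 + eps*math.cos(phi[k]) for k in range(N))/N
residual = max(abs(0.5*(J + Gphi[k])**2 + eps*math.cos(phi[k]) - K) for k in range(N))
print(residual)   # the truncated Hamilton-Jacobi equation holds on the grid
```

At convergence, (20) with the computed \(G_\phi\) gives an approximate invariant torus \(I(\phi)\) for this perturbed pendulum-type system.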

Action as a Solution of the Hamilton-Jacobi Equation

The following discussion is mostly an interpretation of Jacobi's 19th lecture. For a geometric approach see Arnol'd (1974), Section 46C. The goal is to solve the Hamilton-Jacobi equation for a Type-1 generator with the new Hamiltonian \( K = 0\ .\) Write \(Q=q_0\) so that the equation is

\[\tag{29} H(q,F_{1q}(q,q_0,t),t)+F_{1t}(q,q_0,t)=0\ . \]

Using the method of characteristics, suppose that the characteristic (orbit) \({\mathbf z}(t,z_0)=({\mathbf q}(t,z_0),{\mathbf p}(t,z_0))\) that solves (1) is known. Let us try to determine \(F_1(q,q_0,t)\) from its values for \(q={\mathbf q}(t,z_0)\) by means of an ODE for \(g(t)=F_1({\mathbf q}(t,z_0),q_0,t)\ .\) Since \(\dot g=F_{1q}\dot q+F_{1t}\ ,\) equations (15) and (29) suggest putting

\[\tag{30} \dot g(t)={\mathbf p}(t,z_0)\cdot \dot{\mathbf q}(t,z_0)-H({\mathbf z}(t,z_0),t)\ , \]

whence by integration the proposal

\[\tag{31} F_1({\mathbf q}(t,z_0),q_0,t) = \int_0^t\big[{\mathbf p}(\tau,z_0)\cdot \dot {\mathbf q}(\tau,z_0)-H({\mathbf z} (\tau,z_0),\tau)\big]d\tau\ \equiv S(q_0,p_0,t)\ . \]

From this one would like to get \(F_1(q,q_0,t)\) for general \(q\ ,\) but that can be done only if \(p_0\) can be deduced from the \(2n+1\) numbers \((q,q_0,t)\ .\) In general this is not possible for all \(t\ ;\) since orbits projected onto \(q\) space can cross, there can be more than one \(z_0\) giving the same \({\mathbf q}(t,z_0)\ .\) The locus of such crossings is called a caustic. To rule out caustics, the equation \(q={\mathbf q}(t,q_0,p_0)\) must be solvable uniquely for \(p_0=\mathcal{P}_0(q,q_0,t)\ .\) To ensure this, suppose \(t>0\) and

\[\tag{32} \det\bigg[\frac{\partial{\mathbf q }(t,q_0,p_0)}{\partial p_0}\bigg]\ne 0\ . \]

Under these conditions the proposed generator is defined through (31) as

\[\tag{33} F_1(q,q_0,t)=S(q_0,\mathcal{P}_0(q,q_0,t),t)\ . \]

This was Hamilton's essential idea, to view the action (integral of the Lagrangian) as a function of initial and final coordinates and times.

To show that \(F_1\) satisfies (29), first make a variation of the orbit, \({\mathbf z}(t,z_0)\rightarrow \tilde{\mathbf z}(t,\epsilon)={\mathbf z}(t,z_0)+\epsilon\delta{\mathbf z}(t)\ ,\) where \(\delta{\mathbf z}\) is an arbitrary \(C^1\) function. After integration by parts the corresponding variation of (31) is

\[\tag{34} \delta F_1({\mathbf q}(t,z_0),q_0,t) \equiv \bigg[\frac{d}{d\epsilon}\int_0^t\big[\tilde{\mathbf p}(\tau,\epsilon) \cdot\frac{d}{d\tau}\tilde{\mathbf q}(\tau,\epsilon)-H(\tilde{\mathbf z}(\tau,\epsilon),\tau)\big]d\tau \bigg]_{\epsilon=0} \]
\[ =\int_0^t\big[(\dot{\mathbf q}-H_p)\cdot\delta {\mathbf p}-(\dot{\mathbf p}+H_q)\cdot\delta{\mathbf q} \big]d\tau+{\mathbf p}(\tau,z_0)\cdot\delta{\mathbf q}(\tau)\bigg|_0^t\ . \]

Since the integral is zero by (1), it follows that

\[\tag{35} \delta F_1({\mathbf q}(t,z_0),q_0,t)=F_{1q}({\mathbf q}(t,z_0),q_0,t)\cdot\delta{\mathbf q}(t,z_0)+F_{1q_0}({\mathbf q}(t,z_0),q_0,t)\cdot\delta q_0= {\mathbf p}(t,z_0)\cdot\delta{\mathbf q}(t,z_0)-p_0\cdot\delta q_0\ , \]

and since the variations are arbitrary

\[\tag{36} {\mathbf p}(t,z_0)=F_{1q}({\mathbf q}(t,z_0),q_0,t)\ , \]

\[\tag{37} p_0=-F_{1q_0}({\mathbf q}(t,z_0),q_0,t)\ . \]

Next take \(d/dt\) of (31) and apply (36) to obtain

\[\tag{38} H({\mathbf q}(t,z_0),F_{1q}({\mathbf q}(t,z_0),q_0,t),t)+F_{1t}({\mathbf q}(t,z_0),q_0,t)=0\ . \]

Now this shows that \(F_1\) satisfies the Hamilton-Jacobi equation (29) since for any \(q, q_0\) there is a \(p_0\) such that \(q={\mathbf q}(t,z_0)\ ,\) by condition (32).

Recalling the equations (15) that define the canonical transformation, it is seen from (36) and (37) that the transformation from new to old variables is just the time evolution \(z={\mathbf z}(t,z_0)\ ,\) the new variables being the initial conditions \(z_0\ ,\) which are constant because the new Hamiltonian is zero. The condition \(\det(F_{1qq_0})\ne 0\) is implied by (32) and (37), as may be seen by differentiating the latter with respect to \(p_0\ ,\) then taking determinants.

If it is not possible to solve \(q={\mathbf q}(t,q_0,p_0)\) for \(p_0\ ,\) it may instead be possible to solve for \(q_0=\mathcal{Q}_0(q,p_0,t)\ .\) Then we can use a generator of Type 2, easily constructed by a Legendre transformation of \(F_1\) (Goldstein (1981)). Namely,

\[\tag{39} F_2(q,p_0,t)=F_1(q,q_0,t)+q_0\cdot p_0 \equiv F_1(q,\mathcal{Q}_0(q,p_0,t),t)+ \mathcal{Q}_0(q,p_0,t)\cdot p_0=S(\mathcal{Q}_0(q,p_0,t),p_0,t)+\mathcal{Q}_0(q,p_0,t)\cdot p_0\ . \]

By again applying the variational argument, it is easy to check that \(F_2\) satisfies all the required equations.

The discussion above proves existence of a solution of (29) in terms of the more elementary existence theory for (1), and also suggests methods of numerical solution of (29).
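
The construction (31)-(33) can be verified numerically in a simple case (a sketch; the closed-form principal function used below is the textbook result for the harmonic oscillator): the action integral computed along the orbit agrees with the closed form, which in turn satisfies (36) and (37).

```python
import math

# Harmonic oscillator with m = omega = 1 (illustrative); exact orbit of (1).
def orbit(t, q0, p0):
    return (q0*math.cos(t) + p0*math.sin(t),    # q(t, z0)
            p0*math.cos(t) - q0*math.sin(t))    # p(t, z0)

def action(t, q0, p0, n=4000):
    """S(q0, p0, t) of (31): trapezoid rule for the integral of
    p*qdot - H = (p**2 - q**2)/2 along the orbit."""
    h, s = t/n, 0.0
    for k in range(n + 1):
        q, p = orbit(k*h, q0, p0)
        w = 0.5 if k in (0, n) else 1.0
        s += w*h*0.5*(p*p - q*q)
    return s

def F1(q, q0, t):
    """Closed-form principal function of the oscillator (valid for t not a
    multiple of pi, where condition (32) fails)."""
    return ((q*q + q0*q0)*math.cos(t) - 2.0*q*q0) / (2.0*math.sin(t))

t, q0, p0 = 1.3, 0.8, -0.4
q, p = orbit(t, q0, p0)
print(abs(action(t, q0, p0) - F1(q, q0, t)))    # quadrature agrees with F1

# Check (36) and (37) by central differences: p = F_1q and p0 = -F_1q0.
h = 1e-6
assert abs((F1(q + h, q0, t) - F1(q - h, q0, t))/(2*h) - p) < 1e-6
assert abs(-(F1(q, q0 + h, t) - F1(q, q0 - h, t))/(2*h) - p0) < 1e-6
```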

Solution of Classical Problems by Separation of Variables

Hamilton's principal function (33) solves the Hamilton-Jacobi equation (29) and determines an infinite family of orbits, but in order to construct it one needs to know this family of orbits at the start. Gantmacher (1970, Chap.4, Sect.26) refers to this as a "vicious circle", and states that "Jacobi's contribution consists in the fact that he continued Hamilton's investigation and broke the vicious circle". He showed that any solution \(F(q,t,\alpha)\) of

\[\tag{40} H(q,F_q)+F_t=0\ , \]

depending on real parameters \(\alpha= (\alpha_1,\cdots,\alpha_n)\) and complete in the sense that \(\det F_{q\alpha}\ne 0\ ,\) determines the orbits of the system. This is just the story recounted above, with the notation \(P=\alpha\ .\)

It is instructive to illustrate Jacobi's program in a soluble example. Gantmacher (1970, Chap.4, Sect.27) describes three structures of the Hamiltonian, labeled \( 1^o, 2^o, 3^o\ ,\) for which (29) is explicitly soluble, each embodying the idea of "separation of variables". Type \(2^o\) includes some basic systems. For this case in two degrees of freedom the Hamiltonian has the form

\[\tag{41} H(z)=H_2(H_1(z_1),z_2)\ ,\quad z_i=(q_i,p_i)\ , \]

and similarly for \(n\) degrees of freedom,

\[\tag{42} H(z)=H_n(\cdots H_3(H_2(H_1(z_1),z_2),z_3)\cdots,z_n)\ . \]

Each \(H_i\) is required to be \(C^1\) in all arguments and to satisfy

\[\tag{43} \frac{\partial H_i(\cdots,q_i,p_i)}{\partial p_i}\ne 0\ . \]

Considering now two degrees of freedom, notice that because of (43) there exist functions \(G_1,G_2\) such that

\[\tag{44} H_1(q_1,G_1(q_1,\alpha_1))=\alpha_1\ ,\quad H_2(\alpha_1,q_2,G_2(q_2,\alpha_2,\alpha_1))=\alpha_2\ , \]

for any constants \(\alpha_1,\alpha_2\ .\) Identification of \(G_i\) with \(F_{q_i}\) gives a solution of (40) in the form

\[\tag{45} F(q,t,\alpha)=\int_{q_{10}}^{q_1}G_1(q_1^\prime,\alpha_1)dq_1^\prime + \int_{q_{20}}^{q_2}G_2(q_2^\prime,\alpha_2,\alpha_1)dq_2^\prime-\alpha_2 t\ . \]

Indeed, after substitution of this function and application of (44) the l.h.s. of (40) reads \(\alpha_2-\alpha_2\ .\) Moreover, this solution is complete, as is seen by differentiating the first equation of (44) with respect to \(\alpha_1\) and the second with respect to \(\alpha_2\ .\) Because of (43), that shows that \(D_2G_1\ne 0\) and \(D_3G_2\ne 0\ ,\) hence \(\det F_{q\alpha} =D_1G_1D_3G_2\ne0\ ,\) where \(D_i\) means partial derivative with respect to the \(i\)-th argument.

An example is planar motion in a central potential \(V(r)\) (Goldstein, 1981). In polar coordinates the Hamiltonian for a particle of mass \(m\) is

\[\tag{46} H=\frac{1}{2m}\bigg[p_r^2+\frac{p_\phi^2}{r^2}\bigg]+V(r)\ , \quad (q_1,p_1)=(\phi,p_\phi)\ ,\quad (q_2,p_2)=(r,p_r)\ . \]

Since \(H\) is independent of \(\phi\ ,\) the conjugate momentum \(p_\phi\) is constant in time; it is the conserved angular momentum. To apply the above scheme put

\[\tag{47} H_1(q_1,p_1)=p_1=\alpha_1\ ,\quad H_2(\alpha_1,q_2,p_2)=\frac{1}{2m} \bigg[p_2^2+\frac{\alpha_1^2}{q_2^2}\bigg]+V(q_2)=\alpha_2\ . \]

Notice that \(H_2\) satisfies (43) if and only if \(p_2\ne0\ .\) Now the \(G_i\) defined by (44) are

\[\tag{48} G_1(q_1,\alpha_1)=\alpha_1\ , \]

\[\tag{49} G_2(q_2,\alpha_2,\alpha_1)=\pm\Pi(q_2,\alpha_2,\alpha_1)\ ,\quad \Pi= \bigg[2m\big(\alpha_2-V(q_2)\big)-(\alpha_1/q_2)^2\bigg]^{1/2}\ge 0\ . \]

In physicist's notation \( \alpha_1=L=\) angular momentum, \( \alpha_2=E= \) energy, and the formula (45) reads

\[\tag{50} F(\phi,r,L,E,t)=L(\phi-\phi_0)-Et\pm\int_{r_0}^r\Pi(\rho,E,L)d\rho\ ,\quad \Pi(\rho,E,L)=\big[2m(E-V(\rho))-(L/\rho)^2\big]^{1/2}\ . \]

Here \( r_0, r\) must be such that the argument of the square root is non-negative in the region of integration. The motion \( \phi(t),\ r(t) \) is obtained by solving (9) and (10) with \(P=\alpha\) and \(Q=\beta\ .\) To that end compute

\[\tag{51} \beta_1=F_L=\phi-\phi_0\mp L\int_{r_0}^r\Pi(\rho,E,L)^{-1}d\rho/\rho^2\ , \]

\[\tag{52} \beta_2=F_E=-t\pm m\int_{r_0}^r\Pi(\rho,E,L)^{-1}d\rho\ , \]

For initial conditions \( \phi_0,\ p_{\phi_0},\ r_0,\ p_{r_0}\) the parameters are \( \beta_1=0,\ \beta_2=0,\ \alpha_1=p_{\phi_0}=L,\ \alpha_2= (p_{r0}^2+(\alpha_1/r_0)^2)/2m+V(r_0)=E \ .\) Now (52) gives \(t(r)\ ,\) which must be inverted to give \(r(t)\ ;\) then (51) gives \(\phi(t)\ .\) This is the standard solution derived less elegantly in elementary treatments without the Hamilton-Jacobi method. The choice of sign in front of the integrals depends on \(t\) and initial conditions. Suppose that the potential is attractive and \(E\) is such that there is oscillatory motion in the effective one-dimensional potential with \(r_0\le r\le r_1\) (Goldstein, 1981). During the first half-period \(T/2\) the integral (52) runs from \(r_0\) to \(r\le r_1\ ,\) with the plus sign. During the second half-period the integral is defined as \(T/2\) plus the integral from \(r_1\) to \(r\ge r_0\) with the minus sign, and so on. Within any half-period the integral is monotonic in \(r\) so that the inversion of \(t(r)\) is always possible.
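
As a numerical check (an illustrative sketch, not part of the original text; the values of \(m,k,r_0,r_1\) are arbitrary choices), the quadrature below evaluates the radial half-period \(T/2=m\int_{r_0}^{r_1}\Pi^{-1}dr\) for the Kepler potential \(V(r)=-k/r\) and compares the full period with Kepler's third law, \(T=2\pi\sqrt{ma^3/k}\ .\)

```python
import math

# Kepler potential V(r) = -k/r with m = k = 1 (an illustrative choice).
m, k = 1.0, 1.0
r0, r1 = 0.5, 1.5                    # turning points of the radial oscillation
a = 0.5*(r0 + r1)                    # semi-major axis
E = -k/(2.0*a)                       # energy consistent with these turning points
L = math.sqrt(2.0*m*abs(E)*r0*r1)    # angular momentum: Pi vanishes at r0 and r1

def Pi(r):
    return math.sqrt(max(0.0, 2.0*m*(E + k/r) - (L/r)**2))

def half_period(n=2000):
    """T/2 = m * integral of dr/Pi from r0 to r1 (since dr/dt = Pi/m on the
    upper branch).  The substitution r = r0 + (r1 - r0)*sin(th)**2 removes the
    inverse-square-root singularities at both turning points, and the midpoint
    rule never evaluates exactly at a turning point."""
    h, total = 0.5*math.pi/n, 0.0
    for j in range(n):
        th = (j + 0.5)*h
        r = r0 + (r1 - r0)*math.sin(th)**2
        dr = 2.0*(r1 - r0)*math.sin(th)*math.cos(th)
        total += h*m*dr/Pi(r)
    return total

T = 2.0*half_period()
print(T, 2.0*math.pi*math.sqrt(m*a**3/k))   # Kepler's third law
```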

A list of other classical problems for which the Hamilton-Jacobi equation is separable in appropriate coordinates includes the Kepler two-body problem, planar motion in the Coulomb field of two fixed charges, planar motion in the Coulomb field of one fixed charge plus a constant electric field, and free motion of a particle constrained to an ellipsoid (Landau & Lifshitz (1969), Arnol'd (1974), Goldstein (1981), Jacobi (1842-1843)). All of these problems were treated by Jacobi. For a comprehensive study of integrable problems from a geometric perspective, which uses separation of variables and many other techniques, see Perelomov (1990).

Wave-Particle Duality and the Classical Limit of Quantum Theory

In the important case of a time-independent Hamiltonian, one may seek solutions of (17) with \(K=0\) in the form

\[\tag{53} F(q,P,t)=W(q,P)-P_1t\ , \]

where \(W\) is to be determined by the time-independent Hamilton-Jacobi equation

\[\tag{54} H(q,W_q(q,P))=E\ ,\quad E=P_1\ . \]

Here \(E\) is identified with the energy of the system, the value of \(H\) on an orbit. Now suppose that \(F\) is a complete solution and consider a family of orbits generated through (9) and (10), the members of the family corresponding to various values of \(Q\) with fixed \(P\ ,\) thus an \(n\)-dimensional family. (Recall that \(Q,P\) determine initial conditions of the orbits.) Now let us view this family in \((q,t)\)-space, supposing that \(t\) is sufficiently small to prevent caustics; that is, different curves \(q(t,Q,P)\) shall not intersect.

It is interesting to consider surfaces of constant \(F\) determined by the equation

\[\tag{55} F(q,P,t)=W(q,P)-Et=c(P)\ . \]

At any \(t\) the normal to the surface in \(q\)-space is in the direction of \(p=W_q(q,P)\ .\) Assuming for convenience that coordinates are Cartesian, it follows that the particles are moving normal to the surface in \(q\)-space at each \(t\ .\) A representative point on the surface defined by (55) is denoted by \((q_s(t),t)\ ,\) and the velocity of such a point, projected onto the unit normal \(n(q_s)\ ,\) is obtained by differentiating (55) as follows:

\[\tag{56} W_q(q_s,P)\cdot \frac{dq_s}{dt}=E\ ,\quad n(q_s)\cdot\frac{dq_s}{dt}=E/|W_q(q_s,P)|\ . \]

This velocity \(dq_s/dt\) might be called the phase velocity or wave front velocity of a "wave" defined by (55). It is not to be confused with the particle velocity; certainly not, since by (56) it is in a different direction and its projection onto the particle's direction is inversely proportional to the particle's velocity. The slower the particles, the faster the wave front moves.

This wave front description can be connected with quantum mechanics. The connection was of great importance in the development of de Broglie - Schrödinger wave mechanics. For simplicity take the Schrödinger equation for one particle with Cartesian coordinates \(q=(q_1,q_2,q_3)\ ,\)

\[\tag{57} -\frac{\hbar^2}{2m}\triangle_q\psi+V(q)\psi=i\hbar\psi_t\ . \]

The following story can be generalized to many interacting particles. Write the wave function in phase-amplitude form

\[\tag{58} \psi(q,t)=A(q,t)\exp\bigg[\frac{i}{\hbar}F(q,t)\bigg]\ , \]

where \(A\) and \(F\) are real. After substituting in (57) and separating real and imaginary parts one finds a pair of equations entirely equivalent to the Schrödinger equation,

\[\tag{59} \frac{1}{2m}\big(F_q\big)^2+V+F_t=\frac{\hbar^2}{2m}\frac{\triangle_qA}{A}\ , \]

\[\tag{60} mA_t+A_q\cdot F_q+\frac{1}{2}A\triangle_qF=0\ . \]

The second equation can be recognized as the continuity equation of quantum mechanics, when stated in terms of \(\rho\ ,\) the probability density for finding a particle at \(q\ ,\) and \(\mathbf J\ ,\) the probability flux, where

\[\tag{61} \rho=|\psi|^2=A^2\ ,\quad {\mathbf J}={\rm Re}\bigg[\frac{\hbar}{im}\psi^*\psi_q\bigg]=\frac{1}{m}\rho F_q\ . \]

Then multiplication of (60) by \(2A/m\) yields the continuity equation

\[\tag{62} \rho_t+\nabla_q\cdot{\mathbf J}=0\ . \]

One hopes to retrieve some features of classical physics, if not all, by regarding Planck's constant as small. In the small-\(\hbar \) limit the right hand side of (59) is zero, and \(F\) satisfies the classical Hamilton-Jacobi equation

\[\tag{63} H(q,F_q)+F_t=0\ ,\quad H(q,p)=\frac{p^2}{2m}+V(q)\ . \]

Since \(H\) is time-independent a solution is sought in the form \(F(q,P,t)=W(q,P)-Et\ ,\) with parameters \(P=(E,P_2,P_3)\ .\) Correspondingly, \(A(q,P)\) is assumed to be time-independent, so that the equations for \(W\) and \(A\) are

\[\tag{64} H(q,W_q)=E\ , \]

\[\tag{65} \nabla_q\cdot(A^2W_q)=0\ . \]

Given a complete solution of (64) one must then solve (65), a linear PDE for \(A^2\) with variable coefficients. Thus one finds the zeroth-order semi-classical wave function

\[\tag{66} \psi_0(q,t)=A(q,P)\exp\bigg[\frac{i}{\hbar}(\ W(q,P)-Et\ )\bigg]\ . \]

It now appears that the general concept of phase velocity introduced above is just the conventional phase velocity for the matter wave of (66). Note also the beautiful expression \({\mathbf J}=\rho{\mathbf v}\) where \({\mathbf v}={\mathbf p}/m=W_q/m\) is the classical velocity.

In the case of one degree of freedom the construction of (66) can be worked out explicitly. The result is the lowest order WKB approximation presented in standard textbooks (Messiah, 1999). A striking feature is that imaginary solutions \(W\) are relevant, corresponding to quantum mechanical tunneling into classically forbidden regions of coordinate space. Also, for bound states the energy turns out to be quantized. Since important quantal features are retained even in the small-\(\hbar\) limit, the semi-classical theory should be carried beyond the one-dimensional case.
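
The one-dimensional construction is worth recording explicitly (a standard computation; see Messiah (1999)). With \(p(q)=\sqrt{2m(E-V(q))}\ ,\) equation (64) gives \(W_q=\pm p\ ,\) and (65) reduces to \((A^2W_q)_q=0\ ,\) so that

\[ W(q,E)=\pm\int^q p(q')\,dq'\ ,\quad A(q,E)=\frac{C}{\sqrt{p(q)}}\ ,\quad \psi_0(q,t)=\frac{C}{\sqrt{p(q)}}\exp\bigg[\frac{i}{\hbar}\bigg(\pm\int^q p(q')\,dq'-Et\bigg)\bigg]\ . \]

In a classically forbidden region \(E<V(q)\) the momentum \(p\) is imaginary and the exponential decays (tunneling), while for a bound state with two turning points, matching the branches across the turning points yields the corrected quantization condition \(\oint p\,dq=2\pi\hbar(n+\tfrac{1}{2})\ .\)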

General semi-classical theory considers \(n\) degrees of freedom and non-separable systems, aiming for results in the sense of rigorous asymptotics for \(\hbar\rightarrow 0\ .\) Starting with a seminal paper of Keller (1958), the question of multi-dimensional quantization was reexamined (Percival (1977)), and the basic problem of how to do asymptotics in the neighborhood of caustics was attacked (Ludwig (1966)). In this endeavor the work of Maslov (1965, 1981) was prominent, along with that of other leading mathematicians (see Guillemin & Sternberg (1977) and citations therein). Similar asymptotic analysis applies to the wave equation for an inhomogeneous medium, in which case the limit is for small wavelength (geometrical optics), and the Hamilton-Jacobi equation is the eikonal equation as in Hamilton's original work (Born & Wolf, 1965).

Applications of semi-classical theory have been pursued extensively by physical chemists and physicists (Marcus, Miller, Martens, Ezra, Heller, Delos, Littlejohn, Gutzwiller, Klauder, Berry, Percival, et al.). A topic of great interest is highly excited states of atoms and molecules, for which semi-classical theory might be the best calculational approach. The influence of chaotic regions in classical motion is of course an interesting topic, much studied but perhaps still not fully understood (Gutzwiller (1991), Percival (1977)).

Numerical Methods

Numerical solution of the Hamilton-Jacobi equation is a powerful tool for attacking complex problems in theoretical physics and engineering. There is a large literature on numerical methods, often applied to the case of eikonal equations, and ranging from classical approaches to generalized solutions of viscosity type (Lions (1982), Evans (2008)). It is perhaps fair to say that approaches based on the method of characteristics are the most efficient and widely applicable. In such methods a large but finite set of orbits is computed, corresponding to various initial conditions. An interpolation procedure is then used to approximate the desired solution. For instance, to find a solution with \(K=0\) one could calculate Hamilton's principal function \(F_1(q,q_0,t)\) on available orbits, for \((q,q_0)\) on a finite mesh \(\{ q_i,q_{0j} \}\ .\) A \(C^2\) interpolation would then be used to define \(F_1\) at off-mesh points. In some important applications one has to do this for only one large value \(T\) of the time \(t\ ,\) since it is enough to follow the intersection of orbits with a surface of section encountered with period \(T\ .\) Such a construction is being pursued for the case of full-turn symplectic maps for circular particle accelerators, following earlier successes with methods in the same spirit (Warnock & Berg (1997), Warnock & Cai (2009)). In this example the Hamiltonian is very complicated, having thousands of terms, and there is a large cost advantage for stability studies in using a full-turn map defined by a generator in place of computing separate orbits.
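
As a small illustration of the idea (not the accelerator application itself), the sketch below computes Hamilton's principal function for a harmonic oscillator with \(m=\omega=1\) by following a single orbit with a symplectic (velocity Verlet) integrator and accumulating the action \(\int L\,dt\ ;\) the result can be compared with the closed form \(F_1(q,q_0,t)=[(q^2+q_0^2)\cos t-2qq_0]/(2\sin t)\ .\) The initial data and step count are illustrative choices.

```python
import math

def orbit_action(q0, p0, T, n):
    """Follow one orbit of H = p**2/2 + q**2/2 (m = omega = 1) with the
    velocity Verlet integrator, accumulating the action integral of L dt."""
    dt = T / n
    q, p = q0, p0
    lag = [0.5*p*p - 0.5*q*q]        # Lagrangian samples L = p**2/2 - q**2/2
    for _ in range(n):
        p -= 0.5*dt*q                # half kick  (dV/dq = q)
        q += dt*p                    # full drift
        p -= 0.5*dt*q                # half kick
        lag.append(0.5*p*p - 0.5*q*q)
    action = dt*(sum(lag) - 0.5*(lag[0] + lag[-1]))   # trapezoid rule in time
    return q, action

# Illustrative initial condition and elapsed time
q0, p0, T = 0.3, 0.5, 1.0
qT, S = orbit_action(q0, p0, T, 100000)

# Closed form of Hamilton's principal function for this oscillator:
# F1(q, q0, t) = ((q**2 + q0**2)*cos(t) - 2*q*q0) / (2*sin(t))
F1 = ((qT*qT + q0*q0)*math.cos(T) - 2.0*qT*q0) / (2.0*math.sin(T))
```

Repeating this over a mesh of \((q_0,p_0)\) and interpolating in the endpoints is the construction described above.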

It is also possible to calculate invariant tori (solutions with \(K\ne 0\)) by interpolating data from single non-resonant orbits (Warnock & Ruth (1992)). This is done by fitting formula (20) with a Fourier series, using the values of \(I\) at the angles \(\phi\) where the orbit crosses a surface of section. This proves to be much faster than more classical methods that hark back to perturbation theory (Warnock & Ruth (1987), Chapman, Garrett, & Miller (1976)), but it gives no direct control of the frequencies (winding numbers) of the torus constructed.
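
The fitting step can be sketched in a few lines: given action samples at the irregular angles where an orbit crosses the section, a truncated Fourier series is fitted by least squares. The torus shape below is a made-up example for illustration, not data from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def I_true(a):
    # Hypothetical torus shape I(phi); stands in for real orbit data
    return 1.0 + 0.3*np.cos(a) + 0.05*np.sin(2*a)

# "Orbit data": action samples at the irregular angles where a
# non-resonant orbit crosses the surface of section
phi = np.sort(rng.uniform(0.0, 2.0*np.pi, 400))
I_samples = I_true(phi)

def fourier_basis(a, M):
    # Columns: cos(k*a) for k = 0..M and sin(k*a) for k = 1..M
    cols = [np.cos(k*a) for k in range(M+1)] + \
           [np.sin(k*a) for k in range(1, M+1)]
    return np.column_stack(cols)

# Least-squares fit of a truncated Fourier series to the scattered samples
M = 5
coef, *_ = np.linalg.lstsq(fourier_basis(phi, M), I_samples, rcond=None)

# Evaluate the fitted torus on a uniform angle grid and measure the error
grid = np.linspace(0.0, 2.0*np.pi, 200)
err = np.max(np.abs(fourier_basis(grid, M) @ coef - I_true(grid)))
```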

In most cases the method of characteristics requires a good symplectic integrator to follow orbits (Leimkuhler & Reich (2004)). The Hamilton-Jacobi equation is often used to derive such integrators, especially the implicit variety (Feng et al. (1989), Scovel & Channell (1990)).
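
The simplest instance of such a derivation (a textbook construction, in the spirit of Leimkuhler & Reich (2004)) truncates a type-2 generating function as \(F_2(q,P)=qP+\Delta t\,H(q,P)\ .\) The transformation rules \(p=\partial F_2/\partial q\ ,\) \(Q=\partial F_2/\partial P\) then give the map

\[ p=P+\Delta t\,H_q(q,P)\ ,\quad Q=q+\Delta t\,H_P(q,P)\ , \]

which is the symplectic Euler method, implicit in general: being generated by \(F_2\ ,\) it is exactly symplectic for any \(\Delta t\ ,\) and it agrees with the exact flow to first order.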


References

Arnol'd, V. I., "Mathematical Methods of Classical Mechanics", (Springer, New York, 1974).

Benton, S. H., "The Hamilton-Jacobi Equation: A Global Approach", (Academic Press, New York, 1977).

Born, M. and Wolf, E., "Principles of Optics", (Pergamon Press, Oxford, 1965).

Carathéodory, C., "Geometrische Optik", (Springer, Berlin, 1937).

Carathéodory, C., "Calculus of Variations and Partial Differential Equations of the First Order", (Chelsea, New York, 1982).

Chapman, S., Garrett, B. C., and Miller, W. H., "Semiclassical Eigenvalues for Nonseparable Systems: Nonperturbative Solution of the Hamilton-Jacobi Equation in Action-Angle Variables", J. Chem. Phys. 64, 502-509 (1976).

Courant, R. and Hilbert, D., "Methods of Mathematical Physics, Vol. II", (Interscience, New York, 1962).

Erdelyi, B. and Berz, M., "Optimal Symplectic Approximation of Hamiltonian Flows", Phys. Rev. Lett. 87, 114302 (2001).

Evans, L., "Weak KAM theory and partial differential equations", in Calculus of Variations and Nonlinear Partial Differential Equations, pp. 123-154, Lecture Notes in Math. 1927 (Springer, Berlin, 2008).

Feng, K., Wu, H., Qin, M., and Wang, D., J. Comp. Math. 7, 71 (1989).

Fleming, W. H. and Rishel, R. "Deterministic and Stochastic Optimal Control", (Springer, Berlin, 1975).

Gallavotti, G., "The Elements of Mechanics", (Springer, New York, 1983).

Gantmacher, F., "Lectures in Analytical Mechanics", (MIR Publishers, Moscow, 1970).

Goldstein, H., "Classical Mechanics", (Addison-Wesley, Menlo Park, 1981).

Guillemin, V., and Sternberg, S., "Geometric Asymptotics", (Amer. Math. Soc., Providence, 1977).

Gutzwiller, M. C., "Chaos in Classical and Quantum Mechanics", (Springer, New York, 1990).

Hamilton, W. R., "The Mathematical Papers of William Rowan Hamilton, Vol. I, Geometrical Optics, Vol. II, Dynamics" (Cambridge University Press, Cambridge, 1931), especially three Supplements (1830-1832) to the "Theory of Systems of Rays" (1827) in Vol. I, and "On a General Method in Dynamics" (1834) in Vol. II.

Jammer, M., "The Conceptual Development of Quantum Mechanics", (McGraw-Hill, New York, 1966).

Jacobi, C. G. J., "Vorlesungen über Dynamik", Königsberg lectures of 1842-1843, (reprinted by Chelsea Publishing Co., New York, 1969).

Keller, J., "Corrected Bohr-Sommerfeld Quantum Conditions for Nonseparable Systems", Ann. Physics 4, 180-188 (1958).

Leimkuhler, B. and Reich, S., "Simulating Hamiltonian Dynamics", (Cambridge U. Press, Cambridge, 2004).

Ludwig, D., "Uniform Asymptotic Expansions at a Caustic", Comm. Pure Appl. Math., 19, 215-250 (1966).

Lanczos, C., "The Variational Principles of Mechanics", (U. Toronto Press, Toronto, 1949).

Landau, L. D. and Lifshitz, E. M., "Mechanics", (Pergamon Press, Oxford, 1969).

Lions, P.-L., "Generalized Solutions of Hamilton-Jacobi Equations", (Pitman, Boston, 1982).

Martens, C. C. and Ezra, G. S., "Semi-classical Mechanics of Strongly Resonant Systems: a Fourier Transform Approach", J. Chem. Phys. 86, 279-307 (1987).

Maslov, V. P., "Perturbation Theory and Asymptotic Methods", (Moscow State U., Moscow, 1965).

Maslov, V. P. and Fedoriuk, M. V., "Semi-classical Approximation in Quantum Mechanics", (Reidel, Dordrecht, 1981).

Messiah, A., "Quantum Mechanics", (Dover, New York, 1999).

Meyer, K., Hall, G., and Offin, D., "Introduction to Hamiltonian Dynamical Systems and the N-Body Problem", (Springer, New York, 2008).

Nekhoroshev, N. N., "An Exponential Estimate of the Time of Stability of Nearly Integrable Hamiltonian Systems", Russ. Math. Surveys 32, 6, 1-65 (1977).

Percival, I. C., "Semiclassical Theory of Bound States", in Advances in Chemical Physics 36 (Wiley, New York, 1977).

Perelomov, A. M., "Integrable Systems of Classical Mechanics and Lie Algebras" (Birkhäuser, Basel, 1990).

Pöschel, J., "Integrability of Hamiltonian Systems on Cantor Sets", Comm. Pure Appl. Math. 35, 653-695 (1982).

Scovel, C. and Channell, P., "Symplectic Integration of Hamiltonian Systems", Nonlinearity 3, 231-259 (1990).

Synge, J. L., "Geometrical Optics, an Introduction to Hamilton's Method", (Cambridge University Press, 1937).

Warnock, R. and Ruth, R. D., "Long Term Bounds on Nonlinear Hamiltonian Motion", Physica D 56, 188-215 (1992).

Warnock, R. and Ruth, R. D., "Invariant Tori through Direct Solution of the Hamilton-Jacobi Equation", Physica D 26, 1-36 (1987).

Warnock, R. and Berg, J. S., "Fast Symplectic Mapping and Long-term Stability Near Broad Resonances", AIP Conf. Proc. 395 (Amer. Inst. Phys., 1997).

Warnock, R. and Cai, Y., "Construction of Large Period Symplectic Maps by Interpolative Methods", SLAC National Accelerator Laboratory report SLAC-PUB-13867 (2009), to be published in Proc. 10th International Computational Accelerator Physics Conference.

See also

Principle of least action
