An introduction to Lie algebra cohomology/lecture 8b

    On to the ninth lecture

    Back to the eighth lecture

    reference

    We follow Humphreys, 1972 closely for those parts that are not explicitly concerned with cohomology.

    the field

    Although we use \(\mathbb{C}\) as our field, one can also do the initial part of this section with an arbitrary field.

    At some point we will need the field to have characteristic zero, and a bit later we want it to be algebraically closed.

    The latter condition can be slightly relaxed, but we need to find the roots of the characteristic equation of the ad-action of elements in the Lie algebra.

    the Casimir operator

    definition - dual basis

    Let \(\tilde{\mathfrak{g}}=\mathfrak{g}/\ker d_1\) (This makes sense, since \(\ker d_1\) is an ideal).

    A trace form \( K_\mathfrak{a}\) on \(\mathfrak{g}\) induces a trace form \(\tilde{K}_\mathfrak{a}\) on \(\tilde{\mathfrak{g}}\) by \[\tilde{K}_\mathfrak{a}([x],[y])=K_\mathfrak{a}(x,y)\] Suppose \(\dim_\mathbb{C}\tilde{\mathfrak{g}}=n<\infty\ .\) Let \( e_1,\cdots,e_n\) be a basis of \(\tilde{\mathfrak{g}}\ .\)

    If \(\tilde{K}_\mathfrak{a}\) is nondegenerate, then define \(e^1,\cdots,e^n\) to be the dual basis with respect to \(\tilde{K}_\mathfrak{a}\ ,\) that is, \(\tilde{K}_\mathfrak{a}(e_i,e^j)=\delta_i^j\ .\)

    example

    For \(\mathfrak{g}=\tilde{\mathfrak{g}}=\mathfrak{sl}_2\ ,\) let the basis be given by \[e_1=M,\quad e_2=N,\quad e_3=H\] Then a dual basis is given by \[ e^1=N,\quad e^2=M,\quad e^3=\frac{1}{2} H\]
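
    A quick machine check of this dual basis, under the assumption that the trace form is the one of the standard two-dimensional representation, \(K_\mathfrak{a}(x,y)=\mathrm{tr}(d_1(x)d_1(y))\ ,\) could look as follows.

```python
import numpy as np

# Basis of sl_2 in the standard two-dimensional representation
M = np.array([[0., 1.], [0., 0.]])
N = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

def K(x, y):
    """Trace form of the standard representation: K(x, y) = tr(xy)."""
    return np.trace(x @ y)

basis = [M, N, H]            # e_1, e_2, e_3
dual  = [N, M, 0.5 * H]      # claimed dual basis e^1, e^2, e^3

# K(e_i, e^j) should give the Kronecker delta, i.e. the identity matrix
gram = np.array([[K(ei, ej) for ej in dual] for ei in basis])
assert np.allclose(gram, np.eye(3))
print(gram)
```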

    proposition

    Suppose \( [e_i,e_j]=\sum_{k=1}^n c_{ij}^k e_k\ .\) Then \([e^i,e_j]=\sum_{k=1}^n c_{jk}^i e^k\ .\)

    proof

    The structure constants \(c_{ij}^k\) can be expressed in terms of the trace form as follows. \[ \tilde{K}_\mathfrak{a}([e_i,e_j],e^k)=\sum_{s=1}^n c_{ij}^s \tilde{K}_\mathfrak{a}(e_s,e^k)=\sum_{s=1}^n c_{ij}^s \delta_s^k=c_{ij}^k\] Let \([e^i,e_j]=\sum_{k=1}^n d_{jk}^i e^k\ .\) Then \[\tilde{K}_\mathfrak{a}(e_k,[e^i,e_j])=\sum_{s=1}^n d_{js}^i \tilde{K}_\mathfrak{a}(e_k,e^s)=\sum_{s=1}^n d_{js}^i \delta_k^s=d_{jk}^i\] The result follows from the \(\mathfrak{g}\)-invariance of \(\tilde{K}_\mathfrak{a}\ :\) \[\tilde{K}_\mathfrak{a}(e_k,[e^i,e_j])=-\tilde{K}_\mathfrak{a}(e_k,[e_j,e^i])=\tilde{K}_\mathfrak{a}([e_j,e_k],e^i)=c_{jk}^i\]

    corollary

    \[ [x,e^i]=-\sum_{k=1}^n \tilde{K}_\mathfrak{a}(x,[e_k,e^i])e^k\] and \( [x,e_i]=\sum_{k=1}^n \tilde{K}_\mathfrak{a}(x,[e_i,e^k])e_k\)
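
    Both formulas can be checked on the basis of \(\mathfrak{sl}_2\) from the example above (again assuming the trace form of the standard representation); by linearity this suffices.

```python
import numpy as np

M = np.array([[0., 1.], [0., 0.]])
N = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
basis = [M, N, H]                        # e_1, e_2, e_3
dual  = [N, M, 0.5 * H]                  # e^1, e^2, e^3

def K(x, y):                             # trace form of the standard representation
    return np.trace(x @ y)

def br(a, b):                            # the bracket
    return a @ b - b @ a

for x in basis:
    for i in range(3):
        # [x, e^i] = - sum_k K(x, [e_k, e^i]) e^k
        lhs = br(x, dual[i])
        rhs = -sum(K(x, br(basis[k], dual[i])) * dual[k] for k in range(3))
        assert np.allclose(lhs, rhs)
        # [x, e_i] = sum_k K(x, [e_i, e^k]) e_k
        lhs = br(x, basis[i])
        rhs = sum(K(x, br(basis[i], dual[k])) * basis[k] for k in range(3))
        assert np.allclose(lhs, rhs)
print("corollary formulas hold on the basis of sl_2")
```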

    definition - Casimir

    Define the Casimir operator \(\gamma\) by \[\gamma=\sum_{i=1}^n d_1(e^i)d_1(e_i) \in \mathrm{End}(\mathfrak{a})\]

    remark

    For the Casimir to exist one only needs finite-dimensionality of \(\mathfrak{g}\ ;\) \(\mathfrak{a}\) can be infinite-dimensional.

    Only when \(K_{\mathfrak{a}}\) plays a role does one assume \(\mathfrak{a}\) to be finite-dimensional, so that one does not have to worry about traces on infinite-dimensional spaces.

    well defined

    The \(e_i, e^i\) stand for equivalence classes, but taking different representatives does not change the value of \(\gamma\ .\)

    The definition of \(\gamma\) is also independent of the choice of basis.

    Let \( f_i=\sum_{k=1}^n A_i^k e_k\ ,\) with \(A\) an invertible matrix, be another basis, with dual basis \(f^i\ .\)

    Let \(f^i=\sum_{k=1}^n B_k^i e^k\ .\)

    Then \[\delta_j^i=\tilde{K}_\mathfrak{a}(f^i,f_j)=\sum_{k,l=1}^n B_k^i A_j^l \tilde{K}_\mathfrak{a}(e^k,e_l)=\sum_{k,l=1}^n B_k^i A_j^l\delta_l^k=\sum_{k=1}^n B_k^i A_j^k\] so that \(B\) inverts \(A\ ;\) since a one-sided inverse of a square matrix is two-sided, one also has \(\sum_{i=1}^n A_i^l B_k^i=\delta_k^l\ .\) This shows that \[\gamma=\sum_{i=1}^n d_1(f^i)d_1(f_i)\]

    corollary

    If the dual basis is chosen with respect to \(K_\mathfrak{a}\ ,\) then \[ \mathrm{tr}(\gamma)=\sum_{i=1}^n \mathrm{tr}(d_1(e^i)d_1(e_i))=\sum_{i=1}^n K_\mathfrak{a}(e^i,e_i)=n\]

    example

    In the case \(\mathfrak{g}=\mathfrak{sl}_2\) and \(\mathfrak{a}=\mathbb{C}^2\ ,\) with the standard representation, one has \[\gamma=d_1(e^1)d_1(e_1)+d_1(e^2)d_1(e_2)+d_1(e^3)d_1(e_3)\ :\] \[=d_1(N)d_1(M)+d_1(M)d_1(N)+\frac{1}{2}d_1(H)d_1(H)\ :\] \[=\begin{bmatrix} 0&0\\1&0\end{bmatrix}\begin{bmatrix} 0&1\\0&0\end{bmatrix}+ \begin{bmatrix} 0&1\\0&0\end{bmatrix}\begin{bmatrix} 0&0\\1&0\end{bmatrix}+\frac{1}{2} \begin{bmatrix} 1&0\\0&-1\end{bmatrix}\begin{bmatrix} 1&0\\0&-1\end{bmatrix}\ :\] \[=\frac{3}{2}\begin{bmatrix} 1&0\\0&1\end{bmatrix}\] One checks that indeed \( \mathrm{tr\ }\gamma=3\ .\)
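
    The same sketch as before extends to a check of this computation of \(\gamma\) (again with the trace form of the standard representation):

```python
import numpy as np

M = np.array([[0., 1.], [0., 0.]])
N = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

basis = [M, N, H]            # e_1, e_2, e_3
dual  = [N, M, 0.5 * H]      # e^1, e^2, e^3, dual with respect to the trace form

# Casimir operator gamma = sum_i d_1(e^i) d_1(e_i), with d_1 the standard representation
gamma = sum(ei_dual @ ei for ei_dual, ei in zip(dual, basis))
print(gamma)                 # (3/2) times the identity
print(np.trace(gamma))       # 3 = dim sl_2
```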

    lemma

    Suppose \(\dim\mathfrak{a}<\infty\ .\) Then \[\gamma d_1(x)=d_1(x)\gamma\]

    proof

    \[\gamma d_1(x)-d_1(x)\gamma\ :\] \[=\sum_{i=1}^n d_1(e^i)d_1(e_i)d_1(x)-d_1(x)\sum_{i=1}^n d_1(e^i)d_1(e_i)\ :\] \[=\sum_{i=1}^n d_1(e^i)d_1([e_i,x])+\sum_{i=1}^n d_1(e^i)d_1(x)d_1(e_i)-\sum_{i=1}^n d_1(e^i)d_1(x)d_1(e_i)+\sum_{i=1}^n d_1([e^i,x])d_1(e_i)\ :\] \[=\sum_{i=1}^n d_1(e^i)d_1([e_i,x])+\sum_{i=1}^n d_1([e^i,x])d_1(e_i)\ :\] \[=-\sum_{i,j=1}^n \tilde{K}_{\mathfrak{a}}(x,[e_i,e^j])d_1(e^i)d_1(e_j)+\sum_{i,j=1}^n \tilde{K}_{\mathfrak{a}}(x,[e_j,e^i]) d_1(e^j) d_1(e_i)\ :\] \[=0\]

    corollary

    The map \(\gamma\) is a \(\mathfrak{g}\)-endomorphism.

    lemma - Fitting decomposition

    Let \(\alpha\in \mathrm{End}_\mathfrak{g}(\mathfrak{a})\ ,\) with \(\dim\mathfrak{a}<\infty\ .\)

    Then \(\mathfrak{a}=\mathfrak{a}_0\oplus\mathfrak{a}_1\ ,\) where \(\mathfrak{a}_i\) is invariant under \(\alpha\) and \(\mathfrak{g}\ .\)

    Moreover, if one denotes the restriction of \(\alpha\) to \(\mathfrak{a}_i\) by \(\alpha_i\ ,\) one has that \(\alpha_0\) is nilpotent and \(\alpha_1\) is invertible.

    proof

    One has a decreasing sequence of subspaces \[ \mathfrak{a}\supset \alpha\mathfrak{a}\supset \alpha^2 \mathfrak{a}\supset\cdots\] where \(\alpha^m\) denotes the \(m\)th power of \(\alpha\ .\)

    Since \(\mathfrak{a}\) is finite-dimensional, this stabilizes, say at \(k\ .\)

    Define \(\mathfrak{a}_1=\alpha^k \mathfrak{a}\ .\)

    This is \(\alpha\)-invariant by construction, and \(\mathfrak{g}\)-invariant since \(\alpha\) commutes with the \(\mathfrak{g}\)-action on \(\mathfrak{a}\ .\)

    Let \(\mathfrak{b}_i=\ker \alpha^i\ .\)

    Then \[\mathfrak{b}_0\subset\mathfrak{b}_1\subset\cdots\subset\mathfrak{a}\] Again, this stabilizes, say at \(l\ .\) Let \(\mathfrak{a}_0=\mathfrak{b}_l\) and observe that \(\mathfrak{a}_0\) is \(\alpha\)-invariant and \(\mathfrak{g}\)-invariant.

    Let \(m=\max(k,l)\ .\)

    Then \[ \mathfrak{a}_0=\ker \alpha^m,\quad \mathfrak{a}_1=\mathrm{im}\alpha^m\] Take \(x\in\mathfrak{a}\ .\)

    Then \(\alpha^m x=\alpha^{2m} y\) for some \(y\in\mathfrak{a}\ ,\) since \(\alpha^m\mathfrak{a}=\alpha^{2m}\mathfrak{a}\ .\)

    Write \(x=(x-\alpha^my)+\alpha^m y\in\ker \alpha^m+\mathrm{im}\alpha^m\ .\)

    This implies \[\mathfrak{a}=\mathfrak{a}_0+\mathfrak{a}_1\] Let \(z\in \mathfrak{a}_0\cap\mathfrak{a}_1\ .\)

    This implies that \(z=\alpha^m w\) and \(\alpha^m z=0\ .\)

    It follows that, since \(\alpha^{2m}w=0\ ,\) \(w\in\mathfrak{a}_0\ .\)

    Therefore \(\alpha^mw=0\ ,\) or, in other words, \( z=0\ .\) This shows that \(\mathfrak{a}_0\cap\mathfrak{a}_1=0\) and \[\mathfrak{a}=\mathfrak{a}_0\oplus\mathfrak{a}_1\]

    Since \(\mathfrak{a}_1=\alpha^m\mathfrak{a}=\alpha^{m+1}\mathfrak{a}=\alpha\mathfrak{a}_1\ ,\) it follows that \(\alpha_1\) is surjective, and therefore an isomorphism.

    Denote the projections \(\mathfrak{a}\rightarrow\mathfrak{a}_i\) by \(\pi_i\) and observe that they commute with the \(\mathfrak{g}\)-action.

    The decomposition \(\mathfrak{a}=\mathfrak{a}_0\oplus\mathfrak{a}_1\) is called the Fitting decomposition of \(\mathfrak{a}\) with respect to \(\alpha\ .\)
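
    For a concrete endomorphism the Fitting decomposition can be computed directly from \(\ker\alpha^m\) and \(\mathrm{im}\,\alpha^m\ ;\) the following sketch does this for a sample matrix (any \(m\geq\dim\mathfrak{a}\) works).

```python
import sympy as sp

# A sample endomorphism that is neither nilpotent nor invertible
alpha = sp.Matrix([[0, 1, 0],
                   [0, 0, 0],
                   [0, 0, 2]])
n = alpha.shape[0]

alpha_n = alpha ** n                    # for m >= dim, kernel and image have stabilized
a0 = alpha_n.nullspace()                # basis of the Fitting-0 part: ker alpha^n
a1 = alpha_n.columnspace()              # basis of the Fitting-1 part: im alpha^n

# The two parts form a direct sum decomposition of the whole space
assert sp.Matrix.hstack(*(a0 + a1)).rank() == n

# alpha restricted to a_0 is nilpotent ...
A0 = sp.Matrix.hstack(*a0)
alpha0 = (A0.T * A0).inv() * A0.T * alpha * A0   # coordinates of the restriction (alpha maps a_0 into itself)
assert alpha0 ** n == sp.zeros(len(a0), len(a0))

# ... and restricted to a_1 it is invertible
A1 = sp.Matrix.hstack(*a1)
alpha1 = (A1.T * A1).inv() * A1.T * alpha * A1
assert alpha1.det() != 0
```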

    theorem

    Let \(d_1\) be a representation.

    Suppose there exists a nondegenerate trace form \(\tilde{K}_\mathfrak{a}\ .\)

    Let \(\mathfrak{a}_0\oplus\mathfrak{a}_1\) be the Fitting decomposition with respect to \(\gamma\ .\)

    Then \(H^m(\tilde{\mathfrak{g}},\mathfrak{a})=H^m(\tilde{\mathfrak{g}},\mathfrak{a}_0)\ .\)

    remark

    Contrary to the usual statement of this theorem, the forms do not need to be antisymmetric.

    proof

    Consider the Fitting decomposition of \(\mathfrak{a}\) with respect to \(\gamma\ .\) Denote by \(\pi^m\) the projection of \(C^m(\tilde{\mathfrak{g}},\mathfrak{a})\) onto the \(\mathfrak{a}_1\)-valued cochains induced by \(\pi_1\ ,\) and by \(\gamma^m\) the map induced by \(\gamma\) on \(C^m(\tilde{\mathfrak{g}},\mathfrak{a})\ .\) Take \([\zeta_m]\in H^m(\tilde{\mathfrak{g}},\mathfrak{a})\) and let \[\pi^m\zeta_m=(-1)^{m-1}\gamma^m\omega_m\] (this is possible since \(\gamma\) is invertible on \(\mathfrak{a}_1\)). Then, since \( \gamma^{m+1}d^m=d^m\gamma^m\) and \( \pi^{m+1}d^m=d^m\pi^m\ ,\) one has \[0=\pi^{m+1}d^m\zeta_m=d^m\pi^m\zeta_m=(-1)^{m-1}d^m \gamma^m\omega_m=(-1)^{m-1}\gamma^{m+1}d^m \omega_m\] Since \(\gamma\) is an isomorphism on \(\mathfrak{a}_1\ ,\) this shows that \(d^m\omega_m=0\ .\)

    Then define \[ \mu_{m-1}(x_1,\cdots,x_{m-1})=\sum_{i=1}^n d_1(e^i)\omega_m(x_1,\cdots,x_{m-1},e_i)\] (Here one needs the trace form to be nondegenerate, in order to define the dual basis). Then \[0=\sum_{i=1}^n d_1(e^i)d^m\omega_m(x_1,\dots,x_m,e_i)\ :\] \[=\sum_{i=1}^n (-1)^{m} d_1(e^i)d_1(e_i) \omega_m(x_1,\dots,x_{m})\ :\] \[+\sum_{k=1}^{m}\sum_{i=1}^n(-1)^{k-1} d_1(e^i) d_1(x_k) \omega_m(x_1,\dots,\hat{x}_k,\dots,x_m,e_i)\ :\] \[-\sum_{k=1}^{m}\sum_{i=1}^n (-1)^{k-1} d_1(e^i)\omega_m(x_1,\dots,\hat{x}_k,\dots,[x_k,e_i])\ :\] \[-\sum_{k=1}^{m}\sum_{l=1}^{k-1}\sum_{i=1}^n(-1)^{l-1} d_1(e^i) \omega_m(x_1,\dots,\hat{x}_l,\dots,[x_l,x_k],\dots,e_i)\ :\] \[=(-1)^m\gamma\omega_m(x_1,\dots,x_{m})\ :\] \[+\sum_{k=1}^{m}\sum_{i=1}^n(-1)^{k-1} d_1(x_k) d_1(e^i)\omega_m(x_1,\dots,\hat{x}_k,\dots,x_m,e_i)\ :\] \[-\sum_{k=1}^{m}\sum_{l=1}^{k-1}\sum_{i=1}^n(-1)^{l-1} d_1(e^i) \omega_m(x_1,\dots,\hat{x}_l,\dots,[x_l,x_k],\dots,e_i)\ :\] \[-\sum_{k=1}^{m}\sum_{i=1}^n(-1)^{k-1} d_1([x_k,e^i])\omega_m(x_1,\dots,\hat{x}_k,\dots,e_i)\ :\] \[-\sum_{k=1}^{m}\sum_{i=1}^n (-1)^{k-1} d_1(e^i)\omega_m(x_1,\dots,\hat{x}_k,\dots,[x_k,e_i])\ :\] \[=-\pi^m\zeta_m(x_1,\dots,x_{m})+d^{m-1}\mu_{m-1}(x_1,\dots,x_m)\ :\] \[+\sum_{k=1}^{m}\sum_{i=1}^n \sum_{p=1}^n(-1)^{k-1} K_\mathfrak{a}(x_k,[e_p,e^i])d_1(e^p)\omega_m(x_1,\dots,\hat{x}_k,\dots,x_m,e_i)\ :\] \[-\sum_{k=1}^{m}\sum_{i=1}^n \sum_{p=1}^n (-1)^{k-1}K_\mathfrak{a}(x_k,[e_i,e^p]) d_1(e^i)\omega_m(x_1,\dots,\hat{x}_k,\dots,x_m,e_p)\ :\] \[=-\pi^m\zeta_m(x_1,\dots,x_{m})+d^{m-1}\mu_{m-1}(x_1,\dots,x_m)\] This shows that \(\pi^m\zeta_m=d^{m-1}\mu_{m-1}\ ,\) so the \(\mathfrak{a}_1\)-component of \(\zeta_m\) is a coboundary (note that \(\mu_{m-1}\) takes values in \(\mathfrak{a}_1\ ,\) since \(\omega_m\) does and \(\mathfrak{a}_1\) is \(\mathfrak{g}\)-invariant); hence \([\zeta_m]\) is represented by the \(\mathfrak{a}_0\)-valued cocycle \(\zeta_m-d^{m-1}\mu_{m-1}\) and the theorem is proved.\(\square\)

    theorem

    Let \(M=\dim(\mathfrak{a}_0)\ .\) Then \(H^m(\tilde{\mathfrak{g}},\mathfrak{a}_0)=\bigoplus_{\iota=1}^M H^m(\tilde{\mathfrak{g}},\mathbb{C})\ .\)

    proof

    Since \(\gamma_0=\gamma|\mathfrak{a}_0\) is nilpotent, its trace on \(\mathfrak{a}_0\) is zero.

    But this implies that the representation vanishes on \(\mathfrak{a}_0\ ,\) since \(\mathrm{tr\ }\gamma_0=n\ ,\) where \( n\) is the number of basis vectors \(e_\iota\) of \(\mathfrak{a}_0\) such that \(d_1(e_\iota)\neq 0\ .\)

    Therefore \(H^m(\tilde{\mathfrak{g}},\mathfrak{a}_0)=\bigoplus_{\iota=1}^M H^m(\tilde{\mathfrak{g}},\mathbb{C})\ ,\) where the action of \(\tilde{\mathfrak{g}}\) on \(\mathbb{C}\) is supposed to be trivial, as usual.

    corollary

    Let \(d_1\) be a nontrivial representation, such that \(\mathfrak{a}\) is irreducible, that is, it contains no nontrivial proper \(\mathfrak{g}\)-invariant subspaces.

    Suppose there exists a nondegenerate trace form \(\tilde{K}_\mathfrak{a}\ .\)

    Then \(H^m(\tilde{\mathfrak{g}},\mathfrak{a})=0\ .\)

    proof

    Since the representation is irreducible, one has either \(\mathfrak{a}=\mathfrak{a}_0\) or \(\mathfrak{a}=\mathfrak{a}_1\ .\)

    But in the first case the representation would be trivial, which is excluded.

    Therefore one is in the second case and the statement follows.

    lemma

    If \([\tilde{\mathfrak{g}},\tilde{\mathfrak{g}}]=\tilde{\mathfrak{g}}\) then \(H^1(\tilde{\mathfrak{g}},\mathbb{C})=0\ .\)

    proof

    Since the representation is trivial, \( d^1\omega^1=0\) implies \(\omega^1([x,y])=0\) for all \(x,y\in\tilde{\mathfrak{g}}\ .\)

    But this implies that \( \omega^1(z)=0\) for all \( z\in\tilde{\mathfrak{g}}\ ,\) since every \(z\) can be written as a finite linear combination of commutators.

    It follows that \(\omega^1=0\ ,\) so every cocycle is zero and \(H^1(\tilde{\mathfrak{g}},\mathbb{C})=0\) (with the trivial representation there are no nonzero coboundaries anyway, so a one form represents the trivial class only by being zero).

    corollary

    Suppose there exists a nondegenerate trace form \(\tilde{K}_\mathfrak{a}\) and \([\tilde{\mathfrak{g}},\tilde{\mathfrak{g}}]=\tilde{\mathfrak{g}}\ .\)

    Let \(M=\dim(\mathfrak{a}_0)\ .\)

    Then \(H^1(\tilde{\mathfrak{g}},\mathfrak{a})=H^1(\tilde{\mathfrak{g}},\mathfrak{a}_0)=\bigoplus_{\iota=1}^{M} H^1(\tilde{\mathfrak{g}},\mathbb{C})=0\ .\)

    definition - lower central series

    Define the lower central series of a Lie algebra by \(\mathfrak{g}^0=\mathfrak{g}\) and \(\mathfrak{g}^{i+1}=[\mathfrak{g},\mathfrak{g}^i]\ .\)

    proposition

    The \(\mathfrak{g}^i\) are ideals of \(\mathfrak{g}\ .\)

    proof

    For \( i=0\) this is trivial. Suppose \(\mathfrak{g}^i\) is an ideal. Then \[[\mathfrak{g},\mathfrak{g}^{i+1}]=[\mathfrak{g},[\mathfrak{g},\mathfrak{g}^i]]\subset [\mathfrak{g},\mathfrak{g}^i]=\mathfrak{g}^{i+1}\] The proposition follows by induction.

    definition

    \(\mathfrak{g}\) is called nilpotent if there is an \(n\in\mathbb{N}\) such that \(\mathfrak{g}^n=0\ .\)

    proposition

    A nilpotent Lie algebra is solvable, but a solvable Lie algebra need not be nilpotent.

    proof

    The first part follows from \(\mathfrak{g}^{(i)}\subset \mathfrak{g}^i\ .\) An algebra that is solvable but not nilpotent is the Lie algebra of upper triangular matrices in \(\mathfrak{gl}_n\ .\)
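
    A small numerical illustration for \(n=3\ :\) computing the lower central series and the derived series of the upper triangular matrices by brute force (the matrix units \(E(i,j)\) and the helper functions are just conveniences of this sketch).

```python
import itertools
import numpy as np

def bracket(x, y):
    return x @ y - y @ x

def span_basis(mats, dim=9, tol=1e-10):
    """Pick a linearly independent subset of the given 3x3 matrices."""
    basis, stack = [], np.zeros((0, dim))
    for m in mats:
        v = m.reshape(1, dim)
        if np.linalg.matrix_rank(np.vstack([stack, v]), tol=tol) > stack.shape[0]:
            stack = np.vstack([stack, v])
            basis.append(m)
    return basis

def bracket_span(A, B):
    """A basis of the span of all brackets [a, b] with a in A, b in B."""
    return span_basis([bracket(x, y) for x, y in itertools.product(A, B)])

E = lambda i, j: np.outer(np.eye(3)[i], np.eye(3)[j])           # matrix units
g = [E(i, j) for i in range(3) for j in range(3) if i <= j]     # upper triangular 3x3

gi = g                                   # lower central series g^{i+1} = [g, g^i]
for i in range(5):
    gi = bracket_span(g, gi)
    print("dim g^%d = %d" % (i + 1, len(gi)))
# stabilizes at dimension 3 (the strictly upper triangular matrices): not nilpotent

di = g                                   # derived series g^{(i+1)} = [g^{(i)}, g^{(i)}]
for i in range(5):
    di = bracket_span(di, di)
    print("dim g^(%d) = %d" % (i + 1, len(di)))
# reaches dimension 0: solvable
```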

    proposition

    If \(\mathfrak{g}\) is nilpotent, then so are all subalgebras and homomorphic images.

    proof

    Let \( \mathfrak{h}\) be a subalgebra. Then \( \mathfrak{h}^{0}\subset\mathfrak{g}^{0}\ .\) Assume \(\mathfrak{h}^{i}\subset\mathfrak{g}^{i}\ .\) Then \[ \mathfrak{h}^{i+1}=[\mathfrak{h},\mathfrak{h}^{i}]\subset [\mathfrak{g},\mathfrak{g}^{i}]=\mathfrak{g}^{i+1}\] and the statement is proved by induction. Similarly, let \(\phi:\mathfrak{g}\rightarrow \mathfrak{h}\) be surjective, and assume \(\phi:\mathfrak{g}^{i}\rightarrow \mathfrak{h}^{i}\) to be surjective. Then \[\phi(\mathfrak{g}^{i+1})=\phi([\mathfrak{g},\mathfrak{g}^{i}])=[\phi(\mathfrak{g}),\phi(\mathfrak{g}^{i})]= [\mathfrak{h},\mathfrak{h}^{i}]=\mathfrak{h}^{i+1}\]

    proposition

    Let \(\mathcal{Z}(\mathfrak{g})\) denote the center of \(\mathfrak{g}\ ,\) that is, \[\mathcal{Z}(\mathfrak{g})=\{x\in \mathfrak{g}|[x,y]=0 \quad \forall y\in\mathfrak{g}\}\] If \(\mathfrak{g}/\mathcal{Z}(\mathfrak{g})\) is nilpotent, then \(\mathfrak{g}\) is nilpotent.

    proof

    Say \(\mathfrak{g}^n\subset \mathcal{Z}(\mathfrak{g})\ ,\) then \(\mathfrak{g}^{n+1}=[\mathfrak{g},\mathfrak{g}^{n}]\subset [\mathfrak{g},\mathcal{Z}(\mathfrak{g})]=0\ .\)

    proposition

    If \(\mathfrak{g}\) is nilpotent and nonzero, then \(\mathcal{Z}(\mathfrak{g})\neq 0\ .\)

    proof

    Let \( n \) be the minimal order such that \(\mathfrak{g}^n=0\ ,\) then \(\mathfrak{g}^{n-1}\subset \mathcal{Z}(\mathfrak{g})\ .\)

    lemma

    If \(x\in \mathfrak{gl}(V)\) is nilpotent, then \( \mathrm{ad}(x) \) is nilpotent. In particular, if \(x^n=0\) then \(\mathrm{ad}(x)^{2n}=0\ .\)

    proof

    Define \(\lambda_x, \rho_x\in\mathrm{End}(\mathrm{End}(V))\) by \[ \lambda_x y=xy,\quad \rho_x y=yx\] These are nilpotent, since for instance, \(\lambda_x^n=\lambda_{x^n}\ .\) If \(x^n=0\ ,\) then \((\lambda_x-\rho_x)^{2n}=0\) (since \(\lambda_x \rho_x=\rho_x\lambda_x\)). This proves the statement, since \(\mathrm{ad}(x)=\lambda_x-\rho_x\ .\)

    theorem

    Let \(\mathfrak{g}\) be a subalgebra of \(\mathfrak{gl}(V)\ ,\) with \(0<\dim V<\infty\ .\) If \(\mathfrak{g}\) consists of nilpotent endomorphisms, then there exists \(0\neq v\in V\) such that \(d_1(\mathfrak{g})v=0\ .\)

    proof

    The proof is by induction on \(\dim\mathfrak{g}\ .\) The statement is obvious if the dimension is zero, since any nonzero \(v\in V\) will do.

    Suppose \(\mathfrak{h}\) is a proper subalgebra of \(\mathfrak{g}\ .\)

    Then \(\mathfrak{h}\) acts via \(\mathrm{ad}\) as a Lie algebra of nilpotent linear transformations on \(\mathfrak{g}\ ,\) and therefore on \(\mathfrak{g}/\mathfrak{h}\ .\)

    Since \(\dim\mathfrak{h}<\dim\mathfrak{g}\) one can use the induction hypothesis to conclude that there exists a vector \(x+\mathfrak{h}\ ,\) \( x\notin \mathfrak{h}\ ,\) such that \([y,x]\in\mathfrak{h}\) for all \(y\in \mathfrak{h}\ .\)

    Thus \(\mathfrak{h}\) is properly contained in its normalizer \[ N_\mathfrak{g}(\mathfrak{h})=\{x\in\mathfrak{g}|[x,\mathfrak{h}]\subset\mathfrak{h}\}\]

    The normalizer is a subalgebra, so if one takes \(\mathfrak{h}\) to be a maximal proper subalgebra, then its normalizer must be the whole \(\mathfrak{g}\ ,\) that is to say, \(\mathfrak{h}\) is an ideal in \(\mathfrak{g}\ .\)

    Take \(0\neq x\in\mathfrak{g}/\mathfrak{h}\) and let \(\mathfrak{x}\) be the subalgebra generated by \(x\ .\)

    Then the inverse image of \(\mathfrak{x}\) in \(\mathfrak{g}\) is a subalgebra properly containing \(\mathfrak{h}\ ,\) that is, it is \(\mathfrak{g}\ .\)

    Since \(\mathfrak{x}\) is spanned by \(x\) (as \([x,x]=0\)) and its inverse image is all of \(\mathfrak{g}\ ,\) the class of \(x\) spans \(\mathfrak{g}/\mathfrak{h}\ ,\) and it follows that \(\dim\mathfrak{g}/\mathfrak{h}=1\ .\) One writes \[ \mathfrak{g}=\mathfrak{h}+ \mathbb{C} x\ .\] By induction, \(\mathcal{W}=\{v\in V|d_1(\mathfrak{h})v=0\}\) is nonzero. One has for \(x\in\mathfrak{g}\ ,\) \(y\in\mathfrak{h}\) and \( w\in\mathcal{W}\) that \[d_1(y)d_1(x)w=d_1(x)d_1(y)w-d_1([x,y])w=0\ .\] (Here one uses that \(\mathfrak{h}\) is an ideal, so that \([x,y]\in\mathfrak{h}\ .\)) This implies that \(d_1(x)w\in \mathcal{W}\ ,\) that is, \(\mathcal{W}\) is invariant under \(\mathfrak{g}\ .\) Take \(x\in\mathfrak{g}\) representing the class \(x+\mathfrak{h}\) as before.

    Then, since \(d_1(x)\) is a nilpotent endomorphism of \(\mathcal{W}\ ,\) there exists a nonzero \( v\in\mathcal{W}\) such that \(d_1(x)v=0\ .\) Since \(\mathfrak{g}=\mathfrak{h}+\mathbb{C}x\ ,\) this implies that \(d_1(\mathfrak{g})v=0\ ,\) as desired.
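
    The conclusion of the theorem is easy to observe numerically: for a Lie algebra of nilpotent endomorphisms (here, the span of the strictly upper triangular \(3\times 3\) matrices), the intersection of the kernels is nonzero.

```python
import numpy as np

E = lambda i, j: np.outer(np.eye(3)[i], np.eye(3)[j])
g = [E(0, 1), E(0, 2), E(1, 2)]          # strictly upper triangular matrices

# A common null vector spans the nullspace of the matrices stacked on top of each other
stacked = np.vstack(g)                   # 9 x 3
_, s, vh = np.linalg.svd(stacked)
v = vh[-1]                               # right singular vector for the smallest singular value
print(s[-1], v)                          # smallest singular value ~ 0, v ~ (+-1, 0, 0)
assert all(np.allclose(x @ v, 0) for x in g)
```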

    theorem (Engel)

    If all elements of \(\mathfrak{g}\) are ad-nilpotent, then \(\mathfrak{g}\) is nilpotent.

    proof

    By the preceding lemma each \(\mathrm{ad\ }(x)\) is a nilpotent element of \(\mathrm{End}(\mathfrak{g})\ ;\) applying the previous theorem to \(\mathrm{ad\ }(\mathfrak{g})\subset\mathfrak{gl}(\mathfrak{g})\ ,\) one concludes that there exists a nonzero \(x\in\mathfrak{g}\) such that \(\mathrm{ad\ }(\mathfrak{g})x=0\ ,\) that is, \(x\in \mathcal{Z}(\mathfrak{g})\ ,\) so \(\mathcal{Z}(\mathfrak{g})\neq 0\ .\)

    Then \(\mathfrak{g}/\mathcal{Z}(\mathfrak{g})\) again consists of ad-nilpotent elements and \(\dim \mathfrak{g}/\mathcal{Z}(\mathfrak{g})< \dim \mathfrak{g}\ .\)

    Using induction on the dimension, one concludes that \(\mathfrak{g}/\mathcal{Z}(\mathfrak{g})\) is nilpotent.

    It follows that \(\mathfrak{g}\) is nilpotent.

    corollary

    If \(\mathfrak{g}\) is solvable, then \([\mathfrak{g},\mathfrak{g}]\) is nilpotent.

    lemma

    Let \(\mathfrak{g}\) be nilpotent and \(\mathfrak{h}\) a nonzero ideal of \(\mathfrak{g}\ .\) Then \(\mathfrak{h}\cap\mathcal{Z}(\mathfrak{g})\neq 0\) (and in particular, \(\mathcal{Z}(\mathfrak{g})\neq 0\)).

    proof

    If \(\mathfrak{g}^n=0\) then \( (\mathrm{ad\ }(x))^n=0\ .\) Consider \( \mathfrak{h}\) as the representation space (with \(d_1=\mathrm{ad}\)). Then by the previous theorem there exists a nonzero element \(h\in\mathfrak{h}\) such that \[ \mathrm{ad}(\mathfrak{g})h=0\ .\] This is equivalent to saying that \(0\neq h\in\mathfrak{h}\cap\mathcal{Z}(\mathfrak{g})\ ,\) so \(\mathfrak{h}\cap\mathcal{Z}(\mathfrak{g})\neq 0\ .\)

    the field

    As remarked in the beginning of this lecture, at this point we need our field to have characteristic zero and we also assume it to be algebraically closed.

    definition + remarks

    One calls \(x\in\mathrm{End}(\mathfrak{a})\) semisimple if the roots of its minimal polynomial over \(\mathbb{C}\) are all distinct.

    This is equivalent to saying that \( x\) is diagonalizable, since one can take its eigenvectors as a basis of \(\mathfrak{a}\ .\)

    (If we work over a general field, one requires here that the roots of the minimal polynomial are contained in the field; such a field is called a splitting field relative to \(x\ .\))

    If two semisimple endomorphisms commute, they can be simultaneously diagonalized.

    A semisimple endomorphism remains semisimple when restricted to an invariant subspace.

    proposition

    Let \(\mathfrak{a}\) be a finite-dimensional vector space over \(\mathbb{C}\ ,\) \(x\in\mathrm{End}(\mathfrak{a})\ .\)

    There exist unique \(x_s, x_n\in\mathrm{End}(\mathfrak{a})\) such that \(x=x_s+x_n\ ,\) \(x_s\) is semisimple, \(x_n\) is nilpotent and \(x_s\) and \(x_n\) commute.

    proof

    Let \( \lambda_1,\cdots,\lambda_k\) be the distinct eigenvalues of \(x\) with multiplicities \(m_1,\cdots,m_k\ .\) Its characteristic polynomial is then \(\chi(\lambda)=\prod_{i=1}^k (\lambda-\lambda_i)^{m_i}\ .\) Let \( V_i=\ker (x-\lambda_i)^{m_i}\ ;\) then \(\mathfrak{a}=\bigoplus_{i=1}^k V_i\ .\)

    Using the Chinese Remainder Theorem we find a polynomial \(p(\lambda)\) such that \(p(\lambda)=\lambda_i \mathrm{\ mod\ } (\lambda-\lambda_i)^{m_i}\) and \(p(\lambda)=0 \mathrm{\ mod\ } \lambda\ .\)

    Let \(q(\lambda)=\lambda-p(\lambda)\ .\)

    Then put \(x_s=p(x)\) and \(x_n=q(x)\ .\) Since both are polynomial in \(x\ ,\) they commute.

    One has \((x_s-\lambda_i)|_{V_i}=0\ ,\) that is, \(x_s\) acts diagonally on \(\mathfrak{a}\ ,\) since on each \(V_i\) one has \((x-\lambda_i)^{m_i}=0\) and therefore \(p(x)=\lambda_i\ .\)

    Furthermore \(x_n=x-x_s\) is nilpotent, since on each \(V_i\) it obeys its own characteristic equation \( x_n^{m_i}=0\ ,\) so with \(m=\mathrm{max}_{i=1,\ldots,k}m_i\) one has \(x_n^m=0\ .\)

    Any other such decomposition \(x=s+n\) would lead to \(x_s-s=n-x_n\ .\) Since \(s\) and \(n\) commute, they also commute with \(x\) and therefore with \(x_s\) and \(x_n\ .\)

    Since the sum of commuting semisimple operators is semisimple and the sum of nilpotent operators nilpotent, and the only operator that is both semisimple and nilpotent is \(0\ ,\) one must conclude that \(s=x_s\) and \(n=x_n\ .\)
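
    The construction can also be carried out by machine. Instead of the Chinese Remainder Theorem one may let \(x_s\) act as \(\lambda_i\) on each generalized eigenspace \(V_i\) (by the uniqueness just proved this yields the same operator); a sketch with a sample matrix:

```python
import sympy as sp

x = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 3]])              # a sample endomorphism, not semisimple

cols, diag = [], []
for lam_i, m_i in x.eigenvals().items():
    # generalized eigenspace V_i = ker (x - lambda_i)^{m_i}
    for v in ((x - lam_i * sp.eye(3)) ** m_i).nullspace():
        cols.append(v)
        diag.append(lam_i)

P = sp.Matrix.hstack(*cols)             # basis of generalized eigenvectors
x_s = P * sp.diag(*diag) * P.inv()      # semisimple part: lambda_i on each V_i
x_n = x - x_s                           # nilpotent part

assert x_s * x_n == x_n * x_s           # they commute
assert x_n ** 3 == sp.zeros(3, 3)       # and x_n is nilpotent
print(x_s, x_n)
```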

    lemma

    \(\mathrm{Der}(\mathfrak{g})\) contains the semisimple and nilpotent parts in \(\mathrm{End}(\mathfrak{g})\) of its elements.

    proof

    If \(\delta\in \mathrm{Der}(\mathfrak{g})\ ,\) let \(\delta_s, \delta_n\in \mathrm{End}(\mathfrak{g})\) be its semisimple and nilpotent part, respectively. We show that \(\delta_s\in \mathrm{Der}(\mathfrak{g})\ .\)

    For \(\lambda\in\mathbb{C}\ ,\) let \(\mathfrak{g}_\lambda=\left\{x\in\mathfrak{g}|(\delta-\lambda)^k x=0\mathrm{\ for\ some\ } k \right\}\ .\) Then \(\delta_s\) acts on \(\mathfrak{g}_\lambda\) by multiplication by \(\lambda\ .\)

    One verifies that \([\mathfrak{g}_\lambda,\mathfrak{g}_\mu]\subset\mathfrak{g}_{\lambda+\mu}\ :\)

    One has \((\delta-(\lambda+\mu))^n[x,y]=\sum_{i=0}^n\binom{n}{i}[(\delta-\lambda)^{n-i} x,(\delta-\mu)^i y]\ .\)

    Indeed, for \(n=1\) this reads \((\delta-(\lambda+\mu))[x,y]=[\delta x,y]+[x,\delta y]-(\lambda+\mu)[x,y]=[(\delta-\lambda) x,y]+[x,(\delta-\mu) y]\) and the general inductive step is now standard.

    Thus one has \(\delta_s[x,y]=[\delta_s x,y]+[x,\delta_s y]\) for \(x\in\mathfrak{g}_\lambda, y\in \mathfrak{g}_\mu\ .\)

    Since \(\mathfrak{g}=\bigoplus_\lambda \mathfrak{g}_\lambda\ ,\) it follows that \(\delta_s\) is a derivation.
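
    The binomial identity used in this proof holds for any derivation; here is a numerical spot check with \(\delta=\mathrm{ad}(z)\) on \(\mathfrak{sl}_2\) and arbitrarily chosen \(\lambda,\mu\) (all the specific values are just choices of this sketch).

```python
import numpy as np
from math import comb

M = np.array([[0., 1.], [0., 0.]])
N = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])

def br(a, b):                            # the bracket on sl_2
    return a @ b - b @ a

z = 0.3 * M + 1.7 * N - 0.5 * H
delta = lambda w: br(z, w)               # ad(z) is a derivation

def power(f, k, w):                      # apply the operator f to w, k times
    for _ in range(k):
        w = f(w)
    return w

lam, mu, n = 0.7, -1.3, 3
x, y = M + 2 * H, N - H

lhs = power(lambda w: delta(w) - (lam + mu) * w, n, br(x, y))
rhs = sum(comb(n, i) * br(power(lambda w: delta(w) - lam * w, n - i, x),
                          power(lambda w: delta(w) - mu * w, i, y))
          for i in range(n + 1))
print(np.allclose(lhs, rhs))             # True
```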

    definition

    Define a representation of \(\tilde{\mathfrak{g}}\) on \(\tilde{\mathfrak{g}}'=C^1(\tilde{\mathfrak{g}},\mathbb{C})\) as follows: \[(b_1(x)c_1)(y)=-c_1([x,y])\]

    well defined

    \[ (b_1([x,y])c_1)(z)=-c_1([[x,y],z])\ :\] \[=-c_1([x,[y,z]])+c_1([y,[x,z]])\ :\] \[=(b_1(x)c_1)([y,z])-(b_1(y)c_1)([x,z])\ :\] \[= -(b_1(y)b_1(x)c_1)(z)+(b_1(x)b_1(y)c_1)(z)\ :\] \[=([b_1(x),b_1(y)]c_1)(z)\]
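
    In coordinates, if \(\mathrm{ad}(x)\) denotes the matrix of the adjoint action with respect to a basis of \(\tilde{\mathfrak{g}}\ ,\) then \(b_1(x)\) has matrix \(-\mathrm{ad}(x)^T\) with respect to the dual basis; a numerical check of the homomorphism property for \(\mathfrak{sl}_2\ :\)

```python
import numpy as np

M = np.array([[0., 1.], [0., 0.]])
N = np.array([[0., 0.], [1., 0.]])
H = np.array([[1., 0.], [0., -1.]])
basis = [M, N, H]

def coords(x):
    """Coordinates of a traceless 2x2 matrix in the basis (M, N, H)."""
    return np.array([x[0, 1], x[1, 0], x[0, 0]])

def ad(x):
    """Matrix of ad(x) on sl_2 with respect to the basis (M, N, H)."""
    return np.column_stack([coords(x @ e - e @ x) for e in basis])

def b1(x):
    """Matrix of b_1(x) on the dual space: (b_1(x)c)(y) = -c([x, y])."""
    return -ad(x).T

# b_1([x, y]) = [b_1(x), b_1(y)] on the basis elements
for x in basis:
    for y in basis:
        lhs = b1(x @ y - y @ x)
        rhs = b1(x) @ b1(y) - b1(y) @ b1(x)
        assert np.allclose(lhs, rhs)
print("b_1 is a representation")
```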

    lemma

    Let \(\tilde{\mathfrak{g}}\) be a Lie algebra. Suppose there exists a nondegenerate trace form \(K_{\tilde{\mathfrak{g}}'}\ .\) If \([\tilde{\mathfrak{g}},\tilde{\mathfrak{g}}]=\tilde{\mathfrak{g}}\) then \(H^2(\tilde{\mathfrak{g}},\mathbb{C})=0\ .\)

    remark

    The following proofs rely on the fact that \(\tilde{\mathfrak{g}}\) is semisimple.

    This is proved in the literature, but not yet in these notes.

    Alternatively, one could require that \(H^1(\tilde{\mathfrak{g}},\cdot)=0\ .\)

    proof

    Let for \(m\geq 1\) a map \(\phi^m:C^m(\tilde{\mathfrak{g}},\mathbb{C})\rightarrow C^{m-1}(\tilde{\mathfrak{g}},\tilde{\mathfrak{g}}')\) be given by \[ (\phi^m u_m)(x_1,\dots,x_{m-1})(x)=u_m(x_1,\dots,x_{m-1},x)\] Since \[ [(b^{m-1}\phi^m u_m)(x_1,\dots,x_m)](x)=\ :\] \[=\sum_{i=1}^m (-1)^{i-1} b_1(x_i) \phi^m u_m (x_1,\dots,\hat{x}_i,\dots,x_m)(x)-\sum_{i<j}(-1)^{i-1} \phi^m u_m (x_1,\dots,\hat{x}_i,\dots,[x_i,x_j]\dots,x_m)(x)\ :\] \[ =-\sum_{i=1}^m (-1)^{i-1} u_m (x_1,\dots,\hat{x}_i,\dots,x_m,[x_i,x])-\sum_{i<j}(-1)^{i-1} u_m (x_1,\dots,\hat{x}_i,\dots,[x_i,x_j]\dots,x_m,x)\ :\] \[ = - d^m u_m (x_1,\dots,x_m,x)\ :\] \[ =- [\phi^{m+1} d^m u_m (x_1,\dots,x_m)](x)\] This implies that \(b^{m-1}\phi^m=-\phi^{m+1} d^m\) and in particular that \(\phi^m:Z^m(\tilde{\mathfrak{g}},\mathbb{C})\rightarrow Z^{m-1}(\tilde{\mathfrak{g}},\tilde{\mathfrak{g}}')\ .\)

    Take \(\omega_2\in Z^2(\tilde{\mathfrak{g}},\mathbb{C})\ .\) Then \(b^1 \phi^2\omega_2 =0\ .\)

    It follows from the assumptions that \(H^1( \tilde{\mathfrak{g}},\tilde{\mathfrak{g}}')=0\ .\)

    This implies that there exists a \( \beta_1\in C^{0}(\tilde{\mathfrak{g}},\tilde{\mathfrak{g}}')=\tilde{\mathfrak{g}}'\) such that \(\phi^2\omega_2=b^0\beta_1\) and \[\omega_2(x,y)= \phi^2\omega_2(x)(y)=(b^0\beta_1)(x)(y)=(b_1(x)\beta_1)(y)=-\beta_1([x,y])=d^1\beta_1(x,y)\] This proves that \(\omega_2=d^1\beta_1\ .\)

    remark

    These cohomology results were obtained by Whitehead in the antisymmetric case.

    There is not an analogous result for \(H_{\wedge}^3(\tilde{\mathfrak{g}},\mathbb{C})\ .\)

    This is related to the fact that \([d^2 K_{\mathfrak{g}'}]\in H_{\wedge}^3(\tilde{\mathfrak{g}},\mathbb{C})\ .\)

    theorem (Weyl)

    Suppose \(\tilde{\mathfrak{g}}\) and \(\mathfrak{a}\) are finite dimensional. If \([\tilde{\mathfrak{g}},\tilde{\mathfrak{g}}]=\tilde{\mathfrak{g}}\) then \(\mathfrak{a}\) is completely reducible, that is, if \(\mathfrak{b}\) is a \(\tilde{\mathfrak{g}}\)-invariant subspace of \(\mathfrak{a}\ ,\) then there exists a \(\tilde{\mathfrak{g}}\)-invariant direct summand to \(\mathfrak{b}\ .\)

    proof

    Let \(\mathfrak{b}\) be a \(\tilde{\mathfrak{g}}\)-invariant subspace of \(\mathfrak{a}\ .\) The idea of the proof is as follows.

    Let \(P_\mathfrak{b}\) be the projector on \(\mathfrak{b}\ .\) If \(P_\mathfrak{b}\) commutes with the \(\mathfrak{g}\)-action, we are done, since then we find a direct summand by letting \(1-P_\mathfrak{b}\) act on \(\mathfrak{a}\ .\)

    To make \(P_\mathfrak{b}\) commute with the action, one perturbs it with another map \(c^0\ .\)

    In order for \(P_\mathfrak{b}+c^0\) to be a projection on \(\mathfrak{b}\) one needs that \(\mathrm{im\ }c^0 \subset \mathfrak{b}\) and \(\mathfrak{b}\subset \ker c^0\) (since \(P_\mathfrak{b}\) is the identity on \(\mathfrak{b}\)).

    These considerations lead to the following definition. Define \(\mathcal{W}\) to be the space of all \(A\in\mathrm{End}(\mathfrak{a})\) such that \[ \mathrm{im\ }A\subset \mathfrak{b}\subset \ker A\ .\] Then \(\mathcal{W}\) is a subspace: Let \(a\in \mathfrak{a}, b\in\mathfrak{b}\) and \(A,B \in \mathcal{W}\ .\) Then \((A+B)b = Ab +Bb=0\) and \( (A+B)a=Aa+Ba \in \mathfrak{b}\ .\)

    Define a representation \(\delta_1\) of \(\tilde{\mathfrak{g}}\) on \(\mathcal{W}\) by \[ \delta_1(x)A=[d_1(x),A]_{\mathrm{End}(\mathfrak{a})}\] Let \(P_\mathfrak{b}\) be a projector on \(\mathfrak{b}\) as a vector space. Then \([d_1(x),P_\mathfrak{b}]_{\mathrm{End}(\mathfrak{a})}\in \mathcal{W}\ .\) Therefore \( c^1\ ,\) defined by \[ c^1(x)=[d_1(x),P_\mathfrak{b}]_{\mathrm{End}(\mathfrak{a})}\] is a linear map from \(\tilde{\mathfrak{g}}\) to \(\mathcal{W}\ ,\) that is, \(c^1\in C^1(\tilde{\mathfrak{g}},\mathcal{W})\ .\)

    Observe that one cannot say \[c^1=\delta^0 P_\mathfrak{b}\] for the simple reason that \(P_\mathfrak{b}\notin\mathcal{W}\ .\) Then \[ \delta^1 c^1(x,y)=\delta_1(x)c^1(y)-\delta_1(y)c^1(x)-c^1([x,y])\ :\] \[=\delta_1(x)[d_1(y),P_\mathfrak{b}]_{\mathrm{End}(\mathfrak{a})}-\delta_1(y)[d_1(x),P_\mathfrak{b}]_{\mathrm{End}(\mathfrak{a})} -[d_1([x,y]),P_\mathfrak{b}]_{\mathrm{End}(\mathfrak{a})}\ :\] \[=[d_1(x),[d_1(y),P_\mathfrak{b}]_{\mathrm{End}(\mathfrak{a})}]_{\mathrm{End}(\mathfrak{a})} -[d_1(y),[d_1(x),P_\mathfrak{b}]_{\mathrm{End}(\mathfrak{a})}]_{\mathrm{End}(\mathfrak{a})} -[d_1([x,y]),P_\mathfrak{b}]_{\mathrm{End}(\mathfrak{a})}\ :\] \[=[[d_1(x),d_1(y)]_{\mathrm{End}(\mathfrak{a})},P_\mathfrak{b}]_{\mathrm{End}(\mathfrak{a})} -[d_1([x,y]),P_\mathfrak{b}]_{\mathrm{End}(\mathfrak{a})}\ :\] \[=0\] Since \(H^1(\tilde{\mathfrak{g}},\mathcal{W})=0\ ,\) one has \(c^1=\delta^0 c^0\) for some \(c^0\in C^0(\tilde{\mathfrak{g}},\mathcal{W})=\mathcal{W}\ .\) Then, with \(\mathcal{P}_\mathfrak{b}=P_\mathfrak{b}-c^0\ ,\) \[[d_1(x),\mathcal{P}_\mathfrak{b}]=c^1(x)-\delta_1(x)c^0=c^1(x)-\delta^0 c^0(x)=0\] One has \( \mathcal{P}_\mathfrak{b}a\in \mathfrak{b}\) for \(a\in\mathfrak{a}\) and \(\mathcal{P}_\mathfrak{b}b=P_\mathfrak{b}b=b\) for \(b\in\mathfrak{b}\ .\)

    The conclusion is that \( \mathcal{P}_\mathfrak{b}\) is a projector on \(\mathfrak{b}\) as a \(\tilde{\mathfrak{g}}\)-module (and therefore \((1-\mathcal{P}_\mathfrak{b}) \) is a projector on the complementary subspace).

    Since \(\mathfrak{a}\) is finite-dimensional, one can repeat the argument and obtain a decomposition of \(\mathfrak{a}\) into irreducible invariant subspaces by induction on the dimension.

    theorem

    Suppose \(\tilde{\mathfrak{g}}\) and \(\mathfrak{a}\) are finite dimensional.

    Suppose there exists a nondegenerate trace form \(K_{\tilde{\mathfrak{g}}'}\ .\)

    If \([\tilde{\mathfrak{g}},\tilde{\mathfrak{g}}]=\tilde{\mathfrak{g}}\) then any extension of \(\tilde{\mathfrak{g}}\) by \(\mathfrak{a}\) is trivial.

    proof

    This follows from the fact that \(H^2(\tilde{\mathfrak{g}},\mathfrak{a})=0\ .\)

    references

    • Humphreys, James E. Introduction to Lie algebras and representation theory. Graduate Texts in Mathematics, Vol. 9. Springer-Verlag, New York-Berlin, 1972. xii+169 pp.


    On to the ninth lecture

    Back to the eighth lecture
