# User:Jan A. Sanders/An introduction to Lie algebra cohomology/Lecture 1

Under construction.

## introduction

The plan is to give an introduction to Lie algebra cohomology that can be followed on different levels. The development of the cohomological theory will require nothing beyond the basic rules for Lie algebras and representations. The treatment is not quite standard, since the forms will not necessarily be antisymmetric.

## definition of Lie algebra

A Lie algebra $$\mathfrak{g}$$ is a module or vector space over a ring or a field $$R$$ (think of $$\mathbb{R}$$ or $$\mathbb{C}$$) with a bilinear operation $$[\cdot,\cdot]$$ obeying the following rule: $\tag{1} [[x,y],z]=[x,[y,z]]-[y,[x,z]],\quad x,y,z\in\mathfrak{g}$

and such that $\tag{2} [x,x]=0,\quad x\in\mathfrak{g}\ .$

Lie algebras have been extensively studied for more than a century.

### remark

If one wants to include the super case (see below) then it is more natural to replace the condition $$[x,x]=0$$ by $[x,y]+[y,x]=0\ .$ The latter condition follows from the former: expand $$[x+y,x+y]=0$$ using bilinearity. If $$R$$ is a field of characteristic $$\neq 2$$ the two definitions are equivalent.

### example class of a Lie algebra

Let $$\mathcal{A}$$ be an associative algebra, that is, $$(xy)z=x(yz)$$ for all $$x,y,z\in\mathcal{A}$$ (in other words, one can forget the brackets around the multiplication). Then define a bracket by $\tag{3} [x,y]=xy-yx$

This defines a Lie algebra structure on $$\mathcal{A}$$ (Check!).
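Checking this is a good exercise; for the impatient, here is a quick numerical sanity check (not a proof) on a sample associative algebra, namely $$3\times 3$$ matrices with the usual product. The matrices are random choices for illustration only.

```python
# Check (numerically, on samples) that the commutator bracket (3)
# satisfies rules (1) and (2) in the associative algebra of 3x3 matrices.
import numpy as np

rng = np.random.default_rng(0)
x, y, z = (rng.integers(-3, 4, size=(3, 3)) for _ in range(3))

def bracket(a, b):
    """The commutator [a, b] = ab - ba of (3)."""
    return a @ b - b @ a

# rule (2): [x, x] = 0
assert np.array_equal(bracket(x, x), np.zeros((3, 3), dtype=int))

# rule (1): [[x, y], z] = [x, [y, z]] - [y, [x, z]]
lhs = bracket(bracket(x, y), z)
rhs = bracket(x, bracket(y, z)) - bracket(y, bracket(x, z))
assert np.array_equal(lhs, rhs)
```

Integer matrices are used so that the equalities are exact, with no floating-point issues.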

### example of an associative algebra - pseudodifferential symbols

The following is not the typical simple example meant to give the reader a first glimpse of what is going on, but an advanced one in which the techniques of Lie algebra cohomology are actually used. The mathematical background can be found in Khesin and Wendt.

Let $$\delta$$ be a derivation on a ring $$R\ ,$$ that is to say that $$\delta (fg)=\delta(f)g+f\delta(g)\ ,$$ or, written as an operator, $$\delta f = f^{(1)}+f\delta\ ,$$ with $$f^{(1)}=\delta(f)\ ,$$ $$f,g,f^{(1)}\in R\ .$$

Such a ring is called a differential ring.

A differential operator is an expression of the form $$f \delta^k: R\rightarrow R\ .$$

Differential operators themselves form an associative ring which we write as $$DO\ .$$

To show that the composition is associative, one can use the symbolic method, in which each element $$f\in R$$ has its own symbol $$\xi_f\ .$$

One then expresses $$f^{(k)}=\delta^k(f)$$ symbolically as $$\xi_f^k f$$ in $$R[\xi_f,\cdots]\ .$$

We write $$\hat{\xi}_f=\xi_f \delta^{-1}\ .$$

Since $$\delta^k g=g(\delta+\xi_g)^k=g(1+\hat{\xi}_g)^k \delta^k\ ,$$ the composition $$f\delta^k\, g\delta^l$$ now reads symbolically $$fg(\delta+\xi_g)^k \delta^l=fg(1+\hat{\xi}_g)^k \delta^{k+l}\ .$$

It is now an automatic consequence that the composition is associative.

If the $$k$$ in this expression is a natural number, one speaks of a differential operator; if not, of a pseudodifferential operator, with corresponding space $$\psi DO\ .$$

We call $$k$$ the degree of $$f\delta^k\ .$$

In the pseudodifferential case one should view $$fg(1+\hat{\xi}_g)^k \delta^{k+l}$$ as the definition of the composition of $$f\delta^k$$ and $$g\delta^l \ .$$

If one allows infinite formal sums of pseudodifferential operators, with the degrees bounded from above, one speaks of pseudodifferential symbols, with the corresponding space $$\psi DS\ .$$

If we denote by $$\psi DS^\alpha$$ those elements in $$\psi DS$$ with degree bound $$\alpha\ ,$$ we see that $$\psi DS^\alpha \subset \psi DS^\beta$$ if $$\alpha\leq\beta\ ,$$ and $$\psi DS^\alpha \psi DS^\beta \subset \psi DS^{\alpha+\beta}\ ,$$ that is, $$\psi DS$$ is a filtered algebra.

For instance, $$\delta^{-1}f=f(1+\hat{\xi}_f)^{-1}\delta^{-1}=f\delta^{-1}-f\xi_f \delta^{-2}+f\xi_f^2\delta^{-3}-\cdots=f\delta^{-1}-f^{(1)}\delta^{-2}+f^{(2)}\delta^{-3}-\cdots\ .$$
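This symbolic calculus is easy to experiment with. The sketch below assumes $$R=\mathbb{Q}[x]$$ with $$\delta=d/dx$$ and stores a symbol $$\sum_k f_k\delta^k$$ as a dictionary from degrees to coefficients; the composition rule terminates because a polynomial has only finitely many nonzero derivatives.

```python
# A computational sketch of the composition of pseudodifferential
# symbols over R = Q[x], delta = d/dx.  `compose` implements
#   f delta^k  g delta^l = sum_i binomial(k, i) f g^(i) delta^(k+l-i).
import sympy as sp

x = sp.symbols('x')

def compose(P, Q):
    out = {}
    for k, f in P.items():
        for l, g in Q.items():
            i, gi = 0, g
            while gi != 0:          # terminates: g is a polynomial
                deg = k + l - i
                out[deg] = sp.expand(out.get(deg, 0)
                                     + sp.binomial(k, i) * f * gi)
                gi = sp.diff(gi, x)
                i += 1
    return {d: c for d, c in out.items() if c != 0}

# delta^{-1} f = f delta^{-1} - f' delta^{-2} + f'' delta^{-3} - ...
f = x**2
expansion = compose({-1: sp.Integer(1)}, {0: f})
assert expansion == {-1: x**2, -2: -2*x, -3: 2}
```

Note that sympy's generalized `binomial` handles negative degrees such as $$k=-1$$ automatically, which is exactly what the expansion above requires.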

Defining the Lie bracket as in (3) we find that $$[\psi DS^\alpha,\psi DS^\beta]\subset \psi DS^{\alpha+\beta-1}\ .$$ One writes $$\psi DS^{(\alpha)}=\psi DS^{\alpha+1}\ .$$

We see that $$\psi DS$$ is a filtered Lie algebra since $$[\psi DS^{(\alpha)},\psi DS^{(\beta)}]\subset\psi DS^{(\alpha+\beta)}\ .$$

In order to introduce $$\log(\delta)\ ,$$ consider $$[\delta^\alpha,f\delta^n]=\delta^\alpha f\delta^n -f\delta^{n+\alpha}=f\left((1+\hat{\xi}_f)^\alpha-1\right)\delta^{n+\alpha}\ .$$

If we, formally, let $$\delta^\alpha=e^{\alpha \log(\delta)}\ ,$$ then differentiating with respect to $$\alpha$$ at $$0$$ gives $$[\log(\delta),f\delta^n]=f\log(1+\hat{\xi}_f)\delta^n=\sum_{i=1}^\infty \frac{(-1)^{i+1}}{i}f^{(i)} \delta^{n-i}\ ,$$ where this does not imply that $$\log(\delta)$$ itself is a pseudodifferential symbol.

Observe that $$[\log(\delta),\cdot]:\psi DS^n\rightarrow \psi DS^{n-1}\ .$$ This implies that $$\exp([\log(\delta),\cdot])$$ is well defined in the filtration topology, so one can do normal form theory on $$\psi DS\ .$$

We now see that $$\delta^\alpha f \delta^n g \delta^k= f g(1+\hat{\xi}_f+\hat{\xi}_g)^\alpha(1+\hat{\xi}_g)^n\delta^{\alpha+n+k}\ .$$

It follows that $$[\log(\delta),f\delta^n g\delta^k]=f g \log(1+\hat{\xi}_f+\hat{\xi}_g)(1+\hat{\xi}_g)^n\delta^{n+k}\ .$$

One also has $$[\log(\delta),f\delta^n ]g\delta^k=f g \left(\log(1+\hat{\xi}_f+\hat{\xi}_g)-\log(1+\hat{\xi}_g)\right)(1+\hat{\xi}_g)^n\delta^{n+k}$$ and $$f\delta^n [\log(\delta), g\delta^k]=f g \log(1+\hat{\xi}_g)(1+\hat{\xi}_g)^n\delta^{n+k}\ .$$

The conclusion is (Exercise 4.9 in Khesin, Wendt) $[\log(\delta),f\delta^n g\delta^k]=[\log(\delta),f\delta^n ]g\delta^k+f\delta^n [\log(\delta), g\delta^k]$ That is, $$[\log(\delta),\cdot]$$ is a derivation on the ring of pseudodifferential operators (and thereby also a derivation on the induced Lie algebra).
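The derivation property can also be tested mechanically. The following sketch makes the same assumptions as before ($$R=\mathbb{Q}[x]\ ,$$ $$\delta=d/dx\ ,$$ polynomial coefficients so that all series terminate) and checks the Leibniz rule on sample operators.

```python
# Sanity check (not a proof) that [log(delta), .] is a derivation on
# pseudodifferential symbols over R = Q[x], delta = d/dx.
# Symbols are dicts {degree: coefficient}.
import sympy as sp

x = sp.symbols('x')

def compose(P, Q):
    # f delta^k  g delta^l = sum_i binomial(k, i) f g^(i) delta^(k+l-i)
    out = {}
    for k, f in P.items():
        for l, g in Q.items():
            i, gi = 0, g
            while gi != 0:
                deg = k + l - i
                out[deg] = sp.expand(out.get(deg, 0)
                                     + sp.binomial(k, i) * f * gi)
                gi = sp.diff(gi, x)
                i += 1
    return {d: c for d, c in out.items() if c != 0}

def ad_log(P):
    # [log(delta), f delta^n] = sum_{i>=1} (-1)^(i+1)/i f^(i) delta^(n-i)
    out = {}
    for n, f in P.items():
        i, fi = 1, sp.diff(f, x)
        while fi != 0:
            deg = n - i
            out[deg] = sp.expand(out.get(deg, 0)
                                 + sp.Rational((-1)**(i + 1), i) * fi)
            fi = sp.diff(fi, x)
            i += 1
    return {d: c for d, c in out.items() if c != 0}

def add(P, Q):
    out = {d: sp.expand(P.get(d, 0) + Q.get(d, 0)) for d in set(P) | set(Q)}
    return {d: c for d, c in out.items() if c != 0}

P = {1: x**2}   # x^2 delta
Q = {1: x}      # x   delta
lhs = ad_log(compose(P, Q))
rhs = add(compose(ad_log(P), Q), compose(P, ad_log(Q)))
assert lhs == rhs   # the Leibniz rule of the conclusion above
```

The sample operators $$x^2\delta$$ and $$x\delta$$ are arbitrary; any polynomial coefficients work.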

#### inner or outer derivation ?

A derivation $$\partial$$ of an algebra is said to be inner if it can be written as $$\partial\,\cdot =[d,\cdot]$$ for some element $$d$$ of the algebra. Is $$[\log(\delta),\cdot]$$ inner on $$\psi DS\ ?$$

Let us put $$d=\sum_{\alpha} d_\alpha \delta^\alpha\ .$$

Its degree should be $$0\ .$$ Consider the highest degree term: $[d_0 , f\delta^n]=d_0 f\delta^n-f\delta^n d_0 =-nf d_0^{(1)} \delta^{n-1} +\ldots\ .$ Comparing with the $$\delta^{n-1}$$-term of $$[\log(\delta),f\delta^n]\ ,$$ we need to solve $$f^{(1)}=-nf d_0^{(1)} \ .$$

The existence of one element $$f\in R$$ for which this equation cannot be solved with $$d_0\in R$$ would imply that $$[\log(\delta),\cdot]$$ is outer.

Consider this situation in the ring generated by $$x^n, n\in \mathbb{Z}$$ and $$\delta=\frac{d}{dx}\ .$$

#### tr

We say that two elements in $$R$$ are equivalent if their difference lies in the image of $$\delta\ .$$

That is, $$f\equiv g$$ means that there exists a $$h\in R$$ such that $$f-g=h^{(1)}$$ and we write $$R^\star=R/\equiv\ .$$

Consider $$[f\delta^\mu, g\delta^\nu]=fg(1+\hat{\xi}_g)^\mu\delta^{\mu+\nu}-gf(1+\hat{\xi}_f)^\nu\delta^{\mu+\nu}\ .$$

If $$\mu+\nu\notin\mathbb{N}\ ,$$ this expression does not contain a $$\delta^{-1}$$-term.

If $$\mu+\nu\in\mathbb{N}\ ,$$ the $$\delta^{-1}$$-component of this is $$fg^{(\mu+\nu+1)} \binom{\mu}{\mu+\nu+1}-gf^{(\mu+\nu+1)}\binom{\nu}{\mu+\nu+1}\equiv f^{(\mu+\nu+1)} g (-1)^{\mu+\nu+1}\binom{\mu}{\mu+\nu+1}-gf^{(\mu+\nu+1)}\binom{\nu}{\mu+\nu+1}\ .$$

If $$R$$ is a commutative ring, the last expression is zero, just as the trace of a commutator of matrices is zero.

In the noncommutative case one would also have to divide $$R$$ by $$[R,R]$$ to get the desired zero in the quotient. Observe that $$\delta[R,R]\subset [R,R]\ .$$

If we now define $$\mathrm{tr}\in C^1(\psi DS ,R^\star)\ ,$$ the space of $$R$$-linear forms on $$\psi DS$$ with values in $$R^\star\ ,$$ by $$\mathrm{tr} \sum_\alpha f_\alpha \delta^\alpha=f_{-1}\ ,$$ then in the commutative case $$\mathrm{tr}\, X\not\equiv 0$$ implies that $$X\in\psi DS$$ is not a commutator.
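One can test this on the ring of Laurent polynomials suggested earlier, where $$x^{-1}$$ is the only monomial that is not a derivative, so $$\mathrm{tr}$$ amounts to taking a residue. The sketch below evaluates the $$\delta^{-1}$$-coefficient formula above on sample operators and checks that commutators indeed have vanishing trace.

```python
# Check that tr vanishes on sample commutators [f delta^mu, g delta^nu],
# assuming R = Laurent polynomials in x and delta = d/dx.  By the
# computation above, the delta^{-1}-coefficient of the commutator is
#   binom(mu, k) f g^(k) - binom(nu, k) g f^(k),   k = mu + nu + 1,
# and its class in R* is its x^{-1}-coefficient (the residue).
import sympy as sp

x = sp.symbols('x')

def tr_of_bracket(f, mu, g, nu):
    k = mu + nu + 1
    coeff = sp.binomial(mu, k) * f * sp.diff(g, x, k) \
          - sp.binomial(nu, k) * g * sp.diff(f, x, k)
    return sp.expand(coeff).coeff(x, -1)   # residue

samples = [(x**3, 2, x**-3, -1),
           (x**-1, 3, x**2, -2),
           (x**-2, 2, x**-1, 0)]
for f, mu, g, nu in samples:
    assert tr_of_bracket(f, mu, g, nu) == 0
```

The sample exponents and Laurent monomials are arbitrary choices; the second one is instructive because the two terms each have a nonzero residue that cancels.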

This may give a way to see whether $$[\log(\delta),f \delta^n]$$ is a commutator. If it is not for certain $$f\ ,$$ then $$[\log(\delta),\cdot]$$ is an outer derivation.

However, since $$\mathrm{tr}[\log(\delta),f \delta^n]=\frac{(-1)^n}{n+1} f^{(n+1)}\equiv 0$$ this does not give us any obstruction.

#### reference to this example

• Khesin, Boris; Wendt, Robert. The geometry of infinite-dimensional groups. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, 51. Springer-Verlag, Berlin, 2009. xii+304 pp. ISBN: 978-3-540-77262-0

### definition of abelian

$$\mathfrak{g}$$ is called abelian if $$[x,y]=0$$ for all $$x,y\in\mathfrak{g}\ .$$

### definition of the Lie algebra $$\mathfrak{sl}_2$$

Consider the triple $$\langle M, N, H \rangle$$ with commutation relations $[M,N]=H,\quad [H,M]=2M,\quad [H,N]=-2N$ Checking the Jacobi identity directly is a lot of trivial work, which can be avoided by realizing the Lie algebra inside an associative algebra.

### morphism

Let $$\mathfrak{a}$$ and $$\mathfrak{b}$$ be Lie algebras and let $$\phi_1:\mathfrak{a}\rightarrow\mathfrak{b}$$ be a linear map. If $$\phi_2(x,y)=[\phi_1(x),\phi_1(y)]_{\mathfrak{b}}-\phi_1([x,y]_{\mathfrak{a}})=0$$ for all $$x,y\in\mathfrak{a}\ ,$$ then $$\phi_1$$ is a Lie algebra morphism.

### linear forms

The space of $$n$$-linear (linear in the $$R$$-module structure) forms, with arguments in $$\mathfrak{g}$$ and values in $$\mathfrak{a}\ ,$$ is denoted by $$C^n(\mathfrak{g},\mathfrak{a})\ .$$

Notice that these are not required to be antisymmetric, contrary to the common Lie algebra cohomology convention.

### super remark

A super Lie algebra is a module $$\mathfrak{g}=\mathfrak{g}^0\oplus\mathfrak{g}^1$$ and a bracket such that $[\mathfrak{g}^i,\mathfrak{g}^j]\subset\mathfrak{g}^{(i+j) \bmod 2}$ obeying, with $$x\in\mathfrak{g}^{|x|}$$ and $$y\in\mathfrak{g}^{|y|}$$ (where $$|\cdot|:\mathfrak{g}^i\mapsto i$$) and $$z\in\mathfrak{g}\ ,$$ the super Jacobi identity $[[x,y],z]=[x,[y,z]]-(-1)^{|x||y|}[y,[x,z]]$ and $[x,y]=-(-1)^{|x||y|}[y,x],\quad x\in \mathfrak{g}^{|x|},y\in\mathfrak{g}^{|y|}$ Observe that $$\mathfrak{g}^0$$ itself is a Lie algebra.

The abstract theory of super Lie algebras follows the ordinary theory, with some extra administration.

If one is careful not to change the order of the elements too much, one can always insert the necessary factors at the end of the computation.

At some point, when the reasoning depends on the antisymmetry of the Lie bracket, one has to be careful again.

In these lectures the super signs are not put in, and to do so is left to the reader.

## representations of Lie algebras

Let $$\mathfrak{g}$$ be a Lie algebra and $$\mathfrak{a}$$ be a module or a vector space.

Then we say that $$d_1:\mathfrak{g}\rightarrow End(\mathfrak{a})$$ is a representation of $$\mathfrak{g}$$ in $$\mathfrak{a}$$ if $\tag{4} d_2(x,y)=[d_1(x),d_1(y)]-d_1([x,y])=0,\quad x,y\in\mathfrak{g}\ .$

Take $$\mathfrak{a}=\mathfrak{g}$$ and $$d_1(x)y=[x,y]\ .$$

This is called the adjoint representation and written as $$\mathrm{ad}(x)y\ ;$$ that it satisfies (4) is precisely the Jacobi identity (1).

### example - de Rham representation

Take $$\mathfrak{a}=\mathbb{R}[x_1,\cdots,x_n]$$ and $$\mathfrak{g}=\langle\frac{\partial}{\partial x_1},\cdots,\frac{\partial}{\partial x_n}\rangle\ .$$

One defines $$d_1(\frac{\partial}{\partial x_i})f=\frac{\partial f}{\partial x_i}\in\mathfrak{a}\ .$$ Check the details.

Observe that $$\mathfrak{g}$$ acts as an abelian Lie algebra on $$\mathfrak{a}\ ,$$ since mixed partial derivatives of polynomials commute.

### representation of $$\mathfrak{sl}_2$$

Let $$\mathfrak{a}=\mathbb{R}^2\ .$$ Take $d_1(M)=\begin{bmatrix} 0&1\\0&0\end{bmatrix}, \quad d_1(N)=\begin{bmatrix} 0&0\\1&0\end{bmatrix} ,\quad d_1(H)=\begin{bmatrix} 1&0\\0&-1\end{bmatrix}$ Then $$d_2(H,M)=[d_1(H),d_1(M)]-d_1([H,M])=0\ ,$$ etc., that is, $$d_1$$ is a representation of $$\mathfrak{sl}_2$$ in $$\mathbb{R}^2\ .$$
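The relations can be verified mechanically; the following snippet checks the $$\mathfrak{sl}_2$$ commutation relations for the matrices above.

```python
# Verify that the 2x2 matrices realize the sl_2 relations
#   [M, N] = H,  [H, M] = 2M,  [H, N] = -2N.
import numpy as np

M = np.array([[0, 1], [0, 0]])
N = np.array([[0, 0], [1, 0]])
H = np.array([[1, 0], [0, -1]])

def bracket(a, b):
    return a @ b - b @ a

assert np.array_equal(bracket(M, N), H)
assert np.array_equal(bracket(H, M), 2 * M)
assert np.array_equal(bracket(H, N), -2 * N)
```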

Since $$d_1(x_1 N + x_2 M +x_3 H)=0$$ implies $$x_1=x_2=x_3=0\ ,$$ the representation is faithful, and the Jacobi identity for $$\mathfrak{sl}_2$$ now follows from the Jacobi identity in an associative algebra.

## the coboundary operator

We now define the first instance of the coboundary operator $$d:C^0(\mathfrak{g},\mathfrak{a})\rightarrow C^1(\mathfrak{g},\mathfrak{a})\ :$$

Let $$a\in\mathfrak{a}=C^0(\mathfrak{g},\mathfrak{a})\ .$$ Then define $$d a\in C^1(\mathfrak{g},\mathfrak{a})$$ by

$\tag{5} d a (x)=d_1(x)a\ .$

### examples

In the case of the adjoint representation this amounts to $$d x(y)=d_1(y)x=[y,x]\ .$$

In the de Rham case we have $$d f (\frac{\partial}{\partial x_i})=\frac{\partial f}{\partial x_i}\ .$$

In particular, $$d x_j (\frac{\partial}{\partial x_i})=\delta_i^j$$ (the Kronecker delta).

We can write $$df=\sum_{i=1}^n \frac{\partial f}{\partial x_i} dx_i$$ and conclude that the $$d x_i$$ form an $$\mathfrak{a}$$-basis of $$C^1(\mathfrak{g},\mathfrak{a})\ .$$

### higher order

By itself, the zeroth order coboundary operator is not much fun. But there is more.

Let $$a_1\in C^1(\mathfrak{g},\mathfrak{a})\ .$$ Then define $$d^1 a_1\in C^2(\mathfrak{g},\mathfrak{a})$$ by

$\tag{6} d^1 a_1(x,y)=d_1(x)a_1(y)-d_1(y)a_1(x)-a_1([x,y])\ .$

Thus $$d^1:C^1(\mathfrak{g},\mathfrak{a})\rightarrow C^2(\mathfrak{g},\mathfrak{a})\ .$$

One checks that $$d^1d=0\ :$$

$d^1d a(x,y)=d_1(x)da(y)-d_1(y)da(x)-da([x,y])= d_1(x)d_1(y)a-d_1(y)d_1(x)a-d_1([x,y])a=d_2(x,y)a= 0.$

In general, when one has defined $$d^i:C^i(\mathfrak{g},\mathfrak{a})\rightarrow C^{i+1}(\mathfrak{g},\mathfrak{a})$$ such that $$d^{i+1}d^i=0\ ,$$ one calls $$d^\cdot$$ a coboundary operator.
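The computation $$d^1 d=0$$ can also be confirmed numerically, for instance in the adjoint representation of a matrix Lie algebra, with (5) and (6) written out directly; the sample matrices are arbitrary.

```python
# Numerical check of d^1 d = 0 for the adjoint representation of the
# matrix Lie algebra gl_3, where d_1(x) a = [x, a].
import numpy as np

rng = np.random.default_rng(1)
x, y, a = (rng.integers(-3, 4, size=(3, 3)) for _ in range(3))

def bracket(u, v):
    return u @ v - v @ u

def d0(a):
    # (5): (d a)(x) = d_1(x) a = [x, a]
    return lambda x: bracket(x, a)

def d1(a1):
    # (6): (d^1 a1)(x, y) = d_1(x) a1(y) - d_1(y) a1(x) - a1([x, y])
    return lambda x, y: (bracket(x, a1(y)) - bracket(y, a1(x))
                         - a1(bracket(x, y)))

assert np.array_equal(d1(d0(a))(x, y), np.zeros((3, 3), dtype=int))
```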

To treat the example of central extensions one needs one more coboundary operator. Let $$a_2\in C^2(\mathfrak{g},\mathfrak{a})$$ be a two-form.

Then define

$\tag{7} d^2 a_2(x,y,z)=d_1(x)a_2(y,z)-d_1(y)a_2(x,z)+d_1(z)a_2(x,y)-a_2([x,y],z)-a_2(y,[x,z])+a_2(x,[y,z])\ .$

### remark

These definitions are motivated by the central extension problem in the second lecture.

### example - de Rham continued

Let $$a_1=\sum_{i=1}^n a_1^i d x_i\in C^1(\mathfrak{g},\mathfrak{a})\ .$$ Then $d^1 a_1(\frac{\partial}{\partial x_j},\frac{\partial}{\partial x_k})=\frac{\partial}{\partial x_j}\sum_{i=1}^n a_1^i d x_i(\frac{\partial}{\partial x_k}) -\frac{\partial}{\partial x_k}\sum_{i=1}^n a_1^id x_i(\frac{\partial}{\partial x_j}) =\frac{\partial a_1^k}{\partial x_j} -\frac{\partial a_1^j}{\partial x_k} \ .$ Equivalently, one can write $$d^1 a_1=\sum_{j<k} (\frac{\partial a_1^k}{\partial x_j} -\frac{\partial a_1^j}{\partial x_k})d x_j \wedge d x_k\ .$$

In this example one has the familiar looking $$d^1 d f(\frac{\partial}{\partial x_i},\frac{\partial}{\partial x_j})=\frac{\partial}{\partial x_i}\frac{\partial f}{\partial x_j}-\frac{\partial}{\partial x_j}\frac{\partial f}{\partial x_i}=0\ .$$
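The same computation can be carried out in sympy, for an arbitrarily chosen polynomial $$f\ ;$$ the components of $$d^1 d f$$ are the differences of mixed partial derivatives, which vanish identically.

```python
# The de Rham computation above in sympy: the components of d^1 d f are
# d/dx_j (df)_k - d/dx_k (df)_j, all zero for polynomial f.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

f = x1**3 * x2 + x2 * x3**2        # an arbitrary sample polynomial

df = [sp.diff(f, xi) for xi in X]  # components of d f
d1df = [[sp.diff(df[k], X[j]) - sp.diff(df[j], X[k])  # d^1 d f
         for k in range(3)] for j in range(3)]

assert all(c == 0 for row in d1df for c in row)
```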

If one were to replace the polynomials by $$C^1$$-functions, the Lie algebra would no longer act as an abelian Lie algebra, since mixed partial derivatives of $$C^1$$-functions need not commute.

Since one always has $$d^1 d=0$$ one would have $$[\frac{\partial}{\partial x_i},\frac{\partial}{\partial x_j}]f=\frac{\partial}{\partial x_i}\frac{\partial f}{\partial x_j}-\frac{\partial}{\partial x_j}\frac{\partial f}{\partial x_i}\ .$$

### exercise

Show that $$d^2 d^1=0\ .$$
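For those who want to see the statement confirmed before proving it, here is a numerical check (not a proof), again in the adjoint representation of a matrix Lie algebra, with (6) and (7) written out directly. The sample matrices and the one-form are arbitrary, and the one-form is deliberately not antisymmetric in any sense.

```python
# Numerical sanity check of d^2 d^1 = 0 in the adjoint representation
# of gl_3, where d_1(x) v = [x, v].
import numpy as np

rng = np.random.default_rng(2)
x, y, z, a, b = (rng.integers(-2, 3, size=(3, 3)) for _ in range(5))

def br(u, v):
    return u @ v - v @ u

def d1(a1):
    # (6): (d^1 a1)(x, y) = d_1(x) a1(y) - d_1(y) a1(x) - a1([x, y])
    return lambda x, y: br(x, a1(y)) - br(y, a1(x)) - a1(br(x, y))

def d2(a2):
    # (7): (d^2 a2)(x, y, z) = d_1(x) a2(y,z) - d_1(y) a2(x,z)
    #      + d_1(z) a2(x,y) - a2([x,y],z) - a2(y,[x,z]) + a2(x,[y,z])
    return lambda x, y, z: (br(x, a2(y, z)) - br(y, a2(x, z))
                            + br(z, a2(x, y)) - a2(br(x, y), z)
                            - a2(y, br(x, z)) + a2(x, br(y, z)))

a1 = lambda u: a @ u @ b   # an arbitrary linear one-form on gl_3

assert np.array_equal(d2(d1(a1))(x, y, z), np.zeros((3, 3), dtype=int))
```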