# Bogoliubov-Parasiuk-Hepp-Zimmermann renormalization scheme

The Bogoliubov-Parasiuk-Hepp-Zimmermann (abbreviated **BPHZ**) renormalization scheme is a mathematically consistent method of rendering
Feynman amplitudes finite while maintaining the fundamental postulates of
relativistic quantum field theory (Lorentz invariance, unitarity, causality). Technically it is based on systematic subtractions performed directly on momentum-space integrals, which distinguishes it from other methods of renormalization. For massless particles the scheme has been enlarged by Lowenstein and is then called BPHZL.

## The problem

To elucidate the problem, let us look at an intuitive representation of processes involving particles at the subatomic level. Elementary particles like electrons, quarks, photons and gluons interact with each other: in scattering processes incoming particles collide and give rise to outgoing particles, the transition from such an initial state to a final state obeying the rules of quantum mechanics. Pictorially this is described in terms of Feynman diagrams.

Such pictorial descriptions become quantitative by assigning to the lines, vertices and the diagram as a whole appropriate mathematical expressions, every diagram contributing quantitatively to the transition amplitude of the physical process in question. These transition amplitudes form the elements of the scattering matrix \(S\ ,\) which maps every initial state to a final state.

\[\tag{1} S_{\mathrm{fin,in}} = \delta_{\mathrm{fin,in}} -i(2\pi)^4\delta (\sum q_{\mathrm{in}} - \sum q_{\mathrm{fin}}){\mathcal{M}}_{\mathrm{fin,in}} \]

where \(\sum q_{\mathrm{in}}\;\;\left(\sum q_{\mathrm{fin}}\right)\) are the sums of initial (respectively final) momenta, which are equal by momentum conservation. The probability density for the transition \(|{\mathrm{in}}\rangle \rightarrow |{\mathrm{fin}}\rangle\)

is \(\mathcal{M}_{\mathrm{fin,in}}\mathcal{M}_{\mathrm{fin,in}}^*\ ,\) where \(\mathcal{M}_{\mathrm{fin,in}}\) is defined by equation (1).

By a slight change of diagrams and rules one is eventually able to find the matrix elements of other operators as well: one just singles out one vertex as representing the operator in question. If, e.g., one is interested in matrix elements of the energy-momentum tensor, one vertex in a Feynman diagram is provided by this tensor as a function of the fields of the theory; see Figure 5.

As long as the diagrams in question have the form of trees, the rules yield mathematically well defined expressions and maintain Lorentz covariance. Tree-level transition amplitudes violate, however, unitarity (conservation of probabilities in physical processes) and causality, which are the further fundamental properties that should be valid for a theory of elementary particles. Actually, loops of propagators (closed paths in the diagrams) have to appear if unitarity and causality are requested: indeed \(\mathcal{M}_{\mathrm{fin,in}}\) appears as a loop-ordered formal series of diagrams, an \(L\)-loop diagram being weighted by \(\hbar^L\ ,\) where \(\hbar\) is the reduced Planck constant. Loops imply, however (according to the rules), that one has to perform non-trivial integrations which may simply yield infinity. The rules one has set up were too naive.

It is thus necessary to analyze this situation carefully and to set up modified rules which do respect the fundamental postulates (Lorentz covariance, unitarity, causality), lead to meaningful expressions which then, eventually, can be checked by experiment.
Any such set of rules is called a **renormalization scheme**. In this note we describe a specific one, named after its inventors Bogoliubov, Parasiuk, Hepp, Zimmermann – abbreviated as BPHZ.

## Diagrammatics

Let us look at a Feynman diagram with \(I\) internal lines, \(V\) vertices,
\(N\) external lines and \(L\) closed loops. It turns out that infinities can
be traced back to diagrams which are **one-particle irreducible**:
they are connected and remain so if any single line is cut in the diagram.
In this spirit external lines do not have to be considered; they serve
only as a reminder of the external momenta entering the diagram.
*Diagrams and subdiagrams are supposed to be "spanned" by their lines, the vertices attached to the lines of a diagram or subdiagram also belong to the diagram (resp. subdiagram)*. To every
line of type \(\varphi_a\) (from now on: an internal one) is associated a propagator,
\(\Delta_{\mathrm{c}}^{(a)}\ ,\)
and to every vertex \(v\) a polynomial \(P_v\) in the momenta. Examples of non-trivial momentum dependence contributing to power counting are shown in Figure 7.

A flow of momentum has to be chosen such that one has conservation of momentum at every vertex and thus for the diagram as a whole. An integration over the momenta \(k_l\) \(l=1,...,L\) of independent loops has to be performed. In the simple example of Figure 8 this results in the expression: \[\tag{2} \int \prod_{l=1}^L \left( d^4k_l\frac{1}{(p-k_l)^2 -m^2}\frac{1}{k_l^2 - m^2}\right)\;.\]

A degree \(d(\gamma)\ ,\) called the **ultraviolet degree of divergence**, is assigned to each diagram \(\gamma\)
by scaling the momenta \(k_l\) in the corresponding integral by a real number \(\rho\ ,\)
by considering the limit \(\rho\rightarrow \infty\) and
by defining \(d(\gamma)\) as the degree of the overall power of \(\rho\)
(including the contribution from the rescaling of the integration measure).
\(d(\gamma)\) measures the "growth" of the integrand for large internal momenta and
thus whether the integral has a chance to exist or not.

It can be shown that \(d(\gamma)\) can be expressed as follows: \[\tag{3} d(\gamma) = 4 - \sum_a d_a N_a + \sum_v (d_v - 4)\]

where:

- \(N_a\) is the number of external lines of type \(\varphi_a\ ,\)
- \(d_a \) is the UV-dimension of field \(\varphi_a\) and is given by \( \deg (\Delta_c^{(a)})=2 d_a-4\) (for example, \(d_a=1\) for a scalar boson in \(4\) dimensions),
- \(d_v = \sum_a d_a n_{a,v} + \deg(P_v)\) (\(n_{a,v}\) being the number of fields of type \(\varphi_a\) at vertex \(v,\) and \(\deg(P_v)\) being the degree of the polynomial in the momenta associated to vertex \(v\)).
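As a sanity check (ours): for a single scalar field with \(\varphi^4\) interaction one has \(d_a=1\) and \(d_v=4\ ,\) so that Eq. (3) reduces to \(d(\gamma)=4-N\ .\) The same result follows from counting powers of the loop momenta directly,
\[
d(\gamma)=4L-2I, \qquad L=I-V+1, \qquad 4V=N+2I,
\]
since each loop contributes a measure \(d^4k_l\) and each propagator behaves like \(k^{-2}\ ;\) eliminating \(I\) and \(V\) gives \(d(\gamma)=2I-4V+4=(4V-N)-4V+4=4-N\ .\)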

For the example in Figure 6 one finds \(d(\gamma)= 0\ ,\) hence the diagram is (logarithmically) divergent.
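The counting in Eq. (3) is mechanical enough to be sketched in a few lines of Python (an illustration of ours; the encoding of fields and vertices is hypothetical):

```python
def uv_degree(external_lines, field_dims, vertices):
    """Ultraviolet degree of divergence, Eq. (3):
    d(gamma) = 4 - sum_a d_a N_a + sum_v (d_v - 4).

    external_lines: {field: N_a}   number of external legs per field type
    field_dims:     {field: d_a}   UV dimension of each field
    vertices:       [({field: n_av}, deg_Pv)]  fields attached to each
                    vertex and the degree of its momentum polynomial
    """
    d = 4
    for a, N_a in external_lines.items():
        d -= field_dims[a] * N_a
    for fields_at_v, deg_Pv in vertices:
        d_v = sum(field_dims[a] * n for a, n in fields_at_v.items()) + deg_Pv
        d += d_v - 4
    return d

# One-loop 4-point function in phi^4 theory: two phi^4 vertices (d_v = 4),
# four external scalar legs (d_a = 1): logarithmically divergent.
print(uv_degree({"phi": 4}, {"phi": 1},
                [({"phi": 4}, 0), ({"phi": 4}, 0)]))  # -> 0
```

Since every \(\varphi^4\) vertex has \(d_v=4\ ,\) the result depends only on the external legs, reproducing \(d(\gamma)=4-N\) at any loop order.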

Since, at least for massive fields, the integrand is a rational function of
the momenta, analytic at the origin of momentum space, one can enforce
convergence by Taylor expanding around vanishing external momenta and
subtracting all terms up to and including degree \(d(\gamma)\) in this
expansion. (The operator that performs a Taylor expansion in a given set of momenta \(p\) up to and including degree \(d(\gamma)\) is denoted by \(t^{d(\gamma)}_p\)).
This *ad hoc* prescription can be justified by observing
that on the diagrammatic level it amounts to subtracting pointlike vertices
carrying a polynomial in the external momenta of degree \(d(\gamma)\ .\)
Indeed, formally the subtraction procedure is equivalent to introducing
a new diagram in which the divergent subdiagram has been replaced by
a vertex \(v\) with suitably chosen \(P_v\ ,\) known as a **counterterm**.
Hence if on a formal level the
fundamental postulates are satisfied, they will also be maintained
after this
redefinition, which leads to a meaningful expression. It is important here
that one works perturbatively (loop expansion):
e.g. the counterterm defined to subtract a one-loop diagram (i.e. of order \(\hbar\)),
when inserted as an interaction vertex in a diagram with four loops (i.e. of order \(\hbar^4\)),
will give rise to a contribution of five-loop order (i.e. of order \(\hbar^5\)).

Of course, by this procedure one has introduced a free parameter for every counterterm, which must be fixed by so-called
**normalization conditions**. Different schemes require different
values for such parameters, but after this **re**-normalization all
schemes agree in their results.

It goes hand in hand with the perturbative construction that the proper
definition of the finite part of a diagram is recursive.
Given a multiloop Feynman diagram, one first has to subtract the divergent subdiagrams with the smallest loop number,
then one has to consider larger (sub)diagrams which include the previous subdiagrams, etc. In a word, the diagrams have to be *ordered* in some way to be properly treated.

As
long as divergent, one-particle irreducible
subdiagrams are mutually disjoint (with respect to their lines, irrespective of vertices), or properly contained in each other, this is
not problematic because the respective subtractions do not interfere.
As an example, consider Figure 8 with \(L=2\ .\)
It is fairly obvious (and can be proven rigorously) that **Dyson's formula** for the renormalized Feynman diagram \(R_{\gamma} (p,k)\)
(Dyson F.J., 1949),
\[
R_{\gamma} (p,k) = S_{\gamma} \prod_{\lambda\in U}
(1-t^{d(\lambda)}_{p^\lambda}S_{\lambda}) I_{\gamma} (U),
\]
leads to convergence. Here \(U\) is the set
\(U=\{\gamma, \gamma_1, \gamma_2\}\) of diagrams \(\gamma,\, \gamma_1\) and \(\gamma_2\ .\)

\(I_\gamma(U)\) is the integrand written in variables fitting to the set \(U.\) \(S_\lambda\) is a substitution operator, relabeling momenta appropriately. \(p^\lambda\) is the set of the external momenta of the subdiagram \(\lambda,\) as prepared by \(S_\lambda\ .\)

Given a set of subdiagrams \(U=\{\gamma_1,\cdots,\gamma_n\}\ ,\) if *none* of the conditions
\[\gamma_i \subseteq \gamma_j,\, \gamma_j \subseteq \gamma_i,\, \gamma_i \cap \gamma_j = \emptyset\]
holds for any pair \((\gamma_i,\gamma_j)\) of elements of \(U\ ,\) then the diagrams are said to **overlap**.
(Note that inclusion and disjointness are here understood in terms of *lines*.)
See Figure 10 for an example.

In general, when divergent one-particle irreducible subdiagrams overlap, subtractions do interfere and one has to give a prescription as to how to proceed.
Zimmermann (Zimmermann W., 1969) solved this problem by introducing the
notion of **forests**, defined as families of divergent one-particle irreducible (sub)diagrams (known as **renormalization parts**)
which are **strongly non-overlapping**, i.e. which are pairwise **strongly-disjoint** (disjoint in terms of *lines and vertices*) or included one in the other (in terms of lines).

In order to understand Zimmermann's solution it is instructive to continue with the
example \(L=2\) in Figure 8.
If one multiplies out the product in the Dyson formula and
takes into account that
\[
(1-t^{d(\gamma)}_{p^{\gamma}})\,t^{d(\gamma_1)}_{p^{\gamma_1}}t^{d(\gamma_2)}_{p^{\gamma_2}}
I_\gamma(U)=0
\]
one can rewrite the result in the form
\[
R_{\gamma} (p,k) = S_{\gamma} \sum_{U\in \mathcal{F}_\gamma} \prod_{\lambda\in U}
(-t^{d(\lambda)}_{p^\lambda} S_{\lambda}) I_{\gamma} (U),
\]
if one chooses as **family of forests** \(\mathcal{F}_\gamma\)
\[
\mathcal{F}_\gamma= \{ \emptyset, \{\gamma\},\{\gamma_1\}, \{\gamma_2\},
\{\gamma, \gamma_1\}, \{\gamma, \gamma_2\}\},
\]
i.e. all the sets of renormalization parts of \(\gamma\) which are strongly-non-overlapping.
Note that, remarkably enough, the diagrams \(\gamma_1\) and \(\gamma_2\) are not strongly-disjoint (nor related by inclusion) because they have a vertex in common (and only that), hence the forests \(\{\gamma_1, \gamma_2\}\) and \(\{\gamma, \gamma_1, \gamma_2\}\) do not appear in \(\mathcal{F}_\gamma\ .\)
Moreover, it is essential to define \(\mathcal{F}_\gamma\) such that the empty set belongs to it (one correspondingly sets \(I_\gamma(\emptyset)=I_\gamma (\{\gamma\})\ ;\) no \(t^d_p\) is needed).
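Zimmermann's combinatorics can be made concrete in a small Python sketch (ours; the encoding of diagrams as pairs of line and vertex sets is hypothetical). It enumerates all forests for the \(L=2\) example and reproduces the six-element family \(\mathcal{F}_\gamma\) above:

```python
from itertools import combinations

def compatible(d1, d2):
    """Two renormalization parts may belong to the same forest iff they are
    strongly disjoint (no common line and no common vertex) or one is
    included in the other (in terms of lines)."""
    (l1, v1), (l2, v2) = d1, d2
    strongly_disjoint = l1.isdisjoint(l2) and v1.isdisjoint(v2)
    included = l1 <= l2 or l2 <= l1
    return strongly_disjoint or included

def forests(parts):
    """All families of pairwise compatible renormalization parts,
    including the empty family."""
    return [set(U) for r in range(len(parts) + 1)
            for U in combinations(parts, r)
            if all(compatible(a, b) for a, b in combinations(U, 2))]

# L = 2 example of Figure 8: gamma contains gamma1 and gamma2, which
# share one vertex ("B") but no line, hence may not sit in one forest.
g  = (frozenset({1, 2, 3, 4}), frozenset("ABC"))
g1 = (frozenset({1, 2}),       frozenset("AB"))
g2 = (frozenset({3, 4}),       frozenset("BC"))
print(len(forests([g, g1, g2])))  # -> 6
```

The forests `{g1, g2}` and `{g, g1, g2}` are excluded by the shared vertex, exactly as noted in the text.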

Going back to the general case, it turns out that this observation is decisive for the generalization of the subtraction prescription to all diagrams, i.e. the sum over all families of strongly-non-overlapping, divergent one-particle-irreducible (sub)diagrams of a given diagram \(\gamma\) is the right notion to lead to convergence in the general case, even when \(\gamma\) contains overlapping renormalization parts.

The subtracted integrand \(R_\gamma(p,k)\) associated
with an integrand \(I_\gamma(p,k)\)
is then defined as the sum over all possible forests of strongly-non-overlapping renormalization parts of the diagram \(\gamma\) (including the empty set with no subtraction),
as given by the **forest formula**:
\[
R_\gamma (p,k)=S_\gamma \sum_{U\in \mathcal{F}_\gamma}\prod_{\lambda\in U}
(-t^{d(\lambda)}_{p^\lambda}S_\lambda) I_\gamma (U).
\]

Using the forest formula together with a specific prescription as to how to go around the poles in the propagators, Zimmermann was then able to prove absolute convergence of the integrals \(\int d^4k_1...d^4k_m\, R_\gamma (p,k)\ .\)

The absolute convergence originates from a very elegant treatment of the
\(\varepsilon\) in the propagator: Zimmermann replaced the standard
\(i\varepsilon\) by
\[
\Delta_{\mathrm{c}}(p)= \frac{i}{p^2 - m^2 + i\varepsilon({\mathbf{p}}^2+m^2)}
\]
and showed that this definition leads to a *Euclidean* majorant and
minorant for the Minkowski propagator. The respective inequalities read
\[
\frac{1}{\sqrt{1+\varepsilon^2}}\frac{1}{k_0^2+{\mathbf {k}}^2+m^2}
\leq \frac{1}{|k_0^2-{\mathbf{k}}^2-m^2 +i\varepsilon({\mathbf {k}}^2+m^2)|}
\leq \sqrt{1+\frac{4}{\varepsilon^2}}\,\frac{1}{k_0^2+{\mathbf {k}}^2+m^2}.
\]
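These bounds are easy to spot-check numerically; the following sketch (ours) samples random momenta and verifies that the \(\varepsilon\)-modified propagator is indeed squeezed between the two Euclidean expressions (with the factor \(\sqrt{1+4/\varepsilon^2}\) on the upper side):

```python
import math
import random

def bounds_hold(k0, k2, m, eps):
    """Check the Euclidean majorant/minorant at one sample point.
    k2 stands for the spatial momentum squared |k|^2."""
    A = k2 + m * m
    mink = 1.0 / abs(complex(k0 * k0 - A, eps * A))  # |1/(k0^2-|k|^2-m^2+i eps A)|
    eucl = 1.0 / (k0 * k0 + A)                       # Euclidean propagator
    lower = eucl / math.sqrt(1 + eps ** 2)
    upper = eucl * math.sqrt(1 + 4 / eps ** 2)
    slack = 1 + 1e-12                                # floating-point tolerance
    return lower / slack <= mink <= upper * slack

random.seed(0)
ok = all(bounds_hold(random.uniform(-10, 10), random.uniform(0.0, 100.0),
                     random.uniform(0.1, 5.0), random.uniform(0.01, 2.0))
         for _ in range(10_000))
print(ok)  # -> True
```

The lower bound is saturated at \(k_0=0\ ,\) the upper one near the mass shell \(k_0^2={\mathbf k}^2+m^2\ ,\) where the unmodified Minkowski propagator would be unbounded.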
Hence one avoids the problems
of conditional convergence appearing in the conventional formulation.
Lorentz covariance is recovered in the limit of vanishing \(\varepsilon\ .\)

Let us illustrate these remarks in the simplest possible example, the diagram for \(L=1\) in Figure 8.

Since the degree of divergence is \(d(\gamma)=0\ ,\) one has to subtract from the integrand \(I_\gamma(p,k)\) just its value at \(p=0\) and obtains for the desired integral (up to overall numerical factors) \[ \int d^4k\, R_{\gamma} (p,k) = \int d^4k \left(\frac{1}{(p-k)^2 - m^2 +i\varepsilon(({\mathbf{p}}-{\mathbf{k}})^2+m^2)}\, \frac{1}{k^2 - m^2 +i\varepsilon({\mathbf {k}}^2+m^2)} - \frac{1}{k^2 - m^2 +i\varepsilon({\mathbf{k}}^2+m^2)}\, \frac{1}{k^2 - m^2 +i\varepsilon({\mathbf{k}}^2+m^2)}\right). \] This integral converges absolutely since \[ \int d^4k\,|R_\gamma(p,k)|\leq \left(1+\frac{4}{\varepsilon^2}\right)^{3/2} \int d^4k \left|\frac{-p^2+2pk}{((p-k)^2_E + m^2)(k_E^2 + m^2)^2}\right| \] does so by Euclidean power counting.
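The improvement produced by the single subtraction can be checked numerically (a sketch of ours, in Euclidean momenta for simplicity): along a fixed direction \(\hat k\) the unsubtracted integrand falls off like \(k^{-4}\) (hence the logarithmic divergence), while the subtracted one falls off like \(k^{-5}\ ,\) enough for absolute convergence:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

m = 1.0
p = (1.0, 0.5, 0.0, 0.0)            # fixed external (Euclidean) momentum

def prop(q):                        # Euclidean propagator 1/(q^2 + m^2)
    return 1.0 / (dot(q, q) + m * m)

def integrand(k):                   # I(p, k), unsubtracted
    return prop(tuple(a - b for a, b in zip(p, k))) * prop(k)

def subtracted(k):                  # R(p, k) = I(p, k) - I(0, k)
    return integrand(k) - prop(k) ** 2

khat = (0.6, 0.8, 0.0, 0.0)         # unit direction in k-space
for lam in (10.0, 100.0, 1000.0):
    k = tuple(lam * x for x in khat)
    # lam^4 * I tends to 1, while lam^5 * |R| tends to 2 p.khat = 2.0
    print(round(lam ** 4 * integrand(k), 3),
          round(lam ** 5 * abs(subtracted(k)), 3))
```

Algebraically this is just \(R=(2p\cdot k-p^2)/\bigl(((p-k)^2+m^2)(k^2+m^2)^2\bigr)\ ,\) one power of \(k\) better than \(I\ .\)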

The existence of the limit \(\varepsilon \rightarrow 0\) is difficult to see in this momentum space form of the integral. One can however verify it by going over to another parametrization (Feynman parameters). It turns out that the integral approaches a Lorentz covariant distribution.

## Application

In fact, with this type of construction one is not only able to study diagrams contributing to the \(S\)-matrix, but also those forming matrix elements of composite operators. One just takes those as vertices into account in the power counting formula and proceeds via the forest formula. Hence one can now derive relations between composite operators on the fully quantized level. A very important example is provided by operator product expansions. Another one is constituted by equations of motion and currents. One can now verify whether the latter are conserved and thus check whether symmetries are realizable at the quantum level.

The technical difficulty in this analysis originates from the fact that the composite operators appearing in the field or in the current conservation equations correspond to vertices which introduce extra subtractions, that is extra contributions to \(d(\gamma)\) in Eq. (3). There are situations in which the extra subtractions are "anisotropic" meaning that the extra contributions depend on the external legs of \(\gamma\) and not just on their dimensionally weighted sum, while in other situations the extra subtractions are constants. In both cases one has forest formulae with subtraction degrees higher than their naive dimension.

This difficulty is overcome thanks to an identity proven by Zimmermann,
and thus named after him, which allows the reduction of extra subtracted
composite operators to a linear combination of naively subtracted
ones. The simplest example is that of a mass term for a scalar field,
\(m^2\int \varphi^2\ ,\) which has naive dimension 2. But one also obtains
finite diagrams if it is assigned dimension, i.e. subtraction
degree, 4. We shall denote the first vertex by \(m^2[\int \varphi^2]_2\)
and the second one by \(m^2[\int \varphi^2]_4\ .\)
Of course the integrals obtained for the two prescriptions will, in
general, be different. The **Zimmermann identity** now states that their difference can be expressed in terms of vertices with dimension (and power counting degree) 4.

In the example of one scalar field with \(\varphi^4\) interaction it reads

\[m^2[\int \varphi^2]_2 = m^2[\int \varphi^2]_4 +u[\int \partial\varphi\partial\varphi]_4 + v[\int\varphi^4]_4\,. \]

The **Zimmermann coefficients** \(u,v\) appearing here are at least of one-loop order. This is obvious
because in the trivial order – no loops, pointlike vertices – the two objects agree, since there are no subtractions to be performed.

This innocent-looking identity is actually one of the most fundamental relations in quantum field theory. In order to show this we consider in some detail how symmetries can be implemented in quantum field theory using the BPHZ renormalization scheme.

Clearly we have to understand how symmetry transformations act on Feynman diagrams and thereafter on the different types of Green functions which can be expressed as sums of diagrams. Since there are infinitely many, say, time-ordered Green functions, a symmetry of the theory translates into an infinite number of equations. A convenient tool for treating them all at once are functionals which generate the desired Green functions upon differentiation with respect to a suitably chosen set of auxiliary functions.

Let now \(\phi(x)\) denote a test function with values in the classical field space and let \(\Gamma_n^{(L)}(x_1,...,x_n)\) denote (the Fourier transform of) the sum of all one-particle-irreducible diagrams having \(n\) external legs and \(L\) closed loops. Then one introduces the generating functional for 1PI Green functions through the formal series \[ \Gamma = \sum_{n=1}^\infty\left[\frac{1}{n!} \int dx_1...dx_n\, \phi (x_1) ... \phi(x_n) \sum_{L=0}^\infty \Gamma^{(L)}_n(x_1,...,x_n)\right]. \] In the tree approximation (no loops) the one-particle-irreducible Green functions are given by pointlike objects, i.e. "vertices", and the functional \(\Gamma^{(0)}\) can be identified with the classical action, the spacetime integral of the Lagrangian density. Therefore, in this approximation, the invariance of the action under a field transformation \(\delta \phi\) can be translated into a functional differential equation:

\[
W\Gamma^{(0)} \equiv
\int \delta \phi\frac{\delta}{\delta\phi}\Gamma^{(0)}=0,
\]
named **Ward identity**, \(W\) being the Ward identity operator.

Extending the differential equation to diagrams with closed loops
one faces the extra-subtraction problem discussed above. Extra subtractions
induce further terms into the Ward identity corresponding to diagrams
with the insertion of an additional vertex \(Q(x)\ ,\) more precisely (and this
is a non-trivial statement) as a normal product \(\int dx\, [Q(x)]\cdot \Gamma\ .\)
This is the content of a remarkable theorem (**action principle**)
which corresponds to the general validity of the broken Ward identity
\[
W\Gamma = \left[\int dx \, Q(x)\right]\cdot\Gamma.
\]
Here the explicit form of the insertion \(Q\) and its subtraction degree
depend on \(W\ .\) Notice that the
potential deviation from symmetry, \([\int Q]\cdot \Gamma\ ,\) is at least
of one-loop order if we started from an invariant classical action.

The most interesting question is now, whether a Ward identity:

\[ W\Gamma=0 \] holds to all orders of perturbation theory.

*Linear* symmetry transformations in massive theories can be extended
naively to all loop orders, if the classical action is invariant. Examples
are translations and Lorentz
transformations. Dilatations and special conformal transformations,
however, do not leave the mass term invariant. Then one has to use
the Zimmermann identity; one finds that these symmetries are broken at
one loop (and subsequently in all higher orders) and that the breaking
can be expressed in terms of the coefficients \(u,v\ .\)

Does this breaking disappear for vanishing mass? In order to answer this
question appropriately one has to enlarge the BPHZ subtraction scheme,
since momentum subtractions at vanishing external momenta would lead
to spurious infrared divergences. One proceeds by introducing an auxiliary
mass term
\[
\Gamma_M = -\frac{1}{2}M^2(s-1)^2 \int dx \phi^2,
\]
where the variables \(s\) and \(s-1\) participate in the subtractions like
external momenta of a diagram. Ultraviolet subtractions are performed
at \(s=0\ ,\) hence do not introduce infrared divergences; subsequent
infrared subtractions, namely subtractions with respect to \(s-1\ ,\)
re-install the correct infrared behavior, in particular the pole
at \(s=1\) of the propagator, i.e. they lead to the massless theory (Lowenstein J. and Zimmermann W. (1975); Lowenstein J. (1976)). We shall name this enlarged scheme BPHZL.
Now one has to treat symmetries analogously to the massive case.
And one arrives at the analogous conclusion: in the \(\varphi^4\) theory
dilatation and special conformal symmetry are incurably violated: one
says, they are **anomalous**.

In the systematic study of symmetries (*non-linear*, internal,
local gauge symmetry,
supersymmetry) it always turned out that with the help of the respective
Zimmermann identities one could decide whether the symmetries were
anomalous or not and one was able to give an explicit expression for
the breaking in terms of the Zimmermann coefficients. This points to
the universal character of this identity.
Even outside of perturbation theory it is such an identity which
governs the truly non-trivial quantum behaviour of a quantum field
theory.

## General remarks

What are the great successes of the BPHZ renormalization scheme? The first certainly was the confirmation of Wilson's hypothesis on operator product expansions in perturbation theory based on Zimmermann's normal products. This provided the basis of confidence for the rich application of Wilson's ideas in particle physics.

The second is the treatment of symmetries. Once Zimmermann and Lowenstein had enlarged the subtraction scheme (Lowenstein J. and Zimmermann W. (1975); Lowenstein J. (1976)) so as to treat successfully, and with full mathematical rigor, theories containing massless particles, the road was open to quantizing non-abelian gauge theories, in particular non-vectorlike (i.e. chiral, cf. [1]) ones, i.e. theories containing left- or right-handed fields only.

Basing the required analysis solely on power counting and the action
principle Becchi, Rouet and Stora were able to quantize non-abelian
gauge theories and in particular to give a clear cut criterion under
which conditions those were physical, namely maintaining the axioms:
here unitarity is the crucial issue. After having translated broken gauge transformations into the language of a symmetry with anti-commuting
parameters, thereafter called **BRS transformations**, they showed
that the breaking of this symmetry leads to violation of unitarity.
Absence of this breaking is assured to all orders of perturbation
theory if the respective anomaly coefficient in the one-loop
approximation vanishes. This restricts the admissible representations
of fermions in the model.

Interestingly enough, general \(N=1\) supersymmetric non-abelian gauge theories also belong to this wide class of chiral theories, hence they are indeed prime candidates to be quantized using BPHZL. This has been done (Piguet O. and Sibold K. (1986)).

It is remarkable that only as late as 1998 was the first *all-order*
renormalization of a simplified version of the electroweak standard
model achieved (Kraus E. (1998)), and it was based on this scheme.

The successful quantization of all of these non-abelian gauge theories is thus at the very basis of today's particle physics, the success being attributable to the BPHZL renormalization scheme.

More generally speaking this scheme is a perfect tool for studying structural relations: current algebras, including the algebra of the energy-momentum tensor and superconformal algebras; theories with vanishing \(\beta\)-functions (often called "finite" theories); similarly one can rigorously formulate topological field theories and extract the relevant information. The precise definition of anomalies and their interrelations is at its core, since one can obtain them constructively. In this sense the BPHZL scheme is still effectively used and has not been superseded by any other scheme.

In recent years another aspect has become of interest: the algebraic structure behind the Feynman rules on the one hand, and behind the forest formula on the other. In its simplest form the forest formula carries a Hopf algebra structure and becomes as such an element of a rich mathematical theory. As with similar algebraizations in pure mathematics, it is to be expected that this process, which has now reached renormalization theory, will produce new, unexpected results which could not be found in the concrete realizations. Hopefully these will lead to new physical insight.

Having spoken so much on *one* scheme one should perhaps put it
in the
context of other renormalization schemes and try to contrast it with
those.

First of all one has to recall that all renormalization schemes
are equivalent upon finite renormalizations. This statement is the
content of a theorem due to K. Hepp who has given an axiomatic
characterization of what a renormalization scheme is. Roughly
speaking it is any set of prescriptions which is mathematically
consistent and tells one, how to obtain, say
Green functions or operators (like the scattering operator)
satisfying Lorentz covariance, unitarity and causality.
(This may or may not be realized via Feynman diagrams!)
After this theorem the use of any *specific* scheme is a matter of
practice but not a matter of principle.

So, BPHZL does not maintain BRS invariance (even in vector-like models),
hence it is not a very practical tool for *explicit* calculations in such
theories. Here, for instance,
**dimensional regularization** with subsequent renormalization is
the most practical scheme because it is naively compatible with this
type of gauge invariance. Dimensional renormalization is, however, at
least as cumbersome as BPHZL in chiral models, since there is
no naive treatment of \(\gamma_5\ ,\) the latter being a genuine object
of four-dimensional spacetime. If one wants to see how a renormalization
scheme constructively exhausts the axioms (in particular unitarity
and causality) one will use the **Epstein-Glaser method**
because there
the prescriptions of how to construct the \(S\)-operator are directly
based on these principles. Similarly, if one wants to maintain causality
one might stick to **analytic renormalization**. For very
concrete models
one will set up combinations of these schemes in order to facilitate
explicit computations.

Looking back at about forty years, it seems that, as far as structural relations and their use in physically relevant models are concerned, BPHZL is the leading scheme, just because it is constructive and does not only signal, for instance, the breakdown of a symmetry, but at the same time explicitly exhibits how the symmetry is broken. It is, however, to be repeated: this is a matter of practice and not of principle.

## References

- Dyson, Freeman J. (1949) 'The S-matrix in quantum electrodynamics'
*Phys. Rev.***75**: 1736. doi:10.1103/PhysRev.75.1736.

- Kraus, Elisabeth (1998) 'Renormalization of the electroweak standard model to all orders'
*Annals of Physics***262**: 155. doi:10.1006/aphy.1997.5746.

- Lowenstein, John and Wolfhart Zimmermann (1975) 'The Power Counting theorem for Feynman Integrals with Massless Propagators.'
*Communications in Mathematical Physics***44**: 73. doi:10.1007/BF01609059.

- Lowenstein, John (1976) 'Convergence Theorems for Renormalized Feynman Integrals with Zero-mass Propagators.'
*Communications in Mathematical Physics***47**: 53. doi:10.1007/BF01609353.

- Piguet, Olivier and Klaus Sibold (1986)
*Renormalized Supersymmetry*. Boston: Birkhäuser. doi:10.1007/978-1-4684-7326-1.

- Zimmermann, Wolfhart (1968) 'The Power Counting Theorem for Minkowski Metric.'
*Communications in Mathematical Physics***11**: 1. doi:10.1007/BF01654298.

- Zimmermann, Wolfhart (1969) 'Convergence of Bogoliubov's Method of Renormalization in Momentum Space.'
*Communications in Mathematical Physics***15**: 208. doi:10.1007/BF01645676.

**Internal references**

- Jean Zinn-Justin and Riccardo Guida (2008) Gauge invariance. Scholarpedia, 3(12):8287. doi:10.4249/scholarpedia.8287.

- Gerard ′t Hooft (2008) Gauge theories. Scholarpedia, 3(12):7443. doi:10.4249/scholarpedia.7443.

- Guy Bonneau (2009) Local operator. Scholarpedia, 4(9):9669. doi:10.4249/scholarpedia.9669.

- Vladimir Alexandrovich Smirnov (2009) Multiloop Feynman integrals. Scholarpedia, 4(6):8507. doi:10.4249/scholarpedia.8507.

- Guy Bonneau (2009) Operator product expansion. Scholarpedia, 4(9):8506. doi:10.4249/scholarpedia.8506.

## Further Reading

- Bogoliubov, Nikolai N. and Dimitri V. Shirkov (1959)
*Introduction to the theory of quantized fields*. Wiley-Interscience

- Collins, John (1984)
*Renormalization*. Cambridge doi:10.1017/CBO9780511622656.005.

- DeWitt, Cecile and Raymond Stora (eds.) (1971)
*Statistical mechanics and quantum field theory*. Gordon and Breach (in particular pp. 429-500)

- Itzykson, Claude and Jean-Bernard Zuber (1980)
*Quantum field theory*. McGraw-Hill, Inc.

- Kugo, Taichiro (1997)
*Eichtheorie*. Springer (German) doi:10.1007/978-3-642-59128-0.

- Velo, Giorgio and Arthur S. Wightman (eds.) (1975)
*Renormalization theory*. D. Reidel, Dordrecht (in particular pp. 95-160)

## See also

Algebraic renormalization, BRST Symmetry, Composite operator, Dimensional Renormalization, Gauge theories, Multiloop Feynman integrals, Operator product expansion, Renormalization, Supersymmetry