# Quantum entropies

Post-publication activity

Curator: Anna Vershynina

In information theory, entropy is a measure of the randomness or uncertainty in a system. It is crucial in the theory of entanglement (e.g. Schumacher 1996) and in quantum communication (Ohya and Volovich 2003), (Ozawa and Yuen 1993), (Holevo 1998). Other applications include quantum algorithms, quantum cryptography, and statistical physics, as discussed in (Ohya and Watanabe 2010). Von Neumann entropy is a natural generalization of the classical Shannon entropy. Surprisingly, it was introduced by von Neumann (1932) almost 20 years before Shannon entropy appeared in (Shannon 1948). Several entropy measures are discussed in this article: von Neumann entropy, Rényi entropy, Tsallis entropy, Min entropy, Max entropy, and Unified entropy.

Shannon explained the name 'entropy' in (McIrvine and Tribus 1971):

My greatest concern was what to call it. I thought of calling it 'information,' but the word was overly used, so I decided to call it 'uncertainty.' When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, 'You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage.'

## Definitions/Notations

The following notations will be used throughout this article. Let $$\mathcal{H}$$ be a $$d$$-dimensional complex Hilbert space. A tensor product of two or more Hilbert spaces will be denoted by subscripts, e.g. $$\mathcal{H_{AB}}=\mathcal{H_A}\otimes\mathcal{H_B}$$.

A system $$A$$ is defined by the Hilbert space $$\mathcal{H}_A$$.

A matrix (operator) $$M$$ on a Hilbert space $$\mathcal{H}$$ is Hermitian (or self adjoint) if it is its own conjugate transpose, i.e. $$M=M^*=M^\dagger$$.

A matrix $$M$$ is positive semidefinite, $$M\geq 0$$, if for every nonzero vector $$x$$, the value of $$x^* M x$$ is non negative, where $$x^*$$ denotes the conjugate transpose of the vector $$x$$.

A matrix $$M$$ is positive, $$M>0$$, if for every nonzero vector $$x$$, the value of $$x^* M x$$ is strictly positive, where $$x^*$$ denotes the conjugate transpose of the vector $$x$$.

A density matrix $$\rho$$ on a Hilbert space $$\mathcal{H}$$ is a Hermitian, positive semidefinite matrix of trace one. Density matrices describe quantum physical systems in either mixed or pure states. A quantum system is said to be in a pure state, if its density matrix is a rank-one projector. In other cases, a system is in a mixed state. Let $$\mathcal{D}(\mathcal{H})$$ denote a space of density matrices on $$\mathcal{H}$$, and let $$\mathcal{L}(\mathcal{H})$$ denote the set of all linear operators on $$\mathcal{H}$$.

The trace of a matrix $$M$$ is the sum of its eigenvalues or equivalently, it is the sum of its diagonal entries.

Let $$M_{AB}$$ be a linear operator on a tensor product Hilbert space $$\mathcal{H}_A\otimes\mathcal{H}_B$$. The partial trace of $$M_{AB}$$ over system $$A$$ is obtained by tracing out $$\mathcal{H}_A$$, and is denoted by $$M_B:=\mathrm{Tr}_A(M_{AB})$$. Similarly, the partial trace over system $$B$$ is obtained by tracing out $$\mathcal{H}_B$$, and is denoted by $$M_A:=\mathrm{Tr}_B(M_{AB})$$.
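As a concrete illustration, the partial trace can be computed numerically by reshaping a matrix on $$\mathcal{H}_A\otimes\mathcal{H}_B$$ into a four-index tensor and summing over the indices of the traced-out system. A minimal sketch in NumPy (the helper name `partial_trace` is illustrative, not from the cited references):

```python
import numpy as np

def partial_trace(M_AB, dA, dB, trace_out="B"):
    """Partial trace of a (dA*dB x dA*dB) matrix over subsystem A or B."""
    T = M_AB.reshape(dA, dB, dA, dB)  # tensor indices: (a, b, a', b')
    if trace_out == "B":
        return np.einsum("abcb->ac", T)  # M_A = Tr_B(M_AB)
    return np.einsum("abad->bd", T)      # M_B = Tr_A(M_AB)

# Example: a product state rho_A (x) rho_B recovers its marginals
rho_A = np.diag([0.7, 0.3])
rho_B = np.diag([0.5, 0.5])
rho_AB = np.kron(rho_A, rho_B)
assert np.allclose(partial_trace(rho_AB, 2, 2, "B"), rho_A)
assert np.allclose(partial_trace(rho_AB, 2, 2, "A"), rho_B)
```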

A purification of a density operator $$\rho_A$$ on Hilbert space $$\mathcal{H}_A$$ is a pure state $$\rho_{RA}$$ on a reference system $$R$$ and the original system $$A$$, such that tracing out the reference system gives the original density operator $$\rho_A$$, i.e. $$\mathrm{Tr}_R\rho_{RA}=\rho_A$$. See (Araki and Lieb 1970) or (Wilde 2013) for its existence.
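In finite dimensions, a purification can be constructed explicitly from the spectral decomposition $$\rho_A=\sum_i p_i |u_i\rangle\langle u_i|$$ by setting $$|\psi\rangle_{RA}=\sum_i \sqrt{p_i}\,|i\rangle_R\otimes|u_i\rangle_A$$. A small numerical sketch (NumPy; the name `purify` is illustrative):

```python
import numpy as np

def purify(rho_A):
    """Return a pure state vector |psi>_{RA} with Tr_R |psi><psi| = rho_A."""
    p, U = np.linalg.eigh(rho_A)              # spectral decomposition of rho_A
    d = len(p)
    psi = np.zeros(d * d, dtype=complex)
    for i in range(d):
        # sqrt(p_i) |i>_R (x) |u_i>_A
        psi += np.sqrt(max(p[i], 0.0)) * np.kron(np.eye(d)[:, i], U[:, i])
    return psi

rho_A = np.array([[0.75, 0.0], [0.0, 0.25]])
psi = purify(rho_A)
rho_RA = np.outer(psi, psi.conj())
# Tracing out the reference system R recovers rho_A
rho_back = np.einsum("abad->bd", rho_RA.reshape(2, 2, 2, 2))
assert np.allclose(rho_back, rho_A)
```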

Let $$\mathcal{H}$$ and $$\mathcal{H'}$$ be Hilbert spaces. An isometry $$U$$ is a linear map from $$\mathcal{H}$$ to $$\mathcal{H'}$$ such that $$U^\dagger U=I_\mathcal{H},$$ where $$U^\dagger:\mathcal{H'}\to\mathcal{H}$$ denotes the adjoint operator of $$U$$.

A linear map $$M$$ from $$\mathcal{L(H}_A)$$ to $$\mathcal{L(H}_B)$$ is said to be completely positive if for any reference system $$R$$ of any finite size, the map $$(\mathrm{id}_R\otimes M)$$ is a positive map.

A quantum channel is a completely positive trace preserving map. A unital quantum channel is a quantum channel $$\mathcal{N}$$ such that $$\mathcal{N}(I_A)=I_B$$.

A trace norm of an operator $$M\in\mathcal{L}(\mathcal{H},\mathcal{H'})$$ is defined as $\|M\|_1=\mathrm{Tr}\sqrt{M^\dagger M}\ .$

The trace distance between two operators $$M$$ and $$M'$$ on $$\mathcal{L}(\mathcal{H},\mathcal{H'})$$ is given by $\frac{1}{2}\|M-M'\|_1\ .$

A positive operator valued measure (POVM) is a set $$\{M_j\}_j$$ of operators satisfying non-negativity and completeness, i.e. $\text{for all }j, \ M_j\geq 0, \text{ and } \sum_j M_j=I\ .$

## Properties of quantum entropy

Denote $$S(\rho)$$ the entropy value of a density matrix $$\rho\in\mathcal{D}(\mathcal{H})$$. Several quantum entropies are defined and described below. For each of them, the following properties are discussed:

• Non negativity: For a density operator $$\rho$$, the entropy $$S(\rho)$$ is non-negative:

$S(\rho)\geq 0\ .$

• Minimum value: The entropy attains its minimum value,

$S(\rho)=0\ ,$ if and only if $$\rho$$ is a pure state.

• Maximum value: The entropy is bounded from above, and a density operator $$\rho$$ achieves that bound whenever it is maximally mixed, i.e. $$\rho=\frac{1}{d}I$$, where $$d$$ is the dimension of a Hilbert space $$\mathcal{H}$$.
• Isometric invariance: If $$U:\mathcal{H_A}\to\mathcal{H_B}$$ is an isometry, then

$S(\rho)=S(U \rho\, U^{\dagger})\ .$

• Additivity: Let $$\mathcal{H}_A$$ and $$\mathcal{H}_B$$ be two Hilbert spaces. Suppose there are density operators $$\rho$$ on $$\mathcal{H}_A$$ and $$\sigma$$ on $$\mathcal{H}_B$$. Then additivity of the entropy implies

$S(\rho\otimes\sigma)=S(\rho)+S(\sigma)\ .$

• Subadditivity: Let $$\rho_{AB}$$ be a bipartite state on tensor product Hilbert space $$\mathcal{H}_A\otimes\mathcal{H}_B$$. Then subadditivity is described as

$S(\rho_{AB})\leq S(\rho_A)+S(\rho_B)\ .$

• Strong subadditivity (SSA): Let $$\rho_{ABC}$$ be a density operator on the tensor product Hilbert space $$\mathcal{H}_A\otimes\mathcal{H}_B\otimes \mathcal{H}_C$$. Then

$S(\rho_{AC})+S(\rho_{BC})\geq S(\rho_{ABC})+S(\rho_C)\ .$

• Concavity: Let $$\{\rho_j\}_j$$ be density operators on Hilbert space $$\mathcal{H}$$, for some finite collection of indices $$\{j\}$$. Let $$\{p_j\}_j$$ be a probability distribution. i.e. $$0\leq p_j\leq 1$$ and $$\sum_j p_j=1$$. Then concavity of the entropy $$S(\rho)$$ means that

$S(\rho)\geq \sum_j p_jS(\rho_j)\ ,$ where $$\rho=\sum_jp_j\rho_j$$.

• Data processing inequality: Let $$\rho$$ be a density operator on Hilbert space $$\mathcal{H}_A$$. Let $$\mathcal{N}:\mathcal{L}(\mathcal{H_A})\to \mathcal{L}(\mathcal{H_B})$$ be a unital quantum channel. Then

$S(\mathcal{N}(\rho))\geq S(\rho)\ .$

• Triangle inequality: If $$\rho_{AB}$$ is a density operator on the tensor product Hilbert space $$\mathcal{H}_A\otimes \mathcal{H}_B$$, then

$\left|S(\rho_A)-S(\rho_B)\right|\leq S(\rho_{AB})\ .$

• Continuity: The entropy $$S(\rho)$$ is continuous with respect to the trace norm. That is, if two density operators are close with respect to the trace norm, then so are their entropies.
• Monotonicity: Given a family of entropies $$\{S_\alpha(\rho)\}_\alpha$$ indexed by a real parameter $$\alpha$$, the entropies are monotonic in $$\alpha$$.

| Property | von Neumann entropy | Rényi entropy | Max-entropy | Min-entropy | Tsallis entropy | Unified (r,s) entropy |
|---|---|---|---|---|---|---|
| Non-negativity | Yes | Yes | Yes | Yes | Yes | Yes, (Hu and Ye 2006) |
| Minimum value | Yes, (Wilde 2013) | Yes, (Hu and Ye 2006) | Yes | Yes | Yes, (Hu and Ye 2006) | Yes, (Hu and Ye 2006) |
| Maximum value | Yes | Yes | Yes | Yes, (Dam and Hayden 2002) | Yes, (Audenaert 2007) | Yes, (Hu and Ye 2006) |
| Isometric invariance | Yes | Yes | Yes | Yes | Yes | Yes, (Hu and Ye 2006) |
| Additivity | Yes | Yes, (Dam and Hayden 2002) | Yes | Yes | Generally no | No, (Hu and Ye 2006) |
| Subadditivity | Yes, Araki and Lieb (1970) | No, (Linden et. al. 2013) | Yes, (Dam and Hayden 2002) | No, (Dam and Hayden 2002) | Yes, for $$q>1$$ (Audenaert 2007) | No, (Hu and Ye 2006) |
| Strong subadditivity (SSA) | Yes, (Strong Subadditivity of Quantum Entropy) (Lieb and Ruskai 1973) | No, (Linden et. al. 2013) | Yes, Konig et. al. (2009) | Yes, Konig et. al. (2009) | No, (Petz and Virosztek 2015) | Yes, for $$r\to 1$$ (Hu and Ye 2006), (Lieb and Ruskai 1973), else unknown |
| Concavity | Yes | No, (Hu and Ye 2006) | No, (Dam and Hayden 2002) | No, (Dam and Hayden 2002) | Yes, (Furuichi et. al. 2007) | Yes, for $$r\in(0,1)$$, $$s>0$$ and $$rs\leq1$$ (Hu and Ye 2006) |
| Data processing inequality | Yes, (Lindblad 1975) | Yes, for $$\alpha\in[\frac{1}{2},1)\cup(1,\infty)$$ (Frank and Lieb 2013) | Yes, (Datta 2009) | Yes, (Datta 2009) | Yes, for $$q\in(0,1)$$ (Furuichi et. al. 2004) | Yes, for $$r\to 1$$, or $$r\neq 1$$, $$s\to 0$$, or $$r\neq 1$$, $$s=1$$ |
| Triangle inequality | Yes, (Araki and Lieb 1970) | No, (Linden et. al. 2013) | Yes, by weak subadditivity | No, (Linden et. al. 2013) | Yes, for $$q\in(0,1)$$ (Audenaert 2007) | Yes, for $$r>1$$ and $$s\geq r^{-1}$$ (Rastegin 2011) |
| Continuity | Yes, (Audenaert 2007) | Yes, for $$\alpha\in(0,1)$$ (Audenaert 2007), for $$\alpha>1$$ (Chen et. al. 2017) | No, (Wehrl 1991) | Unknown | Yes, for $$q\in(0,1)$$ (Audenaert 2007), for $$q>1$$ (Raggio 1995) | Yes, for $$r>1$$ and $$s\geq 1$$ (Hu and Ye 2006) |
| Monotonicity | Not applicable | Yes, for $$\alpha>1$$ (Renyi 1961) | Not applicable | Not applicable | Yes, for $$q>1$$ | Yes, for $$r>1$$, $$s\to 0$$, or $$r>1$$, $$s=1$$ (Hu and Ye 2006), else unknown |

## von Neumann entropy

One of the most studied and most frequently used entropy functions is the von Neumann entropy. For a density operator $$\rho\in\mathcal{D}(\mathcal{H})$$, the von Neumann entropy is defined as $S(\rho)=-\mathrm{Tr}(\rho \log\rho)\ .$

This entropy is a quantum generalization of the classical Shannon entropy. If $$\{p_i\}_i$$ are the eigenvalues of a density operator $$\rho$$, then the von Neumann entropy equals the Shannon entropy of a random variable $$X_\rho$$ with probability distribution $$\{p_i\}_i$$, i.e. $\tag{1}S(\rho)=H(X_\rho)=-\sum_i p_i \log p_i \ .$ The von Neumann entropy quantifies the amount of information present in a system, and the amount of correlations between quantum systems. Some of the striking differences between Shannon and von Neumann entropies arise when looking at joint systems, conditional quantum entropy, etc. There are several families of entropies that generalize the von Neumann entropy listed below.
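Numerically, (1) reduces the computation of the von Neumann entropy to an eigenvalue problem. A minimal sketch (NumPy, taking $$\log$$ to base 2 and using the convention $$0\log 0=0$$; the function name is illustrative):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                      # convention: 0 log 0 = 0
    return float(-np.sum(p * np.log2(p)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])  # pure state: S = 0
mixed = np.eye(2) / 2                      # maximally mixed: S = log2(2) = 1
assert abs(von_neumann_entropy(pure)) < 1e-9
assert abs(von_neumann_entropy(mixed) - 1.0) < 1e-9
```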

Von Neumann entropy is directly related to the notion of source coding, which can be described as a process of encoding and decoding information. The idea is to compress information to reduce costs of storage or transmission of information. In classical information theory this is known as Shannon's source coding theorem, which is found in (Shannon 1948). In 1995 Schumacher proved a quantum analogue to Shannon's source coding theorem, which compresses a quantum information source to a rate which is exactly the von Neumann entropy. See (Schumacher, 1995), or chapter 18 in the book by Wilde (2013), for details.

Some of the properties of the von Neumann entropy discussed below are either proved or left as exercises in (Wilde 2013). Where no reference is given in this section, see that book for more details.

Non negativity

The von Neumann entropy is non negative. This follows directly from (1), as $$0\leq p_i\leq 1$$.

Minimum value

From (1) the von Neumann entropy is zero if and only if $$\rho$$ is a pure state.

Maximum value

The von Neumann entropy is upper bounded by $S(\rho)\leq \log d\ .$ Equality is achieved if and only if $$\rho$$ is the maximally mixed state.

Isometric invariance

If $$U:\mathcal{H_A}\to\mathcal{H_B}$$ is an isometry, then $S(\rho)=S(U\, \rho\, U^{\dagger})\ .$

The von Neumann entropy is additive, i.e. $S(\rho\otimes\sigma)=S(\rho)+S(\sigma)\ .$

Araki and Lieb (1970) showed that the von Neumann entropy is subadditive. That is, given a bipartite state $$\rho_{AB}$$, the following holds $S(\rho_{AB})\leq S(\rho_A)+S(\rho_B)\ .$

Lieb and Ruskai (1973) proved strong subadditivity of the von Neumann entropy. For a given tri-partite state $$\rho_{ABC}$$, the following inequality holds: $S(\rho_{AC})+S(\rho_{BC})\geq S(\rho_{ABC})+S(\rho_C)\ .$

Concavity

The von Neumann entropy is concave. Let $$\rho_j$$ be some density operators on $$\mathcal{H}$$, for some finite collection $$\{j\}$$, and let $$\{p_j\}$$ be a probability distribution. i.e. $$0\leq p_j\leq 1$$ and $$\sum_j p_j=1$$. Then concavity of the von Neumann entropy is given by $S(\rho)\geq \sum_j p_jS(\rho_j)\ ,$ where $$\rho=\sum_jp_j\rho_j$$.

Data processing inequality

Lindblad (1975) shows the data processing inequality for the von Neumann entropy. Let $$\mathcal{N}:\mathcal{L}(\mathcal{H_A})\to \mathcal{L}(\mathcal{H_B})$$ be a unital quantum channel. Then $S(\mathcal{N}(\rho))\geq S(\rho)\ .$

Triangle inequality

Araki and Lieb (1970) proved that for a bipartite state $$\rho_{AB}$$ the following holds $\left|S(\rho_A)-S(\rho_B)\right|\leq S(\rho_{AB})\ .$

Continuity

Fannes (1973) proved the following continuity inequality: for any density matrices $$\rho$$ and $$\sigma$$ such that $$T:=\frac{1}{2}\|\rho-\sigma\|_1\leq \frac{1}{2e}$$, $\left|S(\rho)-S(\sigma)\right|\leq 2T\log d-2T\log(2T)\ .$ For larger $$T$$, the following holds: $\left|S(\rho)-S(\sigma)\right|\leq 2T\log d+1/(e \ln 2)\ .$

Audenaert (2007) proved the following sharper bound for continuity of the von Neumann entropy with respect to a trace norm. The inequality is called Fannes-Audenaert inequality. For all states $$\rho$$ and $$\sigma$$ with $$T:=\frac{1}{2}\|\rho-\sigma\|_1$$, define $$H({T})=-T\log T-(1-T)\log(1-T)$$ to be the Shannon (classical) entropy of a random variable with a binary probability distribution $$\{T, 1-T\}$$. Then $\left|S(\rho)-S(\sigma)\right|\leq T\log(d-1)+H({T})\ .$
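The Fannes-Audenaert bound is easy to probe numerically: draw random density matrices, compute their trace distance $$T$$, and compare the entropy difference against $$T\log(d-1)+H(T)$$. A sketch (NumPy, base-2 logarithms; the random-state construction is an illustrative choice):

```python
import numpy as np

def vn_entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def binary_entropy(t):
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return float(-t * np.log2(t) - (1 - t) * np.log2(1 - t))

rng = np.random.default_rng(0)
d = 3
for _ in range(100):
    # two random density matrices of the form A A^dagger / Tr(A A^dagger)
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T; rho /= np.trace(rho).real
    sig = B @ B.conj().T; sig /= np.trace(sig).real
    # trace distance: half the sum of |eigenvalues| of the (Hermitian) difference
    T = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sig)))
    lhs = abs(vn_entropy(rho) - vn_entropy(sig))
    rhs = T * np.log2(d - 1) + binary_entropy(T)
    assert lhs <= rhs + 1e-9   # Fannes-Audenaert inequality
```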

### Error Bounds for SSA

Carlen and Lieb (2012) provided the following strengthening of the strong subadditivity inequality: for all tripartite states $$\rho_{ABC}$$, $S(\rho_{AC})-S(\rho_C)+S(\rho_{BC})-S(\rho_{ABC})\geq 2\max\left\{S(\rho_A)-S(\rho_{AB}), S(\rho_B)-S(\rho_{AB}), 0\right\}\ .$

## Rényi entropy

For a density matrix $$\rho\in\mathcal{D}(\mathcal{H})$$, the quantum Rényi entropy is defined as follows: $S_\alpha(\rho)=\frac{1}{1-\alpha}\log\mathrm{Tr}\left(\rho^\alpha\right),\ \alpha\in(0,1)\cup(1,\infty)\ .$ This is a quantum version of a classical Rényi entropy which was introduced in (Renyi, 1961). If $$\{p_i\}_i$$ are the eigenvalues of $$\rho$$, then the quantum Rényi entropy reduces to a Rényi entropy of a random variable $$X_\rho$$ with probability distribution $$\{p_i\}$$ $\tag{2} S_\alpha(\rho)= H_\alpha(X_\rho)=\frac{1}{1-\alpha}\log\left(\sum_i p_i^\alpha\right)\ .$

As in the classical case, von Neumann entropy is a limiting case of the Rényi entropy, as $\lim_{\alpha\to 1}S_\alpha(\rho)=S(\rho)\ ,$ which is discussed and proven in (Muller-Lennert et. al., 2013). Other special cases are discussed below, when $$\alpha$$ tends to zero and infinity.
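The limit $$\alpha\to1$$ can be checked numerically from the eigenvalues of $$\rho$$. A sketch (NumPy, base-2 logarithms; the function names are illustrative):

```python
import numpy as np

def renyi_entropy(rho, alpha):
    """S_alpha(rho) = log2(Tr(rho^alpha)) / (1 - alpha), alpha != 1."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(np.log2(np.sum(p ** alpha)) / (1 - alpha))

def vn_entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

rho = np.diag([0.5, 0.3, 0.2])
# S_alpha -> S as alpha -> 1, from either side
for alpha in (0.999, 1.001):
    assert abs(renyi_entropy(rho, alpha) - vn_entropy(rho)) < 1e-2
# monotonically decreasing for alpha > 1
assert renyi_entropy(rho, 2.0) >= renyi_entropy(rho, 3.0)
```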

Rényi entropy is widely used in information theory, for example, in restricting error probabilities in classification problems (Csiszar, 1995), entanglement-assisted local operations and classical communications conversion (Cui et. al., 2012), strong converse problem in quantum hypothesis testing (Mosonyi, Ogawa, 2015), and strong converse problem for the classical capacity of a quantum channel (Wilde et. al., 2014).

Non negativity

From (2), the Rényi entropy is non-negative.

Minimum Value

The Rényi entropy equals zero if and only if $$\rho$$ is a pure state. Hu and Ye (2006) explain that this follows from the fact that $$\mathrm{Tr}(\rho^\alpha)=1$$ if and only if $$\rho$$ is a pure state ($$\mathrm{Tr}(\rho^\alpha)\leq 1$$ for $$\alpha>1$$ and $$\mathrm{Tr}(\rho^\alpha)\geq 1$$ for $$\alpha\in(0,1)$$).

Maximum value

Dam and Hayden (2002) explain that the Rényi entropy is upper bounded by $S_\alpha(\rho)\leq \log d\ .$ Equality is achieved if and only if $$\rho$$ is a maximally mixed state.

Isometric invariance

From the definition, if $$U:\mathcal{H_A}\to\mathcal{H_B}$$ is an isometry, then $S_\alpha(\rho)=S_\alpha(U \rho\, U^{\dagger})\ ,$ for all $$\alpha\in (0,1)\cup(1,\infty)$$.

From the definition, the Rényi entropy is additive $S_\alpha(\rho\otimes\sigma)=S_\alpha(\rho)+S_\alpha(\sigma)\ ,$ for all $$\alpha\in (0,1)\cup(1,\infty)$$.

The Rényi entropy is not subadditive or strongly subadditive, which was shown by Linden et. al. (2013).

According to Dam and Hayden (2002) a weaker version of subadditivity holds: $S_\alpha(\rho_A)-S_0(\rho_B)\leq S_\alpha(\rho_{AB})\leq S_\alpha(\rho_A)+S_0(\rho_B),$ where $$S_0(\rho)$$ is defined in the Max entropy section.

Concavity

Hu and Ye (2006) showed that the Rényi entropy is not concave. However, it is Schur concave: if $$\rho\succ \sigma$$, then $S_\alpha(\rho)\leq S_\alpha(\sigma)\ ,$ for all $$\alpha\in (0,1)\cup(1,\infty) .$$ Here, $$\rho\succ \sigma$$ means that the spectrum of $$\rho$$ majorizes the spectrum of $$\sigma$$: if $$\sigma$$ has eigenvalues $$\gamma_1\geq\gamma_2\geq\cdots\geq\gamma_r$$ and $$\rho$$ has eigenvalues $$\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_r$$, then $\sum_{j=1}^k\lambda_j\geq\sum_{j=1}^k\gamma_j\ ,$ for all $$1\leq k \leq r$$. See (Dam, Hayden, 2002) for details.
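Schur concavity can be illustrated numerically: pick one spectrum that majorizes another and compare the Rényi entropies. A sketch (NumPy, base-2 logarithms, with spectra given directly as probability vectors):

```python
import numpy as np

def renyi(p, alpha):
    """Renyi entropy of a probability vector p."""
    return float(np.log2(np.sum(p ** alpha)) / (1 - alpha))

def majorizes(lam, gam):
    """True if spectrum lam majorizes gam (both sum to 1)."""
    l = np.sort(lam)[::-1]
    g = np.sort(gam)[::-1]
    return all(np.cumsum(l)[k] >= np.cumsum(g)[k] - 1e-12 for k in range(len(l)))

lam = np.array([0.7, 0.2, 0.1])   # spectrum of rho
gam = np.array([0.5, 0.3, 0.2])   # spectrum of sigma
assert majorizes(lam, gam)        # rho majorizes sigma ...
for alpha in (0.5, 2.0, 5.0):
    assert renyi(lam, alpha) <= renyi(gam, alpha)  # ... so S_alpha(rho) <= S_alpha(sigma)
```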

Data processing inequality

The sandwiched Rényi relative entropy is defined as

$D_\alpha(\rho||\sigma):= \begin{cases} (\alpha-1)^{-1}\log[\text{Tr}((\sigma^{\frac{1-\alpha}{2\alpha}}\rho\sigma^{\frac{1-\alpha}{2\alpha}})^\alpha)] & \text{ when } \text{supp}(\rho)\subseteq \text{supp}(\sigma) \\ +\infty & \text{else.} \\ \end{cases}\$

Frank and Lieb (2013) proved the data processing inequality for all $$\alpha\in[\frac{1}{2},1)\cup(1,\infty)$$, i.e.,

$D_\alpha(\rho||\sigma)\geq D_\alpha(\mathcal{N}(\rho)||\mathcal{N}(\sigma)),$

for $$\rho$$ a density operator, $$\sigma$$ a positive operator, and $$\mathcal{N}$$ a quantum channel.

With this definition, we see that $$S_\alpha(\rho)=-D_\alpha(\rho||I),$$ where $$I$$ is the identity operator. Thus, given a unital quantum channel $$\mathcal{N}$$, we have $S_\alpha(\rho)=-D_\alpha(\rho||I)\leq -D_\alpha(\mathcal{N}(\rho)||\mathcal{N}(I))=-D_\alpha(\mathcal{N}(\rho)||I)=S_\alpha(\mathcal{N}(\rho))\ .$

Triangle inequality

The Rényi entropy does not satisfy the triangle inequality, which was proved by Linden et. al. (2013).

Continuity

Audenaert (2007) showed the natural generalization of the sharp continuity of the von Neumann entropy with respect to the trace norm, which holds for $$\alpha\in(0,1).$$ For all $$d$$-dimensional states $$\rho$$ and $$\sigma$$ with trace distance $$T:=\frac{1}{2}\|\rho-\sigma\|_1$$, Audenaert showed that for $$0<\alpha<1$$ $\left|S_\alpha(\rho)-S_\alpha(\sigma)\right|\leq \frac{1}{1-\alpha}\log\left[(1-T)^\alpha+(d-1)^{1-\alpha}T^\alpha\right] \ .$ This bound is sharp: no further improvement is possible.

For $$\alpha>1$$, Chen et. al. (2017) show the following continuity bound $\left|S_\alpha(\rho)-S_\alpha(\sigma)\right|\leq \frac{d^{\alpha-1}}{1-\alpha}\left[1-(1-T)^\alpha-(d-1)^{1-\alpha}T^\alpha\right]\ .$

Datta and Hanson (2017) also show a uniform continuity bound for the Rényi entropy: for $$\epsilon\in(0,1]$$ and any density operators $$\rho$$ and $$\sigma$$ such that $$T \leq \epsilon$$, the following holds for $$\alpha\in(0,1)$$:

$\left|S_\alpha(\rho)-S_\alpha(\sigma)\right|\leq \begin{cases} (1-\alpha)^{-1}\log\left((1-\epsilon)^\alpha+(d-1)^{1-\alpha}\epsilon^\alpha\right) & \epsilon<1-\frac{1}{d} \\ \log d & \epsilon\geq 1-\frac{1}{d} \\ \end{cases}\ .$

Monotonicity

Muller-Lennert et. al. (2013) proved that the Rényi entropies are monotonically decreasing in $$\alpha$$, whenever $$\alpha>1$$, i.e. $\text{for }1<\alpha_1\leq\alpha_2, \ \ S_{\alpha_1}(\rho)\geq S_{\alpha_2}(\rho)\ .$

## Max entropy (Hartley entropy)

The Max (Hartley) entropy is defined as a limit of Rényi entropies $S_0(\rho)=\lim_{\alpha\to 0}S_\alpha(\rho)=\log \mathrm{rank}(\rho)\ .$ Following Muller-Lennert et. al. (2013), set a convention throughout that $$\log(0)=0$$.
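Since $$S_0(\rho)=\log \mathrm{rank}(\rho)$$, computing the Max entropy is a rank computation. A minimal sketch (NumPy, base-2 logarithm; the numerical tolerance used for the rank is an implementation choice):

```python
import numpy as np

def max_entropy(rho, tol=1e-12):
    """S_0(rho) = log2(rank(rho))."""
    return float(np.log2(np.linalg.matrix_rank(rho, tol=tol)))

pure = np.diag([1.0, 0.0, 0.0])   # rank 1: S_0 = 0
mixed = np.eye(3) / 3             # full rank: S_0 = log2(3)
assert max_entropy(pure) == 0.0
assert abs(max_entropy(mixed) - np.log2(3)) < 1e-12
```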

Konig et. al. (2009) describe and prove that the measure of security or secrecy of a system $$X$$ relative to system $$B$$ is quantified by the Max entropy, where $$X$$ is a classical system and $$B$$ is a quantum system. In addition, this notion generalizes naturally between two quantum systems, where the security or secrecy of one quantum system relative to another quantum system is also described by the Max entropy between the two systems. Modern day cryptography, electronic voting, or other securities and secrecies of a 2-party communication use a notion of bit commitment, which is described in (Buhrman et. al., 2006).

Non negativity

By definition and convention, $$S_0(\rho)$$ is non negative.

Minimum Value

From definition, the Max entropy is zero if and only if $$\mathrm{rank}(\rho)=1$$, which happens if and only if $$\rho$$ is a pure state.

Maximum value

Dam and Hayden (2002) provided an upper bound of the Max entropy $S_0(\rho)\leq \log d\ ,$ where $$d$$ is the dimension of the associated Hilbert space. Equality is achieved if and only if the density operator $$\rho$$ is a full rank operator.

Isometric invariance

If $$U:\mathcal{H_A}\to\mathcal{H_B}$$ is an isometry, then $S_0(\rho)=S_0(U \rho\, U^{\dagger})\ .$

The Max entropy is additive, i.e. $S_0(\rho\otimes\sigma)=S_0(\rho)+S_0(\sigma)\ .$ This follows from the fact that $$\mathrm{rank}(\rho\otimes\sigma)=\mathrm{rank}(\rho)\,\mathrm{rank}(\sigma)$$.

Dam and Hayden (2002) proved that the Max entropy is subadditive, i.e. $S_0(\rho_{AB})\leq S_0(\rho_A)+S_0(\rho_B)\ .$

As mentioned by Linden et. al. (2013) max-entropy does not satisfy the strong subadditivity inequality, but according to Konig et. al. (2009) it satisfies a version of it.

In (Konig et. al., 2009), the conditional max entropy is defined as $S_{\max}(A|B)_\rho:=-S_{\min}(A|C)_\rho\ ,$ where $$\rho_{ABC}$$ is a purification of $$\rho_{AB}$$ and the conditional min entropy $$S_{\min}(A|C)_\rho$$ is defined in (3). Then, Konig et. al. (2009) show the following inequality $S_{\max}(A|BC)_\rho\leq S_{\max}(A|B)_\rho\ ,$ which can be thought of as generalizing the strong subadditivity of the von Neumann entropy.

Concavity

Hu and Ye (2006) showed that the Max entropy is not concave. However, it is Schur concave: if $$\rho\succ \sigma$$, then $S_0(\rho)\leq S_0(\sigma)\ .$ Here, $$\rho\succ \sigma$$ means that the spectrum of $$\rho$$ majorizes the spectrum of $$\sigma$$: if $$\sigma$$ has eigenvalues $$\gamma_1\geq\gamma_2\geq\cdots\geq\gamma_r$$ and $$\rho$$ has eigenvalues $$\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_r$$, then $\sum_{j=1}^k\lambda_j\geq\sum_{j=1}^k\gamma_j\ ,$ for all $$1\leq k \leq r$$. See (Dam, Hayden, 2002) for details.

Data processing inequality

The Max entropy satisfies the data process inequality. For a unital quantum channel $$\mathcal{N}:\mathcal{L}(\mathcal{H_A})\to \mathcal{L}(\mathcal{H_B})$$, the following holds $S_0(\mathcal{N}(\rho))\geq S_0(\rho)\ .$ See (Datta, 2009) for more details.

Triangle inequality

Since the Max entropy satisfies subadditivity, it follows that it also satisfies the triangle inequality: $\left|S_0(\rho_A)-S_0(\rho_B)\right|\leq S_0(\rho_{AB})\ .$ This is shown by taking $$\alpha=0$$ in the Weak subadditivity inequality.

Continuity

Max entropy is not continuous on $$\mathcal{D}(\mathcal{H})$$, since the rank of a matrix is not continuous, see (Wehrl, 1991) or (Muller-Lennert et. al., 2013) for more details.

## Min entropy

The Min entropy is defined as a limit of Rényi entropies $S_\infty(\rho)=\lim_{\alpha\to \infty}S_\alpha(\rho)=-\log \|\rho\|\ ,$ where $$\|\cdot\|$$ denotes the operator norm.
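The Min entropy is simply the negative logarithm of the largest eigenvalue of $$\rho$$. A sketch (NumPy, base-2 logarithm; it also checks that a large-$$\alpha$$ Rényi entropy approaches the Min entropy from above):

```python
import numpy as np

def min_entropy(rho):
    """S_inf(rho) = -log2 ||rho|| = -log2(largest eigenvalue of rho)."""
    return float(-np.log2(np.linalg.eigvalsh(rho)[-1]))  # eigvalsh is ascending

rho = np.diag([0.5, 0.3, 0.2])
assert abs(min_entropy(rho) - 1.0) < 1e-12   # -log2(0.5) = 1

# A Renyi entropy at large alpha is already close to, and above, S_inf
p = np.array([0.5, 0.3, 0.2])
alpha = 50.0
s_alpha = np.log2(np.sum(p ** alpha)) / (1 - alpha)
assert min_entropy(rho) <= s_alpha + 1e-6
```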

Typically, entropy has operational meaning in the storage or transmission of data and information between systems. In contrast, Konig et. al. (2009) prove that the Min entropy of classical information is interpreted as a guessing probability, that is, the probability of guessing classical values of a system $$X$$ correctly using some optimal strategy. This strategy is described by a positive operator valued measure (POVM). This interpretation extends to guessing probabilities between two quantum systems.

Non negativity

Since $$\|\rho\|\leq 1$$ the Min entropy is non-negative.

Minimum Value

From definition, the Min entropy is zero if and only if $$\|\rho\|=1$$, which happens if and only if $$\rho$$ is a pure state.

Maximum value

Dam and Hayden (2002) explain the Min entropy is upper bounded by $S_\infty(\rho)\leq \log d\ .$ Equality is achieved if and only if $$\rho$$ is a maximally mixed state.

Isometric invariance

If $$U:\mathcal{H_A}\to\mathcal{H_B}$$ is an isometry, then $S_\infty(\rho)=S_\infty(U \rho\, U^{\dagger})\ .$ This is because $$\|\rho\|=\|U\rho U^\dagger\|\ .$$

The Min entropy is additive $S_\infty(\rho\otimes\sigma)=S_\infty(\rho)+S_\infty(\sigma)\ .$ This follows from the fact that $$\|\rho\otimes\sigma\|=\|\rho\|\, \|\sigma\|$$.

Dam and Hayden (2002) explain that the Min entropy is not subadditive.

Let $$\mathcal{H}_A$$ and $$\mathcal{H}_B$$ be two Hilbert spaces. In (Konig et. al., 2009), conditional min entropy is defined as follows: $\tag{3}S_{\mathrm{min}}(A|B)_\rho=-\inf_{\sigma_B} D_\infty(\rho_{AB}\|\text{id}_A\otimes\sigma_B) \ ,$ where $$\sigma_B$$ is a density operator on $$\mathcal{H}_B$$, and where $D_\infty(\tau\|\tau'):=\inf\{\lambda\in\mathbb{R} : \tau\leq2^\lambda\tau'\}\ .$

Konig et. al. (2009) later explain the following inequality: $S_{\min}(A|BC)_\rho\leq S_{\min}(A|B)_\rho\ .$ This inequality is considered the min entropy analogue of the strong subadditivity of the von Neumann entropy.

According to Dam and Hayden (2002), a weaker version of subadditivity holds: for a bipartite density state $$\rho_{AB}$$ the following holds $S_\infty(\rho_A)-S_0(\rho_B)\leq S_\infty(\rho_{AB})\leq S_\infty(\rho_A)+S_0(\rho_B)\ ,$ where $$S_0(\rho)$$ is defined in the Max entropy section.

Concavity

Hu and Ye (2006) showed that the Min entropy is not concave. However, it is Schur concave: if $$\rho\succ \sigma$$, then $S_\infty(\rho)\leq S_\infty(\sigma)\ .$ Here, $$\rho\succ \sigma$$ means that the spectrum of $$\rho$$ majorizes the spectrum of $$\sigma$$: if $$\sigma$$ has eigenvalues $$\gamma_1\geq\gamma_2\geq\cdots\geq\gamma_r$$ and $$\rho$$ has eigenvalues $$\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_r$$, then $\sum_{j=1}^k\lambda_j\geq\sum_{j=1}^k\gamma_j\ ,$ for all $$1\leq k \leq r$$. See (Dam, Hayden, 2002) for details.

Data processing inequality

The Min entropy satisfies the data process inequality: for a unital quantum channel $$\mathcal{N}:\mathcal{L}(\mathcal{H_A})\to \mathcal{L}(\mathcal{H_B})$$, the following holds $S_\infty(\mathcal{N}(\rho))\geq S_\infty(\rho)\ .$ See (Datta, 2009) for more information.

Triangle inequality

The Min entropy does not satisfy the triangle inequality, which is explained in (Linden et. al., 2013).

Continuity

The authors are unaware of any result that either proves or disproves continuity of the Min entropy with respect to the trace norm.

## Tsallis entropy

The quantum Tsallis entropy is defined as follows: $S_q(\rho)=\frac{1}{1-q}\left(\mathrm{Tr}\left(\rho^q\right)-1\right), \ q\in(0,1)\cup(1,\infty)\ .$ This is a quantum version of a classical Tsallis entropy, which was introduced by Tsallis (1988). If $$\{p_i\}_i$$ are the eigenvalues of $$\rho$$, then the quantum Tsallis entropy reduces to a Tsallis entropy of a random variable $$X_\rho$$ with probability distribution $$\{p_i\}$$ $\tag{4} S_q(\rho)= H_q(X_\rho)=\frac{1}{1-q}\left(\sum_i p_i^q-1\right)\ .$

As in the classical case, the von Neumann entropy is a limiting case of the Tsallis entropy $S(\rho)=\lim_{q\to 1}S_q(\rho)\ .$
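The limit $$q\to 1$$ can be verified numerically. Note that with the definition above, the limiting von Neumann entropy comes out in natural logarithms. A sketch (NumPy; the function names are illustrative):

```python
import numpy as np

def tsallis_entropy(rho, q):
    """S_q(rho) = (Tr(rho^q) - 1) / (1 - q), q != 1."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float((np.sum(p ** q) - 1.0) / (1.0 - q))

def vn_entropy_nats(rho):
    """Von Neumann entropy with natural log, matching the q -> 1 limit."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

rho = np.diag([0.6, 0.4])
# q -> 1 recovers the von Neumann entropy (in nats), from either side
for q in (0.999, 1.001):
    assert abs(tsallis_entropy(rho, q) - vn_entropy_nats(rho)) < 1e-2
```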

This family of entropies has many applications in entanglement and thermodynamics (Caruso, Tsallis, 2008), nonextensive statistics (Nobre et. al., 2011), optical lattice theory (Berhamini et. al., 2006), particle charging (Aad et. al., 2011), statistical mechanics (Gell-Mann et. al., 2005) or (Majhi, 2017), and many other fields of mathematics and physics as in (Abe, Okamoto, 2001).

Non negativity

From (Hu, Ye, 2006), the Tsallis entropy is non-negative, i.e. $S_q(\rho)\geq 0\ .$

Minimum Value

The Tsallis entropy is zero if and only if the density operator is a pure state. Hu and Ye (2006) explain that this follows from the fact that $$\mathrm{Tr}(\rho^q)=1$$ if and only if $$\rho$$ is a pure state.

Maximum value

Audenaert (2007) proved that the Tsallis entropy is bounded by $S_q(\rho)\leq\frac{1}{1-q}\left(d^{1-q}-1\right)\ .$ This bound is achieved when the density operator is maximally mixed.

Isometric invariance

Isometric invariance of Tsallis entropy is proven in (Furuichi et. al., 2007). If $$U:\mathcal{H_A}\to\mathcal{H_B}$$ is an isometry, then $S_q(\rho)=S_q(U \rho\, U^{\dagger})\ .$

Straightforward from definition, the Tsallis entropy satisfies $\tag{5} S_q(\rho\otimes\sigma)=S_q(\rho)+S_q(\sigma)\ +(1-q)S_q(\rho)\,S_q(\sigma)\ ,$

for all $$q\in (0,1)\cup(1,\infty)$$. Hence, the Tsallis entropy is additive if and only if $$(1-q)S_q(\rho)S_q(\sigma)=0\ ,$$ i.e. either $$\rho$$ or $$\sigma$$ is a pure state.
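The pseudo-additivity relation (5) is an exact algebraic identity and can be confirmed numerically. A sketch (NumPy) for a pair of diagonal density matrices:

```python
import numpy as np

def tsallis_entropy(rho, q):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float((np.sum(p ** q) - 1.0) / (1.0 - q))

rho = np.diag([0.6, 0.4])
sig = np.diag([0.8, 0.2])
q = 2.0
# S_q(rho (x) sig) = S_q(rho) + S_q(sig) + (1-q) S_q(rho) S_q(sig)
lhs = tsallis_entropy(np.kron(rho, sig), q)
rhs = (tsallis_entropy(rho, q) + tsallis_entropy(sig, q)
       + (1 - q) * tsallis_entropy(rho, q) * tsallis_entropy(sig, q))
assert abs(lhs - rhs) < 1e-12
```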

From (5), it follows that the Tsallis entropy is not subadditive for $$q<1$$. Audenaert (2007) showed that the Tsallis entropy is subadditive for $$q>1$$ $S_q(\rho_{AB})\leq S_q(\rho_A)+S_q(\rho_B)\ .$

Petz and Virosztek (2015) proved that the only strongly subadditive Tsallis entropy is the von Neumann entropy, i.e. $$q\to 1\ .$$

Concavity

Hu and Ye (2006) prove the concavity of the quantum Tsallis entropy. Let $$\{\rho_j\}_j$$ be density operators on Hilbert space $$\mathcal{H}$$, for some finite collection of indices $$\{j\}$$, and let $$\{p_j\}_j$$ be a probability distribution. i.e. $$0\leq p_j\leq 1$$ and $$\sum_j p_j=1$$, then $S_q(\rho)\geq \sum_j p_jS_q(\rho_j)\ ,$ where $$\rho=\sum_jp_j\rho_j$$.

Data processing inequality

For $$q\in(0,1)$$, the following data processing inequality is proven in (Furuichi et. al., 2007):

For any density operator $$\rho$$ on Hilbert space $$\mathcal{H}_A$$ and any unital quantum channel $$\mathcal{N}:\mathcal{L}(\mathcal{H_A})\to \mathcal{L}(\mathcal{H_B})$$, $S_q(\mathcal{N}(\rho))\geq S_q(\rho)\ .$

Triangle inequality

According to Audenaert (2007) the triangle inequality holds for the Tsallis entropy when $$q\in(0,1)$$ $\left|S_q(\rho_A)-S_q(\rho_B)\right|\leq S_q(\rho_{AB})\ .$ This is shown by using the subadditivity property of the Tsallis entropy for $$q\in(0,1)$$, a purification argument, and by using the fact that marginal entropies of pure states are the same, which is shown in generality in (Hu, Ye, 2006) or for the von Neumann entropy in Araki and Lieb (1970).

Continuity

Raggio (1995) first showed continuity of the Tsallis entropy with respect to the trace norm. For $$q>1$$, $\left|S_q(\rho)-S_q(\sigma)\right|\leq q(q-1)^{-1}\|\rho-\sigma\|_1\ ,$ where $$\rho$$ and $$\sigma$$ are density operators on finite dimensional Hilbert space.

Audenaert (2007) showed the natural generalization of the sharp continuity of the von Neumann entropy with respect to the trace norm, which holds for $$q\in(0,1).$$ For all $$d$$-dimensional states $$\rho$$ and $$\sigma\ ,$$ such that their trace distance is given by $$T:=\frac{1}{2}\|\rho-\sigma\|_1$$, then $\left|S_q(\rho)-S_q(\sigma)\right|\leq \frac{1}{1-q}\left((1-T)^q-1+(d-1)^{1-q}T^q\right) \ .$

Datta and Hanson (2017) also show a uniform continuity bound for the Tsallis entropy: for $$\epsilon\in(0,1]$$ and any density operators $$\rho$$ and $$\sigma$$ such that $$T \leq \epsilon$$, the following holds for $$q\in(0,1)\cup(1,\infty)$$:

$\left|S_q(\rho)-S_q(\sigma)\right|\leq \begin{cases} (1-q)^{-1}\left((1-\epsilon)^q+(d-1)^{1-q}\epsilon^q-1\right) & \epsilon<1-\frac{1}{d} \\ (d^{1-q}-1)(1-q)^{-1} & \epsilon\geq 1-\frac{1}{d} \\ \end{cases}\ .$

Monotonicity

Tsallis entropy $$S_q^T(\rho)$$ can be viewed as a function of the Rényi entropy $$S_q^R(\rho)$$: $S_q^T(\rho)=F_q(S_q^R(\rho))\ ,$ where $F_q(x)=(1-q)^{-1}[e^{(1-q)x}-1]\ .$ The function $$F_q(x)$$ is monotonically increasing in $$x$$ and monotonically decreasing in $$q>1$$. Since the Rényi entropy is monotonically decreasing for all $$q>1$$, the Tsallis entropy is monotonically decreasing as well: $\text{for } 1<q_1<q_2, \ S_{q_2}^T(\rho)\leq S_{q_1}^T(\rho)\ .$
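Both the relation $$S_q^T=F_q(S_q^R)$$ and the monotonicity in $$q$$ can be verified on a concrete spectrum; a small sketch assuming NumPy, with an arbitrary diagonal state:

```python
import numpy as np

def tsallis(rho, q):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return (np.sum(ev**q) - 1.0) / (1.0 - q)

def renyi(rho, q):
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return np.log(np.sum(ev**q)) / (1.0 - q)

rho = np.diag([0.5, 0.3, 0.2])
q1, q2 = 1.5, 2.5

# Tsallis as F_q applied to the Renyi entropy: F_q(x) = (e^{(1-q)x} - 1)/(1-q)
F = lambda q, x: (np.exp((1 - q) * x) - 1.0) / (1.0 - q)
assert np.isclose(tsallis(rho, q1), F(q1, renyi(rho, q1)))

# Monotone decreasing in q for q > 1 (strict here, since the spectrum is non-flat)
assert tsallis(rho, q2) < tsallis(rho, q1)
```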

## Unified entropy

The von Neumann entropy is a limiting case of the Rényi entropy and Tsallis entropy. The Unified entropies, introduced by Hu and Ye (2006), are a family of entropies that includes these three types of entropies as particular or limiting cases. For a given density operator $$\rho$$, the Unified entropy is defined as $S_r^s(\rho)=\frac{1}{(1-r)s}\left[\left(\mathrm{Tr}\left(\rho^r\right)\right)^s-1\right]\ ,$ with parameters $$r>0, r\neq 1, \text{ and } s\neq 0\ .$$

The Unified entropy recovers the von Neumann, Rényi, and Tsallis entropies as special or limiting cases: $S_r^1(\rho)\ \text{is the Tsallis entropy } S_r^T(\rho),$ $\lim_{r\to 1}S_r^s(\rho)\ \text{is the von Neumann entropy } S(\rho),$ $\lim_{s\to 0}S_r^s(\rho)\ \text{is the Rényi entropy } S_r^R(\rho).$
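These limiting cases can be checked numerically. The sketch below assumes NumPy and approximates each limit by evaluating the Unified entropy at parameters close to the limit point; the tolerances and the test state are arbitrary choices.

```python
import numpy as np

def spec(rho):
    """Nonzero eigenvalues of a density matrix."""
    ev = np.linalg.eigvalsh(rho)
    return ev[ev > 1e-12]

def unified(rho, r, s):
    """Unified entropy S_r^s(rho) = [(Tr rho^r)^s - 1] / ((1-r) s)."""
    return ((np.sum(spec(rho)**r))**s - 1.0) / ((1.0 - r) * s)

def tsallis(rho, q):
    return (np.sum(spec(rho)**q) - 1.0) / (1.0 - q)

def renyi(rho, q):
    return np.log(np.sum(spec(rho)**q)) / (1.0 - q)

def von_neumann(rho):
    ev = spec(rho)
    return -np.sum(ev * np.log(ev))

rho = np.diag([0.5, 0.3, 0.2])
r = 2.0
assert np.isclose(unified(rho, r, 1.0), tsallis(rho, r))                      # s = 1: Tsallis
assert np.isclose(unified(rho, r, 1e-8), renyi(rho, r), atol=1e-6)            # s -> 0: Renyi
assert np.isclose(unified(rho, 1 + 1e-8, 1.0), von_neumann(rho), atol=1e-6)   # r -> 1: von Neumann
```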

Non negativity

Hu and Ye (2006) prove that the quantum Unified entropy is non-negative, i.e. for all parameters $$r>0, r\neq 1, \text{ and } s\neq 0$$, $S_r^s(\rho)\geq 0\ ,$ where $$\rho$$ is any density operator.

Minimum Value

For all parameters $$r>0, r\neq 1, \text{ and } s\neq 0$$, a density operator $$\rho$$ is pure if and only if $$S_r^s(\rho)=0$$ (Hu, Ye, 2006).

Maximum value

Hu and Ye (2006) show that for finite dimension $$d$$, the Unified entropy is bounded in the following way: $S_r^s(\rho)\leq \frac{1}{(1-r)s}\left[d^{(1-r)s}-1\right]\ ,$ where $$r\neq 1 \text{, and }s\neq 0\ .$$ Equality holds if and only if $$\rho$$ is the maximally mixed state $$\frac{1}{d}I$$, i.e. an equidistribution of full rank $$d$$.
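A quick numerical check of the maximum-value bound, assuming NumPy: equality is attained at the maximally mixed state, and a random state stays below the bound. The dimension and parameters are illustrative.

```python
import numpy as np

def unified(rho, r, s):
    """Unified entropy S_r^s(rho) = [(Tr rho^r)^s - 1] / ((1-r) s)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return ((np.sum(ev**r))**s - 1.0) / ((1.0 - r) * s)

d, r, s = 4, 2.0, 1.5
bound = (d**((1 - r) * s) - 1.0) / ((1.0 - r) * s)

# Equality at the maximally mixed state I/d.
rho_max = np.eye(d) / d
assert np.isclose(unified(rho_max, r, s), bound)

# A random state should not exceed the bound.
rng = np.random.default_rng(3)
a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
m = a @ a.conj().T
rho = m / np.trace(m).real
assert unified(rho, r, s) <= bound + 1e-9
```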

Isometric invariance

If $$U:\mathcal{H_A}\to\mathcal{H_B}$$ is an isometry, then for all parameters $$r>0, r\neq 1, \text{ and } s\neq 0\ ,$$ $S_r^s(\rho)=S_r^s(U \rho\, U^{\dagger})\ .$ This is because the trace is invariant under isometries.

The following results are proven in (Hu, Ye, 2006). Suppose $$\rho$$ and $$\sigma$$ are density operators, and $$\rho\otimes \sigma$$ is a product state. Then for any $$r>0, r\neq 1, \text{ and } s\neq 0\ ,$$ $\tag{6} S_r^s(\rho\otimes \sigma)=S_r^s(\rho)+S_r^s(\sigma)+(1-r)s\, S_r^s(\rho)S_r^s(\sigma)\ .$ Whenever $$0<r<1, s<0\text{ or } r\geq 1, s\geq 0\ ,$$ then $\tag{7} S_r^s(\rho\otimes\sigma)\leq S_r^s(\rho)+S_r^s(\sigma)\ .$ Whenever $$r>1, s<0 \text{ or } 0<r<1, s>0$$, then $\tag{8} S_r^s(\rho\otimes\sigma)\geq S_r^s(\rho)+S_r^s(\sigma)\ .$

Inequalities (7) and (8) become equalities if and only if either of the states $$\rho$$ or $$\sigma$$ is a pure state.

In general, subadditivity fails for generalized quantum entropies. However, from (7), it follows that the Unified quantum entropy is subadditive whenever $$0<r<1, s<0\text{ or } r\geq 1, s\geq 0\ ,$$ i.e. $S_r^s(\rho_{AB})\leq S_r^s(\rho_A)+S_r^s(\rho_B)\ ,$ where $$\rho_{AB}=\rho_A\otimes\rho_B$$ is a product state on Hilbert space $$\mathcal{H}_{AB}$$.
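The pseudo-additivity identity (6) and the subadditivity regime (7) can be verified directly on product states, since $$\mathrm{Tr}\,(\rho\otimes\sigma)^r = \mathrm{Tr}\,\rho^r\cdot\mathrm{Tr}\,\sigma^r$$. A sketch assuming NumPy, with arbitrary diagonal states and parameters $$r\geq 1, s\geq 0$$:

```python
import numpy as np

def unified(rho, r, s):
    """Unified entropy S_r^s(rho) = [(Tr rho^r)^s - 1] / ((1-r) s)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return ((np.sum(ev**r))**s - 1.0) / ((1.0 - r) * s)

rho = np.diag([0.7, 0.3])
sigma = np.diag([0.6, 0.25, 0.15])
r, s = 2.0, 0.5

lhs = unified(np.kron(rho, sigma), r, s)
a, b = unified(rho, r, s), unified(sigma, r, s)
rhs = a + b + (1 - r) * s * a * b    # pseudo-additivity, eq. (6)
assert np.isclose(lhs, rhs)
assert lhs <= a + b + 1e-12          # subadditivity regime, eq. (7): r >= 1, s >= 0
```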

The Unified entropy is strongly subadditive in the von Neumann limiting case, as explained above. Strong subadditivity of the Unified entropy is unknown for other parameters.

Concavity

Let $$0<r<1, s>0, \text{ and } rs\leq 1$$. For these parameters the Unified entropy is concave: let $$\{\rho_j\}_j$$ be density operators on Hilbert space $$\mathcal{H}$$, for some finite collection of indices $$\{j\}$$, and let $$\{p_j\}_j$$ be a probability distribution ($$0\leq p_j\leq 1$$ and $$\sum_j p_j=1$$). Then $S_r^s(\rho)\geq \sum_j p_jS_r^s(\rho_j)\ ,$ where $$\rho=\sum_jp_j\rho_j$$. This is proven by Hu and Ye (2006).

### Error bound for convex combinations

Hu and Ye (2006) prove the following result: let $$\{\rho_j\}_j$$ be density operators on Hilbert space $$\mathcal{H}$$, with $$1\leq j \leq n$$, and let $$\{p_j\}_j$$ be a probability distribution ($$0\leq p_j\leq 1$$ and $$\sum_j p_j=1$$). For parameters $$r, s\geq 1$$, $\sum_j p_jS_r^s(\rho_j)+F_r^s(p_1,p_2,\dots,p_n)\geq S_r^s(\rho)\geq \sum_j p_jS_r^s(\rho_j)\ ,$ where $$\rho=\sum_jp_j\rho_j\text{ and } F_r^s(p_1,p_2,\dots,p_n)=((1-r)s)^{-1}(p_1^{rs}+p_2^{rs}+\cdots+p_n^{rs}-1).$$
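A numerical sanity check of both the lower bound (concavity) and the upper bound with the correction term $$F_r^s$$, assuming NumPy and random states drawn as normalized Wishart-type matrices:

```python
import numpy as np

def unified(rho, r, s):
    """Unified entropy S_r^s(rho) = [(Tr rho^r)^s - 1] / ((1-r) s)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return ((np.sum(ev**r))**s - 1.0) / ((1.0 - r) * s)

def rand_dm(d, rng):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = a @ a.conj().T
    return m / np.trace(m).real

rng = np.random.default_rng(4)
d, r, s = 3, 2.0, 1.0
p = np.array([0.4, 0.35, 0.25])
states = [rand_dm(d, rng) for _ in p]
rho = sum(pj * rj for pj, rj in zip(p, states))

avg = sum(pj * unified(rj, r, s) for pj, rj in zip(p, states))
F = (np.sum(p**(r * s)) - 1.0) / ((1.0 - r) * s)   # correction term F_r^s(p_1, ..., p_n)
val = unified(rho, r, s)
assert avg - 1e-9 <= val <= avg + F + 1e-9
```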

Data processing inequality

The data processing inequality is known only for the von Neumann, Rényi, and Tsallis limiting cases. The authors are not aware of any results for other parameters.

Triangle inequality

For $$r>1$$ and $$s\geq r^{-1}$$, Rastegin (2011) proves the triangle inequality for Unified entropies: $\left |S_r^s(\rho_A)-S_r^s(\rho_B)\right|\leq S_r^s(\rho_{AB})\ ,$ where $$\rho_{AB}$$ is a density operator on Hilbert space $$\mathcal{H}_{AB}$$.

Continuity

Hu and Ye (2006) prove Lipschitz continuity for the Unified quantum entropy, which is stronger than continuity: for parameters $$r>1$$ and $$s\geq 1$$, $\left|S_r^s(\rho)-S_r^s(\sigma)\right|\leq \frac{1}{r(r-1)}\|\rho - \sigma\|_1\ ,$ where $$\|\rho - \sigma\|_1$$ denotes the trace norm of the difference of the density operators $$\rho$$ and $$\sigma\ .$$

Monotonicity

For $$s=1$$ the Unified entropy is the Tsallis entropy, which is monotonic for $$r>1$$, as explained above. The limiting case of the Rényi entropy is also monotonic for $$r>1$$, as explained above. For other parameters, the authors are not aware of any results.

## References

• Aad, G.; Abbott, B. et al. (2011). Charged-particle multiplicities in pp interactions measured with the ATLAS detector at the LHC New Journal of Physics 13: .
• Abe, Sumiyoshi and Okamoto, Yuko (2001). Nonextensive Statistical Mechanics and Its Applications 560 Springer, 1. 278 ISBN 9783540409199
• Aberg, Johan; Furrer, Fabian and Renner, Renato (2011). Min- and Max-Entropy in Infinite Dimensions Communications in Mathematical Physics 306(1): 165–186. doi:10.1007/s00220-011-1282-1.
• Araki, Huzihiro and Lieb, Elliott H. (1970). Entropy inequalities. Communications in Mathematical Physics 18(2): 160-170. doi:10.1007/BF01646092.
• Audenaert, Koenraad M R (2007). A sharp continuity estimate for the von Neumann entropy. Journal of Physics A: Mathematical and Theoretical 40(28): 1827-1836. doi:10.1088/1751-8113/40/28/S18.
• Audenaert, Koenraad MR (2007). Subadditivity of q-entropies for q>1 Journal of Mathematical Physics 48(8): 083507. doi:10.1063/1.2771542.
• Batle, Jay; Plastino, Angel Ricardo; Casas, Montserrat and Plastino, Angelo (2002). Conditional q-entropies and quantum separability: a numerical exploration Journal of Physics A: Mathematical and General 35(48): 10311–10324. doi:10.1088/0305-4470/35/48/307.
• Bergamini, Silvia; Douglas, Phillips and Renzoni, Ferruccio (2006). Tunable Tsallis Distributions in Dissipative Optical Lattices Physical Review Letters 96: 110601. doi:10.1103/PhysRevLett.96.110601.
• Buhrman, Harry; Christandl, Matthias; Hayden, Patrick; Lo, Hoi-Kwong and Wehner, Stephanie (2006). Security of Quantum Bit String Commitment Depends on the Information Measure Physical Review Letters 97(25): 250501. doi:10.1103/PhysRevLett.97.250501.
• Carlen, Eric A and Lieb, Elliott H (2012). Bounds for entanglement via an extension of strong subadditivity of entropy. Letters in Mathematical Physics 101: 1-11. doi:10.1007/s11005-012-0565-6.
• Caruso, Filippo and Tsallis, Constantino (2008). Nonadditive entropy reconciles the area law in quantum systems with classical thermodynamics Physical Review E 78: 021102.
• Chen, Zhihua; Ma, Zhihao; Nikoufar, Ismail and Fei, Shao-Ming (2017). Sharp Continuity Bounds for Entropy and Conditional Entropy Science China Physics, Mechanics, and Astronomy 60: 20321. doi:10.1007/s11433-016-0367-x.
• Csiszar, Imre (1995). Generalized cutoff rates and Renyi's information measures IEEE Transactions on information theory 41(1): 26-34. doi:10.1109/18.370121.
• Cui, Jian; Gu, Mile; Kwek, Leong Chuan; Santos, Marcelo F.; Fan, Heng and Vedral, Vlatko (2012). Quantum phases with differing computational power. Nature Communications 3: 812.
• Fannes, Mark (1973). A continuity property of the entropy density for spin lattice systems Communications in Mathematical Physics 31(4): 291-294. doi:10.1007/BF01646490.
• Dam, Wim van and Hayden, Patrick (2002). Renyi-entropic bounds on quantum communication. 1: 16. arXiv:quant-ph/0204093
• Datta, Nilanjana (2009). Min- and Max-Relative Entropies and a New Entanglement Monotone IEEE Transactions on Information Theory 55(6): 2816-2826.
• Datta, Nilanjana and Hanson, Eric P. (2017). Tight uniform continuity bound for a family of entropies 1: 16.
• Frank, Rupert L. and Lieb, Elliott H. (2013). Monotonicity of a Relative Rényi Entropy  : 54. doi:10.1063/1.4838835.
• Furuichi, Shigeru (2006). Information theoretical properties of Tsallis entropies Journal of Mathematical Physics 47: 023302. doi:10.1063/1.2165744.
• Furuichi, Shigeru; Kuriyama, Ken and Yanagi, Kazuhiro (2007). A Generalized Fannes' Inequality Journal of inequalities in pure and applied mathematics 8(1): .
• Furuichi, Shigeru; Kuriyama, Ken and Yanagi, Kazuhiro (2004). Fundamental properties of Tsallis relative entropy Journal of Mathematical Physics 45: 4868. doi:10.1063/1.1805729.
• Gell-Mann, Murray; Sato, Yuzuru and Tsallis, Constantino (2005). Asymptotically scale-invariant occupancy of phase space makes the entropy Sq extensive Proceedings of the National Academy of Sciences of the United States of America 102(43): 15377-15382. doi:10.1073/pnas.0503807102.
• Holevo, Alexander (1998). The capacity of the quantum channel with general signal states IEEE Transactions on Information Theory 44(1): 269-273. doi:10.1109/18.651037.
• Hu, Xinhua and Ye, Zhongxing (2006). Generalized quantum entropy. Journal of mathematical physics 47(2): 023502. doi:10.1063/1.2165794.
• König, Robert; Renner, Renato and Schaffner, Christian (2009). The Operational Meaning of Min- and Max-Entropy IEEE Transactions on Information Theory 55(9): 4337-4347. doi:10.1109/TIT.2009.2025545.
• Lieb, Elliott H. and Ruskai, Mary Beth (1973). Proof of the strong subadditivity of quantum‐mechanical entropy. Journal of Mathematical Physics 14(12): 1938-1941. doi:10.1063/1.1666274.
• Lindblad, Göran (1975). Completely positive maps and entropy inequalities. Communications in Mathematical Physics 40(2): 147-151. doi:10.1007/BF01609396.
• Linden, Noah; Mosonyi, Milán and Winter, Andreas (2013). The structure of Rényi entropic inequalities. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 469(2158): . doi:10.1098/rspa.2012.0737.
• Majhi, Abhishek (2017). Non-extensive statistical mechanics and black hole entropy from quantum geometry Physics Letters B 775: 32-36. doi:10.1016/j.physletb.2017.10.043.
• McIrvine, Edward C. and Tribus, Myron (1971). Energy and Information Scientific American 225(3): 179-190.
• Mosonyi, Milan and Ogawa, Tomohiro (2015). Quantum hypothesis testing and the operational interpretation of the quantum Rényi relative entropies. Communications in Mathematical Physics 334: 1617. doi:10.1007/s00220-014-2248-x.
• Muller-Lennert, Martin; Dupuis, Frederic; Szehr, Oleg; Fehr, Serge and Tomamichel, Marco (2013). On quantum Rényi entropies: A new generalization and some properties. Journal of Mathematical Physics 54(12): 122203. doi:10.1063/1.4838856.
• Nobre, Fernando Dantas; Rego-Monteiro, M.A. and Tsallis, Constantino (2011). Nonlinear Relativistic and Quantum Equations with a Common Type of Solution Physical Review Letters 106: 140601. doi:10.1103/PhysRevLett.106.140601.
• Ohya, Masanori and Volovich, Igor V. (2003). On Quantum Capacity and its Bound Infinite Dimensional Analysis, Quantum Probability and Related Topics 6(2): 301-310. doi:10.1142/S0219025703001171.
• Ohya, Masanori and Watanabe, Noboru (2010). Quantum Entropy and Its Applications to Quantum Communication and Statistical Physics MDPI 12: 1194-1245. doi:10.3390/e12051194.
• Ozawa, Masanao and Yuen, Horace P. (1993). Ultimate information carrying limit of quantum systems Physical Review Letters 70(4): 363-366. doi:10.1103/PhysRevLett.70.363.
• Petz, Denes and Virosztek, Daniel (2015). Some inequalities for quantum Tsallis entropy related to the strong subadditivity Mathematical Inequalities and Applications 18(2): 555-568. doi:10.7153/mia-18-41.
• Raggio, Guido A. (1995). Properties of q‐entropies Journal of Mathematical Physics 36(9): 4785-4791. doi:10.1063/1.530920.
• Rastegin, Alexey E. (2011). Some General Properties of Unified Entropies Journal of Statistical Physics 143: 1120-1135. doi:10.1007/s10955-011-0231-x.
• Rényi, Alfréd (1961). On Measures of Entropy and Information. University of California Press, Berkeley, California 1: 547-561.
• Schumacher, Benjamin (1995). Quantum coding Physical Review A 51(4): 2738. doi:10.1103/PhysRevA.51.2738.
• Schumacher, Benjamin (1996). Sending entanglement through noisy quantum channels Physical Review A 54: 2614. doi:10.1103/PhysRevA.54.2614.
• Shannon, Claude E. (1948). A Mathematical Theory of Communication Bell System Technical Journal 27: 379-423. doi:10.1002/j.1538-7305.1948.tb01338.x.
• Tsallis, Constantino (1988). Possible generalization of Boltzmann-Gibbs statistics Journal of Statistical Physics 52(2): 479-487. doi:10.1007/BF01016429.
• Vollbrecht, Karl Gerd H. and Wolf, Michael M. (2002). Conditional entropies and their relation to entanglement criteria AIP Journal of Mathematical Physics 43(9): 4299. doi:10.1063/1.1498490.
• von Neumann, John (1932). Mathematische Grundlagen der Quantenmechanik (Mathematical Foundations of Quantum Mechanics) Princeton University Press., . ISBN 978-0-691-02893-4.
• Wehrl, Alfred (1991). Many facets of entropy Reports on Mathematical Physics 30(1): 119-129. doi:10.1016/0034-4877(91)90045-O.
• Wilde, M.M. (2013). Quantum Information Theory , Cambridge University Press. ISBN 1316813304
• Wilde, Mark; Winter, Andreas and Yang, Dong (2014). Strong converse for the classical capacity of entanglement-breaking and Hadamard channels via a sandwiched Rényi relative entropy. Communications in Mathematical Physics 331: 593.

• Bhatia, R. (1997). Matrix Analysis , Springer.
• Carlen, E. (2009). Trace Inequalities and Quantum Entropy: An Introductory Course. Contemp. Math. 529: .
• Nielsen, M. (2000). Quantum Computation and Quantum Information , Cambridge University Press.
• Ohya, M. and Petz, D. (1993). Quantum Entropy and Its Use , Springer.