Capacity of Multi-antenna Gaussian Channels

Contents

1 System Model

$$y = Hx + n$$

where $H$ is an $r \times t$ complex matrix and $n$ is zero-mean complex Gaussian noise with independent, equal-variance real and imaginary parts. We assume $E[nn^\dagger] = I_r$; that is, the noises corrupting the different receivers are independent. The transmitter is constrained in its total power to $P$:

$$E[x^\dagger x] \le P.$$

Since $x^\dagger x = \operatorname{tr}(xx^\dagger)$, and expectation and trace commute, this is equivalent to

$$\operatorname{tr}\!\left(E[xx^\dagger]\right) \le P.$$
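The equivalence of the two forms of the power constraint is easy to check numerically. A minimal sketch (the antenna count, sample count, and covariance $Q$ below are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
t, N = 4, 100_000          # illustrative: 4 transmit antennas, 1e5 samples

# Build an arbitrary valid (non-negative definite) input covariance Q.
A = rng.standard_normal((t, t)) + 1j * rng.standard_normal((t, t))
Q = A @ A.conj().T / t

# Draw x ~ CN(0, Q): color i.i.d. circularly symmetric samples via chol(Q).
L = np.linalg.cholesky(Q)
w = (rng.standard_normal((N, t)) + 1j * rng.standard_normal((N, t))) / np.sqrt(2)
x = w @ L.T                # rows are samples with E[x x†] = L L† = Q

# E[x† x]: average squared norm of each sample.
power = np.mean(np.sum(np.abs(x) ** 2, axis=1))

# tr(E[x x†]): trace of the sample covariance matrix.
Q_hat = x.T @ x.conj() / N
trace = np.trace(Q_hat).real

print(power, trace, np.trace(Q).real)
```

The first two printed numbers agree to rounding error (they are the same sum in two orders), and both approach $\operatorname{tr}(Q)$ as the sample count grows.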

Beyond the power constraint and the noise distribution, we need to specify the channel matrix $H$. We will consider three scenarios:

  1. $H$ is deterministic.
  2. $H$ is a random matrix (for which we shall use the notation $\mathbf{H}$), chosen according to a probability distribution, and each use of the channel corresponds to an independent realization of $\mathbf{H}$.
  3. $H$ is a random matrix, but is fixed once it is chosen.

    If $Q \in \mathbb{C}^{n \times n}$ is non-negative definite, then so is $\hat{Q} \in \mathbb{R}^{2n \times 2n}$. (Here $\hat{x} \in \mathbb{R}^{2n}$ denotes the real vector obtained by stacking the real and imaginary parts of $x \in \mathbb{C}^n$, and $\hat{Q}$ the corresponding real representation of $Q$.)

The probability density (with respect to the standard Lebesgue measure on $\mathbb{C}^n$) of a circularly symmetric complex Gaussian with mean $\mu$ and covariance $Q$ is given by

$$\gamma_{\mu,Q}(x) = \det(\pi\hat{Q})^{-1/2}\exp\!\left(-(\hat{x}-\hat{\mu})^T\hat{Q}^{-1}(\hat{x}-\hat{\mu})\right) = \det(\pi Q)^{-1}\exp\!\left(-(x-\mu)^\dagger Q^{-1}(x-\mu)\right).$$

The differential entropy of a complex Gaussian $x$ with covariance $Q$ is given by

$$\mathcal{H}(\gamma_Q) = E_{\gamma_Q}[-\log\gamma_Q(x)] = \log\det(\pi Q) + (\log e)\,E[x^\dagger Q^{-1}x] = \log\det(\pi Q) + (\log e)\operatorname{tr}\!\left(E[xx^\dagger]Q^{-1}\right) = \log\det(\pi Q) + (\log e)\operatorname{tr}(I) = \log\det(\pi e Q).$$
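The middle step, $E[x^\dagger Q^{-1}x] = \operatorname{tr}(I) = n$, can be sanity-checked by Monte Carlo (the dimension and covariance below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 200_000

# Arbitrary positive definite covariance Q (+I keeps it well-conditioned).
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = A @ A.conj().T / n + np.eye(n)

# Samples x ~ CN(0, Q).
L = np.linalg.cholesky(Q)
w = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
x = w @ L.T

# Quadratic form x† Q^{-1} x for each sample; its mean should be tr(I) = n.
Qinv = np.linalg.inv(Q)
qf = np.einsum('ki,ij,kj->k', x.conj(), Qinv, x).real
print(qf.mean())   # ≈ n = 3
```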

For us, the importance of the circularly symmetric complex Gaussians is due to the following lemma: circularly symmetric complex Gaussians are entropy maximizers.

Suppose the complex random vector $x \in \mathbb{C}^n$ is zero-mean and satisfies $E[xx^\dagger] = Q$, i.e., $E[x_i x_j^*] = Q_{ij}$, $1 \le i, j \le n$. Then the entropy of $x$ satisfies $\mathcal{H}(x) \le \log\det(\pi e Q)$, with equality if and only if $x$ is a circularly symmetric complex Gaussian with

$$E[xx^\dagger] = Q.$$

Let $p$ be any density function satisfying $\int_{\mathbb{C}^n} p(x)\,x_i x_j^*\,dx = Q_{ij}$, $1 \le i, j \le n$. Let

$$\gamma_Q(x) = \det(\pi Q)^{-1}\exp\!\left(-x^\dagger Q^{-1}x\right).$$

Observe that $\int_{\mathbb{C}^n} \gamma_Q(x)\,x_i x_j^*\,dx = Q_{ij}$, and that $\log\gamma_Q(x)$ is a linear combination of the terms $x_i x_j^*$. Thus $E_{\gamma_Q}[\log\gamma_Q(x)] = E_p[\log\gamma_Q(x)]$. Then,

$$\mathcal{H}_p - \mathcal{H}_{\gamma_Q} = -\int_{\mathbb{C}^n} p(x)\log p(x)\,dx + \int_{\mathbb{C}^n} \gamma_Q(x)\log\gamma_Q(x)\,dx = -\int_{\mathbb{C}^n} p(x)\log p(x)\,dx + \int_{\mathbb{C}^n} p(x)\log\gamma_Q(x)\,dx = \int_{\mathbb{C}^n} p(x)\log\frac{\gamma_Q(x)}{p(x)}\,dx \le 0,$$

with equality only if $p = \gamma_Q$; the inequality is the non-negativity of the relative entropy $D(p\|\gamma_Q)$, via $\log u \le (\log e)(u - 1)$. Thus $\mathcal{H}_p \le \mathcal{H}_{\gamma_Q}$.

If $x \in \mathbb{C}^n$ is a circularly symmetric complex Gaussian, then so is $y = Ax$, for any $A \in \mathbb{C}^{m \times n}$.

We may assume $x$ is zero-mean. Let $Q = E[xx^\dagger]$. Then $y$ is zero-mean, $\hat{y} = \hat{A}\hat{x}$, and

$$E[\hat{y}\hat{y}^T] = \hat{A}\,E[\hat{x}\hat{x}^T]\,\hat{A}^T = \tfrac{1}{2}\hat{A}\hat{Q}\hat{A}^T = \tfrac{1}{2}\hat{K},$$

where $K = AQA^\dagger$ (so that $\hat{K} = \hat{A}\hat{Q}\hat{A}^T$).
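A quick numerical check of this lemma (the dimensions, $A$, and $Q$ below are arbitrary illustrative choices): the covariance of $y = Ax$ comes out as $AQA^\dagger$, and the pseudo-covariance $E[yy^T]$ stays near zero, which is the signature of circular symmetry.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, N = 3, 2, 200_000

A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# x ~ CN(0, Q) for an arbitrary non-negative definite Q.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = B @ B.conj().T / n
L = np.linalg.cholesky(Q + 1e-12 * np.eye(n))
w = (rng.standard_normal((N, n)) + 1j * rng.standard_normal((N, n))) / np.sqrt(2)
x = w @ L.T

y = x @ A.T                     # y_k = A x_k, with samples stored as rows

K_hat = y.T @ y.conj() / N      # sample E[y y†]
K = A @ Q @ A.conj().T          # predicted covariance A Q A†

pseudo = y.T @ y / N            # sample E[y y^T]; ~0 under circular symmetry

print(np.max(np.abs(K_hat - K)), np.max(np.abs(pseudo)))
```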

If $x$ and $y$ are independent circularly symmetric complex Gaussians, then $z = x + y$ is a circularly symmetric complex Gaussian.

Let $A = E[xx^\dagger]$ and $B = E[yy^\dagger]$. Then $E[\hat{z}\hat{z}^T] = \tfrac{1}{2}\hat{C}$ with $C = A + B$.

2 The Deterministic Gaussian Channel with Fixed Transfer Function

We will first derive an expression for the capacity $C(H, P)$ of this channel, where $H \in \mathbb{C}^{r \times t}$. By the singular value decomposition (SVD) theorem, $y = Hx + n$ can be written as $y = UDV^\dagger x + n$, where $U$ and $V$ are unitary and $D \in \mathbb{R}^{r \times t}$ is non-negative and diagonal. Let $\tilde{y} = U^\dagger y$, $\tilde{x} = V^\dagger x$, $\tilde{n} = U^\dagger n$. Then $\tilde{y} = D\tilde{x} + \tilde{n}$. Since $H$ is of rank at most $\min\{r, t\}$, at most $\min\{r, t\}$ of its singular values are non-zero. Denoting these by $\lambda_i^{1/2}$, $i = 1, 2, \ldots, \min\{r, t\}$, we can write $\tilde{y} = D\tilde{x} + \tilde{n}$ component-wise, to get $\tilde{y}_i = \lambda_i^{1/2}\tilde{x}_i + \tilde{n}_i$, $1 \le i \le \min\{r, t\}$, and the rest of the components of $\tilde{y}$ (if any) are equal to the corresponding components of $\tilde{n}$. We thus see that $\tilde{y}_i$ for $i > \min\{r, t\}$ is independent of the transmitted signal and that $\tilde{x}_i$ for $i > \min\{r, t\}$ plays no role. To maximize the mutual information, we need to choose $\{\tilde{x}_i : 1 \le i \le \min\{r, t\}\}$ to be independent, with each $\tilde{x}_i$ having independent Gaussian, zero-mean real and imaginary parts. The variances need to be chosen via "water-filling" as
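The SVD step above can be made concrete. A minimal sketch with an arbitrary $3 \times 2$ channel (note numpy's `svd` returns $V^\dagger$ directly as `Vh`):

```python
import numpy as np

rng = np.random.default_rng(3)
r, t = 3, 2                     # illustrative: 3 receive, 2 transmit antennas

H = rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))

# H = U D V† with U (r×r) and V† (t×t) unitary, D holding the singular values.
U, s, Vh = np.linalg.svd(H)
D = np.zeros((r, t))
D[:t, :t] = np.diag(s)          # here r >= t, so there are min{r,t} = t of them

x = rng.standard_normal(t) + 1j * rng.standard_normal(t)
n = rng.standard_normal(r) + 1j * rng.standard_normal(r)
y = H @ x + n

# Rotated coordinates: y~ = U† y, x~ = V† x, n~ = U† n satisfy y~ = D x~ + n~.
y_t = U.conj().T @ y
x_t = Vh @ x
n_t = U.conj().T @ n

print(np.allclose(y_t, D @ x_t + n_t))   # True: parallel scalar channels
```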

$$E[\operatorname{Re}(\tilde{x}_i)^2] = E[\operatorname{Im}(\tilde{x}_i)^2] = \tfrac{1}{2}\left(\mu - \lambda_i^{-1}\right)^+,$$

where $\mu$ is chosen to meet the power constraint. Here $a^+$ denotes $\max\{0, a\}$. The power $P$ and the maximal mutual information can thus be parameterized as

$$P(\mu) = \sum_i \left(\mu - \lambda_i^{-1}\right)^+, \qquad C(\mu) = \sum_i \left(\ln(\mu\lambda_i)\right)^+.$$
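The water level $\mu$ can be found by bisection, since $P(\mu)$ is non-decreasing in $\mu$. A minimal sketch (the eigenvalues and power budget below are made-up inputs):

```python
import numpy as np

def water_fill(lam, P, iters=200):
    """Water-filling: find mu with sum_i (mu - 1/lam_i)^+ = P, then return
    the per-mode powers and C(mu) = sum_i (ln(mu*lam_i))^+ in nats."""
    lam = np.asarray(lam, dtype=float)
    lo, hi = 0.0, np.max(1.0 / lam) + P      # P(hi) >= P, so the root is inside
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.sum(np.maximum(mu - 1.0 / lam, 0.0)) < P:
            lo = mu
        else:
            hi = mu
    powers = np.maximum(mu - 1.0 / lam, 0.0)
    C = np.sum(np.maximum(np.log(mu * lam), 0.0))
    return mu, powers, C

# Example: three eigenmodes with lambda = 2, 1, 0.25 and total power P = 1.
mu, powers, C = water_fill([2.0, 1.0, 0.25], P=1.0)
print(mu, powers, C)   # mu = 1.25; the weakest mode gets no power
```

Note that clipping $\ln(\mu\lambda_i)$ at zero is consistent with the power allocation: a mode gets positive power exactly when $\mu\lambda_i > 1$.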

Alternative Derivation of the Capacity

The mutual information $I(x; y)$ can be written as

$$I(x; y) = \mathcal{H}(y) - \mathcal{H}(y|x) = \mathcal{H}(y) - \mathcal{H}(n),$$

and thus maximizing $I(x; y)$ is equivalent to maximizing $\mathcal{H}(y)$. Note that if $x$ satisfies $E[x^\dagger x] \le P$, so does $x - E[x]$, so we can restrict our attention to zero-mean $x$. Furthermore, if $x$ is zero-mean with covariance $E[xx^\dagger] = Q$, then $y$ is zero-mean with covariance $E[yy^\dagger] = HQH^\dagger + I_r$. When $x$ is a circularly symmetric complex Gaussian, the mutual information is given by:

$$I(x; y) = \log\det\!\left(I_r + HQH^\dagger\right) = \log\det\!\left(I_t + QH^\dagger H\right),$$

where the second equality follows from the determinant identity $\det(I + AB) = \det(I + BA)$, and it only remains to choose $Q$ to maximize this quantity subject to the constraints $\operatorname{tr}(Q) \le P$ and that $Q$ is non-negative definite. Write $\Psi(Q, H) = \log\det(I + HQH^\dagger)$.
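Both forms of the mutual information, and the determinant identity relating them, are easy to verify numerically. A sketch with a random channel and the (not necessarily optimal) equal-power choice $Q = (P/t)I_t$:

```python
import numpy as np

rng = np.random.default_rng(4)
r, t, P = 2, 3, 4.0            # illustrative dimensions and power budget

H = rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))
Q = (P / t) * np.eye(t)        # equal-power input, tr(Q) = P

# log det(I_r + H Q H†) vs log det(I_t + Q H† H): det(I+AB) = det(I+BA).
I1 = np.log2(np.linalg.det(np.eye(r) + H @ Q @ H.conj().T).real)
I2 = np.log2(np.linalg.det(np.eye(t) + Q @ H.conj().T @ H).real)

print(I1, I2)                  # identical up to rounding, in bits/channel use
```

The second form is convenient when $t < r$ (or vice versa), since it involves the smaller of the two Gram matrices.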

2.1 Error Exponents

Knowing the capacity of a channel alone is not sufficient; one usually also needs to know how to approach that capacity, or how difficult it is to approach. To that end, the error exponent function gives an upper bound on the probability of error. The random coding bound is:

$$P(\text{error}) \le \exp\!\left(-nE_r(R)\right),$$

where the random coding exponent $E_r(R)$ is given by:

$$E_r(R) = \max_{0 \le \rho \le 1} E_0(\rho) - \rho R,$$

where, in turn, $E_0(\rho)$ is given by the supremum, over all input distributions $q_x$ satisfying the energy constraint, of

$$E_0(\rho, q_x) = -\log \int \left[\int q_x(x)\,p(y|x)^{1/(1+\rho)}\,dx\right]^{1+\rho} dy.$$

In our case $p(y|x) = \det(\pi I_r)^{-1}\exp\!\left(-(y - Hx)^\dagger(y - Hx)\right)$. If we choose $q_x$ as the Gaussian distribution $\gamma_Q$, we get (after some algebra)

$$E_0(\rho, Q) = \rho\log\det\!\left(I_r + (1+\rho)^{-1}HQH^\dagger\right) = \rho\,\Psi\!\left((1+\rho)^{-1}Q, H\right).$$
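With this closed form, computing $E_r(R) = \max_{0 \le \rho \le 1} E_0(\rho, Q) - \rho R$ reduces to a one-dimensional maximization over $\rho$. A sketch using a simple grid search (the channel, $Q$, and rates below are illustrative; rates are in nats per channel use):

```python
import numpy as np

rng = np.random.default_rng(5)
r, t, P = 2, 2, 2.0

H = rng.standard_normal((r, t)) + 1j * rng.standard_normal((r, t))
Q = (P / t) * np.eye(t)        # equal-power input (not necessarily optimal)

def E0(rho):
    # E_0(rho, Q) = rho * log det(I_r + H Q H† / (1+rho)), natural log
    M = np.eye(r) + H @ Q @ H.conj().T / (1.0 + rho)
    return rho * np.log(np.linalg.det(M).real)

# Mutual information with this Q, for reference: Psi(Q, H).
I_xy = np.log(np.linalg.det(np.eye(r) + H @ Q @ H.conj().T).real)

rhos = np.linspace(0.0, 1.0, 1001)
def Er(R):
    return max(E0(rho) - rho * R for rho in rhos)

# E_r(R) is positive for rates below the mutual information
# and drops to 0 once R is too large.
print(Er(0.1 * I_xy), Er(2 * I_xy))
```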

2.2 Conclusion

The use of multiple antennas will greatly increase the achievable rates on fading channels if the channel parameters can be estimated at the receiver and if the path gains between different antenna pairs behave independently. The second of these requirements can be met with relative ease and is somewhat technical in nature. The first requirement is a rather tall order, and can be justified in certain communication scenarios and not in others. Since the original writing of this monograph in late 1994 and early 1995, there has been some work in which the assumption of the availability of channel state information is replaced with the assumption of a slowly varying channel.