Abstract
In digital communication systems, the criterion of merit for system performance is usually the average probability of error as a function of signal-to-noise ratio (SNR). The Gauss quadrature rule (GQR) formulation and the maximum entropy method (MEM) have been proposed in the literature to determine an unknown distribution and to calculate the average error rate from its moments. We compare the accuracy of the two methods for estimating a distribution and for calculating the average error rate. It is shown that the MEM requires significantly fewer moments than the GQR formulation when a distribution is estimated from its moments, and that the GQR formulation fails under certain conditions when the average error rate is calculated. Specifically, the latter failure occurs at high signal-to-noise ratios, where the MEM still delivers reliable results.
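As an illustration of the moment-based GQR formulation mentioned above (not the paper's own implementation), the following sketch builds an n-point Gauss quadrature rule from the first 2n+1 moments of a distribution via the classical Golub-Welsch construction: a Cholesky factorization of the Hankel moment matrix yields the three-term recurrence coefficients, and the eigendecomposition of the resulting Jacobi matrix gives nodes and weights. The exponential-distribution moments used in the example are for demonstration only.

```python
import numpy as np
from math import factorial

def gauss_from_moments(m, n):
    """n-point Gauss quadrature (nodes, weights) from moments m[0..2n]."""
    # Hankel matrix of moments; must be positive definite for a valid rule.
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)], float)
    R = np.linalg.cholesky(H).T              # H = R^T R, R upper triangular
    alpha = np.empty(n)                      # Jacobi-matrix diagonal
    beta = np.empty(n - 1)                   # Jacobi-matrix off-diagonal
    alpha[0] = R[0, 1] / R[0, 0]
    for j in range(1, n):
        alpha[j] = R[j, j + 1] / R[j, j] - R[j - 1, j] / R[j - 1, j - 1]
        beta[j - 1] = R[j, j] / R[j - 1, j - 1]
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)          # Golub-Welsch: eigenvalues = nodes
    weights = m[0] * vecs[0, :] ** 2         # weights from first eigenvector row
    return nodes, weights

# Example: moments of the Exp(1) distribution, m_k = k!
n = 4
m = [factorial(k) for k in range(2 * n + 1)]
x, w = gauss_from_moments(m, n)
# An n-point rule reproduces moments up to order 2n-1 exactly:
print(w @ x)       # ~1.0  (E[X] of Exp(1))
print(w @ x**2)    # ~2.0  (E[X^2])
```

The average error rate is then approximated as a weighted sum of the conditional error probability over the nodes; it is exactly this step that can become numerically unstable when the Hankel matrix is ill-conditioned, e.g. at high SNR.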