Insights into Maximum Likelihood Estimation and Information Theory [1]

📌
For more content check out the Circuit of Knowledge.

1 Introduction

Maximum Likelihood estimation is a widely used classical estimation method in which the parameter estimator is obtained by maximizing the likelihood function. The likelihood function is the probability density function (PDF) of the observed data, viewed as a function of the parameter $\theta$. In this paper, we explore the intriguing relationship between Maximum Likelihood Estimation and Information Theory, specifically the Kullback–Leibler divergence.

2 The Kullback–Leibler Divergence

The Kullback–Leibler divergence (KLD) is a measure of the difference between two probability distributions. Given two probability distributions $p(x)$ and $q(x)$, the KLD from $q$ to $p$ is defined as

$$
D_{\mathrm{KL}}\left(p \,\|\, q\right) = \int p(x) \log \frac{p(x)}{q(x)} \, dx,
$$

where the integral is taken over the support of the distributions.
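
As a quick sanity check (an illustrative sketch, not an example from [1]), the integral above can be approximated on a grid. The snippet below assumes two Gaussian densities chosen purely for illustration and compares a Riemann-sum estimate of the KLD with the known Gaussian closed form.

```python
import numpy as np
from scipy.stats import norm

# Sketch: approximate D_KL(p || q) for two illustrative Gaussians by a
# Riemann sum over a fine grid; grid limits and parameters are arbitrary.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
p = norm.pdf(x, loc=0.0, scale=1.0)  # p(x): N(0, 1)
q = norm.pdf(x, loc=1.0, scale=2.0)  # q(x): N(1, 4)

kld_numeric = np.sum(p * np.log(p / q)) * dx

# Closed form for Gaussians: log(s_q/s_p) + (s_p^2 + (m_p - m_q)^2) / (2 s_q^2) - 1/2
kld_closed = np.log(2.0) + (1.0 + 1.0) / (2.0 * 4.0) - 0.5
print(kld_numeric, kld_closed)  # the two values should agree closely (~0.443)
```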

3 Minimizing the KLD and Maximum Likelihood Estimation

Now, let's consider the problem of estimating the parameter $\theta$ based on a set of observed data points $x_1, x_2, \ldots, x_N$. We denote the empirical probability density function estimate of the data as $\hat{p}(x)$. The goal is to find the value of $\theta$ that maximizes the likelihood function $p(x_1, x_2, \ldots, x_N; \theta)$.
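
To make the setup concrete, here is a minimal sketch (an illustrative assumption of this note, not an example from [1]) for i.i.d. Gaussian samples with unknown mean $\theta$ and known unit variance: a grid search over $\theta$ locates the maximizer of the log-likelihood, which coincides with the closed-form MLE, the sample mean.

```python
import numpy as np

# Sketch: i.i.d. Gaussian data with unknown mean theta and known unit
# variance. Grid-search the log-likelihood and compare its maximizer
# with the closed-form MLE of the mean (the sample average).
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=200)  # observed data points

def log_likelihood(theta):
    # log p(x_1, ..., x_N; theta) = sum_n log N(x_n; theta, 1)
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (x - theta) ** 2)

thetas = np.linspace(0.0, 4.0, 801)
theta_hat = thetas[np.argmax([log_likelihood(t) for t in thetas])]
print("grid-search MLE:", theta_hat, " sample mean:", x.mean())
```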

Interestingly, it can be shown that minimizing the Kullback–Leibler divergence between the empirical PDF estimate $\hat{p}(x)$ and the true PDF $p(x;\theta)$ with respect to $\theta$ leads to the Maximum Likelihood Estimator (MLE) for $\theta$. Mathematically, we have

$$
\hat{\theta}_{\mathrm{MLE}} = \arg\min_{\theta} D_{\mathrm{KL}}\left(\hat{p}(x) \,\|\, p(x;\theta)\right).
$$

4 Proof

To prove this relationship, we start by expanding the KLD as follows:

$$
D_{\mathrm{KL}}\left(\hat{p}(x) \,\|\, p(x;\theta)\right) = \int \hat{p}(x) \log \hat{p}(x) \, dx - \int \hat{p}(x) \log p(x;\theta) \, dx.
$$

We observe that the first term is independent of $\theta$, so minimizing the KLD over $\theta$ is equivalent to maximizing only the second integral; hence,

$$
\hat{\theta}_{\mathrm{MLE}} = \arg\min_{\theta} D_{\mathrm{KL}}\left(\hat{p}(x) \,\|\, p(x;\theta)\right) = \arg\max_{\theta} \int \hat{p}(x) \log p(x;\theta) \, dx,
$$

so the cost function to be maximized will be

$$
J(\theta) = \int \hat{p}(x) \log p(x;\theta) \, dx.
$$

Now, we use the definition of the empirical PDF,

$$
\hat{p}(x) = \frac{1}{N} \sum_{n=1}^{N} \delta(x - x_n),
$$

where $\delta(\cdot)$ is the Dirac delta function and the $x_n$'s are the observed data points, so the cost function takes the following form:

$$
\begin{aligned}
J(\theta) &= \int \frac{1}{N} \sum_{n=1}^{N} \delta(x - x_n) \log p(x;\theta) \, dx
          = \frac{1}{N} \sum_{n=1}^{N} \log p(x_n;\theta) \\
          &= \frac{1}{N} \log \prod_{n=1}^{N} p(x_n;\theta)
          = \frac{1}{N} \log p(x_1, x_2, \ldots, x_N;\theta),
\end{aligned}
$$

where the last equality holds because the $x_n$'s were assumed i.i.d. Since the logarithm is monotonically increasing, maximizing $J(\theta)$ is equivalent to maximizing the likelihood $p(x_1, x_2, \ldots, x_N;\theta)$ itself.

Hence, minimizing the KLD between the empirical PDF estimate $\hat{p}(x)$ and the true PDF $p(x;\theta)$ with respect to $\theta$ yields the MLE of $\theta$.
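
As a numerical illustration of this equivalence (again an illustrative sketch under assumed Gaussian data, not code from [1]), one can replace the Dirac-delta empirical PDF with a histogram estimate and minimize the resulting discrete KLD over $\theta$; the minimizer lands close to the sample mean, which is the Gaussian-mean MLE.

```python
import numpy as np
from scipy.stats import norm

# Sketch: use a histogram as a crude empirical PDF and minimize the
# discretized KLD between it and the Gaussian model p(x; theta) over a
# grid of theta. The minimizer should sit near the MLE (sample mean).
rng = np.random.default_rng(1)
x = rng.normal(loc=-0.5, scale=1.0, size=5000)  # illustrative data

counts, edges = np.histogram(x, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]
mask = counts > 0  # drop empty bins to avoid log(0) in the empirical term

def kld_to_model(theta):
    p_hat = counts[mask]
    q = norm.pdf(centers[mask], loc=theta, scale=1.0)
    return np.sum(p_hat * np.log(p_hat / q)) * width  # Riemann-sum KLD

thetas = np.linspace(-2.0, 1.0, 601)
theta_star = thetas[np.argmin([kld_to_model(t) for t in thetas])]
print("KLD minimizer:", theta_star, " sample mean (MLE):", x.mean())
```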

5 Conclusion

In conclusion, the relationship between Maximum Likelihood Estimation and Information Theory, particularly the Kullback–Leibler divergence, provides valuable insights into the estimation of parameters from observed data. Taking the true PDF $p(x;\theta)$ and seeking the $\theta$ that brings it closest to the empirical PDF in the Kullback–Leibler sense is equivalent to searching for the MLE, a widely used and powerful estimation method.

References

[1] S. M. Kay, “Information-Theoretic Signal Processing and Its Applications,” 2020.