# Normal distribution

## Theoretical generalities

The normal, or Gaussian, distribution is the most frequently used probability model, for both practical and theoretical reasons. It is commonly used to model physical quantities subject to many small, independent random fluctuations, and, by virtue of the Central Limit Theorem, to approximate other distributions.

A random variable X is normally distributed with parameters $$\mu \in \mathbb{R}$$ and $$\sigma \gt 0$$, $$X \sim N(\mu, \sigma)$$, when its density function has the form

load("distrib")$
pdf_normal(x,mu,sigma);


${{e^ {- {{\left(x-\mu\right)^2}\over{2\,\sigma^2}} }}\over{\sqrt{2}\,\sqrt{\pi}\,\sigma}}$

Its support is $$\mathbb{R}$$. The cumulative distribution function is defined in terms of the error function,

cdf_normal(x,mu,sigma);


${{\mathrm{erf}\left({{x-\mu}\over{\sqrt{2}\,\sigma}}\right)}\over{2}}+{{1}\over{2}}$

The mean and the standard deviation are equal to the values of the parameters $$\mu$$ and $$\sigma$$, respectively,

[mean_normal(mu,sigma), std_normal(mu,sigma)];


$\left[ \mu , \sigma \right]$

First quartile, Q1:

Q1: quantile_normal(1/4, mu, sigma);


$\sqrt{2}\,{\it inverse\_erf}\left(-{{1}\over{2}}\right)\,\sigma+\mu$

Second quartile, or median, Q2:

quantile_normal(1/2, mu, sigma);


$\mu$

Third quartile, Q3:

Q3: quantile_normal(3/4, mu, sigma);


$\sqrt{2}\,{\it inverse\_erf}\left({{1}\over{2}}\right)\,\sigma+\mu$

Interquartile range, IQR:

IQR: Q3 - Q1;


$\sqrt{2}\,{\it inverse\_erf}\left({{1}\over{2}}\right)\,\sigma-\sqrt{2}\,{\it inverse\_erf}\left(-{{1}\over{2}}\right)\,\sigma$

For the standard normal distribution ($$\sigma=1$$; note that the IQR does not depend on $$\mu$$),

float(subst(sigma=1, IQR));


$1.348979500392164$

The moment generating function of any random variable X with distribution function F is defined as $M_X(t) = \mbox{E}[e^{tX}] = \int_{-\infty}^\infty e^{tx} dF(x).$ In the particular case of the normal distribution, the moment generating function becomes

assume(sigma > 0)$
M: integrate(exp(t*x)*pdf_normal(x,mu,sigma), x, minf, inf);


$e^{{{\sigma^2\,t^2+2\,\mu\,t}\over{2}}}$
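As a numerical cross-check, the quartiles and interquartile range can be recomputed in Python with SciPy (a sketch, not part of the Maxima session; the parameter values `mu = 2.0`, `sigma = 1.5` are arbitrary examples):

```python
from scipy.stats import norm

# an arbitrary example: X ~ N(mu, sigma)
mu, sigma = 2.0, 1.5
X = norm(mu, sigma)

# quartiles via the quantile (inverse cdf) function
q1, q2, q3 = X.ppf([0.25, 0.5, 0.75])
iqr = q3 - q1

# IQR of the standard normal; compare with Maxima's 1.348979500392164
iqr_std = norm(0, 1).ppf(0.75) - norm(0, 1).ppf(0.25)
print(q2, iqr, iqr_std)
```

As the Maxima expressions above show, the median equals $$\mu$$ and the IQR scales linearly with $$\sigma$$.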

Column matrix with the first four moments of $$X \sim N(\mu, \sigma)$$, obtained by differentiating M and evaluating the derivatives at $$t=0$$,

transpose(
  matrix(
    makelist(
      subst(t=0, diff(M, t, n)),
      n, 4)));


$\pmatrix{\mu\cr \sigma^2+\mu^2\cr 3\,\mu\,\sigma^2+\mu^3\cr 3\, \sigma^4+6\,\mu^2\,\sigma^2+\mu^4\cr }$
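These four closed forms can be verified numerically; a hedged Python/SciPy sketch (SciPy's `moment` method returns the non-central moment $$\mbox{E}[X^n]$$; the parameter values are arbitrary examples):

```python
from scipy.stats import norm

mu, sigma = 2.0, 1.5
X = norm(mu, sigma)

# non-central moments E[X^n] for n = 1..4
moments = [X.moment(n) for n in range(1, 5)]

# closed forms from the Maxima derivation above
expected = [mu,
            sigma**2 + mu**2,
            3*mu*sigma**2 + mu**3,
            3*sigma**4 + 6*mu**2*sigma**2 + mu**4]
print(moments)
```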

The characteristic function is defined as $\phi_X(t) = \mbox{E}[e^{itX}] = \int_{-\infty}^\infty e^{itx} dF(x).$ In our case,

integrate(exp(%i*t*x)*pdf_normal(x,mu,sigma),x, minf, inf);


$e^{{{2\,i\,\mu\,t-\sigma^2\,t^2}\over{2}}}$
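The closed form can be checked against direct numerical integration of $$\mbox{E}[e^{itX}]$$; a Python sketch (an independent cross-check with arbitrary example values of $$\mu$$, $$\sigma$$ and $$t$$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

mu, sigma, t = 1.0, 2.0, 0.7
pdf = norm(mu, sigma).pdf

# real and imaginary parts of E[exp(i t X)] by numerical integration
re, _ = quad(lambda x: np.cos(t*x) * pdf(x), -np.inf, np.inf)
im, _ = quad(lambda x: np.sin(t*x) * pdf(x), -np.inf, np.inf)

# closed form exp(i*mu*t - sigma^2 * t^2 / 2) from the Maxima result above
phi = np.exp(1j*mu*t - 0.5*sigma**2*t**2)
print(complex(re, im), phi)
```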

## Simulation

Random simulation of the variable $$X \sim N(15,3)$$, plotted together with its density function,

m: 15 $ s: 3 $
n: 100 $ dat: random_normal(m, s, n) $

/* we need package descriptive for the histogram */
load("descriptive") $
draw2d(
  grid = true,
  dimensions = [400, 300],
  terminal = svg,
  histogram_description(
    dat,
    nclasses = 9,
    frequency = density,
    fill_density = 0.5),
  explicit(pdf_normal(x,m,s), x, m - 3*s, m + 3*s) )$
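The sampling step can be mirrored in Python with NumPy (a sketch, not part of the Maxima session; the plot is omitted and a much larger sample is drawn so the sample statistics settle near the parameters):

```python
import numpy as np

# mirror the Maxima session: X ~ N(15, 3)
rng = np.random.default_rng(0)   # fixed seed for reproducibility
m, s = 15, 3
n = 100_000                      # larger than Maxima's n = 100

dat = rng.normal(m, s, n)

# sample mean and (population) standard deviation should be
# close to the parameters m = 15 and s = 3
print(dat.mean(), dat.std())
```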


## Maximum likelihood estimation

As in the simulation section above, let us generate $$n=15$$ random Gaussian values with $$\mu=15$$ and $$\sigma=3$$,

fpprintprec: 5 $
dat: random_normal(15, 3, n: 15);


$\left[ 11.319, 15.544, 17.707, 12.983, 15.611, 11.09, 12.685, \\ 15.667, 14.203, 20.182, 16.63, 17.792, 11.664, 17.897, 15.44\right]$

We want to estimate $$\mu$$ and $$\sigma$$ by the method of maximum likelihood. First, we obtain the log-likelihood function,

logexpand: all $
loglike: log(product(pdf_normal(dat[i], mu, sigma), i, 1, n));


$-15\,\log \sigma-{{\left(20.182-\mu\right)^2}\over{2\,\sigma^2}}-{{ \left(17.897-\mu\right)^2}\over{2\,\sigma^2}}-{{\left(17.792-\mu \right)^2}\over{2\,\sigma^2}}-{{\left(17.707-\mu\right)^2}\over{2\, \sigma^2}} \\ -{{\left(16.63-\mu\right)^2}\over{2\,\sigma^2}}-{{\left( 15.667-\mu\right)^2}\over{2\,\sigma^2}}-{{\left(15.611-\mu\right)^2 }\over{2\,\sigma^2}}-{{\left(15.544-\mu\right)^2}\over{2\,\sigma^2}} - \\ {{\left(15.44-\mu\right)^2}\over{2\,\sigma^2}}-{{\left(14.203-\mu \right)^2}\over{2\,\sigma^2}}-{{\left(12.983-\mu\right)^2}\over{2\, \sigma^2}}-{{\left(12.685-\mu\right)^2}\over{2\,\sigma^2}}- \\ {{\left( 11.664-\mu\right)^2}\over{2\,\sigma^2}}-{{\left(11.319-\mu\right)^2 }\over{2\,\sigma^2}}-{{\left(11.09-\mu\right)^2}\over{2\,\sigma^2}}- {{15\,\log \pi}\over{2}}-{{15\,\log 2}\over{2}}$

The maximum likelihood estimators of $$\mu$$ and $$\sigma$$ are the values that maximize the likelihood or, equivalently, the log-likelihood function,

ml: float(
solve(
[diff(loglike, mu)=0, diff(loglike, sigma)=0],
[mu, sigma]));


$\left[ \left[ \mu=15.094 , \sigma=-2.6359 \right] , \left[ \mu= 15.094 , \sigma=2.6359 \right] \right]$

Since $$\sigma$$ must be positive, the estimates are given by the second solution,

ml: ml[2];


$\left[ \mu=15.094 , \sigma=2.6359 \right]$
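For the normal model the ML estimators have the closed form $$\hat\mu = \bar x$$ and $$\hat\sigma^2 = \frac{1}{n}\sum_i (x_i-\bar x)^2$$ (dividing by $$n$$, not $$n-1$$). A Python sketch verifying this on the sample printed above (the values were printed with five significant digits, so small rounding discrepancies are expected):

```python
import math

# the sample as printed in the Maxima session above (rounded values)
dat = [11.319, 15.544, 17.707, 12.983, 15.611, 11.09, 12.685,
       15.667, 14.203, 20.182, 16.63, 17.792, 11.664, 17.897, 15.44]
n = len(dat)

# closed-form maximum likelihood estimates
mu_hat = sum(dat) / n
sigma_hat = math.sqrt(sum((x - mu_hat)**2 for x in dat) / n)

print(mu_hat, sigma_hat)   # about 15.094 and 2.636
```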

These values coincide with the sample mean and standard deviation,

load(descriptive) $
[mean(dat), std(dat)];


$\left[ 15.094 , 2.6359 \right]$

The observed information matrix is given by

infomatrix: subst(ml, -hessian(loglike, [mu, sigma]));


$\pmatrix{2.1588&5.2374 \times 10^{-15}\cr 5.2374 \times 10^{-15}&4.3177\cr }$

According to maximum likelihood theory, the estimators are asymptotically normal with covariance matrix equal to the inverse of the information matrix,

covmat: invert(infomatrix);


$\pmatrix{0.46321&-5.6188 \times 10^{-16}\cr -5.6188 \times 10^{-16}&0.23161\cr }$

The covariance of the two estimators is near zero; it can be shown that they are in fact independent random variables. Asymptotic confidence interval for $$\mu$$ with significance level $$\alpha=0.05$$,

alpha: 0.05 $
float(rhs(ml[1]) + [-1,1] * quantile_normal(1-alpha/2,0,1) * sqrt(covmat[1,1]));


$\left[ 13.76 , 16.428 \right]$

Asymptotic confidence interval for $$\sigma$$,

float(rhs(ml[2])+[-1,1] * quantile_normal(1-alpha/2,0,1) * sqrt(covmat[2,2]));


$\left[ 1.6927 , 3.5792 \right]$
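The interval arithmetic can be cross-checked in Python (a sketch; the point estimates and the covariance diagonal are copied from the Maxima session above):

```python
from math import sqrt
from scipy.stats import norm

alpha = 0.05
z = norm.ppf(1 - alpha/2)                # standard normal quantile, ~1.96

# point estimates and covariance diagonal from the Maxima session
mu_hat, sigma_hat = 15.094, 2.6359
var_mu, var_sigma = 0.46321, 0.23161

ci_mu = [mu_hat - z*sqrt(var_mu), mu_hat + z*sqrt(var_mu)]
ci_sigma = [sigma_hat - z*sqrt(var_sigma), sigma_hat + z*sqrt(var_sigma)]
print(ci_mu, ci_sigma)
```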

Finally, Akaike's Information Criterion (AIC), $$-2\log L + 2k$$ with $$k=2$$ estimated parameters, in case we want to compare this model with others,

-2*subst(ml, loglike) + 2*2, numer;


$75.645$
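A Python cross-check of the AIC value, recomputing the log-likelihood at the ML estimates from the printed sample (a sketch; since the printed data are rounded, a small discrepancy from Maxima's 75.645 is expected):

```python
import math
from scipy.stats import norm

# the sample as printed in the Maxima session above (rounded values)
dat = [11.319, 15.544, 17.707, 12.983, 15.611, 11.09, 12.685,
       15.667, 14.203, 20.182, 16.63, 17.792, 11.664, 17.897, 15.44]
n = len(dat)

# closed-form ML estimates
mu_hat = sum(dat) / n
sigma_hat = math.sqrt(sum((x - mu_hat)**2 for x in dat) / n)

# log-likelihood at the ML estimates, then AIC with k = 2 parameters
loglik = norm.logpdf(dat, mu_hat, sigma_hat).sum()
aic = -2*loglik + 2*2
print(aic)   # about 75.65
```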