What exactly forces us to rule out a uniform distribution on an infinite set? To be more precise, why are limits of distributions like
[tex]\int_{-\infty}^{+\infty}dx\,\lim_{\sigma\to\infty}f_{\mu,\sigma}(x) = 1[/tex]
[tex]\int_{-\infty}^{+\infty}dx\,\lim_{\Lambda\to\infty}\frac{1}{\Lambda}\,\chi_{[a,a+\Lambda]}(x) = 1[/tex]
not allowed, or not reasonable, in probability theory? What prevents us from interpreting them as uniform distributions on infinite intervals?
(Here f is the normal density with mean μ and standard deviation σ, and χ is the characteristic function of the interval [a, a+Λ].)
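A minimal numeric sketch (standard library only; `normal_pdf` is a helper defined here, not a library function) of what is at stake in swapping the limit and the integral: at every fixed x the normal density vanishes as σ grows, yet each member of the family still integrates to 1. So the integral of the pointwise limit is 0, even though the limit of the integrals is 1.

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal density f_{mu,sigma}(x)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Pointwise limit: at any fixed x the density shrinks toward 0 as sigma grows.
mu, x = 0.0, 1.0
for sigma in (1.0, 1e3, 1e6):
    print(sigma, normal_pdf(x, mu, sigma))

# ...yet each density still integrates to 1: crude Riemann sum over +-10 sigma.
sigma = 1e3
dx = sigma / 1000
total = sum(normal_pdf(mu + k * dx, mu, sigma) * dx for k in range(-10000, 10000))
print(total)  # close to 1 for every sigma
```

This is exactly the situation where dominated convergence fails: there is no integrable function dominating the whole family, so nothing licenses moving the limit inside the integral.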