Draw Confidence Contours to Hubble's 1929 Data

  • Thread starter humanist rho
In summary: It sounds like this is more in line with what you are interested in.
Actually, I am searching for a project for my graduate course; I have hardly one month's time. Then can I go on with this idea?
Well, I'm not sure that undertaking such a project would be a good use of your time. There are plenty of other things you could be doing with that time, such as completing your graduate coursework.
  • #36
The [itex]\chi^2[/itex] depends on the values measured in the data and the values predicted by the model.
 
  • #37
Yes, I know.
The [itex]\chi^2[/itex] value will give the maximum likelihood function.
The maximum likelihood function is related to the Fisher information matrix.
This is my idea. Am I on the correct path?
 
  • #38
The best-fit model minimizes the [itex]\chi^2[/itex]; it maximizes the likelihood. The Fisher matrix is the matrix of second derivatives of the likelihood function about some fiducial (reference) point. The Fisher matrix tells you nothing about what the best-fit parameter values are; rather, it tells you what their theoretical variances are. For this reason, the Fisher matrix is typically used as a quick and easy way of doing error forecasting -- you simply pick a fiducial model and calculate the Fisher matrix at that point. The resulting errors constitute an accurate projection only if the true parameter distributions are uncorrelated Gaussians.

So, in summary, the maximum likelihood is not related to the Fisher matrix -- the Fisher matrix is the 2nd derivative of the likelihood function about some reference point. For Gaussian distributions, the likelihood is related to the [itex]\chi^2[/itex] as

[tex]\mathcal{L} \sim {\rm exp}(-\chi^2/2)[/tex]
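To make the error-forecasting use of the Fisher matrix concrete, here is a minimal numerical sketch, using a toy straight-line model rather than the actual Hubble fit; the model, data, and function names are all invented for illustration:

```python
import numpy as np

def neg_log_like(theta, data, sigma):
    """-ln(likelihood) for a toy model y = theta[0] + theta[1]*x with Gaussian errors."""
    x, y = data
    model = theta[0] + theta[1] * x
    return 0.5 * np.sum(((y - model) / sigma) ** 2)

def fisher_matrix(f, theta0, args, eps=1e-4):
    """Matrix of second derivatives of f about the fiducial point theta0
    (central finite differences)."""
    n = len(theta0)
    F = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            t = np.asarray(theta0, dtype=float)
            tpp = t.copy(); tpp[i] += eps; tpp[j] += eps
            tpm = t.copy(); tpm[i] += eps; tpm[j] -= eps
            tmp = t.copy(); tmp[i] -= eps; tmp[j] += eps
            tmm = t.copy(); tmm[i] -= eps; tmm[j] -= eps
            F[i, j] = (f(tpp, *args) - f(tpm, *args)
                       - f(tmp, *args) + f(tmm, *args)) / (4 * eps**2)
    return F

# Pick a fiducial model, evaluate the Fisher matrix there, and read off
# the forecast (marginalized) 1-sigma errors from the inverse.
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5
y = 1.0 + 2.0 * x                      # noiseless fiducial "data"
F = fisher_matrix(neg_log_like, [1.0, 2.0], args=((x, y), sigma))
errors = np.sqrt(np.diag(np.linalg.inv(F)))
```

Note that this forecasts the errors without ever fitting anything, which is exactly the point made above: the Fisher matrix knows nothing about the best-fit values themselves.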
 
  • #39
bapowell said:
The best-fit model minimizes the [itex]\chi^2[/itex]; it maximizes the likelihood. The Fisher matrix is the matrix of second derivatives of the likelihood function about some fiducial (reference) point. The Fisher matrix tells you nothing about what the best-fit parameter values are; rather, it tells you what their theoretical variances are. For this reason, the Fisher matrix is typically used as a quick and easy way of doing error forecasting -- you simply pick a fiducial model and calculate the Fisher matrix at that point. The resulting errors constitute an accurate projection only if the true parameter distributions are uncorrelated Gaussians.

So, in summary, the maximum likelihood is not related to the Fisher matrix -- the Fisher matrix is the 2nd derivative of the likelihood function about some reference point. For Gaussian distributions, the likelihood is related to the [itex]\chi^2[/itex] as

[tex]\mathcal{L} \sim {\rm exp}(-\chi^2/2)[/tex]

I've completed the maximum likelihood estimation.
The likelihood obtained is Gaussian (not sure whether that is true).
But I don't know how to set the desired confidence levels to draw the contours to estimate the parameters. Can you give any hint?
 
  • #40
humanist rho said:
I've completed the maximum likelihood estimation.
The likelihood obtained is Gaussian (not sure whether that is true).
But I don't know how to set the desired confidence levels to draw the contours to estimate the parameters. Can you give any hint?
Okay, that's good. You're almost there. Typically what is done is to write down the probability distribution as a Gaussian, and then draw contours that enclose 68% and 95% of the probability (these are the "one sigma" and "two sigma" contours). With a two-dimensional Gaussian probability distribution, these contours are ellipses.

There are a few ways you could figure out what the ellipses are for your distribution. You could do it analytically by first figuring out what circle encloses 68% and 95% of the probability for a two-dimensional Gaussian with two independent, unit variance variables, and then performing a transformation on that to get what it looks like for your data.

Or you could do it numerically by computing the normalized values of your probability distribution in a grid, and then figuring out what level of probability makes it so that the total probability for values above that level encloses 68% and 95% of the probability, respectively. The boundary between values below and above this level makes your contour. Just bear in mind that you have to be sure to have a grid that is large enough to capture the whole distribution.
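That numerical recipe can be sketched in a few lines; the toy Gaussian grid below is a stand-in for your own normalized probability grid, and the grid ranges are illustrative assumptions:

```python
import numpy as np

# Toy 2D Gaussian posterior on a grid (stand-in for a real chi^2-based grid).
x = np.linspace(-5.0, 5.0, 400)
y = np.linspace(-5.0, 5.0, 400)
X, Y = np.meshgrid(x, y)
P = np.exp(-(X**2 + Y**2) / 2.0)
P /= P.sum()                            # normalize so the grid sums to 1

def contour_level(P, frac):
    """Probability level whose 'above' region encloses `frac` of the total."""
    p = np.sort(P.ravel())[::-1]        # grid values, highest first
    cum = np.cumsum(p)
    return p[np.searchsorted(cum, frac)]

lvl68 = contour_level(P, 0.68)          # "one sigma" level
lvl95 = contour_level(P, 0.95)          # "two sigma" level
# e.g. plt.contour(X, Y, P, levels=[lvl95, lvl68]) would then draw both contours
```

For this particular toy distribution the levels come out near 0.32 and 0.05 of the peak value, which matches the analytic result for a unit 2D Gaussian.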
 
  • #41
Chalnoth said:
Okay, that's good. You're almost there. Typically what is done is to write down the probability distribution as a Gaussian, and then draw contours that enclose 68% and 95% of the probability (these are the "one sigma" and "two sigma" contours). With a two-dimensional Gaussian probability distribution, these contours are ellipses.

There are a few ways you could figure out what the ellipses are for your distribution. You could do it analytically by first figuring out what circle encloses 68% and 95% of the probability for a two-dimensional Gaussian with two independent, unit variance variables, and then performing a transformation on that to get what it looks like for your data.

Or you could do it numerically by computing the normalized values of your probability distribution in a grid, and then figuring out what level of probability makes it so that the total probability for values above that level encloses 68% and 95% of the probability, respectively. The boundary between values below and above this level makes your contour. Just bear in mind that you have to be sure to have a grid that is large enough to capture the whole distribution.
I can't get the ellipses.
My probability distribution is of the form [itex]\mathcal{L} \propto \exp\left(-(\chi^2 - \chi^2_{\rm min})/2\right)[/itex].
I've tried to draw them by varying the two parameters [itex]\Omega_m[/itex] and [itex]\Omega_\Lambda[/itex].
 
  • #42
humanist rho said:
I can't get the ellipses.
My probability distribution is of the form [itex]\mathcal{L} \propto \exp\left(-(\chi^2 - \chi^2_{\rm min})/2\right)[/itex].
I've tried to draw them by varying the two parameters [itex]\Omega_m[/itex] and [itex]\Omega_\Lambda[/itex].
Um, okay. Maybe you can describe in more detail what you're trying to do.

P.S. I'd start figuring out how to draw a circle from a toy probability distribution that has unit variance in two independent parameters (that is, [itex]P(x,y) \propto e^{-(x^2 + y^2)/2}[/itex]).
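For that toy distribution the circles can even be found analytically: the probability mass inside radius [itex]r[/itex] is [itex]1 - e^{-r^2/2}[/itex], so the radius enclosing a fraction [itex]f[/itex] is [itex]r = \sqrt{-2\ln(1-f)}[/itex]. A quick sketch, with a Monte Carlo sanity check whose sample count and seed are arbitrary choices:

```python
import numpy as np

def radius(frac):
    """Radius of the circle enclosing `frac` of a unit 2D Gaussian's mass."""
    return np.sqrt(-2.0 * np.log(1.0 - frac))

r68, r95 = radius(0.68), radius(0.95)   # ~1.51 and ~2.45

# Monte Carlo check: fraction of unit-Gaussian samples inside each circle.
rng = np.random.default_rng(0)
pts = rng.standard_normal((200_000, 2))
r = np.hypot(pts[:, 0], pts[:, 1])
frac68 = np.mean(r < r68)               # should be close to 0.68
frac95 = np.mean(r < r95)               # should be close to 0.95
```

Note that the 68% circle sits at radius about 1.51 rather than 1.0; the familiar 1D sigma values don't carry over directly to two dimensions.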
 
  • #43
The problem is in the maximum likelihood estimation. When I consider a flat universe and approximate
[itex]\Omega_m + \Omega_\Lambda = 1[/itex],
I get the maximum likelihood as a 1D Gaussian peaked at a matter density parameter of 0.38. The image is uploaded as Probability1D.bmp.

But when the flat-universe approximation is not made, the maximum likelihood does not come out Gaussian. I've normalized the probability density and marginalized over [itex]H_0[/itex] by integrating the normalized PDF with respect to [itex]H_0[/itex] from [itex]-\infty[/itex] to [itex]+\infty[/itex].
The probability density I got is uploaded as ML2D.bmp :(
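As an aside, the marginalization step can be sketched numerically; the grids and the separable toy likelihood below are purely illustrative stand-ins for the real [itex]\chi^2[/itex] surface, and since [itex]H_0[/itex] is positive, the infinite limits in practice just mean a range wide enough to contain essentially all of the likelihood:

```python
import numpy as np

# Illustrative parameter grids (ranges are arbitrary choices).
om = np.linspace(0.0, 1.0, 60)          # Omega_m
ol = np.linspace(0.0, 1.5, 60)          # Omega_Lambda
h0 = np.linspace(40.0, 110.0, 80)       # H0 in km/s/Mpc, wide enough for the peak

# Toy 3D likelihood: independent Gaussians (replace with exp(-chi^2/2)).
L = (np.exp(-0.5 * ((om[:, None, None] - 0.3) / 0.1) ** 2)
     * np.exp(-0.5 * ((ol[None, :, None] - 0.7) / 0.2) ** 2)
     * np.exp(-0.5 * ((h0[None, None, :] - 70.0) / 5.0) ** 2))

# Marginalize over H0 by summing along that axis (a simple Riemann sum),
# then renormalize the 2D posterior so it sums to 1 on the grid.
L2d = L.sum(axis=2) * (h0[1] - h0[0])
L2d /= L2d.sum()
```

The resulting `L2d` is then the 2D grid to which the contour-level procedure from earlier in the thread can be applied.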
 

Attachments

  • ML2D.bmp
  • Probability1D.bmp
  • #44
Chalnoth said:
P.S. I'd start figuring out how to draw a circle from a toy probability distribution that has unit variance in two independent parameters (that is, [itex]P(x,y) \propto e^{-(x^2 + y^2)/2}[/itex]).
Toy distribution: TPDF.bmp
Toy contour: TPDF contour.bmp
I know these are of no use until my likelihood becomes Gaussian :cry:
 

Attachments

  • TPDF.bmp
  • TPDF contour.bmp
  • #45
Well, the first point is that when you're putting things online, I would highly recommend converting them to PNG format. PNG is a lossless compression format, so it perfectly preserves the image but is much, much smaller than a BMP file (TPDF.bmp, at 295KB, for example, becomes 39KB). BMP files are also not usually directly viewable in a web browser, while PNG files are.

If you are using Windows, Windows Paint will do the conversion (load the image, save as). If you are using Linux, use the ImageMagick command line tool "convert", like so:

convert TPDF.bmp TPDF.png

Alternatively, you could see if your plotting program directly saves to PNG in the first place to save you the hassle.

Anyway, with that out of the way, a couple of points.

First of all, I wasn't thinking about the fact that you have to calculate things a bit differently when considering non-flat universes, and to get contours in [itex]\Omega_m[/itex], [itex]\Omega_\Lambda[/itex], you have to consider non-flat universes. Basically, there is an extra geometric factor which depends upon the curvature that you have to take into account, which comes in as a sine or hyperbolic sine of the comoving distance, depending on the sign of the curvature. The exact formulation is dealt with in detail in this paper:
http://arxiv.org/abs/astro-ph/9905116

It is equation 21, [itex]D_L[/itex] that you want to use. Note that this depends upon [itex]D_M[/itex] written down in equation 16, with [itex]\Omega_k = 1 - \Omega_m - \Omega_\Lambda[/itex]. Also don't forget to factor in [itex]\Omega_k[/itex] into the Friedmann equation if you go this route.

The other route to go is to just ignore the possibility of non-zero spatial curvature, and just go with a one-parameter fit as you did.
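A code sketch of that distance calculation might look like the following; the matter-plus-[itex]\Lambda[/itex] form of [itex]E(z)[/itex] (no radiation term) and the simple trapezoidal integration are assumptions made for illustration:

```python
import numpy as np

def luminosity_distance(z, om, ol, h0=70.0, n=2048):
    """D_L in Mpc for a possibly non-flat universe, following Hogg (1999),
    with E(z) containing only matter, curvature, and Lambda terms."""
    ok = 1.0 - om - ol                   # curvature density parameter
    dh = 299792.458 / h0                 # Hubble distance c/H0 in Mpc

    # Line-of-sight comoving distance D_C = D_H * integral of dz'/E(z')
    # (trapezoidal rule on an evenly spaced grid in z).
    zp = np.linspace(0.0, z, n)
    ez = np.sqrt(om * (1 + zp) ** 3 + ok * (1 + zp) ** 2 + ol)
    dz = zp[1] - zp[0]
    dc = dh * dz * (np.sum(1.0 / ez) - 0.5 / ez[0] - 0.5 / ez[-1])

    # Transverse comoving distance D_M: sinh for open, sin for closed, D_C if flat.
    if ok > 1e-8:
        dm = dh / np.sqrt(ok) * np.sinh(np.sqrt(ok) * dc / dh)
    elif ok < -1e-8:
        dm = dh / np.sqrt(-ok) * np.sin(np.sqrt(-ok) * dc / dh)
    else:
        dm = dc
    return (1.0 + z) * dm                # D_L = (1 + z) * D_M
```

At low redshift this reduces to the Hubble law, [itex]D_L \approx cz/H_0[/itex], which makes for a quick sanity check.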
 
  • #46
Using that equation for a closed (k > 0) universe, I got a likelihood something like this... :(
Now I'm thinking of continuing with the one-parameter fit and winding up the project by calculating the age and deceleration parameter from that fit.
 

Attachments

  • 2para.png
