Camera Calibration: Estimate Lens Distortion from 1 Photo

In summary, the thread discusses estimating lens distortion from a single photo of a reference pattern and using the measured displacements to compute polynomial correction coefficients. The suggested approach is to detect each spot as a shape and calculate geometrical factors, such as the centroid, from which the distortion can be corrected. The responder also describes his own image processor, GRIP, a new version of which includes distortion correction, and invites the original poster to try it and give feedback for improvement.
  • #1
ProTerran
Hello,

Here is my problem:
I've created a reference pattern for camera calibration. It consists of well-defined dots, each of which is unique.
What I'm trying to do is estimate the lens distortion from only one photo. I can easily identify the dots in the photo, but I have no idea how to estimate the undistorted positions of those dots.
I need the undistorted positions in order to compute the differences between the undistorted and distorted dot positions, which are then used to compute the polynomial coefficients.
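To be concrete about what I mean by the polynomial coefficients: I picture a pair of low-order polynomials that map a distorted position $(x_d, y_d)$ to the undistorted one, and it is their coefficients $a_{ij}$, $b_{ij}$ that I want to estimate from the dot displacements (the exact order of the polynomial is still open):

$$x_u = \sum_{i+j\le n} a_{ij}\,x_d^{\,i}\,y_d^{\,j}, \qquad y_u = \sum_{i+j\le n} b_{ij}\,x_d^{\,i}\,y_d^{\,j}$$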

Sorry for my bad English.
 
  • #2
You need to detect each spot as a shape, in terms of its boundary. Then you can calculate geometrical factors such as width, height, perimeter and, most usefully in your case, the centroid (i.e. the centre of gravity of all the pixels comprising the object).
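For example, here is a rough sketch in Python of the centroid step (just to illustrate the idea; my own code works differently, and the threshold is something you would choose for your images):

```python
import numpy as np
from scipy import ndimage

def spot_centroids(image, threshold):
    """Return the centroid (row, col) of each bright spot in a grey-level image."""
    mask = image > threshold                      # pixels belonging to some spot
    labels, n = ndimage.label(mask)               # group connected pixels into blobs
    # Centre of gravity of each blob, weighted by the pixel values themselves
    return ndimage.center_of_mass(image, labels, np.arange(1, n + 1))
```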
I do this kind of thing in my own image processor (GRIP), which I wrote for making astrophotographs. The problem I faced there was that I only had a fixed tripod when I started, so the stars moved from one frame to the next due to the Earth's rotation, and for stacking multiple exposures I had to take account of lens distortion. I have made GRIP available for others to use from my web site: www.grelf.net. It is written in Java, so anyone can extend it for their own purposes. The API is available on the web too: use the API button in the menu on each of my pages. In particular, look at the class called Blob. A blob is a detected object, described in terms of its boundary and enclosed region.
 
  • #3
I am currently modifying GRIP so that correcting the lens distortion of a photo will be possible as a menu option, after a reference pattern of the kind I think you mean has been used to calibrate it. A programmer could do it with my API now but I'll make it so a non-programmer can do it.
 
  • #4
Thank you for your reply.
Can you tell me what kind of method you are going to use for camera distortion calibration?

I have come across two kinds of methods. One is based on modelling radial distortion, and the second is based on fitting a user-defined polynomial (a much rarer method). The second method is more general: because you can fit any type of function, you are not limited to radial distortion and can model other kinds of image distortion as well.
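To be clear about the first kind: by radial distortion I mean the usual even-polynomial model, where a point at radius $r$ from the distortion centre is moved along the radius (coordinates taken relative to that centre):

$$x_d = x_u\,(1 + k_1 r^2 + k_2 r^4 + \dots), \qquad y_d = y_u\,(1 + k_1 r^2 + k_2 r^4 + \dots), \qquad r^2 = x_u^2 + y_u^2$$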

As I mentioned in my previous post, the problem I have is how to estimate the difference between the positions of the reference points in the undistorted image and in the distorted image (I only have one image, the distorted one). One way to solve this is by modelling a pinhole camera, knowing the exact position of the camera with respect to the reference pattern, and knowing the intrinsic camera parameters, i.e. zoom and focus.
When you have all this information, it is possible to project the reference points and obtain the undistorted image.

But I don't have that information, so I am trying to solve the problem with a different approach. Assuming that the only information I have is the distorted image and the exact positions of the points on the reference pattern, I try to iteratively find the best fit of the undistorted points to the distorted image. After that, I will be able to use the least-squares method to fit the polynomial and obtain a map of the distortion over the whole image.
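As an illustration of that last step, here is a rough NumPy sketch of the least-squares fit. It assumes I already have matched arrays of distorted dot positions and their estimated undistorted positions, and it fits one second-order polynomial in x and y per output coordinate (the function names and the choice of second order are just for the example):

```python
import numpy as np

def fit_distortion_map(distorted, undistorted):
    """Fit 2nd-order polynomials mapping distorted -> undistorted positions.

    distorted, undistorted: (N, 2) arrays of matched point coordinates.
    Returns (cx, cy): coefficients for the basis [1, x, y, x^2, x*y, y^2]
    evaluated at the distorted coordinates.
    """
    x, y = distorted[:, 0], distorted[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    cx, *_ = np.linalg.lstsq(A, undistorted[:, 0], rcond=None)   # fit x_u(x_d, y_d)
    cy, *_ = np.linalg.lstsq(A, undistorted[:, 1], rcond=None)   # fit y_u(x_d, y_d)
    return cx, cy

def apply_map(points, cx, cy):
    """Map new distorted points through the fitted polynomials."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    return np.column_stack([A @ cx, A @ cy])
```

Evaluating the fitted polynomials on a grid of pixel coordinates would then give the map of the distortion over the whole image.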

Hope that what I wrote is clear enough :-)
 
  • #5
I have nearly finished a new version which does the following. Use an image of a square array of dots, taken with the same optical set-up as you wish to correct. GRIP measures the positions of the dots and creates a regular grid of the same average spacing and orientation. It then knows what second order polynomials to use on a real photo to warp it so the calibrated dot positions move to the regular grid. The grid info is saved in a file so that it can be reapplied to multiple images - in fact it will be possible to batch process a sequence of photos.
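To sketch the grid-construction idea in Python (a simplified illustration only, not the actual GRIP implementation, which is in Java): take the measured dot centres, estimate the average step along a row and down a column, and lay out an ideal grid from those two step vectors.

```python
import numpy as np

def ideal_grid(measured, n_rows, n_cols):
    """Build a regular grid with the same average spacing and orientation
    as the measured dot centres.

    measured: (n_rows, n_cols, 2) array of dot centres ordered by grid position.
    """
    # The average step vectors carry both the spacing and the orientation.
    step_col = np.mean(np.diff(measured, axis=1), axis=(0, 1))   # along a row
    step_row = np.mean(np.diff(measured, axis=0), axis=(0, 1))   # down a column
    origin = measured.mean(axis=(0, 1))                          # centre the grid on the data

    i = np.arange(n_rows) - (n_rows - 1) / 2.0
    j = np.arange(n_cols) - (n_cols - 1) / 2.0
    # Every ideal position is origin + i*step_row + j*step_col
    return (origin
            + i[:, None, None] * step_row
            + j[None, :, None] * step_col)
```

The positions of this ideal grid then play the role of the undistorted points when fitting the warp polynomials.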
This warping is the same as I use for stacking multiple exposures in astrophotography, where GRIP matches star patterns from one frame to the next. It has been available for programmers for several years but I have now made it easily accessible from menus, with wizards to guide you through. I plan to upload the new version of GRIP sometime this week.
 
  • #6
That is great. Thanks sir.
 
  • #7
A new version of my GRIP application is now available, including the kind of distortion correction I believe you want. Read my initial description here carefully: http://www.grelf.net/new.html and then I hope you will be able to download and use it.
Please give me feedback on whether it does what you need and what may need improving.
 

Related to Camera Calibration: Estimate Lens Distortion from 1 Photo

1. What is camera calibration?

Camera calibration is the process of estimating the parameters of a camera's imaging model, in particular the distortion introduced by its lens, so that the distortion can be corrected and the images measured accurately.

2. Why is camera calibration important?

Camera calibration is important because it removes systematic geometric errors from images, which improves their accuracy and quality. It also makes images from the same camera consistent, so they can be compared and analyzed more easily.

3. How does camera calibration work?

Camera calibration works by using a set of known reference points, such as a calibration grid or a pattern with known dimensions, to measure the distortion in the lens. This information is then used to correct the distortion in the captured images.
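As a concrete illustration (one widely used toolchain, not the GRIP method discussed in the thread; the file name and grid size below are made up), OpenCV can detect a printed grid of dots and fit a radial/tangential distortion model to it:

```python
import cv2
import numpy as np

img = cv2.imread("calibration_photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

pattern = (7, 5)                                    # dots per row and per column (assumed)
found, centres = cv2.findCirclesGrid(gray, pattern)

# Known positions of the dots on the flat target (z = 0), in units of one dot spacing
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

if found:
    # Note: OpenCV normally expects several views of the pattern; a single view
    # only weakly constrains the focal length, but it illustrates the principle.
    ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        [objp], [centres], gray.shape[::-1], None, None)
    corrected = cv2.undistort(img, K, dist)         # remove the estimated distortion
```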

4. Can camera calibration be done with just one photo?

Yes, lens distortion can be estimated from a single photo of a suitable reference pattern. However, the accuracy of the calibration will depend on how precisely the reference points can be located in the photo and on how well the pattern covers the image, especially towards the edges where distortion is usually strongest.

5. What are the benefits of using camera calibration?

Some benefits of using camera calibration include improved image accuracy, better image quality, and easier comparison and analysis of images. It can also help to save time and money by reducing the need for manual correction of distortion in images.
