Negative background in image processing
Hi, I'm working with pulsed laser measurements and am calculating how much energy I have within a certain angle. My problem is that I get very different results when I process the "same" images taken at different times. The images have been calibrated against a measured energy.
I take a monochromatic image of a laser lobe. I also take a second image without the laser, i.e. the background image.
im = double(cdata) - double(background);   % subtract the dark/background frame
The room is dark, and this background image should contain only the electrical noise (the black level is set just above zero in my image-capture program) and the background light. I subtract the two images, but I still get a lot of negative values in the area surrounding the laser lobe.
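For reference, here is how I would check the statistics of the subtracted image in a patch away from the lobe (the patch coordinates are just an example):
im = double(cdata) - double(background);
bgPatch = im(1:100, 1:100);   % a corner patch well away from the lobe (example coordinates)
fprintf('mean = %.2f, std = %.2f, negative = %.1f%%\n', ...
    mean(bgPatch(:)), std(bgPatch(:)), 100*nnz(bgPatch < 0)/numel(bgPatch));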
The pixel intensities are all summed up, and this sum is set equal to the measured energy, so I get a certain energy per unit of pixel intensity. I then calculate the energy I have within a certain angle. The problem is that this result varies too much between images of the same location taken at different times.
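Roughly, the calibration step looks like this, where E_measured, angleMap and thetaMax are placeholder names for my measured energy and a per-pixel angle map:
% Placeholder values and names for illustration only:
E_measured = 1.0e-3;                        % externally measured pulse energy [J]
energyPerCount = E_measured / sum(im(:));   % calibration factor [J per count]
inAngle = angleMap <= thetaMax;             % logical mask of pixels within the angle
E_inAngle = energyPerCount * sum(im(inAngle));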
I don't analyze the whole image. I identify the location of the laser lobe, cut out a box that is twice the size of the lobe, and analyze only this area, because analyzing the whole image would take too much time.
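The cropping step is roughly the following, with cy, cx, h and w as placeholders for the lobe centroid and size found beforehand:
% cy, cx: lobe centroid; h, w: lobe height/width (all found beforehand)
rows = max(1, round(cy - h)) : min(size(im,1), round(cy + h));
cols = max(1, round(cx - w)) : min(size(im,2), round(cx + w));
roi  = im(rows, cols);                      % box roughly twice the lobe size
% all further sums and the angle calculation are done on roi only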
Am I doing anything wrong here?
I have noticed that I get a lot of negative values in the area outside the laser lobe. Will this affect the calibration?
I don't understand how I can get so many negative values. Shouldn't the background image and the area surrounding the laser lobe cancel each other out?
I have filters in front of the camera at all times. The filters might have been different when taking the same image at different times, but they are always the same as for the corresponding background image. The black level might have been different between the lobe measurements taken at different times.
Confused ...
3 comments
Wick
on 1 May 2018
If IMAGE - DARK yields negative values then either your sensor has nonlinear response to its neighbor's values or your DARK image isn't actually dark.
Some sensors have cross talk. When one pixel is lit, the pixels next to it read incorrectly compared to the photons they actually received. Usually this makes the value too high, but in some systems there is current saturation, such that a value too high in one well can reduce the sensitivity of the neighboring cells. This is not likely, but possible.
As Guillaume mentioned, your dark image needs to be taken under the same conditions. I wouldn't advocate setting your zero level above the noise threshold, but rather in the middle of it. Then calculate an RMS of your noise in the dark image. When you take the bright-minus-dark difference, you'll expect a certain fraction of negative values for unexposed pixels due to noise. If the noise behavior is the same in both frames, you can model the residual difference between them and remove that amount of negative sum in that region.
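In MATLAB, one way that approach might look (a sketch only, with illustrative names):
dark = double(background);
sig  = double(cdata) - dark;        % keep the signed difference, don't clip at zero
noiseRMS = std(dark(:));            % RMS of the dark-frame noise
% Pixels well below the noise floor are presumably unexposed background;
% their mean should be ~0 if the black level truly cancels out.
bgMask   = sig < 3*noiseRMS;
residual = mean(sig(bgMask));
sig      = sig - residual;          % remove any leftover offset before summing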
Answers (0)