Image Processing - Automatic Region of Interest Extraction - Help!!

2 views (last 30 days)
Hello,
I am working on a project where I have to calculate the area of a region of interest relative to the area of the whole object. Separating out the whole object is easy and done. I need help/example code on how to automatically separate out the region of interest (pictures below).
Green - whole image, Red - region of interest (drawn by hand). The region of interest has to be extracted automatically when the code is run.
I have already tried all the basic methods such as thresholding, pixel values of the ROI, boundary conditions, etc.
I will answer any questions if those arise.
Thank you!

Accepted Answer

Ahmet Cecen on 14 May 2016
Edited: Ahmet Cecen on 14 May 2016
OK, here is what I could do in 5 minutes with my suggestions above and a little more:
1. Get the blue channel, with the green line still on the image:
2. Convert to double, take the element-wise square of the image, and erode with a disk strel until satisfied:
XPX = double(blue);      % blue channel as double
XPX = XPX.^2;            % element-wise square to exaggerate contrast
se = strel('disk',3,0);  % small disk structuring element
XPX = imerode(XPX,se);   % erode (repeat until satisfied)
3. Threshold and find the biggest connected component inside the green boundary, then fill holes:
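The code for step 3 wasn't posted; here is a minimal sketch of that step, assuming the eroded XPX from step 2. The Otsu threshold via graythresh is my guess (the original just says "threshold"), and the result is named BW2 so it feeds into step 4 below:
XPXn = mat2gray(XPX);            % rescale the eroded image to [0,1]
BW = XPXn > graythresh(XPXn);    % global threshold (Otsu, assumed)
BW = bwareafilt(BW,1);           % keep only the biggest connected component
BW2 = imfill(BW,'holes');        % fill holes; BW2 is the mask used in step 4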
4. Run activecontour with the thresholded image as a mask, then imopen with a giant disk to round the result:
bw3 = activecontour(blue,BW2,300);  % evolve the mask for 300 iterations
se3 = strel('disk',100,0);          % giant disk structuring element
BWXP3 = imopen(bw3,se3);            % morphological opening rounds the ROI
5. ROI overlaid:
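No code was posted for the overlay or for the area ratio asked about in the question; a hedged sketch, assuming rgb is the original image and wholeMask is the already-extracted whole-object mask (both names are mine, not from the post):
imshow(rgb); hold on;                       % show the original image
visboundaries(BWXP3,'Color','r');           % outline the extracted ROI in red
relativeArea = nnz(BWXP3) / nnz(wholeMask)  % ROI area relative to the whole object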
I am sure you can get better results with more fine-tuning, or maybe by resizing to a smaller image. However, this process makes some specific assumptions about the nature of your data, which may not apply to the rest of your images, as Image Analyst mentioned. He would also have more experience with this stuff; I only do computer vision in a narrow field of study.
  5 comments
Image Analyst on 14 May 2016
You can get the same effect with eroding just once with a larger structuring element.
Ahmet Cecen on 14 May 2016
Edited: Ahmet Cecen on 14 May 2016
Yeah, pretty much, with the exception of the curvature effect, which shouldn't really matter here (radius 3 is pretty much a diamond with flat diagonal edges, whereas radius 50 is pretty circular). It is just unclear what size will do the trick, so you can either keep trying different sizes and plot and check, or just erode with something smaller repeatedly and plot and check. Both are for loops; pick a preference.
I have an irrational fear of the "disk" element at large radius; at small sizes, when I look at the eroded image, I can relate more easily to the jagged diagonals of a diamond. I could always use a large diamond, but then some guy asks me to justify why diamond and not circle, whereas at small sizes I don't have to answer. It's one of those convoluted habits you pick up when you start coding and don't know better, and then it just sticks with you.


More Answers (2)

Ahmet Cecen on 13 May 2016
Since you didn't provide a clean image, I can only speculate. My strategy would be:
  1. The region of interest seems "bluer" than the rest. Pick the blue channel out of the RGB matrix. Convert it to double, maybe even take the element-wise square to highlight it.
  2. Repeatedly erode with a small disk to eat away the bluer regions around the main object (the shines); that green line will also grow thicker as you erode.
  3. After experimenting with the number of erosion steps, I would threshold out what I can, hoping that once the green line has expanded enough, the region of interest is the only "bluer" region remaining (rough sketch after this list).
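A rough sketch of that strategy; the file name, the 10 erosion passes, and the Otsu threshold are illustrative guesses, not values from the post:
rgbImg = imread('kernel.png');   % assumed file name
blue = double(rgbImg(:,:,3));    % 1. blue channel as double
blue = blue.^2;                  % optional square to exaggerate contrast
se = strel('disk',3);            % small disk structuring element
for k = 1:10                     % 2. repeated small erosions (10 is a guess)
    blue = imerode(blue,se);
end
blueN = mat2gray(blue);          % rescale to [0,1]
BW = blueN > graythresh(blueN);  % 3. threshold what remains (Otsu, assumed)
imshow(BW)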
  1 comment
Shawn Castelino on 14 May 2016
Thank you, Ahmet, for your answer. Please look at the image in the comment below for a picture without the lines (a clean image) and tell me if that changes your answer at all.



Image Analyst on 14 May 2016
Please post the original image without annotation lines. The problem is you can't do it by color segmentation or morphology alone because the top of the seed, or whatever that is, is not well defined. Just use the Color Thresholder app and you'll see what I mean. And if you get something that works for that image, it might not work for other images.
  4 comments
Shawn Castelino on 14 May 2016
Edited: Shawn Castelino on 14 May 2016
Yes I have, and for example, in the 3rd image in the RGB column, at the very top of the seed, you can see that the thresholding also includes the line that highlights the top of the seed. Is there any way to remove that? Something like cutting it off where it narrows? As you can see in the first image I posted, I only want that red-outlined part somehow.
I'm sorry the questions I'm asking seem to be poorly worded. I'm just figuring out the best way to describe the situation!
Image Analyst on 14 May 2016
Yes, for this particular image the top of the kernel is a problem. One approach is to use watershed to cut it off. See Steve's blog entry. Then use bwareafilt() to extract the largest blob. However if you do that on an image that does not exhibit that problem, it might just mess it up. My next thing to try or suggest was activecontour() - it looks like Ahmet gave you code for that. I'm also attaching my standard demo for that (not using your image).
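The attached demo isn't reproduced here; a minimal sketch of the watershed-then-largest-blob idea, assuming BW2 is the binary mask of the kernel and the h-minima depth of 2 is just a placeholder to tune:
D = -bwdist(~BW2);               % negative distance transform of the mask
D = imhmin(D,2);                 % suppress shallow minima (depth 2 is a guess)
W = watershed(D);                % watershed labels; ridge lines are 0
BWsplit = BW2;
BWsplit(W == 0) = 0;             % cut the mask along the watershed ridge lines
BWmain = bwareafilt(BWsplit,1);  % keep the largest remaining blob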
One thing you've got to look out for with these kinds of adaptive/automatic methods is that sometimes you will force them to find a blob when they shouldn't. What if there is no lighter stuff there and you did something foolhardy like use graythresh()? Well, it would return a threshold. What if you had two well-separated humps in the histogram? Well, it would return a valid splitting level. However, what if there was no brighter blob and you just had a single hump at the dark end of the histogram? Well, it would find a threshold in the middle of the hump and tell you that half of your background is actually foreground, and that would be wrong. So in your situation, you don't want to be so locally adaptive that it's finding stuff it shouldn't. Often what is done in these situations is to calibrate the intensity by setting the lux level to a known value, using constant camera settings (no automatic anything in the camera), and imaging a known standard like the Color Checker Passport. Then you can use fixed threshold or color gamut segmenting values.
You may need a combination of fixed algorithms and adaptive algorithms. For example, you may get an algorithm that works on all levels (from none there to 100% there) of white-ish in yellow-ish corn, but then find that the algorithm won't work for greenish or whitish kernels. So it might have to adjust for the color of the bulk kernel, then adapt and use "fixed" values determined from the bulk color. Like maybe you compute the bulk color and say that the inside region is always 10 units of Delta E from the bulk. I attach a Delta E demo.
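A hedged sketch of that Delta E idea (CIE76 distance; rgb and wholeMask are assumed variable names, and 10 is just the number from the comment, not a tested cutoff):
lab = rgb2lab(rgb);                 % convert to CIE LAB
L = lab(:,:,1);  a = lab(:,:,2);  b = lab(:,:,3);
bulk = [mean(L(wholeMask)), mean(a(wholeMask)), mean(b(wholeMask))];    % bulk kernel color
deltaE = sqrt((L - bulk(1)).^2 + (a - bulk(2)).^2 + (b - bulk(3)).^2);  % per-pixel Delta E
roiMask = (deltaE > 10) & wholeMask;  % pixels differing from the bulk by more than the assumed 10 units, inside the kernel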
See my File Exchange for more color segmentation demos. http://www.mathworks.com/matlabcentral/fileexchange/?term=authorid%3A31862
Finally, I attach a visualization of the 3-D color gamut in CIE LAB color space:

