Scan the image between the first and last points I set

Hi MATLAB community.
I need your valuable help on this.
Please see the attached image. I have annotated the starting and end points of the scan. What I want is to scan the image starting from the top (where the first black pixels begin) and stop at the first gray-white pixel of the ruler that lies vertically at the right edge of the image.
The same process will be repeated from the bottom of the image, scanning up until the first gray-white pixel of that same vertical ruler. The part in between, where the imaging window lies, should be segmented or separated from the rest of the image; everything else should be masked.
Continuing the scan from right to left, black pixels start again after the imaging window ends, so the segmentation should stop there and all other details (letters, colorbars, logos, and so on) should be masked.
The only visible part of the image should be the rectangular gray imaging window, but not at the size you see it, since the actual imaging window is bigger. It starts at the first gray dash of the ruler and ends at the last gray dash near the bottom, even where the imaging window is pure black.
Am I making sense? Please ask about whatever is unclear.
I hope to have a working solution soon.
Thanks for everything and also the time you spent to read this.

21 comments

To get to the essence of my question: I need only the main square of the imaging window in the image, with everything else masked.
The size of the square (imaging window) should match the dimensions of the depth, not only what is visible. For instance, in this image the depth is 40 mm, but what you can actually see in the image is less. So the segmented square should be 40 mm, or around 447 pixels, even though a portion of it will be pure black, extending downwards.
Am I clear?
Jan
Jan on 25 Jan 2019
I do not understand the question. Please explain what the input is: a gray-scale image stored in a matrix? Or does the white border belong to the image, and is it stored as an RGB array? Which part of the posted image do you want? Please mark it manually to clarify this. Are the yellow lines a hint already? "where the first black pixel begins" and "It starts from the first gray dash of the ruler until the last gray dash close to the bottom" are not clear enough.
"The size of the square (imaging window) should be the dimensions of the depth and not only what is visible" is not clear to me either. What exactly is "the depth"? The height or the width?
Guillaume
Guillaume on 25 Jan 2019
Edited: Guillaume on 25 Jan 2019
Yes, the language is very confusing. As far as I understand, you don't want to do any segmentation (which has a precise meaning in image processing) or scanning. All you want to do is crop the image.
In terms of height, my understanding is that what you call depth (images don't have depth) is the span of pixels between the first and last tick marks of the ruler. So you want to crop the image height between these two tick marks.
In terms of width, I'm not sure what you want. The left and right white border is trivially removed. But after that, I'm not sure. Should the width be restricted to the inner rectangle or something else?
By the way, since the image appears to be a screen capture of an actual image displayed in some sort of software, don't you have access to that actual original image which would avoid you having to remove all that UI stuff?
Stelios Fanourakis
Stelios Fanourakis on 25 Jan 2019
Edited: Jan on 25 Jan 2019
@Jan. First of all, thanks for just making the effort to understand or give a solution to my problem as it seems you are the first one.
The image is RGB. It has all colors. What I do care about is the black and white rectangle in the middle of the image which is the ultrasound imaging window. Everything else shall be masked (annotations, logo, dates, colorbars, titles etc).
If you carefully notice, at the right end of the image and plus at the bottom of it, there are two rulers, visible as gray dash and dot lines. Those dash lines indicate the height and width of the imaging window.
Yes, height is equivalent to depth in our case.
My yellow marks say that I want scanning to start from the first black pixel (row of black pixels) until the first dash pixel (gray) and the same procedure starting from bottom up until the first gray (dash) pixel.
Yes, it is a kind of cropping. Everything else should be removed; only the region between the rulers should be visible.
But not actual cropping, since that would cause resampling. Just mask everything out and keep only the imaging window, at the dimensions the rulers indicate, visible.
Stelios Fanourakis
Stelios Fanourakis on 25 Jan 2019
Edited: Jan on 25 Jan 2019
@Guillaume.
I don't want actually cropping but masking. Cropping will cause resampling.
I want segmenting this area. Not cropping it.
Exporting the image from the ultrasound unit comes along with all those annotations.
@Stelios: I'm not sure I can see what you call rulers. The posted image seems to be scaled. There are several gray dashes, but it is not clear how they "indicate the height and width of the imaging window".
This sentence is not clear:
"My yellow marks say that I want scanning to start from the first black pixel (row of black pixels) until the first dash pixel (gray) and the same procedure starting from bottom up until the first gray (dash) pixel."
Does "first" mean horizontally or vertically? "first pixel (row of pixels)" is confusing - pixel or row of pixels?
Cropping does not cause any resampling. Why do you assume this? Cropping means just to crop the array.
  1. Is the posted image the real input you have? Does the image contain the yellow lines on the right side?
  2. What is the wanted output? "Masking" means that the output has the same array size as the input. Cropping means that you crop out the area of interest, without any resampling.
  3. What is a unequivocal definition of the location you want to use? Something like:
Left top pixel: the pixel under the left lower corner of the blue rectangle
Left bottom pixel: same horizontal position, vertical position is ???
Right top pixel: ???
Right bottom pixel: is well defined then, when the area is a rectangle
Please try to use fewer words and more clarity. Actually, 5 sentences are enough to define the problem. Post an input, the above definition and, as a bonus, a handmade output. This will save time for you and for the readers.
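To illustrate the cropping-by-indexing point made above, here is a minimal sketch on a synthetic image; the crop limits `y1..y2` / `x1..x2` are placeholders that would in practice come from the detected ruler positions:

```matlab
% Cropping is pure array indexing: no resampling, no change in pixel size.
% A small synthetic RGB image stands in for the real ultrasound frame.
img = zeros(10, 10, 3, 'uint8');
img(3:7, 4:8, :) = 200;                 % bright block playing the imaging window
y1 = 3; y2 = 7; x1 = 4; x2 = 8;         % placeholder crop limits
cropped = img(y1:y2, x1:x2, :);         % every kept pixel is copied unchanged
assert(isequal(size(cropped), [5 5 3]));
assert(all(cropped(:) == 200));         % values untouched, so no resampling
```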
Guillaume
Guillaume on 25 Jan 2019
Cropping will cause resampling.
No it won't. Cropping results in a smaller image; you just chop off the bits you don't want. Of course, if you display that smaller image with the same width and height as the original image, then the display will have to be scaled.
In any case, regardless of whether or not you crop the image, as far as I understand all you want to do is identify the left-right and top-down border of the part of the image you want to keep. Whether or not you just chop these borders or set them to black doesn't really matter. Either is trivial to do.
You still haven't clearly explained how to identify these borders. My understanding is that for top and bottom it's the first and last tick marks (outer or inner edge?). I've no idea what it is for the left and right border.
Again, what you describe has nothing to do with image segmentation and will not involve any scanning.
Exporting the image from the ultrasound unit comes along with all those annotations
I would assume that the ultrasound unit would allow you to export the raw ultrasound data, but maybe not. It's something I would explore. What you're exporting is the display; you're talking about not wanting any resampling, but that screen capture is definitely a downsampled version of the raw image.
@Jan
"First" means the first row of black pixels. The first yellow line is under the blue label, where the first row of black pixels begins. The second yellow line is where the first dash of the ruler is. By that I mean I want everything above the first dash of the ruler to be masked.
Cropping, in my experience, causes resampling. So no cropping. I have had bad experiences with cropping; the pixel size gets smaller.
If we start scanning from the bottom right of the image, the first pixel it finds is from the dash line of the horizontal ruler.
So I think I made it simple: I want only the area the rulers delimit to be visible, nothing else.
Guillaume
Guillaume on 25 Jan 2019
Edited: Guillaume on 25 Jan 2019
Cropping to my experience causes resampling. So no cropping. I had bad experience cropping. Pixel size gets smaller.
You seem to be confusing cropping and resizing. Cropping never changes the size of the pixels; it is never a resampling operation. However, as said, whether the final step is to crop or to blank the image borders is not important.
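The crop-versus-mask distinction reads like this in code. A sketch on a synthetic array, with the rectangle limits assumed already known:

```matlab
% Masking keeps the array size and blanks everything outside the rectangle;
% cropping returns a smaller array. Neither resamples a single pixel.
img = 200 * ones(6, 6, 3, 'uint8');          % synthetic stand-in image
y1 = 2; y2 = 5; x1 = 2; x2 = 5;              % assumed region of interest
masked = zeros(size(img), 'like', img);      % all-black canvas, same size
masked(y1:y2, x1:x2, :) = img(y1:y2, x1:x2, :);
cropped = img(y1:y2, x1:x2, :);
assert(isequal(size(masked), size(img)));    % masking: size preserved
assert(isequal(size(cropped), [4 4 3]));     % cropping: smaller array
```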
the first pixel it finds is from the dash line of the horizontal ruler.
I cannot clearly see a horizontal ruler. The vertical ruler is better defined, but some tick marks are poorly visible. The ruler is also not touching the edge of the right black border of your image, so reliably locating these tick marks is going to be the main difficulty. Note that the reason all these tick marks are poorly visible (and don't all have the same size) is that your image is already downsampled. Badly! The fact that the image has been saved as jpeg also doesn't help, as this creates its own artifacts (jpeg is a lossy format that particularly does not cope well with sharp transitions from black to white).
I want only the areas of the ruler to be visible
Can you draw a rectangle on the image clearly showing the area of the image you want to keep. That would make your explanation simpler.
By the way, does this have to be done programmatically? It would be easier if you selected the area of the image you want to keep interactively. That's trivial to implement in MATLAB.
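As a sketch of that interactive route, assuming an image `img` is already loaded (`imrect` is the classic tool; `drawrectangle` is its newer replacement in recent releases):

```matlab
% Let the user drag a rectangle once, then reuse its position.
imshow(img);
h = imrect;                  % interactive rectangle on the displayed image
pos = round(wait(h));        % blocks until double-click; [x y width height]
cropped = imcrop(img, pos);  % or build a black mask from pos instead
```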
No, the marks are meant to be that shape and size. They are not downsampled.
Some are dashes and the middle ones are dots.
See the image I attach; the horizontal ruler is clearly defined.
I drew a yellow rectangle to indicate the area I want to segment. The ruler marks do not need to touch the edge of the imaging window; they are there to indicate the height and width of the imaging window and how big the rectangle should be.
It needs to be done programmatically. I hope for an automated method.
Stelios Fanourakis
Stelios Fanourakis on 25 Jan 2019
Edited: Jan on 26 Jan 2019
In very simple words.
How do I segment the black and white square inside the image? I'll find a way to adjust the size of the segmentation afterwards.
What can I use for automated rectangular/square segmentation?
Jan
Jan on 26 Jan 2019
@Stelios: You have posted 2 images. The first one contains 2 yellow bars, the second one a yellow rectangle. As far as I understand, the yellow elements are not part of the real data but only inserted as an explanation, correct? The upper yellow bar has no correlation to the yellow box, does it? The lower position of the yellow box is determined by the upper limit of the 2 small lines (one consisting of about 3 pixels and the other of 2 pixels)? The image contains a huge white border also? The image is saved as JPG with lossy compression. This means that it contains artifacts and no sharp edges, which will impede the detection of certain coordinates massively.
You still did not explain exactly what the inputs are and how the wanted positions can be recognized uniquely. There seems to be a confusion about the definition of "cropping": an image is a 3D array with the indices [1:X, 1:Y, 1:3]. Cropping means that you cut out an array with the indices [x1:x2, y1:y2, 1:3]. This does not do any downsampling, and the size of the pixels does not matter in any way.
My question for clarification is still open: How are these x1, x2, y1, y2 exactly defined?
@Jan. I don't want cropping. Just segmentation, or masking everything else but the main square of the image.
The yellow rectangle I drew, is what I want to be segmented. Can you find me a way to automatically create this yellow square??
The upper limit of the yellow rectangle, OR of the segmentation, is AT the row where the vertical ruler BEGINS (first gray dash).
The lower limit of the yellow rectangle is AT the same row as the last gray dash of the vertical ruler. If you notice, the last gray dash of the vertical ruler coincides with the first or last dash of the horizontal ruler.
The input is just the image.
Can you find me a way to autocreate this rectangle at the start and end rows I explained?
Hope that covers it.
Stelios Fanourakis
Stelios Fanourakis on 26 Jan 2019
Edited: Stelios Fanourakis on 26 Jan 2019
I have another idea.
What about making an initial square/rectangle segmentation? This can easily be done by starting with a drawrectangle command and then expanding the rectangle, making it grow until it reaches the rows of the ruler (first dash at the top, last dash at the bottom).
It sounds like a great idea, but I need suggestions for making the rectangle auto-expand, like an active contour constrained to a rectangular shape.
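The grow-until-tick idea above can be sketched without any active contour machinery. A toy version where the tick rows are already known (finding them is the actual detection problem):

```matlab
% Expand the top and bottom edges of a small starting rectangle one row
% at a time until each reaches the row of the nearest ruler tick.
% tickRows is a placeholder; detecting it is the real, unsolved step.
tickRows = [10 90];                 % rows of first and last tick (assumed)
top = 45; bottom = 55;              % initial rectangle, vertically centered
while top > tickRows(1), top = top - 1; end
while bottom < tickRows(2), bottom = bottom + 1; end
assert(top == 10 && bottom == 90);
```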
Jan
Jan on 28 Jan 2019
Edited: Jan on 28 Jan 2019
@Stelios: As far as I can see, you still did not answer what your inputs are exactly. You have posted a JPEG file, which is compressed with a lossy method, such that the file contains artifacts. This impedes the determination of the ruler pixels massively. A lossless file format would be far better. In addition, this image contains huge white borders, and you did not explain yet whether they exist in the original files also (if files are the input at all). It is not clear whether you want to apply the procedure to one image manually or to thousands of images automatically.
The text looks like the image is downsampled already: the fonts are slightly deformed. You have explained that it is not downsampled, but it definitely looks like it is. The image you posted here looks much better: not scaled and without JPEG artifacts: https://www.mathworks.com/matlabcentral/answers/uploaded_files/200898/A1b.jpg . Why not use this image as input? Even the horizontal ruler is perfectly visible, in contrast to the images you post here.
There seems to be a small O between the image and the vertical ruler. Did you mention this already?
Automatic masking of lossy JPEGs taken from scaled screenshots seems to be an unreliable and inaccurate method, which is not useful for scientific work, especially in the field of medicine.
Actually it should be easy to use a find command on the columns that contain the vertical ruler. But as long as it is not clear whether the original image contains this white border, and whether downsampling or lossy compression blurs the gray pixels of this ruler, suggesting specific code would be crude guessing only.
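A rough sketch of this find-based approach, under the assumption that the ruler sits in a narrow column strip near the right edge and its ticks are mid-gray; the strip shown here is synthetic, and the thresholds are guesses that depend entirely on the unknown input format:

```matlab
% Synthetic strip standing in for the rightmost columns of the image:
% black background with five fake mid-gray tick marks.
strip = zeros(100, 20, 'uint8');
strip([10 30 50 70 90], 15:18) = 128;
isTick = any(strip > 60 & strip < 200, 2);  % rows containing mid-gray pixels
rows = find(isTick);
y1 = rows(1);      % first tick: top of the imaging window
y2 = rows(end);    % last tick: bottom of the imaging window
assert(y1 == 10 && y2 == 90);
```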
I give up here. After 14 comments and some clear questions for clarification, I do not know what the inputs are and what exactly you want to achieve. I do not see why you want "masking", when the main part of the image is then masked and simply wastes space. Therefore I assume that I cannot help you. Good luck.
Stelios Fanourakis
Stelios Fanourakis on 28 Jan 2019
Edited: Stelios Fanourakis on 28 Jan 2019
@Jan. I don't understand why you are so confused. My statement is pretty clear. Can anyone else argue that it's not clear what I'm asking?
The inputs are the jpg images. They are not downsampled. There are going to be many images, not just one. The automated segmentation is already successful, as you can see from the red line in the middle of the ultrasound image; it was done using an automated active contour model.
What I ask is pretty CLEAR: a way to make a rectangle-shaped active contour (see image) expand/grow vertically, NOT horizontally, and keep growing until it finds the last pixel of the dash line (ruler) at the top (the ruler's gray value is a different tone than the letters at the left of the image) AND the first gray-tone pixel (ruler) at the bottom.
Why is it so difficult to understand my query? If someone else follows my post and does not understand what I ask, please comment as well.
Rik
Rik on 28 Jan 2019
Compare the image you have just posted with the image attached to the question itself. There is a difference in size, which makes it unclear what the input you start with actually is. You say you will be processing a lot of files; that makes it very important to know the boundary conditions of your data.
Your jpg inputs are lossy, which makes it difficult to find things that are easy for the human eye to find. jpg is not optimized for analysis, it is optimized for human viewing. All sharp edges (like the ticks, which are extremely important here) will get blurred.
I understand your frustration, but apparently your question is not clear enough, otherwise we could have given you an answer already. Ignoring direct questions ('which of the images you have attached is the actual start of your workflow?') and then continuing to say that your question is clear is not helpful for anyone.
Jan has written several remarks in a quite long comment, but you hardly reply to them.
You want to find a mask that covers the ultrasound area, based on the depth and width indicators (the ticks at the border). That much is clear. What is not clear is what your exact input image is. Some of the images you have posted look like very lossy jpg files, others don't suffer as much from any downsampling. Some images you have posted have a white border, some have a red thin border, some lack either or both. Which of these is the input?
Let me put it another way, which may make it much simpler.
Forget about resolutions and active rectangle contours.
At the moment, I am looking for a way to scan the image from right to left over only around 3-4 pixel columns: enough columns to identify the first and last dash lines (gray) of the vertical ruler and so define those points.
Forget about the rest. Only define the first and last points of the vertical ruler. This needs to be automated.
Can you help me on this?
@Rik.
What I ask at the moment can be applied to all the images I have uploaded so far. So, no frustration about my input images.
I just need Matlab to identify those two points. First and last dash of the vertical ruler. That's all!
Jan
Jan on 28 Jan 2019
Edited: Jan on 28 Jan 2019
@Stelios: But due to the choice of a lossy, low-quality JPEG compression and due to downsampling/re-scaling, there is no way to recognize pixels of the vertical ruler exactly and uniquely. I've collected the images you have posted so far, and it is obvious that each one would need a different method to identify a certain pixel, if this is not made impossible altogether by the bad choice of image compression, rescaling, downsampling, and the added border.
"Only to define the first and last points of the vertical ruler": exactly this is made extremely hard, because you've posted very different inputs and the blurred content does not allow identifying specific points. See this zoomed area of the right bottom ruler mark with increased contrast:
ruler.png
So which one is the wanted pixel?
Of course you can set some thresholds based on expert knowledge and use AI to remove the JPEG artifacts. But it would be trivial if you just used inputs without lossy compression. Do you have any good reason to work with very noisy inputs? I'm convinced you have caused this noise by repeated resampling and JPEG compression. Now the input images are trashy.
Guillaume
Guillaume on 28 Jan 2019
I just need Matlab to identify those two points. First and last dash of the vertical ruler. That's all!
This needs to be automated.
Yes, we understood that. But if you want something reliable we need to know exactly what we're working with. Because the solution and its complexity is going to depend on (amongst other things):
  • Are these marks always in the same location relative to the edges of the image? You've posted all sorts of images where there is sometimes a white border, sometimes not. So, do we first have to identify that white border or not?
  • Are these marks clearly defined, or plagued by jpeg compression artifacts and sometimes downsampling (despite your protests, you've shown us downsampled images; see Jan's answer)?

Sign in to comment.

 Accepted Answer

Jan
Jan on 28 Jan 2019
This does not solve the question, but is a collection of the images you have posted so far:
  1. A cropped output (logo only partially visible at the top); some downsampling seems to have been applied also (clear text, but zooming into the horizontal ruler reveals some artifacts); saved as JPEG with high quality; a white border at the bottom (I added a green box to show it):
A1b.jpg
2. A downsampled image (you can recognize this by the artifacts in the text and the almost vanishing horizontal ruler; right-click on the image to enlarge it). A large white border is added (I've added a green border so that you can see the white area). Zooming in, e.g. at the dashes of the ruler, shows strong artifacts from the JPEG compression, which impedes further processing massively:
marked.jpg
See the JPEG artifacts, zoomed by 800%. Do you see the dark gray pixels and the different colors of the actual dashes?
zoomed.png
3. Another file with a yellow box (not explained how you created it), the same white border (again I added the green border), artifacts, downsampled:
rect.jpg
4. A contour graph, which was obviously built based on an image with JPEG artifacts; see the unclear tick marks at the horizontal ruler at the bottom. It is unlikely that any meaningful measurement can be done based on such images:
boundaries.jpg
As soon as you confirm that type 1 is used as the input, a solution would be rather easy. In contrast, starting with the noisy type 4 images is very demanding.
You explain: "The input are the jpgs images. They are not downsampled." Then why do you post downsampled JPEGs and not the real input? Why do you make the processing much harder (or impossible) by using lossy JPEGs as input?
I've worked with ultrasound devices repeatedly. In all cases the screen has been stored losslessly as PNG, TIFF or GIF, to avoid the JPEG artifacts. The recognition of the actual data has been easy, because it was at well-defined pixel positions. Automatic cropping is very easy also, when the positions are exactly defined.
"Why is it so difficult to understand my query?" Because your explanations do not match the posted examples, because you have a confused idea about cropping, because you do not answer questions for clarification and ignore suggestions. This is your problem. Nobody but you suffers if it is not solved. I only want to help you, as I do in all other threads.

5 comments

Guillaume
Guillaume on 28 Jan 2019
I totally agree with Jan. We need a clear definition of the input image (and are you sure the software isn't capable of outputting the raw image?), preferably not as jpeg, and of the area to extract (I think we've got that now).
And leave us to decide what the procedure to extract that area is going to be.
Stelios Fanourakis
Stelios Fanourakis on 28 Jan 2019
Edited: Image Analyst on 28 Jan 2019
Sorry for the frustration I might have caused. Here is one of the original exported images from the ultrasound unit.
It is in .bmp format. I guess there is no quality loss in those images.
The JPGs were generated by other software used for manual segmentation.
I need a way to detect the first and last dashes of the vertical ruler: that's it.
I guess the scanning should start from the right end of the image going left. I only need a couple of columns to estimate the pixel intensities and then find the gray values.
It will exclude any other color, like the blue and orange in the label at the top.
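One way to reject the colored annotations, sketched on a few hand-made pixels: a pixel counts as "gray" when its R, G and B channels are almost equal, which blue and orange text never are. The tolerance of 15 and the darkness cutoff of 60 are assumptions to tune on the real data.

```matlab
% Four test pixels: ruler gray, blue, orange, black background.
px = zeros(4, 1, 3);
px(1,1,:) = [128 128 130];                     % near-gray ruler dash
px(2,1,:) = [ 20  60 200];                     % blue annotation
px(3,1,:) = [230 140  30];                     % orange annotation
maxDiff = max(px, [], 3) - min(px, [], 3);     % 0 for perfectly gray pixels
isGray = maxDiff < 15 & max(px, [], 3) > 60;   % gray, but not black
assert(isequal(find(isGray), 1));              % only the ruler pixel survives
```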
@Jan. Can you also help me on both images? JPGs and the BMP I uploaded at my last comment?
If it is jpg, let's detect the pixel with the highest intensity value, the brightest of them all. The faded ones are artifacts, that's for sure.
Looking to your feedback.
Thanks again.
Image Analyst
Image Analyst on 28 Jan 2019
Edited: Image Analyst on 28 Jan 2019
Can you ask the university to give you the image only, rather than a screenshot with all kinds of annotations on it?
Aren't the dashes in the same location every time? They shouldn't move around from image to image, should they? Just use imtool() to find out where they are.
Stelios Fanourakis
Stelios Fanourakis on 28 Jan 2019
Edited: Stelios Fanourakis on 28 Jan 2019
I will ask; I don't know if this is possible, since any exporting I have done so far has been in either .dcm or .bmp and comes together with all the annotations.
Besides, it would be better if the scanning could be applied to those types of images, because it is supposed to work for all units.
So, more or less, a lot of the images going as input to the program may have annotations.
The algorithm should detect only the specific pixel value that corresponds to gray. I guess blue, orange, white, and gray all have different values.
Different units may have their rulers placed differently. So it needs to be automated and trace the gray pixels. I cannot just give MATLAB the pixel coordinates directly; that would not be automated.
Guillaume
Guillaume on 28 Jan 2019
Different units may have differently their rulers placed
Can they have different numbers of tick marks?
Can the tick marks be different size?
Can they be on the other side of the image?
Can they be different shape?
Can they be different colour?
It would be very difficult to come up with a truly generic detection algorithm that could cope with all the above, so it's important to know what can and what can't be different from image to image.

Sign in to comment.

More Answers (0)

Categories

More information about Read, Write, and Modify Image in Help Center and File Exchange.

Asked: 24 Jan 2019
Commented: 28 Jan 2019
