Letter reconstruction and filling for OCR

7 views (last 30 days)
Wajahat on 16 Jul 2015
Commented: Image Analyst on 22 Jul 2015
Hi
I am trying to detect text on tires, which is black engraved text on a black background. I am using several pre-processing stages, including edge detection and erosion/dilation. However, the characters appear with discontinuous, broken edges.
I am looking for a way to fill the gaps between the strokes of the letters and reconstruct the edges. One sample image is attached.
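Roughly, the pipeline looks like this (a simplified sketch only; the file name, thresholds, and structuring-element size below are placeholders, not my exact settings):

% Simplified sketch of the pre-processing chain: grayscale -> edges -> morphology.
I = imread('tire_crop.png');            % hypothetical cropped tire image
if size(I, 3) == 3
    I = rgb2gray(I);                    % work on a grayscale image
end
bw = edge(I, 'canny');                  % edge detection
se = strel('disk', 1);                  % small structuring element
bw = imdilate(bw, se);                  % thicken the edge fragments
bw = imerode(bw, se);                   % thin them back
imshow(bw)                              % the letters still show broken strokes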
Looking forward to your feedback.
Best Regards
Wajahat

Answers (2)

Ghada Saleh on 20 Jul 2015
Hi Wajahat,
I understand you want to fill the gaps in the text. One possible way to accomplish this is to dilate the image with a structuring element created by strel. You can find an example in http://www.mathworks.com/help/images/ref/imdilate.html#examples. In your case, dilating the text in the image should fill the gaps between the points and the white edges. You can try different structuring elements with strel and choose the one that best fits your case.
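For example, a minimal sketch along these lines (the file name, the threshold, and the structuring-element shape and size are just placeholders to experiment with):

% Sketch of the dilation idea, assuming white letters on a dark background.
bw = imread('broken_letters.png') > 128;   % hypothetical grayscale input, thresholded

se        = strel('disk', 2);       % try different shapes and radii
bwDilated = imdilate(bw, se);       % grows strokes, bridging small gaps
bwClosed  = imclose(bw, se);        % dilation followed by erosion keeps the
                                    % stroke width closer to the original
figure
subplot(1,3,1), imshow(bw),        title('Input')
subplot(1,3,2), imshow(bwDilated), title('imdilate')
subplot(1,3,3), imshow(bwClosed),  title('imclose')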
I hope this helps,
Ghada
  2 comments
Wajahat on 22 Jul 2015
Hi Ghada
Thanks a lot for the response. I have already tried dilation, and it does not fill the gaps properly. The stroke width gets dilated in both directions, and nearby characters start merging into each other.
Best Regards
Wajahat
Image Analyst on 22 Jul 2015
Why am I not surprised?



Image Analyst on 21 Jul 2015
I could be wrong, but I don't think you need to do that. There is an easier approach if you just think outside the box. You don't want to turn what you have into perfect letters. No, what you need to do is create a new alphabet and recognize what you have, that is, decide which letter in your new alphabet best matches your unknown/test/mystery letter. So you don't need a perfect binary mask of the letter D, for example. If you can assume that your lighting is the same for all images (illuminated at a glancing angle from the lower right), then you can define a D as that gray-scale pattern. It doesn't matter what the pattern looks like; it only matters that you have defined it as a D. So whenever that same pattern of bright, dark, and gray pixels shows up, the program will say it's a D. You make up a library of all letters and numbers with those actual patterns and associate each pattern with the letter that produces it. For example, you cut out the bounding box of that shadow-cast D and call it "D.png". Do the same for all the other letters and numbers. OK, now you have your library.
Next, compute Hu's moments for each letter image in your library. Then isolate a blob that represents a letter by any reasonable technique and compute its Hu's moments. Finally, compare that letter's Hu's moments to the Hu's moments of each letter in your library and see which one is the closest match to your unknown letter.
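Here is a rough sketch of that matching step (the matchLetter and huMoments helpers below are illustrative only, assuming the library crops such as 'D.png' are grayscale images):

function letter = matchLetter(unknownImg, libraryFiles)
% Match an unknown letter crop against a library of template images by
% comparing Hu's seven invariant moments (nearest neighbour in log space).
% libraryFiles is a cell array such as {'D.png', 'E.png', ...} (hypothetical).
target = huMoments(double(unknownImg));
best   = Inf;
letter = '';
for k = 1:numel(libraryFiles)
    tmpl = huMoments(double(imread(libraryFiles{k})));
    % Compare on a log scale because the seven moments span many magnitudes.
    d = norm(sign(target).*log10(abs(target)+eps) - ...
             sign(tmpl)  .*log10(abs(tmpl)  +eps));
    if d < best
        best = d;
        [~, letter] = fileparts(libraryFiles{k});   % e.g. 'D'
    end
end
end

function phi = huMoments(I)
% Hu's seven moment invariants of a grayscale (or binary) image I.
[rows, cols] = ndgrid(1:size(I,1), 1:size(I,2));
m00 = sum(I(:));
xc  = sum(cols(:).*I(:))/m00;                       % centroid
yc  = sum(rows(:).*I(:))/m00;
mu  = @(p,q) sum(((cols(:)-xc).^p).*((rows(:)-yc).^q).*I(:));   % central moments
eta = @(p,q) mu(p,q) / m00^(1+(p+q)/2);                         % normalized
n20 = eta(2,0); n02 = eta(0,2); n11 = eta(1,1);
n30 = eta(3,0); n03 = eta(0,3); n21 = eta(2,1); n12 = eta(1,2);
phi    = zeros(7,1);
phi(1) = n20 + n02;
phi(2) = (n20-n02)^2 + 4*n11^2;
phi(3) = (n30-3*n12)^2 + (3*n21-n03)^2;
phi(4) = (n30+n12)^2 + (n21+n03)^2;
phi(5) = (n30-3*n12)*(n30+n12)*((n30+n12)^2 - 3*(n21+n03)^2) + ...
         (3*n21-n03)*(n21+n03)*(3*(n30+n12)^2 - (n21+n03)^2);
phi(6) = (n20-n02)*((n30+n12)^2 - (n21+n03)^2) + 4*n11*(n30+n12)*(n21+n03);
phi(7) = (3*n21-n03)*(n30+n12)*((n30+n12)^2 - 3*(n21+n03)^2) - ...
         (n30-3*n12)*(n21+n03)*(3*(n30+n12)^2 - (n21+n03)^2);
end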
See https://www.youtube.com/watch?v=Nc06tlZAv_Q for a nice example of how you can use Hu's moments to recognize patterns in an image, regardless of scaling, rotation, and location.
Please give it a try - it should not be too difficult.
  2 comments
Wajahat on 22 Jul 2015
Edited: Wajahat on 22 Jul 2015
Hi
First of all, thanks a lot for your response.
I also agree with you here, because there does not seem to be any quick way to use OCR directly. OCR will only work once the characters are complete in form. I just wanted to know if there is a way of reconstructing the characters, and it appears that the answer is no.
Still, I would like to know if anyone has dealt with a similar situation of reading embossed or engraved characters on a background of the same color.
If so, how did they solve the problem?
I will definitely apply the approach you described and will get back to you with the results.
Best Regards
Wajahat
Image Analyst on 22 Jul 2015
For what it's worth, an alternative method, though probably more complicated than my suggestion, is to use your Canny edges with Hausdorff distances. See the jet-finding example on this page: http://cgm.cs.mcgill.ca/~godfried/teaching/cg-projects/98/normand/main.html
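Roughly, something like this sketch (the file names are placeholders, both crops are assumed to be grayscale images of a single letter, and a plain symmetric Hausdorff distance is used purely for illustration):

% Sketch: symmetric Hausdorff distance between two Canny edge maps.
edgesA = edge(imread('unknown_letter.png'), 'canny');
edgesB = edge(imread('D_template.png'),     'canny');

[ya, xa] = find(edgesA);  A = [xa ya];   % edge points as (x, y) coordinates
[yb, xb] = find(edgesB);  B = [xb yb];

% Directed distance h(P,Q): for each point of P, the distance to the nearest
% point of Q; then take the worst case over P.
h = @(P, Q) max(arrayfun(@(k) min(hypot(Q(:,1)-P(k,1), Q(:,2)-P(k,2))), ...
                         1:size(P,1)));
hausdorffDist = max(h(A, B), h(B, A));   % smaller means more similar shapes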
