Character recognition: OCR on license plates

Today we will take a look at some simple OCR applied to license plates. I know this sounds very exciting (and it is), especially because of what you can learn if you’re a novice (like me) in this field. Let’s take a look at what I did.

First of all, at this link you can find the images I worked with. You also need to have PyTesseract installed (together with Tesseract OCR v4).

cars
Cars that kindly lent their license plates

Next, a word of advice before continuing: this is neither state-of-the-art programming nor the perfect approach. It was fun and very challenging for me to do, and there are better ways to do it. But if you are a noob like me, in my opinion this can be a good start.

To begin, for the sake of simplicity (and because of my laziness ;), you need to create four folders in the same directory as the Python file: plates, processed, resized, borders. These will contain the images for each step.

After this, to generate the first round of images, use the LPEX script to extract license plates from images.

Download the script, open a terminal/console and type:

python Extraction.py -i car.png

This will extract the license plate from the image and save it to a temp folder, which you should already have on your desktop (you can change the path directly in Extraction.py).

After that, put all the images you want to work with in the plates folder.

originalPlates
The plates LPEX extracted from the images above

Now, the code is split into four main functions.

The first, adaptiveThreshold(), takes all the plates and, for each of them, applies two main transformations (besides grayscale conversion and thresholding):
ADAPTIVE_THRESH_MEAN_C and ADAPTIVE_THRESH_GAUSSIAN_C, both via the cv2.adaptiveThreshold method.

With ADAPTIVE_THRESH_MEAN_C, the threshold value is the mean of the neighbourhood area.

With ADAPTIVE_THRESH_GAUSSIAN_C, the threshold value is the weighted sum of the neighbourhood values, where the weights are a Gaussian window.

We are using adaptiveThreshold mainly because of the different lighting conditions: you can see for yourself that each image has been taken under different light.

With a fixed threshold operation, the following result would not be possible unless you changed the threshold value for each image.

adaptiveThreshold
adaptiveThreshold() function result
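
To make this concrete, here is a minimal sketch of how the two transformations could be applied; the file names and the block size / C constant (11 and 2) are my assumptions, not necessarily the values the script uses.

import cv2

# Load one plate and convert it to grayscale first
img = cv2.imread("plates/plate1.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Threshold = mean of the neighbourhood area, minus the constant C
mean = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY, 11, 2)

# Threshold = Gaussian-weighted sum of the neighbourhood, minus C
gauss = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                              cv2.THRESH_BINARY, 11, 2)

cv2.imwrite("processed/plate1_mean.png", mean)
cv2.imwrite("processed/plate1_gauss.png", gauss)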

After thresholding the images, I wanted to resize them to a fixed size, and the resize() function comes in handy for this. Its main features are that it resizes the image while keeping its aspect ratio, without losing too much quality.
I used cv2.INTER_CUBIC interpolation because most of the images had to be enlarged.
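
As a sketch, an aspect-ratio-preserving resize could look like this; the 300-pixel target width is an assumption of mine, not the script’s actual value.

import cv2

def resize(img, target_width=300):
    # Scale the height by the same factor as the width
    # so the aspect ratio is preserved
    ratio = target_width / img.shape[1]
    dim = (target_width, int(img.shape[0] * ratio))
    # INTER_CUBIC keeps good quality when enlarging
    return cv2.resize(img, dim, interpolation=cv2.INTER_CUBIC)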

Also, the addBorder() function is used just after the resize, and adds a 10-pixel border to each image.
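
A minimal version of such a function, assuming a white border colour:

import cv2

def addBorder(img, size=10):
    # Add a constant 10-pixel border on every side
    # (white is my assumption for the colour)
    return cv2.copyMakeBorder(img, size, size, size, size,
                              cv2.BORDER_CONSTANT, value=[255, 255, 255])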

resizedPlates
The plates after resizing

And now the interesting part: the cleanOCR() function.

This is split into two main parts:

  • the cleaning part: for each image I calculate the edges (with the Canny function) and then detect the lines using HoughLinesP. If a line falls within a certain range, I “delete” it
  • the OCR part, which uses the PyTesseract wrapper to detect the characters in the image

For a cleaner output I also created a list of valid chars: each character the OCR returns is compared against this list, and only the matching characters are kept.
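
Here is a rough sketch of both parts; the Canny/Hough parameters and the valid character set are my assumptions, not the exact values the script uses.

import cv2
import numpy as np
import pytesseract

VALID_CHARS = "ABCDEFGHJKLMNPRSTVWXYZ0123456789"  # assumed set

def cleanOCR(img):
    # Cleaning part: detect long straight lines (plate edges, shadows)
    # and paint over them in white
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=img.shape[1] // 2, maxLineGap=5)
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(img, (x1, y1), (x2, y2), (255, 255, 255), 3)

    # OCR part: read the characters, then keep only the valid ones
    text = pytesseract.image_to_string(img, config="--psm 7")
    return [c for c in text if c in VALID_CHARS]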

cleanPlates
Final output images with OCR results below

This is what the script gave me as output on these processed images:

FN240NG ADL908ZWT DZST2FE ESO97HX
FC547LY EC012MF FF473BV FM916DP

So, the 2nd, 3rd and 4th plates are detected incorrectly, for two reasons: debris in the image (2) and character misreading (3-4).

For the first problem there are many solutions. One of them, which will surely help in cleaning the image, is to compute a histogram for each ROI in the image and then delete every region that does not satisfy a certain range.
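
One possible reading of that idea, sketched below: take a two-bin histogram of each contour’s bounding box and blank out regions whose dark-pixel ratio falls outside an assumed range. The function name and the range are mine, purely for illustration.

import cv2

def filterROIs(thresh):
    # findContours expects white shapes on black, so invert the binary image;
    # the [-2] index keeps this working on both OpenCV 3 and 4
    inv = cv2.bitwise_not(thresh)
    contours = cv2.findContours(inv, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        roi = thresh[y:y + h, x:x + w]
        # Two-bin histogram: bin 0 counts dark pixels, bin 1 bright ones
        hist = cv2.calcHist([roi], [0], None, [2], [0, 256])
        dark_ratio = hist[0, 0] / (w * h)
        if not 0.2 < dark_ratio < 0.8:  # assumed acceptance range
            thresh[y:y + h, x:x + w] = 255
    return thresh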

Character misreading is a little harder to fix with simple OCR. Even these results are, at least for me, amazing (thanks to v4 of Tesseract OCR, which now uses machine learning algorithms to recognize characters), because with previous versions of Tesseract these images were really difficult to decipher. A solution could be to train a neural network on these special fonts (European plates are used here) and see the result.

A third problem is that these results are not validated against a format. License plates follow a standard format: in Italy it is LL NNN LL (L = letter, N = number). By tuning the code a bit you can achieve better results (for example, in the third plate the letter “S” can never exist in that position, because there must be a number there, and so on).
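
As a sketch of this idea (the helper name and the look-alike mapping are mine, not part of the script):

import re

# Italian format: two letters, three digits, two letters
PLATE_RE = re.compile(r"^[A-Z]{2}[0-9]{3}[A-Z]{2}$")

# Letters that OCR commonly confuses with digits (assumed mapping)
LETTER_TO_DIGIT = {"S": "5", "O": "0", "I": "1", "Z": "2", "B": "8"}

def fixPlate(text):
    # Force the digit positions (indexes 2-4) back to digits
    # wherever a look-alike substitution exists
    if len(text) != 7:
        return text
    chars = list(text)
    for i in (2, 3, 4):
        if chars[i] in LETTER_TO_DIGIT:
            chars[i] = LETTER_TO_DIGIT[chars[i]]
    fixed = "".join(chars)
    return fixed if PLATE_RE.match(fixed) else text

print(fixPlate("ESO97HX"))  # -> ES097HX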

That’s all. Any constructive comment aimed at improving the content of this site is very welcome. We are here to learn.

See you at the next article.
Thanks for reading.

P.S.: I highly recommend using the tessdata files available in the project’s GitHub repository (which were downloaded from the official site). I achieved the best results with them.


Source code is available on GitHub.
