For example, suppose an ANPR system is mounted on a toll road. It needs to be able to detect the license plate of each car passing by, OCR the characters on the plate, and then store this information in a database so the owner of the vehicle can be billed for the toll.
That speeding camera caught me with my foot on the pedal, quite literally, and it had the pictures to prove it too. There it was, clear as day! You could see the license plate number on my old Honda Civic (before it got burnt to a crisp in an electrical fire).
Or maybe you want to build a camera-based (radar-less) system that determines the speed of cars that drive by your house using a Raspberry Pi. If the car exceeds the speed limit, you can analyze the license plate, apply OCR to it, and log the license plate number to a database. Such a system could help reduce speeding violations and create better neighborhood safety.
The aspect ratio range (minAR to maxAR) corresponds to the typical rectangular dimensions of a license plate. Keep the following considerations in mind if you need to alter the aspect ratio parameters:
Lines 30 and 31 perform a blackhat morphological operation to reveal dark characters (letters, digits, and symbols) against light backgrounds (the license plate itself). As you can see, our kernel has a rectangular shape of 13 pixels wide x 5 pixels tall, which corresponds to the shape of a typical international license plate.
Back on Lines 35-38, we devised a method to highlight lighter regions in the image (keeping in mind our established generalization that license plates will have a light background and dark foreground).
This light image serves as our mask for a bitwise-AND between the thresholded result and the light regions of the image to reveal the license plate candidates (Line 68). We follow with a couple of dilations and an erosion to fill holes and clean up the image (Lines 69 and 70).
Before we begin looping over the license plate contour candidates, first we initialize variables that will soon hold our license plate contour (lpCnt) and license plate region of interest (roi) on Lines 87 and 88.
Starting on Line 91, our loop begins. This loop aims to isolate the contour that contains the license plate and extract the region of interest of the license plate itself. We proceed by determining the bounding box rectangle of the contour, c (Line 94).
If our clearBorder flag is set, we can clear any foreground pixels that are touching the border of our license plate ROI (Lines 110 and 111). This helps to eliminate noise that could impact our Tesseract OCR results.
However, in real-world implementations, you may not be able to guarantee clear images. Instead, your images may be grainy or low quality, or the driver of a given vehicle may have a special cover on their license plate to obfuscate the view of it, making ANPR even more challenging.
Our ANPR method relied on basic computer vision and image processing techniques to localize a license plate in an image, including morphological operations, image gradients, thresholding, bitwise operations, and contours.
Additionally, you may need to train your own custom license plate character OCR model. We were able to get away with Tesseract in this blog post, but a dedicated character segmentation and OCR model (like the ones I cover inside the PyImageSearch Gurus course) may be required to improve your accuracy.
Recognizing a car license plate is an important task for a camera surveillance-based security system. We can extract the license plate from an image using computer vision techniques and then use optical character recognition (OCR) to read the license number. Here I will guide you through the whole procedure of this task.

Requirements:
4. Apply Closing Morphological Transformation on the thresholded image. Closing is useful to fill small black regions between white regions in a thresholded image. It reveals the rectangular white box of license plates.
7. Now find the contours in the validated region and check the aspect ratio and area of the bounding rectangle of the largest contour in that region. After validating, you will have the contour of the license plate; extract it from the original image to get the image of the plate:
Steps 8 to 13 are performed by the segment_chars function that you can find below in the full source code. The driver code for the functions used in steps 6 to 13 is written in the method check_plate of class PlateFinder.
The extract_contours method returns all external contours from the preprocessed image. The find_possible_plates method first preprocesses the image with preprocess and extracts contours with extract_contours; it then checks the aspect ratio and area of each extracted contour, and cleans the image inside the contour using the check_plate and clean_plate methods. After cleaning the contour image with clean_plate, it finds all characters on the plate with the find_characters_on_plate method, which relies on the segment_chars function: characters are revealed by computing the convex hull of the contours of a thresholded value image and drawing the hulls over the characters.

Code: Make another class to initialize the neural network that predicts the characters on the extracted license plate.
License plate recognition involves capturing an image or video and processing that through a series of algorithms. In the end, the user receives a record of the license plate characters and numbers, often along with additional information about the vehicle.
It sounds simple, but it actually involves hundreds of thousands of lines of code, because there are so many complexities and variations. For a license plate recognition system to be successful, its algorithms must do the following:
We provide license plate recognition software for more than 90 countries, with friendly, accessible customer support for them all. Some of the most common uses for our ALPR include parking management, highway monitoring, and toll management, but we also have customers who use it to manage everything from drive-through traffic to car wash subscriptions.
Think of the problems that could occur with a system that failed just 10 percent of the time. The wrong data could cause you to ticket or fine the wrong person. Software that fails to capture a license plate might let a criminal get away. Misreads in your paid parking lot could cost you revenue. A curbside pickup customer in your monitored space might go unnoticed indefinitely.
Most ALPR data is captured by cameras mounted to buildings, traffic signal poles or vehicles. However, if you need to capture and process license plate data with your cell phone, OpenALPR has a solution for that. Their app allows users to receive license plate data from a live video on an Android device. It could be useful for security guards, parking lot attendants and other individuals who need real-time data from a handheld device.
Automatic license plate recognition (ALPR) on stationary to fast-moving vehicles is one of the common intelligent video analytics applications for smart cities. Some of the common use cases include parking assistance systems, automated toll booths, vehicle registration and identification for delivery and logistics at ports, and medical supply transporting warehouses. Being able to do this in real time is key to servicing these markets to their full potential. Traditional techniques rely on specialized cameras and processing hardware, which is both expensive to deploy and difficult to maintain.
The pipeline for ALPR involves detecting vehicles in the frame using an object detection deep learning model, localizing the license plate using a license plate detection model, and then finally recognizing the characters on the license plate. Optical character recognition (OCR) using deep neural networks is a popular technique to recognize characters in any language.
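The three-stage data flow can be sketched with stand-in stubs in place of the trained models. Every function below is a hypothetical placeholder; a real system would substitute actual detection and OCR models.

```python
# Stubs standing in for trained models, to show the pipeline's data flow.

def detect_vehicles(frame):
    # Stub: return bounding boxes (x, y, w, h) of vehicles in the frame.
    return [(100, 50, 200, 120)]

def detect_plate(vehicle_crop):
    # Stub: return the plate's bounding box within the vehicle crop.
    return (60, 80, 90, 25)

def recognize_characters(plate_crop):
    # Stub: return the decoded plate string.
    return "ABC1234"

def alpr_pipeline(frame):
    # Vehicle detection -> plate localization -> character recognition.
    results = []
    for (x, y, w, h) in detect_vehicles(frame):
        vehicle = frame[y:y + h, x:x + w]
        px, py, pw, ph = detect_plate(vehicle)
        plate = vehicle[py:py + ph, px:px + pw]
        results.append(recognize_characters(plate))
    return results
```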
All the pretrained models are free and readily available on NGC. TAO Toolkit provides two LPD models and two LPR models: one set trained on US license plates and another trained on license plates in China. For more information, see the LPD and LPR model cards.
In this section, we go into the details of the LPR model training. NVIDIA provides LPRNet models trained on US license plates and Chinese license plates. You can find the details of these models in the model card. You use LPRNet trained on US license plates as the starting point for fine-tuning in the following section.
For the license plate recognition task, you predict the characters of a license plate image in sequence. Just like other computer vision tasks, you first extract the image features. Take advantage of a widely used DNN architecture, such as ResNet 10/18, as the backbone of LPRNet. The original stride of the ResNet network is 32, but to make it more applicable to the small spatial size of the license plate image, tune the stride from 32 to 4. Then, feed the image feature into a classifier. Unlike the normal image classification task, in which the model gives a single class ID for one image, the LPRNet model produces a sequence of class IDs: the image feature is divided into slices along the horizontal dimension, and each slice is assigned a character ID in the prediction.
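The slice-wise prediction idea can be sketched with plain NumPy. All shapes and names below are illustrative (random weights, an assumed 36-character alphabet, and 24 horizontal slices), not LPRNet's real dimensions or parameters.

```python
import numpy as np

# Toy sketch of LPRNet-style sequence prediction: a stride-4 backbone
# turns a plate crop into a feature map whose width yields 24 slices;
# each slice is classified independently into one character ID.
rng = np.random.default_rng(0)
num_classes, feat_dim, num_slices = 36, 128, 24

features = rng.standard_normal((num_slices, feat_dim))   # one row per slice
classifier = rng.standard_normal((feat_dim, num_classes))
logits = features @ classifier                           # shape (24, 36)
char_ids = logits.argmax(axis=1)                         # one ID per slice
```

In the real model, a CTC-style decoder then collapses repeated and blank IDs into the final plate string.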