Namburi Srinath
5 min read · Dec 13, 2018


Hey hi,

I followed the same steps to train a number-plate detector on car images, but it can't detect any number plate in a car image. Here are my details.

I just want to detect the number plate region (not read the plate itself, that's the next task!).

My dataset is the BRNO University number plate dataset:

https://medusa.fit.vutbr.cz/traffic/research-topics/general-traffic-analysis/holistic-recognition-of-low-quality-license-plates-by-cnn-using-track-annotated-data-iwt4s-avss-2017/

I am using the ReId subset, which is subdivided into folders.

I am also using AlexeyAB's Darknet fork.

Q — The dataset has repetitive images (e.g. around 20 images of plate 7C2 4698, and similarly for all the others). Will that matter?

Images in the dataset repeat; sample ones shown here.

Q — By the way, my images are around 100×32 (different images with different sizes, obviously), containing only the number plate, and I didn't do any resizing. I think YOLO will resize them itself. Is that true?

Q — My YOLO annotation label is [0 0.5 0.5 1 1] for all the images, since the network needs to look at the entire image. Is this correct? Example image below.

An example of a YOLO annotation label. As I want the network to look at the complete image, all my labels have the same values. Is this correct?
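As a sanity check on that label: a YOLO line is `<class> <x_center> <y_center> <width> <height>`, all normalized to the image size, so a plate filling the whole crop is indeed `0 0.5 0.5 1 1` regardless of the crop's pixel dimensions. This is also why Darknet's internal resize to the cfg's 416×416 doesn't invalidate the labels. A small sketch (the pixel boxes are made-up examples):

```python
def to_yolo(box, img_w, img_h):
    """Convert a pixel box (x_min, y_min, x_max, y_max) into a
    normalized YOLO label (x_center, y_center, width, height)."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2 / img_w,
            (y_min + y_max) / 2 / img_h,
            (x_max - x_min) / img_w,
            (y_max - y_min) / img_h)

# A plate filling an entire 100x32 crop:
label = to_yolo((0, 0, 100, 32), 100, 32)
print(label)  # (0.5, 0.5, 1.0, 1.0)

# The normalized label is unchanged when the image is stretched to
# 416x416, because the box scales together with the image:
assert to_yolo((0, 0, 416, 416), 416, 416) == label
```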
  1. I am using YOLOv2. I copied the yolov2 cfg file and changed classes to 1 (as I have only one class, NUMBERPLATE) and filters to 30 in the final convolutional layer (marked with comments below).

[net]
# Testing
# batch=1
# subdivisions=1
# Training
batch=64
subdivisions=64
width=416
height=416
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches = 500200
policy=steps
steps=400000,450000
scales=.1,.1

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=64
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

#######

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[route]
layers=-9

[convolutional]
batch_normalize=1
size=1
stride=1
pad=1
filters=64
activation=leaky

[reorg]
stride=2

[route]
layers=-1,-4

[convolutional]
batch_normalize=1
size=3
stride=1
pad=1
filters=1024
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=30   # changed from 425 (the other edit)
activation=linear

[region]
anchors = 0.57273, 0.677385, 1.87446, 2.06253, 3.33843, 5.47434, 7.88282, 3.52778, 9.77052, 9.16828
bias_match=1
classes=1    # changed from 80 (one edit)
coords=4
num=5
softmax=1
jitter=.3
rescore=1

object_scale=5
noobject_scale=1
class_scale=1
coord_scale=1

absolute=1
thresh = .6
random=1
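
For what it's worth, the filters=30 value follows from the YOLOv2 region-layer formula, filters = num × (classes + coords + 1): each of the num=5 anchors predicts 4 box coordinates, 1 objectness score, and one score per class. A quick check of the arithmetic:

```python
def region_filters(num_anchors, classes, coords=4):
    """Filters required in the conv layer feeding a YOLOv2 [region]
    layer: per anchor, `coords` box values + 1 objectness + classes."""
    return num_anchors * (classes + coords + 1)

print(region_filters(num_anchors=5, classes=1))   # 30  (this cfg)
print(region_filters(num_anchors=5, classes=80))  # 425 (stock yolov2.cfg)
```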

2. My obj.data looks like this
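
(The screenshot didn't survive; for reference, a typical obj.data for a one-class setup looks like the sketch below. The paths are placeholders, not my actual ones.)

```
classes = 1
train   = data/train.txt
valid   = data/test.txt
names   = data/obj.names
backup  = backup/
```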

3. My obj.names has only the single line Numberplate in it.

I have completed 1700 iterations so far and the average loss is very low (< 0.06, the value mentioned in the tutorial). I attach some screenshots here.

Image taken at iteration 1127; nearly 72,000 images seen by then.

When training gets killed, I take the most recent weights and resume training from there.

Also, can you help me interpret the output in the terminal? The other blogs only explain average loss and IOU. What do the other values signify?

Final chart

But I was not able to get any good results; it is not even detecting number plates.

Sample image with wrong detections.

Q — I have placed my images in many folders (s01_l01, s01_l02, s02_l01, s02_l02, s03_l01, s03_l02, s04_l01, s04_l02), but the paths are mentioned correctly in train.txt.

Is that valid, or do I need to place all the images and their corresponding .txt files in a single folder?

I ask because I read something like that in another post:

Image from that link describing this issue
Screenshots of the different folders, showing what I did
My train.txt. Note the changes in the folder paths (from s01_l01 to s02_l02)
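
(Multiple folders should be fine as long as train.txt lists a valid path for each image and the matching .txt label sits next to it. One way to generate train.txt over all the s*_l* subfolders is sketched below; `dataset_root` is a placeholder for the parent directory.)

```python
import os

def build_train_list(dataset_root, out_path="train.txt"):
    """Write one image path per line, but only for images that have
    a matching YOLO .txt label file next to them."""
    with open(out_path, "w") as out:
        for folder, _, files in os.walk(dataset_root):
            for name in sorted(files):
                if name.lower().endswith((".jpg", ".jpeg", ".png")):
                    img = os.path.join(folder, name)
                    lbl = os.path.splitext(img)[0] + ".txt"
                    if os.path.exists(lbl):  # skip unlabeled images
                        out.write(img + "\n")

# e.g. build_train_list("data/brno")  # parent of s01_l01, s01_l02, ...
```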

Q3 — My training dataset has the same images repeated (please see the folder screenshots above, the BRNO number plate dataset, or the sample images I provided). Will that matter for detection?

Q4 — Do we need to give some negative samples while training (i.e. "hey, this is not a number plate!")? Your tutorial didn't mention that.
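
(In AlexeyAB's Darknet, the usual way to add negative/background images is to list them in train.txt with an empty .txt label file next to each one. A hypothetical helper to create those empty labels:)

```python
import os

def add_empty_labels(folder):
    """Create an empty YOLO label (.txt) for every image in `folder`
    that lacks one, marking those images as negative samples."""
    created = []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith((".jpg", ".jpeg", ".png")):
            lbl = os.path.join(folder, os.path.splitext(name)[0] + ".txt")
            if not os.path.exists(lbl):
                open(lbl, "w").close()  # empty file = no objects
                created.append(lbl)
    return created
```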

If any other details are required, please ask me.

Thanks in advance

Srinath
