Sunday, August 12, 2012

Reading Gauges - Detecting Lines and Circles

I received a question from a reader on how I would approach reading a simple gauge with one needle, given a good frontal image of a circular gauge meter. This makes a good example for introducing Hough transforms. Detecting circles or lines using OpenCV and Python is conceptually simple, although each particular use case requires some parameter tuning. Below is a simple example using the OpenCV Python interface for detecting lines, line segments and circles. The documentation for the three relevant functions is here. You can also find more on using the Python interface and the plotting commands in Chapter 10 of my book.
import numpy as np
import cv2

"""
Script using OpenCV's Hough transforms for reading images of 
simple dials.
"""

# load image and convert to grayscale
im = cv2.imread("gauge1.jpg")
gray_im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

# create version to draw on and blurred version
draw_im = cv2.cvtColor(gray_im, cv2.COLOR_GRAY2BGR)
blur = cv2.GaussianBlur(gray_im, (0,0), 5)

m,n = gray_im.shape

# Hough transform for circles
circles = cv2.HoughCircles(gray_im, cv2.cv.CV_HOUGH_GRADIENT, 2, 10, np.array([]), 20, 60, m/10)[0]

# Hough transform for lines (regular and probabilistic)
edges = cv2.Canny(blur, 20, 60)
lines = cv2.HoughLines(edges, 2, np.pi/90, 40)[0]
plines = cv2.HoughLinesP(edges, 1, np.pi/180, 20, np.array([]), 10)[0]

# draw 
for c in circles[:3]:
    # green for circles (only draw the 3 strongest)
    cv2.circle(draw_im, (c[0],c[1]), c[2], (0,255,0), 2)

for (rho, theta) in lines[:5]:
    # blue for infinite lines (only draw the 5 strongest)
    x0 = np.cos(theta)*rho
    y0 = np.sin(theta)*rho
    pt1 = ( int(x0 + (m+n)*(-np.sin(theta))), int(y0 + (m+n)*np.cos(theta)) )
    pt2 = ( int(x0 - (m+n)*(-np.sin(theta))), int(y0 - (m+n)*np.cos(theta)) )
    cv2.line(draw_im, pt1, pt2, (255,0,0), 2)

for l in plines:
    # red for line segments
    cv2.line(draw_im, (l[0],l[1]), (l[2],l[3]), (0,0,255), 2)
  
cv2.imshow("circles",draw_im)
cv2.waitKey()

# save the resulting image
cv2.imwrite("res.jpg",draw_im)
This will, in turn: read an image, create a graylevel version for the detectors, detect circles using HoughCircles(), run edge detection using Canny(), detect lines with HoughLines(), detect line segments with HoughLinesP(), draw the result (green circles, blue lines, red line segments), show the result, and save an image. The result can look like this:

From these features you should be able to get an estimate of the gauge reading. If you have large images, you should probably scale them down first. If the images are noisy, adjust the blurring used before the edge detection. There are also threshold parameters to play with; check the documentation for what they mean. Good luck.
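As a rough illustration of that last step, turning the detected circle and line segments into a reading, here is one possible sketch that continues from the script above. It is not part of the original example: the angle convention, the choice of the longest segment as the needle, and the scale limits plugged in at the end (an imaginary 0-100 dial spanning 45 to 315 degrees) are made-up assumptions, so adapt or flip the mapping for a real gauge.

def estimate_reading(circle, segment, min_angle, max_angle, min_value, max_value):
    # circle is (x, y, radius), segment is (x1, y1, x2, y2), angles in degrees
    cx, cy, r = circle
    x1, y1, x2, y2 = segment
    # take the segment endpoint farthest from the circle center as the needle tip
    if np.hypot(x1 - cx, y1 - cy) > np.hypot(x2 - cx, y2 - cy):
        tx, ty = x1, y1
    else:
        tx, ty = x2, y2
    # needle angle measured counterclockwise from the positive x-axis
    # (image y grows downwards, hence cy - ty)
    angle = np.degrees(np.arctan2(cy - ty, tx - cx)) % 360
    # linear interpolation between the two known ends of the scale
    frac = (angle - min_angle) / float(max_angle - min_angle)
    return min_value + frac * (max_value - min_value)

# pick the longest detected segment as the needle and read an imaginary
# 0-100 dial whose scale runs from 45 to 315 degrees (placeholder values)
needle = max(plines, key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
print(estimate_reading(circles[0], needle, 45, 315, 0, 100))

In practice the angle convention and the direction of the scale are the parts you will most likely have to adjust for a particular dial.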

7 comments:

  1. A few years ago I tried the same thing as an exercise. I started with basically
    the same method you suggested, because the Matlab documentation guided me that
    way.
    It is good to mention that for any practical use there are two critical points for
    the performance. The first and most critical is finding the middle of the dial. This
    can be solved by solid camera calibration, but if the gauge moves physically in
    the picture, most papers I found let the user point out the middle of the dial.
    The second one is the look of the dial itself. If you face a precision measurement
    instrument, chances are it has a very clean dial and a needle pointer.
    In that case you can easily transform the picture to binary and search for
    activated pixels in a small range.

    I ended up selecting the middle of the dial myself once. Then I just
    grayscaled the images (extracted from a video stream) and applied a
    polar transform. To find the pointer I used the longest line the Hough transform
    found in a preselected width spanning over the markings and some of the pointer.
    You can get the reading very easily that way, because you know the range
    (min and full scale deflection) of the dial and you can scale the image when
    transforming to polar coordinates. Hence, if the resulting image is 720 px high
    and the longest line starts in row 360, the pointer deflection is 180 degrees.
    (A rough sketch of this polar approach appears after the comments below.)

  2. Hi, I tried to run this code as is, but I get a TypeError: 'NoneType' object is not subscriptable.
    The error is raised on the Hough transform for circles line, which uses cv2.HoughCircles().
    Can you please help? Thanks!

  3. @Anonymous:
    This is because cv2.HoughCircles() returns None.
    "None is not subscriptable" means None[0] is not possible.

    I get a feeling that there is a problem in the current cv2 implementation. (I'm using 2.4.3 and get no detections either.)

  4. Actually, I just managed to get some detects, so it's working after all…

  5. "Actually, I just managed to get some detects, "

    So what did you do differently now?

  6. Totally working.
    Many Thanks!

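The polar-transform idea from the first comment could be sketched roughly as below. Everything here is an assumption for illustration: the file name, the hand-picked center, the radius band and the angular resolution are placeholders, and the Hough step on the polar image is replaced by a simpler stand-in that just looks for the darkest ray from the center.

import numpy as np
import cv2

gray = cv2.imread("gauge1.jpg", 0)          # load directly as grayscale
cx, cy = 240, 240                           # dial center, selected by hand (placeholder)
r_min, r_max = 20, 150                      # radius band covering the needle

n_rows = 720                                # one polar row per half degree
angles = np.linspace(0, 2*np.pi, n_rows, endpoint=False)
radii = np.arange(r_min, r_max)

# build the polar image: each row holds the gray values sampled along one ray
polar = np.zeros((n_rows, len(radii)))
for i, a in enumerate(angles):
    xs = np.clip((cx + radii*np.cos(a)).astype(int), 0, gray.shape[1]-1)
    ys = np.clip((cy - radii*np.sin(a)).astype(int), 0, gray.shape[0]-1)
    polar[i] = gray[ys, xs]

# the needle shows up as the darkest row; map the row back to a deflection angle
needle_row = np.argmin(polar.sum(axis=1))
deflection = needle_row * 360.0 / n_rows    # e.g. row 360 of 720 -> 180 degrees
print("pointer deflection:", deflection, "degrees")

With the deflection in hand, the reading follows from the known minimum and full-scale-deflection angles of the dial, just as described in the comment.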