Error measurement between given perfect 2D shape and freeform shape drawn by user

What method should I use to calculate the error between a given perfect shape (e.g. a circle, triangle, or rectangle) and a freeform shape drawn by the user, which more or less closely matches the perfect shape?

The application context is a program that measures how precisely users hand-draw shapes displayed on a touch screen. The user tries to redraw the shape shown on the screen with a finger or a stylus pen, and because users are not perfect, the drawn shape does not completely overlap the given one. I would like to measure the difference, or error, between the perfect shape provided by the application and the imperfect shape drawn by the user.

Thanks for your help.


One common measure is the Hausdorff distance. However, I think in your case the Fréchet distance might be best, as it is specifically tailored to curves (rather than unordered point sets). Computing this distance has been the focus of several recent papers in computational geometry, one with the interesting title “Homotopic Fréchet distance between curves, or Walking your dog in the woods in polynomial time”! This is not as whimsical as it first appears: the Fréchet distance between two curves is the minimum length of a leash required to connect a dog and its owner as each walks along one of the curves from start to end, neither ever backtracking.
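Both distances are straightforward to approximate once each curve is sampled as a point sequence. The sketch below is illustrative, not part of the original answer: it assumes NumPy, takes each curve as an (n, 2) array of 2D points, and computes the symmetric discrete Hausdorff distance and the discrete Fréchet distance (the Eiter–Mannila dynamic program).

```python
import numpy as np

def pairwise_dist(p, q):
    # (n, m) matrix of Euclidean distances between the samples of two curves
    return np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)

def hausdorff(p, q):
    """Symmetric discrete Hausdorff distance between point sets p and q."""
    d = pairwise_dist(p, q)
    # Largest distance from any point on one curve to the nearest point
    # on the other curve, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def discrete_frechet(p, q):
    """Discrete Fréchet distance between polylines p and q (Eiter & Mannila)."""
    d = pairwise_dist(p, q)
    n, m = d.shape
    ca = np.empty((n, m))  # ca[i, j] = coupling distance for prefixes p[:i+1], q[:j+1]
    ca[0, 0] = d[0, 0]
    for i in range(1, n):
        ca[i, 0] = max(ca[i - 1, 0], d[i, 0])
    for j in range(1, m):
        ca[0, j] = max(ca[0, j - 1], d[0, j])
    for i in range(1, n):
        for j in range(1, m):
            # Either walker may advance, or both; keep the best (shortest leash).
            ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d[i, j])
    return ca[-1, -1]

# Illustrative data: a unit circle as the pattern, a noisy copy as the drawing.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
pattern = np.column_stack([np.cos(t), np.sin(t)])
drawn = pattern + rng.normal(scale=0.05, size=pattern.shape)

print("Hausdorff:", hausdorff(pattern, drawn))
print("Fréchet:  ", discrete_frechet(pattern, drawn))
```

Note that the Fréchet distance respects the ordering of points along the curves, which is why it tends to suit drawn strokes better than the Hausdorff distance.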

The method you should use depends strongly on your reasons for making measurements, and on what you plan to do with the measurements. Absent such details, mathematical arguments for one method or another seem bootless.

That said, here are three methods you might consider:

  • report the maximum error (e.g., the length of the longest vector from the pattern curve to the result curve, normal to the pattern or result)
  • report the sum of squared errors (e.g., at each pixel of the pattern, measure the length as in the previous item, and add up the squares of all the lengths)
  • report the absolute value of the area between the pattern and the result (e.g., on each scan line of pixels, count the pixels between the two curves, and total the counts)

If you divide by curve length to normalize the result, the third method is nearly equivalent to reporting average absolute error; but counting pixels on scan lines may be an easier calculation than finding error-vector lengths.
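As a rough sketch of the first two measures (plus the normalized variant of the third), assuming both curves are available as densely sampled NumPy point arrays and approximating the normal error vector by the nearest-point distance, one might write:

```python
import numpy as np

def nearest_point_errors(drawn, pattern):
    """Distance from each drawn sample to the closest pattern sample.

    A cheap stand-in for the normal (perpendicular) error vectors described
    above; assumes both curves are densely sampled (n, 2) arrays.
    """
    d = np.linalg.norm(drawn[:, None, :] - pattern[None, :, :], axis=2)
    return d.min(axis=1)

# Illustrative data: a unit circle as the pattern, a noisy copy as the drawing.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
pattern = np.column_stack([np.cos(t), np.sin(t)])
drawn = pattern + rng.normal(scale=0.05, size=pattern.shape)

errors = nearest_point_errors(drawn, pattern)
print("max error:           ", errors.max())              # first method
print("sum of squared error:", np.square(errors).sum())   # second method
print("mean absolute error: ", errors.mean())             # normalized third method
```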

Thanks for the answers.

So I finally opted to use the exclusive-OR area of the pattern shape and the drawn shape (i.e. the combined area of the two shapes minus their intersection) as the error measurement. It seems to work fine as long as the shapes overlap (if they do not, I define the error to be infinite).
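For what it's worth, a minimal sketch of this XOR-area measurement, assuming both shapes are closed polygons, is possible with the shapely library; the coordinates below are made up for illustration:

```python
from shapely.geometry import Polygon

# Hypothetical example: the pattern is a unit square, the drawing a slightly
# distorted copy of it.
pattern = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
drawn = Polygon([(0.05, 0.02), (1.03, 0.0), (0.98, 1.04), (-0.02, 0.97)])

if pattern.intersects(drawn):
    # The symmetric difference is exactly the union minus the intersection,
    # i.e. the exclusive-OR region; its area is the error.
    error = pattern.symmetric_difference(drawn).area
else:
    error = float("inf")  # shapes do not overlap, as defined above
print(error)
```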