Re: Zero mean normalized cross correlation

einsteinhelpme@xxxxxxxxx wrote:

I'm doing some template matching. Here are the formulae for the
different comparison methods I use (I denotes the image, T the
template, R the result; the summation is done over the template and/or
the image patch: x'=0..w-1, y'=0..h-1):

where T'(x',y') = T(x',y') - (1/(w·h))·Σ_{x'',y''} T(x'',y'')
and I'(x+x',y+y') = I(x+x',y+y') - (1/(w·h))·Σ_{x'',y''} I(x+x'',y+y'')

The latter might be called zero mean (something): T' and I' are zero
mean, i.e. they have had their means subtracted.

R(x,y) = Σ_{x',y'} [T'(x',y')·I'(x+x',y+y')] / sqrt[ Σ_{x',y'} T'(x',y')² · Σ_{x',y'} I'(x+x',y+y')² ]   (1)

IMHO, (1) is both normalised and zero mean, assuming T' and I' are as
in the definitions above.

In ZMNCC, I think we have R(x,y) = 1 if the template and the image
patch are a perfect match at displacement (x,y).

If there is an assumption that T and I are 'the same', I'm pretty sure
that ZMNCC is invariant under the affine transformation

v_I = a·v_T + b   (2),

where v_I is any pixel value in I and v_T the corresponding pixel
value in T. Or is it the other way round? (For a ≠ 0 the inverse of
(2), v_T = (v_I - b)/a, is itself affine, so it works both ways.)
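The invariance is easy to check numerically. Here is a minimal
pure-Python sketch (the helper name `zmncc_at` and the toy numbers are
mine, not from any library): it evaluates formula (1) at a single
displacement and shows that the score is unchanged when the patch is
a·T + b with a > 0.

```python
import math

def zmncc_at(patch, T):
    """ZMNCC score of formula (1) for one displacement: correlate template
    T against a same-sized image patch (both 2-D lists of floats)."""
    h, w = len(T), len(T[0])
    tm = sum(map(sum, T)) / (w * h)
    pm = sum(map(sum, patch)) / (w * h)
    num = den_t = den_p = 0.0
    for y in range(h):
        for x in range(w):
            tv, pv = T[y][x] - tm, patch[y][x] - pm
            num += tv * pv
            den_t += tv * tv
            den_p += pv * pv
    return num / math.sqrt(den_t * den_p)

T = [[1.0, 5.0], [2.0, 8.0]]
a, b = 3.0, 7.0                          # arbitrary gain and offset
patch = [[a * v + b for v in row] for row in T]
print(zmncc_at(patch, T))                # → 1.0
```

Note that with a < 0 the score flips sign to -1, so the invariance as
stated holds for positive gain.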

My question is:

1- What's the difference between these and zero mean normalized cross
correlation? Can you give me a formula for ZMNCC?

See above.

2- Is there some code to do ZMNCC?

(1) gives a pretty easy formula, but you have to cater for falling off
the edges of each image. If I and/or T are large, an FFT-based
correlation will help: roughly O(n^4) -> O(n^2 log n), where n ~ w, h.
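For instance, here is a direct translation of (1) into pure Python, as
a naive illustrative sketch (the function name `zmncc` and the toy
arrays are mine). It evaluates R at every displacement where T fits
entirely inside I:

```python
import math

def zmncc(I, T):
    """Naive zero-mean normalized cross correlation, formula (1)."""
    H, W = len(I), len(I[0])
    h, w = len(T), len(T[0])
    t_mean = sum(map(sum, T)) / (w * h)
    Tp = [[T[y][x] - t_mean for x in range(w)] for y in range(h)]
    t_norm = math.sqrt(sum(v * v for row in Tp for v in row))
    R = [[0.0] * (W - w + 1) for _ in range(H - h + 1)]
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            # extract the patch under T and subtract its mean
            patch = [[I[y + yp][x + xp] for xp in range(w)] for yp in range(h)]
            p_mean = sum(map(sum, patch)) / (w * h)
            Ip = [[v - p_mean for v in row] for row in patch]
            num = sum(Tp[yp][xp] * Ip[yp][xp]
                      for yp in range(h) for xp in range(w))
            den = t_norm * math.sqrt(sum(v * v for row in Ip for v in row))
            R[y][x] = num / den if den > 0 else 0.0   # flat patch -> 0
    return R

# toy example: T embedded in I at displacement (1, 1)
I = [[0, 0, 0, 0],
     [0, 1, 2, 0],
     [0, 3, 4, 0],
     [0, 0, 0, 0]]
T = [[1, 2],
     [3, 4]]
R = zmncc(I, T)
```

For real images you would vectorize this or take the FFT route; the
double loop over displacements is exactly the O(n^4) cost.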

If you want to play around with the mathematics, for example to verify
the invariance mentioned above, it will be a lot easier to start in
1-D, so that I, T are vectors and the sum of products is a scalar
product. In that case, normalisation is equivalent to vector
normalisation, i.e. reducing to unit length. I'm not sure there is any
vector equivalent of shifting to zero mean.
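The 1-D equivalence is easy to demonstrate; in this sketch (helper
names are mine) formula (1) reduces to the scalar product of the two
mean-subtracted vectors after each is scaled to unit length:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def zero_mean(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def unit(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

t = [1.0, 4.0, 2.0, 7.0]
i = [3.0, 1.0, 5.0, 2.0]

# formula (1) in 1-D ...
tp, ip = zero_mean(t), zero_mean(i)
r1 = dot(tp, ip) / math.sqrt(dot(tp, tp) * dot(ip, ip))

# ... is the scalar product of the unit-length zero-mean vectors
r2 = dot(unit(tp), unit(ip))
```

So the score is the cosine of the angle between the mean-subtracted
vectors, which also makes the bound |R| <= 1 obvious.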

Incidentally, you will see SSD (sum of squared differences) also used
in certain cases.
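For completeness, SSD in the same pure-Python style (again just a
sketch, with names of my choosing). Unlike the correlation scores,
smaller is better, an exact match gives 0, and plain SSD is not
invariant under (2):

```python
def ssd(patch, T):
    """Sum of squared differences between a template and a
    same-sized patch; 0 means an exact match."""
    return sum((p - t) ** 2
               for prow, trow in zip(patch, T)
               for p, t in zip(prow, trow))

print(ssd([[1, 2], [3, 4]], [[1, 2], [3, 4]]))   # exact match → 0
print(ssd([[1, 2], [3, 4]], [[0, 2], [3, 4]]))   # one pixel off → 1
```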

Best regards,

Jon C.