Implementing the Harris Corner Detector

As an introduction to OpenCV and using it with modern C++, I decided to code a Harris corner detector. I’d previously only used MexOpenCV so this was new to me. I’m 100% certain that this could’ve been done more efficiently but I think that I should prioritize moving on to new material rather than perfecting this. Quality vs quantity. This was also my first introduction to makefiles and gdb, but I’ll include that elsewhere.

My main problem when coding this was that I kept mixing up types for cv::Mat. This website was so incredibly helpful for me. I can't even begin to explain how many errors I had that came down simply to mixing up Mat types. I'm not certain why the compiler doesn't throw an error when this happens, but I might switch to a different one. I also found the at function strange in OpenCV, as in image.at<uchar>(i, j). Why can't the type be deduced?
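
A small example of the pitfall, as a standalone sketch: the element type of a cv::Mat is only a runtime flag on the Mat, so at<T>() has no way to check or deduce T at compile time.

// The Mat below holds uchar (CV_8UC1), but at<T>() trusts whatever T you pass.
// A mismatch compiles fine and simply reinterprets bytes (debug builds of
// OpenCV may catch it with an assertion instead).
cv::Mat img(10, 10, CV_8UC1, cv::Scalar(7));
uchar ok  = img.at<uchar>(0, 0);  // correct: CV_8UC1 elements are uchar
float bad = img.at<float>(0, 0);  // compiles, but reads 4 bytes as a float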

These are the general steps of the Harris Corner Detector:

  1. Convert the image to grayscale
  2. Apply the Sobel operator to find the gradient values at each pixel
  3. Compute the gradient covariance matrix elements
  4. Apply Gaussian blur to the covariance matrix elements
  5. Calculate the Harris score
  6. Threshold and apply non-maximum suppression

I’ve seen sources apply Gaussian blur to the image after step 1 as well.

I wanted to learn this from a sort of first-principles approach, so I started by coding a Sobel operator. This is a method of finding the x and y gradients at every pixel of an image. Functionally, it detects edges, which is useful because corners are the intersections of edges. How it works is you take a specific 3×3 kernel (matrix), multiply it element-wise with a 3×3 patch of the image, and sum the results.

Sobel operator kernel
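
For reference, the standard 3×3 Sobel kernels (the sign convention only sets which direction counts as a positive gradient):

$$
G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad
G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}
$$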

My implementation iterates over each pixel in the image. I hardcoded the kernel, as opposed to creating a Mat, simply because it seemed simpler. I created two temporary Mats of int type to store the output. This was necessary because the output of the multiplication would have overflowed uchar. At the end I scale and cast the temporary Mat data back to uchar for consistency. You can see the results of running it on a chessboard image below.

The original image

X Gradient

Y Gradient

I then needed to calculate the gradient covariance matrix, which looks like this:
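
In the usual notation, with Ix and Iy being the Sobel gradients at a pixel:

$$
M = \begin{bmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{bmatrix}
  = \begin{bmatrix} I_{xx} & I_{xy} \\ I_{xy} & I_{yy} \end{bmatrix}
$$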

Since the Sobel operator gives the x and y gradients at each pixel, this just means iterating over each pixel in the gx and gy Mats and multiplying as required. I used the mul function, which does element-wise multiplication.

This was one tricky part for me. I had gotten to the part where I had to calculate the Harris score, which is determined by the equation below.
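
In standard form, with k an empirically chosen constant (typically somewhere around 0.04 to 0.06):

$$
R = \det(M) - k \, \operatorname{trace}(M)^2
  = (I_{xx} I_{yy} - I_{xy}^2) - k \, (I_{xx} + I_{yy})^2
$$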

I had a problem though. Wouldn't the determinant of M always be 0? At a single pixel, M is just the outer product of (Ix, Iy) with itself, which is rank one, so Ixx·Iyy − Ixy² is exactly zero. What needs to happen before I do the Harris score calculation is to apply a window function to M, which effectively sums these rank-one matrices over a neighborhood so the determinant becomes meaningful.

For this I chose a 3×3 Gaussian blur. Gaussian blur works much like the Sobel operator: you apply a 3×3 kernel to each 3×3 image patch. At the end you divide by the sum of the kernel elements (16 here), so the result is a weighted average.

Gaussian blur kernel
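
For reference, the 3×3 kernel the code below builds, normalized by the sum of its entries (16):

$$
K = \frac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}
$$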

I tried to do something a little strange here. Rather than hardcoding the values of the Gaussian kernel, I created a loop to fill the values in for me. If I had to do this again, I would probably hardcode the values in a Mat, then use the mul function. I don't think I would precalculate the values from the Gaussian function, since it's not too difficult to just hardcode them.
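
A minimal sketch of that alternative, for a single pixel at column i, row j of a CV_8UC1 image (assuming it's at least one pixel from the border):

// Hardcode the Gaussian kernel in a Mat, then blur one pixel by taking the
// element-wise product with the surrounding 3x3 patch and summing.
cv::Mat kernel = (cv::Mat_<float>(3, 3) << 1, 2, 1,
                                           2, 4, 2,
                                           1, 2, 1);
kernel = kernel / 16.0f;  // normalize so the weights sum to 1
cv::Mat patch;
image(cv::Rect(i - 1, j - 1, 3, 3)).convertTo(patch, CV_32F);  // 3x3 patch as float
float blurred = (float)cv::sum(patch.mul(kernel))[0];          // weighted average

This would still be looped over every pixel; it mainly just avoids writing the kernel weights out inline.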

Blurred XX Gradient

Blurred XY Gradient

Blurred YY Gradient

Now we can calculate the Harris score. I use a k constant of 0.04 since that's what was recommended here. I then create a float Mat called window and fill it with the elements of the blurred gradient covariance matrix. Again, its type is float to avoid overflow when calculating the trace and determinant. I then threshold based on an empirically determined value (I chose it based on when I was getting a reasonable number of corners). It's important to note that both very negative and very positive values are what you're looking for. I then put the absolute value of the score into a Mat. I considered using a std::vector<cv::Point>, but I wanted the score as well as the coordinates.

I then ran a quick and not very good form of non-maximum suppression. The idea is to find the best corner in an area and suppress (ignore) all the others. I iterated over the corners Mat with a 40×40 window and placed the point with the highest score into a std::vector<cv::Point>. I have a gut feeling this could've been done much more efficiently, without needing a Mat with all the corners, but I'm certain I'll have the opportunity to reimplement this at some point in the future. The major issue is at the intersections of the 40×40 windows. The image below shows what happens.
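
One common way to fix the boundary problem would be dilation-based non-maximum suppression, where every pixel is compared against the maximum of its full neighborhood instead of a fixed grid cell. A rough sketch, assuming corners is the CV_64F response Mat that cornerDetect fills:

// Keep a pixel only if it equals the maximum over its neighborhood.
// Twenty iterations of the default 3x3 dilation give roughly a 41x41 neighborhood.
cv::Mat dilated, localMax;
cv::dilate(corners, dilated, cv::Mat(), cv::Point(-1, -1), 20);
cv::compare(corners, dilated, localMax, cv::CMP_GE);  // 255 where a pixel is a local max
cv::bitwise_and(localMax, corners > 0, localMax);     // and it passed the threshold
std::vector<cv::Point> points;
cv::findNonZero(localMax, points);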

I then drew all of my corners onto the color image for easy viewing. I seem to have issues with not detecting the corners perfectly on center, as well as the previously mentioned non-max suppression problem. This OpenCV tutorial details how sub-pixel accuracy can be achieved.
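
The refinement step itself looks fairly simple. A sketch, assuming image is the grayscale Mat and points is the vector returned by nonMaxSuppression (cornerSubPix wants Point2f):

// Refine the detected corners to sub-pixel accuracy.
std::vector<cv::Point2f> refined(points.begin(), points.end());  // Point -> Point2f
cv::cornerSubPix(image, refined,
                 cv::Size(5, 5),    // half side length of the search window (11x11 total)
                 cv::Size(-1, -1),  // no dead zone in the middle of the window
                 cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 40, 0.001));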

I’ve added my full code below.

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>
#include <vector>
#include <cmath>

std::vector<cv::Point> nonMaxSuppression(cv::Mat corners){
  cv::Mat window = cv::Mat(40, 40, CV_64F);
  std::vector<cv::Point> points;
  double maxval;
  cv::Point point;
  for(int i = 0; i < corners.cols - 40; i += 40){
    for(int j = 0; j < corners.rows - 40; j += 40){
      window = cv::Mat(corners, cv::Rect(i, j, 40, 40));
      minMaxLoc(window, NULL, &maxval, NULL, &point);
      if(maxval != 0)
        points.push_back(cv::Point(point.x + i, point.y + j));
    }
  }
  return points;
}

void cornerDetect(cv::Mat ixx, cv::Mat ixy, cv::Mat iyy, cv::Mat corners){ //change corners to float??
  const float k = 0.04;  // must be floating point; an int here would truncate k to 0
  cv::Mat window = cv::Mat(2, 2, CV_32F);
  for(int i = 0; i < ixx.cols; ++i){
    for(int j = 0; j < ixx.rows; ++j){
      float Ixx = (float) ixx.at<uchar>(cv::Point(i, j));
      float Ixy = (float) ixy.at<uchar>(cv::Point(i, j));
      float Iyy = (float) iyy.at<uchar>(cv::Point(i, j));
      window.at<float>(0, 0) = Ixx;
      window.at<float>(0, 1) = Ixy;
      window.at<float>(1, 0) = Ixy;
      window.at<float>(1, 1) = Iyy;
      double score = cv::determinant(window) - k * cv::trace(window)[0] * cv::trace(window)[0];
      if(score > 20000 || score < -20000)
        corners.at<double>(cv::Point(i, j)) = std::abs(score);
    }
  }
}

void gaussBlur(cv::Mat image, cv::Mat blurredimage){
  for(int i = 3; i < image.cols - 3; ++i){
    for(int j = 3; j < image.rows - 3; ++j){
      int val = 0;
      for(int h = - 1; h < 2; ++h){
        for(int k = - 1; k < 2; ++k){
          int vertical = (h == 0) ? 2 : 1;
          int horizontal = (k == 0) ? 2 : 1;
          val += (int)image.at<uchar>(cv::Point(h + i, k + j)) * vertical * horizontal;
        }
      }
      blurredimage.at<uchar>(cv::Point(i, j)) = val / 16; 
    }
  }
}

void sobel(cv::Mat image, cv::Mat gx, cv::Mat gy){
  cv::Mat temp1 = cv::Mat(image.rows, image.cols, CV_32S);  // cv::Mat takes (rows, cols); int type avoids uchar overflow
  cv::Mat temp2 = cv::Mat(image.rows, image.cols, CV_32S);
  for(int i = 3; i < image.cols - 3; ++i){
    for(int j = 3; j < image.rows - 3; ++j){
       // x gradient: negative weights on column i - 1, positive on column i + 1
       temp1.at<int>(cv::Point(i, j)) = -image.at<uchar>(cv::Point(i - 1, j - 1)) - 2 * image.at<uchar>(cv::Point(i - 1, j)) - image.at<uchar>(cv::Point(i - 1, j + 1)) + image.at<uchar>(cv::Point(i + 1, j - 1)) + 2 * image.at<uchar>(cv::Point(i + 1, j)) + image.at<uchar>(cv::Point(i + 1, j + 1));
       // y gradient: positive weights on row j - 1, negative on row j + 1
       temp2.at<int>(cv::Point(i, j)) = image.at<uchar>(cv::Point(i - 1, j - 1)) + 2 * image.at<uchar>(cv::Point(i, j - 1)) + image.at<uchar>(cv::Point(i + 1, j - 1)) - image.at<uchar>(cv::Point(i - 1, j + 1)) - 2 * image.at<uchar>(cv::Point(i, j + 1)) - image.at<uchar>(cv::Point(i + 1, j + 1));
    }
  }
 cv::convertScaleAbs(temp1, gx);
 cv::convertScaleAbs(temp2, gy);
}

int main(){
  cv::Mat image, colorimage;
  image = cv::imread("image.png", cv::IMREAD_GRAYSCALE);
  colorimage = cv::imread("image.png", cv::IMREAD_COLOR);
  cv::namedWindow("display window", cv::WINDOW_NORMAL);
  cv::Mat gx(image.rows, image.cols, CV_8UC1);
  cv::Mat gy(image.rows, image.cols, CV_8UC1);
  sobel(image, gx, gy);
  cv::Mat gxx = cv::Mat(image.rows, image.cols, CV_8UC1);
  cv::Mat gyy = cv::Mat(image.rows, image.cols, CV_8UC1);
  cv::Mat gxy = cv::Mat(image.rows, image.cols, CV_8UC1);
  gxx = gx.mul(gx);
  gyy = gy.mul(gy);
  gxy = gx.mul(gy);
  cv::Mat blurredgxx = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);  // zero-init since gaussBlur skips the borders
  cv::Mat blurredgxy = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
  cv::Mat blurredgyy = cv::Mat::zeros(image.rows, image.cols, CV_8UC1);
  gaussBlur(gxx, blurredgxx);
  gaussBlur(gxy, blurredgxy); 
  gaussBlur(gyy, blurredgyy);
  cv::Mat corners = cv::Mat::zeros(image.rows, image.cols, CV_64F);  // zero-init so untouched pixels read as 0
  cornerDetect(blurredgxx, blurredgxy, blurredgyy, corners);
  std::vector<cv::Point> points = nonMaxSuppression(corners);
  for(auto i : points)
    cv::circle(colorimage, i, 7, cv::Scalar(0, 0, 255));  // (0,0,255) without cv::Scalar is just the comma operator
  cv::imshow("display window", colorimage);
  cv::waitKey(0);
  return 0;
}
