
[OpenCV] detectMultiScale: output detection score

OpenCV provides quite decent implementation of the Viola-Jones Face detector.

A quick example looks like this (OpenCV 2.4.5 tested):

// File: main.cc
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char **argv) {

    CascadeClassifier cascade;
    const float scale_factor(1.2f);
    const int min_neighbors(3);

    if (cascade.load("./lbpcascade_frontalface.xml")) {

        for (int i = 1; i < argc; i++) {

            Mat img = imread(argv[i], CV_LOAD_IMAGE_GRAYSCALE);
            equalizeHist(img, img);
            vector<Rect> objs;
            cascade.detectMultiScale(img, objs, scale_factor, min_neighbors);

            Mat img_color = imread(argv[i], CV_LOAD_IMAGE_COLOR);
            for (int n = 0; n < objs.size(); n++) {
                rectangle(img_color, objs[n], Scalar(255,0,0), 8);
            }
            imshow("VJ Face Detector", img_color);
            waitKey(0);
        }
    }

    return 0;
}
Compile it with:

g++ -std=c++0x -I/usr/local/include `pkg-config --libs opencv` main.cc -o main

The detection results are as shown below:
[image: detection result]

For a more serious user, it would be nice to have a detection score for each detected face.
OpenCV provides an overloaded function designed for this purpose, which lacks detailed documentation:

vector<int> reject_levels;
vector<double> level_weights;
cascade.detectMultiScale(img, objs, reject_levels, level_weights, scale_factor, min_neighbors);

The reject_levels and level_weights vectors will remain empty unless you write it like this (the whole file):

// File: main.cc
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char **argv) {

    CascadeClassifier cascade;
    const float scale_factor(1.2f);
    const int min_neighbors(3);

    if (cascade.load("./lbpcascade_frontalface.xml")) {

        for (int i = 1; i < argc; i++) {

            Mat img = imread(argv[i], CV_LOAD_IMAGE_GRAYSCALE);
            equalizeHist(img, img);
            vector<Rect> objs;
            vector<int> reject_levels;
            vector<double> level_weights;
            cascade.detectMultiScale(img, objs, reject_levels, level_weights, scale_factor, min_neighbors, 0, Size(), Size(), true);

            Mat img_color = imread(argv[i], CV_LOAD_IMAGE_COLOR);
            for (int n = 0; n < objs.size(); n++) {
                rectangle(img_color, objs[n], Scalar(255,0,0), 8);
                putText(img_color, std::to_string(level_weights[n]),
                        Point(objs[n].x, objs[n].y), 1, 1, Scalar(0,0,255));
            }
            imshow("VJ Face Detector", img_color);
            waitKey(0);
        }
    }

    return 0;
}

However, this will give you a large number of detected rectangles:
[image: result-org]

This is because OpenCV skips the step of filtering out the overlapping small rectangles in this case. I have no idea whether this is by design, but output like this would not be helpful, at least in my own case.

So we need to make our own changes in OpenCV's source code.
There are different ways to design a detection score, such as:
"In the OpenCV implementation, stage_sum is computed and compared against the stage_threshold of each stage i to accept/reject a candidate window. We define the detection score for a candidate window as K*stage_when_rejected + stage_sum_for_stage_when_rejected. If a window is accepted by the cascade, we just use K*last_stage + stage_sum_for_last_stage. Choosing K as a large value, e.g., 1000, we ensure that windows rejected at stage i have a higher score than those rejected at stage i-1." from http://vis-www.cs.umass.edu/fddb/faq.html

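As a rough illustration of that scheme, here is a minimal sketch of my own (assuming stage_when_stopped and stage_sum are available, e.g., from the reject_levels and level_weights outputs shown above):

// FDDB-style score: K*stage + stage_sum, with K chosen large enough that
// stopping at a later stage always outranks stopping at an earlier one.
double detection_score(int stage_when_stopped, double stage_sum, double K = 1000.0) {
    return K * stage_when_stopped + stage_sum;
}
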
Actually, I found that a straightforward design of the detection score works well in my own work. In the last stage of the face detector in OpenCV, detection rectangles are grouped into clusters to eliminate small overlapping rectangles while keeping the most promising ones. The number of final detected faces is at most the number of clusters. So we can simply use the number of rectangles grouped into a cluster as the detection score of the associated final rectangle, which may not be accurate but works in practice.
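As an aside, a similar cluster-size score can also be computed from user code, without patching OpenCV, through the public groupRectangles overload that reports a weight for each surviving cluster. This is only a sketch, assuming objs holds the ungrouped rectangles returned by the outputRejectLevels=true call above:

// Group the raw rectangles ourselves and keep the cluster sizes as scores.
vector<Rect> grouped = objs;   // groupRectangles modifies its input in place
vector<int> weights;           // receives the number of rectangles merged into each cluster
groupRectangles(grouped, weights, min_neighbors, 0.2);
// grouped[k] is the k-th final rectangle and weights[k] its (rough) detection score

The rest of this post sticks with the source-level change instead.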

To make this change in OpenCV 2.4.5, find the file modules/objdetect/src/cascadedetect.cpp (around line 200):

// modules/objdetect/src/cascadedetect.cpp (line 200)
// int n1 = levelWeights ? rejectLevels[i] : rweights[i]; //< comment out this line
int n1 = rweights[i]; //< the change

We then modify the file main.cc accordingly:

// File: main.cc
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char **argv) {

    CascadeClassifier cascade;
    const float scale_factor(1.2f);
    const int min_neighbors(3);

    if (cascade.load("./lbpcascade_frontalface.xml")) {

        for (int i = 1; i < argc; i++) {

            Mat img = imread(argv[i], CV_LOAD_IMAGE_GRAYSCALE);
            equalizeHist(img, img);
            vector<Rect> objs;
            vector<int> reject_levels;
            vector<double> level_weights;
            cascade.detectMultiScale(img, objs, reject_levels, level_weights, scale_factor, min_neighbors, 0, Size(), Size(), true);

            Mat img_color = imread(argv[i], CV_LOAD_IMAGE_COLOR);
            for (int n = 0; n < objs.size(); n++) {
                rectangle(img_color, objs[n], Scalar(255,0,0), 8);
                putText(img_color, std::to_string(reject_levels[n]),
                        Point(objs[n].x, objs[n].y), 1, 1, Scalar(0,0,255));
            }
            imshow("VJ Face Detector", img_color);
            waitKey(0);
        }
    }

    return 0;
}

And we can have the detection scores like this:
[image: result-final]

On 2-dimensional arrays in C++

I was asked about this today. In practice, I rarely use 2-dimensional arrays; instead I use a vector of vectors.

To allocate a 2-d array on the stack, the C-style declaration is

int d[2][3];

An element is then referred to as

d[i][j];

To allocate one dynamically, one can NOT write the code like this

int **wrong_d = new int[2][3];

since the d in int d[2][3]; is not an int**; instead, it is of the type

int (*)[3]

or, in your evil human words, a pointer to int[3].

It is a little tricky to declare a 2-d array dynamically.

int (*d)[3] = new int[2][3];

Or

int v1 = 2;
int (*d)[3] = new int[v1][3];

The number 3 here can NOT be replaced by a non-constant. My understanding is that this value is part of the type of d, and in a strongly typed language like C/C++ the type must be known to the compiler.
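
For illustration, here is a small sketch showing what does and does not compile:

#include <iostream>

int main() {
    int rows = 2;
    int (*ok)[3] = new int[rows][3];   // fine: only the first dimension may be a run-time value
    ok[1][2] = 7;
    std::cout << ok[1][2] << std::endl;
    delete[] ok;                       // matching array form of delete

    // int cols = 3;
    // int (*bad)[cols] = new int[rows][cols]; // error: cols is not a compile-time constant

    return 0;
}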

We can check the size of variables to verify this interpretation.

#include <iostream>

using namespace std;

int main(int argc, char **argv)
{
    int v1 = 2;
    int (*d)[3] = new int[v1][3];
    cout << "sizeof(d): " << sizeof(d) << endl;
    cout << "sizeof(d[0]): " << sizeof(d[0]) << endl;
    cout << "sizeof(d[1]): " << sizeof(d[1]) << endl;
    cout << "sizeof(d[0][0]): " << sizeof(d[0][0]) << endl;
    return 0;
}

The output is:

$./test 
sizeof(d): 8        //< d itself is just a pointer (8 bytes on a 64-bit machine)
sizeof(d[0]): 12    //< int[3], d[0][0] d[0][1] d[0][2]
sizeof(d[1]): 12    //< int[3], d[1][0] d[1][1] d[1][2]
sizeof(d[0][0]): 4  //< an integer

To save yourself the trouble, I would recommend using vectors.

int v1 = 2;
int v2 = 3;
vector<vector<int> > d(v1, vector<int>(v2, 0));
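
A short usage sketch, for completeness: both dimensions can be run-time values, and elements are accessed with the same d[i][j] syntax as a built-in array.

#include <iostream>
#include <vector>

using namespace std;

int main() {
    int v1 = 2;
    int v2 = 3;
    vector<vector<int> > d(v1, vector<int>(v2, 0));  // v1 rows, v2 columns, all zeros
    d[1][2] = 42;                                    // same indexing syntax as a C array
    cout << d[1][2] << endl;
    return 0;
}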

[OpenCV] detectMultiScale

I ran into a problem when using OpenCV's ‘detectMultiScale’ interface. The rectangles it returns may not be fully inside the frame of the original image. As a result, if these rectangles are applied directly to the original image to crop out the detected objects, your program crashes.
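
A simple user-side workaround, shown here only as a quick sketch, is to clamp each returned rectangle to the image bounds with cv::Rect's intersection operator before cropping:

// Intersect each detection with the full-image rectangle so the ROI is always valid.
Rect frame(0, 0, img.cols, img.rows);
for (size_t n = 0; n < objs.size(); n++) {
    Rect safe = objs[n] & frame;   // & computes the intersection of two rectangles
    if (safe.area() > 0) {
        Mat crop = img(safe);      // safe is guaranteed to lie inside img
    }
}

That avoids the crash, but the rest of this post looks at why the rectangles can stick out in the first place.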

These are the interfaces:

virtual void detectMultiScale( const Mat& image,
                               CV_OUT vector<Rect>& objects,
                               double scaleFactor=1.1,
                               int minNeighbors=3, int flags=0,
                               Size minSize=Size(),
                               Size maxSize=Size() );

and

virtual void detectMultiScale( const Mat& image,
                               CV_OUT vector<Rect>& objects,
                               vector<int>& rejectLevels,
                               vector<double>& levelWeights,
                               double scaleFactor=1.1,
                               int minNeighbors=3, int flags=0,
                               Size minSize=Size(),
                               Size maxSize=Size(),
                               bool outputRejectLevels=false );

I am copying what I wrote for a pull request on GitHub. This possible issue can be fixed easily by modifying one line in the source file

modules/objdetect/src/cascadedetect.cpp

Replace this one

Size processingRectSize( scaledImageSize.width - originalWindowSize.width + 1, scaledImageSize.height - originalWindowSize.height + 1 );

with this line

Size processingRectSize( scaledImageSize.width - originalWindowSize.width, scaledImageSize.height - originalWindowSize.height );

My explanation goes here; ignore the line numbers if they look wrong to you.
“Actually, in the code, the workflow is more complicated. In the file cascadedetect.cpp,

This is the line building the final detected rectangle

995 rectangles->push_back(Rect(cvRound(x*scalingFactor), cvRound(y*scalingFactor), winSize.width, winSize.height));

the winSize is assigned here

969 Size winSize(cvRound(classifier->data.origWinSize.width * scalingFactor), cvRound(classifier->data.origWinSize.height * scalingFactor));

while the maximum values of x and y can be found here; they are related to processingRectSize.

971         int y1 = range.start * stripSize;
972         int y2 = min(range.end * stripSize, processingRectSize.height);
973         for( int y = y1; y < y2; y += yStep )
974         {
975             for( int x = 0; x < processingRectSize.width; x += yStep )

Say the original image size is O, the original window size is W, and the scaling factor is F. O and W are integers and F is a decimal, usually larger than 1. Width and height are assumed to be equal in this example.

If we calculate the right-most point of the detected rectangle, it should be:

the maximum x is (cvRound(O/F) - W), the current winSize is W*F, and following line 995 we get:

cvRound( (cvRound(O/F) - W) * F ) + W*F

This can be larger than O: say O is 600, F is 4.177250, and W is 24; the number we get above is 601.254, which is larger than 600.”
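
A quick standalone check of that arithmetic (my own throwaway program, not OpenCV code) reproduces the same number:

#include <cmath>
#include <cstdio>

// behaves like cvRound for the positive, non-tie values used here
static int round_like_cvRound(double v) { return static_cast<int>(std::floor(v + 0.5)); }

int main() {
    const double O = 600.0, F = 4.177250, W = 24.0;
    // cvRound( (cvRound(O/F) - W) * F ) + W*F, the right-most edge derived above
    double right_edge = round_like_cvRound((round_like_cvRound(O / F) - W) * F) + W * F;
    std::printf("right edge = %.3f, image width = %.0f\n", right_edge, O);  // prints 601.254 > 600
    return 0;
}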

Hope these help.