[OpenCV] detectMultiScale: output detection score

OpenCV provides a quite decent implementation of the Viola-Jones face detector.

A quick example looks like this (tested with OpenCV 2.4.5):

// File: main.cc
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char **argv) {

    CascadeClassifier cascade;
    const float scale_factor(1.2f);
    const int min_neighbors(3);

    if (cascade.load("./lbpcascade_frontalface.xml")) {

        for (int i = 1; i < argc; i++) {

            Mat img = imread(argv[i], CV_LOAD_IMAGE_GRAYSCALE);
            equalizeHist(img, img);
            std::vector<Rect> objs;
            cascade.detectMultiScale(img, objs, scale_factor, min_neighbors);

            Mat img_color = imread(argv[i], CV_LOAD_IMAGE_COLOR);
            for (int n = 0; n < objs.size(); n++) {
                rectangle(img_color, objs[n], Scalar(255,0,0), 8);
            }
            imshow("VJ Face Detector", img_color);
            waitKey(0);
        }
    }

    return 0;
}

Compile it with:

g++ -std=c++0x -I/usr/local/include `pkg-config --libs opencv` main.cc -o main

The detection results are as shown below:
[Figure: detection result]

For more serious use, it would be nice to have a detection score for each detected face.
OpenCV provides an overloaded function designed for this usage, which lacks detailed documentation:

vector<int> reject_levels;
vector<double> level_weights;
cascade.detectMultiScale(img, objs, reject_levels, level_weights, scale_factor, min_neighbors);

The reject_levels and level_weights will remain empty unless you set the last argument (outputRejectLevels) to true, like this (the whole file):

// File: main.cc
#include <opencv2/opencv.hpp>
#include <string>

using namespace cv;

int main(int argc, char **argv) {

    CascadeClassifier cascade;
    const float scale_factor(1.2f);
    const int min_neighbors(3);

    if (cascade.load("./lbpcascade_frontalface.xml")) {

        for (int i = 1; i < argc; i++) {

            Mat img = imread(argv[i], CV_LOAD_IMAGE_GRAYSCALE);
            equalizeHist(img, img);
            std::vector<Rect> objs;
            std::vector<int> reject_levels;
            std::vector<double> level_weights;
            cascade.detectMultiScale(img, objs, reject_levels, level_weights, scale_factor, min_neighbors, 0, Size(), Size(), true);

            Mat img_color = imread(argv[i], CV_LOAD_IMAGE_COLOR);
            for (int n = 0; n < objs.size(); n++) {
                rectangle(img_color, objs[n], Scalar(255,0,0), 8);
                putText(img_color, std::to_string(level_weights[n]),
                        Point(objs[n].x, objs[n].y), 1, 1, Scalar(0,0,255));
            }
            imshow("VJ Face Detector", img_color);
            waitKey(0);
        }
    }

    return 0;
}

However, this will give you a large number of detected rectangles:
[Figure: a large number of overlapping detected rectangles]

This is because OpenCV skips the step of filtering out the overlapping small rectangles when outputRejectLevels is set to true. I have no idea whether this is by design, but output like this would not be helpful, at least in my own case.

So we need to make our own changes in OpenCV's source code.
There are different ways to design a detection score. For example:
"In the OpenCV implementation, stage_sum is computed and compared against the stage_threshold of stage i to accept/reject a candidate window. We define the detection score for a candidate window as K*stage_when_rejected + stage_sum_for_stage_when_rejected. If a window is accepted by the cascade, we just use K*last_stage + stage_sum_for_last_stage. Choosing K as a large value, e.g., 1000, we ensure that windows rejected at stage i have a higher score than those rejected at stage i-1." (from http://vis-www.cs.umass.edu/fddb/faq.html)

Actually, I found that a straightforward design of the detection score works well in my own work. In the last step of the face detector in OpenCV, detected rectangles are grouped into clusters to eliminate small overlapping rectangles while keeping the most promising ones. The number of final detected faces is at most the number of clusters. So we can simply use the number of rectangles grouped into a cluster as the detection score of the associated final rectangle, which may not be accurate but works well enough.
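
Note that the grouping step is also exposed publicly as cv::groupRectangles, and its overload with a weights output reports exactly this cluster size. So the same score can be sketched without patching the library, assuming the raw rectangles are fetched with min_neighbors set to 0 (which leaves the built-in grouping a no-op):

// Sketch: fetch the raw rectangles, then group them ourselves.
// Afterwards objs holds the merged rectangles and weights[n] is the
// number of raw rectangles in cluster n, i.e. the score described above.
std::vector<Rect> objs;
cascade.detectMultiScale(img, objs, scale_factor, 0 /* min_neighbors */);
std::vector<int> weights;
groupRectangles(objs, weights, min_neighbors, 0.2);

The rest of this post takes the patching route instead, so that the outputRejectLevels example above keeps working unchanged.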

To make this change in OpenCV 2.4.5, find the file modules/objdetect/src/cascadedetect.cpp (line 200):

// modules/objdetect/src/cascadedetect.cpp (line 200)
// int n1 = levelWeights ? rejectLevels[i] : rweights[i]; //< comment out this line
int n1 = rweights[i]; //< the change

We then modify the file main.cc accordingly:

// File: main.cc
#include <opencv2/opencv.hpp>
#include <string>

using namespace cv;

int main(int argc, char **argv) {

    CascadeClassifier cascade;
    const float scale_factor(1.2f);
    const int min_neighbors(3);

    if (cascade.load("./lbpcascade_frontalface.xml")) {

        for (int i = 1; i < argc; i++) {

            Mat img = imread(argv[i], CV_LOAD_IMAGE_GRAYSCALE);
            equalizeHist(img, img);
            std::vector<Rect> objs;
            std::vector<int> reject_levels;
            std::vector<double> level_weights;
            cascade.detectMultiScale(img, objs, reject_levels, level_weights, scale_factor, min_neighbors, 0, Size(), Size(), true);

            Mat img_color = imread(argv[i], CV_LOAD_IMAGE_COLOR);
            for (int n = 0; n < objs.size(); n++) {
                rectangle(img_color, objs[n], Scalar(255,0,0), 8);
                putText(img_color, std::to_string(reject_levels[n]),
                        Point(objs[n].x, objs[n].y), 1, 1, Scalar(0,0,255));
            }
            imshow("VJ Face Detector", img_color);
            waitKey(0);
        }
    }

    return 0;
}

And we can have the detection scores like this:
[Figure: detections annotated with scores]

[OpenCV] detectMultiScale

I met a problem when using the OpenCV interface detectMultiScale: the rectangles it returns may not be fully inside the frame of the original image. As a result, if these rectangles are applied directly to the original image to crop out the detected objects, your program crashes.
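
A simple workaround on the caller's side, before any source change, is to clamp each rectangle to the image frame before cropping; a minimal sketch using Rect's intersection operator, with names taken from the example above:

// Intersect the detected rectangle with the image frame so the
// ROI below cannot run out of bounds.
Rect safe = objs[n] & Rect(0, 0, img_color.cols, img_color.rows);
Mat face = img_color(safe);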

These are the interfaces:

virtual void detectMultiScale( const Mat& image,
                               CV_OUT vector<Rect>& objects,
                               double scaleFactor=1.1,
                               int minNeighbors=3, int flags=0,
                               Size minSize=Size(),
                               Size maxSize=Size() );

and

virtual void detectMultiScale( const Mat& image,
                               CV_OUT vector<Rect>& objects,
                               vector<int>& rejectLevels,
                               vector<double>& levelWeights,
                               double scaleFactor=1.1,
                               int minNeighbors=3, int flags=0,
                               Size minSize=Size(),
                               Size maxSize=Size(),
                               bool outputRejectLevels=false );

I am copying below what I wrote for a pull request on GitHub. This 'may-be issue' can be fixed easily by modifying one line in the source file

modules/objdetect/src/cascadedetect.cpp

Replace this one

Size processingRectSize( scaledImageSize.width - originalWindowSize.width + 1, scaledImageSize.height - originalWindowSize.height + 1 );

with this line

Size processingRectSize( scaledImageSize.width - originalWindowSize.width, scaledImageSize.height - originalWindowSize.height );

My explanation goes here; ignore the line numbers if they look wrong to you.
“Actually, in the code, the workflow is more complicated. In the file cascadedetect.cpp

This is the line building the final detected rectangle

995 rectangles->push_back(Rect(cvRound(x*scalingFactor), cvRound(y*scalingFactor), winSize.width, winSize.height));

the winSize is assigned here

969 Size winSize(cvRound(classifier->data.origWinSize.width * scalingFactor), cvRound(classifier->data.origWinSize.height * scalingFactor));

while the maximum values of x and y can be found here; they are related to processingRectSize.

971         int y1 = range.start * stripSize;
972         int y2 = min(range.end * stripSize, processingRectSize.height);
973         for( int y = y1; y < y2; y += yStep )
974         {
975             for( int x = 0; x < processingRectSize.width; x += yStep )

Say the original image size is O, the original window size is W, and the scaling factor is F. O and W are integers, and F is a decimal, usually larger than 1. Width and height are assumed equal in this example.

If we calculate the right-most point of the detected rectangle, it should be:

the maximum x is (cvRound(O/F) - W) and the current winSize is W*F; following line 995 we get:

cvRound( (cvRound(O/F) - W) * F ) + W*F

This can be larger than O. Say O is 600, F is 4.177250, and W is 24: the number we get above is 601.254, which is larger than 600."
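
To double-check the arithmetic, here is a minimal standalone computation of that right-most coordinate with the numbers above (rounding to nearest, as cvRound does for these values):

#include <cmath>
#include <cstdio>

int main() {
    const int O = 600, W = 24;    // image size, original window size
    const double F = 4.177250;    // scaling factor
    int maxX = (int)std::lround(O / F) - W;        // 144 - 24 = 120
    double right = std::lround(maxX * F) + W * F;  // 501 + 100.254
    std::printf("right-most point: %f (image width: %d)\n", right, O);
    return 0;                     // prints 601.254000, beyond 600
}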

Hope this helps.

Implementing PCA

PCA, short for Principal Component Analysis, is a commonly used data processing technique.

Intuitively, PCA is a dimensionality reduction technique. Say we have 1000 data points, each of which is a 128-dimensional vector; in storage this can be a 1000×128 array. After PCA processing we still get 1000 data points, but each one is a vector of fewer than 128 dimensions; for example, we can use PCA to reduce the data from 128 to 64 dimensions.
PCA guarantees that this reduction loses the least information in the data representation.

How exactly is "least loss" defined?
Again take the 1000 points of 128 dimensions as an example: these 1000 vectors live in a 128-dimensional space. Looking along any one dimension, that is, one direction, if the vectors differ a lot along that direction, then that direction is important.
Conversely, if along some direction the values of all the vectors are very close, then ignoring that direction, that is, dropping that dimension of the data, has little impact on our analysis of the 1000 points. So "least loss" corresponds to dropping the directions of "least variation".
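
As a tiny, self-contained illustration of that idea (made-up numbers, and axis-aligned directions only; PCA in general finds rotated directions):

#include <cstdio>
#include <vector>

// Sample variance of one coordinate across a set of points.
double variance(const std::vector<double>& v) {
    double mean = 0.0;
    for (double x : v) mean += x;
    mean /= v.size();
    double var = 0.0;
    for (double x : v) var += (x - mean) * (x - mean);
    return var / (v.size() - 1);
}

int main() {
    // x varies a lot, y barely varies: dropping the y dimension
    // loses little information about these points.
    std::vector<double> x = {1.0, 4.0, 9.0, 16.0};
    std::vector<double> y = {2.0, 2.1, 1.9, 2.0};
    std::printf("var(x) = %.3f, var(y) = %.3f\n", variance(x), variance(y));
    return 0;
}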

So how is it actually done?

Here are two commonly used methods: SVD decomposition and EIG decomposition (eigenvalue decomposition).
What they share is that a matrix M is first derived from the data; the number of eigenvalues of M corresponds to the dimensionality of the data, and the larger an eigenvalue, the more important the corresponding dimension, that is, the "larger the variation".

SVD decomposition, in Matlab:

    % Center the data; the 1/sqrt(count-1) scaling makes the singular
    % values match the square roots of the covariance eigenvalues
    sub_input_data = (input_data - repmat(mean(input_data), count, 1)) / sqrt(count - 1);
    [U, S, V] = svd(sub_input_data);
    % First out_dim columns of V (largest singular values) as PCA bases
    pcaV = V(:, 1:out_dim);
    output_data = input_data * pcaV;

EIG decomposition, in Matlab:

    mean_input_data = mean(input_data);
    sub_input_data = input_data - repmat(mean_input_data, count, 1);
    % Sample covariance matrix of the centered data
    cov_mat = sub_input_data' * sub_input_data ./ (count - 1);
    % eig on a symmetric matrix returns eigenvalues in ascending order,
    % so the last out_dim columns of V are the PCA bases
    [V, D] = eig(cov_mat);
    pcaV = V(:, in_dim - out_dim + 1 : in_dim);
    output_data = input_data * pcaV;

If you are working in C++, OpenCV itself provides PCA, and of course you can also implement it yourself.
In OpenCV, the EIG decomposition can be done with:

cv::eigen(covMat, eigenValues, eigenVectors);
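
Alternatively, OpenCV's built-in cv::PCA class wraps the whole procedure. A minimal sketch, assuming the data is stored one sample per row as in the 1000×128 example above:

#include <opencv2/opencv.hpp>

int main() {
    // 1000 samples x 128 dimensions, one sample per row (random data here).
    cv::Mat data(1000, 128, CV_32F);
    cv::randu(data, cv::Scalar(0), cv::Scalar(1));

    // Compute the mean internally and keep the top 64 principal components.
    cv::PCA pca(data, cv::Mat(), CV_PCA_DATA_AS_ROW, 64);

    // Project into the PCA subspace: the result is 1000 x 64.
    cv::Mat reduced = pca.project(data);
    return 0;
}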

The complete Matlab code is fairly simple as well; I have put it on GitHub:
MatlabPCA

Original data:
[Figure: original data points]

After PCA:
[Figure: data points after PCA]