Saturday, March 14, 2015

OpenBR + OpenCV + Qt with CMake

In this post I'll provide the minimum code required to get a project using OpenBR, OpenCV, and Qt building with CMake.

CMake is a cross-platform, free and open-source tool for managing the build process of software in a compiler-independent way. It is designed to support directory hierarchies and applications that depend on multiple libraries.

OpenCV and OpenBR are popular computer vision libraries. The first is aimed mainly at general-purpose, real-time computer vision; the second at face detection and recognition.

OpenBR is highly dependent on Qt.

The example tackles a face detection problem and consists of main.cpp, a CMakeLists.txt, and a bash script to build and run everything.

${PROJECT_DIR}/src/main.cpp:

#include <iostream>
#include <sstream> // std::stringstream
#include <ctime>   // time, difftime
#include <cmath>   // round
#include <QString>
#include <openbr/openbr_plugin.h>
#include <opencv2/objdetect/objdetect.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/opencv.hpp>

int main( int argc, char* argv[] ) {

    br::Context::initialize(argc, argv);
    br::Globals->quiet=true; //don't print output
    br::Globals->enrollAll = true; //enable to detect multiple faces
    QSharedPointer<br::Transform> detectionTransform = 
    br::Transform::fromAlgorithm(
        "Open+Cascade(FrontalFace)+Expand"
        "+ASEFEyes+Draw[distribute=false]"
        );
    cv::VideoCapture capture;
    capture.open(0);
    capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
    capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);

    if (!capture.isOpened()) {
            std::cerr << "---(!)Error opening video capture\n";
            exit(1);
    }

    int counter = 0;
    time_t start, end;
    double fps;
    time(&start);
    cv::Mat frame, final;


    while(capture.read(frame)) {
        ++counter;
        time(&end);
        double elapsed = difftime(end, start);
        fps = (elapsed > 0) ? round(counter / elapsed) : 0; // avoid division by zero in the first second
        std::stringstream ss;
        ss << fps;

        // Openbr magic
        br::Template query(frame);
        query >> *detectionTransform;
        cv::Mat detectionResult = query.m().clone();
   
        if (detectionResult.rows != 0) {
            final = detectionResult;
        } else {
            final = frame;
        }

        // draw the FPS counter on the frame
        putText(final, "FPS: " + ss.str(), cv::Point(10, 30),
                        cv::FONT_HERSHEY_PLAIN, 2,
                        cv::Scalar::all(255),
                        2, 8); // thickness 2, 8-connected line type
        
        imshow("Test", final);

        int c = cv::waitKey(5);
        if ((c == 'c') || (c == 'q') || (c == 27)) {
            break;
        }
    }
    
    br::Context::finalize();

    return 0;
}


${PROJECT_DIR}/src/CMakeLists.txt:

cmake_minimum_required(VERSION 2.8.11)
project(MainTest)

set(CMAKE_AUTOMOC ON)
set(CMAKE_INCLUDE_CURRENT_DIR ON)

find_package(OpenCV REQUIRED)
find_package(OpenBR REQUIRED)

link_directories(/usr/lib/)

find_package(Qt5 5.4.0 REQUIRED COMPONENTS Core Gui Xml)

include_directories(${Qt5Core_INCLUDE_DIRS}
                    ${Qt5Gui_INCLUDE_DIRS}
                    ${Qt5Xml_INCLUDE_DIRS})

add_executable(MainTest main.cpp)

target_link_libraries(MainTest ${OpenCV_LIBS} ${OPENBR_LIBS} Qt5::Core) 


${PROJECT_DIR}/start.sh:

#!/bin/bash
rm -rf build
mkdir build
cd build
cmake -DCMAKE_PREFIX_PATH=/usr/local/Cellar/qt5/5.4.0 ../src
make
./MainTest


A few notes:
1. Change CMAKE_PREFIX_PATH in start.sh to point at your own Qt installation.
2. Change the Qt version in CMakeLists.txt if you have a different one, e.g. 5.1.1.

Friday, March 13, 2015

Multi Agent Games for Pacman

In this post I want to show a compact, simple, and elegant way of implementing agents for the Pacman game using Python.

The post covers implementations of the Minimax, Alpha-Beta pruning, and Expectimax algorithms.

Minimax


This algorithm is designed mainly for zero-sum games. It helps make decisions that minimise the possible loss in the worst-case scenario.

You can read more about zero-sum games at http://en.wikipedia.org/wiki/Zero-sum_game and about the minimax algorithm at http://en.wikipedia.org/wiki/Minimax.
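
Formally, the minimax value of a state s follows this recurrence (a sketch in standard notation; eval denotes the heuristic evaluation function applied at the depth cutoff):

V(s) = \begin{cases} \mathrm{eval}(s) & \text{if } s \text{ is terminal or the depth limit is reached} \\ \max_{a \in A(s)} V(\mathrm{succ}(s, a)) & \text{on Pacman's turn} \\ \min_{a \in A(s)} V(\mathrm{succ}(s, a)) & \text{on a ghost's turn} \end{cases}

This is exactly the structure of the value function below, with maxvalue and minvalue handling the two non-terminal cases.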

A tricky parameter of the minimax algorithm is the search depth: how far into the future should we look? In the code below, a depth of d with k agents expands the tree for d*k plies, i.e. d full rounds of moves (for example, depth 2 with Pacman and two ghosts gives 6 plies).

We need to implement maxvalue and minvalue functions: the maxvalue function is used for our Pacman, the minvalue function for the ghosts. The implementation consists of four simple functions.

  • Maxvalue ~ computes the best action for Pacman.
  • Minvalue ~ computes the worst-case actions of the ghosts.
  • Value ~ generalises maxvalue and minvalue.
  • GetAction ~ the entry point, called to get the best action for Pacman's current position (it kicks off the search with the depth parameter).

Maxvalue function



    def maxvalue(self, state, agentIndex, currentdepth):
      # v holds a (score, action) pair; start from the worst possible score
      v = (float("-inf"), "Stop")
      for action in state.getLegalActions(agentIndex):
        v = max([v, (self.value(state.generateSuccessor(agentIndex, action), \
          (currentdepth+1) % self.number_of_agents, currentdepth+1), action)], key=lambda item: item[0])
      return v

Minvalue function



    def minvalue(self, state, agentIndex, currentdepth):
      v = (float("inf"), "Stop")
      for action in state.getLegalActions(agentIndex):
        v = min([v, (self.value(state.generateSuccessor(agentIndex, action), \
          (currentdepth+1) % self.number_of_agents, currentdepth+1), action)], key=lambda item:item[0])
      return v

Value function



    def value(self, state, agentIndex, currentdepth):
      if state.isLose() or state.isWin() or currentdepth >= self.depth*self.number_of_agents:
        return self.evaluationFunction(state)
      if (agentIndex == 0):
        return self.maxvalue(state, agentIndex, currentdepth)[0]  
      else:
        return self.minvalue(state, agentIndex, currentdepth)[0]

GetAction function



    def getAction(self, gameState):
        self.number_of_agents = gameState.getNumAgents()
        path2 = self.maxvalue(gameState, 0, 0)  # (score, action) pair
        return path2[1]

Alpha-Beta Pruning

Our minimax algorithm works well, but at high search depths it becomes very slow. We can speed it up with Alpha-Beta pruning. The main idea is to avoid expanding game nodes that we are 100% sure are worse than ones we have already found.

You can read more about Alpha-Beta pruning at http://en.wikipedia.org/wiki/Alpha–beta_pruning.
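
To see the pruning in action before it is wired into Pacman, here is a minimal, self-contained toy sketch (separate from the Pacman code; the tree and its leaf scores are made up for illustration):

def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):   # leaf: return its score
        return node
    if maximizing:
        v = float("-inf")
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, v)
            if alpha >= beta:            # remaining children cannot change the result
                break
        return v
    else:
        v = float("inf")
        for child in node:
            v = min(v, alphabeta(child, alpha, beta, True))
            beta = min(beta, v)
            if alpha >= beta:            # prune
                break
        return v

# Classic textbook tree: the maximiser can guarantee 3, and the second
# and third subtrees are cut off before all of their leaves are visited.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))   # prints 3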

The implementation consists of the same functions as Minimax, but slightly improved.


Maxvalue function



    def maxvalue(self, state, agentIndex, currentdepth, alpha, beta):
      v = (float("-inf"), "Stop")
      for action in state.getLegalActions(agentIndex):
        v = max([v, (self.value(state.generateSuccessor(agentIndex, action), \
          (currentdepth+1) % self.number_of_agents, currentdepth+1, alpha, beta), action)], key=lambda item: item[0])
        if v[0] > beta:
          return v   # prune: no minimiser above will allow a value this high
        alpha = max(alpha, v[0])
      return v

Minvalue function 



    def minvalue(self, state, agentIndex, currentdepth, alpha, beta):
      v = (float("inf"), "Stop")
      for action in state.getLegalActions(agentIndex):
        v = min([v, (self.value(state.generateSuccessor(agentIndex, action), \
          (currentdepth+1) % self.number_of_agents, currentdepth+1, alpha, beta), action)], key=lambda item: item[0])
        if v[0] < alpha:
          return v   # prune: no maximiser above will allow a value this low
        beta = min(beta, v[0])
      return v

Value function



    def value(self, state, agentIndex, currentdepth, alpha, beta):
      if state.isLose() or state.isWin() or currentdepth >= self.depth*self.number_of_agents:
        return self.evaluationFunction(state)
      if (agentIndex == 0):
        return self.maxvalue(state, agentIndex, currentdepth, alpha, beta)[0]  
      else:
        return self.minvalue(state, agentIndex, currentdepth, alpha, beta)[0]

GetAction function



    def getAction(self, gameState):
        self.number_of_agents = gameState.getNumAgents()
        alpha = float("-inf")
        beta  = float("inf")
        path2 = self.maxvalue(gameState, 0, 0, alpha, beta)
        return path2[1]


Expectimax


The previous algorithms share one weakness: Pacman assumes the ghosts always take the worst possible action for our agent, but in general they may choose their actions randomly. This overly pessimistic model can cost points and even lose games that Pacman had a chance to win.
Expectimax uses the same logic for Pacman but models the ghosts differently: it assumes they choose their actions uniformly at random instead of adversarially.
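
In other words, at a ghost node the value is no longer the minimum but the uniform average over the legal actions (same notation as the minimax recurrence above):

V(s) = \frac{1}{|A(s)|} \sum_{a \in A(s)} V(\mathrm{succ}(s, a))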

The implementation is the same as minimax, except that we replace the minvalue function with an expected-value function (a sketch of the adapted value function is shown after the code below).

Expected value function



    def expectedvalue(self, state, agentIndex, currentdepth):
      # ghosts are modelled as choosing uniformly at random, so the node's
      # value is the average of the values over all legal actions
      v = [0.0]
      for action in state.getLegalActions(agentIndex):
        v.append(self.value(state.generateSuccessor(agentIndex, action), \
          (currentdepth+1) % self.number_of_agents, currentdepth+1))
      return sum(v) / (len(v) - 1)   # len(v)-1 actions; the initial 0.0 does not affect the sum
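
For completeness, here is a sketch of how the value function from minimax is adapted: only the ghost branch changes, dispatching to expectedvalue (which returns a plain score rather than a (score, action) pair, so no indexing is needed):

    def value(self, state, agentIndex, currentdepth):
      if state.isLose() or state.isWin() or currentdepth >= self.depth*self.number_of_agents:
        return self.evaluationFunction(state)
      if (agentIndex == 0):
        return self.maxvalue(state, agentIndex, currentdepth)[0]
      else:
        # ghosts: average over actions instead of taking the minimum
        return self.expectedvalue(state, agentIndex, currentdepth)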

Bonus:
A video of Pacman winning a game using the Expectimax algorithm with search depth equal to 2.



P.S. All source code can be found at https://github.com/orenov/AI-Berkley/tree/master/multiagent.