1 - Overview

For French speakers, the entire project is described in French here.

NXT Tower Defense is a system that lets you build, develop and play with your own robot. The robot is built with Lego Mindstorm NXT hardware and uses URBI, Qt and OpenCV for its software. This means the robot can be used by anyone who has some knowledge of development - and of Lego :-) - and the desire to learn about robotics, and especially URBI.

This robot is a projectile launcher that can detect, track and shoot objects such as the Picoo-z or the AR Drone. It can be played by 2 or more players: one controls the helicopter and the others control the 'NXT Tower Defense'.

This project comes with a ready-to-use version for testing and a developer version for customizing the system.

Note : This project has been tested under a virtualized Windows 7, with Urbi SDK 2.1 (for compatibility with the Mindstorm NXT), OpenCV 2.0 and Qt 4.7.1. The hardware used is a Lego Mindstorm NXT 2.0 set and a Logitech C910 webcam.



Good to know

  • There is a page dedicated to questions on the forum at www.gostai.com
  • Become a fan of NXT Tower Defense on Facebook :-)

2 - Ready to Use

This pack contains all the elements needed to build and use your robot very quickly. In fact, you will find the Lego building instructions and all the software needed for a quick start.

Begin by downloading the package, then follow the installation instructions below:

  • Build the robot by following the assembly instructions. Good to know : you only need the Mindstorm NXT 2.0 set, no other parts. The pack provides two ways to build your robot :

  • With Lego Digital Designer (recommended) : download LDD from the Lego website and open the .lxf files. With this method you can follow the instructions with animations.
  • With the HTML instructions : follow the instructions by opening Building Instructions [NXT Tower Defense].html in a browser.
  • Install Urbi for Mindstorm NXT : follow the instructions from part 2 to part 4 on this page (in French) : Installer URBI pour Lego Mindstorm NXT.
    Finish by copying the folder 'NXT Tower Defense' into the Urbi directory.
  • Run NXT Tower Defense.bat contained in this folder
  • Play !

Note : because of limitations of the Lego software, some details in the building instructions are not exactly the same as on the photos, so you can check your construction against all the provided photos.

3 - Gameplay

The software is a graphical interface that is detailed below:

As you can see, the interface is divided into two parts: the camera view and robot info.

  • Camera View : This part shows what is captured by the webcam; detected objects are marked with a drawn square.
  • Robot Info : This part collects the information emitted by the robot and lets you set some parameters, such as the motors' speed, enabling tracking, or starting/stopping the camera.

Once the software is launched, the robot is controlled with the arrow keys on the keyboard, and the space key launches a projectile.
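The control scheme above can be pictured as a simple key-to-command mapping. A hedged sketch (commandForKey and the plain key names are illustrative stand-ins; the real interface handles Qt key events and calls the Global.* functions defined later in global.u):

```cpp
#include <cassert>
#include <string>

// Illustrative sketch: map a key name to the urbiscript command the
// interface would send to the server. "keyName" is a stand-in for the
// Qt key event the real MainWindow receives.
std::string commandForKey(const std::string& keyName) {
    if (keyName == "Left")  return "Global.left();";
    if (keyName == "Right") return "Global.right();";
    if (keyName == "Up")    return "Global.up();";
    if (keyName == "Down")  return "Global.down();";
    if (keyName == "Space") return "Global.fire();";
    return ""; // unbound key: no command sent
}
```

The strings on the right are the Global functions exposed by global.u (see part 4).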

Troubleshooting :

  • The camera does not appear : check that the webcam is supported by OpenCV (list). Uninstalling the official driver can help you capture the stream with the generic Windows driver, but you might lose some features of the camera.
  • "can't load nxt.dll" error : see this page (in French) : Installer URBI pour Lego Mindstorm NXT.
  • "Failed to connect to the NXT via default USB connection" : follow the instructions in part 4 of this page (in French) : Installer URBI pour Lego Mindstorm NXT.

4 - Source Code

This pack contains all the elements needed to develop NXT Tower Defense. You'll find the .cpp files, urbiscript files and headers, but also some useful software.

Note : NXT Tower Defense was developed under Windows 7 in a virtual machine, with Visual Studio 2008, Gostai Editor and Gostai Console.

a) Requirements

The sources are provided as a Visual Studio 2008 project and use the Qt framework and the OpenCV library, so you have to install:

  • Visual Studio 2008 (Included in the pack software>vcsetup.exe )
  • Urbi sdk 2.1 (Included in the pack software>urbi-sdk-2.1-windows-x86-vcxx2008.exe).
  • Qt 4.7.1 for Windows & Visual Studio (Qt 4.7.1 for Windows & Visual Studio)
  • OpenCV library 2.0 (OpenCV 2.0)

The project is set up with the default installation directories, which are:

  • <urbi_dir>="c:/Program Files/Gostai Runtime/";
  • <qt_dir>="c:/Qt/4.7.1/";
  • <opencv_dir>="c:/opencv2.0/";

=> Finally, copy the libraries from Developer Pack>Library to <urbi_dir>\bin.

Note : the Visual Studio project is set to compile in Release mode; the 'Debug' configuration is not supported.

b) Components

The system uses the NXT library provided by Gostai; you can find its documentation here:

In addition to those components, the system is composed of 3 UObjects used by Urbi and 3 Qt objects used by the interface, which are detailed below:

1) UCamera

a) C++ part
class UCamera : public UObject {
public:
  UCamera(const std::string&);

  int init(int); // Urbi constructor. Throw error in case of error.
  bool initialize(int id);
  virtual int update();
  int getWidth();
  int getHeight();
  void start();
  void stop();
  void getVal(UVar&); // Called on access. 

  int frame;// Frame counter
  int accessFrame; // last frame. 
  int id_; // camera id. 
  UVar val;
  UVar width;
  UVar height;
  CvCapture* capture;
  IplImage* img; 
  UBinary bin; // Storage for last captured image.
};
This component retrieves images from the camera and shares them with all the other components. It uses the OpenCV library to capture images and stores each image, in IplImage format, in a binary container:
//initialization - init();
bin.type = BINARY_UNKNOWN;
bin.common.size = sizeof(IplImage);
//allocation - getVal();
img = cvQueryFrame(capture);
bin.common.data = img;
val = bin; // Deep-copy of the data
You can retrieve the images through the variable val, which contains an OpenCV image (IplImage):
void ObjectDetection::setImage(UBinary& image) {
    img = (IplImage*) image.common.data; // get the frame from the UBinary
    // ...
}
b) Urbiscript part
UCamera provides one constructor, three variables and two methods in urbiscript:
var camera = UCamera.new("0"); // "0" = default camera

camera.val    //Ubinary
camera.width  //width of image
camera.height //height of image

camera.start(); // start the camera
camera.stop();  // stop the camera
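The frame and accessFrame counters in the UCamera header suggest that images are captured lazily: getVal() grabs a new image only when the cached one is stale. A minimal self-contained sketch of that caching pattern (CameraSketch and the captures counter are illustrative stand-ins, not the real UCamera code):

```cpp
#include <cassert>

// Sketch of the frame/accessFrame caching suggested by the UCamera header:
// update() advances the frame counter, and an image is (re)captured only
// when a reader accesses a frame that has not been captured yet.
struct CameraSketch {
    int frame = 0;        // advanced by update()
    int accessFrame = -1; // frame number of the last captured image
    int captures = 0;     // how many times we actually grabbed an image

    void update() { ++frame; }

    // Called on access (getVal): capture only if the cached image is stale.
    void getVal() {
        if (accessFrame != frame) {
            ++captures;          // stand-in for img = cvQueryFrame(capture);
            accessFrame = frame; // remember which frame is cached
        }
    }
};
```

With this scheme, two accesses during the same frame cost only one capture.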

2) ObjectDetection

a) C++ part
class ObjectDetection : public UObject {
public:
  ObjectDetection(const std::string&);

  int init(const std::string&,const std::string&);
  void getVal(UVar&); // Called on access. 
  void onImageChange(UVar&);
  void setImage(UBinary&);
  void onLoadChange(UVar&);

private :
  CvHaarClassifierCascade* cascade;
  CvMemStorage* storage;
  IplImage* smallimg;
  IplImage* img;

  UVar* imagesource;
  UVar number; // number of visible objects
  UVar visible; // 1 if object is visible, 0 otherwise
  UVar x; // position x of object center
  UVar y; // position y of object center
  UVar val; // share processed image
  UBinary bin; // Storage for last captured image.
};
This component also uses the OpenCV library: it detects an object in an image, draws a rectangle around it and returns the position (X, Y) of the object in the image. The component then shares the modified image so it can be displayed by the interface.
void ObjectDetection::setImage(UBinary& image) {
    img = (IplImage*) image.common.data;
    // detection and drawing; updates x, y and visible
    bin.common.data = img;
    val = bin; // new image with the square drawn
}
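The position (X, Y) the component reports is presumably the center of the detected bounding box. A self-contained sketch of that computation (Rect, Detection and reportPosition are illustrative stand-ins for OpenCV's CvRect and the component's internal logic):

```cpp
#include <cassert>
#include <cstddef>

// Stand-in for OpenCV's CvRect: a detected bounding box.
struct Rect { int x, y, width, height; };

// Mirrors the component's urbiscript variables: visible, x, y.
struct Detection { int visible, x, y; };

// Derive the reported position from the detection results: x/y are the
// center of the first box, visible is 1 when at least one object was found.
Detection reportPosition(const Rect* boxes, int count) {
    Detection d = {0, 0, 0};
    if (count > 0) { // at least one object detected
        d.visible = 1;
        d.x = boxes[0].x + boxes[0].width / 2;  // horizontal center
        d.y = boxes[0].y + boxes[0].height / 2; // vertical center
    }
    return d;
}
```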

b) Urbiscript part
The constructor takes two parameters: an input for the images and the path to the cascade model of the object to be detected.
var tracker = ObjectDetection.new("camera.val","c:/path/to/cascade/cascade.xml");
tracker.val //image with the detected object drawn
tracker.x   //position in x of the object center
tracker.y   //position in y of the object center
tracker.visible //1 if the object is visible
tracker.load    //set to 1 for auto-update

Note about tracking : the project comes with three cascade files. Two are provided by the OpenCV library (haarcascade_frontalface_alt.xml and haarcascade_eye_tree_eyeglasses.xml) and detect faces; the last one, which I created, detects the Picoo-z (haarcascade_picooz.xml), but it can be improved. For the detection of other objects, see part 5 - HaarTraining.

3) DashBoard

a) C++ part
class DashBoard : public QObject, public UObject {
public:
  DashBoard(const std::string&);

   int init();
   void finish();

   void setImage(UBinary&);
   void setImageSrc(const std::string&);
   void setSvrAdress(const std::string&);
   void setYaw(const std::string&,const std::string&);
   void setPitch(const std::string&,const std::string&);
   void setWeapon(const std::string&);
   void setBattery(const std::string&);
   void setTrackText(const std::string&);
   void onLoadChange( UVar&);

   void imageChanged(UVar&);
   void yawSpeedChanged(UVar&);
   void yawValChanged(UVar&);
   void batteryChanged(UVar&);
   void pitchValChanged(UVar&);
   void pitchSpeedChanged(UVar&);
   void weaponValChanged(UVar&);
   void trackTextChanged(UVar&);
   void setSvrConnected();

signals:
  void imageChange(IplImage*);
  void trackChange(QString);
  void speedXChange(int);
  void speedYChange(int);
  void yawValChange(int);
  void pitchValChange(int);
  void weaponValChange(int);
  void batteryChange(int);
  void svrAdressChange(QString);

  void svrConnected();
  void cameraConnected();
  void nxtConnected();
  void trackerConnected();

private :
  UVar* imagesource;
  UVar* yawval;
  UVar* yawspeed;
  UVar* pitchval;
  UVar* pitchspeed;
  UVar* weaponval;
  UVar* battery;
  UVar* trackingtext;
  CvMemStorage* storage;
  myThread t;
};
This component serves as a gateway between Urbi and the interface. It launches the application in a new thread, receives the events issued by the server and forwards them to the interface. Events are received through the UNotifyChange and UNotifyAccess mechanisms provided by UObject and transmitted as Qt signals.
//Listen for track text updates
void DashBoard::setTrackText(const std::string& text) {
	trackingtext = new urbi::UVar(text);
	UNotifyChange(*trackingtext, &DashBoard::trackTextChanged);
	emit trackerConnected();
}

//Called when the track text changes
void DashBoard::trackTextChanged(UVar& text) {
	emit trackChange(QString::fromStdString(text));
}
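The relay pattern above can be sketched without Qt or URBI: below, std::function stands in for the Qt signal/slot connection and notifyChange stands in for the UNotifyChange callback. A minimal illustration of the gateway idea, not the real DashBoard code:

```cpp
#include <cassert>
#include <functional>
#include <string>

// Sketch of the gateway pattern used by DashBoard: a server-side change
// notification is relayed to the GUI as a signal.
struct GatewaySketch {
    // Stand-in for the Qt signal trackChange(QString): the GUI connects here.
    std::function<void(const std::string&)> trackChange;

    // Stand-in for the UNotifyChange callback (trackTextChanged):
    void notifyChange(const std::string& newText) {
        if (trackChange) trackChange(newText); // emit trackChange(...)
    }
};
```

The point of the indirection is that the UObject side never touches widgets directly; it only emits, and Qt delivers the value to the window thread.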
b) Urbiscript part
This component provides one constructor, which launches the QApplication.
var dash = DashBoard.new(); //launch interface

dash.load //set to 1 for auto-update

dash.setImageSrc("camera.val"); // set source of images
dash.setTrackText("tracker.text"); // set text to display in interface
dash.setYaw("yaw.val","yaw.speed"); //set reference to yaw parameters
dash.setPitch("pitch.val","pitch.speed"); //set reference to pitch parameters
dash.setWeapon("weapon.val"); //set reference to weapon value

dash.setSvrAdress(""); //transmit the server's address

4) MainWindow

This object builds the interface using the mechanisms provided by Qt; the QApplication is run in a new thread. The window receives the Qt signals emitted by the DashBoard and controls the system by connecting to the Urbi server through a UClient.
The interface creates a client that connects and then calls a function to confirm that the connection is established:
function Global.confirmConnect(){ dash.setSvrConnected(); };

5) myThread

This class just launches a new thread that contains the application.

6) QIpliImageViewer

This object can display an image in IplImage format in a QLabel.

Note about Qt : Qt uses signals and slots to connect all the elements, and this mechanism requires an intermediate generated header called a moc file. With Visual Studio you have to generate the moc files yourself with the commands below (run in the header directory):

  • <qt_dir>\bin\moc.exe DashBoard.h -o moc_DashBoard.h
  • <qt_dir>\bin\moc.exe MainWindow.h -o moc_MainWindow.h

After that, you have to include the right moc header in your .cpp file, for example in MainWindow.cpp :

  • #include "moc_MainWindow.h"

c) Urbiscript

File global.u:

var Global.camera = 0;
var Global.tracker = 0;
var Global.dash = 0;
var Global.beeper = 0;
var Global.battery = 0;
var Global.yaw = 0;
var Global.pitch = 0;
var Global.weapon = 0;
var Global.light = 0;
var Global.nxt = 0;

var Global.maxSpeedX = 10;
var Global.maxSpeedY = 10;
var Global.tolerance = 40;
var Global.imgCenterX = 0;
var Global.imgCenterY = 0;
var Global.tracking = 0;
var Global.automatic = 0;

// Global functions (called by the interface)
function Global.setMaxSpeedX(s){ maxSpeedX = s; };

function Global.setMaxSpeedY(s){ maxSpeedY = s };

function Global.setSpeedYaw(s){ if(yaw) yaw.speed = s; };

function Global.setSpeedPitch(s){ if(pitch) pitch.speed = s; };

function Global.setSpeedWeapon(s){ if(weapon) weapon.speed = s; };

function Global.start(){ start();};

function Global.stop(){ stop(); };

function Global.right(){ setSpeedYaw((0-maxSpeedX)); };

function Global.left(){ setSpeedYaw(maxSpeedX); };

function Global.up(){ setSpeedPitch(maxSpeedY); };

function Global.down(){ setSpeedPitch((0-maxSpeedY)); };

function Global.fire(){ setSpeedWeapon(100); };

function Global.setAutomatic(active){ Global.automatic = active; };

function Global.setTracking(active){
	Global.tracking = active;
	if(active)
		tracker.text = "Tracking Enabled"
	else
		tracker.text = "Tracking Disabled";
	tracker.load = active;
};

function init(){
	try{
		initNXT();
		initCamera();
		initTracker();
	}catch(var e){
		echo("Error during initialisation : %s" % e.message);
		return 0;
	};

	Global.dash = DashBoard.new();
	sleep(1s); //wait for window initialization

	//start Camera
	startCamera();
	dash.load = 1;
	do(Global.tracker){ //dynamically add the text variable
		var text = "coucou";
	};

	dash.setSvrAdress(""); // System.listenHost:System.listenPort;

	return 1;
};

t:whenever(Global.automatic == 1){
	if(tracker.visible == 1){
		setSpeedYaw(getSpeedX(tracker.x,imgCenterX)) &
		setSpeedPitch(getSpeedY(tracker.y,imgCenterY));
	} else {
		setSpeedYaw(0) &
		setSpeedPitch(0);
	};
};

t2:whenever(Global.tracking == 1){
	if(tracker.visible == 1){
		tracker.text = "X:%s Y:%s Obj:%s" % [tracker.x,tracker.y,tracker.number];
	};
};

function initTracker(){
	try{
		Global.tracker = ObjectDetection.new("camera.val","C:\\uobject\\haarcascades\\haarcascade_frontalface_alt.xml");
	}catch(var e){
		echo("Error during init tracker %s" % e);
		return 0;
	};
	return 1;
};

function initCamera(){
	try{
		Global.camera = UCamera.new("0");
	}catch(var e){
		echo("Error init Camera %s" % e.message);
		return 0;
	};

	Global.imgCenterX = camera.width/2;
	Global.imgCenterY = camera.height/2;
	return 1;
};

function initNXT(){
	try{
		loadModule("nxt"); //load the nxt.dll UObject library
	}catch(var e){
		echo("Error loading nxt library : %s" % e.message);
		return 0;
	};

	try{
		Global.nxt = NXTBrick.new(System.arguments);
	}catch(var e){
		echo("Error init nxt library : %s" % e.message);
		return 0;
	};

	Global.beeper = Beeper.new();
	Global.battery = Battery.new();
	Global.yaw = Servo.new("A");
	Global.pitch = Servo.new("B");
	Global.weapon = Servo.new("C");

	beeper.play(400,500ms) | sleep(500ms);
	beeper.play(600,200ms) | sleep(200ms);
	beeper.play(800,200ms) | sleep(200ms);

	return 1;
};

function Global.confirmConnect(){ dash.setSvrConnected(); };

function Global.startCamera(){
	if(camera) camera.start();
};

function Global.stopCamera(){
	if(camera) camera.stop();
};

function Global.setTracker(activate){
	if(tracker != nil) tracker.load = activate;
};

function Global.getSpeedX(x,center){
	var v = (center-x)/10;
	if(v < (0-maxSpeedX)) v = (0-maxSpeedX);
	if(v > maxSpeedX) v = maxSpeedX;
	return v;
};

function Global.getSpeedY(y,center){
	var v = (center-y)/10;
	if(v < (0-maxSpeedY)) v = (0-maxSpeedY);
	if(v > maxSpeedY) v = maxSpeedY;
	return v;
};
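The two functions above implement the same proportional speed law: the speed grows with the distance between the object and the image center, clamped to [-maxSpeed, maxSpeed]. A C++ transcription (getSpeed is an illustrative merge of getSpeedX/getSpeedY; note the integer division here, where urbiscript may divide exactly):

```cpp
#include <cassert>

// Proportional speed law used by Global.getSpeedX / Global.getSpeedY:
// speed is proportional to the offset from the image center, with the
// same 1/10 gain as the script, clamped to [-maxSpeed, maxSpeed].
int getSpeed(int pos, int center, int maxSpeed) {
    int v = (center - pos) / 10; // proportional term
    if (v < -maxSpeed) v = -maxSpeed;
    if (v > maxSpeed) v = maxSpeed;
    return v;
}
```

For a 640-pixel-wide image (center 320) and maxSpeed 10, an object near the center yields a small correction, while an object at either edge saturates at +/-10.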

function Global.shutdown(){
	// ...
};

//Start the Application
init();

5 - HaarTraining

To create a model of an object to detect, you can use the tools provided by the OpenCV library. During development I used the model provided by default, which detects faces, but I also tried to generate a model for the Picoo-z. This model works but can be improved, and the same method can be used to create other models.
That's why I created a pack with all the elements I used to build a training set, hoping that someone will generate a complete and effective model of the Picoo-z, or of other targets (such as the A.R.Drone).

For more information, read these pages:

6 - Contribute

This project is completely open source, so you can download and modify it without limits. And if you run out of ideas, here are a few possible improvements I have thought of:

  • Generate other cascade models (Picoo-z, AR Drone, etc.) - started but not yet conclusive.
  • Bring the interface to the iPhone - planned but not scheduled.
  • Create a platform at the rear of the robot to serve as a landing site for the Picoo-z. With the 'Bumper' it is possible to detect the landing of the Picoo-z on the platform and disable the robot, because it has lost.
  • Use the ultrasonic sensor to detect the distance to the Picoo-z and trigger automatic fire.
  • Make the project compatible with Gostai Labs Studio.
  • Create a setup for the Ready to Use pack.
  • Use Qt4Urbi when it is available under Windows.

Special thanks

Thank you to Romain Pichard for his help in making the video, and to my girlfriend for finding all the lost balls in the apartment :-).
Finally, thank you to everyone who supported me during this project.