
Flash

Name: FLASH (Flexible LIREC Autonomous Social Helper)
Manufacturer: Wroclaw University of Technology
Type: MDS (Mobile Dexterous Social)
Robot's page: http://www.flash.lirec.ict.pwr.wroc.pl (FLASH's official page)
Project's page: http://www.lirec.eu
Videos: http://www.youtube.com/user/LIRECWRUT


Overview

FLASH (Flexible LIREC Autonomous Social Helper) is the robotic companion of Wroclaw University of Technology (WRUT) and is intended to be a mechanoid robot. It is designed and built within the EU FP7 LIREC project. Its final form and skills are the result of research collaboration between WRUT, which is responsible for the robot's mechanical and electronic design and control systems, and other LIREC partners working on the issues of mind, memory, learning, reasoning, action planning, perception, etc.

All components (the mobile platform, the hands, and the head) are attached to the robot torso, which covers FLASH's main chassis frame. The mobile platform includes the motion and balancing controllers, the navigation module, and the robot's power supply system. The platform base also supports the robot chassis, and the hands and head are fixed to its top part. FLASH is mounted on a balancing mobile platform with a natural compliance, which makes the design safe and acceptable to people (especially children).

The FLASH hands are intended primarily to perform simple gesticulation tasks, but they may also be able to execute basic grasping. The main design objective has been to obtain a smoothly and reliably operating device that is positively perceived by humans.

The head has fundamental significance in robot-human communication. The FLASH head, EMYS, implements competences for generating facial expressions, gaze/head movement, and expressive behaviour. The head can express basic facial expressions, including happiness, sadness, surprise, anger, weariness, and disgust. It is mounted on a movable neck for searching for objects, turning toward a user, and gazing.

The robotic head EMYS (EMotive headY System) is a mechanoid-type robot. Its design was inspired by characters from the cartoon and movie series Teenage Mutant Ninja Turtles. Although EMYS is an acronym, the name also denotes the popular European pond turtle, Emys orbicularis. The head is mounted on top of the balancing platform FLASH, endowed with the arms and hands WANDA, which together form WRUT's flagship robotic companion.

The main movable elements of EMYS are its upper and lower disks, which imitate, respectively, human raising eyebrows and a dropping jaw. Each of them has 1 DOF. The middle disk, hosting a vision camera, is not independently movable. From the viewpoint of facial expressions, the eyes need to be perceived together with the eyelids and eyebrows. In the design of EMYS the eyelids are mounted on the eyeballs and can open and close (1 DOF each). The eyeballs and eyelids can turn around the horizontal axis (1 DOF each), which essentially intensifies the expressed emotions: for instance, to express sorrow or sadness they are turned outward, whereas when expressing anger or frustration they are turned inward. Another remarkable ability, pulling out the eyeballs (1 DOF each), enhances the head's expressiveness when showing surprise.

The LIREC software architecture is based on the concept of a three-layer architecture [E. Gat, On Three-Layer Architectures, Artificial Intelligence and Mobile Robots, 1998], which facilitates software integration. The complete LIREC platform control architecture consists of three subsystems: FAtiMA, CMION and SAMGAR [K. Du Casse, K. L. Koay, W. C. Ho, K. Dautenhahn, Reducing the cost of robotics software: SAMGAR, a generic modular robotic software communication architecture]. FAtiMA is the agent's mind, responsible for decision making, planning and emotions; this module is implemented in Java and its development is led by INESC-ID (Portugal). CMION stands for Competency Management, and its main purpose is the translation of symbolic to sub-symbolic information and vice versa; development of CMION is led by Heriot-Watt University (UK). SAMGAR utilises the YARP framework [Fitzpatrick, Metta and Natale], which supports distributed computation and code reuse by structuring communication between modules (within and between the layers of the architecture). SAMGAR is written in C++ and developed by the University of Hertfordshire (UK).

Most of this software is still under development, so it was decided to use the URBI software to test all of the robot's components. This open-source platform also made it possible to prepare and conduct experiments in a very short time: simple scenarios could be created with the Gostai Studio or Gostai Lab software in a few hours. Many URBI features, such as task parallelism, variable notifications, URBI events, trajectory generators, and the robot tree structure, were used to build a convenient testing platform. URBI now works in the lowest layer of the LIREC control platform.
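As a minimal illustration of these features, the urbiscript sketch below declares an event, runs two smooth trajectories in parallel and then emits the event. The joint names are only placeholders borrowed from the tree structure shown later on this page, not a fixed API.

  // declare an event and react to it whenever it is emitted
  var blinkDone = Event.new;
  at (blinkDone?)
    echo("Blink finished.");

  // run two smooth trajectories in parallel with '&', then emit the event
  {
    robot.body.neck.pitch = 500 smooth:1s
  } & {
    robot.body.neck.head.eye[left].lid = 40 smooth:1s
  };
  blinkDone!;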

Experiments

EMYS emotion recognition

The perception of the robot and human-robot interactions were examined in a series of experiments. One of the first investigated the problem of expressing emotions in robotic companions with limited facial expression capabilities in a way that is correctly perceived by users.

The perception of EMYS when interacting with humans was examined in an experiment involving 50 schoolchildren aged 8-11 years. Scenarios of EMYS behaviour were prepared to encourage the children to play with him. The robot was fully controlled by Gostai Studio, in which a simple scenario with two games was implemented. The experiment was designed to examine both how the robot's emotional expressions affect the interaction and whether the children are able to decode the intended emotions correctly. The robot was programmed to work autonomously and carry out two game scenarios; each participant went through both.

The first scenario, called "making faces", relied on encouraging the children to repeat facial expressions made by EMYS. In the second ("toy") scenario, the robot expressed an emotion and asked the children to show a toy of a colour corresponding to the expression. Four labelled boxes with toys of different colours were available near the subject: the box with green toys corresponded to the joy expression, red to the angry expression, blue to the sad expression, and yellow toys were to be shown when the face did not fit any of the three previous groups. With its vision system, EMYS was able to recognize the colour of the toy and react accordingly, i.e. by praising or scolding. After each session the children watched the videotaped interaction from the first game and were asked which emotions EMYS showed. Thus, the experimental procedure combines an affect description assessment ("making faces") with an affect matching assessment ("toy").

The duration of the interaction experiment with a single child was about 5-10 minutes. All sessions were recorded by two cameras from two different angles. Some of them had to be rejected because the robot did not work properly. After the interaction the participants were interviewed and answered questions on demographic data, on how they perceived EMYS, and on how they liked the interaction.
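The reaction logic of the "toy" game can be sketched in urbiscript roughly as follows. The ShowEmotion, Praise and Scold behaviours and the colorDetector.color slot are hypothetical placeholders standing in for the actual experiment code.

  // hypothetical sketch of one round of the "toy" game
  function toyRound(expectedColour) {
    robot.body.neck.head.ShowEmotion(expectedColour, 2s);  // placeholder behaviour
    waituntil (colorDetector.color != "none");             // wait for a toy to appear
    if (colorDetector.color == expectedColour) {
      robot.body.neck.head.Praise(2s);                     // placeholder behaviour
    } else {
      robot.body.neck.head.Scold(2s);                      // placeholder behaviour
    };
  };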

More on YT: http://youtu.be/XHHgn5C31Ns

FLASH final review demo

The demonstration of FLASH integration is being prepared for the LIREC final review in Wroclaw. In this scenario it is assumed that the agent embodied in FLASH has a number of goals. Primarily, he wants to meet many new people and tell them about himself.

Since the agent initially does not know anything about colours, he needs to learn them. When the agent is confident enough in his knowledge of colours, he wishes to play a "colour guessing game" with people. The demo scenario has been divided into two phases.

In the first phase, when the agent does not yet know many people, he wanders around and tries to meet someone new. When he meets a person, the agent asks them to teach him colours and to tell him what their favourite colour is. The person can teach the agent new colours by presenting an object in their right hand and naming its colour. Having learnt a couple of new colours, the agent gets bored with this activity, leaves the person and resumes wandering around to look for other people.

In the second phase the agent feels very proud that he knows colours well and wants to demonstrate his skills. He therefore starts wandering again, but this time when he meets people he invites them to play colour guessing with him. In this activity the person presents the robot an object in their right hand and asks it to name the colour. The answers provided by the agent are verified by the person, who announces the result. Correct answers increase the agent's happiness and his willingness to continue the game; wrong answers have the opposite effect. The robot in this scenario is controlled by Gostai Studio, which holds a collection of goals including learning new colours and playing the colour guessing game. To achieve these goals the agent can perform actions such as learning a new colour, guessing a colour, expressive behaviour, looking around, looking at a human's head/hand, and moving to a target.
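A similarly hedged urbiscript sketch of one round of the colour guessing game is shown below; colour recognition, speech output and the person's feedback are represented by hypothetical placeholders (objectVisible, say, guessColour, resultAnnounced, answerCorrect), and the happiness update is only illustrative.

  // hypothetical sketch of one round of the colour guessing game
  var happiness = 0.5;
  function guessingRound() {
    waituntil (objectVisible);               // placeholder: person presents an object
    say("I think it is " + guessColour());   // placeholder speech and recognition
    waituntil (resultAnnounced);             // placeholder: person announces the result
    if (answerCorrect) {
      happiness += 0.1;                      // correct answers raise happiness
    } else {
      happiness -= 0.1;                      // wrong answers lower it
    };
  };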

More on YT:

http://youtu.be/Qg6YQL-e5Yg

http://youtu.be/o8kf14SZkaQ

http://youtu.be/P56Im9DlR3I

Example code used in FLASH

Init Dynamixel protocol (for head and arm).

loadModule("Dynamixel");
var Dyn=Dynamixel.new();
Dyn.Open("COM1",57600);
Dyn.FindServos(0,10);
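Once the servos have been found, goal positions can also be sent to the module directly; the servo ID and target value below are arbitrary examples.

  // send servo ID 0 to an example goal position
  Dyn.SetGoalPosition(0, 490);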

Define a servo class and bind servo 0 to the Dynamixel module.


  class Servo {
  };
  // servo 0
  var servo0=Servo.new();
  // expose a "position" UVar on servo0 and forward every change
  // to the Dynamixel module as a goal position for servo ID 0
  UVar.new(servo0, "position");
  servo0.&position.notifyChange( closure() {Dyn.SetGoalPosition(0,servo0.position);});
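With this binding in place, writing the position slot is enough to drive the servo; the target value below is an arbitrary example.

  // writing the UVar triggers the notifyChange hook above,
  // which forwards the value to servo ID 0
  servo0.position = 512;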

Create robot tree structure.


load("share/gostai/urbi/component.u")|;
load("share/gostai/urbi/naming-standard.u")|;

  var robot = Component.new("robot");

  class identity: Interface.Identity
  {
    var robotType = "flash";
    var name = _RobotName;
    var model = "First Generation";
    var serial = "001";
  };
  robot.addProto(identity);

  var robot.body = Component.new("body");

  echo("OK: Robot created.");

Add some joints.

 //=================================
  // 	NECK (pitch)
  //	servo ID 0
  //	range 420-580:490
  //=================================
  robot.body.addComponent("neck");
  servo0.getSlot("position").copy(robot.body.neck,"pitch")|;

  do (robot.body.neck) 
  { 
    pitch->rangemax=580;
    pitch->rangemin=420;
    pitch = 490;
  };
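After the joint has been added to the tree, it can be driven through the standard naming, for example with a trajectory generator; the target value and duration below are arbitrary but stay within the range declared above.

  // move the neck pitch smoothly to 550 over two seconds
  robot.body.neck.pitch = 550 smooth:2s;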

Add some behaviours to the robot.



  // tags used to stop an eyelid motion already in progress
  var a_EyeLeftLid = Tag.new;
  var a_EyeRightLid = Tag.new;

  // close both eyelids in parallel; intensity scales the lid travel,
  // time is the duration of the smooth trajectory
  function m_EyeClose(intensity,time) {
    {
      a_EyeLeftLid.stop| a_EyeLeftLid: {
        robot.body.neck.head.eye[left].lid =  (EyeLeftLid  - intensity*48) smooth:(time);
      };
    } & {
      a_EyeRightLid.stop| a_EyeRightLid: { 
        robot.body.neck.head.eye[right].lid = (EyeRightLid + intensity*48) smooth:(time);
      };
    };
  }|{};

  do (robot.body.neck.head) { 
    function EyeClose(intensity,time) {call.m_EyeClose(intensity,time);};
  };

  echo("OK: EMYS behaviours added.");

FLASH uses two URBI engines: one for smooth movement and the other for the vision system. Vision utilizes the UCamera, UObjectDetector and UColorDetector modules. These and the remaining modules are described in the modules section of this page.

Download

(This code was published for quick reference. Some scripts are not optimized.)




WRUT
