The robotics lab, located in room NB 02/77, is mainly used by the autonomous robotics group. They use it to test how cognitive architectures built in the framework of Dynamic Field Theory behave when interacting with the real world through robots.
Below, we give a short overview of the robotic platforms available in the lab.
Our main platform is CAREN (left), short for Cognitive Autonomous Robot for Embodiment and Neural dynamics. CAREN consists of an arm with seven degrees of freedom built by KUKA, a three-finger Schunk Dextrous Hand attached to the arm, and a set of high-resolution cameras, mounted on a pan-tilt unit, that can be exchanged for a Microsoft Kinect sensor.
CoRA (right), the Cooperative Robotic Assistant, is the now mostly retired predecessor of CAREN. It has a similar setup, featuring an anthropomorphic arm with seven degrees of freedom and a stereo camera head. Additionally, it features haptic and force/torque sensors for direct physical interaction, as well as an additional camera system that detects a human's gaze direction.
NAO is a medium-sized humanoid robot developed by the French company Aldebaran. The robot has 25 degrees of freedom, two HD (1280x720 pixel) color cameras, loudspeakers and microphones, as well as infrared, ultrasound, and pressure sensors. We have used it in study projects, Master's theses, and in ongoing research on the organization of behaviors and autonomous learning.
Our approach has its origins in mobile robotics. Nowadays, we mostly use mobile robots as platforms for teaching our approach to students, visiting school groups, and participants in our summer school.
The main platform is the Bluetooth-controlled e-puck robot. It is 7 cm wide, has a differential wheel drive (one wheel on either side), and is equipped with a camera and infrared sensors to perceive its environment. A similar model, the Khepera II, is sometimes used in projects that make use of its gripper to transport small objects.
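A differential drive steers by driving its two wheels at different speeds: equal speeds move the robot straight, unequal speeds make it turn. The following minimal Python sketch illustrates the standard forward kinematics for such a robot; the wheel speeds, wheel base, and time step are illustrative values, not parameters taken from the e-puck's specification.

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base, dt):
    """Advance a differential-drive pose (x, y, heading) by one time step.

    v_left / v_right are wheel surface speeds, wheel_base is the distance
    between the two wheels, theta is the heading in radians.
    """
    v = (v_right + v_left) / 2.0             # forward speed of the robot center
    omega = (v_right - v_left) / wheel_base  # rotation rate around the center
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds -> straight line; unequal speeds would trace a curve.
pose = (0.0, 0.0, 0.0)
for _ in range(100):  # simulate 1 s in 10 ms steps
    pose = diff_drive_step(*pose, v_left=0.05, v_right=0.05,
                           wheel_base=0.053, dt=0.01)
print(pose)  # the robot has moved about 5 cm along x
```

Setting, e.g., `v_left=0.0` while keeping `v_right=0.05` makes the robot pivot around its left wheel, which is exactly how such robots turn in place.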
For lab classes and some rapid prototyping, we use Matlab. Our main software is cedar. It bundles our knowledge of and approach to autonomous robotics, with an emphasis on cognition, embodiment, and dynamics.
cedar is the result of an ongoing effort to rewrite and integrate a collection of powerful, yet incoherent and fragmented code that had been developed over many years. We put a lot of effort into keeping the different parts of cedar compatible with each other. Linking up the perception and motor control of a robot, and putting cognitive processing in between, should not be constrained by programming issues; it should only be a question of the concepts being applied. cedar provides a tool aimed at facilitating the connection of modules to create robotic architectures. You can focus on "what" to connect, keeping the effort of "how" to connect it as low as possible.
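The idea of declaring "what" to connect while the framework handles "how" can be sketched in a few lines. The following Python toy is purely illustrative and is not cedar's actual API (cedar itself is a C++ framework): each processing step exposes named input slots, an architecture is just a graph of connections between steps, and data flow is resolved automatically.

```python
class Step:
    """A toy processing step with named input slots and one output."""

    def __init__(self, name, compute):
        self.name = name
        self.compute = compute  # function: dict of input values -> output value
        self.inputs = {}        # slot name -> upstream Step

    def connect(self, slot, upstream):
        # Declare *what* feeds this slot; wiring is all the user specifies.
        self.inputs[slot] = upstream

    def output(self):
        # Resolve *how* data flows by pulling values from upstream steps.
        return self.compute({slot: s.output() for slot, s in self.inputs.items()})

# Toy architecture: a fake "camera" feeds a thresholding "field" step,
# which in turn drives a "motor" step with a scaled command.
camera = Step("camera", lambda _: 0.8)                         # fake sensor reading
field = Step("field", lambda ins: max(ins["image"] - 0.5, 0))  # thresholded activation
motor = Step("motor", lambda ins: 2.0 * ins["activation"])     # scaled motor command

field.connect("image", camera)
motor.connect("activation", field)
print(motor.output())
```

Swapping the fake camera for a real sensor step would leave the rest of the graph untouched, which is the point of separating the connection structure from the plumbing.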