
📖 Course program at MFR17

The course is organized over two consecutive days.

Day 1

Welcome

10:30 – 10:45 slides

Basics on YARP middleware

YARP is a middleware for robotics designed to facilitate a number of activities we have to deal with while developing the software that controls our robots. In particular, YARP is very good at managing data transmission in a distributed system, abstracting the software API from the hardware details, and building complex architectures that regulate high-level robot behaviors as a collection of simpler blocks.

Through hands-on sessions, you will learn how to make code snippets run as threads that exchange information with another program, or even with the robot. You will also see how to command a motor of the robot's head in a very simple manner.
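As a warm-up for the pattern the tutorial covers with YARP ports, here is a plain-Python sketch of two threads exchanging messages through a queue, which plays the role a port plays between processes. This is an analogy only, not actual YARP code; the tutorial introduces the real port API.

```python
# Plain-Python sketch of the pattern YARP ports implement: one thread
# produces messages, another consumes them, decoupled by a FIFO queue.
# (Illustrative analogy only; real YARP code uses ports instead.)
import queue
import threading

def writer(port: queue.Queue, n: int) -> None:
    """Send n messages through the 'port', then a sentinel."""
    for i in range(n):
        port.put(f"count {i}")
    port.put(None)  # sentinel: no more data

def reader(port: queue.Queue, received: list) -> None:
    """Collect messages until the sentinel arrives."""
    while True:
        msg = port.get()
        if msg is None:
            break
        received.append(msg)

port = queue.Queue()
received: list = []
t_w = threading.Thread(target=writer, args=(port, 5))
t_r = threading.Thread(target=reader, args=(port, received))
t_r.start(); t_w.start()
t_w.join(); t_r.join()
print(received)  # all five messages, in sending order
```

Just as here the reader never touches the writer's internals, two YARP modules only agree on a port name and a message format, so each side can be developed, restarted, or replaced independently.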

10:45 – 11:30 slides
   tutorial_yarp-basics
11:30 – 13:00 assignment_yarp-find-rgb

Basic PID Control

Controlling the motors in different modalities is necessary to let our robot undertake the correct action in response to the cues it receives from the world. Among the many control tools engineers have at their disposal, the PID controller is undoubtedly one of the simplest and, at the same time, most effective solutions. Further, the PID is so ubiquitous in our lives (it can control nearly everything, up to a given extent 😏) that knowing how it works will certainly turn out to be a skill you will be glad to have.

In this lesson plus hands-on session, you will learn how to control the robotic head with velocity commands, making the iCub gaze at different points in space by moving its articulated neck and eye systems.
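As a rough preview, here is a minimal discrete PID loop in Python driving a toy one-degree-of-freedom joint. The joint model, gains, and control period are illustrative choices for this sketch, not the assignment's actual values.

```python
# Toy discrete PID sketch: the controller outputs a velocity command
# that drives a simulated joint's encoder reading toward a target angle.
# Gains and the ideal-integrator joint model are made up for illustration.

DT = 0.01  # control period [s]

class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error: float) -> float:
        """One control cycle: combine the P, I, and D terms."""
        self.integral += error * DT
        derivative = (error - self.prev_error) / DT
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=4.0, ki=0.5, kd=0.05)  # hand-tuned for this toy model
target = 30.0   # desired joint angle [deg]
angle = 0.0     # current encoder reading [deg]

for _ in range(2000):                      # 20 s of simulated control
    velocity = pid.step(target - angle)    # velocity command [deg/s]
    angle += velocity * DT                 # ideal joint integrates velocity

print(round(angle, 2))  # settles close to the 30-degree target
```

On the real robot, the loop body would read the encoder through the motor interface and send `velocity` as a velocity command instead of integrating it in simulation.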

14:00 – 14:45 slides
14:45 – 16:15 assignment_control-pid

Basic Computer Vision

Now that you know how to cope with action, perception kicks in as the other side of the story. To close the loop with action, robots need to sense the world and extract a large amount of information from their surroundings.

In our setting, this means that you will learn how to analyze the content of the images the robot receives from the two cameras mounted in its eyeballs. You will do so in terms of different cues (e.g. colors, shapes, disparity…) and perform some relevant processing (e.g. segmentation).

Additionally, we will briefly cover image recognition using traditional computer vision techniques. Although they have been largely superseded by deep-learning-based methods, traditional approaches still power many applications. Many of these algorithms are also available in computer vision libraries such as OpenCV and work very well out of the box.
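To give a flavor of such processing, here is a dependency-free Python sketch of color-based blob segmentation: threshold the pixels for "red", then group them into connected blobs with a breadth-first search. On real images you would reach for OpenCV (e.g. `inRange` plus `findContours`); the tiny image and thresholds below are made up for illustration.

```python
# Toy color-blob segmentation (pure Python, no OpenCV):
# 1) threshold the image to a binary "red" mask,
# 2) group mask pixels into 4-connected blobs via breadth-first search,
# 3) pick the largest blob.
from collections import deque

# 6x6 synthetic RGB image: two red blobs on a black background.
R, K = (255, 0, 0), (0, 0, 0)
image = [
    [R, R, K, K, K, K],
    [R, R, K, K, K, K],
    [K, K, K, K, K, K],
    [K, K, K, R, K, K],
    [K, K, K, K, K, K],
    [K, K, K, K, K, K],
]

def is_red(px) -> bool:
    """Crude per-channel threshold (illustrative values)."""
    r, g, b = px
    return r > 128 and g < 64 and b < 64

def find_blobs(img):
    """Return each 4-connected red blob as a list of (row, col) pixels."""
    h, w = len(img), len(img[0])
    seen, blobs = set(), []
    for i in range(h):
        for j in range(w):
            if (i, j) in seen or not is_red(img[i][j]):
                continue
            blob, q = [], deque([(i, j)])
            seen.add((i, j))
            while q:
                y, x = q.popleft()
                blob.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w \
                            and (ny, nx) not in seen and is_red(img[ny][nx]):
                        seen.add((ny, nx))
                        q.append((ny, nx))
            blobs.append(blob)
    return blobs

blobs = find_blobs(image)
largest = max(blobs, key=len)
print(len(blobs), len(largest))  # 2 blobs; the largest has 4 pixels
```

The same threshold-then-label idea, applied to the live camera stream, is the core of the closest-blob assignment; stereo disparity then tells you which blob is nearest.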

16:15 – 17:00 slides
   tutorial_yarp-opencv
   tutorial_find-wally
17:00 – 18:30 assignment_closest-blob

Day 2

👥 Team contest as final assignment on the 🤖

10:30 – 18:30 Non-stop 😉