Maja Pantic leads the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London.
Ioannis Marras is a researcher from the Intelligent Behaviour Understanding Group (iBUG) at Imperial College, London. His research interests include image/video processing, computer vision and pattern recognition. During his work, he has developed multiple computer vision techniques for 2D/3D face tracking and 2D/3D face recognition in the wild.
Jie Shen is a researcher from the Intelligent Behaviour Understanding Group (iBUG) at Imperial College, London. His research interests include software engineering, human-robot interaction, and affect-sensitive human-computer interfaces. During his work, he has developed the HCI^2 Framework (http://ibug.doc.ic.ac.uk/resources/hci2-framework/), which is a software integration framework for human-computer interaction systems, currently being used in multiple research projects.
Jie Shen defended his PhD thesis, titled ‘Realising Affect-Sensitive Multimodal Human-Computer Interface: Hardware and Software Infrastructure’, in May 2014.
Fernando Caballero Benitez and Luis Merino have been working together since 2003. They work on robot localisation and navigation, respectively. A robot first needs to know where it is (localisation) in order to carry out its mission (navigation). Fernando’s expertise is a bit more towards aerial robots, Luis’ towards ground robots.
PhD students who have these two as their supervisors are very lucky. It may be hard work, but it comes with solid training in the science that is the foundation of what they do, and with sincere appreciation for what the PhDs achieve. It’s a pleasure to see this close supervision in action – even if you don’t understand Spanish.
Apart from lots of lecturing and project work, Fernando is taking part in the European Robotics Challenges with the GRVC–CATEC team, which is taking on Challenge 3. These Challenges are set up in stages, something like the levels in a game. Each stage has to be overcome in order to pass on to the next phase and acquire more funding. The first stage is meant to sift out the serious participants. The GRVC–CATEC team hopes to go on to the second phase after October 2014.
Fernando likes a good barbecue and he designed our polos.
One day Fernando will get around to making a serious, English language website and you will then find the link here. Till then we’ll have to make do with his publications page.
Luis and Fernando lead and supervise UPO’s team of young researchers…
Ignacio Pérez Hurtado de Mendoza is a computer science postdoc interested in the application of machine learning techniques to social robotics. In the FROG project he supports the development of software modules related to the navigation of the robot and assists with the deployment of experiments.
Noé Pérez Higueras is in charge of the safe and robust navigation of the FROG robot in (indoor and outdoor) environments with people. Noé is trying to add social capabilities to navigation algorithms so that the robot respects human social conventions and guarantees the comfort of surrounding persons.
His PhD thesis is on “Robot autonomous navigation and interaction in pedestrian environments”.
Noé’s PhD is directly related to his work in the FROG project. He is studying different robot navigation algorithms and trying to extend them with social skills. To do that, he employs machine learning techniques to learn from real people how they navigate around each other in crowded scenarios.
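The core idea behind much of this learning-from-demonstration work is feature matching: nudge the weights of the planner’s cost function until the paths it produces exhibit the same statistics as human paths. The sketch below is only an illustration of that single update step – the feature names and the function are made up for this example, not Noé’s actual algorithm.

```python
def update_cost_weights(weights, demo_features, planner_features, lr=0.1):
    """One feature-matching gradient step, the core of many
    inverse-reinforcement-learning approaches to social navigation
    (illustrative sketch only).

    Each feature vector could hold quantities such as average distance
    to the nearest person, path length, or time spent inside someone's
    personal space. The planner's feature expectations are pushed
    towards those observed in the human demonstrations.
    """
    return [w + lr * (d - p)
            for w, d, p in zip(weights, demo_features, planner_features)]
```

When the planner already reproduces the human statistics, the update leaves the weights unchanged, which is exactly the fixed point one wants.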
In 2014 Noé spent 3 weeks in the Netherlands, mapping the Gallery at the University of Twente for the opening ceremony with the Dutch king.
Noé fitted in well with the UT PhDs and will continue to work with some of them in the TERESA project. When he makes a website we will link to it here.
In the FROG project, Javier Pérez-Lara is working on improving localisation algorithms, based not only on laser readings but also on appearance matching, to recover from convergence to a wrong pose, or even from situations in which the robot is completely lost.
Javier’s thesis topic is robust localisation for mobile robot navigation and interaction in crowded environments, where he tries to take the variability of such environments into account when localising and relocalising mobile robots.
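One common way to combine a particle filter with an appearance-based recovery cue is sensor resetting: when the average observation likelihood collapses, the filter is probably lost, so a fraction of the particles is re-seeded at poses suggested by appearance matching. This is a minimal sketch of that idea, with invented names and thresholds – not UPO’s actual implementation.

```python
import random

def resample_with_recovery(particles, weights, candidates,
                           w_threshold=0.05, inject_frac=0.2):
    """One recovery-aware resampling step (illustrative sketch).

    particles:  list of (x, y, theta) poses
    weights:    observation likelihoods for each particle
    candidates: poses suggested by appearance matching against a
                database of camera images of the environment

    When the average weight drops below w_threshold the filter is
    probably tracking a wrong pose, so a fraction of the particles is
    re-seeded at the appearance-based candidate poses.
    """
    n = len(particles)
    avg_w = sum(weights) / n
    n_inject = int(inject_frac * n) if (avg_w < w_threshold and candidates) else 0
    # Standard importance resampling for the surviving particles.
    kept = random.choices(particles, weights=weights, k=n - n_inject)
    injected = [random.choice(candidates) for _ in range(n_inject)]
    return kept + injected
```

During normal tracking the injection branch never fires and this reduces to plain importance resampling; only a lost filter pays the cost of consulting the appearance matcher.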
Rafael Ramón Vigo’s knowledge of transversal competences like electronics, software and hardware is put to good use in the FROG project. Rafael helps with the general set-up of the robotic platform and assists with the deployment of experiments.
Rafael’s PhD thesis work is on inferring, from data and its statistics, insights about how humans navigate, with the idea of transferring them to the robot’s navigation stack. The basis of this approach is to use machine learning algorithms.
Rafael also grows delicious mangoes. Recently his family planted nearly 2000 young trees that will come into production in 4 or 5 years time.
In 2013, Noé, Javier and Rafael had to spend weeks at a time in Lisbon. Though often cold or very tired, they did discover some good places to eat and were given Wi-Fi access at most of them.
How about this for an emblem? It looks great on the FROG polos.
The emblem printed on the polos is actually a reversed version of the docking target. It is a set of ArUco markers that just happen to spell out FROG – though it probably took Fernando Caballero quite some time to find them. UPO uses these ArUco markers to align the FROG with its charger during the docking procedure.
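Once the marker detector reports where the target sits relative to the robot, docking reduces to a simple servoing loop: drive towards the marker while turning to face it squarely. The sketch below assumes the detector already provides the marker’s pose in the robot frame; the function name, gains and stopping distance are illustrative, not UPO’s actual values.

```python
import math

def docking_command(marker_x, marker_y, marker_yaw,
                    k_lin=0.3, k_ang=1.2, stop_dist=0.05):
    """Compute a (v, w) velocity command to align with a docking marker
    (illustrative sketch).

    marker_x, marker_y: marker position in the robot frame, in metres
                        (x forward, y left), e.g. from ArUco detection.
    marker_yaw:         marker orientation relative to the robot
                        heading, in radians.
    """
    dist = math.hypot(marker_x, marker_y)
    if dist < stop_dist:
        return 0.0, 0.0          # close enough: stop and start charging
    bearing = math.atan2(marker_y, marker_x)
    v = k_lin * dist                         # drive towards the marker
    w = k_ang * bearing + 0.5 * marker_yaw   # turn to face it squarely
    return v, w
```

Because the markers are detected anew in every camera frame, the loop self-corrects for wheel slip during the final approach to the charger.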
Yesterday, while the FROG was powered-down for hardware enhancements, Randy Klaassen and Jan Kolkmeier from the University of Twente added some features to their FROG Wizard-of-Oz web interface.
They designed and implemented this web interface not only to control the output of specific AR content but also to trigger chosen steps of the State Machine.
This interface is not used for the FROG tour when the robot is running autonomously, but it is a splendid tool for experiments and testing, as it can trigger specific states or abilities. It can also keep the FROG in action in case of really bad weather, since it can be used to run just the indoor parts of the robot’s mission.
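Conceptually, each button in the Wizard-of-Oz interface just fires an event at the tour’s state machine, which accepts it only if it is valid in the current state. A toy version of that pattern looks like this – the state and event names are invented for illustration, not the real tour states.

```python
class WizardStateMachine:
    """Toy stand-in for the tour state machine driven by the
    Wizard-of-Oz web interface (illustrative states, not FROG's)."""

    # (current state, event) -> next state
    TRANSITIONS = {
        ("idle", "start_tour"): "touring",
        ("touring", "show_ar"): "presenting",
        ("presenting", "done"): "touring",
        ("touring", "go_dock"): "docking",
    }

    def __init__(self):
        self.state = "idle"

    def trigger(self, event):
        """Fire an event, as a button press in the web UI would."""
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is None:
            return False          # invalid in the current state: ignore
        self.state = nxt
        return True
```

Keeping the transition table explicit is what makes the interface safe for testing: a wizard can only ever push the robot along transitions the autonomous tour could also take.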
Another thing that has changed since our last visit is the tapestry with the map of Europe. The AR overlay was achieved by defining markers (the positions of a number of specific features) on a photo of the original to build a model. These markers then had to be recognised by the FROG, using incoming information from the antenna’s camera, so that they could be used to aim the overlay. The same set of markers was defined in the overlay and had to match up with the features in the tapestry (the target) so that the overlay would be positioned correctly in the final projection. All of this is necessary because, due to its autonomous social navigation, the FROG may stop at a different location in the hall on each tour.
Too many ripples in the tapestry can mean that the feature markers cannot be recognised. There are, of course, several ways to compensate for differences in the folds of the tapestry between tours, but these would take some time to implement, and as we had more pressing matters this session, a quick solution was chosen: using the whole tapestry area as the target instead of specific points on it. Not as elegant as it could be, but certainly a neat solution for a proof of concept.
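Aiming the overlay from matched features boils down to estimating a homography: the 3×3 transform that maps points on the reference photo to where those same features appear in the live camera image. The sketch below shows the textbook calculation from four matched points; in practice a library routine with outlier rejection (e.g. OpenCV’s findHomography with RANSAC) would be used, and the function names here are ours, not the project’s code.

```python
def _solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping four src points to four dst
    points (h33 fixed to 1). Illustrative sketch, not the FROG code."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = _solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def project(H, x, y):
    """Map one point on the reference photo into the camera image."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

The same homography that warps the recognised features also warps every pixel of the overlay, which is why the robot can project correctly from a different standing position on each tour.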
A collaborative project under the FP7-ICT-2011.2.1 Cognitive Systems and Robotics (a), (d) area of activity.