Maja Pantic leads the Intelligent Behaviour Understanding Group (iBUG) at Imperial College London.
Ioannis Marras is a researcher from the Intelligent Behaviour Understanding Group (iBUG) at Imperial College, London. His research interests include image/video processing, computer vision and pattern recognition. During his work, he has developed multiple computer vision techniques for 2D/3D face tracking and 2D/3D face recognition in the wild.
Jie Shen is a researcher from the Intelligent Behaviour Understanding Group (iBUG) at Imperial College, London. His research interests include software engineering, human-robot interaction, and affect-sensitive human-computer interfaces. During his work, he has developed the HCI^2 Framework (http://ibug.doc.ic.ac.uk/resources/hci2-framework/), which is a software integration framework for human-computer interaction systems, currently being used in multiple research projects.
Jie Shen defended his PhD thesis, titled ‘Realising Affect-Sensitive Multimodal Human-Computer Interface: Hardware and Software Infrastructure’, in May 2014.
Luis and Fernando lead and supervise UPO’s team of young researchers.
Ignacio Pérez Hurtado de Mendoza is a computer science postdoc interested in the application of machine learning techniques to social robotics. In the FROG project, he supports the development of software modules related to the navigation of the robot and assists with the deployment of experiments.
Noé Pérez Higueras is in charge of the safe and robust navigation of the FROG robot in (indoor and outdoor) environments with people. Noé is trying to add social capabilities to navigation algorithms so that the robot respects human social conventions and guarantees the comfort of surrounding persons.
His PhD thesis is on “Robot autonomous navigation and interaction in pedestrian environments”.
Noé’s PhD is directly related to his work in the FROG project. Mainly, he is studying the different robot navigation algorithms and trying to extend them by adding social skills. To do that he employs machine learning techniques in order to learn from real people how they navigate between each other in crowded scenarios.
In 2014 Noé spent 3 weeks in the Netherlands, mapping the Gallery at the University of Twente for the opening ceremony with the Dutch king.
Noé fitted in well with the UT PhDs and will continue to work with some of them in the TERESA project. When he makes a website we will link to it here.
In the FROG project, Javier Pérez-Lara is working on improving localisation algorithms based not only on laser readings but also on appearance matching, so that the robot can recover from erroneous convergence to a wrong pose, or even from being completely lost.
Javier’s thesis topic is robust localisation for mobile robot navigation and interaction in crowded environments, taking into account the variability of such environments when localising and relocalising mobile robots.
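The recovery behaviour described above can be sketched in a few lines. This is only an illustration of the general idea (a geometric-mean fusion rule, a fixed "lost" threshold, and uniform re-seeding are all our assumptions here, not FROG's actual implementation):

```python
import random

def combine_weights(laser_w, appearance_w, alpha=0.5):
    # Fuse per-particle laser and appearance likelihoods
    # (illustrative geometric-mean rule; alpha balances the two cues)
    return [lw ** alpha * aw ** (1 - alpha) for lw, aw in zip(laser_w, appearance_w)]

def is_lost(weights, threshold=0.1):
    # Declare the robot lost when even the best particle explains the data poorly
    return max(weights) < threshold

def relocalise(num_particles, x_range, y_range, rng=random):
    # Re-seed particles uniformly over the map to recover from a wrong convergence
    return [(rng.uniform(*x_range), rng.uniform(*y_range))
            for _ in range(num_particles)]

# Example: all particles score badly, so we trigger a global re-initialisation
weights = combine_weights([0.02, 0.03, 0.01], [0.05, 0.04, 0.02])
if is_lost(weights):
    particles = relocalise(500, (0.0, 20.0), (0.0, 10.0))
```

Adding the appearance cue matters because laser scans alone can be ambiguous in symmetric corridors, while visual appearance usually is not.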
Rafael Ramón Vigo’s knowledge of transversal competences like electronics, software and hardware are all put to good use in the FROG project. Rafael provides help with the general set up of the robotic platform and assists with the deployment of experiments.
Rafael’s PhD thesis investigates how to infer, from data and its statistics, insights into human navigation behaviour, with the idea of transferring them to the robot’s navigation stack. The approach is based on machine learning algorithms.
Rafael also grows delicious mangoes. Recently his family planted nearly 2000 young trees that will come into production in 4 or 5 years time.
In 2013, Noé, Javier and Rafael had to spend weeks at a time in Lisbon. Though often cold or very tired, they did discover some good places to eat and were given Wi-Fi access at most of them.
Yesterday, while the FROG was powered-down for hardware enhancements, Randy Klaassen and Jan Kolkmeier from the University of Twente added some features to their FROG Wizard-of-Oz web interface.
They designed and implemented this web interface to control not only the output of specific AR content but also to trigger chosen steps from the State Machine.
This interface is not used for the FROG tour when the robot is running autonomously, but it is a splendid tool for experiments or testing as it can trigger specific states or abilities. It can also be used to keep the FROG in action in the case of really bad weather, as it can run just the indoor parts of the robot’s mission.
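To give a flavour of what "triggering chosen steps from the State Machine" means, here is a minimal sketch. The state names and transition table are purely illustrative assumptions, not the actual FROG tour logic:

```python
class TourStateMachine:
    # Minimal sketch of a tour state machine whose transitions can be
    # triggered externally, e.g. by a Wizard-of-Oz web interface.
    # State names below are hypothetical, not FROG's real states.
    TRANSITIONS = {
        "idle": {"start_tour", "show_ar"},
        "start_tour": {"navigate", "idle"},
        "navigate": {"show_ar", "idle"},
        "show_ar": {"navigate", "idle"},
    }

    def __init__(self):
        self.state = "idle"

    def trigger(self, target):
        # Only move to the requested state if the transition is allowed;
        # return whether the trigger was accepted.
        if target in self.TRANSITIONS[self.state]:
            self.state = target
            return True
        return False

sm = TourStateMachine()
sm.trigger("start_tour")   # wizard presses a button on the web interface
sm.trigger("navigate")
```

In a testing session, each button on the web interface would simply call `trigger()` with the desired state, letting the wizard step the robot through exactly the parts of the mission under study.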
This is mini-FROG – it is actually only 28 cm tall and has been around for some time – but in secret.
mini-FROG was made for an online experiment. It is a simplified model of the FROG robot executed in cardboard, coloured paper and hobby foam – oh, and a drinking straw. It has a glossy see-through film so that different visuals can be put onto its ‘screen’.
mini-FROG’s art gallery was a table top with postcard-sized masterpieces hung on the wall behind it. Stop-motion films lasting about a minute and a half were made using this setup. The films were put online and more than 200 people from all over the world answered questions about them to help our PhD candidate with her research.
The UPO research team has made a mockup of the space where the FROG will go to recharge. UPO is finalizing their implementation of the docking sequence.
After testing, the docking station will be placed in the shop near the entrance of the Royal Alcazar. This will be the FROG’s base for all of next week. When it is running low on power, the robot will return to the shop, align itself to the docking station and drive on to recharge its batteries.
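The docking behaviour described above boils down to a short, fixed sequence of steps. A rough sketch (the battery threshold and step names are our assumptions for illustration):

```python
def docking_steps(battery_level, low_threshold=0.2):
    # Return the sequence of docking steps when the battery runs low;
    # otherwise the robot carries on with its mission.
    # Threshold and step names are illustrative assumptions.
    if battery_level >= low_threshold:
        return []
    return ["navigate_to_shop", "align_with_dock", "drive_onto_dock", "charge"]

# At 15% battery the robot heads back to the shop to recharge
print(docking_steps(0.15))
```

The key design point is that alignment happens before driving on: the robot must face the docking station precisely so that its charging contacts meet the station’s.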
This lab has a view of the Tennis Courtyard of the Royal Alcazar and the tower of the cathedral.
Next week the FROG robot will be back in Seville to test the new State Machine, autonomous missions and the docking sequence for recharging the batteries between missions. There will be some playful content for the demo runs and the researchers will be collecting more material for expanding this to a full tour for the Final Event, to be held in September 2014.
A collaborative project under the FP7-ICT-2011.2.1 Cognitive Systems and Robotics (a), (d) area of activity.