Intended audience: Newcomers to the field of deep reinforcement learning (DRL) who want to jump-start testing different robot environments and DRL algorithms on robots through ROS.
Organizer: Juan Rojas, Ph. D., Associate Professor, Guangdong University of Technology, China
Biography of organizer: Dr. Juan Rojas is a "100 Young Talents" Associate Professor at the Guangdong University of Technology in Guangzhou, China, where he works at the Biomimetics and Intelligent Robotics Lab (BIRL). Dr. Rojas currently researches robot introspection, human intention prediction, high-level state estimation, and skill acquisition for manipulation tasks. From 2015-2018, Dr. Rojas was an Associate Research Professor at Guangdong University of Technology. From 2012-2015, Dr. Rojas was an Assistant Professor at the Sun Yat-Sen University School of Software, where he started the Advanced Robotics Lab. From 2011-2012, Dr. Rojas was a post-doctoral researcher in the Task and Vision Manipulation Group at Japan's National Institute of Advanced Industrial Science and Technology (AIST), where he researched snap assembly automation and probabilistic error recovery methods. From 2009-2011, he served as a visiting scholar at Sun Yat-Sen University in China. Dr. Rojas received a B.S., M.S., and Ph.D. in Electrical and Computer Engineering from Vanderbilt University in 2002, 2004, and 2009, respectively. He was named an IEEE Senior Member in 2018.
List of topics: Reinforcement Learning, Deep Reinforcement Learning, OpenAI-Gym, Mujoco, ROS, Hindsight Experience Replay
The session will first highlight key advancements in the field of Deep Reinforcement Learning and then provide tutorial-style guidance to help participants master techniques and environments that let them program a variety of deep reinforcement learning algorithms through the OpenAI Gym toolkit as well as the MuJoCo simulator.
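The core pattern participants will practice is the standard agent-environment interaction loop that OpenAI Gym formalizes through its `reset`/`step` interface. The sketch below illustrates that loop with a toy, dependency-free environment and a random policy; the environment (`ToyLineWorld`) and all names in it are illustrative stand-ins, not material from the tutorial, and a real session would instead call `gym.make(...)` on a MuJoCo-backed task.

```python
import random

class ToyLineWorld:
    """Minimal stand-in environment mirroring the Gym reset/step API.
    The agent moves left (action 0) or right (action 1) on a line and
    earns reward 1.0 for reaching position +5. Purely illustrative."""

    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        self.pos += 1 if action == 1 else -1
        done = abs(self.pos) >= 5
        reward = 1.0 if self.pos >= 5 else 0.0
        return self.pos, reward, done, {}  # obs, reward, done, info

def run_episode(env, policy, max_steps=100):
    """The canonical DRL interaction loop: observe, act, repeat."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(obs)
        obs, reward, done, _ = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

# A deterministic "always move right" policy reaches +5 in five steps.
reward = run_episode(ToyLineWorld(), policy=lambda obs: 1)
print(reward)  # 1.0
```

Any DRL algorithm covered in the session (DQN, DDPG, HER-augmented variants, etc.) plugs into this same loop by replacing the hand-written policy with a learned one and adding an update step after each transition.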
A website will be provided where all training material, including slides, source code, demonstrations, and demonstration videos, will be readily available.
Intended audience: The target audience is researchers, engineers, and computer scientists working in the areas of embedded real-time computer vision, machine learning, and image and video processing and analysis who would like to enter the new and exciting field of drone visual information analysis and processing for surveillance applications.
Organizer & Speaker: Ioannis Pitas, Ph. D., Professor, IEEE Fellow, Aristotle University of Thessaloniki, Greece
Biography of organizer: Prof. Ioannis Pitas is an IEEE Fellow, IEEE Distinguished Lecturer, and EURASIP Fellow. He received his Diploma and PhD degree in Electrical Engineering, both from the Aristotle University of Thessaloniki, Greece. Since 1994, he has been a Professor at the Department of Informatics of the same University. He has served as a Visiting Professor at several universities. His current interests are in the areas of image/video processing, machine learning, computer vision, intelligent digital media, human-centered interfaces, affective computing, 3D imaging, and biomedical imaging. He has published over 1090 papers, contributed to 50 books in his areas of interest, and edited or (co-)authored another 11 books. He has also been a member of the program committee of many scientific conferences and workshops. In the past, he served as Associate Editor or co-Editor of 9 international journals and General or Technical Chair of 4 international conferences. He participated in 69 R&D projects, primarily funded by the European Union, and is/was principal investigator/researcher in 41 such projects. He has 28700+ citations to his work and an h-index of 81+ (Google Scholar).
List of topics: Introduction to multiple drone imaging, Embedded deep learning for target detection, Real-time visual 2D target tracking
Tutorial outline: The aim of drone cinematography is to develop innovative intelligent single- and multiple-drone platforms for media production. Such systems should be able to cover outdoor events (e.g., sports) that are typically distributed over large expanses, ranging, for example, from a stadium to an entire city. Real-time computer vision plays a pivotal role in both drone cinematographic shooting and drone safety. The drone or drone team, to be managed by the production director and his/her production crew, must have: a) increased multiple-drone decisional autonomy, allowing event coverage over a time span of around one hour in an outdoor environment, and b) improved multiple-drone robustness and safety mechanisms (e.g., communication robustness/safety, embedded flight regulation compliance, enhanced crowd avoidance, and emergency landing mechanisms), enabling it to carry out its mission despite errors or crew inaction and to handle emergencies. Such robustness is particularly important, as the drones will operate close to crowds and/or may face environmental hazards (e.g., wind). Therefore, the system must be contextually aware and adaptive, maximizing shooting creativity and productivity while minimizing production costs. Real-time drone vision and machine learning play a very important role towards this end, covering the following topics: a) drone localization, b) drone visual analysis for target/obstacle/crowd/point-of-interest detection, and c) 2D/3D target tracking. Most vision tasks must be embedded on-drone on multicore GPU/CPU processors. The tutorial will offer an overview of all of the above plus other related topics, stressing the algorithmic aspects, such as: a) drone imaging, b) target detection, and c) target tracking.
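As a flavor of the 2D target-tracking topic above, a common lightweight baseline for on-drone tracking is to smooth per-frame detector outputs (e.g., bounding-box centers) with a constant-velocity filter. The sketch below implements an alpha-beta filter, a simplified relative of the Kalman filter; it is an illustrative example, not a method taken from the tutorial, and the gain values are arbitrary, untuned assumptions.

```python
class AlphaBetaTracker2D:
    """Alpha-beta filter for 2D target tracking: predicts the target
    position from a constant-velocity model, then corrects it with
    each new detection. Gains alpha/beta are illustrative, not tuned."""

    def __init__(self, x, y, alpha=0.85, beta=0.005, dt=1.0):
        self.x, self.y = x, y          # estimated position
        self.vx, self.vy = 0.0, 0.0    # estimated velocity
        self.alpha, self.beta, self.dt = alpha, beta, dt

    def update(self, mx, my):
        # Predict position one frame ahead with the velocity model.
        px = self.x + self.vx * self.dt
        py = self.y + self.vy * self.dt
        # Residual between the detector measurement and the prediction.
        rx, ry = mx - px, my - py
        # Correct position and velocity estimates toward the measurement.
        self.x = px + self.alpha * rx
        self.y = py + self.alpha * ry
        self.vx += (self.beta / self.dt) * rx
        self.vy += (self.beta / self.dt) * ry
        return self.x, self.y

# Track a synthetic target moving at a constant (1, 2) pixels per frame.
tracker = AlphaBetaTracker2D(0.0, 0.0)
estimate = None
for t in range(1, 21):
    estimate = tracker.update(float(t), 2.0 * t)
print(estimate)  # converges close to the true position (20, 40)
```

In a full pipeline of the kind the tutorial outlines, the measurements `(mx, my)` would come from an embedded deep detector running on the drone's GPU, and the filter's prediction step would also bridge frames where the detector misses the target.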