Videos
The submission period for the TMI video competition has ended.
The videos are no longer available on the Web.
The following videos were submitted to the competition.
If you have any inquiries about this competition or these videos,
please contact the organizers via the Google form on the top page of this competition.

Predicting Failures in Autonomous Driving
Autonomous driving is inching closer and closer to becoming a part of everyone’s daily life. Despite impressive progress, tragic headlines such as the fatal Uber incident from 2018 or Tesla Autopilot accidents continue to raise concerns about the safety of self-driving cars. In this video, we give an introduction to a recent field of research that addresses this critical issue: failure prediction in autonomous driving. First, we explain the sources of accidents caused by automated cars. The issues of novel environments, compromised sensor readings (e.g., due to sun flare or dirt on the sensor), and model uncertainty (i.e., the car sees the environment clearly but does not know what to do) are explained in high-level terms. Next, we introduce the recent and active field of failure prediction. We give an overview of the main ideas from recent research used to predict failures in advance. We show one concrete example where machine learning has recently been used to learn from previous failures of an automated car, making it possible to predict new failures up to seven seconds in advance. Finally, we summarize the main ideas of the video. In conclusion, we intend to give the viewer a better understanding of news articles about crashes caused by autonomous vehicles, along with a familiarity with current developments that will help prevent such tragic events. On top of that, the video offers first insights into state-of-the-art research for those who wish to read further into this field.
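The core idea behind learning to predict failures can be sketched in a few lines: label past driving windows by whether a failure followed within some horizon, then train a classifier on hand-crafted features. The features, toy data, and simple perceptron below are purely illustrative assumptions, not the actual method used in the research shown in the video.

```python
# Hypothetical sketch: predict an upcoming failure from two hand-crafted
# features (sensor degradation score, model uncertainty) with a perceptron.
# All feature names and data points here are invented for illustration.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred  # perceptron update: move weights toward the label
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def predict_failure(w, b, x):
    # 1 means "a failure is likely within the prediction horizon"
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy training windows: (sensor degradation, model uncertainty) -> failure soon?
X = [(0.1, 0.1), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
y = [0, 0, 1, 1]
w, b = train_perceptron(X, y)
```

In a real system, the features would come from the perception stack and the labels from logged disengagements or interventions, and the classifier would be far richer; the structure of the problem, however, is the same.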

Watch Out! Your Car is Watching – Part 1: Classifiers for Driver Monitoring Systems
Driver inattention is closely related to a large number of fatal and injury crashes every year. To address this problem, several studies have examined driver monitoring systems, or DMS. A DMS is an advanced safety feature that uses a camera or sensors to check the driver's alertness; if needed, the car will warn the driver or, in the near future, take control of the situation. This video explains some of the basic types of driver monitoring systems: gaze-zone classifiers, drowsiness classifiers, driver-action classifiers, classifiers for the driver's physical and mental condition, and passenger-monitoring classifiers. However, much of the current research on these classifiers lacks robustness in challenging situations that recur in driving scenarios. To guide the listener on which situations may be challenging for engineers creating these classifiers, we mention several difficult conditions: occlusions (masks, sunglasses, hands on the face, etc.), environmental conditions (strong light variations, smoke, dust, etc.), the driver's physical and mental state, and passenger intrusions. Then, we give some very general suggestions for building a more robust driver monitoring system, among them: using robust face and landmark detectors, choosing the camera type and position carefully, equalizing luminance, ensuring data quality, considering the spatio-temporal context, and researching physiological responses and behaviors. These will hopefully be useful for anyone who wants to start digging into these kinds of systems.
Watch Out! Your Car is Watching – Part 2: Types of Cameras & Privacy Conflicts
Driver monitoring systems, or DMS, are crucial for detecting driver behaviors that can cause fatal and injury crashes. In this video, we explain three types of cameras – RGB, near-infrared, and thermal – that can be used in these systems. First, we explain that the key feature of RGB cameras is color. Color not only gives us information about the driver's physical state and the environment, but is also one of the most critical features that robust face and landmark detectors rely on under very challenging driving situations. Then, we talk about near-infrared cameras. They can see through very dark environments or sunglasses. However, they can make the driver's face too bright or cause strong reflections on eyeglasses. Thirdly, we talk about thermal cameras. They can detect the heat of the environment and see through various materials and particles. However, it is hard to detect faces using this kind of camera because the person's pupils are barely visible, and eyeglasses cannot be seen through at all. Finally, we discuss some privacy problems that can arise as a consequence of these emerging technologies. Different companies and countries must therefore reach a common agreement on how to deal with them.
Role of Deep Learning and Machine Learning In Mobility Innovations
Deep learning and machine learning techniques are grabbing researchers' attention because they can imitate the human way of doing a task. These techniques have shown outstanding results, sometimes even surpassing human performance. Mobility innovations are innovations that ease human transportation or movement through the launch of a variety of vehicles. In this video, we explain how the mode of mobility is shifting from manual to autonomous, why ML/DL is needed for this shift, the basics of deep learning and machine learning techniques, which mobility problems can be solved by ML/DL, and how ML/DL solves these problems. Then we discuss the challenges and future scope. Following the reviewer's feedback, we kept the video simple, with little technical detail and only a brief introduction to ML/DL. Most of the video explains the problem and solution using animation to make it more engaging and understandable.
Reliability in Artificial Intelligence for Automated Vehicles
Guaranteeing safety is essential for deploying intelligent vehicles (IVs) in our society. Since IVs are composed of many artificial-intelligence modules, guaranteeing the safety of those modules is key to realizing a future society in which IVs operate. To guarantee safety, we have to know whether the artificial-intelligence modules work reliably in real time. The purpose of my research is to establish a method for knowing the reliability of these intelligent modules in real time. In particular, I focus on localization, that is, estimating a vehicle's pose in real time, and have developed a novel localization approach that simultaneously estimates the reliability of the localization result. This method enables IVs to know whether their own estimates are correct or not, and can therefore contribute to guaranteeing localization safety. In this video, I first explain the fundamentals of localization. Then, I show how IVs can know the reliability of their estimates.
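One intuition behind scoring the reliability of a pose estimate is that, if the estimate is correct, the range measurements predicted from the map at that pose should agree with the actual sensor measurements. The toy sketch below illustrates only this intuition with invented numbers; it is not the method developed in the research described above.

```python
# Illustrative sketch: score localization reliability as the fraction of
# range beams whose measured value matches the value predicted from the map
# at the estimated pose. All ranges below are invented toy data.

def reliability(predicted_ranges, measured_ranges, tol=0.2):
    """Fraction of beams that agree with the map prediction within tol."""
    matches = sum(
        1 for p, m in zip(predicted_ranges, measured_ranges)
        if abs(p - m) <= tol
    )
    return matches / len(predicted_ranges)

# Correct pose: map predictions agree with measurements -> high reliability
good = reliability([1.0, 2.0, 3.0, 4.0], [1.1, 2.0, 2.9, 4.1])
# Wrong pose: predictions disagree -> low reliability
bad = reliability([1.0, 2.0, 3.0, 4.0], [2.5, 0.4, 4.8, 1.2])
```

A downstream safety monitor could then treat a low score as a signal that the pose estimate should not be trusted.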
Overview of IV Control Strategies: How intelligent does it get?
In this video, we give a broad view of the main components of an intelligent vehicle's control system, including perception, planning, and control. While going through these components, we focus on the intelligent aspects of the algorithms employed to ensure optimal trajectory generation as well as safety and comfort. Although each part is built around relatively intelligent concepts, we dwell on the question of the intelligence of an IV as a whole. Then, we explore some control schemes while still trying to answer the main question of this video. Among other control schemes, we emphasize model predictive control, imitation-learning-based control, and reinforcement-learning-based control to definitively tackle the main question of this video.
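To make model predictive control concrete, here is a minimal, purely illustrative sketch: a 1-D vehicle model, a short prediction horizon searched by brute force over a small discrete action set, and a receding-horizon loop that applies only the first action of the best sequence. Real MPC formulations use continuous optimization and much richer vehicle models; all numbers here are toy assumptions.

```python
# Toy receding-horizon MPC for a 1-D vehicle (position, velocity):
# pick the acceleration sequence minimizing predicted cost over a short
# horizon, apply only its first action, then re-plan from the new state.
from itertools import product

DT = 0.5
ACTIONS = (-1.0, 0.0, 1.0)  # candidate accelerations

def step(state, a):
    pos, vel = state
    return (pos + vel * DT, vel + a * DT)

def cost(state, goal):
    pos, vel = state
    # penalize distance to the goal and residual speed
    return (pos - goal) ** 2 + 0.1 * vel ** 2

def mpc_action(state, goal, horizon=3):
    best_seq, best_cost = None, float("inf")
    for seq in product(ACTIONS, repeat=horizon):  # brute-force search
        s, c = state, 0.0
        for a in seq:
            s = step(s, a)
            c += cost(s, goal)
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq[0]  # receding horizon: apply only the first action

state = (0.0, 0.0)
for _ in range(10):
    state = step(state, mpc_action(state, goal=2.0))
```

The brute-force search over `ACTIONS ** horizon` is only tractable for this tiny example; practical controllers solve the same minimization with quadratic programming or other solvers.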
Cooperation between Human and Autonomous Mobile Robots: Technology and our Mission for Future Industry
What are autonomous mobile robots (AMRs)? An autonomous mobile robot is any robot that can move through its environment without a human operator. AMRs are used in many settings, such as delivering food and transporting items in factories and warehouses all over the world. Many people order food online, and one day this kind of robot might knock on your door to deliver your meals. If you purchase something on Amazon, the product will be shipped from a warehouse, and in the warehouse there are also AMRs that help workers work efficiently. The main functions of an AMR include localization to estimate its current position, path planning to find a path from the current position to the target position, and obstacle avoidance to perform evasion maneuvers. One method of localization uses a pre-made 2D/3D map to determine the position and orientation of a robot. In our video, we describe 2D and 3D maps made with the method called SLAM (Simultaneous Localization and Mapping), alongside a brief introduction to it. Path planning is the computational problem of generating paths from point A to point B. We describe a graph-based path planning algorithm using a simple map with a grid on it. Obstacle avoidance is also a computational problem, mainly that of avoiding nearby obstacles; AMRs can face hundreds of situations that require evasion maneuvers, and we illustrate a simple example in the video. Our main research focus is human-robot collaboration, or collaborative robots working with humans. Collaborative mobile robots operate autonomously and work alongside humans in a shared workspace. We evaluate our proposed method for operating multiple collaborative mobile robots in simulation and in a real warehouse. Moreover, we plan to do experiments using infrastructural sensors, Microsoft HoloLens 2, etc., to capture environmental changes and improve interactivity between AMRs and human workers.
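Graph-based path planning on a grid map, as mentioned above, can be illustrated with a minimal breadth-first search over a toy occupancy grid (real AMRs typically use Dijkstra or A* with motion costs); the grid and start/goal cells here are invented for the example.

```python
# Illustrative grid path planning: breadth-first search over an occupancy
# grid, where 1 marks an obstacle cell. Toy data, not a production planner.
from collections import deque

GRID = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def plan_path(grid, start, goal):
    """Return a shortest obstacle-free path of cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct the path by walking back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

path = plan_path(GRID, (0, 0), (3, 3))
```

Because BFS expands cells in order of distance from the start, the first time the goal is reached the reconstructed path is a shortest one on the grid.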