Lucas García, MathWorks
Self-driving cars, voice assistants, autonomous robots, smart devices: autonomous systems are reaching into and changing every part of our lives, and deep learning is the technology behind that change. Advanced levels of perception, enabled by deep learning, are key to the success of automated driving, from advanced driver assistance systems (ADAS) to fully autonomous driving.
Designing and deploying deep learning applications to embedded CPU and GPU platforms, such as those found in cars, is challenging because of the resource constraints inherent in embedded devices. In this session, you will be exposed to some of the most relevant and stimulating real-world problems in ADAS, focusing on the roles played by deep neural networks (DNNs): image classification, object detection and localization, semantic segmentation, and deployment. You will get a walkthrough of a complete workflow: preparing data, including proper ground-truth labeling of data sets and driving scenes; visualizing, creating, and fine-tuning DNNs; training those networks on NVIDIA GPUs to build automated driving capabilities; and generating portable, optimized CUDA code that can be deployed to boards such as the NVIDIA Jetson TX2 and NVIDIA DRIVE PX, leveraging TensorRT for very fast inference.