Applying AI to Enable Autonomy in Robotics Using MATLAB - MATLAB
    Video length is 20:40

    Applying AI to Enable Autonomy in Robotics Using MATLAB

    AI applications in robotics have expanded greatly in recent years to include voice commands, object identification, pose estimation, and motion planning, to name a few. AI-enabled robots continue to spread across manufacturing facilities, power plants, warehouses, and other industrial sites. Warehouse bin picking is a good example: deep learning and reinforcement learning enable robots to handle a variety of objects with minimal human help, reducing workplace injuries caused by repetitive motions.

    Learn how to empower your robots using AI for perception and motion control in autonomous robotics applications. MATLAB® and Simulink® provide a powerful platform for successful AI adoption in robotics and autonomous systems. You can use the same development platform to design, test, and deploy your AI applications in intelligent bin-picking collaborative robots (cobots), autonomous mobile robots (AMRs), UAVs, and other robotics systems. This reduces development time as well as time-to-market.

    Gain insights into:

    • Reducing manual effort with automatic data labeling
    • Detecting and classifying objects using deep learning for robotics applications
    • Motion planning using deep learning
    • Controlling robot motion using reinforcement learning
    • Deploying deep learning algorithms as CUDA-optimized ROS nodes

    Published: 5 May 2023

    [AUDIO LOGO]

    Hello, everyone. In this Expo presentation, my colleague Tohru and I are excited to present on the topic of applying AI to enable autonomy in robotics using MATLAB. As we all know, robotics is an exciting and rapidly advancing field, and autonomy is becoming increasingly important in many applications. By leveraging AI techniques such as deep learning, you can enable robots to make more intelligent decisions and operate autonomously in a wide range of environments. With MATLAB, we have a powerful platform for developing and testing these AI algorithms.

    As you can see from the video, DLR's robot is able to use tactile information and deep learning to perform dexterous manipulation of a cube. Additionally, with the help of deep learning and optimization-based motion planning, DLR's Justin robot can navigate complex environments quickly and efficiently. So in this presentation, we will discuss the potential of AI in robotics and showcase some real-world examples of how MATLAB is being used to develop and test these algorithms.

    As we continue to explore the potential of AI in robotics, it is important to understand the needs of our customers and how they currently use AI in their robots. To that end, we conducted a user study, drawing on over 60 data sources to understand how our customers currently use AI in robotics. First, where is AI being used in robotics? Let's take a closer look.

    AI is used in robotics for perception, planning, and control. We also found robotics engineers applying AI in adjacent areas, such as wireless systems for cobots and smart battery design.

    Not surprisingly, we found that our customers primarily use AI for perception compared to the other workflow steps. Why is that? The reason is that perception is a critical step that enables the other parts of the workflow.

    Safety, robustness, and certification matter for production, and traditional algorithms have an advantage over AI in this regard for planning and control. However, recent advances in AI are enabling new approaches to planning and control. In the next few slides, we will explore some of these AI-powered perception, planning, and control applications in robotics. We also uncovered several patterns and challenges that robotics engineers face when implementing AI-powered solutions.

    Why do these AI projects sometimes fail? There are people problems, such as a lack of AI expertise. Data problems, including insufficient data, can hinder the development and performance of an AI model. Tool problems arise when engineers need to integrate AI models into the broader system and deploy them effectively. Finally, business problems, such as poor ROI, can also be challenges. With that, I would like to hand over to my colleague Tohru, who will explore how MATLAB can help address these challenges and enable the development of robust AI-enabled robotic systems.

    Thank you, YJ. I'll be addressing the AI-driven design workflow in which deep learning and reinforcement learning are effectively employed. The core elements of robotic system design are perception, planning, and control. Around these core elements you have a robotics platform, and you need to deploy your implementation to the robot. System modeling and simulation ensure your robot meets customer needs.

    We are going to use this diagram as a guideline to explain how MATLAB can help you with AI for each element. Now, for the perception part, I will use an intelligent bin-picking application with a collaborative robot arm, or cobot. This shows how the cobot perceives the environment to perform a pick-and-place task.

    Here is an example of a pick-and-place task with a cobot to show how AI can be integrated into the system. The RGB-D camera mounted on the cobot arm captures RGB and depth images to detect the PVC parts and estimate their poses. With similar shapes and colors, PVC parts are difficult to classify using traditional rule-based algorithms. However, by utilizing an AI model, we can easily classify the types of PVC pipes.

    To train an AI model to detect the workpieces, a huge amount of training data is required. However, collecting that much data from real hardware is difficult and limited. By using simulation, a large amount of data can be collected automatically in a short time. In addition, the shapes of the workpieces, lighting conditions, background, and so on can be easily changed. This makes it possible to generate images of various scenes that are difficult to produce in real-world scenarios.

    On the other hand, it is still necessary to collect training data from the real hardware to improve the accuracy of the AI model. This acquisition can also be automated using out-of-the-box path planners.

    MATLAB and Simulink have various examples of generating synthetic data sets through simulation: for example, semantic segmentation for UAVs with Unreal Engine, lidar sensor models to generate point clouds, and camera images using Gazebo co-simulation, shown here. This is also very helpful for taking real-world effects into account.

    Data labeling is a crucial aspect of the design workflow, but it can also be time-consuming. Now I'd like to show an automated data labeling workflow. When we utilize synthetic data, the synthetic data already contain the ground truth of the bounding boxes.

    Then we can train an object detector using only the synthetic data, but the accuracy of the trained detector may not be sufficient, so training with real-world images is still required. The Image Labeler app can be used for interactive labeling of images acquired from the camera. In addition, by utilizing the automation function, the labeling effort can be drastically reduced, as shown in the video.
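    As a rough sketch of this retraining step, labeled ground truth exported from the Image Labeler app can be converted into a datastore and used to fine-tune a detector. The variable names (`gTruth`, `detector`) and training settings below are illustrative assumptions, not the exact code from the demo:

```matlab
% Illustrative sketch: fine-tune an existing YOLO v4 detector with real
% images labeled in the Image Labeler app. Assumes "gTruth" is a
% groundTruth object exported from the app and "detector" is the
% detector previously trained on synthetic data.
[imds, blds] = objectDetectorTrainingData(gTruth);
ds = combine(imds, blds);                   % pair images with boxes

opts = trainingOptions("adam", ...
    "MaxEpochs", 20, ...
    "MiniBatchSize", 8, ...
    "InitialLearnRate", 1e-4);

detector = trainYOLOv4ObjectDetector(ds, detector, opts);  % retrain
```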

    Next, you manually fix only those areas where the detection results are inaccurate. The rest of the process is retraining with the labeled real images to improve the accuracy of the object detector iteratively.

    Of course, it is important to have direct access to many AI algorithms and prebuilt models. Examples are the way engineers learn how to use AI models and find the best approach for their specific problem: for example, object detection with YOLO v4, instance segmentation with Mask R-CNN, and semantic segmentation with U-Net. We provide hundreds of examples for building AI models in a wide range of domains.
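    For instance, running a pretrained YOLO v4 detector takes only a few lines. This minimal sketch assumes Computer Vision Toolbox and the YOLO v4 model support package are installed; the image file is one shipped with MATLAB examples:

```matlab
% Minimal sketch: detect objects with a YOLO v4 detector pretrained on
% the COCO data set (requires the Computer Vision Toolbox Model for
% YOLO v4 Object Detection support package).
detector = yolov4ObjectDetector("csp-darknet53-coco");

I = imread("visionteam.jpg");               % an example RGB image
[bboxes, scores, labels] = detect(detector, I);

% Overlay the detections and display the result.
I = insertObjectAnnotation(I, "rectangle", bboxes, string(labels));
imshow(I)
```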

    We also know that the broader deep learning community is incredibly active, and new models are coming out all the time. Because we support model import from TensorFlow, PyTorch, and ONNX, you have access to those new models and can work with them within the MATLAB environment.
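    A hedged sketch of that import path is below; the file name is a placeholder, and function names vary by release (earlier releases use importONNXNetwork and importTensorFlowNetwork):

```matlab
% Import a model exported to ONNX from PyTorch or TensorFlow.
% "myDetector.onnx" is a placeholder file name for illustration.
net = importNetworkFromONNX("myDetector.onnx");

% Inspect the imported layers before using the network in MATLAB.
analyzeNetwork(net)
```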

    Next, let me talk about AI applications in path planning. I will showcase how AI can be utilized for UAV path planning within an urban environment. As YJ explained in the previous section, AI is currently applied primarily in the perception field in robotics applications.

    For example, let's consider using a UAV to create a road map around a city. First, we obtain imaging data using a camera mounted on the UAV. Next, we classify all the buildings using a semantic segmentation AI model. Then we stitch the classified patches together to construct a single occupancy map.

    Next, how can we efficiently plan the flight path of the UAV? Traditional rule-based exploration algorithms have been widely used for path planning. However, one of the challenges of those algorithms is their execution time.

    To address this challenge, we have a deep-learning-based sampler that improves the execution time. This approach implements a conditional variational autoencoder that generates good samples, given the start state and goal state. With the latest version of MATLAB, you can apply AI models not only in the perception field, but also in other areas such as planning.
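    As context, a conventional sampling-based planner in MATLAB looks like the sketch below (Navigation Toolbox; the map, bounds, and tuning values are illustrative). A learned sampler such as the one described above would bias these samples toward promising regions instead of drawing them uniformly:

```matlab
% Baseline sampling-based planning sketch with RRT* on an occupancy map.
map = mapMaze(5, "MapSize", [25 25], "MapResolution", 1);  % example map

ss = stateSpaceSE2;
ss.StateBounds = [map.XWorldLimits; map.YWorldLimits; [-pi pi]];

sv = validatorOccupancyMap(ss, "Map", map);
sv.ValidationDistance = 0.1;               % collision-check step size

planner = plannerRRTStar(ss, sv, "MaxConnectionDistance", 2);
start = [2 2 0];
goal  = [23 23 0];

rng(1)                                     % reproducible sampling
[pth, solnInfo] = plan(planner, start, goal);

% Visualize the planned path over the map.
show(map); hold on
plot(pth.States(:,1), pth.States(:,2), "r-", "LineWidth", 2)
```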

    This brings us to the rest of the autonomous algorithm workflow, which is decision making and control. For this, I will showcase an example of obstacle avoidance for a delivery mobile robot in a dynamic environment. The obstacle avoidance algorithm executes a series of maneuvers while simultaneously sensing and avoiding obstacles in tight spaces. This complex task requires fine-tuning of the hyperparameters, which can be time-consuming.

    To address this issue, we applied reinforcement learning. We first trained an agent on a simple scenario map, which facilitates the convergence of reinforcement learning. After training on the simple map, the agent gained fundamental physical abilities. Then the agent can be trained efficiently within the higher-fidelity Unreal Engine environment to perform fine-tuning. Reinforcement Learning Toolbox simplifies this complex workflow, and the simulation environment can be easily scaled up.
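    The two-stage training idea can be sketched with Reinforcement Learning Toolbox as below; the predefined cart-pole environment merely stands in for the simple scenario map, and all settings are illustrative assumptions:

```matlab
% Stage 1 sketch: train a default agent in a simple environment before
% fine-tuning in a higher-fidelity one. The cart-pole environment is a
% stand-in for the delivery-robot scenario in the talk.
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

agent = rlDQNAgent(obsInfo, actInfo);      % default agent from specs

opts = rlTrainingOptions( ...
    "MaxEpisodes", 200, ...
    "StopTrainingCriteria", "AverageReward", ...
    "StopTrainingValue", 480);

trainResults = train(agent, env, opts);

% The same trained agent could then be retrained (fine-tuned) in a
% higher-fidelity simulation environment such as Unreal Engine.
```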

    Now I will discuss how AI can be used as part of the overall application. Looking at the whole system, AI is often part of a larger system. You can have each functional component of perception, planning, and control as autonomous algorithms. And you can also integrate them into a robot platform, which includes robot dynamics and sensor models.

    Sensing and the robot API can be either real hardware or simulation models. With a model-based design approach, the model can continue to evolve, and you can scale this model for different applications as necessary. As a closed-loop system, system simulation is crucial in verifying that AI models perform as expected in various environments. With MATLAB, you can dynamically spawn the robot arm and the workpieces at runtime. The objects spawned in the virtual environment can be easily modified.

    We can easily reuse the simulation model for another robot, such as a Universal Robots cobot, thanks to the modularization of each part. The end goal of the task is to pick up the cuboid-shaped objects distributed in the bin. The hardware can be easily switched to a Universal Robots cobot by using the dedicated hardware support package for Universal Robots.

    Each robot application may have different deployment requirements, whether it is an edge system for a production line or a cloud-based streaming system retrieving data from a number of robots. So your AI model needs to be deployable to any possible platform. We have a unique code generation framework that allows models developed in MATLAB and Simulink to be deployed anywhere, such as CPUs, GPUs, microcontrollers, and FPGAs, without having to rewrite the original model. Automatic code generation that eliminates coding errors is an enormous value driver for any organization adopting it.

    Let me show our deployment example for the pick-and-place application. It is necessary to implement the AI models on edge devices to make the robot operate autonomously and in real time. In this example, a CUDA ROS node is generated from the Simulink model of the pick-and-place algorithm by using ROS Toolbox and GPU Coder.
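    From MATLAB, generating such a CUDA-enabled ROS node might look like the sketch below (ROS Toolbox plus GPU Coder). Here `detectAndPublish` is a hypothetical entry-point function name, and the exact configuration used in the demo may differ:

```matlab
% Hedged sketch: generate a CUDA ROS node from a MATLAB entry-point
% function. "detectAndPublish" is a hypothetical function name.
cfg = coder.gpuConfig("exe");                   % generate CUDA code
cfg.Hardware = coder.hardware("Robot Operating System (ROS)");
cfg.Hardware.DeployTo = "Localhost";            % or a remote device
cfg.Hardware.BuildAction = "Build and load";

codegen detectAndPublish -config cfg
```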

    The generated CUDA ROS node is built and executed on top of the ROS ecosystem on an edge device. Object detection can be performed at high speed by leveraging embedded GPUs, even on edge devices. With that, I'd like to hand the presentation over to YJ to summarize our discussion. Thank you.

    Thank you, Tohru. Before I wrap up today's talk, I would like to share a customer story. ASTRI in Hong Kong used MATLAB and Simulink to create a model-based digital twin with computer vision algorithms for their welding robot, reducing integration time by 40% and development time by 30%. This approach helped ASTRI move to smart manufacturing to meet customer demand, address rising costs, and inspire confidence in a model-based design approach. Combining MATLAB and Simulink brings the digital and physical worlds closer together and offers a level of automation and real-world simulation that accelerates the engineering process.

    Today, we discussed how to empower your robots using AI for perception, planning, and control in autonomous robotics applications. MATLAB and Simulink with AI provide an integrated design environment for your autonomous system, from design and simulation to deployment, on the same platform. In summary, with MATLAB and AI, you can build a data set by capturing and labeling images obtained from simulated and real-world scenarios.

    You can use image recognition and object detection techniques to build the map, estimate the poses of objects, and detect dynamic obstacles. You can speed up path planning by training a learning-based sampler, and use reinforcement learning for robot control. Finally, you can integrate AI models within a model-based design workflow and build a system-wide simulation to test with the AI models.

    Do you remember the challenges that I discussed earlier? Let's see how MATLAB makes these challenges a bit easier. First, data complexity. MATLAB provides great modeling and simulation tools, along with automated labeling for different modalities of data.

    Second, model complexity. With MATLAB and Simulink, you have easy access to pretrained models with a single line of code. You can use a model developed in MATLAB or those available in open-source frameworks like TensorFlow and PyTorch. By building a system-wide simulation with the AI model, you can validate the behavior with a digital twin before physical execution on the real hardware.

    The majority of successful AI projects in robotics have been made possible by a clever combination of AI models and robotics algorithms acting together. So robotics engineers need to leverage their domain expertise and apply AI solutions for successful projects. I think you can be an AI expert with MATLAB. Finally, by leveraging the powerful capabilities of MATLAB and Simulink, you can achieve successful AI-enabled robotics projects.

    If you'd like to learn more, the next step is to get started with deep learning yourself. To do this, simply open your browser and launch one of our free online tutorials, starting with the Deep Learning Onramp. In addition, we have a large number of examples, Tech Talk series, and related webinars published on our website to help accelerate your development efforts. And we will be happy to support your specific use cases as well, so feel free to reach out to us if you have any questions.

    [MUSIC PLAYING]
