    Design and Deploy Collaborative Robots (Cobots) Using MATLAB

    Overview

    Constraining robots to cages limits their capabilities, and current markets demand reduced lead times and mass customization. These demands have stimulated interest in flexible, multi-purpose manufacturing systems built on human-robot collaboration that does not endanger workers. A collaborative robot (Cobot) is a robot that allows humans to work alongside it through direct interaction, without conventional safeguarding fences. The development of Cobot applications relies heavily on autonomy in perception, planning, and control.

    In this webinar, you will learn how to incorporate autonomous algorithms and AI tools into your Cobot applications. We will walk through the development of real-world applications, such as a glue-dispensing robot for automotive windshield glass and a smart bin-picking Cobot, using MATLAB and Simulink.

    Highlights

    You will gain insights on: 

    • Detecting and classifying objects for Cobot smart bin picking applications
    • Teaching Cobot motions using inverse kinematics and motion planning
    • Controlling robot motion for safe interaction
    • Deploying and connecting Cobots using ROS

    About the Presenters

    YJ Lim is a Principal Technical Product Manager for robotics and autonomous systems at MathWorks. He has over 20 years of experience in the robotics and autonomous systems area. Lim's responsibilities at MathWorks include long-term strategy development and product management for robotics and autonomous systems. Before joining MathWorks, Lim worked at Vecna Robotics, based in Waltham, MA, as a senior project manager focused on Vecna's advanced robotics system development. Prior to Vecna, he served as the Chief Innovation Officer at Hstar Technologies, a startup focused on agile mobile robotic platforms and healthcare service robotics systems. He worked with government agencies and served on governmental working groups on matters of advanced robotics research. Lim also led robotic software development teams at Energid Technologies, a firm that provides engineering services and products for advanced robotics, machine-vision, and simulation applications. Lim received his Ph.D. in mechanical engineering from Rensselaer Polytechnic Institute (RPI) and his master's degree from KAIST in South Korea. yjlim@mathworks.com

    Ankur Bose is a Senior Team Lead for the UAV and Robotics Development team at MathWorks. He has 8 years of experience in the field of embedded systems and has been working for more than 5 years on developing solutions to integrate autonomous vehicles with MATLAB. He leads a team of software engineers at MathWorks that provides workflows to deploy algorithms from MATLAB and Simulink on UAVs and Cobots. Ankur received his master's degree in electrical engineering from the Indian Institute of Technology, Varanasi, India, in 2014. abose@mathworks.com

    Recorded: 16 Nov 2022

    Hello, everyone. Welcome to this webinar. My name is YJ Lim, technical robotics product manager at MathWorks. I will be presenting today with our embedded system development manager, Ankur Bose, from the MathWorks Bangalore office in India.

    The demand for collaborative robots, or Cobots, is steadily mounting across all industries. Cobots continue to prove their effectiveness in industrial applications by working side by side with people, just like a co-worker. Today, we will be talking about designing and deploying collaborative robot applications using MATLAB. Here is a quick introduction of today's presenters.

    I am a principal technical product manager for robotics and autonomous systems at MathWorks, based in Natick, Massachusetts. I have spent over 20 years developing robotics and autonomous systems. Before joining MathWorks, I worked on several robotics projects at companies including Vecna, Hstar Technologies, and the GM R&D center in South Korea.

    My main experience includes Cobots-- specifically, compliant actuators-- and AMRs. I have been at MathWorks for about five years now, and I am managing a couple of robotics products and developing product strategy. Ankur, would you introduce yourself?

    Sure, YJ. Hello, everyone. I'm an embedded system development engineer at MathWorks, India. I have close to eight years of experience in embedded systems development.

    I have been at MathWorks for six years now. My team and I work on delivering solutions that enable deployment of algorithms from MATLAB and Simulink on UAVs and robot manipulators. Over to you, YJ.

    Thank you, Ankur. So here is what we want to discuss today. I will start by discussing Cobots and how to teach a Cobot, then AI in robotics-- specifically, why use AI in Cobots?

    Then we will discuss an intelligent bin picking use case using a Universal Robots Cobot. Let's get started by introducing Cobots. As many of you probably know, there are six levels of driving automation defined by SAE International, from fully manual to fully autonomous.

    Similarly, I am sketching a roadmap of industrial robotics technology here. Not all robots are artificially intelligent. Many traditional industrial robots are non-intelligent and mostly perform a series of repetitive tasks with a safety fence around them.

    Cobots can perform more autonomous and flexible tasks while sharing a workspace with human laborers. This type of robot uses sensor input from the environment to make decisions and control actions. What AI brings to robotics is more autonomy.

    AI will increase the capability of robots to perform more complex and varied tasks. Let's talk more about Cobots. There are over 700,000 manufacturing companies in the United States.

    Many are small, with fewer than 100 employees. Those small manufacturing companies are underserved by traditional automation, since traditional robots are expensive and best suited to large-scale automation of operations. So the Cobot is really a key market trend in factory automation.

    I'm borrowing a couple of quotes from Cobot makers here. Cobots make deployment easy. For some applications, only simple programming is required. However, there are some application and technology gaps that hinder full Cobot deployment at the present time.

    So AI will allow the realization of more advanced Cobot applications. Let's look at these five Cobot applications: pick and place, assembly, machine tending, welding, and packaging.

    Even for the same type of application, tasks have different levels of complexity, scaling from do-it-yourself simple tasks to tasks that need computer vision systems, motion planners, and even AI. The question is: do you teach your Cobot with a teach pendant, or do you need offline programming or simulation for Cobot application development?

    There are several options to choose from for programming your Cobot. Each has its own inherent pros and cons. There are many things to consider, such as: what is the actual task the Cobot performs?

    Do you care about downtime when you program the Cobot on-site? Do you have trained and skilled engineers to program it? Do you want extendability from one application to another?

    For advanced Cobot applications, we may need offline programming. According to the Universal Robots website, 70% of Cobot applications are focused on the first three areas-- bin picking, assembly, and machine tending. Welding and palletizing round out the top five.

    There are related or required technologies for each application. Hardware testing is one way to prove that a complete solution is deployment ready, but simulation can address some key failure points earlier. Photorealistic simulation is a must to ensure that results translate to real-world scenarios and conditions.

    A number of applications also involve force and torque simulation, which requires a high-fidelity model. One great thing is that since offline programming is more robot agnostic, it can be used to program other robot brands or models and can scale to other applications.

    Let me share a success story from our customer ASTRI, who has adopted model-based systems engineering. ASTRI created a digital twin embedded with computer vision and machine learning algorithms using MATLAB. ASTRI engineers generated a set of synthetic RGBD images of weld pieces.

    Using deep learning, they estimated the pose of the weld pieces. Dr. Koo from ASTRI told us this approach reduced integration time by 40% and development time by 30%, which is great. So let me move on to AI in robotics. I will discuss why to use AI for Cobots specifically.

    The core elements of robotics and autonomous system design are perception, planning, and control. The robot needs to perceive the environment, keep track of moving objects, and plan a course of motion for itself. As you see, there are many technology components here among all these autonomous algorithms in robotics.

    Deep learning has enabled significant progress in tasks related to perception. And reinforcement learning has shown the potential to solve some really hard control problems. Both are expected to scale to more complex problems.

    So now, let me use a pick-and-place Cobot application to show how the Cobot interacts with the environment to perform a pick-and-place task. It starts with a basic initialization step for perceiving the environment, like scanning the environment to detect a part or building the planning scene.

    Compared to the traditional pick-and-place task where everything is known beforehand, this step is very important for picking high-mix parts as well as for practical operation. The robot can react to a dynamic environment, such as a change in part location or new obstacles. Next is to detect objects and estimate the pose of the object for the robot to pick up. I'm showing here an example of a pick-and-place Cobot to show you how AI can be integrated into the Cobot application to perceive the environment.

    This Cobot needs to detect the shape of PVC pipe connectors to sort them and estimate their poses to grab them. All pipes have the same color and similar shapes, making detection and classification difficult with rule-based algorithms. Therefore, we need an AI model for this.

    Typically, a huge training data set is required to train an AI model. Acquiring large amounts of data from real hardware is difficult and limiting. By using simulation, a large amount of data can be collected in a short time.

    In simulation, the shape of the workpiece, the lighting conditions, the background, and so on can be easily changed. This makes it possible to generate images of various scenes that are difficult to reproduce in real-world scenarios. On the other hand, it is also necessary to collect training data from real hardware to improve the accuracy of the AI model.

    Data labeling is one of the most time-consuming steps in the AI design workflow. Now, I want to show you an automated labeling tool that we have. The synthetic data that we showed earlier already contains ground-truth bounding boxes, so we can train an object detector using only synthetic data.

    But the accuracy of a detector trained with only synthetic data is often insufficient, so training with actual data is required. The Image Labeler app can be useful for interactive labeling of images acquired from the camera. In addition, by utilizing its automation functions, labeling effort can be drastically reduced.

    We need manual fixes only for those areas where the detector was incorrect. The rest of the process is retraining with the actual measured images to improve the accuracy of the object detector. MathWorks has several labeling tools to choose from for images, video, LiDAR data, time signals, and even medical imaging to make your data labeling easy.

    Now, pose estimation is particularly important in the task of robotic bin picking. To obtain the pose of the target object, we use a principal component analysis (PCA) algorithm to approximate the centroid and the principal axes of the object. The goal of the PCA step is to match the object from the YOLO result to the stored reference from the CAD model.
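
    To make this step concrete, here is a minimal MATLAB sketch of PCA-based pose estimation. It assumes objPoints is an N-by-3 matrix of 3-D points already segmented for one detected object; the variable names are placeholders rather than code from this webinar.

        % PCA pose-estimation sketch (pca is from Statistics and Machine
        % Learning Toolbox; objPoints is a hypothetical, pre-segmented input).
        centroid = mean(objPoints, 1);             % approximate object position
        coeff = pca(objPoints);                    % columns are the principal axes
        majorAxis = coeff(:, 1);                   % axis of largest variance
        yaw = atan2(majorAxis(2), majorAxis(1));   % in-plane rotation of the part
        graspPose = trvec2tform(centroid) * axang2tform([0 0 1 yaw]);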

    We now know the parts to pick up and the poses to grab them for performing the pick-and-place task. The motion planning algorithm accepts those poses and then outputs a collision-free trajectory. There are two parts: a path planner to find collision-free waypoints connecting the starting configuration to the goal configuration, and trajectory generation to translate those waypoints into the smooth motion that serves the practical application.

    To use a path planner, we start with the initial pose, the final pose, and the environment, such as obstacles, as inputs. The first step is to convert those poses to start and goal joint configurations, which is typically done using inverse kinematics. The path planner then aims to connect those configurations.
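
    As a minimal sketch of this conversion, the following uses the inverseKinematics solver from Robotics System Toolbox with the UR5e model from loadrobot. The target pose is an assumed example value, and "tool0" is the usual UR end-effector frame name, which you should verify against your robot model.

        % Convert a Cartesian goal pose to a joint configuration with IK.
        robot = loadrobot("universalUR5e", DataFormat="row");
        ik = inverseKinematics(RigidBodyTree=robot);
        weights = [0.25 0.25 0.25 1 1 1];      % orientation vs. position weights
        goalPose = trvec2tform([0.4 0.2 0.3]); % example target pose (assumed)
        initialGuess = homeConfiguration(robot);
        [goalConfig, solInfo] = ik("tool0", goalPose, weights, initialGuess);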

    What makes the problem interesting are the constraints that need to be satisfied. Examples of constraints include robot joint limits and obstacles. Depending on the characteristics of the application and the robot you use, you can solve it using either an optimization-based or a sampling-based path planner.

    For robot manipulators, MATLAB has a manipulator motion planner using a bidirectional RRT planner (manipulatorRRT) from Robotics System Toolbox, or you can use other existing planners in Navigation Toolbox for your specific applications. Instead of using a path planner, you can also use an interactive inverse kinematics function to position your robot configuration interactively. Then you can save those configurations of the robot model, and the stored configurations can be easily integrated with a trajectory tool to quickly build a robot motion.
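
    Here is a minimal planning sketch with manipulatorRRT. The bin is modeled as a single collision box with placeholder dimensions, and startConfig and goalConfig are assumed to come from the inverse kinematics step above.

        % Collision-free path planning with manipulatorRRT.
        bin = collisionBox(0.4, 0.6, 0.2);             % width, length, height (assumed)
        bin.Pose = trvec2tform([0.5 0 0.1]);
        planner = manipulatorRRT(robot, {bin});
        rng(0)                                         % reproducible sampling
        path = plan(planner, startConfig, goalConfig);
        shortPath = shorten(planner, path, 20);        % optional path shortening
        waypoints = interpolate(planner, shortPath);   % dense waypoint sequence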

    Robotics System Toolbox also provides the Inverse Kinematics Designer to visualize and tune an inverse kinematics solver with kinematic constraints. This designer tool makes it easy to design and verify joint configurations and step through waypoints, and you can check collisions with the environment.

    It is quite similar to programming a Cobot using a teach pendant, but you can avoid downtime since it is done in desktop simulation. So to review, we start with several input configurations-- start, pick, and place configurations. Then we use the path planner to generate a path, which is an ordered sequence of waypoint configurations.

    But when we actually pass this to a motion controller or robot firmware, we will need to encode time information, which is where trajectory generation comes into play. One intuitive way of doing this would be to interpolate linearly between waypoints given fixed times, and then pass that trajectory to the controller. The problem is that this leads to a non-smooth trajectory, as you can see here. There are distinct jumps in the velocity, and the manipulator is not able to execute this motion.

    So the trajectory generator will generate a time-based sequence for how to follow the path given constraints, such as position, velocity, and acceleration, by connecting waypoints with a class of polynomial functions locally. A trajectory is really a description of how to follow the path over time. So now you know the difference between path planning and trajectory planning.

    We ship multiple trajectory generators, such as minimum-jerk trajectories for smooth and continuous motion, trapezoidal velocity profile trajectories, and cubic and quintic polynomial trajectories. Trajectories can be passed directly to the robot firmware, or you can also design your own motion controller.
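
    As a small sketch, both generators below ship with Robotics System Toolbox; waypoints is the numWaypoints-by-numJoints matrix from the planner above, and the 5-second duration is an assumed value.

        % Turn planned waypoints into timed, smooth joint trajectories.
        tpts = linspace(0, 5, size(waypoints, 1));              % waypoint times
        [q, qd, qdd] = minjerkpolytraj(waypoints', tpts, 100);  % minimum jerk
        [q2, qd2, qdd2] = trapveltraj(waypoints', 100);         % trapezoidal velocity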

    We provide tools to do this in Robotics System Toolbox. For example, this model uses a computed torque controller, which compensates for the manipulator dynamics using blocks from Robotics System Toolbox that describe the robot's dynamics. The trajectory is then passed to the controller.
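
    The webinar shows this as a Simulink model, but the underlying computed-torque law can be sketched with the rigidBodyTree dynamics functions in MATLAB. Here q and qd are the measured joint state; qRef, qdRef, and qddRef come from the trajectory generator (all 1-by-6 row vectors); and the gains are assumed, untuned values.

        % Computed-torque control law sketch.
        Kp = 100 * eye(6);
        Kd = 20 * eye(6);
        e  = qRef - q;                              % position tracking error
        ed = qdRef - qd;                            % velocity tracking error
        aq = qddRef + (Kd * ed')' + (Kp * e')';     % commanded acceleration
        tau = (massMatrix(robot, q) * aq')' ...     % inertia compensation
            + velocityProduct(robot, q, qd) ...     % Coriolis/centripetal terms
            + gravityTorque(robot, q);              % gravity compensation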

    This kind of model can also be used to create more interesting controllers, like an impedance controller that ensures safe force interaction under unexpected contacts, as you might want in Cobot application development.

    Once we're sure we can execute the pick-and-place portion of the workflow, we can step back and integrate it into the overall application. As you can see, this is a continuous process in which we repeatedly go through the states of detecting objects, picking them up, and placing them in the defined area. To do this, a Stateflow chart can be used to schedule the high-level tasks and step from task to task in the pick-and-place workflow, as sketched below.
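
    The chart itself is graphical, but the supervisory logic it encodes can be sketched in plain MATLAB. All of the step functions here (scanScene, detectParts, pickAndPlace) are hypothetical placeholders for the application steps, not shipped functions.

        % Plain-MATLAB sketch of the pick-and-place task scheduler.
        state = "Scan";
        while state ~= "Done"
            switch state
                case "Scan"                       % build the planning scene
                    scene = scanScene();
                    state = "Detect";
                case "Detect"                     % find remaining parts
                    parts = detectParts(scene);
                    if isempty(parts), state = "Done";
                    else, state = "PickPlace"; end
                case "PickPlace"                  % plan and execute one pick
                    pickAndPlace(parts(1));
                    state = "Scan";               % rescan after each cycle
            end
        end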

    Stateflow lets you describe how your algorithm reacts to input signals, events, and time-based conditions using state transition diagrams and flowcharts. You can graphically program and execute MATLAB objects for supervisory control, task scheduling, or fault management using Stateflow charts. Now, looking at the overall system, AI is often part of a larger system.

    For example, this is a Simulink model of a pick-and-place Cobot application with AI. You have functional components for sensing, control logic, planning, robot dynamics, and visualization, along with the AI model. Here, the sensing and robot API can be in either simulation mode or real hardware. With the model-based design approach, the Simulink model continues to evolve to include more detail, or you can scale this model for different applications as necessary.

    Each Cobot application may have different deployment requirements, whether it is an edge system for a production line or a cloud-based streaming system receiving data from a number of robots. AI can sit in any part of this robotic system, so your AI model needs to be deployable to any possible platform.

    We have a unique code generation framework that allows a model developed in MATLAB or Simulink to be deployed anywhere without having to rewrite the original model. Automatic code generation eliminates coding errors, and it is an enormous value driver for any organization adopting it. Let me show you a deployment example of the pick-and-place application.

    As one popular platform, the Robot Operating System, or ROS, is widely used for prototyping robotic systems. It is also necessary to implement the AI model on an edge device to make the robot operate autonomously and in real time. In this example, the pick-and-place application model is generated as a CUDA ROS node by using ROS Toolbox and GPU Coder.
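
    A minimal sketch of that generation step, following the documented GPU Coder plus ROS Toolbox workflow, might look like the following. The entry-point function name, device address, and credentials are assumptions for illustration.

        % Generate and deploy a CUDA ROS node to an NVIDIA Jetson.
        cfg = coder.gpuConfig("exe");
        cfg.Hardware = coder.hardware("Robot Operating System (ROS)");
        cfg.Hardware.BuildAction = "Build and run";
        cfg.Hardware.RemoteDeviceAddress = "192.168.1.18";  % Jetson IP (assumed)
        cfg.Hardware.RemoteDeviceUsername = "nvidia";
        cfg.Hardware.RemoteDevicePassword = "nvidia";
        cfg.Hardware.DeployTo = "Remote Device";
        codegen detectAndPlanNode -config cfg   % hypothetical entry-point function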

    The generated ROS node is built and executed on top of a ROS ecosystem on an NVIDIA Jetson. Object detection can be performed at high speed by leveraging the embedded GPU, even on an edge device. Now, I will turn it over to Ankur, who will introduce intelligent bin picking using a Universal Robots Cobot.

    Thank you, YJ. Hello, again. Let me first start by showing a demo of intelligent bin picking being done on a UR5e Cobot. The end goal of the Cobot here is to pick up the cuboid-shaped objects that you can see distributed in the bin.

    How will the Cobot do this? The position of each object is estimated in MATLAB with the help of a depth-sensing camera and is passed to the Cobot. As you can see, the Cobot reaches out to pick the objects one by one after getting their positions from MATLAB and places them on the table. The trajectory for each object is also computed in MATLAB and then communicated to the robot. In the following slides, I will explain how you can use MATLAB and Robotics System Toolbox to perform intelligent bin picking.

    Let's first have a quick look at the minimum setup required to create a bin picking application. As can be seen here, the UR5e Cobot has been mounted on an aluminum stand. An Intel RealSense camera is also mounted to the same stand in a way that it directly overlooks the bin below. The angle of the platform on which the bin rests can be adjusted for the UR end effector to reach the parts easily. The objects that need to be picked are placed in the bin.

    As YJ discussed earlier, let's see what challenges we need to solve to build an intelligent bin picking application. The full application can be broken down into three separate problems. The perception of the objects in the bin is the first problem that needs to be tackled.

    The perception algorithm should be able to identify the objects in the bin and estimate their positions as well. In our approach, we have used deep learning to train a YOLO network for the cuboidal objects and have applied PCA for pose estimation. Computer Vision Toolbox from MATLAB has the necessary tools to achieve pose estimation.

    After we have the positions of the objects, we need to determine a collision-free trajectory for the Cobot to pick up the objects and place them. We use the manipulatorRRT planner, as YJ mentioned earlier, to achieve this. Robotics System Toolbox also provides tools like the Constrained Time-Optimal Path Parameterized Trajectory, or TOPP for short, to optimize the generated path.

    Finally, once the trajectory is computed, it needs to be sent to the Cobot to be followed. This can be achieved with the support package that Robotics System Toolbox provides for Universal Robots manipulators. The support package has APIs to command a Cobot over ROS.

    Let's see how the ROS middleware is used in this case. Universal Robots provides ROS drivers to communicate with and control UR robots. Using ROS Toolbox from MATLAB, you can communicate with the UR Cobot over ROS. Thus, we can publish and subscribe to ROS messages and command the UR.

    Using ROS, we can also subscribe to the RGBD and point cloud images from the camera. As we will show in the upcoming slides, these images are used to train the deep learning network and, eventually, in our perception algorithm to estimate the positions of the objects in MATLAB. Similarly, from MATLAB, we can subscribe to the current pose of the Cobot and publish the trajectory we want the Cobot to follow, based on the pose estimation done previously.
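
    As a rough sketch, the MATLAB side of this communication with ROS Toolbox could look like the following. The IP address and topic names follow common UR ROS driver and RealSense conventions, but they are assumptions to check against your own setup.

        % Subscribe to robot state and camera images; publish trajectories.
        rosinit("192.168.1.10")                           % ROS master (assumed)
        jointSub = rossubscriber("/joint_states", ...
            "sensor_msgs/JointState", DataFormat="struct");
        imgSub = rossubscriber("/camera/color/image_raw", ...
            "sensor_msgs/Image", DataFormat="struct");
        jointMsg = receive(jointSub, 5);                  % current joint state
        img = rosReadImage(receive(imgSub, 5));           % RGB frame for detection
        trajPub = rospublisher("/pos_joint_traj_controller/command", ...
            "trajectory_msgs/JointTrajectory", DataFormat="struct");
        trajMsg = rosmessage(trajPub);    % populate with waypoints, then send
        send(trajPub, trajMsg)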

    Let's talk in detail about the perception algorithm for this use case. The first task is to use deep learning to train a YOLO network to recognize cuboidal objects. We start by taking RGB images from the camera, which are sent to the host computer over ROS. The more images we have, the better the network is at recognizing objects.

    Hence, we place objects in a certain position and take a snapshot. Then the objects are shuffled and we take another snapshot. The process is repeated until we have enough training data.

    In our example, we have used at least 400 images. As YJ mentioned, you can also combine this with synthetic image data from simulation. The images are then labeled, and we train the network using YOLO in MATLAB.

    It's recommended to use a GPU for faster training. During training, MATLAB reports the learning rate and the loss. A low loss indicates that the network is well trained. If the loss is not satisfactory, we need to take new images and repeat the process. After the network is trained, YOLO can be used to create bounding boxes around the objects.
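
    Here is a hedged sketch of the training step with Computer Vision Toolbox. trainingData stands for a labeled datastore exported from the Image Labeler, and the anchor boxes and training options are illustrative values, not the tuned settings used in the demo.

        % Train a YOLO v4 detector for the cuboid class, then run detection.
        classNames = "cuboid";
        anchorBoxes = {[60 60; 90 90]; [30 30; 45 45]};   % assumed anchors
        detector = yolov4ObjectDetector("tiny-yolov4-coco", classNames, ...
            anchorBoxes, InputSize=[416 416 3]);
        options = trainingOptions("adam", MaxEpochs=40, MiniBatchSize=8, ...
            InitialLearnRate=1e-3, ExecutionEnvironment="gpu");
        detector = trainYOLOv4ObjectDetector(trainingData, detector, options);
        [bboxes, scores, labels] = detect(detector, img); % bounding boxes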

    The next step after training is to use the bounding boxes provided by YOLO to extract the objects. We use point cloud images from the camera. First, the base of the bin is segmented away, and then we extract the point clouds of the objects.

    This is done by keeping any point in the point cloud that is not part of the bin base but falls within a bounding box. Once we have those points, we use the PCA that YJ mentioned earlier to compute the primary axes and centroid of the objects. Here are snapshots of the objects after PCA. The major axes are denoted in red and green, drawn at the calculated centroid of each object.
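
    A minimal sketch of this segmentation with Computer Vision Toolbox follows. The plane-fit tolerance and the region of interest (which in practice would come from deprojecting a YOLO bounding box) are assumed values.

        % Remove the bin base plane, then crop one object's points.
        [~, baseIdx, nonBaseIdx] = pcfitplane(ptCloud, 0.005); % 5 mm tolerance
        objCloud = select(ptCloud, nonBaseIdx);                % base removed
        roi = [0.10 0.20 -0.05 0.05 -inf inf];                 % assumed ROI (m)
        partIdx = findPointsInROI(objCloud, roi);
        partCloud = select(objCloud, partIdx);                 % one object's points
        objPoints = partCloud.Location;                        % feeds the PCA step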

    The next step is to generate a path for the Cobot to follow. YJ talked earlier about how planner algorithms such as manipulatorRRT generate a collision-free path given a starting position and a destination. Once the path is ready, the trajectory generation algorithm generates a time-based sequence for how to follow the path given constraints such as position, velocity, and acceleration.

    This is done by connecting the waypoints with a class of polynomial functions locally. In R2022b, Robotics System Toolbox provides a new trajectory generation algorithm called Constrained Time-Optimal Path Parameterized Trajectory, or TOPP. You can use the TOPP solver to compute the fastest possible way to traverse the path, while still stopping at each waypoint, given bounded velocity and acceleration.
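
    A sketch of that retiming step, assuming the documented R2022b contopptraj interface, might look like this. The path is first fit as a piecewise polynomial through the planned waypoints, and the joint limits are illustrative values, not UR5e specifications.

        % Retime a path with the TOPP solver under velocity/acceleration bounds.
        tpts = linspace(0, 5, size(waypoints, 1));
        [~, ~, ~, pp] = cubicpolytraj(waypoints', tpts, ...
            linspace(0, 5, 100));                     % path as piecewise polynomial
        vellim = repmat([-pi pi], 6, 1);              % per-joint velocity (rad/s)
        accellim = repmat([-2*pi 2*pi], 6, 1);        % per-joint accel (rad/s^2)
        [q, qd, qdd, t] = contopptraj(pp, vellim, accellim);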

    The Robotics System Toolbox Support Package for Universal Robots UR Series Manipulators-- first shipped in the R2022b release of MATLAB-- provides APIs that you can use to command a UR Cobot from MATLAB. Let's see what you can achieve after the support package is installed.

    Once the support package is installed, you can launch the hardware setup screens that will guide you through setting up the ROS workspace on your host computer and configuring the network connectivity to the UR Cobot. The setup screens provide separate instructions for the UR Cobot, the URSim simulator, and Gazebo.

    The first task is configuring the network for communication with the Cobot. Once connectivity is established, you can verify the ROS workspace on the host computer. The installation instructions will guide you on how to correctly set up the ROS workspace on the computer.

    The UR ROS driver provides some additional ROS messages, which can be added to the MATLAB path from the setup screens. At this point, your UR Cobot is ready to talk to MATLAB. You can verify the setup by entering the Cobot IP and acquiring the current joint angles. You can also change the Cobot orientation manually and verify the new angles by clicking Get Joint Angles. After this, the hardware setup process is complete.

    You might be wondering, how does the UR Cobot talk to MATLAB over ROS? The answer is URCap. Now, what is a URCap?

    A URCap is an executable file which, when installed on the teach pendant of the Cobot, acts as an add-on application. For our case, MathWorks has created a URCap to enable ROS communication with the UR Cobot, and it is hosted in the GitHub repository called MATLAB-URCap-for-External-Control. We have detailed documentation illustrating the process of installing the URCap file on the teach pendant. After successful installation, the URCap appears like this on the program menu of the teach pendant.

    The Getting Started example from the support package will help you begin communicating with the Cobot. The example shows how you can control the Cobot using joint-space and task-space control. It starts with the visualization of a UR Cobot as a rigid body tree 3D model, and then helps you connect to and control a real Cobot over ROS.

    You might have noticed in the intelligent bin picking example that we showed that the objects don't overlap with each other. This is called a semi-structured distribution. The pose estimation problem becomes complicated when objects overlap with each other. That is called a randomized distribution, as shown here.

    There is no pattern at all in such distributions. Such problems can be solved by using the deep-learning-based image processing capabilities of Computer Vision Toolbox. In a future MATLAB release, we will create an application example where you can see intelligent bin picking done on a randomized distribution. I will now hand over to YJ to summarize our discussion. Thank you.

    Thank you, Ankur, for sharing the intelligent bin picking use case and how we support Universal Robots Cobots to connect to and control them from MATLAB. Let me summarize what we discussed today. But before that, I would like to share another interesting customer success story involving an inspection robot.

    Deep learning is being used here to find image abnormalities for industrial inspection. Musashi Seimitsu Industry in Japan used a deep learning anomaly detection system built with MATLAB to inspect automotive parts. In this project, Musashi worked with the MathWorks consulting team to build the camera connection, preprocess images, create a custom annotation tool, and improve model accuracy.

    They generated code for the trained AI model using GPU Coder, implemented it on an NVIDIA Jetson, and connected it to the PLC. This approach is expected to reduce the human workload and cost of manually operated visual inspection of millions of parts. Automated visual inspection is another great Cobot application, since a Cobot is easy to deploy and consistently follows an exact process and predefined workflow.

    This is what we showed earlier: the core elements of robotics and autonomous systems with AI. Around these core elements of the autonomous system, we have the robotic platform, and we need to connect and deploy our implementation to the robot.

    Systems engineering will then ensure your Cobot meets customer needs and that design engineering does not over-design or under-design the delivered system. All of this points to numerous benefits of using MATLAB and Simulink to design and deploy Cobot applications. We support a full suite of robotics algorithms for you to design Cobot applications.

    You can design and evaluate AI applications by using a variety of prebuilt AI models, and we provide hundreds of examples for building AI models as well. You can accomplish system-wide simulation and testing of your Cobot behavior within an integrated development environment.

    To get started, please refer to the many examples that ship with Robotics System Toolbox. In addition, I encourage you to check out our hardware support page to connect to and control robot hardware from MATLAB. If you'd like to learn more about AI in MATLAB, the next step is to simply open your browser and launch one of our free online tutorials, starting with the Deep Learning Onramp.

    Before we start with Q&A, let me summarize the key takeaways of today's webinar. You can perceive the environment for advanced Cobot applications using deep learning in MATLAB. You can teach Cobot motions using inverse kinematics and motion planners. You can design Cobot motion controllers with safe interaction.

    And you can connect to and deploy on the Cobot from MATLAB through ROS. Thank you for your attention. Now, please post your questions in the Q&A panel. We will take a few moments to review them and then come back online to answer your questions. Thank you so much.
