Industrial Robots for Manufacturing | Robotics for Smart Factory, Part 1
From the series: Robotics for Smart Factory
Developing industrial robotics requires knowledge and experience in many engineering domains, including mechanical design, perception, decision making, control design, and embedded systems. In this talk, we will discuss how we can design, test, validate and verify, and deploy these complex industrial robotics applications. Several concrete examples, from pick-and-place manipulation to warehouse robot coordination, illustrate how the complexity of these systems can be managed through scalable physics simulation, Model-Based Design, and hardware connectivity and deployment.
Published: 14 Feb 2022
Hi. My name is Remo Pillat and I'm an Engineering Manager at MathWorks, working on several software products for robotics and autonomous systems. As part of our development work, we interact a lot with industrial robotics companies and we see some clear market trends that I would like to share with you today and summarize how these market trends impose requirements on the software development process.
Full disclosure, since I'm at MathWorks, some of my slides will focus on how MATLAB and Simulink can be used for this application space. But I believe that a lot of the basic development process translates to other tools as well. So let's get started.
Let me start with some industrial robotics trends. Digital technology development is now enabling a new wave of manufacturing innovation, and factories are incorporating more advanced systems and taking advantage of new technology. Specifically, factories are now combining advanced systems like collaborative robots and AI-based robots to get work done. And this exciting robot trend follows many decades of deploying traditional industrial robots.
The first one of those was actually deployed by General Motors in 1961. And there wasn't much fanfare about it, and American companies didn't really use industrial robots for quite a while after that. But these traditional industrial robots really focused on automation and mass production and typically just follow a predefined path. Those robots also have to be fenced in for human safety.
Around 15 years ago, collaborative robots or "cobots" first entered the market. Working alongside humans, cobots had to be safe and employ a new level of autonomy to ensure that safety. One of the entrants back then was Universal Robots in 2008. They are now the market leader for cobots and sell more than 50% of all cobots on the market.
More recently, AI-enabled robots have reached new heights of autonomy, but these are mostly still in the R&D domain, robots like the ABB YuMi. But the overarching trend here that I want to highlight is that manufacturing is going from mass production to mass customization, because more intelligent robots mean more customization of the manufacturing process.
So I want to focus on these cobots again since they are just at this threshold of being economically viable and being deployed worldwide. Cobots impose unique requirements since they share a workspace with humans, and that requires them to have a certain level of autonomy for ensuring worker safety. They also operate in an environment that is both dynamic and unpredictable, so none of the cages of traditional robots apply to cobots anymore.
So what we'll discuss today in my talk is I'll begin by giving some context about the place of smart robots in the factory and then talk about how we can develop autonomous robotic systems. And I'm going to show a particular application, pick and place, which has been around for many decades now but is still a task that has to be performed regularly. And finally, I will summarize what I showed.
So in recent years with the introduction of more digital tools, the smart factory concept has been promoted in manufacturing systems. And there are lots of parts to the smart factory, but one of the important ones is that data collection happens all across the factory through IoT devices or other sensors in the environment. And the intelligence derived from that data enables optimization of the manufacturing process.
So it becomes clear that smart factories go way beyond simple automation, and this transformation is sometimes framed as the Fourth Industrial Revolution or you might have heard the term "Industry 4.0." But really, what I would like to focus on today are the actual robots in the factory and what these advanced robots need to do and how to best develop their software and their autonomy.
OK. So the traditional robot performs mostly a single task and repetitively, and it does that task well. It does it fast. It does it repeatably. But as I mentioned before, those robots are typically fenced in so that humans don't get hurt when the robot malfunctions. And these robots typically do not have any sensors to measure the environment. They follow predefined tasks and are really quite dumb by today's standards.
When cobots came to the factory floor, they enabled more autonomous and flexible tasks, things like painting and packaging and picking and placing of objects. Those tasks, you can't just preprogram them, because objects might be in a different location, they might look slightly different, the shape of the body you're painting might have slight variations. So you need to build autonomy into those robots, and you need to be able to have humans in that same workspace. So you need advanced algorithms to make that happen.
Who knows what the future holds, but one thing that seems to be clear is that AI, deep learning, reinforcement learning are really transforming robotics. And it's very likely that these trends will make it into autonomous systems in the smart factory in the future. And there are lots of possibilities how that could happen, but it's going to be exciting to watch that future.
But let's stay with today. So what are the challenges with this Industry 4.0 vision and these new cobots in the industrial space? What we hear a lot from customers and from industrial robot companies is that the designs for these robots are getting more complex. And that's the top engineering challenge, because all this autonomy has to be built, and all this autonomy is typically based around software. And software complexity has been increasing rapidly for a while now, whereas the ability of software developers to develop that software has increased only linearly.
So how do you manage all this complexity that's needed for this autonomy on the factory floor? Well, one way to do this that I happen to think is a good way is using model-based design and really using that as a driving methodology for managing the complexity. So what is model-based design?
Well, the raw definition is that model-based design is an approach to representing system components and their interactions with their surrounding environment using mathematical models. It's a bit of a mouthful, but the two key pieces here are that you represent parts of the system as modules and that you use math and simulation to validate these models.
So who uses model-based design? Well, it turns out most of the automotive and aerospace companies in the world adopted model-based design for designing their systems. So it seems natural to also apply that same methodology to industrial robots. And that's what I want to dig a little bit into. What would model-based design look like for industrial robotics?
So there is really a common set of workflows that you will encounter when you design a robot application, and it has three pillars. One is designing the platform and the environment it needs to interact with. Then building some software intelligence on top of that platform for things like perception, planning, and control.
And in the end, you want this to run on the real robot. Right? It has to run in the factory, not in your lab or in your simulation system. So it would be ideal if we could continuously reuse all these pieces that we are building and design, simulate, analyze, implement, and test the whole system. This is the promise of model-based design. And as the name says, it starts with the robot model. And this is really a crucial step for everything else that follows.
So robot manipulators consist of multiple components, including mechanical actuators, electrical parts, and potentially also hydraulic parts, depending on how the manipulator is structured. So how do you build a model like that? You can certainly build it from the bottom up, from individual components. Right? Connecting different links with joints.
But more likely than not, a lot of our industrial robot manufacturers have existing CAD models of those robots. And you can use those also as kind of your initial truth, if you want, about the sizing and the weight and the inertial properties of that robot. Another popular way to get this model is from the URDF format.
If you are not familiar with it, it has been popularized by ROS and Gazebo. And nowadays, really most robot manufacturers provide URDF descriptions of their robots. So this is a very handy way to quickly get the full description of a robot into the software of your choice.
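If you work in MATLAB, this step is typically a single import call. Here is a minimal sketch with the Robotics System Toolbox; the URDF file name is just a placeholder for whatever description your robot vendor provides.

```matlab
% Minimal sketch: import a vendor-provided URDF into a rigid body tree model.
% "my_cobot.urdf" is a placeholder file name; any standard URDF should work.
robot = importrobot("my_cobot.urdf");
robot.DataFormat = "row";                 % use row vectors for joint configurations

showdetails(robot)                        % list bodies, joints, and parent/child relations
show(robot, homeConfiguration(robot));    % visualize the arm in its home configuration
```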
So conducting experiments with real hardware is expensive and time-consuming, as you probably know. And sometimes, it might even be dangerous when you are first trying to tune your controllers. So simulation is really a powerful tool for analyzing, optimizing your algorithms and really taking a robot model into account. Right? Building the algorithms to deal with the complexities of your model.
So for many workflows, like testing and debugging your system, the physical system is replaced with a simulation. Right? And the simulation could really happen at many levels, from a fairly simplistic, low-fidelity representation to a high-fidelity, complex simulation.
It turns out, from many, many companies we talk to, this is easier said than done. Like how do you go from a simple representation to a more complex representation? And one thing you need to keep in mind, in large companies, you have tens or hundreds of engineers working on the same problem. They don't all need the same simulation complexity.
For example, a control engineer designing a torque controller needs the highest possible simulation fidelity to account for all dynamics of the robot when they tune their controller. For an engineer that works on path planning for painting, let's say, their input to the simulation is simple waypoints. They assume that that simulator will figure out how to interpolate trajectories, so their simulation model can be much simpler.
So let's discuss how this range of different simulation fidelities could be supported through model-based design. All right. Let's give some concrete examples. Right? At the early stage of application development, you may focus on the task scheduling of the robot. Right? Think of, where should the robot go?
And you might use some interactive tools to specify different robot poses and then generate trajectories for the robot to reach those poses. To begin with, you can model your system as a simplified motion model, which is really just a fancy way of saying that this includes both a basic plant and joint controllers. So when you start out, you might not even want to deal with the control aspect; you simply assume that this is a robot that can be controlled via high-level commands.
Note that we also don't really pay attention to any grasping at this stage, but this already gets you started to a good degree. And you can run simple simulations, like how this YuMi robot is picking up an object and then dropping it in a different spot. Again, we're not really modeling any grasping here, but you can very easily test basic trajectories with this.
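As a rough sketch of what such a task-level simulation might look like in code, the snippet below reuses the URDF import from before, treats a few joint configurations as the task schedule, and simply interpolates between them; the joint values are placeholders, and no controller or grasping is modeled.

```matlab
% Minimal sketch of the "simplified motion model" idea: the robot is assumed to
% reach commanded configurations directly, so we interpolate between a few task
% waypoints and animate the result. Joint values are placeholders.
qHome  = homeConfiguration(robot);            % robot imported as in the earlier sketch
qPick  = qHome;  qPick(1:3)  = [ 0.5 -0.4 0.6];
qPlace = qHome;  qPlace(1:3) = [-0.5 -0.4 0.6];

waypoints = [qHome; qPick; qPlace; qHome]';   % one column per task waypoint
q = trapveltraj(waypoints, 100);              % smooth joint-space interpolation

for k = 1:size(q, 2)
    show(robot, q(:,k)', "PreservePlot", false);   % simple animation of the schedule
    drawnow
end
```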
So later on, you probably want to upgrade this model to something more complex, something that's closer to how your real robot behaves. So after this kind of task scheduling has been designed and verified, you probably want to model the controller for your robot. Again, a more complex manipulator dynamics model is needed that takes joint torques and gripper commands and you probably need a torque controller for this robot.
The power of model-based design is that the rest of the system model stays the same. So you can run the same verification test as before and verify that your arm behaves correctly. Again, here's just one example in the tools that we built here at MathWorks that would simply allow you to iteratively replace your robot model with higher-fidelity components.
And again, you probably want to verify, run the same automated tests for how would the robot behave when it picks up an object and then drops it off in a different location. But again, now likely you will model more dynamics of the robot and you can tune your actual torque controllers for each of the joints.
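One common building block for such a joint torque controller is computed-torque (inverse dynamics) control. The sketch below is not the exact controller from the talk; it just shows how the rigid-body model can feed a torque command, with gains chosen purely for illustration.

```matlab
% Minimal computed-torque sketch: command the accelerations needed to track a
% desired trajectory sample, then let inverse dynamics turn them into torques.
Kp = 100;  Kd = 20;                              % illustrative PD gains only

q    = homeConfiguration(robot);                 % current joint positions (row vector)
qd   = zeros(size(q));                           % current joint velocities
qDes = q;  qdDes = qd;  qddDes = zeros(size(q)); % one sample of the desired trajectory

% inverseDynamics accounts for the inertial, Coriolis, and gravity effects of
% the rigid-body model when computing the required joint torques.
qddCmd = qddDes + Kd*(qdDes - qd) + Kp*(qDes - q);
tau = inverseDynamics(robot, q, qd, qddCmd);
```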
Now, we can add an even higher-fidelity model, again defining each of the mechanical parts of the robot, if you so choose. And you might also add other modeling details. Think of the electrical system or the hydraulic system. The main difference from the previous model here is the plant. The plant model now incorporates the full dynamics of that robot with built-in joint limits and contact modeling. And this simulation accuracy can be achieved with many tools.
From the MathWorks perspective, we have a range of software tools for physical modeling. And it helps you to really get accurate simulations, but it comes at the cost of simulation speed. Right? Whereas in the beginning, you could quickly iterate over different task schedules for your robot and quickly change basic behaviors of the robot and simulate that as often as you want.
This more complex simulation takes time. And that's why you want to follow this process of iterative design so that, once you get to this high-fidelity simulation, there are really not that many parameters to tweak in the algorithms you're working on. Right? Because, again, simulations are more expensive in a higher-fidelity environment.
So one thing that is still missing from the simulation is a proper representation of the environment and the simulated sensor readings from the environment. And this is usually the time when you need a more full-fledged 3D simulator. In this case here, I'm showing Gazebo, which is a popular simulator for robotics. And again, you might want to use Gazebo, because it has a richer set of tools for specifying the environment around the robot and because it has a set of built-in sensors that can be simulated.
For example, cameras, depth cameras, LiDARs, and so on. Again, the ideal with model-based design is that you can reuse all the model pieces you already developed and simply replace the plant and sensor models with data coming from Gazebo. From our company's perspective, that's where we help lots of companies that want to leverage the latest robotics simulation tools but also want to leverage all the years and decades of work they put into their MATLAB or Simulink models for their controllers.
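One common way to get Gazebo's simulated sensor data into an existing MATLAB or Simulink design is over ROS topics. Here is a minimal sketch; the ROS master address and topic names are placeholders for your own Gazebo setup.

```matlab
% Minimal sketch: pull simulated sensor data out of Gazebo over ROS (ROS Toolbox).
rosinit("192.168.1.10");                           % connect to the ROS master running Gazebo

imgSub  = rossubscriber("/camera/rgb/image_raw");  % simulated camera topic (placeholder name)
scanSub = rossubscriber("/scan");                  % simulated lidar topic (placeholder name)

imgMsg  = receive(imgSub, 5);                      % wait up to 5 seconds for a message
img     = readImage(imgMsg);                       % convert sensor_msgs/Image to a MATLAB image
scanMsg = receive(scanSub, 5);                     % lidar scan for the planning models

imshow(img)                                        % feed these into your perception and planning models
rosshutdown
```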
One drawback Gazebo has is that its graphics are a bit antiquated. So actually, a lot of companies use other tools for simulating more photorealistic scenes. This is great for things like AI and deep learning, where you need photorealistic camera images for training your network. And again, you want that transition to be as smooth as possible between the different tools you are using. And a lot of customers don't want to deal with the hassle of switching to a new simulator and then having to change all the tools or code they've already written.
All right. Moving on to the next pillar. Right? Adding autonomous algorithms for perception, planning, and control. Again, we have a robot model now. We have simulated sensors. How do you add algorithms for autonomy? And before we jump into a perception example, let's look at how deep learning can be applied on a robot. Again, there are lots of ways that deep learning is really transforming robotics and also starting to have an impact on industrial robotics.
One is speech recognition for robots. The more traditional deep learning application is to detect objects, which is great for pick-and-place applications and also for detecting abnormalities in parts manufacturing. LiDARs and depth cameras are also starting to have an impact because they give the robot a more spatial understanding of the environment. These are just some of the ways that deep learning is really having an impact on smart manufacturing.
So let me show you one example of how object classification can be used. Again, here's a robot that takes a number of training pictures for different objects. Typically, the companies we work with have some kind of labeling process in place that could be automated or manual for letting your deep learning algorithm know which objects are which in each of the images. And finally, this trained neural network can be deployed as an object detector. In this case, you're using the fairly common YOLO architecture.
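For reference, the sketch below mirrors the standard Computer Vision Toolbox workflow for a YOLO v2 detector rather than the exact network in the video; the labeled training table, anchor boxes, and training options are all assumptions.

```matlab
% Minimal sketch of training and using a YOLO v2 detector (Computer Vision Toolbox).
% "partsTable" is an assumed table of labeled training images (for example exported
% from the Image Labeler app); network and training settings are illustrative only.
inputSize   = [224 224 3];
numClasses  = width(partsTable) - 1;            % first column holds image file names
anchorBoxes = [32 32; 64 64; 96 96];            % rough guesses; normally estimated from the data

lgraph   = yolov2Layers(inputSize, numClasses, anchorBoxes, resnet50, "activation_40_relu");
options  = trainingOptions("sgdm", "MaxEpochs", 20, "MiniBatchSize", 8);
detector = trainYOLOv2ObjectDetector(partsTable, lgraph, options);

% At run time, the trained detector returns bounding boxes, scores, and class labels.
[bboxes, scores, labels] = detect(detector, imread("test_scene.png"));
```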
So here, you can see an example just from the robot's perspective and how the deep-learning-based object detection is used in conjunction with path planning for a real robot. One thing that I should note here is deep learning has been around for a while now, and we all know its benefits. Interestingly enough, most of our industrial robotics customers are interested in deep learning, but really only for systems or parts of systems that don't impact safety.
So for example, advanced bin-picking systems leverage deep learning for object recognition. But you don't really want to have a deep learning network control the robot motion. And the main concern there from the companies we talk to is that functional safety and the ability to certify systems are major considerations, and deep learning is a challenge to fit within this certification framework.
This is not new. Automated driving is dealing with this right now: a lot of the tried-and-true techniques for certifying safety-critical systems are not easily applied to systems that employ deep learning and artificial intelligence. Again, industrial robotics companies are even more conservative. But again, deep learning is starting to have an impact on perception-related tasks.
So now, let's look at a higher-level workflow for pick-and-place. So here's a basic flowchart of kind of the pieces of each pick-and-place application, from sensing the environment, identifying paths, and then executing a pick-and-place workflow. And in traditional pick-and-place environments, again, everything is known beforehand. You know where parts are. You pick them up. You do it repeatedly.
For cobots and smaller factory environments, you likely have to use sensing to find objects. And I showed you an example of how this detection and training might happen for deep learning. So one thing that we're interested in now is, how do you execute this pick-and-place workflow? How do you deal with motion-planning control and hardware connectivity?
So let's talk about motion planning, and there are really two parts to it. To most of you here, as roboticists, these might be obvious steps within the motion-planning workflow. To many of our industrial robotics customers, this is really fairly new. They've been dealing with software tools from the robot manufacturers that essentially did this motion planning for them: all waypoints were pre-specified, and the motion plan was automatically generated.
Now, for more autonomous systems, some of these tasks have to be solved on the robot and have to be customized. So it's actually interesting to see how they approach this problem. And typically, motion planning is subdivided into two parts. Right?
One is the path planning, meaning finding a collision-free path that your robot can go through. This is typically at the waypoint level. And in the end, you want to interpolate a trajectory that your robot can actually follow and that minimizes some kind of criterion. A popular one, for example, is minimum jerk.
But let's start with path planning. Right? I mean, just to get everybody on the same page, the problem of path planning is really starting with an initial pose and a final pose and some kind of environment representation. And then as a first step, most path planners relate these poses to actual start and goal configurations of the robots, typically using inverse kinematics. And then the planner aims to connect these configurations.
And what makes the problem more interesting is that there are certain constraints on the planner. These constraints might be obstacles, static or dynamic obstacles, or things like joint limits. And then the path planner will find a collision-free joint trajectory that gets the robot from the starting position to the end position. And depending on the characteristics of the application, you can either solve this through optimization-based approaches or sampling-based planners. These are the most popular ones in industrial robotics.
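To make that first step concrete, here is a hedged sketch of mapping a desired end-effector pose to a joint configuration with an inverse kinematics solver; the end-effector body name, target pose, and weights are placeholders.

```matlab
% Minimal sketch: turn a task-space goal pose into a joint configuration via IK.
ik        = inverseKinematics("RigidBodyTree", robot);   % robot from the earlier import sketch
weights   = [0.25 0.25 0.25 1 1 1];                      % relative orientation vs. position weighting
initGuess = homeConfiguration(robot);

goalPose = trvec2tform([0.4 0.2 0.3]);                   % desired end-effector pose (4x4 transform)
[qGoal, solInfo] = ik("end_effector_link", goalPose, weights, initGuess);
disp(solInfo.Status)                                     % check that the solver converged
```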
One sampling-based planner that's especially popular is the bidirectional RRT, and I'd like to explain this planner a little bit. The bidirectional RRT planner creates two trees: one starts from the start configuration, and one starts from the goal configuration. These trees grow outwards, and the planner seeks to connect them to find a path that goes from start to goal.
And the planner uses several heuristics to reduce planning times. One thing you see right away is that it's a sampling-based planner, so the waypoints you get really depend on how finely the planner samples the space. And the resulting trajectory, if you just connect the waypoints, is likely not a trajectory that you want your robot to follow. It's not going to be very smooth.
So here, I'm showing just one example of how this planner would solve a pick-and-place problem, where the robot first picks up the item on the right and then moves it over the obstacle to the table on the left. And the main thing this example shows is that, depending on your problem description, the type of obstacles you have, and the type of robot you have, you probably have to tune the planner to find the best parameters for your particular application.
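As a hedged illustration of that tuning, here is roughly what setting up a bidirectional-RRT-style planner looks like with the manipulatorRRT object in the Robotics System Toolbox; the obstacle, seed, and parameter values are placeholders.

```matlab
% Minimal sketch: sampling-based planning around a box obstacle with manipulatorRRT.
obstacle      = collisionBox(0.3, 0.3, 0.4);          % placeholder obstacle dimensions
obstacle.Pose = trvec2tform([0.4 0 0.2]);

planner = manipulatorRRT(robot, {obstacle});          % robot from the earlier sketches
planner.MaxConnectionDistance = 0.3;                  % typical knobs you end up tuning
planner.ValidationDistance    = 0.05;

rng(0)                                                % sampling-based, so fix the seed for repeatability
qStart = homeConfiguration(robot);                    % qGoal comes from the IK sketch above
path   = plan(planner, qStart, qGoal);
path   = interpolate(planner, path);                  % densify the raw waypoints for smoother motion
```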
So up until now, we've been assuming that you can plan configuration to configuration, but many pick-and-place applications are broader than that. Right? You don't just go from one spot to the next; you also have to dynamically scan your environment and detect obstacles, and that's really when your planning starts. So it might be a dynamic environment, not just a static point-to-point-type configuration.
And in applications like this, what might be more ideal is to plan to a region rather than a specific configuration, and the workspace goal region is a popular approach there. And here are just a couple of videos that show how you can define not just an exact goal pose for your robot but boundaries around the goal.
Think of them as there might be a region you can go into, or you might have a certain tolerance around your end-effector pose at the goal location. All that really helps in ensuring that you can execute the robot motion while the environment might be changing. Right? So rather than sticking with a fixed plan, something like a workspace goal region allows you to react to dynamic changes.
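In the MathWorks tools, this idea maps to the workspaceGoalRegion object, which the RRT planner can accept in place of a single goal configuration; the reference pose and tolerances below are placeholders, and the exact ordering of the Bounds rows is described in the toolbox documentation.

```matlab
% Minimal sketch: plan to a region around the goal instead of one exact configuration.
goalRegion = workspaceGoalRegion("end_effector_link");     % same placeholder body name as before
goalRegion.ReferencePose = trvec2tform([0.5 0.1 0.2]);
goalRegion.Bounds(1,:) = [-0.05 0.05];                     % +/- 5 cm tolerance in x
goalRegion.Bounds(2,:) = [-0.05 0.05];                     % ... and in y; remaining rows bound z and orientation

path = plan(planner, qStart, goalRegion);                  % same planner as before, region as the goal
```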
So to review, we started with a set of input configurations to pick and place and then use the path planner to get a path, which is kind of an ordered set of waypoints. But when we actually pass those into the motion controller, that's not enough. Right? You need some kind of time association. Like when is the robot supposed to reach these waypoints? And that's really where trajectory generation comes into play.
And one intuitive way of doing this would be simply to interpolate the positions linearly in joint space. But the problem with that is that it leads to non-smooth trajectories, which is really not desirable because it causes extensive wear and tear on the manipulator arm. So trajectory generation fulfills the task of mapping the output path from the planner to a control sequence.
And typically, most industrial robotics customers use some kind of polynomial functions to connect the waypoints. There are various ways to create a trajectory that interpolates the joint configurations, and these trajectories can apply in joint space, interpolating each joint angle, or in task space, interpolating where the end effector is. Trapezoidal velocity trajectories, for example, use a piecewise profile of constant acceleration, then zero acceleration, then constant deceleration.
And this leads to this well-known trapezoidal velocity profile. It's relatively easy to implement, and it gets the job done. And really, most traditional industrial robots use trapezoidal velocity profiles. But you can also interpolate between two waypoints using various orders of polynomials.
And these polynomial trajectories are particularly useful for continuous motion, because the trajectory can pass through the waypoints with non-zero velocities and you can get smooth velocity and acceleration profiles. The most common polynomials here are cubic or quintic polynomials, which ensure a certain level of smoothness in the derivatives of the joint motions.
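For illustration, both profile types are a single function call in the Robotics System Toolbox; the sketch below time-parameterizes the interpolated planner output from before, with the total motion time assumed to be five seconds.

```matlab
% Minimal sketch: time-parameterize planned waypoints with two common profiles.
wpts = path';                                   % planner output is waypoints-by-joints; transpose it

% Trapezoidal velocity profile: constant acceleration, constant velocity, constant deceleration.
[qTrap, qdTrap] = trapveltraj(wpts, 200);

% Quintic polynomial profile: smooth position, velocity, and acceleration.
tWpts = linspace(0, 5, size(wpts, 2));          % assumed 5 s motion, evenly spaced waypoints
tSamp = linspace(0, 5, 200);
[qQuin, qdQuin, qddQuin] = quinticpolytraj(wpts, tWpts, tSamp);

plot(tSamp, qdQuin')                            % compare joint velocity profiles before deployment
```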
So again, just to summarize, now you have some kind of trajectory generator. Pick one of the options I just mentioned, and you can then pass the result to the firmware of your manipulator and actually execute this path. The trajectory can go directly to the robot, but you can obviously also validate it first through simulation.
Again, going back to the model, you can pass this trajectory through your robot model and through the controllers that you designed. It's a good way to try out new control schemes and also to tune controllers that you eventually want to use on the real robot. Here are just a couple of examples playing with impedance control. Again, this is a very important way to have the robot react to external stimuli, especially when it comes in contact with objects or humans.
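As a very reduced sketch of what an impedance-style law can look like, the snippet below puts a virtual spring-damper on the end effector and adds gravity compensation; the gains, body name, and desired position are placeholders, and a production controller would include more terms.

```matlab
% Very reduced impedance-style sketch: a virtual spring-damper at the end effector
% plus gravity compensation, using the rigid body tree from the earlier sketches.
Kx   = 300 * eye(3);                                       % translational stiffness [N/m], placeholder
Dx   = 20  * eye(6);                                       % damping on the end-effector twist, placeholder
pDes = [0.4; 0.2; 0.3];                                    % desired end-effector position, placeholder

J    = geometricJacobian(robot, q, "end_effector_link");   % 6-by-N Jacobian, [angular; linear] rows
pose = getTransform(robot, q, "end_effector_link");
pErr = pDes - tform2trvec(pose)';                          % position error, desired minus actual

wrench = [zeros(3,1); Kx * pErr] - Dx * (J * qd');         % no orientation spring in this sketch
tau    = (J' * wrench)' + gravityTorque(robot, q);         % joint torques to command
```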
So now that I've kind of walked you through the steps for implementing pick-and-place, from path planning to trajectory generation to motion control, we can deploy this and run it on a real robot. Again, this is just a little robot that is in our office. But the idea is the same: the promise of model-based design is that you can validate your complete algorithm in simulation, seen on the left side, and the final deployment is just a click away, with the robot behaving the same way in the real world.
So once we're sure that we can execute this pick-and-place, we can step back and integrate this workflow. Again, in most applications, you don't just want to pick a single part. You want to do things repeatedly, 24/7, and as fast as possible. So probably, there's some kind of state machine that you also have to implement to run particular tasks repeatedly.
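In the MathWorks world this is often done in Stateflow, but even a plain MATLAB loop with an enumerated state captures the idea; the helper functions below are hypothetical stand-ins for the perception, planning, and gripper pieces discussed above.

```matlab
% Minimal sketch of a task-level state machine for running pick-and-place repeatedly.
% detectParts, planAndMove, closeGripper, and openGripper are hypothetical helpers.
state = "Scan";
while true
    switch state
        case "Scan"                                 % look for the next part with the perception model
            part = detectParts();
            if ~isempty(part), state = "Pick"; end
        case "Pick"
            planAndMove(part.pickPose);   closeGripper();
            state = "Place";
        case "Place"
            planAndMove(part.placePose);  openGripper();
            state = "Scan";                         % loop back and wait for the next part
    end
end
```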
So now, let's go to the last pillar, hardware implementation. And here, I'm going to cover a couple of different ways how we've seen companies connect to hardware. So one option obviously is to connect directly to the robot. Many cobot manufacturers have direct interfaces that you can use from MATLAB, from Python, from C++. And those typically give you quite low-level access all the way down to torque control.
A second option is that many of the robots have their own onboard computer, and you can run your own control algorithms on those computers. Again, through the MathWorks tools, we have this mechanism called code generation, which essentially allows you to package up your controller as C++ code and run it on a range of different hardware targets.
A third and popular option that you might have used is to connect through ROS. Right? ROS is a middleware that is a very common interface to many robots, and many industrial robots are supported by the ROS-Industrial Consortium, which provides dedicated drivers and interfaces for those robots. And ROS can be used to deploy code, and it can also be used to communicate with the robot in real time while it's running.
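As a small sketch of the ROS option, here is how a planned trajectory could be streamed to a typical joint-trajectory interface from MATLAB with ROS Toolbox; the master address, topic name, joint names, and timing are placeholders for what your robot's driver actually expects.

```matlab
% Minimal sketch: send a planned trajectory to a robot driver over ROS.
rosinit("192.168.1.20");
trajPub = rospublisher("/joint_trajectory_controller/command", ...
                       "trajectory_msgs/JointTrajectory");

msg = rosmessage(trajPub);
msg.JointNames = {'joint_1','joint_2','joint_3','joint_4','joint_5','joint_6'};  % placeholder names

for k = 1:size(qQuin, 2)                               % qQuin, tSamp from the trajectory sketch above
    pt = rosmessage("trajectory_msgs/JointTrajectoryPoint");
    pt.Positions     = qQuin(:,k);
    pt.TimeFromStart = rosduration(tSamp(k));
    pts(k) = pt;                                       %#ok<SAGROW> build up the array of points
end
msg.Points = pts;

send(trajPub, msg);
rosshutdown
```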
So with that, I think we covered the whole cycle. And I hope I showed you a little bit that model-based design can be a way to start with a robot model, design your autonomous algorithms, and then deploy them to the robot. And again, we've seen success in many of the companies we worked with in using this approach to dealing with more complex smart factory problems and using cobots to address those problems.
So here are two more examples that have a bit more of a practical setting. This first one here is a simple warehouse pick-and-place application, where a robot detects and sorts objects and places them onto shelves. Again, it uses the same methodology that I just described, but for a slightly more complex application.
A lot of companies we work with also deal with Delta robots. In this case, you might place objects onto a conveyor belt. Again, the same principle applies. Different type of platform. Different type of planning you're performing. But the same steps and the same modularity apply that I showed you in the earlier workflow steps.
So here's a quick recap of what I talked about. So I talked about kind of advanced robotic systems and how they will shape the factory of the future and then talked about the three pillars of what I see as important steps for developing reliable, safe, and scalable autonomous robotic systems. And I hope you got a flavor of what model-based design is and how it helps in those applications and how it helps our customers deploy those cobots. And with that, I want to say thank you and I'm looking forward to your questions in the Q&A. Thanks.