Mass Estimation for Robot Manipulators - MATLAB & Simulink

    Mass Estimation for Robot Manipulators

    From the series: Perception

    One way to improve the robustness of robot manipulation is to expand the amount of physical information available from the operating environment, both to better complete tasks and to develop more comprehensive robot intelligence. Obtaining more environmental information is usually addressed by adding different types of sensors to the robot hardware. However, this video shows how to use estimation techniques to reconstruct the weight of a grasped object using only the position information already available from the robot’s joints.

    Published: 12 Apr 2022

    Welcome, everybody, to the MATLAB and Simulink Robotics Arena. Today we have Marco from the Biorobotics Laboratory at the National Autonomous University of Mexico. He's going to be talking to us about mass estimation for robot manipulators. Hey, Marco, how are you doing?

    Hi, Jose. Thanks for inviting me to this space.

    Marco is going to be presenting his research, and this is the agenda for today. He's going to be talking a little bit about manipulation perception; fault reconstruction and how to derive an object's weight and mass from different types of signals; how to use sliding mode observers for this; the estimation scheme he used in his research; and finally, how to model the manipulator dynamics to make all of this possible, followed by a software demonstration.

    Well, thanks, Jose. Manipulation perception is an important capability in domestic service robots, and the generic manipulation approach for a domestic service robot is generally composed of four steps. First, the robot recognizes and locates the object; second, it plans a movement for grasping the object; third, it performs some low-level control, usually trajectory-tracking control; and finally, the robot grasps the object. For most domestic service robots, these are the four general steps for manipulating something.

    But what's the problem? What if the user asks the robot to bring a beverage, and the robot finds one, for example a can on a table, but there are two cans, one empty and one full? With these four steps, manipulation perception relies mainly on visual information. Robots commonly use cameras, or sometimes lidar sensors, but most of the information is only visual.

    So what happens if there is an empty and a full container? With this approach, the robot is unable to distinguish which container is full and which is empty.

    Great. I think the interesting thing about this research is that what you're proposing does not require any sensors other than the ones already in the robot that you're using to control it, right?

    Yeah, correct.

    Awesome.

    So what we propose to distinguish an empty from a full container is simply to estimate the mass of the grasped object, and we're proposing to use a fault reconstruction approach. Fault reconstruction consists of detecting behavior in our system that differs from the nominal one. In the case of the manipulator, we consider the nominal behavior to be the behavior when nothing is being grasped. So we need to identify the system to know what the nominal behavior is.

    And what we assume, of course, is that the object exerts a kind of external torque on each joint of the manipulator. So what we are doing is trying to estimate the torque exerted by the weight of the object. What we need is a correctly identified system, which can be a hard assumption, but there are also many well-known algorithms for estimating the physical parameters of the manipulator.
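    In symbols, this is the standard relation (the specific grasp-point wrench below is our assumption about the setup): with the geometric Jacobian $J(q)$ at the grasp point, an object of mass $m$ maps into joint torques as

        $\tau_{\text{ext}} = J(q)^{\top}\,[\,0 \;\; 0 \;\; 0 \;\; 0 \;\; 0 \;\; -mg\,]^{\top}$

    so reconstructing $\tau_{\text{ext}}$ lets you solve for $m$ wherever this mapping is not singular.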

    And later we will actually discuss what happens when we don't know the exact values of the physical parameters. The more accurate the system model, the more accurate the estimation of the object's mass will be.

    Marco, I think what you mean by having an accurate system model is mostly knowing the weight and the geometry of the components in the robot arm, right?

    Mainly, yes, the geometry: the distances between the joints and the kinematic chain, which is not difficult. We also need a good estimate of the masses of the links, because part of the torque each joint exerts goes just to supporting the manipulator's own weight. So we also need the masses. But again, there are other algorithms for estimating these parameters.

    So how can we detect and reconstruct this fault? The most classical approaches for fault reconstruction are based on what are called residual generators. Residual generators are observers designed to be sensitive to the fault signals, but the approach we're using is sliding mode observers. This kind of observer is designed to robustly track the measured outputs even in the presence of disturbances or faults. In this case, we don't have disturbances, we just have faults, which are external signals.

    And the great advantage of these sliding mode observers is the discontinuous term that is injected based on the output error. This discontinuous term can be low-pass filtered, and with appropriate low-pass filtering we can estimate the value of the fault signal. This is the great advantage of sliding mode observers, which is the approach we are taking for this estimation.
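    To make this concrete, here is a minimal MATLAB sketch of the idea on a single joint: a super-twisting-style sliding mode observer tracks the measured position, and low-pass filtering its discontinuous injection recovers the torque exerted by the object's weight. All parameters and gains are illustrative assumptions, not the values from the actual research.

        % Minimal sketch, 1-DOF joint: J*qdd = tau - b*qd - tauF, where tauF is
        % the unknown torque from the grasped object's weight.
        J = 0.05; b = 0.1; g = 9.81; L = 0.3;   % identified nominal parameters
        mObj = 0.25;                             % "unknown" object mass (kg)
        k1 = 15; k2 = 40;                        % super-twisting observer gains
        alpha = 20;                              % low-pass filter bandwidth (rad/s)
        dt = 1e-4; T = 0:dt:10;

        q = 0; qd = 0;                           % plant state (position, speed)
        qh = 0; qdh = 0;                         % observer state
        tauFilt = 0;                             % filtered injection term
        mHat = zeros(size(T));

        for k = 1:numel(T)
            % Simple PD control driving the joint toward a grasping pose
            tau  = 2*(pi/4 - q) - 0.5*qd;
            tauF = mObj*g*L*cos(q);              % torque exerted by the object
            qdd  = (tau - b*qd - tauF)/J;        % true plant dynamics

            % Sliding mode observer: only the position q is measured
            e1   = q - qh;
            qhd  = qdh + k1*sqrt(abs(e1))*sign(e1);
            inj  = k2*sign(e1);                  % discontinuous injection
            qddh = (tau - b*qdh)/J + inj;        % nominal model + injection

            % On the sliding surface the average of inj equals -tauF/J, so
            % low-pass filtering -J*inj estimates the fault torque
            tauFilt = tauFilt + dt*alpha*(-J*inj - tauFilt);
            mHat(k) = tauFilt/(g*L*cos(q));      % invert the gravity regressor

            % Euler integration of plant and observer
            q  = q  + dt*qd;   qd  = qd  + dt*qdd;
            qh = qh + dt*qhd;  qdh = qdh + dt*qddh;
        end
        plot(T, mHat), xlabel('time (s)'), ylabel('estimated mass (kg)')

    The estimate converges to 0.25 kg once the observer reaches the sliding surface; the low-pass filter trades convergence speed against chattering in the recovered signal.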

    And for anybody watching, if you're interested in learning about the approach Marco has taken in his research in more detail, we will link the formal paper and the files necessary to reproduce and explore it in the video description.

    So the estimation scheme we propose is formed of these five blocks. The first one, of course, is the system itself. We assume that we only measure the manipulator's joint positions; measuring the speed is achievable and there are sensors for it, but the most commonly available sensors measure only the joint position. And we assume that the input signals are the torques sent to the joints of the manipulator.

    To work with these signals, we designed a sliding mode observer. The sliding mode observer is used only for estimating the weight of the grasped object. And we need to drive the manipulator to certain positions, because imagine, for example, that the arm is just hanging.

    Well, then the grasped object just hangs from the manipulator and does not exert any torque on the joints, so there is no meaningful signal for reconstructing the weight of the object. We need the object to exert torque on at least one of the joints, so we need to drive the manipulator to certain positions; for example, a position for taking an object, which is mostly the manipulator extended in front of the robot. That's a very good configuration.

    So for moving the manipulator, we also designed a PD+ joint controller. This PD+ joint controller needs the joint speeds, which are not measured, so they need to be estimated. We implemented an extended Kalman filter for this because sliding mode observers have the disadvantage of chattering. If we used chattering speed estimates, we would also generate control signals with chattering, which is not good for the joints. So instead of a sliding mode observer, we use an extended Kalman filter for the speeds.
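    As a rough illustration of the control side, here is a set-point controller with gravity compensation, a simplified stand-in for the PD+ law described here (the actual PD+ controller also includes model-based feedforward terms); the robot object, gains, and variable names are assumptions for the sketch:

        % Simplified stand-in for the PD+ controller: PD action plus gravity
        % compensation for set-point regulation. Assumes 'robot' is a
        % rigidBodyTree with DataFormat 'row'; qDes, q, qdHat are 1-by-n rows.
        n  = numel(q);
        Kp = 50*eye(n);  Kd = 5*eye(n);          % illustrative diagonal gains
        e   = qDes - q;                          % position error
        tau = gravityTorque(robot, q) + e*Kp - qdHat*Kd;   % torque command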

    So the extended Kalman filter makes smooth estimates of the speeds, but it is not capable of reconstructing the fault signal; and the sliding mode observer reconstructs the fault signal, but is not good at estimating the speeds for position control. That's why we use these two kinds of observers. Finally, the position controller generates the torques, which are used both for estimating the weight and by the extended Kalman filter, because all the input signals sent to the manipulator are known.
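    For reference, here is a minimal sketch of a speed-estimating EKF for one joint using MATLAB's extendedKalmanFilter object; the discretized model, noise levels, and variable values are illustrative assumptions:

        % Minimal EKF sketch for one joint: state x = [position; speed],
        % only the position is measured.
        dt = 1e-2; J = 0.05; b = 0.1;
        f = @(x,u) [x(1) + dt*x(2);                  % Euler-discretized model
                    x(2) + dt*(u - b*x(2))/J];
        h = @(x) x(1);                               % measurement: position only
        ekf = extendedKalmanFilter(f, h, [0; 0]);
        ekf.ProcessNoise     = diag([1e-8, 1e-4]);
        ekf.MeasurementNoise = 1e-6;

        % One step of the control loop (example inputs):
        tau = 0.5; qMeas = 0.01;                     % applied torque, encoder reading
        predict(ekf, tau);                           % propagate with the model
        xHat  = correct(ekf, qMeas);                 % fuse the position measurement
        qdHat = xHat(2);                             % smooth speed estimate for PD+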

    Well, we need a model of the manipulator, which in the case of our robot is a 7-degree-of-freedom manipulator, for which obtaining the analytical model can be really hard. So instead of an analytical expression, we use the numerical solution provided by Robotics System Toolbox.

    So we just need a representation of the manipulator, and we import it from a URDF description. URDF files describe the kinematic configuration of a robot and are widely used in the robotics community, so it's very common to have a URDF describing your hardware. Then we use the importer from Robotics System Toolbox, and with this toolbox we can generate the model, because it has blocks for the matrices of the model.
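    In script form, the same workflow looks roughly like this; the URDF file name is a placeholder, while the function names are the Robotics System Toolbox API:

        % Import a manipulator model from a URDF description and evaluate the
        % terms of the dynamics M(q)*qdd + C(q,qd)*qd + g(q) = tau numerically.
        robot = importrobot('manipulator.urdf');     % placeholder file name
        robot.DataFormat = 'row';                    % q, qd as row vectors
        robot.Gravity = [0 0 -9.81];

        q  = homeConfiguration(robot);
        qd = zeros(size(q));

        M = massMatrix(robot, q);                    % joint-space inertia M(q)
        c = velocityProduct(robot, q, qd);           % Coriolis term C(q,qd)*qd
        g = gravityTorque(robot, q);                 % gravity torque g(q)

        % Inverse dynamics for a desired acceleration qddDes:
        qddDes = zeros(size(q));
        tau = (M*qddDes')' + c + g;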

    OK, and I think if you look into the blocks and features of Robotics System Toolbox, you will find not only the blocks that Marco is using, but also inverse kinematics, forward kinematics, inverse and forward dynamics, gravity torque, and a bunch of other utilities and functions. You can see how they're used in this research, but you can also use different ones for your own projects.
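    For example, the inverseKinematics solver mentioned here can be used like this, assuming the robot object from the previous sketch; the end-effector body name and target pose are made up for illustration:

        % Solve inverse kinematics for a target end-effector pose. The body
        % name 'gripper' is illustrative; use the names from your own URDF.
        ik = inverseKinematics('RigidBodyTree', robot);
        weights = [0.25 0.25 0.25 1 1 1];        % orientation vs. position weights
        targetPose = trvec2tform([0.4 0 0.3]);   % 40 cm ahead, 30 cm up
        [qSol, solInfo] = ik('gripper', targetPose, weights, homeConfiguration(robot));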

    So let's do a demonstration of how the system works. We are going to show you how it works using only Simulink, and also how it can be integrated with the ROS platform, because ROS is also widely used in robotics implementations.

    Thanks, Marco. If anybody is interested in looking at or following along with the files that Marco will be showing, there's a GitHub repository containing everything, which you can check out and download files from; we will link it in the description of the video.

    Thanks, Jose. In the repository you can find files for running simulations with Simulink only and also with ROS. Let's check the Simulink-only simulations first. You can find simulations using our manipulator and also another simulation using the well-known Katana manipulator. OK, so let's check the full implementation.

    As we said before, we have a block simulating the manipulator dynamics. In this block, we measure only the joint positions, and the inputs are the torques sent to the joints. We also add a small noise signal, because there is always some noise in the measurements. And we are simulating an object: this parameter is the mass of the object carried by the manipulator. In this case, the manipulator will be carrying an object of 250 grams.
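    A script-level equivalent of that simulation block might look like this; it reuses the robot object and q, qd, tau from the earlier sketch, and the gripper body name, noise level, and mass are illustrative:

        % Simulate the manipulator carrying an object: apply the object's
        % weight as an external wrench at the gripper and integrate the
        % forward dynamics.
        mObj = 0.25;                                 % 250 g object, as in the demo
        wrench = [0 0 0, 0 0 -mObj*9.81];            % [moments, forces], base frame
        fext = externalForce(robot, 'gripper', wrench, q);  % 'gripper': placeholder
        qdd  = forwardDynamics(robot, q, qd, tau, fext);

        % Only noisy joint positions are exposed to the estimators
        qMeas = q + 1e-4*randn(size(q));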

    In this block, we have the extended Kalman filter, the PD+ control, a polynomial trajectory just for moving the manipulator, the sliding mode observer, and of course the estimated mass, which is the main goal of this research.

    OK, so let's run the simulation. The manipulator starts moving, and we can check the estimation of the mass and see how it converges to the real value. We can stop the simulation and change this mass, for example to only 100 grams. Of course, this mass should be within the hardware limits of the actuators.

    Also, the manipulator moves to a suitable position. The axis of the wrist, or some axis, should not be parallel to the gravity force, because if it is parallel we are in a singularity and it's not possible to estimate the weight. So we just need to move the manipulator to such a position. We use this one because it's very common for grasping objects with domestic service robots. We can check that after some seconds the mass converges to the real value.
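    A quick way to check for that singularity in MATLAB is to verify that the object's weight actually maps into joint torques at the chosen configuration, again assuming the robot object and gripper body name from the earlier sketches:

        % Check that the weight produces measurable joint torques at q.
        Jg = geometricJacobian(robot, q, 'gripper'); % 6-by-n, angular rows first
        tauObj = Jg' * [0 0 0, 0 0 -mObj*9.81]';     % joint torques from the weight
        if norm(tauObj) < 1e-3
            warning('Near-singular pose: the weight exerts almost no joint torque.')
        end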

    Just to mention: we don't build the model of this manipulator by hand. As mentioned before, we import it from the URDF description of our manipulator. And using the exporter tools for standalone ROS nodes, we can generate ROS nodes. We don't generate a single ROS node for the full system, because that's not good for software extensibility and maintenance; it is better to have isolated nodes, each performing a single, well-defined task. So we generate one node for simulating the manipulator, another for the extended Kalman filter, another for estimating the mass of the object, and another for the control, but all the nodes were generated with the Simulink exporter.

    So I will open the terminal, as is commonly done in ROS implementations. We will run our implementation; in this separate window we can see a visualization of the manipulator, and we can use this small graphical user interface to run it. Let's say that the manipulator will be carrying 200 grams, and let's move the manipulator to some predefined position.

    So the manipulator will start moving, and after some seconds it starts the estimation of the mass, because we need the robot to be in a given position and not moving. It is possible to estimate the mass while the manipulator is moving, with speeds different from zero, but it is harder. So at this stage of our research, we are just assuming that the manipulator has moved to a certain position and stays in that position. And we can see that the estimation of the mass converges to the real value.

    Sounds good. Thanks, Marco. And would you mind showing us what one of the Simulink models looks like when you want to deploy it as a ROS node?

    Sure. In the same repository you can actually find the models for these standalone nodes. Let's check, for example, the sliding mode observer. This is the algorithm that we use in our system: the sliding mode observer, the low-pass filter that we mentioned before, and the mass estimator. From the signals of the sliding mode observer, we can estimate the mass. We just added the relevant publishers and subscribers for interacting with the other nodes. And with the exporter, we just build the model, select the ROS platform as the target, and Simulink generates the code necessary for the standalone ROS node.
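    For a sense of the pub/sub interface such a node exposes, here is a MATLAB-side sketch using ROS Toolbox; the topic names and the estimateMass helper are hypothetical, not the repository's actual interface, and it assumes a ROS master is already running:

        % Hypothetical MATLAB-side interaction with an estimator node's topics.
        rosinit                                      % connect to the ROS master
        jointSub = rossubscriber('/joint_states', 'sensor_msgs/JointState');
        massPub  = rospublisher('/estimated_mass', 'std_msgs/Float64');

        js  = receive(jointSub, 5);                  % wait up to 5 s for joint data
        msg = rosmessage(massPub);
        msg.Data = estimateMass(js.Position);        % estimateMass: hypothetical
        send(massPub, msg)
        rosshutdown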

    I think that's it for the demo. We're going to go back and talk about some key takeaways. We hope you enjoyed this video. One key takeaway from the research we just showed you is that you can now take the object's weight into account in any of your manipulator programming algorithms, whether you want to account for something heavier or lighter, or make different decisions based on the weight or mass of what you're picking up. And like I said, all of the code and research that we presented here today is open source, so feel free to reuse it for your projects. And Marco is going to give us a couple of tips on using and reusing this research.

    OK, first of all, when you use this code with your manipulator, choose an appropriate joint: any joint whose axis of rotation is not parallel to the gravity force, and preferably perpendicular to it. Also, you should have a good model of your system; the better identified your manipulator, the better your estimation will be. And MATLAB and Simulink have a lot of tools for performing this identification.

    Another limitation at this stage of our research is the influence of the object's position along the gripper: the farther out the object sits in the gripper, the greater the lever arm of the torque will be, and this can alter the estimation of the mass. So try to make your manipulator grasp the object at the center of the gripper; that is the best position for this measurement.

    Awesome. Thanks, Marco, for your research and for sharing it with us. We hope that you found something of value here that you can reuse. If you're interested in learning more about how to use MATLAB and Simulink for robotics, feel free to contact us at roboticsarena@mathworks.com. Check out our Facebook group and our blog, and get in touch if there are any other topics that you want us to cover.