Design and Simulate Inertial Navigation Systems
Overview
Pose estimation and localization are critical components for both autonomous systems and systems that require perception for situational awareness. Fusing data from multiple sensors and applying fusion filters is a typical workflow required for accurate localization. Sensor simulation can help with modeling different sensors such as IMU and GPS. Moreover, simulated data can be used to augment the data recorded or streamed from inertial sensors. Further, you can use filters to fuse individual measurements to provide a better result. You can tune these filters based on the sensors and the end-application.
Highlights
Through several examples, we will cover:
- Modeling sensors, tuning filters and calibrating IMU measurements with Allan Variance
- IMU, GPS, altimeter, and wheel encoder sensor fusion for inertial navigation
- Pose estimation from asynchronous sensors fusing accelerometer, gyroscope, magnetometer and GPS
- Automatic filter tuning to find optimal noise parameter values
- Visual-inertial odometry using synthetic data
About the Presenters
Rick Gentile, Sr. Product Manager, MathWorks
Rick Gentile focuses on Radar, Signal Processing, and Sensor Fusion applications at MathWorks. Prior to joining MathWorks, Rick was a Radar Systems Engineer at MITRE and MIT Lincoln Laboratory, where he worked on the development of many large radar systems. Rick also was a DSP Applications Engineer at Analog Devices where he led embedded processor and system level architecture definitions for high performance signal processing systems, including automotive driver assist systems. Rick co-authored the text “Embedded Media Processing” and maintains a monthly blog on mwrf.com. He received a B.S. in Electrical and Computer Engineering from the University of Massachusetts, Amherst and an M.S. in Electrical and Computer Engineering from Northeastern University, where his focus areas of study included Microwave Engineering, Communications and Signal Processing.
Mihir Acharya, Product Manager, MathWorks
Mihir Acharya supports Robotics and Autonomous Systems applications at MathWorks, focusing on Autonomous Navigation. Prior to MathWorks, Mihir has worked with ABB Corporate Research where he designed robot end effectors for pick-and-place. Mihir also worked with Omron Robotics and developed path planning applications for mobile robots. Mihir has an M.S. in Robotics Engineering from Worcester Polytechnic Institute (WPI).
Brian Fanous, Principal Software Engineer, MathWorks
Brian Fanous develops inertial navigation and sensor fusion tools for MathWorks products. Prior to that, Brian designed HDL and GPU algorithms for signal processing and communications toolboxes at MathWorks. He previously worked at MIT Haystack Observatory developing large radio astronomy array hardware. Brian received a B.S. in Electrical Engineering and a B.A. in Mathematics from Tufts University and an M.S. in Electrical Engineering from the University of California at Berkeley.
Recorded: 25 Aug 2022
Hello, everyone. Today in this session, we will discuss various components involved in inertial navigation and pose estimation. I am Mihir Acharya, and I work with the Robotics and Autonomous Systems team at MathWorks. I've been here at MathWorks for more than three years now, and my focus has been on applications for autonomous navigation. And along with me in this panel today we have Rick and Brian.
Thanks, Mihir. And hi, everyone. My name is Rick Gentile. My background's in radar and sensor fusion. And I've been at MathWorks for about six years, where I focus on signal processing, radar, and sensor fusion products. Thanks for attending our session today.
Hi, everyone. My name's Brian Fanous. I'm a developer in the signal processing group. And I've been at the MathWorks for 14 years. For the last several years, I've been working on building inertial navigation features for the toolboxes.
All right. Let's get started with some discussion for this topic here, so starting with localization. Localization is a critical component in autonomous systems. And for our session today, we use the term "autonomous system" broadly to include systems like robots, drones, self-driving cars, and many other platforms.
Mihir, along with the autonomous systems, vehicles that need to maintain situational awareness use localization algorithms as well. They do this with a range of sensor modalities, sensors like cameras, lidar, radar, and sonar.
Localization is also referred to as pose estimation. Pose is built from position and orientation using one or multiple sensors. In the case of multiple sensors, it's often called multi-sensor pose estimation.
We also call it localization with sensor fusion. Now, each of these sensors is typically mounted in a different location on the autonomous system. When measurements from these types of sensors are combined, the fusion algorithms need pose information for the autonomous system to make sense of each of the sensor measurements. This pose ensures that we always have a consistent translation that integrates the sensor mounting position and the field of view of the sensor with respect to the platform pose.
An autonomous system also needs to make decisions based on its surroundings. These systems often use SLAM, that is, simultaneous localization and mapping, to generate a map of the environment in which the system will operate. This could be the inside of a building, an urban neighborhood, or even an unexplored planet. Along with SLAM, the autonomous system has to navigate using path planning techniques.
Now for SLAM and path planning as well, the autonomous system needs to understand its pose at times to make an accurate map and successfully move from point A to point B without colliding with another object.
Now, an autonomous system uses a combination of sensors to generate a pose that Mihir and Rick had described. Inertial sensors such as accelerometers and gyroscopes are typically used along with sensors like magnetometers and GPS. In areas where GPS is not available, for example, underwater or in a dense urban environment, the autonomous system could use position information from wheel encoders, or Doppler velocity logs, and various sources of odometry. Sometimes the inertial sensors are integrated with GPS to provide an integrated navigation system called an INS.
Now before we continue with the panel and the rest of the session, I would like to highlight the focus of our session today. Using various examples, we will cover how you can localize or estimate the pose of a given platform. We saw that determining the pose involved processing data from multiple sensors, such as accelerometers, gyroscopes, magnetometers, and GPS.
Now you may or may not have the sensor data available when you're developing your inertial navigation algorithms. So the first part of our session is about how you can generate or bring sensor data into MATLAB, as well as analyze and calibrate that data. In the second part, we will talk about how you can use different filters in MATLAB to fuse the data from multiple sensors and estimate the pose. Now let's bring back our panel and start with the first part, and see some options for how we can bring sensor data into MATLAB.
To get started, you have a couple of options. You might already have your own localization data. We will talk about how to bring this data into MATLAB, either as recorded data or as streamed data. If you don't have access to the data, that's no problem. There are sensor models in MATLAB and Simulink to cover the types of sensors that Brian and Rick mentioned earlier.
That's right, Mihir. As you noted, you can generate data directly from our sensor models to test your localization and fusion algorithms. Now even when you have your own data, it's easy to augment those data sets with synthesized data. Think about the cases that are difficult to create in the field. We'll look at how to create ground-truth trajectories that you can use to generate data for these cases.
The key to synthesized data is how close it matches the data collected with hardware. You have many options to configure the sensor models to match off-the-shelf hardware. For example, you can open the data sheet of an accelerometer and translate that data sheet directly into the sensor model configuration.
Now the workflow we'll describe can be used to set design choices in your system. For example, sensor update rates can help you drive your communication latency requirements. Also at the system level, you can determine how many sensors you need, where they need to be mounted, and most importantly, the quality of the measurements that they generate.
Now when you develop inertial navigation systems, one of the first stages of your algorithm development involves getting data you can use to test your algorithms. As we discussed earlier, this data can come from a variety of sources. I'd like to describe how you can use sensor models in MATLAB to generate data that you can use to develop and test your inertial navigation systems.
To generate synthetic data from localization sensors, you start with trajectories that represent the ground truth motion of the systems that need to be localized. This could be from a robot, an autonomous system, or anything in motion for that matter. The types of sensors we have include inertial measurement units, or IMUs, in addition to GPS, altimeters, and wheel encoders. You can use the ground truth motion as an input to the sensor models. The output of the sensor models includes measurements you would expect to see for the specific sensor.
To generate the ground truth, you can specify a trajectory using waypoints, or these can be based on kinematics, things like velocity and acceleration. We'll take a look at a simple example shortly. The output of the trajectory generation includes position, orientation, velocity, acceleration, and angular velocity.
These outputs are fed as inputs into sensor models. The output of a sensor model includes measurements. And we'll next look at some simple examples using an IMU and GPS. But other sensor models work in a similar manner that we'll demonstrate.
Let's start with the IMU sensor model. You can generate accelerometer, magnetometer, and gyro readings. You can use these models to simulate rotations and translations for 6, 9, and 10 DOF systems. You can also model specific hardware by using values directly from the data sheets of these devices. And you can tune these parameters to mimic real world environments.
All right, let's take a look at this in MATLAB. I want to start with just a quick overview of the different trajectory options that I mentioned, starting with the waypoint trajectory.
You can see this is the default configuration, and it's for a single waypoint. You have the option of specifying multiple waypoints here, and also times of arrival and other related information. This helps you create a trajectory of motion that you can use to feed a sensor model.
You can see the default here is North East Down, which is typically used in systems that are airborne. But there's also the option to set it to ENU, which is more typical of a ground vehicle or robot. There's a similar setup for a kinematic trajectory. So if you prefer to set up your trajectory as a kinematic definition, this includes a starting position, as well as things like the orientation, velocity, and acceleration of the platform.
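As a rough sketch of the trajectory setup described here (the waypoints, arrival times, and sample rate are purely illustrative), a waypoint trajectory in MATLAB might look like this:

```matlab
% Illustrative waypoint trajectory in the ENU frame.
waypoints = [0 0 0; 10 0 0; 10 10 0; 0 10 0];   % meters
toa       = [0; 5; 10; 15];                     % times of arrival, seconds
traj = waypointTrajectory(waypoints, toa, ...
    'SampleRate', 100, 'ReferenceFrame', 'ENU');

% Each call steps the trajectory and returns ground-truth motion
% (call it in a loop, or set SamplesPerFrame, to cover the full path).
[pos, orient, vel, acc, angvel] = traj();
```

A kinematicTrajectory object can be used in a similar way if you prefer to define the motion from velocity and acceleration instead of waypoints.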
Now let's take a look at the IMU in a little bit more detail. And as I mentioned, you can specify the different configurations and different combinations of these sensors. In this case, you can see the gyro, the accelerometer, magnetometer. We'll focus on the gyro just to give you an idea of what happens.
So here the definition of the gyro includes a lot of measurement, axis-alignment, and noise parameters, as well as temperature-driven parameters. And by default, these are all set to ideal. But we can easily program these in MATLAB to include a more specific definition. And this will typically come from the data sheet of the product or device that you're using.
Now, as I noted, you can pick the different combinations here. I have all of the sensor types enabled, but I'll only focus on the gyroscope in this case. And so when we look at the definition of the IMU in MATLAB, you can see that there's the gyroscope parameter, which is the one we'll focus on, the magnetometer, the accelerometer, and additional information related to how the sensor generates data and the configuration in which it operates, including the temperature and sample rate.
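As a rough sketch, an IMU model with gyroscope parameters filled in from a hypothetical data sheet might look like the following; the numeric values are illustrative only.

```matlab
% IMU model with accelerometer, gyroscope, and magnetometer enabled.
imu = imuSensor('accel-gyro-mag', 'SampleRate', 100);

% Gyroscope definition taken from a (hypothetical) device data sheet.
imu.Gyroscope = gyroparams( ...
    'MeasurementRange', deg2rad(250), ...    % rad/s
    'Resolution',       deg2rad(0.0625), ... % rad/s per LSB
    'NoiseDensity',     deg2rad(0.015), ...  % (rad/s)/sqrt(Hz)
    'ConstantBias',     deg2rad(0.5));       % rad/s

% Ground-truth motion from a trajectory object (as above) in,
% simulated sensor readings out.
[accelReadings, gyroReadings, magReadings] = imu(acc, angvel, orient);
```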
Let's just take a look at the output of this code. And this is actually the output of an ideal gyroscope. And in this example, I've just set it up so that you have sliders to control the different parameters of each of the definition items that are in the sensor model. This includes things like the quantization error and the axis misalignment, so you can see the effects of these.
And this kind of a framework allows you to select a device that you'll be using in your end system. You might be considering using one part versus another part. And you can compare side by side how the different performance parameters affect your inertial sensor result. So this is nice to see.
Also you have a range of operating conditions. You can change these sensor parameters to see exactly how your device will perform and get a range of outputs that you can use to test your algorithms. This includes things like biases, noise parameters directly in a system. So you can see if it's worth moving to a more capable device or whether or not you can use the other sensors in the IMU to generate the specific accuracy that you're looking for.
As I noted, the other sensor models allow you to generate data in a similar manner. Here you see the GPS model, which enables you to set high-level accuracy and noise parameters, as well as the receiver update rate and the reference location. You can simulate the output of GPS receivers based on local position and velocity ground truth. This allows you, again, to explore the effects of losing satellites, for example.
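A minimal sketch of that GPS model, with illustrative accuracy and reference-location values:

```matlab
% GPS receiver model; numbers here are placeholders, not data-sheet values.
gps = gpsSensor('UpdateRate', 1, ...                  % 1 Hz receiver update rate
    'ReferenceLocation', [42.2825 -71.343 53.0], ...  % lat (deg), lon (deg), alt (m)
    'HorizontalPositionAccuracy', 1.6, ...
    'VerticalPositionAccuracy',   3.0, ...
    'VelocityAccuracy',           0.1);

% Local ground-truth position and velocity in, GPS readings out.
[lla, gpsVel, groundSpeed, course] = gps(pos, vel);
```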
Now that we've seen how to synthesize measurements, it's important to discuss how you can use these types of workflows even when you have your own sensor data. This is because it can be challenging to capture all the corner cases in the field. This brings us to the motivation for our next example, in which we use noise data from a sensor to further refine and tune our sensor model settings.
To do this, we use the Allan Variance method on gyroscope measurements. It's worth noting that while we show this workflow with a gyroscope, it can be applied to all other IMU sensors as well. The Allan Variance can be used to identify various noise sources present in stationary gyroscope measurements.
And there are three key noise parameters you see here on the left: N, the angle random walk; K, the rate random walk; and B, the bias instability. These are estimated using logged data from a stationary gyroscope. Now note that all these noise parameters have been calculated. And the plot here shows the Allan deviation with the lines used for quantifying the parameters.
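As a sketch of that computation, assuming a logged stationary gyro signal omega sampled at Fs hertz (the variable names are illustrative):

```matlab
% Allan deviation from logged stationary gyroscope data (rad/s).
[avar, tau] = allanvar(omega, 'octave', Fs);
adev = sqrt(avar);
loglog(tau, adev), xlabel('\tau (s)'), ylabel('\sigma(\tau)')

% Rough read-offs, following the documentation example:
%   N (angle random walk)  ~ value of the slope -1/2 line at tau = 1 s
%   K (rate random walk)   ~ value of the slope +1/2 line at tau = 3 s
%   B (bias instability)   ~ flat-region minimum divided by sqrt(2*log(2)/pi)
```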
And on the right, you can see the plot with the gyroscope model that we created using the IMU sensor model that I talked about earlier. And this generates measurements whose Allan deviation is very similar to the one we obtained from the logged data. Now, the model measurements contain slightly less noise here, since the quantization and temperature-related parameters are not set in the gyro parameters. But these could also be refined.
The gyroscope model can then be used to generate measurements for movements that are not easily captured with hardware. So, again, even if you have your own hardware, being able to move to the sensor model allows you to create corner cases that are specifically difficult to create in the field. So you can augment the data that you've collected in the field once you have your model in sync with the actual hardware that you're using.
Now, each of these examples and workflows is also available in our online documentation, and I include some of them here. We have many others to look at, including a review of quaternions and orientation, which is great for helping you understand three-dimensional rotations and how they're handled using the tools that I've described.
Finally, before we discuss how to bring data into MATLAB from sensors, we also have a model of an inertial navigation system that I want to talk about. The focus of this webinar is really building algorithms to determine orientation or pose, as you see here on the left. These types of algorithms typically run at rates in the hundreds of hertz.
We also have an INS sensor model, which can be used to generate INS measurements at rates that are more typical of SLAM, path planning, and multi-object tracking algorithms. The output of your inertial filters usually has an associated accuracy in the measurements being generated, and you can use this to help configure the INS sensor, which can then be mounted on your virtual autonomous system for accurate simulations. Now I'd like to turn it back over to Mihir to talk about some examples which bring data into MATLAB from actual sensors.
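As a rough sketch, an insSensor model configured from hypothetical filter accuracy numbers might look like this:

```matlab
% High-level INS model; accuracy values are illustrative placeholders
% that you would derive from your inertial filter's error statistics.
ins = insSensor('RollAccuracy', 0.2, 'PitchAccuracy', 0.2, ...
                'YawAccuracy', 1, 'PositionAccuracy', 1);  % degrees, meters

% Ground-truth motion in, noisy INS measurements out, suitable for
% SLAM, path planning, or multi-object tracking simulations.
gt = struct('Position', pos, 'Velocity', vel, 'Orientation', orient);
insMeas = ins(gt);
```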
Thanks, Rick. We saw how you can use sensor models to generate data for developing and testing your inertial navigation algorithms. Now when you have your algorithms ready to be tested with the data coming from a real sensor, you can stream the sensor data in MATLAB from a variety of hardware platforms, such as devices that include IMU and GPS sensors.
In some workflows, streaming the data from the hardware could be useful to plot, analyze, and understand the sensor output even before developing the algorithms. MATLAB provides several examples and tutorials that show how you can stream the data and apply filters to estimate the position and orientation.
Let's first briefly look at how you can stream data from an IMU sensor. You can wirelessly stream 9-axis accelerometer, gyro, and magnetometer data from an IMU chip to MATLAB using an Arduino and a Bluetooth module. This example shows a step-by-step process to connect, stream, and filter the data for sensor fusion.
MATLAB supports many popular IMU sensors that you can stream data from. These sensors can transmit data either wirelessly or over a serial connection using a USB cable. This example shows you how to stream the IMU data and apply fusion algorithms for orientation estimation. MATLAB provides a support package that lets you connect the Arduino with MATLAB.
Here you can see how the streamed data can be used to estimate the orientation of the sensor chip. The estimation in this video is only using accelerometer and magnetometer readings. I'll share the link to the full video at the end so you can watch how adding a gyro will improve the estimation.
Now, this is something you can do to test with off-the-shelf IMU sensors. However, we also carry all these sensors right there in our pocket, that is, in our smartphone. Next I'll show you how you can seamlessly stream your phone's sensor data right into MATLAB. If you don't already have it on your phone, let me first introduce you to the MATLAB Mobile app for smartphones running Android or iOS.
If you have a MathWorks account, you can use MATLAB Mobile to connect with MATLAB on your desktop using a support package. You can acquire sensor data from the sensors in your phone, such as the accelerometer and GPS. Now let's switch to MATLAB Mobile and see how it looks. MATLAB Mobile sends the data in real time over a Wi-Fi or cellular connection to MATLAB running in the MathWorks Cloud. You can use the command line interface, which has the ability to interact with device sensors.
Let's go into the Sensor menu. Here you can choose if you want to stream data to MATLAB or to log the data locally as a .MAT file. For now, we'll see how to stream the data. You can collect sensor and camera data from your smartphone and log the data in MATLAB. We can see here acceleration, angular velocity, and other sensor data can be collected.
We can enable these sensors and start streaming the data. But for that, first we need to establish the connection between MATLAB Mobile running on the phone and MATLAB on the desktop where we want to stream this data. As the instructions shown here indicate, we need to create an object called mobiledev on the desktop. For the sensor data, the device orientation convention is shown in this figure.
We first need to install a support package in MATLAB, and then we can use a couple of commands to stream the data from our phone. There's a script on the desktop where we use the first three commands to create a mobiledev object and start logging the accelerometer sensor data. The rest of the script is for plotting a part of the data as it streams from the phone.
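For reference, the streaming commands described here look roughly like this, assuming the phone support package is installed; the pause duration is just illustrative:

```matlab
% Connect to the phone running MATLAB Mobile and stream accelerometer data.
m = mobiledev;                    % connection object
m.AccelerationSensorEnabled = 1;  % turn on the accelerometer
m.Logging = 1;                    % start streaming/logging

pause(10);                        % collect about ten seconds of data

m.Logging = 0;
[a, t] = accellog(m);             % timestamped acceleration in m/s^2
plot(t, a); legend('X', 'Y', 'Z');
```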
Here we can see that the z-axis shows a positive value when the device is lying flat on a surface. When the device lies flat, the reading counters the acceleration of gravity, which is negative 9.81 meters per second squared. Now when we pick up the device, we see the change in its z-axis reading in the plot.
Now, let's make it a bit useful. Currently I'm working from home, and getting distracted with my phone can be pretty easy. What if MATLAB can remind me to focus when I pick up my phone? I wrote a very simple condition to demonstrate the possibilities, but you can use these capabilities with MATLAB for your applications in a more robust way.
Now in cases where you don't have the network connection, you can log the sensor data as well. We can simply go to the Sensors menu and change the Stream To setting to Log. I will also change the sample rate to match the code that I'll show later. Now I will turn on the sensor measurements as well as the orientation output that comes from the phone's built-in orientation estimation algorithm.
Now we'll rotate the phone around each of its axes once. And then I'll stop and save this log. The logged data can be retrieved on your desktop using MATLAB Drive. So now we have this data in a .MAT file.
And this brings us to the end of this section. Now I will join back with the panel for the next part of the session, that is, pose estimation.
So now we have the IMU data, or maybe data from a GPS. Now let's talk a little bit about what we can do with that and how our tools in MATLAB can help.
All right, Mihir, well, let's start with just an IMU. Now, engineers use IMUs to determine the orientation and heading. You might see an IMU-only system on a robot that needs to balance, or determine how an arm is positioned, or maybe on a wearable virtual reality headset, or as part of a larger navigation system, for example.
Right. The IMU is fundamental to any of these navigation systems because it can basically determine orientation. We have a few filters that can compute orientation from just IMU readings. They return orientation as a rotation matrix or a quaternion. We do almost all of our orientation math using quaternions. And we have a quaternion library in the toolbox which allows you to do all sorts of conversions and basic math operations.
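For reference, a few of the quaternion conversions and operations mentioned here look like this; the angles are purely illustrative:

```matlab
% Build a quaternion from Euler angles in degrees, ZYX frame rotation.
q = quaternion([30 20 10], 'eulerd', 'ZYX', 'frame');
R = rotmat(q, 'frame');           % convert to a rotation matrix
e = eulerd(q, 'ZYX', 'frame');    % back to Euler angles in degrees

% Compose rotations and measure the angle between two orientations.
q2  = q * quaternion([0 0 45], 'eulerd', 'ZYX', 'frame');
ang = dist(q, q2);                % quaternion distance, in radians
```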
Now, what's inside these filters? How do they really work? Could someone deploy them to a device?
Sure, Rick. So at the IMU-only level, we have four different fusion algorithms, depending on what combination of accelerometers, gyroscopes, and magnetometers you want to fuse. Some are fairly simple; some use Kalman filters, which provide smooth outputs and good noise reduction but are higher complexity. All of our filters support code generation, so they can all be deployed to an embedded system.
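As a sketch of what one of these IMU-only filters looks like in code, here is imufilter, which fuses accelerometer and gyroscope readings into orientation (ahrsfilter adds the magnetometer); the sample rate and variable names are illustrative:

```matlab
% IMU-only orientation estimation from accelerometer and gyroscope readings.
fuse = imufilter('SampleRate', 100, 'OrientationFormat', 'quaternion');

% accelReadings and gyroReadings are N-by-3 arrays (m/s^2 and rad/s);
% the output is an N-by-1 quaternion (a rotation-matrix format is also available).
q = fuse(accelReadings, gyroReadings);
```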
Thanks, Rick and Brian. Now let's continue with our smartphone demo and apply what we learned from Rick and Brian just now.
AHRS, or attitude and heading reference system, is used to integrate multi-axis accelerometers, gyros, and magnetometers to provide an orientation estimate of a platform. The AHRS filter in MATLAB takes acceleration readings, gyro readings, and magnetometer readings and provides the orientation estimate as output.
Now I'll go over this example that shows how to estimate the phone's orientation in MATLAB using the AHRS filter. We will use the data we logged from the phone previously using MATLAB Mobile. I've already loaded the .MAT file here.
Since the data is recorded in the coordinate frame of the phone's convention, we will first convert it to the industry-standard North East Down, or NED, coordinate frame. We use a quaternion here, which is one of the ways to specify a rotation of an object.
As you may remember, we have collected the orientation estimate from the phone as well. Now, we estimate the orientation using the AHRS filter in MATLAB, and we can compare these two results as well. But the phone may have a random rotational offset, which can make the comparison difficult.
So first we remove the initial rotation offset here using the first four samples of the data. Now, the noise parameters could differ for different phones, from sensor to sensor. So we need to tune the filter to optimize the noise parameters specific to my phone's sensor.
So the last preprocessing step here is to tune the AHRS filter. We can do that using the Tune function. The inputs to the Tune function are AHRS filter object, sensor data, and ground truth. We use the orientation data logged from the phone as ground truth here to tune our filter. Now, finally, we will use the filter object to fuse the data from these three sensors.
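As a rough sketch of these steps, assuming the ahrsfilter object and its tune method from the Sensor Fusion and Tracking Toolbox (the timetable and variable names here are illustrative):

```matlab
% AHRS filter running at the phone's logging rate Fs.
fuse = ahrsfilter('SampleRate', Fs);

% sensorData: timetable with Accelerometer, Gyroscope, Magnetometer columns;
% groundTruth: timetable with the phone's logged Orientation.
% The tuner adjusts the filter's noise parameters to best match the ground truth.
tune(fuse, sensorData, groundTruth);

% Fuse the three sensors to estimate orientation as quaternions.
qEst = fuse(accelData, gyroData, magData);
```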
Now, we can plot and analyze the results. Let's start with a visual representation. Here we can see at the high level that the orientation estimation is working. We rotated the phone once around each axis, and that's what we are seeing here in the plot as well.
We can also analyze this more closely through a couple of plots. The first plot here shows the Euler angles for each orientation estimate from our AHRS filter. And we can see that it shows the first rotation here, the second one here, and the third one, around the z-axis, here. We can also see the unintentional movements that I made during the rotations.
The second plot shows the phone's orientation estimate, which looks pretty much the same on a visual comparison with the AHRS filter in MATLAB. But other than comparing them visually, the third plot here is to compare them using a quaternion distance between the two orientation estimates. Quaternion distance is measured as the angle between two quaternions. And this distance can be used as an error metric for orientation estimation here.
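For reference, that error metric is a short computation; a minimal sketch, assuming qEst holds the filter's quaternion output and qPhone the phone's logged orientation:

```matlab
% Quaternion distance (angle between the two estimates), in degrees,
% summarized as an RMS error.
errAngle  = rad2deg(dist(qEst, qPhone));
rmsErrDeg = sqrt(mean(errAngle.^2));
```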
So we can see here the difference between the orientation estimate from the phone and from our filter in terms of RMS error in degrees. And this shows that the two estimates are very similar. So this brings us to the end of this demo, where we saw how to bring sensor data from a smartphone, pre-process and tune the sensor data, and use the AHRS filter to estimate the phone's orientation and visualize it in a few different ways.
Until now, we saw how to estimate orientation with the IMU data. Now let's join along with Rick and Brian again to learn further about other sensors and filters as well.
So in cases where GPS signals are available, maybe we can also combine GPS data with the IMU. For users who want to find their position or velocity, I think just IMU data might not be sufficient. Is that correct, Brian?
Yeah, that's right, Mihir. We have four filters to fuse IMU data with other sensors to get position and velocity. Typically the other sensor is GPS, but it could be odometry. They all use extended Kalman filters, which is the standard approach to this kind of problem.
Knowing which filter to use when is important. For example, if you're trying to compute the position of a ground vehicle, like a car, you probably want to use the filter with the nonholonomic constraints, which basically means the device always moves forward not sideways. Let's see another demo of this in MATLAB.
Let's talk a bit about fusion filters that estimate position and orientation. We call that pose estimation. For these filters, we're still using an IMU as a core sensor. But a low-cost IMU alone is not typically sufficient to estimate position. So here we're going to add additional sensors to fuse with the IMU and help determine both position and orientation.
I want to recommend you take a look at this page in our documentation. It goes over all the filter options in the toolbox and offers some guidance on when to use them. It describes the motion models they use, the filter structures, and what sensors are expected.
Let's look at one of these filters up close. The insfilterAsync is the most flexible of all the filters that compute both position and orientation in our toolbox. It fuses accelerometer, gyroscope, magnetometer, and GPS. But the sensors can run at any rate, and the filter can tolerate sensor dropout.
So, for example, if you have an accelerometer running at 80 hertz and a gyroscope running at 100 hertz, the insfilterAsync is the filter you should be using. Being able to tolerate dropout is important if you're in a car, say, going into a tunnel, and you lose your GPS signal.
In this example, we'll use the filter to fuse sensor data from a device traveling in a simple circular trajectory. We've pre-computed a circular trajectory and the resulting simulated data from sensor models, and we've saved them offline. So we're simply going to load that up, and that's what these lines here do.
And now let's actually go ahead and construct the filter. So this is the insfilterAsync display. It looks like a lot of the other fusion filters. There's a state and a state covariance property. These are the heart of the extended Kalman filter in this filter.
And then you see a bunch of process noises. And these have to do with your motion model and how accurate it is. We have a separate set of demos that figure out how to set these values in an automated way for you, so it's not a trial-and-error process.
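A minimal sketch of that construction (the noise value at the end is illustrative only):

```matlab
% Construct the asynchronous IMU/GPS fusion filter.
filt = insfilterAsync('ReferenceFrame', 'NED');

% The extended Kalman filter lives in the State vector (orientation, angular
% velocity, position, velocity, acceleration, sensor biases, geomagnetic field)
% and the StateCovariance matrix.
disp(filt.State)

% Process noises such as QuaternionNoise, PositionNoise, and AccelerationNoise
% describe how much you trust the motion model; set them by hand or find them
% automatically with tune, as shown later in the session.
filt.AccelerationNoise = 50;   % illustrative value, not a recommendation
```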
So the device is going to travel-- let's just run the rest of the demo here. So the device is going to travel in a circular motion. Now on the left here, there are two boxes showing the position ground truth and the position estimate from the filter. The red is the ground truth, the blue is the estimate.
In the center here we have orientation, ground truth, and estimate. So you can see as the filter's going-- as the car is going around, the orientation's changing. And on the right, we have a bunch of checkboxes and pulldowns for sample rates of sensors and to turn sensors on and off. Now, you can change sample rates if you want and see what, if any, effect there is on the output. What I like to do is just turn some of these on and off to show you what dropout looks like.
So say we turn off the GPS, you'll see that the position estimate takes off and is quite bad very quickly. And if we put the GPS back on, it snaps back to the right spot. Similarly, if we turn the magnetometer off, you'll see the orientation estimate starts to wiggle a bit.
And these two start to get out of sync. It's a little hard to tell. They're out of sync there, right? And then so if we turn the magnetometer back on, they snap back to the right-- to a better estimate of the orientation.
So the insfilterAsync actually has a bunch of fuse methods for each sensor that we've set it up to work with: fusegps, fuseaccel, fusegyro, and so on. You can see those here.
There are also several residual methods for figuring out how close the sensor data that you're providing is to what the state vector is and what the filter expects, and you can figure out if you have an outlier that way. There's also a correct method if you want to bring your own sensor data. So if you have something else that measures position besides the GPS, or something like an orientation sensor, you can bring those measurements in and fuse those as well.
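As a rough sketch of these per-sensor calls (the readings and covariance values here are illustrative placeholders):

```matlab
dt = 1/100;
predict(filt, dt);                               % advance the motion model

fuseaccel(filt, accelReading, Raccel);           % accelerometer and its covariance
fusegyro(filt, gyroReading, Rgyro);              % gyroscope
fusemag(filt, magReading, Rmag);                 % magnetometer
fusegps(filt, llaReading, Rpos, gpsVel, Rvel);   % GPS position and velocity

[estPos, estOrient, estVel] = pose(filt);        % current estimate
res = residualgps(filt, llaReading, Rpos);       % residual, useful for outlier checks
```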
So as I said, the insfilterAsync is the most flexible of all the filters we offer because it allows for variable sensor rates and for sensor dropout. There are a bunch of different filters to choose from in the toolbox, so there should be one that matches your target application. And I suggest you refer back to that page in our documentation that goes over all the different types.
Guys, I've seen customers try to use these Kalman filters before. And they can be challenging to configure correctly. There's a handful of noise parameters for each filter that need to be set based on the individual sensor and how the device is moving as well. Do we have anything that can help with that?
Yeah, you're right, Rick, these are tricky. We used to manually tune these parameters to get optimal performance, somewhat by trial and error. But now all the filters support a Tune function that automatically finds all the parameters for you. There's a demo of this that makes all this much easier.
So we talked about filters that you can use to estimate position, or orientation, or pose. But one of the challenges with using these filters is getting the noise properties set properly. We call this process "tuning the filter," and this is a real challenge for using any Kalman filter. The math is one thing, but there's a huge amount of know-how, experimentation, and trial and error involved in figuring out what you should set your noise values to.
There are both process noises and measurement noises. Process noises deal with how true your motion model is to the world your device is moving in and how it's moving. And measurement noises are how true your sensors are to the quantities they're supposed to be measuring. And the manufacturer gives you some help with the measurement noises in their data sheets, but those are really just starting points.
So finding the right measurement and process noises to get the filter to perform the way you want it to can be trial and error. And if you do it manually, it's really a headache. When I used to do it by hand, it took quite a long time.
So what we've done is added a Tune function to all the fusion filters to automate this process for you. It's an iterative optimization algorithm that works quite well. And it is much faster than doing it by hand.
Let's look at this demo of automatic tuning for the insfilterAsync. Now even though this demo's for the insfilterAsync, the techniques apply to all the filters: the AHRS filter, the IMU filter, the insfilterNonholonomic, all of them.
OK, so we're going to do a little setup. We're going to take a random trajectory from a bunch of random waypoints and then send that trajectory through the IMU sensor and the GPS sensor. And those are set up to mimic common MEMS sensors. So we're going to run this section.
And we're going to store the sensor data in a timetable and the ground truth in a timetable. And timetables are great for these because you can have gaps in the data, and they can all fit together. So you can have different sample rates.
We're going to construct the filter and then initialize it. And we're initializing it with the ground truth of the state vector, so the initial filter will match very well. We're cheating a little bit here. You're not going to have access to the ground truth in your real-world scenario.
But this is just to get the demo to run a little faster. It doesn't take away from the need to tune it. You'll see that even a well-seeded filter that has a good starting point but isn't tuned at all will perform badly. So let's look at that now.
So we have the filter, and it's starting from a good point. We're just going to run it untuned and see what happens. So let's fuse the data, and let's look at the position estimate.
So you see it started from a good place because we seeded the state with the ground truth. But pretty quickly, the filter estimate diverges pretty wildly from the ground truth. The ground truth is in yellow, and the estimate from the filter is in red. So this is unusable in this state. The filter is no good for us.
We need to find the right noise values. So let's go through the tuning process, and it's quite short. You're going to create a configuration object, and that's only needed if the tuner's default configuration isn't something that works for you. We're just going to run for 15 iterations instead of the default 20. You should probably run it for 40 or 50, depending on your application.
And then you just call Tune on the filter and give it the sensor data and ground truth. Basically you're saying, for this input sensor data, I want the state estimate to be this ground truth. And the tuner's going to try and minimize the RMS error between the state vector and the ground truth over time.
You're also passing in this measurement noise struct, which basically says, here is my initial guess at the accelerometer noise, the gyroscope noise, what have you. And here we haven't even guessed very well. We're just setting them all to 1.
Now, when the tuner's done, it leaves you with a tuned filter, where all the noise values are now set to something that the tuner's figured out is optimum, and it returns to you a struct of measurement noises. And it's figured out that, OK, the accelerometer noise should be this, the gyroscope noise should be that.
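As a rough sketch of the tuning calls described here (the variable names are illustrative, and the noise-struct fields follow the corresponding documentation example):

```matlab
% Tuner configuration; only needed if the defaults don't suit you.
cfg = tunerconfig('insfilterAsync', 'MaxIterations', 15);

% Deliberately rough initial guesses for the measurement noises.
mn = struct('AccelerometerNoise', 1, 'GyroscopeNoise', 1, ...
            'MagnetometerNoise', 1, 'GPSPositionNoise', 1, 'GPSVelocityNoise', 1);

% sensorData and groundTruth are timetables of readings and true pose over time.
% The tuner adjusts the filter's noises to minimize the RMS state error and
% returns the tuned measurement noises.
tunedNoise = tune(filt, mn, sensorData, groundTruth, cfg);
```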
And so it ran for several iterations. And over the course of this, we've gotten the RMS error down quite a bit lower, maybe seven or eight times lower. So let's see how much better the tuned filter is than the untuned filter. Let's run this section.
Now we have more plots. This is the position again. So, again, the yellow is the ground truth, the red is the untuned filter, and the blue is the tuned filter.
And you can see we're really matching the ground truth very well in the x and y. The z is a little bit off, but we didn't run the tuner that long. And if we ran for more iterations, we could minimize this further. So that's a huge improvement for just a couple of minutes of letting the tuner run.
Now, for the orientation, I like to look at quaternion distance. Some people compare the Euler angles. For me, quaternion distance makes a lot more sense. It's a single number that compares the two orientations, and it ranges from 0 to pi. Here we'll look at it in degrees so it's easier to talk about.
So here's the orientation. The untuned filter was off by over 30 degrees at some points. So that's pretty bad. The tuned filter got the orientation error down to, at most, five degrees. So that's a whole lot better for just a couple of minutes of auto-tuning. So hopefully you see that this can be a powerful workflow for getting your filters up and running quickly.
There's a second follow-on demo that I'd refer you to in our documentation, which covers custom tuning of the AHRS filter. And that shows you how to do things if you want to minimize something other than the RMS error, like maybe jointly minimizing the RMS error and the state covariance, or something like that. That demo actually tries to heavily penalize outliers and minimizes the fourth power of the error. So I'd refer you to that if you need the more advanced workflow. But hopefully you can get going with just this.
All right. So in this session, we walked you through a workflow to obtain pose in cases where you may have recorded sensor data, as well as different ways you can bring in or generate sensor data in MATLAB.
We showed some demos for sensor simulation, how you can stream sensor data from hardware to MATLAB, and how you can choose a filter for pose estimation in your application.
And there's a lot of resources available on our MathWorks.com site that you can use to get started. We have a tutorial video series, we have Getting Started e-books, and lots of descriptive examples that we went through today.
And we will be happy to learn about your specific use case, so please feel free to reach out to us. Thank you for listening to this session. And please let us know if you have any questions now. Thank you.