Overview of AI Applications in Industrial Automation and Machinery - MATLAB

    Overview

    This webinar is Part 1 of the Artificial Intelligence in Industrial Automation and Machinery series.

    AI is everywhere. It's not just powering applications like smart assistants, machine translation, and automated driving; it's also giving engineers and scientists a set of techniques for tackling common tasks in new ways. And yet, according to recent estimates, while many organizations recognize the value and potential of AI, few are using it—Gartner's recent survey of 3,000 companies indicated that, of the 50% that are beginning to plan for AI, only 4% have actually implemented it.

    Many organizations are deterred by what they see as the overwhelming challenges of implementing AI:

    • Belief that to do AI, you need to be an expert in data science
    • Concern that developing an AI system is time-consuming and expensive
    • Lack of access to good quality, labeled data
    • The cost and complexities of integrating AI into existing algorithms and systems

    This talk will show how engineers can tackle the challenges of using AI within manufacturing, with examples from research and industry where these techniques have been applied.

    Topics include:

    • Visual Inspection using CNNs
    • Predictive Maintenance using Autoencoders
    • Soft Sensors using LSTMs
    • Robotics using Reinforcement Learning
    • Digital Twins using Model-Based Design
    • Production Deployment

    About the Presenters

    David Willingham is a Principal Product Manager responsible for MATLAB’s Deep Learning Toolbox. David joined MathWorks in 2006 and has since amassed over 14 years of applied engineering experience supporting a variety of application areas in artificial intelligence, including deep learning, machine learning, reinforcement learning, predictive maintenance, statistics, big data, and cloud computing. David has worked with clients in the finance, mining, medical, aerospace, automotive, industrial automation, and energy industries, and has published papers on predictive maintenance at AUSIMM mining industry conferences (Mill Operators 2016 and Iron Ore 2017).

    Rareș Curatu is responsible for driving the business development of the Industrial Automation & Machinery industry for MathWorks in EMEA. Working with companies in energy generation and transmission, machine building, and automation equipment manufacturing, Rareș helps them to adopt Industry 4.0 and Industrial IoT workflows based on MATLAB and Simulink.

    Recorded: 22 Sep 2021

    Hello and welcome to the session where we will discuss an overview of artificial intelligence applications in industrial automation and machinery. My name is Rares Curatu. I'm an industry manager at The MathWorks, and I specialize in industrial automation and machinery. I am, by profession, an automation systems engineer, and I work with companies that develop automation components and automation systems to support their engineering research and development efforts in Industry 4.0 and industrial IoT projects.

    Together with us today, we have David Willingham, senior product manager responsible for MATLAB's deep learning toolbox. David joined The MathWorks in 2006 and during this time amassed over 15 years of applied engineering experience supporting a variety of application areas in artificial intelligence, such as deep learning, machine learning, reinforcement learning, predictive maintenance, statistics, big data, and cloud computing.

    Working at The MathWorks in the industrial automation and machinery field, we get to meet and talk to a lot of innovative engineering companies. Many of them are at the leading edge of their respective domains and have a lot of questions they need to find answers to. Here are some of the challenges reflected in interactions we've had with senior executives at companies that have adopted our tools.

    Can models run alongside plants 24/7? Can quality control be automated? Can maintenance be predicted on a machine earlier than otherwise? Can virtual sensors help condition monitoring? Can robots perform decisions in dynamic environments? And can simulations of physical systems help diagnose operational issues? We have structured today's agenda to cover exactly these questions.

    So today, we will talk about deploying AI models in manufacturing, automated visual inspection, predictive maintenance, soft sensors, robotics using reinforcement learning, and digital twins used for generating synthetic data. The MathWorks is dedicated to accelerating the pace of discovery, innovation, development, and learning in engineering and science.

    When it comes to AI and engineering, experience has shown us that engineers and domain specialists are the people who are best qualified to efficiently solve challenges and successfully apply AI-based solutions in their respective domains. What they need, though, is the right tools. We developed mathematical computing software, MATLAB and Simulink, which enables engineers to develop algorithms, to analyze data, as well as to design, simulate, and test systems before moving to hardware.

    We understand that AI success demands more than just an effective model or algorithm. It is about delivering a product or service to the market that is based on AI. It is our understanding of that necessity, together with The MathWorks' family of software tools that spans the entire workflow and incorporates AI, that makes the difference. And analysts like Gartner recognize this as well.

    For the second year in a row, The MathWorks was named a leader in the Gartner Magic Quadrant for Data Science and Machine Learning Platforms. During today's presentation, we will talk about an AI workflow-- a simplified view that has four main stages.

    One, data preparation. Two, algorithms and modeling. Three, simulation and testing. And four, last but not least, deployment to the field. Through live demonstrations and case studies, we, with Dave's help, will go through all four parts of this workflow, talk about some of the challenges within each stage, and point out how MATLAB can make it all come together efficiently and effectively to deliver value for you.

    Thanks, Rares. So I'm actually going to start at the right side of the AI system design workflow, with deployment. Now, this may sound odd: why start with deployment? Because where and what you're deploying your AI model into really dictates the types of models you can develop and deploy.

    For example, with industrial automation, this could be deploying your model to some sort of smart asset on site. It could be an edge system, some sort of operational technology (OT) infrastructure, or an IT system. I'll give some examples of what those look like in a few moments, but what I'm going to focus on here is this particular graph.

    Each of these systems reacts to the data it receives in different ways, "react" being when a decision needs to be made. For example, does a decision have to be made in a hard real-time scenario? Is it less time sensitive, maybe within a few seconds? Or does it require massive amounts of data to drive the decision? This is where it gets interesting.

    There's a trade-off across these systems between the speed at which a decision needs to be made and the scope of what the decision covers. For example, monthly sample data most likely doesn't have any impact on hard real-time control problems.

    Whereas millisecond data logged from a multitude of different smart assets doesn't necessarily help someone make a decision if they're monitoring all of those assets as a collective. So, to put this in perspective, here is where the models are going to be deployed.

    On the left, a smart asset could be a model deployed to a PLC, an FPGA, a CPU, or a GPU. An edge system could be an industrial PC sitting alongside the equipment. For OT, this is a server on site, where all of the information from a particular plant is consolidated to a central server and monitored from some sort of control room.

    And then for IT systems, information from multiple plants and multiple assets is fed into a central location and monitored centrally, usually from some sort of cloud-based, dashboard-type system. Each of these has different requirements for how and where an AI model might sit.

    On the left, models are integrated into the native coding language of the device being deployed to. On the right, the models are compiled, deployed, and integrated into the software components and systems in which they need to reside. It's for this reason that MATLAB provides two pathways for deploying AI models.

    The first is automatic code generation, which allows users to deploy a model as native, optimized code for that device. The second is software component generation, where a model is compiled and deployed along with the MATLAB runtime so that it can be integrated anywhere; for example, on any of the different cloud platforms you see there.
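    To make the two pathways concrete, here is a minimal sketch of how each is typically invoked. This is not from the webinar; the entry-point function detectDefect.m is hypothetical.

        % Pathway 1: automatic code generation -- emit native, optimized
        % C/C++ code for the target device (here, as a static library)
        codegen detectDefect -args {ones(224,224,3,'single')} -config:lib

        % Pathway 2: software component generation -- compile the same
        % function into a standalone executable that runs against the
        % MATLAB Runtime and can be integrated into other systems
        mcc -m detectDefect.m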

    So I mentioned to you that where you deploy your model matters, and what I'm going to show you now is a particular use case in which this was applied. Rares mentioned in the introduction that Mondi asked the question: can a model run alongside the plant 24 hours a day, seven days a week?

    Their motivation was that they wanted to deploy a predictive maintenance and condition-monitoring system into production software that ran alongside the machines, so that operators could monitor how well those machines were operating and, if needed, take them down, thanks to an AI model that they developed.

    And they were able to confirm that this was possible after they prototyped the solution in MATLAB and then ultimately deployed it for the operators to use on site. You can see there that they got some pretty impressive cost savings. So now we're going to move on to the next part of the talk, and I'll hand it back to Rares.

    One of the applications that has seen significant improvement thanks to AI is automated optical inspection. As the industry adds more and more AI for defect detection, vision-based AI systems are becoming an important component of manufacturing machinery.

    Let's first try to define what we mean by automated optical inspection. "Automated optical inspection is the image-based or visual inspection of manufacturing parts where a camera scans the device under test for both failures and quality defects." There are various other terms that are used for this process. Automated defect detection, machine vision, visual inspection, or automated inspection can all refer to automated optical inspection in one way or another.

    An example of a real-life application comes from Musashi Seimitsu Industry, who prototyped an anomaly detection system using deep learning with MATLAB for inspecting bevel gears used in automotive parts. After verifying the accuracy and speed, they are now carrying out demonstration experiments at the manufacturing site. This approach is expected to reduce the workload and cost of manually operated visual inspection of 1.3 million parts per month.

    In this project, Musashi Seimitsu worked with MathWorks consultants to build a camera connection setup, preprocess images, create a custom annotation tool using App Designer, and improve the model accuracy. They generated code for the trained model using GPU Coder and implemented it on an NVIDIA Jetson. They then leveraged the results on a PLC. Dave, could you please give us some insight into how an AI-based automated optical inspection system works?

    Thanks for the overview, Rares. This is typically what an automated optical inspection system looks like: you have a manufacturing belt with parts on it, an inspection camera capturing images of those parts, and a system that analyzes those images to classify each part as good or defective. And it's doing it in an automated way.

    And the reason we want to do this in an automated way is the scale and velocity at which we can produce these classifications of good or faulty. If we were to try to do this manually, at a relatively low scale it's pretty easy to spot a defective part just by looking at it. But as we go on to analyze more and more images, it becomes progressively harder to find the defective ones.

    It just really gets difficult. So we need to take a deep learning approach to developing a system that can automate that previously manual process, based on the images that are collected and captured. And so we're going to go through the AI system design approach that we showed earlier.

    But let's put it into a bit of context. First, we're going to take a look at all of the different images we have available and make sure that we've got the appropriate data required to build a model, so we're going to do some preprocessing. We're then going to take an existing model that has been created-- we're not going to build one from scratch-- and do something called transfer learning, where we take the existing pre-trained model and modify it for use with the data that we have.

    We're then going to test the model's performance against what we trained, to see how well it's going. Then we'll do some manual inspection of those results to see why it made the predictions it did. And once we're satisfied, we'll deploy the model onto some piece of embedded hardware.

    This could be any piece of embedded hardware. In this case, what you see here is a video of an embedded GPU, but it could also be a Raspberry Pi or some other type of system. So what I'm now going to do is jump into MATLAB and show you how we can put this all together.

    OK. So here I am in MATLAB, and we're going to show how it can be used to develop that end-to-end system for automated visual inspection. I'm going to start with the current folder browser here on the left. You can see I've got a couple of folders under the subfolder images, where all of my defective and good part images are located.

    The first thing I want to do is get all of that data into MATLAB in a quick way, without too much drama. The good news is there's a nice construct called an image datastore. It lets me just point at the folder, and it creates a pointer to all of those images-- and that's pretty much it. I don't have to do much more to load my data in.

    All right. So I've got my data there, and it shows the count of good and bad. I'm actually quite happy that there are more defective parts than good ones in my labeled images; I think I can get a pretty good model out of the data I have. We've also got some visualizations here to manually check what the data looks like and make sure there's nothing weird about it. Everything looks fine at the moment, so we can start preparing the data for training.
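    In code, that loading step is as simple as described. A minimal sketch, assuming the folder layout from the demo (an images folder with good and defective subfolders):

        % Point an imageDatastore at the images folder; the subfolder
        % names ("good", "defective") are picked up as labels automatically
        imds = imageDatastore('images', ...
            'IncludeSubfolders', true, 'LabelSource', 'foldernames');
        countEachLabel(imds)        % counts of good vs. defective images

        % Manual spot check of a few images
        figure
        for i = 1:9
            subplot(3, 3, i)
            imshow(readimage(imds, i))
        end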

    So we're going to split the data into a training set and a validation set-- we train on one set of data and validate on the other. Then we define the network we want to use to train this particular model. We're going to use a pre-trained model called SqueezeNet.

    A pre-trained model is a model architecture that has already been trained on a number of different image classification categories-- I think there are about 1,000 categories this particular model has already been trained on. Even though a pre-trained model was built to do a different task, it has still learned a lot of the underlying features that are relevant to the images I want to analyze.

    So pre-trained models are a way to dramatically reduce the time you need to train a deep learning model for your particular task. What we do is take that pre-trained model-- loading it is a one-liner in MATLAB-- and do what's called transfer learning. I could do transfer learning interactively; we have an app for it. But in the interest of time, I'm only going to show you how to do it through code.

    All this is really saying is: take this pre-trained model, point it at my data set and what I want to classify, and retrain it for what I would like to do. Along with this, we're going to add what's called data augmentation. Even though I've only got a few hundred images, there is a way to get a lot more, which will allow the model to train to a more accurate result.

    It's called data augmentation, and for images it's relatively straightforward to do. We take the original images and rotate and reflect them in a variety of different ways, so the model has different views of the data to train on-- effectively giving it new data. So we add this to enrich the data we have prior to training. We select some training options, and then we call trainNetwork to train that deep learning model. And away it goes.
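    For reference, here is a sketch of what that transfer learning code can look like. The layer names follow MathWorks' published SqueezeNet transfer learning example; the split ratio, rotation range, and training options are placeholders, not the webinar's actual settings:

        % Split into training and validation sets
        [imdsTrain, imdsVal] = splitEachLabel(imds, 0.8, 'randomized');

        % Load pre-trained SqueezeNet and adapt it for 2 classes
        net = squeezenet;
        lgraph = layerGraph(net);
        numClasses = 2;
        newConv = convolution2dLayer(1, numClasses, 'Name', 'new_conv');
        lgraph = replaceLayer(lgraph, 'conv10', newConv);
        newOut = classificationLayer('Name', 'new_classoutput');
        lgraph = replaceLayer(lgraph, 'ClassificationLayer_predictions', newOut);

        % Data augmentation: random rotations and reflections
        augmenter = imageDataAugmenter('RandRotation', [-20 20], ...
            'RandXReflection', true, 'RandYReflection', true);
        inputSize = net.Layers(1).InputSize;
        augTrain = augmentedImageDatastore(inputSize, imdsTrain, ...
            'DataAugmentation', augmenter);
        augVal = augmentedImageDatastore(inputSize, imdsVal);

        % Train (runs on the CPU if no GPU is available)
        options = trainingOptions('adam', 'MaxEpochs', 10, ...
            'ValidationData', augVal, 'Plots', 'training-progress');
        convnet = trainNetwork(augTrain, lgraph, options);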

    The interesting thing here is that I'm doing this all on a CPU. For a lot of image-based deep learning models, it's better to use a GPU-- it'll train a lot faster. But I don't have access to one, so I'm just going to see how well this goes with my CPU. And lo and behold, it was able to do this in a relatively short amount of time: about 20 minutes to get close to 100% accuracy. So, yay, I've got my model. Now I want to test it out.

    One of the best ways to test it is to load in one of the images-- in this case a defective one, with a little defect there in the bottom left-- and run my model against it. Then you can apply some visualization techniques to see what made the model react and predict the way it did.

    I'm using a visualization technique that will show me: yes, you've detected a defective part, but please tell me why. And you can see it's lit up here in the bottom left where that part was defective. So I'm pretty happy that the model is doing what it should, and I'm ready to deploy it into production.
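    The webinar doesn't name the exact visualization technique; Grad-CAM is one such option built into Deep Learning Toolbox (R2021a and later). A sketch, with a hypothetical file name and the trained convnet from above:

        % Classify a single test image and ask the network "why"
        img = imread('defective_sample.png');   % hypothetical file name
        img = imresize(img, inputSize(1:2));
        label = classify(convnet, img);

        % Grad-CAM highlights the pixels that drove the prediction
        map = gradCAM(convnet, img, label);
        figure
        imshow(img)
        hold on
        imagesc(map, 'AlphaData', 0.5)
        colormap jet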

    All right. So the next thing we need to do is take that model and create a function-- a standalone unit that takes an input image and outputs whether it's defective or not. We take the model, create a function that sets the input and output, and then go ahead and generate our code for the particular target we want.

    I'm going to use our code generation technology to do this. Now, there's a variety of targets you can output this function to. If I want to test it on my PC, I can output it to C++ and test it then and there on my machine, and that's what we do here: codegen, as C++. So I can get the executable and run it on my local machine. Or I could re-jig it to target a particular embedded device.

    In the lead-up, I showed you it running on a GPU. In this particular code, though, I'm going to show you how you could target a Raspberry Pi by defining the right architecture for the code the Raspberry Pi accepts.

    So we choose an ARM output, and away we go. We can connect to our Raspberry Pi, and yep-- the model runs the same on that hardware. So here you've just seen an end-to-end system design workflow approach to developing an automated visual inspection system.
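    A hedged sketch of that deployment step follows. The function, MAT-file, and input size are hypothetical, and the ARM Compute Library support is assumed for the Raspberry Pi target:

        % Entry-point function (classifyDefect.m), loading the trained
        % network once via a persistent variable:
        %   function label = classifyDefect(img)
        %       persistent net
        %       if isempty(net)
        %           net = coder.loadDeepLearningNetwork('convnet.mat');
        %       end
        %       label = classify(net, img);
        %   end

        % Target 1: C++ library to test locally on the PC
        cfg = coder.config('lib');
        cfg.TargetLang = 'C++';
        codegen -config cfg classifyDefect -args {ones(227,227,3,'single')}

        % Target 2: ARM architecture (e.g., Raspberry Pi), using the
        % ARM Compute Library for the deep learning kernels
        cfg = coder.config('lib');
        cfg.TargetLang = 'C++';
        cfg.DeepLearningConfig = coder.DeepLearningConfig('arm-compute');
        cfg.DeepLearningConfig.ArmArchitecture = 'armv7';  % the Pi's architecture
        codegen -config cfg classifyDefect -args {ones(227,227,3,'single')}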

    Electromechanical machinery requires constant maintenance to ensure safe and reliable operation. Under increasing competitive pressure, companies are turning to data-based approaches to optimize the way machinery and other equipment are used, in order to maximize uptime. Here's where predictive maintenance plays a very important role.

    But first, what is predictive maintenance? According to one definition, "predictive maintenance evaluates the condition of equipment by performing periodic or continuous equipment condition monitoring." You might also hear other terms that are related to predictive maintenance, such as anomaly detection, condition monitoring, remaining useful life estimation, early fault detection, or defect detection.

    One example application comes to us from LG Energy Solution, who decided to add predictive maintenance functionality to an industrial cutter. LG Energy Solution interfaced MATLAB with third-party data acquisition hardware from NI, trained an AI model in MATLAB using apps that offer a click-based workflow and easy data visualization, and deployed the solution to the plant floor as a standalone executable.

    Dave, could you please give us some examples of how predictive maintenance algorithms can be designed and implemented?

    Sure, Rares. I'm actually going to show three different examples of using AI for predictive maintenance. The first example shows how a deep learning algorithm called an LSTM-- which stands for Long Short-Term Memory network-- can classify the state of operation of an air compressor. The model is trained on sequences of labeled historical signal data, where the state of operation was known, and is then used to predict the state on new incoming sequences. This particular technique is useful for signal-type problems.
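    A minimal sketch of such an LSTM sequence classifier follows. The layer sizes, class count, and the XTrain/YTrain variables are placeholders, not the webinar's actual model:

        % Minimal LSTM classifier for labeled signal sequences
        numFeatures = 1;     % channels per time step (assumed)
        numHidden   = 100;   % LSTM hidden units (assumed)
        numClasses  = 8;     % e.g., healthy plus several fault states (assumed)

        layers = [
            sequenceInputLayer(numFeatures)
            lstmLayer(numHidden, 'OutputMode', 'last')
            fullyConnectedLayer(numClasses)
            softmaxLayer
            classificationLayer];

        options = trainingOptions('adam', 'MaxEpochs', 30);
        net = trainNetwork(XTrain, YTrain, layers, options);  % XTrain: cell array of sequences
        state = classify(net, XNew);                          % state of a new sequence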

    The second example uses a popular deep learning technique typically used on images, called Convolutional Neural Networks, or CNNs, to estimate the remaining useful life of a jet engine. Here, instead of 2D image data, 2D sensor data is used as the input to train a model against labeled remaining useful life, or RUL, from a number of jet engines.

    The trained model is then used to predict the RUL of engines whose status is unknown. The final example is on anomaly detection. Here, a deep learning autoencoder is trained to detect whether a metal-cutting machine is working normally or abnormally. It is trained on historical accelerometer data from when the machine was operating normally-- that is, directly after a scheduled maintenance period.

    This trained model can then be used to monitor the machine over time. At a certain point, if the model output differs and drifts from the sensor data it was originally trained against, it indicates that the part is starting to wear. In other words, the greater the drift, the more anomalies are detected-- an indication that the machine is acting abnormally and is in need of maintenance.

    And so what I'm going to do now is move on to an example showing how you could develop this particular example, the anomaly detector. Once again, I'm going to use the AI system design workflow, where I'll first spend a bit of time determining the best features from the numerous sensor data inputs that I have. I'm going to slice and dice the data in various statistical ways and visualize it to work out what the best features are.

    Then I'm going to do something you've seen earlier: train a model, and then test the model's performance against some incoming data. For this section, I'm not going to focus on deployment-- just the first three stages. So I'll now move into the demo.

    So once again, I'm in MATLAB to show you the predictive maintenance example. In this particular instance, we first load in the data. It's in a MATLAB data file format, so in it goes. To show you what that data looks like: we've got a number of sequences saved across a number of channels, and we've got a nice visualization here that shows how you could look at each of these sequences to see what the data looked like before and after maintenance.

    So we're visualizing the data, and we can see there are some differences. But I really want a statistical way to articulate what those differences are. I could write a bunch of manual routines to determine them, but instead I'm going to use an app called the Diagnostic Feature Designer. It's part of our Predictive Maintenance Toolbox. I've loaded that same data in, and I can say: please generate a whole bunch of signal-based features for me.

    All right. I'm clicking OK, and it's going to give me a histogram for a whole bunch of those statistical techniques and enable me to quickly and accurately understand whether there are any differences between the two outputs. We can see some there for channel 1, but some of the differences aren't too strong. So I'll go ahead and do the same again.

    For channel 2, I'll change the channel here. It's off in the background calculating, and the result is updated here in the app. For channel 2, we can see that there are some differences-- for example, around the peak value there seem to be some differences, and a little bit of difference in the standard deviation.

    Along with that, I can go ahead and rank the features. So here I have a nice ranking of the different channels and which ones are, I guess, the dominant features, which I can then export back to MATLAB for use in my own task. There's even an option here to import them directly into our machine learning app, the Classification Learner.

    But I'm not going to do that. I'm going to show that we get some of those best features out, and we're going to have these features generated for deep learning rather than classical machine learning. OK, so I have the features-- I got them pretty quickly using the app. I'm now going to move on to creating the model.
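    For orientation, signal features of the kind the app generates can also be computed directly in code. A sketch for one signal vector x (the particular feature set is an assumption; skewness/kurtosis need Statistics and Machine Learning Toolbox, rms/peak2peak need Signal Processing Toolbox):

        % Typical signal-based features for one sequence/channel
        feats = [mean(x), std(x), max(x), min(x), rms(x), ...
                 skewness(x), kurtosis(x), peak2peak(x)];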

    So we're going to train something called an autoencoder, which is a particularly useful technique. It takes input data and trains a model to try to regenerate that input data based on some high-level features that it determines. If it can predict the output from those high-level features, then when the autoencoder recreates the incoming data, the reconstruction should match the input exactly.

    So it's an interesting technique: the input is compared to the output, which should be itself. Why this becomes particularly important for predictive maintenance applications is the following. That's the same data we saw before, and we're actually going to train our autoencoder network on periods of good operation, which is directly after a scheduled maintenance period.

    So this is when the machine is operating at its best. We then take that trained model-- trained only on data from when the machine was in good operation-- and test it against data from some time after maintenance. What will happen is that we start to detect abnormalities-- differences between the model's input and its reconstruction-- as the machine ages over time.

    So we've got a snapshot of that over time, and you can see there's some discrepancy between the trained model and its output. But what we really need to do is observe it over time. Here are a couple of plots showing the model and the error right after maintenance-- it's pretty low-- while the top plot shows the error after some time, where the reconstructed output is hitting some high numbers.

    Let's put those side by side and add a threshold, which is somewhat arbitrary. You can see that hardly any anomalies or abnormalities are detected right after the maintenance period, with more and more flagged as the machine wears. So there we have it: we have used an autoencoder to detect that the model's reconstruction is drifting over time and to start flagging anomalies, telling us that predictive maintenance should be performed.
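    A minimal sketch of that autoencoder workflow, assuming XAfter holds features-by-observations healthy data (recorded right after maintenance), XNew holds later data, and the hidden size and threshold are placeholders tuned by inspection:

        % Train an autoencoder only on features from healthy operation
        hiddenSize = 16;
        autoenc = trainAutoencoder(XAfter, hiddenSize, 'MaxEpochs', 200);

        % Reconstruction error drifts upward as the machine wears
        XRecon = predict(autoenc, XNew);
        err = sqrt(mean((XNew - XRecon).^2));   % per-observation RMSE

        % Flag anomalies against an arbitrary threshold
        threshold = 0.5;                        % assumed; tuned by inspection
        isAnomaly = err > threshold;
        fprintf('%d of %d windows flagged as anomalous\n', ...
            sum(isAnomaly), numel(isAnomaly));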

    Equipment and machinery control and monitoring can only be as precise and as accurate as the measurements that are being made. However, some parameters are impossible or very impractical to measure directly. Here is where soft sensors come in. According to one definition, "soft sensor is a common name for software where several measurements are processed together."

    They can also be called virtual sensors, and virtual sensors provide feasible and economical alternatives to costly or impractical physical measurements. Soft sensors or virtual sensors have no industry-standard name; they can also go by terms such as inferential sensing or surrogate sensing. But at the end of the day, it's about getting more from the data you are already measuring.

    One case study comes from RWE Renewables in Germany. In a wind turbine, the failure of an important component-- such as the main bearing-- can lead to long-lasting downtime. RWE Renewables used virtual sensor technology to assess the temperature of undamaged main bearings based on a selection of signals coming from the SCADA system.

    They used data from this virtual sensor to detect temperature deviations, and they have used this tool to detect issues up to nine months before failure. Dave, can you please give us an example of what such an application looks like in MATLAB?

    No problem, Rares. For some systems, you may want to extract a particularly important measurement. In the case of an electric motor, knowing its internal temperature is critical to knowing how it's operating. The challenge is that there may not be a cost-effective or practical way to place a temperature sensor inside the motor for the life of the motor. Therefore, the value needs to be derived from sensor measurements surrounding the motor.

    A model can be trained and tested on a sample of internal temperatures taken manually. And this is the key point: if you can manually acquire some output measurements of the quantity you'd like to predict-- in this case, the internal temperature of a motor-- you can take the sensor inputs you have available, which don't cost much or are easy to obtain, and see whether a model can be trained on them against those manually acquired outputs.

    For the case of an electric motor, we do have that available. So once again I'm going to go through the AI system design workflow: we'll take the input sensor measurements and do some feature engineering, similar to what we saw before; we'll then train a model and test it against the known outputs we sampled manually. For now, we'll ignore deployment.

    So over to MATLAB we go. Once again, here we are in MATLAB. In the first instance, we need to get the data into MATLAB; it's read directly from a CSV file. You can see there are a number of sensor inputs, and we also have the sensor outputs. There are some other variables we may want to predict as well, but the main one here is the PM temperature, which is the permanent magnet surface temperature.

    OK. So we go ahead and do the feature engineering. In this particular instance, we're just going to use some MATLAB functions like weighted moving averages, et cetera. We didn't use the app for this example. That's OK-- not everyone uses apps; some people know which functions they want to use, and in this case, we do. Once we have those features available, we prepare our data for training, testing, and validation.

    So we go ahead and do that, and then we define the network we want to use. In this case, it's another variation of a Long Short-Term Memory network. Then we train the network. What was interesting is that, because this is a time-series model, I didn't need a GPU-- it ran in under 10 minutes. Another training exercise here.

    Once we have the model trained, let's evaluate its performance. The best way to do that is to compare the predicted temperature and other sensor outputs against what was measured manually, and determine how well the model did. Just looking at the plots and the error here, we're doing pretty well: the mean squared error is on the order of 3% to 3.5%. So here we go-- this is how you can develop a soft sensor in MATLAB. This was a short one.
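    A hedged sketch of that soft-sensor pipeline follows. The CSV file name, column names, window length, and layer sizes are assumptions, not the webinar's actual code; featTest/targetTest stand for held-out data prepared the same way:

        % Read the motor data and engineer simple rolling features
        T = readtable('motor_data.csv');         % hypothetical file name
        feat = [T.u_d, T.u_q, T.i_d, T.i_q];     % assumed input channels
        feat = [feat, movmean(feat, 100)];       % rolling averages as extra features
        target = T.pm;                           % permanent magnet temperature

        % LSTM regression network mapping sensor inputs to temperature
        numFeatures = size(feat, 2);
        layers = [
            sequenceInputLayer(numFeatures)
            lstmLayer(64)
            fullyConnectedLayer(1)
            regressionLayer];

        options = trainingOptions('adam', 'MaxEpochs', 20);
        net = trainNetwork(feat', target', layers, options);  % feature-by-time sequences

        % Evaluate against the manually measured temperatures
        pred = predict(net, featTest');
        rmse = sqrt(mean((pred - targetTest').^2));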

    As automation solutions become more sophisticated, machinery is starting to face more complex decision-making. Here's where reinforcement learning is being used, especially where machines need to cope with changes in dynamic environments.

    "Reinforcement learning is an area of machine learning concerned with how intelligent agents ought to take actions in an environment in order to maximize the notion of cumulative reward." This means that the automation equipment-- the agent-- learns by interacting with the environment and the environment provides feedback in the form of reward.

    The agent's policy-- its objectives-- can be represented by deep neural networks. In other words, "in industrial manufacturing, reinforcement learning is used in processes where complex decision-making skills are required, especially where machines need to cope with changes in dynamic environments." You might also hear other terms related to reinforcement learning applications, such as advanced controls, path planning, process optimization, or complex decision-making.

    Why do industrial robots require teams of engineers and thousands of lines of code to perform even the most basic repetitive tasks, while giraffes, horses, and many other animals can walk within minutes of their birth? This is the question that researchers at the University of Southern California set out to find an answer to.

    Using MATLAB, they programmed a robotic limb that learned to move with no prior knowledge of its own structure or environment. Within minutes, their reinforcement learning algorithm learned how to move the limb to propel a treadmill. Dave, can you please show us how robots can learn to solve such challenges?

    I'm going to start by showing a couple of videos of the challenges that robots are trying to solve. Here's one showing path planning and collision avoidance for a packing robot. You can see it's gripping something, trying to avoid some obstacles, and going to place it somewhere.

    Here is that same robot being tested in an industrial setting. While this looks impressive, the algorithms driving it were traditional controls algorithms combined with computer vision. What we're here to talk about are situations where traditional methods really aren't able to solve the problem at hand.

    It's particularly evident in advanced control situations. Take the example of balancing a ball on a plate-- this is a particularly difficult task for traditional controls. So we're going to highlight how a technique called reinforcement learning can be used to train a robot to learn this particular behavior.

    The way we're going to do this is to first take advantage of something called a digital twin. I'll introduce digital twins properly in the next section, so I'll be brief about it now. What we've got here is a representation of all the dynamics of that particular robot, and we're going to use it as an environment in which to train the robot arm to balance a ball.

    In order to train the robot to balance the ball, we have to reward the robot for good behavior and penalize bad behavior. That sets the basis for how a reinforcement learning agent learns to solve the problem. I'm not going to go into too much more detail about reinforcement learning.

    It is a theoretically heavy concept, and it's more on the cutting edge, so this is roughly where I'll leave it. The idea is that you have an environment, you have a reward, and you train a reinforcement learning agent to solve a problem based on that reward.

    We're going to take an AI system design approach to this as well. We'll first create an environment in Simulink, which is another platform we have at our disposal; we'll train our reinforcement learning agent; and we'll test it in a virtual simulation. You could deploy this to a physical robot, but I don't have that robot with me, so we'll leave that part blank. We'll now go over to MATLAB and show you how this works.

    OK. So here we are once again in MATLAB. In this particular script for reinforcement learning, the first thing to highlight is we're going to have an environment in which the robot is going to train in, and it's going to be based in Simulink. So Simulink is another platform unique to our product line. And this is what Simulink looks like. You've got a bunch of blocks that represent a physical system. These blocks are put together so that you can simulate this physical system in a variety of scenarios.

    For example, for this robot arm, if I drill down into this block, you can see it's made up of physical components such as shoulders, biceps, hands, et cetera. You piece these together to build the robot, and then you can simulate the robot performing a variety of tasks. The task we're going to simulate is balancing a ball on a plate.

    If you want to know more about Simulink and how to model physical systems, this webinar isn't the place for it-- we've got a lot of other webinars that show how that's done, so check out our website if you want to see them in more detail. I've got this model pre-built for me, and I'm now going to use reinforcement learning to train this robot to balance a ball. So I'm just going to jump down to a concept here called the agent.

    A reinforcement learning agent is the thing that will learn how to balance the ball. It takes the reward we've given it and the simulation outputs, and it learns how to do better over time. The backbone of it is some deep learning networks: it remembers how to take appropriate, complex actions over time and stores that knowledge in the deep learning networks that are defined.

    I'm not going to go into the code too much, but you can see here that when we start training, the agent initially doesn't know what to do; at a certain point in time, it learns to master the task we've given it. And here we see the reward when the agent is almost fully trained. Once we're done, we can simulate the trained agent balancing the ball on the plate from some arbitrary starting point.

    So here it is-- there's the starting point, and you can see that the robot did it and never dropped the ball, because we pre-trained it in advance. So that's a very brief preview of how you can use MATLAB to perform advanced controls using reinforcement learning, one of the more cutting-edge AI techniques available in manufacturing at the moment.
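    For reference, a minimal sketch of such a training script follows. The model name, agent block path, spec sizes, training options, and the choice of a DDPG agent are all assumptions (the webinar doesn't say which agent type was used; DDPG is one common choice for continuous actions):

        % Observation and action specs for the agent (sizes assumed)
        obsInfo = rlNumericSpec([6 1]);
        actInfo = rlNumericSpec([2 1]);

        % Create an environment from the Simulink model (names hypothetical)
        mdl = 'ballOnPlate';
        env = rlSimulinkEnv(mdl, [mdl '/RL Agent'], obsInfo, actInfo);

        % DDPG agent with default actor and critic networks
        agent = rlDDPGAgent(obsInfo, actInfo);

        % Train -- the reward computed inside the model drives learning
        trainOpts = rlTrainingOptions('MaxEpisodes', 2000, ...
            'StopTrainingCriteria', 'AverageReward', 'StopTrainingValue', 700);
        trainingStats = train(agent, env, trainOpts);

        % Simulate the trained agent from an arbitrary starting point
        simOpts = rlSimulationOptions('MaxSteps', 1000);
        experience = sim(env, agent, simOpts);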

    Often, engineering teams need data that doesn't exist or isn't accessible. Here's how digital twins can be used to generate synthetic data. First, let's start with a good definition of what a digital twin is: "A digital twin is a virtual representation that serves as the real-time digital counterpart of a physical object or process."

    Digital twins can help overcome the challenges of having enough data to train a successful AI model and perform tests once the model is built. There are several reasons to use digital twins to create deep learning training data. You might be in a position where the data you work with belongs to your customers, so you can't use it for deep learning training.

    Or you may be in a position where the resources you need to create the data are fully committed to customer projects. Or you might have already developed a deep learning application but found that you need specific data to tune and train your neural network to reach a required level of accuracy-- and the cost of creating that data is prohibitive. Or-- and this is nearly universal-- you know that you will not be able to find all the anomalous data you need to train your deep learning networks adequately.

    One example application comes from Krones, a German packaging and bottling machinery manufacturer. One of their latest projects included integrating a tripod robot into a design. In order to optimize the control design, perform fault testing, and generate data for predictive maintenance, Krones resorted to using a digital twin of the robot.

    Simulation enabled them to obtain data and insights that would otherwise have been impossible-- or simply very costly and time-consuming-- to get via hardware tests. Dave, could you please show us an example?

    For this last section, I'm going to jump straight into the AI system design workflow. The part I'm going to talk about in a little more detail is the one on the far left: getting the data you need to train a model, and getting it through synthetic data generation. A digital twin enables you to generate synthetic data of your particular system operating in a variety of scenarios, because it models the physical components of that system, as we saw earlier in the reinforcement learning example.

    The benefit of creating that model is that you can simulate the system in a variety of scenarios, and therefore get the data you need to train a model to do a particular thing, such as predictive maintenance. And you can do that with purely synthetic data. That's useful when the real data is difficult to obtain-- perhaps the faults don't happen often in the real world, or the data is very expensive to acquire. If you'd physically have to destroy a prototype to get the data, that's very costly.

    So that's the benefit of a digital twin: it gives you the ability to generate the synthetic data you need to build the models you want. In this last demonstration, all I'm going to do is highlight how Simulink can be used to generate the data, and what the end result looks like. We're going to skip the middle parts-- I think you've seen enough examples of those in action.

    And so here we have a movie of a Simulink model where we can say: OK, I want to put some faults in the system-- please go ahead and simulate that. We could inspect it visually in Simulink or run it as a batch job if we wanted. Here it is running within Simulink, and you can see that, yep, it's detected the blockages that we injected. So we have the data we need for creating the AI model.
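    A sketch of how such a fault sweep might be scripted. The model name, fault parameter, and severity values are hypothetical, and signal logging is assumed to be configured in the model:

        % Sweep fault parameters in the digital twin to generate
        % labeled synthetic data
        mdl = 'pumpSystem';                % hypothetical model name
        blockages = [0 0.1 0.2 0.3];       % blockage severities to simulate
        results = cell(numel(blockages), 1);

        for k = 1:numel(blockages)
            in = Simulink.SimulationInput(mdl);
            in = in.setVariable('blockageSeverity', blockages(k));
            out = sim(in);                 % could also run as a batch with parsim
            results{k} = out.logsout;      % logged signals become training data
        end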

    Now we're going to fast forward and show you what that looks like deployed to an edge asset-- perhaps an industrial PC. Industrial PCs aren't particularly glamorous. Here we've got a picture of a pump system with one pump highlighted. When we click play, the only indication that this is actually running is that the time is advancing. And as the time advances, you can see that it's detected a blockage, based on the model that's running.

    Now, we could add some visualizations to show that happening, but for an operator, this is all they really care about: a blockage has been predicted, and they don't have to go in and pull the thing apart-- the model has told them exactly what the problem is. So that's the end result of what a digital twin can give you: it gives you the data to generate the model, you create the model, and you deploy it to an industrial PC. There we go.

    Thank you, Dave. To summarize, there are numerous benefits to building AI-driven products and services with MATLAB and Simulink. Your engineers can leverage their domain expertise without becoming programmers. Using domain-specific tools, they can design powerful algorithms that incorporate years of experience and can handle rare events. They can use modeling and simulation to tackle integration challenges and reduce risk. And the end result can be deployed and managed wherever it needs to be, without recoding.

    Atlas Copco, a Swedish industrial company, is an example of the whole story in action. They have 120,000 machines all over the world and are running models, built in MATLAB, against live data for predictive maintenance applications. This has yielded tangible results, with increased efficiency across the product range. And not only are the digital twins used to optimize operations-- the models are even being used by Atlas Copco's sales engineers to demonstrate their products.

    Remember that your engineers and scientists are the domain experts and they provide the insights that are critical to the success of AI projects. The MathWorks is dedicated to building the tools they need to make them the most productive at what they do best. If you'd like to learn more, the next best step is to get started with deep learning yourself. To do this, simply open your browser and launch one of our free online tutorials, starting with the Deep Learning Onramp.
