
    Aerospace & Defence Forum part 1: Modelling & Simulation, AI and Cloud

    Overview

    The need for advanced civil and defence systems has been brought into sharp focus. On the civil side, the push for zero climate impact is driving development of advanced propulsion, autonomous systems, and urban air mobility. In defence, geopolitical uncertainty has highlighted the necessity for next-generation physical and digital systems.

    The MathWorks Aerospace & Defence Forum brings together experts across key technology areas to discuss how we can accelerate the development of next-generation systems. Register today to join the forum and explore how models and data can help us respond to these challenges.

    Highlights

    Block#1: How to accelerate product development with modelling & simulation

    • What is Model-Based Design and why is it so useful?
    • What are the latest capabilities in MATLAB & Simulink to help accelerate development?
    • How can you get started?

    Block#2: The AI data journey: A wireless example

    • How AI is being leveraged to solve complex wireless problems
    • How MathWorks tools can accelerate generation of training data and image processing
    • How MATLAB can be used to automate a complex system and transition to a Model-Based Design approach

    Block#3: Digital transformation: Cloud, collaboration & DevOps

    • Accessing data in the cloud
    • Using MathWorks tools in the cloud
    • Scaling interactive workflows using the cloud
    • Deploying scalable solutions in the cloud
    • Collaboration, Version Control & DevOps

    About the Presenters

    Steven Thomsett is an application engineer at MathWorks, specialising in signal processing and computer vision applications, with an interest in the transition from algorithm development to hardware implementation. Prior to joining MathWorks in 2011, Steven worked on radar system analysis, modelling, and algorithm development at QinetiQ. During this time, he led the algorithm and modelling team in the adoption and implementation of Model-Based Design for a new radar system, generating code from MATLAB and Simulink® for real-time deployment. 

    Nadia Shivarova is an application engineer with MathWorks, based in Glasgow, UK, where she focuses on wireless products such as 5G NR, LTE, WLAN, Antenna and RF. She joined the company in 2016 as a quality engineer for HDL products. She holds an MEng in Electronic and Electrical Engineering from Strathclyde University, Glasgow.

    Lewis Lea is an application engineer at MathWorks, working across a broad range of applications, with a focus on machine learning, computer vision and software deployment. Before joining MathWorks in 2020, Lewis performed statistical mechanics and materials physics research at the University of Cambridge, where he obtained his PhD, MSci and MA degrees.

    Recorded: 24 Feb 2023

    Hello, my name is Steven Thomsett, and welcome to this talk on accelerating product development with modeling and simulation. As I was preparing and thinking about what to talk about today, I was thinking through some of the common trends we see within the aerospace and defense industries: things like intelligent combat systems, autonomous systems, electrification, and AI.

    And I was thinking about what it means for you who are developing, designing, and delivering these sorts of systems. Part of it is that it should make your work more interesting and more challenging -- something to work on -- but it also makes it more complex, at a point where we've got to consider the details of each system while the coupling between systems is increasing. And so we try to think about how we can help you to resolve the challenges that increased complexity and increased coupling introduce.

    And so to set this out, we're going to work through an example across the session: a UAV package delivery. I've got a model of a quadcopter here in Simulink. Let's just start the model while it gets ready to run. The model links to Unreal Engine, so we can get a good view of what's happening. Let me just move some of these displays: the bigger windows are the sensor information coming back to Simulink, and then we've got the Unreal Engine view as we run.

    And so as we look at this quadcopter, we want to think about the systems involved in it. We start with the figures here from the sensor feedback -- so we've got the sensor systems. This could be a camera; it could be lidar, radar, that sort of system. And if we think about the coupling, those systems are used by, say, the guidance, navigation, and control system to do things like localization and path planning. The outputs from that are fed to the flight control system, which controls how the airframe is going to respond and gets us to where we need to go.

    Of course, the flight control system is driving what the rotor blades do, and it will do that via things like actuators. So we've got the mechanical and actuation side of what's happening. To make all those things work, we need a power system: we need to deliver the energy to where it needs to be, and we need a way to store the energy to move us around and take us through the whole journey. So as we think about all of these systems -- the sort of systems on which many of you will be working -- we see that we can't just treat them independently anymore. We can't just say, for the sensor system, here's a specification, I need to see this target, because it's all tightly coupled with how the rest of the system behaves. And understanding that coupling is where modeling and simulation come into play.

    So as we go through this talk, we're going to look at some of the key ideas: that modeling and simulation give you insight throughout the development cycle; that models can help you to explain and to collaborate with other teams within your organization, or even with other organizations; and that through this workflow we can enable rapid iteration over our design space.

    There are a few ways to look at this, and I've seen several different diagrams, so I've picked out a few ways to think about it. One is to put modeling and simulation -- MATLAB and Simulink -- at the core of what we're thinking about, and to recognize that they can be used in different ways.

    So the models can be used for technology demonstration in our early concept development phase. They can be used for system development to design things like algorithms and implementations. But also the models could be used for things like through life support or system sustainment, things like digital twins to help inform what we do and how we deploy our system.

    Now, if you think about the classic development cycle of a system, we've got the idea of starting with the V diagram: things like requirements in the top left of the V. We can use modeling and simulation to help elaborate and understand the requirements. But we can also use the models through things like architecture development, the systems engineering process, and component design, and actually use those models to generate code for our implementations. And we can use the models for test and verification. So modeling and simulation has a place across the whole of this diagram.

    Another view, focused more on the component design and development viewpoint, is what we call the Model-Based Design workflow: requirements feeding into design, implementation, and integration, with test and verification underpinning the whole workflow. This is something we see used more and more, and it ties back into ideas like certification, which in an aerospace environment is often a key requirement we need to comply with.

    So we're going to use these themes and ideas to walk through the talk: from early concept and requirements development, through component design and implementation, to test and verification. That sets the structure of this talk.

    So we start with early concept work, where we might think about architectures. Let's think about our UAV. There are a few systems involved here: there's going to be some form of localization, some sort of power system, some path planning, user interfaces. We might not know the detail of what's inside these systems yet, but we can start laying them out and thinking about them. Perhaps in localization we know there's going to be a GPS sensor and a mag sensor, for example, to let us know where we are in space.

    And we can build up this system and add levels of detail through a cycle, so that eventually we might end up with a fully detailed architecture view, where we've included things like propulsion, flight computer, airframe, flight support, and so on. These details will be elaborated and will evolve through our early process.

    Even as a static model, that can help us develop. But if we then use analysis and simulation upon this model, we can start feeding results back into our system design to elaborate further. So let's take an example of that: the endurance of a quadcopter.

    So here's another architectural view, where we've got the quadcopter represented, some comms, and a ground control system. Here we've elaborated with some detail: we've added a profile with stereotypes -- this is the air vehicle stereotype -- and there are properties like battery capacity, mass, power draw, and endurance. We don't know these at the top level yet. But perhaps if we look inside the system, we might have an awareness of some of these.

    Perhaps the camera might be coming in as an off-the-shelf camera. And for the video processing, we know -- we've done this before -- we've got an idea of what's required. So we can build up knowledge from previous systems and think about the mass and the power draw of these components.

    So some will be things we've done before, some will be off-the-shelf components, and some will be estimates for now that we'll have to refine later. But once I've got these estimates, we can run analysis functions -- here, to calculate endurance. We're going to go bottom up, because we've got some of the details at the bottom level, and we've got to instantiate this analysis. And we can now run a static analysis of our system.

    And if we run this, we see that at the top level we now have an estimate of the battery capacity needed and the endurance of this quadcopter. 3.8 minutes isn't great for a package delivery quadcopter, but we'll work with that and refine it later. The key thing is that now we've done the analysis, we've got an idea of whether this concept is viable and worth developing further. So we can answer these questions early on in our system design.

    And so here we did a static analysis that involved running a MATLAB function which looked at the power consumption and mass of different parts of the system, added them up through the whole system hierarchy, and used a simple static equation to calculate the endurance. Now, of course, for those who work on power systems and batteries, that simple static equation, which relates endurance to battery capacity and power draw, assumes the power draw is linear.
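    To give a flavour of that kind of analysis function, here is a minimal MATLAB sketch. The component names, masses, power draws, and battery capacity below are all hypothetical estimates, not values from the talk's model:

        % Static endurance analysis (sketch): roll up mass and power draw
        % through the component hierarchy, then apply the simple linear
        % assumption endurance = battery capacity / total power draw.
        components = struct( ...
            'name',  {'camera', 'videoProcessing', 'flightComputer', 'motors'}, ...
            'mass',  {0.10, 0.05, 0.08, 0.60}, ...   % kg (estimates)
            'power', {2.5, 8.0, 6.0, 180.0});        % W  (estimates)

        totalPower = sum([components.power]);        % W, summed over hierarchy
        totalMass  = sum([components.mass]);         % kg, feeds other analyses

        batteryCapacity_Wh = 12;                     % assumed battery capacity
        enduranceMinutes = batteryCapacity_Wh / totalPower * 60;
        fprintf('Estimated endurance: %.1f minutes\n', enduranceMinutes);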

    Now, in real battery systems, that's not going to be the case; there are going to be other impacts and effects upon it. So this static analysis might be a first iteration of the analysis. As we develop our system, we'll revisit it, and we can consistently update our information to ensure that we're on the right path. So we might have started with a simple static equation to calculate our endurance and look at the state of charge of our battery, but increasing the fidelity of our model through the engineering lifecycle is going to help us refine that estimate over time.

    And as more information becomes available, we can feed it back into our decision making to ensure that we're consistently updating and validating our choices. So we might go from that static equation to some sort of dynamic model. We might move to modeling the UAV as a point mass, for example, and to a six-degrees-of-freedom model later on. We might even go as far as modeling the electrical and mechanical systems. So let's have a look at doing that.

    Here we have a model of the quadcopter which is further down the design lifecycle. We've made a few more choices, and now we want to validate and test those. We might want to do trade studies: things like sizing the motor or looking at profitability. Here we're going to look at stability and range: what happens when we simulate a quadcopter scenario in the absence of wind and then in the presence of wind?

    So we see here we've got an animation of the scenario running as well. The simulation actually runs a lot quicker than the animation: you can see the bar is halfway through on the simulation while the animation is just starting. In the absence of wind, the quadcopter follows the desired path quite accurately. I'm going to speed up the animation now, but the animations are pretty good for communicating what's going on, perhaps for sharing with stakeholders.

    Now we move to the more interesting case, where we've got wind present. What you'll see is that as we gain height, the wind starts to have an impact on the trajectory: instead of following the required path precisely, we're blown off course a bit. This allows us to test things like our path-following algorithm -- how well we can keep to the path under disturbances -- but in this case we're particularly interested in the impact of the wind, and the extra work needed to stay on the path, on our state of charge. And therefore, how is it going to affect our endurance?

    Now, we might be running this early on because we've been set an operational requirement on the environment in which we need to operate: for example, we need to work in a wind of so many miles an hour or meters per second. We might, though, use this model later on for through-life support, to analyze in service the feasibility of particular delivery routes.

    Looking back at the graphs produced -- the simulation has now finished, though the animation is still running -- we can see there's minimal difference in the state of charge between wind and no wind. We can see the point where the slope, the gradient of the graph, changes: when I'm carrying the package, the state of charge decreases faster than when I'm not. That's expected; I need to do more work when I have more mass, i.e. the mass of the package.

    So we can take this forward. As we said, so far there's minimal difference between the wind and no-wind cases. But of course, I should rerun the simulation for different wind directions; I'm sure there will be a difference between a tailwind and a headwind, for example, compared to side winds. We often get asked what's inside these models, so let's expand and take a look inside this quadcopter model.

    Within the quadcopter subsystem, we've represented what is in there: the body, the propellers, the load, the electrical system. Let's expand the electrical system for a moment, because we've been talking about the battery. At the top we find the battery component; then, going to each propeller, there's a driveshaft -- an electric motor driving a shaft that connects to the propeller. And we've got four of those, one for each rotor.

    Now, for the battery we've used just a simple component so far. We saw that we had a linear discharge cycle, but we can increase the fidelity if we want: we can model nonlinear discharge, and we can model thermodynamic effects, for example -- what's the effect of temperature on our discharge cycle? So we can look at those ideas to understand better how this system will perform further on. We might also take the model and use it for different purposes. For instance, what's the impact of package mass?

    We saw that there was a difference in the gradient of the graph between when I was carrying the package and when I'd released it. So perhaps we should analyze package mass. This might, for example, give us an idea of how much we need to charge people for each package -- do we do a charging system based on mass? Because that will change how often we need to recharge, how far we can go, and so on.

    So this idea of using the models to work with scenarios is quite powerful. Let's change to a different domain: sensors. A UAV is going to have to operate within an urban environment, and in the future those urban environments will become more congested. We might consider things like personal air taxis, for example, as well as other package delivery or surveillance drones. So how are we going to make sure we know where we are? Because it's not just about knowing where the buildings are; it's being able to track all these other air vehicles.

    And it's tricky to answer that by just specifying, for example, that we need to detect a target at such-and-such a range. We need to track vehicles over this congested airspace, which means modeling the scenario. So how can we go about that? Well, we've got tools to help build this up. We can, for example, bring in terrain data from DTED or building data from OpenStreetMap to create the scenario, and we can set out the trajectories of the air vehicles.

    So our vehicle and the others, perhaps as UAV platforms, maybe with waypoint trajectories, and we can fly them around and set the scene up. And then we start adding our sensors: maybe a lidar on the quadcopter, and perhaps a radar. Here we can see we're using a radar data generator. We don't need detailed knowledge -- we may not have detailed knowledge of how the radar is going to work at this stage -- but we're entering specifications such as the elevation resolution, the azimuth resolution, and the range resolution.
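    As a sketch of what specifying such a statistical sensor can look like, here is a minimal example using radarDataGenerator from Radar Toolbox; all of the resolution, field-of-view, and update-rate values are illustrative assumptions:

        % Specify a radar purely by its measurement-level performance,
        % before any detailed radar design exists (values are assumptions).
        radar = radarDataGenerator(1, 'No scanning', ...
            'UpdateRate',          10, ...       % Hz
            'FieldOfView',         [120 30], ... % [azimuth elevation], deg
            'AzimuthResolution',   2, ...        % deg
            'ElevationResolution', 5, ...        % deg
            'RangeResolution',     5, ...        % m
            'HasElevation',        true);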

    Are these resolutions sufficient to meet our objective performance? To answer that question, we need to run a scenario simulation to understand the performance in context. So here we have our MATLAB script for the simulation. If we just run the script, it's going to pop up our graphic. Let's find the right window and maximize it. Great. So our vehicle is at the top here; we can just see the arc representing the field of view from the top.

    We have perhaps an air taxi in the scene, and a couple of other package delivery or surveillance quadcopters flying through this urban scene, and we've got the buildings as well. Out the back of each of the other vehicles, you can see lines representing the tracks we're forming: blue is radar, red-orange is lidar, and purple is the fused track. And what we see is that, on the air vehicle, we lose the lidar partway through.

    And so the question is: what is the lidar giving us? What is it offering? Do we actually need to add this extra system, which is going to add cost, complexity, and weight? A good reason might be redundancy, but what else is it giving us? This is what running the scenario simulation and analyzing the metrics can tell us. If we look here, there's a particular metric used in tracking called OSPA, and a more generalized form called GOSPA. Don't worry if you're not familiar with it.

    The key thing to know is that low numbers are good; higher numbers are where we're being penalized for missing tracks on objects in the scene, poor localization performance (accuracy down compared to ground truth), and so on. What we see at the start of the run is that it takes a while for the radar to pick up and form its tracks. So initially, all of our performance is based upon the lidar. It's not great performance, but it gives us some idea that these objects are in the scene and a rough idea of their location.

    As the radar starts to build its tracks, we can see that the performance gets a lot better, up to the point where the radar is dominating the overall tracking performance. Here we start to see where the lidar starts to drop tracks on objects partway through, but at the end, what's interesting is that the radar also starts to lose performance. By having the lidar present in the scene, our overall fused tracking performance is maintained a lot better, and so we can see the benefit that the lidar is having on our overall system.

    Now, I mentioned running these scenarios. When I used to work in radar, one of the issues I found was that we'd spend a certain amount of time developing our algorithm or our system, and then we'd spend at least as much time, or twice as much, developing the environment in which to test it. So it's good to see there are now apps and tools -- we saw the code earlier, but you can also build these scenarios using apps -- where we can drag and drop objects into the scene, set up their trajectories, add the buildings, and so on, and run the simulations through to check we've got the performance we wanted as we fly around the scene.

    I would have found this a great benefit, and I think it would save a lot of time. Once we have developed those algorithms, we need to start thinking about design and implementation, and again, modeling and simulation can help us here. I'm going to take just one small, quick example of this: algorithm partitioning. So let's go back to our radar.

    Maybe we've got the radar running: should the processing be running on a CPU or an FPGA? How do I decide what runs where? Part of the consideration might be things like interface bandwidth -- maybe I'm thinking of running it on a system-on-chip. So how does the interface bandwidth actually affect the system performance? How do I identify things like bottlenecks? And how can we get people to work together?

    So let's take the example based on that radar: range-Doppler processing. At its simplest, that boils down to a matched filter, which is just a filter applied as each sample comes in, and an FFT which runs over multiple pulses. What that means is I need to change the data order between the filter and the FFT: I might write the data as it comes in, then I need to corner-turn, or transpose, to perform the FFT over pulses -- over noncontiguous data.

    So the limitation there is going to be the writing to and reading from the memory buffer that stores that data. Option B is that I do the filtering as the data comes in on the FPGA, but then pass it straight afterwards to the CPU -- the ARM processor -- where I can perform the FFT. That gives me the benefit that I'm not having to use this intermediate data store; I still need RAM, but it's all easy to access on a CPU. However, I'm going to be limited by the performance of that CPU.

    So how could I understand, going back to option A, the FPGA case, the latency and bottleneck between the filter and the FFT? Well, I can actually simulate that memory buffer and the process of writing contiguous data and reading back noncontiguous data. And through that simulation, I can see that writing the contiguous data is going to be fast, because we can write with a large burst length, whereas reading back the noncontiguous data, I'm limited to single-word bursts. So that is the limitation.

    And so we can run this and get a time estimate for latency: perhaps reading a frame of data is going to take 438 milliseconds, for example. I can then compare that to how long the FFT would take to run on the CPU, and that helps inform my partitioning decision. And with that, I can go on and develop the detailed implementation.
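    To make the shape of that trade-off concrete, here is a back-of-envelope MATLAB sketch of such a latency estimate. All of the frame dimensions, clock rate, and burst lengths are illustrative assumptions, not the numbers behind the talk's 438 ms figure:

        % Corner-turn bottleneck estimate: writes are burst-friendly,
        % but the transposed read is limited to single-word bursts.
        nRange = 4096;             % range gates per pulse (assumed)
        nPulse = 128;              % pulses per frame (assumed)
        fClk   = 150e6;            % memory interface clock, Hz (assumed)
        burstW = 64;               % words per write burst (assumed)
        burstR = 1;                % words per read burst (non-contiguous)

        words  = nRange * nPulse;
        tWrite = (words / burstW) / fClk;
        tRead  = (words / burstR) / fClk;   % this dominates the latency
        fprintf('Frame write: %.2f ms, read: %.2f ms\n', 1e3*tWrite, 1e3*tRead);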

    Once that detailed implementation has been developed, I need to test it. So let's look at what the verification process involves. Thinking about requirements -- I've got some at the bottom; don't worry, you don't need to read what they are -- we can link those requirements to blocks in the model. Here, in a Stateflow chart, they're linked to specific transitions in the model. So I get traceability of the implementation against requirements, but I can also link the requirements to tests, so we can see which test cases are used to verify my design against those requirements.

    I can run those tests and instantly see whether they passed -- that's the white tick in a green circle -- or failed, which would be a cross in a red circle. And I can look at the detailed values, do comparisons against tolerances, and so on. But I also get a summary metric: was there an implementation against each requirement? Yes -- the blue bars are complete. Did the tests against each requirement pass? Yes -- the green bars are complete.

    So now we know we're ready to go ahead to the next phase, which is deployment. We can take these models and generate code to put on -- since we're talking about a UAV -- an autopilot, or C code to go on a DSP or standard CPU, maybe HDL code to go on FPGAs or ASICs, or perhaps a combination to go on a system-on-chip. Perhaps we're targeting middleware such as ROS, the Robot Operating System, or DDS, or something similar. Or perhaps we're going to a GPU, so we might be generating CUDA code, for example. All of these are possible, and we can take the models and pull them right through into our deployment phase, onto our actual hardware.

    So what we've now seen is the way that modeling and simulation can accelerate us throughout the lifecycle: from technology demonstration and concept development, through system development, design decisions, and verification, to deployment. And we can pull those models through for reuse in through-life support, system sustainment, or digital twins perhaps.

    So to summarize, modeling and simulation gives you insight throughout the development lifecycle. It helps you to explain and collaborate within your team, with other teams in your organization, and with other organizations where appropriate. And it can enable rapid iteration over the design space. All of that will help you to accelerate your product development with modeling and simulation. So the question might be where to get started.

    There are a lot of reference examples within the documentation -- most of the examples I showed today came straight from it. You'll find things for UAVs, for aircraft, for satellite constellations perhaps. Beyond that, we've also got training curricula: training courses to help guide you through ramping up on the tools, but also on specific technical areas.

    Among those training courses, there's a growing number of free Onramp courses, all done online; for more depth, you might be interested in some of the full courses, which can be run self-paced or instructor-led. So we can help you get started. And with that, I turn to you and ask: are there any questions?

    Hello, everyone, and welcome to this presentation, The AI Data Journey: A Wireless Example. My name is Nadia Shivarova. I'm an application engineer at MathWorks, where I've been working since 2016, and I support customers on wireless standards and applications, as well as on applying AI. As we saw previously, the design of, for example, a UAV will contain many sensors in order to understand what is happening in the environment. The UAV will also need to communicate, either with an operator or perhaps a cellular network, so it needs to understand what is happening in the spectrum in order to be able to coexist.

    So in this simple example, we're going to see how we can sense the spectrum for either LTE or 5G signals. We're going to capture some data using a simple off-the-shelf Pluto software-defined radio: set the sample rate and the center frequency, and receive a burst of IQ samples. Alternatively, we may have some prerecorded data, so we can just load that in and use that instead.
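    A minimal sketch of that capture step might look like the following; it assumes the Communications Toolbox Support Package for ADALM-PLUTO is installed, and the frequency, rate, and frame-size values are illustrative:

        % Capture a burst of IQ samples from an ADALM-Pluto radio.
        rx = sdrrx('Pluto', ...
            'CenterFrequency',    2.14e9, ...   % e.g. looking in an LTE band
            'BasebandSampleRate', 30.72e6, ...
            'SamplesPerFrame',    40e3, ...
            'OutputDataType',     'double');
        iq = capture(rx, 40e3, 'Samples');      % one burst of IQ samples
        release(rx);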

    The way this works is that it uses a deep learning network operating on spectrogram images. Just like the image recognition AI you may be familiar with, it uses the same concept, but with the spectrogram as the image of interest. So we're going to process this data into spectrograms. We already have a pretrained network here, and I'll explain later how we got there.

    We're going to load this data in and perform what is called semantic segmentation using the pretrained network, which will then color in, if you like, areas of the image: it will tell us which pixels belong to which class. We can see here that we have a score, so we have some kind of probability. And the three classes we're looking for are 5G NR, LTE, or noise pixels, on a 256-by-256 spectrogram image.
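    Sketched in a few lines, the inference step could look like this; pretrainedNet stands in for the trained segmentation network, and the spectrogram parameters are assumptions rather than the example's exact settings:

        % Form a 256x256 spectrogram image from the captured IQ samples,
        % then label every pixel as NR, LTE, or Noise.
        fs = 30.72e6;                        % match the capture rate above
        [~, ~, ~, psd] = spectrogram(iq, hann(256), 128, 4096, fs);
        img = imresize(im2uint8(rescale(10*log10(abs(psd)))), [256 256]);
        [labels, scores] = semanticseg(img, pretrainedNet);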

    Finally, we can visualize our results with this real data and see how it has performed. We can see the received spectrogram once again, with the areas colored in based on what the network predicts the signal is. To the naked eye, obviously, we can see that there is some noise in between the purple LTE patches there, but we know that this is very likely to be an LTE signal, based on the prediction. So let's take a look at how we end up with this kind of network.

    So on the agenda for today: we're going to look at what AI actually is, at a high level -- we won't go into too many technical details. We'll try to understand how AI fits into a wireless workflow when we have to develop a wireless application. We'll focus on the most important part, which is data preparation for training a neural network, along with some tips, tricks, and pitfalls we need to be aware of. And we'll bring it back to the bigger picture of where everything fits together once we have prepared the data, trained the network, are happy with it, and wish to deploy.

    So what is AI? AI stands for artificial intelligence -- I don't expect anybody doesn't know what this means. But it's been around for a long time: certainly mathematically, since before powerful computers were built to perform the necessary computations. It's been around since the '50s, and it's really just a set of mathematical techniques that enable a machine -- software, if you like -- to mimic human intelligence. There are two main types that are really popular these days: machine learning and deep learning, where deep learning is a subset of machine learning.

    Machine learning essentially means that the machines we build can learn from data without being explicitly programmed, so they can take decisions based on the results of operations. With deep learning, there's a debate about how deep a network has to be before it's considered a deep learning network, but it has many more layers: many more operations happen on the data, and many more decisions get made. And right now there's a lot in the media about deep learning networks -- ChatGPT and so forth -- mainly based on images, and now on words and text as well.

    To break machine learning down a little more, there are three main types: unsupervised, supervised, and reinforcement. To give some examples: in unsupervised learning, we do not tell the network what the training data is -- we do not label it -- so it essentially tries to figure out patterns by itself. An example of this is clustering, where it tries to find relationships between variables in the data. This could be used for device-to-device user clustering, or for anomaly or intrusion detection, to give just a couple of examples.

    Supervised learning is very, very popular: we do tell the network what the data is, and we train it -- we give it labels. There are two types of supervised learning results we can get out: we can either classify the data, or we can do regression on it. Some examples, many of which appear within our tools: channel estimation, where instead of traditional mathematical methods we use a neural network; beam selection; modulation recognition and classification; spectrum sensing, which is what we will go into; and antenna design.

    If you have a wireless background, you can see where these have been applied: to problems that are not easily solvable. They're optimization problems with no single right answer -- many solutions are possible, and mathematical methods give a solution, but not always the most optimal one. So AI has been applied to those sorts of problems.

    Reinforcement learning you're already familiar with, even if you don't know it. If you're familiar with things like adaptive filters, it's the same idea: the network interacts with the data and makes changes as the data changes. Control systems, for example, have to do this -- your thermostat at home adjusts based on the ambient temperature and so on. Very, very simple. Some examples: cognitive radio, or resource allocation based on usage. There's always a feedback loop involved.

    You can see how deep learning sits below those two, with much more involved networks doing similar kinds of operations, but many, many more of them. A common question that comes up is: which AI technique should I use for my application? It really depends, and it's a research topic in itself. As we can see here, with more domain knowledge you could get away with less training data and use a machine learning approach with shallower networks, if you know what you're supposed to get and what to expect.

    At the other end of the scale, if you have little domain knowledge, you could potentially use a deep learning network, but that requires a lot of good-quality, well-labeled data. For example, if you just throw a lot of images at a deep learning network, it could eventually learn some patterns and be able to do some recognition. In between sit operations to preprocess and postprocess the images, which can actually help a little bit more.

    So, using a little bit of human intelligence, we can change the data from raw IQ samples to a time-frequency representation, as you saw earlier in the spectrogram. We can apply some feature extraction in advance and then feed that to the network, so the network learns the features more easily. That's the idea here: an intermediate step that can make it easier for a network to recognize patterns. It's all about patterns, really.

    So if we look at the AI workflow in a nutshell, what do we need to do to get to the point where our network is useful and can be used out in the world? We must prepare the data: it has to be good data, it has to be labeled (depending on the type of approach we choose), and we need a good, representative set of it. We then have to model our actual network -- how many layers is it going to have?

    This could come from a journal or from an already established network; we can do transfer learning, reusing a network and training it on different data. Then we need to plug it into a larger system and simulate the effect of this network on that larger system. And then, finally, we can deploy.

    Now, there's an arrow going back -- not because this is reinforcement learning, that's not the point -- but because over time, as your network is deployed in the field, you will come across corner cases: cases where your network does not perform as well. What you may do when that happens is go back to square one, essentially, to get more data, update your network, and redeploy. Your network could live in the cloud and be easily retrained and redeployed -- that sort of idea.

    So the job is never really done once you deploy: there will always be some cases where your network does not perform as well, where it fails to do the correct recognition and things like that. Today, I'm going to focus on data preparation and really drive home why that's important and where things can go wrong. So here's what I'm going to cover: on data preparation, there are different places and ways you can obtain data, and different things you can do to your data to get it ready to be used as training data for your network.

    I'm not going to cover how you actually choose an AI network, because what's really important here is the data -- the data is everything when it comes to training an AI network. You need good, well-labeled data with good patterns that your network can then recognize. So, for example, you can capture over the air: get a radio, set it up outside, capture some data. That's what we did in the example earlier -- I set up a Pluto, captured some data, and tried to see whether there was a 5G or an LTE network there.

    We can also simulate synthetic data, because capturing only takes us so far: we don't have control over everything, especially when it comes to wireless data and the variability of wireless channels. We can also import data. There are online databases of various data, whether medical imaging, object recognition, that sort of thing, so you can import data and augment what you already have -- it's worth searching. And what's really important then is that we clean our data: remove any poor samples -- blurry images, if you like, or data with too much noise.

    We need to prepare it, so we may need an extra processing step like turning it into a spectrogram. And we need to label it: we need to actually know what the data represents. If it's an image of, say, a cup, I know it's a cup, so it's very easy to label. However, if we have some image that came from the spectrum -- some IQ data -- we might have no idea what it is. So we need to make sure we know what it is when we're collecting the data, so that we can label it correctly, and label portions of it correctly if necessary.

    The main thing I'd like you to take away from this is that rubbish in equals rubbish out. If you do not spend the required amount of time on data preparation and your training data is bad, your network will perform badly. The better your data is, the better your network will perform. For that, you need to employ a lot of human knowledge and intelligence. It's not as simple as saying the AI will just do all the work, because it can't: it's only as good as the training data it sees.

    So what options do we have? Starting with the first one, capturing over-the-air data, we can acquire live data directly into MATLAB -- I showed you how to do that with a Pluto earlier. So we have some real-world signals. They may come from a cellular network, from some kind of communication between a car and the world, from airplanes, and so on. If we have a radio nearby and we can tune that radio, we can capture some real-world data, whether with an instrument or with an off-the-shelf radio.

    For example, this could be a USRP, a simple Pluto, or a Zynq SDR with an ADI front end -- as long as we can tune it to the frequency where we're looking for data, we can find some data. Regarding high-end, high-data-rate capture, there is support for the new NI radios, and as of R2023a the X410 will also be supported.

    Now, something important: what do I do if my signal isn't always present? For example, I'm trying to capture a wideband signal that's periodic -- it's not always there. What can I do? I will basically run out of memory on the board if I have to capture seconds and seconds when only a small portion of it is the signal of interest. So intelligent data capture becomes really important here. For example, here is the block diagram of a radio front end; what's good to have is something like a preamble detector, and that's already possible today with some of the USRP radios.

    If we insert our preamble detector, the radio will trigger upon correlating with the preamble. That could be the synchronization signals of an LTE signal, or it could be the pilots of a WLAN waveform. It means you're only capturing the data that is present, and you're also automatically labeling it: you've said you want to capture this kind of data, and it's correlated with it, so that is the data you've captured -- the labels are there by definition. And you can do it in just a few lines of code, as you can see here.
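    In software terms, the trigger boils down to a correlation and a threshold. Here's a rough sketch of the idea; rxBuffer, preamble, threshold, and captureLen are all placeholder names for illustration:

        % Trigger a capture when the known preamble appears in the buffer.
        c = abs(xcorr(rxBuffer, preamble));     % correlate against reference
        [pk, idx] = max(c);
        start = idx - numel(rxBuffer) + 1;      % convert lag to sample index
        if pk > threshold && start > 0
            captured = rxBuffer(start : min(start + captureLen - 1, numel(rxBuffer)));
        end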

    What are the drawbacks? Well, variability is really hard to capture, because it's a real radio: you're at the mercy of what's out in the air. It can be expensive, especially for wideband data capture. And you have no control over the environment, as I said. So what can we do instead? Option 2: to augment our training data further, we can simulate synthetic data. What does that mean? We can generate baseband signals -- that could be radar signals, Bluetooth signals, WLAN signals, LTE and 5G signals.

    So we can start off with a simple 5G waveform whose parameters we set. This is a TM waveform, a pre-canned test model that comes from the 3GPP standard. We can choose, for example, the channel bandwidth and subcarrier spacing, and we can choose to add impairments to that waveform: AWGN, phase offsets, frequency offsets, and things like that. So we are adding variability, and this helps our AI network recognize signals better when it sees that variability in the real world. We generate our baseband waveform, export it directly to the workspace as a variable, and then we can save it, of course, as a .mat file.

    In addition, we can augment this even further by automating how much data we generate. We can get the underlying code and then use a for loop to iterate over the parameters that make a difference to our data and add more variability -- the data allocations, the impairments, and so on. So we can actually automate generating an entire library of data.
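    A sketch of that automation, assuming 5G Toolbox, might look as follows; the default downlink carrier configuration, the offset and SNR ranges, and the file naming are all illustrative:

        % Generate a small library of impaired 5G waveforms.
        cfg = nrDLCarrierConfig;               % default downlink carrier
        freqOffsets = [-5e3 0 5e3];            % Hz (assumed range)
        snrValues   = 0:10:40;                 % dB (assumed range)
        for fo = freqOffsets
            for snr = snrValues
                [wave, info] = nrWaveformGenerator(cfg);
                fs = info.ResourceGrids(1).Info.SampleRate;
                t  = (0:numel(wave)-1).' / fs;
                impaired = awgn(wave .* exp(1j*2*pi*fo*t), snr, 'measured');
                save(sprintf('nr_fo%d_snr%d.mat', fo, snr), 'impaired', 'fs');
            end
        end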

    We can see that when we create a waveform, it's an IQ baseband waveform of raw data. So how do we do this in the context of the first example I showed you? To navigate to the example I'm talking about here, go to the Communications Toolbox wireless examples, and you'll find it there: Spectrum Sensing with Deep Learning. I'm not going to talk about the AI part, but I will talk about how the training data was collected. The whole example goes through getting the training data, applying transfer learning to our network, and then testing the results.

    We can see here that we've chosen some parameters in order to vary the data: bandwidth, subcarrier spacing, and so forth. We're using mathematical channel models -- CDL and LTE fading channels -- to add noise and perturbations to our data and increase the variability. And we can see in the code that we have the parameters listed out.

    What we're leveraging here is a parfor loop, which uses Parallel Computing Toolbox, to generate all these different data sets. We can see that the code to generate the 5G waveform is essentially the code we saw coming from the app I showed you earlier -- we are using the same waveform generator function that we saw in the app. So we've leveraged that and taken it further.

    So let's take a look at what this data looks like. We can see we already have a large library of it, and a lot of it is essentially spectrogram images. They have various noise, and the data is located at various frequency offsets. So we have some variability there for LTE and NR data.

    Lastly, we can see that as we generate the data, we save the spectrogram image, as shown, and we also save the pixel labels: in the library, we save which pixel belongs to which class. So we cut out where the data of interest is, and that is labeled accordingly -- as LTE or NR, for example -- and the rest is labeled as noise. Then, as we already have our data, we load it into an image datastore and a pixel label datastore -- I'll talk more about those later. And we can do some analysis in advance to see how well we're doing, by looking at some data set statistics.

    Something that often happens when we're doing such a thing is that we may end up biasing our network: if it sees too much of one thing, it will be biased to select that thing. For example, if I train a network to recognize dogs and cats and I've got too many cats, there's a chance it will make more mistakes labeling dogs, because it's been biased towards cats. So we can do some preliminary analysis to see how we're doing there. Lastly, we can export a lot of these results: the code as a PDF, the statistics as images, or generated reports.

    So here's a summary: we've got maybe one gigabyte of data, about 8,000 files, and we saw the statistics of how it was split between LTE, NR, and noise. And so we've generated our training data. Now, we're using a spectrogram here, but we don't always have to use a spectrogram. There are various time-frequency transformations that can help us with feature extraction. For example, there's a wonderful documentation page showing different types of time-frequency transformations, which enhance different features depending on the type of signal we have.

    On the right, we have a quadratic chirp and a complex chirp, and we can see how nicely the transform has emphasized the waves and the crossing points where the two chirps intersect. This is a fantastic way to enhance features. At the top, we have another kind of spectrogram, and there is a constant-Q transform. So it's a research activity in itself to figure out which transformation to use -- which one enhances the features the most.

    I've also added a very nice link to image segmentation, another tool we have to help us partition an image into parts or regions -- another thing we can try. And there are many options for these spectrograms: we can flip axes, change the windowing, change the overlap, and they all have an effect on how clear the image comes out and how well the features get emphasized.

    I mentioned image datastores and pixel label datastores earlier, but I didn't go into detail, so let's take a look at what they're able to do for us. Datastores give us access to batches of images from a collection that cannot all be loaded into memory at once. Say we have one gigabyte of images here: if we try to load them all into memory, we'll run out of memory. So we put our images into image datastores. What's really good is that we can then use an image data augmenter to augment our datastore with various operations -- random rotation, random scaling, and so on.
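    A short sketch of that setup; the folder paths, class names, and pixel IDs below are illustrative:

        % Datastores for the spectrogram images and their pixel labels,
        % plus an augmenter applying random transformations on the fly.
        imds = imageDatastore('trainingData/images');
        pxds = pixelLabelDatastore('trainingData/labels', ...
            ["Noise" "NR" "LTE"], [0 1 2]);       % class names -> pixel IDs

        aug = imageDataAugmenter('RandXReflection', true, ...
            'RandScale', [0.9 1.1]);
        augimds = augmentedImageDatastore([256 256], imds, ...
            'DataAugmentation', aug);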

    And we can see what's happened here: I've taken the spectrogram images from before and applied some random operations to them, like rotation and scaling, and I can now preview my augmented data and see that it gives me random transformations. That's used to augment the library of images that you have. Of course, this is a spectrogram, so we don't actually want to rotate it -- the axes will always be fixed; it's just demonstrating the point. If we had, for example, a radar return, it could make sense to scale it.

    The easiest example people can think of is images of dogs and cats: it makes sense to mirror them or scale them, so the network is able to operate at different sizes as well. So there's a family of these datastores -- for images, for signals, for pixel labels -- and, of course, the augmented image datastore, which allows us to apply these random operations: rotations, scaling, mirroring, et cetera. There's a nice page that gives ideas on how you might want to process data for domain-specific deep learning applications; that's part of the research that needs to go into understanding how to prepare your training data.

    Lastly, I talked about the spectrum sensing with deep learning example, but there are a lot of other examples applied to different areas. For example, you can see maritime clutter removal, classification of pedestrians and cyclists, and more wireless ones like modulation classification -- including one deployed to an FPGA -- and intruder detection for wireless LAN access points. So there are a lot of examples here that can inspire you and get you started with something.

    So, we've covered all of these points -- where do they fit? We prepare our data, then we do the AI modeling of our actual neural network. That's yet another research experiment in itself, where we need to tune the parameters and figure out which ones work best, and there are apps to help you with this as well. You can also import your network: it doesn't all have to be built in MATLAB; it can come from elsewhere.

    Finally, you can load that network into Simulink, build up an entire system simulation, observe its performance, and, if necessary, go back a step to improve things. Then you can deploy to embedded devices or to the cloud, and there are tools to help you generate code automatically. To conclude: AI is applied to many, many areas -- it could be wireless, multifunction RF, et cetera. It's not simply just dogs and cats.

    The real challenge, though, is obtaining training data that is of good quality and correctly labeled. That's the main challenge we have to deal with. To help with it, we can leverage a mixture of hardware, simulation, and imported data to accelerate development. And there are plenty of image-based algorithms and transformations that can help us enhance and extract features and help with labeling -- including automated labeling, to make the process much, much faster. And with that, I will open for questions. Thank you very much.

    Hi, my name is Lewis. I'm an application engineer at MathWorks. Welcome to this talk, where I'm going to outline how some techniques and technologies from the software world can be used to enhance Model-Based Design workflows. To start, let's define exactly what I mean by the cloud. For the purposes of this talk, the cloud is any remotely hosted service that contains your file storage and your software, and also allows you to offload compute from your main machine.

    Generally speaking, cloud services fall into one of two categories: public clouds, for which MathWorks provides its own offerings, and internally hosted private clouds. MathWorks products can be used in either environment. So what are the benefits of moving to the cloud? Well, there are a number of stages you can move in, and each provides benefits that compound as you continue moving to the cloud.

    In the first stage, you can move your data to the cloud. This can streamline your workflows by centralizing the data that all of your teams access and the models that they work on. It can eliminate variations due to local copies and version issues, and it can protect your data by providing different access levels to different users, depending on the privileges they need to work with that data.

    The next step is moving your software to the cloud. This provides easier access to your data, but also the new benefit of elastic compute: heavier computational resources, such as extra cores or GPUs, can be borrowed by users for more demanding tasks and then returned for other users -- or, in a public cloud, released to save money -- when those tasks are completed.

    Once you've established your cloud setup, you can start integrating with more technologies, with benefits such as being able to scale your deployed algorithms, easily provide dashboards or graphical front ends for downstream users, or even automate large chunks of your development pipelines. So how does any of this translate to an improved Model-Based Design workflow? Well, firstly, centralizing and versioning data can be a substantial enabler when it comes to creating and maintaining a single source of truth.

    Centralized data combined with scalable analytics also makes it easier to integrate real-world data into your design and testing. Trials are a good example of this: they typically introduce large data sets recorded at multiple sites. Automation pipelines can also provide a variety of substantial benefits to your modeling work. They can allow you to assess the impacts of design changes and communicate those to collaborating teams. They can automatically rerun tests, at either the unit or the system level, and detect any flaws in your implementation early. They can also ensure that analytics are rerun on newly validated designs, so that decision makers always have the most up-to-date metrics with which to make key decisions.

    It's important to remember, when doing all of this, that the models you make aren't just valid during the design period: they can be used throughout the entire product lifecycle, and in those later stages these technologies can help you manage and respond to new information from the field. For the rest of this talk, we'll break down the things we've mentioned into separate stages and look at how to use them one by one in conjunction with MathWorks tools.

    To begin, we'll look at how you can use MATLAB and Simulink with data that's been stored in a centralized cloud location. We'll then look at moving MATLAB and Simulink themselves to the cloud and exploiting some of the elastic compute that's available. After that, we'll have a look at how you can deploy some of your algorithms in a scalable manner in the cloud. And finally, we'll look at how you can link everything together into a single process using CI and DevOps.

    So, to begin, let's look at accessing data in the cloud. Generally speaking, we see customers wanting to access cloud data in one of four ways. The first of these is through a database, and thanks to Database Toolbox, you access that data in exactly the same way you would a local database -- the only real difference being that sometimes a small change may be required in the driver file used. The second way is blob stores such as Amazon S3 or Azure Blob Storage. These behave, generally speaking, exactly like a local file system, with the sole difference that credentials may be required to access the data.

    The third way is dedicated cloud data services. These vary, but we support most of them, and on our MathWorks reference architecture website, which is a useful resource for virtually anything involving cloud platforms, you can find connectors for most of these. Finally, there are also web APIs, where you can go to a web address and receive a page of data. MATLAB has a built-in function called webread that can access these, and we also have another function called webwrite that can post data to those locations.
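    As a hedged sketch of the web API case, using hypothetical endpoints: webread issues the request and parses the response (JSON becomes a struct), while webwrite posts data back:

```matlab
% GET from a hypothetical REST endpoint, with a query parameter.
telemetry = webread("https://example.com/api/telemetry", "droneId", "D42");

% POST a result back to another hypothetical endpoint as JSON.
status = struct("droneId", "D42", "state", "healthy");
opts = weboptions("MediaType", "application/json");
response = webwrite("https://example.com/api/status", status, opts);
```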

    Next, let's talk about using MATLAB and Simulink in the cloud. Most MATLAB licenses already work on the cloud, so it's possible to spin up a cloud machine, install and license MATLAB, and start benefiting from Elastic Compute resources immediately. In fact, our reference architecture page has prebuilt images and links to spin them up in your public cloud of choice. For internally hosted clouds, we provide MATLAB Online Server. Here, users can go to a web address in their browser and access MATLAB tools already hosted next to their data, with Elastic Compute, authentication, and access control capabilities. If you'd like to see what this is like, you can try ours right now at matlab.mathworks.com.

    Now that we've moved MATLAB and our data to the cloud, let's start benefiting from some of those new capabilities the cloud can provide. First, let's scale up our analytics. I mentioned trials data earlier. It's not uncommon for trials data sets to be much larger than the RAM available on the system, so you can't access the whole data set at once. It's also pretty common for trials data to be spread across multiple files. In scenarios like this, even a simple operation like finding the largest number in your data set can become much more difficult.

    For reasons like this, MATLAB has a dedicated data type for big data called the tall array. This deals with all of your big data problems for you, so that you can focus on building your MATLAB algorithms with a variable that behaves exactly as if it were an in-memory variable. Tall arrays can be built from datastores. While a tall array tolerates out-of-memory data, datastore objects make multiple-file data sets behave as if they were one single file. The only real change you'll notice when you're writing a big data algorithm is that you'll eventually have to call the function gather.

    Between tall and gather, all of your code is optimized to minimize the slow reads that have to be done from the hard disk to get the data, and where possible, your code will automatically be run in parallel to accelerate your analysis. If you already have big data platforms in place, then MATLAB either has tools to integrate with or operate within them, or our engineering team is available to help you combine your platforms.
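    Here's a minimal sketch of that pattern, assuming a hypothetical folder of trial CSV files with an Altitude column:

```matlab
% Treat a folder of trial files (hypothetical layout) as one data set.
ds = datastore("trials/*.csv");

% Wrap it in a tall array: it now behaves like an in-memory table.
tt = tall(ds);

% Operations are deferred; gather triggers the optimized, chunked read.
maxAlt = max(tt.Altitude);
maxAlt = gather(maxAlt);   % evaluation happens here
```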

    Let's talk some more about parallelization and using those extra resources we now have access to. MATLAB has a variety of simple entry points into parallel computing. For example, the parfor loop allows you to quickly change a for loop of independent sequential operations into one that executes in parallel. Meanwhile, similarly to tall arrays, GPU arrays look and behave like a standard MATLAB variable, but the results are calculated on a GPU. Where appropriate, many resource-intensive functions in MATLAB, like deep learning functions, will use the relevant parallel environments automatically if they're available.
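    A brief sketch of both entry points, where runMonteCarloTrial stands in for any independent, hypothetical per-iteration computation:

```matlab
% parfor: iterations must be independent; available workers share them.
results = zeros(1, 100);
parfor k = 1:100
    results(k) = runMonteCarloTrial(k);   % hypothetical function
end

% gpuArray: the same code, computed on the GPU where supported.
x = gpuArray(rand(1e7, 1));
y = fft(x);      % executes on the GPU
y = gather(y);   % bring the result back to host memory
```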

    It's also possible to run Simulink in parallel through the function parsim. Alternatively, we've recently added a Multiple Simulations panel. This allows you to quickly set up things like design sweeps across multiple variables and run those studies in parallel without writing a single line of code. When the simulations are running, a Simulation Manager app appears that allows you to track how many simulations have run and what region of your design space you've explored, and it will also show plots of the relevant metrics you're interested in as they're calculated across your design space.
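    As a hedged sketch of the programmatic route, assuming a hypothetical model name and tunable gain Kp:

```matlab
% Sweep a (hypothetical) controller gain across a set of values.
mdl = "droneModel";
gains = 0.1:0.1:1.0;
for k = numel(gains):-1:1
    in(k) = Simulink.SimulationInput(mdl);
    in(k) = setVariable(in(k), "Kp", gains(k));
end

% Run the sweep in parallel, with the Simulation Manager tracking progress.
out = parsim(in, "ShowSimulationManager", "on");
```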

    This way, you can catch any potential errors or edge cases without having to wait for your entire study to complete. Everything we just saw ran in parallel locally, although that local machine could actually be a cloud VM. In many cases, however, high-performance compute resides on a dedicated cluster. MathWorks provides a tool for HPC clusters called MATLAB Parallel Server. When Parallel Server is installed and registered in an end user's MATLAB parallel settings, MATLAB users can swap from local prototyping of parallel processes to high-performance compute just by selecting the relevant option from a dropdown menu on the MATLAB toolstrip.

    Parallel Server can be set up in the cloud for Elastic Compute, and we, again, provide reference architectures to help set these up, or we can spin one up for you using MathWorks Cloud Center. If you already have an HPC cluster in use, Parallel Server can integrate with the existing job schedulers and load balancers in that cluster to help you use MATLAB on your cluster alongside other software.
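    Programmatically, swapping to a cluster is just as small a change. A minimal sketch, assuming a cluster profile named HPCCluster has already been registered and a hypothetical analysis function:

```matlab
% Select the cluster profile (hypothetical name) instead of the local pool.
c = parcluster("HPCCluster");

% Submit the analysis as a batch job backed by a 32-worker pool.
job = batch(c, @analyseTrialsData, 1, {"trials/*.csv"}, "Pool", 32);
wait(job);
results = fetchOutputs(job);
```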

    Now that we have our models and algorithms developed and optimized against real-world data, let's take a look at deploying them scalably in the cloud. Broadly speaking, there are two routes to deploying from MATLAB and Simulink. The first is our compiler products, which convert your code into a deployable archive with industry-standard encryption. This archive runs on a license-free MATLAB Runtime. The other route is code generation. This is often used for deployment to hardware but can also generate binary executables.

    Both routes allow you to deploy your software to the cloud in ways that protect your IP. The most basic way to do this is to take your outputs and run them on a virtual machine, but there are more advanced deployment options which are also easily available. One example is a Docker container, which MATLAB Compiler can produce. This is a piece of encapsulated software that contains all of the dependencies it needs to run; for example, when something is produced by MATLAB Compiler, you can package the required Runtime in with it. This means that whatever platform you put it on, this little element of software is going to run reliably.
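    As a minimal sketch, assuming a hypothetical entry-point function analyseTelemetry.m, the compiler workflow can produce such a container in two steps:

```matlab
% Build a standalone application from the (hypothetical) entry point.
res = compiler.build.standaloneApplication("analyseTelemetry.m");

% Package it, with the required MATLAB Runtime, as a Docker image.
compiler.package.docker(res, "ImageName", "telemetry-analytics");
```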

    You can combine Docker containers with Elastic Compute resources using platforms called orchestrators. To have a look at how one of these works, in this case Kubernetes, we'll take an example based on Steven's talk earlier. Let's say we've built one of our drone delivery devices, and that thing is sending telemetry home. Now, that telemetry will come in and be processed by some kind of stream processing platform such as Apache Kafka. That can trigger a job on Kubernetes, and that job can be anything, run in any Docker container, including MATLAB code.

    So the telemetry will be sent home. It could be played through our analytics or our Simulink model, and some kind of metric can be sent back to Kafka, which can then be passed, say, to a database or some kind of storage area that will ultimately be visible to end users through some kind of dashboard software. So where's the scaling here? In this example, we're running our analytics against the telemetry from one drone, but in practice, both of these platforms can run against the signals from multiple drones at the same time.

    So combining these platforms with Elastic Compute means that we can provide MATLAB analytics and run Simulink models against an entire fleet of drones at the same time and gather all that data in one easy-to-find place for the downstream users. We also provide an out-of-the-box platform for MATLAB algorithms and Simulink models called MATLAB Production Server. This platform allows you to horizontally scale your code and models with ease and present them as a web service with a simple API for other cloud-based platforms to access.
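    As a hedged sketch of what a client call can look like, using the Production Server RESTful API with hypothetical host, archive, and function names:

```matlab
% Hypothetical server, archive, and deployed function names.
url = "http://mps.example.com:9910/droneAnalytics/scoreTelemetry";

% The RESTful API takes input arguments in "rhs" and the requested
% output count in "nargout"; outputs come back in the "lhs" field.
body = struct("nargout", 1, "rhs", {{[1.2 3.4 5.6]}});
opts = weboptions("MediaType", "application/json");
response = webwrite(url, body, opts);
```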

    Now that we can access our data and scalably design and deploy algorithms, let's look at some technologies that can help us do this more collaboratively and allow us to mesh all of these stages together using automation. Many of you are probably already familiar with the key collaboration tool in MATLAB: Projects. And this is even more true in the cloud. Projects allow you to automate your path management and any startup or shutdown tasks you have. You can share projects with your colleagues to ensure that they are all developing in the same environment as you.

    This becomes even more valuable when you move to the cloud and automation, because it ensures that your automation pipeline is also running in the same environment as your development. There are also tools for checking the dependencies of code and models within your project, providing an easy completeness check for a code base that's shared with you, but also making it easier to detect the impacts of changes in any one file or model and to see where else in your code base is affected.
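    Those dependency tools also have a programmatic face. A small sketch, assuming an open project and a hypothetical model path:

```matlab
% Work with the currently open project.
proj = currentProject;

% Re-analyze the dependency graph after recent edits.
updateDependencies(proj);

% What does this (hypothetical) model require, and what does a change
% to it impact elsewhere in the project?
needed   = listRequiredFiles(proj, "models/droneModel.slx");
impacted = listImpactedFiles(proj, "models/droneModel.slx");
```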

    Projects also integrate with version control tools such as Git and provide a user-friendly interface for dealing with local or remote repositories. They also provide tools for viewing and comparing the differences between branches, as well as tools for diffing and merging Simulink models. Once you're set up in version control, the next step is to incorporate continuous integration. In order to do this, you'll want to keep your analytics and tests alongside your models.

    Let's go through an example of what happens when you push changes to a CI pipeline. We've made some changes in our local repository, and we want to push them out to the remote repository so they're available for other teams. When we push the changes, our remote repository host will trigger a continuous integration server. In this example, we're using Jenkins, but MATLAB has support for a wide variety of servers, and in practice, any server can use MATLAB as long as it is capable of running it in batch mode.

    The CI server will open MATLAB and run your tests, potentially including tests designed at the integration level. MATLAB and Simulink can hand over the results of their tests in an interpretable form for the CI server, which can then check whether or not the tests that you've defined as critical have passed. If they fail, you have the option to reject the push at this point, rather than merge it in, and send an error report to the person that pushed those changes.
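    One common way to produce that interpretable form is JUnit-style XML. A minimal sketch, assuming the tests live in a tests folder:

```matlab
% Run the test suite and export JUnit-style XML for the CI server.
import matlab.unittest.TestRunner
import matlab.unittest.plugins.XMLPlugin

suite = testsuite("tests", "IncludeSubfolders", true);
runner = TestRunner.withTextOutput;
runner.addPlugin(XMLPlugin.producingJUnitFormat("test-results.xml"));
results = runner.run(suite);
assertSuccess(results);   % errors here give a nonzero exit in -batch mode
```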

    If all of your tests pass, the code can be merged in, and a number of other tasks can be automated. These could be emailing relevant stakeholders or groups affected by the changes, running analytics on the new state of the system, and pushing those results to databases or updating dashboards with things like performance metrics, system completeness, or progress towards certification. You can even automate things like building toolboxes, generating code, and potentially even setting up later-stage tests such as software- or processor-in-the-loop testing.

    So how difficult is it to set up a CI pipeline? It's quite straightforward, and thanks to the new MATLAB build tool, large parts of the build and test pipeline can be authored completely in MATLAB (a minimal sketch of such a build file follows below). Let's have a look at setting one up. Starting in a project, I can easily add it to source control using the Project tools. In this case, I'm going to add it to Git. I can then set up a remote repository, which will be a prepared URL on my remote Git server.
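    As that sketch, assume a hypothetical buildfile.m at the project root, with the tests kept in a tests folder:

```matlab
function plan = buildfile
% Local functions ending in "Task" become tasks (run with: buildtool test).
plan = buildplan(localfunctions);
plan.DefaultTasks = "test";
end

function testTask(~)
% Run all tests under tests/ and fail the build on any failure.
results = runtests("tests", "IncludeSubfolders", true);
assertSuccess(results);
end
```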

    First, I will need to commit my changes locally, and then I can push them to the remote repository. When the changes have been pushed, I can go to the remote repository and create a new pipeline. In this example, we're working in Azure DevOps, but the process is similar in most platforms. I can target a branch and make my pipeline either from scratch or start from a pre-existing template file. In this case, the pipeline is defined using a YAML file, where we define step-by-step commands.

    MATLAB's integration into many platforms makes this quite straightforward, because most of the tasks you want to do will exist as predefined steps within the platform. As the pipeline will start from a fresh VM, we'll start by installing MATLAB, and we can specify which version we want to install and run tests in. We can then configure the pipeline to run all of our tests, and in this case, no extra inputs are required. Additional predefined tasks are also available to run builds or general MATLAB commands.

    After some waiting, the new pipeline will run on our current commit, as well as any future ones that are pushed to the declared branch. Looking at the logs, we can see that the VM clones our commit, installs MATLAB, runs our tests, then cleans up. This takes about seven minutes end to end. Depending on how well you've broken up your system into submodels, CI can report back in close to real time. In this case, all of our tests passed.

    Let's have a look at some examples of what can be triggered downstream after the tests are run, and for that, let's have a look at the environment in GitLab. This time, the pipeline took 10 minutes to run, but extra stages occurred. These stages created artifacts which we can then browse. Firstly, let's have a look at the results. Here we can see a number of directories, corresponding to different parts of the system, and we're going to look at a report generated by the Simulink Process Advisor. This is a tool for performing CI on Simulink models, available on the File Exchange.

    This report is generated automatically, and it summarizes the logged results from a variety of automated checks against modeling standards or design errors. It also provides logs from code generation and the tests themselves. Navigating back out, we can also have a look at the code folder. This contains the results of the code generation, so these artifacts can easily be found by software and integration teams or handed over to other pipelines.

    So we've just seen some examples of a CI workflow, but how is DevOps any different? Through a model-based design lens, we can think of DevOps as the practice of extending everything we've just seen to the in-service system. This allows us to continue to benefit from the models and analytics that we develop during the design stages. Let's explore this with an example product lifecycle based on the drone example we saw Steven discuss earlier.

    Starting with the planning stage, we can architect our system using System Composer and Simulink Requirements. This gives us a platform to author system-wide analytics, including the propagation of impacts through our system due to changes or failures. We can then take this architecture and populate it with our physical models or our software models. We can inform these using trials data, or we can explore multiple solutions or iterations simultaneously under version control.

    CI pipelines will help us collaborate with other teams by simplifying merging and ensuring that robust testing frameworks are applied to every iteration and at every level. When our design is complete, we can release it. In this case, that corresponds to generating code from the modeled systems and embedding it on the drone. That drone will then enter service where it can report back telemetry or other signals.

    Those signals can be automatically handled by our cloud-deployed streaming analytics, with a variety of benefits. For example, we could deploy a predictive maintenance algorithm that allows us to prevent drone failure by servicing the drones proactively, without having to maintain a costly, frequent service schedule across the drones in our fleet that don't need it.

    The reported telemetry can also be used to expand our trials data sets. While the product is in service, unseen scenarios could arise, or, if the product has a long service life, it could be deployed into a new operating environment. These changes in circumstances could inhibit the system's performance, but now we have the data and an automated analytics pipeline that will detect this suboptimal behavior, so we can immediately begin exploring changes to the system software or the design to optimize it for these new scenarios.

    And we still have our models, so we can replay the data through those models, which themselves sit within an analyzable system architecture. So we can spot where changes are needed and identify their downstream impacts. And as we determine and integrate changes that will improve the system performance, we can do so within an automated testing pipeline that we've already set up.

    Every step in this process reduces time and cost and allows us to get a software patch or design optimization back into the in-service product. This rapid iteration is made possible by a combination of model-based design and the collaboration and automation technologies that we looked at earlier in this talk.

    In summary, MATLAB and Simulink provide cloud integrations that allow you to streamline your data management; to exploit Elastic Compute to accelerate simulations and manage large data sets, such as trials data; to automate your system's analysis in response to design changes or to new real-world scenarios; and to develop alongside other teams in a seamless manner. What this results in is a faster design cycle, driven by quicker and easier access to key information when making decisions and more rapid detection of design errors or defects. And this isn't just true in the design stage; it's true throughout the entire product lifecycle.

    If you are interested in exploring any of the technologies shown today, I recommend that you have a look at our cloud solutions pages, which have more in-depth information about some of the things that were shown today. I also recommend that you have a look at our reference architectures page, which has prebuilt images for a variety of VMs and also connectors to help you get started quickly with other systems. Thank you for listening.
