    Aerospace & Defence Forum part 2: From Systems of Systems to HW and SW Certification

    Overview

    The need for advanced civil and defence systems has been brought into sharp focus. In the civil sector, the drive for zero climate impact is spurring development of advanced propulsion, autonomous systems, and urban air mobility. In defence, geopolitical uncertainty has highlighted the necessity for next-generation physical and digital systems.

    The MathWorks Aerospace & Defence forum will bring together experts in different key technology areas to discuss how we can accelerate the development of next-generation systems. Register today to join the forum and explore how models and data can help us respond to these challenges.

    Highlights

    Block #1: From “Systems of Systems” architectures to simulation models

    • Define/describe complex system architectures
    • Using simulation and analysis to inform system design
    • Building executable simulations from a library of shared assets
    • Connecting systems architectures with software/firmware implementation

    Block #2: Simulating complex dynamic system models

    • How complex is your model? Learn to profile & compare models.
    • How can you scale your simulation? Learn solver & co-simulation strategies.
    • How credible is your model? Learn to trade speed & accuracy for real-time simulation.

    Block #3: Certification compliance through verification & validation            

    • Building compliance using Model-Based Design techniques.
    • Applying verification & validation techniques for both hardware & software.
    • How to approach certification throughout a program lifecycle.

    About the Presenters

    Marc Willerton is an application engineer at MathWorks, specializing in signal processing and communications with MATLAB and Simulink. Primarily, Marc works with customers across the aerospace and defense, communications, electronics, and semiconductor industries around applications including wireless communications (including LTE and WLAN), software-defined radio, RF design, radar, and sonar. Before joining MathWorks in 2013, Marc spent a number of years in academia, where his research interests centered on array signal processing and software-defined radio. His work was sponsored by the University Defense Research Centre in Signal Processing. Marc received his master’s and Ph.D. degrees in electrical and electronic engineering from Imperial College London, UK.

    Lachlan Jardine is an application engineer at MathWorks, with a particular interest in decarbonizing power, propulsion, and industrial systems. Lachlan specializes in modelling and simulation of physical systems. He completed his PhD in the Whittle Laboratory, Cambridge, and then obtained a Knowledge Transfer Fellowship to partner with Rolls-Royce. His work in this area has received many awards, including from the IMechE, STEM for Britain, and the University of Cambridge.

    Juan Valverde is the Aerospace and Defense Industry Manager for the EMEA region at MathWorks. His technical background is in the design of dependable embedded computing solutions for aerospace. Before joining MathWorks, Juan was a Principal Investigator for Embedded Computing at the Advanced Technology Centre of Collins Aerospace - Raytheon Technologies in Ireland. Juan has a PhD in Microelectronics and Computing Architectures from the Technical University of Madrid (Spain).

    Recorded: 3 Mar 2023

    Hi, everyone. My name is Marc Willerton, and today I'm going to be talking about systems of systems architectures and their connection to simulation models. So, I'm going to set the main context in my presentation around this search and rescue mission. So here you can see we've got a number of different mission requirements that are listed.

    This is an incomplete list that's going to perhaps grow, be elaborated over time. And then you can see the sort of mission area where our mission is going to take place and a collection of possible assets that might be involved within that mission. So the main focus of my talk is going to be split into two main sections. First of all, I'm going to be giving an overview of the systems engineering design life cycle and why models are important.

    And then I'm going to be going back to that example of the search and rescue mission and showing how these systems engineering principles for the engineering lifecycle can be applied to that problem and the role of simulation in doing that. So, let's start by looking at this collection of assets.

    So this collection of assets might be used to fulfill some sort of mission, and you can see that they're connected to each other. So this is a fairly complex system, and in arriving at this collection of assets we're going to need to figure out whether it's the right set, whether they can fulfill that mission that we've got there. And the way to do that really is to start at this very top level and take a systems engineering approach of breaking things down into their constituent components.

    So we go from the mission to the system of systems through to the single systems, the subsystems, and their lower level components. The idea is at each level we're going to be performing design trades to make sure that we've understood the problem at that particular level, and then iterating as required as we move between the levels. So let's look at this from a top level and see what we mean.

    So here's our design life cycle, and we can see here that we've got a concept, and we take that concept going through down to the deep level component design which is going to form our implementation. However, crucially, what I said earlier was we're going to iterate on that design at the different levels.

    And naturally, it's important to have linkages throughout the design life cycle to get from one stage to the next, but also to be able to go back to previous stages to improve on the overall solution. So this process is known as digital continuity, creating these links between the different stages in the design lifecycle.

    And its main aim really is to enable collaboration between the different people that are going to be involved in this design life cycle to produce a solution that meets the correct requirements of the mission. So one implementation of digital continuity is known as a digital thread.

    So here we can see the various stages in a standard system development process, going all the way from requirements through to the design of the architecture and the subsystem design through to the implementation of that subsystem, its testing and integration. And then once we've got this system out into the field, how we can then perform things like predictive maintenance and other operational requirements on our system.

    So this quite understandably involves a whole collection of processes, of tools and people, and the connection of all of those things together going through the life cycle of this system is known as the digital thread. So let's turn to look at what a mission and system of systems development process might look like. So here you can see we've got our top level mission requirements. So here we talk about basically, what am I making?

    This will need to flow into some requirements of the system of systems. So now, rather than defining what you're making, it's more about how you're making it. What are the different assets that you might have involved in this solution? Once we've defined those, we flow them down into system requirements. So these are the individual assets and how and what they are required to do.

    And then, of course, we need to go ahead, once we've worked that out, how to go and actually make the thing. What you can see here is there's going to be potential interactions that need to take place between these different systems, and as a result, we need to make sure that that works correctly. We've got our system integration working as it should.

    At each of these levels, we're going to iterate on these questions to perform different trades to make sure that we can make the right thing. Going up the other end of this development cycle, we then need to turn to the verification and validation. So these are answering questions like, have I made the system that I've been told to make, and have I made it right?

    At the system of systems level, is the collection of systems actually going to work to fulfill the mission? And at the mission requirements level, am I actually making the right thing? Now, ideally you don't want to get all the way down to a systems implementation and all the way back up again before you work out that the answer to this question is no, and this is where the role of models starts to come in.

    So, model based design is really a common ground for collaboration between lots of different people trying to design either something that's going to fulfill a mission or a system. So it's a place where we can capture collections of requirements for missions, for systems of systems, or for the systems themselves, as well as the interactions between them.

    And a good way of doing that is to capture them within models so that we can clearly express how these things-- how the mission is going to be met and how the systems are going to be doing the role that they need to do. So let's have a look at that in a little bit more detail. So models can be used at lots of different levels in the mission lifecycle.

    So at the very top level, when we're trying to define the objectives of the mission, that's going to involve communication between our customers, operators, and any other stakeholders in the mission that we're trying to meet. So models are a great way of domain experts or systems engineers talking to those customers and operators to communicate ideas and validate the mission objectives. So one way of doing that might be to exchange early prototypes of the systems of systems design, or maybe even some initial concepts-- perhaps some sort of operational analysis.

    Next we go to managers. So once we've actually defined the mission, what it needs to do, and we've got something that we actually might want to design a solution for, we need to agree on responsibilities and decomposition of the system. So this is going to involve communication with people like architects and program managers to make sure that it's very clear who's going to be responsible for which bit of a solution and what all of the different bits are.

    Models also have a great place for engineers. So once the architecture has been decided and allocations have been made for who is responsible for what, we can then make allocations that will manage the implementation of the system. So this might be specifying things like which engineering teams are going to be responsible for implementing the particular components and what is the progress that is being made against that implementation.

    It also might start capturing things like the architecture, so things like interfaces and lower level breakdown of the various systems within the solution into their components. And finally, if we look at collaborators. So quite often a solution to our mission involves lots of other external collaborators, and all of these people are going to need to get a view of what the solution is going to potentially look like.

    And one way of doing this might be to create a virtual environment which allows it to become easier for us to share models and ideas with these external collaborators. So you can see at each of these levels there's a place, first of all, for systems engineering models, but also for simulation to go about maturing these models and the requirements that are captured within them.

    OK. So now we've gone through the basic principles of the systems engineering lifecycle and the role of models within them. Let's now go on to look at-- go back to the search and rescue example to look at more of how these ideas might be applied. So as I said before, if we look back at this mission lifecycle, we start off with defining our mission and goals and then mapping these onto requirements.

    So that might be as simple as writing things down like this, but in reality we're probably going to want to communicate with our customer to be able to do that, and any other stakeholders that are involved in defining the solution. So one way of doing that might be whiteboarding some ideas, but we're basically going to iterate on these requirements until we get to a point where we think we have captured them in a clear way that doesn't miss anything out.

    Once we've done that, we're going to want to start mapping these mission requirements onto a scenario. We'll do that, first of all, because it's going to help us to validate whether the scenario that we've drawn out here will actually be captured by the mission statements, the mission requirements that we've specified. If the scenario that we've created happens to throw up new requirements, it then gives us a discussion point to have with our stakeholders to work out whether the mission requirements need to be elaborated any more.

    In addition to that, it also, by creating a scenario, starts to give us some artifacts which we can use a bit later on in the validation of our system. So when we actually start creating some components in real life, we can play them through this scenario and use that as a way to check that that system that's been designed does what we need it to do.

    So in this case, we've used Unreal Engine to create that scenario. And what we can see here is just a video within Unreal Engine which shows the scene in which our search and rescue mission is going to take place. And we can synthetically generate those and maybe even optimize the different areas in which our mission is going to take place using MATLAB.

    So once we've got our scenario, we can start breaking our mission down into use cases and the corresponding functional architecture. So the functional architecture in this case describes the what. Basically, what the different components of the system actually do. And we can see here that we can use a requirement editor to capture the mission, flow down to the lower level use cases, and then we can actually start to capture the traceability that connects these use cases to the mission.

    And it's also possible to have use cases that are going to interact or be dependent on one another. So if we pick a use case here, we can then start mapping these use cases to a collection of possible assets. So these are how we might fulfill a given use case, again, showing the traceability from the asset to the use case and then to the mission. And here we start seeing models for the first time.
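    As a rough illustration of the kind of requirement capture and traceability just described, the sketch below uses Requirements Toolbox and System Composer programmatically. The requirement set name, requirement IDs, and the architecture model and component paths are illustrative, not taken from the demo.

        rs = slreq.new("MissionReqs");                     % create a requirement set

        top = add(rs, "Id", "SAR-1", ...
            "Summary", "Locate the casualty within the defined search area");
        child = add(top, "Id", "SAR-1.1", ...
            "Summary", "Provide aerial imagery of the search area");

        % Trace an architecture element back to the requirement (assumes the
        % component handle is accepted as a link source)
        arch = systemcomposer.loadModel("MissionArchitecture");
        comp = lookup(arch, "Path", "MissionArchitecture/AerialSurvey");
        slreq.createLink(comp, child);

        save(rs);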

    We're then going to want to start taking these models and breaking them into what the functional architecture of the system is going to be. For a given use case, we have different functions that we need to perform, and then we might start allocating those functions to the different assets that we've got available to us. So here we can really start seeing the digital thread taking place.

    And then we've got different views that we might create, which in this case is showing the different functions allocated currently to the different assets. And this is where things like trade studies might come in, because we might be interested in swapping around where each of these functions actually goes.

    We've also got the option then of checking: are all of our functions captured somewhere, and is one thing performing way too much compared to another? Or maybe there are even assets in this view that don't actually have a role and are no longer needed. So now we've broken down our mission into its use cases and into some of the functions that it needs to perform.
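    For readers who prefer the programmatic route, here is a hedged sketch of creating such an allocation between a functional architecture and an asset architecture with System Composer; the model, scenario, and element names are illustrative.

        allocSet = systemcomposer.allocation.createAllocationSet( ...
            "FunctionsToAssets", "FunctionalArchitecture", "AssetArchitecture");
        scenario = getScenario(allocSet, "Scenario 1");      % default scenario

        funcArch  = systemcomposer.loadModel("FunctionalArchitecture");
        assetArch = systemcomposer.loadModel("AssetArchitecture");

        detectFcn = lookup(funcArch,  "Path", "FunctionalArchitecture/DetectCasualty");
        quadUAV   = lookup(assetArch, "Path", "AssetArchitecture/QuadcopterUAV");

        allocate(scenario, detectFcn, quadUAV);              % allocate the function to the asset
        save(allocSet);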

    We can then start diving more into the mission planning problem. So at this stage in our life cycle, we might start drilling more into, OK, how do these components connect to each other? If they need to, how are they going to communicate with each other? Are there any latency requirements that might be fed down to us from the requirements of the mission that we need to account for when these different systems are going to be communicating with each other?

    So in the context of our search and rescue mission, one thing might be, are we using the right type of asset for the time that it's going to take us to cover this terrain within the scenario that we've defined? And this, again, is a really good place for simulation.

    So what we do in this example is basically use MATLAB with statistics and optimization tools: we give it the scenario, a number of different platforms, and their starting positions, and then work out whether it is possible, given the speed at which those different assets can travel, to get them to the various points within the scenario and fulfill the mission within the time constraints that are in place in the mission requirements.
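    A toy sketch of that kind of feasibility check is shown below; the positions, speeds, and time limit are made-up numbers, and the straight-line travel-time model is a simplification of what the demo actually does.

        % Can a candidate set of assets reach all survey points in time?
        assetPos   = [0 0; 5 2; -3 8];              % km, starting positions of 3 assets
        assetSpeed = [120; 60; 45] / 60;            % km per minute
        points     = [10 10; -8 12; 4 -6; 15 3];    % km, points to be visited
        tLimit     = 25;                            % minutes allowed by the mission requirement

        % Time for each asset to reach each point (straight line, constant speed)
        d = pdist2(assetPos, points);               % 3-by-4 distance matrix, km
        t = d ./ assetSpeed;                        % minutes, expanded over rows

        % Assign each point to its fastest asset and test the requirement
        [tBest, bestAsset] = min(t, [], 1);
        feasible = all(tBest <= tLimit);
        fprintf("Mission feasible: %d (worst-case arrival %.1f min)\n", feasible, max(tBest));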

    And what that then starts to give us is some trade off studies to arrive at our final systems of systems architecture. Once we've got our top level architecture defined, we can then start to look at the individual system requirements. So up until now, we've really seen requirements as things that arrive as text.

    But back when I was talking about these mission requirements, I was also talking about the idea of whiteboarding things and sharing ideas with our stakeholders to make sure that we've captured the requirements correctly. So another way of doing that, in a more visual way, can be to draw models. These models can be much more compelling for having that kind of discussion around your system requirements, and this is basically because they have the ability to simulate.

    We can execute them. We can check that we understand the requirements. We can share and communicate those models with other people and use them to grow the requirement set as needed. It's a much more communicative way than writing something just in text.

    Once we've got the system requirements written down, we can then move on to the actual system architecture. So here we're back to the UAV, and we can start drawing out the actual UAV architecture itself. So we start seeing things like the components-- for example, the propulsion system, the sensor systems, the power, and how they interface with each other.

    So being able to have a model that communicates that is clearly going to be crucial in making sure that when these components are created they all interact with each other in the right way. We can also start using these models to perform trades as well. So for example, if our systems of systems architecture has flowed down a system requirement that our quadcopter has to have a specific response time or a specific amount of autonomy, or it needs to be able to travel at least a certain distance, all of those things are going to start flowing into decisions that I make when I spec out all of these components.

    So for example, how big does the battery need to be to be able to make sure that my quadcopter can get to the place where my search and rescue mission is going to take place? And that's the kind of thing, again, where simulation comes in. And what we can see in this video here is that, if we go into the propulsion system, we find the battery management components.

    And we can start selecting these and then doing a trade off analysis looking at different variants of the battery system, performing an underlying simulation to check the drainage characteristics of that quadcopter under different types of battery, and use that to specify the correct component. So now we've got the systems architecture defined, we can actually get to the point where we start designing some of these components.
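    Before moving on, here is a hedged sketch of what such a battery trade-off could look like programmatically, sweeping candidate capacities with the Simulink simulation API; the model name "QuadcopterSystem", the variable "batteryCapacity_Ah", and the logged signal "range_km" are illustrative, not the names used in the demo.

        capacities = [4 6 8 10];                 % candidate pack capacities, Ah
        results = table('Size', [numel(capacities) 2], ...
            'VariableTypes', ["double" "double"], ...
            'VariableNames', ["Capacity_Ah" "Range_km"]);

        for k = 1:numel(capacities)
            in = Simulink.SimulationInput("QuadcopterSystem");
            in = in.setVariable("batteryCapacity_Ah", capacities(k));
            out = sim(in);
            % Assume the model logs travelled distance as "range_km"
            rangeEnd = out.logsout.getElement("range_km").Values.Data(end);
            results{k, :} = [capacities(k), rangeEnd];
        end

        disp(results)   % pick the smallest battery that meets the system requirement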

    Again, that's going to be an iterative loop. And at this level what we might actually find is we get to the point where we have different simulation components that are going to need to be exchanged with people in other teams to make sure that they integrate correctly. So when we start to get to things like model exchange, we really need to make sure that the models that we are sharing communicate alongside them how they're intended to be used.

    And System Composer can be used to actually capture and explain that information as we're sharing the components, and this can help to overcome problems like needing to share models which are protected for IP reasons, or being able to share models which can then call out to third party simulators.

    So this really is saying we can use systems engineering tools to communicate our simulation architecture requirements. Once our components are designed, we can start to move on to a point where they're actually going to get implemented. So the model based design process continues through to implementation, but there's still an important feedback loop here that can happen back to the system's architecture.

    Let's take this flight computer, for example. This might consist of components that are going to be implemented in software, or perhaps even in hardware on an FPGA or on a GPU. Now, these components, these firmware components may well be implemented to integrate with middleware, which is quite common in implementations today.

    To be able to do this, it's often useful at the systems level to explain how each of these components, how each of the software components, for example, or the firmware components in general are going to interact with each other. It may also be useful to create that linkage from that firmware architecture through to the implementation models, which we then go on to maybe even generate code from.

    So the following few slides show an example of that. So here we've chosen a middleware called OpenCPI, and this shows an example of how System Composer can be used to create a firmware architecture that then feeds through to create a middleware-compliant component. So here we can see that we're using System Composer to, first of all, create compositions that consist of multiple firmware components.

    Now, these components can map onto templates, onto middleware templates. So this component, for example, has been mapped onto C++ templates, which will basically provide the guts of the component so that it's able to operate with our OpenCPI middleware. At the interface we also describe the protocols that our middleware will accept to marshal the data coming in and out of our component.

    This System Composer model that describes our architecture flows down into a Simulink model that describes the implementation. Because we've got that connectivity from the top level component through to the middleware template and down to the implementation model, what we can then do is use the code generation capabilities within Simulink to generate C or HDL code from this thing, to wrap it into the template that is expressed within System Composer to create an OpenCPI-compliant component, and then also generate the XML files that describe the interfaces of this component and the composition of a selection of components.
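    A minimal sketch of that generation step is given below, assuming the implementation model is called "FlightComputerImpl" (illustrative) and that Embedded Coder and HDL Coder are available; wrapping the output into an OpenCPI worker and producing the OpenCPI XML descriptors are driven from the System Composer templates and are not shown here.

        % C/C++ generation for a software component
        slbuild("FlightComputerImpl");                      % Embedded Coder build

        % HDL generation for a firmware component targeting an FPGA
        makehdl("FlightComputerImpl/SpectralFilter", ...    % subsystem path is illustrative
                "TargetLanguage", "VHDL");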

    So in this sense, we basically use System Composer as a mechanism to take our designs from implementation and integrate them with middleware at the systems level. So it's essentially an extension of our digital thread. In conclusion, in this session I hope you've seen that systems engineering and formal processes are essential to manage the lifecycle of complex systems of systems designs, and that simulation and modeling really take prime place in the systems engineering lifecycle in order for us to be able to make decisions.

    Finally, I hope you've seen how that can extend all the way from the initial mission requirements definition right the way through to the implementation of a system, in particular with the example for targeting different middlewares. So that ends my presentation. Thank you very much for listening.

    Welcome, everybody. Let me start with a bold statement. Accelerating technology development has never been so important. So as an industry, I can't think of another time in the history of aviation when there's been such consensus for revolutionary change, from new markets to climate impact to defense capability.

    This year, I'd like to highlight three exciting news stories just to set the scene. Many eVTOL manufacturers have taken large steps towards certification. On the left here, I'm showing Vertical Aerospace becoming the first British company in 20 years to lift off with a new aircraft. Fantastic achievement for all involved. In the middle, modeling the optimum path to zero climate impact has been an area of great focus from industry, government, and academia.

    As the Prince of Wales, King Charles convened a roundtable to bring together leaders from across aviation to discuss the policies needed to address our climate impact. Here he is running a MATLAB optimization algorithm to evaluate these future technology roadmaps. Finally, at the Farnborough Airshow, the UK government announced the future air programme demonstrator would fly in five years' time.

    A myriad of exciting technology demonstrators for reducing climate impact or for future air capability. Here I've highlighted the Rolls-Royce demonstrator called Orpheus. For all these and more, modeling and simulation is enabling this acceleration in technology development. In this talk, I'm going to discuss how to model and simulate a complex dynamic system. In fact, it will be this electric propulsion system sketched here.

    On the left, we've got a thermal model. There's a cooling system for both the battery and a motor connected to an environment. The battery is our power source that's driving an electric motor that's connected to our propulsion system via a shaft. We're going to explore the dynamics of this system, simulation considerations, and how to effectively deploy this to a real time device. Let's see the model.

    So this is the electric propulsion model. It consists of a scenario, a battery, a motor, and a very simple platform. The cooling systems for both the battery and the motor are modeled, as are a simple control system and the environment. So if we go into the platform here, we can see that we've got a propeller that's driving a mass. That propeller has very simple characteristics, with constant thrust and power coefficients. Moving over to the motor, here I'm using a DC motor.

    Again, it's a very simple representation. If I wanted something more complex, I could use a PMSM or I could, for instance, use a DC motor model which contains fault modeling capability. Finally, I'd like to take a look at the battery. This is a module that has been exported from Simscape Battery. Let's take a look at that process in more detail.

    Simscape Battery is a new product released with 22B. It contains many predefined algorithms and helps bridge between the cell level and pack modeling. In this example, we're going to produce a module, including thermal effects. OK. I'm going to move this diagram to the left hand side and walk through the live script on the right. Before starting, we need to import some of the functions and objects used within the battery builder.

    Now, let's start with the cell. So what we're going to do here is create an object in MATLAB that represents the cell that we want to model. So for instance, I can define the geometry to be cylindrical and I can change some of the modeling options. I can then plot this using this battery chart. The options I'm going to change have to do with exposing the thermal port and temperature dependency.

    So plotting the cylinder on a chart we can see I have one cell. Moving on to the parallel assembly. Now, this is a similar approach. We're creating an object in MATLAB. I'm defining it to have 10 parallel cells. This is going to be a square topology, and I'm going to have a detailed model resolution.

    On the chart this time, we can see that I've got a simulation strategy overlaid onto these cells. What this is saying is that each of these cells is going to be represented in my model. To explore this in a bit more detail, I'm going to transfer these lines of code to be evaluated in the command window. And I'm going to change the model resolution within my parallel assembly. So it was detailed.

    It's now lumped. I'm going to use Tab to expose the options there. Now I've changed the object, I'm going to replot the parallel assembly. And what we can see here is that those 10 cells are now being represented by a single time constant. They're all being lumped together.

    OK. Moving on to the module, and it's a similar story. We're creating an object in MATLAB to represent our module. Here I've said I want 10 of these parallel assemblies in series. I'd also like the module resolution to be detailed. So what we can see now is that all 100 of our cells are being represented. I'm also specifying the thermal boundary conditions.

    So, is there a single temperature node or is there an array of nodes that I would like to expose? This might be so that a cooling plate can be attached to the bottom. So for instance, here I could expose a hundred nodes.

    The final step is to use this buildBattery command to convert this object in MATLAB into a Simscape component. What it's doing is creating a library of components here containing the parallel assembly and the module. Once it's created these components in these containers here, it then compiles the library.
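    The steps walked through in the live script correspond roughly to the sketch below (Simscape Battery, R2022b or later); parameter values and the library name are illustrative, and the exact charting call may differ slightly by release.

        import simscape.battery.builder.*

        % Cell: cylindrical geometry, thermal port exposed
        battCell = Cell(Geometry = CylindricalGeometry());
        battCell.CellModelOptions.BlockParameters.thermal_port = "model";

        % Parallel assembly: 10 cells, square topology, detailed resolution
        pAssembly = ParallelAssembly(Cell = battCell, ...
            NumParallelCells = 10, Topology = "Square", ModelResolution = "Detailed");

        % Module: 10 parallel assemblies in series, detailed resolution
        module = Module(ParallelAssembly = pAssembly, ...
            NumSeriesAssemblies = 10, ModelResolution = "Detailed");

        % Visualize the simulation strategy overlaid on the cells
        f = uifigure(Color = "w");
        moduleChart = BatteryChart(Parent = f, Battery = module);

        % Generate a Simscape library containing the module and its assemblies
        buildBattery(module, LibraryName = "SearchAndRescueBattery");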

    On the right, we can see our parallel assembly. It contains 10 cells, detailed model resolution. On the left we've got a module, and within this module I can examine the source code. So this is the Simscape code that's been produced by Simscape Battery. We can see that there's a variety of parameters. So for instance, I can see that there's 10 batteries in parallel, 10 in series.

    Now, what's really helped scale from the cell through to the pack level is this use of for loops. So we can have for loops for components, for loops for connections, equations, nodes. This means that you can have one unit that you're able to reparameterize, reconnect, and scale from the cell through to a pack. So back to the model.

    And we're going to take this model and deploy it to a real time platform. But before we do that, I want to assess the dynamics within this system. So I'm opening up the Solver Profiler here. What this enables is, while the simulation is running, a few more pieces of information are being recorded to help with diagnosis. We can see at the bottom the simulation progress, so we're at 33%.
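    The same analysis can also be launched programmatically; a minimal sketch, assuming the model is named "ElectricPropulsion" (illustrative) and using the Solver Profiler command-line interface.

        diag = solverprofiler.profileModel("ElectricPropulsion");
        disp(diag)   % step-size history, solver exceptions, and tuning suggestions from the run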

    OK, simulation has completed. What we're seeing here in the center of the picture is a graph of the step size versus the simulation time, and we can see there's a few key events where it drops. There's 112 states within our simulation, and it took about 20 seconds to run. Now it's highlighting that the maximum step size was used 97% of the simulation time.

    So it's suggesting that we improve this by increasing the maximum step size. It's also suggesting that we are limited by the fastest discrete rate. So perhaps we could decouple the continuous states from the discrete. Now, the parameter it's suggesting is this "enable decoupled continuous integration" option. We can right-click on this and get a bit more information, and it can link us through to some documentation about what it does.

    So we can see it removes the coupling between continuous and discrete rates. OK. Let's apply this and see how it does. So, 20 seconds was how long the previous simulation took. And another number to note is that the total solver exceptions were 331.

    OK. New graph. Step size versus time looks remarkably different. It took seven seconds to complete. But we can see that actually the total number of solver exceptions is now at 1,400. So what that means is it's able to simulate faster, but it's having to overcome quite a few solver exceptions along the way. So let's compare to see what the difference is in terms of results from our simulation.

    Our scenario varies the throttle position and the incline. The throttle is fully opened and then is pulled back. The plane starts off level, then inclines, cuts back, and then descends. And we can see the velocity profile during this journey. So while it's accelerating, the throttle is open. The rate of acceleration then drops while we're inclining.

    We then reach constant velocity before our dive where we reach the maximum velocity. So that's our scenario. We can plot the difference between these two runs. So in red we've got our decoupled simulation. We can visually inspect differences between these simulations. Another route is that we can use the comparison tool to compare the results between these two different runs.

    We can see the throttle and the incline are identical but the velocity is different. Indeed, at the bottom we can see the difference in velocity. So we can see that there is a peak of 1.5 meters per second difference between these two different runs. That's probably too much, in our case. Let's recouple the continuous and discrete states, but note that there was a factor of three performance improvement that could have been attained if the continuous states could take a larger step size.
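    Programmatically, the same comparison can be made with the Simulation Data Inspector API; a sketch, assuming the baseline and decoupled runs are the last two runs in the SDI repository.

        runIDs    = Simulink.sdi.getAllRunIDs;
        baseline  = runIDs(end-1);
        decoupled = runIDs(end);

        diffResult = Simulink.sdi.compareRuns(baseline, decoupled);

        % Inspect the first compared signal (e.g. velocity) and its largest deviation
        sigResult = getResultByIndex(diffResult, 1);
        fprintf("Maximum difference: %g\n", sigResult.MaxDifference);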

    So in our model, there's a discrete part of our system for the controller and there's a continuous part that's representing our plant model. And these are being interfaced using a first order hold block. Now I'm going to use the local solver within Simscape to run the plant model with a fixed step solver. So turning on an information overlay I can now see that the whole model is being run at the same step size, this D1 rate. So let's rerun this through the solver profiler.

    OK. Results are in. So the graph of step size versus time isn't particularly informative anymore, but we can see that the runtime is 22 and 1/2 seconds. So there was a negligible performance change between the continuous and the discrete run, in this case. Comparing the results between the fully coupled baseline and this fixed step simulation.

    And we can see that there's a good match. In fact, the difference shown at the bottom graph is measured in a hundredth of a meter per second. Now, to get slightly more insight for a fixed step simulation we're going to use the Simulink profiler. So this looks at all the different function calls that are being called during the simulation and chalks up the amount of time the simulation spends within those.

    So we can see at the bottom, again, the simulation progress. OK, so this is the overall top model, and then we can see a breakdown of the time spent in these different functions. So for instance, the Simscape network characterized here in the solver configuration took the majority of the time.

    So in that model, the Simscape network was all using the same step size. However, within our system, the battery, the cooling system, and the plant, they all have different dynamics. They all have different time scales of interest. So what we can do is split our network into three sections, each with its own solver configuration, each with its own step size, and that step size can be tailored to the dynamics.

    So for instance, the thermal system here can have a significantly larger step size compared to the other two. So having split this network into three parts, I'm now going to reprofile. What we can see here is, one, the total simulation time is significantly reduced, but also we now have three different solver configurations representing these three different networks in our system.

    The way I've split these is using these network coupler blocks. So these were introduced in 22A, and what they do is replace a Simscape component with a Simulink implementation. So if we go into this network coupler in more detail, on the left we can see that there's a voltage source, and on the right we can see there's a current source.

    Now, the choice of where you split your network matters, and it may not always be beneficial to split in a certain location. So let's compare the results between this fixed step network and this split network. And what we can see is the effect of introducing these network couplers has been negligible. The difference is measured in thousandths of a meter per second.

    Now that we have our multirate model, let me refactor this based upon the timescale of interest. So those network couplers have now expanded so we can see the interfaces on the left and right hand side here. This has brought the rate transitions to the top level. So we can see I've grouped the thermal subsystem, the battery, and the platform and drive.

    We can then create a subsystem around those groupings. So for instance, here we can see that it's got one half of that network coupler. Each of these subsystems has a different sample rate associated with it. The next step is to convert these subsystems into model references that can be called at those sample rates. So here's one I've prepared earlier. Opening up the thermal model reference, we can see that it's exactly the same as that subsystem previously.

    OK. To schedule the rate at which these were being called, traditionally we would go to this configure tasks within the configuration parameters and we'd explicitly partition our model. We can now use the schedule editor to do the same task. So this is showing which of these model references are being called and at what sample rate they're being triggered. So we've got these different partitions for the different model references within my system.

    We can see what events there are. I can create an event if I wanted. There's also this handy legend to explain what the diagram is showing. I'm just going to update the model. What we can see now is the different colors represent the different rates within our system. And from the schedule editor we can highlight where in the model is being scheduled.

    To further show the link between the schedule editor and the model reference, I can show the block parameters. So this is showing that the block is being scheduled and it's been scheduled with the schedule editor rather than the ports. We've also got this Solver tab now where you can use a local solver for this model reference. So this would use the configuration settings of that model reference as a local solver.

    In a similar way to how we're using a local solver for the Simscape network, we could use a local solver for this model reference. OK, the model is now in a state where we can deploy it to our real-time platform. So let me open up the configuration parameters, and this time we're going to go to the code generation section and choose Simulink Real-Time. So this is slrealtime.tlc.

    And in doing so it gives me this tab, this Simulink real time tab, where I can select which device I would like to target. And I can also run on the target or do these step by step commands to deploy my model. Now, I want to keep the model on the left hand side of our screen and I'm going to make the command window a bit bigger here.

    So what do these step by step commands look like programmatically? So first of all, let's connect to our target. So I'm using this slrealtime command to connect to my target. So in doing so, I've now got this green connected symbol in the top left of my model. The next step is to build my application onto the target. So I'm going to select my model and I'm going to build my model.

    So in the interest of time we've just got to speed this section up. So I now have code that's been built on my machine, and the next stage is to get that code onto my real time platform. So this real time explorer is a helpful window into what's going on on my target machine. So now I'm going to load my model onto my target.

    OK, so on this target configuration I can see that I've got a new application name. I can also see that I've got a number of different signals and a number of different parameters associated with that model. The next step is to run this simulation. So I'm just going to bring up a few monitors.

    I'm also going to capture a bit more information using this start profiler on the target. So let's start the model on our target, and I'm going to run it for 30 seconds. As it's running I can get information about the execution time for these different model references. I can also see how my scenario is evolving and the output of my scenario.
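    The step-by-step actions just shown map onto the Simulink Real-Time command-line API roughly as sketched below; the target name "TargetPC1" and the model name are illustrative.

        tg = slrealtime("TargetPC1");        % handle to the real-time target machine
        connect(tg);                         % gives the green "connected" indicator

        slbuild("ElectricPropulsion");       % build the real-time application
        load(tg, "ElectricPropulsion");      % download it to the target

        startProfiler(tg);                   % capture execution-time data while running
        start(tg);
        pause(30);                           % let the application run for 30 seconds
        stop(tg);
        stopProfiler(tg);

        profilerData = getProfilerData(tg);  % retrieve the code execution profile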

    Clearly we can see from the total execution time I've got quite a bit more bandwidth. So while that was running, I was profiling the results. So let's take a look at what additional information I was able to capture. So this is now transferring the profiler data to my desktop, and I'm just going to skip ahead a little. So the information I get is this code execution profiling report for my model.

    It's got a summary and then it's also got some information about these different sections, these different model references that I'm calling. I can get histograms of how often they're being called. I can also get other information, for instance, about the maximum CPU utilization. Looking at some of this information through time, in the Simulink data inspector I can see what's being called on the various cores, on my real time device.

    I can also see the relative usage on my various cores. This is at around 1%, so plenty more bandwidth for a larger model. OK, so we've examined the various dynamics and we've been able to deploy our model onto a real time device. I just want to touch briefly on how to set up version control, which is essential for working on some of these larger models. So in my MATLAB project I'm adding a source control link, so that's setting up this Git repo.

    I can set up a remote URL associated with this and validate its existence. I can view and commit these changes, add a comment. So now this repository is committed locally, and then I can push it to the remote repository. I can view the various different branches with my repo. This is one I've only just set up, and it can get pretty complex pretty fast.

    So what does this enable? Well, it enables things like using DevOps processes. So for instance, connecting it to a CI system. Here I've got a Simulink test inheriting from a MATLAB-based class. What this test is doing is selecting my model, varying a few variables, and then simulating. This test is called in my Azure environment using this pipeline. This pipeline consists of a set of tasks.
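    A minimal sketch of that kind of test is shown below using the matlab.unittest framework (the demo uses the Simulink Test variant of this); the model name "ElectricPropulsion" and the variable "throttleSetting" are illustrative. The class would live in its own file, tPropulsionModel.m.

        classdef tPropulsionModel < matlab.unittest.TestCase
            properties (TestParameter)
                throttle = {0.5, 0.8, 1.0};    % variations exercised by the test
            end
            methods (Test)
                function simulatesWithoutError(testCase, throttle)
                    in = Simulink.SimulationInput("ElectricPropulsion");
                    in = in.setVariable("throttleSetting", throttle);
                    out = sim(in);
                    % The run should finish and produce logged outputs
                    testCase.verifyNotEmpty(out.logsout);
                end
            end
        end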

    This pipeline can then be triggered based upon any merges into the master branch, for instance. So how did I set this up? Well, I started by selecting my repository. I then needed to select my pipeline. And in this case, I've got a few tasks. If I didn't know what these tasks were, I could use this assistant to find out some useful commands. So for instance, I might need to install MATLAB first.

    So in this case I'm going to use 22B, and then after that I'm going to run my tests. And there you have it. That's my pipeline. So I'm just going to save this. I'm going to save this to my Git repo. And when I commit, it's going to then trigger it to be run for the first time. So this job is now queued, and I'm just going to speed it up in the interest of time.

    So success. It's finished. What does that look like? Well, I can have a look at what's happened within my pipeline. So first of all, it installed MATLAB. There's some clever ephemeral loading, which means that it only took 22 seconds. And then it ran my tests. Here is where it spent most of its time. At the end, I can see that it passed. And there you have it, a whistle-stop tour of setting up a Git repo and then connecting it with a CI pipeline.

    To summarize, modeling and simulation is accelerating technology development. In this talk, we saw how to examine a complex dynamic system looking for timescales of interest. We were then able to split our network in order to take advantage of both scaling and solver technology to accelerate the simulation. We then targeted a real-time platform, and also saw how to use version control as well as CI/CD platforms. Thank you very much.

    Hello, everyone. My name is Juan Valverde, and today I'm going to be talking about how our solutions can support your certification activities. Our support for certification is extensive; unfortunately, it is difficult to go into too much detail in 30 minutes, but I hope to at least show some of the depth of what you are able to do with our solutions and services.

    Today, I will be focused on civil standards, although we all know their use is very common in defense programs or even space applications. I will talk a bit about the systems level, mostly ARP4754, and how models are recommended and can help in the development process. I will talk about electronic hardware following DO254 and software following DO178C.

    I will also mention some important things about the supplements, mostly about tool qualification and model-based development and verification, although I will mention a bit about the use of formal methods as well, but not a lot. Today, unfortunately, I will not have time to go into details about cybersecurity, but this is another standard we can cover.

    Before we start with the different standards, I would like to highlight some of the main concepts we will hear throughout the presentation. Today we will talk about verification. This is whether I'm designing my system in the right way. We will talk about validation. This is whether I'm designing the right system. We will see design standards, including code and modeling standards.

    We will mention the importance of traceability from requirements to code, going through models, test cases, and reports. We will also talk a lot about tests and coverage at different levels, and we will mention some advantages of using formal methods for property proving, test generation, or performing static analysis at code level. The main idea that I would like you to take away from this presentation today is that we can cover all these aspects for the different standards.

    So let's start at the system level with ARP4754. Today we'll not cover 4761 specifically, but of course they are highly related. This is an overview of the V model for hardware and software development starting at system level. This process includes system requirements identification, allocation to items, item design, and, on the right side of the V, item and system verification.

    This way we will go top down from safety requirements to development and bottom up performing requirements verification. In reality, we will have smaller Vs at the different stages. On the left side of the V, the standard includes analyses such as the system functional hazard assessment, the preliminary system safety assessment, and common cause analysis, to then perform allocation, fault tree analysis, and common mode analysis, ending up with a system safety assessment.

    Our solutions will not cover all these processes but will be very helpful in a lot of them, as we will see. But first, let me show you some examples of how the standard is recommending the use of models and what the main advantages of using them are. First, by using models, we can have an unambiguous description of the system to discuss more effectively with customers and suppliers.

    Using models, it will also be possible to identify missing requirements more easily than with, for example, only documents. And of course, it will be possible to correctly specify interfaces among systems, which are crucial to establish contracts among partners in a project. In the standard we can see how prototypes in the form of models are mentioned to help with what I just described. The standard also mentions the importance of the reusability of these models. This can be extremely beneficial for subsequent projects.

    The concept of a model is mentioned across the whole standard. So for instance, in the requirements capture phase, it states that one of the acceptable formats is a graphical model. We will need to specify the intended tools to be used, modeling standards, et cetera, so that it is possible to achieve a common understanding of how these models are going to be used.

    It is also mentioned how models will be used as validation methods to validate system requirements. Models will be used to discuss unintended functionality even before testing starts. Models are, of course, mentioned as verification means. Models are normally used for system parameter evaluation, to provide early information about the system, et cetera.

    And they are a very important part of integration. While not everything about integration can be done with models, some cost effective early analysis using models is often beneficial to reduce integration testing effort in the final system implementation. Then let's provide an overview of what we can do with our solutions to cover ARP4754. Some of these features were already mentioned by Marc during the systems of systems presentation, so I will not go into a lot of detail.

    First, within the requirements capture phase, we will be able to author or import requirements and link them to our system models in System Composer. Then we will also be able to validate the requirements. For instance, by using the requirements table format we will be able to automatically detect certain inconsistencies among requirements or even incompleteness issues when certain conditions are not specified.

    Then from CONOPS you will define the use cases for your missions, identify what assets and functions will be required during the aircraft function development, and you will be able to link and generate diagrams for use case dependencies, assets needed per use case, and allocate functions to those assets. Actually, by using the allocation editor you will be able to allocate, using a matrix representation, your functions to your different subsystems. This way you will be able to create and compare different scenarios.

    Then you will be able to continue developing your system architecture in System Composer, specifying interfaces among systems and system attributes via stereotypes, and perform trade analysis, as was mentioned by Marc before. Then you will also be able to slice your models and create simplified views, et cetera. Then you will be able to allocate system requirements to items and measure implementation.

    Even later, while testing, you will be able to capture the verification status, et cetera. Then you continue with your system implementation by adding behavior to your architecture blocks in Simulink. At this stage, it is important to mention the possibility of starting with a low level of fidelity to help establish contracts among systems, and later refining the implementation using models from which you will be able to generate production code.

    Then we cannot forget how we can help with configuration management, data, and documentation: using projects to improve the organization of your files, integration with SVN and Git, being able to generate dependency analysis graphs to see connections among artifacts, and tracking changes by seeing, for example, what artifacts are modified when changing certain files, et cetera.

    Of course, there is also the implementation verification part; we will see a lot more about this in the following sections. Here I wanted to highlight some public customer references about the usage of our tools to comply with the ARP standards. First, for instance, Embraer used our solutions for the flight control system of their Legacy 500 business jet.

    They highlighted how they were able to shorten design time by at least six months, how they minimized issues related to requirements validation, and how they used models for real-time testing. Another interesting story is about Airbus. They used model-based design for the fuel management system of the A380. They highlight how, by using model-based design, they could improve visibility of their functional design, how this helped with requirements validation, and how it helped with the simulation of different simultaneous failures.

    Now that we have seen a bit about how we can help support in ARP activities-- it was not in detail of course, but let's go into more details about the hardware and software development processes. So then for that we are going to go one step down and go into the DO178C and the DO254 standards. First I have to say that DO254 and DO178C standards are not equivalent.

    In some cases they are close, at least process-wise, but they are very, very different in nature. However, there are certain parts that, because of the way the tools are used, we can show together today. As general steps, the two standards consist of the following phases. Both start with requirements. In hardware it is the requirements capture process, while in software they are referred to as the high-level requirements.

    But in general, these are textual requirements, although we will see some exceptions later. Then we move into the conceptual design in hardware, and the low-level requirements and software architecture in software. In our case, this will be the design model, the Simulink model. Then we will go, in both cases, to code generation. Very different in nature, of course, depending on the case.

    This gets us to the detailed design in hardware and directly to source code in software. Then we will have HDL code synthesis and compilation of the C code, to get to the bitstream, for instance if working with an FPGA, and to the object code in the case of software. At the very beginning of this process, the two sides are fairly equivalent, although once we start refining the models for code generation, the model transformations will be very different in the two cases.

    Another part that I would like to mention is the supplements of the standard. In principle, we have the DO178C supplements for tool qualification, model-based development and verification, object-oriented code, and formal methods. We will see that even though tool qualification is mentioned in a slightly different way in the DO254 standard, the acceptable means of compliance, AMC 20-152A, also mentions tool qualification according to DO330.

    Apart from that, the model-based development supplement is not officially linked to DO254, but we know that there are many cases where some of the concepts are applied successfully and this is accepted by the authorities. So let's provide a bit of information about the requirements capture phase for both standards. Using Requirements Toolbox, it will be possible to either author requirements or import them from different external requirements management tools such as Polarion or DOORS.

    One common way of doing so is by using the ReqIF format. This way, we will be able to update and synchronize requirements changes from the source and have robust links. We mentioned that traceability is a very important aspect for certification, so using Requirements Toolbox, it will be possible to link requirements to test cases and models, and even later on to code in the form of comments.

    This way it will be possible to identify gaps in the design and track what requirements have been tested, et cetera. There are different ways to capture requirements. Normally they will be captured as text, but as I will be mentioning later on, it is also possible to formalize them using the table format. The design description report with the traceability data can be qualified.
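    As a rough sketch of that import route, the snippet below brings a ReqIF file into a requirement set and locates one entry to trace against; the file name, set name, and requirement ID are illustrative, and the exact import arguments may vary by release.

        slreq.import("fuel_system_hlr.reqif", "ReqSet", "FuelSystemHLR");

        rs  = slreq.load("FuelSystemHLR");
        req = find(rs, "Id", "HLR-042");     % locate an imported requirement to link against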

    Then we move into the design model. We will see that there are a lot of things we can do at this level, and this is applicable to both standards. For instance, starting with compliance to modeling standards, the DO331 supplement talks specifically about describing the modeling techniques, modeling notation, et cetera. DO178C and DO331 will not ask for specific modeling standards, but they will ask you to describe your own modeling standard, and of course, to prove compliance.

    This is an example for DO254. We have a selection of modeling techniques that can be checked automatically in your Simulink models. For DO254 we have a collection of checks that are taken from experience as best practices. This includes things like the identification of duplicated names, how to configure your model for code generation so that your code will most likely comply with certain standards, et cetera.

    Checking conformance to this standard is also a qualifiable feature. Then we can go a bit further and use Simulink Design Verifier to run model checking. This can be done using some predefined properties like division by zero, overflows, et cetera, and this feature is qualifiable. But we can also define other properties and run property proving.

    Of course, as always with formal methods, convergence will depend on the complexity of your model, the complexity of the property, data types, min-max values, et cetera. So this feature is not qualifiable at the moment.
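
    As a minimal sketch of what this looks like programmatically, one could run a set of modeling-standard checks and a design error detection analysis as shown below. The model name and the Model Advisor configuration file are hypothetical, and the checks available depend on the products installed.

```matlab
% Minimal sketch: modeling-standard checks and design error detection.
% 'fcc_model' and 'do254_checks.json' are hypothetical names.
mdl = 'fcc_model';
load_system(mdl);

% Run a Model Advisor configuration containing the selected modeling checks
maResults = ModelAdvisor.run({mdl}, 'Configuration', 'do254_checks.json');

% Run Simulink Design Verifier design error detection for divisions by zero and overflows
opts = sldvoptions;
opts.Mode = 'DesignErrorDetection';
opts.DetectDivisionByZero = 'on';
opts.DetectIntegerOverflow = 'on';
[status, files] = sldvrun(mdl, opts, true);   % true opens the results summary
```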

    We already mentioned it, but another capability at model level that is important to highlight is the connection of requirements and test cases, which, as we know, is specifically an objective in the standard. We can have metrics for implementation completion and verification status, track broken links, and generate reports. The design description report with this trace data, as I mentioned, is also a qualifiable feature, which is very important. Then we move into the model testing realm. The design model can be tested using the test cases defined before. With this, we will answer two different questions. Did we meet requirements?

    This is the case if all pass criteria from the requirements-based tests are satisfied. But we will also answer, using model coverage, whether our model is completely tested; that is, have I exercised my model fully? The coverage assessment of tests is also a qualifiable feature, and this will be very important. For instance, DO-178C asks for test coverage of the low-level requirements. Since our design model is equivalent to the LLRs, this satisfies the objective.

    All these results can be seen in the Model Testing Dashboard for my requirements. I can see whether all my requirements are linked to tests, whether all my tests are linked to requirements, which tests pass or not, and what the level of structural coverage is following different criteria, like decision coverage, MC/DC, et cetera. Having this snapshot of the situation is very helpful because this way I can identify gaps among requirements, tests, and test results, and it is also very useful when using continuous integration capabilities.
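
    To give a flavor of how such a run can be automated, for instance from a continuous integration job, here is a rough sketch. The test file name is hypothetical, and coverage metrics such as decision and MC/DC are assumed to already be enabled in the test file settings.

```matlab
% Minimal sketch: run requirements-based tests and produce a results report.
% 'fcc_tests.mldatx' is a hypothetical Simulink Test file.
tf = sltest.testmanager.load('fcc_tests.mldatx');
results = sltest.testmanager.run;                            % run all loaded test files
sltest.testmanager.report(results, 'fcc_test_report.pdf');   % report with pass/fail results
```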

    But what happens when I test my model, calculate coverage, and I am missing coverage? Why is that? Am I missing requirements, or maybe I do not have all the test cases required to test my model, or maybe there is an error? Using Simulink Design Verifier, it is possible to generate tests that will show the non-exercised parts of your model.

    Then it is the responsibility of the user to understand why. It might be the case that I have non-reachable states by mistake, or that my test cases do not exercise everything because my requirements are not complete. In any case, being able to flag these cases is very useful. Great. So this was a small summary of the things that can be done at model level to comply with what the standards are asking for.
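
    A rough sketch of such a test-generation run, assuming a hypothetical model name and MC/DC objectives, could look like this:

```matlab
% Minimal sketch: generate tests for uncovered parts of the model with Simulink Design Verifier.
mdl = 'fcc_model';                          % hypothetical model name
opts = sldvoptions;
opts.Mode = 'TestGeneration';
opts.ModelCoverageObjectives = 'MCDC';      % target MC/DC coverage objectives
[status, files] = sldvrun(mdl, opts, true); % review which objectives were unreachable vs. untested
```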

    I will mention a bit more about model transformations later on. I have now mixed DO-178C and DO-254, but let's say that at this level these activities are fairly equivalent. Now, if we go into a bit more detail on DO-178C and focus on what happens after the model, at this level we can see primarily three things: a qualifiable code generation process, a qualifiable formal analysis of the source code, and a qualifiable software testing process.

    I will not have time to cover everything. I will show a bit about the objectives for the source code and how they can be satisfied using code generation when I explain the tool qualification standard. Today I will briefly mention the use of formal methods to analyze source code, but not in detail, and I am going to focus a bit more now on the testing process.

    So if we focus for a second on Table A-6 of the standard, for the testing of the outputs of the integration process, we see different objectives about the executable code. The executable code needs to comply with, and be robust with respect to, the high-level and low-level requirements. In our case, we will see how it is possible to rerun the same test cases that I mentioned before, including robustness tests that we executed on the model, by using, for example, instrumented code in a software-in-the-loop configuration, or even on target using processor-in-the-loop.
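
    As an illustration of this back-to-back idea, a rough sketch is shown below: the model name is hypothetical, signal logging is assumed to be enabled, and top-model SIL is assumed to already be configured (the exact SimulationMode string may vary by release).

```matlab
% Minimal sketch: compare logged model-level results against a SIL execution.
mdl = 'fcc_model';                                                   % hypothetical model name
outMIL = sim(mdl, 'SimulationMode', 'normal');                       % model-in-the-loop run
outSIL = sim(mdl, 'SimulationMode', 'Software-in-the-Loop (SIL)');   % assumes SIL is configured
runMIL = Simulink.sdi.Run.create('MIL', 'vars', outMIL);
runSIL = Simulink.sdi.Run.create('SIL', 'vars', outSIL);
diffRes = Simulink.sdi.compareRuns(runMIL.ID, runSIL.ID, 'AbsTol', 1e-6);
fprintf('Signals out of tolerance: %d\n', diffRes.Summary.OutOfTolerance);
```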

    Our inputs and outputs at model level are logged and used to compare results during execution. Of course, we do not control whether the compiler introduces any undesirable effects, so this is also a way to see possible discrepancies caused by the compiler. I already mentioned that this test case reusability also applies to robustness tests, but there are also other ways to support compliance for robustness using Polyspace.

    Well, robustness evidence rather than testing. For instance, Polyspace Code Prover can help identify abnormal inputs and conditions by flagging things like arithmetic faults, violations of array limits, et cetera. Polyspace can also help verify that software components interact correctly and satisfy the requirements and the software architecture. This is done by identifying uninitialized variables, parameter passing errors, global data corruption, et cetera.

    Polyspace Code Prover can also be qualified using the DO Qualification Kit. So, great. These were only some examples of how our solutions can help in complying with DO-178C. Like I mentioned, I will go into more detail about code generation later on. Of course there is a lot more, so please do not hesitate to contact us if you want more details. So let me now jump into examples related to DO-254.

    Unlike the software process, where MathWorks has a qualified code generation process, no one has a qualified HDL code generation process from models. Our solutions go from requirements to HDL code, and different flavors of test benches can be created, from plain RTL test benches to SystemVerilog and UVM ones, et cetera.

    HDL code simulation, code coverage analysis, and coding standard checks are done on our generated code and test benches, but normally using partner tools. This should not be a problem. We have already seen some of the capabilities at model level, but let's see some more specific capabilities for HDL code generation, testing, et cetera. Generating code is not magic.

    You need to work on your model to obtain the best results possible. For that there are many different model transformations you can apply to improve your code generation possibilities. One very important transformation is automatic fixed-point conversion, in case you want to use fixed point. The tool will be able to propose fixed-point data types based on the ranges observed in your tests. Then you can accept or modify them, et cetera.

    Then you can also achieve more memory-efficient implementations, for example using the Lookup Table Optimizer. This works by approximating the behavior of certain blocks. There are other optimizations, like the use of specific blocks for modulo and divide-by-constant operations. This will ensure that we get optimized, rather than generic, code.

    There are also structures to keep in mind, for example using buses, using For Each blocks to eliminate code replication, using Stateflow charts for FSMs (finite state machines), et cetera. These model transformations can be done with the help of different types of analysis. I already mentioned fixed point, but you can also estimate, for example, the critical path and latency and represent them graphically.

    You can then automatically insert pipeline stages to break this critical path. You can also optimize your implementation for area or performance by forcing, for instance, resource sharing. So there are a lot of changes you can make at model level that will support and help improve your code. There are many other configurations as well, for instance forcing multiplications by constants to be implemented with shifts, or using pragmas to help the synthesis process map functions to DSP logic.
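
    A minimal sketch of how some of these transformations are configured programmatically with HDL Coder is shown below; the model and subsystem names are hypothetical.

```matlab
% Minimal sketch: HDL Coder settings for pipelining, sharing, and critical path estimation.
mdl = 'fpga_ctrl';  dut = 'fpga_ctrl/DUT';           % hypothetical model and DUT subsystem
load_system(mdl);
hdlset_param(mdl, 'TargetLanguage', 'VHDL');
hdlset_param(mdl, 'CriticalPathEstimation', 'on');   % annotate the estimated critical path
hdlset_param(dut, 'OutputPipeline', 2);              % insert pipeline registers at the DUT output
hdlset_param(dut, 'SharingFactor', 4);               % force resource sharing for area optimization
```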

    Then we generate our code along with reports that bring information about traceability, estimated resource occupation, et cetera. You can also select different coding standard rules at model level that your code will most likely follow, for instance the STARC coding standard. Just to make a quick reference to the standard: DO-254 asks that the detailed design, the HDL code in our case, is derived from the conceptual design, which is the model, and that certain standards are followed and that we can guarantee they are followed.
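
    As a minimal sketch of the generation step itself, again with a hypothetical subsystem name, HDL code, a coding-standard check report, and a matching test bench could be produced like this:

```matlab
% Minimal sketch: generate HDL code, a coding standard report, and an HDL test bench.
dut = 'fpga_ctrl/DUT';                               % hypothetical DUT subsystem
makehdl(dut, 'HDLCodingStandard', 'Industry');       % generate HDL plus a coding standard report
makehdltb(dut);                                      % generate the matching HDL test bench
```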

    Then, as you know, it is normal practice to test HDL code in simulation. In this example we are using Questa; it is possible to either simulate the automatically generated code, exercised by the automatically generated test benches, directly in Questa, or to cosimulate Simulink and Questa to compare results with the model-level test cases and use pass/fail criteria assessments, et cetera.

    The test bench can also be generated as a SystemVerilog version and, like I mentioned, UVM test benches are also possible. Then tests can be reused after synthesis using an FPGA-in-the-loop setup. The standard also asks for evidence that the hardware implementation meets the requirements, so by reusing requirements-based tests at this level it is possible to show this evidence. Again, if there are any changes introduced by the synthesizer, we should be able to see the discrepancies.

    Great. So we have seen some possibilities, done at model level, that help in complying with the standards. We have seen some examples of how to cover objectives for both the hardware and software standards, including code testing. Now I would like to mention some details about two of the supplements, starting with DO-330 for tool qualification.

    DO-330 defines three criteria for tool qualification. Criteria 1 applies to development tools whose output becomes part of the resulting software. Criteria 2, which is a bit confusing, applies to verification tools whose output is used to eliminate or reduce verification processes other than the ones the tool automates, or even development processes. And Criteria 3 is focused on verification tools that eliminate or reduce some review processes.

    Regardless of the software level, every Criteria 3 tool needs to be qualified to TCL-5. Now, this is what MathWorks addresses: the qualification of our verification tools. There is always a misunderstanding about the terms qualified and qualifiable. Tools are not pre-qualified by the vendor; they need to be qualified in the user's environment. To that end, we provide the DO Qualification Kit to run tests and provide evidence to the authorities to achieve this qualification.

    Tool qualification is great and saves time, but it is not compulsory. You can use tools and not qualify them; you just need to review the output, which in some cases is desirable as well. So this does not mean you cannot use a tool, just that it involves certain reviews. Another clarification is that, within a tool, there may be features that are qualifiable while others are not.

    So it is not exactly the tool that is qualified; we qualify features of a tool. Then, like I mentioned before, just for clarification: even though the DO-330 supplement is officially linked to DO-178C, the acceptable means of compliance, AMC 20-152A, also mentions this supplement in its text, as you can see on the slide.

    Then, as I mentioned before, I wanted to show one example of tool qualification: the qualification of the code generation process, but not the code generator itself. This is done by qualifying the tools that verify the code. If we look at the objectives specified in Table A-5, about verification of the outputs of the software coding process, we see that we can cover them in full with the exception of "source code is accurate and consistent", which is only partial since there are, for example, aspects like cache management or memory usage that are not covered.

    But in general we have that covered, which is equivalent to the process with a qualified or qualifiable code generator. For instance, using results obtained during simulation, Simulink Code Inspector, and Polyspace, it is possible to comply with these objectives. So let me give you some examples. To guarantee that the code complies with the low-level requirements and software architecture, we can rely directly on Simulink Code Inspector, since it guarantees equivalence between model and code, and we already tested the model.

    Regarding the code, Polyspace Code Prover can analyze the usage of global variables, build a data dictionary, and see accesses to shared data. Based on this analysis, the tool will compute the application call tree, so this also supports the objective. To see whether the source code is verifiable: well, the model was verifiable and we have Simulink Code Inspector, so the code does not need to be altered to be verifiable.

    Apart from that, Polyspace Bug Finder can also help by identifying, for instance, unreachable code, which is therefore non-verifiable. This also applies to manually written code. Then the standard asks for conformance to coding standards, and we already mentioned some of that. We can, for instance, check the code against the MISRA C standard. So this is a possibility.

    And then about traceability, we already explained how this is kept. So by qualifying the code verification tools, we can have a qualified code generation process, and this is not a problem; it is accepted by the authorities. And then, to finish with the standards, let me mention some details about DO-331 for model-based development and verification.

    The standard defines two types of models: a specification model that represents, and I say represents, not implements, high-level requirements. This should be a representation indicating the what and never the how. And then it defines the design model, which is what we have seen throughout the presentation today, and which defines the software design. This is the implementation of the requirements, indicating the how.

    The standard shows different examples and possibilities based on the usage of these models. Today I mostly focused on example one, with no specification model: textual requirements for systems and software, a design model, and code. But this is only one option. Now, while specification models can be very beneficial to improve verification, and even to generate tests, we need to be careful because in some cases it can be easy to mix specification with implementation.

    For instance, I think it is better to use the Requirements Table or Simulink Design Verifier blocks rather than Simulink or Stateflow directly. Don't get me wrong, nothing stops you from using them, but you need to be extra careful, because if you use the same language to specify the requirement and to implement it, and your requirements are not really clear about what versus how, you can mix them. So, a word of caution there.

    Great. So, to finish with the standards, I just wanted to share some customer references. The first one, about DO-178C, is a story shared by Leonardo for radar navigation software for search and rescue missions with helicopters. Some very nice highlights are how recertification cycle times were reduced by more than 90%, and the use of automatically generated documentation amounting to 250,000 pages. These are remarkable results.

    Another example comes from Bell for the design of their fly-by-wire system to DAL A; the previous one was DAL C. Integration time was also cut by 90%, and Simulink Code Inspector was qualified with the FAA. Of course we have more, but these are some of the ones we can share. For DO-254, I wanted to show a fairly recent one that we have with Airbus Defence and Space in Spain.

    This is the airborne electronic hardware group, which used our tools for the high-level control of the flaps of the C295 aircraft. Some interesting highlights are how they were able to have different teams working at the same time on the same architecture by using referenced models, the automatic generation of HDL code, and how they were able to find errors by reusing tests at the different levels, like I was saying before.

    So I hope I was able to provide some hints about the possibilities of our tools for covering certification standards. Let me now close the presentation with some information about the ways MathWorks can support you, and some general conclusions. MathWorks has many engineers dedicated to providing support to our users. This is something we always have to remember.

    This is done through a close relationship of our application engineers and consultants with your projects, by providing training, and by continuing to work with you to develop the features required to support your needs. So please talk to us. Let us bring you the latest developments, bring us your needs so we can communicate them internally, and let our engineers work closely with you, because this is the way to accelerate your design.

    And then, just to close the presentation, some general remarks. We can cover certification standards from systems to hardware and software, even cybersecurity with DO-326, which I will not explain today. Model-Based Design in general will help you accelerate your work by enabling traceability, code generation, report generation, and reuse, and the reuse of verification activities is very important.

    Tool qualification is possible, and this will help eliminate manual reviews in many cases. Tool qualification is not always required, but it helps a lot. And like I was just saying, work with us. Designing for certification is a long process full of details, so MathWorks can support you not only by providing tools, but we have many field ex--
