Generating High Integrity Production Code Using Model-Based Design
Overview
Embedded code generation is fundamentally changing the way engineers work. In high integrity embedded applications, where extremely high product confidence is needed because a failure may prove catastrophic, it is crucial to develop more robust code. Automatic code generation is an essential step in the efficient adoption of Model-Based Design, as it enables the user to apply advanced verification and validation techniques to produce high-quality code, reducing the number of iterations in a typical product development cycle and eliminating errors introduced by manual coding.
Instead of writing thousands of lines of code by hand, engineers are automatically generating production code to increase productivity, improve quality, and foster innovation.
Embedded Code Generation using Embedded Coder® generates readable, compact, and fast C and C++ code for embedded processors used in mass production. Embedded Coder offers built-in support for AUTOSAR, MISRA C®, and ASAP2 software standards. It also provides traceability reports, code documentation, and automated software verification to support DO-178, IEC 61508, and ISO 26262 software development.
Code generated using Embedded Coder is portable and can be compiled for and executed on any processor.
In this webinar, you will learn how you can use the features of Embedded Coder to generate code that meets industry standards, as well as the flexibility they offer when configuring the model and generating optimized, production-ready code.
Highlights
- Preparing the model for automatic C code generation
- Applying customizations based on software architecture for code generation
- Performing back-to-back testing of generated code and model for consistency
- Generating code that complies with industry guidelines and standards for production code
About the Presenters
Aravind Singh, Senior Application Engineer, MathWorks
Aravind Singh is a Senior Application Engineer at MathWorks, where he works as a Technical Account Manager for a major automotive OEM. His areas of focus are embedded code generation, verification, and validation.
Before joining MathWorks, Aravind worked for 4 years at Tata Motors as a Design Engineer, where he was responsible for application development for OBD, OBD2, and EGR control logic for an emission control ECU (for commercial vehicles), using Embedded C and Model-Based Design.
Aravind also worked for 4.5 years at American Axle and Manufacturing Services as software lead for all-terrain vehicles, and at John Deere as a technical lead, where he was responsible for team management and software development for various Deere products. Aravind holds a master's degree in embedded systems and electronics and an M.B.A. in marketing.
Gaurav Dubey, Senior Team Lead, Pilot Engineering, MathWorks
Gaurav Dubey is a senior team lead for the pilot engineering team in MathWorks India and specializes in the fields of model-based development workflows, automatic code generation, verification and validation, and certifications. He closely interacts with customers in different domains to help them use MathWorks products for model-based development, production code generation, and software verification and validation. Gaurav brings more than 15 years of experience in embedded system development. Prior to joining MathWorks, Gaurav worked with Tata Motors Limited, where he gained hands-on experience in engine management system ECU development. He has also worked as a software analyst at Tata Consultancy Services on different automotive projects involving model-based development. Gaurav holds a master’s degree in instrumentation engineering and a master’s degree in electronics and communications.
Recorded: 25 Feb 2021
All right. Good afternoon, everyone. I'm Gaurav Dubey from the application engineering group at MathWorks, and I have my colleague Aravind with me, also from the application engineering group. Today we are going to discuss how we can develop a high integrity application in a production workflow using Model-Based Design.
Now, we know that code generation functionality has fundamentally changed the way engineers work in today's world, because a high integrity application requires you to have high product confidence: high confidence in what application we are building, the way we are building it, and the way we are testing it, because the cost of a failure could be catastrophic. And that is what we'll discuss in this webinar today: how we can use Model-Based Design to develop such an application.
During the webinar, we'll also see some poll questions. They will appear as pop-up windows, and I request you to please record your answers in them. Let's begin. So, as I was saying, all the newer devices and applications that we are building are getting more and more complex, because the customers we target are asking us to pack in more and more features, and more features make the embedded system more complicated and complex. That also makes it a difficult thing to manage, right?
And since all these applications are being used in the field where, as we said, a failure can cost a human life, which matters more than money, we need to make sure that, under the pressure of competition and of reaching the market faster, we do our due diligence of making our software more robust and our application safer from failures. As an example from the automotive industry, take the BMW 7 Series: Dr. Ian Robertson, one of BMW's global sales and marketing heads, has said that in 2011 there were merely one million lines of code, and that by 2018 there were a hundred million lines of code in the BMW 7 Series.
And what they say is that the challenge coming for the industry in the next 5 to 10 years is unprecedented: the amount of development needed is more than what was done in the last 50 years, because they expect that we might need one billion lines of code by 2025 to address all the complexities of hyper-connectivity and autonomous driving, where each and every car becomes a supercomputer. And it's not just the automotive domain. In other industries too, as newer devices come out, they become more and more complex, and more and more packed with embedded code.
Now, in order to make such a device safe, a device which has, say, one million lines of code, I would want to be able to test my implementation strategy even before I actually start writing the code, because if I wait until one million lines of code are written and then test them, that will be too late. So I want an environment where I can test and simulate my ideas first. And once I have developed those ideas as a simulatable model, I want to leverage some capability by which I can represent the same logic in the form of code.
And this is where the importance of automatic code generation comes up, because in order to make the application more and more secure, I want to make sure that I am also taking out the error, probably the human error, involved in coding, and that I get a standardized-looking, readable code each and every time. Right? So I would want to leverage a capability where I can take my simulatable concept and make it executable.
Now, when we talk to our customers, our users, and ask them what their present challenges are in developing such highly critical software projects, the challenges are often beyond just writing the code, or just making the model. The challenges are far beyond that: are we doing it right? Are we following the industry trend? Is everybody in the same business doing it our way? Are we maintaining some industry standards? Because I want my product to be one of the most standardized and reputed products in the market.
So I need to make sure that I am following some industry standards. The other challenge that we keep hearing from our users is that, in the whole development process, a lot of time and effort goes into juggling and working with a lot of different tools. So a key challenge that they really want addressed is: how do I unify my software development process, how do I streamline it?
And streamlining is a very key way to tame the burst of complexity that's coming up in devices nowadays. And, as I said, there's a lot of pressure from peer competitors: it's not only that I want to pack in more and more features, I also want to deliver the product faster and with the highest quality. That's how I am going to compete with my competitors.
So achieving quality, reducing the engineering effort, and becoming more and more efficient are some of the common questions that we keep hearing from our users. Let me take one of our users as an example to base our conversation on today: Ather Energy, whom we have been engaging with from day one, when they started. And when they started as a startup, they also had all of these challenges.
They had some ideas of what they wanted, and were able to simulate and test those ideas before they actually had to decide how to proceed. Using Model-Based Design, they were able to identify and validate the best ideas through simulation, and then deliver the result as a full-featured product with less time to market.
And we will talk about this. This is a quote from Shivaram. We will use this as a basis and understand what they used, what the secret was. Now, as I said, their journey started from a proof of concept. They had many ideas; they wanted to narrow them down to one idea, and then roll it out as a product.
And then they wanted to expand, make the process more standardized, scale up their workflows, become more and more efficient, and address all the challenges we've seen earlier. What they used, and what we suggest as an answer to such a problem, is Model-Based Design. And what Model-Based Design essentially means is that, before you start writing your code, you are able to make a model out of your specifications, out of your requirements, which is essentially simulatable, and then you can test it there and then. You identify if there are any issues with the algorithm, and then, once you're satisfied, you go for code generation.
So the diagram over here is what Model-Based Design is. We start from the software requirements, which are textual requirements; then, based on those, we make our architecture, and then simulatable requirements, which are nothing but executable specifications, your simulatable models. After that, you apply certain settings, define the interfaces, and make that model ready for code generation.
In that model I know specifically what is the data type of its interface, and how my data is flowing, and things like that. And once I have that precise model which I can use for code generation, I want to generate a code automatically out of that, and then probably take that code, compile it, and then test it on my hardware. And in doing so, I also want to make sure that my model complies and is in accordance with my software requirements, and I want to make sure that my generated code is in compliance with my software requirement, is in compliance with my model, in the process of code generation I have not injected anything, and then I want to test that, and I want to test it onto the hardware.
So this is what is called Model-Based Design. You can find this workflow in documented form in our Cert Kits, and this workflow has been certified by TUV SUD for building such high integrity applications and following industry standards like ISO 26262 or DO-178 in automotive, aerospace, medical devices, railways, and other industries.
Now today, we'll be talking about, in this workflow, how do we do the code generation once we have our model ready. Moving on, as I said, we'll see how we can generate code once the model is ready. So if you see in a typical embedded system, what we have is a piece of a control algorithm, and a piece of a plant or environment, which I will control from that control algorithm. The control algorithm can be made in Simulink, or you can integrate some legacy code C/C++ code, or we can make the use of state machines, or bring in a MATLAB code, and then make our environment, which is, again, in terms of a physical model, it's, again, a model trying to define how the environment would look like and how we'll control and how the environment will respond, and probably sensing that, and tuning our control according to that. And what goes into my controller of my embedded system is the code of this control algorithm, or the logic that I've implemented.
Now, if we talk about a typical microcontroller, what I have here is: I would generally have the communication drivers and my input and output drivers, to talk to my devices, actuators, and sensors, and then I'll have my scheduling algorithms to schedule my tasks and perform them the way I want. And then there would be the model, as I said, for which we want to generate code, which essentially is the algorithm that I've modeled. For example, if you are from the medical industry and you're making a ventilator, you'd want to have a logic which takes a value from the sensor and then tries to determine what the level of oxygen or carbon dioxide in the blood is right now. Right?
So that algorithm is what we have modeled, and that is what we'll generate the code from. And you might be working on some projects where you already have some written code, so we also enable you to integrate your existing legacy code and make your whole algorithm work in the application. Now the question comes: I have the model ready with me, and I want to make efficient C/C++ code. But how do I start?
So when I see my model, I know that there will be a lot of settings, because I've made my model. But I really don't know whether this model is compatible for code generation right now. Have I chosen the correct settings? Have I chosen which solver it should be? Have I chosen what optimizations I need, and what the interfaces should look like? Should my interfaces have these data types?
And is my code ready, or optimized enough, to target embedded hardware? Is the footprint all right? So I know that there are a lot of parameters that I need to work with, but then I kind of get lost in this gridlock situation. Now, to rescue us from here, we have what we call the Quick Start guide for generating code, which essentially is a way by which you can (let me just click on the arrow) quickly generate that code as a first draft.
When I say the first draft, what I'm doing is letting the tool take away that initial confusion, guide me through the process, and suggest something so that I can quickly see what the code looks like for my algorithm. And it's an easy seven-step guide, where it will ask you some questions, look for the relevant values, and then give me some suggestions, right?
So let's proceed and see. So I'm saying that I want you to generate code. Let me go forward. So I want to generate code for my model right now, and not for any particular subsystem. So it's asking me a question. And then when I do next, I say that I want to generate a C-code, not a C++ code. And my code should be single instance.
I'm just trying to generate a first draft of code. Now it's automatically extracting, based on the configuration settings that I've chosen. And it's saying that the rate at which the model runs is, for example, 0.96 seconds right now. And taking these things into account, it's asking me which embedded hardware I'm going to target. What does int mean for it? For example, does it mean 64-bit, does it mean 32-bit? What do all these data types mean for that hardware?
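The point about what `int` means on the target can be made concrete with a small hand-written sketch (this is not actual Embedded Coder output; generated code typically gets similar fixed-width typedefs from a generated types header, and the names `int32_T`/`add_sat_i32` here are illustrative):

```c
#include <stdint.h>
#include <assert.h>

/* Fixed-width typedefs, like the ones a code generator emits so that
 * the word size of the target hardware does not change the arithmetic. */
typedef int32_t  int32_T;   /* always 32 bits, whatever 'int' means here */
typedef uint16_t uint16_T;  /* always 16 bits */

/* A saturating addition that relies on known widths: with a plain
 * 'int' this overflow check would behave differently on a 16-bit target. */
static int32_T add_sat_i32(int32_T a, int32_T b)
{
    int64_t wide = (int64_t)a + (int64_t)b;   /* widen, then clamp */
    if (wide > INT32_MAX) return INT32_MAX;
    if (wide < INT32_MIN) return INT32_MIN;
    return (int32_T)wide;
}
```

This is why the Quick Start guide asks which hardware you target: the answer fixes the widths behind these typedefs.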
Now, I can specify my data types as I want to, or I can select from a list of already available devices. And then I can say OK. It's asking me: what is your objective? I'm saying, OK, I'm looking for efficient code. And based on that, it found the settings: it shows the old values and the values it suggests, based on my objective.
And quickly, I can just take a glance and see whether I accept these or not. If I accept this, I can just click on Next. And the code is generated. I can click and view the code generation report.
Now, if you look at the code generation report, we'll see that it has a lot of hyperlinks and a few comments, and these hyperlinks essentially take me back to the model, showing me which piece of my model is actually generating this code. So I have complete bidirectional traceability between the model and the code, right in the report. Now the question still remains: how should I proceed?
And this is where, after I have my first draft ready and I've done some testing, I want to become an advanced user. I need to understand what I'm looking for. How do I want to integrate this code? Do I need to change the interfaces? Should they be input arguments, structures, or global variables? Should I write to global variables? Should I make it more optimized?
Should I change some configuration settings? So now I need to be an advanced user, and I need to understand what I am looking for from here. And I can build upon my first draft of code right now. It looks like a lot; there are a lot of options, definitely.
And for the sake of this presentation, we'll be looking for a few high level things. But, again, we also have training to help you out in coming out of this confusion, how to generate, how to modify the generated code, or tailor this production code, and learn in-depth off of this facility. Now, let me start by the first thing I want to do is, I know what my other code, where I'm going to integrate this code, looks like.
And I want to architect my present code accordingly. Now how do I start? Let's get started. Before that, Embedded Coder gives us a very good tool, or a perspective, I would say, called the Code perspective, which consists of the Quick Start guide, the Embedded Coder Dictionary, and the Code Mappings editor. We will look at all of these. But what we want to say is that it's a perspective view where you can go and do all the things related to code generation in that one place.
So how do I go over there? I just go and click on these three dots of the model. My model is more or less ready right now. And I enter the Code perspective.
Now let me start again with Quick Start. What I want to do is get the first draft of my code, and this lets me do that. Let me see how it looks. I'm going to press Next, Next. So here I am choosing one of the hardware devices, and then I'm looking for execution efficiency.
And I am generating code, and I am looking at the report that comes out. I first see that the input arguments, or the argument interfaces that I have, are all packed into a structure, which may or may not be the case that you're looking for; for example, you might want to work with global variables instead.
Now let's say I wanted to work with global variables. How do I do that? This is the Code Mappings editor, where I can specify which storage class I'm looking for. So right now let me say that all my inputs I want to make imported externs, and all my outputs I want to make exported globals. Sorry for skipping this video.
Let me bring that out. What I'm trying to say is that I want my variables to be global variables, so that they can be consumed by the other design models. And then I'm going to this icon, which launches what is the Coder Dictionary for me. In the Coder Dictionary, I can define a storage class. Now, a storage class essentially defines the behavior of the variables that I have.
So there are a lot of options. What I'm doing is making a new storage class for myself. For example, I want to create a storage class for my calibration variables, and I am naming it, let's say, "CAL". And then I want to say that those variables should be structured.
And I can further define what those structure naming conventions should look like, and things like that. Then I want to come and define what my function names will look like, how I want to call them. I'm really sorry, it got skipped. And then I want to prototype that function name.
Let's say I wrote a new function naming rule, called myInterface. And what I'm doing right now is selecting a memory section. Let me just go a bit back. And then, in the name, I'm saying: let me write my own name. We will come back to that. So I'm creating memory sections, and what memory sections essentially are is this: I'm saying that my variables or a function should be placed in a certain section of the memory.
So I am putting in some pragmas there, pragmas our compiler understands, and then I'm selecting that, for this function which will be generated with my interface, I want my memory section 1 to be applied. And I'm just editing the name: it should look like "my", an underscore, and then the name of the model.
Now I go back and generate the code. Before I do that, since I made my myInterface, I'm applying it to my execution function. And when I generate the code, I see that it has "my_" followed by the name of the model, which you can see over here. And I have all my output variables as exported globals, and these are my input variables, imported externs, whose values come from outside.
Moving forward, we saw how to do that using the Coder Dictionary. We used imported externs and exported globals, which are part of what's already available out of the box. For example, I might be looking to make calibration variables; I might want them ConstVolatile. Again, that's already available out of the box, but it's also easy to create your own storage class using the Coder Dictionary, as we have seen. And I was saying that we can define a template of what my function names should look like, where I'm saying the function should now be called MyCustom.
And I am putting in memory section 1, which essentially produces this MyCustom name for my model step function and then wraps it within a "pragma fast begin" and "pragma fast end", which is the memory section I've just created, essentially saying that this variable or this function should go into the fast execution memory. And this is the Code Mappings editor, where we saw how we can apply these storage classes, out-of-the-box or custom, to my inputs, outputs, and parameters.
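To make the storage-class and memory-section ideas concrete, here is a hand-written sketch of the pattern described above. This is not actual Embedded Coder output: the names (`In1`, `Out1`, `my_model_step`, the FAST_MEM pragma comments) are hypothetical, and real memory-section pragmas depend on your toolchain.

```c
#include <assert.h>

/* ImportedExtern-style input: in generated code the variable would be
 * DECLARED 'extern double In1;' and DEFINED by external hand-written
 * code.  Here we define it ourselves so the sketch is self-contained. */
double In1;

/* ExportedGlobal-style output: defined by the generated code itself
 * and visible to the rest of the application. */
double Out1;

/* Memory section: the step function is wrapped in compiler-specific
 * pragmas.  The real syntax is toolchain-dependent; placeholders here. */
/* #pragma FAST_MEM_BEGIN */
void my_model_step(void)
{
    Out1 = 2.0 * In1;   /* the "algorithm": a simple gain block */
}
/* #pragma FAST_MEM_END */
```

External code would write `In1`, call `my_model_step()` on its schedule, and read `Out1`, which is exactly what the imported/exported storage classes make possible.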
Now the question comes: I've architected my code, but how should I optimize it? And when I want to optimize, I'm asking: how can I optimize my RAM, my ROM, or my execution speed? Obviously, I cannot get all three of them optimized at once.
So I want to find the best trade-off, where I can say this is the right trade-off and my code is good for me. Now, to do so, what I can change is the model design itself (maybe find a better model or a better algorithm for what I've implemented, which can lead to an optimization), or change certain configuration settings, or look for optimizations available for the target where I'm going to put my embedded code.
Let's see, one by one. Obviously, there are a lot of options, but let's see a few things that can be done. When I'm making my model, it's always advised that I follow the best industry practices and industry standards, for which we provide you standards, like MAAB, MISRA, and if you're targeting any of the high safety critical application for those industries, then we have the guidelines from ISO, IEC, or DO. So those should be run on the model.
We provide checks for these guidelines, which can be executed, and we can see where they are deviated from. Another thing we can do is detect clones. Clones occur when we unintentionally use a similar kind of logic again and again; for example, in this model, I'm using this yellow piece of logic again and again.
And when I generate code from this model, it will have a representation of the same logic three times in the code. What can be done, essentially, is to make a library out of it. That can be done automatically: we can detect clones, make a library out of them, and then generate code where, instead of inline code, we have reusable functions invoked as function calls. Another thing we can do is use storage classes; the most common and useful ones for this purpose are, for example, the reusable storage class or a localizable storage class.
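The clone-refactoring idea just described can be sketched in plain C. This is a hand-written illustration, not generated output: the function names and the saturate-then-scale logic are made up to show duplicated inline logic collapsing into one reusable function.

```c
#include <assert.h>

/* The extracted "library" block: the logic that previously appeared
 * three times inline in the generated code. */
static double sat_scale(double u)
{
    if (u > 100.0) u = 100.0;   /* saturate to [0, 100] */
    if (u < 0.0)   u = 0.0;
    return 0.5 * u;             /* then apply a gain */
}

/* After clone detection: three function calls instead of three
 * inlined copies of the same statements. */
void step_refactored(const double in[3], double out[3])
{
    out[0] = sat_scale(in[0]);
    out[1] = sat_scale(in[1]);
    out[2] = sat_scale(in[2]);
}
```

The behavior is unchanged, but the logic exists once, which shrinks code size and keeps future edits in one place.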
So let's see how we can do it in this model. We see there's a model where I have some input. If I just take a look at this reuse right now (we'll see it in the video): if the code takes one variable to store my input signal, then a variable to store my intermediate signals, and then again a new variable to store my output, I'm wasting a lot of variables, a lot of memory space, I would say.
Let's see: when I generate the code for this without that storage class applied, the size of my variables is 48 bytes. When I go and inspect my code, I see that I have different variables created, rtDW.Gain, and In1, In3, all separate variables. And I have Out1, right?
Now let me go back and see how I can apply a storage class and reduce this. I'll go back to my model, go to the Property Inspector, and give a name to the signal, because I want to reuse it. Let me name it "inout": I want to use the same signal, the same variable, for input and output. So I'm going to change the name of my output signal to "inout" as well. And then I say: make a signal object out of this.
And I'm going to change the property of this signal object: I'm taking this inout object and giving it the reusable storage class. Now when I generate code and look at my code metrics report, I see that the size came down to 40 bytes. And when I go and explore my code, I see I have one inout variable created for my input.
And the same inout variable is written to again to store my output. Now let's see how we can do optimization by changing some configuration settings. In the configuration settings, we have the optimization levels defined as minimum, balanced with readability, or maximum. Now, obviously, if we go for maximum optimization, there will be some impact on the readability of the code; as we saw in the one example, there is reuse of variables.
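The buffer-reuse idea from the inout example can be sketched in plain C. Again, this is a hand-written illustration under assumed names (`inout`, the step functions), not actual generated code:

```c
#include <assert.h>

/* Without reuse: the generated code keeps separate variables for the
 * input, the intermediate gain signal, and the output. */
void step_without_reuse(double in, double *out)
{
    double gain = 2.0 * in;   /* separate intermediate variable */
    *out = gain + 1.0;        /* separate output variable */
}

/* With a reusable storage class: one global doubles as the input
 * buffer AND the output buffer, so intermediate storage disappears. */
double inout;

void step_with_reuse(void)
{
    inout = 2.0 * inout;      /* overwrite the input in place */
    inout = inout + 1.0;      /* ...and again to hold the output */
}
```

Both versions compute the same result; the reused version simply trades readability (the variable's meaning changes mid-function) for memory, which is exactly the trade-off the optimization levels expose.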
So the readability might be hampered. Then I want to say: at these levels of optimization, what is my priority? Am I looking to maximize execution speed, to minimize RAM, or for a balance? And what it does, taking this combination as input, is select a lot of other settings behind the scenes of Embedded Coder and produce code that matches this objective. Now, if I don't want to use a preset, I can specify my own custom optimizations, and there are a lot of optimizations available.
Let's see how one of them can help us. For example, there's an optimization for using minimum and maximum values. If you look at this algorithm, I have defined min and max values at these ports. If the maximum value at this port is 20, and over here the maximum is also 20, then the maximum value of the sum can be 40, and I'm comparing whether 50 is less than that sum or not.
So I know the answer there already. But first let me generate the code with this optimization off: I see that I have actually generated code for the whole algorithm, with all the different branches. When I generate the code with this option enabled, I see that I really do not need this logic at all, because if this comparison is always false, then the result of this is always false, and I can eliminate all of this logic. What I end up getting is just one line: a product of these three inputs.
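The min/max optimization just demonstrated can be sketched in plain C. This is a hand-written illustration, assuming two inputs with design ranges of [0, 20]: their sum can be at most 40, so a comparison against 50 is provably dead and the whole branch folds away.

```c
#include <assert.h>

/* Unoptimized form: the comparison and both branches are emitted,
 * even though '50 < a + b' can never hold when a, b stay in [0, 20]. */
double step_plain(double a, double b, double c)
{
    if (50.0 < a + b) {       /* dead branch if the ranges are honored */
        return a + b + c;
    }
    return a * b * c;
}

/* Optimized form: the generator proved the branch dead from the
 * declared ranges and emitted only the product of the inputs. */
double step_optimized(double a, double b, double c)
{
    return a * b * c;         /* the "just one line" from the demo */
}
```

Note that the two functions agree only while the inputs respect the declared ranges; feed `step_plain` an out-of-range value like `a = 60` and the branches diverge, which is precisely why the webinar urges caution with range-based optimization.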
So we see that this optimization, and a lot of others, are available. But I would say we need to be cautious with optimization, based on the kind of application we have. For example, we really need to be sure of the minimum and maximum values; otherwise, our code might not behave like the algorithm we are targeting, while we optimize a lot of things away.
And if we are aiming for certain standards, then I should make sure what kinds of optimizations are allowed per those industry standards. So we need to be really cautious while using this. Now let us come to how we can optimize our code specifically for the target.
Now, we can have support packages for the target that we're looking at, or code replacement libraries. Code replacement libraries essentially map generated code to what your compiler and hardware understand; for example, the hardware might be capable of doing vector operations. So there are specific lines of code that you'd want to see in your C code, which the compiler recognizes as a single instruction, multiple data kind of instruction.
And I want to use my hardware accelerator for this. So that kind of specificity can be brought into the generated code using code replacement libraries. Or I might already have C code, legacy code, written for a particular algorithm, let's say an image processing algorithm developed in-house.
Then I can bring in that code by using a C Caller block or by using S-functions. To give an example of how efficient these things can be for a particular model, we can use an assembly library provided by ARM, called the NEON library. If we generate plain code for a particular model, we see that the execution time is 410 microseconds.
If we optimize it using compiler optimization, it comes down to 185. But if we generate SIMD-specific code using the SIMD libraries for that particular algorithm and then optimize using the compiler, we see that we can achieve high optimization in terms of execution time. Now, talking about all the tools we have discussed today: for embedded applications we need Embedded Coder, which builds on MATLAB Coder and Simulink Coder as base products to determine what code it needs to generate as embedded code.
And if you are from the automotive industry, with the help of the AUTOSAR block sets we can target Classic and Adaptive AUTOSAR applications and generate code for them. Again, we have trainings (we saw just a few of the many options): there are dedicated trainings for Embedded Coder and one training dedicated to AUTOSAR components, where we learn in depth about all these functionalities and features and how to work with them.
Now let me talk about verification. I have generated the code, but that alone is not enough. I need to verify and validate that code, because I am targeting a high integrity application. So I need to make sure that my code and my model adhere to my requirements, and that I know exactly what those requirements are. Where I deviate from them, can I test those deviations? Is the model testable?
So I need traceability from each and every block or component I implement in my model back to the requirement it comes from. And I also want to know which lines of the generated code correspond to that block. In other words, I need bidirectional traceability.
This is really important if you are targeting certain standards and want to build a robust application, because now I am making sure that whatever code is generated is as per my requirements, and that I have not deviated, or I know exactly what deviations I have. Requirements can be linked to the blocks, and you can see how they are linked.
You can get a report. We can visually see which test vectors test a particular requirement and which lines of code correspond to it, and metrics are available. In the code itself, we can see which requirement and which block each section comes from. So complete traceability is available.
Now, once I have made sure the model meets my requirements, I want to check that, in the process of generating code automatically from the model, neither I nor the tool has injected any error or bug. I want to make sure that my generated code is equivalent to my model. For that I do what is called software-in-the-loop testing, an equivalence test, where I perform functional testing of the model.
I build the model from my requirements and derive my functional test vectors from the same requirements. I feed those vectors to the model, then feed the same vectors to the generated code, and compare the outputs. If the model and the code are functionally the same, they should produce the same outputs. This is what is called software-in-the-loop testing. In doing so, we can collect code coverage and model coverage, metrics that tell me the quality of the testing I am doing on my model.
Am I writing just ten test vectors that exercise only one addition block, say 2 plus 3, 3 plus 4, 4 plus 5, without really touching any other parts of the model and code? I need to collect metrics to make sure I am doing quality testing. Those code coverage and model coverage metrics can be collected as part of this process.
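As a sketch of this back-to-back, software-in-the-loop comparison, assume a trivial model whose step computes y = 2u + 1. Both function names below are hypothetical stand-ins: `model_step` represents simulating the model, and `generated_code_step` represents the step entry point that Embedded Coder would emit.

```c
#include <math.h>

/* Hypothetical reference, standing in for simulating the model itself. */
double model_step(double u) { return 2.0 * u + 1.0; }

/* Hypothetical stand-in for the generated code's step function. */
double generated_code_step(double u) { return 2.0 * u + 1.0; }

/* Back-to-back test: feed the same requirements-derived vectors to both
 * sides and require the outputs to match within a tolerance. */
int equivalent_on(const double *vectors, int n, double tol) {
    for (int i = 0; i < n; ++i) {
        if (fabs(model_step(vectors[i]) - generated_code_step(vectors[i])) > tol) {
            return 0; /* mismatch: the code is not equivalent to the model */
        }
    }
    return 1; /* all vectors agreed */
}
```

In practice Simulink Test automates exactly this comparison, including the harnessing and the pass/fail reporting.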
Then again, I want to make sure that when I compile this code, the compiler has not unintentionally injected any bugs either. I still want my application code, running on the target hardware, to behave the same as my model. So I do an equivalence test, giving the same functional test vectors to my model and to my object code running on the hardware. This type of testing is called processor-in-the-loop.
Again, I want to collect coverage on the hardware and on the model, to assess the quality of my testing, and I want to see how much execution time my algorithm takes on that particular hardware. I can collect all those metrics using Simulink Test and Simulink Coverage. Now, once I have made sure that my code, and my object code, is equivalent to my model, what I want to do next is integrate it to build my full application.
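As an illustration of what a coverage metric actually measures, here is a hand-rolled decision-coverage sketch in C. Simulink Coverage performs this kind of instrumentation automatically on the model and the generated code; the bookkeeping below is purely illustrative.

```c
#include <stdbool.h>

/* Outcome flags for the single decision below. */
static bool decision_true_seen = false;
static bool decision_false_seen = false;

int saturate_to_100(int x) {
    if (x > 100) {               /* the decision being measured */
        decision_true_seen = true;
        return 100;
    }
    decision_false_seen = true;
    return x;
}

/* Decision coverage is complete only when the test vectors have
 * exercised both the true and the false outcome. */
bool decision_fully_covered(void) {
    return decision_true_seen && decision_false_seen;
}
```

A test suite that leaves `decision_fully_covered()` false is like the 2 plus 3 example above: it touches only one path through the design.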
It will be integrated with other code, which could be hand-written or come from elsewhere. And I want to make sure that this code complies with all the industry standards I set out to follow initially, and that it is free from bugs and runtime errors. For that, we do static code analysis and verification using Polyspace, which acts as an independent tool: it is different from the tool we are using for code generation, because the standards require that compliance be demonstrated by a tool that did not itself generate the code.
We can use Polyspace Bug Finder to check the compliance of the generated code, as well as the hand-written code, against MISRA C or CERT C, and then we can use Polyspace Code Prover to prove the absence of certain defects and runtime errors, for example division by zero. We can then generate reports and artifacts to demonstrate this and claim certification credits for those safety standards.
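For instance, the kind of runtime error Code Prover reasons about can be as simple as a possible division by zero. The sketch below is my own illustrative code, not Polyspace output: an unguarded division that a prover would flag when the divisor can be zero, next to the guarded version that can be proven safe.

```c
/* Defect: if count can be 0, a prover flags this division as a
 * possible runtime error. */
int average_unsafe(int sum, int count) {
    return sum / count; /* division by zero when count == 0 */
}

/* Guarded version: the zero case is handled explicitly, so the
 * division is provably free of that runtime error. */
int average_safe(int sum, int count) {
    if (count == 0) {
        return 0; /* defined fallback for the empty case */
    }
    return sum / count;
}
```

The value of proving, rather than testing, is that the guard makes the property hold for every possible input, not just the vectors you happened to run.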
Now, as I said, MISRA is definitely important when we talk about the code. We already discussed that it is highly recommended to run Model Advisor checks on the model, to see whether it is compliant and will produce MISRA C compliant code. But the standards ask you to actually verify that claim, so you can use Polyspace to check for MISRA compliance on the generated code. If you find any deviations, for example deviations you have intentionally made, you can trace back: there is bidirectional traceability from Polyspace to the model, so you can either modify those things or justify the deviations.
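As a concrete example of the sort of finding such a check reports, MISRA C:2012 Rule 15.6 requires the body of every selection statement to be a compound, brace-enclosed statement. The pair of functions below is a hypothetical illustration of a non-compliant construct and its compliant rewrite; both behave identically.

```c
/* Non-compliant with MISRA C:2012 Rule 15.6: the if-body is not a
 * compound statement. A frequent finding on hand-written code. */
int clamp_noncompliant(int x, int hi) {
    if (x > hi) x = hi;
    return x;
}

/* Compliant: every selection-statement body is brace-enclosed,
 * which prevents "dangling statement" maintenance errors. */
int clamp_compliant(int x, int hi) {
    if (x > hi) {
        x = hi;
    }
    return x;
}
```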
And in the Help we document all the mandatory and required checks, which is useful if there are deviations. Now, once our code is ready to go onto the hardware, since we are targeting high integrity applications we want to look at the certification standards for these industries, and we support a lot of them, be it Aero, Auto, Rail, Medical, and others.
So we help you achieve that certification more easily; we have Cert Kits for that. Coming back to the story, the journey of our user Ather: as I said, their initial challenge was that they wanted to evaluate all of their designs, then turn one design into a product, and then scale up. MathWorks can help you at each and every stage of your development cycle and development process.
We can help you jump start; if you need advisory support, our application engineering team can advise you on how things can be done, and we have different groups providing the help you need at every step of your success journey. We took Ather as one of our stories, but it could be any other industry; you can see what they achieved by being in touch with our teams from the start.
They were able to save a lot of time, evaluating different designs in weeks rather than months. Let me also talk briefly about our services. We have training courses available for all the help that you need; when we talk with our customers, that is a key thing they ask for.
There are a lot of courses on the web which you can take as self-paced offerings. We can also create a customized offering for your requirements, or provide on-site or public trainings. And from what users tell us, per engineer and per project they are able to get significant returns.
Similarly, if the out-of-the-box capabilities do not match your needs, our consulting services can customize those things, extend the present capabilities of the tools, accelerate your returns, and help you avoid common mistakes. So MathWorks consulting and training are there, and we have advisory services for a lot of industry standards; in terms of embedded code generation, we can target the different aspects you will be looking for.
Essentially, training is what helps you jump start, and consulting complements that by extending and customizing the capabilities to your needs. There are some additional resources I would suggest you look at, based on your industry and the applications you are targeting. And if you are interested, you can reach out on our website and try an evaluation kit for production code generation.
There is an evaluation kit where you can see how you can generate code and try it out for a trial period, and then we can discuss further whether you would like us to engage and help you out. Thank you, everyone, for your time and for being with us today. Have a good evening.