Hardware-Aware, Model-Based Software Development to Speed Up Embedded Designs - MATLAB & Simulink

    Hardware-Aware, Model-Based Software Development to Speed Up Embedded Designs

    Overview

    The complexity of embedded software applications is increasing significantly, with the number of lines of code in today’s systems, such as cars, industrial machinery, airplanes, trains, and medical devices, on the order of millions. This is driving the need for embedded software engineers to work at higher levels of abstraction and to move to model-centric development workflows, such as Model-Based Design, defined as ‘the systematic use of models throughout the development process’.

    Such an approach allows engineers to model software algorithms together with the environment they are designed to interact with, validate them through simulation, and generate the corresponding embedded code. This workflow provides several advantages: it shortens development time by shifting verification and validation left, automatically generating embedded code, and abstracting away from the hardware architecture. If kept fully hardware-agnostic, however, this would lead to sub-optimal and incomplete code, since the hardware-specific aspects would not be fully exploited and integrated. Two typical examples of such aspects, playing a key role in software optimization and integration, are device drivers and hardware accelerators. In this online session we will show how to overcome these drawbacks through a hardware-aware, model-based software development workflow.

    Highlights

    • Typical and Model-Based Design workflows for embedded systems
    • Integration of platform software, drivers, and libraries
    • Hardware-aware application modeling
    • Generation of C code from models
    • Simulation-based verification and validation
    • On-target application deployment and data visualization

    About the Presenters

    Mauro Fusco is an Application Engineer at MathWorks in Eindhoven. He specializes in supporting customers in Aerospace, Automotive and Machinery industries for the establishment of Design Automation workflows. Modelling, simulation, testing and implementation through automatic code generation whilst conforming to international standards are key aspects of his work.

    Before joining MathWorks, he worked for 5 years at the Dutch Organization for Applied Research, TNO, focusing on the domain of Controls for Cooperative and Autonomous Driving. Mauro has a Masters in Automation Engineering from the University of Naples Federico II, during which time he conducted research at Eindhoven University of Technology. His technical expertise lies in the areas of Control Theory, Non-linear and Network Control and their implementation.

    Irina Costachescu is a Software Engineer with the Model-Based Design Team at NXP Semiconductors in Bucharest, Romania. She holds a Bachelor’s Degree in Automatic Control and Systems Engineering and a Master’s Degree in Complex Systems, both from the Polytechnical University of Bucharest. During her studies Irina deepened her control systems theory knowledge by developing robust and optimal algorithms for various systems control while also being a teaching assistant for this subject. At NXP, Irina contributes to all the development phases, from framework design to peripherals support, working collaboratively with internal teams for integrating other tools into the Model-Based Design Toolbox, dedicating her time also to creating toolbox related webinars, videos, and training sessions. 

    Recorded: 15 Nov 2022

    [MUSIC PLAYING]

    Hello, and welcome to our webinar introducing Hardware-Aware, Model-Based Software Development to Speed Up Embedded Designs. My name is Mauro. I am an application engineer at MathWorks, the leading developer of MATLAB computing software. I have worked at MathWorks for three years, and before that I worked on deploying software to test automated vehicles. Today, we will see how code generation can be integrated deeply with the hardware, fully exploiting its capabilities. And for this reason, I have the pleasure of being joined today by an expert: we have Irina on the line. Irina, would you like to introduce yourself?

    Hi, Mauro. Thanks for having me, and hello to everyone. I am Irina and I am a software engineer on the Model-Based Design team at NXP. I have a background in automatic control and systems engineering, and I'm here today to show you, as Mauro mentioned, how to design embedded applications and how Simulink algorithms can be made hardware-aware.

    Thank you, Irina. So let's get started. Let me start by saying that today we will go through all the development phases of an embedded design, starting from the idea incubation with a MATLAB and Simulink model and arriving, at the end, at an optimized application running on the target. Throughout the webinar, we will demonstrate very simple designs, so that attention can stay focused on the workflow instead of on application complexity. Here in this video, you see a very simple application that monitors some data and alerts the user when certain parameters exceed the normal operating ranges by lighting an LED and sending messages informing about this event.

    But how can we get to this deployed application? Here we have an agenda; let's see the steps to get to this final application running on the hardware. First, Irina will talk about the typical and the model-based design workflows for embedded application development. Second, she will focus on the NXP Model-Based Design Toolbox, covering its main features and demonstrating the application development steps using this tool with a simple example. Then, we will pass the same application, developed by Irina, to a verification and validation step; I will show you how you can use dynamic testing for this purpose. Eventually, Irina will showcase how to deploy a validated application on hardware, visualizing the data while the application runs as a standalone executable on the target. So without any further ado, I would say: Irina, please take it away.

    Thank you, Mauro. So let's get started. But before diving into details, let us answer the following question first: how can we create embedded designs? For a start, we need some hardware, like a microcontroller, which communicates within a larger system via its peripherals, as the imposed behavior specifies. Next, we need to implement the system behavior in software, and for this we can use hardware-optimized device drivers to simplify access to the hardware. On top of these, different layers of software abstraction can be used, like middleware and libraries, enhancing the drivers' code optimization and reusability. Ultimately, applications can be developed by writing the corresponding C code. The NXP ecosystem provides both the hardware and the software optimized for its microcontrollers, together with documentation and reference designs, thus reducing the complexity of the application development process.

    Moreover, NXP provides the ecosystem where the programming is done: the IDE. It integrates build tools for compiling C code into an executable that runs on the target, and debugger tools whose features, such as a registers view, allow application monitoring at each step of the process. The IDE also integrates configuration tools, which allow setting up and initializing the drivers used inside a design in a graphical manner, while also generating the corresponding C code based on the selections. Peripheral settings can be made from drop-down menus, with all the necessary documentation integrated into this interface. Similarly, pins can be configured, routed, and initialized, while clock settings, enablement, and routing are available options as well.

    So, using the NXP ecosystem, applications can be developed without leaving the IDE, since it integrates all the hardware-optimized software and tools for fast prototyping of embedded designs. Besides the classical debugger options, NXP also provides a real-time debugging and data visualization tool, featuring options like writing and reading registers and memory locations, and monitoring signals on the embedded target, both in an oscilloscope-like display and in text form. A tool like this can also be used to build quick demos by allowing parameter tuning and the design of intuitive user interfaces. This tool is called FreeMASTER, and it ensures bidirectional communication with the embedded system to check the status of the application at any moment. We will present a detailed manner of using it later during this webinar.

    So, using the NXP ecosystem, embedded systems can be designed by writing C code, which can be prototyped on the target, validated, and then deployed as a final solution. This iterative flow from idea to application becomes easy to handle, since all the hardware-dedicated tools are available in one place. Yet, since the testing part comes when the application is already running on the target, this sequential flow does not necessarily provide intuitive details on possible issues that may appear, which could be either algorithm- or hardware-related. The debugger is powerful in these kinds of situations. However, is there a way to improve this? The answer to this question is provided by a paradigm called model-based design.

    Regardless of their specializations, engineers are used to solving problems by drawing various diagrams. And this is exactly what model-based design implies: putting together graphical elements that implement certain functionalities with the purpose of designing the logic of an application. Given the increasing complexity of embedded applications, such an approach of moving to higher levels of abstraction in the development process, replacing C code writing with creating diagrams, creates the context of easier-to-follow application designs, with increased reusability and robustness against coding errors.

    A software environment for model-based application design is provided by MathWorks, offering MATLAB, which incorporates numerous algorithms in its toolboxes, and Simulink, a MATLAB-based graphical programming environment where functionalities are implemented by customizable blocks, ranging from simple mathematical operations to more complex functionalities, like models of physical systems, and so on. Moreover, MathWorks also provides additional tools that enable continuous verification and validation of the design at each application development step. Verification of the algorithm's high-level behavior can be performed starting from the requirements definition phase. Simple models can be designed and simulated to validate the operating modes. Then, functionalities of the system can be modeled at a more detailed level in subsystems and simulated independently.

    Based on the design, code can also be generated for a variety of supported hardware options. Eventually, designed models and subsystems can be integrated within larger ones. Therefore, with this approach, the embedded design flow becomes application-centric in comparison to the common approach already introduced. Code can be generated from the model, integrated with the necessary hardware-specific software, and downloaded onto the target. But to make this integration, we need one more thing besides the MathWorks ecosystem: the specific target support, to be able to use, customize, and download algorithms on specific hardware.

    And this is where NXP's Model-Based Design Toolbox comes into the picture. With all its functionalities, it helps users run embedded applications on NXP microcontrollers directly from Simulink. The Model-Based Design Toolbox comes as a gateway between the MathWorks and the NXP ecosystems, leveraging the benefits of model-centric application development and incorporating features like simulation, code generation, verification, and validation in the context of embedded systems design. When adopting this approach, the flow of translating ideas into applications running on the target changes as well, since functionalities can now be tested and integrated at each phase of the development process, given the continuous verification capabilities Simulink provides. Hence, confidence in the designs can be gained through simulation prior to their deployment on the target, which brings into discussion a reduced time to market and also an increased degree of algorithm reusability across hardware platforms.

    Let us now see some of the Model-Based Design Toolbox features, to get a clearer view of how Simulink algorithms can be made hardware-aware by using such a tool. NXP's MBDT represents a collection of drivers, libraries, and tools, including all the sources necessary to generate and download code on NXP-specific platforms directly from Simulink. The Toolbox provides a library of custom blocks that implement MCU peripheral functionalities and generate code on top of hardware-optimized drivers, while also offering extensive application examples and documentation. In the case of the NXP S32K1 family of microcontrollers, the MBDT covers most of the peripheral, core, and system functions, as highlighted in red. Together with this, we offer support for multiple configuration and simulation modes, processors, additional hardware devices used for specific applications, and the integration of additional tools.

    The blocks provided by the MBDT are integrated within the Simulink Library Browser and cover communication, timing and system, I/O reading and writing, and motor control peripherals. Each of the supported peripherals comes with a set of customizable blocks. The configuration block allows peripheral initialization with user-made settings directly from the Simulink block mask. Blocks that implement action functions can be used to control MCU peripherals, for example to start certain ADC conversions. An interrupt service routine block allows the enablement of interrupt events, triggering the execution of a user-implementable action when the event occurs. The configuration of the entire application for a specific microcontroller is made with the help of the model config block, which provides options like the selection of the processor and of the system clock frequency, as well as options for the compiler, the target memory where the executable file is downloaded, and the download method, which can be the bootloader tool integrated inside the Toolbox, the OpenSDA interface, or JTAG.

    To enhance the user experience with motor control applications, the Toolbox integrates the NXP Automotive Math and Motor Control Library, which grants access to optimized routines for these kinds of designs. Other utilities include a set of blocks which enable the communication of the FreeMASTER tool with the target, enabling real-time application monitoring, profiling of the application execution time, reading and writing registers and memory locations, and many others. External devices commonly used together with S32K microcontrollers for applications like motor control and battery management systems are also covered. And a wide range of Simulink example applications, targeting all the supported peripherals and also including FreeMASTER demos, are ready to run to demonstrate all of MBDT's features and functionalities.

    So, using the MBDT blocks, we can configure the microcontroller and its peripherals. We can also add inputs and outputs to any developed algorithm, which is hardware-independent, thus making it hardware-aware. Therefore, it is easy to port a Simulink model from one hardware platform to another, only by changing the outside configuration and the hardware action blocks, leaving the algorithm unaltered. Let us now see all this in action and develop a model that will control the intensity of an LED on the board using PWM signals. LEDs in the digital world can have two states, on and off, but pulse width modulation represents a way of delivering variable power to a system by fast switching between on and off states.

    The amount of power delivered is controlled by how much time the signal is kept on relative to a fixed time interval, called the duty cycle. Therefore, by generating such a signal for our LED, we can rapidly turn it on and off for periods that visually appear as a steady, dimmed light. We can control the intensity of the LED by varying the duty cycle of the signal as a sine wave to simulate a dimming effect, and we will do this for both the green and red LEDs on the board. The application will be downloaded onto the hardware via the OpenSDA interface, by connecting the board to the PC via the USB port. In addition to the LED behavior, we can watch the PWM signal controlling both LEDs to validate our design.
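    The duty-cycle idea described above can be sketched in a few lines of plain C. This is a hypothetical illustration (the function name and values are made up, not MBDT-generated code): the mean voltage the LED sees is simply the supply voltage scaled by the fraction of the period the signal is held high.

    ```c
    #include <assert.h>
    #include <math.h>
    #include <stdio.h>

    /* Illustrative sketch: with PWM, the average power delivered is set by
     * the fraction of the period the signal is held high, i.e. the duty
     * cycle (t_on / t_period). */
    static double pwm_average_voltage(double v_supply, double t_on, double t_period)
    {
        double duty = t_on / t_period;   /* fraction of the period spent "on" */
        return v_supply * duty;          /* mean output seen by the LED */
    }

    int main(void)
    {
        /* 3.3 V rail, 50 Hz PWM (20 ms period), 5 ms on-time -> 25% duty. */
        printf("average = %.3f V\n", pwm_average_voltage(3.3, 0.005, 0.020));
        return 0;
    }
    ```

    Fast switching means the LED averages this voltage visually, which is why a 25% duty cycle looks like a dimmed, steady light rather than blinking.
    
    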

    Let's now build our model. We open a new Simulink canvas and the Library Browser, from where we drag and drop an MBDT application configuration block for selecting the desired processor for our application, in this case the S32K144 hardware part. We will leave the rest of the settings at their default values.

    For performing the duty cycle computation, which evolves as a sine wave, we will use one of the Automotive Math and Motor Control Library sine blocks, which computes the sine value of the provided input. We will set the block to compute the sine on the [-pi, pi] interval by feeding it an appropriate input, which will be added later as a simulation test input. We will then add a PWM configuration block, which initializes the FlexTimer module and generates a PWM signal with the duty cycle provided as an input. We open the block settings, and first we uncheck the advanced options, since we are not going to need them in this example. Then we select the FlexTimer module hardware instance and choose to generate edge-aligned signals from the available options.

    The PWM channels are designed in hardware as pairs, so we enable the zero-one pair of signals and we set them as independent, since we want to generate identical signals for both the green and red LEDs. We set the signals' frequency to 50 Hz, and then we make sure that we enable the duty cycle simulation output. This will make the block output the duty cycle value provided as an input, and we will need it to simulate our model. For now, what we do is add output ports to the signals. Next, we need to route the signal that is going to be generated to the physical pins on the board corresponding to the green and red LEDs.

    We'll use the Automotive Math and Motor Control Library also for data type conversions, to respect the required input data type of the PWM block. Since this structure of blocks computes the sine from minus pi to pi, its output will range between minus 1 and 1, centered on 0.

    So we will further process this signal by halving its amplitude (scaling it by 0.5) and shifting it by 0.5, making it range between 0 and 1, as the PWM block requires, meaning a 0 to 100% duty cycle. And now that our model is complete, I will pass it to Mauro, who will present how we can simulate the model's behavior by adding input and output tests, and how to enable the MBDT simulation modes.
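    The signal conditioning just described is a simple affine mapping, sketched below in plain C (a hypothetical illustration; the function name is made up): the sine output in [-1, 1] is scaled by 0.5 and offset by 0.5 to land in the [0, 1] duty-cycle range the PWM block expects.

    ```c
    #include <assert.h>
    #include <math.h>
    #include <stdio.h>

    /* Map a sine sample s in [-1, 1] to a PWM duty cycle in [0, 1],
     * i.e. 0..100%, by halving the amplitude and shifting by 0.5. */
    static double sine_to_duty(double s)
    {
        return 0.5 * s + 0.5;
    }

    int main(void)
    {
        printf("sin = -1 -> duty = %.2f\n", sine_to_duty(-1.0)); /* 0.00 */
        printf("sin =  0 -> duty = %.2f\n", sine_to_duty(0.0));  /* 0.50 */
        printf("sin = +1 -> duty = %.2f\n", sine_to_duty(1.0));  /* 1.00 */
        return 0;
    }
    ```

    The endpoints confirm the mapping: the sine's minimum gives a fully-off LED and its maximum a fully-on LED, with mid-range values producing intermediate brightness.
    
    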

    Thank you, Irina. In this part, indeed, I want to give some insights on how to perform verification and validation on the model that you just built, and generate code for the NXP hardware. Before going to that specific model, let me say that in general, regardless of the application that you're working on, modeling, simulation, code generation, and verification can all be handled together in the Simulink environment, including the NXP production hardware. On the screen, we see a model that could be implemented by any company working on an automated driving application. But again, this is just an example; you might be working on another application, and again, you could have a high-level model architecture that implements an algorithm.

    In this case, this is for an autonomous driving car, so it has sensor inputs, complex algorithms for perception, localization, and control, and there are vehicle dynamics that mimic the real vehicle. On the top right, you see the NXP S32K1 board that Irina already presented. On the bottom right, we also have a photorealistic 3D visualization with a tool called Unreal Engine. When we press the run button, we see all these components running together.

    From the slow motion, we can see that the control logic running on the processor is reacting to an input scenario and is giving feedback to the car. This processor is running generated object code that controls the emergency braking action of this vehicle. So when there is a cut-in of the vehicle in front, here we see the board triggering an emergency braking, also with the LED turning red. And we have the feedback also in the Simulink canvas here, with this dashboard LED.

    So how is this possible? As Irina explained already at the beginning, we are using model-based design, and this encompasses basically four phases. The first one is the idea incubation via modeling and simulation. Then we go to automatic code generation, where we can also perform verification, in this case software-in-the-loop. And finally, we can do object code generation and processor-in-the-loop verification, and achieve a final product realization on production hardware.

    For the NXP hardware, this entire process is facilitated by the use of the NXP Model-Based Design Toolbox, which contains all the features that Irina explained before, necessary to connect the desktop environment, where you do the design, directly to the hardware: generate the optimized code and realize full application deployment.

    Now, coming to the verification, I would like to focus on two verification tasks that are enabled by model-based design. These are two equivalence tests, respectively called software-in-the-loop and processor-in-the-loop. They are called equivalence tests because we compare generated source code and object code, respectively, with a reference, which is the model-in-the-loop simulation. These tests basically enable engineers to detect early the defects that are introduced during the design, before going to testing on the final product.

    There exists also another class of verification activities, possible as well on models and code, called static analysis. But we will not see those today; the focus will be on the dynamic tests. So, model-in-the-loop is the process of running simulations on a development PC. Here, the simulations are validated against verification criteria that are derived from the requirements. So you see that here we have models, we define the test vectors to simulate the models, and then we get some results and we check them against the requirements.

    Software-in-the-loop basically consists in reusing the models to generate code. This source code can be tested by reusing the test vectors that were previously written to exercise the model, and by compiling the generated code with the PC compiler, using the same PC employed to design the models. We are able to compare the results with the previous simulations, and possible numerical mismatches would indicate errors introduced in the source code when customizing the deployment.
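    The core of such an equivalence test can be sketched as follows. This is an illustrative C sketch of the idea only (function name and data are made up, not Simulink's actual comparison engine): the same test vectors drive the model (MIL) and the generated code (SIL), and the two output traces are compared sample by sample against a numerical tolerance.

    ```c
    #include <assert.h>
    #include <math.h>
    #include <stdio.h>

    /* Compare a model-in-the-loop trace against a software-in-the-loop
     * trace, sample by sample, within a numerical tolerance. */
    static int traces_match(const double *mil, const double *sil, int n, double tol)
    {
        for (int i = 0; i < n; i++)
            if (fabs(mil[i] - sil[i]) > tol)
                return 0;   /* mismatch: a defect slipped into the code */
        return 1;           /* equivalence test passed */
    }

    int main(void)
    {
        double mil[] = {0.00, 0.25, 0.50, 0.75, 1.00};  /* made-up data */
        double sil[] = {0.00, 0.25, 0.50, 0.75, 1.00};
        printf("equivalence: %s\n",
               traces_match(mil, sil, 5, 1e-6) ? "PASS" : "FAIL");
        return 0;
    }
    ```

    A nonzero tolerance matters because compiler optimizations and floating-point reassociation can introduce tiny, benign differences; only mismatches beyond the tolerance indicate a real defect.
    
    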

    For example, we can use the model that Irina explained. This model contains hardware-aware blocks that generate optimized code for the NXP S32K144. We can create a test harness with test inputs and PWM simulation blocks in order to test the model and the generated code with a model-in-the-loop versus software-in-the-loop equivalence test. Let's see this in action. Here the inputs are produced by a state machine that generates values, and these values are used as arguments of the sine controlling the PWM signals in the model under test. Going to the Simulink environment, we can open the SIL/PIL Manager, and we can run an automated verification, choosing to save all the data in these settings.

    When we press the run button, this will run basically two simulations: first a model-in-the-loop simulation and then a software-in-the-loop simulation. Here you see that the model-in-the-loop simulation happened very quickly, and now the model will be reused to generate code, which will then produce results from the generated code. When the two simulations end, they are compared, and we can see the results in the Simulink Data Inspector. Code and model show identical behavior in this case, so the equivalence test has passed. We can see this for all the signals that we have decided to log.

    So here you see that the PWM signal can also be compared, and we see that the results are the same. We can also go further and explore the profiling results; here we see that the source code has been profiled on the host PC, and the average execution time is visible here on a histogram. So we can see that the functions being used from the source code are running below a certain amount of time.

    Another thing I want to highlight, already mentioned by Irina, is that when we work with MATLAB and Simulink together with the Model-Based Design Toolbox by NXP, we can access the typical software engineering tools directly from Simulink. So basically here, we have this stack of tools that is developed by NXP and dedicated to the hardware, and then, via the Model-Based Design Toolbox, we can directly deploy models from the MATLAB and Simulink environment, and we are also able to access typical software engineering tools. In particular, I will now show how the generated code can be debugged with a software debugger while we run a Simulink simulation.

    So we can go again into our app, the SIL/PIL Manager, and select the C code debugging mode. This will showcase the link between the classical debugging tools and the model-based design tools. Here we see that we have started the simulation with this option enabled, and what happens now is that Visual Studio is prompted at simulation time. So here you see that we have the Visual Studio debugger in sync with Simulink. When we press continue in Visual Studio, we get a step forward in the simulation in Simulink, and here we see that we can actually add breakpoints, so we can stop at the breakpoints we define and see what happens in the graphs that are placed in Simulink. Here we can use all the tools that are available in Visual Studio for debugging. So you see that we can insert a breakpoint also in another place in the code and get to that point again. We can also inspect values from inside Visual Studio, and then we can remove those breakpoints. So again, we use the tools that are available in Visual Studio in different ways, by using the context menu or just the UI. And finally, we can finish the simulation and have the results inspected in Simulink, as for a normal model simulation.

    So, similar to software-in-the-loop, we can also take the next step and do another equivalence test, this time to debug the object code running on the NXP hardware. In this case, what we do is change the PC compiler to a cross compiler, and we also change the target from our host PC to a target MCU, which is this NXP board in this case. Now, comparing the new test, called processor-in-the-loop, with the model-in-the-loop, we are able to determine if there are issues introduced either by the cross compiler or by the hardware itself.

    So let's see, again, this test live. Here we can select automated verification, but we go from software-in-the-loop to processor-in-the-loop, and by clicking run, in this case, we start the first model-in-the-loop simulation as a term of comparison, as we did with software-in-the-loop. Then here we have the board connected to our computer; similarly to SIL, for this PIL test we will download the code onto it, and then we perform the simulation, getting data from the algorithm itself.

    And we can see how this looks in the Simulink dashboard. But also, we can see the LED blinking on the board, and again, with the Simulink Data Inspector, we can do exactly the same thing that we did before with software-in-the-loop, but in this case with the object code running on the processor. For each signal we can see how they compare, and whether there is any difference. In this case the test went well, so we have exactly the same view in simulation and on the hardware. So I think I have explained a bit the verification steps that we can take to validate the application. And now I would like to hand it back to Irina, who will show us how we can finally deploy such an application as a standalone executable and then do some data visualization.

    Let us now focus on how applications can be deployed on the target and validated on hardware in terms of their behavior. For this, we will see a different application during this section, incorporating multiple MCU peripherals. During the following section, we will not reiterate the simulation and validation steps, since the workflow was previously presented in detail, and the purpose of this application is only to illustrate another use case of MBDT and its integration with FreeMASTER. So now we propose to convert the analog signal provided by the potentiometer on the board and compare each sample to an imposed threshold.

    If the converted value is higher than the imposed one, we will send messages on the CAN bus to notify the occurrence of this event and turn on the red LED. When the converted value respects the threshold, the red LED will remain off or be turned off, and CAN messages will no longer be transmitted. For testing this application, we first need to power the CAN transceiver on the board via an external power supply and make the necessary hardware adjustments for selecting this power source. We can monitor the messages being sent by connecting a CAN analyzer to the pins on the board, and also by using the FreeMASTER tool to graphically inspect the signal's evolution. We will download the executable file onto the board via the OpenSDA interface.

    Let's see how all this looks by opening the already designed model, which, as you can see, is structured into three different parts. The first part is represented by the application configuration, achieved with the help of the already presented config block. We also use a FreeMASTER configuration block to enable the communication of the hardware with the FreeMASTER application; we have selected the communication instance, and we use some data store memory blocks to declare variables that we will use throughout the application. Next, we need to configure and initialize the peripherals our application needs. Based on the scenario, we will need to initialize an ADC instance for converting the potentiometer signal and a CAN block for sending messages when the read values exceed the threshold.

    After everything is taken care of in terms of configuration, we move on to the actual application implementation. An ADC start block will trigger the conversion, and an interrupt service routine block will trigger the execution of this subsystem every time a conversion is completed. Every time a conversion is completed, we compare the converted data to the imposed threshold value. If the value is higher than the imposed one, we light the red LED and we also send a message via the already configured instance of the CAN peripheral. If the converted value is lower than the imposed threshold, the LED is turned off and messages are no longer transmitted via CAN.
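    The per-conversion logic just described can be sketched in C as follows. This is a hypothetical illustration of the decision logic only (names and values are made up; it is not the code the Toolbox generates): each completed ADC sample is compared against the threshold, and the result drives both the red LED state and whether a CAN notification should be sent.

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Outcome of processing one ADC sample. */
    struct alert {
        int led_on;        /* 1 -> red LED lit */
        int send_can_msg;  /* 1 -> a CAN notification is transmitted */
    };

    /* Logic run on each completed ADC conversion: compare the converted
     * sample to the imposed threshold. */
    static struct alert on_conversion_complete(unsigned sample, unsigned threshold)
    {
        struct alert a;
        a.led_on = (sample > threshold);  /* light the red LED above threshold */
        a.send_can_msg = a.led_on;        /* notify over CAN for the same event */
        return a;
    }

    int main(void)
    {
        /* Example: 12-bit ADC sample of 3000 against a threshold of 2048. */
        struct alert a = on_conversion_complete(3000u, 2048u);
        printf("led=%d can=%d\n", a.led_on, a.send_can_msg);
        return 0;
    }
    ```

    In the model itself, this comparison lives inside the subsystem triggered by the interrupt service routine block, so it executes once per completed conversion rather than on a fixed timer.
    
    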

    We will now build the application and download it onto the target. During this process, C code is generated based on our model and then cross-compiled with hardware-specific files, which are ultimately linked into an executable file that will be downloaded onto the board we are using. We will fast-forward through this process and move directly to the step where the application has been downloaded onto the target. The generated code appears on the right side of the model and can be inspected from there.

    Next, we open the FreeMASTER project. From the Options tab, we specify the connection type and the baud rate, in accordance with the settings made in the FreeMASTER configuration block. Then we go to the following tab and select the generated executable file under this option. I have added the variables we want to inspect, represented by the potentiometer, the threshold, and the LED, inside the variable watch. New variables can be added by right-clicking this area; the name of the Simulink model data store memory can be entered here, and additional settings may be made.

    By right-clicking this area, you can also add graphical elements for data inspection, like the oscilloscope I have added here. We now press the Go button and connect to the hardware. Currently the LED is in logic 0, therefore off, and the potentiometer value is close to zero. We increase the value by turning the potentiometer; when the threshold is reached, the LED turns on. For a value that respects the imposed threshold, the LED is turned back off. Also, when the threshold limit is exceeded, CAN messages are sent on the CAN bus.

    The CAN messages stop being sent when the value respects again the imposed upper limit. Therefore, this application illustrates a use case somewhat common for automotive applications: monitor some data and alert the user when certain parameters exceed their normal operating ranges, like sending an overheating event to the dashboard when the car's water temperature becomes larger than the imposed limits. The steps presented today were shown on these two simple designs, which were used for a streamlined workflow illustration without interfering with the details of more elaborate algorithms.

    Hence, the same benefits apply to complex applications like motor control and battery management systems. To find out more on how this workflow can help you design, program, test, and fine-tune complex applications in a fast and reliable manner, you can check the MBDT community page, the place where we provide customer support by answering forum questions and posting articles for all the MBDT-supported hardware platforms. There you can also find a series of webinars, from beginner level to even more complex application demos, and workshops. So that was it for today. Thank you very much for your time and attention, and we will now move to the Q&A session.
