Pragmatic Digital Transformation
Organizations with digital transformation initiatives are making the shift from visionary ambitions to practical projects. These organizations have defined their high-level digital transformation objectives and are now looking to their engineers and scientists to achieve them. This will involve learning new technologies, collaborating with unfamiliar groups, and proposing new products and services. To meet this challenge, technical organizations must master how to systematically use data and models, not only during the research and development stages, but also across groups throughout the lifecycle of the offering. An effective digital transformation plan needs to consider changes in people's skills, processes, and technology. MathWorks Fellow Jim Tung describes this pragmatic approach to digital transformation and demonstrates how engineering and scientific teams are leveraging data and models to achieve their digital transformation objectives.
Published: 10 Oct 2021
Now, indeed, we are going further in this direction, and we will hear from Jim Tung.
How are you?
Jim Tung, Jim, good to see you here. We are very glad to have you at our conference. You are a Fellow at MathWorks, and you have spent 35 years in the real-time systems, data acquisition, and technical computing fields, including 30 years at MathWorks and, earlier, positions at Lotus and Keithley Data Acquisition Systems.
We will hear from you about a pragmatic approach to digital transformation. Organizations that want to do this transformation have to shift from visionary ambitions to practical projects. And Jim Tung is telling us how this involves learning about new technologies, collaborating with unfamiliar groups, and how to master systematic use of data and models, not only during the research and development stages, but through the full lifecycle. So we are looking forward to your presentation. Thank you very much for joining us here.
Thank you very much. It was also interesting to hear Gurdeep's talk. At MathWorks, our Simulink products have integrated with Bonsai over the years, so it's good to get an update on the status of that, very interesting. I'd like to start my slides. Thank you very much. As he mentioned, my name is Jim Tung, and I'm honored to speak at this high tech summit.
I've titled my talk Pragmatic Digital Transformation. Most organizations have been mapping their own digital transformation, and probably each of yours has as well, defining specific strategies, goals, KPIs, and so on. But once those strategies are determined, the next important question becomes how to execute, how to execute successfully and pragmatically. And that's what I want to briefly discuss today. Next slide.
First, a brief introduction to MathWorks. Our two core platforms, MATLAB and Simulink, are quite well recognized and are widely adopted, with millions of users. We serve and support those customers with a team of over 5,000 people around the world. Our perspectives on pragmatic digital transformation are based on working with customers across a broad range of industries.
So I'd like to take a quick look at those. Next slide, please. They include aerospace and defense, and automotive, which are transforming from their electromechanical roots to systems that are becoming more software-defined and autonomous, using a mix of model-based and data-driven approaches. These industries must deal with a variety of platforms, leveraging data for performance optimization and predictive maintenance. Some of our customers don't even have physical products.
They were early adopters of big data analytics, agile and DevOps workflows, and systems integration. In health tech, we enable collaboration between a wide variety of engineers and scientists. Working with electronics and semiconductor manufacturers, we provide tools to support development of new compute platforms. And last but certainly not least, we work with communications companies, including ground station providers, handset manufacturers, and carriers.
But of course, as we all know, at this point every industry on this slide considers communications an important part of its system design and implementation strategy. So this is a brief view of the basis for our perspective on digital transformation. Each of these industries is going through its own set of transformations, and perhaps each of you sees some themes on these slides that you share with your own situation. Next slide.
So when we look at the factory of the future, what's unique about the digital transformation there? Well, with Industry 4.0, a key theme is mass customization: combining the flexibility and personalization of custom-made products with the low unit cost associated with mass production. In a recent article, Rainer Brehm described it as, quote, "producing what matters," reducing waste by producing what the market actually wants and needs.
In lean manufacturing, or in kaizen terminology, that really means eliminating the muda. Next slide, please. So how can you achieve mass customization? It requires production that is connected, flexible, and autonomous, to adjust to the needs of the individual customer. Now, that type of production requires more capable systems: systems that can be reconfigured using software, that can perform different kinds of operations, that are multi-functional. In other words, systems that are themselves connected, flexible, and autonomous.
However, we face some key challenges in moving toward that vision of a smart factory. One of them, shown on the left, is the design complexity of those autonomous, flexible systems. Surveys of companies show that this complexity ranks as a top systems engineering challenge. Another challenge is the complexity of the software in those systems. The recent McKinsey report on the right notes a widening gap between the growth in software complexity and the growth in software development productivity.
Productivity is not able to ramp up quickly enough to cover the needs that the complexity is creating. That's a real challenge. Next slide. So what does an efficient approach look like? This is the Robobox Tribot robot from Krones, based in Neutraubling, Germany. It arranges containers continuously as they come into the grouping station of this line. No stop and go.
The Tribot robot can group up to 500 layers per hour. And, as shown in this section of the clip, that robot can be combined with other machines from Krones that load the containers onto pallets for packing and shipping. Now, Krones uses models to develop its complex systems more efficiently and robustly. They use a system-level model for virtual commissioning, while subsystem models are used for the design and analysis of the behavior of the drive shafts, the gripper mechanisms, and the other components that are under software control.
Next slide. If you look at the typical system lifecycle, those models fit into two particular stages: system-level conceptualization and subsystem design. But digital transformation needs more than that. It comes from the systematic use of models through the lifecycle, where the models are an authoritative source of truth. And they're used to accelerate key workflows, such as the refinement of designs, and also to enable the consideration of components for reuse.
Next, these models can be linked and traced to use cases and requirements to ensure the system will do what it needs to. In this way, we're starting to weave together a digital thread of information and traceability. Next, the models really can be a source of truth, because they can be used to automatically generate software that will be used in the production system. You can generate code in different programming languages, such as C or C++, CUDA, or IEC 61131 structured text, and thereby get those algorithms and models onto different platforms, from microcontrollers to FPGAs to PLCs to GPUs.
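As a concrete illustration of that step, here is a minimal sketch of generating C code from a MATLAB algorithm with MATLAB Coder; the function, its inputs, and the control law are hypothetical examples, not taken from the talk.

```matlab
% controller_step.m - hypothetical control law to be deployed as C code.
% The %#codegen directive marks the function for code generation.
function u = controller_step(err, gain) %#codegen
u = gain * err;   % simple proportional control action
end
```

From the MATLAB prompt, code generation might then look like this, producing a C static library that could be cross-compiled for a microcontroller or wrapped for another target:

```matlab
cfg = coder.config('lib');                       % generate a C static library
codegen controller_step -args {0.0, 0.0} -config cfg
```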
The generated code can even be packaged into software containers, a format that can run on edge systems or in the cloud. Next slide. The same models, again delivering value across the different stages, can be used for verification and validation, not only at the stage that we show here, during the test of the real integrated system, but also earlier, on models, virtual representations of that same system. And in this way, the V&V activities are shifted dramatically to the left, earlier in the process, so that errors are found as soon as they are introduced into a design, rather than being carried through and having to be extracted and removed later.
Next, and finally, moving into operation, those same models can take on a different guise. They can be tuned to represent a single, specific asset in operation, a digital twin that can support the operation and the maintenance of that specific asset. Now, many companies across different industries are using this type of systematic approach. It's an approach that we call model-based design. Next slide.
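To make that tuning step concrete, here is a minimal sketch that fits the parameters of a simple first-order model to operational data using least squares; the model form, the data source, and the parameter names are illustrative assumptions, not any company's actual method.

```matlab
% Tune a generic design model into a digital twin of one specific asset
% by fitting its parameters to that asset's recorded telemetry.
t = (0:0.5:10)';                                  % telemetry timestamps, s
yMeasured = telemetry_from_asset();               % hypothetical data source
model = @(p, t) p(1) * (1 - exp(-t / p(2)));      % p = [gain, time constant]
residual = @(p) model(p, t) - yMeasured;          % misfit to be minimized
p0 = [1, 1];                                      % nominal design values
pFit = lsqnonlin(residual, p0);                   % needs Optimization Toolbox
fprintf('Tuned gain = %.3f, time constant = %.3f s\n', pFit(1), pFit(2));
```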
But customers are asking for more. They want to upgrade the functionality of systems that are already in the field. They want to use operational data from systems to improve performance. And so I state it this way: they are asking for the capacity to offer perpetually upgradeable machines, with faster development and release cycles, getting them into the market quickly, and the ability to take data from the system in operation and use it to make decisions and to renew the capability and functionality of the system.
It's a vision, going forward. But to see how this can be done, let's change from this flat, horizontal view of the system lifecycle, next. Instead, we can view the development and operations processes as agile workflows, and then add the ability to continuously update, for example, the software in the system, and to get continuous telemetry from the systems in operation.
What starts to emerge from this transformation is perhaps a familiar picture. It's DevOps, a set of practices to automate and integrate processes between the development workflows and the operations workflows. But if you've heard the term DevOps, you most likely associate the concept and its implementation with software-only systems.
That's typical: IT groups using DevOps to implement IT software. But can we think of it for systems that include physical components, not only software? Next, we can do that through the systematic use of models and data. The same systematic use of models that I showed earlier, in the horizontal picture, can be mapped here onto the agile development workflow, on the left.
At the top center of this diagram, we see data: archived data, data from data lakes, data from lab experiments, tests, historian databases, and so on, that can be used to inform the development process. In this way, we get the same development lifecycle mapped onto this DevOps picture. But we can go further.
Next, we can start with the system in operation, the bar shown in red, and then follow the flow shown here. The digital twin model can be used to monitor the asset; data from the operating asset can be streamed back and used by the developer, or by an automated routine, to update a component or to automatically update model parameters, depending on what it's programmed to do.
The same simulations and verification steps that were used interactively in the design process at the beginning can be run automatically under script, whether in an HPC environment or in the cloud, under an automation server paradigm, to test for regressions and confirm system behavior, even as you go through the loop as quickly as possible, perhaps under the management of a continuous integration system such as Jenkins. So you have systematic use of models on the left-hand side, and systematic use of data shown at the top and at the bottom, with the data streaming back from the actual operating asset. Next slide.
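As one illustration of that automation, here is a minimal sketch of a script that runs a MATLAB test suite and publishes JUnit-style results for a CI server such as Jenkins to display; the folder and file names are hypothetical.

```matlab
% run_ci_tests.m - run regression tests under script and export results
% in a JUnit-style XML format that a CI server can consume.
import matlab.unittest.TestRunner
import matlab.unittest.plugins.XMLPlugin

suite = testsuite('tests');                        % 'tests' folder is hypothetical
runner = TestRunner.withTextOutput;
runner.addPlugin(XMLPlugin.producingJUnitFormat('results.xml'));
results = runner.run(suite);
assert(all([results.Passed]), 'Regression tests failed.')   % fail the CI job
```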
Let's look at this in a case study. Atlas Copco is a manufacturer of advanced air compressor systems whose compressor business is based in Belgium. Next, they use model-based design approaches to design and implement the system functionality. They integrate models of mechanical, electrical, algorithmic, and software subsystems, and that environment and platform enables collaboration between engineering teams. And you can think of this, next, as a digital twin of the system as designed.
During production, the models are integrated with the manufacturing system, manufacturing data, and test results, to help ensure that quality during the manufacturing process stays within the target ranges and bounds. And that becomes, click, a digital twin of the system as produced. The models of a system that's been produced can then be integrated with the configuration tools used by the sales engineers and application engineers, who work with customers to create those custom configurations, again mass customization, to address the individual customer's needs.
And that becomes, click, the digital twin as configured. Now, all of the deployed systems from Atlas Copco are equipped with wireless connections, enabling Atlas Copco and their customers to immediately spot any developing problems, uncover potential energy savings, and achieve maximum uptime for the compressors and the production lines. And that enables them to instantiate, click, a digital twin as maintained.
That set of digital twins, and the digital thread that connects them, enables Atlas Copco to provide a more reliable, high-performance system, because of the feedback into the development process, while defining and offering its customers additional services with significant value. Next, please. Now, if we go back to this DevOps diagram, I highlighted the segment on the right-hand side because it's deceptively simple. There's really a lot going on underneath. Next, please.
And that's because there are several parts that make up that operational setup. Next, there are the assets, which perhaps have lots of sensors, so they generate a lot of data. The data may stream to an edge system, such as a SCADA system, a 5G gateway, or a compute server. At the edge, the data may be thinned and processed, then sent to an on-prem or cloud data center or IT environment, and then integrated with other information, such as business data and customer data, to create more insight and more informed decision-making.
But if we look at the flow of data from right to left, if we look at the opportunities for doing compute on the embedded system, the edge system, and in the IT and OT system as well, what we see is that decision-making can be done at different points, with this data in motion. And determining where to make specific decisions to get maximum value depends on the location, on the response time that's needed, and the data that's required to make the decision. Next. So for example, anomaly detection might be best done in the embedded system at the asset, fast turnaround.
Predictive maintenance for several assets may be done on an edge system, where data from different systems can be aggregated; performance optimization, perhaps at the data center; and fleet management, perhaps for all assets worldwide, as in the case of Atlas Copco, using a huge amount of archived historical operating data, might be best done in the cloud. These locations offer different time scales. In fact, the cloud can offer a much more rapid time scale, but being able to tap into the huge amount of historical data may take more time. And so the flexibility of being able to locate decisions in different places is an important aspect. And we've designed our tools so that the analytics and simulations authored in our tools, described once, can then be deployed to any of the scenarios that we mentioned here. Next slide.
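As a small example of a decision that might live at the asset itself, here is a minimal sketch of sliding-window anomaly detection, written in a code-generation-compatible style so it could, in principle, be deployed to an embedded target; the three-sigma rule and the function interface are illustrative assumptions.

```matlab
function isAnomaly = detect_anomaly(sample, window) %#codegen
% Flag a new sensor sample that deviates more than three standard
% deviations from the recent history held in a sliding window.
mu = mean(window);                        % recent mean of the signal
sigma = std(window);                      % recent spread of the signal
isAnomaly = abs(sample - mu) > 3 * sigma;
end
```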
Gartner, the well-known market research firm, has assessed this breadth of capability in its evaluation of data science and machine learning platforms. And we're pleased and honored that Gartner has recognized MathWorks as a Leader in its Magic Quadrant for Data Science and Machine Learning Platforms for the last two years. The other companies in this Magic Quadrant are not competitors; in many cases they are partners of ours. That's just the way the Magic Quadrant is defined and composed. Next slide.
And so to close, I would like to summarize with a few points. First of all, when we look at pragmatic digital transformation, a key approach, not the only approach, but a key one, is the systematic use of models and data through the lifecycle: at different stages, in different ways, for different purposes, but leveraging the models throughout. A second point: to enable that, view your system lifecycle not just as a V cycle, not just as a left-to-right workflow, but from a DevOps perspective, because that can give you additional insight into how you can exploit data and reuse models, not only during the development process, but also in operation.
And then third, do the thought experiment: what capabilities, processes, and skills would give you the capacity for perpetually upgradeable systems? And then, what do you want to do with that capacity? Thank you very much, and thanks for your attention.
Thank you very much, Jim, for this exciting presentation of a holistic approach from development to operations, already talking about the upgradeable factories of the future, and using a lot of modeling and model power through the entire value-added chain. Now let me ask a question that, of course, comes from an automation perspective. If you go into Atlas Copco, or any other industrial company, they already have an established system, maybe an automation system that goes from development to manufacturing to maintenance.
That's a lot of software in the manufacturing execution system area. And now we bring in a new machine, where the machine manufacturer, of course, has its own software, or we bring in MATLAB and your expertise. Will we ever end up with an open architecture, where we easily recognize new components and where we can easily integrate new pieces of software, or a new kind of modeling?
That's a great question. I think it has several different layers. When we talk about introducing new models, perhaps taking a new model, looking at a virtual integration of the system, and running simulations to see how that works, that's already being done. At MathWorks we've had APIs in our Simulink product, so we can interoperate and integrate with a very broad ecosystem, I think over 100 different modeling tools that connect into Simulink. And with the API, anyone, our customers, our partners, can extend it using the same API without having to get inside the guts.
And so that already exists. There are also modeling standards like FMI, the Functional Mock-up Interface, which is an exchange format for doing the same thing. So it's not quite plug and play. It must be done a little bit carefully, but that sort of paradigm exists.
When you move into the manufacturing floor, into the automation environment itself, that becomes more challenging, because I'm not quite convinced that everyone has the same spirit of interoperability. It is starting to happen. It certainly happens in places with some of the field bus architectures.
Wireless communications actually becomes an interesting door opener to look at that, because it normalizes how systems can connect with each other. But then you still need a schema. You still need some way of representing what is going to be exchanged between two different systems.
So I think that becomes interesting, because it becomes more of a software problem. And I think the idea, the concept and the opportunity for standardization, for how data gets exchanged, through those kinds of wireless communication protocols, becomes very interesting.
Also, a lot of data starts to traverse streaming protocols in cloud platforms such as Azure or AWS, and that also becomes a collection method and a way of normalizing and aggregating information, so that the aggregation can be done at several different points: not just on the factory floor, but also in the cloud platforms, and also in the simulation environment as models are brought together.
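As a simple illustration of that collection path, here is a minimal sketch that posts one telemetry record from MATLAB to a cloud ingestion endpoint over REST; the URL, asset identifier, and payload fields are hypothetical, and a production setup would more likely put a managed streaming service such as Azure Event Hubs or AWS Kinesis behind such an endpoint.

```matlab
% Post one telemetry record, JSON-encoded, to a hypothetical endpoint.
telemetry = struct( ...
    'assetId',     'compressor-042', ...                     % hypothetical ID
    'timestamp',   string(datetime('now', 'TimeZone', 'UTC')), ...
    'pressureBar', 7.4, ...
    'powerKw',     55.2);
opts = weboptions('MediaType', 'application/json');          % send as JSON
response = webwrite('https://ingest.example.com/telemetry', telemetry, opts);
```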
Thank you very much, Jim. I really like the thought you propose here, that when many proprietary systems come together, 5G and wireless may be the door opener that forces everyone to talk to each other. And we will have to find a way to a more open architecture, to be able to incorporate each other's machines and software.
I think that's true. I think that the hard part is to break down the walls. Once the walls are broken down, then the constructive conversations can happen.
Thank you very much. With this great final statement, we'll have something to take to our hearts. Thank you for coming today.
Thank you very much.