MATLAB Based DevOps Workflow in AWS for Hospital Patient Monitoring Applications
Mohammad Khair, GE HealthCare
The Maternal Infant Care division at GE HealthCare is interested in remote patient monitoring in neonatal intensive care units (NICUs). Some of the babies admitted to the NICU are fragile and must be isolated for special care in an optimal incubator environment. This presentation by Mohammad Khair from GE HealthCare explores how remote monitoring enables healthcare professionals to assess babies’ health information and enables parental bonding with the baby in an incubator. This is achieved with a video camera attached to the NICU bed, which streams data to MATLAB® on the cloud. Doctors and parents can access baby information via web and mobile applications. MATLAB Production Server™ hosts the MATLAB runtime environment in a Kubernetes-based AWS cloud platform integrated with Edison™ cloud services developed internally by GE HealthCare. As new patient data is collected, the research teams tune processing algorithms developed in MATLAB following a standard DevOps workflow. Deployment is scaled using MATLAB Production Server for multiple data streams.
This presentation was given at the 2023 MATLAB EXPO, which brings together engineers, educators, researchers, and scientists to hear real-world examples, see hands-on demonstrations, and learn more about the latest features and capabilities in MATLAB and Simulink.
Published: 7 May 2023
[AUDIO LOGO]
Hello. This is Mohammad Khair, principal engineer with GE HealthCare, presenting on a MATLAB-based DevOps workflow in AWS for hospital patient monitoring applications.
The application we are discussing today has multiple patient cameras streaming RGB video into an edge server. The edge server aggregates those video streams and uses the Amazon Kinesis Producer SDK to stream the data into the AWS cloud. Within the AWS cloud, the data is received by the Amazon Kinesis Consumer SDK, and storage is enabled through an S3 bucket.
We are also able to connect that video stream via a live feed application, either through HLS or DASH, to mobile users who consume the video feed. Those users, such as a nurse, a clinician, or family members, can view the videos, and we can also develop viewers such as web browser-based dashboards.
So that, in general, is the application we're going to discuss today. There is a DevOps (development and operations) workflow that we will discuss around developing this framework and deploying it into the cloud. A lot of algorithms and code may be needed to derive diagnostics and analytics from the patient video streams, so we develop these within the MATLAB environment.
We build and test a Docker platform that is enabled with multiple types of applications, which we will share in a few slides. The algorithms and code that we develop can be compiled into a RESTful API interface via MATLAB Compiler or MATLAB Compiler SDK. We then deploy these onto Docker, where they are received and used by MATLAB Production Server.
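As a rough sketch of this packaging step (the function name, archive name, and image name below are hypothetical placeholders, not our actual code), one way to build a deployable archive and wrap it in a microservice Docker image from MATLAB is:

    % Rough sketch: package a hypothetical scoring function for
    % MATLAB Production Server, then wrap it in a microservice Docker image.
    res = compiler.build.productionServerArchive("detectPatientState.m", ...
        "ArchiveName", "patientMonitoring");
    compiler.package.microserviceDockerImage(res, ...
        "ImageName", "patient-monitoring-service");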
The MATLAB Production Server instances running these algorithms can also share operational metrics and measurements with monitoring dashboards and with databases, so we can operate and manage the models' performance. By monitoring and tracking those metrics over time, we can improve the performance of these algorithms in another cycle of development.
As we mentioned, we use Amazon Kinesis to stream the data, and there are also database services for storing it. For DevOps tooling, there is a continuous cycle of exploration where we hypothesize, collaborate and research, architect, and synthesize our algorithms or software. Then there is a continuous integration step where we develop, build, test end to end, and set up a staging platform for testing our deployment, followed by tools for deploying, verifying, monitoring, and responding to issues.
Then there are tools for continuous monitoring after we release, to make sure the release is stabilized in terms of performance. We measure its metrics, learn from its deployment and process, and feed that back into another exploration cycle. So those are the different phases of development, and for each phase there is a different set of tools and activities.
Which tool is the right one to choose? There are so many tools available, and the tools on the screen are some of the ones that we recommend. But there are different criteria for choosing these tools, such as: is it open source or a paid license? What is its pricing? Is it supported by the operating system? Does it enable large deployment to teams for collaboration?
Does it support integration? Does it support branching, for example? Ease of use, popularity, versioning support, source code management, parallelization performance: there are all kinds of criteria to consider when choosing tools for cloud deployment. These are the different stages we discussed: planning, development, coding, building, and securing and storing artifacts. Each stage has different tools that specialize in those functions.
In DevOps, we need to develop a DevOps culture, and that culture can have the following characteristics. We want to use the DevOps dashboard as one of the input metrics for measuring our performance. We want to follow industry trends and best practices. We can create an annual DevOps week to share best practices and help collaboration across teams. We can measure DevOps maturity for new product initiatives.
So DevOps maturity becomes a key metric for any new product initiatives or projects that are started. Those maturity models can look for standardization of tools and processes across projects, documentation of procedures and training, and optimization of cost and performance. We can also hold periodic skill assessments for engineers, collaborate with other teams and stakeholders, and host brainstorming sessions to document solutions, ideas, and automation.
For a patient monitoring application, the problem is really about deploying this video feed. There is a lot of data, so data intensity is high, and there is a need to optimize performance and parallelize the processing.
We also want to provide access on mobile devices and web browsers. There are several algorithms involved: the algorithms needed to extract patient diagnostics from the video streams are essentially deep learning models that perform classification and measurement, and those need to be deployed into a secure, reliable service with scalable computing resources.
We need to develop web browser or mobile-based applications to allow viewing of the data feeds and return model outputs to those devices. For system management, we need to deploy into a secure, scalable, reliable environment, provide tools to automate model training, testing, and deployment, and provide tools for monitoring operational and performance metrics.
Those are some of the challenges we are trying to address. For video and data streaming, we use AWS Kinesis, which offers optimized streaming to the cloud service with both a video channel and a data channel, and you can scale those up as needed per the data volume you're addressing. For AI model development, training, and data diversity, MATLAB provides the tools needed to train models on large data, with datastores to optimize memory utilization as well.
Datastores allow you to load only the data that you need or are working on at the moment, so you're not cluttering memory with unused data; it's a very efficient technique for loading data and for training.
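As a minimal sketch of this idea (the S3 path is only a placeholder), an image datastore reads frames on demand rather than loading everything into memory:

    % Minimal sketch: a datastore over training frames (the S3 path is a
    % placeholder). Images are read on demand instead of all at once.
    imds = imageDatastore("s3://example-bucket/patient-frames/", ...
        "IncludeSubfolders", true, "LabelSource", "foldernames");
    img = read(imds);    % reads only the next image from storage
    reset(imds);         % rewind the datastore when needed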
For scalable compute and memory to deploy AI models, cloud services offer configurable machines so that you can scale up per your processor and memory size requirements, and MATLAB Production Server flexibly scales the processing based on the incoming traffic demand. For data storage and cost optimization, AWS data lakes offer storage with a data retention period that can be configured on an as-needed basis. For data visualization and event retrieval, MATLAB Web App Server provides a simple web application development environment for deployment into web browsers.
This is the overall architecture. Within AWS, there is a Kubernetes network deployed as a Docker platform, and within this Kubernetes system we have MATLAB Production Server as a Kubernetes instance, as well as web application servers.
The algorithms are developed in script form, whether in MATLAB Online Server or in a local MATLAB installation. MATLAB or MATLAB Online Server can be used to develop, build, and test models, develop scripts, optimize the algorithms, and then compile them for deployment into MATLAB Production Server.
So we deploy models to production. Within MATLAB Production Server there is a request broker for incoming traffic from the RESTful API. During deployment, these scripts are transformed from an algorithm or a function that accepts inputs and returns outputs into a RESTful API that you can POST and GET to.
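As an illustrative client-side example (the host, port, archive, and function names are placeholders, and the exact request format should be checked against the MATLAB Production Server RESTful API documentation), calling such a deployed function might look like this:

    % Illustrative client call to a deployed function over the
    % MATLAB Production Server RESTful API (names and host are placeholders).
    frameData = rand(224, 224, 3);   % placeholder input frame
    url = "http://mps.example.com:9910/patientMonitoring/detectPatientState";
    body = struct("nargout", 1, "rhs", {{frameData}});    % inputs for the function
    opts = weboptions("MediaType", "application/json");
    response = webwrite(url, body, opts);   % POST; the decoded response carries
                                            % an "lhs" field with the outputs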
MATLAB Production Server provides the scalability to manage the incoming requests by distributing them across several worker processes to balance the workload. It also publishes operational metrics to monitoring applications that can be deployed with MATLAB, through a web application server, or through third-party tools such as Grafana for dashboard displays.
In our system architecture, the video stream comes in as patient images. We train the network models on the images until we have an optimized trained model, and then we deploy that on the AWS cloud for prediction or classification on new images that are fed into it.
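As a simplified training sketch (the folder, network layers, and options below are placeholders rather than our production model), this step might look like:

    % Simplified training sketch; folder, layers, and options are placeholders.
    imds = imageDatastore("trainingFrames", "IncludeSubfolders", true, ...
        "LabelSource", "foldernames");
    augimds = augmentedImageDatastore([224 224], imds);   % resize frames on the fly
    layers = [
        imageInputLayer([224 224 3])
        convolution2dLayer(3, 16, "Padding", "same")
        reluLayer
        fullyConnectedLayer(numel(categories(imds.Labels)))
        softmaxLayer
        classificationLayer];
    opts = trainingOptions("adam", "MaxEpochs", 5, "MiniBatchSize", 32);
    net = trainNetwork(augimds, layers, opts);
    save trainedPatientModel.mat net   % the deployed function later loads this model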
The architecture of cloud-based software as a medical device looks as displayed here. There is image acquisition, data routing, and streaming through Kinesis into a MATLAB connector.
The data is then fed into MATLAB Production Server, which runs the algorithm engine, scales as needed based on the incoming data volume, and returns classification or prediction results back through Kinesis or Kafka, for example. Both work; there are connections for both Kafka and Kinesis.
Kafka is supported as a streaming connector into MATLAB; Kinesis, at the moment, we handle separately, but hopefully we will get built-in support in the future. The resulting data is fed into a database, so we can run SQL queries on the results or display them in Power BI or other tools for dashboards and data visualization.
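As a sketch of that database step (the data source name, credentials, table name, and values are placeholders), writing model outputs so they can be queried with SQL might look like:

    % Sketch: persist model outputs for SQL queries or Power BI dashboards.
    % Data source name, credentials, table, and values are placeholders.
    conn = database("PatientResultsDSN", "username", "password");
    results = table(datetime("now"), "NICU-001", 0.87, ...
        'VariableNames', {'Timestamp', 'PatientId', 'Score'});
    sqlwrite(conn, "model_results", results);
    close(conn);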
This slide shows much the same information, but I wanted to add that we also leverage MATLAB Parallel Server to parallelize the code and produce results more efficiently and faster. Other tools can be used for visualization as well, for reporting data on a dashboard, a web app, or a form.
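As a minimal parallelization sketch (the folder and the scoring step are placeholders for the real algorithm), frames can be processed across parallel workers like this:

    % Minimal parallelization sketch: process frames across parallel workers
    % (a pool that can run on MATLAB Parallel Server in the cloud).
    frames = dir(fullfile("incomingFrames", "*.png"));   % placeholder folder
    scores = zeros(numel(frames), 1);
    parfor k = 1:numel(frames)
        img = imread(fullfile(frames(k).folder, frames(k).name));
        scores(k) = mean(img(:));   % stand-in for the real scoring algorithm
    end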
This architecture offers opportunities for growth of hospital monitoring applications. One such application is automated labeling to facilitate iterative retraining of deep learning models. Another is using deep learning models for predictive algorithms, where we can predict the patient's future state or raise an alert before deterioration happens, so that we can follow up with the patient more proactively. We can also offer models that provide adaptive modeling of the patient's state and adaptive control of the therapy being given, so that we can provide personalized medicine to that patient.
We can offer a digital twin for the study and analysis of patient dynamics, provide research tools to improve algorithms and examine different use case scenarios on the patient, and provide customized algorithms that adapt to different clinical use case scenarios to improve usability and the clinician's workflow.
These are several references that I found useful for DevOps; they cover standardization of best practices, centralization, and maturity models. I hope you benefit from them, and thank you very much for your attention.
[AUDIO LOGO]