Developing AI Enabled Digital Health Applications
Overview
Artificial Intelligence techniques like Machine and Deep Learning are becoming mainstream technologies that can analyze large volumes of data and potentially unlock insights, especially in biomedical applications. Artificial Intelligence systems in medicine have the potential to improve healthcare outcomes. These capabilities are becoming mainstream: in early 2020, the FDA approved the first cardiac ultrasound software that uses Artificial Intelligence to guide untrained technicians in capturing high-quality ultrasound images.
In this introductory webinar, we share real customer examples and case studies from the medical devices industry on how MathWorks is enabling scientists and engineers to identify opportunities to leverage Artificial Intelligence techniques for developing Digital Health Applications. We will also cover how MATLAB & Simulink can be used to comply with global regulatory guidelines and international medical device standards.
About the Presenter
Kirthi K. Devleker is the global Medical Devices Industry Manager at MathWorks. In his current role, Kirthi works closely with customers developing Digital Health and Medical Devices, academic researchers, and regulatory authorities to help them see the value of modeling & simulation and how one can leverage the latest trends such as AI to build the next generation of medical devices. Prior to this role, Kirthi served as a senior product manager focusing on evolving the roadmap and long-term strategy for Artificial Intelligence and signal processing products at MathWorks. Kirthi has a background in signal and image processing and has been with MathWorks for 10 years. He holds an MS in Electrical Engineering from San Jose State University, CA.
Recorded: 18 Feb 2021
Hello, everyone. Thank you very much for joining this webinar, Developing AI Enabled Digital Health Applications, using MATLAB and Simulink. I'm your host. My name is Kirthi Devleker, and I'm the Global Medical Devices Industry Manager here at MathWorks.
My primary interests have been signal and image processing in the area of machine and deep learning within the medical industry. So I'm particularly excited about today's talk and I look forward to working with you all to leverage AI on all your projects in the medical industry.
So there are two key takeaways from my presentation. The first is that AI, or artificial intelligence, and machine learning techniques are becoming an integral aspect of modern health applications. And my second takeaway is that MATLAB and Simulink are perfect tools for developing the next generation of digital health applications.
So during this webinar, I'll cover why we think this is the case. And I'll also show you some very good examples and some workflows on how our customers have used MATLAB and Simulink to develop next generation digital health applications.
So first, let's start with where we see artificial intelligence in our daily lives. So there are many applications of AI as a technology. And you may be already encountering or using some of these applications. For instance, you have smart assistants like Alexa or Google or Siri that respond to voice commands.
You've probably also seen self-driving cars on the road with various levels of automated driving. Facial recognition is also another key task that helps you identify your friends on social media networks or even in any other platforms. And finally augmented reality for say automated language translation, just to name a few.
These are some very well-known applications, some of which you've most likely come across in your daily life. And AI is finding opportunities across all sorts of engineering and science applications. But what about biomedical applications, do you need AI here? Before we answer that question, let's take a step back and look at an area where we are seeing a lot of interest in AI.
So the first thing I want to mention, and this is a good time to do it, is digital health. What do we mean by digital health? Simply put, digital health is the convergence of some of these newer technologies like wearables, mobile health, telehealth, clinical decision support systems, and personalized medicine. These are areas where a lot of innovation is happening.
And other technological advancements like AI are helping a lot of people build applications that are more centered on patients and that make patients' lives better. We've seen, for instance, some examples in the area of mobile health. And the notable part here is that some of these applications are regulated, meaning the FDA is also taking a keen interest in them.
One thing we've seen is that the FDA has certainly encouraged the use of AI and ML, but it has not stopped there. There is a new FDA Center of Excellence on digital health. And more recently, in January 2021, that's last month, they released a new AI/ML-Based Software as a Medical Device (SaMD) Action Plan. At a high level, this action plan consolidates a lot of what the FDA has been hearing from industry participants on how to regulate AI and how to incorporate AI into these devices to make them smarter.
The FDA has taken that feedback from industry and other participants and rolled this action plan out. So you'll see how the FDA is approaching this, what its plan is for regulating this area of digital health, and how it intends to act on it.
So now that you have some background on AI and digital health, let's look into some biomedical applications where AI is really useful. The first example here is one you've probably seen a lot, especially in the case of COVID: lots of patients are getting X-rayed, and clinicians are looking for disease symptoms in the images. Having a doctor go through all these reports is very tedious and time-consuming work.
The big question we want to ask ourselves is, what if some of this work could be automated to help alleviate the pressure and workload on doctors? For example, during a pandemic it can be very challenging for a doctor to go through all these reports manually. As we'll see in a moment, automating repetitive tasks like this is often well suited to AI. And a lot of the customers we see are building digital health applications to help both patients and care providers.
Today I will be showing you some real world user stories to challenge you to think about what the rise of artificial intelligence could do for you and how you could leverage these systems, leverage artificial intelligence for the system you develop, the services you use, or the research that you perform.
OK. So in the medical industry, coming to some specific examples, AI is already there. On the bottom left, you'll see a child wearing a sensor-equipped smart jacket. The goal there is to help save children's lives by diagnosing pneumonia, even when medical personnel are not present.
What you see here is two engineers from Makerere University in Uganda. I hope I got that right. They built this smart jacket with sensors built into it, and their goal is to identify the onset of pneumonia in children. A lot of children die simply because pneumonia is not diagnosed well.
This example got a lot of coverage, and I can show you some links on how we've worked with this customer to help them build this next-generation device. The second example is in the area of supporting the production of contact lenses by using AI, which is done by monitoring quality and helping investigate yield losses.
A contact lens, as you can imagine, is actually a medical device. And there are some good examples here where you can use AI not only to increase quality but also to reduce yield loss. The third example is re-enabling patients with motor neuron disease to communicate by converting brain signals into words and phrases. This is another good example where you are trying to enhance patients' lives without being too intrusive.
And finally, on the bottom right, AI has given a drummer who lost his arm some superhuman musical abilities. I'll show you some examples here. The one common thing across all these examples is that MATLAB, and when I say MATLAB I mean MATLAB and Simulink, has been really instrumental in helping these people not only build these devices, but build them in a way that gets them to market quickly while avoiding a lot of pitfalls in terms of lost revenue and the like.
So in the medical industry, we are basically in the early stages of what we could call a significant AI transformation. Here is a report from Gartner, a market research company. A recent survey of 3,000 companies indicated that AI is starting to emerge, at least in some organizations. And the interesting fact is that only 4% have actually deployed it. So I think there is a lot of good opportunity here for you to get involved. And Gartner predicts that this is going to happen at large scale in the next few years.
One other thing I want to mention is that in 2020, MathWorks was recognized as a leader in the Gartner Magic Quadrant for data science and machine learning platforms. As you'll see today, we believe this recognition demonstrates our ability to empower your team, including those with limited experience in AI or medical device development. We also provide complete workflows from data preparation and modeling all the way to deployment, whether you're deploying on the cloud or on edge devices.
We can also show you how you can use Simulink to tackle integration issues and reduce risk. So before we go any further, let's take a step back, or at least pause here for a few minutes, to clear up some terminology. When I say artificial intelligence, or AI, this is the basic definition that has been in use since the term was first coined in the 1950s: the capability of a machine to imitate intelligent human behavior.
Now, as you can imagine, this is a very broad statement, which begs the question: if AI has been around for decades, why all the buzz now? This is where it gets really interesting, when the machine can sometimes match or even exceed human performance. Today this gives us the opportunity to offload repetitive tasks, or even to get computers to do jobs more safely and efficiently than we can.
Now practically speaking, when people say AI today, they almost always mean training a machine to learn a desired behavior, sometimes even exceeding intelligent human behavior. What do I mean by this? You could think of it this way: there are two main ways to get a computer to do what you want. The traditional approach, which I'm guessing most of you are familiar with, is to take some data and write a program that processes the data and produces the desired output.
With an approach called machine learning, this is flipped around. You have data and outputs to begin with; this is what you feed to the machine, and the machine effectively writes a program for you. There are even some techniques, like clustering, where you don't need the outputs at all, just the data. But it is not quite accurate to call the thing the computer creates a program. People usually call it a model.
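The flipped workflow described here can be sketched in a few lines. This is not from the webinar, which uses MATLAB apps; it's a minimal illustration in Python where the "program" the machine writes is just a learned threshold, and the readings and labels are made up for the example:

```python
# Minimal illustration of the "flipped" workflow: instead of hand-writing
# a rule, we let the computer derive one (a model) from data and outputs.
def fit_threshold(samples, labels):
    """Learn a 1-D threshold classifier from (value, label) examples."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(samples)):
        preds = [1 if x >= t else 0 for x in samples]
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Data and desired outputs go in; a "program" (here, a threshold) comes out.
temps = [36.5, 36.8, 37.0, 38.4, 39.1, 39.6]   # hypothetical readings
fever = [0,    0,    0,    1,    1,    1]      # desired outputs (labels)
threshold = fit_threshold(temps, fever)
print(threshold)  # 38.4: the learned decision boundary
```

The point is only the direction of the arrow: the data and labels produce the rule, rather than the rule producing the labels.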
Now, machine learning models are largely black boxes. They can generate the desired output, but they are not made up of a sequence of operations like a traditional program. There are ways to make machine learning models more interpretable and explainable, and we can talk about that later. So to put the key terms in context: you have artificial intelligence, which is the broad area, and machine learning, a subset of AI, which is the most common family of techniques applied today. In fact, people use the terms AI and machine learning interchangeably.
But there are other technologies within AI that are not machine learning. And there is a subset of machine learning called deep learning. Deep learning refers to the use of deep neural networks, such as convolutional neural networks, among others. The main distinction here is that the model learns to perform tasks directly from the data. In other words, you don't need to know ahead of time what features are important, as was the case with classical machine learning. The lines are getting a little blurred here now, but this distinction will become clearer as we discuss some real-world applications.
So as we ask whether you are ready for AI, we start to get a sense of what you might need. The first thing, as you probably would have guessed by now, is data, and oftentimes you need lots and lots of it. The reason you need lots of data, especially in healthcare, is that if you're developing a model, you want to make sure it works for a wide variety of the population and does not have some kind of bias with respect to demographics and the like. So the first requirement is data, and you need to go get some.
Then you need the corresponding desired outputs, known as labels or responses. In healthcare, you might have experts such as cardiologists annotate your ECG signals, or radiologists annotate your images. And finally, you need the ability to create these machine learning models, right? These are the kinds of things you would need. But we are not quite ready yet.
Now, if you want to put an AI solution into production, you're going to need a bit more. To explore what's missing, let's return to the self-driving car application. Consider developing a lane detection algorithm for a lane assist system, which gently steers you back on track if you drift out of your lane. So far, you have some data, you have the output, which is the location of the lanes, and the model detects these lanes from the camera.
But lane detection is just one tiny part of the overall lane assist feature, and developing that complete feature requires much more than just data, output, and model. The system needs to access and synchronize data from multiple sensors, perhaps multiple cameras, or even knowing whether a turn indicator is on. The data needs to be analyzed and pre-processed, for example, to account for bad weather or snow. Now we have a model that can detect lanes, but it isn't useful on its own. All of this needs to be deployed in the larger system framework.
So the big question is, what else do we need for an effective AI solution? In short, you need to incorporate everything else that we do as engineers and scientists. Developing the AI model is typically just one stage of the workflow. In fact, if you look at the different parts of your workflow, developing this model is just one small aspect. To put it into context, dedicated AI tools such as Caffe and TensorFlow, and MATLAB as well, support a lot of the AI model development. But looking at it at a high level, these are only some of the tools you would need.
But more importantly, MATLAB also integrates with the complete workflow to help you develop a fully engineered system. Looking at the complete workflow, you also need the ability to access data from sensors, files, and databases. You need to analyze, explore, and process this data, because simply having the data is not going to be enough. And sometimes you will have to do some feature engineering on the scientific data you're using.
Now, the AI model will ultimately be integrated into a larger system that likely also uses traditional algorithms. As such, it needs to integrate with the simulations we use to design, test, and verify the overall system behavior. And finally, we'll need tools for deploying the final algorithms, for example, as a standalone desktop application, in an enterprise or cloud system, or on embedded devices.
So this is the main point of this talk. I want you to think of AI as just another tool in your toolkit: another technique for solving problems in a different way, or for tackling new opportunities, depending on what you're looking at. Hopefully this gives a pretty good sense of why MathWorks is talking about AI. We've been working hard to make AI easy and accessible for engineers and scientists within the context of your overall workflow.
Now coming to the big question: are you ready to develop AI-based digital health applications? We will pose a few more questions to figure out if you're ready for AI. We talked about the importance of machine learning when implementing AI solutions, but what if you don't have any experience with machine learning? Let's take a look at a surprising application of AI, where an entrepreneur in Uganda learned and applied machine learning to tackle a problem he wasn't able to solve any other way.
Pneumonia is the number one infectious cause of death for children under the age of five worldwide. This is primarily due to misdiagnosis, especially in remote areas without many medical professionals or much equipment. Understanding the impact of having so few doctors in the region, Brian and his colleague from Makerere University in Uganda set out to find a way to help their community prevent pneumonia deaths.
As recent engineering graduates, Brian and his team sought to turn the medical issue into an engineering problem. This required turning the symptoms into data. The major vital signs of temperature and breathing rate were relatively straightforward to measure; quantifying the lung sounds was the bigger challenge. To do so, they designed a wearable medical device called the Smart Jacket. The Mama Ope Smart Jacket prototype has five microphones, effectively serving as a wearable stethoscope to listen for pneumonia's distinctive wheezing and crackling lung sounds.
The team collected clinical data and, with the help of medical professionals, labelled the recordings of healthy patients and those with known cases of pneumonia. They then used MATLAB signal processing tools to find and extract features from these audio recordings and help identify the crackle sounds. Using the features extracted from the labelled data, they were able to train a classifier using MATLAB's interactive machine learning tools. And they integrated the model into the jacket.
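The exact features the Mama Ope team extracted are not shown in the webinar. As an illustration only, here is a plain-Python sketch of the kind of per-frame audio features (short-time energy and zero-crossing rate) a crackle detector might start from, run on a synthetic waveform with a noisy "crackle" burst inserted:

```python
import math
import random

# Hypothetical sketch of simple per-frame audio features (not the actual
# Mama Ope features): short-time energy and zero-crossing rate.
def frame_features(signal, frame_len=100):
    feats = []
    for start in range(0, len(signal) - frame_len + 1, frame_len):
        frame = signal[start:start + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        zcr = sum(1 for a, b in zip(frame, frame[1:])
                  if (a >= 0) != (b >= 0)) / (frame_len - 1)
        feats.append((energy, zcr))
    return feats

# Synthetic waveform: a slow "breath" sinusoid with a noisy burst inserted.
random.seed(0)
breath = [math.sin(2 * math.pi * 2 * t / 1000) for t in range(1000)]
breath[400:500] = [random.uniform(-1, 1) for _ in range(100)]  # "crackle"
features = frame_features(breath)
# The noisy frame stands out by its much higher zero-crossing rate.
print(max(range(len(features)), key=lambda i: features[i][1]))  # 4
```

In practice, features like these would be computed per recording and fed, together with the expert labels, into a classifier.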
The jacket uses Bluetooth to connect to a mobile phone application, which records and analyzes the collected data. It then sends the results to a health care professional, who can make an informed diagnosis without requiring an in-person examination of the child. Let's take a closer look at developing the machine learning model in MATLAB. One of the challenges with machine learning is determining which model to use. With so many different techniques and options available, figuring this out can be very time consuming. Instead of manually trying out each option, Brian used the Classification Learner app to automatically try a whole range of models without rewriting any code.
To begin, open the Classification Learner app from the Apps tab in MATLAB. You then import your labelled data from the workspace or a file, and select how much data to reserve for validating your model's accuracy. With your data selected, you can now begin the session. As you can see, there are many models to choose from. But as I mentioned earlier, you also have the option to simply train them all and then compare the resulting accuracy to help you decide.
Here you can see I am using both cores on my laptop to train models in parallel. Next, the confusion matrix lets you assess how the currently selected classifier performed in each class. Once you are satisfied with the model, you can export it, or generate MATLAB code for use in other applications, to programmatically retrain your model when new data becomes available.
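Outside of MATLAB's Classification Learner app, the same compare-several-models-then-inspect-the-confusion-matrix idea can be sketched in plain Python. The two "models" and their predictions below are hypothetical stand-ins, not the app's output:

```python
from collections import Counter

# Sketch of the workflow the Classification Learner app automates:
# score several candidate models, then inspect each confusion matrix.
def confusion_matrix(y_true, y_pred, classes):
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in classes] for t in classes]

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = ["healthy", "healthy", "pneumonia", "pneumonia", "healthy", "pneumonia"]
# Hypothetical predictions from two candidate models:
model_a = ["healthy", "pneumonia", "pneumonia", "pneumonia", "healthy", "healthy"]
model_b = ["healthy", "healthy", "pneumonia", "healthy", "healthy", "pneumonia"]

for name, preds in [("A", model_a), ("B", model_b)]:
    print(name, accuracy(y_true, preds),
          confusion_matrix(y_true, preds, ["healthy", "pneumonia"]))
```

Comparing accuracies picks the better candidate, while the confusion matrix shows where each model fails per class, e.g. how many pneumonia cases were missed.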
All right. So in summary, Brian followed the machine learning workflow using MATLAB. He was able to quickly try different signal processing approaches to extract features from the recordings, and then develop a classifier based on those features, again quickly trying several different models. Brian says the Smart Jacket prototype requires a child to wear it for only about three minutes. So even a single Mama Ope jacket in a clinic or a school could result in many fast and accurate diagnoses where doctors are unavailable. He says UNICEF has already expressed interest in helping Mama Ope bring its technology to schools, hospitals, and clinics in the region. Perhaps UNICEF might one day report that childhood pneumonia deaths have been reduced with the help of the Smart Jacket, and a fair amount of Ope, or hope. OK.
So now we can answer the first question in determining whether you're ready for AI. We have seen that you don't need any prior experience to start trying AI with MATLAB. You can complement your domain expertise with domain-specific apps to prepare your data, and you can use machine learning apps to try different models and parameters to see what works best. One of the hardest and most time-consuming parts of this application was identifying the best features for developing the model, which brings us to our next question: what if you can't identify features in your data?
There might be several reasons why you can't identify features, or the features might be more complex than you could practically find in the data. In the previous example, Brian listened to lung sounds, so he could correlate the sound of a lung with the condition and identify features based on that. But sometimes it's not that obvious, and feature engineering for a machine learning model can take months or even years.
However, you can still use AI even if you can't identify these features. Recall that earlier I mentioned a subset of machine learning called deep learning, and that machine learning can require a significant amount of time and expertise to identify features. Deep neural networks learn features directly from your data. Besides saving time, deep learning can often identify features that we miss as domain experts, because we inevitably narrow our search based on our own biases about what we think might work.
However, one big trade off with deep learning is that it can take a lot more data, more data than any of us have for a typical engineering or scientific application. Let us explore this trade off with an example where researchers use MATLAB to compare machine learning and deep learning approaches to decode and map brain signals into phrases. This is particularly useful to re-enable motor neuron disease patients to communicate by imagining the act of speaking specific phrases.
Moving on to the next story of our webinar: for patients with advanced motor neuron disease, such as ALS, communication becomes increasingly difficult. As the disease progresses, in many cases motor neuron disease leads to locked-in syndrome, in which a patient is completely paralyzed but remains cognitively intact. You might have seen an example of this on TV with Stephen Hawking.
This is basically the same challenge that Stephen Hawking faced. Eye tracking devices, and more recently EEG-based brain-computer interfaces, enable motor neuron disease patients to communicate by spelling phrases letter by letter, but it can take several minutes to communicate even a simple message. Fortunately, a non-invasive technique known as magnetoencephalography, or MEG, can detect the magnetic activity produced by the electrical signals occurring naturally in the brain.
Researchers at the University of Texas at Austin have developed a non-invasive technology to decode MEG signals and detect entire phrases as the patient imagines speaking them. Because the researchers were working with a new kind of data, they required a tool that would enable them to rapidly evaluate a wide variety of signal processing and machine learning approaches, including deep learning. They began by using simple statistical features, such as the mean and standard deviation, to train a machine learning model.
When this yielded a classification accuracy of only about 80%, they decided to turn to advanced signal processing with wavelets, and to deep learning, which increased the classification accuracy to more than 96%. To do this, they used wavelet scalograms of the MEG signals to represent the rich signal features in the time-frequency domain. The idea was that the resulting images could then be input to a convolutional neural network, which can identify and extract patterns from these time-frequency representations.
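The researchers used wavelet scalograms; as a simplified stand-in for the same time-frequency idea, here is a naive short-time Fourier transform in plain Python, showing how a 1-D signal becomes a 2-D time-frequency array a CNN could consume. This is a sketch of the concept, not the wavelet analysis they performed:

```python
import cmath
import math

# Turn a 1-D signal into a time-frequency "image": one row per time
# window, one column per frequency bin (naive DFT per window).
def stft_magnitude(signal, win=64, hop=32):
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        frame = signal[start:start + win]
        row = []
        for k in range(win // 2):  # keep non-negative frequency bins
            coeff = sum(x * cmath.exp(-2j * math.pi * k * n / win)
                        for n, x in enumerate(frame))
            row.append(abs(coeff))
        frames.append(row)
    return frames

# A signal whose frequency jumps halfway; the time-frequency view shows it.
sig = [math.sin(2 * math.pi * 4 * n / 64) for n in range(256)]
sig += [math.sin(2 * math.pi * 12 * n / 64) for n in range(256)]
tf = stft_magnitude(sig)
early_peak = max(range(32), key=lambda k: tf[0][k])
late_peak = max(range(32), key=lambda k: tf[-1][k])
print(early_peak, late_peak)  # 4 12: the dominant bin shifts over time
```

A CNN trained on such images can pick up patterns, like this frequency shift, that are hard to see in the raw 1-D trace.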
Using MATLAB's interactive deep learning tools, they were able to quickly explore various existing networks. Then by changing only one line of MATLAB code, they were able to scale their training to a server with multiple GPUs and trained their deep neural networks 10 times faster. Let us take a closer look at developing such a deep learning model in MATLAB. As mentioned earlier, one of the challenges with deep learning is that it often requires lots of labeled data.
A common approach to counter this challenge is to use transfer learning, which leverages existing networks that have already been trained. For example, a CNN that has been trained on a million images will already have learned to detect generic features such as edges and colors. With this base knowledge, we can modify and retrain the network for our classification task using far fewer images. To illustrate the transfer learning workflow, let's walk through a quick example using the Deep Network Designer app in MATLAB.
This example uses wavelet scalogram images of ECG signals, not MEG signals, to retrain a pretrained network using transfer learning. We will then use the trained network to make predictions on how new signals can be interpreted.
This is just to show you the workflow: we will train the network and then use the trained network to get results on new signals. This is essentially the workflow used by the researchers at UT Austin; the only difference is that they used MEG signals, and here I am using ECG signals to demonstrate the workflow.
One other thing I want to mention is that once you develop this model, the next step is to take it and deploy it on some kind of mobile, IoT, or edge device, and I'm going to cover that in this example as well. The UT Austin team was able to rapidly accelerate the pace of their research by leveraging the transfer learning workflow in MATLAB.
They quickly loaded and tried a variety of popular and proven pretrained networks, which significantly reduced the amount of data, time, and expertise needed to get started. With the Deep Network Designer app, they were able to quickly modify and retrain these networks without needing to write any code. And by automatically generating MATLAB code, they were able to scale their training to multiple GPUs.
To begin, you can open the Deep Network Designer app from the Apps tab in MATLAB. When you do so, you'll see that you have many pretrained networks available to choose from. We will open AlexNet, which was trained to classify a thousand different object categories such as animals, vehicles, et cetera. Deep neural networks are made up of many layers, each layer performing a specified mathematical operation. In this case, AlexNet has 25 layers, but we only need to modify the layers responsible for making the final classification. To do so, we will drag over a fully connected layer and a new classification layer, which in this case will classify between three options instead of 1,000. Once we reconnect the layers, we are ready to train the network. To begin training, we must select our training data. In this case, we only have 130 images to train the network, and we've also set aside 32 images to validate the training accuracy.
Next we'll modify some of the common training parameters, such as the learning rate, validation frequency, and batch size. These settings depend on what kind of device you're using. With our training options set, we can now go ahead and begin training. We can monitor the training progress to ensure that the network is still learning but not overfitting to our data. As before, we can automatically generate MATLAB code to programmatically retrain our network when new data becomes available, or we can export the trained network for further evaluation.
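The gist of transfer learning, keep the pretrained layers frozen and retrain only the final classification stage on a small labeled set, can be sketched conceptually in plain Python. Everything here (the random "pretrained" layer, the centroid head, the toy data) is a made-up stand-in for the AlexNet workflow, not how Deep Network Designer works internally:

```python
import math
import random

# Conceptual transfer-learning sketch: a frozen feature extractor plus a
# small retrained classification head.
random.seed(1)
W = [[random.gauss(0, 1) for _ in range(2)] for _ in range(8)]  # frozen weights

def features(x):
    """Frozen 'pretrained' layer: random projection followed by ReLU."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W]

def train_final_layer(samples, labels):
    """Retrained head: per-class mean of the frozen features (centroids)."""
    cents = {}
    for lbl in set(labels):
        feats = [features(x) for x, l in zip(samples, labels) if l == lbl]
        cents[lbl] = [sum(col) / len(feats) for col in zip(*feats)]
    return cents

def predict(cents, x):
    f = features(x)
    return min(cents, key=lambda lbl: math.dist(f, cents[lbl]))

# Tiny labeled set: only the head is fit, so a few examples suffice.
X = [(0.1, 0.2), (0.2, 0.1), (0.9, 1.0), (1.0, 0.9)]
y = ["A", "A", "B", "B"]
head = train_final_layer(X, y)
print(predict(head, (0.0, 0.1)), predict(head, (1.1, 1.0)))  # A B
```

Because only the small head is trained, far less labeled data is needed, which is exactly the appeal of starting from a pretrained network.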
In particular, we can use the trained network to classify the test data that we set aside, and once again use the confusion matrix to assess how the network performed in each class. Once you have the trained network, you can also use Simulink, as I'm showing here. In Simulink, you can set up a basic workflow that provides you with the prediction of the model itself. What I have here is a model, and I'm showing a zoomed-in version of the preprocessing layer, which implements the workflow we saw earlier.
You have a pretrained network that you can drag and drop in Simulink. This MATLAB Function block contains some MATLAB code to convert the input signal into a time-frequency representation. Once you have this model, Simulink not only helps you do this very quickly, but also lets you benefit from generating code. So once you're satisfied with your model, you can generate GPU code from it, which is basically CUDA code. It's probably not very clear on the screen, but you can see that you can select and generate GPU code.
This is your CUDA code, which you can then deploy on an embedded device to be used in the field. All right, so now that you've seen two examples, I would like to end with one where AI exceeds human capabilities. This is a picture of Jason Barnes, a drummer who lost the lower half of his right arm in an accident. Jason is wearing a robotic prosthetic drumming hand created for him by researchers at the Georgia Tech Center for Music Technology.
You might notice that the right hand is actually holding two sticks. The longer one is the primary stick; Jason directly controls it by flexing his muscles, as if he were controlling a drumstick with his own wrist and fingers. The shorter stick is autonomous: it plays on its own based on what it hears from Jason and other musicians, and on other signals Jason sends it. Here we'll see Jason trying out the basic prosthesis behavior, where he is pleasantly surprised to see how his natural wrist and finger movement is translated into the stick's action.
Flex. Relax. Flex.
[DRUMMING]
[INSTRUMENTAL MUSIC]
We can look at this application as a control system, with both Jason and the prosthesis working together to create music. There is a feedback loop, as Jason plays based on what he hears. The prosthesis is fed input from EMG sensors measuring the muscle movement in Jason's arm, which is routed through a processing laptop into a PID controller running in Simulink on a host computer. The controller parameters determine the stick response, and Jason can adjust them simply by flicking his wrist forward or backward while holding the stick away from the drums, to tighten or loosen the feel.
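To make the control idea concrete, here is a minimal sketch of a discrete PID loop driving a crude first-order "stick position" model toward a target. The gains, the plant model, and the target value are all invented for illustration; they are not the researchers' actual parameters (their controller runs in Simulink):

```python
class PID:
    # textbook discrete PID controller
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# drive a hypothetical first-order "stick position" plant toward a target of 1.0
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
pos, target = 0.0, 1.0
for _ in range(5000):                  # 50 simulated seconds
    u = pid.step(target, pos)
    pos += (u - pos) * 0.01            # simple first-order plant response
```

In this toy setup, raising kp stiffens the response and lowering it softens it, which loosely mirrors the "tighten or loosen the feel" adjustment described in the video.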
Additionally, a microphone can listen to what Jason or the other musicians are playing, and this feeds into AI algorithms running in the same Simulink model to control the autonomous stick. Now let's see how it all comes together as Jason plays in a quartet with other jazz musicians. You will hear that the researchers have succeeded not just in restoring Jason's artistry, but have also given him capabilities that would make fully able-bodied drummers jealous.
[INSTRUMENTAL MUSIC]
Hopefully, through this wide range of applications, you've developed some ideas as to how AI might be able to help your work. We have learned that you can be ready for AI even if you don't have any experience with machine learning. You can easily use programming interfaces and apps in MATLAB to quickly try out different approaches and, just as importantly, leverage your own domain expertise to prepare the data.
Now, if it's not practical or possible to identify the features yourself, you can also consider other techniques like deep learning, which identifies features as part of the training process. And as labeling tasks get more complicated and time-consuming, you can use apps or other automation techniques to help you label or annotate your data. And if you don't have the right data, you can generate it with simulations using Simulink.
So we've seen that there are ways to work through each of these challenges when creating an AI model. But remember, you can't do AI without everything else: to build complete AI systems, they must be able to tie into your existing workflows. MathWorks offers you a lot of support and help along the way. So before we end today, I'd like to highlight some of the ways you can get started, even without a MATLAB license or installation.
Now, one other thing we didn't cover is how you can use Simulink to help you go all the way from modeling and simulation to testing and verification by leveraging code generation. This entire process is designed to help you meet quality standards such as IEC 62304. You can develop next-generation devices with these workflows while saving expensive time, because the code generation and modeling capabilities help you test and validate early, without having to wait until later to do so.
MATLAB and Simulink are easy to use: you can work at higher levels of abstraction throughout your device development workflow. They are designed for engineers and scientists who are not professional programmers, and we also provide support for vertical applications like advanced signal processing, along with higher levels of service and support through MathWorks. Now, if you want to start off on your own, we have multiple training courses for AI, data science, and Simulink; you may want to check out the online training section on mathworks.com.
We are also partnering with Coursera to provide the Practical Data Science with MATLAB specialization. Or, if you prefer, we offer a wide variety of instructor-led trainings. These have typically been a mix of in-person and online offerings, but in recent months we have expanded our instructor-led online trainings, which have been very well received. Our worldwide team of training instructors are engineers themselves, with medical-specific backgrounds and industry expertise.
We offer courses from fundamentals to advanced topics, including statistical methods, signal processing, big data, feature extraction, and more, and these courses can be customized. In addition to formal training, application engineers at MathWorks also frequently hold seminars and workshops, such as these biomedical AI hands-on workshops, which use real-world biomedical signals and images to help you explore the complete AI pipeline, from preparing the data all the way to deployment.
These workshops are hosted on MATLAB Online, so you don't have to worry about installing MATLAB; as long as you have internet access and a laptop, you are good to go. Now, when time is critical, we also have consulting services that can help you. Our consulting approach is white box, with the goal that your team owns and operates the resulting work, and services are customized to your needs to maximize your investment and ensure your success.
In summary, MathWorks has partnered with thousands of companies to take advantage of disruptive technologies. Working with you, we will focus on three areas: providing you with the fastest and most productive platform available; a platform that will enable your team to quickly adapt to changing business needs and allow users from different industries to onboard easily; and something no open-source language can give you: world-class customer and technical support that will be with you from day one, whether you are just exploring some ideas or are serious about building a medical device and ready to work through all the relevant standards and regulatory guidelines. MathWorks is here to help.
Our success is your success, and we plan to partner with you as you look to develop AI-driven systems. Now, coming back to this biomedical AI workshop, I just want to give you some more details. The workshop is available in two flavors. This is the imaging flavor, where we'll cover all of these steps in action: we'll start from labeling, look at the deep learning and machine learning workflows, and also look at how you can improve accuracy and deploy models. This is a special topic, and we'll do it with real-world data.
Very similarly, we'll do the same thing with real-world ECG signals in the other flavor. So, depending on what you're working with, we try to follow the same format. With that, we've come to the end of our webinar. Thank you so much for tuning in and attending my presentation. If you have any questions, feel free to reach out; our email address is medical@mathworks.com. Anytime you want to send me an email, I'm more than happy to answer your questions. So with that, thank you very much, and have a great day, everyone. Bye bye.