Modelling and Simulation of Radar Signal Processing Applications with MATLAB
From the series: Radar Applications Webinar Series
Overview
Engineers working on signal processing for radar applications simulate systems at varying levels of abstraction and use a combination of methods to express their designs and ideas.
This session looks at how recent developments in MATLAB® and Simulink® enable more effective design and development of radar system models through efficient simulation. Highlights include:
Modelling & Simulation of Radar Systems for Aero Defense applications
- Challenges faced by radar engineers
- Making engineering trade-offs early in the design cycle
- Selecting the right level of model abstraction
- Overview of deploying radar signal processing algorithms to processors & FPGAs
Development of signal processor and extractor module for 3D surveillance radar using MATLAB
- Modelling customized signal processor modules for 3D surveillance radars
- Discussion on algorithmic complexities in conventional approach
- Ease of implementing and testing algorithms in MATLAB
- Quantifying performance during multiple developmental phases
About the Presenter
Sumit Garg | Sr. Application Engineer | MathWorks
Sumit Garg is a senior application engineer at MathWorks India specializing in the design, analysis, and implementation of radar signal processing and data processing applications. He works closely with customers across domains to help them use MATLAB® and Simulink® in their workflows. He has over ten years of industrial experience in the design and development of hardware and software applications in the radar domain. He has been a part of the complete lifecycle of projects pertaining to aerospace and defense applications. Prior to joining MathWorks, he worked for Bharat Electronics Limited (BEL) and Electronics and Radar Development Establishment (LRDE) as a senior engineer. He holds a bachelor's degree in electronics and telecommunication from Panjab University, Chandigarh.
Mohit Gaur | Manager | Bharat Electronics Limited
Mohit Gaur is working as Deputy Manager in Bharat Electronics Ltd, focusing on the development of radar systems for defense applications. His core domains are radar signal and data processing and radar system engineering. Mohit holds a bachelor's degree in Electronics and Communications from Delhi College of Engineering, India, and is currently pursuing an M.Tech in Microelectronics from IIT Madras. He is actively involved in the development of upcoming radars.
Pratishtha Jaiswal | Sr. Engineer | Bharat Electronics Limited
Pratishtha Jaiswal is working as a Senior Engineer in Bharat Electronics Limited, focusing on the development of radar systems for defence applications. Her core domain is radar data processing. Pratishtha holds a bachelor's degree in Computer Science and Engineering from the Institute of Engineering and Technology, Lucknow, India. She is involved in the development and integration of upcoming radars.
Recorded: 8 Feb 2023
Hello, everyone. Thank you for joining the session on modeling and simulation of radar systems. My name is Sumit Garg, and today, I'll be talking about how we can design the radar systems using MathWorks tools.
So in this presentation, I will cover broadly four topics, starting with an overview of some challenges radar system developers need to consider in building radar systems. Second, I will show you solutions MathWorks provides to help you perform system engineering in radars and how to make smart engineering trade-offs early in the design cycle. Third, I will show you how you can perform modeling and simulation at various levels of fidelity, starting from a preliminary design, and ending in a detailed one that can become a real manufactured system. And finally, I will talk about how you can deploy all these applications to processor and FPGAs.
Upgrading radar systems is a common trend that we have seen over the past few years. Radar systems are currently required to perform different tasks, driving an evolution towards multifunction radars. One of the key enablers is a phased array frontend, which allows one system to perform multiple tasks that were previously performed by individual dedicated radar systems.
So multiple functions are performed by a multifunction radar. These tasks include search and track, and developing an understanding of what is going on in the environment. They can also identify what is out there. We see AI being adopted more and more for this kind of classification.
Multifunction radar also includes communication. This is part of the flexibility that enables the same RF system to be used for multiple radar functions and to transmit and receive communication signals. Other common functions include environmental assessment, imaging, and interference mitigation. So these are the areas in which we are seeing radar systems being used in a multifunction role. These are the industry trends that we are noticing.
Another challenge is to integrate radar with different sensors on board, in the case of state-of-the-art new fighters. These vehicles need to be more connected, able to autonomously interact with other vehicles such as UAVs, and have enhanced intelligence and surveillance capabilities, for example, the next-generation fighters that are currently being developed. In addition to dealing with these complexities, modern radar systems also need to detect smaller targets. Let's have a look at an air traffic control system, like here.
Sumit?
Yep?
You can increase your voice. It's a bit low.
OK. Let's have a look at the air traffic control system, like the one shown here. An air traffic control system was designed to detect larger passenger aircraft. In this example, we have placed the radar, and you can see the radar in the model start detecting and tracking the targets.
What type of challenges do we face for detecting smaller targets like drones in these scenarios? Smaller targets are difficult for our radar to detect because less energy is being reflected back to the radar. Radar cross-section is a measure of how detectable our target is by the radar system.
A drone presents a challenge due to its small cross-section. And smaller targets also fly at lower altitudes. In general, radar systems have to operate in a range of environmental conditions, and these challenges increase at lower altitudes.
For example, the lower-altitude paths bring in effects of terrain, which produce unwanted clutter returns. This makes it more difficult for the radar to detect small targets. It also makes it difficult to avoid false tracks, which cause confusion in systems that maintain situational awareness.
When radar systems do all these tasks, the system complexity greatly increases. This makes the need for modeling, simulation, and deployment even more important. There are many portions of the radar that need to be included to produce a faithful simulation, and these are all the components inside and outside of the radar itself, which include the phased array antenna, scene creation and visualization, effective management of radar resources, mitigating the risk of clutter and interference, modeling targets, the environment, and the propagation channel, and also the integration of the different subsystems.
So these are the challenges which are generally faced by radar engineers. Given these challenges, how do we overcome them? How can MathWorks tools help you explore these design trade-offs, and what solutions from MathWorks can help in the early design cycle? Let me talk about that now.
So you can develop radar systems with MATLAB and Simulink. In our view, the building blocks for a radar system are a frontend, antenna systems and phased arrays, signal processing and data processing blocks for generating detections and tracks, and a resource management block for modeling a multifunction radar system that controls the resources between different tasks. Because it is important to predict and analyze the performance of a radar system in realistic environments, it is essential to consider the effects of the environment and targets by modeling scenes and scenarios, and MATLAB and Simulink provide a platform to analyze, simulate, design, deploy, integrate, and test all these components of the radar system.
There is always a trade-off between simulation time and fidelity. In the early stages of the design, you want to explore as many ideas as possible. For this, you need a low-fidelity model to be able to explore a lot of what-if scenarios in a short span of time.
To validate your design, you need a higher level of fidelity. And for detailed designs, you need a much higher level of fidelity. As you can see in this chart, the simulation time increases as the fidelity of the models is increased. So it is essential to have models with different levels of fidelity that can serve different stages of a radar system lifecycle.
One of the key enablers to support the full radar lifecycle is three levels of abstraction for analyzing and modeling radars: power level, measurement level, and waveform level. The power level is based upon the radar equation and is used for link budget analysis at the system level. At the measurement level, the signal processing is abstracted out, and detections are generated based on the target SNR and receiver operating characteristics.
This abstraction level is used for scenario analysis and for designing tracker systems that need longer-duration scenario simulations. The waveform level is used to generate I/Q-level data. In this abstraction level, you actually model all the components of the radar system, including the antenna and transceivers, and algorithm development and end-to-end performance analysis are two example applications for this abstraction level.
As the fidelity of these abstraction levels increases from power level to waveform level, so do the required computational resources. That is why the measurement level is more suitable for long-duration scenarios, whereas the waveform level is mainly used for much shorter-duration scenarios. It is important to make sure the results of the analysis using different abstraction levels are consistent. There is an example link that I've included at the bottom of the slide. I encourage you to take a look at that example, which shows that the predicted SNR in the link budget analysis and the detections in both the measurement and waveform abstraction levels are consistent.
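As a rough sketch of the power-level abstraction described above, the radar range equation can be evaluated directly in MATLAB. The parameter values below are illustrative assumptions, not numbers from the talk.

% Power-level (radar equation) sketch: check whether a candidate design
% meets a detection SNR objective at a given range.
fc     = 3e9;                          % operating frequency (Hz), assumed S-band
lambda = physconst('LightSpeed')/fc;
Pt     = 5e3;                          % peak transmit power (W), assumed
tau    = 10e-6;                        % pulse width (s), assumed
G      = 32;                           % combined Tx/Rx antenna gain (dB), assumed
rcs    = 0.01;                         % small (drone-like) target RCS (m^2), assumed
L      = 6;                            % system losses (dB), assumed
rngm   = 50e3;                         % range of interest (m)

snr = radareqsnr(lambda, rngm, Pt, tau, 'Gain', G, 'RCS', rcs, 'Loss', L);
fprintf('Single-pulse SNR at %.0f km: %.1f dB\n', rngm/1e3, snr);

Comparing this single-pulse SNR against the detectability threshold for the required Pd and Pfa is essentially the comparison the Radar Designer app's stoplight view automates.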
With the help of the Radar Designer app, you can interactively perform link budget analysis. You can use this app in early stages of the radar system development when you know the requirements of the system and want to come up with a set of parameters that satisfy those requirements. You can start with five built-in configurations to get a head start, including automotive radars.
There is a great set of visualizations available for trade-off analysis, such as the stoplight chart and range-Doppler coverage, to name a few. In the table at the bottom of the GUI, you see the radar performance metrics, such as minimum range and range resolution, and the threshold and objective values for each of the parameters. The color coding of the last column of the table indicates the system performance compared to the performance objectives and thresholds.
If the objective is met, the color is green. If the performance is not meeting the objective but still meets the threshold, the color is yellow. And red indicates that the performance is below the threshold.
Now, in the next slide, I would like to show the parameters, including the radar system parameters as well as target and environmental parameters, that you can define for your link budget analysis. The system parameters include fundamental radar parameters, such as operating frequency and peak power, and hardware specifications, such as noise figure and beam shape loss. You can also consider the processing and scanning gains and losses in your budget analysis. And of course, you want to include the effects of the environment in your link budget analysis and system design, such as target RCS and Swerling models, as well as surface clutter and precipitation.
Once you are done with your trade-off analysis, you can export the radar design and generate reports and scripts, including an Excel spreadsheet. Continuing with system engineering, you can also analyze the performance of synthetic aperture radars by defining system parameters, such as antenna size, platform motion, and signal bandwidth, and predict performance metrics such as resolution. You can also assess the effectiveness of MTI processing as a function of frequency, PRF, and clutter spread.
And one more example in radar system engineering is analyzing the detectability of a target over a terrain. You can place the target at a specific location and give it a trajectory, as shown here. The color map of the target trajectory shows the SNR of the target. The red color indicates a lack of line of sight from the radar. And you can obtain a detectability map based on target SNR, line of sight, and the effect of signal processing gains, for example, pulse integration here.
Now, with that preliminary design in place as an output from the app and a scenario defined, we have a range of modeling options to start with. We can build a model starting with one of the shipping examples in MATLAB and Simulink, or you can construct your own building blocks as well. With the Radar Toolbox, you can author and simulate radar scenarios. In this slide, I will briefly introduce the workflow, and then I will provide more details on each of the steps in the workflow.
First, we start by modeling platforms and targets. We will see different ways of defining the RCS for these platforms and targets. Then we will model the effects of the environment, which include clutter from land and sea surfaces and refraction due to the atmosphere, and you will define the trajectories of the platforms and targets in your scenario. After that, you want to model the radar sensors. Based on your simulation needs, you want to pick the appropriate abstraction level that we discussed in the beginning. I will discuss more on this later.
At this point, you are ready to run the simulation based upon the sensor model. You can also perform Monte Carlo analysis and perturb either the ground truth or the sensor. Let's go through each of these steps in a little more detail.
You can model radar targets with different levels of fidelity. You can have point targets with a radar cross-section that is a function of azimuth and elevation angles, as well as frequency and polarization. You can also utilize analytical RCS for basic shapes, such as a cylinder, and with superposition of these basic shapes, you can make a non-rigid body model with multiple scatterers. You can also use measurement data or full-wave simulation results, for example from Antenna Toolbox, as the radar cross-section for the targets.
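A minimal sketch of the point-target case: an angle-dependent RCS table attached to a platform in a radarScenario. The pattern values are synthetic placeholders standing in for measured or full-wave data, and the name-value pairs follow the Radar Toolbox rcsSignature and platform interfaces as I understand them.

% Point target with an azimuth/elevation-dependent RCS signature
scene = radarScenario('UpdateRate', 10);

az  = -180:10:180;                      % azimuth grid (deg)
el  = -90:10:90;                        % elevation grid (deg)
pat = -10 + 5*cosd(el(:))*cosd(az);     % assumed RCS pattern (dBsm), el-by-az

tgtRCS = rcsSignature('Pattern', pat, 'Azimuth', az, 'Elevation', el);

platform(scene, 'Signatures', {tgtRCS}, ...
    'Trajectory', kinematicTrajectory('Position', [2e3 0 -500], ...
                                      'Velocity',  [-50 0 0]));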
To increase the level of fidelity for analysis, you can model the environment and assess the effect of land and sea surface clutter. In the next couple of slides, I will talk about the land and sea surface clutter. And just briefly, for the atmospheric refraction part, there are a number of built in atmospheric models based on effective Earth radius and refractivity gradient.
There are three options for you to define the land elevation height map. You can define a flat surface. You can import data files, for example, digital terrain elevation data (DTED) files. Or you can generate a custom elevation height map; there are helper functions available for you to synthesize custom elevation data, or you can use your own functions to generate these elevation height maps.
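A sketch of the custom height map option, under the assumption that the Radar Toolbox landSurface and surfaceReflectivityLand name-value pairs are as shown; the terrain synthesis itself is just a placeholder, not DTED data.

% Custom elevation map added to a scenario as a land surface
scene = radarScenario;

[x, y] = meshgrid(linspace(0, 10e3, 200));
ht = 100*sin(2*pi*x/5e3).*cos(2*pi*y/5e3) + 50*rand(size(x));   % assumed terrain (m)

refl = surfaceReflectivityLand('Model', 'APL', 'LandType', 'Urban');  % built-in model

landSurface(scene, 'Terrain', ht, ...
    'Boundary', [0 10e3; 0 10e3], ...
    'RadarReflectivity', refl);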
After you define the elevation height map, you want to assign a reflectivity model to your surface. Here is a table of built-in land surface reflectivity models that covers a large frequency range for all grazing angles. And if there is a model you would like to work with that you don't see in the list, you can bring in your own model as well. Like the land surfaces, you can also model sea surfaces with motion.
We provide built-in motion models for sea surfaces based on the Elfouhaily model, where we multiply a spectral model and a spreading function to model two kinds of waves: gravity waves with low frequency and higher elevation, and capillary waves with low elevation and higher frequency. You have three parameters to control the behavior of the sea motion: wind direction, wind speed, and fetch. You can also control these parameters through the sea state. It is important to note that besides the available built-in model, you can bring in your own models as well.
For the platforms and targets that you defined earlier, you can model motion by defining a trajectory. You have two ways of defining a trajectory: either by defining waypoints or by kinematic properties. Having access to different frames, like the global frame, the platform frame, and ultimately the sensor or local frame, provides the required flexibility to model complex scenarios with multiple platforms while the sensors are pointing in different directions.
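A minimal sketch of the waypoint option; the positions and times below are illustrative.

% Waypoint trajectory: position versus time; a kinematicTrajectory with
% velocity/acceleration is the other option mentioned above.
wpts = [    0     0 -1000;              % NED waypoints (m), negative z = altitude
         5000  2000 -1200;
        10000  2000 -1500];
toa  = [0; 60; 130];                    % time of arrival at each waypoint (s)

traj  = waypointTrajectory(wpts, toa);

scene = radarScenario('UpdateRate', 1);
platform(scene, 'Trajectory', traj);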
There are two different radar sensor models available. The radar data generator is a measurement-level model that generates detections or tracks. There are three detection modes that you can use in your simulation: monostatic, bistatic, or electronic support measures (ESM).
In the bistatic mode, you can set up the radar model to generate clustered or unclustered detections. The other sensor model is the radar transceiver, which is based upon the waveform level. You can define the waveform and the phased array, as well as the transmitter and receiver.
With this model, you can generate I/Q signals, which can be used for training a network for artificial intelligence and for developing signal processing algorithms. With a number of available scopes, you can easily generate different visualizations, such as range-Doppler and range-angle maps. Next, I will show you some examples of using this workflow to model different scenarios.
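A measurement-level sketch tying the pieces above together: a radarDataGenerator mounted on a platform produces detections directly (no I/Q), which keeps long scenario simulations fast. The sensor parameters and target motion here are assumptions for illustration.

% Measurement-level simulation loop producing objectDetection outputs
scene = radarScenario('UpdateRate', 1, 'StopTime', 30);

rdr = radarDataGenerator(1, 'No scanning', ...
    'DetectionMode', 'Monostatic', ...
    'FieldOfView', [120 20], ...
    'RangeResolution', 30, ...
    'AzimuthResolution', 1.5);

platform(scene, 'Sensors', {rdr});                         % radar at the origin
platform(scene, 'Trajectory', kinematicTrajectory( ...     % one inbound target
    'Position', [20e3 5e3 -1e3], 'Velocity', [-150 0 0]));

allDets = {};
while advance(scene)
    dets = detect(scene);            % detections from all sensors at this step
    allDets = [allDets; dets];       %#ok<AGROW>
end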
Our first example is to simulate radar interference and passive sensors. In this example, we use the measurement-level radar model, or radar data generator. As shown here, we built a scenario with a platform for the radar sensor (the blue diamond shape), a platform for the target (the black triangle here), and a third platform for an interference source or RF emitter (the yellow diamond shape).
The line beside the target is the trajectory of the target, and the blue circle is the radar detection. You can also see the coverage of both the radar and the interference source. As the target is moving on the trajectory line, the radar beam scans the area and generates detections when the target is in the field of view.
But when the target is in line with the interference source, the radar model does not generate a detection due to the interference. Now, you can change the radar model detection mode to passive, or ESM. This mode can generate angle-only detections from RF emitters, and in the case shown here, because the RF emitter has high power, reflections of the emitted waveform off the target are also detected by the ESM sensor. The detected angles are shown by the purple lines here.
The second example is to model and analyze the performance of an FMCW radar altimeter. A radar altimeter is a flight safety instrument that provides an altitude reading. First, we need to author the scenario.
For the radar altimeter, the reflection from the land surface is the target instead of clutter. To model the land surface, we import a data file and use a reflectivity model which supports a 90-degree grazing angle in an urban environment. With waypoints, we define a landing trajectory for the airplane.
The blue patch on the surface is the radar beam footprint on the surface. In this example, the waveform-level model, or radar transceiver, is used to model an FMCW radar. For the waveform, a triangular FMCW waveform is used. The antenna is modeled by a Gaussian antenna element.
The transmitter and receiver blocks are modeled based on their parameters. By simulating the scenario, you generate the I/Q signal for each measurement point; then you generate the range response by dechirping and applying an FFT. After that, you apply CFAR to generate detections on the range data; the goal is to find the first reflection point from the ground for an accurate altitude measurement.
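A stripped-down sketch of that range-response step: a single up sweep is delayed by the two-way travel time to a flat ground, dechirped against the transmit reference, and the beat frequency is converted back to altitude. The webinar example uses a triangular sweep and a full radarTransceiver; the altitude, bandwidth, and sweep time below are assumptions.

% FMCW dechirp and beat-to-range conversion
c   = physconst('LightSpeed');
fs  = 100e6;  bw = 50e6;  tm = 1e-3;         % assumed sample rate, bandwidth, sweep time
wav = phased.FMCWWaveform('SampleRate', fs, 'SweepTime', tm, ...
                          'SweepBandwidth', bw, 'SweepDirection', 'Up');
x   = wav();                                  % transmit reference (one sweep)

h  = 300;                                     % true altitude (m), assumed
nd = round(2*h/c*fs);                         % two-way delay in samples
rx = [zeros(nd,1); x(1:end-nd)];              % idealized, noise-free ground return

xd   = dechirp(rx, x);                        % beat signal
nfft = 2^nextpow2(numel(xd));
spec = abs(fft(xd, nfft));
[~, k] = max(spec(1:nfft/2));                 % strongest beat line (the ground)
fb   = (k-1)*fs/nfft;                         % beat frequency (Hz)
hEst = beat2range(fb, bw/tm)                  % estimated altitude (m)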
The next example I want to show you in this section is synthesizing synthetic aperture radar I/Q signals, which will be used for image formation. In this example, we define a random terrain for the scenario, then we locate three targets on top of this surface and define the radar trajectory with a length of 6 meters and a squint angle of 0 degrees for broadside operation. In the next step, we customize the reflectivity map based on the elevation height map. We assign two reflectivity models, for wooded and hilly areas.
By running the simulation, you can generate the raw I/Q signal here. Now, you can use this raw I/Q signal for evaluating image formation algorithms. For example, let's pass this data through range migration processing to generate a synthetic aperture radar image.
First, the algorithm performs a two-dimensional FFT. This transforms the SAR signal into wavenumber space. Second, the algorithm focuses the image using a reference signal. This is the bulk focusing stage.
The reference signal is computed for a selected range. Targets at the selected range will be correctly focused, but targets away from the reference range are only partially focused. Next is the differential focusing stage, which uses Stolt interpolation to focus the remainder of the targets.
Finally, a two-dimensional IFFT is performed to return the data to the time domain. Comparing with the ground truth of three targets, we can only identify two here in the SAR image, but by performing line-of-sight analysis, you can verify that the target on the left side of the SAR image is always occluded from the radar. We have looked at some simple examples to illustrate the modeling you can perform, but the models can scale to larger radar systems, adaptive tracking in case of maneuvering targets, maritime surveillance, or space tracking as well.
You have access to a library of signal and data processing algorithms, including algorithms to process a radar data cube with beamforming, matched filters, and range processing. You can generate detections with CFAR algorithms. You can cluster these detections with algorithms such as DBSCAN and track the objects as they move through the radar field of view. These algorithms also work on synthesized data and data you collect from your hardware.
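A small sketch of the clustering step: detections that fall in neighboring range/azimuth cells are grouped with DBSCAN and collapsed to one plot per cluster before tracking. The detection list and the Epsilon/MinNumPoints thresholds are assumptions for illustration.

% Group CFAR detections with DBSCAN, then centroid each cluster
dets = [ 1200 10.2;  1230 10.4;  1260 10.3;    % target 1: range (m), azimuth (deg)
         5400 42.0;  5430 41.8;                % target 2
         9000 75.0 ];                          % isolated false alarm

clusterer = clusterDBSCAN('Epsilon', 80, 'MinNumPoints', 2);
idx = clusterer(dets);                         % cluster label per detection (-1 = noise)

labels = unique(idx(idx > 0));
plots  = arrayfun(@(L) mean(dets(idx == L, :), 1), labels, 'UniformOutput', false);
plots  = vertcat(plots{:})                     % one [range azimuth] plot per cluster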
The next thing I will discuss for the radar system is deploying these radar signal processing algorithms to processors or FPGAs. All the algorithms that we ship are white-box MATLAB code, and you can generate standalone C/C++ code with different code generation products. I also want to highlight MATLAB Coder, HDL Coder, and GPU Coder, with which you can build code that you can deploy to hardware. For signal processing, you can do all the signal processing in single-precision C code using MATLAB Coder, and this includes things like beamforming, direction of arrival, and CFAR. From the HDL point of view, we also provide workflow examples that you can go through and use to implement these on FPGAs.
And finally, the data processing algorithms are typically deployed to a processor, so again, MATLAB Coder can be used here. I will show you one example with HDL Coder.
HDL Coder connects MATLAB and Simulink to hardware implementation. You still need to add hardware architecture to your algorithm, but you can use the same environment and test it against your golden reference with the same tests. And we have a lot of resources to get you started here.
We have worked with a lot of customers who do this. There are a variety of deployment options, both for prototyping and for production, so you can use this work to jump-start production development. This example shows how to use blocks from the Phased Array System Toolbox to create a behavioral model to serve as a golden reference, and how to create a subsystem for hardware implementation using Simulink blocks that support HDL code generation. It also compares the output of the implementation model to the output of the corresponding behavioral model to verify that the two algorithms are functionally equivalent.
Once you verify that your fixed-point implementation model produces the same results as your floating-point behavioral model, you can generate HDL code and a test bench. We set the appropriate HDL code generation parameters in Simulink via the configuration parameters dialog, and once you are done with those settings, using HDL code generation, you can generate the complete test bench. You can see on this screen that we have generated the complete test bench and used ModelSim or QuestaSim to perform cosimulation with the generated code.
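The same settings can also be driven from the MATLAB command line. The model and subsystem names below are hypothetical placeholders for whatever implementation subsystem you have built.

% Command-line HDL code and test bench generation (hypothetical model names)
mdl = 'radar_sp_model';                        % hypothetical Simulink model
dut = [mdl '/CFAR_HDL'];                       % hypothetical subsystem for hardware

load_system(mdl);
hdlset_param(mdl, 'TargetLanguage', 'VHDL');   % or 'Verilog'

makehdl(dut);                                  % generate HDL for the subsystem
makehdltb(dut);                                % generate the HDL test bench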
And we have a list of examples. If someone is specifically looking at algorithm acceleration and deploying these on hardware, we have a quick set of examples that you can use as a starting point. To summarize what I have discussed, I showed you capabilities and examples for each of the pillars that ultimately enable full support of the radar lifecycle.
We saw the typical challenges that we face as radar engineers. Then we looked at the workflow for how we can use the Radar Designer app for link budget analysis. Then we looked at the workflow for authoring and simulating radar scenarios, along with some examples. Then we briefly looked at the workflow for deploying radar applications on hardware, with a quick example. We have one webinar on AI for LiDAR lined up for tomorrow that I'll be presenting.
Thank you. Thanks a lot to Matt for the introduction. Good afternoon. My name is Mohit Gaur, and I'm working as Deputy Manager in Bharat Electronics Limited, Ghaziabad. Today, I am joined by my colleague, Pratishtha Jaiswal. She is working as a senior engineer in our team. We are working on defense radars. Today, we'll be discussing modeling of signal processing algorithms and data extraction algorithms for a 3D surveillance radar in MATLAB.
The agenda for the session will be modeling algorithms for the signal processor of radars in MATLAB, the algorithmic complexities that we usually face in the conventional approach, how MATLAB can help in the ease of our design, and how we can evaluate the performance during several design phases. This has been our journey with radars; we have worked on these many radars till now.
Now, moving ahead, let us first have a brief look at the radar system block diagram. See, radar stands for radio detection and ranging. RF energy is transmitted, and the reflected echo is received by different receivers and is processed after digitization by a digital signal processor, which performs functions such as preprocessing, filtering, clutter cancellation, and CFAR-based thresholding. The intermediate output of this stage, in the form of detections, is sent to the extractor module, which performs data arrangement, clustering, and centroiding, and sends its output to the tracker, the sensor fusion and tracking system. Our project is focusing on the signal processor and extractor module.
Let us have a brief look at the algorithmic workflow for the radar signal processor. Analog IF signals, which are digitized using an ADC, are downconverted to baseband for further processing. Then the data is sent to the pulse compression module, which is implemented using matched filtering. It provides us better range resolution and higher signal-to-noise ratio. As a radar signal processor utilizes complex signal processing, both I-channel and Q-channel data are used, and the data is formatted in the form of a radar data cube. Then MTI pulse cancellers are used, followed by range-Doppler map formation and Doppler filter bank processing, wherein we try to cancel the stationary clutter.
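A compact sketch of that front portion of the chain on a radar data cube (range samples by pulses by beams): matched-filter pulse compression, a two-pulse MTI canceller, and a Doppler FFT per beam. The waveform parameters are assumptions, and the cube is random placeholder data standing in for recorded I/Q.

% Pulse compression, MTI cancellation, and range-Doppler map per beam
fs  = 5e6;
wav = phased.LinearFMWaveform('SampleRate', fs, 'PulseWidth', 20e-6, ...
                              'PRF', 2e3, 'SweepBandwidth', 5e6);
mf  = phased.MatchedFilter('Coefficients', getMatchedFilter(wav));

cube = complex(randn(2500, 16, 4), randn(2500, 16, 4));   % placeholder I/Q cube

[nr, np, nb] = size(cube);
pc = zeros(nr, np, nb, 'like', cube);
for b = 1:nb
    pc(:, :, b) = mf(cube(:, :, b));      % pulse compression, beam by beam
end

mti = diff(pc, 1, 2);                     % two-pulse canceller along the pulse dimension
rdm = fftshift(fft(mti, [], 2), 2);       % Doppler FFT gives the range-Doppler map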
Post that, CFAR-based processing is used for declaring detections. After this, a maximum filter is run, and we try to provide the detections to the extractor module. But there may be some tangential targets present in the zero-velocity filter, or central filter, where we assume the stationary clutter to be present, so we process that filter separately as well.
And the RDE, that is, the radar data extractor module, will perform three major functions. Since we are using multiple parallel stacked beams in the receive part, it will first estimate the elevation, followed by range centroiding and clustering, followed by range-azimuth centroiding, the output of which is sent to the tracker module. We, in the field, are facing a major challenge of electronic hardware obsolescence, and upgrading any kind of hardware entails a lot of time and cost effort, so it is better to have a proof of concept beforehand. That's how we began this project.
But there are several other kinds of limitations. In order to have high fidelity in our model, we would like to test it on field-recorded data, which is the best kind of testing. The field-recorded data for a signal processor is IQ-level data, which is very high-volume data; that is our next challenge.
Then, radar signal processing performance evaluation under non-homogeneous clutter environments is also a major challenge, and as we all know, because of wind turbines, radar clutter is another major challenge for all the low- as well as medium-power radars. So the requirements for our project were modeling of the different stages and different algorithms of the signal processor, validating the same on the field-recorded data, which is IQ-level data, realization of the extractor module in MATLAB, and generating a testbed for performance evaluation of different types of algorithms which are available in the literature, as well as some new algorithms which, if required, we can work upon. We have divided our developmental task into three major phases. The first was the realization of the algorithms, then their validation on the site-recorded data, then the performance assessment of all the CFAR algorithms, and the modeling of the RDE, that is, the data extractor and centroiding module, in MATLAB, as well as its validation.
Moving ahead, let me first explain, in brief, what is the radar PPI-- that is planned position indicator. The terminology we use here is that of the range cells or range beams. The complete radar range is divided into concentric circles, and the width in the range dimension being equal to the range resolution of the radar. Because of the finite azimuth bandwidth and the scan time, we are able to have multiple pulses at the same antenna position, because of which we can utilize pulse integration-- again, by transmitting multiple pulses at a single position.
So these azimuth lines, the red-colored lines, are basically dividing the scan time into processing intervals, which we call coherent processing intervals, CPIs. In our case, both the I channel and Q channel are around 16 bits wide. So, as a simple calculation of the IQ data volume estimate, if we consider these exemplary figures, a range of 150 kilometers, range cells of the radar of 30 meters, around 400 coherent processing intervals, and multiple-beam processing, the size of a single scan of data is around half a GB. And for a single minute, it will expand to around 6 to 7 GB.
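A back-of-the-envelope check of those figures. The pulses per CPI and the beam count are not stated in the talk, so the values below are assumptions chosen to show how the numbers scale.

% IQ data volume estimate for one scan
rangeCells   = 150e3 / 30;      % 150 km coverage / 30 m cells = 5000
cpiPerScan   = 400;             % coherent processing intervals per scan
pulsesPerCPI = 16;              % assumed
numBeams     = 4;               % assumed parallel receive beams
bytesPerIQ   = 4;               % 16-bit I + 16-bit Q per sample

bytesPerScan = rangeCells * pulsesPerCPI * numBeams * cpiPerScan * bytesPerIQ;
fprintf('Per scan: %.2f GB\n', bytesPerScan / 2^30);     % about 0.48 GB

With a scan every few seconds, a single minute of recording runs to several gigabytes, which is the storage and handling challenge described here.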
This high volume of radar data is a major challenge. With the help of MATLAB, we were able to read this data and arrange it in the form of a radar data cube. The radar data cube has these dimensions: first is the range dimension, where each sample in fast time corresponds to a range sample; then, as multiple pulses can be used, the second dimension shows the pulses; and since multiple parallel receive beams are used, along the z dimension we can have the different phase centers. So this is how a typical radar data cube will look for a single dwell or a single CPI. On the right side, you can see the result which we have achieved using MATLAB by running our site-recorded data, the IQ-level data, as the input.
Moving ahead, I will now discuss the next stage, the Doppler filtering or Doppler filter bank formation. There are two main techniques which we are aware of for the realization of the Doppler filter bank: one is an FIR-based approach, and the other is an FFT-based approach. Here, we have utilized the Phased Array System Toolbox from MATLAB to realize FFT-based Doppler filters. The spread in the spectral domain is from minus PRF/2 to plus PRF/2. If we are using M pulses, then we have to take at least an M-point FFT, or the next higher radix-2 number.
So then, if, as in this figure, a K-point FFT has been taken, then at least K Doppler filters will be created. Assuming that the central filter corresponds to the stationary clutter, we are going to reject this filter for the main channel processing, and for moving target detection, we will process the other remaining filters. This is the result for the same data cube for a single beam. Similar results were achieved for all the different beams.
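A minimal sketch of that filter bank formation: M pulses of pulse-compressed data mapped to K Doppler filters with an FFT along the slow-time dimension, with the central (zero-velocity) filter set aside. The data here is a random placeholder.

% FFT-based Doppler filter bank for one beam
pcData = complex(randn(5000, 12), randn(5000, 12));   % placeholder: range x pulses

M = size(pcData, 2);
K = 2^nextpow2(M);                        % next radix-2 size, here 16

dfb = fftshift(fft(pcData, K, 2), 2);     % K Doppler filters, zero Doppler centered

zeroIdx    = K/2 + 1;                     % central (stationary clutter) filter
movingOnly = dfb(:, [1:zeroIdx-1, zeroIdx+1:end]);   % filters passed on to CFAR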
Moving ahead, except for the zero-velocity filter, that is, the central filter, we processed the remaining filters in a CFAR detector. There are various types of CFAR available, such as CA, GO-CA, SO-CA, trimmed mean, order statistics, etc. We started with the CA CFAR implementation.
The result is shown here. See, this CA CFAR detector is readily available in the Phased Array System Toolbox for MATLAB. And this is showing the detection map for a single dwell of data, for a single beam. Now, after achieving these results, we were able to replicate this performance for all the other CPIs in the scan.
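A small sketch of that detector in use on one Doppler filter of one beam; the training and guard cell counts and the false alarm probability are illustrative assumptions, and the square-law input is a placeholder where a Doppler filter output from the bank above would go.

% CA CFAR over one Doppler filter
cfar = phased.CFARDetector('Method', 'CA', ...
    'NumTrainingCells', 32, 'NumGuardCells', 4, ...
    'ProbabilityFalseAlarm', 1e-6);

x = abs(complex(randn(5000,1), randn(5000,1))).^2;   % placeholder square-law data
cutIdx = 50:numel(x)-50;                 % skip edge cells without a full training window
detFlags = cfar(x, cutIdx);              % logical detection result per cell under test

detRangeCells = cutIdx(detFlags);        % range cells declared as detections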
And here, you can see the detections which were generated for the complete scan. The detections here are shown in two colors, red and blue. The red ones are from the lower beam, and the blue ones are from the next higher beam. Similarly, the other beam results can be obtained. The best point is that we were able to generate a picture here that is exactly similar to what we see in the actual deployed radar. The validation was done for over 24 scans, and the data volume was very large.
Now, moving ahead, in the next phase of our development, we proceeded with other CFAR algorithms, such as GO-CA and SO-CA, and we were able to relate the results with what we see in the literature. GO-CA is better at clutter edges. Now, moving ahead, I would like to invite my colleague, Pratishtha, to share about the radar data extractor functionality. Pratishtha, over to you.
Hello, everyone. Thank you, sir. As sir rightly said, the outputs of the digital signal processor are called detections. Now, these detections will be fed as input into the radar data extractor module. The main job of the extractor module is to group the detections that are coming from the same target and declare a central position for them. The central position is termed the centroid, and these groups are termed clusters.
So here, you can see there are four clusters, and correspondingly, the extractor module has declared four center positions for them. In the forthcoming slides, we will be discussing, in detail, how the extractor was implemented in MATLAB. So here, we can see that the whole development process was divided into three phases.
The first one was feeding the detections as input into the extractor module, followed by a detailed modular-level approach to develop the extractor module in MATLAB, which has three stages: first elevation estimation, followed by clustering, and finally, centroiding. The last stage was the testing and validation phase, in which, firstly, each module was tested, then all were integrated and tested with the field-recorded data that we already have, and then, finally, the results were evaluated against the existing radar performance to come to a conclusion.
So in the next slide, we'll be discussing each of these modules in detail. The first is the data feeding phase. Here, you can see a graph in three dimensions. The x-axis denotes the range. The y-axis is the azimuth, in terms of CPI. And the z-axis is the beams. So here, you can see that the data is present in multiple beams coming from the signal processor. Now, our main job is to calculate the elevation.
For this, we have used the monopulse technique of beam strength ratios to calculate the elevation and depicted the elevation in this graph. So here, the detections that are being shown are at the calculated elevation angles, which we have found with the help of the monopulse technique. Hence, the data is now brought together in one place.
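A simplified stand-in for that step: with two adjacent stacked receive beams assumed to have Gaussian power patterns, the log of the power ratio of the same detection in the two beams maps linearly to elevation. The beam pointing angles, beamwidth, and measured powers below are assumptions, not values from the radar.

% Elevation estimate from the power ratio of two adjacent stacked beams
th1 = 2;  th2 = 6;            % adjacent beam pointing elevations (deg), assumed
bw  = 4;                      % one-way 3 dB beamwidth (deg), assumed
k   = 4*log(2)/bw^2;          % Gaussian power-pattern constant

P1 = 1.0;  P2 = 0.55;         % powers of the same detection in the two beams (placeholder)

elEst = (th1 + th2)/2 + log(P1/P2) / (2*k*(th1 - th2));
fprintf('Estimated elevation: %.2f deg\n', elEst);     % falls below the 4 deg crossover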
Now our main task is to calculate the range centroid, that is, to cluster the data in terms of range proximity and find the center for each cluster. So here, you can see the data is now in sparser locations compared to the previous slide, because the data is now present at the centers only. Now, our next job will be to cluster the data in terms of azimuth, since returns are also available at multiple azimuths from the radar.
So here, in the zoomed-in picture, you can see detections in various colors. Each color corresponds to one cluster, so one cluster has multiple detections. Finally, we have to compute the centroid of this cluster, declare one plot position corresponding to it, and eliminate the jitter which has been shown over here. In the next phase, we will be discussing the clean trajectory of the plots output by the extractor module. So here, you can see those clusters are centroided and presented as one single plot.
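A minimal sketch of that final centroiding step, collapsing one cluster of detections into a single plot with an amplitude-weighted centroid in range and azimuth; the detection values are placeholders.

% Amplitude-weighted centroid of one cluster
clusterDets = [ 12030 41.2 18;        % range (m), azimuth (deg), amplitude
                12060 41.4 25;
                12060 41.6 22;
                12090 41.4 12 ];

w = clusterDets(:, 3) / sum(clusterDets(:, 3));   % amplitude weights
plotRange = sum(w .* clusterDets(:, 1));          % centroided range
plotAz    = sum(w .* clusterDets(:, 2));          % centroided azimuth

fprintf('Plot: range %.1f m, azimuth %.2f deg\n', plotRange, plotAz);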
Now, we compare it with the existing extractor module results. And here, we can see that we are pretty close to those results, and we are sure that this development will serve as a great aid for all our future development. Over to you, sir.
Thank you, Pratishtha. Finally, to summarize, in this developmental project, we have successfully used the Phased Array System Toolbox and the Radar Toolbox for several stages of our design and simulation. As I already mentioned, GO-CA is better at clutter edges than CA and SO-CA. The data handling available in MATLAB for validation has aided us in realizing the design, and its visualization tools have helped us to measure our performance. Definitely, our cycle time for such a development has reduced greatly, and the fidelity is also quite high, as we were able to test it with our own data as well.
In the future, we would like to model land, sea, and wind turbine clutter for the radar as well, along with waveform analysis for effective wind turbine clutter mitigation, because this is a major challenge for us. Then we would like to evaluate the Sensor Fusion and Tracking Toolbox in order to minimize our efforts in the RDE. And we are targeting the deployment of the signal processing algorithms on some target FPGA families.
Lastly, I would like to extend my sincere gratitude to MathWorks for providing this platform, and especially to Mr. Sumit for his excellent and extended technical support. And sincere thanks to our seniors, Mr. Dheeraj Talwar and Mr. Ram Pravesh from BEL, for all their great support and motivation. Thank you. Thanks a lot.