How to prepare input for the MUSIC DOA algorithm using a real-world audio signal (NOT simulation)
Hi,
I am working on using the MUSIC algorithm to calculate the DOA of an ultrasonic sound input. I have the configuration of the microphone array, and the number of microphones is 7, so the received audio signal is a real-valued [7 x N] matrix, where N is the number of samples collected in a fixed time duration.
The MATLAB function/example I am referring to is: https://www.mathworks.com/help/phased/ref/phased.musicestimator2d-system-object.html.
Now I am able to define the phased.URA object from the array configuration (microphone spacing, positions, and so on). The problem is that I do not know how to convert the input signal into the matrix generated by the collectPlaneWave function in the demo: the output of collectPlaneWave is a complex matrix, while my audio input is a [7 x N] real-valued matrix.
Thanks in advance!
Answers (2)
Priyanka Rai on 2 Feb 2021
Colin Waters on 24 Mar 2022
No, that link does not help. The description fails to specify the data types (real or complex) and matrix shapes of the variables, e.g. x(t).
There is a way to get the complex x(t) matrix (and hence the complex covariance matrix) if the signal is narrowband and a plane wave. Assume a plane-wave model (e^i[k dot r - wt]) propagating across the sensor array, with the sensors placed to avoid spatial aliasing, etc.
Choose one of the sensors as a reference. From an FFT of each (real) sensor time series you can get the frequency of the narrowband signal. Then compute the cross spectrum of each pair, using the reference sensor time series. This gives you the phase difference at the relevant frequency between the reference and each of the other sensor time series. That phase difference is the k dot r for each sensor pair, and you know the r vector from the sensor locations, measured from the reference sensor.
You can then generate the complex time series at each sensor, using the phase differences as k dot r in the signal model. Then you can get the complex covariance matrix, etc.
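A minimal MATLAB sketch of those steps, under stated assumptions: the variable names (xreal, fs) are not from the thread, xreal is a [nsamp x nmic] real data matrix, microphone 1 is the reference, and the signal is narrowband with a single dominant tone.

```matlab
% Assumed: xreal is [nsamp x nmic] real sensor data, fs the sample rate.
[nsamp, nmic] = size(xreal);
X = fft(xreal);                                % FFT of each column (sensor)
[~, kbin] = max(abs(X(2:floor(nsamp/2), 1)));  % dominant bin of reference, skip DC
kbin = kbin + 1;                               % index into the full FFT
f0 = (kbin - 1) * fs / nsamp;                  % narrowband frequency (Hz)
% Cross spectrum of each sensor against the reference at that bin:
% its angle is the phase lag of sensor m relative to the reference,
% i.e. the k dot r term for that sensor pair.
phi = angle( X(kbin, :) .* conj(X(kbin, 1)) ); % 1 x nmic, phi(1) = 0
% Rebuild a complex narrowband time series per sensor from the model:
t  = (0:nsamp-1).' / fs;
xc = exp(1i * (2*pi*f0*t + phi)).';            % [nmic x nsamp] complex
Rxx = (xc * xc') / nsamp;                      % complex covariance for MUSIC
```

This is only a sketch of the plane-wave reconstruction described above; in practice you would scale each row by the measured amplitude and average Rxx over several frames.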
9 comments
Colin Waters on 24 Mar 2022
I just checked, and for the basic MUSIC algorithm, using the spectral matrix works. Instead of using x(t) (complex), as quoted in lots of papers, you can use X(f) (complex, from the FFT of the real x(t)).
- Calculate the FFT of each time series. This takes you from real (time series) to complex (FFT).
- For each frequency of interest, f, form the covariance matrix as the outer product of the (complex) Xi(f). This cov matrix(f) is complex and square, of shape [nsig x nsig]. The diagonals are real and are the autopowers.
- Proceed as normal with this complex covariance matrix, i.e. calculate the eigenvectors and eigenvalues and project onto the noise subspace.
It worked for my data.
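The three steps above can be sketched in MATLAB as follows; the variable names (xreal, kbin) are assumptions, with xreal a [nsamp x nmic] real data matrix and kbin the frequency bin of interest.

```matlab
% Assumed: xreal is [nsamp x nmic] real sensor data, kbin a bin index.
X  = fft(xreal);              % real time series -> complex spectra, per column
xk = X(kbin, :).';            % [nmic x 1] complex values at the chosen bin
S  = xk * xk';                % outer product: [nmic x nmic] spectral matrix
% diag(S) is real (the autopowers). A single snapshot gives a rank-1 S;
% in practice, average S over several frames before eigendecomposition.
[V, D]     = eig(S);
[~, order] = sort(real(diag(D)), 'descend');
En = V(:, order(2:end));      % noise subspace, assuming one source
% MUSIC pseudospectrum at a candidate steering vector a would then be
% P = 1 / abs(a' * (En*En') * a);
```

Note the use of .' (plain transpose) when forming xk: the conjugate transpose ' would flip the sign of every phase and hence mirror the DOA estimate.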
Remya Prabhakaran on 9 May 2022 (edited: Remya Prabhakaran, 9 May 2022)
I need help understanding how you determine the covariance matrix for each frequency of interest using the FFT. Can you suggest a basic equation or sample code for this?
I am really stuck on it.
Colin Waters on 9 May 2022
What have you done so far? What variables do you have and what are the data types and structures?
Remya Prabhakaran on 11 May 2022
I am using real speech data for direction estimation. I have implemented the MUSIC algorithm in MATLAB without using MATLAB's MUSICEstimator function. I have created a circular array and I have real data corresponding to this circular array. So I am not using any of the signal-collector functions available in MATLAB (collector, WidebandCollector); instead I am feeding real speech signals from microphones into the algorithm. But every time I get an estimate of zero degrees.
I have seen that the approach you mentioned should also work for wideband, but I am confused about how to create the covariance matrix from the FFT values (the outer product you mentioned). Any equation or sample code that clearly explains this would help me.
Remya Prabhakaran on 11 May 2022
You mentioned calculating the covariance matrix for each frequency bin.
If I have used 1024 bins, then for each frequency bin I would generate a 1024 x 1024 covariance matrix, right? So how does this actually give nsig x nsig values?
Colin Waters on 11 May 2022
You need to think clearly about what data type you have (e.g. real, complex) and the associated data structure (e.g. single value, vector, matrix) for each of the quantities you are using. For example, you say you have a circular array of microphones with one 'real' data stream from each microphone. Fine.
How many microphones?
The covariance matrix is a square (complex) array; its dimensions are num_mics by num_mics.
Remya Prabhakaran on 11 May 2022
I am currently using 6 uniformly distributed microphones and taking 1024 samples from each microphone for the estimation.
These are real values from a real-time scenario. I found that in MATLAB the collector function uses complex values. To redefine the samples as complex values as in your steps, I do the following:
1. FFT of the samples (with fftsize = 1024): this gives 1024 complex values per channel, i.e. a 1024 x 6 matrix.
2. Calculate the covariance matrix from the FFT for each frequency bin, Rxx = (1/N)(Sxx * Sxx'); then it would be 1024 x 1024. If I define my input data matrix as 6 x 1024, I get a 6 x 6 covariance matrix instead.
3. I do the eigenvector decomposition and obtain the noise subspace, which is later used for the MUSIC spectrum calculation. I find that I am not able to estimate the angle correctly.
Is there anything I have missed? My application uses speech signals for estimation.
Colin Waters on 11 May 2022
Firstly, an FFT of 1024 real input values gives only 513 unique complex values (bins 0 to 512, including DC and Nyquist); the remaining 511 bins are complex conjugates of these and add nothing new.
Secondly, choose one frequency bin. The covariance matrix for that one frequency is a 6 x 6 (complex) matrix, called the 'spectral matrix' by some.
Put the 6 complex values at that frequency bin from all 6 microphone data streams (the FFTs of these) into a 6 x 1 complex vector. Then calculate the outer product to get the spectral matrix (6 x 6, complex). The diagonal elements should be real values.
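For concreteness, one way to write that in MATLAB, assuming X = fft(frames) with frames a [1024 x 6] real data block and kbin a chosen bin index (1 to 513):

```matlab
% Assumed: X = fft(frames), frames [1024 x 6] real; kbin in 1..513.
x6 = X(kbin, :).';   % 6 x 1 complex vector across the 6 microphones
                     % (.' not ' -- a conjugate transpose here would
                     %  conjugate the spectral matrix and mirror the DOA)
S  = x6 * x6';       % 6 x 6 complex spectral matrix for that bin
% Sanity check: the diagonal is |x6|.^2, i.e. real autopowers.
assert(all(abs(imag(diag(S))) < 1e-9 * max(abs(diag(S)))))
```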
Remya Prabhakaran on 18 May 2022
I tried different ways to implement what you described. I am attaching my code here; will you please help me find what is wrong?
clear; close all; clc
load('audio_mic6.mat','audio111');
frame_1024 = audio111(1:1024,:);   % [1024 x 6] real samples
N  = 6;          % number of array elements
FS = 16000;      % sampling rate (Hz)
Radius = 0.03856;% array radius (m)
L  = 1024;       % number of samples (FFT length)
Fn = FS/2;       % Nyquist frequency
fft_sig = fft(frame_1024);              % [1024 x 6] complex, one column per mic
abs_fft_sig = abs(fft_sig);
energy_db = 20*log10(abs_fft_sig);      % log10, not log, for dB
minimum_db = -50;                       % minimum energy threshold (dB)
Fv = linspace(0, 1, fix(L/2)+1)*Fn;     % frequency axis for bins 1..513
%% Zero out bins below the energy threshold
for k1 = 1:N
    for k = 1:513
        if energy_db(k,k1) < minimum_db
            abs_fft_sig(k,k1) = 0;
        end
    end
end
%% Find peaks in each channel to locate the dominant frequency
peak_freqs = cell(1,N);
for k1 = 1:N
    [~, locs] = findpeaks(abs_fft_sig(1:513,k1));
    peak_freqs{k1} = Fv(locs);
end
%% Spectral matrix creation at the common peak frequency
fc   = 156.25;                 % common peak across the channels (Hz)
kbin = round(fc/(FS/L)) + 1;   % 156.25 Hz -> bin 11 (fft_sig(4,:) was ~46.9 Hz)
x    = fft_sig(kbin,:).';      % 6 x 1 complex; .' so the phases are not conjugated
spectral_matrix = x * x';      % x*x' gives the usual R = x x^H (diagonal real)
%% Eigendecomposition (eig, not eigs, for a full 6 x 6 matrix)
[eigenvec, eigenval] = eig(spectral_matrix);
[~, order] = sort(real(diag(eigenval)), 'descend');
eigv = eigenvec(:, order(2:N));   % noise subspace, assuming one source
c = 343;                          % speed of sound (m/s)
lambda = c/fc;
%% MUSIC pseudospectrum for the UCA
ph = 0:0.5:360;                   % azimuth grid (deg)
th = 0:0.5:90;                    % elevation grid (deg); 0-360 is redundant
p  = zeros(length(th), length(ph));   % preallocation matches the grid size
k  = (1:N).';
for i = 1:length(th)
    for j = 1:length(ph)
        theta = th(i)*pi/180;
        phi   = ph(j)*pi/180;
        phase = (2*pi*Radius/lambda)*(sin(theta)*cos(phi - 2*pi*k/N));
        aa = exp(1i*phase);
        p(i,j) = 10*log10(abs(1/((aa'*eigv)*(eigv'*aa))));
    end
end
[~, idx] = max(p(:));
[irow, jcol] = ind2sub(size(p), idx);  % recover row/col from the linear index
doa_az = ph(jcol);                     % azimuth estimate
doa_el = th(irow);                     % elevation estimate