Reconstructing a signal using the IFFT

Hello, I am using the FFT to convert a time-series signal into images by reshaping the matrix (N*N). But I am having a hard time getting the original signal back from the images. Is it because in the FFT I am considering only the magnitude of the signal and not the phase? Is there any way to solve this problem, maybe using the STFT or some other technique?

Answers (2)

Hello,
Try this (it works for images but also for any 2D numerical array). The output matches the input with minimal error:
max(abs(outpict - inpict),[],'all') = 1.5543e-15
filename = 'image.png';
inpict = im2double(rgb2gray(imread(filename)));
% take the 2D fft
spec_img = fftshift(fft2(inpict));
% reconstruct the output image with the inverse FFT
outpict = real(ifft2(ifftshift(spec_img)));
figure(1)
subplot(2,1,1),imshow(inpict)
subplot(2,1,2),imshow(outpict)

13 comments

Opy
Opy on 24 Jan 2025
Hello, thanks for this great help. I understand what you are trying to say. Can you give a simple example where you generate a random 1D signal, apply the FFT, and convert it into a 2D image, then apply the same steps as you have given to reconstruct the signal directly from the image? That way I can understand how much data can be lost in this whole process.
Opy
Opy on 27 Jan 2025
Thanks for your answers.
But my problem actually is little bit different. I may not explain it precisely. Suppose I have a signal , i have applied the fft and turns into that an grey scale image. This is okay.
Now consider that you only have that image , you just know that this is an spectral image which represents the time series. Now if you want to get the data back from the images how would you do that? Becuase is FFt is invertible and scale invariant at the both time?
William Rose
William Rose on 27 Jan 2025
Edited: William Rose on 27 Jan 2025
You are right that you cannot retrieve the original signal exactly, since the grayscale image lacks the phase info, or it lacks the imaginary part, depending on how the grayscale image was made. If you have the option, you can make two images: a real part image and an imaginary part image. Or a magnitude image and a grayscale image showing the phases. Then you could retrieve the necessary info and reconstruct.
See here for a summary of an analogous problem, the phase problem in X-ray crystallography. The solutions to that problem are not useful for you, though, so it is just for your reading pleasure :)
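A minimal sketch of the two-image idea described above (variable names are illustrative, not from any post in this thread): encode the FFT of a 1D signal as a magnitude array and a phase array, then recombine them.

```matlab
% Sketch: encode a 1D signal's FFT as two real arrays (a "magnitude image"
% and a "phase image"), then reconstruct. No quantization here, so the
% round trip is exact to machine precision.
x = randn(1, 1024);                  % example signal, 1024 samples
X = fft(x);
magImg = reshape(abs(X),   32, 32);  % magnitude as a 32x32 grayscale "image"
phImg  = reshape(angle(X), 32, 32);  % phase in [-pi, pi] as a second "image"
Xrec = reshape(magImg .* exp(1i*phImg), 1, []);   % recombine
xrec = real(ifft(Xrec));
max(abs(xrec - x))                   % machine-precision error
```

Once the two arrays go through an 8-bit image file, the round trip is only approximate, as discussed later in the thread.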
Opy
Opy on 27 Jan 2025
Thanks for the clarifications. To your knowledge, is there any other available technique that might be helpful for reconstructing the signal? My problem formulation is to generate synthetic images which will contain the time-series data, then use generative AI to produce more images like that. But the problem lies in the second part, where I will have to generate the data from the synthetic images. Is there any way to tackle this problem? @William Rose
William Rose
William Rose on 28 Jan 2025
I don't know what steps or aspects of your problem you consider required.
You could encode magnitude and phase (or real and imaginary) as red and blue in a color image. You could make a bigger image with magnitude and phase (or real and imaginary) as grayscale in the top and bottom halves, or left and right halves, or interleaved columns or rows. All of these are simple to implement.
You could do a 2D wavelet transform, or the Walsh-Hadamard transform. They return real, not complex, results. They are reversible.
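A sketch of the Walsh-Hadamard route, assuming the Signal Processing Toolbox functions fwht/ifwht are available (the signal length must be a power of 2):

```matlab
% Sketch: the Walsh-Hadamard transform is real-valued and exactly invertible,
% so its coefficients can be stored in a single grayscale array with no
% separate phase channel.
% Assumes Signal Processing Toolbox (fwht/ifwht); length must be a power of 2.
x = randn(1, 4096);
w = fwht(x);                        % real transform coefficients
img = reshape(w, 64, 64);           % view as a 64x64 real array ("image")
xrec = ifwht(reshape(img, 1, []));  % flatten and invert
max(abs(xrec - x))                  % machine-precision error
```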
In my example above, I reshaped x(t) to make a real-valued array, x2, which can be considered a grayscale image. Then I took the 2D FFT of the image. You could reverse the order: take the 1D FFT, then reshape the result into a complex array. This would be a different complex array than the earlier one. You'd still have to decide how to encode the real/imaginary or magnitude/phase parts.
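The reversed order (1D FFT first, then reshape) can be sketched as follows; names are illustrative:

```matlab
% Sketch: 1D FFT first, then reshape the complex result into a 2D array.
% This 2D array is NOT the same as fft2 of the reshaped signal.
x = randn(1, 1024);
X2 = reshape(fft(x), 32, 32);            % complex 32x32 array
% a convention is still needed for storing real/imag (or mag/phase) of X2
xrec = real(ifft(reshape(X2, 1, [])));   % undo: flatten, inverse 1D FFT
max(abs(xrec - x))                       % machine-precision error
```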
Opy
Opy on 28 Jan 2025
@William Rose Thanks for your answers. Could you kindly show me an adaptation of one of these approaches? For example: I take a 1D signal, apply the FFT, then make two images for magnitude and phase; or, alternatively, make one grayscale image (left side for the magnitude and right side for the phase). Then use only these images to reconstruct the signal. It would be very helpful for me.
Another question: can I apply the CWT or any other wavelet transform technique to make images from complex 1D signals, like medical signals, and then just use these images to reconstruct the original signal?
William Rose
William Rose on 29 Jan 2025
Edited: William Rose on 29 Jan 2025
An important question for your process is: will you get the image of the 2D FFT as a screenshot of the displayed image, or will you get the actual image file? The difference is important, because the image on screen is subject to clipping, also known as threshold and saturation effects. (If the data is double precision, then all values <=0 get mapped to black, and all values >=1 get mapped to white.) This will greatly affect your ability to recover a signal if the original signal extends outside the [0,1] range.
A related significant issue is that when you do an FFT of a non-zero-mean signal, or a 2D FFT of a non-zero-mean image, you get a very large value at DC, which dwarfs everything else. This makes the scaling issue, discussed above, even more challenging.
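The DC-spike effect is easy to see in a small sketch (the numbers here are illustrative):

```matlab
% Sketch: for a non-zero-mean array, the (1,1) element of fft2 equals
% M*N*mean(y(:)) and dwarfs everything else after rescaling.
y = 0.5 + 0.1*randn(64, 64);         % non-zero-mean "image"
Y = abs(fft2(y));
Y(1,1)                               % ~ 64*64*0.5 = 2048
max(Y(2:end))                        % every other term is far smaller
% removing the mean first frees the dynamic range for the other terms:
Y0 = abs(fft2(y - mean(y(:))));
Y0(1,1)                              % essentially zero
```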
I would prefer to use real, imag rather than mag,phase for the images of the 2D FFT, because real,imag have the same units, and will generally have the same approximate range, whereas mag,phase will not have the same range. Therefore it is trickier to combine mag,phase in the same image. You can do it with mag, phase, but you have to know how you did it, so that you can undo it during the signal reconstruction.
fs=8000; % sampling rate (Hz)
t=0:1/fs:4;
x=chirp(t,131,4,524); % 2 octave chirp signal, 4 seconds long
M=round(sqrt(length(x))); % find approximate square dimensions
N=floor(length(x)/M);
y=reshape(x(1:M*N),M,N); % y=[x, reshaped to ~square]
Y=fft2(y); % 2D FFT of image y
reY=real(Y); imY=imag(Y);
Yc=cat(3,reY,imY,zeros(size(reY))); % combined color image
Yud=[reY;imY]; % combined up & down grayscale
%% Display images
figure;
subplot(3,1,1), imshow(y); title('y=x(t), reshaped')
subplot(3,2,3), imshow(reY); title('re[Y=fft2(y)]')
subplot(3,2,4), imshow(imY); title('im[Y=fft2(y)]')
subplot(3,2,5), imshow(Yc); title('Ycolor')
subplot(3,2,6), imshow(Yud); title('Yup,down')
%% Retrieve y from image data
Z1=squeeze(Yc(:,:,1))+1i*squeeze(Yc(:,:,2));
z1=ifft2(Z1);
xc=reshape(z1,1,[]); % x recovered from color image
[m2,n2]=size(Yud);
Z2=Yud(1:m2/2,:)+1i*Yud(m2/2+1:end,:);
z2=ifft2(Z2);
xud=reshape(z2,1,[]); % x recovered from up-down image
%% Plot original and recovered signals
figure
subplot(121)
plot(t,x,'-ro',(0:length(xc)-1)/fs,xc,'gx',(0:length(xud)-1)/fs,xud,'b+')
xlim([0,.03]); legend('x','xc','xud'); title('Orig & Recon, t=0-0.03')
subplot(122)
plot(t,x,'-ro',(0:length(xc)-1)/fs,xc,'gx',(0:length(xud)-1)/fs,xud,'b+')
xlim([3.97,4]); legend('x','xc','xud'); title('Orig & Recon, t=3.97-4.00')
The figure above shows the original signal and two reconstructed signals. xc is reconstructed from the color image. xud is reconstructed from the up-down image. Left panel shows first 30 ms, right panel shows last 30 ms. (If you plot the full length signals, the points all overlap and you can't see anything.) The figure shows that the reconstructed signals exactly match the original. The reconstructed signals end before the original, because, when we reshaped the original 1D signal into a 2D array, there were points at the end of the 1D signal that were not enough to make a full image row, so they were dropped.
Opy
Opy on 30 Jan 2025
Hello @William Rose. Thanks for the clarifications. Still, we will have some problems. In this code, you are not actually importing the images to reconstruct the signal; ultimately, you are using the matrix Yc values when applying the IFFT. I tried to save Yc and Yud as images first: before converting them into images, I used scaling to make sure the values are between 0 and 255, and then used uint8 so the values stay in that range for image storage. Then I loaded the saved image files, extracted the pixel values, and rescaled them using the maximum and minimum of Yc.
But here’s the problem I encountered: Even in your method, you are not fundamentally using the image for IFFT. Instead, you are relying on matrix values. However, in my case, I must use images because I will get them from the generative model, meaning I cannot rely on Yc and Yud directly. I have to solely work with images, treating them as pixel data rather than numerical matrices. Yes, I assume the images will be properly saved as actual image files and not screenshots, but even with this approach, my main concern remains—the synthetic images from the generative model cannot be rescaled using the max(Yc) and min(Yc) values from the original data. Doing so will give incorrect results because the generated images may have an entirely different scale than the original Yc.
I don’t know if I was able to express my fundamental concern clearly. I have reviewed several documents, but I still cannot find a solid answer to my problem. The issue remains that the signal must be generated purely from the images, not from stored matrix values. The images will be treated purely as pixel data.
Another thing—what do you think? Suppose instead of storing magnitude and phase in a single image, we instead use three different signals and generate the first RGB image using their real values, then generate the second RGB image using their imaginary values.
clc
clear all
close all
fs=50; % sampling rate (Hz)
t=0:1/fs:8;
x=chirp(t,131,8,524); % 2 octave chirp signal, 8 seconds long
M=round(sqrt(length(x))); % find approximate square dimensions
N=floor(length(x)/M);
y=reshape(x(1:M*N),M,N); % y=[x, reshaped to ~square]
Y=fft2(y); % 2D FFT of image y
reY=real(Y); imY=imag(Y);
Yc=cat(3,reY,imY,zeros(size(reY))); % combined color image
Yud=[reY;imY]; % combined up & down grayscale
%% Display images
figure;
subplot(3,1,1), imshow(y); title('y=x(t), reshaped')
subplot(3,2,3), imshow(reY); title('re[Y=fft2(y)]')
subplot(3,2,4), imshow(imY); title('im[Y=fft2(y)]')
subplot(3,2,5), imshow(Yc); title('Ycolor')
subplot(3,2,6), imshow(Yud); title('Yup,down')
% Normalize Yc and Yud to [0, 255] for saving as an image
Yc_norm = (Yc - min(Yc(:))) / (max(Yc(:)) - min(Yc(:))) * 255;
Yud_norm = (Yud - min(Yud(:))) / (max(Yud(:)) - min(Yud(:))) * 255;
% Convert to uint8 for image storage
Yc_uint8 = uint8(Yc_norm);
Yud_uint8 = uint8(Yud_norm);
% Save images
imwrite(Yc_uint8, 'Yc_image.png');
imwrite(Yud_uint8, 'Yud_image.png');
%%
% Load images
Yc_img = imread('Yc_image.png');
Yud_img = imread('Yud_image.png');
% Convert back to double
Yc_double = double(Yc_img) / 255; % Scale back to [0, 1]
Yud_double = double(Yud_img) / 255;
% Rescale to original FFT range
Yc_rescaled = Yc_double * (max(Yc(:)) - min(Yc(:))) + min(Yc(:));
Yud_rescaled = Yud_double * (max(Yud(:)) - min(Yud(:))) + min(Yud(:));
% Extract real and imaginary parts from images
Z1 = squeeze(Yc_rescaled(:,:,1)) + 1i * squeeze(Yc_rescaled(:,:,2));
[m2, n2] = size(Yud_rescaled);
Z2 = Yud_rescaled(1:m2/2, :) + 1i * Yud_rescaled(m2/2+1:end, :);
% Inverse FFT from color image
z1 = ifft2(Z1);
xc = reshape(z1, 1, []); % Convert back to 1D
% Inverse FFT from up-down grayscale image
z2 = ifft2(Z2);
xud = reshape(z2, 1, []); % Convert back to 1D
%% Plot original and recovered signals
figure
subplot(121)
plot(t,x,'-ro',(0:length(xc)-1)/fs,xc,'gx',(0:length(xud)-1)/fs,xud,'b+')
% MATLAB warns here: "Imaginary parts of complex X and/or Y arguments ignored."
xlim([0.1,4]); legend('x','xc','xud'); title('Orig & Recon, t=0.1-4')
subplot(122)
plot(t,x,'-ro',(0:length(xc)-1)/fs,xc,'gx',(0:length(xud)-1)/fs,xud,'b+')
% MATLAB warns here: "Imaginary parts of complex X and/or Y arguments ignored."
xlim([4,8]); legend('x','xc','xud'); title('Orig & Recon, t=4-8')
William Rose
William Rose on 30 Jan 2025
You wrote "I don’t know if I was able to express my fundamental concern clearly... The issue remains that the signal must be generated purely from the images, not from stored matrix values."
Yes you did express your concern clearly. Which is why I raised the issue as the very first part of an earlier comment: "An important question for your process is: Will you get the image of the 2d FFT as a screenshot of the displayed image, or will you get the actual image file? The difference is important..."
An image saved as uint8 is equivalent to a screenshot of a grayscale image: it has 8 bits of resolution (256 levels), introducing quantization error. In the example in your most recent post, the reconstructed signals look like the original, which indicates that the quantization error is small. Of course you used min(Y) and max(Y) to reconstruct, and, as you noted, you won't have those. Without them, you would have gotten the same signal shape, but the mean and amplitude would be off by an unknown amount.
I can think of two reasons that the quantization error was not too bad in this example:
  1. You wisely rescaled the signal to [0,1], then multiplied by 255, to take full advantage of the 8 bits available for each real part and each imaginary part.
  2. The mean of x(t) is 0. This helps because it means there is not a very large value of the 2D DFT at fx,fy=0,0 (the zero-frequency point of the 2D DFT). If the mean of x(t) were NOT 0, then the DFT would have a spike at the origin. If you rescale the DFT so the spike=255, there won't be a lot of bit resolution for the other parts, and therefore the quantization error will be worse. Therefore, if the original signal is NOT zero mean, you should consider removing the mean before computing the 2D DFT. But the image won't include information about the mean value, which may or may not be OK with you.
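The 8-bit round trip can be checked directly with a small sketch. It assumes the decoder knows the original min and max, which, as noted above, a generative pipeline would not provide:

```matlab
% Sketch: worst-case error of an encode/decode round trip through uint8,
% when the original min/max are known at decode time.
Y = randn(64);                              % stand-in for real(fft2(...))
lo = min(Y(:)); hi = max(Y(:));
Y8 = uint8((Y - lo)/(hi - lo)*255);         % quantize to 256 levels
Yback = double(Y8)/255*(hi - lo) + lo;      % dequantize with known lo, hi
max(abs(Yback - Y), [], 'all')              % about (hi-lo)/510 at worst
```

A DC spike inflates (hi - lo), which is exactly why the per-sample quantization error gets worse for non-zero-mean signals.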
FYI, Matlab's y=rescale(x) is equivalent to your y=(x-min(x))/(max(x)-min(x)).
The image, as currently constructed, does not include timing information, so the sampling rate must be known apart from the image, in order to reconstruct with the correct time scale.
More later on using color or other methods to encode more info.
William Rose
William Rose on 30 Jan 2025
You said "Suppose instead of storing magnitude and phase in a single image, we instead use three different signals and generate the first RGB image using their real values, then generate the second RGB image using their imaginary values."
Yes, you can store three signals as the 3 colors of an RGB image. Our previous discussion shows how you can encode the real and imaginary parts of a complex sequence as two colors. The same approach works for 3 real signals stored as 3 colors. A signal can be represented as single or double precision, signed or unsigned int. In every case, it is just bits: 32 or 64 bits for single and double, etc. You can take those bits 8 at a time for an 8-bit grayscale image, or 24 bits at a time for a color image, etc.
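A sketch of three real signals packed into one RGB array (illustrative names; no file I/O or quantization, so recovery is exact):

```matlab
% Sketch: three real 1D signals stored as the R, G, B planes of one image,
% then recovered by pulling the planes back out and flattening.
t  = (0:1023)/100;                          % 1024 samples
s1 = sin(2*pi*1*t); s2 = cos(2*pi*2*t); s3 = sin(2*pi*3*t);
rgb = cat(3, reshape(s1,32,32), reshape(s2,32,32), reshape(s3,32,32));
r1 = reshape(rgb(:,:,1), 1, []);            % decode the "red" signal
max(abs(r1 - s1))                           % exactly zero here
```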
What is the real goal here? To encode 1D signals as images in a clever way, and see if an AI agent can learn how to decode them?
Opy
Opy on 31 Jan 2025
@William Rose The main goal is to encode the 1D time series data into images, then evaluate the generation capability of GEN AI models to produce more synthetic images like those. After that, the images will be decoded back into 1D time series data.
This is a rough idea I got from an article, but after starting the work, I am finding it very difficult to decode the data from images. Before training the GEN AI models, I want to ensure that I can successfully decode the images to retrieve the synthetic time series data. Without this, my whole effort would be meaningless.
You might ask why I am converting the time series into images in the first place. The answer is quite simple— I wanted to explore the image generation capability of GEN AI models. Using time series data directly with GEN AI models has not shown promising results.
To the best of your knowledge, do you have any idea how I can encode and decode images in the way I have described? I found a paper that clearly states other techniques are not suitable for my problem, as most of them are not scale-invariant and not invertible.(link)
I only found a single paper where FFT and IFFT have been used, but the authors did not explain the method explicitly. (link)
William Rose
William Rose on 31 Jan 2025
This sounds like a good masters or PhD topic. Too much for a Matlab Answers discussion.
This also illustrates why it is good when asking a question on Matlab Answers to state what you're really after. You kind of did, but not much. I guess I should have pushed you for more info first, before spending time demonstrating stuff which, I now realize, was completely irrelevant.
"This is a rough idea I got from an article, but after starting the work, I am finding it very difficult..."
What article gave you a rough idea?
"I found a paper that clearly states other techniques are not suitable for my problem, as most of them are not scale-invariant and not invertible.(link)"
Is this preprint [Hellermann & Lessmann 2023] the article from which you got a rough idea? Hellermann & Lessmann (2023) describe the novel XIRP method for making an image from a 1D signal (eq. 8). They also describe other methods, including GASF. Note that their methods generate an image of size SxS, where the original signal has S samples. That is very different from what we did above, where we rearranged a signal of length S into an image of size sqrt(S) x sqrt(S), or 2*sqrt(S) x sqrt(S). Since invertibility is important to you, you need to understand what the authors mean by "stochastic inversion", which they mention three times in their preprint. And you should implement the IM and IRC methods for inversion, which they discuss.
Hellermann & Lessmann's (2023) goal, as stated in their introduction, seems very similar to yours. Therefore I recommend that you understand their work fully. If it were me, I would reproduce many aspects of their preprint, to be sure I really understood it. Note that XIRP only works on positive sequences. It follows from eq. 8 that XIRP images are anti-symmetric.
"I only found a single paper where FFT and IFFT have been used, but the authors did not explain the method explicitly. (link)"
The ScienceDirect link to Hu et al. (2024), J Energy Storage, does not have the full paper. Have you read the full paper? Do they not explain their methods in the full paper?
I have no expertise in this area. I recommend that you consult with someone who does.

Sign in to comment.

William Rose
William Rose on 25 Jan 2025
Moved: William Rose on 1 Feb 2025
fs=22050; % sampling rate (Hz)
t=0:1/fs:5;
y1=chirp(t,131,5,1048); % 3 octave chirp signal, 5 seconds long
% sound(y1,fs) % play it
M=round(sqrt(length(y1))); % find approximate square dimensions
N=floor(length(y1)/M);
img1=reshape(y1(1:M*N),M,N); % reshape to approximately square
imshow(img1)
img1fft = fftshift(fft2(img1));
% inverse fft
img2 = real(ifft2(ifftshift(img1fft)));
imshow(img2)
y2=reshape(img2,1,[]);
% sound(y2,fs) % play it
fprintf('max(abs(y2-y1))=%.2e.\n',max(abs(y2-y1(1:M*N))))
max(abs(y2-y1))=2.66e-15.

4 comments

Opy
Opy on 31 Jan 2025
Moved: William Rose on 1 Feb 2025
@William Rose sorry for not explaining it in depth initially. I thought it would not be that much of a problem.
Indeed, it's a master's project. A significant amount of work has been done in this area, where synthetic images have been used for classification. But the main problem in my case is that I have to invert the images back into signals. Now I find it difficult to formulate my problem again. If you have any suggestions regarding this, please do share.
William Rose
William Rose on 1 Feb 2025
No problem.
I think you are formulating your problem clearly: you need to invert the images you create.
The XIRP method converts a 1D signal of length S into an image of size SxS. You can do an exact inversion because the 1D signal is on the diagonal of the XIRP image (Hellermann & Lessmann 2023, eq. 8). If I were you, I would implement Hellermann & Lessmann's "stochastic inversion", specifically inversion by mean (IM) and inversion by random column (IRC), which they describe on p. 8. They report (p. 14) that IRC, more than IM, produces signals that look like the source signal. The signals produced by IM are too smooth. I think "ICR" on p. 15 is a misspelling of IRC.
This paper may help you understand Hellermann & Lessmann 2023. See especially the discussion at the bottom of page 3, regarding how to recover the signal from the image, deterministically or approximately.
Follow the chain of citations (full text, not just abstracts).
Contact authors of papers by email to ask for assistance.
Your advisor will be your best source of assistance, I hope.
Good luck with your masters thesis project.
Opy
Opy on 1 Feb 2025
@William Rose Thanks for the conversation. One problem I will encounter is that in the mentioned paper they have only 20 samples, but in my case there are thousands of different points. Also, the image is quite indistinguishable. I don't know how they generated those images with generative AI again. So I may need to rethink my problem.
William Rose
William Rose on 2 Feb 2025
@Opy, I agree that the method of Hellermann & Lessmann may not be practical for signals with thousands of points.

Sign in to comment.

Tags

Asked: Opy on 24 Jan 2025

Commented: on 2 Feb 2025
