Am I using the Fourier transformation the right way? - matlab

I am wondering if I am using the Fourier transform in MATLAB the right way. I want to get the average amplitudes for all frequencies in a song. For testing purposes I am using a free mp3 download of Beethoven's "Für Elise", which I converted to an 8 kHz mono wave file using Audacity.
My MATLAB code is as follows:
clear all % be careful
% load file
% Für Elise Recording by Valentina Lisitsa
% from http://www.forelise.com/recordings/valentina_lisitsa
% Converted to 8 kHz mono using Audacity
allSamples = wavread('fur_elise_valentina_lisitsa_8khz_mono.wav');
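% (Note: wavread has been removed in newer MATLAB releases; audioread is the replacement.)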
% apply windowing function
w = hanning(length(allSamples));
allSamples = allSamples.*w;
% FFT needs input of length 2^x
NFFT = 2^nextpow2(length(allSamples))
% Apply FFT
fftBuckets=fft(allSamples, NFFT);
fftBuckets=fftBuckets(1:(NFFT/2+1)); % because of symmetric/mirrored values
% calculate single side amplitude spectrum,
% normalize by dividing by NFFT to get the
% popular way of displaying amplitudes
% in a range of 0 to 1
fftBuckets = (2*abs(fftBuckets))/NFFT;
% plot it: max possible frequency is 4000, because sampling rate of input
% is 8000 Hz
x = linspace(1,4000,length(fftBuckets));
bar(x,fftBuckets);
The output then looks like this:
Can somebody please tell me if my code is correct? I am especially wondering about the peaks around 0.
For normalizing, do I have to divide by NFFT or length(allSamples)?
For me this doesn't really look like a bar chart, but I guess this is due to the many values I am plotting?
Thanks for any hints!

Depends on your definition of "correct". This is doing what you intended, I think, but it's probably not very useful. I would suggest using a 2D spectrogram instead, as you'll get time-localized information on frequency content.
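For example, a minimal sketch of that idea, reusing allSamples from the question (before the Hann window is applied) and assuming the Signal Processing Toolbox's spectrogram function is available; the window length and overlap below are just illustrative starting points:
% Sketch only: time-localized frequency content of the whole song.
fs = 8000;                                    % sampling rate of the converted file
spectrogram(allSamples, hanning(1024), 512, 1024, fs, 'yaxis');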
There is no one correct way of normalising FFT output; there are various different conventions (see e.g. the discussion here). The comment in your code says that you want a range of 0 to 1; if your input values are in the range -1 to 1, then dividing by number of bins will achieve that.
As for the bar chart: well, exactly; it looks that way simply because of the sheer number of values you are plotting.
I would also recommend plotting the y-axis on a logarithmic scale (in decibels), as that's roughly how the human ear interprets loudness.
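For instance, a small sketch of that, reusing x and fftBuckets from the question (the eps offset is only there to avoid taking the log of exact zeros):
% Sketch: single-sided amplitude spectrum on a decibel scale.
plot(x, 20*log10(fftBuckets + eps));
xlabel('Frequency (Hz)');
ylabel('Amplitude (dB)');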

Two things that jump out at me:
I'm not sure why you are including the DC (index = 1) component in your plot. It's not a big deal, but of course that bin contains no frequency information.
I think that dividing by length(allSamples) is more likely to be correct than dividing by NFFT. The reason is that if you want the DC component to be equal to the mean of the input data, dividing by length(allSamples) is the right thing to do.
However, like Oli said, you can't really say what the "correct" normalization is until you know exactly what you are trying to calculate. I tend to use FFTs to estimate power spectra, so I want units like "DAC / rt-Hz", which would lead to a different normalization than if you wanted something like "DAC / Hz".
Ultimately there's no substitute for thinking about exactly what you want to get out of the FFT (including units), and working out for yourself what the correct normalization should be (starting from the definition of the FFT if necessary).
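As a quick sanity check of the DC-equals-mean point above (a sketch, with no zero-padding, since padding to NFFT changes the relationship):
% Sketch: with no zero-padding, bin 1 of the FFT divided by the number of
% samples equals the mean of the input.
Y = fft(allSamples);
dcComponent = Y(1) / length(allSamples);   % should match mean(allSamples)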
You should also be aware that MATLAB's fft has no requirement to use an array length that is a power of 2 (though doing so will presumably lead to the FFT running faster). Because zero-padding will introduce some ringing, you need to think about whether it is the right thing to do for your application.
Finally, if a periodogram / power spectrum is really what you want, MATLAB provides functions like periodogram, pwelch and others that may be helpful.
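For example, a minimal pwelch sketch on the same signal (assuming the Signal Processing Toolbox is installed, and ideally using the raw samples rather than the hand-windowed ones, since pwelch applies its own windowing; the empty arguments leave the default window, overlap, and FFT length in place):
% Sketch: Welch power spectral density estimate of the song.
fs = 8000;                                  % sampling rate of the converted file
[pxx, f] = pwelch(allSamples, [], [], [], fs);
plot(f, 10*log10(pxx));
xlabel('Frequency (Hz)');
ylabel('PSD (dB/Hz)');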

Related

Find zero crossing points in a signal and plot them matlab

I have a voice signal 's', an extract of which you can see here:
I would like to plot the zero crossing points in the same graph. I have tried with the following code:
zci = @(v) find(v(:).*circshift(v(:), [-1 0]) <= 0); % Returns zero-crossing indices of the argument vector
zx = zci(s);
figure
set(gcf,'color','w')
plot(t,s)
hold on
plot(t(zx),s(zx),'o')
But it does not interpolate the points at which the sign changes, so the result is:
However, I'd like the highlighted points to be as near as possible to zero.
I hope someone can help me. Thank you in advance for your responses.
Try this?
w = 1;
crossPts = [];
for k = 1:(length(s)-1)
    % a sign change between consecutive samples means a zero crossing in between
    if (s(k)*s(k+1) < 0)
        crossPts(w) = (t(k)+t(k+1))/2;   % midpoint of the two sample times
        w = w + 1;
    end
end
figure
set(gcf,'color','w')
plot(t,s)
hold on
plot(crossPts, zeros(1,length(crossPts)), 'o')
Important questions: what is the highest frequency component of the signal you are measuring? Can you remeasure this signal? What is your sampling rate? What is this analysis for (schoolwork or scholarly research)? You may have quite a bit of trouble measuring the zeros of this function with any significance or accuracy, because it looks like your waveform has frequency content greater than half of your sampling rate (greater than your Nyquist frequency). Upsampling/interpolating your entire waveform will allow you to find the zeros much more precisely (but with no greater degree of accuracy), but this is a huge no-no in the scientific community. While my method may not look super pretty, it's the most accurate method that doesn't make unsafe assumptions. If you just want it to look pretty, I would recommend interp1 with the 'spline' method. You can interpolate the whole waveform and then use the above answer to find more accurate zeros.
Also, you could calculate the zeros on the interpolated waveform and then display it on the raw data.
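A rough sketch of that idea, assuming s and t from the question (the 10x upsampling factor is an arbitrary illustrative choice):
% Sketch: spline-interpolate the waveform, find the sign changes there,
% then overlay those points on the raw data.
tFine = linspace(t(1), t(end), 10*length(t));     % denser time grid
sFine = interp1(t, s, tFine, 'spline');
zxFine = find(sFine(1:end-1).*sFine(2:end) < 0);  % indices just before a crossing
figure
plot(t, s)
hold on
plot(tFine(zxFine), sFine(zxFine), 'o')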
A remotely possible way to improve your data: if you're measuring a human voice, why not try filtering to the range of human speech? This should be fine mathematically and could possibly improve your waveform.

MATLAB: Apply lowpass filter on sound signal [duplicate]

I've only used MATLAB as a calculator, so I'm not as well versed in the program. I hope a kind person may be able to guide me on the way since Google currently is not my friend.
I have a wav file in the link below, where there is a human voice and some noise in the background. I want the noise removed. Is there anyone who can tell me how to do it in MATLAB?
https://www.dropbox.com/s/3vtd5ehjt2zfuj7/Hold.wav
This is a pretty imperfect solution, especially since some of the noise is embedded in the same frequency range as the voice you hear in the file, but here goes nothing. Regarding the frequency spectrum: if you listen to the sound, the background noise has a very low hum. This resides in the low-frequency range of the spectrum, whereas the voice sits at higher frequencies. As such, we can apply a bandpass filter to get rid of the low noise, capture most of the voice, and any noisy frequencies on the higher side will get cancelled as well.
Here are the steps that I did:
Read in the audio file using audioread.
Play the original sound so I can hear what it sounds like. Do this by creating an audioplayer object.
Plotted both the left and right channels to take a look at the sound signal in the time domain, to see if it gives any clues. Looking at the channels, they both seem to be the same, so it looks like a single microphone was mapped to both channels.
I took the Fourier Transform and saw the frequency distribution.
Using (4) I figured out the rough approximation of where I should cut off the frequencies.
Designed a bandpass filter that cuts off these frequencies.
Filtered the signal then played it by constructing another audioplayer object.
Let's go then!
Step #1
%% Read in the file
clearvars;
close all;
[f,fs] = audioread('Hold.wav');
audioread will read in an audio file for you. Just specify the file you want within the quotes. Also, make sure you set your working directory to where this file is stored. clearvars and close all just do clean-up for us: they close all of our figure windows (if any are open) and clear all of the variables in the MATLAB workspace. f is the signal read into MATLAB, while fs is the sampling frequency of your signal. f here is a 2D matrix: the first column is the left channel and the second is the right channel. In general, the total number of channels in your audio file is given by the number of columns in the matrix read in through audioread.
Step #2
%% Play original file
pOrig = audioplayer(f,fs);
pOrig.play;
This step will allow you to create an audioplayer object that takes the signal you read in (f), with the sampling frequency fs and outputs an object stored in pOrig. You then use pOrig.play to play the file in MATLAB so you can hear it.
Step #3
%% Plot both audio channels
N = size(f,1); % Determine total number of samples in audio file
figure;
subplot(2,1,1);
stem(1:N, f(:,1));
title('Left Channel');
subplot(2,1,2);
stem(1:N, f(:,2));
title('Right Channel');
stem is a way to plot discrete points in MATLAB. Each point in time has a circle drawn at the point with a vertical line drawn from the horizontal axis to that point in time. subplot is a way to place multiple figures in the same window. I won't get into it here, but you can read about how subplot works in detail by referencing this StackOverflow post I wrote here. The above code produces the plot shown below:
The above code is quite straight forward. I'm just plotting each channel individually in each subplot.
Step #4
%% Plot the spectrum
df = fs / N;
w = (-(N/2):(N/2)-1)*df;
y = fft(f(:,1), N) / N; % For normalizing, but not needed for our analysis
y2 = fftshift(y);
figure;
plot(w,abs(y2));
The code above will probably look the most frightening. If you recall from signals and systems, the maximum frequency that is represented in our signal is the sampling frequency divided by 2; this is called the Nyquist frequency. The sampling frequency of your audio file is 48000 Hz, which means that the maximum frequency represented in your audio file is 24000 Hz. fft stands for Fast Fourier Transform. Think of it as a very efficient way of computing the Fourier transform. The traditional formula requires that you perform multiple summations for each element in your output; the FFT computes this efficiently by requiring far fewer operations and still gives you the same result.
We are using fft to take a look at the frequency spectrum of our signal. You call fft by specifying the input signal you want as the first parameter, followed by how many points you want to evaluate at as the second parameter. It is customary to specify the number of points in your FFT to be the length of the signal; I do this by checking how many rows we have in our sound matrix. When plotting the frequency spectrum, I just took one channel to keep things simple, as the other channel is the same; this serves as the first input into fft. Also, bear in mind that I divided by N, as that is the proper way of normalizing the signal. However, because we just want a snapshot of what the frequency domain looks like, you don't really need to do this. If you're planning on using it to compute something later, then you definitely do.
I wrote some additional code because the spectrum is uncentred by default. I used fftshift so that the centre maps to 0 Hz, the left half spans from 0 to -24000 Hz, and the right half spans from 0 to 24000 Hz. This is intuitively how I see the frequency spectrum. You can think of negative frequencies as frequencies that propagate in the opposite direction. Ideally, the frequency distribution for a negative frequency should equal that of the positive frequency. When you plot the frequency spectrum, it tells you how much each frequency contributes to the output; that is defined by the magnitude of the signal, which you find by taking the abs function. The output that you get is shown below.
If you look at the plot, there are a lot of spikes around the low frequency range. This corresponds to your humming whereas the voice probably maps to the higher frequency range and there isn't that much of it as there isn't that much of a voice heard.
Step #5
By trial and error and looking at the spectrum from Step #4, I figured everything from 700 Hz and below corresponds to the humming noise, while the higher noise contributions start from about 12000 Hz and up.
Step #6
You can use the butter function from the Signal Processing Toolbox to help you design a bandpass filter. If you don't have this toolbox, refer to this StackOverflow post for a user-made function that achieves the same thing (note that the order of that filter is only 2). Assuming you have the butter function available, you need to figure out what order you want your filter; the higher the order, the more work it will do. I chose n = 7 to start off. You also need to normalize your frequencies so that the Nyquist frequency maps to 1, while everything else maps between 0 and 1. Once you do that, you can call butter like so:
[b,a] = butter(n, [beginFreq, endFreq], 'bandpass');
The 'bandpass' flag means you want to design a bandpass filter; beginFreq and endFreq are the normalized beginning and ending frequencies you want for the bandpass filter. In our case, that's beginFreq = 700 / Nyquist and endFreq = 12000 / Nyquist. b and a are the filter coefficients that will help you perform this task; you'll need these for the next step.
%% Design a bandpass filter that passes frequencies between 700 Hz and 12000 Hz
n = 7;
beginFreq = 700 / (fs/2);
endFreq = 12000 / (fs/2);
[b,a] = butter(n, [beginFreq, endFreq], 'bandpass');
Step #7
%% Filter the signal
fOut = filter(b, a, f);
%% Construct audioplayer object and play
p = audioplayer(fOut, fs);
p.play;
You use filter to filter your signal using what you got from Step #6. fOut will be your filtered signal. If you want to hear it played, you can construct an audioplayer based on this output signal at the same sampling frequency as the input. You then use p.play to hear it in MATLAB.
Give this all a try and see how it works. You'll probably need to play around the most with Steps #6 and #7. This isn't a perfect solution, but hopefully it's enough to get you started.
Good luck!

Matlab - creating psd with right scaling

I want to analyze some audio data (a .wav with PCM, 32 kHz sampling rate) and create its PSD with the axes Sxx (watts/hertz, not dB) and f (hertz).
So I would start by reading out the audiodata with:
[x,fs]=audioread('test.wav');
After this I'm having some problems, because I don't really know how to proceed, and MATLAB keeps telling me that the psd functions won't be supported in the future and that I should use pwelch instead. (I also tried to build the autocorrelation and afterwards apply the Fourier transform to get to Sxx, but it didn't work out very well.)
So could anybody tell me how I can get from my vector x to a vector with the PSD values in watts/hertz and plot it afterwards?
Very grateful for any kind of help! :)
Update 1: Yes, I did read the documentation of pwelch, but I'm afraid my English is too bad to understand it completely.
So if I use the psd documentation:
nfft = 2^nextpow2(length(x));
Pxx = abs(fft(x,nfft)).^2/length(x)/fs;
Hpsd = dspdata.psd(Pxx(1:length(Pxx)/2),'fs',fs);
plot(Hpsd)
I'm able to get the plot in dB with the peak at the right frequency. (I don't know how dspdata.psd works, though.)
I tried out:
[Pyy,f]=pwelch(x,fs)
plot(Pyy)
this gives me a non-dB scale, but the peak is at the wrong frequency
Update 2:
First of all, thanks a lot for your detailed answer! At the moment I'm working on my MATLAB skills as well as my English, but all the specific technical terms give me a hard time.
When using your pwelch example on wav data with a clear frequency of 1 kHz, the plot shows me the peak at roughly 0.14; could it maybe still be a specially scaled x-axis?
If I try it this way:
[y,fs]=audioread('test.wav');
N=length(y);
bin_vals=0:N-1;
fax_Hz= bin_vals*fs/N;
N_2=ceil(N/2);
Y=fft(y);
pyy=Y.*conj(Y);
plot(fax_Hz(1:N_2),pyy(1:N_2))
the result seems right (is this way correct?), but I still need some time to search for a proper way to display the y-axis in W/Hz, since I don't know how the audio signal was created.
Update 3:
http://s000.tinyupload.com/index.php?file_id=33803229773204653857
This wav file should have a dominant frequency at 1 kHz, a duration of 3 seconds, and a sampling frequency of 44100 Hz. (If I plot the data received from audioread, the oscillation seems reasonable.)
with
[y,fs]=audioread('1khz.wav');
[pyy,f]=pwelch(y,fs);
plot(f,pyy)
I get a peak at 0.14 on the x-axis.
if I use
[y,fs]=audioread('1khz.wav');
[pyy,f]=pwelch(y,[],[],[],fs);
plot(f,pyy)
instead, the peak is at 1000. Is this way right? And how should I interpret the different scaling on the y-axis (pwelch vs. squared absolute value)?
I also wanted to ask whether it is possible to get a flat PSD of AWGN in MATLAB. (Since you only have finitely many samples, I don't know how to get there.)
Thanks again for your detailed support!
Update 4
@A.Donda
So I have a new problem, for which I think it is probably necessary to go a bit more into detail. My plan is basically the following:
Read in audio data ([y,fs]) and generate white noise with a certain SNR ([n,fs]).
Generate a filter H which shapes PSD(y) to be similar to PSD(n).
Generate an inverse filter G = H^(-1) which reverts the effect of H.
My problem is that when using pwelch, the resulting vector pyy is much shorter than y. Since my filter is determined by P = sqrt(pnn/pyy), I can't multiply fft(y)*H and therefore get no results.
Do you know any way around this problem?
Or is there a way to go back from a (Welch-estimated) PSD to a normal signal (like an inverse function for pwelch)?
In the example you have from the psd documentation, you compute a PSD estimate yourself, then put it into a dspdata.psd container and plot it. What dspdata.psd does here for you is basically compute the frequency axis and provide it to the plot command, nothing more. You get a plot of the spectral density estimate, but it's the one you compute yourself using fft, which is the simplest and worst PSD estimate you can get, a so-called periodogram.
Your use of pwelch is almost correct, but you have to give the sampling frequency as the 5th argument, and then use the frequency axis information in your plot:
[Pyy,f]=pwelch(y,[],[],[],fs);
plot(f,Pyy)
should give you the peak at the correct frequency.
What pwelch gives you is the spectral density of the signal over Hz. Correct axis labels would therefore be
xlabel('frequency (Hz)')
ylabel('psd (1/Hz)')
The signal you give pwelch is a pure sequence of numbers without physical dimensions. By specifying the sampling rate, the time axis gets a physical unit, seconds; therefore the resulting frequency is in Hz and the density is in 1/Hz. But your time series values still have no physical dimension, and therefore the density cannot be related to something like watts. Has your audio signal been obtained by a calibrated A/D converter? If yes, you should be able to relate your data to a physical dimension and units, but that's a nontrivial step.
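Putting those pieces together, a minimal sketch for the 1 kHz test file from Update 3 (the empty arguments leave pwelch's default window, overlap, and FFT length in place):
% Sketch: Welch PSD estimate with a correctly scaled frequency axis.
[y, fs] = audioread('1khz.wav');
[Pyy, f] = pwelch(y, [], [], [], fs);   % sampling rate as the 5th argument
plot(f, Pyy)
xlabel('frequency (Hz)')
ylabel('psd (1/Hz)')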
On a personal note, I'd really advise you to brush up on your English, because using software, especially programming interfaces, without properly understanding the documentation is a recipe for disaster.

matlab FFT. Stuck understanding relationship between frequency and result

We're trying to analyse flow around a circular cylinder, and we have a set of Cp values that we got from a wind tunnel experiment. Initially we started with a sample frequency of 20 Hz and tried to find the frequency of vortex shedding using the FFT in MATLAB. We got a frequency of around 7 Hz. Next we did the same experiment, but the only thing we changed was the sampling frequency, from 20 Hz to 200 Hz. We got the frequency of the vortex shedding to be around 70 Hz (this is where the peak is located in the graph). The graph doesn't change regardless of the Cp data that we enter; the only time the peak moves is when we change the sample frequency. It seems like the reported vortex shedding frequency is proportional to the sample frequency, and this doesn't seem to make sense at all. Any help regarding establishing a relation between sample frequency and vortex shedding frequency would be greatly appreciated.
The problem you are seeing is related to "data aliasing", due to the FFT not being able to detect frequencies higher than the Nyquist frequency (half the sampling frequency).
With data aliasing, a peak at a real frequency will show up centered around (real frequency modulo Nyquist frequency). In your 20 Hz sampling case (assuming 70 Hz is the true frequency), that results in zero frequency, which means you're not seeing the real information. One thing that can help you with this is FFT "windowing".
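To see the effect concretely, here is a small sketch (the 68 Hz test tone and the ten-second duration are purely illustrative, not your measured data): a tone above the Nyquist frequency of the slower sampling rate shows up at a misleadingly low frequency.
% Sketch: a 68 Hz tone sampled at 200 Hz (adequate) and at 20 Hz (aliased).
f0 = 68;                                   % illustrative tone frequency
for fs = [200 20]
    t = 0:1/fs:10;                         % 10 s of samples
    x = cos(2*pi*f0*t);
    N = length(x);
    X = abs(fft(x))/N;
    fAxis = (0:N-1)*(fs/N);
    figure
    plot(fAxis(1:floor(N/2)), X(1:floor(N/2)))
    title(sprintf('68 Hz tone sampled at %d Hz', fs))
    xlabel('Frequency (Hz)')
end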
Another problem you may be experiencing is a noisy estimate from a single-FFT measurement. It's better to take lots of data, use windowing with overlap, and average at least 5 FFTs to get your result. As Steven Lowe mentioned, you should also sample at faster rates if possible; I would recommend sampling at the fastest rate your instruments can manage.
Lastly, I would recommend that you read some excerpts from Numerical Recipes in C:
Section 12.0 -- Introduction to FFT
Section 12.1 (Discusses data aliasing)
Section 13.4 (Discusses FFT windowing)
You don't need to read the C source code -- just the explanations. Numerical Recipes for C has excellent condensed information on the subject.
If you have any more questions, leave them in the comments. I'll try to do my best in answering them.
Good luck!
This is probably not a programming problem; it sounds like an experiment/measurement problem.
I think the sampling frequency has to be at least twice the rate of the oscillation frequency, otherwise you get artifacts; this might explain the difference. Note that the ratio of the FFT frequency to the sampling frequency is 0.35 in both cases. Can you repeat the experiment with higher sampling rates? I'm thinking that if this is a narrow cylinder in a strong wind, it may be vibrating/oscillating faster than the sampling rate can detect.
I hope this helps - there's a 97.6% probability that I don't know what I'm talking about ;-)
If it's not an aliasing problem, it sounds like you could be plotting the frequency response on a normalised frequency scale, which will change with sample frequency. Here's an example of a reasonably good way to plot a frequency response of a signal in Matlab:
Fs = 100;
Tmax = 10;
time = 0:1/Fs:Tmax;
omega = 2*pi*10; % 10 Hz
signal = 10*sin(omega*time) + rand(1,Tmax*Fs+1);
Nfft = 2^8;
[Pxx,freq] = pwelch(signal,Nfft,[],[],Fs)
plot(freq,Pxx)
Note that the sample frequency must be explicitly passed to the pwelch command in order to output the “real” frequency data. Otherwise, when you change the sample frequency the bin where the resonance occurs will seem to shift, which is similar to the problem you describe.
Methinks you need to do some serious reading on digital signal processing before you can even begin to understand all the nuances of the DFT (FFT). If I was you, I'd get grounded in it first with this great book:
Discrete-Time Signal Processing
If you want more of a mathematical treatment that will really expand your abilities,
Fourier Analysis by Körner
Take a look at this related question. While it was originally asked about VB, the responses are generically about FFTs.
I tried using the frequency response code above, but it seems that I don't have the appropriate toolbox in MATLAB. Is there any way to do the same thing without using the pwelch command? So far, this is what I have:
% FFT Algorithm
Fs = 200; % Sampling frequency
T = 1/Fs; % Sample time
L = 65536; % Length of signal
t = (0:L-1)*T; % Time vector
y = data1; % Your CP values go in this vector
NFFT = 2^nextpow2(L); % Next power of 2 from length of y
Y = fft(y,NFFT)/L;
f = Fs/2*linspace(0,1,NFFT/2);
% Plot single-sided amplitude spectrum.
loglog(f,2*abs(Y(1:NFFT/2)))
title(' y(t)')
xlabel('Frequency (Hz)')
ylabel('|Y(f)|')
I think there might be something wrong with the code I am using. I'm not sure what though.
A colleague of mine has written some nice GPL-licenced functions for spectral analysis:
http://www.mecheng.adelaide.edu.au/~pvl/octave/
(Update: this code is now part of one of the Octave modules:
http://octave.svn.sourceforge.net/viewvc/octave/trunk/octave-forge/main/signal/inst/.
But it might be tricky to extract just the pieces you need from there.)
They're written for both Matlab and Octave and serve mostly as a drop-in replacement for the analogous functions in the Signal Processing Toolbox. (So the code above should still work fine.)
It may help with your data analysis; better than rolling your own with fft and the like.