Matlab: Analysis of signal

I have a problem with this task:
For an arbitrary waveform, perform frequency analysis and give the parameters of each signal component:
time of beginning and ending of each component
beginning and ending frequency
amplitude (in the time domain) at the beginning and end of each component
level of noise in dB
Assume that the parameters of each component, such as amplitude and frequency, change linearly in time. The sampling frequency is 1000 Hz.
For example, I have a signal like this:
Nx=64;
fs=1000;
t=1/fs*(0:Nx-1);
%==========================
A1=1;
A2=4;
f1=500;
f2=1000;
x1=A1*cos(2*pi*f1*t);
x2=A2*sin(2*pi*f2*t);
%==========================
x=x1+x2;

You are horrendously under-sampling your signal. You will be able to see your 500 Hz sine wave, but just barely, and your 1000 Hz sine wave won't appear where you would like it to. You will have aliasing issues.
You are also not going to see many samples (64 samples is not enough data).
MaxTime = 1; % seconds
fs = 2000;   % minimum per Shannon-Nyquist (in practice pick higher, since a sine at exactly Nyquist samples to zero)
t = 0:1/fs:MaxTime; % this ensures the correct sampling rate; adjust the time range as needed
Noise Level = -infinity dB (there is no noise component here)
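As a quick illustration (my own sketch, not part of the original answer), once the sampling rate is high enough the component frequencies and time-domain amplitudes of the example signal can be read straight off a normalized FFT; the choice of fs = 4000 Hz and one second of data below is purely illustrative, and findpeaks assumes the Signal Processing Toolbox:
% Regenerate the example with adequate sampling and inspect its spectrum.
fs = 4000;                 % comfortably above twice the highest component (1000 Hz)
Nx = 4000;                 % one second of data
t  = (0:Nx-1)/fs;
x  = 1*cos(2*pi*500*t) + 4*sin(2*pi*1000*t);
X   = fft(x)/Nx;           % normalized FFT
f   = (0:Nx-1)*fs/Nx;      % frequency axis in Hz
mag = 2*abs(X(1:Nx/2));    % single-sided amplitude spectrum
[amps, loc] = findpeaks(mag, 'MinPeakHeight', 0.5);
f(loc)                     % should show the component frequencies, ~500 and ~1000 Hz
amps                       % should show the time-domain amplitudes, ~1 and ~4
For the time-varying components the assignment describes, the same idea applied to short windows (e.g. via spectrogram) would give the start/end times and the beginning/ending frequencies.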

MATLAB: Remove high frequency noise from wav file

I'm trying to remove the high frequency noise from the following file.
It's a file of a woman reading the news, with a high pitched noise playing loudly over it. Towards the end of the file, someone else begins to speak, but in a different language.
I want to filter out this high pitched noise, and be able to clearly hear the woman reading the news. Looking at the frequency domain:
I have tried using a low pass filter, and band stop filter. The bandstop filter produces a signal that no longer has the high pitch ringing, but the audio isn't very clear and it's hard to make out what is being said - the same goes for the low pass filter. I surmise that this is due to me filtering out not only the noise, but the harmonics of the speech as well. It was also necessary that I amplify the audio signal after I filtered it, because it was quieter than before.
Is there some clever way for me to reconstruct the harmonics of the speech in order to hear what is being said more clearly? Or is there a clever way for me to filter the signal without losing too much audio clarity?
I can include any code I used in matlab if needed.
Note:
I shifted the signal to 0 in the image I linked
I did use filtfilt() instead of filter()
I used butter() for the filters
Given the fairly dynamic nature of the interference in your sample, stationary filters are not going to yield very satisfying results. To improve performance, you would need to dynamically adjust the filtering parameters based on estimates of the interference.
Fortunately in this case the interference is pretty strong and exhibits a fairly regular pattern which makes it easier to estimate. This can be seen from the signal's spectrogram.
For the following derivations we will be assuming that the samples of the wav file have been stored in the array x and that the sampling rate is fs (which is 8000 Hz in the provided sample).
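For example, loading the recording could look like the following; the file name 'news.wav' is just a placeholder for the linked sample, not a name from the original post:
[x, fs] = audioread('news.wav');   % placeholder file name
x = x(:,1);                        % keep a single channel if the recording is stereo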
[Sx,f,t] = spectrogram(x, triang(1024), 1023, [], fs, 'onesided');
Given that the interference is strong, obtaining the frequency of the interference can be done by locating the peak frequency in each time slice:
frequency = zeros(size(Sx,2),1);
for k = 1:size(Sx,2)
    % findpeaks needs a real vector, so use the magnitude spectrum; the strong
    % interference is assumed to produce the dominant (first) peak in each slice
    [pks, loc] = findpeaks(abs(Sx(:,k)));
    frequency(k) = f(loc(1));   % convert the peak bin to Hz via the spectrogram's frequency vector
end
Seeing that the interference is periodic we can use the Discrete Fourier Transform to decompose this signal:
M = 32*fs;
Ff = fft(frequency, M);
plot(fs*(0:M-1)/M, 20*log10(abs(Ff)));
xlim([0 2]); % the interference frequency varies slowly, so only the first couple of Hz matter
xlabel('frequency (Hz)');
ylabel('amplitude (dB)');
Using the first two harmonics as an approximation, we can model the frequency of the interference signal as:
T = 1.0/fs;
t = (0:length(x)-1)' * T; % column vector, to match the orientation of x
freq = 750.0127340203496 ...
     + 249.99913423501602*cos(2*pi*0.25*t - 1.5702946346796276) ...
     + 250.23974282864816*cos(2*pi*0.5 *t - 1.5701043282285363);
At this point we would have enough to create a narrowband filter with a center frequency (which would change dynamically as we keep updating the filter coefficients) given by that frequency model. Note however that constantly recomputing and updating the filter coefficient is a fairly expensive process and given that the interference is strong, it is possible to do better by locking on to the interference phase. This can be done by correlating small blocks of the original signal with sine and cosine at desired frequency. We can then slightly tweak the phase to align the sine/cosine with the original signal.
% Compute the phase of the sine/cosine to correlate the signal with
delta_phi = 2*pi*freq/fs;
phi = cumsum(delta_phi);
% We scale the phase adjustments with a triangular window to try to reduce
% phase discontinuities. I've chosen a window of ~200 samples somewhat arbitrarily,
% but it is large enough to cover 8 cycles of the interference around its lowest
% frequency sections (so we can get a better estimate by averaging out other signal
% contributions over multiple interference cycles), and is small enough to capture
% local phase variations.
step = 50;
L = 200;
win = triang(L);
win = win/sum(win);
for i = 0:floor((length(x)-L)/step)
    idx = (i*step+1):(i*step+L);   % current block of L samples
    % The phase tweak to align the sine/cosine isn't linear, so we run a few
    % iterations to let it converge to a phase locked to the original signal
    for iter = 0:1
        xseg   = x(idx);
        phiseg = phi(idx);
        r1 = sum(xseg .* cos(phiseg));
        r2 = sum(xseg .* sin(phiseg));
        theta = atan2(r2, r1);
        delta_phi(idx) = delta_phi(idx) - theta*win;
        phi = cumsum(delta_phi);
    end
end
Finally, we need to estimate the amplitude of the interference. Here we choose to perform the estimation over the initial 0.15 seconds where there is a little pause before the speech starts so that the estimation is not biased by the speech's amplitude:
tmax = 0.15;
nmax = floor(tmax * fs);
amp = sqrt(2*mean(x(1:nmax).^2));
% this should give us amp ~ 0.250996990794946
These parameters then allow us to fairly precisely reconstruct the interference, and correspondingly remove the interference from the original signal by direct subtraction:
y = amp * cos(phi);
x = x - y;
Listening to the resulting output, you may notice a remaining faint whooshing noise, but nothing compared to the original interference. Obviously this is a fairly ideal case where the parameters of the interference are so easy to estimate that the results almost look too good to be true. You may not get the same performance with more random interference patterns.
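If you want to listen to or save the cleaned-up signal from within MATLAB, something along these lines should work; the output file name is just an example:
soundsc(x, fs);                    % play the cleaned signal
audiowrite('cleaned.wav', x, fs);  % 'cleaned.wav' is an arbitrary example name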
Note: the python script used for this processing (and the corresponding .wav file output) can be found here.

MATLAB: Apply lowpass filter on sound signal [duplicate]

I've only used MATLAB as a calculator, so I'm not as well versed in the program. I hope a kind person may be able to guide me on the way since Google currently is not my friend.
I have a wav file in the link below, where there is a human voice and some noise in the background. I want the noise removed. Is there anyone who can tell me how to do it in MATLAB?
https://www.dropbox.com/s/3vtd5ehjt2zfuj7/Hold.wav
This is a pretty imperfect solution, especially since some of the noise is embedded in the same frequency range as the voice you hear in the file, but here goes nothing. What I was talking about with regards to the frequency spectrum is that if you listen to the sound, the background noise is a very low hum. This resides in the low-frequency range of the spectrum, whereas the voice has a higher frequency. As such, we can apply a bandpass filter to get rid of the low noise, capture most of the voice, and cancel any noisy frequencies on the higher side as well.
Here are the steps that I did:
1. Read in the audio file using audioread.
2. Play the original sound so I can hear what it sounds like. Do this by creating an audioplayer object.
3. Plot both the left and right channels to take a look at the sound signal in the time domain... in case it gives any clues. Looking at the channels, they both seem to be the same, so it looks like it was just a single microphone being mapped to both channels.
4. Take the Fourier Transform and look at the frequency distribution.
5. Using (4), figure out a rough approximation of where I should cut off the frequencies.
6. Design a bandpass filter that cuts off these frequencies.
7. Filter the signal, then play it by constructing another audioplayer object.
Let's go then!
Step #1
%% Read in the file
clearvars;
close all;
[f,fs] = audioread('Hold.wav');
audioread will read in an audio file for you. Just specify what file you want within the ''. Also, make sure you set your working directory to be where this file is being stored. clearvars, close all just do clean up for us. It closes all of our windows (if any are open), and clears all of our variables in the MATLAB workspace. f would be the signal read into MATLAB while fs is the sampling frequency of your signal. f here is a 2D matrix. The first column is the left channel while the second is the right channel. In general, the total number of channels in your audio file is denoted by the total number of columns in this matrix read in through audioread.
Step #2
%% Play original file
pOrig = audioplayer(f,fs);
pOrig.play;
This step will allow you to create an audioplayer object that takes the signal you read in (f), with the sampling frequency fs and outputs an object stored in pOrig. You then use pOrig.play to play the file in MATLAB so you can hear it.
Step #3
%% Plot both audio channels
N = size(f,1); % Determine total number of samples in audio file
figure;
subplot(2,1,1);
stem(1:N, f(:,1));
title('Left Channel');
subplot(2,1,2);
stem(1:N, f(:,2));
title('Right Channel');
stem is a way to plot discrete points in MATLAB. Each point in time has a circle drawn at the point with a vertical line drawn from the horizontal axis to that point in time. subplot is a way to place multiple figures in the same window. I won't get into it here, but you can read about how subplot works in detail by referencing this StackOverflow post I wrote here. The above code produces the plot shown below:
The above code is quite straightforward. I'm just plotting each channel individually in each subplot.
Step #4
%% Plot the spectrum
df = fs / N;
w = (-(N/2):(N/2)-1)*df;
y = fft(f(:,1), N) / N; % For normalizing, but not needed for our analysis
y2 = fftshift(y);
figure;
plot(w,abs(y2));
The code that will look the most frightening is the code above. If you recall from signals and systems, the maximum frequency that can be represented in our signal is the sampling frequency divided by 2. This is called the Nyquist frequency. The sampling frequency of your audio file is 48000 Hz, which means that the maximum frequency represented in your audio file is 24000 Hz. fft stands for Fast Fourier Transform. Think of it as a very efficient way of computing the Fourier Transform. The traditional formula requires that you perform multiple summations for each element in your output. The FFT will compute this efficiently by requiring far fewer operations and still give you the same result.
We are using fft to take a look at the frequency spectrum of our signal. You call fft by specifying the input signal you want as the first parameter, followed by how many points you want to evaluate at with the second parameter. It is customary that you specify the number of points in your FFT to be the length of the signal. I do this by checking to see how many rows we have in our sound matrix. When you plot the frequency spectrum, I just took one channel to make things simple as the other channel is the same. This serves as the first input into fft. Also, bear in mind that I divided by N as it is the proper way of normalizing the signal. However, because we just want to take a snapshot of what the frequency domain looks like, you don't really need to do this. However, if you're planning on using it to compute something later, then you definitely need to.
I wrote some additional code because the spectrum is uncentered by default. I used fftshift so that the centre maps to 0 Hz, the left side spans from 0 to -24000 Hz, and the right side spans from 0 to 24000 Hz. This is intuitively how I see the frequency spectrum. You can think of negative frequencies as frequencies that propagate in the opposite direction. For a real signal, the magnitude at a negative frequency equals the magnitude at the corresponding positive frequency. When you plot the frequency spectrum, it tells you how much contribution that frequency has to the output, which is given by the magnitude of the signal. You find this by taking the abs function. The output that you get is shown below.
If you look at the plot, there are a lot of spikes around the low frequency range. This corresponds to your humming whereas the voice probably maps to the higher frequency range and there isn't that much of it as there isn't that much of a voice heard.
Step #5
By trial and error and looking at the spectrum from Step #4, I figured that everything from 700 Hz and below corresponds to the humming noise, while the higher noise contributions sit at 12000 Hz and above.
Step #6
You can use the butter function from the Signal Processing Toolbox to help you design a bandpass filter. However, if you don't have this toolbox, refer to this StackOverflow post on a user-made function that achieves the same thing; note that the order of that filter is only 2. Assuming you have the butter function available, you need to figure out what order you want your filter to be. The higher the order, the sharper the transition, but the more work it has to do. I chose n = 7 to start off. You also need to normalize your frequencies so that the Nyquist frequency maps to 1, while everything else maps to between 0 and 1. Once you do that, you can call butter like so:
[b,a] = butter(n, [beginFreq, endFreq], 'bandpass');
The 'bandpass' flag means you want to design a bandpass filter. beginFreq and endFreq map to the normalized beginning and ending frequencies you want for the bandpass filter. In our case, that's beginFreq = 700 / Nyquist and endFreq = 12000 / Nyquist. b and a are the coefficients of the filter that will help you perform this task. You'll need these for the next step.
%% Design a bandpass filter that passes frequencies between 700 and 12000 Hz
n = 7;
beginFreq = 700 / (fs/2);
endFreq = 12000 / (fs/2);
[b,a] = butter(n, [beginFreq, endFreq], 'bandpass');
Step #7
%% Filter the signal
fOut = filter(b, a, f);
%% Construct audioplayer object and play
p = audioplayer(fOut, fs);
p.play;
You use filter to filter your signal using what you got from Step #6. fOut will be your filtered signal. If you want to hear it played, you can construct an audioplayer based on this output signal at the same sampling frequency as the input. You then use p.play to hear it in MATLAB.
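As a side note (my addition, not part of the original answer), if the phase shift introduced by filter bothers you, the zero-phase filtfilt used in the earlier answer above is a drop-in alternative; keep in mind it effectively applies the filter twice, so a lower order may be enough:
fOut = filtfilt(b, a, f);   % zero-phase filtering; processes each channel of f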
Give this all a try and see how it all works. You'll probably need to play around the most with Steps #6 and #7. This isn't a perfect solution, but it should be enough to get you started, I hope.
Good luck!

Why doesn't matlab give me an 8KHz sinewave for 16KHz sampling frequency?

I have the following matlab code, and I am trying to get 64 samples of various sinewave frequencies at 16KHz sampling frequency:
close all; clear; clc;
dt=1/16000;
freq = 8000;
t=-dt;
for i=1:64,
t=t+dt;a(i)=sin(2*pi*freq*t);
end
plot(a,'-o'); grid on;
for freq = 1000, the output graph is
The graph seems normal up to 2000, but at 3000, the graph is
We can see that the amplitude changes during every cycle
Again, at 4000 the graph is
Not exactly a sinewave, but the amplitude is as expected during every cycle and if I play it out it sounds like a single frequency tone
But again at 6000 we have
and at 8000 we have
Since the sampling frequency is 16000, I was assuming that I should be able to generate sinewave samples for up to 8000, and I was expecting the graph I got at 4000 to appear at 8000. Instead, even at 3000, the graph starts to look weird.
If I change the sampling frequency to 32000 and the sinewave frequency to 16000, I get the same graph that I am getting now at 8000. Why does matlab behave this way?
EDIT:
at freq = 7900
This is just an artifact of aliasing. Notice how the vertical axis for the 8kHz graph only goes up to 1.5E-13? Ideally the graph should be all zeros; what you're seeing is rounding error.
Looking at the expression for computing the samples at 16kHz:
x(n) = sin(2 * pi * freq * n / 16000)
Where x is the signal, n is the integer sample number, and freq is the frequency in hertz. So, when freq is 8kHz, it's equivalent to:
x(n) = sin(2 * pi * 8000 * n / 16000) = sin(pi * n)
Because n is an integer, sin(pi * n) will always be zero. 8kHz is called the Nyquist frequency for a sampling rate of 16kHz for this reason; in general, the Nyquist frequency is always half the sample frequency.
At 3kHz, the signal "looks weird" because the peaks of the underlying sine fall between sample instants: 16000 is not an integer multiple of 3000, so there is a non-integer number of samples per cycle. The same goes for the 6kHz signal.
The reason they still sound like pure sine tones is because of how the amplitude is interpolated between samples. The graph uses simple linear interpolation, which gives the impression of harsh edges at the samples. However, a physical loudspeaker (more precisely, the circuitry which drives it) does not use linear interpolation directly. Instead, a small filter circuit is used to smooth out those harsh edges (aka anti-aliasing) which removes the artificial frequencies above the aforementioned Nyquist frequency.
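To see this numerically, here is a small sketch (my own illustration, not from the original answer) showing that a sine at exactly the Nyquist frequency samples to essentially zero, while a cosine at the same frequency survives because of its phase, which is also why the later answer emphasizes the phase parameter:
% Sampling at fs = 16 kHz, signal frequency exactly at Nyquist (8 kHz).
fs = 16000;
n  = 0:63;                    % integer sample indices
s  = sin(2*pi*8000*n/fs);     % = sin(pi*n): zero up to rounding error
c  = cos(2*pi*8000*n/fs);     % = cos(pi*n): alternates between +1 and -1
max(abs(s))                   % tiny value (~1e-13), pure rounding error
max(abs(c))                   % 1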
That is not a problem of MATLAB but the nature of sampling.
Sampling at 16 kHz produces 16,000 sampled data points per second. An 8 kHz signal has 8,000 cycles per second, so there are two samples per cycle.
Two is the minimum number of samples per cycle. This is part of the sampling theorem.
Try to show two cycles with three points on a graph, and you will see that it is impossible. In the same way, you can't show 2N cycles with (2N-1) points.
The effect seen for 8 kHz is, as all the other answers already mention, an aliasing effect, and arises because the sine wave for 8 kHz is sin(2*pi*n*8000/16000) = sin(n*pi), which is explained in Drew McGoven's answer. Luckily the amplitude is not the only parameter that defines the signal. The other parameter that is required to completely define the signal is the phase. This means that when doing a Fourier analysis of the signal, it is still possible to find the right frequency. Try:
close all; clear; clc;
dt=1/16000;
freq = 7300;
t=-dt;
for i=1:64,
t=t+dt;a(i)=sin(2*pi*freq*t);
end
plot(a,'-o'); grid on;
figure; plot( (0:63)*16000/64, abs(fft(a)) ); % frequency axis in Hz for the 64-point FFT
A side comment: some people might argue against using i as an index variable since it is also used for the imaginary unit. Personally I have nothing against using i as an index, since the runtime and overhead are only affected slightly and I always use 1i for the imaginary unit. Just make sure to use 1i consistently for the imaginary unit then.

Matlab: Finding dominant frequencies in a frame of audio data

I am pretty new to Matlab and I am trying to write a simple frequency-based speech detection algorithm. The end goal is to run the script on a wav file, and have it output start/end times for each speech segment. If I use the code:
fr = 128;
[ audio, fs, nbits ] = wavread(audioPath);
spectrogram(audio,fr,120,fr,fs,'yaxis')
I get a useful frequency intensity vs. time graph like this:
By looking at it, it is very easy to see when speech occurs. I could write an algorithm to automate the detection process by looking at each x-axis frame, figuring out which frequencies are dominant (have the highest intensity), testing the dominant frequencies to see if enough of them are above a certain intensity threshold (the difference between yellow and red on the graph), and then labeling that frame as either speech or non-speech. Once the frames are labeled, it would be simple to get start/end times for each speech segment.
My problem is that I don't know how to access that data. I can use the code:
[S,F,T,P] = spectrogram(audio,fr,120,fr,fs);
to get all the features of the spectrogram, but the results of that code don't make any sense to me. The bounds of the S,F,T,P arrays and matrices don't correlate to anything I see on the graph. I've looked through the help files and the API, but I get confused when they start throwing around algorithm names and acronyms - my DSP background is pretty limited.
How could I get an array of the frequency intensity values for each frame of this spectrogram analysis? I can figure the rest out from there, I just need to know how to get the appropriate data.
What you are trying to do is called speech activity detection. There are many approaches to this; the simplest might be a simple band-pass filter that passes the frequencies where speech is strongest, which is between 1 kHz and 8 kHz. You could then compare the total signal energy with the bandpass-limited energy and, if the majority of the energy is in the speech band, classify the frame as speech (see the sketch after this answer). That's one option, but there are others too.
To get the frequencies at the peaks you could use the FFT to get the spectrum and then use peakdetect.m. But this is a very naïve approach, as you will get a lot of peaks belonging to harmonic frequencies of a base sine.
Theoretically you should use some sort of cepstrum (also known as the spectrum of a spectrum), which collapses the harmonics' periodicity in the spectrum down to the base frequency, and then use that with peakdetect. Or you could use existing tools that do that, such as Praat.
Be aware that speech analysis is usually done on frames of around 30 ms, stepping in 10 ms increments. You could further filter out false detections by ensuring a formant is detected in N sequential frames.
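Here is a minimal sketch of the band-energy idea from the first paragraph, built on the spectrogram output the question already uses. The frame/step sizes, the 1-8 kHz band, and the -40 dB silence threshold are illustrative assumptions rather than values from the answer, so tune them on your data:
% Frame-wise band-energy speech detection (illustrative parameters).
[audio, fs] = audioread(audioPath);            % audioread supersedes wavread
audio = audio(:,1);                            % use one channel if the file is stereo
frameLen = round(0.03*fs);                     % ~30 ms frames
step     = round(0.01*fs);                     % ~10 ms hop
[S, F, T, P] = spectrogram(audio, hamming(frameLen), frameLen-step, frameLen, fs);
speechBand  = (F >= 1000) & (F <= 8000);       % band where the answer says speech is strongest
bandEnergy  = sum(P(speechBand, :), 1);        % per-frame energy inside the speech band
totalEnergy = sum(P, 1);                       % per-frame total energy
isSpeech = (bandEnergy ./ totalEnergy > 0.5) & ...   % majority of energy in the band
           (10*log10(totalEnergy) > -40);            % and not near-silence (threshold is a guess)
% Convert the frame labels to start/end times of speech segments
d = diff([0 isSpeech 0]);
segStart = T(find(d == 1));
segEnd   = T(find(d == -1) - 1);
segStart and segEnd then give approximate start/end times (in seconds) for each detected speech segment.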
Why don't you use fft with fftshift:
%% Time specifications:
Fs = 100; % samples per second
dt = 1/Fs; % seconds per sample
StopTime = 1; % seconds
t = (0:dt:StopTime-dt)';
N = size(t,1);
%% Sine wave:
Fc = 12; % hertz
x = cos(2*pi*Fc*t);
%% Fourier Transform:
X = fftshift(fft(x));
%% Frequency specifications:
dF = Fs/N; % hertz
f = -Fs/2:dF:Fs/2-dF; % hertz
%% Plot the spectrum:
figure;
plot(f,abs(X)/N);
xlabel('Frequency (in hertz)');
title('Magnitude Response');
Why do you want to use complex stuff?
A nice and full solution may be found at https://dsp.stackexchange.com/questions/1522/simplest-way-of-detecting-where-audio-envelopes-start-and-stop
Have a look at the STFT (short-time Fourier transform) or (even better) the DWT (discrete wavelet transform), both of which estimate the frequency content in blocks (windows) of data, which is what you need if you want to detect sudden changes in amplitude of certain ("speech") frequencies.
Don't use a plain FFT of the whole recording, since it calculates the relative frequency content over the entire duration of the signal, making it impossible to determine when a certain frequency occurred in the signal.
If you still use the inbuilt STFT function (spectrogram), then to plot the per-frame maximum magnitude you can use the following command:
plot(T, max(abs(S), [], 1));