Convert ambient response signal into impulse response using Random Decrement technique (RDT)? - matlab

I was looking at ambient response signals; however, I am not sure how to convert the response into an impulse response. I then found out about a method called the Random Decrement Technique (RDT) from the following link:
Random Decrement Technique (RDT)
But I am not sure how to set up the RDT part to acquire the impulse response. Can someone advise how to approach this part and how to set up the variables required to run the code? Or is there another way to acquire an impulse response other than the RDT file (from the link)?
Also, how can I obtain the interpolation coefficient to be used in the RDT, and is there a method to tell the difference between impulse and random signals?
Here is the code used to plot the acceleration plots:
% Plot the acceleration time history for each of the 13 sensors:
for ci = 1:13
    % Display the signal for this sensor:
    sen_val = ci;
    thist_len = length(thist(:,ci));
    t = linspace(0, 64, thist_len);   % the record is 64 s long
    figure(sen_val);
    plot(t, thist(:,ci));
    xlabel('Time (s)');
    ylabel('Acceleration (m/s^2)');
    xlim([0, t(end)]);
    title("Single video Sensor " + sen_val + " Data");
    pause(10);   % pause so each figure can be inspected before the next one
end
The data and description can be found at the following link:
Convert ambient response signal into impulse response using Random Decrement technique (RDT)?
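For orientation, below is a minimal sketch of how an RDT signature is commonly computed: choose a trigger level, collect every segment of the response that starts where the signal up-crosses that level, and average the segments. The averaged signature decays like a free-decay (impulse-like) response that can then be used for modal identification. The channel choice, trigger level, and 2 s segment length are illustrative assumptions, not values taken from the linked RDT file or from the data.
% Minimal RDT sketch (not the linked RDT file): average segments of the
% ambient response that begin each time the signal up-crosses a trigger level.
x   = thist(:,1) - mean(thist(:,1));   % one sensor channel, zero-mean
fs  = length(x)/64;                    % sampling rate implied by the 64 s record
trig = std(x);                         % a common choice of trigger level
seg_len = round(2*fs);                 % 2 s of decay per segment (tune as needed)
% indices where the signal up-crosses the trigger level
idx = find(x(1:end-1) < trig & x(2:end) >= trig);
idx = idx(idx + seg_len - 1 <= length(x));   % keep only full-length segments
% average the triggered segments -> random decrement signature
D = zeros(seg_len, 1);
for k = 1:numel(idx)
    D = D + x(idx(k):idx(k)+seg_len-1);
end
D = D / numel(idx);
t_seg = (0:seg_len-1)'/fs;
plot(t_seg, D); xlabel('Time (s)'); ylabel('RD signature');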

Related

Performing FFT on a time-domain signal using MATLAB

I have been trying to perform a signal processing operation (FFT) on a time-domain signal that I obtained from another piece of software. However, I keep getting a wrong answer (figure 1) when I use the following code. The sampling frequency of the signal is 1600 Hz.
%Healthy engine IAS
DATAH=xlsread('IASH_5seconds_1500rpm_FS1600.xlsx');
xh=DATAH(:,1);
yh=DATAH(:,2);
plot(xh,yh);
hold on;
legend('a','b')
%% performing fft
Fs=1600;
L=length(yh);
NFFT = 2^nextpow2(L);
X = fft(yh,NFFT)/L;
f = Fs*linspace(0,1,NFFT);
plot(f,2*abs(X))
I tried the Amesim software to get its frequency spectrum, and that came out right (see figure 2), but I want to do this with MATLAB.
I don't have any clue where my mistake is. What makes it odder is that if I simply change the signal, the code works perfectly, so I thought my signal might be the problem. Does my signal have a problem (you can see the signal in figure 3)?
Figure 1: the incorrect spectrum produced by the MATLAB code above.
Figure 2: the frequency spectrum obtained from Amesim.
Figure 3: the time-domain signal.
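For reference, assuming the issue is only the frequency axis and scaling, a common way to build a single-sided amplitude spectrum (so the axis runs from 0 to Fs/2 rather than 0 to Fs) looks like the sketch below; this is a sketch of the usual construction, not a diagnosis of the specific data in the question.
% Single-sided amplitude spectrum sketch (same Fs and yh as above)
Fs = 1600;
L  = length(yh);
NFFT = 2^nextpow2(L);
X  = fft(yh, NFFT)/L;
f  = Fs/2*linspace(0, 1, NFFT/2+1);    % frequency axis from 0 to Fs/2
plot(f, 2*abs(X(1:NFFT/2+1)))          % single-sided amplitude spectrum
xlabel('Frequency (Hz)'); ylabel('|Y(f)|')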

How to generate smooth filtered envelope on EMG data in Matlab

I'm new to analysing EMG data and would appreciate some carefully explained help.
I would like to generate a smooth, linear envelope of my EMG data (50 kHz sampling rate) like the one published in this paper: https://openi.nlm.nih.gov/detailedresult.php?img=PMC3480942_1743-0003-9-29-3&req=4
My end goal is to be able to analyze the relationship between EMG activity (output) and action potentials fired from upstream neurons (putative input) recorded at the same time.
Even though this paper lists the filtering methods quite clearly, I do not understand what they mean or how to perform them in MATLAB, which is the analysis tool I have available to me.
In the code I have written so far, I can remove the DC offset as well as rectify my data:
x = EMGtime_data;
y = EMGvoltage_data;
% Remove the DC offset
y2 = detrend(y);
% Rectify the EMG signal
rec_y = abs(y2);
plot(x, rec_y)
But then I am not sure how to proceed.
I have tried the envelope function, but it is not as smooth as I would like:
For instance, if I used the following:
envelope(rec_y,2000,'rms')
I get this (which also doesn't seem to care that the data is rectified):
Even if I were to accept the envelope function, I'm not sure how to access just the processed envelope data so that I can adjust the plot (i.e. change the y-range) or analyse the data further for onset and offset of the signal, since the results of this function seem to be coupled with the original trace.
I have also come across fastrms.m, which seems promising. Unfortunately, I do not understand how to implement this function, since the general explanation is over my head and the example code lacks any defined variables (so I don't know where to plug in my own data!).
The example code from the fastrms.m File Exchange entry is:
Fs = 200; T = 5; N = T*Fs; t = linspace(0,T,N);
noise = randn(N,1);
[a,b] = butter(5, [9 12]/(Fs/2));
x = filtfilt(a,b,noise);
window = gausswin(0.25*Fs);
rms = fastrms(x,window,[],1);
plot(t,x,t,rms*[1 -1],'LineWidth',2);
xlabel('Time (sec)'); ylabel('Signal')
title('Instantaneous amplitude via RMS')
I will be eternally grateful for help in understanding how to filter and smooth EMG data!
To analyse EMG signals in the time domain, researchers use a combination of rectification and low-pass filtering, which is also called finding the "linear envelope" of the signal.
As mentioned both in the sentence above and in the caption of your attached article figure, to plot the overlaid signal you can simply low-pass filter your rectified signal at a specific cutoff frequency.
In your attached article the signal was filtered at 8 Hz.
For a better understanding of the art of EMG signal analysis, I think this document could help you a lot (link).
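As a concrete illustration of that advice, here is a minimal sketch of the linear envelope. The 8 Hz cutoff follows the article, the 50 kHz sampling rate is taken from your description, and the second-order Butterworth filter with zero-phase filtering is a common choice rather than something prescribed by the article.
% Linear envelope sketch: detrend, rectify, then low-pass filter at 8 Hz
Fs_emg = 50000;                             % stated sampling rate of the EMG data
fc     = 8;                                 % cutoff frequency from the article, in Hz
y2     = detrend(y);                        % remove DC offset
rec_y  = abs(y2);                           % full-wave rectification
[b,a]  = butter(2, fc/(Fs_emg/2), 'low');   % 2nd-order Butterworth low-pass (a common choice)
env_y  = filtfilt(b, a, rec_y);             % zero-phase filtering avoids a time shift
plot(x, rec_y, x, env_y, 'LineWidth', 1.5)
legend('rectified EMG', 'linear envelope')
The env_y vector is then a standalone signal that you can re-plot, threshold, or compare against your neural recordings for onset/offset analysis.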

Unexpected FFT output of the impulse response of an integrator - MATLAB

I am trying to get the frequency response of any transfer function using the Fourier transform of the impulse response of the system. It works pretty well for most of the cases I have tested, but I still have a problem with transfer functions that contain an integrator (e.g. 1/s, (4s+2)/(3s^2+s), etc.).
Let's take the example of a pure integrator with H(s) = 1/s. The impulse response obtained is a step function, as expected, but the Fourier transform of that impulse response does not give the expected theoretical result. Instead it gives very small values and does not lead to the classic characteristics of an integrator (-20 dB/decade magnitude and -90 deg phase) after processing.
Maybe a few lines of codes can be helpful if I was not clear enough:
h = tf(1,[1 0]);
t_step = .1;
t = [0 : t_step : 100000]';
[y,t1] = impulse(h,t);
y_fft = fft(y);
Do you know where this problem may come from? If you need further information, please let me know. I am working on MATLAB R2013b.
As mentioned in my comment, the problem comes down to two things:
fft assumes a periodic signal, i.e. an infinite repetition of the provided discrete signal;
you should also include the response for negative times, i.e. before the pulse occurred.
h = tf(1,[1 0]);
t_step = 1;
t = [0 : t_step : 999]';
[y,t1] = impulse(h,t);
y = [y; zeros(1000, 1)];   % the (zero) response before the pulse; the fft's periodicity places it at negative times
y_fft = fft(y);
figure
semilogx(db(y_fft(1:end/2)), 'r.');                      % magnitude in dB
figure
semilogx(180/pi*angle(y_fft(y_fft(1:end/2)~=0)), 'r');   % phase in degrees, skipping the zero-valued bins
Further remarks
Note that due to the periodicity assumed by the fft (and the zero padding of y), half of the values are minus infinity in dB; I did not plot those in order to obtain a nicer result.
Note that how much the fft differs from the continuous Fourier transform depends on the actual Fourier transform of the impulse response; in particular, aliasing may be a problem.

Initialize a Room Impulse Response using reverberation time(T60)

I am doing Speech dereverberation using Non-negative matrix factorization.
To be precise, I am working on this paper by Nasser (paris.cs.illinois.edu/pubs/nasser-icassp2015.pdf), which involves obtaining an optimal solution for a Room Impulse Response (Equation 10). For that I need to initialize H first. He mentions in the paper that "Each row of H was initialized identically using a linearly decaying envelope" (Section 4, at the end of page 3). I need to initialize an impulse response H such that its reverberation time (T60) is 300 ms. Let the length of H be 10.
This is what I tried, but it's an arbitrary solution.
x=1:10;
h = exp(-x/2);
H = repmat(h,600,1);
This gives me an H of dimension 600 × 10.
But I don't understand how to use T60 for the initialization in MATLAB.
Hmmm. If you're trying to create a reverberation effect, then H should just be a vector, but it seems to me that you have a matrix with 10 columns. When creating a reverb effect, you generally take your impulse response and convolve it with your audio signal. In this case, h seems pretty arbitrary and I don't know if it will give you the amount of reverb you are looking for. However, if you wanted to implement h as an impulse response for a reverb, all you have to do is convolve your audio signal with the impulse response.
[x, fs] = audioread('myaudio.wav');
y = conv(x,h);
If you had an impulse response from a recording and an impulse response from the room the recording was made, you could apply deconvolution to remove the reverb using the deconv function in Matlab.
You should be able to work out a formula so that h is just an exponentially decaying vector that takes roughly 300 ms to die away (although actually hearing that may be tricky); a sketch of one way to do this follows.
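One possible reading (a sketch, not the paper's method): T60 is the time for the level to drop by 60 dB, so an exponential envelope exp(-t/tau) needs tau = T60/ln(1000) ≈ T60/6.91. The time step per tap below is a hypothetical value; replace it with whatever time resolution each of the 10 taps actually represents in your model (e.g. the STFT hop size).
% Sketch: exponential envelope that decays by 60 dB over T60 = 300 ms
T60 = 0.3;               % reverberation time in seconds
dt  = 0.03;              % assumed time per tap, so 10 taps span ~300 ms (hypothetical)
n   = 0:9;               % 10 taps
tau = T60/log(1000);     % chosen so that 20*log10(exp(-T60/tau)) = -60 dB
h   = exp(-n*dt/tau);    % decaying envelope
H   = repmat(h, 600, 1); % same 600 x 10 layout as in the question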
If you want to get really advanced with your impulse response calculations, I recommend trying an image-source approach to creating your impulse response. Check out the following paper (old, but golden):
http://www.umiacs.umd.edu/~ramani/cmsc828d_audio/AllenBerkley79.pdf
If you're interested in blind deconvolution, this might be of interest to you.
https://www.academia.edu/1370250/Predictive_deconvolution_and_kurtosis_maximization_for_speech_dereverberation
A slight caveat: deconvolution and room reverberation are a very tricky business. The image-source model given above, while interesting and effective, doesn't really capture the full complexity of reverberation and dereverberation, and there are several things that can affect the sound (standing waves, etc.). I can't guarantee that simply calculating the RT60 with a decaying exponential vector and applying deconvolution will yield amazing results. Still, it's worth a shot and lots of fun!

Matlab: Finding dominant frequencies in a frame of audio data

I am pretty new to MATLAB and I am trying to write a simple frequency-based speech detection algorithm. The end goal is to run the script on a wav file and have it output start/end times for each speech segment. If I use the code:
fr = 128;
[ audio, fs, nbits ] = wavread(audioPath);
spectrogram(audio,fr,120,fr,fs,'yaxis')
I get a useful frequency intensity vs. time graph like this:
By looking at it, it is very easy to see when speech occurs. I could write an algorithm to automate the detection process: for each x-axis frame, figure out which frequencies are dominant (have the highest intensity), test whether enough of the dominant frequencies are above a certain intensity threshold (the difference between yellow and red on the graph), and then label the frame as either speech or non-speech. Once the frames are labeled, it would be simple to get start/end times for each speech segment.
My problem is that I don't know how to access that data. I can use the code:
[S,F,T,P] = spectrogram(audio,fr,120,fr,fs);
to get all of the spectrogram's outputs, but the results of that call don't make any sense to me. The dimensions of the S, F, T, P arrays and matrices don't seem to correlate with anything I see on the graph. I've looked through the help files and the API, but I get confused when they start throwing around algorithm names and acronyms; my DSP background is pretty limited.
How could I get an array of the frequency intensity values for each frame of this spectrogram analysis? I can figure the rest out from there, I just need to know how to get the appropriate data.
What you are trying to do is called speech activity detection. There are many approaches to this; the simplest might be a band-pass filter that passes the frequencies where speech is strongest, between 1 kHz and 8 kHz. You could then compare the total signal energy with the band-pass-limited energy and, if the majority of the energy is in the speech band, classify the frame as speech (a sketch of this appears at the end of this answer). That's one option, but there are others too.
To get the frequencies at the peaks you could use an FFT to get the spectrum and then use peakdetect.m. But this is a very naïve approach, as you will get a lot of peaks belonging to harmonics of the base frequency.
In theory you should use some sort of cepstrum (also known as the spectrum of the spectrum), which collapses the periodicity of the harmonics in the spectrum back to the base frequency, and then use that with peakdetect. Or you could use existing tools that do this, such as Praat.
Be aware that speech analysis is usually done on frames of around 30 ms with a 10 ms step. You could further filter out false detections by requiring that a formant is detected in N sequential frames.
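A rough sketch of the band-energy idea mentioned above, using the spectrogram outputs from the question. The 1 kHz to 8 kHz band follows this answer's suggestion, while the 0.5 energy-fraction threshold is an illustrative value you would need to tune:
% Classify each spectrogram frame by the fraction of its energy in a speech band
[S,F,T,P] = spectrogram(audio, fr, 120, fr, fs);   % P: power, one column per frame
band      = F >= 1000 & F <= 8000;                 % speech band suggested above (Hz)
bandE     = sum(P(band,:), 1);                     % band-limited energy per frame
totalE    = sum(P, 1);                             % total energy per frame
isSpeech  = bandE./totalE > 0.5;                   % illustrative threshold, tune to taste
plot(T, double(isSpeech)); ylim([-0.1 1.1])
xlabel('Time (s)'); ylabel('speech flag')
Runs of ones in isSpeech then give the start/end times directly from T.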
Why don't you use fft with fftshift:
%% Time specifications:
Fs = 100; % samples per second
dt = 1/Fs; % seconds per sample
StopTime = 1; % seconds
t = (0:dt:StopTime-dt)';
N = size(t,1);
%% Sine wave:
Fc = 12; % hertz
x = cos(2*pi*Fc*t);
%% Fourier Transform:
X = fftshift(fft(x));
%% Frequency specifications:
dF = Fs/N; % hertz
f = -Fs/2:dF:Fs/2-dF; % hertz
%% Plot the spectrum:
figure;
plot(f,abs(X)/N);
xlabel('Frequency (in hertz)');
title('Magnitude Response');
Why do you want to use complex stuff?
A nice and complete solution can be found at https://dsp.stackexchange.com/questions/1522/simplest-way-of-detecting-where-audio-envelopes-start-and-stop
Have a look at the STFT (short-time Fourier transform) or, even better, the DWT (discrete wavelet transform), both of which estimate the frequency content in blocks (windows) of data; this is what you need if you want to detect sudden changes in the amplitude of certain ("speech") frequencies.
Don't use a single FFT over the whole recording, since it calculates the relative frequency content over the entire duration of the signal, making it impossible to determine when a certain frequency occurred.
If you still use the built-in STFT function (spectrogram), then to plot the per-frame maximum you can use the following command:
plot(T,(floor(abs(max(S,[],1)))))