Matlab: Compare (nearly) the same signal with a different sampling rate

Here is my matlab problem:
I have two signals (time/distance excitations). They are called sig_one and sig_two. sig_one is a simulation signal and sig_two is an on-track measurement signal.
I want to compare both signals, but they have different sampling rates: my simulation signal is sampled at 600 Hz and the on-track measurement signal at approximately 100 Hz.
I want to compare both signals to check whether there is a difference. The simulation signal starts at 0 seconds and ends 200 seconds later, but the on-track measurement signal starts later, at 2.1431 seconds, and ends at 200.2463 seconds.
In conclusion, the two signals have different sizes (XXX*2 double arrays). How can I interpolate the measurement signal so that I can compare it with the simulation signal and plot both? Thank you for any answer!
sig_one:
[s] [mm]
DATA
0.0000000000e+00 -9.2010301749e+00
2.0000000000e-03 -9.2061877312e+00
4.0000000000e-03 -9.2118709835e+00
6.0000000000e-03 -9.2180018117e+00
8.0000000000e-03 -9.2244784046e+00
1.0000000000e-02 -9.2311885833e+00
1.2000000000e-02 -9.2380179051e+00
sig_two:
2.1428223 0.0060634273
2.152832 0.018430086
2.1628418 0.034733064
2.1728516 0.058855027
2.1828613 0.070850573
2.1928711 0.096776709
2.2028809 0.14273462
2.2128906 0.17558892
This is only the beginning of the two signals.
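A common approach (shown here as a sketch, assuming both signals are loaded as N-by-2 double arrays named sig_one and sig_two, with time in the first column) is to interpolate the measurement onto the simulation's time base with interp1 and restrict the comparison to the overlapping time interval:

```matlab
% Sketch: interpolate the measurement onto the simulation time base.
% Assumes sig_one and sig_two are N-by-2 double arrays: [time_s, value].
t_sim = sig_one(:,1);

% Compare only where both signals are defined (about 2.1431 s .. 200 s here).
mask  = t_sim >= sig_two(1,1) & t_sim <= sig_two(end,1);
t_cmp = t_sim(mask);

% Linear interpolation of the measurement at the simulation time stamps.
meas_interp = interp1(sig_two(:,1), sig_two(:,2), t_cmp, 'linear');

plot(t_cmp, sig_one(mask,2), t_cmp, meas_interp);
legend('simulation', 'measurement (interpolated)');
difference = sig_one(mask,2) - meas_interp;   % pointwise comparison
```

Restricting to the overlap avoids NaNs from extrapolating the measurement outside the interval it actually covers.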

Related

Does the duration of a signal affect its frequency component's amplitude? Also, does the sampling frequency affect the power of a signal?

I have two questions that are bugging me:
Does the duration of an audio signal affect the amplitude of that signal's frequency components? For example, I am recording the sound of a fan using a microphone. First I record for only 10 sec and convert the audio signal into a frequency spectrum. Then I record the same sound for 20 sec and again convert it into a frequency spectrum. In both cases the sound of the fan is the same, but does the duration of the signal affect the amplitude of the frequency components in the spectrum plot?
For example, I have 2 audio signals. The first is that same fan sound recorded for 10 sec with a sampling frequency of 5 kHz; the second is the same audio signal, but now with the sampling frequency changed to 15 kHz. I used MATLAB to check the power of both signals, and the power was the same for both, but I want to know why. The formula I used was Power = rms(signal)^2. To my mind, the second signal should have more power, because there are now more samples than in the first recording, and since those extra samples also have random amplitudes, the average shouldn't come out the same as for the first one. Am I thinking about it right?
Can anyone provide their thoughts? Thank You!
This answer is from: https://dsp.stackexchange.com/questions/75025/does-the-duration-of-a-signal-affect-its-frequency-components-amplitude-also
Power is energy per unit time. If you increase the duration, you increase the energy, but due to the normalization with time the power would be the same.
The DFT as given by
X[k] = Σ_{n=0}^{N−1} x[n] e^{−j2πnk/N}
will scale the frequency component by N as given by the summation over N samples. This can be normalized by multiplying the result by 1/N.
The frequency components of the signal will be at the same level in a normalized DFT (normalized by dividing by the total number of samples) for signal components that occupy one bin (pure tones), but the observed noise floor may be lower after a change in sampling rate: if the noise floor is limited by quantization noise, the total quantization noise (well approximated as white noise, meaning constant across all frequencies) is spread over a wider frequency range, so increasing the sampling rate lowers the contribution of quantization noise on a per-Hz basis (the noise density).

Further, the duration affects the frequency resolution of each bin in the frequency domain: for an unwindowed DFT, the equivalent noise bandwidth per bin is fs/N, where fs is the sampling rate and N is the number of bins. At a given sampling rate, increasing the number of bins increases the total duration and thus reduces the noise bandwidth per bin, causing the observed DFT noise floor to decrease (same noise power, just less of it measured in each bin).

Windowing, if done in the time domain, will reduce the signal level by the coherent gain of the window but increase the equivalent noise bandwidth, such that pure tones are reduced more than noise that is spread over multiple bins.
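As a quick numerical check of the power claim (a sketch with arbitrary values: a unit-amplitude tone recorded for 10 s at each of the two rates from the question), the rms-based power is independent of the sampling rate:

```matlab
% Sketch: power via rms() is the same at both sampling rates.
f0 = 50;                               % hypothetical tone frequency, Hz
for fs = [5000 15000]
    t = 0:1/fs:10 - 1/fs;              % 10 s recording
    x = sin(2*pi*f0*t);
    fprintf('fs = %5d Hz: power = %.4f\n', fs, rms(x)^2);  % 0.5 both times
end
```

More samples mean more total energy, but rms averages over the number of samples, so the power (energy per unit time) is unchanged.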

sinusoidal signal with varying frequency

I want to generate a variable-frequency sinusoidal signal. I am sweeping the frequency from 0 Hz to 30 Hz, but the output's frequency rises above 30 Hz during the first 1 second of simulation, and only after 1 second does it settle at 30 Hz.
Please suggest why the frequency of the sine wave is not following the commanded frequency.
It is because of the simulation Sample time; it is one second.
Change the Sample time to a small number such as 0.01.
It looks like you misread some parameters of chirp usage, particularly target frequency and target time.
When you work with a swept cosine sweep, target frequency will be reached at half of target time. So if you set target frequency = 30 Hz and target time = 1 s you will have 30 Hz at 0.5 s and 60 Hz at 1 s.
From Matlab documentation:
Target frequency is the instantaneous frequency of the output at half the Target time, tg/2.
Target time is the time at which the sweep reaches 2*f(tg).
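If the goal is simply a sweep that ends at 30 Hz after 1 s, the chirp function with a linear sweep behaves as expected (a sketch; the sample time of 1/fs = 0.001 s is chosen arbitrarily small):

```matlab
% Sketch: linear chirp whose instantaneous frequency reaches 30 Hz at t = 1 s.
fs = 1000;               % sample rate, i.e. a sample time of 0.001 s
t  = 0:1/fs:1;
y  = chirp(t, 0, 1, 30); % chirp(t, f0, t1, f1): f0 at t = 0, f1 at t = t1
plot(t, y);
```

With the swept-cosine option, by contrast, the same target values would put 30 Hz at 0.5 s and 60 Hz at 1 s, as described above.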

Calculate the RMS for a sinusoidal wave

I want to calculate the RMS value for a sinusoidal wave by taking only 40 samples per cycle. The sampling frequency is 2 KHz and the sine frequency is 50 Hz.
Please, can anyone give me a hint?
I created the simulink model shown in the image, using the following blocks:
Sine Wave - Simulink > Sources
RMS - Simscape > Power Systems > Specialized Technology > Control & Measurements
Mux - Simulink > Signal Routing
Scope - Simulink > Sinks
In order to configure the Sine Wave block I chose a Sample Based (discrete) Sine type with an Amplitude of 10 V and 40 Samples per period, as you requested. Since you want the sine to have a frequency of 50 Hz, the Sample time must be the period of the signal T = 1/(50 Hz) = 0.02s divided by 40, which yields 5e-4 s. The remaining parameters were left at the default values.
Then, I configured the RMS block with a Fundamental frequency of 50 Hz to match the frequency of the sine wave, and changed the Initial RMS value to 0 V. The remaining parameters were left at the default values.
Finally I simulated the model for 0.08 s (4 cycles). Since the sine wave has an amplitude of 10 V, the theoretical RMS value is the amplitude divided by the square root of 2, which yields 7.07 V. The readings obtained from the scope confirm this value (purple line).
Note how the RMS block needs to wait for one period of the signal before generating its first reading. During this period, the displayed reading is the Initial RMS value that we previously configured to 0 V.
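The same computation can be checked in plain MATLAB (a sketch using the stated parameters: 40 samples per 50 Hz cycle, i.e. fs = 2 kHz, amplitude 10 V):

```matlab
% Sketch: RMS of one 50 Hz cycle sampled 40 times (fs = 2 kHz).
fs = 2000; f = 50; A = 10;
t  = 0:1/fs:1/f - 1/fs;        % exactly one cycle -> 40 samples
x  = A*sin(2*pi*f*t);
rms_val = sqrt(mean(x.^2));    % 7.0711 V, i.e. 10/sqrt(2)
```

Averaging over an integer number of cycles gives the theoretical value exactly.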

How to generate a non-stationary signals

I modified the code I had in this question to generate non-stationary signals, as shown below. I just want to know: is this the correct way to generate a non-stationary signal?
Code
%% Time specifications:
Fs = 8000; % samples per second
dt = 1/Fs; % seconds per sample
StopTime = 1; % seconds
t = (0:dt:StopTime-dt); % seconds
x = (10)*cos(2*pi*3*(t-.2))...
+ (20)*cos(2*pi*6*(t-.7))...
+ (20)*cos(2*pi*2*(t-.5));
No, it is not the correct way, because a non-stationary signal means that the properties of the signal do not remain constant, i.e. the properties change over time; in your case, at every point the signal is the sum of three cosine waves with the same frequencies, amplitudes and phases.
In a non-stationary signal, different time intervals contain different frequency components. The signal you have generated is stationary, as at any instant of time it has the same frequency components. A speech signal recorded through a microphone has different components over time and is an example of a non-stationary signal. Another example of a non-stationary signal is the ultrasonic A-scan obtained in pulse-echo testing. What Narendra generated can be called a non-stationary signal.
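One way to make the example genuinely non-stationary (a sketch reusing the questioner's parameters, with the interval boundaries chosen arbitrarily) is to concatenate segments whose frequency content differs:

```matlab
%% A non-stationary variant: different frequencies in different intervals.
Fs = 8000; dt = 1/Fs;                       % samples per second
t1 = 0:dt:0.3-dt;                           % 0.0 .. 0.3 s
t2 = 0.3:dt:0.7-dt;                         % 0.3 .. 0.7 s
t3 = 0.7:dt:1-dt;                           % 0.7 .. 1.0 s
x  = [10*cos(2*pi*3*t1), 20*cos(2*pi*6*t2), 20*cos(2*pi*2*t3)];
t  = 0:dt:1-dt;                             % full time axis: plot(t, x)
```

Here the spectral content measured in any one interval differs from the others, which is the defining property the answer describes.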

I am trying to write code to cross-correlate a transmitted signal with a received signal to determine the delay in samples

Cross-correlation is to be used to measure the distance to an aircraft by transmitting a known wide-band signal and correlating the transmitted signal with the incoming signals received via the radar reception dish.
The transmitted signal x(n) is of length N=512 while the received signal y(n) is of length N=2048.
y(n)=kx(n-d)+w(n); where 'kx(n-d)' is x(n) delayed by d samples and attenuated by a factor k, and w(n) is reception noise.
I am trying to write a MATLAB program to cross-correlate x(n) with y(n) to determine the value of d, the number of samples of delay.
I also need to determine a suitable sampling frequency if the distance to the aircraft is to be determined within 50 km to an accuracy of 50 m, given that the transmitted and received data travel at the speed of light.
The easiest way to do this is with the xcorr function. This is part of the Signal Processing Toolbox for MATLAB, but an implementation should also be available for GNU Octave. I have not checked whether the Octave script is completely MATLAB compatible.
You can use the xcorr function as:
[correlation,lags] = xcorr(x,y);
The lag value can be found using
delay = lags(find(correlation==max(correlation)))
At the speed of light, the signal travels at 3 x 10^8 m/s, so to have a resolution of 50 m you should be sampling at at least 3e8 / 50 = 6 MHz. At this sampling rate, each lag corresponds to 1/6000000 of a second. If you multiply your delay by this value, you get the total time interval between transmission and reception of the signal. Multiply this time interval by the speed of light to get your distance.
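Putting the steps above together (a sketch with synthetic data; the true delay of 700 samples, the attenuation, and the noise level are arbitrary test values):

```matlab
% Sketch: recover the sample delay d with xcorr on synthetic data.
N = 512; d = 700; k = 0.5;            % arbitrary test values
x = randn(N, 1);                      % known wide-band transmitted signal
y = zeros(2048, 1);
y(d+1 : d+N) = k * x;                 % delayed, attenuated copy
y = y + 0.05*randn(size(y));          % reception noise w(n)
[correlation, lags] = xcorr(y, x);    % note the order: (received, sent)
[~, idx] = max(correlation);
delay = lags(idx);                    % estimates d (700 here)
```

With the arguments ordered (y, x), the lag at the correlation peak comes out positive; swapping them flips its sign.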
You can use generalized Cross correlation -Phase transform GCC PHAT
The following is the MATLAB code for it
function delay = GCCPHAT_testmode(b1, b2)
% Estimate the delay between b1 and b2 via GCC-PHAT.
b1f  = fft(b1);
b2f  = fft(b2);
b2fc = conj(b2f);
neuma = b1f .* b2fc;              % cross spectrum
deno  = abs(b1f .* b2fc);         % PHAT weighting: keep phase only
GPHAT  = neuma ./ deno;
GPHATi = ifft(GPHAT);
[~, ind] = max(abs(GPHATi));
delay = ind - 1;                  % lag in samples (MATLAB is 1-based)
end
We can omit the 'find' function in MATLAB; the command can be shortened to
delay = lags(correlation==max(correlation))
'xcorr' is suited to long vectors;
'gcc' is preferable for frame-by-frame processing.
Aj463's comment above is good, indeed GCC-PHAT is better than unweighted correlation for estimating delay of wide-band signals.
I would suggest a small improvement to the code posted above: add a small value epsilon (epsilon -> 0) to the denominator, in order to avoid possible division by zero.
Thus, I would change the line
deno=abs((b1f).*(b2fc));
to
deno=abs((b1f).*(b2fc)) + epsilon;