Energy from Power - MATLAB

I am working in MATLAB and I have two arrays: the current through and the voltage across a capacitor as I connect it to and disconnect it from a voltage source. Of course I have a time_stamp vector containing all the times at which I made the measurements.
I want to plot the power and energy associated to those arrays.
For the power I just need to do:
z1 = plot(time_stamp_ms,measured_voltage.*current,'-b','LineWidth',1);
Right?
How can I plot the energy instead?
Thanks for your time.

I think your power looks right. For energy, it's just power multiplied by the elapsed time, so:
dt = diff(time_stamp_ms);            % time between samples (ms)
power = measured_voltage.*current;   % instantaneous power (W)
energy = dt.*power(2:end);           % energy in each interval (mJ, since dt is in ms)
This gives the energy used between each pair of time steps. If you want the cumulative energy, then:
energy_cum = cumsum(energy);
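If you would rather avoid the off-by-one alignment between dt and power, here is a minimal sketch using MATLAB's built-in trapezoidal integrator, assuming time_stamp_ms really is in milliseconds and reusing the variable names above:
power = measured_voltage.*current;                 % instantaneous power (W)
energy_cum = cumtrapz(time_stamp_ms/1000, power);  % cumulative energy (J)
z2 = plot(time_stamp_ms, energy_cum, '-r', 'LineWidth', 1);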

Related

Integrate a function using trapz with datetime - Calculate Energy with a current vector

I have a problem with my calculation tool. I have a vector of current values and a time vector in datetime format. Now I need the integral for the overall power consumption, and I'm struggling.
I tried to integrate over the vector with trapz, but the outcome isn't realistic for this vector. I tested different vectors and signals, but got the same result.
My time vector wasn't used in this example.
%% Power Calculate
Energy = Ch1L1*230;             % P = U*I, scale vector with 230 V
EnergySum = trapz(Energy)/1000  % output: power in kW
Does anyone know a solution?
I think I misunderstand how time enters this result. Normally the integral should give the overall power consumption for the time over which I logged the data.
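For what it's worth, a sketch of the likely fix: trapz needs the sample times as its first argument, otherwise it assumes unit spacing. A datetime vector can be converted to elapsed seconds first. The name timeVec below is an assumption for the datetime vector from the question:
t_sec = seconds(timeVec - timeVec(1));  % elapsed time in seconds (timeVec: assumed name)
Power = Ch1L1*230;                      % P = U*I, instantaneous power (W)
Energy_J = trapz(t_sec, Power);         % energy in joules (W*s)
Energy_kWh = Energy_J/3.6e6             % 1 kWh = 3.6e6 J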

implementation of a Butterworth filter in MATLAB

I have a 3-axis accelerometer.
As we know, the measured acceleration is the sum of static acceleration (gravity) and dynamic acceleration.
My goal is to extract the gravity acceleration, which will show me the orientation of the device.
I will apply a Butterworth filter to extract the gravity acceleration, but I have a problem choosing the cut-off frequency and the filter order.
T = 0.16 s is the sampling period, so Fs = 1/0.16 = 6.25 Hz is the sampling rate. Is this correct?
After reading a few articles, I found that the cut-off frequency varies between 0.1 and 0.5 Hz; here I will choose 0.5 Hz (I don't know what their choices were based on).
This is the program I will execute in MATLAB to extract the gravity acceleration from the three axes:
Fc = 0.5;    % cut-off frequency (Hz)
Fs = 6.25;   % sampling rate (Hz), 1/0.16
order = 4;   % filter order
[b,a] = butter(order, Fc/(Fs/2), 'low');  % cutoff normalized to the Nyquist frequency
x = filter(b,a,x0);
y = filter(b,a,y0);
z = filter(b,a,z0);
What you are doing is "slowing" the measurements down with a Butterworth filter; in practice you try to get rid of the "fast" part. In the frequency domain this means you want a lowpass (low frequency = slow signal goes through, while high = fast gets filtered out). So I guess with what you know already, you should be able to find a reasonable value.
Generally, for estimating angles you have gyroscopes at hand. Then you would rather go for Kalman filtering, since it doesn't add as much delay to your measurements (if the application is time-critical).
As already commented, keep in mind that you are dealing with sampled data (the Nyquist frequency is a natural restriction).
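If you are processing recorded data offline (an assumption about your use case, not something stated in the question), one way around the lowpass delay is zero-phase filtering with filtfilt, which runs the filter forward and backward:
Fc = 0.5; Fs = 6.25; order = 4;           % same values as in the question
[b,a] = butter(order, Fc/(Fs/2), 'low');
gravity_x = filtfilt(b, a, x0);           % zero-phase lowpass: no added delay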
The cut-off frequency depends on the frequency range of the signal you want to extract from the noisy signal. Assume you carry your mobile phone in a trouser pocket and collect raw data during activities: then your cut-off frequency is directly related to the rate of the human activity (e.g. running, walking, jogging, etc.).
Note: as the filter order N increases, the actual frequency response approaches the ideal one.
The filter order will also depend on the signal you want to extract and analyse; see the sketch after the reference below.
Refer to this publication for more info:
"A gyroscopic data based pedometer algorithm" by Jayalath, S.
Link:
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6553971&queryText%3DA+gyroscopic+data+based+pedometer+algorithm
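To see the order note concretely, here is a quick sketch (reusing the Fc and Fs values from the question) comparing the magnitude responses of a 2nd- and an 8th-order Butterworth lowpass:
Fc = 0.5; Fs = 6.25;
[b2,a2] = butter(2, Fc/(Fs/2), 'low');
[b8,a8] = butter(8, Fc/(Fs/2), 'low');
[h2,f] = freqz(b2, a2, 512, Fs);   % magnitude response on a 512-point grid
[h8,~] = freqz(b8, a8, 512, Fs);
plot(f, abs(h2), f, abs(h8));      % the 8th-order response rolls off much more sharply
legend('order 2','order 8'); xlabel('Frequency (Hz)'); ylabel('|H(f)|');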

Scipy periodogram terminology confusion

I am confused about the terminology used in scipy.signal.periodogram, namely:
scaling : { 'density', 'spectrum' }, optional
Selects between computing the power spectral density ('density')
where Pxx has units of V**2/Hz if x is measured in V and computing
the power spectrum ('spectrum') where Pxx has units of V**2 if x is
measured in V. Defaults to 'density'
(see: http://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.periodogram.html)
1) A few tests show that the result for the 'density' option depends on the signal length, the window length, and the sampling frequency (it grows when the signal length increases). How come? I would say that it is exactly the density that should not depend on these things. If I take a longer signal, I should just get a more accurate estimate, not a different result. Not to mention that the dependence on window length is also very surprising.
The result diverges in the limit of an infinite signal, which could be a feature of energy, but not of power. Shouldn't the periodogram converge to the true theoretical PSD as the length increases? If so, am I supposed to perform another normalisation outside of the signal.periodogram method?
2) To the contrary, I see that the alternative option 'spectrum' gives what I would previously have called a power spectral density: a result independent of the segment and window length and consistent with theoretical calculation. For instance, for A*sin(2*pi*f*t), a two-sided solution yields two peaks at -f and f, each of height 0.25*A^2.
There is a lot of literature on this subject, but I get the impression that there is also a lot of incompatible terminology, so I will be thankful for any clarification. The straightforward question is how to interpret these options and their units. (I am used to seeing V^2/Hz labeled as "power spectral density".)
Let's take a real array called data, of length N, with sampling frequency fs. Let's call the time bin dt = 1/fs, and T = N * dt. In the frequency domain, the frequency bin is df = 1/T = fs/N.
The power spectrum PS (scaling='spectrum' in scipy.periodogram) is calculated as follows:
import numpy as np
import scipy.fft as fft

dft = fft.fft(data)         # DFT of the signal
PS = np.abs(dft)**2 / N**2  # power spectrum, units of V^2
It has the units of V^2. It can be understood as follows. By analogy to the continuous Fourier transform, the energy E of the signal is:
E := np.sum(data**2) * dt = 1/N * np.sum(np.abs(dft)**2) * dt
(by Parseval's theorem). The power P of the signal is the total energy E divided by the duration of the signal T:
P := E/T = 1/N**2 * np.sum(np.abs(dft)**2)
The power P only depends on the Discrete Fourier Transform (DFT) and the number of samples N, not directly on the sampling frequency fs or the signal duration T. The power per frequency channel, i.e., the power spectrum PS, is thus given by the formula above:
PS = np.abs(dft)**2 / N**2
For the power spectrum density PSD (scaling='density' in scipy.periodogram), one needs to divide the PS by the frequency bin of the DFT, df:
PSD := PS/df = PS * N * dt = PS * N / fs
and thus:
PSD = np.abs(dft)**2 * dt / N
This has the units of V^2/Hz = V^2 * s, and now depends on the sampling frequency. That way, integrating the PSD over the frequency range gives the same result as summing the individual values of the PS.
This should explain the relations you see when changing the window, the sampling frequency, or the duration.
scipy.signal.periodogram uses the scipy.signal.welch function with zero overlap. Therefore, the scaling is the same as the one provided by the welch function, density or spectrum.
In the case of density scaling, the amplitude will vary with window length: the longer the window, the finer the frequency resolution, i.e., the smaller delta_f is. Since the estimated density is an average, the smaller delta_f is, the less zero-energy content is included in the averaging.
As you have mentioned, spectrum scaling is an integration of the energy density over the spectrum to produce the energy; therefore, integrating over zero values does not affect the final value.
The Fourier transform actually requires finite energy over an infinite duration (like a decaying signal). So if you just make your time series longer by "duplicating" it, the energy becomes infinite as the duration does.
My main confusion was about the "spectrum" option of scipy.signal.periodogram, which seems to give a constant spectrum even as the time series becomes longer.
Normally, 0.5*A^2 = S(f)*delta_f, where S(f) is the power spectral density. S(f)*delta_f, representing power, is constant if A is constant. But with a longer time series, delta_f (the frequency increment) is reduced accordingly by the FFT procedure: for example, a 100 s time series leads to delta_f = 0.01 Hz, while a 1000 s time series has delta_f = 0.001 Hz. S(f), being a density, changes accordingly.

Time delay estimation using cross-correlation

I have two sensors separated by some distance, which receive a signal from a source. The signal in its pure form is a sine wave at a frequency of 17 kHz. I want to estimate the TDOA (time difference of arrival) between the two sensors. I am using cross-correlation, and below is my code:
x1;                            % signal as received by sensor 1
x2;                            % signal as received by sensor 2
len = length(x1);
nfft = 2^nextpow2(2*len-1);    % note: computed but never passed to fft below
X1 = fft(x1);
X2 = fft(x2);
X = X1.*conj(X2);              % cross-power spectrum
m = ifft(X);                   % circular cross-correlation
r = [m(end-len+1) m(1:len)];   % attempt to rearrange negative/positive lags
[a,i] = max(r);
td = i - length(r)/2;
I am filtering my signals x1 and x2 by removing all frequencies below 17 kHz.
I am having two problems with the above code:
1. With the sensors and source at the same place, I get a different value of td each time. I am not sure what is wrong. Is it because of the noise? If so, can anyone please provide a solution? I have read many papers and been through other questions on Stack Overflow, so please answer with code along with theory instead of just stating the theory.
2. The value of td sometimes does not match the delay calculated using xcorr. What am I doing wrong? Below is my code for td using xcorr:
[xc,lags] = xcorr(x1,x2);
[m,i] = max(xc);
td = lags(i);
One problem you might have is the fact that you only use a single frequency. At f = 17 kHz and an estimated speed of sound v = 340 m/s (I assume you use ultrasound), the wavelength is lambda = v / f = 2 cm. This means that your measurement has an unambiguity range of only 2 cm (sorry, cannot find a good link; google it yourself): you already need to know the distance to better than 2 cm before the result of your measurement can refine it.
Think of it another way: the cross-correlation of two perfect sines is a "comb" of peaks with spacing equal to the wavelength. If the signals overlap perfectly and you displace one of them by one wavelength, they still overlap perfectly. So you first have to know which of these peaks is the right one; otherwise a different peak can be the highest each time, purely from random noise. Did you plot the calculated cross-correlation before trying to blindly find the maximum?
This problem is the same as in interferometry, where it is easy to measure small distance variations with a resolution finer than a wavelength by measuring phase differences, but you have no idea about the absolute distance, since you do not know the absolute phase.
The solution to this is actually easy: let your source generate more frequencies. Even (band-limited) white noise should work without problems when calculating cross-correlations, and it removes the ambiguity problem. You can see white noise as a collection of sines: the cross-correlation of each of them generates a comb, but each with a different spacing, and when you add all those combs together they add up significantly only at a single point, the delay you are looking for!
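To make that concrete, here is a small self-contained sketch (all parameters invented for illustration) showing that band-limited noise gives one unambiguous correlation peak:
fs = 96e3;                    % sampling rate (Hz), assumed
N = 8192;                     % number of samples
true_delay = 25;              % delay in samples, to be recovered
s = randn(N,1);               % white-noise source
b = fir1(128, [0.1 0.4]);     % band-limit the noise (normalized passband)
s = filter(b, 1, s);
x1 = s + 0.1*randn(N,1);                                            % sensor 1
x2 = [zeros(true_delay,1); s(1:end-true_delay)] + 0.1*randn(N,1);   % delayed sensor 2
[xc, lags] = xcorr(x2, x1);
[~, i] = max(xc);
td = lags(i)                  % single sharp peak, close to true_delay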
White noise, maximum length sequences, or other non-periodic signals should be used as the test signal for time-delay measurement using cross-correlation. This is because non-periodic signals have only one cross-correlation peak, so there is no ambiguity in determining the time delay. It is possible to use burst-type periodic signals to do the job, but with degraded SNR. If you have to use a continuous periodic signal as the test signal, then you can only measure a time delay within one period of that signal. This should explain why, in your case, a lower-frequency sine wave works as the test signal while a higher-frequency one does not. This is demonstrated in these videos: https://youtu.be/L6YJqhbsuFY, https://youtu.be/7u1nSD0RlwY

interpolation of fortnightly annual temperature data into hourly measurements in MATLAB

I have a dataset of annual temperature measurements recorded at fortnightly intervals. The data looks similar to the following:
t = 1:14:365;                       % fortnightly sample days
% GENERATE DATA
y = 1 + (30-1).*rand(1,length(t));  % random data (not used below)
y1 = 20*sin(2*pi*t/max(t));         % annual variation, °C
keep = y1 >= 0;                     % keep the positive half of the cycle...
y1 = y1(keep);
time = t(keep);                     % ...together with a matching time base
plot(time,y1,'-o');
where it clearly follows an annual temperature cycle.
From this, I am wondering whether it is possible to add a sine function (representing a diurnal temperature range) onto the data. For example, if we were to interpolate the fortnightly series to 8760 measurements, i.e. hourly measurements, then for the series to be believable it would need to show a diurnal temperature cycle in addition to the annual one. Furthermore, the diurnal cycle should be a function of the temperature at that time of year, i.e. it should be greater in summer than in winter.
So maybe it would be better to first use linear interpolation to bring the data to hourly intervals and then add the sine function. Is there a method for writing this into a script, or does anyone have an opinion on how to achieve this accurately?
You could first interpolate your data (down to 1-hour steps) using something like
x = 1:1/24:365;
T_interp = interp1(time,y1,x,'spline');
(check out the MATLAB documentation for interp1, example 2)
and then add a sine onto it. The following is a sine of period 1 day (24 hours), with amplitude A and a minimum at 3 am:
T_diurn = -A*sin(2*pi*x+(3/24)*2*pi);
Then
T_total = T_diurn + T_interp;
First: you know that good-looking plots are the most misleading things in existence? Interpolating data gathered every 14 days so that it looks like data collected every hour is considered at least bad practice in most circles...
Having said that, I would use splines to do the interpolation -- they are a lot more flexible when it comes to changing from fortnightly and hourly to some arbitrary other combination, plus the annual temperature variation will be a lot smoother.
Here's how:
% Create spline through the fortnightly data
pp = spline(time, y1);
% define diurnal variation (this one is minimal at 4 AM)
A = 5;                                   % diurnal amplitude in °C (example value)
T_diurn = @(t) -A*cos(2*pi*(t-(4/24)));
% plot example
t = 150 : 1/24 : 250;                    % hourly steps, days 150-250
plot(t, ppval(pp,t)+T_diurn(t), 'b')
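If you also want the diurnal amplitude to follow the season, as the question asks, one possibility (the scaling rule here is my own assumption, not something from the answers above) is to derive the amplitude from the interpolated annual temperature:
pp = spline(time, y1);                   % annual cycle through the fortnightly data
th = 1 : 1/24 : 365;                     % hourly time base, in days
Tann = ppval(pp, th);                    % interpolated annual temperature
% assumed scaling: amplitude grows linearly from 2 °C (winter) to 8 °C (summer)
A = 2 + 6*(Tann - min(Tann))./(max(Tann) - min(Tann));
T_hourly = Tann - A.*cos(2*pi*(th - 4/24));  % diurnal cycle, minimum at 4 AM
plot(th, T_hourly);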