I have an output sample from electronic components and I would like to know the p-value for how robust my system is. Ideally, I would like to get a p-value (p < 0.05) to prove that my system can consistently produce the same results. Note that my data samples are small.
My Output:
sample=[2.180213,2.178298 ,2.310851 ,2.114255 ,3.012553 ,2.69234 ,2.079787];
I tried using :
[h,p] = chi2gof(sample,'CDF',pd)
[h,p,ci,stats] = ttest(x)
[h,p,stats] = fishertest(x)
[h,p,ksstat,cv] = kstest(___)
I am lost! What kind of test do I perform in MATLAB to truly test how close my outputs are to each other and how consistent my system output is (using a p-value)?
Edit:
I tried this:
sample=[2.180213,2.178298 ,2.310851 ,2.114255 ,3.012553 ,2.69234 ,2.079787];
n = numel(sample);
xobs = mean(sample); % Sample mean
s = std(sample); % Sample standard deviation
[h,p] = ttest(sample,xobs)
The result is:
h =
0
p =
1
My numbers are fairly close to each other, but the result does not make sense. h = 0 means the null hypothesis is not rejected, but the p-value is 1! Why is it so high?
I believe I figured it out. In the code above I tested the sample against its own mean, which by construction gives t = 0 and therefore p = 1. Instead, I should select the mean I would ideally like the system to output and use it as the hypothesized value:
sample=[2.180213,2.178298 ,2.310851 ,2.114255 ,3.012553 ,2.69234 ,2.079787];
n = numel(sample);
xobs = mean(sample); % Sample mean
s = std(sample); % Sample standard deviation
[h,p] = ttest(sample,3)
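For completeness, the same call can also return a confidence interval for the mean, which gives another view of how consistent the output is (a minimal sketch; the target value 3 is just the desired output assumed above):
sample = [2.180213 2.178298 2.310851 2.114255 3.012553 2.69234 2.079787];
[h,p,ci,stats] = ttest(sample,3);  % H0: the mean output equals the target value 3
fprintf('p = %.4f, 95%% CI = [%.3f, %.3f]\n', p, ci(1), ci(2));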
Related
Can someone help me create a Lilliefors test in MATLAB so that I could compare values with the built-in function [h,p,kstat,critval] = lillietest()? I do not know how to compare the data after standardization, as the given vectors have different lengths. This is the beginning of my code:
clear;
clc;
prices_daily = xlsread(...);
logreturns_daily = log(prices_daily(2:end,:))-log(prices_daily(1:end-1,:)); % daily log returns
m = mean(logreturns_daily);
s = std(logreturns_daily);
z=(prices_daily(1:end,1)-m)/s;
e = ecdf(z);
n = normcdf(z);
You do not need to normalize, since lillietest tests for normality whatever the mean and standard deviation are. Simply apply:
[h,p,k,c]=lillietest(prices_daily(1:end,1),'Alpha',0.01);
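If the aim is to test the log returns rather than the raw prices, the same call can be applied column by column (a sketch that reuses logreturns_daily from the question; the preallocation and loop are my own assumption about what is wanted):
nSeries = size(logreturns_daily,2);
h = zeros(1,nSeries); p = zeros(1,nSeries);
for j = 1:nSeries
    [h(j),p(j)] = lillietest(logreturns_daily(:,j),'Alpha',0.01); % h(j) = 1 rejects normality of series j at the 1% level
end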
I want to generate a Gaussian random process with unit mean (mean = 1) in MATLAB. I tried the randn function, but I later learned that it produces samples with mean 0, so I tried to write the process by hand. I wanted the Gaussian to have mean = 1 and var = 1. I tried this code:
N = rand(1000,1);
g1 = (1/(sqrt(2*pi)))*exp(-((N-1).^2)/2);
plot(g1)
m = mean(g1)
v = var(g1)
However, when I check the mean and variance values I get m=0.3406 and v=0.0024. Can you help?
If you take the vector from randn() and then add one, it will have the same standard deviation as before, but now it will also have a mean of 1.
v = randn(1000,1) + 1;  % standard normal samples shifted to mean 1
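A quick check of that (sketch):
m = mean(v)   % close to 1
s2 = var(v)   % close to 1: adding a constant does not change the variance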
Given a signal x(t), we need to find the signal that is symmetric to it with respect to the Y-axis, x(-t).
If it's helpful to you, here's how my code works thus far:
t = [-5:0.01:5];
wt = (t>=0)&(t<=1);
r = @(t) t/5;
x = r(t).*wt;
%reflection - HERE IS WHERE I AM STUCK, basically looking for v(t) = x(-t)
%Shift by 2
y = v(t-2);
%The rest of the program - printing plots basically
I have tried using these:
v = x(t(1:end));
v = x(t(end:-1:1));
v = x(fliplr(t));
But it's not correct, since I get the error "Array indices must be positive integers or logical values.", as expected. Any ideas?
One solution is to set it up like this:
x = signal; % signal of length 2*N+1, on a symmetric time axis
t = -N:N;
Now, say you want the value at time index -2:
x(find(t==-2))
For a ramp signal, for instance:
signal = [r(end:-1:1) 0 r]
assuming that r is a row vector of length N.
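Putting that together for a small runnable example (a sketch; N = 4 and a unit ramp are assumptions made here just for illustration):
N = 4;
r = (1:N)/N;                % assumed ramp samples for t = 1..N
signal = [r(end:-1:1) 0 r];
x = signal;                 % length 2*N+1
t = -N:N;                   % matching symmetric time axis
x(find(t==-2))              % the sample at t = -2; x(t==-2) also works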
You should first define the signal as a function and then sample it, not the other way round.
For instance, here I define a signal s(t) which is a windowed ramp:
s = @(t) t.*((t>=0)&(t<=1))
Then, I can find the samples for the signal and its symmetrical:
t = -5:0.01:5;
plot(t,s(t),t,s(-t))
What worked for me was defining a reflect function as follows:
function val = reflect(t)
val = -t;
end
And then using it with the shifting function in order to achieve my goal.
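A usage sketch of that idea, assuming the signal is kept as a function handle s as in the previous answer (s, t and the shift of 2 are assumptions carried over from the question; reflect.m above must be on the path):
s = @(t) t.*((t>=0)&(t<=1));  % windowed ramp, as above
t = -5:0.01:5;
v = s(reflect(t));            % v(t) = s(-t), the reflected signal
y = s(reflect(t - 2));        % v(t-2) = s(2-t), the reflection shifted right by 2
plot(t, v, t, y)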
Given any sampled signal in an arbitrary time interval:
t = [-5.003:0.01:10];
x = randn(size(t));
You can reflect x around t=0 with:
t = -flip(t);
x = flip(x);
Note that in the example above, t=0 is not sampled. This is not necessary for this method.
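To see both signals side by side, keep the originals before flipping (a sketch):
t0 = -5.003:0.01:10;  x0 = randn(size(t0));   % original samples
t1 = -flip(t0);       x1 = flip(x0);          % reflected around t = 0
plot(t0, x0, t1, x1); legend('x(t)','x(-t)')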
I was recently looking for an algorithm to calculate MFCCs. I found a good tutorial rather than code, so I tried to code it myself. I still feel like I am missing one thing. In the code below I take the FFT of a signal, calculate the normalized power, filter the signal using triangular shapes, and eventually sum the energies corresponding to each filter bank to obtain the MFCCs.
function output = mfcc(x,M,fbegin,fs)
MF = @(f) 2595.*log10(1 + f./700);   % Hz -> mel
invMF = @(m) 700.*(10.^(m/2595)-1);  % mel -> Hz
M = M+2; % number of triangular filters, including the two edge points
mm = linspace(MF(fbegin),MF(fs/2),M); % equal space in mel-frequency
ff = invMF(mm); % convert mel-frequencies into frequency
X = fft(x);
N = length(X); % length of a short time window
N2 = max([floor((N+1)/2) floor(N/2)+1]); % length of the one-sided spectrum
P = abs(X(1:N2,:)).^2./N; % one-sided periodogram, one column per frame
mfccShapes = triangularFilterShape(ff,N,fs); % N2 x (M-2) triangular filterbank
output = log(mfccShapes'*P); % log filterbank energies (DCT still to be applied, see PS)
end
function [out,k] = triangularFilterShape(f,N,fs)
N2 = max([floor((N+1)/2) floor(N/2)+1]);
M = length(f);
k = linspace(0,fs/2,N2);
out = zeros(N2,M-2);
for m=2:M-1
I = k >= f(m-1) & k <= f(m);
J = k >= f(m) & k <= f(m+1);
out(I,m-1) = (k(I) - f(m-1))./(f(m) - f(m-1));
out(J,m-1) = (f(m+1) - k(J))./(f(m+1) - f(m));
end
end
Could someone please confirm that this is all right, or point out where I made a mistake? I tested it on a simple pure tone and it gives me, in my opinion, reasonable answers.
Any help greatly appreciated :)
PS. I am working on how to apply a vectorized cosine transform (DCT). It looks like I would need an MxM matrix of transform coefficients, but I did not find any source that explains how to build it.
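For the PS, here is a minimal sketch of how such an MxM DCT matrix could be built and applied to the log filterbank energies (the orthonormal DCT-II is assumed; M and the variable name logE are placeholders, not part of the code above):
M = 20;                                    % number of filterbank channels (assumption)
k = (0:M-1)'; n = 0:M-1;
D = sqrt(2/M) * cos(pi/M * k * (n + 0.5)); % DCT-II matrix, rows = cepstral index
D(1,:) = D(1,:) / sqrt(2);                 % orthonormal scaling of the first row
% mfccs = D * logE;                        % logE: M x numFrames log filterbank energies
% The result can be cross-checked against dct(logE) from the Signal Processing Toolbox.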
You can test it yourself by comparing your results against other implementations, like this one here:
You will find a fully configurable MATLAB toolbox incl. MFCCs, and even a function to reverse MFCCs back to a time signal, which is quite handy for testing purposes:
melfcc.m - main function for calculating PLP and MFCCs from sound waveforms, supports many options.
invmelfcc.m - main function for inverting back from cepstral coefficients to spectrograms and (noise-excited) waveforms, options exactly match melfcc (to invert that processing).
The page itself has a lot of information on the usage of the package.
I have the following code, which is used to deconvolve a signal. It works very well, within my error limit... as long as I divide my final result by a very large factor (~11000).
width = 83.66;
x = linspace(-400,400,1000);
a2 = 1.205e+004;
al = 1.778e+005;
b1 = 94.88;
c1 = 224.3;
d = 4.077;
measured = al*exp(-((abs((x-b1)./c1).^d)))+a2;
rect = @(x) 0.5*(sign(x+0.5) - sign(x-0.5));
rt = rect(x/83.66);
signal = conv(rt,measured,'same');
check = (1/11000)*conv(signal,rt,'same');
Here is what I have: measured represents the signal I was given, signal is what I am trying to find, and check is there to verify that if I convolve my slit with the signal I found, I get the measured signal back. If you run exactly what I have, you will see that check and measured are off by that factor of roughly 11000 that I mentioned above.
Does anyone have any suggestions? My thoughts are that the slit height is not exactly 1, or that conv will not actually deconvolve effectively, as I ask it to. (Using deconv only gives me 1 point, so I used conv instead.)
I think you misunderstand what conv (and probably also therefore deconv) is doing.
A discrete convolution is simply a sum: you can expand conv as sums of products of the measured and rt vectors, using a couple of explicit loops.
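Written out that way, it is just this (a sketch of what conv(rt,measured) computes, up to floating-point round-off):
c = zeros(1, numel(rt) + numel(measured) - 1);    % full convolution length
for i = 1:numel(rt)
    for j = 1:numel(measured)
        c(i+j-1) = c(i+j-1) + rt(i)*measured(j);  % sum of products
    end
end
% c matches conv(rt, measured)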
Note that sum(rt) is not 1. Were rt scaled to sum to 1, then conv would preserve the scaling of your original vector. So, note how the scalings pass through here:
sum(rt)
ans =
104
sum(measured)
ans =
1.0231e+08
signal = conv(rt,measured);
sum(signal)
ans =
1.0640e+10
sum(signal)/sum(rt)
ans =
1.0231e+08
See that this next version does preserve the scaling of your vector:
signal = conv(rt/sum(rt),measured);
sum(signal)
ans =
1.0231e+08
Now, as it turns out, you are using the 'same' option for conv. This introduces an edge effect, since it truncates some of the signal, so it ends up losing just a bit.
signal = conv(rt/sum(rt),measured,'same');
sum(signal)
ans =
1.0187e+08
The idea is that conv will preserve the scaling of your signal as long as the kernel is scaled to sum to 1, AND there are no losses due to truncation of the edges. Of course convolution as an integral also has a similar property.
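Applied to your code, that means normalizing the kernel every time you convolve with it (a sketch; the small edge losses from the 'same' option remain):
rtn    = rt/sum(rt);                  % kernel scaled to sum to 1
signal = conv(rtn, measured, 'same');
check  = conv(rtn, signal, 'same');   % now close to measured, with no ad-hoc 1/11000 factor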
By the way, where did that quoted factor of roughly 11000 come from?
sum(rt)^2
ans =
10816
Might be coincidence. Or not. Think about it.