I have an Nx2 matrix with columns as 'Time' and 'Progress'.
Progress is integral and Time is a real value corresponding to each progress unit.
I want to reverse the dependency and make 'Time' integral and output the fractional 'Progress' at every unit time step.
How can this be done?
Use interp1(Time,Progress,TimesWanted), where TimesWanted is a new vector with the times that you want. For example:
Progress=1:10; %just a guess of the sort of progress you might have
Time=Progress*5.5; %the resulting times (say 5.5s per step)
TimesWanted=10:5:50; %the times we want
interp1(Time,Progress,TimesWanted)
gives me:
ans =
1.8182 2.7273 3.6364 4.5455 5.4545 6.3636 7.2727 8.1818 9.0909
which is the progress at TimesWanted obtained by interpolation.
I am relatively new to MATLAB. I computed the consecutive (running) mean of a set of 1e6 random numbers with a given mean and standard deviation. Initially the calculated mean fluctuates and then converges to a certain value.
I would like to know the index (e.g. the 100th position) at which the mean converges. I have no idea how to do that.
I tried using a logical operator, but I would have to go through 1e6 data points, and even with that I still can't find the index.
Y_c= sigma_c * randn(n_r, 1) + mu_c; %Random number creation
Y_f=sigma_f * randn(n_r, 1) + mu_f;%Random number creation
P_u=gamma*(B*B)/2.*N_gamma+q*B.*N_q + Y_c*B.*N_c; %Calculation of Ultimate load
prog_mu=cumsum(P_u)./cumsum(ones(size(P_u))); %Progressive Cumulative Mean of system response
logical(diff(prog_mu==0)); %Find index
I suspect the issue is that the mean will never truly be constant, but will rather fluctuate around the "true mean". As such, you'll most likely never encounter a situation where the two consecutive values of the cumulative mean are identical. What you should do is determine some threshold value, below which you consider fluctuations in the mean to be approximately equal to zero, and compare the difference of the cumulative mean to that value. For instance:
epsilon = 0.01;
const_ind = find(abs(diff(prog_mu))<epsilon,1,'first');
where epsilon will be the threshold value you choose. The find command will return the index at which the variation in the cumulative mean first drops below this threshold value.
EDIT: As was pointed out, this method may fail if the first few random numbers happen to differ by less than the epsilon value even though the mean has not yet converged. I would therefore like to suggest a different approach.
We calculate the cumulative means, as before, like so:
prog_mu=cumsum(P_u)./cumsum(ones(size(P_u)));
We also calculate the difference in these cumulative means, as before:
df_prog_mu = diff(prog_mu);
Now, to ensure that convergence has been achieved, we find the first index from which the difference in the cumulative mean stays below the threshold value epsilon for all subsequent steps. To phrase this another way, we want the index just after the last position in the array where the difference is still at or above the threshold:
conv_index = find(abs(df_prog_mu) >= epsilon, 1, 'last') + 1;
In doing so, we guarantee that the value at that index, and all subsequent values, vary by less than your predetermined threshold value.
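For completeness, here is a small self-contained check of this idea on synthetic data (the normally distributed stand-in for P_u and the 0.01 threshold are illustrative assumptions, not values from your model):
rng(0);                                        % for reproducibility
P_u        = 5 + randn(1e4,1);                 % stand-in for the ultimate-load samples
prog_mu    = cumsum(P_u) ./ (1:numel(P_u))';   % progressive cumulative mean
df_prog_mu = diff(prog_mu);
epsilon    = 0.01;
conv_index = find(abs(df_prog_mu) >= epsilon, 1, 'last') + 1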
I wouldn't imagine that the mean would suddenly become constant at a single index. Wouldn't it asymptotically approach a constant value? I would recommend a for loop to calculate the mean (it sounds like maybe you've already done this part?) like this:
avg = zeros(1, length(x)); % preallocate for speed
for k=1:length(x)
avg(k) = mean(x(1:k));
end
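As a side note, for 1e6 points the loop above recomputes the mean from scratch on every iteration; if that becomes slow, an equivalent running mean (assuming x is a vector) can be computed in one line:
avg = cumsum(x(:)) ./ (1:numel(x))'; % same running mean, without the repeated work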
Then plot the consecutive mean:
plot(avg)
hold on % this will allow us to plot more data on the same figure later
If you're trying to find the point at which the consecutive mean comes within a certain range of the true mean, try this:
Tavg = 5; % or whatever your true mean is
err = 0.01; % the range you want the consecutive mean to reach before we say that it "became constant"
inRange = avg>(Tavg-err) & avg<(Tavg+err); % gives you a binary logical array telling you which values fell within the range
q = 1000; % set this as high as you can while still getting a value for constIndex
constIndex = [];
for k = 1:(length(inRange)-q)
    if all(inRange(k:k+q)) % the value at k and the next q values all fall within the range
        constIndex = k;
        break % keep the first such index
    end
end
The other answer takes a similar approach but makes the unsafe assumption that the first value to fall within the range is the value where the function starts to converge. Any value could randomly fall within that range; we need to make sure that the following values also fall within it. In the above code, you can edit q and err to optimize your result. I would recommend double-checking it by plotting.
plot(constIndex, avg(constIndex), '*')
I didn't know how to phrase the question well.
Example function data: https://www.dropbox.com/s/wr61qyhhf6ujvny/data.mat?dl=0
In this case, how do I calculate that the rest point of this function is ~1? I have access to the vector that makes the plot.
I guess taking the mean would give an approximation, but in some cases it can be pretty bad.
Under the assumption that the "rest" point is the steady-state value of your data, and that the steady-state value occurs the majority of the time, you can simply bin all of the points, using each unique value as a separate bin. The bin with the highest count should correspond to the steady-state value.
You can do this by a combination of histc and unique. Assuming your data is stored in y, do this:
%// Find all unique values in your data
bins = unique(y);
%// Find the total number of occurrences per unique value
counts = histc(y, bins);
%// Figure out which bin has the largest count
[~,max_bin] = max(counts);
%// Figure out the corresponding y value
ss_value = bins(max_bin);
ss_value contains the steady-state value of your data, corresponding to the most occurring output point with the assumptions I laid out above.
A minor caveat with the above approach is that it is not friendly to floating-point data, where values that are essentially the same differ beyond the first few significant digits and therefore end up in separate bins.
Here's an example of your data from point 2300 to 2320:
>> format long g;
>> y(2300:2320)
ans =
0.99995724232555
0.999957488454868
0.999957733165346
0.999957976465197
0.999958218362579
0.999958458865564
0.999958697982251
0.999958935720613
0.999959172088623
0.999959407094224
0.999959640745246
0.999959873049548
0.999960104014889
0.999960333649014
0.999960561959611
0.999960788954326
0.99996101464076
0.999961239026462
0.999961462118947
0.999961683925704
0.999961904454139
Therefore, what I'd recommend is to perhaps round so that the first 5 or so significant digits are maintained.
You can do this to your dataset before you continue:
num_digits = 5;
y_round = round(y*(10^num_digits))/(10^num_digits);
This will first multiply by 10^n, where n is the number of digits you desire, so that the decimal point is shifted over by n positions. We round this result, then divide by 10^n to bring it back to the scale it was at before. If you do this, points such as 0.99996... that agree with 1 to within n decimal places will get rounded to 1, which may help in the above calculations.
However, more recent versions of MATLAB have this functionality already built-in to round, and you can just do this:
num_digits = 5;
y_round = round(y,num_digits);
Minor Note
More recent versions of MATLAB discourage the use of histc and recommend histcounts instead. Note, however, that histcounts treats its second input as bin edges and returns one fewer count than histc, so it is not quite a drop-in replacement for this exact-value binning; a sketch of an alternative that sidesteps the issue follows below.
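If you would rather avoid the histc/histcounts edge-handling question altogether, a sketch (my suggestion, not part of the original answer) that counts occurrences directly with the third output of unique and accumarray is:
[bins, ~, idx] = unique(y);   % idx maps every element of y to its unique value
counts = accumarray(idx, 1);  % number of occurrences of each unique value
[~, max_bin] = max(counts);
ss_value = bins(max_bin);     % most frequent value, i.e. the steady state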
Using the above logic, you could also use the median. If the majority of the data fluctuates around 1, then the median has a high probability of picking out the steady-state value, so try this too:
ss_value = median(y_round);
I am a borderline novice in MATLAB. I am trying to write a rolling function of CMSE
(Composite Multiscale Entropy) over a time series. I tried slidefun, but that only works when the output is a scalar, and the output of CMSE is a vector. The rolling window for the time series is supposed to be 500, and the output of each windowed CMSE is a 100 x 1 vector. xx is the time series.
roll_CMSE_100=zeros(100,(length(xx)-499));
for i=1:(length(xx)-499)
roll_CMSE_100(i)=CMSE(xx(i:(499+i)),100)
end
I get the following error:
??? In an assignment A(I) = B, the number of elements in B and
I must be the same.
Thank you for your time and consideration
MATLAB is telling you the problem: you are assigning a vector to a single element of roll_CMSE_100, but a single element of a matrix can only hold a scalar. Either use a cell array or make the assignment correctly.
If the output of CMSE(xx(i:(499+i)),100) is a 100x1 vector, the correct way to assign the values is
roll_CMSE_100=zeros(100,(length(xx)-499));
for i=1:(length(xx)-499)
roll_CMSE_100(:,i)=CMSE(xx(i:(499+i)),100);
end
This simply assigns the output to column i of the roll_CMSE_100 matrix.
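If you would rather keep each window's result in its own container, a cell-array version (assuming, as in the question, that CMSE is your function returning a 100x1 vector per window of 500 samples) could look like this:
roll_CMSE_100 = cell(1, length(xx)-499);
for i = 1:(length(xx)-499)
    roll_CMSE_100{i} = CMSE(xx(i:(499+i)), 100); % each cell holds one window's 100x1 result
end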
How can I compute the summation over each interval of a vector? I will use MATLAB code as an example.
data=[1;2;3;4;5;6;7;8;9;10;11;12]
I would like to perform this summation.
sum(1)=data(1)+data(2)+data(3)
sum(2)=data(4)+data(5)+data(6)
sum(3)=data(7)+data(8)+data(9)
sum(4)=data(10)+data(11)+data(12)
How can I go about this? (Using a for loop?)
No for loop needed, if indeed this interval is constant like in your example:
Ans=sum(reshape(data,3,[]))
note that I reshape the vector data into a matrix with 3 rows and one column per interval, so the value 3 relates to the interval size you wanted, and sum then adds down each column...
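Since the question explicitly mentions a for loop, here is an equivalent loop-based sketch (assuming, as in your example, an interval length of 3 that divides length(data) evenly):
interval = 3;
n_groups = length(data)/interval;
sums = zeros(1, n_groups);
for k = 1:n_groups
    sums(k) = sum(data((k-1)*interval+1 : k*interval)); % sum of the k-th block of 3 values
end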
I'd like to implement a piecewise periodic function, which should be zero in certain intervals and look like a test function elsewhere (e.g. exp(a^2/(abs(x)^2-a^2)) for abs(x)< a and zero otherwise).
I tried
nu = #(x) ((8*10^(-4)/exp(1)*exp(30^2./(abs(mod(x,365)-31).^2-30.^2))).* ...
and((1<mod(x,365)),(mod(x,365)<61)) + ...
(8*10^(-4)/exp(1)*exp(10^2./(abs(mod(x,365)-300).^2-10.^2))).* ...
and((290<mod(x,365)),(mod(x,365)<310)));
respectively
nu = #(x) ((0*x).* and((0<=mod(x,365)),(mod(x,365)<=1)) + ...
(8*10^(-4)/exp(1)*exp(30^2./(abs(mod(x,365)-31).^2-30.^2))).* ...
and((1<mod(x,365)),(mod(x,365)<61)) + ...
(0*x).* and((61<=mod(x,365)),(mod(x,365)<=290)) + ...
(8*10^(-4)/exp(1)*exp(10^2./(abs(mod(x,365)-300).^2-10.^2))).* ...
and((290<mod(x,365)),(mod(x,365)<310)) + ...
(0*x).* and((310<=mod(x,365)),(mod(x,365)<365)));
which should behave the same. The aim is to have a period of [0,365), hence the modulo.
Now my problem is that nu(1)=nu(61)=nu(290)=nu(310)=NaN, and also in a small neighborhood of these points, e.g. nu(0.99)=NaN. But I excluded these points from the intervals where the exponential function would cause problems. And even if I use smaller intervals for the exponential functions (e.g. (2,60) and (291,309)), I receive NaN at the same points.
Any ideas? Thanks in advance!
One trick I use when performing vectorised calculations in which there's risk of a division by zero or related error is to use the conditional to modify the problem value. For instance, suppose you wanted to invert all entries in a vector, but leave zero at zero (and set any value within, say, 1e-8 to zero, too). You'd do this:
outVect = 1./(inVect+(abs(inVect)<=1e-8)).*(abs(inVect)>1e-8);
For values satisfying the condition that abs(value)>1e-8, this calculates 1/value. If abs(value)<=1e-8, it actually calculates 1/(value+1), then multiplies by zero, resulting in a zero value. Without the conditional inside the denominator, it would calculate 1/value when value is zero, resulting in inf... and then multiply inf by zero, resulting in NaN.
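A quick check on a few sample values (the inputs here are just illustrative):
inVect  = [2 0 -4 1e-9];
outVect = 1./(inVect+(abs(inVect)<=1e-8)).*(abs(inVect)>1e-8)
% returns 0.5  0  -0.25  0, with no Inf or NaN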
The same technique should work with your more complicated anonymous function.
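For instance, a minimal sketch of how the masking could be applied to the first bump of your nu (the (1,61) interval with centre 31 and half-width 30 from your definition; the names mask1, denom1 and bump1 are just illustrative):
mask1  = @(x) (mod(x,365) > 1) & (mod(x,365) < 61);                 % where this bump is nonzero
denom1 = @(x) (abs(mod(x,365)-31).^2 - 30^2).*mask1(x) - ~mask1(x); % forced to -1 outside the bump, so never 0
bump1  = @(x) (8*10^(-4)/exp(1)) * exp(30^2 ./ denom1(x)) .* mask1(x);
bump1([0.99 1 31 61])   % finite everywhere: 0 at the excluded points, no NaN
The second bump (centre 300, half-width 10) can be handled the same way, and adding the two together reproduces nu without NaN values.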