Loop seems to go forever when plotting a graph - matlab

I have written the following piece of code:
M = [3 0 0; 0 2 0; 0 0 0.5] % mass matrix
i_vals = 1:1000:60e06; % values of k_12 from 1 to 60 million in steps of 1000
modes = zeros(3, length(i_vals));
for n=1:length(i_vals)
i = i_vals(n) % i is the value of k_12
K = [i+8e06 -i -2e06; -i i+2e06 -1e06; -2e06 -1e06 5e06]; % stiffness matrix
[V,L]=eig(K,M);
V(:,I)=V;
A = V(:, [1])
transpose(A)
modes(:, n) = A
end
loglog(i_vals, modes')
But the loop seems to go forever and I do not know what is wrong with it. The idea was to get the first column from matrix V, and see what happens to the 3 elements in this column when the value of k_12 is changed.

I don't know how you make this run forever. To me it looks as if it won't run at all. This won't answer your question, but will hopefully help you on the way =)
What do you want to do with this line? V(:,I)=V; What is I? Was it supposed to be i? Btw, using i and j as variables in MATLAB is not recommended (however, if you don't use complex numbers in your field, you shouldn't care too much).
You have a loop that runs 60,000 times, each iteration computing eigenvalues etc. That is bound to take time (although not forever, as you state it does). You should get the answer eventually (if only the rest of the code worked). The resolution of your plot would be more than accurate enough with steps of 10,000 or even 100,000.
This part:
A = V(:, [1])
transpose(A)
modes(:, n) = A
could simply be written as:
modes(:,n) = V(:,1)';
assuming you want the transposed of A. transpose(A) does nothing in this context actually. You would have to do A = transpose(A) (or rather A = A') for it to work.
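To see the difference concretely, a minimal sketch:
A = [1 2 3].';
transpose(A); % computes the transpose, but the result is discarded
A             % A is unchanged
A = A.';      % this actually updates A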

There are all kinds of problems with your code - some of which may contribute to your issue.
You are computing values of i on a linear scale, but ultimately will be plotting on a log scale. You are doing a huge amount of work towards the end, when there is nothing visible in the graph for your effort. Much better to use a log scale for i_vals:
i_vals = logspace(0, 7.778, 200); % to get 200 log-spaced values from 1 to approx 60E6
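As a side note, the exponent 7.778 is just log10 of the upper limit, so you can compute it rather than hard-coding it (a small sketch, assuming the intended upper limit is 60e6):
upper = 60e6;
i_vals = logspace(0, log10(upper), 200); % 200 log-spaced values from 1 to 60e6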
You are using a variable I that has not been defined (in the code snippet you provide). Depending on its size, you may find that V is growing...
You are using a variable name i - while that is legal, it overwrites a built-in (sqrt(-1)), which I personally find troublesome.
Your transpose(A); line doesn't do anything (you would have to do A = transpose(A);).
You don't have ; after several lines - this is going to make Matlab want to print to the console. This will take a huge amount of resources. Suppress the output with ; after every statement.
EDIT the following program runs quickly:
M = [3 0 0.0;
0 2 0.0;
0 0 0.5]; % mass matrix
i_vals = logspace(0, 7.78, 200); % 200 log-spaced values of k_12 from 1 to approx 60 million
modes = zeros(3, length(i_vals));
for n=1:length(i_vals)
i = i_vals(n); % i is the value of k_12
K = [i+8e06 -i -2e06; -i i+2e06 -1e06; -2e06 -1e06 5e06]; % stiffness matrix
[V,L]=eig(K,M);
modes(:, n) = V(:,1);
end
loglog(i_vals, modes')
Resulting graph:
If I didn't break anything (hard to know what you were doing with I), maybe this can be helpful.

Related

Sampling and DTFT in Matlab

I need to produce a signal x=-2*cos(100*pi*n)+2*cos(140*pi*n)+cos(200*pi*n)
So I put it like this :
N=1024;
for n=1:N
x=-2*cos(100*pi*n)+2*cos(140*pi*n)+cos(200*pi*n);
end
But what I get is that the result keeps coming out as 1.
I tried to test each value for each n, and I get the same result for any n.
For example, -2*cos(100*pi*n) with n=1 should be -1.393310473. Instead, Matlab gave -2 for it, and it always gives -2 for any n.
I don't know how to fix it, so I hope someone could help me out! Thank you!
Not sure where you get the idea that -2*cos(100*pi) should be anything other than -2. Maybe you are not aware that Matlab works in radians?
Look at your expression. Each term can be factored to contain 2*pi*(an integer). And you should know that cos(2*pi*(an integer)) = 1.
So the results are exactly as expected.
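A quick check in the command window makes this concrete (a minimal sketch):
n = 1;
cos(100*pi*n) % 1 (up to floating-point error), since 100*pi = 2*pi*50
cos(140*pi*n) % 1, since 140*pi = 2*pi*70
cos(200*pi*n) % 1, since 200*pi = 2*pi*100
-2*cos(100*pi*n) + 2*cos(140*pi*n) + cos(200*pi*n) % 1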
What you are seeing is basically what happens when you under-sample a waveform. You may know that the Nyquist criterion says that you need to have a sampling rate that is at least two times greater than the highest frequency component present; but in your case, you are sampling one point every 50, 70, 100 complete cycles. So you are "far beyond Nyquist". And that can only be solved by sampling more closely.
For example, you could do:
t = linspace(0, 1, 1024); % sample the waveform 1024 times between 0 and 1
f1 = 50;
f2 = 70;
f3 = 100;
signal = -2*cos(2*pi*f1*t) + 2*cos(2*pi*f2*t) + cos(2*pi*f3*t);
figure; plot(t, signal)
I think you are using degrees when you are doing your calculations, so do this:
n = 1:1024
x=-2*cosd(100*pi*n)+2*cosd(140*pi*n)+cosd(200*pi*n);
cosd uses degrees instead of radians. Radians are the default for cos, so Matlab has a separate function for degree input. For me this gave:
-2*cosd(100*pi*1) = -1.3933
The first term that I got using:
x=-2*cosd(100*pi*1)+2*cosd(140*pi*1)+cosd(200*pi*1)
x = -1.0693
Also notice that I defined n as n = 1:1024; this gives all the integers 1, 2, ..., 1024.
There is no need to use a for loop, since many of Matlab's built-in functions are vectorized, meaning you can just pass in a vector and the function is evaluated for every element.
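For illustration, a minimal sketch contrasting the two (the loop version below also stores every sample instead of overwriting x on each pass, which was another problem in the original code):
% loop version: preallocate and store each sample
x = zeros(1, 1024);
for n = 1:1024
x(n) = -2*cosd(100*pi*n) + 2*cosd(140*pi*n) + cosd(200*pi*n);
end
% vectorized version: no loop needed
n = 1:1024;
xv = -2*cosd(100*pi*n) + 2*cosd(140*pi*n) + cosd(200*pi*n);
max(abs(x - xv)) % 0, or numerically negligible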

Simulation of Markov chains

I have the following Markov chain:
This chain shows the states of the spaceship, which is in the asteroid belt: S1 - serviceable, S2 - broken. 0.12 is the probability of destroying the spaceship by a collision with an asteroid. 0.88 is the probability that a collision will not be critical. We need to find the probability that the ship is still serviceable after the third collision.
The analytical solution gives 0.681. But it is necessary to solve this problem by a simulation method, using any modeling tool (MATLAB Simulink, AnyLogic, Scilab, etc.).
Do you know what components should be used to simulate this process in Simulink or any other simulation environment? Any examples or links would be appreciated.
First, we know the three-step transition probability matrix contains the answer (0.6815).
% MATLAB R2019a
P = [0.88 0.12;
0 1];
P3 = P*P*P
P3(1,1) % 0.6815
Approach 1: Requires Econometrics Toolbox
This approach uses the dtmc() and simulate() functions.
First, create the Discrete Time Markov Chain (DTMC) with the probability transition matrix, P, and using dtmc().
mc = dtmc(P); % Create the DTMC
numSteps = 3; % Number of collisions
You can get one sample path easily using simulate(). Pay attention to how you specify the initial conditions.
% One Sample Path
rng(8675309) % for reproducibility
X = simulate(mc,numSteps,'X0',[1 0])
% Multiple Sample Paths
numSamplePaths = 3;
X = simulate(mc,numSteps,'X0',[numSamplePaths 0]) % returns a 4 x 3 matrix
The first row is the X0 row for the starting state (initial condition) of the DTMC. The second row is the state after 1 transition (X1). Thus, the fourth row is the state after 3 transitions (collisions).
% 50000 Sample Paths
rng(8675309) % for reproducibility
k = 50000;
X = simulate(mc,numSteps,'X0',[k 0]); % returns a 4 x 50000 matrix
prob_survive_3collisions = sum(X(end,:)==1)/k % 0.6800
We can bootstrap a 95% Confidence Interval on the mean probability to survive 3 collisions to get 0.6814 ± 0.00069221, or rather, [0.6807 0.6821], which contains the result.
numTrials = 40;
ProbSurvive_3collisions = zeros(numTrials,1);
for trial = 1:numTrials
Xtrial = simulate(mc,numSteps,'X0',[k 0]);
ProbSurvive_3collisions(trial) = sum(Xtrial(end,:)==1)/k;
end
% Mean +/- Halfwidth
alpha = 0.05;
mean_prob_survive_3collisions = mean(ProbSurvive_3collisions)
hw = tinv(1-(0.5*alpha), numTrials-1)*(std(ProbSurvive_3collisions)/sqrt(numTrials))
ci95 = [mean_prob_survive_3collisions-hw mean_prob_survive_3collisions+hw]
maxNumCollisions = 10;
numSamplePaths = 50000;
ProbSurvive = zeros(maxNumCollisions,1);
for numCollisions = 1:maxNumCollisions
Xc = simulate(mc,numCollisions,'X0',[numSamplePaths 0]);
ProbSurvive(numCollisions) = sum(Xc(end,:)==1)/numSamplePaths;
end
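To sanity-check these estimates against the analytical values (for this chain, surviving n collisions from S1 has probability 0.88^n), a short plotting sketch:
figure
plot(1:maxNumCollisions, ProbSurvive, 'o-')
hold on
plot(1:maxNumCollisions, 0.88.^(1:maxNumCollisions), '--') % analytical survival probability
xlabel('Number of collisions')
ylabel('P(serviceable)')
legend('Simulated', 'Analytical 0.88^n')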
For a more complex system you'll want to use Stateflow or SimEvents, but for this simple example all you need is a single Unit Delay block (output = 0 => S1, output = 1 => S2), with a Switch block, a Random block, and some comparison blocks to construct the logic determining the next value of the state.
Presumably you must execute the simulation a (very) large number of times and average the results to get a statistically significant output.
You'll need to change the "seed" of the random generator each time you run the simulation.
This can be done by setting the seed to be "now" (or something similar to that).
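For example, in plain MATLAB the generator can be reseeded from the clock (a small sketch; for a Simulink random number block you would instead pass a numeric seed as the block parameter):
rng('shuffle') % seed the random number generator based on the current time
% or derive an explicit numeric seed, e.g. for a block parameter:
seed = randi(2^31-1)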
Alternatively you could quite easily vectorize the model so that you only need to execute it once.
If you want to simulate this, it is fairly easy in matlab:
servicable = 1;
t = 0;
while servicable == 1
t = t+1;
servicable = rand() <= 0.88;
end
Now t represents the number of steps before the ship breaks.
Wrap this in a for loop and you can do as many simulations as you like.
Note that this can actually give you the distribution, if you want to know it after 3 times, simply add && t<3 to the while condition.
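A rough sketch of that wrapper, estimating the probability of still being serviceable after 3 collisions (the names are just for illustration):
numRuns = 50000;
aliveAfter3 = 0;
for run = 1:numRuns
servicable = 1;
t = 0;
while servicable == 1 && t < 3
t = t + 1;
servicable = rand() <= 0.88;
end
aliveAfter3 = aliveAfter3 + servicable;
end
probEstimate = aliveAfter3 / numRuns % should be close to 0.88^3 = 0.6815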

Fixed Point Iteration

I am new to Matlab and I have to use fixed point iteration to find the x value for the intersection between y = x and y = sqrt(10/(x+4)), which after graphing it, looks to be around 1.4. I'm using an initial guess of x1 = 0. This is my current Matlab code:
f = @(x)sqrt(10./(x+4));
x1 = 0;
xArray(10) = [];
for i = 1:10
x2 = f(x1);
xArray(i) = x2;
x1 = x1 + 1;
end
plot(xArray);
fprintf('%15.8e\n',xArray);
Now when I run this it seems like my x is approaching 0.8. Can anyone tell me what I am doing wrong?
Well done. You've made a decent start at this.
Let's look at the graphical solution. BTW, this is how I'd have done the graphical part:
ezplot(@(x) x,[-1 3])
hold on
ezplot(@(x) sqrt(10./(x+4)),[-1 3])
grid on
Or, I might subtract the two functions, then look for a zero of the difference, i.e. where it crosses the x axis.
This is what the fixed point iteration does anyway, trying to solve for x, such that
x = sqrt(10/(x+4))
So how would I change your code to fix it? First of all, I'd want to use more descriptive names for the variables. You don't get charged by the character, and making your code easier to read & follow will pay off greatly in the future for you.
There were a couple of code issues. To initialize a vector, use a form like one of these:
xArray = zeros(1,10);
xArray(1,10) = 0;
Note that if xArray was ALREADY defined because you have been working on this problem, the latter form will only zero out that single element. So the first form is best by a large margin. It affirmatively creates an array, or overwrites an existing array if it is already present in your workspace.
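To see the difference concretely (a small sketch):
xArray = 1:10;        % suppose xArray already exists in the workspace
xArray(1,10) = 0      % only element 10 is zeroed: 1 2 3 4 5 6 7 8 9 0
xArray = zeros(1,10)  % the whole array is affirmatively recreated as zeros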
Finally, I like to initialize an array like this with something special, rather than zero, so we can see when an element was overwritten. NaNs are good for this.
Next, there was no need to add one to x1 in your code. Again, I'd strongly suggest using better variable names. It is also a good idea to use comments. Be liberal.
I'd suggest the idea of a convergence tolerance. You can also have an iteration counter.
f = @(x)sqrt(10./(x+4));
% starting value
xcurrent = 0;
% count the iterations, setting a maximum in maxiter, here 25
iter = 0;
maxiter = 25;
% initialize the array to store our iterations
xArray = NaN(1,maxiter);
% convergence tolerance
xtol = 1e-8;
% before we start, the error is set to be BIG. this
% just lets our while loop get through that first iteration
xerr = inf;
% the while will stop if either criterion fails
while (iter < maxiter) && (xerr > xtol)
iter = iter + 1;
xnew = f(xcurrent);
% save each iteration
xArray(iter) = xnew;
% compute the difference between successive iterations
xerr = abs(xnew - xcurrent);
xcurrent = xnew;
end
% retain only the elements of xArray that we actually generated
xArray = xArray(1:iter);
plot(xArray);
fprintf('%15.8e\n',xArray);
What was the result?
1.58113883e+00
1.33856229e+00
1.36863563e+00
1.36479692e+00
1.36528512e+00
1.36522300e+00
1.36523091e+00
1.36522990e+00
1.36523003e+00
1.36523001e+00
1.36523001e+00
For a little more accuracy to see how well we did...
format long g
xcurrent
xcurrent =
1.36523001364783
f(xcurrent)
ans =
1.36523001338436
By the way, it is a good idea to know why the loop terminated: did it stop because it converged, or because it ran out of iterations?
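One simple way to report that, using the names from the loop above (a small sketch):
if xerr <= xtol
fprintf('Converged after %d iterations (last change %g).\n', iter, xerr)
else
fprintf('Hit maxiter = %d without converging (last change %g).\n', maxiter, xerr)
end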
The point of my response here was NOT to do your homework, since you were close to getting it right anyway. The point is to show some considerations on how you might improve your code for future work.
There is no need to add 1 to x1. Your output from each iteration is the input for the next iteration. So x2, the output of f(x1), should become the new x1. The corrected code would be
for i = 1:10
x2 = f(x1);
xArray(i) = x2;
x1 = x2;
end
f(x) = x^3 + 4*x^2 - 10 on [1,2]; find an approximate root.
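That comment is actually the same problem in disguise: rearranging x^3 + 4*x^2 - 10 = 0 gives x^2*(x+4) = 10, i.e. x = sqrt(10/(x+4)), which is exactly the fixed-point iteration above, and it converges to about 1.3652 in [1,2]. A quick check (small sketch):
x = 1.36523001364783; % fixed point found above
x^3 + 4*x^2 - 10      % very close to zero, so x is approximately the root in [1,2]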

Matlab -- random walk with boundaries, vectorized

Suppose I have a vector J of jump sizes and an initial starting point X_0. Also I have boundaries 0, B (assume 0 < X_0 < B). I want to do a random walk where X_i = [min(X_{i-1} + J_i,B)]^+. (positive part). Basically if it goes over a boundary, it is made equal to the boundary. Anyone know a vectorized way to do this? The current way I am doing it consists of doing cumsums and then finding places where it violates a condition, and then starting from there and repeating the cumsum calculation, etc until I find that I stop violating the boundaries. It works when the boundaries are rarely hit, but if they are hit all the time, it basically becomes a for loop.
In the code below, I am doing this across many samples. To 'fix' the ones that go out of the boundary, I have to loop through the samples to check...(don't think there is a vectorized 'find')
% X_init is a row vector describing initial resource values to use for
% each sample
% J is matrix where each col is a sequence of Jumps (columns = sample #)
% In this code the jumps are subtracted, but same thing
X_intvl = repmat(X_init,NumJumps,1) - cumsum(J);
X = [X_init; X_intvl];
for sample = 1:NumSamples
k = find(or(X_intvl(:,sample) > B, X_intvl(:,sample) < 0),1);
while(~isempty(k))
change = X_intvl(k-1,sample) - X_intvl(k,sample);
X_intvl(k:end,sample) = X_intvl(k:end,sample)+change;
k = find(or(X_intvl(:,sample) > B, X_intvl(:,sample) < 0),1);
end
end
Interesting question (+1).
I faced a similar problem a while back, although slightly more complex as my lower and upper bound depended on t. I never did work out a fully-vectorized solution. In the end, the fastest solution I found was a single loop which incorporates the constraints at each step. Adapting the code to your situation yields the following:
%# Set the parameters
LB = 0; %# Lower bound
UB = 5; %# Upper bound
T = 100; %# Number of observations
N = 3; %# Number of samples
X0 = (1/2) * (LB + UB); %# Arbitrary start point halfway between LB and UB
%# Generate the jumps
Jump = randn(N, T-1);
%# Build the constrained random walk
X = X0 * ones(N, T);
for t = 2:T
X(:, t) = max(min(X(:, t-1) + Jump(:, t-1), UB), 0);
end
X = X';
I would be interested in hearing if this method proves faster than what you are currently doing. I suspect it will be for cases where the constraint is binding in more than one or two places. I can't test it myself as the code you provided is not a "working" example, i.e. I can't just copy and paste it into Matlab and run it, as it depends on several variables for which example (or simulated) values are not provided. I tried adapting it myself, but couldn't get it to work properly.
UPDATE: I just switched the code around so that observations are indexed on columns and samples are indexed on rows, and then I transpose X in the last step. This will make the routine more efficient, since Matlab allocates memory for numeric arrays column-wise - hence it is faster when performing operations down the columns of an array (as opposed to across the rows). Note, you will only notice the speed-up for large N.
FINAL THOUGHT: These days, the JIT accelerator is very good at making single loops in Matlab efficient (double loops are still pretty slow). Therefore personally I'm of the opinion that every time you try and obtain a fully-vectorized solution in Matlab, ie no loops, you should weigh up whether the effort involved in finding a clever solution is worth the slight gains in efficiency to be made over an easier-to-obtain method that utilizes a single loop. And it is important to remember that fully-vectorized solutions are sometimes slower than solutions involving single loops when T and N are small!
I'd like to propose another vectorized solution.
So, first we should set the parameters and generate the random jumps. I used a similar set of parameters to Colin T Bowers:
% Set the parameters
LB = 0; % Lower bound
UB = 20; % Upper bound
T = 1000; % Number of observations
N = 3; % Number of samples
X0 = (1/2) * (UB + LB); % Arbitrary start point halfway between LB and UB
% Generate the jumps
Jump = randn(N, T-1);
But I changed generation code:
% Generate initial data without bounds
X = cumsum(Jump, 2);
% Apply bounds
Amplitude = UB - LB;
nsteps = ceil( max(abs(X(:))) / Amplitude - 0.5 );
for ii = 1:nsteps
ind = abs(X) > (1/2) * Amplitude;
X(ind) = Amplitude * sign(X(ind)) - X(ind);
end
% Shifting X
X = X0 + X;
So, instead of a for loop over the time steps, I'm using the cumsum function with some smart post-processing.
N.B. This solution is significantly slower than Colin T Bowers's for tight bounds (Amplitude < 5), but for loose bounds (Amplitude > 20) it is much faster.

How can I speed up this call to quantile in Matlab?

I have a MATLAB routine with one rather obvious bottleneck. I've profiled the function, with the result that 2/3 of the computing time is used in the function levels:
The function levels takes a matrix of floats and splits each column into nLevels buckets, returning a matrix of the same size as the input, with each entry replaced by the number of the bucket it falls into.
To do this I use the quantile function to get the bucket limits, and a loop to assign the entries to buckets. Here's my implementation:
function [Y q] = levels(X,nLevels)
% Assign each of the elements of X to an integer-valued level
p = linspace(0, 1.0, nLevels+1);
q = quantile(X,p);
if isvector(q)
q=transpose(q);
end
Y = zeros(size(X));
for i = 1:nLevels
% The variables g and l indicate the entries that are respectively greater
% than or less than the relevant bucket limits. The line Y(g & l) = i
% assigns the value i to any element that falls in this bucket.
if i ~= nLevels % the default; doesn't include the upper bound
g = bsxfun(@ge,X,q(i,:));
l = bsxfun(@lt,X,q(i+1,:));
else % for the final level we include the upper bound
g = bsxfun(@ge,X,q(i,:));
l = bsxfun(@le,X,q(i+1,:));
end
Y(g & l) = i;
end
Is there anything I can do to speed this up? Can the code be vectorized?
If I understand correctly, you want to know how many items fell in each bucket.
Use:
n = hist(Y,nbins)
Though I am not sure that it will help in the speedup. It is just cleaner this way.
Edit : Following the comment:
You can use the second output parameter of histc
[n,bin] = histc(...) also returns an index matrix bin. If x is a vector, n(k) = sum(bin==k). bin is zero for out of range values. If x is an M-by-N matrix, then...
How About this
function [Y q] = levels(X,nLevels)
p = linspace(0, 1.0, nLevels+1);
q = quantile(X,p);
Y = zeros(size(X));
for i = 1:numel(q)-1
Y = Y + (X >= q(i));
end
This results in the following:
>>X = [3 1 4 6 7 2];
>>[Y, q] = levels(X,2)
Y =
1 1 2 2 2 1
q =
1 3.5 7
You could also modify the logic line to ensure values are less than the start of the next bin. However, I don't think it is necessary.
I think you shoud use histc
[~,Y] = histc(X,q)
As you can see in matlab's doc:
Description
n = histc(x,edges) counts the number of values in vector x that fall
between the elements in the edges vector (which must contain
monotonically nondecreasing values). n is a length(edges) vector
containing these counts. No elements of x can be complex.
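For instance, on the small example data used in another answer here, note how histc puts the value equal to the top edge into its own extra bin, which is why you may want to nudge that edge upward (a small sketch):
X = [3 1 4 6 7 2];
q = quantile(X, linspace(0, 1, 3)) % [1 3.5 7]
[~, Y] = histc(X, q)               % 1 1 2 2 3 1 -- the maximum lands in the last edge's own bin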
I made a couple of refinements (including one inspired by Aero Engy in another answer) that have resulted in some improvements. To test them out, I created a random matrix of a million rows and 100 columns to run the improved functions on:
>> x = randn(1000000,100);
First, I ran my unmodified code, with the following results:
Note that of the 40 seconds, around 14 of them are spent computing the quantiles - I can't expect to improve this part of the routine (I assume that Mathworks have already optimized it, though I guess that to assume makes an...)
Next, I modified the routine to the following, which should be faster and has the advantage of being fewer lines as well!
function [Y q] = levels(X,nLevels)
p = linspace(0, 1.0, nLevels+1);
q = quantile(X,p);
if isvector(q), q = transpose(q); end
Y = ones(size(X));
for i = 2:nLevels
Y = Y + bsxfun(@ge,X,q(i,:));
end
The profiling results with this code are:
So it is 15 seconds faster, which represents roughly a 150% speedup of the portion of the code that is mine, rather than MathWorks'.
Finally, following a suggestion of Andrey (again in another answer) I modified the code to use the second output of the histc function, which assigns entries to bins. It doesn't treat the columns independently, so I had to loop over the columns manually, but it seems to be performing really well. Here's the code:
function [Y q] = levels(X,nLevels)
p = linspace(0,1,nLevels+1);
q = quantile(X,p);
if isvector(q), q = transpose(q); end
q(end,:) = 2 * q(end,:);
Y = zeros(size(X));
for k = 1:size(X,2)
[junk Y(:,k)] = histc(X(:,k),q(:,k));
end
And the profiling results:
We now spend only 4.3 seconds in code outside the quantile function, which is around a 500% speedup over what I wrote originally. I've spent a bit of time writing this answer because I think it's turned into a nice example of how you can use the MATLAB profiler and Stack Exchange in combination to get much better performance from your code.
I'm happy with this result, although of course I'll continue to be pleased to hear other answers. At this stage the main performance increase will come from increasing the performance of the part of the code that currently calls quantile. I can't see how to do this immediately, but maybe someone else here can. Thanks again!
You can sort the columns and divide+round the inverse indexes:
function Y = levels(X,nLevels)
% "Assign each of the elements of X to an integer-valued level"
[S,IX]=sort(X);
[grid1,grid2]=ndgrid(1:size(IX,1),1:size(IX,2));
invIX=zeros(size(X));
invIX(sub2ind(size(X),IX(:),grid2(:)))=grid1;
Y=ceil(invIX/size(X,1)*nLevels);
Or you can use tiedrank:
function Y = levels(X,nLevels)
% "Assign each of the elements of X to an integer-valued level"
R=tiedrank(X);
Y=ceil(R/size(X,1)*nLevels);
Surprisingly, both these solutions are slightly slower than the quantile+histc solution.