How do I run the initial value x0 also in parallel - MATLAB

This code runs, but the plot is not correct because the result of the optimization function fmincon depends on the initial condition x0 and on the number of iterations. For each value of alpha (a) and beta (b), I should run the optimization many times with different initial conditions x0 to verify that I am getting the right answer. More iterations might also be required to get an accurate answer.
I want to be able to run the optimization with different initial conditions for x0, a and b.
Function file
function f = threestate2(x,a,b)
c1 = cos(x(1))*(cos(x(5))*(cos(x(9))+cos(x(11)))+cos(x(7))*(cos(x(9))-cos(x(11))))...
+cos(x(3))*(cos(x(5))*(cos(x(9))-cos(x(11)))-cos(x(7))*(cos(x(9))+cos(x(11))));
c2=sin(x(1))*(sin(x(5))*(sin(x(9))*cos(x(2)+x(6)+x(10))+sin(x(11))*cos(x(2)+x(6)+x(12)))...
+sin(x(7))*(sin(x(9))*cos(x(2)+x(8)+x(10))-sin(x(11))*cos(x(2)+x(8)+x(12))))...
+sin(x(3))*(sin(x(5))*(sin(x(9))*cos(x(4)+x(6)+x(10))-sin(x(11))*cos(x(4)+x(6)+x(12)))...
-sin(x(7))*(sin(x(9))*cos(x(4)+x(8)+x(10))+sin(x(11))*cos(x(4)+x(8)+x(12))));
f=(a*a-b*b)*c1+2*a*b*c2;
Main file
%x=[x(1),x(2),x(3),x(4),x(5),x(6),x(7),x(8),x(9),x(10),x(11),x(12)]; % angles;
lb=[0,0,0,0,0,0,0,0,0,0,0,0];
ub=[pi,2*pi,pi,2*pi,pi,2*pi,pi,2*pi,pi,2*pi,pi,2*pi];
x0=[pi/8;0;pi/3;0;0.7*pi;.6;0;pi/2;.5;0;pi/4;0];
xout=[];
fout=[];
options = optimoptions(@fmincon,'Algorithm','interior-point','TolX',10^-10,'MaxIter',1500);
a=0:0.01:1;
w=NaN(length(a));
for i=1:length(a)
bhelp=(1-a(i)*a(i));
if bhelp>0
b=sqrt(bhelp);
[x,fval]=fmincon(@(x)threestate2(x,a(i),b),x0,[],[],[],[],lb,ub,[],options);
w(i)=fval;
w(i)=-w(i);
B(i)=b;
else
w(i)=NaN;
B(i)=b;
end
end
%surface(b,a,w)
%view(3)
%meshc(b,a,w)
x=a.^2;
plot(x,w)
grid on
ylabel('\fontname{Times New Roman} S_{max}(\Psi_{gs})')
xlabel('\fontname{Times New Roman}\alpha^2')
%ylabel('\fontname{Times New Roman}\beta')
title('\fontname{Times New Roman} Maximum of the Svetlichny operator(\alpha|000>+\beta|111>)')

If you have the MATLAB Parallel Computing Toolbox, you can use parfor, which works like a regular for loop but runs its iterations in parallel.
To use it, you should wrap your big script in a function. Assuming you store your initial conditions in A(i) and the results in B(i), you can use something like this:
parfor i=1:length(B)
B(i)=optimise(A(i));
end
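Applied to the problem above, a minimal sketch could look like the following. It reuses the objective, bounds and options from the question; the number of random starts per value of a (nStarts) and the random way of generating x0 inside the bounds are my own choices for illustration, not something from the original post.
nStarts = 20;                          % assumed number of random starts per alpha value
a = 0:0.01:1;
lb = zeros(1,12);
ub = repmat([pi, 2*pi], 1, 6);         % same bounds as in the question
options = optimoptions(@fmincon,'Algorithm','interior-point','TolX',1e-10,'MaxIter',1500);
w = NaN(1, length(a));
parfor i = 1:length(a)
    b = sqrt(max(1 - a(i)^2, 0));
    bestval = Inf;
    for s = 1:nStarts
        x0 = lb + rand(1,12).*(ub - lb);   % random start inside the bounds
        [~, fval] = fmincon(@(x) threestate2(x, a(i), b), x0, ...
                            [], [], [], [], lb, ub, [], options);
        bestval = min(bestval, fval);      % keep the lowest minimum found so far
    end
    w(i) = -bestval;
end
Each parfor iteration handles one value of a and is independent of the others, so the loop is sliced cleanly over the workers.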
If you don't have the toolbox, there are some other ways (for example MEX files), but you basically have to manage the threads yourself, so I wouldn't recommend it.

Related

MATLAB Initial Objective Function Evaluation Error

I am trying to create a plot in MATLAB by iterating over values of a constant (A) with respect to a system of equations. My code is pasted below:
function F=Func1(X, A)
%X(1) is c
%X(2) is h
%X(3) is lambda
%A is technology (some constant)
%a is alpha
a=1;
F(1)=1/X(1)-X(3)
F(2)=X(3)*(X(1)-A*X(2)^a)
F(3)=-1/(24-X(2))-X(3)*A*a*X(2)^(a-1)
clear, clc
init_guess=[0,0,0]
for countA=1:0.01:10
result(countA,:)=[countA,fsolve(@Func1, init_guess)]
end
display('The Loop Has Ended')
display(result)
%plot(result(:,1),result(:,2))
In the first block of code, I am defining the set of equations I am attempting to solve. In the second block, I am trying to write a loop that iterates through the values of A that I want, [1,10] with an increment of 0.01. Right now, I am just trying to get my loop code to work, but I keep on getting this error:
"Failure in initial objective function evaluation. FSOLVE cannot continue."
I am not sure why this is the case. From what I understand, this is the result of my initial guess matrix not being the right size, but I believe it should be of size 3, as I am solving for three variables. Additionally, I'm fairly confident there's nothing wrong with how I'm referencing my Func1 code in my Loop code.
Any advice you all could provide would be sincerely appreciated. I've only been working on MATLAB for a few days, so if this is a rather trivial fix, pardon my ignorance. Thanks.
A couple of issues with your code:
1/X(1) in function Func1 is prone to division-by-zero errors. I would change it to 1/(X(1)+eps).
Since countA increments by 0.01, it takes non-integer values and cannot be used as an index into result. Instead, introduce a separate integer index ind and increment that.
I have passed your constant A into the function through an anonymous function handle, to make clear which variables fsolve is actually solving for.
Here is the updated code. I don't know if the results are reasonable or not:
init_guess=[0,0,0];
ind = 1;
for countA=1:0.01:10
result(ind,:)=[countA, fsolve(#(X) Func1(X,countA), init_guess)];
ind = ind+1;
end
display('The Loop Has Ended')
display(result)
%plot(result(:,1),result(:,2))
function F=Func1(X,A)
%X(1) is c
%X(2) is h
%X(3) is lambda
%A is technology (some constant)
%a is alpha
a=1;
F(1)=1/(X(1)+eps)-X(3);
F(2)=X(3)*(X(1)-A*X(2)^a);
F(3)=-1/(24-X(2))-X(3)*A*a*X(2)^(a-1);
end
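Since fsolve may not converge for every value of countA, it can also be worth capturing the exit flag and silencing the per-call output. This is just a suggestion on top of the answer above; 'Display' and the exitflag output are standard fsolve options/outputs.
opts = optimoptions('fsolve','Display','off');   % suppress per-call output
init_guess = [0, 0, 0];
ind = 1;
for countA = 1:0.01:10
    [sol, ~, exitflag] = fsolve(@(X) Func1(X, countA), init_guess, opts);
    if exitflag <= 0
        sol = NaN(1, 3);          % mark runs that did not converge
    end
    result(ind, :) = [countA, sol];
    ind = ind + 1;
end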

Using a for loop inside a parallel for loop in Matlab

I am trying to use a for loop inside of a parfor loop in MATLAB.
The for loop is equivalent to the ballode example here.
Inside the for loop a function ballBouncing is called, which defines a system of 6 differential equations.
What I am trying to do is run the ODE system for 500 different sets of parameter values, where for each parameter set a sudden impulse is added, which is handled by the code in the for loop.
However, I don't understand how to implement this using a parfor and a for loop as below.
I could run this code using two for loops, but when the outer loop is made a parfor it gives the errors:
the PARFOR loop cannot run due to the way variable results is used,
the PARFOR loop cannot run due to the way variable y0 is used and
Valid indices for results are restricted in PARFOR loops
results=NaN(500,100);
x=rand(500,10);
parfor j=1:500
bouncingTimes=[10,50];%at time 10 a sudden impulse is added
refine=2;
tout=0;
yout=y0;%initial conditions of ODE system
paras=x(j,:);%parameter values for the ODE
for i=1:2
tfinal=bouncingTimes(i);
[t,y]=ode45(@(t,y)ballBouncing(t,y,paras),tstart:1:tfinal,y0,options);
nt=length(t);
tout=[tout;t(2:nt)];
yout=[yout;y(2:nt,:)];
y0(1:5)=y(nt,1:5);%updating initial conditions with the impulse
y0(6)=y(nt,6)+paras(j,10);
options = odeset(options,'InitialStep',t(nt)-t(nt-refine),...
'MaxStep',t(nt)-t(1));
tstart =t(nt);
end
numRows=length(yout(:,1));
results(1:numRows,j)=yout(:,1);
end
results;
Can someone help me to implement this using a parfor outer loop.
Fixing the assignment into results is relatively straightforward - what you need to do is ensure you always assign a whole column. Here's how I would do that:
% We will always need the full size of results in dimension 1
numRows = size(results, 1);
parfor j = ...
yout = ...; % variable size
yout(end+1:numRows, :) = NaN; % Pad with NaN if 'yout' has fewer than numRows rows
results(:, j) = yout(1:numRows, 1); % Shrink 'yout' if necessary
end
However, y0 is harder to deal with - the iterations of your parfor loop are not order-independent because of the way you're passing information from one iteration to the next. parfor can only handle loops where the iterations are order-independent.
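One way around the y0 problem is to reset it inside every parfor iteration, so that no information flows between iterations. The sketch below assumes the same initial state applies to every parameter set (y0init, a placeholder here) and that 100 rows per column are enough, neither of which is stated in the question.
% Sketch: make y0 a temporary variable by assigning it at the top of each iteration.
results = NaN(100, 500);           % one column per parameter set (size assumed)
x = rand(500, 10);
y0init = zeros(1, 6);              % assumed common initial state for every parameter set
parfor j = 1:500
    y0 = y0init;                   % local copy; iterations stay independent
    paras = x(j, :);
    bouncingTimes = [10, 50];
    tstart = 0;
    yout = y0;
    for i = 1:2
        tfinal = bouncingTimes(i);
        [t, y] = ode45(@(t, y) ballBouncing(t, y, paras), tstart:1:tfinal, y0);
        nt = length(t);
        yout = [yout; y(2:nt, :)];
        y0 = y(nt, :);
        y0(6) = y(nt, 6) + paras(10);   % impulse; paras is already the j-th row of x
        tstart = t(nt);
    end
    col = NaN(100, 1);                  % always assign a whole column into results
    nCopy = min(100, size(yout, 1));
    col(1:nCopy) = yout(1:nCopy, 1);
    results(:, j) = col;
end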

Is it possible to change two function using parfor loop?

Suppose I have two functions written in different files, say function1.m and function2.m. The computations in the two functions are independent (some inputs may be shared, say function1(x,y) and function2(x,z) for example). However, running them sequentially, as ret1 = function1(x,y); ret2 = function2(x,z);, may be time consuming. I wonder if it is possible to run them in a parfor loop:
parfor i = 1:2
ret(i) = run(['function' num2str(i)]); % if i=1,ret(1)=function1 and i=2, ret(2)=function2
end
Is it possible to write it in parfor loop?
Your idea is correct, but the implementation is wrong.
MATLAB won't let you use run within parfor, because it cannot verify that the iterations are independent. The proper way to do this is to use functions (not scripts) and an if statement to choose between them:
ret = zeros(2,1);
parfor k = 1:2
if k==1, ret(k) = f1(x,y); end
if k==2, ret(k) = f2(x,z); end
end
Here f1 and f2 are functions that return a scalar value (so the result fits in ret(k)), and each iteration of the loop executes a different if statement.
You can read here more about how to convert scripts to functions.
The rule of thumb for a parfor loop is that each iteration must be standalone. More accurately, the body of the parfor-loop must be independent: one loop iteration cannot depend on a previous iteration, because the iterations are executed in a nondeterministic order.
That means that every iteration must be one which can be performed on its own and produce the correct result.
Therefore, if you have code that says, for instance,
parfor (i = 1:2)
function1(i, someNumber);
function2(i, someNumber);
end
there should be no issue with applying parfor.
However, if you have code that says, for instance,
persistentValue = 0;
parfor (i = 1:2)
persistentValue = persistentValue + function1(i, someNumber);
function2(i, persistentValue);
end
it would not be usable.
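As a side note (not part of the original answer, and using the same schematic function1/someNumber names as the examples above): the accumulation on its own would actually be accepted, because parfor treats persistentValue as a reduction variable; it is only reading the intermediate value inside function2 within the loop that breaks independence.
% Sketch: a pure reduction is valid in parfor.
persistentValue = 0;
parfor i = 1:2
    persistentValue = persistentValue + function1(i, someNumber);
end
% Anything that needs the accumulated value runs after the loop:
% function2(someIndex, persistentValue);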
Yes. It is possible.
Here's an example:
ret = zeros(2,1);
fHandles = {@min, @max};
x = 1:10;
parfor i=1:2
ret(i) = fHandles{i}(x);
end
ret % show the results.
Whether this is a good idea or not, I don't know. There is overhead to setting up the parallel processing that may or may not make it worthwhile for you.
Typically, the more iterations you have to compute, the more value you get from setting up a parfor loop, as the iterations are sliced up and sent nondeterministically to the separate cores for processing. So you're only making use of 2 cores right now, but if you have many functions this may improve things.
The order in which the iterations are run is not guaranteed (it could be that one core gets assigned a range of values for i, but we do not know whether those values are taken in order or randomly), so your code can't depend on other iterations of the loop.
In general, the MATLAB editor is pretty good at flagging these issues ahead of time.
EDIT
Here's a proof of concept for a variable number of arguments to your different functions
ret = zeros(2,1);
fHandles = {@min, @max};
x = 1:10; % x is a 1x10 vector
y = rand(20); % y is a 20x20 matrix
z = 1; % z is a scalar value
fArgs = {{x};
{y,z}}; %wrap your arguments up in a cell
parfor i=1:2
ret(i) = fHandles{i}([fArgs{i}{:}]); %calls the function with its variable sized arguments here
end
ret % show the output
Again, this is just proof-of-concept. There are big warnings showing up in MATLAB about having to broadcast fArgs across all of the cores.
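If the broadcast cost ever becomes a concern, one option is to wrap the shared data in a parallel.pool.Constant (Parallel Computing Toolbox), so it is copied to each worker only once. The toy data below are mine, chosen so each call returns a scalar; this is a sketch, not part of the original answer.
% Sketch: send the argument cell to the workers once instead of broadcasting it.
fHandles = {@min, @max};
fArgsC = parallel.pool.Constant({{1:10}; {2, 5}});   % read-only shared data (toy values)
ret = zeros(2, 1);
parfor i = 1:2
    args = fArgsC.Value{i};          % worker-local copy of the argument list
    ret(i) = fHandles{i}(args{:});   % expand the cell into separate arguments
end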

How can I make these four loops compute in parallel?

I have a problem with MathWorks Parallel Computing Toolbox in Matlab. See my code below
for k=1:length(Xab)
n1=length(Z)*(k-1)+1:length(Z)*k;
MX_j(1,n1)=JXab{k};
MY_j(1,n1)=JYab{k};
MZ_j(1,n1)=Z;
end
for k=length(Xab)+1:length(Xab)+length(Xbc)
n2=length(Z)*(k-1)+1:length(Z)*k;
MX_j(1,n2)=JXbc{k-length(Xab)};
MY_j(1,n2)=JYbc{k-length(Yab)};
MZ_j(1,n2)=Z;
end
for k=length(Xab)+length(Xbc)+1:length(Xab)+length(Xbc)+length(Xcd)
n3=length(Z)*(k-1)+1:length(Z)*k;
MX_j(1,n3)=JXcd{k-length(Xab)-length(Xbc)};
MY_j(1,n3)=JYcd{k-length(Yab)-length(Ybc)};
MZ_j(1,n3)=Z;
end
for k=length(Xab)+length(Xbc)+length(Xcd)+1:length(Xab)+length(Xbc)+length(Xcd)+length(Xda)
n4=length(Z)*(k-1)+1:length(Z)*k;
MX_j(1,n4)=JXda{k-length(Xab)-length(Xbc)-length(Xcd)};
MY_j(1,n4)=JYda{k-length(Yab)-length(Ybc)-length(Ycd)};
MZ_j(1,n4)=Z;
end
If I change the for-loops to parfor-loops, MATLAB warns me that MX_j is not an efficient variable. I have no idea how to solve this or how to make these for loops compute in parallel.
It looks to me like you can combine these into one loop. Create combined cell arrays:
JX = cat(2,JXab, JXbc, JXcd, JXda);
JY = cat(2,JYab, JYbc, JYcd, JYda);
Check that the dimension is right here. If your JX.. cell arrays are column arrays, use cat(1,...) instead.
After doing that, one single loop should do it:
n = length(Xab)+length(Xbc)+length(Xcd)+length(Xda);
for k=1:n
k2 = length(Z)*(k-1)+1:length(Z)*k;
MX_j(1,k2)=JX{k};
MY_j(1,k2)=JY{k};
MZ_j(1,k2)=Z;
end
Before parallelizing anything, check that this is still valid; I haven't tested it. If everything works, you can switch to parfor.
When using parfor, the arrays must be preallocated. The following code could work (untested due to lack of test-data):
n = length(Xab)+length(Xbc)+length(Xcd)+length(Xda);
MX_j = zeros(1,n*length(Z));
MY_j = MX_j;
MZ_j = MX_j;
parfor k=1:n
k2 = length(Z)*(k-1)+1:length(Z)*k;
MX_j(1,k2)=JX{k};
MY_j(1,k2)=JY{k};
MZ_j(1,k2)=Z;
end
Note: As far as I can see, the parfor loop will be much slower here. You simply assign some values... no calculation at all. The setup of the worker pool will take 99.9% of the total execution time.
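In fact, since the loop only copies data, it can be removed entirely. A sketch, assuming every JX{k} and JY{k} is a row vector with length(Z) elements (as the original indexing implies):
% Sketch: pure copying can be vectorized; no loop and no parfor needed.
JX = cat(2, JXab, JXbc, JXcd, JXda);
JY = cat(2, JYab, JYbc, JYcd, JYda);
n  = numel(JX);
MX_j = [JX{:}];                      % concatenate all cells into one row vector
MY_j = [JY{:}];
MZ_j = repmat(Z(:).', 1, n);         % Z repeated once per cell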

Why does the matlab bootstrap procedure evaluate N+1 times?

I wanted to bootstrap in MATLAB using the built-in command bootstrp. What I noticed was that the procedure makes N+1 iterations when I am only asking for N. Why is that? When I build a manual loop to do the bootstrapping, so that it really runs only N times, it is faster. Here is a minimal example of the problem:
clear all
global iterationcounter
tic
iterationcounter=0;
data=unifrnd(0,1,1,1000); %draw vector of 1000 random numbers
bootstat = bootstrp(100,@testmean,data); %evaluate function for 100 bootstrap samples
toc
which uses the function
function [ m ] = testmean( data )
global iterationcounter
m=mean(data);
iterationcounter=iterationcounter+1
end
The function should be evaluated for 100 samples, yet when I run the script, it is evaluated 101 times:
...
iterationcounter =
101
Elapsed time is 0.102291 seconds.
So why should one use this built-in MATLAB function if it appears to waste time?
bootstrp makes an extra call to bootfun (the function argument) for sanity checks (from the source code in MATLAB 2015b, bootstrp.m, l. 167 ff.):
% Sanity check bootfun call and determine dimension and type of result
try
% Get result of bootfun on actual data, force to a row.
bootstat = feval(bootfun,bootargs{:});
bootstat = bootstat(:)';
catch ME
m = message('stats:bootstrp:BadBootFun');
MEboot = MException(m.Identifier,'%s',getString(m));
ME = addCause(ME,MEboot);
rethrow(ME);
end
I would think that in a realistic application N >> 100, so the extra overhead is (much) less than a percent of the total runtime (not taking into account speed gains from possible parallelisation); it should not matter that much.
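For comparison, a manual bootstrap loop of the kind the question alludes to could look like the sketch below. It reproduces only the basic resampling with exactly N evaluations of the statistic, none of bootstrp's extra features.
% Sketch: manual bootstrap, evaluating the statistic exactly N times.
N = 100;
data = unifrnd(0, 1, 1, 1000);
bootstat = zeros(N, 1);
for k = 1:N
    idx = randi(numel(data), 1, numel(data));   % resample with replacement
    bootstat(k) = mean(data(idx));
end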