MATLAB GPU computation

I am learning MATLAB's GPU functions. My function myfun takes two input parameters, delta and p. Eventually, I will apply myfun to many combinations of delta and p. For each combination of delta and p, myfun should count how many elements of V satisfy the condition delta*V - p > 0, where V = [0:0.001:1]. Ideally, I want V to be a global variable, but MATLAB's GPU support seems to have restrictions on global variables, so I do it another way. The code is as follows:
function result = gpueg2()
dd = 0.1;
DELTA = [dd:dd:1];
dp = 0.001;
P = [0:dp:1];
[p,delta]=meshgrid(P,DELTA);
p = gpuArray(p(:));
delta = gpuArray(delta(:));
V = [0:0.001:1];
function [O] = myfun(delta,p)
O = sum((delta*V-p)>0);
end
result = arrayfun(@myfun,delta,p);
end
However, it throws an error message:
Function passed as first input argument contains unsupported or unknown function 'sum'.
But I believe sum is supported on the GPU.
Any advice and suggestions are highly appreciated.

The problem with sum is not with the GPU, it's with using arrayfun on a GPU. The list of functions that arrayfun on a GPU accepts is given here: https://www.mathworks.com/help/distcomp/run-element-wise-matlab-code-on-a-gpu.html. sum is not on the list on that page of documentation.
Your vectors are not so big (though I accept this may be a toy example of your real problem). I suggest the following alternative implementation:
function result = gpueg2()
dd = 0.1;
DELTA = dd:dd:1;
dp = 0.001;
P = 0:dp:1;
V = 0:0.001:1;
[p,delta,v] = meshgrid(P,DELTA,V);
p = gpuArray(p);
delta = gpuArray(delta);
v = gpuArray(v);
result = sum(delta.*v-p>0, 3);
end
Note the following differences:
I make 3D arrays of p, delta, v, rather than 2D. These three are only about 240 MB in total (each is a 10-by-1001-by-1001 double array).
I do the calculation delta.*v-p>0 on the whole 3D array: this will be well split on the GPU.
I do the sum on the 3rd index, i.e. over V.
I have checked that your routine on the CPU and mine on the GPU give the same results.
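As a small follow-up (assuming you eventually want the counts back on the CPU): result here is a 10-by-1001 gpuArray, with DELTA along the rows and P along the columns, since the sum is taken over the V dimension. You can bring it back to host memory with gather:
result = gather(result); % copy the 10-by-1001 count matrix from the GPU to the CPU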

Related

"Not enough input arguments" when trying to implement simple FOR loop using GPU

I'm trying to calculate the probability that a point (an n-by-n matrix) uniformly distributed in R^(n^2) has only eigenvalues with negative real part, but I keep getting the following error:
Not enough input arguments.
Error in probability_n (line 4)
for i = 1:num_pts
Here is my code:
N = 10^2;
num_pts = 10^4;
n = 2;
n = n*ones(N,1,'gpuArray');
k = arrayfun(probability_n,n,num_pts);
and the function called is
function k = probability_n(n,num_pts)
k = 0;
for i = 1:num_pts
R = reshape(randsphere(1,n^2,1),n,n);
if all(real(eig(R))<0)
k = k+1;
end
end
end
function P = randsphere(m,n,r)
P = randn(m,n);
s2 = sum(P.^2,2);
P = P.*repmat(r*(gammainc(s2/2,n/2).^(1/n))./sqrt(s2),1,n);
end
Why is this happening? I suspect it is something very simple to do with a syntax error, since this is my first time trying to use my GPU for MATLAB. The GPU is an Nvidia GeForce GTX 580. Thanks.
In general, it's best to test things in vanilla MATLAB (without GPU or parallel processing) if you experience issues to see if the issue is specific to the GPU or parallel processing or whether it's something else. If you do that, you'll see that your code still doesn't work.
This is because you need to pass a function handle for probability_n to arrayfun. As you have it written, probability_n is called immediately with no input arguments (you don't need the () to invoke a function). You receive the error you do when MATLAB tries to access num_pts from within probability_n and it hasn't been provided.
k = arrayfun(@probability_n, n, num_pts);
Note that passing the scalar num_pts as the third input only works when the first input to arrayfun is a gpuArray object. Otherwise, you'll want to create an anonymous function which passes num_pts to probability_n.
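For the non-gpuArray case, a minimal sketch of such an anonymous function (num_pts is captured from the workspace when the handle is created):
k = arrayfun(@(x) probability_n(x, num_pts), n);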

MATLAB: How to plot a cubic expression for certain range of input pressure

I have a cubic expression here
I am trying to determine and plot δ in the expression for P values of 0.0 to 5000. I'm really struggling to get the expression for δ in terms of the pressure P.
clear all;
close all;
t = 0.335*1e-9;
r = 62*1e-6;
delta = 1.2*1e+9;
E = 1e+12;
v = 0.17;
P = 0:100:5000
P = (4*delta*t)*w/r^2 + (2*E*t)*w^3/((1-v)*r^4);
I would appreciate if anyone could provide pointers.
I suggest two simple methods.
You evaluate P as a function of delta and then plot(P,delta), as sketched below. This is quick and dirty, but if all you need is a plot it will do. The inconvenience is that you may have to do some trial and error to find the range of delta values that covers the P interval you want, but you can also take a large enough delta_max and then restrict the x-axis limits of the plot.
Your function is a simple cubic, which you can solve analytically (see here if you are lost) to invert P(delta) into delta(P).
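A minimal sketch of the first (plot-based) method, using the parameter values from the question; the upper limit chosen for the unknown (called w here, following the other answer's notation) is just a guess that happens to be large enough to cover P up to 5000:
t = 0.335*1e-9;
r = 62*1e-6;
delta = 1.2*1e9;
E = 1e12;
v = 0.17;
w = linspace(0, 1e-5, 1000);                  % guessed range for the unknown deflection
P = 4*delta*t*w/r^2 + 2*E*t*w.^3/((1-v)*r^4); % evaluate P as a function of w
plot(P, w);
xlim([0 5000]);                               % restrict to the pressure range of interest
xlabel('P'); ylabel('\delta');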
What you want is the functional inverse of your expression, i.e., δ as a function of P. Since it's a cubic polynomial, you can expect up to three solutions (roots) for a given value of P. However, I'm guessing that you're only interested in real-valued solutions and nonnegative values of P. In that case there's just one real root for each value of P.
Given the values of your parameters, it makes most sense to solve this numerically using fzero. Using the parameter names in your code (different from equations):
t = 0.335*1e-9;
r = 62*1e-6;
delta = 1.2*1e9;
E = 1e12;
v = 0.17;
f = @(w,p)2*E*t*w.^3/((1-v)*r^4)+4*delta*t*w/r^2-p;
P = 0:100:5000;
w0 = [0 1]; % Bounded initial guess, valid up to very large values of P
w_sol = zeros(length(P),1);
for i = 1:length(P)
w_sol(i) = fzero(@(w)f(w,P(i)),w0); % Find solution for each P
end
figure;
plot(P,w_sol);
You could also solve this using symbolic math:
syms w p
t = 0.335*sym(1e-9);
r = 62*sym(1e-6);
delta = 1.2*sym(1e9);
E = sym(1e12);
v = sym(0.17);
w_sol = solve(p==2*E*t*w^3/((1-v)*r^4)+4*delta*t*w/r^2,w);
P = 0:100:5000;
w_sol = double(subs(w_sol(1),p,P)); % Plug in P values and convert to floating point
figure;
plot(P,w_sol);
Because of your numeric parameter values, solve returns an answer in terms of three RootOf objects, the first of which is the real one you want.

Setting up a matrix of time and space for a function

I'm writing a program in matlab to observe how a function evolves in time. I'd like to set up a matrix that fills its first row with the initial function, and progressively fills more rows based off of a time derivative (that's dependent on the spatial derivative). The function is arbitrary, the program just needs to 'evolve' it. This is what I have so far:
xleft = -10;
xright = 10;
xsampling = 1000;
tmax = 1000;
tsampling = 1000;
dt = tmax/tsampling;
x = linspace(xleft,xright,xsampling);
t = linspace(0,tmax,tsampling);
funset = [exp(-(x.^2)/100);cos(x)]; %Test functions.
funsetvel = zeros(size(funset)); %The functions velocities.
spacetimevalue1 = zeros(length(x), length(t));
spacetimevalue2 = zeros(length(x), length(t));
% Loop that fills the first function's spacetime matrix.
for j=1:length(t)
funsetvel(1,j) = diff(funset(1,:),x,2);
spacetimevalue1(:,j) = funsetvel(1,j)*dt + funset(1,j);
end
This outputs the error "Difference order N must be a positive integer scalar." I'm unsure what this means. I'm fairly new to Matlab. I will swap the Euler method for another algorithm once I can actually get some output that matches my expectations. Aside from the error associated with taking the spatial derivative, do you all have any suggestions on how to evaluate this sort of process? Thank you.

How can I plot data to a “best fit” cos² graph in Matlab?

I’m currently a Physics student and for several weeks have been compiling data related to ‘Quantum Entanglement’. I’ve now got to a point where I have to plot my data (which should resemble a cos² graph - and does) to a sort of “best fit” cos² graph. The lab script says the following:
A more precise determination of the visibility V (this is basically how 'clean' the data is) follows from the best fit to the measured data using the function:
f(b) = (A/2)*[1 - V*sin((b - b_center)/P)]
Granted this probably doesn’t mean much out of context, but essentially A is the amplitude, b is an angle and P is the periodicity. Hence this is also a “wave” like the experimental data I have found.
From this I understand, as previously mentioned, I am making a “best fit” curve. However, I have been told that this isn’t possible with Excel and that the best approach is Matlab.
I know intermediate JavaScript but do not know Matlab and was hoping for some direction.
Is there a tutorial I can read for this? Is it possible for someone to go through it with me? I really have no idea what it entails, so any feed back would be greatly appreciated.
Thanks a lot!
Initial steps
I guess we should begin by getting a representation in Matlab of the function that you're trying to model. A direct translation of your formula looks like this:
function y = targetfunction(A,V,P,bc,b)
y = (A/2) * (1 - V * sin((b-bc) / P));
end
Getting hold of the data
My next step is going to be to generate some data to work with (you'll use your own data, naturally). So here's a function that generates some noisy data. Notice that I've supplied some values for the parameters.
function [y b] = generateData(npoints,noise)
A = 2;
V = 1;
P = 0.7;
bc = 0;
b = 2 * pi * rand(npoints,1);
y = targetfunction(A,V,P,bc,b) + noise * randn(npoints,1);
end
The function rand generates random points on the interval [0,1], and I multiplied those by 2*pi to get points randomly on the interval [0, 2*pi]. I then applied the target function at those points, and added a bit of noise (the function randn generates normally distributed random variables).
Fitting parameters
The most complicated function is the one that fits a model to your data. For this I use the function fminunc, which does unconstrained minimization. The routine looks like this:
function [A V P bc] = bestfit(y,b)
x0(1) = 1; %# A
x0(2) = 1; %# V
x0(3) = 0.5; %# P
x0(4) = 0; %# bc
f = @(x) norm(y - targetfunction(x(1),x(2),x(3),x(4),b));
x = fminunc(f,x0);
A = x(1);
V = x(2);
P = x(3);
bc = x(4);
end
Let's go through it line by line. First, I need to define the function f that I want to minimize, and this isn't too hard: to minimize a function in Matlab, it needs to take a single vector as a parameter, so we have to pack our four parameters into a vector, which I do in the first four lines. I used starting values that are close to, but not the same as, the ones I used to generate the data.
Then I define the function f I want to minimize. It takes a single argument x, which it unpacks and feeds to targetfunction, along with the points b in our dataset. Hopefully the resulting values are close to y. We measure how far they are from y by subtracting them from y and applying the function norm, which squares every component, adds them up, and takes the square root, i.e. the Euclidean norm of the residual; minimizing it is equivalent to minimizing the root-mean-square error.
Then I call fminunc with our function to be minimized, and the initial guess for the parameters. This uses an internal routine to find the closest match for each of the parameters, and returns them in the vector x.
Finally, I unpack the parameters from the vector x.
Putting it all together
We now have all the components we need, so we just want one final function to tie them together. Here it is:
function master
%# Generate some data (you should read in your own data here)
[f b] = generateData(1000,1);
%# Find the best fitting parameters
[A V P bc] = bestfit(f,b);
%# Print them to the screen
fprintf('A = %f\n',A)
fprintf('V = %f\n',V)
fprintf('P = %f\n',P)
fprintf('bc = %f\n',bc)
%# Make plots of the data and the function we have fitted
plot(b,f,'.');
hold on
plot(sort(b),targetfunction(A,V,P,bc,sort(b)),'r','LineWidth',2)
end
If I run this function, I see this being printed to the screen:
>> master
Local minimum found.
Optimization completed because the size of the gradient is less than
the default value of the function tolerance.
A = 1.991727
V = 0.979819
P = 0.695265
bc = 0.067431
And the following plot appears:
That fit looks good enough to me. Let me know if you have any questions about anything I've done here.
I am a bit surprised, as you mention f(a) and your function does not contain an a. But in general, suppose you want to plot f(x) = cos(x)^2.
First determine for which values of x you want to make a plot, for example
xmin = 0;
stepsize = 1/100;
xmax = 6.5;
x = xmin:stepsize:xmax;
y = cos(x).^2;
plot(x,y)
However, note that this approach works just as well in Excel; you just have to do some work to get your x values and the function into the right cells.

Non-linear function parameter estimation - MATLAB, lsqnonlin, fzero

I'm having difficulty with a fitting problem. From the errors that I get I imagine that the boundaries are not defined correctly and I haven't managed to find a solution. Any help would be very much appreciated.
Alternative methods for the solution of the same problem are also accepted.
Description
I have to estimate the parameters of a non-linear function of the type:
A*y(x) + B*EXP(C*y(x)) + g(x,D) = 0
subjected to the parameters PAR = [A,B,C,D] being within the range
LB < PAR < UB
Code
To solve the problem I'm using the Matlab functions lsqnonlin and fzero. The simplified code used is reported below.
The problem is divided in four functions:
parameterEstimation - (a wrapper for the lsqnonlin function)
objectiveFunction_lsq - (the objective function for the param estimation)
yFun - (the function returning the value of the variable y)
objectiveFunction_zero - (the objective function of the non-linear equation used to calculate y)
Errors
Running the code on the data, I get this warning:
Warning: Length of lower bounds is > length(x); ignoring extra bounds
and this error:
Failure in initial user-supplied objective function evaluation.
LSQNONLIN cannot continue
This makes me think that the boundaries are not correctly defined or not correctly passed, but maybe the problem is elsewhere.
function Done = parameterEstimation()
%read inputs
Xmeas = xlsread('filepath','worksheet','range');
Ymeas = xlsread('filepath','worksheet','range');
%initial values and boundary conditions
initialGuess = [1,1,1,1]; %model parameters initial guess
LB = [0,0,0,0]; %model parameters lower boundaries
UB = [2,2,2,2]; %model parameters upper boundaries
%parameter estimation
calcParam = lsqnonlin(@objectiveFunction_lsq_2,initialGuess,LB,UB,[],Xmeas,Ymeas);
Done = calcParam;
function diff = objectiveFunction_lsq_2(PAR,Xmeas,Ymeas)
y_calculated = yFun(PAR,Xmeas);
diff = y_calculated-Ymeas;
function result = yFun(PAR,X)
y_0 = 2;
val = fzero(@(y)objfun_y(y,PAR,X),y_0);
result = val;
function result = objfun_y(y,PAR,X)
A = PAR(1);
B = PAR(2);
A = PAR(3);
C = PAR(4);
D = PAR(5);
val = A*y+B*exp(y*C)+g(D,X);
result = val;
I don't have the optimization toolbox, but are you sure you can pass the constants like that?
I would do this instead:
calcParam = lsqnonlin(#(PAR) objectiveFunction_lsq_2(PAR,Xmeas,Ymeas),initialGuess,LB,UB);