Simple GPU computation fails with MATLAB Parallel Computing Toolbox

EDIT: the problem was due to my setup, which shadowed the built-in assert function. Thanks to Edric for the tip (see comments below).
I am trying out the GPU capabilities of MATLAB with the example code but receiving an error. I am not sure what the error means or how to fix it. The functions I am testing have GPU support as described on MATLAB's help page. Here is an example, but a similar error pops up for other functions.
>> x = gpuArray.ones(1,10)
x =
1 1 1 1 1 1 1 1 1 1
>> y = cos(x);
>> gather(y)
Error using assert
Too many input arguments.
Error in parallel.internal.types.Atomic.validateIsScalar (line 133)
Error in parallel.internal.types.Atomic/cType (line 267)
Error in parallel.internal.ptx.ptxEmitter/mangleCprotoEntryLazyEval (line 2614)
Error in parallel.internal.gpu.ptxExpr (line 73)
Here is the gpuDevice output:
>> gpuDevice
ans =
CUDADevice with properties:
Name: 'GeForce GTX 870M'
Index: 1
ComputeCapability: '3.0'
SupportsDouble: 1
DriverVersion: 6.5000
ToolkitVersion: 6
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 6.4425e+09
MultiprocessorCount: 7
ClockRateKHz: 967000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
I would really appreciate any help/pointers.
Thanks
EDIT: I am using MATLAB R2014b
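Since the fix turned out to be a shadowed assert (see the first edit above), a quick way to check for shadowing is which -all; the rmpath call below is only a hypothetical illustration:

```matlab
% List everything named 'assert' visible on the path.
% The MATLAB built-in should be the only (or first) entry.
which -all assert

% If a user-defined assert.m shadows the built-in, remove its folder,
% e.g. (hypothetical path):
% rmpath('/path/to/folder/containing/assert')
```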

Related

Convolution of two symbolic arrays on Matlab

I have two arrays:
p1=[sym(1) sym(2)]
p2=[sym(3) sym(4)]
I want to do the convolution of those two lists using conv function.
Matlab outputs the following:
Error using conv2
Invalid data type. First and second arguments must be numeric or logical.
Error in conv (line 43)
c = conv2(a(:),b(:),shape);
Can anyone help me deal with this?
Edit 1: I do not have the Symbolic Math Toolbox, so I demonstrated the matrix-wise operation on numeric values; the same approach should work on symbolic values as well.
The conv function only accepts numeric values, but there is a way to compute the convolution matrix-wise.
I demonstrate it with an example:
assume u and v are as follows :
u =
1 2 1 3
v =
2 7 1
>> conv(u,v)
ans =
2 11 17 15 22 3
instead we could first calculate u'*v, then do some rearranging and summing to calculate conv:
so first :
>> c=u'*v
c=
2 7 1
4 14 2
2 7 1
6 21 3
then we do some rearranging:
>> d=[c;zeros(3,3)]
d =
2 7 1
4 14 2
2 7 1
6 21 3
0 0 0
0 0 0
0 0 0
>>e= reshape(d(1:end-3),[6,3])
e=
2 0 0
4 7 0
2 14 1
6 7 2
0 21 1
0 0 3
and finally adding values together :
>> sum(e,2)
ans =
2
11
17
15
22
3
You can write your own code generalizing this using the size of v: append numel(v)^2 zeros to the end of u'*v, drop the last numel(v) elements, reshape to numel(u)+numel(v)-1 rows, and sum along the rows.
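The padding/reshape steps above can be wrapped into a small helper; this is a sketch (the name symconv is made up), and it should also work for sym inputs because it only uses multiplication, concatenation, reshape, and sum:

```matlab
function c = symconv(u, v)
% Convolution via the outer-product trick described above.
m = numel(u);
n = numel(v);
M = u(:) * reshape(v, 1, n);             % m-by-n, the u'*v step
d = [M; zeros(n, n)];                    % pad with n^2 zeros
e = reshape(d(1:end-n), m + n - 1, n);   % drop last n entries, rearrange
c = sum(e, 2);                           % sum the shifted columns
end
```

With u = [1 2 1 3] and v = [2 7 1] this reproduces the [2; 11; 17; 15; 22; 3] result from the example above.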

MATLAB fmincon error: "Supplied objective function must return a scalar value"

EDIT: To help clarify my question, I'm looking to get a fit to the following data:
I can get a fit using the cftool function, but a least-squares approach doesn't make sense with my binary data. Just to illustrate...
So, my goal is to fit this data using the fmincon function.
ORIGINAL POST:
I have data from a movement control experiment in which participants were timed while they performed a task, and given a score (failure or success) based on their performance. As you might expect, we assume participants will make fewer errors as they have more time to perform the task.
I'm trying to fit a function to this data using fmincon, but get the error "Error using fmincon (line 609)
Supplied objective function must return a scalar value."
I don't understand a) what this means, or b) how I can fix it.
I provide some sample data and code below. Any help greatly appreciated.
%Example Data:
time = [12.16 11.81 12.32 11.87 12.37 12.51 12.63 12.09 11.25 ...
7.73 8.18 9.49 10.29 8.88 9.46 10.12 9.76 9.99 10.08 ...
7.48 7.88 7.81 6.7 7.68 8.05 8.23 7.84 8.52 7.7 ...
6.26 6.12 6.19 6.49 6.25 6.51 6 6.79 5.89 5.93 3.97 4.91 4.78 4.43 ...
3.82 4.72 4.72 4.31 4.81 4.32 3.62 3.71 4.29 3.46 3.9 3.73 4.15 ...
3.92 3.8 3.4 3.7 2.91 2.84 2.7 2.83 2.46 3.19 3.44 2.67 3.49 2.71 ...
3.17 2.97 2.76 2.71 2.88 2.52 2.86 2.83 2.64 2.02 2.37 2.38 ...
2.53 3.03 2.61 2.59 2.59 2.44 2.73];
error = [0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 1 1 0 0 0 0 1 1 1 1 1 1 0 0 0 0 1 1 1 0 1 0 1 0 1 1 0 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1];
%Code:
% initial parameters - a corresponds to params(1), b corresponds to params(2)
a = 3.0;
b = -0.01;
LL = @(params) 1/1+params(1)*(log(time).^params(2));
LL([a b]);
pOpt = fmincon(LL,[a b],[],[]);
The mistake comes from the function LL, which returns as many values as there are elements in time.
To use fmincon properly, you need an objective function that returns only one scalar value.
I believe logistic regression would fit your data and purposes nicely. In that case, why not simply use Matlab's built-in function for multinomial logistic regression?
B = mnrfit(time,error)
Regarding your function LL, are you sure you have entered it correctly and are not missing parentheses?
LL = @(params) 1/(1+params(1)*(log(time).^params(2)));
Without the parentheses, your function is equivalent to 1 + a*log(t).^b
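One way to make the objective scalar while keeping the proposed model is to maximize the binomial likelihood; this is a sketch that assumes the corrected, parenthesized form of LL and that the predicted probabilities stay strictly between 0 and 1:

```matlab
% Predicted probability of an error at each time (element-wise).
p = @(params) 1 ./ (1 + params(1) * log(time).^params(2));

% Negative log-likelihood for binary outcomes: a single scalar,
% as fmincon requires.
negLL = @(params) -sum(error .* log(p(params)) + ...
                       (1 - error) .* log(1 - p(params)));

pOpt = fmincon(negLL, [a b], [], []);
```

(Note that naming the variable error shadows MATLAB's built-in error function; renaming it is safer.)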

MATLAB ismember function with 'rows' option fails on GPU

When I run the command:
[Lia, Locb] = ismember(gpuArray(Ent_Pair_Pos), gpuArray(Ent_Pair_Dmel), 'rows')
I get this error:
Error using gpuArray/ismember
Failed to initialize GPU BLAS library.
The data files can be downloaded from
https://drive.google.com/open?id=1_l51j5wcRSf1gvPxPq-goiIM0nHVCiO7 and
https://drive.google.com/open?id=1T497EmvcsApIUkUWTefFGEvtsPoGnRDE
They can be loaded into MATLAB workspace by running load Ent_Pair_Dmel.txt and load Ent_Pair_Pos.txt.
In addition, here is the output of the gpuDevice command.
CUDADevice with properties:
Name: 'GeForce GTX TITAN X'
Index: 1
ComputeCapability: '5.2'
SupportsDouble: 1
DriverVersion: 9
ToolkitVersion: 7.5000
MaxThreadsPerBlock: 1024
MaxShmemPerBlock: 49152
MaxThreadBlockSize: [1024 1024 64]
MaxGridSize: [2.1475e+09 65535 65535]
SIMDWidth: 32
TotalMemory: 1.2795e+10
AvailableMemory: 1.1801e+10
MultiprocessorCount: 24
ClockRateKHz: 1076000
ComputeMode: 'Default'
GPUOverlapsTransfers: 1
KernelExecutionTimeout: 1
CanMapHostMemory: 1
DeviceSupported: 1
DeviceSelected: 1
I am using MATLAB Version: 9.0.0.341360 (R2016a) on Ubuntu 14.04.
Any idea regarding the error?
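Not a definitive answer, but since the message points at GPU BLAS initialization rather than at ismember itself, one quick sanity check is to run a plain matrix multiply (which also goes through GPU BLAS) and, if that fails too, reset the device:

```matlab
% Exercise GPU BLAS directly with a trivial matrix multiply.
g = gpuArray(rand(100));
h = g * g;
wait(gpuDevice);      % force execution to finish

% If the multiply also fails, try reinitializing the device:
% reset(gpuDevice);
```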

MATLAB `fitglme` causes error on intermediate results

MATLAB R2014b's library function fitglme is acting up. It seems to be producing invalid intermediate results, cf. the following run:
>> formula = 'Y ~ A + (1|B)';
>> glme = fitglme(ds,formula,'Verbose',2);
Starting PL iterations.
============================================================================================
ITER FUN VALUE NORM GRAD NORM STEP CG TERM RHO TRUST RAD ACCEPT
============================================================================================
0 -1.798e+308 0.000e+00 1.250e+03 BNDRY +1.747e+305 1.250e+03 YES
Infinity norm of the final gradient = 0.000e+00
Two norm of the final step = 1.250e+03, TolX = 1.000e-12
Relative infinity norm of the final gradient = 0.000e+00, TolFun = 1.000e-06
EXIT: Local minimum found.
-----------------------------------------------------------------------------------
PL ITER LOGLIK ||ETA|| ||ERR: ETA|| ||W|| ||ERR: ETA->MU->ETA||
-----------------------------------------------------------------------------------
1 NaN 2.797e+00 NaN 4.000e+00 NaN
Error using classreg.regr.lmeutils.StandardLinearLikeMixedModel/validatey (line 299)
NaN or Inf values are not allowed in y.
Error in classreg.regr.lmeutils.StandardLinearMixedModel/set.y (line 265)
newy = validatey(slme,newy);
Error in classreg.regr.lmeutils.StandardGeneralizedLinearMixedModel/fitUsingPL (line 1661)
slme.y = ypw;
Error in classreg.regr.lmeutils.StandardGeneralizedLinearMixedModel/refit (line 4315)
[sglme,cause] = fitUsingPL(sglme,numIter,kappa);
Error in classreg.regr.lmeutils.StandardGeneralizedLinearMixedModel (line 4288)
sglme = refit(sglme);
Error in GeneralizedLinearMixedModel/fitStandardLMEModel (line 1317)
slme = classreg.regr.lmeutils.StandardGeneralizedLinearMixedModel(X,model.y,Zs,Psi,model.FitMethod,dofit,dostats,args{:});
Error in GeneralizedLinearMixedModel/fitter (line 891)
model.slme = fitStandardLMEModel(model);
Error in classreg.regr.FitObject/doFit (line 220)
model = fitter(model);
Error in GeneralizedLinearMixedModel.fit (line 2411)
model = doFit(model);
Error in fitglme (line 389)
glme = GeneralizedLinearMixedModel.fit(T,formula,varargin{:});
where
ds =
Y A B
2.7971 1 1
2.3801 2 1
1.7125 1 2
0.13291 2 2
0.70898 1 3
1.3898 2 3
0.55758 1 4
0.43072 2 4
-1.3622 1 5
-1.4441 2 5
-0.0781 1 6
0.48738 2 6
-0.77377 1 7
-1.4891 2 7
-1.149 1 8
-0.70913 2 8
Please help. I am running thousands of these fittings, and I cannot tell why, e.g., the data set here does not work, while, e.g., the following one DOES:
ds =
Y A B
2.8272 1 1
2.4091 2 1
1.6445 1 2
0.11834 2 2
0.66552 1 3
1.3342 2 3
0.53821 1 4
0.35225 2 4
-1.3412 1 5
-1.4446 2 5
-0.092893 1 6
0.44625 2 6
-0.805 1 7
-1.5075 2 7
-1.1167 1 8
-0.7717 2 8
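Since thousands of these fits are being run, a defensive wrapper (just a sketch) at least lets the batch continue past the data sets that trigger this internal error, and records which ones failed:

```matlab
try
    glme = fitglme(ds, formula, 'Verbose', 0);
catch ME
    % Record the failure and move on to the next data set.
    warning('fitglme failed: %s', ME.message);
    glme = [];
end
```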

Neural network: training y = x1 + x2 performs poorly: how to train a small erratic pattern for regression

I was trying to simulate MATLAB's NN functions before testing my own coded network. I was training y = x1+x2.
But see how it performed,
>> net = newfit([1 2 3 4 5 0 1 2 5;1 2 3 4 5 1 1 1 1],[2 4 6 8 10 0 2 3 6],15);
>> net = train(net,[1 2 3 4 5 0 1 2 5;1 2 3 4 5 1 1 1 1],[2 4 6 8 10 0 2 3 6]);
>> sim(net,[1;4])
ans =
12.1028
>> sim(net,[4;4])
ans =
8.0000
>> sim(net,[4;1])
ans =
3.0397
>> sim(net,[2;2])
ans =
5.1659
>> sim(net,[3;3])
ans =
10.3024
Can anyone explain what is wrong with this training data? Is it not enough to estimate y = x1 + x2, or is the network just over-specialized? I believe this is a regression problem. Now I do not know what I should expect from my own coded network. I was wondering on what criteria this NN converges when it produces such a poor result. Is there any way to know what function it maps to (I know of none)? My own network would not even converge, because it checks the sum-squared error as its loop-break condition. So how should I deal with such a training pattern?
However, I have another awesome training pattern that I am unable to train.
Can anyone train the following data set? Will it work/converge?
0 0 -------> 0
0 1 -------> 1000
1000 0 ----> 1
1 1 -------> 0
I have been using f(x) = x in the output layer and the backpropagation algorithm, but for this pattern the code never seems to converge.
By calling
net = newfit([1 2 3 4 5 0 1 2 5;1 2 3 4 5 1 1 1 1],[2 4 6 8 10 0 2 3 6],15);
you create an ANN with a hidden layer of 15 neurons, which is probably too large for your problem. Besides, your training set is too small.
Here is working code (it will take a while on old computers); I'll let you analyze it and diff it with yours. Please ask if you need further explanation:
% Prepare input and target vectors
a = perms(1:9);
x = a(:, 1);
y = a(:, 2);
z = x + y;
input = [x y];
% Create ANN
net = newfit(input',z',2);
% Learn
net.trainParam.epochs = 1000;
net = train(net, input', z');
Results are virtually perfect:
>> sim(net,[1;4])
ans =
5.0002
>> sim(net,[4;4])
ans =
7.9987
>> sim(net,[4;1])
ans =
4.9998
>> sim(net,[2;2])
ans =
4.0024
>> sim(net,[3;3])
ans =
5.9988
PS: NEWFIT was declared obsolete in R2010b (NNET 7.0); it was last fully supported in R2010a (NNET 6.0.4).
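Since NEWFIT is obsolete, the same experiment can be written with its documented replacement FITNET; this is an untested sketch along the lines of the working code above:

```matlab
% Build the same training set as above.
a = perms(1:9);
inputs = a(:, 1:2)';               % 2-by-N matrix of [x1; x2] columns
targets = (a(:,1) + a(:,2))';      % 1-by-N matrix of x1 + x2

% Two hidden neurons, as in the NEWFIT example above.
net = fitnet(2);
net.trainParam.epochs = 1000;
net = train(net, inputs, targets);

% e.g. sim(net, [1; 4]) should come out close to 5
```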