I have a matrix M of size 262322x4. On running kNN imputation:
M = csvread("C:\Users\Hello\Desktop\DATA\B.csv",1,0);  % skip the header row
B = transpose(M);     % 4x262322: knnimpute measures distances between columns
A = knnimpute(B,1);   % replace each NaN from the single nearest-neighbor column
C = transpose(A);     % back to the original 262322x4 orientation
I get the following error:
>>knn_imputation
Error using pdistmex
Out of memory. Type HELP MEMORY for your options.
Error in pdist (line 264)
Y = pdistmex(X',dist,additionalArg);
Error in knnimpute (line 162)
distances = pdist(dataNoNans',metric,distargs{:});
Error in knn_imputation (line 4)
A = (knnimpute(B,1));
Is this error due to memory limitations or something else?
(Note: machine memory: 8 GB.)
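For scale: pdist has to hold every pairwise distance between the 262322 columns of B, and that alone is far beyond 8 GB. A quick back-of-the-envelope check (a sketch, not part of the original script):
n = 262322;                        % columns of B (rows of the original M)
pdistGB = n*(n-1)/2 * 8 / 2^30     % pairwise distances as doubles: roughly 256 GB
So this is a genuine memory limitation, not a bug in the call itself.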
I have another question, about running NARX with big data. I tried increasing the hidden layer size to get a better model, but 300 seems to be roughly the upper limit on the number of hidden units before memory errors appear, and 1000 fails outright.
With a narxnet of:
net = narxnet(1:25,1:25,1000);
I get the following error:
{Error using zeros
Requested 1000x54373200 (405.1GB) array exceeds maximum array size preference
(377.4GB). This might cause MATLAB to become unresponsive.
Error in nnet.internal.configure.inputWeight (line 25)
net.IW{i,j} = zeros(newSize);
Error in nnet.internal.configure.input (line 42)
net = nnet.internal.configure.inputWeight(net,j,i,x);
Error in network/configure (line 244)
net = nnet.internal.configure.input(net,i,X{i});
Error in preparets (line 302)
net = configure(net,'input',xx(i,:),i);
}
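The requested size matches the input-weight matrix alone; a sketch of the arithmetic, using the numbers straight from the error message:
hidden = 1000;                       % hidden units requested
cols = 54373200;                     % second dimension reported by zeros(...)
iwGB = hidden * cols * 8 / 2^30      % net.IW{1,1} as doubles: ~405 GB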
With size 600 I get an out-of-memory error instead. How can I fix this to be able to use NARX for big data?
net = narxnet(1:25,1:25,600);
I get the following:
{Out of memory.
Error in normr (line 27)
xi(~isfinite(xi)) = 0;
Error in randnr>new_value_from_rows_cols (line 152)
x = normr(rands(rows,cols));
Error in randnr (line 98)
out1 = new_value_from_rows_cols(in1,in2);
Error in initnw>calcnw (line 287)
wDir = randnr(s,r);
Error in initnw>initialize_layer (line 212)
[w,b] = calcnw(range,net.layers{i}.size,active);
Error in initnw (line 101)
out1 = initialize_layer(in1,in2);
Error in initlay>initialize_network (line 155)
net = feval(initFcn,net,i);
Error in initlay (line 97)
out1 = initialize_network(in1);
Error in network/init (line 31)
net = feval(initFcn,net);
Error in network/configure (line 253)
net = init(net);
Error in preparets (line 302)
net = configure(net,'input',xx(i,:),i);
}
Also, the MATLAB code does not scale with CPU count: I get the same runtime with 24 CPUs as with 120.
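On the scaling point: train runs serially unless parallelism is requested explicitly, which would explain identical times at 24 and 120 CPUs. A minimal sketch, assuming the Parallel Computing Toolbox is available and Xs, Ts, Xi, Ai come from preparets:
parpool;                                                 % open a pool of workers
[net,tr] = train(net,Xs,Ts,Xi,Ai,'useParallel','yes');   % distribute training across the pool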
I have a 10^5-by-10^5 sparse matrix called pbAttack. Each element indicates whether there is a connection between node i and node j: if there is a connection, pbAttack(i,j) = 1; otherwise pbAttack(i,j) = 0. I then want to use it following this tutorial: Matlab Autoencoders. I use the same code as in the linked tutorial, changing only the data.
However, I got following errors:
Error using bsxfun
Out of memory. Type HELP MEMORY for your options.
Error in mapminmax.apply (line 8)
y = bsxfun(@plus,y,settings.ymin);
Error in mapminmax.create (line 44)
y = mapminmax.apply(x,settings);
Error in mapminmax (line 51)
[y,settings] = mapminmax.create(x,param);
Error in nnet.internal.configure.input (line 31)
[x,config] = feval(processFcns{j},x,processParams{j});
Error in network/configure (line 234)
net = nnet.internal.configure.input(net,i,X{i});
Error in nntraining.config (line 116)
net = configure(network(net),X,T);
Error in nntraining.setup>setupPerWorker (line 68)
[net,X,Xi,Ai,T,EW,Q,TS,err] = nntraining.config(net,X,Xi,Ai,T,EW,configNetEnable);
Error in nntraining.setup (line 43)
[net,data,tr,err] = setupPerWorker(net,trainFcn,X,Xi,Ai,T,EW,enableConfigure);
Error in network/train (line 335)
[net,data,tr,err] = nntraining.setup(net,net.trainFcn,X,Xi,Ai,T,EW,enableConfigure,isComposite);
Error in Autoencoder.train (line 490)
net = train(net,X,X,'useGPU',iYesOrNo(useGPU));
Error in trainAutoencoder (line 105)
autoenc = Autoencoder.train(X, autonet, paramsStruct.UseGPU);
Error in workflow_autoencoder (line 8)
autoenc1 = trainAutoencoder(pbAttack,hiddenSize,...
Are all these errors caused by the huge size of the matrix? Is there a workaround so that it does not need so much memory? Thank you so much.
It just means that you don't have enough memory for this (since your matrix is not that sparse). You can try rerunning your code with a smaller matrix.
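To see why the memory runs out even though pbAttack is stored sparse: mapminmax densifies its input, so a full 10^5-by-10^5 double copy gets created. A rough check, plus a reduced-size rerun (the subset size below is made up, purely illustrative):
n = 1e5;
denseGB = n*n*8 / 2^30                         % one dense copy: ~75 GB
idx = randperm(size(pbAttack,2), 1000);        % hypothetical subset of nodes
autoenc1 = trainAutoencoder(full(pbAttack(:,idx)), hiddenSize);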
I am using the commands proposed here. When I execute
PCRmsep = sum(crossval(@pcrsse,X,Y,'KFold',6),1) / n;
I get the following error messages:
Error using crossval>evalFun (line 480)
The function 'pcrsse' generated the following error:
Index exceeds matrix dimensions.
Error in crossval>getFuncVal (line 497)
funResult = evalFun(funorStr,arg(:));
Error in crossval (line 343)
funResult = getFuncVal(1, nData, cvp, data, funorStr, []);
What does this error mean and how can I prevent this error?
X: 24x9 matrix
Y: 24x1 matrix
I'm new to Matlab and was also trying to use this function. I was getting the same error, so I had a look inside it. For me, saving a copy and changing the maxNumComp value from 10 to 8 (I have 8 predictors) made it work. I have yet to figure out why.
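That change makes sense once you count what a cross-validation fold can support; a sketch using the asker's 24x9 X:
c = cvpartition(24,'KFold',6);
foldRows = sum(training(c,1))       % 20 rows in each training fold
maxComps = min(foldRows-1, 9)       % at most 9 components available, but pcrsse asks for 10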
I am trying to run the example at:
http://nl.mathworks.com/help/stats/group-comparisons-using-categorical-arrays.html
using Matlab R2013b.
clear
load('carsmall')
cars = table(MPG,Weight,Model_Year);
cars.Model_Year = nominal(cars.Model_Year);
fit = fitlm(cars,'MPG~Weight*Model_Year')
Unfortunately I get the error:
Error using classreg.regr.FitObject/assignData (line 257)
Predictor and response variables must have the same length.
Error in classreg.regr.TermsRegression/assignData (line 349)
model = assignData@classreg.regr.ParametricRegression(model,X,y,w,asCat,varNames,excl);
Error in LinearModel.fit (line 852)
model = assignData(model,X,y,weights,asCatVar,dummyCoding,model.Formula.VariableNames,exclude);
Error in fitlm (line 111)
model = LinearModel.fit(X,varargin{:});
Any clue why?
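One hedged workaround to try, assuming the failure is in R2013b's brand-new table support rather than in the model itself: fall back to a dataset array, which LinearModel.fit has accepted for longer.
load('carsmall')
ds = dataset(MPG,Weight,Model_Year);
ds.Model_Year = nominal(ds.Model_Year);   % same nominal grouping as in the example
fit = LinearModel.fit(ds,'MPG~Weight*Model_Year')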
I need to run an optimization of a 2-variable function for which I provide the gradients. The code runs normally with the 'interior-point' algorithm, but it gets stuck after the first iteration when I try the 'sqp' algorithm. The interior-point result is about 5% off the values I expect, so I really need to check whether SQP gives better results.
% f, fdelta, ml, E, t are symbolic quantities defined earlier (not shown)
deltaml = 10^(-4);            % finite-difference step for ml
deltaE  = 10^(-2);            % finite-difference step for E
Eo  = [0;1];                  % starting point
Elb = [0;0];                  % lower bounds
Eub = [100;100000000000];     % upper bounds
Alg = 'sqp';
% finite-difference approximation of the gradient of fdelta
f1 = [subs(fdelta,ml,ml+deltaml);   subs(fdelta,E,E+deltaE)];
f2 = [subs(fdelta,ml,ml+deltaml/2); subs(fdelta,E,E+deltaE/2)];
gradobj = (4*f2-3*f-f1)./[deltaml;deltaE];
% single handle returning both the objective and its gradient
objfungrad = matlabFunction(fdelta,gradobj,'vars',{t},'outputs',{'f','gradf'});
opts_grad = optimset('Algorithm',Alg,'TolFun',1e-16,'TolX',1e-16,'Display','off','GradObj','on');
[xgrad,fval] = fmincon(objfungrad,Eo,[],[],[],[],Elb,Eub,[],opts_grad);
The error when running sqp is:
Error using deal (line 38)
The number of outputs should match the number of inputs.
Error in C:\Program Files\MATLAB\R2013a\toolbox\symbolic\symbolic\symengine.p
Error in C:\Program Files\MATLAB\R2013a\toolbox\optim\optim\private\evalObjAndConstr.p>evalObjAndConstr (line 135)
Error in C:\Program Files\MATLAB\R2013a\toolbox\optim\optim\sqpLineSearch.p>sqpLineSearch (line 287)
Error in fmincon (line 910)
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN] = sqpLineSearch(funfcn,X,full(A),full(B),full(Aeq),full(Beq), ...
Error in symb_optm_ml_E_mult (line 54)
[xgrad fval] = fmincon(objfungrad,Eo,[],[],[],[],[Elb],[Eub],[],opts_grad);
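A workaround sketch, assuming the deal error comes from the two-output function that matlabFunction generates: build separate handles for the objective and the gradient, and combine them in a wrapper that returns the gradient only when fmincon asks for it. The wrapper name objWithGrad is made up; put it in its own file (objWithGrad.m).
fFun = matlabFunction(fdelta,'vars',{t});      % objective only
gFun = matlabFunction(gradobj,'vars',{t});     % gradient only
[xgrad,fval] = fmincon(@(x)objWithGrad(x,fFun,gFun),Eo,[],[],[],[],Elb,Eub,[],opts_grad);

function varargout = objWithGrad(x,fFun,gFun)
varargout{1} = fFun(x);        % objective value
if nargout > 1
    varargout{2} = gFun(x);    % gradient, only when the solver requests it
end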