MATLAB 'Gather' on Tall Array Never Terminating

I am running R2017a on Windows 10, and using a tall array constructed from a datastore object (which was, itself, constructed from tall arrays built on MATLAB matrices of doubles).
What really bugs me about the issue I'm having now is that all my code used to work fine. One day it simply started hanging when I tried to run it.
This is the block in question:
%load
tallTraindat = tall(datastore);
sz = size(tallTraindat);
sz = gather(sz);
numExamples = sz(1);
exampleLen = sz(2);
My tall array is an M x ~1700 single array, constructed from a datastore of smaller single arrays. In a loop:
write(fname,tallEx); % tallEx constructed by tall(someSingles);
folderNames{end+1} = fname;
and then:
ds = datastore(folderNames,'Type','tall');
As you can see, this is about as vanilla as it gets, but the operation 'sz = gather(sz)' simply hangs forever; it never finishes or returns. My parallel pool has started properly, and the gather operation gets to the point where it prints 'Evaluation 100% complete', but it goes nowhere from there. If I pause execution, I'm always taken to the same point in RemoteSpmdExecutor, line 129: 'obj.RemoteSpmdController.drainIO( false );'. Execution never returns from this line.
EDIT: When I woke up today, it started failing with an error message instead:
Error using parallel.FevalOnAllFuture/fetchOutputs (line 69)
fetchOutputs could not concatenate the OutputArguments. Set 'UniformOutput' to false.
Cell contents reference from a non-cell array object.
EDIT: Re-created my default local parallel pool a couple of times. Now it's back to hanging on that same line of code.
Based on all my tests, it seems that my issue occurs whenever I construct a tall datastore from more than one tall-array folder and then construct a tall array from that datastore, as sketched below.
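A minimal sketch of the setup that seems to trigger it (the folder names here are placeholders for folders written with write() as above):
folderNames = {'tallFolder1','tallFolder2'}; % more than one folder of written tall arrays
ds = datastore(folderNames,'Type','tall');
t = tall(ds);
sz = gather(size(t)); % this is the gather that hangs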
Tearing my hair out over this one. If anyone even has a suspicion where to look for the source of this problem, I'd appreciate it. I'll try to respond quickly to requests for more info.

Related

Spark::KMeans calls takeSample() twice?

I have a lot of data and I have experimented with partitions of cardinality [20k, 200k+].
I call it like this:
from pyspark.mllib.clustering import KMeans, KMeansModel
C0 = KMeans.train(first, 8192, initializationMode='random', maxIterations=10, seed=None)
C0 = KMeans.train(second, 8192, initializationMode='random', maxIterations=10, seed=None)
and I see that initRandom() calls takeSample() once.
Then the takeSample() implementation doesn't seem to call itself recursively or anything like that, so I would expect KMeans() to call takeSample() once. So why does the monitor show two takeSample()s per KMeans()?
Note: I run more KMeans() calls and they all invoke two takeSample()s, regardless of whether the data is .cache()'d or not.
Moreover, the number of partitions doesn't affect the number of times takeSample() is called; it's constant at 2.
I am using Spark 1.6.2 (and I cannot upgrade) and my application is in Python, if that matters!
I brought this to the mailing list of the Spark devs, so I am updating:
Details of the 1st and 2nd takeSample() (screenshots omitted), where one can see that the same code is executed.
As suggested by Shivaram Venkataraman in Spark's mailing list:
I think takeSample itself runs multiple jobs if the amount of samples collected in the first pass is not enough. The comment and code path at GitHub should explain when this happens. Also you can confirm this by checking if the logWarning shows up in your logs.
// If the first sample didn't turn out large enough, keep trying to take samples;
// this shouldn't happen often because we use a big multiplier for the initial size
var numIters = 0
while (samples.length < num) {
  logWarning(s"Needed to re-sample due to insufficient sample size. Repeat #$numIters")
  samples = this.sample(withReplacement, fraction, rand.nextInt()).collect()
  numIters += 1
}
However, as one can see, the second comment says it shouldn't happen often, yet it always happens for me, so if anyone has another idea, please let me know.
It was also suggested that this was a problem of the UI and takeSample() was actually called only once, but that was just hot air.

Simulink-Simulation with parfor (Parallel Computing)

I asked a question today about parallel computing with MATLAB/Simulink. My earlier question is a bit messy and there is a lot in that code which doesn't really belong to the problem, so I am asking again with a cleaner example.
My problem is: I want to simulate something in a parfor loop, while my Simulink simulation uses the "From Workspace" block to bring the needed data from the workspace into the simulation. For some reason it doesn't work.
My code looks as follows:
load DemoData
path = pwd;
apool = gcp('nocreate');
if isempty(apool)
apool = parpool('local');
end
parfor k = 1 : 2
load_system(strcat(path,'\DemoMDL'))
set_param('DemoMDL/Mask', 'DataInput', 'DemoData')
SimOut(k) = sim('DemoMDL')
end
delete(apool);
My simulation looks as follows
The DemoData file is just a zeros(100,20) matrix; it's an example for the data.
Now if I run the script, the following error message occurs:
Error using DemoScript (line 9)
Error evaluating parameter 'DataInput' in 'DemoMDL/Mask'
Caused by:
Error using parallel_function>make_general_channel/channel_general (line 907)
Error evaluating parameter 'DataInput' in 'DemoMDL/Mask'
Error using parallel_function>make_general_channel/channel_general (line 907)
Undefined function or variable 'DemoData'.
Do you have an idea why this happens?
The strange thing is that if I try to access 'DemoData' inside the parfor loop, it works. For example, with this code:
load DemoData
path = pwd;
apool = gcp('nocreate');
if isempty(apool)
apool = parpool('local');
end
parfor k = 1 : 2
load_system(strcat(path,'\DemoMDL'))
set_param('DemoMDL/Mask', 'DataInput', 'DemoData')
fprintf(num2str(DemoData))
end
delete(apool);
That's my output when just displaying the data without simulating:
>> DemoScript
00000000000000000 .....
Thanks a lot. Here is the original question, with a lot more (unnecessary) details:
EarlierQuestion
I suspect the issue is that when MATLAB is pre-processing the parfor loop to determine which variables need to be passed to the workers, it does not know what DemoData is. In your first example it's just a string, so no data gets sent over. In your second example it explicitly knows about the variable and hence does pass it over.
You could try either using the Model Workspace, or perhaps just inserting the line
DemoData = DemoData;
in the parfor loop code.
Your error is because the workers did not have access to DemoData, which lives in the client workspace.
When running parallel simulations with Simulink, it is easier to manage workspace data if you move it to the model workspace. Then each worker can access this data from its copy of the model workspace. You can load a MAT-file or write MATLAB code to initialize data in the model workspace. You can access the model workspace through the Simulink model menu: View -> Model Explorer -> Model Workspace.
Also see documentation at http://www.mathworks.com/help/simulink/ug/running-parallel-simulations.html that talks about "Resolving workspace access issues".
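As a rough sketch of the model-workspace idea applied to the code above (it assumes the MAT-file contains a variable named DemoData; the renaming to S is mine, and assignin here is the Simulink.ModelWorkspace method, not the base-workspace function):
S = load('DemoData'); % assign an output so the parfor body stays transparent
path = pwd;
parfor k = 1 : 2
    load_system(strcat(path,'\DemoMDL'))
    mdlWks = get_param('DemoMDL','ModelWorkspace');
    assignin(mdlWks,'DemoData',S.DemoData); % the block now resolves DemoData from the model workspace
    set_param('DemoMDL/Mask', 'DataInput', 'DemoData')
    SimOut(k) = sim('DemoMDL');
end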
You can also move the line
load DemoData
to within the parfor loop. Doing this, you ensure that the data will be available in each worker's base workspace, which is accessible to the model, instead of only in the client workspace.

MATLAB parpool failed when stopping mdce on one of the worker nodes [duplicate]

When an out-of-memory error is raised in a parfor, is there any way to kill only one Matlab slave to free some memory instead of having the entire script terminate?
Here is what happens by default when an out-of-memory error occurs in a parfor: the script terminates, as shown in the screenshot below.
I wish there were a way to kill just one slave (i.e. remove a worker from the parpool), or to stop using it, so as to release as much memory as possible from it:
If you get an out-of-memory error in the master process, there is no way to fix this. For an out-of-memory error on a slave, this should do it:
The simple idea of the code: restart the parfor again and again with the missing iterations until you get all results. If one iteration fails, a flag (file) is written, which lets all other iterations throw an error as soon as the first error has occurred. This way we get "out of the loop" without wasting time producing further out-of-memory errors.
%Your intended iterator
iterator=1:10;
%flags which indicate what succeeded
succeeded=false(size(iterator));
%result array
result=nan(size(iterator));
FLAG='ANY_WORKER_CRASHED';
while ~all(succeeded)
fprintf('Another try\n')
%determine which iterations should be done
todo=iterator(~succeeded);
%initialize array for the remaining results
partresult=nan(size(todo));
%initialize flags which indicate which iterations succeeded (we can not
%throw errors, that throws away results)
partsucceeded=false(size(todo));
%flag indicates that any worker crashed. Have to use a file-based
%solution, don't know a better one.
if exist(FLAG,'file'), delete(FLAG); end %clear any stale flag from a previous round
try
parfor falseindex=1:sum(~succeeded)
realindex=todo(falseindex);
try
% The flag is used to let all other workers jump out of the
% loop as soon as one calculation has crashed.
if exist(FLAG,'file')
error('some other worker crashed');
end
% insert your code here
%dummy code which randomly throws an exception
if rand<.5
error('hit out of memory')
end
partresult(falseindex)=realindex*2;
% End of user code
partsucceeded(falseindex)=true;
fprintf('trying to run %d and succeeded\n',realindex)
catch ME
% catch errors within workers to preserve work
partresult(falseindex)=nan;
partsucceeded(falseindex)=false;
fprintf('trying to run %d but it failed\n',realindex)
fclose(fopen(FLAG,'w'));
end
end
catch
%reduce poolsize by 1
newsize = matlabpool('size')-1;
matlabpool close
matlabpool(newsize)
end
%put the result of the current iteration into the full result
result(~succeeded)=partresult;
succeeded(~succeeded)=partsucceeded;
end
After quite a bit of research, and a lot of trial and error, I think I may have a decent, compact answer. What you're going to do is:
Declare some max memory value. You can set it dynamically using the MATLAB function memory, but I like to set it directly.
Call memory inside your parfor loop, which returns the memory information for that particular worker.
If the memory used by the worker exceeds the threshold, cancel the task that worker was working on. Now, here it gets a bit tricky. Depending on the way you're using parfor, you'll need to delete or cancel either the task or the worker. I've verified that it works with the code below when there is one task per worker, on a remote cluster.
Insert the following code at the beginning of your parfor contents. Tweak as necessary.
memLimit = 280000000; %// This doesn't have to be in parfor. Everything else does.
memData = memory;
if memData.MemUsedMATLAB > memLimit
task = getCurrentTask();
cancel(task);
end
Enjoy! (Fun question, by the way.)
One other option to consider is that since R2013b, you can open a parallel pool with 'SpmdEnabled' set to false - this allows MATLAB worker processes to die without the whole pool being shut down - see the doc here http://www.mathworks.co.uk/help/distcomp/parpool.html. Of course, you still need to arrange somehow to shut down the workers.
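A minimal sketch of that option (the pool size of 4 is an arbitrary choice, not from the answer above):
p = parpool('local', 4, 'SpmdEnabled', false); % workers may now crash without killing the pool
parfor k = 1:10
    % memory-hungry work goes here; a crashed worker no longer brings the pool down
end
delete(p);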

getblast error: BLAST report unavailable

I'm writing MATLAB code that's supposed to iterate over a list of sequences and BLAST each one at a time. Here is the relevant part of the code:
%blast the seq
[res, ROTE] = blastncbi(seq, 'blastn');
res1 = getblast(res, 'WaitTime',ROTE);
resName = res1.Hits(1).Name
For some sequences it worked, and then for the last one it gave me this error message:
Error using getblast (line 176)
BLAST V7EBUE0901R is unavailable - try later.
Please note that I've passed ROTE as the 'WaitTime' value, as suggested in the documentation of this function.
The script must iterate over lots and lots of genes, so I can't let it crash every 5 minutes!
The RTOE returned by blastncbi (ROTE in your code) is an estimate of how long the search will take. Perhaps the estimate is sometimes simply incorrect.
Two simple ways to deal with this could be waiting longer, or trying it twice:
res1 = getblast(res, 'WaitTime',ROTE*10);
or
try
res1 = getblast(res, 'WaitTime',ROTE);
catch
res1 = getblast(res, 'WaitTime',ROTE);
end
Of course this assumes that you are sure that the info that you request is actually available.
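If a single retry is not enough, a retry loop that waits longer on each attempt may help. A rough sketch (the retry count and pause length are arbitrary choices):
maxRetries = 5;
res1 = [];
for attempt = 1:maxRetries
    try
        res1 = getblast(res, 'WaitTime', ROTE*attempt); % wait longer on each attempt
        break % success, stop retrying
    catch err
        fprintf('Attempt %d failed: %s\n', attempt, err.message);
        pause(60); % give the BLAST servers a minute before retrying
    end
end
if isempty(res1)
    error('BLAST report still unavailable after %d attempts.', maxRetries);
end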

Long data load time in Matlab

I have four variables, each saved in 365 MAT-files (size: 8 x 92 x 240). I try to load these into my function in a for loop over day = 1:365, one file per variable per day. However, the first two variables always take an abnormally long time to load. My code for loading looks like this:
load([eraFolder sprintf('Y%dD%d-tempSD.mat',year,day)], 'tempSD'); % took 5420 s to load
load([eraFolder sprintf('Y%dD%d-tempDewSD.mat',year,day)], 'tempDewSD')
load([eraFolder sprintf('Y%dD%d-eEraSD.mat',year,day)], 'eEraSD'); % took 6 seconds to load
load([eraFolder sprintf('Y%dD%d-pEraSD.mat',year,day)], 'pEraSD');
Using the Profiler, I could see that the first two variables took 5420 seconds to load over 365 calls, whereas the last two variables took 6 and 4 seconds respectively over 365 calls. When I swap the order in which the variables are loaded, e.g. eEraSD before tempSD, it is still the first two loads that take more time.
When using tic/toc to track the time spent on loading, it appears that the time to load the first or second variable increases exponentially with the number of calls (with the last calls taking 50 seconds each). For the third and fourth variables, the loading time stays around 0.02-0.04 seconds per file, more or less independent of how far into the for loop I have gone. See the figures below.
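For reference, a sketch of the kind of timing loop used (it assumes eraFolder and year are defined as in the code above; the loadTimes array is mine):
loadTimes = nan(365,4);
for day = 1:365
    tic; load([eraFolder sprintf('Y%dD%d-tempSD.mat',year,day)], 'tempSD');       loadTimes(day,1) = toc;
    tic; load([eraFolder sprintf('Y%dD%d-tempDewSD.mat',year,day)], 'tempDewSD'); loadTimes(day,2) = toc;
    tic; load([eraFolder sprintf('Y%dD%d-eEraSD.mat',year,day)], 'eEraSD');       loadTimes(day,3) = toc;
    tic; load([eraFolder sprintf('Y%dD%d-pEraSD.mat',year,day)], 'pEraSD');       loadTimes(day,4) = toc;
end
plot(loadTimes) % columns 1-2 grow with day, columns 3-4 stay roughly flat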
When using importdata instead of load, the first line takes about 8000 seconds over 365 calls (with the loading time increasing exponentially, as shown for T in the second figure). The other lines then take about 10 seconds over 365 calls.
I can't understand why it looks this way and what I can do to decrease the loading time. Would greatly appreciate any idea of a possible solution for this.
I suppose your data sets are in the same directory (over the network or local) and have the same attributes, e.g. access properties and so on.
Then the only option left is the characteristics of the variables stored in those MAT-files. Can you check how large those variables are, e.g. by loading a sample one? This will narrow the problem down.
Hope that helps.
FS
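A small sketch of how to inspect the stored variable sizes without loading them (the file-name pattern follows the question; checking day 1 is arbitrary):
info = whos('-file', [eraFolder sprintf('Y%dD%d-tempSD.mat',year,1)]);
for v = 1:numel(info)
    fprintf('%-12s %-8s %d bytes\n', info(v).name, info(v).class, info(v).bytes);
end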
Thanks for your help. I finally found out what caused the problem. In a 'for' loop later in the script, I saved other data to a folder I called temp. After renaming that folder to something else (e.g. temporary), the data loading problem disappeared.
(Doesn't matter so much now that the practical problem is solved, but I can't really say I understand why there was this peculiar relationship between the later 'save' call and this 'importdata' or 'load' call.)
Please see new question about the temp folder