Matlab Segmentation Violation and Memory Assertion Failure

I am running multiple Matlab jobs in parallel on a Sun Grid Engine cluster running Matlab 2016b. On my personal MacBook I am running Matlab 2016a. The script does some MRI image processing, where each job uses a different set of parameters so that I can do parameter optimization for my image-processing routine.
However, about half of the jobs crash, either due to segmentation violations, malloc.c memory assertion failures ('You may have modified memory not owned by you.'), or errors from HDF5-DIAG followed by a segmentation violation.
Some observations:
- The errors do not always occur in the same jobs or in the same functions, but the crashes occur in several groups of jobs, where the jobs within one group crash within one minute of one another.
- I am not using dynamic arrays anymore but preallocate my arrays. If the arrays turn out to be too small, I extend them with, for example, cat(2, array, zeros(1, 2000)).
- The jobs partly use the same computations, so they can share data. I do this by first checking whether the data has already been generated by another job. If so, I try to load it using a while loop with a maximum number of attempts and pauses of 1 second (loading might fail while another job is still writing the file; waiting a bit and retrying may succeed). If loading fails after the maximum number of attempts, or if the data does not exist yet, this job performs the required computations and tries to save the data. If the data was saved by another job in the meantime, this job does not save it again.
- I am not using any C/C++ or MEX files.
- I have tested a subset of the jobs on my own laptop with Matlab 2016a and on a Linux computer with Matlab 2016b; those worked fine. But the problem occurs only after a few hundred of the total 500 iterations, and due to time constraints I ran only around 20 iterations on my own computer rather than the full simulation.
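The load-with-retry scheme described above can be sketched as follows (a minimal sketch; the file name, variable name, and expensiveComputation() are illustrative placeholders, not from the original jobs):

```matlab
% Hedged sketch of the data-sharing scheme; names are illustrative.
sharedFile = 'sharedData.mat';
maxAttempts = 10;
data = [];
if exist(sharedFile, 'file')
    for attempt = 1:maxAttempts
        try
            s = load(sharedFile);   % may fail while another job is writing
            data = s.data;
            break;
        catch
            pause(1);               % wait a bit and retry
        end
    end
end
if isempty(data)
    data = expensiveComputation();  % illustrative placeholder
    if ~exist(sharedFile, 'file')   % another job may have saved meanwhile
        save(sharedFile, 'data');
    end
end
```

Note that the exist/save check at the end is not atomic, so two jobs can still write the same file concurrently; a scheme like this can produce partially written files.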

Related

Running multiple instances of Google OR Solver (CBC) results in no solution found (C++)

I am using the Google OR tools (CBC) for solving a Mixed Integer Programming problem in C++.
I am following the sample code shown on the Google OR website: https://developers.google.com/optimization/mip/integer_opt
The only difference is that I have a bunch of threads (pthreads) / a thread pool trying to call the solver (CBC) simultaneously. All of the threads have thread-local data, and hence are calling the solver on their local data simultaneously. This also means that I am ensuring that the constraints, MPSolver, etc. are all thread-local (none of them are global).
Problem:
If 'n' threads call the solver simultaneously, I see that the solver reports "No Solution Found" for all the thread-local datasets. However, if the whole process is sequential, i.e. the different datasets are solved one after another by limiting the number of threads to 1, then I get a perfect optimal solution for each and every dataset.
Now, this happens only when a time limit has been set. The time limit is set to 5 s using the API solver.SetTimeLimit(). I can't avoid setting the time limit because there can be cases where it takes a huge amount of time to reach the optimal solution. Different threads can have different datasets which vary in size (number of constraints and number of variables).
'n' varies from 2 to 32
Note:
Just to re-clarify, I am trying to invoke 'n' solver calls simultaneously using 'n' threads (each thread calls its own solver.Solve()).
I am not trying to distribute the work of a single solver among 'n' threads.

Parallel computing data extraction from a SQL database in Matlab

In my current setup I have a for loop in which I extract different types of data from a SQL database hosted on Amazon EC2. This extraction is done in the function extractData(variableName). After that, the data gets parsed and stored as a .mat file by parsestoreData(data):
variables = {'A','B','C','D','E'};
for i = 1:length(variables)
    data = extractData(variables{i});
    parsestoreData(data);
end
I would like to parallelize this extraction and parsing of the data to speed up the process. I assume I could do this by using a parfor instead of for in the above example.
However, I am worried that the extraction will not be improved, as the SQL database will slow down when multiple requests are made to the same database.
I am therefore wondering whether Matlab can handle this issue in a smart way, in terms of parallelization?
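For reference, the parfor version of the loop above would look like this (a sketch, assuming a parallel pool is available and that parsestoreData writes each variable to its own file):

```matlab
% Each iteration is independent, so parfor can distribute them
% across the workers of the parallel pool.
variables = {'A','B','C','D','E'};
parfor i = 1:numel(variables)
    data = extractData(variables{i});  % each worker queries the database
    parsestoreData(data);
end
```

Whether this actually helps depends on the database's behavior under concurrent requests, which is exactly the concern raised above.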
The workers in a parallel pool running parfor are basically full MATLAB processes without a UI, and they default to running in "single computational thread" mode. I'm not sure whether parfor will benefit you in this case - the parfor loop simply arranges for the MATLAB workers to execute the iterations of your loop in parallel. You can estimate for yourself how well your problem will parallelise by launching multiple full desktop MATLABs and setting them off running your problem simultaneously. I would run something like this:
maxNumCompThreads(1);
while true
    t = tic();
    data = extractData(...);
    parsestoreData(data);
    toc(t)
end
and then check how the times reported by toc vary as the number of MATLAB clients varies. If the times remain constant, you could reasonably expect parfor to give you benefit (because it means the body can be parallelised effectively). If, however, the times increase significantly as you run more MATLAB clients, then it's almost certain that parfor would experience the same (relative) slow-down.

Matlab script node in LabVIEW with different timing

I have a DAQ for temperature measurement. I sample continuously, and during acquisition I calculate the temperature difference per minute (cooling rate: CR). These CR and temperature values are fed into a Matlab script that runs a physical model (predicting the temperature drop for the next 30 s). Then I record and compare the predicted and experimental values in LabVIEW.
What I am trying to do: the Matlab model executes every 30 s and sends out its predictions as outputs from the Matlab script. One of these outputs helps me change the air blower motor speed until the next Matlab run (which eventually affects the temperature drop for the next 30 s as well, so it becomes a closed loop). After 30 s, while the main process is still running, the CR and temperature values are sent to the Matlab model again, and so on.
I have a case structure around this Matlab script, and inside the case structure I applied an Elapsed Time function to control the timing for the Matlab script, but this is not working.
Yes. Short answer: I believe (one of) the reasons the program behaves weirdly when the timing changes is several race conditions present in the code.
The part of the diagram presented shows several big problems with the code:
Local variables lead to race conditions. Use dataflow. E.g. you are writing to the Tinitial local variable and reading from the Tinitial local variable in a chunk of code with no data dependencies, so it is not known whether the read or the write will happen first. It may not manifest itself badly with small delays, while big delays may be an issue. Solution: rewrite your program along the lines of the following example:
From Bad: [screenshot omitted]
To Good: [screenshot omitted]
(never mind the broken wires)
The Matlab script node executes in the main UI execution system. If it executes for a long time, it may freeze indicators/controls as well as the execution of other pieces of code. Change the execution system of the other VIs in your program (say, to "other 1") and see if the situation improves.

Faster way to run a Simulink simulation repeatedly a large number of times

I want to run a simulation which includes SimEvents blocks (thus only the Normal option is available for the sim run) a large number of times, at least 1000. When I use sim it compiles the model every time, and I wonder if there is any other solution which just runs the simulation repeatedly in a faster way. I disabled the Rebuild option in the Configuration Parameters, and that does make it faster, but it still takes ages to run around 100 times.
And a single simulation is not long at all.
Thank you!
It's difficult to say why the model compiles every time without actually seeing the model and what's inside it. However, the Parallel Computing Toolbox provides you with the ability to distribute the iterations of your model across several cores, or even several machines (with the MATLAB Distributed Computing Server). See Run Parallel Simulations in the documentation for more details.
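Another option worth trying (my suggestion, not from the answer above) is Fast Restart, which keeps the model compiled between runs so repeated calls to sim skip recompilation; 'myModel' is a placeholder name:

```matlab
% Hedged sketch: Fast Restart keeps the model compiled between runs.
% 'myModel' is a placeholder for the actual model name.
load_system('myModel');
set_param('myModel', 'FastRestart', 'on');
for k = 1:1000
    simOut = sim('myModel');   % reuses the compiled model
    % ... collect results from simOut here ...
end
set_param('myModel', 'FastRestart', 'off');
```

Fast Restart restricts what can change between runs (no structural model changes), so whether it applies here depends on what varies across the 1000 simulations.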

Spawn multiple copies of matlab on the same machine

I am facing a huge problem. I built a complex C application with embedded Matlab functions that I call using the Matlab engine (engOpen() and such ...). The following happens:
I spawn multiple instances of this application on a machine, one for each core
However! ... The application then slows down to a halt. In fact, on my 16-core machine, the application slows down by approximately a factor of 16.
Now I realized this is because there is only a single Matlab engine started per machine, and all my 16 instances share the same copy of Matlab!
I tried to replicate this with the Matlab GUI and it's the same problem: I run a program in the GUI that takes 14 seconds; when I then run it in two GUIs at the same time, it takes 28 seconds.
This is a huge problem for me, because I will miss my deadline if I have to reprogram my entire C application without Matlab. I know that Matlab has commands for parallel programming, but my Matlab calls are embedded in the C application and I want to run multiple instances of the C application. Again, I cannot refactor my entire C application because I will miss the deadline.
Can anyone please let me know if there is a solution for this (e.g. really starting multiple Matlab processes on the same machine)? I am willing to pay for extra licenses. I currently have fully licensed Matlab installed on all machines.
Thank you so so much!
EDIT
Thank you Ben Voigt for your help. I found that a single instance of Matlab already uses multiple cores: running one instance shows full utilization of 4 cores, and if I run two copies of Matlab, I get full utilization of 8 cores. Hence it is actually running in parallel. However, even though 2 instances seem to take up double the processing power, I still get a 2× slowdown. Hence, 2 instances seem to get twice the result with 4× the total compute power. Why could that be?
Your slowdown is not caused by stuffing all N instances into a single MATLAB instance on a single core, but by the fact that there are no longer 16 cores at the disposal of each instance. Many MATLAB vector operations use parallel computation even without explicit parallel constructs, so more than one core per instance is needed for optimal efficiency.
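One way to test this explanation (my suggestion, not part of the original answer) is to cap each instance's implicit multithreading and see whether the per-instance slowdown disappears:

```matlab
% Hedged sketch: limit this MATLAB instance to one computational
% thread, so that 16 instances do not oversubscribe the 16 cores.
% The same effect can be had by starting MATLAB with the
% -singleCompThread command-line flag.
maxNumCompThreads(1);
```

If the instances then scale linearly, the contention was indeed between each instance's implicit worker threads rather than inside a shared engine.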
MATLAB libraries are not thread-safe. If you create multithreaded applications, make sure only one thread accesses the engine application.
I think the Matlab engine is the wrong technique. For Windows platforms, you can try using the COM automation server, which has the .Single option that starts one Matlab instance for each COM client you open.
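A minimal sketch of the COM route (Windows only; shown here from MATLAB itself purely for illustration, since the original client is a C application):

```matlab
% Hedged sketch: the "Matlab.Application.Single" ProgID starts a
% dedicated MATLAB process per COM client, so clients do not share
% one engine.
h = actxserver('Matlab.Application.Single');
out = h.Execute('disp(2 + 2)');  % runs in the dedicated instance
disp(out)
h.Quit;
```

From C/C++, the same ProgID is used with the usual COM activation calls instead of actxserver.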
Alternatives are:
Generate C++ code for the functions.
Create a .NET library (MATLAB Builder NE).
Run Matlab via the command line.