How to prevent MATLAB from freezing?

Is there a way to limit the amount of time an evaluation is allowed to run? Or limit the amount of memory that MatLab is allowed to take up so it doesn't freeze my laptop?

Let's answer your questions one at a time:
Question #1 - Can I limit the amount of time MATLAB takes to execute a script?
As far as I know, this is not possible. To do it you would need a multi-threaded environment where one thread does the actual job while another keeps an eye on the timer, and AFAIK MATLAB does not support that. The only way to stop your script once it is running is to hit Ctrl + C / Cmd + C. Depending on what is actually executing (for example a MEX file, a LAPACK routine, or just a plain MATLAB script), pressing it once may be enough... or you may have to mash the sequence like a maniac.
(The "maniac" bit is a nod to the song Maniac from the Flashdance soundtrack.)
See this post for more details: How can I interrupt MATLAB when it gets really really busy?
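That said, if you have access to the Parallel Computing Toolbox, one way to approximate the "one thread works, another watches the timer" pattern described above is to push the job onto a pool worker with parfeval and cancel it if it overruns. Treat the following as a sketch rather than a guaranteed recipe; myLongComputation, its input, and the 60-second budget are placeholders:
% Sketch (requires Parallel Computing Toolbox): run the job on a pool worker
% and cancel it if it exceeds a time budget. Names below are placeholders.
f = parfeval(@myLongComputation, 1, someInput);
finished = wait(f, 'finished', 60);      % wait at most 60 seconds
if ~finished
    cancel(f);                           % stop the evaluation on the worker
    warning('Evaluation timed out and was cancelled.');
else
    result = fetchOutputs(f);
end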
Question #2 - Can we limit the amount of memory that MATLAB uses?
Yes, you can. From what I have seen in your posts, you're using Windows. You can control this by changing the size of the paging file (virtual memory) on your computer. Specifically, instead of allowing it to grow dynamically, set it to a fixed size; once MATLAB exhausts that, it will give you an out-of-memory error rather than freezing your computer.
See this post from MathWorks forums for more insight:
http://www.mathworks.com/matlabcentral/answers/12695-put-a-limit-on-memory-matlab-uses
Also see this guide from MathWorks on how to handle out-of-memory errors:
http://www.mathworks.com/help/matlab/matlab_prog/resolving-out-of-memory-errors.html
Finally, take a look at this link on how to change the paging file (virtual memory) size in Windows:
http://windows.microsoft.com/en-ca/windows/change-virtual-memory-size#1TC=windows-7
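As a related aid, MATLAB's Windows-only memory function lets you check from within a script how much memory is still available before attempting a large allocation. A minimal sketch (the array size is made up):
% Windows-only: ask MATLAB how much memory it can still use.
user = memory;
bytesNeeded = 8 * 1e8;                        % e.g. a 1e8-element double array
if bytesNeeded > user.MaxPossibleArrayBytes
    error('Not enough memory for this array; aborting before the machine starts thrashing.');
end
A = zeros(1e8, 1);                            % should now be safe to allocate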

Related

MATLAB: Does the execution of addpath/rmpath/savepath in one MATLAB instance affect other instances?

Does the execution of addpath/rmpath/savepath in one MATLAB instance affect other instances?
Motivation: Imagine that you are developing a MATLAB package, which provides a group of functions to the users. You have multiple versions of this package being developed on a single laptop. You would like to test these different versions in multiple instances of MATLAB:
You open one MATLAB window, type run_test(DIRECTORY_OF_PACKAGE_VERSION1), and hit enter;
While the first test is running, you open another MATLAB window, type run_test(DIRECTORY_OF_PACKAGE_VERSION2), and hit enter.
See the pseudo-code below for a better idea about the tests.
No code or data is shared between different tests --- except for those embedded in MATLAB, as the tests are running on the same laptop, using the same installation of MATLAB. Below is a piece of pseudo-code for such a scenario.
% MATLAB instance 1
run_test(DIRECTORY_OF_PACKAGE_VERSION1);
% MATLAB instance 2
run_test(DIRECTORY_OF_PACKAGE_VERSION2);
% Code for the tests
function run_test(package_directory)
setup_package(package_directory);
RUN EXPERIMENTS TO TEST THE FUNCTIONS PROVIDED BY THE PACKAGE;
uninstall_package(package_directory);
end
% This is the setup of the package that you are developing.
% It should be called as a black box in the tests.
function setup_package(package_directory)
addpath(PATH_TO_THE_FUNCTIONS_PROVIDED_BY_THE_PACKAGE);
% Make the package available in subsequent MATLAB sessions
savepath;
end
% The function that uninstalls the package: remove the paths
% added by `setup_package` and delete the files etc.
function uninstall_package(package_directory)
rmpath(PATH_TO_THE_FUNCTIONS_PROVIDED_BY_THE_PACKAGE);
savepath;
end
You want to make sure the following.
The tests do not interfere with each other;
Each test is calling functions from the correct version of the package.
Hence here come our questions.
Questions:
1. Does the execution of addpath, rmpath, and savepath in one MATLAB instance affect the other instance, sooner or later?
2. More generally, what kind of commands executed in one MATLAB instance can affect the other instance?
3. What if I am running only one instance of MATLAB, but invoke a parfor loop with two loops running in parallel? Does the execution of addpath/rmpath/savepath in one loop affect the other loop, sooner or later? In general, what kind of commands executed in one parallel loop can affect the other loop? (As pointed out by @Edric, this can be complicated; so let us not worry about it. Thank you, @Edric.)
Thank you very much for any comments and insights. It would be much appreciated if you could direct me to relevant sections in the official documentation of MATLAB --- I did some searching in the documentation, but have not found an answer to my question.
BTW, in case you find that the test described in the pseudo code is conducted in a wrong/bad manner, I will be very grateful if you could recommend a better way of doing it.
The documentation page for the MATLAB Search Path specifies at the bottom:
When you change the search path, MATLAB uses it in the current session, but does not update pathdef.m. To use the modified search path in the current and future sessions, save the changes using savepath or the Save button in the Set Path dialog box. This updates pathdef.m.
So, standard MATLAB sessions are "isolated" in terms of their MATLAB Search Path unless you use savepath. After a call to savepath, new MATLAB sessions will read the updated pathdef.m on startup.
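In other words, if the tests only need session-local path changes, one fix (a sketch based on the asker's pseudo-code, with savepath simply dropped and a hypothetical subfolder name) is:
function setup_package(package_directory)
% addpath only affects the current session's search path;
% without savepath, other MATLAB instances never see this change.
addpath(fullfile(package_directory, 'functions'));   % 'functions' is a placeholder subfolder
end
function uninstall_package(package_directory)
rmpath(fullfile(package_directory, 'functions'));
end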
The situation with a parallel pool is slightly more complex. There are a couple of things that affect this. First is the parameter AutoAddClientPath that you can specify for the parpool command. When true, an attempt is made to reflect the desktop MATLAB's path on the workers. (This might not work if the workers cannot access the same folders).
When a parallel pool is running, any changes to the path on the desktop MATLAB client are sent to the workers, so they can attempt to add or remove path entries. Parallel pool workers calling addpath or rmpath do so in isolation. (I'm afraid I can't find a documentation reference for this).
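For instance, if you would rather the workers not mirror the client's path at all, you can turn that behaviour off when the pool is created; the pool size and example path below are arbitrary:
% Start a pool whose workers do not inherit the client's search path.
pool = parpool('local', 2, 'AutoAddClientPath', false);
% Any path the workers do need can then be added explicitly on all of them:
pctRunOnAll addpath('/path/to/package/functions');   % placeholder path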

OS - Difference between NX bit and Valid-Invalid Bit

What I understand:
Setting the NX bit high for a page will mean that whenever some program tries to execute the code in that page, it will cause a segmentation fault.
There is also a Valid-Invalid bit, and accessing a page marked Invalid will also cause a segmentation fault.
So the question is:
Why is the NX bit not redundant when you already have a valid-invalid bit? What does it mean to mark a page both Valid and NX?
First, I am going to change the terminology slightly. Instead of using terms like valid/invalid, I am going to replace them with present/not present.
A page that is marked as present can be accessed in some fashion (read, write or execute). A page that is marked as not present is not directly accessible (if at all). If it can be accessed, it must first be loaded from some type of backing store (disk/flash/...) into memory.
The NX bit only controls whether the page of memory has execute permissions. Why is controlling the execute permissions important? It helps in locking down the system to help prevent the execution of arbitrary code.
So, a page that is marked as both NX and not present is one that does not have execute permissions, but may need to be loaded from backing store.
If you do not have an NX bit (especially on your data pages), then if some clever hacker figures out how to jump into code placed in a data page, particularly while in supervisor/kernel mode, it becomes much easier for them to execute arbitrary code on your system.
Hope this helps.

Differences in error handling between the Parallel Computing Toolbox and regular MATLAB

On a recent post I was told that there are differences between the way the Parallel Computing Toolbox handles warnings and the way regular MATLAB does it. I felt that the poster had gone some way towards answering my question, so I marked it as answered. But I still have some additional questions (hope this doesn't constitute double posting).
Error only triggers when I don't use parfor?
I just wondered if someone can explain to me what these differences are?
Also what is meant by parfor being sandboxed?
Is it still possible to use try/catch type structures with the parallel toolbox, or to achieve the same thing using some other mechanism?
To be clear: when I run using parfor, warning messages are still produced telling me the matrix is ill-conditioned, but it doesn't seem to be picked up as an error despite my adding the lines
warnState(1) = warning('error', 'MATLAB:singularMatrix');
warnState(2) = warning('error', 'MATLAB:illConditionedMatrix');
However, when I run using a regular for loop it is picked up as an error.
So the parallel toolbox is producing the warnings correctly; it's just not translating them into errors via the code above so that they can be used in a try/catch structure.
Kind Regards
Hugh
I think the problem in your original code is that you changed the warnings to errors on the MATLAB client only. To make that change on the workers, you need to do
spmd
warnState(1) = warning('error', 'MATLAB:singularMatrix');
warnState(2) = warning('error', 'MATLAB:illConditionedMatrix');
end
There's also the pctRunOnAll function to run things everywhere.
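For example, the same promotion can be pushed to the client and every worker in one go (assuming a pool is already open):
pctRunOnAll warning('error', 'MATLAB:singularMatrix');
pctRunOnAll warning('error', 'MATLAB:illConditionedMatrix');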
Also, I don't know what the OP meant about matlabpool workers being 'sandboxed'. The differences between your MATLAB client and the workers are:
The workers always run in single threaded mode; and
The workers run with no display (although they can produce off-screen graphics and save to file)
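On the try/catch part of the question: once the warnings have been promoted on the workers, an ordinary try/catch inside the parfor body should behave as expected. A minimal sketch (the singular system is just an illustration):
spmd
    warning('error', 'MATLAB:singularMatrix');   % promote on every worker
end
results = cell(1, 10);
parfor k = 1:10
    A = magic(4);                 % magic(4) is singular, so A\b raises the promoted error
    b = ones(4, 1);
    try
        results{k} = A \ b;
    catch err
        results{k} = err.identifier;   % e.g. 'MATLAB:singularMatrix'
    end
end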

Google Spreadsheet turn off autosave via script?

I'm fairly new to using Google Docs, but I have come to really appreciate it. The scripting is pretty easy to accomplish simple tasks, but I have come to realize a potential speed issue that is a little frustrating.
I've got a sheet that I use for my business to calculate the cost of certain materials on a jobsite. It works great, but was a little tedious to clear between jobs so I wrote a simple script to clear the ranges (defined by me and referenced by name) that I needed emptied.
Once again, worked great. The only problem with it is that clearing a few ranges (seven) ends up taking about ten full seconds. I -believe- that this is because the spreadsheet is being saved after each range is cleared, which becomes time intensive.
What I'd like to do is test this theory by disabling autosave in the script, and then re-enabling it after the ranges have been cleared. I don't know if this is even possible because I haven't seen a function in the API to do it, but if it is, I'd love to know about it.
Edit: this is the function I'm using as it stands. I've tried rewriting it a couple of times to be more concise and less API call intensive, but so far I haven't had any luck in reducing the time it takes to process the calls.
function clearSheet() {
var ss = SpreadsheetApp.getActiveSpreadsheet();
var sheet = ss.getActiveSheet();
sheet.getRange("client").clear();
sheet.getRange("lm_group_1").clear({contentsOnly:true});
sheet.getRange("lm_group_2").clear({contentsOnly:true});
sheet.getRange("dr_group_1").clear({contentsOnly:true});
sheet.getRange("dr_group_2").clear({contentsOnly:true});
sheet.getRange("fr_group_1").clear({contentsOnly:true});
sheet.getRange("fr_group_2").clear({contentsOnly:true});
sheet.getRange("gr_group_1").clear({contentsOnly:true});
sheet.getRange("client_name").activate();
}
That is not possible, and probably never will be; it's just not how Google Docs works.
But depending on how you wrote your script, it's probable that all changes are already being written at once, at the end. There are some API calls that may force a flush of your writes to the spreadsheet (like trying to read after you wrote something), but we'd need to see your code to check that.
Anyway, you can always check the spreadsheet revision history to verify if it's being done at once or in multiple steps.
About the performance: Apps Script has a natural delay that is unavoidable, but it's not 10 s, so there's probably room to improve your script by using fewer API calls and preferring batch calls like setValues over setValue, and so on. But then again, we'd have to see your code to assert that and give more helpful tips.

How do I time my SML code?

Can anyone tell me how I can time my SML code?
I have implemented several different versions of the same algorithm and would like to time them, and perhaps even know the memory usage?
The Timer module is what you want. It can give you either CPU time (broken down into user, sys and GC times) or wall-clock time.
For an example of how to use it, see the Benchmark module of MyLib.
With respect to finding out how much memory your algorithms are using, you might find the profiling feature of MLton handy. Note, however, that I have never actually used this, but it states that:
you can profile your program to find out how many bytes each function allocates.