MATLAB used up all my disk space! How can I get it back?

I left MATLAB running on a simple ode45 + plot, and when I came back I saw that the 5 GB of free space I had on my C: drive was gone. MATLAB had stopped due to "no memory".
Can someone please tell me what happened and how I can get my space back?
Thank you.

You can visually inspect hard disk usage and find the folders and files that take up a lot of space with a tool such as TreeSize Free.
P.S. You can also try clearing temporary folders, either through the built-in Disk Cleanup tool or with other tools such as CCleaner.
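For example, the built-in Disk Cleanup tool can be launched for a specific drive from a command prompt (C: here, matching the question):
cleanmgr /d C: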

MATLAB is one of those applications that ships a whole world of computing science when you only want to work on one tiny island of it, and its Help folder alone is huge. Anyway, here are some things you can do to make it slimmer on disk:
1. Install only the packages you need.
2. Use JPEGMini to compress the JPEG collection in the huge help folder.
3. Use Pngyu to compress the huge collection of PNG files down to 8-bit depth.
Steps 2 and 3 will get you back a gigabyte, if not more.
4. Use NTFS compression on the MATLAB folder (see the command sketch below). That will get you back another 2 gigabytes.
Steps 2 and 3 must be done with admin privileges, and the drag and drop of the folder onto those tools must be done from a file manager that also runs with admin privileges; you can use Explorer++ as a Windows File Explorer alternative.
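For the NTFS compression step, a one-liner from an elevated command prompt like the following compresses the folder in place (the install path is just an example; adjust it to your MATLAB version):
compact /c /s:"C:\Program Files\MATLAB\R2014a" /i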

Related

CDT uses too much memory

I have an AUTOSAR project (1.1k sources) which I want to index using the C/C++ Indexer plugin in Eclipse Oxygen (4.7.3). After I got an out-of-heap-space error with -xmx4g, I wanted to see how much memory it really needs, so I configured -xmx10g, yet it wasn't enough.
Taking a snapshot with jvisualvm.exe from JDK 1.8, I see 7 GB of char[] objects kept in memory.
After about 10 minutes of running, the indexing hadn't gotten past the first of the 1.1k files to analyze.
What do I have to do to get such a problem fixed?
Or where should I look to find the root cause?
The best way to get such a problem fixed is to reduce your project to a minimal set of files that reproduce the problem, and then file a CDT bug with the files attached.
The reduction can be done using binary search: delete half the files in your project and see if the problem persists. If it does, delete half of the remaining files; if it doesn't, restore them and delete the other half instead, and so on. (It helps to consider dependency order when choosing which files to delete, i.e. avoid deleting a file before deleting the files that depend on it.) When you only have a few files left, you can perform the same binary search on their contents. Ideally you arrive at a minimal reproducing test case of maybe 100-200 lines spread over 1-3 files, at which point you can rename identifiers to be generic and post the code.
I would suggest testing with the latest release (CDT 9.5.2) before doing this, to make sure you're not running into an issue that has already been fixed.
Are you sure -xmx is accepted? Or should it rather be -Xmx?
I usually use the following in eclipse.ini:
-Xms512m
-Xmx4096m
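Note that these flags are case-sensitive (it is -Xmx, not -xmx), and in eclipse.ini they have to appear after the -vmargs line, roughly like this:
-vmargs
-Xms512m
-Xmx4096m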
1.1k sources doesn't sound like much (we have far more), but on the other hand, some generated files can eat up a lot of memory and performance, e.g. the Rte.c and Rte_*.h files (our Rte.c is about 100k LOC). Together with CDT's AST-based syntax and semantic highlighting, that eats up memory and performance.

Matlab freezes at certain files when dicomread is used

This problem has been bothering me for a long time and I hope that someone is able to help me. I've searched the internet extensively, but it seems that I'm the only one with this problem.
Occasionally, when I'm loading multiple DICOM files into MATLAB, it freezes at a certain file. I'm unable to terminate the script and have to force MATLAB to shut down. I do not know if this is a bug, but I hope there is a workaround, because dicomread does not return an error but freezes MATLAB.
More information:
It happens with multiple datasets from different organisations
It happens on multiple computers
It happens with MATLAB versions 2013b/2014a/2014b
I hope that somebody can help me to fix this or find a workaround.
I had the same problem, and I am using MATLAB 2014. The same code runs fine on MATLAB 2012.
I solved the issue by copying the DICOM library from MATLAB 2012 to MATLAB 2014. If you have a Windows machine, the library in the 2012 version is typically installed at
C:\Program Files\MATLAB\R2012a\toolbox\images\iptformats
The 2014 version is at
C:\Program Files\MATLAB\R2014a\toolbox\images\iptformats
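A rough MATLAB sketch of that copy (run MATLAB as administrator so it can write under Program Files; the paths are just the defaults quoted above, and keeping a backup of the newer folder is strongly advised):
% Back up the R2014a DICOM library, then overwrite it with the R2012a one.
oldLib = 'C:\Program Files\MATLAB\R2012a\toolbox\images\iptformats';
newLib = 'C:\Program Files\MATLAB\R2014a\toolbox\images\iptformats';
copyfile(newLib, [newLib '_backup']);   % keep the original R2014a files
copyfile(oldLib, newLib, 'f');          % 'f' also overwrites read-only files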
Identical problem here with 3D CT scans. I had hundreds of scans stored as DICOM folders (one file per slice) that I converted to DICOM volumes (one file per entire volume) with compression. Six of them would trigger a segmentation fault in the dicomparse call inside dicomread, even though I had no problem reading them in other software tools.
The easiest workaround for me was to re-export these DICOMs as uncompressed DICOM volumes with a different software tool.

Archiving successive beta versions: how to save hard disk space?

I archive successive versions of an in-progress work :
MySoftware-v1.01beta.rar [2 GB]
MySoftware-v1.02beta.rar [2 GB]
MySoftware-v1.03beta.rar [2 GB]
MySoftware-v1.04beta.rar [2 GB]
etc.
Lots of files are modified, so it's not possible to back up only the modified files: most of the files are modified each time.
How can I make a .rar file that only stores the "difference"? Should I use something like "patch" or "diff"? (I've never used them.) There are lots of "difference" tools, OK, but the result won't be a .rar, it will only be a "difference file": each time I want to re-open such an archive, I'll have to "de-diff" it, and only then will I have a .rar again.
I'm on Windows, and if possible I'd like to use WinRAR or a command-line tool (it would be great if no third-party software were needed).
Thanks a lot in advance!
You say 90% of your product is .wav files. Since a diff of two .wav files that differ is likely to produce huge differences, this is not likely to save you any space. Nor are .wav files really compressible, so zip or rar likely doesn't help much either.
However, if, like most of us programmers, you derive the next version of the product from the previous one by mostly retaining files unchanged (whether source or .wav files), then what you really want to do is simply store, for each version, the files that changed. This is called "de-duplication" in the backup/compression world.
You can organize a complicated scheme yourself to do this (e.g. your self-suggested "do this with WinRAR"). But if you use a decent source control system (SVN or Git would be fine), this will happen automatically as you check in changed files (and don't re-check-in unchanged ones). These tools work by keeping track of "differences" between versions; you can tell them to track text ("diff") style differences or simply store the entire file.
Also, since your individual versions occupy 2 GB, I'd go spend $100 on a 2 or 4 terabyte (external) drive. That should last you, in the worst case, through some 1000 iterations. (SVN/Git will likely extend this a lot further.)
You should really be using a source control system. A popular one is called 'git'. There are many others, each with their own strengths and weaknesses and the debate about which is 'best' is long and tedious.
Source control systems take care of storing and managing revisions of your files. The actual methods vary, but as a programmer who uses version control you 'check in' files for storage and version control, 'tag' them with revision numbers and then 'check out' files for modifying.
If you've ever downloaded source off the Internet using 'svn' or 'cvs', that's the type of thing I mean.
The source control system usually uses some sort of difference system to only store differences between modified files. Its purpose is to save you from having to even think about copying and backing up files - all you have to do is ensure your 'repository' is backed up correctly.
Also, as an added advantage you can make changes to source files and always have backups in case your changes need reverting. So suppose you want to try out a new file handling system you can use the source control system to create a testing (or whatever you want to call it) 'branch' and do all your changes in there without damaging a working copy of your software. If the changes are good you can then 'merge' the changes into the non testing branch of your repository.
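As a rough sketch of that check-in workflow (folder and tag names are placeholders), you put the project folder under Git once, then commit and tag each beta; files that did not change between betas, .wav included, are stored only once in the repository:
cd MySoftware
git init
git add -A
git commit -m "v1.01beta"
git tag v1.01beta
rem ... edit files for the next beta ...
git add -A
git commit -m "v1.02beta"
git tag v1.02beta
If you still want a standalone archive of a given version, git archive --format=zip -o MySoftware-v1.02beta.zip v1.02beta produces one from any tag.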

MATLAB slowing down on long debugging sessions

I have noticed that MATLAB (R2011b on Windows 7, 64 bit) tends to slow down if I am in debugging mode for a long period of time (e.g. 3 hours). I don't recall this happening on previous versions of MATLAB.
The slow down is small, but significant enough to have an impact on my productivity (sometimes MATLAB needs to wait for up to 1 sec before I can type on the command line or on the editor).
I usually spend hours in debug mode (e.g. after stopping at a keyboard statement), coding full projects this way. I find working in debug mode convenient for growing my code organically while being able to inspect any part of it at execution time.
The odd thing is my machine has 16 GB of RAM and the total size of all workspaces while in debugging mode is usually less than 4 GB. I don't have any other large process running in the background, and my system reports ~8GB of free RAM.
Also, unfortunately, MATLAB does not let me call pack from debug mode; it complains with:
Warning: PACK can only be used from the MATLAB command line.
I have reproduced this behavior after restarting MATLAB, after rebooting my system, and on different days. With this, my questions are:
Has anybody else noticed this? Is there anything I could do to prevent this slowdown without exiting debugging mode?
Are there any technical notes or statements from Mathworks addressing this issue?
In case it matters, my code is on a network drive, so I added the following to my startup.m file, which should alleviate any impact on performance resulting from that:
system_dependent('RemoteCWDPolicy', 'None');
system_dependent('RemotePathPolicy', 'None');
system_dependent('DirChangeHandleWarn','Never');
I have experienced some similar issues. The problem ended up being that MathWorks changed how MATLAB caches files. For some users, it now stores data in the TMP folder as defined by the environment variables. This folder was being scanned by antivirus software and causing a lot of performance problems. Of course, IT wouldn't let us exclude the TMP folder from scans, so we added a line to our startup script that changes the TMP environment variable to some other location within an excluded folder.
You don't have to worry about changing the variable back or messing up other programs. When applications launch, they copy the environment variables into their own local instance of them. Any changes made to them only change the local copy of those variables, not the system copy.
Here is the function you will need.
setenv('TEMP', 'C:\TEMP');
I'm not sure if it was TMP or TEMP. Check your environment variables to be sure.
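A minimal startup.m sketch of that change (the folder name is only an example of an antivirus-excluded location; set whichever of TEMP/TMP your setup actually uses):
% Point the temp environment variables at an antivirus-excluded folder.
avSafeTmp = 'C:\MatlabTemp';        % example location excluded from AV scans
if ~exist(avSafeTmp, 'dir')
    mkdir(avSafeTmp);
end
setenv('TEMP', avSafeTmp);          % affects only this MATLAB process
setenv('TMP',  avSafeTmp);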
I am using MATLAB R2011 on Linux 10 and Windows 7 (32-bit).
I experienced MATLAB slowing down while printing simple variables in the command window.
It turned out that there was one .m file loaded in my Editor.
It was a big file with 10,000 lines of simple data that should have been saved as a .mat file. When I closed this file, the Editor was back to its normal speed.

Command line CSV viewer with column-alignment for LARGE files

I would like to view my CSV files in a column-aligned format from the command line, with something like less, but my CSV files are sometimes gigabytes in size, and I'm using a little computer (netbook, 1 GB RAM, 8 GB HD, 1 GHz processor), so I don't want to waste a lot of memory or processing power viewing the file.
I mention that I'd like to use something like less because I would like to be able to navigate around within the file.
cat FILE | column -s, -t | less is one thought, but cat is still going to try to print the whole file and I'm not sure how much buffering the pipes will use (if any) or what sort of caching less employs.
This question is similar to this other question, but I'm specifically interested in viewing large files using minimal resources preferably already on the machine. I don't presently use VI or EMACS, and think they'd both be overkill here. VI, for instance, would be a 27MB install for a utility acting merely as a viewer.
First of all, less can open oversized files. Second, both vim (which I use with the Largefile plugin and with files over 8 GB) and emacs can do it.
But... most of the time, viewing a big file in an 80x40 (or slightly bigger) terminal is useless... so you should filter it with something like (f)grep, or process it with awk; a sketch follows below. If you want only the start or end, there are head and tail.
HTH
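For example, something along these lines keeps memory use small, because only the slice you ask for is formatted (file name is a placeholder):
head -n 1000 big.csv | column -s, -t | less -S
awk 'NR > 6000 { exit } NR >= 5000' big.csv | column -s, -t | less -S
The first command aligns and pages the first 1000 rows; the second pulls out an arbitrary slice (rows 5000-6000 here) and stops reading the file as soon as it has them. less -S chops long lines so you can scroll horizontally instead of wrapping.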
Check the tail / head commands.
Or even better, download the VIM source and compile it. That should be easy enough. The version 5.8 source is 1 MB before decompressing (4 MB after). Enjoy.