I need to read 4000 or more DICOM files. I have written the following code to read the files and store the data in a cell array so I can process them later. A single DICOM file contains a 128 x 931 block of data. But once I execute the code, it takes more than 55 minutes to complete the iteration. Can someone point out the performance issues in the following code?
% read the file information from the disk into memory
readFile = dir('d:\images\*.dcm');
for i = 1:4000
    % Read the information from the DICOM files into arrays
    data{i} = dicomread(readFile(i).name);
    info{i} = dicominfo(readFile(i).name);
    data_double{i} = double(data{1,i});            % convert 16 bit data into double
    first_chip{i} = data_double{1,i}(1:129,1:129); % extract first chip data into an array
end
You are reading 128*931*4000 pixels into memory (assuming 16-bit values, that's nearly 1 GB), converting that to doubles (4 GB) and extracting a region (129*129*4000*8 = 0.5 GB). You are keeping all three of these copies, which is a terrible amount of data! Try not keeping all that data around:
readFile = dir('d:\images\*.dcm');
first_chip = cell(size(readFile));
info = cell(size(readFile));
for ii = 1:numel(readFile)
    info{ii} = dicominfo(readFile(ii).name);
    data = dicomread(info{ii});
    data = data(1:129,1:129);      % extract first chip data
    first_chip{ii} = double(data); % convert 16 bit data into double
end
Here, I have pre-allocated the first_chip and info arrays. If you don't do this, the arrays will be re-allocated every time you add an element, causing expensive copies. I have also extracted the ROI first, then converted to double, as suggested by Rahul in his answer. Finally, I am re-using the DICOM info structure to read the file. I don't know if this makes a big difference in speed, but it saves the dicomread function some effort.
But note that this process will still take a considerable amount of time. Reading DICOM files is complex, and takes time. I suggest you read them all in once, then save the first_chip and info cell arrays into a MAT-file, which will be a lot faster to read in at a later time.
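For example, something along these lines (the cache file name is just an illustration); the '-v7.3' flag lets the file exceed 2 GB:
% Cache the extracted data so later runs can skip the slow DICOM parsing.
save('chips_cache.mat', 'first_chip', 'info', '-v7.3');

% In a later session, reload the cached cell arrays:
cached = load('chips_cache.mat');   % use cached.first_chip and cached.info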
You can run the profiler to check which part of the code is taking up most of the time. But as far as I can tell, it is simply your iteration count, and the time taken is genuine. You could try parallel computing (a parfor loop) if you have a multicore processor; that should decrease the runtime significantly, depending on the number of cores that you have.
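A minimal sketch of that idea, assuming the Parallel Computing Toolbox is installed and reusing the preallocated loop from the answer above (it also assumes the current folder is the image folder, or that full paths are built with fullfile):
readFile = dir('d:\images\*.dcm');
first_chip = cell(size(readFile));
info = cell(size(readFile));
parfor ii = 1:numel(readFile)        % iterations are spread across workers
    thisInfo = dicominfo(readFile(ii).name);
    data = dicomread(thisInfo);
    info{ii} = thisInfo;
    first_chip{ii} = double(data(1:129,1:129));
end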
One suggestion would be to extract the 'first chip data' first and then convert it to double, as the conversion process takes a significant amount of time.
I am trying to load a 4.0 GB CSV file into MATLAB. I have 40 GB of RAM. However, the table does not seem to finish loading. (Activity Monitor showed a fast increase in RAM use up to 38.64 GB and it stopped after that. The CPU is still heavily in use.)
According to Apple's force-quit menu, MATLAB has not gotten stuck. (I'd guess the absence of a "MATLAB is not responding" message signals that.)
1st Question: Why does it even take up that much RAM? I've read that MATLAB duplicates data in RAM. Can I do something in this regard?
2nd Question: Can I speed this up? Split the CSV somehow?
3rd Question: Can I speed up my computer? It is taking forever while using only 30% of CPU capacity... Why does it not use more? The fans are not crazy loud, so I guess it's just chilling.
Edit: It went up to 72.80 and is now decreasing...
Edit: Now back down at 55.something
There are a few concepts you should be aware of with Matlab.
Strings (character arrays) are stored as UINT16 (sort of, I can never get this right). Importantly, what this means is that every character requires 2 bytes. If you stored the entire file as one long character array, it would take up 8 GB.
Values, whether they are arrays or scalars, are stored with headers. This means that storing a string (technically a character array; strings, the ones with double quotes instead of single quotes, may be different) requires a header that is roughly 104 bytes. So something like 'test' requires roughly 112 bytes! If you can store an array of numbers, then the 104-byte overhead is negligible. If you have a cell array of scalars, then each scalar takes up 112 bytes (assuming the scalar is an 8-byte double). This might be a bit confusing, but in the end it means that if you're not careful reading a CSV file, your memory requirements can explode.
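A small illustration of that overhead (exact byte counts vary by MATLAB version and platform, so treat the numbers as approximate):
a = rand(1000,1);   % one numeric array: 1000 * 8 bytes plus a single header
c = num2cell(a);    % a cell array of scalars: every element carries its own header
whos a c            % 'c' reports roughly 14x the bytes of 'a'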
So what can you do? Tables store columns as arrays where possible. You can try readtable, although I think the underlying implementation might not be memory efficient.
For large files, MATLAB suggests using the datastore function. It will fix your memory problem, although it may be a bit slow.
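A minimal sketch of the datastore approach (the file name and chunk size are placeholders; adjust the import options for your file):
ds = datastore('bigfile.csv');   % creates a tabular text datastore
ds.ReadSize = 100000;            % number of rows to read per chunk
while hasdata(ds)
    chunk = read(ds);            % a table with up to ReadSize rows
    % ... process the chunk here, e.g. accumulate column sums ...
end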
The other option is to read the entire file into memory and to do your own custom processing. For example, assuming you don't have anything escaped (i.e. commas that are not actually delimiters), you can find all relevant delimiters by using:
%Find comma or newline
I = regexp(temp,',|\n')
Here's an example of extracting various columns. As indicated above, this has a large overhead for strings (character arrays) but is efficient for numbers.
%Fake data as an example, 3 columns with middle one numeric
temp = sprintf('asdf,1234,temp\nfred,324,chip\ncheese,12,you are always right');
I = regexp(temp,',|\n');
starts = [0 I];
ends = [I length(temp)+1];
n_columns = 3;
%extract column 2
c2 = arrayfun(@(x,y) str2double(temp(x+1:y-1)),starts(2:n_columns:end),ends(2:n_columns:end));
%extract column 1
c1 = arrayfun(@(x,y) temp(x+1:y-1),...
    starts(1:n_columns:end),ends(1:n_columns:end),'un',0);
Depending on your use case this may work or it may not. To read the file into memory you can use fileread.
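For example (the file name is a placeholder):
temp = fileread('bigfile.csv');   % returns the entire file as one char array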
In answer to question (2): it is quite straightforward to split the CSV up, assuming there are more rows than columns...
bigfile = csvread(filename);
bigLen = length(bigfile);
halfLen = uint64(bigLen/2);
csvwrite('first.csv', bigfile(1:halfLen, :));
csvwrite('second.csv', bigfile(halfLen+1:bigLen, :));
Or even doing this with SEVERAL files; it may not make it faster overall, but it would allow you to observe the process as each file is read.
I think that MATLAB itself has a limit on how much input it is allowed to take in. I'm sure you can set that in the preferences if you have a high enough version.
Check this out: http://www.mathworks.com/help/matlab/matlab_env/set-workspace-and-variable-preferences.html
I am getting some readings off an accelerometer connected to an Arduino, which is in turn connected to MATLAB through serial communication. I would like to write the readings into a text file. A 10 second reading will write around 1000 entries, which makes the text file around 1 kbyte in size.
I will be using the following code:
%%%%%// Communication %%%%%
arduino = serial('COM6','BaudRate',9600);
fopen(arduino);
fileID = fopen('Readings.txt','w');

%%%%%// Reading from Serial %%%%%
for i = 1:Samples
    scan = fscanf(arduino,'%f');
    if isfloat(scan)
        vib = [vib; scan];
        fprintf(fileID,'%0.3f\r\n',scan);
    end
end
Any suggestions on improving this code? Will this have a time or size limit? This code is to be run for 3 days.
Do not use text files, use binary files. 42718123229.123123 is 18 bytes in ASCII, but only 8 bytes as a binary double (4 as a single). Don't waste space unnecessarily. If your data is going to be used later in MATLAB, then I just suggest you save it in .mat files.
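As a rough sketch (file and variable names are illustrative, reusing the scan variable from the question; in the real loop you would fopen once, call fwrite for each sample, and fclose at the end):
% Writing the samples as binary doubles instead of formatted text:
fid = fopen('readings.bin','w');
fwrite(fid, scan, 'double');        % 8 bytes per value, no formatting cost
fclose(fid);

% Reading them back later:
fid = fopen('readings.bin','r');
vals = fread(fid, Inf, 'double');
fclose(fid);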
Do not use a single file! Choose a reasonable file size (e.g. 100 MB) and make sure that when you reach that amount of data you switch to another file. You could do this by e.g. saving one file per hour. This way you minimize the possible damage if the software crashes 2 minutes before finishing.
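A rough sketch of the one-file-per-hour idea (the loop bound Samples comes from the question; the file naming and everything else is illustrative):
currentHour = '';
for i = 1:Samples
    thisHour = datestr(now, 'yyyymmdd_HH');
    if ~strcmp(thisHour, currentHour)            % the hour changed: rotate the file
        if ~isempty(currentHour), fclose(fileID); end
        fileID = fopen(['readings_' thisHour '.txt'], 'w');
        currentHour = thisHour;
    end
    % ... read from the serial port and fprintf to fileID as in the question ...
end
fclose(fileID);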
Now, knowing the real dimensions of your problem, writing a text file is totally fine; nothing special is required to process such small data. But there is a problem with your code: you are growing the variable vib over time. That may cause bad performance, because you are not using preallocation, and it may consume a lot of memory. I strongly recommend not keeping this variable at all; if you need the data, read it back from the file afterwards.
Another thing you should consider is verification of your data. What do you do when you receive fewer samples than you expect? Include timestamps! Be aware that these timestamps are not precise, because you add them afterwards, but they allow you to identify whether just some random samples are missing (these may be interpolated afterwards) or a consecutive series of maybe 100 samples is missing.
I had CSV files of size 6 GB and I tried using the import function in MATLAB to load them, but it failed due to a memory issue. Is there a way to reduce the size of the files?
I think the number of columns is causing the problem. I have 133076 rows by 2329 columns. I had another file with the same number of rows but only 12 columns, and MATLAB could handle that. However, once the number of columns increases, the files get really big.
Ultimately, if I can read the data column-wise so that I have 2329 column vectors of length 133076, that would be great.
I am using Matlab 2014a
Numeric data are by default stored by Matlab in double precision format, which takes up 8 bytes per number. Data of size 133076 x 2329 therefore take up 2.3 GiB in memory. Do you have that much free memory? If not, reducing the file size won't help.
If the problem is not that the data themselves don't fit into memory, but is really about the process of reading such a large csv-file, then maybe using the syntax
M = csvread(filename,R1,C1,[R1 C1 R2 C2])
might help, which allows you to only read part of the data at one time. Read the data in chunks and assemble them in a (preallocated!) array.
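A minimal sketch of that chunked approach for the 133076 x 2329 file mentioned in the question (the file name and chunk size are placeholders; note that csvread row/column indices are zero-based):
nRows = 133076; nCols = 2329; chunk = 10000;
M = zeros(nRows, nCols);                   % preallocate the full array (about 2.3 GiB)
for r0 = 0:chunk:nRows-1
    r1 = min(r0 + chunk, nRows) - 1;       % last zero-based row index of this chunk
    M(r0+1:r1+1, :) = csvread('bigfile.csv', r0, 0, [r0 0 r1 nCols-1]);
end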
If you do not have enough memory, another possibility is to read chunkwise and then convert each chunk to single precision before storing it. This reduces memory consumption by a factor of two.
And finally, if you don't process the data all at once, but can implement your algorithm such that it uses only a few rows or columns at a time, that same syntax may help you to avoid having all the data in memory at the same time.
I need to write an array that is too large to fit into memory to a .mat binary file. This can be accomplished with the matfile function, which allows random access to a .mat file on disk.
Normally, the accepted advice is to preallocate arrays, because expanding them on every iteration of a loop is slow. However, when I was asking how to do this, it occurred to me that this may not be good advice when writing to disk rather than RAM.
Will the same performance hit from growing the array apply, and if so, will it be significant when compared to the time it takes to write to disk anyway?
(Assume that the whole file will be written in one session, so the risk of serious file fragmentation is low.)
Q: Will the same performance hit from growing the array apply, and if so will it be significant when compared to the time it takes to write to disk anyway?
A: Yes, performance will suffer if you significantly grow a file on disk without pre-allocating. The performance hit will be a consequence of fragmentation. As you mentioned, fragmentation is less of a risk if the file is written in one session, but will cause problems if the file grows significantly.
A related question was raised on the MathWorks website, and the accepted answer was to pre-allocate when possible.
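One way to "pre-allocate" a variable inside a MAT-file is to write its last element first, which forces the variable to be created at its full size on disk before the loop starts. A minimal sketch (sizes and the file name are illustrative):
m = matfile('bigarray.mat', 'Writable', true);
m.A(100000, 10000) = 0;              % A now exists on disk at its final size
for col = 1:10000
    m.A(:, col) = rand(100000, 1);   % fill it column by column
end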
If you don't pre-allocate, then the extent of your performance problems will depend on:
your filesystem (how data are stored on disk, the cluster-size),
your hardware (HDD seek time, or SSD access times),
the size of your mat file (whether it moves into non-contiguous space),
and the current state of your storage (existing fragmentation / free space).
Let's pretend that you're running a recent Windows OS, and so are using the NTFS file-system. Let's further assume that it has been set up with the default 4 kB cluster size. So, space on disk gets allocated in 4 kB chunks and the locations of these are indexed to the Master File Table. If the file grows and contiguous space is not available then there are only two choices:
Re-write the entire file to a new part of the disk, where there is sufficient free space.
Fragment the file, storing the additional data at a different physical location on disk.
The file system chooses to do the least-bad option, #2, and updates the MFT record to indicate where the new clusters will be on disk.
Now, the hard disk needs to physically move the read head in order to read or write the new clusters, and this is a (relatively) slow process. In terms of moving the head, and waiting for the right area of disk to spin underneath it ... you're likely to be looking at a seek time of about 10ms. So for every time you hit a fragment, there will be an additional 10ms delay whilst the HDD moves to access the new data. SSDs have much shorter seek times (no moving parts). For the sake of simplicity, we're ignoring multi-platter systems and RAID arrays!
If you keep growing the file at different times, then you may experience a lot of fragmentation. This really depends on when / how much the file is growing by, and how else you are using the hard disk. The performance hit that you experience will also depend on how often you are reading the file, and how frequently you encounter the fragments.
MATLAB stores data in Column-major order, and from the comments it seems that you're interested in performing column-wise operations (sums, averages) on the dataset. If the columns become non-contiguous on disk then you're going to hit lots of fragments on every operation!
As mentioned in the comments, both read and write actions will be performed via a buffer. As @user3666197 points out, the OS can speculatively read ahead of the current data on disk, on the basis that you're likely to want that data next. This behaviour is especially useful if the hard disk would be sitting idle at times: keeping it operating at maximum capacity and working with small parts of the data in buffer memory can greatly improve read and write performance. However, from your question it sounds as though you want to perform large operations on a huge (too big for memory) .mat file. Given your use-case, the hard disk is going to be working at capacity anyway, and the data file is too big to fit in the buffer, so these particular tricks won't solve your problem.
So... yes, you should pre-allocate. Yes, a performance hit from growing the array on disk will apply. Yes, it will probably be significant (it depends on specifics like the amount of growth, fragmentation, etc.). And if you're going to really get into the HPC spirit of things, then stop what you're doing, throw away MATLAB, shard your data and try something like Apache Spark! But that's another story.
Does that answer your question?
P.S. Corrections / amendments welcome! I was brought up on POSIX inodes, so sincere apologies if there are any inaccuracies in here...
Preallocating a variable in RAM and preallocating on the disk don't solve the same problem.
In RAM
To expand a matrix in RAM, MATLAB creates a new matrix with the new size and copies the values of the old matrix into the new one and deletes the old one. This costs a lot of performance.
If you preallocated the matrix, the size of it does not change. So there is no more reason for MATLAB to do this matrix copying anymore.
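A tiny illustration of the difference (sizes are arbitrary):
n = 1e5;
x = [];                % growing: MATLAB reallocates and copies on every iteration
for k = 1:n
    x(end+1) = k;      %#ok<SAGROW>  slow
end

y = zeros(1, n);       % preallocated: the array is created once
for k = 1:n
    y(k) = k;          % fast
end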
On the hard-disk
The problem on the hard-disk is fragmentation as GnomeDePlume said. Fragmentation will still be a problem, even if the file is written in one session.
Here is why: the hard disk will generally already be a little fragmented. Imagine
# to be memory blocks on the hard disk that are full
M to be memory blocks on the hard disk that will be used to save data of your matrix
- to be free memory blocks on the hard disk
Now the hard disk could look like this before you write the matrix onto it:
###--##----#--#---#--------------------##-#---------#---#----#------
When you write parts of the matrix (e.g. MMM blocks) you could imagine the process looking like this (I give an example where the file system just goes from left to right and uses the first free space that is big enough; real file systems are different):
First matrix part:
###--##MMM-#--#---#--------------------##-#---------#---#----#------
Second matrix part:
###--##MMM-#--#MMM#--------------------##-#---------#---#----#------
Third matrix part:
###--##MMM-#--#MMM#MMM-----------------##-#---------#---#----#------
And so on ...
Clearly the matrix file on the hard disk is fragmented although we wrote it without doing anything else in the meantime.
This could be avoided if the matrix file were preallocated. In other words, we tell the file system in advance how big our file will be, or in this example, how many memory blocks we want to reserve for it.
Imagine the matrix needs 12 blocks: MMMMMMMMMMMM. We tell the file system that we need that much by preallocating, and it will try to accommodate our needs as best as it can. In this example, we are lucky: there is a contiguous run of >= 12 free memory blocks.
Preallocating (We need 12 memory blocks):
###--##----#--#---# (------------) --------##-#---------#---#----#------
The file system reserves the space between the parentheses for our matrix and will write into there.
First matrix part:
###--##----#--#---# (MMM---------) --------##-#---------#---#----#------
Second matrix part:
###--##----#--#---# (MMMMMM------) --------##-#---------#---#----#------
Third matrix part:
###--##----#--#---# (MMMMMMMMM---) --------##-#---------#---#----#------
Fourth and last part of the matrix:
###--##----#--#---# (MMMMMMMMMMMM) --------##-#---------#---#----#------
Voilà, no fragmentation!
Analogy
Generally you could imagine this process as buying cinema tickets for a large group. You would like to stick together as a group, but there are already some seats in the theatre reserved by other people. For the cashier to be able to accommodate your request (a large group that wants to stick together), he/she needs to know how big your group is (preallocating).
A quick answer to the whole discussion (in case you do not have the time to follow or the technical understanding):
Pre-allocation in Matlab is relevant for operations in RAM. Matlab does not give low-level access to I/O operations and thus we cannot talk about pre-allocating something on disk.
When writing a large amount of data to disk, it has been observed that the fewer the number of writes, the faster the execution of the task and the smaller the fragmentation on disk.
Thus, if you cannot write in one go, split the writes in big chunks.
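An illustrative sketch of that advice, under the assumption that samples arrive row by row: buffer rows in RAM and flush them to a plain binary file in large chunks rather than writing each row separately (acquiringData and getNextRow are hypothetical placeholders for whatever produces the data, and all sizes are made up):
nCols = 1e5; bufRows = 100;
buf = zeros(bufRows, nCols); filled = 0;
fid = fopen('result.bin', 'w');
while acquiringData()                          % hypothetical: more rows to come?
    filled = filled + 1;
    buf(filled, :) = getNextRow();             % hypothetical producer of one row
    if filled == bufRows
        fwrite(fid, buf.', 'double');          % one large write; transpose so each
        filled = 0;                            % row is stored contiguously
    end
end
if filled > 0, fwrite(fid, buf(1:filled,:).', 'double'); end
fclose(fid);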
Prologue
This answer is based on both the original post and the clarifications provided by the author during the past week.
The question of the adverse performance hits introduced by low-level, physical-media-dependent fragmentation, introduced by both the file-system and the file-access layers, is confronted below, both in terms of their time-domain magnitudes and in terms of how often they are repeated in real use.
Finally, a state-of-the-art, principally fastest possible solution to the given task is proposed, so as to minimise the damage from both wasted effort and mis-interpretation errors arising from idealised or otherwise invalid assumptions, such as the assumption that the risk of "serious file fragmentation is low" because the whole file will be written in one session (which is simply not possible in principle during the many multi-core / multi-process operations of a contemporary O/S acting in real time over the time of creation and over a sequence of extensive modifications (ref. the MATLAB size limits) of TB-sized BLOB file-objects inside contemporary COTS file systems).
One may hate the facts, however the facts remain true out there until a faster & better method moves in
First, before considering performance, realise the gaps in the concept
The real adverse performance hit is not caused by HDD-IO and is not related to file fragmentation
RAM is not an alternative to the semi-permanent storage of the .mat file
Additional operating-system limits and interventions, plus additional driver and hardware-based abstractions, were left out of the assumptions about unavoidable overheads
The actual computational scheme was omitted from the review of what will have the biggest impact / influence on the resulting performance
Given:
The whole processing is intended to be run just once, no optimisation / iterations, no continuous processing
Data have 1E6 double Float-values x 1E5 columns = about 0.8 TB (+HDF5 overhead)
In spite of original post, there is no random IO associated with the processing
Data acquisition phase communicates with a .NET to receive DataELEMENTs into MATLAB
That means, since v7.4,
a 1.6 GB limit on MATLAB WorkSpace in a 32bit Win ( 2.7 GB with a 3GB switch )
a 1.1 GB limit on MATLAB biggest Matrix in wXP / 1.4 GB wV / 1.5 GB
a bit "released" 2.6 GB limit on MATLAB WorkSpace + 2.3 GB limit on a biggest Matrix in a 32bit Linux O/S.
Having a 64bit O/S will not help any kind of 32bit MATLAB 7.4 implementation; it will still fail to work due to another limit, the maximum number of cells in an array, which will not cover the 1E12 requested here.
The only chance is to have both
both a 64bit O/S ( wXP, Linux, Solaris )
and a 64bit MATLAB 7.5+
MathWorks' source for R2007a cited above, for newer MATLAB R2013a you need a User Account there
Data storage phase assumes block-writes of a row-ordered data blocks ( a collection of row-ordered data blocks ) into a MAT-file on an HDD-device
Data processing phase assumes to re-process the data in a MAT-file on an HDD-device, after all inputs have been acquired and marshalled to a file-based off-RAM-storage, but in a column-ordered manner
only column-wise mean()s / max()es need to be calculated ( nothing more complex )
Facts:
MATLAB uses a "restricted" implementation of an HDF5 file-structure for binary files.
Review performance measurements on real-data & real-hardware ( HDD + SSD ) to get feeling of scales of the un-avoidable weaknesses thereof
The Hierarchical Data Format (HDF) was born in 1987 at the National Center for Supercomputing Applications (NCSA), some 20 years ago. Yes, that old. The goal was to develop a file format that combines flexibility and efficiency to deal with extremely large datasets. Somehow the HDF file was not used in the mainstream, as just a few industries were indeed able to really make use of its terrifying capacities or simply did not need them.
FLEXIBILITY means that the file-structure bears some overhead that you need not use if the content of the array is not changing (you pay the cost without consuming any benefit of using it), and the assumption that HDF5's limits on the overall size of the data it can contain somehow help and save the MATLAB side of the problem is not correct.
MAT-files are good in principle, as they avoid an otherwise persistent need to load a whole file into RAM to be able to work with it.
Nevertheless, MAT-files do not serve well the simple task as it was defined and clarified here. An attempt to do that will result in just poor performance, and the HDD-IO file fragmentation (adding a few tens of milliseconds during write-throughs and somewhat less than that on read-aheads during the calculations) will not help at all in judging the core reason for the overall poor performance.
A professional solution approach
Rather than moving the whole gigantic set of 1E12 DataELEMENTs into a MATLAB in-memory proxy data array that is just scheduled for a subsequent sequenced stream of HDF5 / MAT-file HDD-device IOs (write-throughs and O/S-vs-hardware-device-chain conflicting / sub-optimised read-aheads), just so that the immense work is "just [married] ready" for a few trivially simple calls of the mean() / max() MATLAB functions (which will do their best to revamp each of the 1E12 DataELEMENTs in just another order, and even TWICE, yes, another circus right after the first job-processing nightmare gets all the way down through all the HDD-IO bottlenecks, back into MATLAB in-RAM objects), redesign this very step into a pipelined BigDATA processing from the very beginning.
while true                                        % ref. comment Simon W Oct 1 at 11:29
    [ isStillProcessingDotNET, ...                % a FLAG from the .NET reader function
      aDotNET_RowOfVALUEs ...                     % a ROW  from the .NET reader function
      ] = GetDataFromDotNET( aDtPT );             % .NET reader
    if ( isStillProcessingDotNET )                % Yes, more rows are still to come ...
        aRowCOUNT = aRowCOUNT + 1;                % keep .INC for aRowCOUNT ( mean() )
        for i = 1:size( aDotNET_RowOfVALUEs, 2 )  % stepping across each column
            aValue = aDotNET_RowOfVALUEs(i);
            anIncrementalSumInCOLUMN(i) = ...
                anIncrementalSumInCOLUMN(i) + aValue;  % keep .SUM for each column ( mean() )
            if ( aMaxInCOLUMN(i) < aValue )       % retest for a "max.update()"
                aMaxInCOLUMN(i) = aValue;         % .STO a just-found "new" max
            end
        end
        continue                                  % force re-loop
    else
        break
    end
end
%-------------------------------------------------------------------------------------------
% FINALLY:
% all results are pre-calculated right at the end of .NET reading phase:
%
% -------------------------------
% BILL OF ALL COMPUTATIONAL COSTS ( for given scales of 1E5 columns x 1E6 rows ):
% -------------------------------
% HDD.IO: **ZERO**
% IN-RAM STORAGE:
% Attr Name Size Bytes Class
% ==== ==== ==== ===== =====
% aMaxInCOLUMNs 1x100000 800000 double
% anIncrementalSumInCOLUMNs 1x100000 800000 double
% aRowCOUNT 1x1 8 double
%
% DATA PROCESSING:
%
% 1.000.000x .NET row-oriented reads ( same for both the OP and this, smarter BigDATA approach )
% 1x INT in aRowCOUNT, %% 1E6 .INC-s
% 100.000x FLOATs in aMaxInCOLUMN[] %% 1E5 * 1E6 .CMP-s
% 100.000x FLOATs in anIncrementalSumInCOLUMN[] %% 1E5 * 1E6 .ADD-s
% -----------------
% about 15 sec per COLUMN of 1E6 rows
% -----------------
% --> mean()s are anIncrementalSumInCOLUMN./aRowCOUNT
%-------------------------------------------------------------------------------------------
% PIPE-LINE-d processing takes in TimeDOMAIN "nothing" more than the .NET-reader process
%-------------------------------------------------------------------------------------------
Your pipelined BigDATA computation strategy will, in a smart way, principally avoid interim storage buffering in MATLAB, as it will progressively calculate the results in not more than about 3 x 1E6 ADD/CMP registers, all with a static layout. It avoids proxy storage into an HDF5 / MAT-file, absolutely avoids all HDD-IO related bottlenecks and the low sustained-read speeds of BigDATA (not speaking at all about interim BigDATA sustained writes...), and it will also avoid ill-performing memory-mapped use just for counting means and maxes.
Epilogue
The pipeline processing is nothing new under the Sun.
It re-uses what speed-oriented HPC solutions already use for decades
[ generations before BigDATA tag has been "invented" in Marketing Dept's. ]
Forget about zillions of HDD-IO blocking operations & go into a pipelined distributed process-to-process solution.
There is nothing faster than this
If it were, all FX business and HFT Hedge Fund Monsters would already be there...
As in my previous question, I have the following problem. I have an n x n cell array P whose elements P{i,j} are matrices that are also n x n. So the total number of elements is n^4. For n = 100 there is an error about the lack of memory. I calculate this matrix only once and then operate with it. Could you advise me how to store the matrices P{i,j} on the HDD?
I mean that maybe it is possible to store each of them in a file like "data_i_j.dat" and then load it while doing computations in a loop over i and j?
The save function will write data to a file, and the load function will read it back again: save(filename, varname, varname, ...), followed by S = load(filename), and then referring to S.varname (there is also a form of load that just dumps everything into your current workspace, but that seems like poor practice).
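A minimal sketch of the per-block idea from the question (the file naming follows the question's "data_i_j" suggestion, and computeBlock is a hypothetical placeholder for however each block is produced):
n = 100;
for i = 1:n
    for j = 1:n
        Pij = computeBlock(i, j);                   % hypothetical: build one n x n block
        save(sprintf('data_%d_%d.mat', i, j), 'Pij');
        clear Pij
    end
end

% Later, while operating with the matrix:
for i = 1:n
    for j = 1:n
        S = load(sprintf('data_%d_%d.mat', i, j));  % S.Pij is the stored block
        % ... use S.Pij here ...
    end
end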