How to read a text file in chunks of a given size in MATLAB

I'm trying to read a text file chunk by chunk, where every chunk has a size of, for example, 10 KB.
How can I do that in MATLAB?
By the way, you can't control the content of the text file (which means you can't suggest adding a specific delimiter character to split the file on).

I believe you could start by using fread and specifying that you want to read n bytes at a time; something like this, perhaps?
n = 10000;                   % chunk size in bytes
fid = fopen(filename);       % fopen takes a file name and returns a file identifier
A = fread(fid, n, '*char');  % read up to n bytes as characters ('uint8' if you want raw bytes)
What this should do is read 10 KB and then leave the file position just after the last character read. If you call fread again with the same parameters, it should give you the next n bytes. I'd double-check this, but I don't have a copy of MATLAB at the moment.
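Building on that, here is a minimal sketch (the file name is a placeholder) of a loop that processes a whole file in 10 KB chunks; since fread advances the file position itself, each call returns the next chunk, and the last one may be shorter than n:
n = 10*1024;                           % chunk size in bytes
fid = fopen('myTextFile.txt', 'r');    % hypothetical file name
while ~feof(fid)
    chunk = fread(fid, n, '*char')';   % read up to n characters as a char row vector
    % ... process chunk ...
end
fclose(fid);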

Related

A solution for "out of memory" error in matlab

I have a very big text file (about 11 GB) that needs to be loaded into MATLAB, but when I use the "textread" function an "out of memory" error occurs, and there is no way to reduce the file size. When I type memory, it shows me this:
memory
Maximum possible array: 24000 MB (2.517e+10 bytes) *
Memory available for all arrays: 24000 MB (2.517e+10 bytes) *
Memory used by MATLAB: 1113 MB (1.167e+09 bytes)
Physical Memory (RAM): 16065 MB (1.684e+10 bytes)
* Limited by System Memory (physical + swap file) available.
Does anyone have a solution to this problem?
@Anthony suggested a way to read the file line by line, which is perfectly fine, but more recent (>=R2014b) versions of MATLAB have datastore functionality, which is designed for processing large data files in chunks.
There are several types of datastore available depending on the format of your text file. In the simplest cases (e.g. CSV files), the automatic detection works well and you can simply say
ds = datastore('myCsvFile.csv');
while hasdata(ds)
    chunkOfData = read(ds);
    % ... compute with chunkOfData ...
end
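If the default chunk size isn't what you want, you can typically control how many rows each call to read returns via the datastore's ReadSize property. A sketch (the value 50000 is an arbitrary choice; check the documentation for your release):
ds = datastore('myCsvFile.csv');
ds.ReadSize = 50000;          % rows returned per call to read
while hasdata(ds)
    chunkOfData = read(ds);   % a table with up to ReadSize rows
    % ... compute with chunkOfData ...
end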
In even more recent (>=R2016b) versions of MATLAB, you can go one step further and wrap your datastore into a tall array. tall arrays let you operate on data that is too large to fit into memory all at once. (Behind the scenes, tall arrays perform computations in chunks, and give you the results only when you ask for them via a call to gather). For example:
tt = tall(datastore('myCsvFile.csv'));
data = tt.SomeVariable;
result = gather(mean(data)); % Trigger tall array evaluation
According to your clarification of the purpose of your code:
it is a point cloud with XYZRGB columns in a txt file and I need to add another column to this.
What I suggest you to do is read the text file one line at a time, modify the line and write the modified line straight to a new text file.
To read one line at a time:
% Open the file for reading.
fid = fopen(filename, 'r');
% Get the first line.
line = fgetl(fid);
while ~isnumeric(line)
    % Do something with the line.
    % Get the next line (fgetl returns -1, a numeric value, at end of file).
    line = fgetl(fid);
end
fclose(fid);
To write the line, you can use fprintf.
Here is a demonstration:
filename = 'myfile.txt';
filename_new = 'myfile_new.txt';
fid = fopen(filename);
fid_new = fopen(filename_new, 'w+');
line = fgetl(fid);
while ~isnumeric(line)
    % Make sure you add \r\n at the end of the string;
    % otherwise, your text file will become a one-liner.
    fprintf(fid_new, '%s %s\r\n', line, 'new column');
    line = fgetl(fid);
end
fclose(fid);
fclose(fid_new);

How to delete first block of bytes of a file in matlab

I want to delete the first block of bytes in a file in MATLAB (e.g. delete the first 50 bytes of a text file).
Is that possible in MATLAB? If so, how can I achieve it?
Do you want to do this with or without loading the file into memory? If you can do this in memory, one possible way is to read in the file with fseek and fread, skip the first few bytes, read the rest of the data into memory and save that back into a new file using fwrite.
In Linux / Mac OS, there are efficient ways to do this without having to load the file in memory. For example, see here: https://unix.stackexchange.com/questions/6852/best-way-to-remove-bytes-from-the-start-of-a-file
However, if you're in Windows, you can't escape doing a byte copy which ultimately means doing this in memory. From what I have seen with Windows, the only way is to do a byte copy where the input pointer starts at however many bytes you want to skip over.
See for example here: What is the most efficient way to remove first N bytes from a file on Windows?, and also here: http://blogs.msdn.com/b/oldnewthing/archive/2010/12/01/10097859.aspx
With these posts, you don't have a choice but to do a byte copy. Therefore, if you want to simulate the same in MATLAB, you'll have to do what I said above.
Since you're working in MATLAB, here is some example code to do what I have outlined above:
fid = fopen('data', 'r');      %// Open up the data file
fid2 = fopen('dataout', 'w');  %// File to save - new file with skipped bytes
skip = 50;                     %// Determine how many bytes you want to skip over
fseek(fid, skip, 'bof');       %// Skip over bytes - 'bof' means from beginning of file
A = fread(fid);                %// Read the rest of the data
fwrite(fid2, A);               %// Write data to the new file
%// Close the files
fclose(fid);
fclose(fid2);
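If the file is too large to read into memory in one go, the same idea can be applied block by block so that only one block is held in memory at a time. A sketch under that assumption (the 1 MB block size is an arbitrary choice):
fid = fopen('data', 'r');       %// Input file
fid2 = fopen('dataout', 'w');   %// Output file
skip = 50;                      %// Bytes to drop from the start
blockSize = 1024*1024;          %// 1 MB per block (hypothetical choice)
fseek(fid, skip, 'bof');        %// Jump past the bytes to delete
while ~feof(fid)
    A = fread(fid, blockSize, '*uint8');  %// Read up to blockSize bytes
    fwrite(fid2, A);                      %// Append them to the new file
end
fclose(fid);
fclose(fid2);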

save a large cell matrix (string variables) in Matlab is very slow and size is massive

I have a big cell matrix (string variables) with 40,000,000 lines. I first check the size using whos('file'), and it tells me that the size of the matrix in the workspace is 4.5 GB. Then I use save('file','-v7.3') to export it to a .mat file. It takes a very long time; after 10 minutes it is still saving, and when I check the file in the target directory its size is already 12 GB and still increasing. Can anybody tell me what is happening? Is there another way to save this matrix? It doesn't need to be a .mat file; it can be .txt or something else.
A small part of the matrix:
'00086810'
'00192610'
'00213T10'
'00339010'
'00350L10'
'00350P10'
'00428010'
'00431F10'
'00433710'
'00723110'
'00743710'
'00818210'
'00818810'
'01031710'
'01204610'
'01747610'
'01747F10'
'01852Q10'
'01853510'
'01887110'
'01888510'
'01890A10'
'01920510'
'02316010'
'02343R10'
'02361310'
'02391210'
'02407310'
'02407640'
'02408H10'
'02434310'
'02520W10'
'02581610'
Let's test:
test = 'helloooooo'
whos('test')
  Name      Size            Bytes  Class    Attributes
  test      1x10               20  char
save('A','test')
The resulting A.mat file is 184 bytes.
Let's test with bigger data:
symbols = ['a':'z' 'A':'Z' '0':'9'];
MAX_ST_LENGTH = 500;
stLength = randi(MAX_ST_LENGTH);
for ii = 1:100
    nums = randi(numel(symbols), [1 stLength]);
    testcell{ii} = symbols(nums);
end
save('test', 'testcell')
whos('testcell')
  Name          Size            Bytes  Class    Attributes
  testcell      1x100           52200  cell
The resulting test.mat file is 15.7 KB.
So save does compress the data. The compression ratio depends on the data, but I usually get around 3x compression.
Can you show us your code? Maybe you are saving things incorrectly.
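Since you mention that a .txt file would also be fine, one alternative (just a sketch, assuming your variable is a plain column cell array of character vectors like the sample above, and that it is called file as in your whos call) is to write it straight to a text file with fprintf:
C = file;                                  % your cell array of strings
fid = fopen('file.txt', 'w');
blockSize = 100000;                        % write in blocks to keep memory overhead down (arbitrary choice)
for k = 1:blockSize:numel(C)
    idx = k:min(k+blockSize-1, numel(C));
    fprintf(fid, '%s\n', C{idx});          % one string per line
end
fclose(fid);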

Determine number of bytes in a line of a binary file Matlab

I am porting some code that interprets binary data files from Fortran to MATLAB and have come across a bit of an issue.
In the Fortran code I am working with, the following check is performed:
CHARACTER*72 PICTFILE
CHARACTER*8192 INLINE
INTEGER NPX
INTEGER NLN
INTEGER BYTES
c This is read from another file but I'll just hard code it for now
NPX = 1024
NLN = 1024
bytes=2
open(unit=10, file=pictfile, access='direct', recl=2*npx, status='old')
read(10,rec=nln, err=20) inline(1:2*npx)
go to 21
20 bytes=1
21 continue
close(unit=10)
where nln is the number of lines in the file being read, and npx is the number of integers contained in each line. This check basically determines whether each of those integers is 1 byte or 2 bytes. I understand the Fortran code well enough to figure that out, but now I need to figure out how to perform this check in MATLAB. I have tried using the fgetl command on the file and then checking the length of the returned string, but the length never seems to be more than 4 or 5 characters, when even if each integer is 1 byte the length should be somewhere around 1000.
Does someone know a way that I can automatically perform this check in MATLAB?
So what we figured out was that the check simply tests whether the file is the correct size. In MATLAB this can be done as:
fullpath = which(file);       % extract the full file path
s = dir(fullpath);            % extract information about the file
fid = fopen(file, 'r');       % open the image file
if s.bytes/NLN == 2*NPX       % if the file is NLN*NPX*2 bytes
    for n = 1:NLN             % for each line
        dn(n,:) = (fread(fid, NPX, '*uint16','b'))';  % read the line into dn
    end
elseif s.bytes/NLN == NPX     % else if the file is NLN*NPX bytes
    for n = 1:NLN             % for each line
        dn(n,:) = (fread(fid, NPX, '*uint8','b'))';   % read the line into dn
    end
else                          % if the file is neither, something went wrong
    error('Invalid file. The file is not the correct size specified by the SUM file')
end
where file contains the file name, NLN contains the number of lines, and NPX contains the number of columns. Hope this helps anyone who may have a similar problem, but be warned: this only works if every entry in the file occupies the same number of bytes, and if you know the total number of entries there should be.
Generally speaking, binary files don't have line lengths; only text files do. MATLAB's fgetl will read until it finds the binary equivalent of the newline characters, then removes them and returns the result. For a binary file, on the other hand, you should read a block of length 2*npx and return the result. It looks like you want to use fread to get a block of data like this:
inline = fread(fileID,2*npx)
Your Fortran code requests record number nln. If the code you have shared reads all the records starting at the first one and working up, then you can just put the above code in a loop over nln = 1:maxValue. However, if you really do want to pull out just record nln, you need to fseek to that position first (records are 1-based, so you skip the nln-1 records before it):
fseek(fileID, (nln-1)*2*npx, 'bof');
inline = fread(fileID, 2*npx);
so you get something like the following:
Either reading them all in a loop:
fileID = fopen(pictfile);
nln = 0;
while ~feof(fileID)
    nln = nln + 1;
    inline = fread(fileID, 2*npx);
end
fclose(fileID);
or picking out only record number nln:
fileID = fopen(pictfile);
nln = 7;
fseek(fileID, (nln-1)*2*npx, 'bof');  % skip the nln-1 records before the one we want
inline = fread(fileID, 2*npx);
fclose(fileID);

Memory map file in MATLAB?

I have decided to use memmapfile because my data (typically 30Gb to 60Gb) is too big to fit in a computer's memory.
My data files consist of two columns of data that correspond to the outputs of two sensors, and I have them in both .bin and .txt formats.
m=memmapfile('G:\E-Stress Research\Data\2013-12-18\LD101_3\EPS/LD101_3.bin','format','int32')
m.data(1)
I used the above code to memory-map my data to a variable "m", but I have no idea what data format to use ('int8', 'int16', 'int32', 'int64', 'uint8', 'uint16', 'uint32', 'uint64', 'single', or 'double').
In fact I tried all of the data formats that MATLAB supports, but when I used m.data(index number) I never got a pair of numbers (2 columns of data), which is what I expected; also, the numbers differ depending on the format I used.
If anyone has experience with memmapfile please help me.
Here are some smaller versions of my data files so people can understand how my data is structured:
cheers
James
memmapfile is designed for reading binary files, that's why you are having trouble with your text file. The data in there is characters, so you'll have to read them as characters and then parse them into numbers. More on that below.
The binary file appears to contain more than just a stream of floating point values written in binary format. I see identifiers (strings) and other things in the file as well. Your only hope of reading that is to contact the manufacturer of the device that created the binary file and ask them about how to read in such files. There'll probably be an SDK, or at least a description of the format. You might want to look into this as the floating point numbers in your text file might be truncated, i.e., you have lost precision compared to directly reading the binary representation of the floats.
Ok, so how to read your file with memmapfile? This post provides some hints.
So first we open your file as 'uint8' (note there is no 'char' option, so as a workaround we read the content of the file into a datatype of the same size):
m = memmapfile('RTL5_57.txt','Format','uint8'); % uint8 is default, you could leave that off
We can render the data read in as uint8 as characters by casting it to char:
c = char(m.Data(1:3*19)).' % read the first three lines. NB: transpose is just for getting nice output; don't use it in your code
c =
0.398516 0.063440
0.399611 0.063284
0.398985 0.061253
As each line in your file has the same length (2*8 chars for the numbers, 1 tab and 2 chars for newline = 19 chars), we can read N lines from the file by reading N*19 values. So m.Data(1:19) gets you the first line, m.Data(20:38), the second line, and m.Data(20:57) the second and third lines. Read as much as you want at once.
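As an illustration (just a sketch, assuming the fixed 19-character line length above; i and j are hypothetical line numbers), the byte indices covering lines i through j can be computed like this:
lineLen = 19;                          % bytes per line in this particular file
i = 100; j = 109;                      % hypothetical range: lines 100 to 109
idx = (i-1)*lineLen + 1 : j*lineLen;   % byte indices covering those lines
c = char(m.Data(idx)).';               % characters for lines i..j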
Then we'll have to parse the read-in data into floating point numbers:
f = sscanf(c,'%f')
f =
0.3985
0.0634
0.3996
0.0633
0.3990
0.0613
All that's left now is to reshape them into your two column format
d = reshape(f,2,[]).'
d =
0.3985 0.0634
0.3996 0.0633
0.3990 0.0613
Easier ways than using memmapfile:
You don't need to use memmapfile to solve your problem, and I think it makes things more complicated. You can simply use fopen followed by fread:
fid = fopen('RTL5_57.txt');
c = fread(fid, Nlines*19, '*char');
% now sscanf and reshape as above
% NB: one can read the values from the text file directly with f = fscanf(fid,'%f',Nlines*2).
% However, in testing, I have found calling fread followed by sscanf to be faster,
% which makes a significant difference when reading such large files.
Using this you can read Nlines pairs of values at a time, process them, and simply call fread again to read the next Nlines. fread remembers where it is in the file (as does fscanf), so you simply use the same call to get the next lines. It's thus easy to write a loop that processes the whole file, testing with feof(fid) whether you are at the end of the file.
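Putting that together, a minimal sketch of such a loop might look like this (the chunk size is an arbitrary choice, and the 19-character line length is the one worked out above):
Nlines = 10000;                              % lines per chunk
lineLen = 19;                                % bytes per line in this particular file
fid = fopen('RTL5_57.txt');
while ~feof(fid)
    c = fread(fid, Nlines*lineLen, '*char').';  % read up to Nlines lines as characters
    d = reshape(sscanf(c, '%f'), 2, []).';      % parse and arrange into two columns
    % ... process d ...
end
fclose(fid);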
An even easier way is suggested here: use textscan. To slightly adapt their example code:
Nlines = 10000;
% describe the format of the data
% for more information, see the textscan reference page
format = '%f\t%f';
fid = fopen('RTL5_57.txt');
while ~feof(fid)
    C = textscan(fid, format, Nlines, 'CollectOutput', true);
    d = C{1};   % immediately clear C at this point if you need the memory!
    % process d
end
fclose(fid);
Note again that fread followed by sscanf will be fastest. However, the fread method will break as soon as a single line in the text file doesn't exactly match the expected format, whereas textscan is forgiving of whitespace changes and thus more robust.