I have a huge .txt file, parts of which I want to parse using textscan. Say the file has 10000 lines of data and the part I want starts at line 300, and that part also has a header of, say, 10 lines. How can I skip the first 300 lines (not using the HeaderLines option of textscan, of course, as then I wouldn't be able to get my actual 10-line header)? Or is there a way to jump straight to line 300 and start textscan from there, as if line 301 were the first line?
So, assuming your data is generated by the following (since you have not mentioned how it's formatted in your question):
fid = fopen('datafile.txt', 'w');
for i = 1:300
    fprintf(fid, 'ignore%d\n', i);
end
for i = 301:310
    fprintf(fid, 'header%d\n', i);
end
for i = 311:10000
    fprintf(fid, '%d\n', i);
end
fclose(fid);
You can read the data by first using fgetl to advance past the first 300 lines, then using textscan twice to get the header info and the data. The thing to remember is that textscan starts scanning from wherever fid is currently pointing, so if it points to the 301st line, it will start from there. Here's the code to read the file above, starting from line 301:
fid = fopen('datafile.txt', 'r');
for i = 1:300
    fgetl(fid);
end
scannedHeader = textscan(fid, '%s', 10);
scannedData = textscan(fid, '%d');
fclose(fid);
NB: if the data always has the same format, you can use ftell once to find the exact byte offset to skip to, and then use fseek to jump straight to that offset on subsequent reads.
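For example, a minimal sketch of that idea (assuming the byte offset of line 301 never changes between runs of your program):
% one-time pass: record the byte offset where line 301 starts
fid = fopen('datafile.txt', 'r');
for i = 1:300
    fgetl(fid);
end
offset = ftell(fid);   % byte position of the start of line 301
fclose(fid);

% later reads: jump straight to that offset and scan from there
fid = fopen('datafile.txt', 'r');
fseek(fid, offset, 'bof');
scannedHeader = textscan(fid, '%s', 10);
scannedData = textscan(fid, '%d');
fclose(fid);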
I'm trying to analyze a very large file using textscan in MATLAB. The file in question is about 12 GB in size and contains about 250 million lines with seven floating-point numbers in each, delimited by whitespace. Because this obviously would not fit into the RAM of my desktop, I'm using the approach suggested in the MATLAB documentation, i.e. loading and analyzing a smaller block of the file at a time (according to the documentation, this should allow for processing "arbitrarily large delimited text file[s]"). However, this only lets me scan about 43% of the file, after which textscan starts returning empty cells, despite there still being data left to scan in the file.
To debug, I attempted to go to several positions in the file using the fseek function, for example like this:
fileInfo = dir(fileName);
fid = fopen(fileName);
fseek(fid, floor(fileInfo.bytes/10), 'bof');
textscan(fid, '%f %f %f %f %f %f %f', 'Delimiter', ' ');
I'm assuming that the way I'm using fseek here moves the position indicator to about 10% of my file. (I'm aware this doesn't necessarily mean the indicator is at the beginning of a line, but if I run textscan twice I get a satisfactory answer.) Now, if I substitute fileInfo.bytes/10 by fileInfo.bytes/2 (i.e. moving it to about 50% of the file) everything breaks down and textscan only returns an empty 1x7 cell.
I looked at the file using a text editor for large files, and this shows that the entire file looks fine, and that there should be no reason for textscan to be confused. The only possible explanation that I can think of is that something goes wrong on a much deeper level that I have little understanding of. Any suggestions would be greatly appreciated!
EDIT
The relevant part of my code used to look like this:
while ~feof(fid)
    data = textscan(fid, FormatString, nLines, 'Delimiter', ' '); %// Read nLines
    %// do some stuff
end
First I tried fixing it using ftell and fseek as suggested by Hoki below. This gave exactly the same error as I got before: MATLAB was unable to read in more than approximately 43% of the file. Then I tried using the HeaderLines solution (also suggested below), like this:
i = 0;
while ~feof(fid)
    frewind(fid);
    data = textscan(fid, FormatString, nLines, 'Delimiter', ' ', 'HeaderLines', i*nLines);
    %// do some stuff
    i = i + 1;
end
This seems to read in the data without producing errors; it is, however, incredibly slow.
I'm not entirely sure I understand what HeaderLines does in this context, but it seems to make textscan completely ignore everything that comes before the specified line. This doesn't seem to happen when using textscan in the "appropriate" way (either with or without ftell and fseek): in both cases it tries to continue from its last position, but to no avail, for some reason I don't understand yet.
Using fseek to move the pointer in a file is only a good idea when you know precisely where (or by how many bytes) you want to move the cursor. It is very useful for binary files when you just want to skip some records of known length, but on a text file it is more dangerous and confusing than anything else (unless you are absolutely sure that each line is the same size and each element on the line is at the exact same place/column, which doesn't happen often).
There are several ways to read a text file block by block:
1) Use the HeaderLines option
To simply skip a block of lines on a text file, you can use the HeaderLines parameter of textscan, so for example:
readFormat = '%f %f %f %f %f %f %f' ; %// read format specifier
nLines = 10000 ; %// number of lines to read per block
fileInfo = dir(fileName);
%// read the FIRST block
fid = fopen(fileName);
M = textscan(fid, readFormat, nLines, 'Delimiter', ' '); %// read the first 10000 lines
fclose(fid);
%// Now do something with your "M" data
Then when you want to read the second block:
%// later, read the SECOND block:
fid = fopen(fileName);
M = textscan(fid, readFormat, nLines, 'Delimiter', ' ', 'HeaderLines', nLines); %// read lines 10001 to 20000
fclose(fid);
And if you have many blocks, for the Nth block, just adapt:
%// and then for the Nth block:
fid = fopen(fileName);
M = textscan(fid, readFormat, nLines, 'Delimiter', ' ', 'HeaderLines', (N-1)*nLines);
fclose(fid);
If necessary (if you have many blocks), just code this last version in a loop.
Note that this is good if you close your file after reading each block (so the file pointer starts at the beginning of the file when you open it again). Closing the file after reading a block of data is safer if your processing might take a long time or may error out (you don't want files to remain open too long, or to lose the fid if you crash).
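If you do code it in a loop, a rough sketch could look like the following (nBlocks, the total number of blocks, is an assumption here; you would compute it yourself, e.g. from the total line count divided by nLines):
nBlocks = 25; %// total number of blocks (assumed known)
for N = 1:nBlocks
    fid = fopen(fileName);
    M = textscan(fid, readFormat, nLines, 'Delimiter', ' ', 'HeaderLines', (N-1)*nLines);
    fclose(fid);
    %// process M here, then let it be overwritten on the next iteration
end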
2) Read by block (without closing the file)
If the processing of the block is quick and safe enough that you're sure it won't bomb out, you could afford not to close the file. In this case, the textscan file pointer will stay where you stopped, so you could also:
read a block (do not close the file): M = textscan(fid, readFormat, nLines)
Process it then save your result (and release memory)
read the next block with the same call: M = textscan(fid, readFormat, nLines)
In this case you wouldn't need the HeaderLines parameter because textscan will resume reading exactly where it stopped.
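A minimal sketch of that pattern (reusing the same readFormat and nLines as above) might be:
fid = fopen(fileName);
while ~feof(fid)
    M = textscan(fid, readFormat, nLines, 'Delimiter', ' '); %// resumes where the previous call stopped
    %// process M here
end
fclose(fid);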
3) Use ftell and fseek
Lastly, you could use fseek to start reading the file at the precise position you want, but in this case I recommend using it in conjunction with ftell.
ftell will return the current position in an open file, so use that to record where you stopped reading last, then use fseek the next time to go straight to that position. Something like:
%// read the FIRST block
fid = fopen(fileName);
M = textscan(fid, readFormat, nLines, 'Delimiter', ' ');
lastPosition = ftell(fid);
fclose(fid);

%// do some stuff

%// then read another block:
fid = fopen(fileName);
fseek(fid, lastPosition, 'bof');
M = textscan(fid, readFormat, nLines, 'Delimiter', ' ');
lastPosition = ftell(fid);
fclose(fid);
%// and so on ...
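The same idea written as a loop might look like this sketch (again, nBlocks is an assumption; compute it however suits your data):
lastPosition = 0;
for N = 1:nBlocks
    fid = fopen(fileName);
    fseek(fid, lastPosition, 'bof');
    M = textscan(fid, readFormat, nLines, 'Delimiter', ' ');
    lastPosition = ftell(fid); %// remember where this block ended
    fclose(fid);
    %// do some stuff with M
end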
I am trying to read a csv file using textscan. The fields are separated with ','. I used the following code, but it only reads one line of data into the matrix W.
I also tried dlmread(), but it got the number of fields wrong.
The file was constructed under Linux, and MATLAB is running under Linux.
file_id = fopen('H:\data\overlapmatrices\cos.mat.10');
W = textscan(file_id, '%f', 'delimiter', ',' , 'EndOfLine', '\r\n');
fclose(file_id);
clear file_id;
you might wanna try csvread, it should do the trick.
or you could always do something dirty like
fid = fopen( filename );
tline = fgetl(fid);
while ischar(tline) % or some other check
    % sscanf(tline... and process the line here
    tline = fgetl(fid);
end
fclose(fid);
The problem could be in how the end of line is represented in the file (see also this article on Wikipedia). While \r\n (the combination of a carriage return and a newline character) is common on Windows, \n (just the newline character) is the standard on Linux and other Unix systems.
But as ben is saying, csvread might be an easier way to read the file.
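For example, a sketch that simply drops the forced '\r\n' end-of-line, so that textscan falls back to its default line-ending handling, would be:
file_id = fopen('H:\data\overlapmatrices\cos.mat.10');
W = textscan(file_id, '%f', 'Delimiter', ','); % no 'EndOfLine' override
fclose(file_id);
W = W{1}; % one long column of values; reshape it if you know the number of columns per row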
Hopefully somebody can help me as I am about to finish a program but I am having trouble with the format that my text file has.
My text file is quite large. I paste here the first 9 lines (note that it is a non-rectangular text file which contains numeric and string data):
AFH,98.3,76.4,D,2,56.3,H
TYU,65.2,K,47,I
UJK,67.5,J
AFH,65.5,56.5,L,8,34.1,P
TYU,56.2,S,97,T
UJK,88.5,J
AFH,32.1,11.4,G,4,45.6,F
TYU,22.8,D,37,U
UJK,78.3,Z
The only data that I need from the entire text file are the lines that start with 'AFH'. I need to read just these specific lines and write them to a new text file, so that I can use that file as input for the rest of my program.
I can't find any way to select just my 'AFH' lines and print them to a new text file.
The easiest way to do it is to read the file line by line using fgetl and ignore the irrelevant lines.
fid = fopen('myData.txt');          % open your text file (filename assumed here)
myText = {};                        % cell array that will collect the matching lines
tline = fgetl(fid);                 % Read first line
while ischar(tline)                 % Keep going till the end of the file
    if strncmp(tline, 'AFH', 3)     % Check if it starts with 'AFH'
        myText{end+1} = tline;      % Save the line in the cell array
    end
    tline = fgetl(fid);             % Read the next line
end
fclose(fid);
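Since you ultimately want the matching lines in a new text file, a short sketch of writing them back out could be (the output file name below is just a placeholder):
fidOut = fopen('AFH_only.txt', 'w');  % hypothetical output file name
fprintf(fidOut, '%s\n', myText{:});   % one saved line per row
fclose(fidOut);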
I want to load a csv file in a matrix using matlab.
I used the following code:
formatSpec = ['%*f', repmat('%f',1,20)];
fid = fopen(filename);
X = textscan(fid, formatSpec, 'Delimiter', ',', 'CollectOutput', 1);
fclose(fid);
X = X{1};
The csv file has 1000 rows and 21 columns.
However, the matrix X generated has 2000 rows and 20 columns.
I tried using different delimiters like '\t' or '\n', but that doesn't change anything.
When I displayed X, I noticed that it displayed the correct csv file but with extra rows of zeros every 2 rows.
I also tried adding the 'HeaderLines' parameters:
X = textscan(fid, formatSpec1, 'Delimiter', '\n', 'CollectOutput', 1, 'HeaderLines', 1);
but this time, the result is an empty matrix.
Am I missing something?
EDIT: @horchler
I could read with no problem the 'test.csv' file.
There is no extra comma at the end of each row. I generated my csv file with a Python script: I read the rows of another csv file, modified them (selecting some of them and doing arithmetic operations on them) and wrote the new rows to another csv file. In order to do this, I converted each element of the first csv file into floats...
New Edit:
Reading the textscan documentation more carefully, I think the problem is that my input file is neither a text file nor a string, but a file containing floats.
EDIT: three lines from the file
0,1,0,0,0,1,0,0,0,1,0,0,0,1,0,0,1,0,0,0,2
1,-0.3834323,-1.92452324171,-1.2453254094,0.43455627857,-0.24571121,0.4340657,1,1,0,0,0,0.3517396202,1,0,0,0.3558122164,0.2936975319,0.4105696144,0,1,0
-0.78676,-1.09767,0.765554578,0.76579043,0.76,1,0,0,323124.235998,1,0,0,0,1,0,0,1,0,0,0,2
How about using regex?
X = [];
fid = fopen(filename);
while 1
    fl = fgetl(fid);
    if ~ischar(fl), break, end
    r = regexp(fl, '([-]*\d+[.]*\d*)', 'match');
    r = r(1:21); % because your 2nd line somehow has 22 elements;
                 % all lines must have the same number of elements or an error will be thrown:
                 % Error: CAT arguments dimensions are not consistent.
    X = [X; r];
end
fclose(fid);
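If you need the values as numbers rather than strings afterwards, note that X here is a cell array of matched character strings, so a conversion step (not part of the original answer) would be something like:
Xnum = str2double(X); % elementwise conversion of the cell array of strings to a numeric matrix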
Using csvread to read a csv file seems a good option. However, I also tend to read csv files with textscan, as files are sometimes badly written; having more options to read them is therefore useful.
I face a reading problem like yours when I think the file is written a certain way but it is actually written another way. To debug it, I use fgetl and print, for each line read, both the output of fgetl and its double version (see the example below). By examining the double version, you may find which character causes the problem.
In your case, I would first look for multiple occurrences of delimiters (',' and '\t') and, in textscan, I would activate the option 'MultipleDelimsAsOne' (while turning off 'CollectOutput'); a sketch of that call is shown after the debugging example below.
fid = fopen(filename);
tline = fgetl(fid);
while ischar(tline)
    disp(tline);
    double(tline)
    pause;
    tline = fgetl(fid);
end
fclose(fid);
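And, as a sketch of the textscan call suggested above (the 21-column '%f' format and the delimiter list are assumptions based on your sample lines):
fid = fopen(filename);
X = textscan(fid, repmat('%f', 1, 21), 'Delimiter', {',', '\t'}, 'MultipleDelimsAsOne', true);
fclose(fid);
X = [X{:}]; % concatenate the 21 column cells into one numeric matrix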
I have a big text file containing data that needs to be extracted and inserted into a new text file. I possibly need to store this data in a cell array or matrix?
But for now, the question is that I am trying to test a smaller dataset, to check if the code below works.
I have code which opens a text file, scans through it line by line, replicates the data, and saves it in another text file called "output.txt".
Problem: it doesn't seem to save the file properly. It just shows an empty array in the text file, like this: "[]". The original text file just contains strings of characters.
% opens the text file and checks it line by line
fid1 = fopen('sample.txt');
tline = fgetl(fid1);
while ischar(tline)
    disp(tline);
    tline = fgetl(fid1);
end
fclose(fid1);

% save the sample.txt file to a new text file
fid = fopen('output.txt', 'w');
fprintf(fid, '%s %s\n', fid1);
fclose(fid);

% view the contents of the file
type exp.txt
Where do I go from here?
It's not a good practice to read an input file by loading all of its contents to memory at once. This way the file size you're able to read is limited by the amount of memory on the machine (or by the amount of memory the OS is willing to allocate to a single process).
Instead, use fopen and its related functions in order to read the file line by line or char by char.
For example:
fid1 = fopen('sample.txt', 'r');
fid = fopen('output.txt', 'w');
tline = fgetl(fid1);
while ischar(tline)
    fprintf(fid, '%s\n', tline);
    tline = fgetl(fid1);
end
fclose(fid1);
fclose(fid);
type output.txt
Of course, if you know in advance that the input file is never going to be large, you can read it all at once using textread or some equivalent function.
Try using textread; it reads data from a text file and stores it as a matrix or a cell array. At the end of the day, I assume you would want the data stored in a variable so you can manipulate it as required. Once you are done manipulating it, open a file using fopen and use fprintf to write the data in the format you want.
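For instance, a minimal sketch along those lines, reusing the file names from the question (note that textread is an older function and newer MATLAB releases may flag it as not recommended):
lines = textread('sample.txt', '%s', 'delimiter', '\n', 'whitespace', ''); % read every line into a cell array of strings
% ... manipulate the contents of "lines" as needed ...
fid = fopen('output.txt', 'w');
fprintf(fid, '%s\n', lines{:}); % write the (possibly modified) lines back out, one per row
fclose(fid);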