I have a file that is n lines long and I want to extract line 10 from the file and read it in as a string. I don't want to import the whole file, I don't want to search for a string in the file, and I don't want to skip over n lines; I just want to read in line 10. I'm having trouble scripting this up. How can I do this?
fileID = fopen('test.txt','r');
fclose(fileID)
If you knew exactly how many bytes into the file line 10 started, you could use fseek to skip to that offset in the file. If you do not know this, then you have no option other than to read line by line using fgetl and ignore lines until you get to line 10.
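If you did know the offset, a minimal fseek sketch might look like this (the offset value here is just a hypothetical placeholder):

fid = fopen('test.txt','r');
offsetOfLine10 = 1234;              % assumption: byte position at which line 10 starts
fseek(fid, offsetOfLine10, 'bof');  % jump straight to that position from the beginning of the file
line10 = fgetl(fid);                % read line 10 as a character vector
fclose(fid);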
Matlab can't find the nth line without linearly scanning for end-of-line characters. Even if a function existed to jump straight to line 10, it would still need to read every line and check for the end-of-line marker. You have to either skip n lines using fgets/fgetl, or use fseek if you know how many bytes precede the line.
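For example, a minimal sketch that skips to line 10 with fgetl, reusing the test.txt name from the question:

fid = fopen('test.txt','r');
for k = 1:9
    fgetl(fid);          % read and discard lines 1 to 9
end
line10 = fgetl(fid);     % line 10, returned as a character vector
fclose(fid);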
Related
I have a long file, and I want to read a specific line, not just the first lines in sequence.
Is there a way to do it without looping over the whole file and counting the lines?
For example, something like files.read that takes an index of which line to read?
Thanks
You can use the predefined method files.get_text_lines().
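If this happens to be MATLAB, a hedged alternative sketch is readlines (available since R2020b), which reads every line into a string array that you can index directly; note that it still scans the whole file internally:

allLines = readlines('myfile.txt');   % hypothetical file name; returns one string per line
lineTen = allLines(10);               % pick out any line by its index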
I have a lot of .txt files (<1000 lines each). The data format is the following (shown in the picture): there are some lines at the beginning that I don't need, then a line with '*', then the lines with data that I need to extract from the file, then again a line with '*' and some comments that I don't need.
Is there any way to do that? I have a lot of such files. The trouble is that in every file the number of lines before the first '*' is different. So, is there any way to read the data in between the two '*' lines? I tried all the functions, but I am a beginner and just cannot come up with the right idea...
This is quite simple with regular expressions:
usefulData = regexp(fileread('abg06.txt'), '(?<=\*).*?(?=\*)', 'match','once');
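If the block between the asterisks is plain numeric data, a possible follow-up sketch to turn it into numbers (the column count is an assumption you would adjust) could be:

vals = sscanf(usefulData, '%f');            % all numbers as one column vector
% If the data has a fixed number of columns, e.g. 3, rebuild the rows like this:
% dataMatrix = reshape(vals, 3, []).';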
I have a text file (5 tab-separated columns) that is being written to by another piece of software. I need to take the readings from the file and do some calculations. Is there a way to read the new lines added to the file and process them, then repeat again for every new set of lines? I don't mind a bit of delay as long as it does the job.
My idea is to start reading the file line by line until the end of file, then read from where it stopped last until the new end of file, and so on.
Can this be done in Matlab? Can I specify the starting line for the file reading? Can I also update the end-of-file point?
To prevent the loop from breaking at the eof point, I think I should set my loop to be controlled by time or anything else, while it should check for eof at the end of every iteration.
I've mostly worked with Matlab, but if there is a better option to use for this purpose (that I can reasonably learn) please feel free to guide me.
Edit 1: I've tried using dlmread as you suggested. When I read the file outside the loop, it reads the file correctly, even when I change R1 and while the other software is updating the text. However, when I put it in a loop I get this error:
Error using dlmread (line 143)
Empty format string is not supported at the end of a file.
Here is my code to read it multiple times:
clear all
x = 0;
R1 = 0; C1 = 0;
while (x < 10)
    M = dlmread('tst_4.txt','\t',R1,C1);
    R1 = length(M);
    x = x + 1;
end
Thanks
You can use dlmread(filename,delimiter,R1,C1), where R1 and C1 are the row and column offsets respectively. By setting the row offset to the last row that you have already read, you read only the file content added after what you have already seen.
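A sketch of a polling loop along those lines, reusing the tst_4.txt name from the question. The key points are to accumulate the row offset with size(M,1) (length returns the longest dimension, not the row count) and to skip past the dlmread call when nothing new has been written, which is what triggers the 'Empty format string' error:

R1 = 0;                                        % rows consumed so far
allData = [];                                  % accumulated readings
for k = 1:10                                   % 10 polls with a 1 s pause, both arbitrary choices
    try
        M = dlmread('tst_4.txt', '\t', R1, 0); % read only the rows added since the last poll
        R1 = R1 + size(M, 1);                  % advance the row offset by the rows just read
        allData = [allData; M];                % keep (or process) the new rows here
    catch
        % dlmread errors on an empty read, so just wait for the next poll
    end
    pause(1);
end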
I am using a Perl script to read in a file, but I'm not sure what encoding the file is in. Basically, my file is a list of book titles, but each book has other info associated with it (author, publication date, etc.), so each book title sits within a discrete chunk of data for that book. I iterate through the file line by line until a line matches the regular expression '/Book Title: (.*)/' and take what's captured in the parentheses. Then I create a separate .txt file whose name is the book title. However, on my Unix server, when I look at the name of the file, it's actually not, for example, 'LordOfTheFlies.txt' but rather 'LordOfTheFlies^M.txt'.
What is this '^M'? Is that a weird end-of-line encoding I'm not taking into account? I tried chomp but it doesn't seem to be working. What is the best file encoding for working with Perl?
It's the additional carriage return character that Windows systems insert before line feed characters (M == 13th letter, hence ASCII 13 is visualised as ^M).
It has nothing to do with file encoding; it's just the line-ending policy biting you. Perl is usually good at handling line-ending characters correctly, but if they occur anywhere other than at the very end of a line you have to deal with them yourself. You can use s/\r// instead of chomp() to get them out.
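A minimal Perl sketch of how that fits into the loop, assuming a file handle $fh and the regex from the question:

while (my $line = <$fh>) {
    $line =~ s/\r//g;          # strip carriage returns (the ^M characters)
    chomp $line;               # strip the trailing newline
    if ($line =~ /Book Title: (.*)/) {
        my $title = $1;
        open my $out, '>', "$title.txt" or die "Cannot create $title.txt: $!";
        close $out;
    }
}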
Before processing the file, you need to know the encoding of the file, which is determined by the producer of the file.
That "^M" is control-M, which is a carriage return, and is not needed in Unix file systems.Looks like the file is created in Unix and transferred to Windows. It can also be added with ftp when text file are transfered as binaries.
Note that chomp only removes the newline character, so the trailing carriage return survives it; following chomp with chop (which removes the last character, whatever it is) will get rid of that leftover "\r". s/\r// is also good.
For your more general question, you might want to use an appropriate module for the file type you have; it will make your life with Perl easier.
I need to open a text file and convert it into a CSV file in Matlab. The first 3 lines of the text file are sentences that need to be omitted. The next 28 lines are numbers that need to make up the first column of the CSV, and then the next 28 lines need to make up the second column.
The text file is called datanal.txt and the output file can be named anything. Any help would be appreciated.
I don't have Matlab to hand to test, but try this. Your input file should be in Matlab's current directory; otherwise, give the full path in the file name.
A = csvread('datanal.txt',3,0);   % skip the first 3 sentence lines (row offset 3, column offset 0)
A = reshape(A,28,2);              % first 28 values become column 1, the next 28 become column 2
csvwrite('output.csv',A)          % write the 28-by-2 matrix out as a CSV file
Well, you can add #'s in front of the first 3 lines and then use load and a reshape. Did you need a fully automated script, or is there only one file? If you're familiar with Matlab at all, there are a bunch of ways to turn that large column vector into a matrix; one fully automated option is sketched below.
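A fully automated sketch, reusing the datanal.txt name and the 3 header lines from the question, reads the numbers with textscan and reshapes them:

fid = fopen('datanal.txt','r');
nums = textscan(fid, '%f', 'HeaderLines', 3);  % skip the 3 sentence lines, read all the numbers
fclose(fid);
A = reshape(nums{1}, 28, 2);                   % first 28 values -> column 1, next 28 -> column 2
csvwrite('output.csv', A);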