I'm trying to import a fixed-width txt file using the textscan function. The file is 80 characters wide, with no delimiter, and the desired 12 columns have different character widths. I have tried specifying the width of each string (i.e. 12 strings, each a different number of characters, adding up to 80), but as soon as there is a space (because certain values are missing) MATLAB interprets it as my delimiter and messes up the format.
data= textscan(fileID, '%5s %7s %1s %1s %1s %17s %12s %12s %10s %5s %6s %3s');
I can work around this using Excel, but this seems like a bad solution. Is there any way of doing this in MATLAB, perhaps with a different function than textscan, or by making textscan ignore delimiters and just go by the width of each string?
You need to change the value of the delimiter and white space characters to empty:
format_string = '%5s %7s %1s %1s %1s %17s %12s %12s %10s %5s %6s %3s';
C = textscan(fid, format_string, 'delimiter', '', 'whitespace', '');
That way MATLAB will treat each character, including spaces, as valid characters.
Hmmm, I have experienced the same problem with textscan. Well, here is a long way around it (it is by no means the best solution, but it should work):
fid = fopen('txtfile.txt','rt'); %// open the file
a = fscanf(fid, '%c'); %// scan the whole thing into chars
fclose(fid);
for r = 0:NumberOfRowsInUrData-1 %// Now the loop... the number of rows in your data can also be calculated as size(a,2)/20
b(r+1,:) = a(1+20*r:20*(r+1)); %// this will correctly index everything
end
The good thing is that everything is now in the matrix b, so you can simply index your chars like string1 = b(:,1:5) and it will output everything in a nice matrix.
The downside, of course, is the for loop, which I think you should be able to replace with something like cellfun.
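In fact the loop can be replaced with a single reshape (a sketch, assuming the same 20-character lines as in the example above):

```matlab
% fscanf with '%c' also reads the newline characters, so strip those first,
% then fold the remaining characters into fixed-width rows.
a(a == sprintf('\n')) = [];   %// remove line breaks
b = reshape(a, 20, []).';     %// one 20-character row per line
```

reshape fills column-wise, so reshaping to 20 rows and transposing gives the same b as the loop, without growing it one row at a time.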
What I have
A txt file like:
D091B
E7E1F
20823
...
What I need
To read them and store them as char, just as they are in the file: N lines (I don't know how many), each with its 5 characters (5 columns).
What have I tried
fichero = fopen('PS.txt','r');
sizeDatos = [[] 5]; % Several Options, read below
resultados=fscanf(fichero, '%s', sizeDatos); % Here too
fclose(fichero);
I've tried the snippet above to read my txt file, but I couldn't get it to work. The closest I've come is using:
sizeDatos = [1 Inf];
So I got all my hex characters into an array, with no spaces.
As you can see, I've tried several options for fscanf's size parameter, as well as trying to tell the format string to recognize new lines, using \n for example. None of them have worked for me.
Any idea about how I can get it? I've read the fscanf documentation page, but it didn't inspire me to try anything different.
One possible solution is to use textscan, which returns the lines as a cell array.
fileId = fopen('PS.txt');
C = textscan(fileId, '%s');
Now to show the content of cell you can use
celldisp(C)
Or you can convert it to other types.
Don't forget to close your file after using it.
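If you want the lines as plain chars rather than a cell array of strings, one conversion (a sketch, assuming the PS.txt file from the question and that every line really is 5 characters) is char:

```matlab
fileId = fopen('PS.txt');
C = textscan(fileId, '%s');
fclose(fileId);
M = char(C{1});   % N-by-5 char array, one row per line
% M(:,1:5) now indexes the 5 columns directly, e.g. M(1,:) is 'D091B'
```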
On http://www.mathworks.com/help/matlab/ref/textscan.html, I can see the suggestion:
fileID = fopen('data3.csv');
C = textscan(fileID,'%f %f %f %f','Delimiter',',',...
'MultipleDelimsAsOne',1);
fclose(fileID);
celldisp(C)
I'm not sure whether textscan also handles .txt files, but I can't really write out hundreds of '%f's. Is there a way to do this by giving textscan the dimensions of the matrix in my .txt file? Thanks.
If you have a file that is only numbers, and the text is comma separated (.csv), then you can use csvread:
num_headerlines = 1
C = csvread('C:\users\smith\Documents\data3.csv', num_headerlines, 0)
The last two arguments here are the row and column at which to begin reading and, unlike almost everything else in MATLAB, they are 0-indexed: if you want to start at the first column, you pass 0, and if you want to start at the second row, you pass 1. This will read as many columns as you have, without needing a long format specifier.
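If you would still rather use textscan (for example because some columns are non-numeric), you don't have to type out hundreds of '%f's by hand; the format string can be built with repmat. A sketch, where ncols (the number of columns in your file) is assumed to be known:

```matlab
ncols = 100;                     % assumed column count for your file
fmt = repmat('%f', 1, ncols);    % builds '%f%f%f...' repeated ncols times
fileID = fopen('data3.csv');
C = textscan(fileID, fmt, 'Delimiter', ',', 'MultipleDelimsAsOne', 1);
fclose(fileID);
```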
I have a text file in Matlab that contains comment strings as well as variables and I am trying to figure out the best way to read this file and give an output as different variables that can easily be plugged into equations later on.
The text file looks something like this:
#Comments
2
#Comments
#Comments
1.1 2.55 4.32
1.9 2.76 8.95
1 3.65 9.12
I want an output so that each number is given a variable and the strings with the #s in front are ignored.
ex output:
i=2
a1=1.1
b1=2.55
c1=4.32
a2=1.9
b2=2.76
c2=8.95
a3=1
b3=3.65
c3=9.12
And these variables will be stored for later use. Thanks in advance to anyone who can help.
If you use textscan, you can set CommentStyle to '#' - this will ignore the lines starting with a #. Looking at your data, you should also set your delimiter to a space. As some of your lines seem to be shorter than others, you should probably set the EmptyValue parameter - this will replace any empty fields with a flag of your choosing, for example Inf, NaN, or just zero. The command will look something like this:
fid = fopen(filename)
data = textscan(fid, '%f%f%f', 'Delimiter', ' ', 'CommentStyle', '#', 'EmptyValue', NaN)
This will put your data into a cell array - I am not sure how you could elegantly assign each value to a completely different variable.
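For what it's worth, the columns come back as the cells data{1}, data{2}, data{3}, so one low-tech alternative to separate variables per number (a sketch) is:

```matlab
a = data{1};   % first column as a numeric vector
b = data{2};
c = data{3};
% a(1), b(1), c(1) are then the values of the first parsed row, and so on;
% indexing these vectors in equations avoids creating a1, b1, c1, ... by hand.
```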
I have a cell array with nine columns (the first eight text and the ninth numbers) and thousands of rows that I would like to export to a csv file.
I have tried to follow the suggestions provided in similar questions and I take that the best way to proceed is to use the fprintf function:
fid = fopen(outputfile, 'w')
fprintf(fid, ???, variable{:,:})
fclose(fid)
Nevertheless, I cannot figure out what I am supposed to write in the middle. I have tried several combinations of "%s", "\n" and "\t", but none of them seems to work. Ideally, I would like to separate the columns with ";", "," or a tab, and to make sure that the decimals of the values are not lost.
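A minimal sketch of what could go in the middle, assuming ';' as the separator and six decimals for the ninth column: since variable{:,:} expands the whole cell array column-by-column (which scrambles the rows), looping row by row keeps the columns aligned.

```matlab
fid = fopen(outputfile, 'w');
for k = 1:size(variable, 1)
    % eight strings and one number per row, separated by ';'
    fprintf(fid, '%s;%s;%s;%s;%s;%s;%s;%s;%.6f\n', variable{k, :});
end
fclose(fid);
```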
I have a text file which has 4 columns, each column having 65536 data points. Every element in the row is separated by a comma. For example:
X,Y,Z,AU
4010.0,3210.0,-440.0,0.0
4010.0,3210.0,-420.0,0.0
etc.
So, I have 65536 rows, each row having 4 data values as shown above. I want to convert it into a matrix. I tried importing data from the text file to an excel file, because that way its easy to create a matrix, but I lost more than half the data.
If all the entries in your file are numeric (after removing the header line), you can simply use a = load('file.txt'). It should create a 65536x4 matrix a. It is even easier than csvread.
Have you ever tried using 'importdata'?
The only parameters you need are the file name and the delimiter.
>> tmp_data = importdata('your_file.txt',',')
tmp_data =
data: [2x4 double]
textdata: {'X' 'Y' 'Z' 'AU'}
colheaders: {'X' 'Y' 'Z' 'AU'}
>> tmp_data.data
ans =
4010 3210 -440 0
4010 3210 -420 0
>> tmp_data.textdata
ans =
'X' 'Y' 'Z' 'AU'
Instead of messing with Excel, you should be able to read the text file directly into MATLAB (using the functions FOPEN, FGETL, FSCANF, and FCLOSE):
fid = fopen('file.dat','rt'); %# Open the data file
headerChars = fgetl(fid); %# Read the first line of characters
data = fscanf(fid,'%f,%f,%f,%f',[4 inf]).'; %# Read the data into a
                                            %# 65536-by-4 matrix
fclose(fid); %# Close the data file
The easiest way to do it would be to use MATLAB's csvread function.
There is also this tool which reads CSV files.
You could do it yourself without too much difficulty either: Just loop over each line in the file and split it on commas and put it in your array.
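That do-it-yourself loop could look something like this (a sketch, assuming the header line and four columns from the question; strsplit needs R2013a or later):

```matlab
fid = fopen('file.txt', 'rt');
fgetl(fid);                        % skip the 'X,Y,Z,AU' header line
data = zeros(0, 4);
line = fgetl(fid);
while ischar(line)
    data(end+1, :) = str2double(strsplit(line, ','));  %#ok<AGROW>
    line = fgetl(fid);
end
fclose(fid);
```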
Suggest you familiarize yourself with dlmread and textscan.
dlmread is like csvread but because it can handle any delimiter (tab, space, etc), I tend to use it rather than csvread.
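For the file described above, that would be something like (a sketch; the trailing 1, 0 are the 0-indexed row and column offsets, used here to skip the single header row):

```matlab
M = dlmread('file.txt', ',', 1, 0);   % 65536-by-4 numeric matrix
```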
textscan is the real workhorse: lots of options, plus it works on open files and is a little more robust at handling "bad" input (e.g. non-numeric data in the file). It can be used like fscanf in gnovice's suggestion, but I think it is faster (don't quote me on that though).