How can I import a table in MATLAB with footer rows?

I have a text file with header lines above the main table; below the table there is a blank line and then a second table with summary statistics. Handling the header lines is easy, since most of the standard functions have an option for that (e.g. readtable). The length of the file is not always the same. The issue with readtable is that the footer table has fewer columns than the main table, so the function cannot read those lines and returns an error.
This is the error that I get with readtable:
Error using readtable (line 216)
Reading failed at line 2285. All lines of a text file must have the same number of delimiters. Line 2285 has 0 delimiters, while preceding lines have 24.
Note: readtable detected the following parameters:
'Delimiter', '\t', 'HeaderLines', 21, 'ReadVariableNames', true, 'Format', '%T%f%f%f%q%f%f%f%f%f%f%q%f%f%f%f%f%f%f%f%f%f%f%f%f'
Here's what I've come up with as an alternative solution:
dataStartRow = 23;
numRows = length(readmatrix(filePath, 'NumHeaderLines', 0)); % total line count (length returns the larger dimension)
dataEndRow = numRows - 8;                                    % 8 footer lines, hardcoded
opts = detectImportOptions(filePath);
opts.DataLines = [dataStartRow, dataEndRow];
dataTable = readtable(filePath, opts);
This works, but I have another file with a different number of footer rows, and I don't know how to deal with that without hardcoding the number of footer lines.
I've considered using fgetl and reading lines in one by one to determine when to stop adding to the table, but that seems very inefficient. How can I import this table with an unknown number of table lines and an unknown number of footer lines?

First of all, don't conclude that something 'seems very inefficient' unless you've profiled or timed it and found that it's actually too slow for your requirements.
In this case, though, you can change the action MATLAB takes when an import error occurs by setting this property of your DelimitedTextImportOptions object:
opts = detectImportOptions(filePath);
opts.ImportErrorRule = 'omitrow'; % ignore any rows that don't match the detected pattern
dataTable = readtable(filePath, opts);
If you use this code regularly to read in new data, I'd consider doing some sort of validation on dataTable to make sure it is consistent with what you expect, just in case a new file causes detectImportOptions to give a different result for some reason. For example, if you know that the number of columns and their types should always be the same, you could pin them down explicitly (the options object stores these in its VariableTypes property rather than as a single format string):
opts.VariableTypes = [{'datetime'}, repmat({'double'},1,3), {'char'}, ...
    repmat({'double'},1,6), {'char'}, repmat({'double'},1,13)];
and then check that the resulting table is not empty.
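For instance, a minimal sanity check after the import might look like this (a sketch; the expected column count of 25 just comes from the types listed above):
dataTable = readtable(filePath, opts);
% guard against detectImportOptions mis-detecting a new file's layout
assert(~isempty(dataTable), 'No data rows were imported from: %s', filePath);
assert(width(dataTable) == 25, 'Expected 25 columns, got %d.', width(dataTable));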

Related

Table field digits change when replacing data of different types

Based on this post (Table fields in format in PDF report generator - Matlab), I run the script on a table. In the script, I extract one column, run the function on it, and replace the old column with this new one in an identical, copied table (a copy of the original). Strangely, the modified column data is not stored correctly in the new table. Why? I suspect it is because of the categorical conversion. How can I fix it?
Code:
clc;
clear all;
dataInput = webread('https://people.sc.fsu.edu/~jburkardt/data/csv/hw_25000.csv');
tempCol = (f(table2array(dataInput(:,2))));
newDataInput = dataInput;
newDataInput(:,2) = table(tempCol);
function FivDigsStr = f(x)
% Format to a character array with 5 significant digits, then split at each
% tab. categorical is needed to remove the quotes that appear around char
% values in the output PDF file with newer MATLAB versions
% (e.g. with R2018a there are no quotes in the output file, but they appear with R2020a).
FivDigsStr = categorical(split(sprintf('%0.5G\t',x)));
% Remove the last (<undefined>) value, which is included because of the trailing \t
FivDigsStr = FivDigsStr(1:end-1);
end
You cannot replace values of one type with values of some other type via parenthesis indexing. Instead, just rewrite the whole column with dot indexing, i.e. use:
newDataInput.(newDataInput.Properties.VariableNames{2}) = tempCol;
instead of:
newDataInput(:,2) = table(tempCol);
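As a minimal illustration of the difference (a toy table, not the question's data):
t = table([1;2], [3.5;4.5], 'VariableNames', {'a','b'});
t.b = categorical({'low';'high'});             % dot indexing replaces the whole variable, so its type may change
% t(:,2) = table(categorical({'low';'high'})); % parenthesis indexing would error: it tries to convert in place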

How to import a large dataset and put it in a single column

I want to import a large data set (multiple columns) using the following code, and get it all in a single column rather than in one row (multiple columns). So I applied a transpose operation, but it still doesn't work properly.
clc
clear all
close all
dataX_Real = fopen('dataX_Real_in.txt');
dataX_Real = dataX_Real';
I would really appreciate your support and suggestions. Thank you.
The sample files can be found using the following link.
When using fopen, all you are doing is opening up the file. You aren't reading in the data. What is returned from fopen is actually a file pointer that gives you access to the contents of the file. It doesn't actually read in the contents itself. You would need to use things like fread or fscanf to read in the content from the text data.
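For reference, here is a minimal sketch of that lower-level route, assuming the file contains only whitespace- or newline-separated numbers:
fid = fopen('dataX_Real_in.txt','r');   % fid is a file identifier, not the data
dataX_Real = fscanf(fid,'%f');          % reads every number into one column vector
fclose(fid);
Note that fscanf with '%f' happens to return exactly the single column you were after.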
However, I would recommend you use dlmread instead, as this doesn't require a fopen call to open your file. This will open up the file, read the contents and store it into a variable in one function call:
dataX_Real = dlmread('dataX_Real_in.txt');
By doing the above and using your text file, I get 44825 elements. Here are the first 10 entries of your data:
>> format long;
>> dataX_Real(1:10)
ans =
Columns 1 through 4
-0.307224970000000 0.135961950000000 -1.072544100000000 0.114566020000000
Columns 5 through 8
0.499754310000000 -0.340369000000000 0.470609910000000 1.107567700000000
Columns 9 through 10
-0.295783020000000 -0.089266816000000
Seems to match up with what we see in your text file! However, you said you wanted it as a single column. By default the values are read in on a row basis, so here you can simply transpose:
dataX_Real = dataX_Real.';
Displaying the first 10 elements, we get:
>> dataX_Real = dataX_Real.';
>> dataX_Real(1:10)
ans =
-0.307224970000000
0.135961950000000
-1.072544100000000
0.114566020000000
0.499754310000000
-0.340369000000000
0.470609910000000
1.107567700000000
-0.295783020000000
-0.089266816000000
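One more note: if a file ever produces a genuine 2-D matrix (several rows and several columns) rather than a single row, transposing alone won't give you one column. Colon indexing will, by stacking all elements into a single column in column-major order:
dataX_Real = dataX_Real(:);   % linearizes any matrix into one column vector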

Reading a large amount of data stored in lines from a CSV

I need to read in a lot of data (~10^6 data points) from a *.csv-file.
The data is stored in lines.
I don't know how many data points per line or how many lines there are before I read the file in.
The number of data points per line can be different for each line.
So the *.csv-file could look like this:
x Header
x1,x2
y Header
y1,y2,y3, ...
z Header
z1,z2
...
Right now I read in every line as a string and split it at every comma. This is what my code looks like:
index = 1;
headerLine = textscan(csvFileHandle,'%s',1,'Delimiter','\n');
while ~isempty(headerLine{1})
    dummy = textscan(csvFileHandle,'%s',1,'Delimiter','\n', ...
        'BufSize',2^31 - 1);
    rawData(index) = textscan(dummy{1}{1},'%f','Delimiter',',');
    headerLine = textscan(csvFileHandle,'%s',1,'Delimiter','\n');
    index = index + 1;
end
It's working, but it's pretty slow. Most of the time (~95%) is spent splitting the string with textscan.
I preallocated rawData with sample data, but it made almost no difference to the speed.
Is there a better way than mine to read in something like this?
If not, is there a faster way to split this string?
First suggestion: to read a single line as a string when looping over a file, just use fgetl (returns a nice single string so no faffing with cell arrays).
Also, you might consider (if possible) reading everything in a single go rather than making repeated reads from the file:
output = textscan(fid, '%*s%s','Delimiter','\n'); % skips headers with *
If the file is so big that you can't do everything at once, try to read in blocks (e.g. tackle 1000 lines at a time, parsing data as you go).
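That block-wise pattern could look something like this (blockSize is an arbitrary choice here, and the parsing step is left as a placeholder):
blockSize = 1000;
while ~feof(fid)
    % grab up to blockSize raw lines, headers included
    block = textscan(fid,'%s',blockSize,'Delimiter','\n');
    lines = block{1};
    % ... parse the lines in this block before reading the next one ...
end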
For converting the string, there are the options of str2num or strsplit+str2double, but the only thing I can think of that might be slightly quicker than textscan is sscanf. Since sscanf doesn't accept the delimiter as a separate input, put it in the format string (true, the last value on a line isn't followed by a comma, but sscanf can handle that).
lines = output{1};             % textscan wraps its results in a cell array
data = cell(numel(lines),1);   % preallocate
for n = 1:numel(lines)
    data{n} = sscanf(lines{n},'%f,');
end
Tests with a limited patch of test data suggest sscanf is a bit quicker (but this might depend on machine/version/data sizes).
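If in doubt, time both on a representative line of your own data, e.g. with timeit (the sample string here is made up):
s = '0.12,3.4,5.678,9.01';
tSscanf   = timeit(@() sscanf(s,'%f,'));
tTextscan = timeit(@() textscan(s,'%f','Delimiter',','));
fprintf('sscanf: %g s, textscan: %g s\n', tSscanf, tTextscan);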

How to import a column of numbers from a mixed string/numerical text file

Variations of this question have already been asked several times, for example here. However, I can't seem to get this to work for my data.
I have a text file with 3 columns. The first and third columns are floating point numbers; the middle column is strings. Really, I'm only interested in getting the first column.
Here's what I tried:
filename=fopen('heartbeatn1nn.txt');
A = textscan(filename,'%f','HeaderLines',0);
fclose(filename);
When I do this, A comes out as just a single number: the first element in the column. How do I get the whole column? I've also tried this with the '.tsv' file extension, with the same result.
Also tried:
filename=fopen('heartbeatn1nn.txt');
formatSpec='%f';sizeA=[1 Inf];
A = fscanf(filename,formatSpec,sizeA);
fclose(filename);
with same result.
Could the file size be a problem? I'm not sure how many rows there are, but probably quite a few, since the file size is 1.7 MB.
Assuming the columns in your text file are separated by single whitespace characters your format specification should look like this:
A = textscan(filename,'%f %s %f');
A now contains the complete file content. To obtain the first column:
first_col = A{:,1};
Alternatively, you can tell textscan to skip the unneeded fields with the * option:
first_col = cell2mat( textscan(filename, '%f %*s %*f') );
This returns only the first column.
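Putting it together with the file handling from the question (remember that what fopen returns is a file identifier, which is what textscan expects as its first argument):
fid = fopen('heartbeatn1nn.txt');
first_col = cell2mat( textscan(fid, '%f %*s %*f') );
fclose(fid);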

How to avoid the repeated paragraphs of long txt files being ignored by importdata in MATLAB

I am trying to import all the doubles from a txt file, which has this form:
#25x1 string
#9999x2 double
.
.
.
#(repeat ten times)
However, when I try to use the Import Wizard, only the first
25x1 string
9999x2 double
is successfully loaded; the other 9 repeats are simply ignored.
How can I import all the data? (Does importdata have a maximum length or something?)
Thanks
It's nothing to do with maximum length; importdata is just not set up for the sort of data file you describe. From the help file:
For ASCII files and spreadsheets, importdata expects to find numeric data in a rectangular form (that is, like a matrix). Text headers can appear above or to the left of the numeric data, as follows: column headers or file description text at the top of the file, above the numeric data; row headers to the left of the numeric data.
So what is happening is that the first section of your file, which does match the format importdata expects, is being read, and the rest ignored. Instead of importdata, you'll need to use textscan, in particular, this style:
C = textscan(fileID,formatSpec,N)
fileID is returned from fopen. formatSpec tells textscan what to expect, and N how many times to repeat it. As long as fileID remains open, repeated calls to textscan continue to read the file from wherever the last read action stopped, rather than going back to the start of the file. So we can do this:
fileID = fopen('myfile.txt');
repeats = 10;
for n = 1:repeats
    % read one string, 25 times
    C{n,1} = textscan(fileID,'%s',25);
    % read two floats, 9999 times
    C{n,2} = textscan(fileID,'%f %f',9999);
end
You can then extract your numerical data from the cell array (if you need it in one block, you may want to try the 'CollectOutput',1 option, as sketched below).
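A sketch of that extraction step (the counts 25 and 9999 come from the file structure described in the question; 'myfile.txt' is the placeholder name from above):
fileID = fopen('myfile.txt');
repeats = 10;
blocks = cell(repeats,1);
for n = 1:repeats
    textscan(fileID,'%s',25);                               % skip the 25 header strings
    tmp = textscan(fileID,'%f %f',9999,'CollectOutput',1);  % one 9999x2 double, wrapped in a cell
    blocks{n} = tmp{1};
end
fclose(fileID);
allData = vertcat(blocks{:});                               % all ten blocks stacked: 99990x2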