Keeping whitespace in csv headers (Matlab)

So I'm reading in a .csv file, and it all works as I want bar one thing. The headers of the data have spaces, which I want later for displaying data to the user. However, these spaces get stripped when the csv file is read in via readtable (as they get used as the variable names). Again, no problem with this per se, but I still need the unmodified strings as well.
Two additional notes:
I'm happy for the strings to be stored separately from the main table if that makes things easier.
The actual .csv file I'm reading in is reasonably large (about 2 million data points) so from a computational cost side of things, the less reading of the file the better
Example read in code:
File = 'example.csv';
Import_Options = detectImportOptions( File, 'NumHeaderLines', 0 );
Data = readtable( File, Import_Options )
Example csv file (example.csv):
"this","is","an","example test"
"1","1","2","3"
"3","1","4","1"
"hot","hot","cold","hot"

You can simply read the first line with fgetl, thus grabbing the headers, before reading the entire file with readtable.
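A minimal sketch of that approach, assuming the example.csv layout shown above (the variable names here are illustrative):
File = 'example.csv';
% Grab the raw header line before readtable sanitises it
fid = fopen(File, 'r');
Header_Line = fgetl(fid);                    % e.g. '"this","is","an","example test"'
fclose(fid);
% Split on commas and strip the quotes to keep the original header
% strings, spaces included
Raw_Headers = strrep(strsplit(Header_Line, ','), '"', '');
% Read the data as before; the table's VariableNames will still be
% sanitised, but Raw_Headers keeps the unmodified text for display
Data = readtable(File);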

Related

Extracting data from complex output text file using perl and placing into new text file

The complete output text file is hundreds of lines long, with relevant nuclear cross sections and a plethora of other data that I do not need for this particular problem. I am trying to extract the columns of data under "BURNUP" and the first "K-INF" from the file I attached, and place them into a separate file. I am a newbie and have a similar perl script from a professor. I have tried to adapt it to the information I am looking for, but the only result I am getting is the two print statements. Any suggestions?

How to read data from txt between 2 lines with the same symbol?

I have a lot of .txt files (<1000 lines each). The data format is the following (see the picture): there are some lines at the beginning that I don't need, then a line containing '*', then the lines with data that I need to extract from the file, then again a line with '*' and some comments that I don't need.
Is there any way to do that? I have a lot of such files. The problem is that in every file the number of lines before the first '*' is different. So, is there any way to read the data in between the two '*' lines? I tried all the functions, but I am a beginner and just cannot come up with the right idea...
This is quite simple with regular expressions:
usefulData = regexp(fileread('abg06.txt'), '(?<=\*).*?(?=\*)', 'match','once');
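If the extracted block is purely numeric, a possible follow-up (just a sketch; the column count below is an assumption) is to convert it with sscanf:
values = sscanf(usefulData, '%f');        % all numbers as one column vector
% values = reshape(values, 3, []).';      % if, say, the data has 3 columns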

How can I get around MATLAB's specifications of csvread?

I am trying to create a program that takes in multiple csv files. However, they include both strings and numbers.
I have csv files that looks something like this:
"Project","Task","Value Type", "Value"
"105", "06.05.02", "cost", "3434"
"105", "06.05.02", "obligation", "3434"
"106", "06.05.02", "cost", "500"
"106", "06.05.02", "obligation", "500"
The number of columns is fixed (there are actually 23, I only listed four for readability), but each csv has a different number of rows. If I save it as an xls file, it works perfectly. However, this takes too long if there are a lot of files, and the end user doesn't want to deal with that.
Similar questions suggested textread, but the first row would be
textread('filename.csv', '%s%s%s%s', 'delimiter', ',');
while the rest of the file is
textread('filename.csv', '%f%s%s%f', 'delimiter', ',');
In comparison to the simplicity of having the numbers, strings, and raw data in corresponding arrays using xlsread, having 23 different arrays seems a bit complicated.
What would be the best solution here?
The files are large, but not large enough that I am worried about efficiency.
Is there a way to change the extension of the files from .csv to .xls from within my program? (I looked this up as well, but couldn't find anything that worked.) I would really like to use xlsread, but if this isn't possible, is there a way to have textread read the first row of a csv with certain variable types (%s%s%s%s) and then read the rest of the rows with a different variable type (%f%s%s%f)?
xlsread does read csv files, so there should be no need to convert. Just read directly (tested on 2013b with a small file of mixed numeric and string data):
[num, text, alldata] = xlsread('test.csv')
Note: this apparently only works on Windows machines. If just changing the extension makes xlsread work for you, you can rename with movefile:
oldfile = 'somefile.csv';
ext = '.xls';
[~, name, ~] = fileparts(oldfile);
newfile = [name, ext];
movefile(oldfile, newfile);
If you have many files, this would go in a loop and oldfile would be taken from the output of something like a dir or ls command giving you all the .csv files.
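A minimal sketch of that loop, assuming all the .csv files sit in the current folder:
csvFiles = dir('*.csv');
for k = 1:numel(csvFiles)
    oldName = csvFiles(k).name;
    [~, base, ~] = fileparts(oldName);
    movefile(oldName, [base, '.xls']);   % rename in place, keeping the base name
end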
Incidentally, while you might see it mentioned in older questions, textread is now not recommended; use textscan instead for cases where you need more complexity/control over the input. It can be very powerful, but for this case it is probably like cracking a nut with a sledgehammer.
If you don't need the headers, for example, you can take the whole file in one line with:
C = textread('filename.csv', '%f%s%s%f', 'Delimiter', ',','HeaderLines',1);
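For comparison, a rough textscan equivalent (an untested sketch matching the four-column example above; because the numeric fields in the sample are double-quoted, everything is read with %q and converted afterwards):
fid = fopen('filename.csv');
C = textscan(fid, '%q%q%q%q', 'Delimiter', ',', 'HeaderLines', 1);
fclose(fid);
project = str2double(C{1});   % first column back to numbers
value   = str2double(C{4});   % last column back to numbers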

MATLAB - Stitch Together Multiple Files

I am new to MATLAB programming and some of the syntax escapes me. So I need a little help. Plus I need some complex looping ideas.
Here's the breakdown of what I have:
12 separate .dat files, each titled something like output_1_x.dat, output_2_x.dat, etc.
each file is actually one piece of a whole that was separated and processed
each .dat file is approx. 3.9 GB
Here's what I need to do:
create a single file containing all the data from each separate file, i.e. I need to recreate the original file.
call this complete output file something like output_final.dat
it has to be done in MATLAB, there are no other alternatives (actually there may be; see note below)
What is implied:
I will have to fread each 3.9 GB file in chunks or packets, probably 100 MB at a time (using an embedded loop?)
these packets will have to be read then written sequentially
after one file is read then written into output_final.dat, the next file is automatically read & written (the master loop).
Well, that's pretty much it. I did a search for 'merging multiple files' and found this. That isn't exactly what I need to do...I don't need to take part of a file, or data from files, and write it to a new one. I'm simply...concatenating...? This would be simple in Java or Perl, but I only have MATLAB as a tool.
Note: I am however running KDE in OpenSUSE on a pretty powerful box. Maybe someone who is also an expert in terminal knows a command/script to do this from the kernel?
So on this site we would usually point you to whathaveyoutried.com, but this question is well phrased.
I won't write the code, but I will explain how I would do it. First, I am a bit confused about why you need to fread the file. Are you just appending one file onto the end of another?
You can actually use unix commands to achieve what you want:
files = dir('*.dat');
for i = 1:length(files)
    cmd = sprintf('cat %s >> output_final.dat.temp', files(i).name);
    unix(cmd);
end
That code should loop through all the files and pipe their contents into output_final.dat.temp (then just rename it; the .temp suffix is there so the output file doesn't match '*.dat' and get appended to itself).
But if you really want to use fread because you want to parse the lines in some manner then you can use the same process:
files = dir('*.dat');
fidF = fopen('output_final.dat', 'w');
for i = 1:length(files)
    fid = fopen(files(i).name);
    while ~feof(fid)
        thisLine = fgetl(fid);             % you may choose to parse the line in some manner here
        fprintf(fidF, '%s\n', thisLine);   % fgetl strips the newline, so add it back
    end
    fclose(fid);
end
fclose(fidF);
Just remember, if you are not parsing the lines this will take much much longer.
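If the files are binary and you only need a byte-for-byte concatenation, a chunked fread/fwrite copy is another option. A rough sketch, assuming the output_N_x.dat naming from the question and a 100 MB chunk size:
chunkSize = 100 * 1024^2;                          % 100 MB per read
outFid = fopen('output_final.dat', 'w');
for k = 1:12                                       % 12 parts, per the question
    inFid = fopen(sprintf('output_%d_x.dat', k), 'r');
    while ~feof(inFid)
        chunk = fread(inFid, chunkSize, '*uint8'); % raw bytes, no parsing
        fwrite(outFid, chunk, 'uint8');
    end
    fclose(inFid);
end
fclose(outFid);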
Hope this helps.
I suggest using matlab.io.MatFile objects (created with matfile) on two of the files:
matObj1 = matfile('datafile1.mat')
matObj2 = matfile('datafile2.mat')
This does not load any data into memory. Then you can use the objects' methods to sequentially save a variable from one file to another.
matObj1.varName = matObj2.varName
You can get all the variables in one file with fieldnames(matObj1) and loop through to copy contents from one file to another. You can then clear some space by removing the copied fields. Or you can use a slightly riskier procedure and directly move the data:
matObj1.varName = rmfield(matObj2,'varName')
Just a disclaimer: haven't tried it, use at own risk.
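For the copy-then-clear route described above, a rough (equally untested) sketch, using who to list the variables stored in the source file:
matObj1 = matfile('datafile1.mat', 'Writable', true);
matObj2 = matfile('datafile2.mat');
varNames = who(matObj2);                  % names of variables in datafile2.mat
for k = 1:numel(varNames)
    % note: each variable is loaded fully while it is being copied
    matObj1.(varNames{k}) = matObj2.(varNames{k});
end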

Reading large csv files with strings containing commas as one field

I have a large .csv file (~26000 rows). I want to be able to read it into matlab. Another problem is that it contains a collection of strings delimited by commas in one of the fields.
I'm having trouble reading it. I tried stuff like tdfread, which won't work here. Any tricks with textscan I should be aware of?
Is there any other way?
I'm not sure what is generating your CSV file but that is your problem.
The point of a CSV file, is that the file itself designates separation of fields. If the text of the CSV contains commas, then nothing you can do will help you. How would ANY program know when the text in a single field contains commas, or when that comma is a field delimiter?
Proper CSV would have a text qualifier. Some generators/readers give you the option to use one. The standard text qualifier is a " (quote). It's changeable, though, because your text may contain those, too.
Again, it's all about generating proper CSV content.
There's a chance that xlsread won't give you the answer you expect -- do the strings always appear in the same columns, for example? I think (as everyone else seems to :-) that it would be more robust to just use
fid = fopen('yourfile.csv');
and then either textscan
t = textscan(fid, '%s', 'Delimiter', sprintf('\n'));
t = t{1};
or just fgetl (the example in the help is perfect).
After that you can do some line-by-line processing -- using textscan again on the text content of each line, for example, is a nice, quick way to get a cell-array that will allow fast analysis of each line.
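A sketch of that line-by-line approach (the file name is a placeholder; %q keeps double-quoted fields, embedded commas and all, together):
fid = fopen('yourfile.csv');
thisLine = fgetl(fid);
while ischar(thisLine)
    fields = textscan(thisLine, '%q', 'Delimiter', ',');
    fields = fields{1};              % cell array of the fields on this line
    % ... analyse the fields here ...
    thisLine = fgetl(fid);
end
fclose(fid);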
You have a problem because you're reading it in as a .csv, and you have commas within your data. You can open it in Excel and manipulate the data, possibly stripping the unwanted commas with Excel formulas. I work with .csv files for DB imports quite a bit. I imagine MATLAB has a similar rule, which is: no commas in your data.
Can you tell us more about your data? Are there commas throughout, or just in one column? Maybe you can read it in as tab delimited?
Are you using a Unix system? The reason I am asking is that you could use a command-line function such as sed and regular expressions to clean those data files before you pass them into Matlab. Here is a link that explains how to do exactly what you are looking for.
Since, as others have observed, your file is CSV with commas inside what you think of as a single field, it's going to be hard to persuade Matlab that that really is only one field. I think your best strategy is going to be to read one line at a time, into a string acting as a buffer, and to translate it, field-by-field, into the variables or other data structures that you want. Since Matlab has in-built regular expression capabilities this shouldn't be too hard.
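As one illustration of that regexp-based splitting (a sketch only, assuming fields are double-quoted whenever they contain commas):
oneLine = '"a, b",1,"c",2';                                % example buffer
fields = regexp(oneLine, '(?:"[^"]*"|[^,])+', 'match');    % quoted runs or plain chars
fields = strrep(fields, '"', '');                          % drop the surrounding quotes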
And, as others have already suggested, posting a sample of your data would help us to help you.
One easy solution is:
folder = 'C:\folder1\folder2\';
file = 'data.csv';
data = dataset('xlsfile', fullfile(folder, file));
Of course you could also do the following:
[file, folder] = uigetfile('C:\folder1\folder2\*.csv');
data = dataset('xlsfile', fullfile(folder, file));
Now you will have loaded the data as a dataset. An easy way to get column 1, for example, is
double(data(:,1))