MATLAB's Import Tool recognizes a column as numbers but generates %s in formatSpec

I use Matlab's Import Tool to generate a script that will take care of importing several CSV files with the same columns.
The Import Tool correctly recognizes the type of each column of my CSV file.
However, in the generated script, the same columns are cast as strings (%s = string).
Any idea why?
Surprisingly, it works fine with CSV files that have fewer columns (it works with 70-column CSV files, but the issue arises with 120-column CSV files). Here is one example of a CSV file that triggers the issue.
I use R2014b x64 with Windows 7 SP1 x64 Ultimate.

This is happening because one of the columns in your file contains a mix of numbers and text. The Import Tool predicts that you will want to extract the numbers from this field, so it labels the column as 'NUMBER'. However, a standard textscan format cannot do that directly, so the Import Tool must generate code that reads all of the data in as text and performs the numeric conversion afterwards. The Import Tool does this to help avoid errors from textscan.
The result of running the generated code is still numeric, as is shown in the Import Tool.
The specific column is labeled SEGMENT_ID in your example file. It contains data like:
l8K3ziItQ2pRFQ8
L79Y0zA8xF7fVTA
JYYqCCwRrigaUmD
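A rough sketch of that pattern (not the actual generated code; the file name and two-column layout here are assumptions) is to read the mixed column as text and convert only the genuinely numeric columns afterwards:
fid = fopen('example.csv');                              % placeholder file name
raw = textscan(fid, '%s%s', 'Delimiter', ',', 'HeaderLines', 1);
fclose(fid);
segmentId = raw{1};                                      % values like 'l8K3ziItQ2pRFQ8' stay as text
numericCol = str2double(raw{2});                         % a truly numeric column converts cleanly; anything else becomes NaN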

Related

Import mixed data from CSV file into MATLAB

I have a CSV file with data inside like:
10_09xyz,xy1,11,PX,....
...and I want to import it into MATLAB.
Is it possible to import the mixed data like 10_09xyz? Which format do I have to use?
I tried the following, but it failed:
formatSpec = '%C%C%f%s%f';
T = readtable('XYZ.csv','Delimiter',',','Format',formatSpec);
The following warning appears:
"Variable names were modified to make them valid MATLAB identifiers."
Thanks for your help.
You can at least do it manually: open the folder in MATLAB and double-click the CSV file to launch the Import Tool.
Please check my screenshot from MATLAB.
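For a programmatic route, here is a minimal sketch based on the formatSpec from the question (with its missing opening quote restored; the file name and columns are taken from the question):
formatSpec = '%C%C%f%s%f';                          % %C = categorical, %f = double, %s = text
T = readtable('XYZ.csv', 'Delimiter', ',', 'Format', formatSpec);
% The warning only means that headers which are not valid MATLAB identifiers
% were renamed; the imported data itself is unchanged.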

Is it possible to view data as it is being imported in Teradata?

I'm trying to import data from a txt file and keep getting a 'Wrong number of data values in row xxx' error. Looking at the text file, everything looks fine but I can't tell what/how Teradata is interpreting it.
So is there a way to view or preview the data from Teradata's perspective? I tried running a SELECT statement, but since the import doesn't finish, nothing is even imported. Which brings me to my next question, is there a way to limit an external-file import to a certain # of rows? Like import just the first 50 rows from the text file?
May I suggest you obtain a copy of Notepad++ or Sublime Text, both of which are free to download, to view the text file. This will allow you to open the file and identify what in the records is causing trouble when loading it. You will be able to display non-printable characters and use advanced search techniques to traverse the file looking for problems with the data.
It is possible there is an embedded carriage return, line feed, or other non-printable character that is being interpreted during the import and generating this error.

How can I prevent MATLAB from automatically modifying .dat file variable names upon import using the dataset function?

So, I currently have a MATLAB script that does stuff with data and then, using a template .dat file, creates about 20 more .dat files with only a single column being changed (I've been using the dataset and export functions to read and write the files, respectively). The program that will use the .dat files, ExperimentBuilder, requires that the headers have names that start with dollar signs (for example: $image). However, when I use the dataset function in MATLAB to import the template file, I get this warning:
Warning: Variable names were modified to make them valid MATLAB identifiers.
It then replaces all the dollar signs in the variable names with x_ (for example, x_image), which would be fine if it would let me change them back to the $ format. But whenever I try to do that using set, it just gives me this warning again and reverts them to x_, which is unreadable by ExperimentBuilder.
I know I could just do a quick copy and paste on each file with the original headings, but I would like to know if there's a way to fix this problem in the actual code.
Thanks!
The thing is, the MATLAB dataset class uses the header names to provide access to the columns by name; that is why the header names must be valid identifiers (isvarname() states that a name must start with a letter and contain only valid alphanumeric characters [a-zA-Z0-9_]).
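A quick check in the command window shows this:
isvarname('$image')    % false: '$' is not allowed in a MATLAB identifier
isvarname('x_image')   % true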
The easiest solution would be manually write the header line yourself (including names starting with $), while separately exporting the data without the headers:
export(ds, ..., 'WriteVarNames',false)
(Note that dataset.export overwrites files by default, so you'll have to export first, then prepend the header line at the beginning of the file. Or, if you're comfortable modifying MATLAB's own functions, you can edit dataset.export and change the fopen mode from overwrite 'wt' to append 'at'.)
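A minimal sketch of the first approach, assuming a dataset ds and hypothetical header and file names (adjust them to your experiment):
export(ds, 'File', 'body.dat', 'Delimiter', '\t', 'WriteVarNames', false);   % data only, no header line
body = fileread('body.dat');                          % read the data-only file back in
fid = fopen('trial.dat', 'w');                        % final file read by ExperimentBuilder
fprintf(fid, '$image\t$condition\t$duration\n');      % hand-written header keeping the $ names (example names)
fprintf(fid, '%s', body);
fclose(fid);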

Convert dataset of .mat format to .csv octave/matlab

There are datasets in .mat format on this site: http://www.cs.nyu.edu/~roweis/data.html
I want to change the format to .csv.
Can someone tell me how to change the format to create the .csv file.
Thanks!
Suppose that the .mat files from the site are available already. In the command window in Matlab, you may write, for example:
load('C:\Users\YourUserName\Downloads\mnist_all.mat');
to load the .mat file; the result should be a set of matrices test0, test1, ..., train0, train1, ... created in your workspace, which you want saved as CSV files. Because they have different sizes, you need to save one CSV per variable, e.g. (also in the command window):
csvwrite('C:\Users\YourUserName\Downloads\mnist_test0.csv', test0);
Repeat the command for each variable, and do not forget to change also the name of the output file to avoid overwriting.
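If you would rather not type one csvwrite call per variable, a small loop can do it; this is only a sketch and assumes the same downloaded file and output folder as above:
S = load('C:\Users\YourUserName\Downloads\mnist_all.mat');    % load everything into a struct
names = fieldnames(S);
for k = 1:numel(names)
    % one CSV per variable, named after the variable itself
    csvwrite(['C:\Users\YourUserName\Downloads\mnist_' names{k} '.csv'], S.(names{k}));
end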
Did you try the csvwrite function in Matlab?
Just load your .mat files with the load function and then write them with csvwrite!
I do not have a Matlab license so I installed GNU Octave 4.2.1 (2017) on Windows 10 (thank you to John W. Eaton and others). I was not fully successful using csvwrite, so I used the following workaround. (BTW, I am totally incompetent in the Octave world; csvwrite worked for simple data structures.)
In the Command Window I used the following two commands
load myfile.mat
save("-text","myfile.txt","variablename")
When the "myfile.mat" is loaded, the variable names for the data vectors loaded are displayed in the workspace window. This is the name(s) to use in the save command. Some .mat files will load several data structures.
The "-text" option is the default, so you may not need to include this option in the command.
The output file lists the .mat file contents in text format as a single column (of potentially sequential variables). It should be easy to use your text editor to massage this data into the original matrix structure for use in whatever app you are comfortable with.
Had a similar issue. Needed to convert a series of .mat files that had two columns of numerical data into standard data files (ascii text). Note that I don't really ever use csv, but everything here could be adapted by using csvwrite instead of the standard save.
Using Octave 4.2.1 ....
load myfile.mat
LI = [L, I] ## L and I are column vectors representing my data
save myfile.txt LI
Note that L and I appear to be default variable names chosen by Octave for the two column vectors in my original data file. A script that iterates over all files with the .mat extension in my directory would be ideal, but this got the job done. It saves the data as two space-separated columns.
*** Update
The following script works on Octave 4.2.1 for a series of data files with the .mat extension that are in the same directory. It will iterate over them and write the data out to text files with the same name but with the extension .dat. Note that this is not efficient, so if you have a lot of files or they are large, it can take a while to run. I would suggest running it from the command line with octave mat2dat.m so you can actually watch it go.
I make no guarantees that this will work for you, but it did for me. I also am NOT proficient in Octave or Matlab, so I'm sure a better solution exists.
# mat2dat.m
dirlist = glob("*.mat")
for i = 1:length(dirlist)
  filename = dirlist{i,1}
  load(filename, "L", "I")
  LI = [L, I]
  tmpname = filename(1:length(filename)-3)
  txtname = strcat(tmpname, 'dat')
  save(txtname, "LI")
end

MATLAB: How to import multiple CSV files with mixed data types

I have just started learning MATLAB and am having difficulty importing CSV files into a 2-D array.
Here is a sample CSV for my needs (all the CSV files are in the same format, with fixed columns):
Date, Code, Number....
2012/1/1, 00020.x1, 10
2012/1/2, 00203.x1, 0300
...
As csvread() only works with purely numeric data, should I import the numeric data and the text data separately, or is there a quick way to import multiple CSV files with mixed data types?
Thanks a lot!!
What you're looking for may be the function xlsread.
It opens any file recognized by Excel and automatically separates text data from numerical data.
The problem is that the default list separator, at least on my computer, is ; and not , (at least for my locale here in Brazil). So xlsread will try to separate the fields in the file with a ;, and not a comma as you'd like.
To change that you have to change your system locale so that the comma is the list separator. If you feel like it, on Windows Vista, click Start, Control Panel, Regional and Language Options, Customize this format, and change the List Separator from ';' to ','. On other Windows versions the process should be almost the same.
After doing that, typing:
[num, txt, all] = xlsread('your_file.csv');
will return something like:
num =
    10
   300
txt =
    '01/01/2012'    ' 00020.x1'
    '02/01/2012'    ' 00203.x1'
all =
    '01/01/2012'    ' 00020.x1'    [ 10]
    '02/01/2012'    ' 00203.x1'    [300]
Notice that if your locale has already the list separator set to ',', you won't have to change anything on your system to make that work.
If you don't want to change your system just to use the xlsread function, then you could use the textscan function described here: http://www.mathworks.com/help/techdoc/ref/textscan.html
The problem is that it is not as simple as just calling it: you will have to open the file yourself and tell MATLAB explicitly the format of your file.
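A minimal sketch of that approach, assuming a file named sample.csv laid out like the example above (one header line, then date, code, and number columns):
fid = fopen('sample.csv');                                         % placeholder file name
C = textscan(fid, '%s %s %f', 'Delimiter', ',', 'HeaderLines', 1);
fclose(fid);
dates = C{1};    % cell array of date strings, e.g. '2012/1/1'
codes = C{2};    % cell array of code strings, e.g. '00020.x1'
nums  = C{3};    % numeric column as doubles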
Best regards
I recently wrote a function that solves exactly this problem. See delimread.
It's worth noting that xlsread on CSV files only works on Windows. On Linux or Mac, xlsread works in 'basic' mode, which cannot read CSV files. It might not be a great idea in the long run to use xlsread in case you need to migrate across platforms or automate code runs on Linux servers.
xlsread is also much slower than other text parsing functions since it opens an Excel session to read the file.