Matlab - Read An Unknown CSV [duplicate]

I've been working with MATLAB for a few days and I'm having difficulties importing a CSV file into a matrix.
My problem is that my CSV file contains mostly strings along with some integer values, so csvread() doesn't work; csvread() only handles numeric values.
How can I store my strings in some kind of 2-dimensional array so that I have free access to each element?
Here's a sample CSV for my needs:
04;abc;def;ghj;klm;;;;;
;;;;;Test;text;0xFF;;
;;;;;asdfhsdf;dsafdsag;0x0F0F;;
The main challenges are the empty cells and the text within the cells.
As you can see, the structure may vary.

For the case when you know how many columns of data there will be in your CSV file, one simple call to textscan like Amro suggests will be your best solution.
However, if you don't know a priori how many columns are in your file, you can use a more general approach like I did in the following function. I first used the function fgetl to read each line of the file into a cell array. Then I used the function textscan to parse each line into separate strings using a predefined field delimiter and treating the integer fields as strings for now (they can be converted to numeric values later). Here is the resulting code, placed in a function read_mixed_csv:
function lineArray = read_mixed_csv(fileName, delimiter)
  fid = fopen(fileName, 'r');         % Open the file
  lineArray = cell(100, 1);           % Preallocate a cell array (ideally slightly
                                      %   larger than is needed)
  lineIndex = 1;                      % Index of cell to place the next line in
  nextLine = fgetl(fid);              % Read the first line from the file
  while ~isequal(nextLine, -1)        % Loop while not at the end of the file
    lineArray{lineIndex} = nextLine;  % Add the line to the cell array
    lineIndex = lineIndex+1;          % Increment the line index
    nextLine = fgetl(fid);            % Read the next line from the file
  end
  fclose(fid);                        % Close the file
  lineArray = lineArray(1:lineIndex-1);              % Remove empty cells, if needed
  for iLine = 1:lineIndex-1                          % Loop over lines
    lineData = textscan(lineArray{iLine}, '%s', ...  % Read strings
                        'Delimiter', delimiter);
    lineData = lineData{1};                          % Remove cell encapsulation
    if strcmp(lineArray{iLine}(end), delimiter)      % Account for when the line
      lineData{end+1} = '';                          %   ends with a delimiter
    end
    lineArray(iLine, 1:numel(lineData)) = lineData;  % Overwrite line data
  end
end
Running this function on the sample file content from the question gives this result:
>> data = read_mixed_csv('myfile.csv', ';')
data =
  Columns 1 through 7
    '04'    'abc'    'def'    'ghj'    'klm'    ''            ''
    ''      ''       ''       ''       ''       'Test'        'text'
    ''      ''       ''       ''       ''       'asdfhsdf'    'dsafdsag'
  Columns 8 through 10
    ''          ''    ''
    '0xFF'      ''    ''
    '0x0F0F'    ''    ''
The result is a 3-by-10 cell array with one field per cell where missing fields are represented by the empty string ''. Now you can access each cell or a combination of cells to format them as you like. For example, if you wanted to change the fields in the first column from strings to integer values, you could use the function str2double as follows:
>> data(:, 1) = cellfun(@(s) {str2double(s)}, data(:, 1))
data =
  Columns 1 through 7
    [  4]    'abc'    'def'    'ghj'    'klm'    ''            ''
    [NaN]    ''       ''       ''       ''       'Test'        'text'
    [NaN]    ''       ''       ''       ''       'asdfhsdf'    'dsafdsag'
  Columns 8 through 10
    ''          ''    ''
    '0xFF'      ''    ''
    '0x0F0F'    ''    ''
Note that the empty fields result in NaN values.
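If you would rather keep the empty fields as empty strings instead of getting NaN, a hedged variation of the conversion above (reusing the data variable, and assuming the NaNs in that column would only come from empty fields) is to convert only the non-empty cells:
isFilled = ~cellfun(@isempty, data(:, 1));                            % cells that actually hold text
data(isFilled, 1) = cellfun(@(s) {str2double(s)}, data(isFilled, 1)); % convert just those cells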

Given the sample you posted, this simple code should do the job:
fid = fopen('file.csv','r');
C = textscan(fid, repmat('%s',1,10), 'delimiter',';', 'CollectOutput',true);
C = C{1};
fclose(fid);
Then you could format the columns according to their type. For example if the first column is all integers, we can format it as such:
C(:,1) = num2cell( str2double(C(:,1)) )
Similarly, if you wish to convert the 8th column from hex to decimals, you can use HEX2DEC:
C(:,8) = cellfun(@hex2dec, strrep(C(:,8),'0x',''), 'UniformOutput',false);
The resulting cell array looks as follows:
C =
    [  4]    'abc'    'def'    'ghj'    'klm'    ''            ''            []        ''    ''
    [NaN]    ''       ''       ''       ''       'Test'        'text'        [ 255]    ''    ''
    [NaN]    ''       ''       ''       ''       'asdfhsdf'    'dsafdsag'    [3855]    ''    ''

In R2013b or later you can use a table:
>> t = readtable('myfile.txt','Delimiter',';','ReadVariableNames',false)
t =
    Var1     Var2     Var3     Var4     Var5       Var6          Var7         Var8      Var9    Var10
    ____    _____    _____    _____    _____    __________    __________    ________    ____    _____
      4     'abc'    'def'    'ghj'    'klm'    ''            ''            ''          NaN     NaN
    NaN     ''       ''       ''       ''       'Test'        'text'        '0xFF'      NaN     NaN
    NaN     ''       ''       ''       ''       'asdfhsdf'    'dsafdsag'    '0x0F0F'    NaN     NaN
The MATLAB documentation on tables has more info.
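Once the data is in a table, individual variables can be accessed by name and converted further. A hedged sketch (assuming the table is stored in a variable t as above and the hex strings end up in Var8 as shown):
firstCol = t.Var1;                           % already numeric, NaN where the field was empty
hexStr   = strrep(t.Var8, '0x', '');         % strip the '0x' prefix from the hex column
hasHex   = ~cellfun(@isempty, hexStr);       % rows that actually contain a hex value
hexVal   = nan(height(t), 1);                % keep NaN for the rows with no hex entry
hexVal(hasHex) = hex2dec(hexStr(hasHex));    % e.g. [NaN; 255; 3855] for the sample data
t.Var8   = hexVal;                           % replace the text column with numbers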

Use xlsread; it works just as well on .csv files as it does on .xls files. Specify that you want three outputs:
[num, txt, raw] = xlsread('your_filename.csv')
and it will give you an array containing only the numeric data (num), a cell array containing only the text data (txt), and a cell array that contains all data types in the same layout as the .csv file (raw).

Have you tried to use the "CSVIMPORT" function found in the file exchange? I haven't tried it myself, but it claims to handle all combinations of text and numbers.
http://www.mathworks.com/matlabcentral/fileexchange/23573-csvimport

Depending on the format of your file, importdata might work.
You can store strings in a cell array. Type "doc cell" for more information.
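For instance, a minimal sketch of storing strings of different lengths in a 2-dimensional cell array and accessing its elements:
C = {'04', 'abc', 'def'; '', 'Test', '0xFF'};  % 2-by-3 cell array of strings
C{2, 2}                                        % returns the string 'Test'
C(1, :)                                        % returns the first row as a 1-by-3 cell array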

I recommend looking at the dataset array.
The dataset array is a data type that ships with Statistics Toolbox.
It is specifically designed to store heterogeneous data in a single container.
The Statistics Toolbox demo page contains a couple of videos that show some of the dataset array features. The first is titled "An Introduction to Dataset Arrays". The second is titled "An Introduction to Joins".
http://www.mathworks.com/products/statistics/demos.html
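If you have the Statistics Toolbox, a hedged sketch of reading the sample file straight into a dataset array (the 'File', 'Delimiter' and 'ReadVarNames' parameters are assumed to be supported by the dataset constructor in your release):
% Requires Statistics Toolbox; parameter support may vary by release.
ds = dataset('File', 'myfile.csv', 'Delimiter', ';', 'ReadVarNames', false);
ds(1, :)   % first observation, with mixed text and numeric variables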

If your input file has a fixed number of columns separated by commas and you know which columns contain the strings, it might be best to use the function
textscan()
Note that you can specify a format that reads up to a maximum number of characters of a string field, or until a delimiter (comma) is found.
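For example, a hedged sketch for a comma-separated file with a text field, a numeric field, and a trailing text field (the file name and format string are illustrative):
fid = fopen('data.csv');                            % illustrative file name
C   = textscan(fid, '%s %f %s', 'Delimiter', ',');  % each %s stops at the next comma
fclose(fid);
names  = C{1};   % cell array of strings
values = C{2};   % numeric column vector
% A field width such as '%5s' instead of '%s' would limit a string field to
% at most 5 characters.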

% Assuming that the dataset is ";"-delimited and each line ends with ";"
fid = fopen('sampledata.csv');
tline = fgetl(fid);                       % read the first line
u = sprintf('%c', tline);
id = strfind(u, ';');                     % positions of the delimiters
n = length(id);                           % number of fields per line
data = cell(1, n);
for I = 1:n                               % split the first line into fields
    if I == 1
        data{1,I} = u(1:id(I)-1);
    else
        data{1,I} = u(id(I-1)+1:id(I)-1);
    end
end
ct = 1;
while ischar(tline)                       % loop over the remaining lines
    ct = ct + 1;
    tline = fgetl(fid);
    u = sprintf('%c', tline);
    id = strfind(u, ';');
    if ~isempty(id)
        for I = 1:n
            if I == 1
                data{ct,I} = u(1:id(I)-1);
            else
                data{ct,I} = u(id(I-1)+1:id(I)-1);
            end
        end
    end
end
fclose(fid);

Related

Effective way to convert/create matrix from mixed cell/string

Sometimes there might be more than one string, located somewhere else, so I need a way to find every one of them in the cell array. I have a cell array like the one below and I need a fast and effective way to 1) remove the empty columns, 2) convert the cells containing a string with "#" to the number after the "#" (6.504), and finally 3) create or convert the whole cell array to a data matrix like "data" below. Is there a smart way to do all this? Any suggestions are highly appreciated.
array ={
[47.4500] '' [23.9530] '' [12.4590]
[34.1540] '' [15.1730] '' [ 9.6840]
[45.2510] '' [23.3770] '' [13.0670]
[29.9350] '' [14.8680] '' '# 6.504'}
data =[
47.4500 23.9530 12.4590
34.1540 15.1730 9.6840
45.2510 23.3770 13.0670
29.9350 14.8680 6.5040]
Columns with mixed types are tricky to handle, but if the format always follows the regex pattern # \d+(?:\.\d+) you can proceed as follows:
C = {
47.4500 '' 23.9530 '' 12.4590
34.1540 '' 15.1730 '' 9.6840
45.2510 '' 23.3770 '' 13.0670
29.9350 '' 14.8680 '' '# 6.504'
};
% Get rid of empty columns...
C(:,all(cellfun(@ischar,C))) = [];
% Convert numeric strings into numeric values...
C = cellfun(@(x)convert(x),C,'UniformOutput',false);
% Convert the cell matrix into a numeric matrix...
C = cell2mat(C);
Where the convert function is defined as follows:
function x = convert(x)
    if (~ischar(x))
        return;
    end
    x = str2double(strrep(x,'# ',''));
end

Convert the contents of columns containing numeric text to numbers

I have a CSV file that consists of text and numbers, but some columns are corrupted, containing values such as "<<"K.O (as seen in the image below). When I open the CSV file in MATLAB (without importing), it converts those columns to numbers and defines undefined values such as "<<"K.O as NaN, as I wanted. But when I read the file via the script I wrote:
opts = detectImportOptions(filedir);
table = readtable(filedir,opts);
It reads them as char arrays. Since I have many different CSV files (the columns are different), I want to do it automatically rather than using textscan (since that requires a format specification, and my file format is different for each CSV file). Is there any way to convert the contents of columns containing numeric text to numbers automatically?
As far as I can understand from your comments, this is what you are actually looking for:
for i = 1:numel(files)
    file = fullfile(folder, files(i).name);
    opts = detectImportOptions(file);
    idx = strcmp(opts.VariableNames, 'Grade');
    if (any(idx))
        opts.VariableTypes(idx) = {'double'};
    end
    tabs{i} = readtable(file, opts);   % collect the tables in a cell array
end
Assuming you have your data stored in a table, you can attempt to convert each column of character arrays to numeric values using str2double. Any values that don't convert to a numeric value (empty entries, words, non-numeric strings, etc.) will be converted to NaN.
Since you want to do the conversions automatically, we'll have to make one key assumption: any column that converts to all NaN values should remain unchanged. In such a case, the data was likely either all non-convertible character arrays, or already numeric. Given that assumption, this generic conversion could be applied to any table T:
for varName = T.Properties.VariableNames
    numData = str2double(T.(varName{1}));
    if ~all(isnan(numData))
        T.(varName{1}) = numData;
    end
end
As a test, the following sample data:
T = table((1:5).', {'Y'; 'N'; 'Y'; 'Y'; 'N'}, {'pi'; ''; '1.4e5'; '1'; 'A'});
T =
Var1 Var2 Var3
____ ____ _______
1 'Y' 'pi'
2 'N' ''
3 'Y' '1.4e5'
4 'Y' '1'
5 'N' 'A'
Will be converted to the following by the above code:
T =
Var1 Var2 Var3
____ ____ ______
1 'Y' NaN
2 'N' NaN
3 'Y' 140000
4 'Y' 1
5 'N' NaN

Updating N-gram 2 dimension cell array in Matlab

I am trying to extract bigrams from a set of words and store them in a matrix. What I want is to insert the word in the first row and all the bigrams related to that word below it.
For example: if I have the string 'database file there', my output should be:
database    file    there
da          fi      th
at          il      he
ta          le      er
ab                  re
..
I have tried this, but it gives me only the bigrams without the original word:
collection = fileread('e:\m.txt');
collection = regexprep(collection,'<.*?>','');
collection = lower(collection);
collection = regexprep(collection,'\W',' ');
collection = strtrim(regexprep(collection,'\s*',' '));
temp = regexprep(collection,' ',''',''');
eval(['words = {''',temp,'''};']);
word = char(words(1));
word2 = regexp(word, sprintf('\\w{1,%d}', 1), 'match');
bi = cellfun(@(x,y) [x '' y], word2(1:end-1)', word2(2:end)','un',0);
This is only for the first word, however; I want to do that for every word in the 1-by-1000 "words" cell array.
Is there an efficient way to accomplish this, as I will deal with around 1 million words?
I am new to MATLAB, and any resource that explains how to deal with matrices (updating elements, deleting, ...) would be helpful.
Regards,
Ashraf
If you were looking to get a cell array as the output, this might work for you -
input_str = 'database file there'             % input
str1_split = regexp(input_str,'\s','Split');  % split words into cells
NW = numel(str1_split);                       % number of words
char_arr1 = char(str1_split');                % convert split cells into a char array
ind1 = bsxfun(@plus,[1:NW*2]',[0:size(char_arr1,2)-2]*NW);  % indices used for
                                                            % indexing into the char array
t1 = reshape(char_arr1(ind1),NW,2,[]);
t2 = reshape(permute(t1,[2 1 3]),2,[])';      % char array with one row for each pair
out = reshape(mat2cell(t2,ones(1,size(t2,1)),2),NW,[])';
out(reshape(any(t2==' ',2),NW,[])') = {''};   % keep only fully paired cells
out = [str1_split ; out]                      % output
Code Output -
input_str =
database file there
out =
    'database'    'file'    'there'
    'da'          'fi'      'th'
    'at'          'il'      'he'
    'ta'          'le'      'er'
    'ab'          ''        're'
    'ba'          ''        ''
    'as'          ''        ''
    'se'          ''        ''
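For completeness, a simpler (less vectorized) hedged sketch that applies the same bigram idea to every word of a cell array, which may be easier to follow if performance is not critical; the word list below is a stand-in for the 1-by-1000 words array from the question:
words   = {'database', 'file', 'there'};                % stand-in for the full word list
bigrams = cell(1, numel(words));
for k = 1:numel(words)
    w          = words{k};
    bigrams{k} = arrayfun(@(i) w(i:i+1), 1:numel(w)-1, 'UniformOutput', false);
end
% bigrams{1} is {'da','at','ta','ab','ba','as','se'}, bigrams{2} is {'fi','il','le'}, ...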

textscan introduces additional zeros in output array

I have a .txt file like this:
ord,cat,1
ord,cat,1
ord,cat,3
ord,cat,1
ord,cat,4
I know the number of entries for each row (comma separated) but not the number of rows.
I need to import the number from each row into an array.
I wrote this:
fid=fopen(filename)
A=textscan(fid,'%s%s%d','Delimiter',',')
But I get this:
A = {17x1 cell} [16x1 int32]
where the number of cells is clearly wrong.
When I try to read
A{3}
I get
ans =
0
0
0
0
0
1
0
1
0
3
0
1
0
4
I'm really only interested in the integer array, but it may also be useful to show you:
A{1}
ans =
'{\rtf1\ansi\ansicpg1252\cocoartf1187\cocoasubrtf400'
'{\fonttbl\f0\fswiss\fcharset0 Helvetica;}'
'{\colortbl;\red255\green255\blue255;}'
[1x75 char]
[1x102 char]
'\f0\fs24 \cf0 ord'
'\'
'ord'
'\'
'ord'
'\'
'ord'
'\'
'ord'
'}'
A{2}
ans =
''
''
''
''
''
'cat'
''
'cat'
''
'cat'
''
'cat'
''
'cat'
OK, I think there was a formatting mistake of some kind in the input file.
I deleted it and created a new .txt file, and the code above works fine.
You're not giving the right format command to textscan.
A=textscan(fid,'%s%d','Delimiter',',')
'%s%d' here means "read one string, then one integer". So it will probably sit there reading string-integer-string-integer (or trying to), and the "0"s arise from errors where it expects an integer but finds text (such as 'cat') instead.
Since you have three entries per line, try instead:
A=textscan(fid,'%s%s%d','Delimiter',',')
Your numbers should be in A{3}.
If you don't need the first two columns, you can also skip over those fields:
A=textscan(fid,'%*s%*s%d','Delimiter',',')

Matlab get value from csv file

I have a CSV file with data I would like to use.
From two input values taken from columns B and C, I would like to get the name from column A.
Example: from these two values
var1 = 12.90050072
var2 = 55.95981118
I would get "ALIOTH".
Here is the data:
A         B              C
ALGOL     3.13614789     40.95564610
ALIOTH    12.90050072    55.95981118
ALKAID    13.79233003    49.31324779
I can load the CSV file, but I cannot search through the data.
function [name] = getNameObject(ad,dec)
fileID = fopen('bdd.csv');
C = textscan(fileID, '%s %f %f','Delimiter',';');
fclose(fileID);
Please suggest some functions and sample code to do this.
As you will need to compare floating point values, direct numeric comparisons don't work a lot of the time. Here I will make use of string comparisons to achieve what you need:
clear;
fid = fopen('test.csv');
C = textscan(fid, '%s %s %s', 'Delimiter', ';');
fclose(fid);
val1 = input('Enter the first input: ', 's');
val2 = input('Enter the second input: ', 's');
if(find(ismember(C{2},val1)) == find(ismember(C{3},val2)))
    output = C{1}{find(ismember(C{2},val1))}
else
    disp('No match found!');
end
Now the result would be something like:
>> test
Enter the first input: 1.03
Enter the second input: 4.12
No match found!
>> test
Enter the first input: 12.90050072
Enter the second input: 55.95981118
output =
ALIOTH
Here I'm assuming, as per what I could deduce from your code, that the delimiter was a semi-colon. As such, my input data was:
A;B;C
ALGOL;3.13614789;40.95564610
ALIOTH;12.90050072;55.95981118
ALKAID;13.79233003;49.31324779
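The string comparison above works because the query values are typed exactly as they appear in the file. If the inputs come from a computation instead, a hedged alternative sketch is to read columns B and C as numbers and compare against a tolerance (the tolerance value and variable names below are illustrative, following the getNameObject(ad, dec) signature from the question):
fid = fopen('test.csv');
C   = textscan(fid, '%s %f %f', 'Delimiter', ';', 'HeaderLines', 1);  % skip the A;B;C header row
fclose(fid);
ad  = 12.90050072;     % first query value  (column B)
dec = 55.95981118;     % second query value (column C)
tol = 1e-8;            % tolerance for the floating-point comparison
match = abs(C{2} - ad) < tol & abs(C{3} - dec) < tol;
if any(match)
    name = C{1}{find(match, 1)}   % displays ALIOTH for the sample data
else
    disp('No match found!');
end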
I use importdata to deal with CSVs.
aa.csv:
A, B, C
ALGOL, 3.13614789, 40.95564610
ALIOTH, 12.90050072, 55.95981118
ALKAID, 13.79233003, 49.31324779
importdata('aa.csv').data:
 3.1361    40.9556
12.9005    55.9598
13.7923    49.3132
importdata('aa.csv').textdata:
'A'          ' B'    ' C'
'ALGOL'      ''      ''
'ALIOTH'     ''      ''
'ALKAID'     ''      ''
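To answer the original lookup question with this approach, the .data and .textdata fields can be combined; a hedged sketch (the file name, tolerance, and the +1 header offset assume the aa.csv layout shown above):
s    = importdata('aa.csv');     % s.data is numeric, s.textdata holds the names plus the header
var1 = 12.90050072;
var2 = 55.95981118;
tol  = 1e-8;
row  = find(abs(s.data(:,1) - var1) < tol & abs(s.data(:,2) - var2) < tol, 1);
name = s.textdata{row + 1, 1}    % +1 skips the header row; gives 'ALIOTH'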