CSV data from URL to table - matlab

How can I convert CSV file to a proper table that has rows and columns, reading the CSV directly from a URL?
clc;
clear all;
S = {urlread('https://people.sc.fsu.edu/~jburkardt/data/csv/homes.csv')}
T = array2table(S)

You can use webread:
data = webread( 'https://people.sc.fsu.edu/~jburkardt/data/csv/homes.csv' );
This gives you a table variable:
In the documentation you can see several choices for the 'ContentType' option. The default 'auto' will work in this case and be identified as a table thanks to the .csv file extension. If you wanted to be explicit, you could also specify the 'table' content type as part of the optional weboptions input to webread.
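A minimal sketch of being explicit, using the standard weboptions object (the URL is the one from the question):
opts = weboptions('ContentType','table'); % ask webread to return a table
data = webread('https://people.sc.fsu.edu/~jburkardt/data/csv/homes.csv', opts);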

I would suggest downloading the file first. Then readtable works with your data:
filename = 'C:\temp\homes.csv';
data = readtable(filename);
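If you would rather fetch the file from within MATLAB than save it by hand, a short sketch using websave (the local file name is arbitrary):
filename = websave('homes.csv', 'https://people.sc.fsu.edu/~jburkardt/data/csv/homes.csv'); % saves into the current folder
data = readtable(filename);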

Related

Unable to create an array from a table

I'm trying to load an external CSV file using MATLAB.
I managed to download it using webread, but I only need a subset of the columns.
I tried
Tb = webread('https://datahub.io/machine-learning/iris/r/iris.csv');
X = [sepallength sepalwidth petallength petalwidth];
But I cannot form X this way because the names are not recognized. How can I create X correctly?
The line
Tb = webread('https://datahub.io/machine-learning/iris/r/iris.csv');
produces a table object whose column names you later try to access as if they were workspace variables, which they aren't. Instead, modify your code to use:
X = [Tb.sepallength Tb.sepalwidth Tb.petallength Tb.petalwidth];
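If you prefer a single indexing expression over listing each variable with dot notation, a brief sketch using brace indexing on the table (this assumes the column names really are sepallength, sepalwidth, petallength, and petalwidth, as in the question):
X = Tb{:, {'sepallength','sepalwidth','petallength','petalwidth'}}; % brace indexing returns the selected columns as a numeric matrix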

Sequential import of data files according to a rule in MATLAB

I have a list of .txt data files to import. Suppose they are named like this:
file100data.txt, file101data.txt, ..., file109data.txt
I want to import them all using readtable.
I tried using a for loop over a vector a = [0:9] through which MATLAB could loop the readtable command, but I cannot make it work.
for a = [0:9]
T_a_ = readtable('file10_a_data.txt')
end
I know I cannot just put _a_ where I want the vector to loop through, so my question is how can I actually do it?
Thank you in advance!
Here is a solution that should work even if you have missing files in your folder (e.g. you have file100data.txt to file107data.txt, but are missing file108data.txt and file109data.txt):
files = dir('file10*data.txt'); % list all data files in your folder
nof = size(files,1); % number of files
for i = 1:nof % loop over the number of files
    table_index = files(i).name(7) % recover the table index from the data file name
    eval(sprintf('T%s = readtable(files(i).name)', table_index)); % read the table
end
Now, please note that it is generally regarded as poor practice to dynamically name variables in Matlab (see this post for example). You may want to resort to structures or cells to store your data.
You need to convert the value of a into a string and combine strings together, like this:
Tables = struct();
for a = 0:9
    % note: using dynamic structure field names to store the imported tables
    fname = ['file10' num2str(a) 'data']; % matches file100data.txt ... file109data.txt
    Tables.(fname) = readtable([fname '.txt']);
end
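If a numeric index is more natural than a name, a cell array works just as well; a rough sketch, assuming the files are named file100data.txt through file109data.txt as in the question:
T = cell(10,1);
for a = 0:9
    fname = ['file10' num2str(a) 'data.txt'];
    T{a+1} = readtable(fname); % MATLAB indices start at 1, so shift by one
end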

How to read text files from a content repository

My requirement is to read text files from the content repository in SAP ABAP. I used the SCMS_DOC_READ FM to read an image file and DP_CREATE_URL to create the image URL, but SCMS_DOC_READ is not working for text.
Can anyone suggest some code, an FM, or a class?
There are two options based on your requirement:
Option 1: Use READ DATASET to read the file.
DATA : FNAME(60) TYPE c VALUE 'myfile.txt',
       TEXT2(5)  TYPE c,
       LENG      TYPE i.
OPEN DATASET FNAME FOR INPUT IN TEXT MODE ENCODING DEFAULT.
DO.
  READ DATASET FNAME INTO TEXT2 LENGTH LENG.
  WRITE: / SY-SUBRC, TEXT2.
  IF SY-SUBRC <> 0.
    EXIT.
  ENDIF.
ENDDO.
CLOSE DATASET FNAME.
Option 2: Use the class CL_ABAP_CONV_IN_CE to read the file.
Refer to this tutorial page for more information on this class.
You can easily find the answer here: http://scn.sap.com/thread/525075
If you want the short answer, you should use this (note: I am not the author of this part):
CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    FILENAME            = 'File path'   " placeholder: full path of the text file
    FILETYPE            = 'ASC'
    HAS_FIELD_SEPARATOR = 'X'
  TABLES
    DATA_TAB            = IT.
Note: the internal table structure should be the same as the text file.

Why does Open XML API Import Text Formatted Column Cell Rows Differently For Every Row

I am working on an ingestion feature that will take a strongly formatted .xlsx file and import the records to a temp storage table and then process the rows to create db records.
One of the columns is strictly formatted as "Text", but it seems like the Open XML API handles the column's cells differently on a row-by-row basis. Some of the values, while appearing to be numeric, are truly not (which is why we format the column as Text);
some examples are "211377", "211727.01", "209395.388", "209395.435"
What these values represent is not important. What happens is that some values (using the Open XML API v2.5 library) are read in properly as text, whether retrieved from the Shared Strings collection or simply from the InnerXML property, while others get sucked in as numbers with what appears to be appended rounding or precision.
For example the "211377", "211727.01" and "209395.435" all come in exactly as they are in the spreadsheet but the "209395.388" value is being pulled in as "209395.38800000001" (there are others that this happens to as well).
There seems to be no rhyme or reason to which values get messed up and which ones import fine. What is really frustrating is that if I use the native Import feature in SQL Server Management Studio and ingest the same spreadsheet into a temp table, this does not happen. So how is it that the SSMS import can handle these values as purely text for all rows but the Open XML API cannot?
To begin the answer, your main problem seems to be this value:
"209395.388" value is being pulled in as "209395.38800000001"
Yes, in the .xlsx file the value is stored as 209395.38800000001 instead of 209395.388, and that is the correct way to store a floating-point number; nothing is wrong with it. You can simply confirm it with the following code snippet:
string val = "209395.38800000001"; // <= What we extract from Open Xml
Console.WriteLine(double.Parse(val)); // <= simply pass it to double and print
The output is:
209395.388 // <= yes, the expected value
So there is nothing wrong with the value you extract from the .xlsx using the Open XML SDK.
Now to cells: yes, cells can have a variety of formats, such as numbers, text, booleans, or shared-string text. You can also apply styles to a cell, which format the value into the desired output in Excel (e.g. date/time formats, forced strings, etc.). This is how Excel handles the vast variety of data; it needs this kind of formatting, and the .xlsx file format had to be a little complex to support it all.
My advice is to run a proper parsing step on the extracted values to identify what format each one represents (for example, to determine whether it is a number or text) and apply the appropriate parse.
For example:
string val = "209395.38800000001";
Console.WriteLine(float.Parse(val)); // <= float.Parse will deduce a different value: 209395.4
Update:
Here's how the value is saved in the internal XML.
Try it for yourself:
Make an .xlsx file with the value 209395.388 -> change the extension to .zip -> unzip it -> go to the worksheet folder -> open Sheet1.
You will notice that the value is stored as 209395.38800000001, as seen in the attached image. So there is nothing wrong with the API extracting the stored number; it's your duty to decide what format to apply.
But if you make the whole column Text before adding the data, you will see that the .xlsx file holds the data as it is; simply put, as a string.

Issues when reading a .nc file

I got this .nc file. However, when I read the file like this
ncid = netcdf.open(ncfile)
It gives me only a number. It was supposed to contain some data. I am not sure what's wrong with it. Can anyone please provide some information?
According to the documentation, netcdf.open only returns the NetCDF ID, not the data:
ncid = netcdf.open(source) opens source, which can be the name of a
NetCDF file or the URL of an OPeNDAP NetCDF data source, for read-only
access. Returns a NetCDF ID in ncid.
You probably want to use ncread.
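A minimal sketch using the high-level functions (the variable name below is a placeholder; run ncdisp first to see the real variable names in your file):
ncdisp(ncfile)                    % list the file's dimensions, variables and attributes
data = ncread(ncfile, 'varname'); % read one variable into an array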
Note:
ncid = netcdf.open(ncfile)
Where ncid is a netCDF file identifier returned by netcdf.create or
netcdf.open.
E.g. in your case:
ncid = netcdf.open(ncfile,'NC_NOWRITE');
varidp = netcdf.inqVarID(ncid,'varname'); % returns the variable ID
E.g. the official example:
This example opens the example netCDF file included with MATLAB®, example.nc, and uses several inquiry functions to get the ID of the first variable.
ncid = netcdf.open('example.nc','NC_NOWRITE');
% Get information about first variable in the file.
[varname, xtype, dimids, atts] = netcdf.inqVar(ncid,0);
% Get variable ID of the first variable, given its name
varid = netcdf.inqVarID(ncid,varname)
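If you stay with the low-level netcdf package, the actual data come from netcdf.getVar, and the file should be closed when you are done; a short sketch continuing the example above:
data = netcdf.getVar(ncid, varid); % read the variable's values into an array
netcdf.close(ncid);                % release the file handle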
Ref: http://www.mathworks.in/help/matlab/ref/netcdf.inqvarid.html
Thanks