I have files named 2.txt, 4.txt, 8.txt, 12.txt, 14.txt, and each file has the same column structure.
I want to read a particular file and do some calculations with particular columns. For instance, after reading 2.txt I want to calculate
column(A)+column(I)
The questions:
How can I open a particular file by its name?
How can I do calculations with that file's columns?
Here is my code:
function [t] = ad(x)
folderName = 'C:\Users\zeldagrey6\Desktop\AD';
fileinfo = dir([folderName filesep '**/*.txt']);
filename = {fileinfo.name};
fullFileName = [folderName filesep filename{x}];
d = readtable(fullFileName, 'ReadVariableNames', true);
t = d.A + d.I;
end
The problems with the code:
When I call ad(2) I get 4.txt instead of 2.txt. I guess it does not care about the file names and just picks them according to their order in the directory listing.
Is there any way to refer to the columns as var1, var2 and do the
calculation as var1 + var2 instead of d.A + d.I?
Yes, you can refer to table contents with curly braces, like this:
A = (30.1:0.1:30.5)';
I = (324:328)';
Angle = (35:5:55)';
FWHM = (0.2:0.05:0.4)';
d = table(A,I,Angle,FWHM);
t1 = d.A + d.I;
t2 = d{:,1} + d{:,2};
See that t1 and t2 are equal
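As for opening a particular file by its name (the first question), one option is to build the file name directly from the number instead of indexing into dir's listing, whose order need not match the numeric order of the names. A minimal sketch, assuming the files really are named <number>.txt:
function [t] = ad(x)
% Read <x>.txt (e.g. ad(2) reads 2.txt) and add columns A and I
folderName = 'C:\Users\zeldagrey6\Desktop\AD';
fullFileName = fullfile(folderName, sprintf('%d.txt', x));
d = readtable(fullFileName, 'ReadVariableNames', true);
t = d.A + d.I;
end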
I have an existing .csv file - data.csv - with 2 columns of data shown below.
COL1 COL2
1/1/1991 00:00 65.4
1/1/1991 01:00 59.2
I need to use either dlmwrite, csvwrite or even fprintf to open this file (data.csv), KEEP the first column without deleting it, and just write a new column of data overwriting the original 2nd column in the same file.
The final result will have 2 columns of data in file data.csv: the first column of data is the original column and the 2nd column is the new overwritten data from a calculation in the main program.
Final result:
COL1 COL2
1/1/1991 00:00 101.4
1/1/1991 01:00 96.3
The row offset for writing/overwriting the 2nd column is row=1, col=1 or the 2nd row and 2nd column.
I have tried xlswrite, csvwrite, dlmwrite, and fprintf with varying results. The closest attempt was with csvwrite: it wrote the data at the correct row and column offset, but it deleted all the original data, including the 1st column! Here is the part of the code that fails to run as expected without overwriting the existing data - thank you!
R = 1; C = 1;
rows = 241777;
ii = 1:rows -1
% place datesall and data2 into cell array to write to file
for jj =1:2
data(ii,jj) =csvread(strcat(dirstr,files(jj).name),R,C);
data2(ii,jj) = ((data(ii,jj)/100)*(1/24)*24*newTcaps(jj));
f = strcat(dirstr2,files2(jj).name);
%f=fopen(strcat(dirstr2,files2(jj).name),'a+');
%dlmwrite(f, data2(ii,jj) ,'delimiter', ',', 'roffset',1,'coffset',0)
csvwrite(f, data2(ii,jj),R,C);
%fprintf(f,[ ' ',data2(ii,jj), '\n' ]);
end
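For reference, a minimal sketch of one way to keep the first column and overwrite only the second (not the code above): read the whole file in and rewrite it line by line, keeping each original date/time field and substituting the new values. This assumes data.csv really is comma-separated with a header row, and that newCol2 is a numeric vector with one new value per data row:
txt = fileread('data.csv');
lines = strsplit(txt, newline);
fid = fopen('data.csv', 'w');
fprintf(fid, '%s\n', lines{1});                 % keep the header row unchanged
for k = 2:numel(lines)
    if isempty(strtrim(lines{k})), continue; end
    parts = strsplit(lines{k}, ',');            % parts{1} is the original date/time
    fprintf(fid, '%s,%.1f\n', parts{1}, newCol2(k-1));
end
fclose(fid);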
I'm trying to write a CSV from a table using writetable(..., 'WriteRowNames', true), but when I do so, MATLAB defaults to putting Row in the (1,1) cell of the CSV. I know I can change Row to another string by setting myTable.Properties.DimensionNames{1}, but I can't set it to be blank, so it seems I'm forced to have some text in that (1,1) cell.
Is there a way to leave the (1,1) element of my CSV blank and still write the row names?
There doesn't appear to be any way to set any of the character arrays in the 'DimensionNames' field to either empty or whitespace. One option is to create your .csv file as you do above, then use xlswrite to clear that first cell:
xlswrite('your_file.csv', {''}, 1, 'A1');
Even though the xlswrite documentation states that the file argument should be a .xls, it still works properly for me.
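For example, immediately after writing the table (assuming it is called T):
writetable(T, 'your_file.csv', 'WriteRowNames', true);   % writes 'Row' in cell (1,1)
xlswrite('your_file.csv', {''}, 1, 'A1');                % blank out that first cell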
Another approach could use memmapfile to modify the leading bytes of the file in memory.
For example:
% Set up data
LastName = {'Smith';'Johnson';'Williams';'Jones';'Brown'};
Age = [38;43;38;40;49];
Height = [71;69;64;67;64];
Weight = [176;163;131;133;119];
BloodPressure = [124 93; 109 77; 125 83; 117 75; 122 80];
T = table(Age, Height, Weight, BloodPressure, 'RowNames', LastName);
% Write data to CSV
fname = 'asdf.csv';
writetable(T, fname, 'WriteRowNames', true)
% Overwrite row dimension name in the first row
% Use memmapfile to map only the dimension name to memory
tmp = memmapfile(fname, 'Writable', true, 'Repeat', numel(T.Properties.DimensionNames{1}));
tmp.Data(:) = 32; % Change to the ASCII code for a space
clear('tmp'); % Clean up
Which brings us from:
Row,Age,Height,Weight,BloodPressure_1,BloodPressure_2
Smith,38,71,176,124,93
Johnson,43,69,163,109,77
Williams,38,64,131,125,83
Jones,40,67,133,117,75
Brown,49,64,119,122,80
To:
,Age,Height,Weight,BloodPressure_1,BloodPressure_2
Smith,38,71,176,124,93
Johnson,43,69,163,109,77
Williams,38,64,131,125,83
Jones,40,67,133,117,75
Brown,49,64,119,122,80
Unfortunately the name is not quite deleted (it is overwritten with spaces), but it's a fun approach.
Alternatively, you can use MATLAB's low level file IO to copy everything after the row dimension name to a new file, then overwrite the original:
fID = fopen(fname, 'r');
fID2 = fopen('tmp.csv', 'w');
fseek(fID, numel(T.Properties.DimensionNames{1}), 'bof');
fwrite(fID2, fread(fID));
fclose(fID);
fclose(fID2);
movefile('tmp.csv', fname);
Which produces:
,Age,Height,Weight,BloodPressure_1,BloodPressure_2
Smith,38,71,176,124,93
Johnson,43,69,163,109,77
Williams,38,64,131,125,83
Jones,40,67,133,117,75
Brown,49,64,119,122,80
No, that is currently not supported. The only workaround I see is to use a placeholder as dimension name and to programmatically remove it from the file afterwards.
writetable(T,fileFullPath,'WriteVariableNames',false);
When you specify 'WriteVariableNames' as false (the default is true), the variable/dimension names will not be written to the output file.
Ref link: https://uk.mathworks.com/help/matlab/ref/writetable.html
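Note that this drops the entire header row. To still get the row names in the first column, you would combine it with 'WriteRowNames'; roughly, with the table T from the earlier answer:
writetable(T, 'asdf.csv', 'WriteRowNames', true, 'WriteVariableNames', false);
% The file then has no header line at all; the first line is simply:
% Smith,38,71,176,124,93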
I need to read a CSV file and then make a new file containing three specified columns.
I know how to read a text file, but not a CSV file.
import scala.io.Source._
val lines = fromFile("file.txt").getLines
Or if you just want the first three columns, try this
val lines = fromFile("file.txt").
getLines.
map(_.split(",",4).take(3)).
toList
Assuming a collection of indices idx that refer to columns in the csv file, consider first
val idx = Array(1,3,4)
val xs = (1 to 10).toArray
and so we can fetch the 2nd, 4th and 5th columns (index 0 refers to the first column),
idx.map(xs)
Array(2, 4, 5)
We can apply this idea to each array obtained by splitting each line, as follows:
Source.fromFile("file.csv").getLines.map(_.split(",").map(idx))
This approach allows the indices of interest to be defined at runtime rather than hard-coded.
I had a similar question, but what I am trying to do now is read files in .txt format into MATLAB. My problem is with the headers. Many times, due to errors, the system rewrites the headers in the middle of the file and then MATLAB cannot read the file. Is there a way to skip them? I know I can skip reading some characters if I know what the character is.
Here is the code I am using:
[c,pathc]=uigetfile({'*.txt'},'Select the data','V:\data');
file=[pathc c];
data= dlmread(file, ',', 1,4);
This way I let the user pick the file. My files are huge, typically [86400 125],
so naturally each has 125 header fields or more, depending on the file.
Thanks
Because the files are so big I cannot copy one here, but the format is like
day time col1 col2 col3 col4 ...............................
2/3/2010 0:10 3.4 4.5 5.6 4.4 ...............................
... and so on
With DLMREAD you can read only numeric data. It will not read the date and time contained in your first two columns. If the other data are all numeric, you can tell DLMREAD to skip the first row and the first two columns:
data = dlmread(file, ' ', 1,2);
To also import the day and time, you can use IMPORTDATA instead of DLMREAD:
A = importdata(file, ' ', 1);
dt = datenum(A.textdata(2:end,1),'mm/dd/yyyy');
tm = datenum(A.textdata(2:end,2),'HH:MM');
data = A.data;
The date and time will be converted to serial date numbers. You can convert them back with the DATESTR function.
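For example, to turn the first serial values back into text (using the sample row above):
datestr(dt(1), 'mm/dd/yyyy')   % returns '02/03/2010'
datestr(tm(1), 'HH:MM')        % returns '00:10'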
It turns out that you can still use textscan, except that you read everything as strings. Then you attempt to convert to double. str2double returns NaN for non-numeric strings, and since header fields are all non-numeric, you can identify header rows as rows that are all NaN.
For example:
%# find and open file
[c,pathc]=uigetfile({'*.txt'},'Select the data','V:\data');
file=[pathc c];
fid = fopen(file);
%# read all text
strData = textscan(fid,'%s%s%s%s%s%s','Delimiter',',');
%# close the file again
fclose(fid);
%# catenate, b/c textscan returns a column of cells for each column in the data
strData = cat(2,strData{:});
%# convert cols 3:6 to double
doubleData = str2double(strData(:,3:end));
%# find header rows. headerRows is a logical array
headerRowsL = all(isnan(doubleData),2);
%# since I guess you know what the headers are, you can just remove the header rows
dateAndTimeCell = strData(~headerRowsL,1:2);
dataArray = doubleData(~headerRowsL,:);
%# and you're ready to start working with your data
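Since the poster's files have on the order of 125 columns, the format string can be built programmatically instead of writing %s once per column. A small sketch, assuming nCols holds the column count and the same comma delimiter:
nCols = 125;                       %# assumed number of columns
fmt = repmat('%s', 1, nCols);      %# '%s' repeated nCols times
strData = textscan(fid, fmt, 'Delimiter', ',');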