problem in writing data and matrix elements into text file - matlab

I have a problem with writing data to a file in MATLAB. I have an LP problem whose solution is a vector of size n, and I want to write some information about this solution to a text file so I can read it any time I want
(I do not want to save the workspace in MATLAB).
I wrote the following code:
n = 5;
Length = 10;
s = [1;0;1;0];
t = 0.6;
Test = fopen('Test.txt', 'w');
fprintf(Test,'TSP Problem Size is: %d \n', n);
fprintf(Test,'Optimal Length is: %.5f \n', Length);
fprintf(Test,'Time To Solve in seconds is: %f \n', t);
fprintf(Test,'Solution is: \n');
for i=1:numel(s)
fprintf(Test,'%.0f\n', s(i));
end
fclose(Test);
When I execute type('Test.txt') it appears as I want,
but when I open the file in Windows Explorer it all appears as one line, and I don't know why.
Is there any way to keep the formatting and write each value of the solution on a separate line?
(I tried dlmwrite, but it writes only a matrix; I need to write some information and then the matrix.)
Thank you for your help.
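No answer is recorded in this thread, but the usual cause is that fprintf writes a bare \n, which Notepad (unlike MATLAB's type command) does not treat as a line break. Opening the file in text mode makes fopen translate \n into \r\n on Windows. A minimal sketch of that fix:

```matlab
% Text mode ('wt'): on Windows, each '\n' is written as '\r\n',
% which Notepad and the Explorer preview recognize as a line break.
Test = fopen('Test.txt', 'wt');
fprintf(Test, 'TSP Problem Size is: %d\n', n);
fprintf(Test, 'Solution is:\n');
fprintf(Test, '%.0f\n', s);   % fprintf cycles the format over the whole vector
fclose(Test);
```

Note that passing the vector s directly to fprintf also removes the need for the explicit loop.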

Related

How to sparsely read a large file in Matlab?

I ran a simulation which wrote a huge file to disk. The file is a big matrix v. I can't read it all, but I really only need a portion of the matrix, say, 1:100 of the columns and rows. I'd like to do something like
vtag = dlmread('v',1:100:end, 1:100:end);
Of course, that doesn't work. I know I should have only done the following when writing to the file
dlmwrite('vtag',v(1:100:end, 1:100:end));
But I did not, and running everything again would take two more days.
Thanks
Amir
Thankfully the dlmread function supports specifying a range to read as the third input. So if you want to read all N columns for the first 100 rows, you can specify that with the following command:
startRow = 1;
startColumn = 1;
endRow = 100;
endColumn = N;
rng = [startRow, startColumn, endRow, endColumn] - 1;
vtag = dlmread(filename, ',', rng);
EDIT: Based on your clarification
Since you don't want 1:100 rows but rather 1:100:end rows, the following approach should work better for you.
You can use textscan to read chunks of data at a time. You can read a "good" row and then read in the next "chunk" of data to ignore (discarding it in the process), and continue until you reach the end of the file.
The code below is a slight modification of that idea, except it utilizes the HeaderLines input to textscan, which instructs the function how many lines to ignore before reading in the data. The first time through the loop, no lines will be skipped; however, all other times through the loop, rows2skip lines will be skipped. This allows us to "jump" through the file very rapidly without calling any additional file operations.
startRow = 1;
rows2skip = 99;
columns = 3000;
fid = fopen(filename, 'rb');
% For now, we'll just assume you're reading in floating-point numbers
format = repmat('%f ', [1 columns]);
count = 1;
lines2discard = startRow - 1;
while ~feof(fid)
% Use "HeaderLines" to skip data before reading in data we care about
row = textscan(fid, format, 1, 'Delimiter', ',', 'HeaderLines', lines2discard);
data{count} = [row{:}];
% After the first time through, set the "HeaderLines" (i.e. lines to ignore)
% to be the # we want to skip between lines (much faster than alternatives!)
lines2discard = rows2skip;
count = count + 1;
end
fclose(fid);
data = cat(1, data{:});
You may need to adjust your format specifier for your own type of input.

str2num and importing data for large matrix

I have a large matrix in an xlsx file which contains characters, for example:
1,26:45:32.350,6,7,8,9,9,0,0,0
1,26:45:32.409,5,7,8,9,9,0,75,89
I want to make the 2nd column (the one which contains 26:45:32.350)
a time vector and all the rest a double matrix.
I tried the following code on about 50,000 rows and it worked:
[FileName PathName] = uigetfile('*.xlsx','XLSX Files');
fid = fopen(FileName);
T=char(importdata(FileName));
Time=T(:,5:16);
Data=str2double(T);
However, when I tested it on the whole matrix (about 500,000 rows), I received Data=[] instead of a matrix.
Is there anything else I can do so that 'Data' will be a double matrix even for a large matrix?
The Excel file contains 1 column and around 500,000 rows, so the whole line 1,26:45:32.350,6,7,8,9,9,0,0,0 is inside 1 cell.
I also wrote another code, which works but takes a lot of time to run:
[FileName PathName] = uigetfile('*.xlsx','XLSX Files');
fid = fopen(FileName);
T=importdata(FileName);
h = waitbar(0,'Converting Data to cell array, please wait...');
for i=1:length(T)
delimiter_index=[0 find(T{i,1}(:)==char(44))'];
for j=1:length(delimiter_index)-1
Data{i,j}=T{i,1}(delimiter_index(j)+1:delimiter_index(j+1)-1);
end
waitbar(i/length(T));
end
close(h)
h = waitbar(0,'Separating Data to time and data, please wait...');
for i=1:length(T)
Full_Time(i,:)=Data{i,2};
Data{i,2}=Data{i,1};
Data{i,1}=Full_Time(i,:);
waitbar(i/length(T));
end
close(h)
Data(:,1)=[];
h = waitbar(0,'Changing data cell to mat, please wait...');
for i=1:size(Data,1)
for j=1:size(Data,2)
Matrix(i,j)=str2num(Data{i,j});
end
waitbar(i/size(Data,1));
end
close(h)
Running this code for about 20,000 rows shows that the time is spent in (slowest to fastest):
waitbar
allchild
str2num
importdata
So basically I can remove the waitbar, but allchild (not sure what it is) and str2num take most of the time.
Is there anything I can do to make it run faster?
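No answer is recorded in this thread, but one common approach (a sketch, assuming the sheet is first saved as comma-delimited text; the filename is hypothetical) is to let textscan split the time column from the numeric columns in a single pass, avoiding both the per-cell loops and str2num:

```matlab
% Sketch: column 2 is read as text, the other nine columns as doubles,
% all in one pass through the file.
fid = fopen('data.csv');
C = textscan(fid, '%f %s %f %f %f %f %f %f %f %f', 'Delimiter', ',');
fclose(fid);
Time = C{2};             % cell array of strings such as '26:45:32.350'
Data = [C{1}, C{3:end}]; % numeric matrix of the remaining columns
```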

How to read binary file in one block rather than using a loop in matlab

I have this file which is a series of x, y, z coordinates of over 34 million particles and I am reading them in as follows:
parfor i = 1:Ntot
x0(i,1)=fread(fid, 1, 'real*8')';
y0(i,1)=fread(fid, 1, 'real*8')';
z0(i,1)=fread(fid, 1, 'real*8')';
end
Is there a way to read this in without doing a loop? It would greatly speed up the read in. I just want three vectors with x,y,z. I just want to speed up the read in process. Thanks. Other suggestions welcomed.
I do not have a machine with MATLAB, and I don't have your file to test either, but I think coordinates = fread(fid, [3, Ntot], 'real*8') should work fine.
Maybe fread is the function you are looking for.
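Following that suggestion, the single-call read could be unpacked into the three vectors the question asks for (an untested sketch, as the answerer notes):

```matlab
% Untested sketch: one fread call returns a 3-by-Ntot matrix whose rows
% are the interleaved x, y, z coordinates; transpose to column vectors.
coords = fread(fid, [3, Ntot], 'real*8');
x0 = coords(1, :).';
y0 = coords(2, :).';
z0 = coords(3, :).';
```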
You're right: reading data in larger batches is usually a key part of speeding up file reads. Another part is pre-allocating the destination variables, for example with a zeros call.
I would do something like this:
%Pre-allocate
x0 = zeros(Ntot,1);
y0 = zeros(Ntot,1);
z0 = zeros(Ntot,1);
%Define a desired batch size. make this as large as you can, given available memory.
batchSize = 10000;
%Use while to step through file
indexCurrent = 1; %indexCurrent is the next element which will be read
while indexCurrent <= Ntot
%At the end of the file, we may need to read less than batchSize
currentBatch = min(batchSize, Ntot-indexCurrent+1);
%Load a batch of data
tmpLoaded = fread(fid, currentBatch*3, 'real*8')';
%Deal the fread data into the desired three variables
x0(indexCurrent + (0:(currentBatch-1))) = tmpLoaded(1:3:end);
y0(indexCurrent + (0:(currentBatch-1))) = tmpLoaded(2:3:end);
z0(indexCurrent + (0:(currentBatch-1))) = tmpLoaded(3:3:end);
%Update index variable
indexCurrent = indexCurrent + batchSize;
end
Of course, make sure you test, as I have not. I'm always suspicious of off-by-one errors in this sort of work.

Convert .mat to .csv octave/matlab

I'm trying to write an octave program that will convert a .mat file to a .csv file. The .mat file has a matrix X and a column vector y. X is populated with 0s and 1s and y is populated with labels from 1 to 10. I want to take y and put it in front of X and write it as a .csv file.
Here is a code snippet of my first approach:
load(filename, "X", "y");
z = [y X];
basename = split{1};
csvname = strcat(basename, ".csv");
csvwrite(csvname, z);
The resulting file contains lots of really small decimal numbers, e.g. 8.560596795891285e-06,1.940359477121703e-06, etc...
My second approach was to loop through and manually write the values out to the .csv file:
load(filename, "X", "y");
z = [y X];
basename = split{1};
csvname = strcat(basename, ".csv");
csvfile = fopen(csvname, "w");
numrows = size(z, 1);
numcols = size(z, 2);
for i = 1:numrows
for j = 1:numcols
fprintf(csvfile, "%d", z(i, j));
if j == numcols
fprintf(csvfile, "\n");
else
fprintf(csvfile, ",");
end
end
end
fclose(csvfile);
That gave me a correct result, but took a really long time.
Can someone tell me either how to use csvwrite in a way that will write the correct values, or how to create the .csv file manually more efficiently?
Thanks!
The problem is that if y is of type char, your X vector gets converted to char, too. Since your labels are nothing else but numbers, you can simply convert them to numbers and save the data using csvwrite:
csvwrite('data.txt', [str2num(y) X]);
Edit: Also, in the loop you save the numbers using the integer conversion %d, while csvwrite writes doubles if your data is of type double. If the zeros are not exactly zeros, csvwrite will write them in scientific notation, while your loop will round them. Hence the different behavior.
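If the values really are exact integers stored as doubles, a middle ground (not mentioned in the thread) is dlmwrite with an explicit precision format, which keeps the one-call convenience of csvwrite while avoiding scientific notation:

```matlab
% Sketch: format every element with %d. This assumes z contains exact
% integers; sprintf-style %d on non-integers falls back to exponential
% notation, so clean the data first if that is not guaranteed.
dlmwrite(csvname, z, 'precision', '%d');
```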
Just a heads up: your code isn't optimized for MATLAB/Octave. Switch the for i and for j lines around.
Octave stores arrays in column-major order, so it's not cache-efficient to do what you're doing. Making that change will speed up the overall loop, probably to an acceptable time.
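A faster alternative (a sketch, not from the thread) is to drop the nested loops entirely: build one per-row format string and hand fprintf the transposed matrix, since fprintf recycles the format over all elements in column-major order:

```matlab
% Sketch: vectorized CSV writing. The format covers one row; transposing
% z makes the column-major traversal emit the rows in order.
fmt = [repmat('%d,', 1, size(z, 2) - 1), '%d\n'];
csvfile = fopen(csvname, 'w');
fprintf(csvfile, fmt, z.');
fclose(csvfile);
```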

Matlab fminsearch gets stuck in first iteration

I'm doing an optimization on a simulated robot's walk with the fminsearch method. What I did is create a function (the objective function) that writes the walking parameters (the input of fminsearch) to a text file, then opens the simulator (in Webots), which writes the result of the walk to a text file and closes itself; the objective function then returns the value in that text file. To sum up: the objective function gets an 8x12 matrix of leg positions and returns a scalar which indicates how good the walk is.
This works; the position values really do change each iteration, and the objective value indeed improves.
But here is the problem: I want to follow the function's value in each iteration (preferably by plot), and when I do that I get only the function's value at the first iteration, and I can't understand why.
Here's the code:
options= optimset( 'PlotFcns', @optimplotfval);
[position,fval] = fminsearch(@callWebots,pos_start,options);
I tried displaying the results too and the same problem occurred (it displayed only the first iteration):
options= optimset(options, 'Display', 'iter-detailed');
I even tried to write an output function which plots the fval, and I ran into the same problem.
I would be grateful for any ideas why this might be.
Thank you in advance.
Here's the objective function:
function [objFun]=callWebots(pos)
%Open Motion File
filePath= fullfile('C:\Users\student\Documents\Webots\worlds', 'motion_trot_opt.motion');
data=convertToData(pos,filePath);
fout=fopen(filePath,'w');
%Write Motion File
for row=1:size(data,1)
for col=1:size(data,2)
if(col>1)
fprintf(fout, ',');
end
fprintf(fout, '%s', data{row,col});
end
fprintf(fout,'\n');
end
fclose(fout);
system('C:\Users\student\Documents\Webots\worlds\robot_full_withGPSLighter.wbt');
% Get result and return to main function
resfilePath= fullfile('C:\Users\student\Documents\Webots\worlds', 'result.txt');
fres=fopen(resfilePath);
result = textscan(fres, '%f');
fclose(fres);
objFun=cell2mat(result);
And the call for the objective function:
function [position,fval]=optCall()
%Read Initial Motion File Into CELL
filePath= fullfile('C:\Users\student\Documents\Webots\worlds', 'motion_trot_opt.motion');
fin=fopen(filePath);
ii = 1;
while 1
tline = fgetl(fin);
if (tline == -1)
break;
end
SplitedRow(ii,:) = regexp(tline, ',', 'split');
ii = ii+1;
end
fclose(fin);
%Convert from double to Data
[n,m] = size(SplitedRow);
pos_start = cellfun(@str2double,SplitedRow(2:end,3:end));
options= optimset( 'PlotFcns', @optimplotfval);
options= optimset(options, 'Display', 'iter-detailed');
[position,fval] = fminsearch(@callWebots,pos_start,options);