Dynamic output arguments in a for-loop - MATLAB

I am fairly new to MATLAB, and I created this script to help me gather 4 numbers out of an Excel file. This one works so far:
clear all
cd('F:/Wortpaare Analyse/Excel')
VPNumber = input('Which Participant Number?', 's');
filename = (['PAL_',VPNumber,'_Gain_Loss.xlsx']);
sheet = 1;
x1Range = 'N324';
GainBlock1 = xlsread(filename,sheet,x1Range);
x1Range = 'O324';
LossBlock1 = xlsread(filename,sheet,x1Range);
x1Range = 'AD324';
GainBlock2 = xlsread(filename,sheet,x1Range);
x1Range = 'AE324';
LossBlock2 = xlsread(filename,sheet,x1Range);
AnalyseProband = [GainBlock1, LossBlock1, GainBlock2, LossBlock2]
Now I would like to make a script that analyzes the first 20 Excel files, and I tried this:
clear all
cd('F:/Wortpaare Analyse/Excel')
for VPNumber = 1:20 %for the 20 files
a = (['PAL_%d_Gain_Loss.xlsx']);
filename = sprintf(a, VPNumber) % specifies the name of the file
sheet = 1;
x1Range = 'N324';
(['GainBlock1_',VPNumber]) = xlsread(filename,sheet,x1Range);
....
end
The problem seems to be that I can only have one output argument. I would like to change the output variable name on each iteration, so it doesn't overwrite GainBlock1 in every cycle.
In the end I would like to have these variables:
GainBlock1_1 (for the first excel sheet)
GainBlock1_2 (for the second excel sheet)
...
GainBlock1_20 (for the 20th excel sheet)
Is there a clever way to do that? I was able to write the first script fairly easily, but was unable to produce any significant progress in the second script. Any help is greatly appreciated.
Best,
Luca

This can be accomplished by storing the data in either an array or a cell array. I believe an array will be sufficient for what you're trying to do. I've included a reorganized code sample below, broken out into functions to make it easier to read and understand.
Basically, the first function handles the loop and controls where the data is placed in the array, and the second function is your original script, which accepts the VPNumber as an input.
This should return a 20x4 array, where the first index indicates the sheet the data was pulled from and the second index selects among [GainBlock1, LossBlock1, GainBlock2, LossBlock2]. For example, GainBlock1 for sheet 5 would be AnalyseProband(5,1), and LossBlock2 for sheet 11 would be AnalyseProband(11,4).
function AnalyseProband = getAllProbands()
currentDir = pwd;
returnToOriginalDir = onCleanup(@()cd(currentDir));
cd('F:/Wortpaare Analyse/Excel')
numProbands = 20;
AnalyseProband = zeros(numProbands,4);
for n = 1:numProbands
AnalyseProband(n,:) = getBlockInfoFromXLS(n);
end
end
function AnalyseProband = getBlockInfoFromXLS(VPNumber)
a = 'PAL_%d_Gain_Loss.xlsx';
filename = sprintf(a, VPNumber); % specifies the name of the file
sheet = 1;
x1Range = 'N324';
GainBlock1 = xlsread(filename,sheet,x1Range);
x1Range = 'O324';
LossBlock1 = xlsread(filename,sheet,x1Range);
x1Range = 'AD324';
GainBlock2 = xlsread(filename,sheet,x1Range);
x1Range = 'AE324';
LossBlock2 = xlsread(filename,sheet,x1Range);
AnalyseProband = [GainBlock1, LossBlock1, GainBlock2, LossBlock2];
end
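Usage would then look like this, indexing the returned array as described above (the two result variables are just illustrative names):
AnalyseProband = getAllProbands();
gain1_sheet5 = AnalyseProband(5,1);   % GainBlock1 for the 5th file
loss2_sheet11 = AnalyseProband(11,4); % LossBlock2 for the 11th file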


Extract fields from Structure Array to put into another Structure Array

I have a structure array with a large number of fields that I don't care about, so I want to extract the limited number of fields I DO care about and put them into a separate structure array.
For a structure array of size one, I've done this by creating the new array from scratch, for example:
structOld.a = 1;
structOld.b = 2;
structOld.usefulA = 'useful information';
structOld.usefulB = 'more useful information';
structOld.c = 3;
structOld.d = 'words';
keepFields = {'usefulA','usefulB'};
structNew = struct;
for fn = keepFields
structNew.(fn{:}) = structOld.(fn{:});
end
which gives
structNew =
usefulA: 'useful information'
usefulB: 'more useful information'
Is there a more efficient way of doing this? How can I scale up to a structure array (vector) of size N?
N = 50;
structOld(1).a = 1;
structOld(1).b = 2;
structOld(1).usefulA = 500;
structOld(1).usefulB = 'us';
structOld(1).c = 3;
structOld(1).d = 'ef';
structOld(2).a = 4;
structOld(2).b = 5;
structOld(2).usefulA = 501;
structOld(2).usefulB = 'ul';
structOld(2).c = 6;
structOld(2).d = 'in';
structOld(3).a = 7;
structOld(3).b = '8';
structOld(3).usefulA = 504;
structOld(3).usefulB = 'fo';
structOld(3).c = 9;
structOld(3).d = 'rm';
structOld(N).a = 10;
structOld(N).b = 11;
structOld(N).usefulA = 506;
structOld(N).usefulB = 'at';
structOld(N).c = 12;
structOld(N).d = 'ion';
In this case, I'd like to end up with:
structNew =
1x50 struct array with fields:
usefulA
usefulB
Keeping elements with empty usefulA/usefulB fields is fine; I can get rid of them later if needed.
Using rmfield isn't great because the number of useless fields far outnumbers the useful fields.
You can create a new struct array using existing data as follows:
structNew = struct('usefulA',{structOld.usefulA},'usefulB',{structOld.usefulB});
If you have an arbitrary set of field names that you want to preserve, you could use a loop as follows. Here, I'm first extracting the data from structOld into a cell array data, which contains each of the arguments to the struct call in the previous line of code. data{:} is then a comma-separated list of these arguments, so the last line of code below is equivalent to the line above.
keepFields = {'usefulA','usefulB'};
data = cell(2,numel(keepFields));
for ii=1:numel(keepFields)
data{1,ii} = keepFields{ii};
data{2,ii} = {structOld.(keepFields{ii})};
end
structNew = struct(data{:});
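As a quick sanity check (using the structOld from the question), the result keeps only the two fields and preserves the array length:
fieldnames(structNew)                 % -> {'usefulA'; 'usefulB'}
numel(structNew) == numel(structOld)  % -> true; skipped elements just have empty fields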

fopen with correct file format and path

How do I read my incrementally numbered .htm files with the correct file format and path?
path:DATA\WEBPAGE_SOURCE\train75_phish_data\1.htm
file:1.htm,2.htm,3.htm....etc
Inside 1.htm, 2.htm, 3.htm, etc. is the source code of a webpage.
I tried the following example, but got an error when i=21.
data2=fopen(strcat('DATA\WEBPAGE_SOURCE\train75_phish_data\',int2str(i),'.htm'),'r')
I have referred to this, but it still doesn't work. Any ideas?
http://www.mathworks.com/help/matlab/ref/fopen.html
Here is my code:
data = importdata('DATA/URL/trainURL')
domain_URL = regexp(data,'\w*://[^/]*','match','once')
[sizeData b] = size(domain_URL);
for i = 1:150
A7_data = domain_URL{i};
data2=fopen(strcat('DATA\WEBPAGE_SOURCE\train75_phish_data\',int2str(i),'.htm'),'r')
CharData = fread(data2, '*char')'; %read text file and store data in CharData
img_only = regexp(CharData, '<img.*?>', 'match');
feature7_data=(cellfun(@(n) isempty(n), strfind(img_only, A7_data)))
B7(i)=sum(feature7_data)
end
feature7(B7>=10)=1;
feature7(B7<10&B7>5)=0;
feature7(B7<=5)=-1;
feature7'
Here is my output:
data = importdata('DATA/URL/trainURL') returns the list of URLs saved in the file.
The loop runs fine up to i=20 but errors at iteration 21; I want to loop until 150, but it cannot read data2 for i=21.
I think you need to handle possible exceptions in a more principled way. Try this:
data = importdata('DATA/URL/trainURL')
domain_URL = regexp(data,'\w*://[^/]*','match','once')
[sizeData b] = size(domain_URL);
for i = 1:150
A7_data = domain_URL{i};
filename = fullfile('DATA\WEBPAGE_SOURCE\train75_phish_data\',strcat(int2str(i),'.htm'));
if (exist(filename,'file')),
disp(sprintf('file %s exists, processing it',filename));
data2=fopen(filename,'r');
CharData = fread(data2, '*char')'; %read text file and store data in CharData
fclose(data2);
img_only = regexp(CharData, '<img.*?>', 'match');
feature7_data=(cellfun(@(n) isempty(n), strfind(img_only, A7_data)))
B7(i)=sum(feature7_data)
else,
disp(sprintf('file %s does not exist, skipping it!',filename));
end
end
feature7(B7>=10)=1;
feature7(B7<10&B7>5)=0;
feature7(B7<=5)=-1;
feature7'
Note that I also added an fclose(data2) after the line that does the fread, so file handles are not leaked.
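fopen also returns -1 when a file exists but cannot be opened, so a slightly more defensive variant (a minimal sketch, not part of the original answer) checks the file identifier before reading:
data2 = fopen(filename,'r');
if data2 == -1 % fopen signals failure with -1
disp(sprintf('could not open %s, skipping it!',filename));
else
CharData = fread(data2, '*char')'; % read the whole file as text
fclose(data2);
% ... process CharData as in the loop above
end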

Read files in subfolders in order in MATLAB

I've got a folder which contains subfolders with text files. I want to read those files in the same order as they appear in the subfolders, but I'm having a problem with that. I use the following MATLAB code:
outNames = {};
k=1;
feature = zeros(619,85);
fileN = cell(619,1);
for i=1:length(nameFolds)
dirList = dir(strcat(path, num2str(cell2mat(nameFolds(i,1)))));
names = {dirList.name};
outNames = {};
for j=1:numel(names)
name = names{j};
if ~isequal(name,'.') && ~isequal(name,'..')
[~,name] = fileparts(names{j});
outNames{end+1} = name;
fileName = strcat(path, num2str(cell2mat(nameFolds(i,1))), '\', name, '.descr' );
feature(k,:) = textread(fileName);
fileN{k} = [fileName num2str(k)];
k= k+1;
end
end
end
In one subfolder I've got the above text file names:
AnimalPrint_tiger_test_01.descr
AnimalPrint_tiger_test_02.descr
AnimalPrint_tiger_test_03.descr
AnimalPrint_tiger_test_04.descr
AnimalPrint_tiger_test_05.descr
AnimalPrint_tiger_test_06.descr
AnimalPrint_tiger_test_07.descr
AnimalPrint_tiger_test_08.descr
AnimalPrint_tiger_test_09.descr
AnimalPrint_tiger_test_10.descr
AnimalPrint_tiger_test_11.descr
AnimalPrint_tiger_test_12.descr
AnimalPrint_tiger_test_13.descr
AnimalPrint_tiger_test_14.descr
AnimalPrint_tiger_test_15.descr
AnimalPrint_zebra_test_1.descr
AnimalPrint_zebra_test_2.descr
AnimalPrint_zebra_test_3.descr
AnimalPrint_zebra_test_4.descr
AnimalPrint_zebra_test_5.descr
AnimalPrint_zebra_test_12.descr
But it seems that it reads AnimalPrint_zebra_test_12.descr right after AnimalPrint_zebra_test_1.descr and before the rest. Any idea why this happens?
dir sorts the files lexicographically (character by character) according to their names, for instance
test_1
test_12 % 1 followed by 2
test_2
test_3
You may want to build your own order with ['test_' num2str(variable) '.descr'], which concatenates test_ with an incrementing variable, as in the sketch below.
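A minimal sketch of that idea (folderPath, feature, fileN, and k are assumed from the question's code; the prefix is one of the question's examples):
prefix = 'AnimalPrint_zebra_test_'; % assumed prefix, for illustration
for n = 1:12 % visit files in numeric order, regardless of how dir() sorts them
fileName = fullfile(folderPath, [prefix num2str(n) '.descr']);
if exist(fileName, 'file') % some numbers may be missing
feature(k,:) = textread(fileName);
fileN{k} = fileName;
k = k + 1;
end
end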

Excessively large overhead in MATLAB .mat file

I am parsing a large text file full of data and then saving it to disk as a *.mat file so that I can easily load in only parts of it (see here for more information on reading in the files, and here for the data). To do so, I read in one line at a time, parse the line, and then append it to the file. The problem is that the file itself is >3 orders of magnitude larger than the data contained therein!
Here is a stripped down version of my code:
database = which('01_hit12.par');
[directory,filename,~] = fileparts(database);
matObj = matfile(fullfile(directory,[filename '.mat']),'Writable',true);
fidr = fopen(database);
hitranTemp = fgetl(fidr);
k = 1;
while ischar(hitranTemp)
if abs(hitranTemp(1)) == 32;
hitranTemp(1) = '0';
end
hitran = textscan(hitranTemp,'%2u%1u%12f%10f%10f%5f%5f%10f%4f%8f%15c%15c%15c%15c%6u%2u%2u%2u%2u%2u%2u%1c%7f%7f','delimiter','','whitespace','');
matObj.moleculeNumber(1,k) = uint8(hitran{1});
matObj.isotopeologueNumber(1,k) = uint8(hitran{2});
matObj.vacuumWavenumber(1,k) = hitran{3};
matObj.lineIntensity(1,k) = hitran{4};
matObj.airWidth(1,k) = single(hitran{6});
matObj.selfWidth(1,k) = single(hitran{7});
matObj.lowStateE(1,k) = single(hitran{8});
matObj.tempDependWidth(1,k) = single(hitran{9});
matObj.pressureShift(1,k) = single(hitran{10});
if rem(k,1e4) == 0;
display(sprintf('line %u (%2.2f)',k,100*k/K));
end
hitranTemp = fgetl(fidr);
k = k + 1;
end
fclose(fidr);
I stopped the code after 13,813 of the 224,515 lines had been parsed because it had been taking a very long time and the file size was getting huge, but the last printout indicated that I had only just cleared 10k lines. I cleared the memory, and then ran:
S = whos('-file','01_hit12.mat');
fileBytes = sum([S.bytes]);
T = dir(which('01_hit12.mat'));
diskBytes = T.bytes;
disp([fileBytes diskBytes diskBytes/fileBytes])
and get the output:
524894 896189009 1707.37141022759
What is taking up the extra 895,664,115 bytes? I know the help page says there should be a little extra overhead, but I feel that nearly a GB of descriptive header is a bit excessive!
New information:
I tried pre-allocating the file, thinking that perhaps MATLAB was doing the same thing it does when a matrix is embiggened in a loop, reallocating a chunk of disk space for the entire matrix on each write, but that isn't it. Filling the file with zeros of the appropriate data types results in a file for which my short check script returns:
8531570 71467 0.00837677004349727
This makes more sense to me. MATLAB is saving the file sparsely, so the on-disk size is much smaller than the size of the full matrix in memory. Once it starts replacing values with real data, however, I get the same behavior as before and the file size starts skyrocketing beyond all reasonable bounds.
New new information:
Tried this on a subset of the data, 100 lines long. To stream to disk, the data has to be in v7.3 format, so I ran the subset through my script, loaded it into memory, and then resaved as v7.0 format. Here are the results:
v7.3: 3800 8752 2.30
v7.0: 3800 2561 0.67
No wonder the v7.3 format isn't the default. Does anyone know a way around this? Is this a bug or a feature?
This seems like a bug to me. A workaround is to write in chunks to pre-allocated arrays.
Start off by pre-allocating:
fid = fopen('01_hit12.par', 'r');
data = fread(fid, inf, 'uint8');
nlines = nnz(data == 10) + 1; % count newline bytes (ASCII 10) to get the line count
fclose(fid);
matObj.moleculeNumber = zeros(1,nlines,'uint8');
matObj.isotopeologueNumber = zeros(1,nlines,'uint8');
matObj.vacuumWavenumber = zeros(1,nlines,'double');
matObj.lineIntensity = zeros(1,nlines,'double');
matObj.airWidth = zeros(1,nlines,'single');
matObj.selfWidth = zeros(1,nlines,'single');
matObj.lowStateE = zeros(1,nlines,'single');
matObj.tempDependWidth = zeros(1,nlines,'single');
matObj.pressureShift = zeros(1,nlines,'single');
Then to write in chunks of 10000, I modified your code as follows:
... % your code plus pre-alloc first
bs = 10000;
hitran = cell(1,bs); % buffer one chunk of parsed lines
while ischar(hitranTemp)
for ii = 1:bs
if hitranTemp(1) == ' ' % leading space: replace so textscan parses the first field
hitranTemp(1) = '0';
end
hitran{ii} = textscan(hitranTemp,'%2u%1u%12f%10f%10f%5f%5f%10f%4f%8f%15c%15c%15c%15c%6u%2u%2u%2u%2u%2u%2u%1c%7f%7f','delimiter','','whitespace','');
hitranTemp = fgetl(fidr);
if ~ischar(hitranTemp), bs=ii; break; end
end
% this part really ugly, sorry! trying to keep it compact...
% builtin('_paren',x,idx) is an undocumented way to index a temporary array, i.e. x(idx)
matObj.moleculeNumber(1,k:k+bs-1) = uint8(builtin('_paren',cellfun(@(c)c{1},hitran),1:bs));
matObj.isotopeologueNumber(1,k:k+bs-1) = uint8(builtin('_paren',cellfun(@(c)c{2},hitran),1:bs));
matObj.vacuumWavenumber(1,k:k+bs-1) = builtin('_paren',cellfun(@(c)c{3},hitran),1:bs);
matObj.lineIntensity(1,k:k+bs-1) = builtin('_paren',cellfun(@(c)c{4},hitran),1:bs);
matObj.airWidth(1,k:k+bs-1) = single(builtin('_paren',cellfun(@(c)c{6},hitran),1:bs));
matObj.selfWidth(1,k:k+bs-1) = single(builtin('_paren',cellfun(@(c)c{7},hitran),1:bs));
matObj.lowStateE(1,k:k+bs-1) = single(builtin('_paren',cellfun(@(c)c{8},hitran),1:bs));
matObj.tempDependWidth(1,k:k+bs-1) = single(builtin('_paren',cellfun(@(c)c{9},hitran),1:bs));
matObj.pressureShift(1,k:k+bs-1) = single(builtin('_paren',cellfun(@(c)c{10},hitran),1:bs));
k = k + bs;
fprintf('.');
end
fclose(fidr);
The final size on disk is 21,393,408 bytes. The usage breaks down as,
>> S = whos('-file','01_hit12.mat');
>> fileBytes = sum([S.bytes]);
>> T = dir(which('01_hit12.mat'));
>> diskBytes = T.bytes; ratio = diskBytes/fileBytes;
>> fprintf('%10d whos\n%10d disk\n%10.6f\n',fileBytes,diskBytes,ratio)
8531608 whos
21389582 disk
2.507099
Still fairly inefficient, but not out of control.
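Given the v7.0 numbers above, another workaround (a sketch, assuming the parsed arrays fit in memory; names taken from the question) is to skip matfile streaming entirely: fill ordinary in-memory arrays in the parsing loop, then write them once in the default v7 format. Individual variables can still be loaded later with load(filename,'varname'), though not sub-ranges of an array:
% after the parsing loop has filled plain in-memory arrays:
save(fullfile(directory,[filename '.mat']), ...
'moleculeNumber','isotopeologueNumber','vacuumWavenumber', ...
'lineIntensity','airWidth','selfWidth','lowStateE', ...
'tempDependWidth','pressureShift','-v7');
% later, load just one variable:
S = load(fullfile(directory,[filename '.mat']),'vacuumWavenumber');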

How to programmatically construct a large cell array

How could I automatically construct a dataset like the one below, assuming that the number of columns of the matrix summary_whts is approx. 400?
lrwghts = dataset(...
{summary_whts(:,01),'w00'},...
{summary_whts(:,02),'w01'},...
{summary_whts(:,03),'w02'},...
{summary_whts(:,04),'w03'},...
{summary_whts(:,05),'w04'},...
{summary_whts(:,06),'w05'},...
{summary_whts(:,07),'w06'},...
{summary_whts(:,08),'w07'},...
{summary_whts(:,09),'w08'},...
{summary_whts(:,10),'w09'},...
{summary_whts(:,11),'w10'},...
{summary_whts(:,12),'w11'},...
'ObsNames',summary_mthd);
Why not use a simple loop to populate the pieces of the dataset call?
nCols = size(summary_whts,2); % number of columns, not rows
dsArgs = cell(nCols, 2); % avoid naming this 'dataset', which would shadow the function
for i = 1:nCols
dsArgs{i,1} = summary_whts(:,i);
dsArgs{i,2} = sprintf('w%02d', i-1); % names start at w00, matching the question
end
dsArgs{end+1,1} = 'ObsNames';
dsArgs{end,2} = summary_mthd;
At last, I found it! This is what I was looking for:
colNames = []; % avoid naming this 'cat', which shadows a built-in function
for i = 0:size(summary_whts,2)-1
colNames = [colNames;sprintf('w%03d',i)];
end
colNames = cellstr(colNames);
lrwghts = dataset({summary_whts,colNames{:}},'ObsNames',cellstr(summary_mthd));
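As an aside, the name-building loop can be collapsed to one line; a minimal sketch that produces the same w000-style names:
% num2str applies the format to each row of the column vector of indices
colNames = cellstr(num2str((0:size(summary_whts,2)-1)','w%03d'));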