I have a matrix as shown below:
A=[2;1;8;5;4;7]
Now I need to split the matrix A into 2 parts:
newpoint=[2];
rest=[1;8;5;4;7];
Then, on the next loop iteration, extract the second element as the new point:
newpoint=[1];
rest=[2;8;5;4;7];
On the next iteration, take the third element as the new point:
newpoint=[8];
rest=[2;1;5;4;7];
Continue taking the numbers in row order until the last row.
Can someone be kind enough to help? Thanks!
Something like this might do:
for i = 1:length(A)
    newpoint = A(i);
    if i == 1
        rest = A(i+1:end);
    elseif i == length(A)
        rest = A(1:end-1);
    else
        rest = [A(1:i-1); A(i+1:end)];   % concatenate the parts before and after element i
    end
    % ... stuff to do with newpoint and rest
end
I would go for something like this:
for i = 1:size(A,1)
    newpoint = A(i,1);
    rest = A;
    rest(i) = [];
    %# use rest and newpoint
end
Or if you prefer saving all the rest and newpoints in a matrix:
newpoint = zeros(size(A,1),1);
rest = zeros(size(A,1)-1,size(A,1));
for i = 1:size(A,1)
    newpoint(i) = A(i,1);
    temp = A;
    temp(i) = [];
    rest(:,i) = temp;
end
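For example, running the loop above on the vector from the question should give (a quick check of my own, not part of the original answer):
A = [2;1;8;5;4;7];
% ... after running the loop above ...
newpoint(3)   % 8
rest(:,3)     % [2; 1; 5; 4; 7]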
Let's say I have a bunch of data with labels [0-9]. I want to gather information based on all the pairwise interactions of these data. To avoid redundancy, I do something like this:
a = zeros(45, 1);
pair = 1;
for i = 1:9
    for j = (i+1):10
        a(pair) = i * j;
        pair = pair + 1;
    end
end
If I want to examine everything in a, I can loop through it in a 2-dimensional way using the pair, i, j structure. That's fine. But what if I want to programmatically examine only certain pairs? Is there some logic by which I can do something analogous to a(i,j), where a(i,j) is actually "the coefficients from the model that was trained on data classes i and j"?
Running MATLAB R2018b. For the curious, I'm doing this as part of a DAGSVM implementation.
You can store the input information alongside the resulting vector.
a = zeros(45, 1);
pair = 1;
I = a;
J = a;
for i = 1:9
    for j = (i+1):10
        I(pair) = i;
        J(pair) = j;
        a(pair) = i * j;
        pair = pair + 1;
    end
end
res = [a, I, J];
Then use a function to match the input values to a given pair, using a tolerance for floating point values.
function Val = findVal(res, pair)
    % pair = [i, j]
    pairs = res(:,2:3);
    ind = sum(abs(pairs - pair) < 1e-6, 2) == 2;
    if sum(ind) == 0
        disp('No match found')
        Val = NaN;
    else
        Val = res(ind, 1);
        disp('pair')
        disp(pair)
        disp('value')
        disp(Val)
    end
end
Now I generate two pairs, one that is in the set and one that is not, to show the usage of the function.
testpair = res(8,2:3)
badpair = [20,20]
findVal(res,testpair)
findVal(res,badpair)
You shouldn't need loops for this.
If i spans the range [1:I] and j spans the range [1:J], then you have K = I*J possible interactions, half of which are redundant/permutations (A(i,j) = A(j,i)).
j = mod(pair - 1, J) + 1;        % "row"
i = floor((pair - 1) / J) + 1;   % "col"
pair = (i - 1) * J + j;          % linear index
To access only certain combinations you can just use this basic linear indexing: a(pair) then plays the role of a(i,j) (and of a(j,i), since the interaction is symmetric), with a(pair) = i * j in your example.
It sounds like you'd like to avoid self-interaction and redundancy, so only choose pairs where i < j (or only i > j), which is equivalent to keeping just the upper-triangular or lower-triangular part (as in your code above).
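If you keep the packed 45-element vector from your loop, a small helper can map an ordered pair (i, j) with i < j straight to its slot. This is a sketch of my own (pairIndex is a hypothetical helper; the formula assumes the same enumeration order as the question's double loop with 10 classes):
% Map (i,j), with 1 <= i < j <= nClasses, to the position used by
% the question's loop (i = 1:nClasses-1, j = i+1:nClasses).
nClasses = 10;                       % as in the question
pairIndex = @(i, j) (i - 1) * nClasses - i * (i + 1) / 2 + j;
% Rebuild the packed vector from the question to check the mapping
a = zeros(nClasses * (nClasses - 1) / 2, 1);
pair = 1;
for i = 1:nClasses-1
    for j = (i+1):nClasses
        a(pair) = i * j;
        pair = pair + 1;
    end
end
a(pairIndex(3, 8))   % returns 24 = 3*8, the entry for classes 3 and 8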
I want to compute the mean of bins along a dimension of an nd array in MATLAB, for example, averaging every 10 elements along dim 4 of a 4d array:
x = reshape(1:30*30*20*300,30,30,20,300);
n = 10;
m = size(x,4)/10;
y = nan(30,30,20,m);
for ii = 1 : m
    y(:,:,:,ii) = mean(x(:,:,:,(1:n)+(ii-1)*n),4);
end
It looks a bit clumsy. I think there must be better ways to average the bins.
Besides, is it possible to make the script applicable to general cases, namely an arbitrary number of dimensions and averaging along an arbitrary dim?
For the second part of your question you can use this:
x = reshape(1:30*30*20*300,30,30,20,300);
dim = 4;
n = 10;
m = size(x,dim)/n;
ysize = size(x);
ysize(dim) = m;          % output size: same as x, but binned along dim
y = nan(ysize);
idx1 = repmat({':'},1,ndims(x));
idx2 = repmat({':'},1,ndims(x));
for ii = 1 : m
    idx1{dim} = ii;
    idx2{dim} = (1:n)+(ii-1)*n;
    y(idx1{:}) = mean(x(idx2{:}),dim);
end
For the first part of the question, here is an alternative using cumsum and diff, but it may not be better than the loop solution:
function y = slicedmean(x,slice_size,dim)
    % cumulative sum along dim; differencing its values at the end of each
    % slice gives per-slice sums, which are then divided by slice_size
    s = cumsum(x,dim);
    idx1 = repmat({':'},1,ndims(x));
    idx2 = repmat({':'},1,ndims(x));
    idx1{dim} = slice_size;                          % end of the first slice
    idx2{dim} = slice_size:slice_size:size(x,dim);   % end of every slice
    y = cat(dim,s(idx1{:}),diff(s(idx2{:}),[],dim))/slice_size;
end
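For instance, a usage sketch (my addition, not in the original answer), which should reproduce the loop result on the example array:
x = reshape(1:30*30*20*300, 30, 30, 20, 300);
y = slicedmean(x, 10, 4);   % average every 10 elements along dim 4
size(y)                     % 30 30 20 30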
Here is a generic solution, using the accumarray function. I haven't tested how fast it is. There might be some room for improvement though.
Basically, accumarray groups the values in x according to a matrix of customized indices built for your question.
x = reshape(1:30*30*20*300,30,30,20,300);
s = size(x);
% parameters for averaging
dimAv = 4;
n = 10;
% get linear index
ix = (1:numel(x))';
% transform them to a matrix of index per dimension
% this is a customized version of ind2sub
pcum = [1 cumprod(s(1:end-1))];
sub = zeros(numel(ix),numel(s));
for i = numel(s):-1:1
    ixtmp = rem(ix-1, pcum(i)) + 1;
    sub(:,i) = (ix - ixtmp)/pcum(i) + 1;
    ix = ixtmp;
end
% correct index for the given dimension
sub(:,dimAv) = floor((sub(:,dimAv)-1)/n)+1;
% run the accumarray to compute the average
sout = s;
sout(dimAv) = ceil(sout(dimAv)/n);
y = accumarray(sub, x(:), sout, @mean);
If you need a faster and more memory-efficient operation, you'll have to write your own mex function. It shouldn't be too difficult, I think!
I have a question. I need to generate a random sequence of data samples of increasing size from a specified normal distribution. This part I think I am doing OK in my code:
N = 2;
array1 = zeros(1,N);
array2 = zeros(1,1000);
array3 = zeros(1,1500);
normal_mu = 5;
normal_sigma = 3;
pd2 = makedist('Normal','mu',normal_mu,'sigma',normal_sigma);
for i = 1:500
    array1 = zeros(1,N);
    lastN = N;
    for a = 1:N
        array1(a) = random(pd2);
    end
    if i > 1
        for b = lastN-1:N
            array2(b) = random(pd2);
        end
    else
        for b = 1:N
            array2(b) = array1(b);
        end
    end
    Mean = mean(array2);
    array3(i) = Mean;
    N = N + 2;
end
figure, plot(array3,'*');
The problem is the next bit. I have to start with a randomly generated first sample of n and each sample in the sequence should contain all the data values contained in the previous samples plus n further data values.
I have no idea where to start with this; I would appreciate any help at all!
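A minimal sketch of one way to start (my own illustration, not from the original post; it assumes each sample should be the previous sample plus n fresh draws from the same distribution):
% Sketch: each sample keeps all earlier values and appends n new draws
pd = makedist('Normal', 'mu', 5, 'sigma', 3);
n = 2;                       % how many new values per step
nSteps = 500;
sample = random(pd, 1, n);   % first sample of size n
sampleMeans = zeros(1, nSteps);
sampleMeans(1) = mean(sample);
for k = 2:nSteps
    sample = [sample, random(pd, 1, n)];   % previous values plus n more
    sampleMeans(k) = mean(sample);
end
figure, plot(sampleMeans, '*');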
Suppose I have the following MATLAB code:
clear; clc
Items = {'counter','item1', 'item2', 'item3', 'item4'};
a = rand(8,4);
j = (1:8)';
t = table(j,a(:,1), a(:,2), a(:,3), a(:,4),'VariableNames',Items)
I would like to know if there is a more elegant way to extend this when I have, e.g., 20 items. Following this code, I would have to pass every single a(:,i), i = 1,...,20, to table, and do the same for the Items list. I guess there is a more convenient way than this.
See array2table:
a = rand(8,4);
[l, w] = size(a);
j = 1:l;
Items = cell(1, w + 1);
Items{1} = 'counter';
for ii = 2:length(Items)
    Items{ii} = sprintf('item%u', ii - 1);
end
t = array2table([j', a], 'VariableNames', Items);
Edit: It seems like there's a lot of overhead associated with array2table. It's essentially a wrapper for mat2cell so there might be a speed benefit to just using that on its own and skipping all the error checking. Haven't tested it out though.
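For reference, here is a sketch of that idea (my own untested illustration, reusing j, a, l, w and Items from the snippet above; not benchmarked):
% Split the matrix into one cell per column with mat2cell, then build the
% table directly, bypassing array2table's extra checks
data = [j', a];
cols = mat2cell(data, l, ones(1, w + 1));
t2 = table(cols{:}, 'VariableNames', Items);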
You can make a cell array holding all the columns of a.
acell = {};
for i=1:size(a,2)
    acell{end+1} = a(:,i);
end
and then call
table(j,acell{:},'VariableNames',Items)
Here is an example:
Items = {'counter'};
a = rand(8,6);
j = (1:8)';
acell = {};
for i=1:size(a,2)
    acell{end+1} = a(:,i);
    Items{end+1} = ['item',num2str(i)];
end
t = table(j,acell{:},'VariableNames',Items);
This is a follow-up question to How to append an element to an array in MATLAB? That question addressed how to append an element to an array. Two approaches are discussed there:
A = [A elem] % for a row array
A = [A; elem] % for a column array
and
A(end+1) = elem;
The second approach has the obvious advantage of being compatible with both row and column arrays.
However, this question is: which of the two approaches is fastest? My intuition tells me that the second one is, but I'd like some evidence for or against that. Any idea?
The second approach (A(end+1) = elem) is faster
According to the benchmarks below (run with the timeit benchmarking function from File Exchange), the second approach (A(end+1) = elem) is faster and should therefore be preferred.
Interestingly, though, the performance gap between the two approaches is much narrower in older versions of MATLAB than it is in more recent versions.
(Benchmark plots for R2008a and R2013a not shown.)
Benchmark code
function benchmark
    n = logspace(2, 5, 40);
    % n = logspace(2, 4, 40);
    tf = zeros(size(n));
    tg = tf;
    for k = 1 : numel(n)
        x = rand(round(n(k)), 1);
        f = @() append(x);
        tf(k) = timeit(f);
        g = @() addtoend(x);
        tg(k) = timeit(g);
    end
    figure
    hold on
    plot(n, tf, 'bo')
    plot(n, tg, 'ro')
    hold off
    xlabel('input size')
    ylabel('time (s)')
    leg = legend('y = [y, x(k)]', 'y(end + 1) = x(k)');
    set(leg, 'Location', 'NorthWest');
end

% Approach 1: y = [y, x(k)];
function y = append(x)
    y = [];
    for k = 1 : numel(x)
        y = [y, x(k)];
    end
end

% Approach 2: y(end + 1) = x(k);
function y = addtoend(x)
    y = [];
    for k = 1 : numel(x)
        y(end + 1) = x(k);
    end
end
How about this?
function somescript
    RStime = timeit(@RowSlow)
    CStime = timeit(@ColSlow)
    RFtime = timeit(@RowFast)
    CFtime = timeit(@ColFast)

    function RowSlow
        rng(1)
        A = zeros(1,2);
        for i = 1:1e5
            A = [A rand(1,1)];
        end
    end

    function ColSlow
        rng(1)
        A = zeros(2,1);
        for i = 1:1e5
            A = [A; rand(1,1)];
        end
    end

    function RowFast
        rng(1)
        A = zeros(1,2);
        for i = 1:1e5
            A(end+1) = rand(1,1);
        end
    end

    function ColFast
        rng(1)
        A = zeros(2,1);
        for i = 1:1e5
            A(end+1) = rand(1,1);
        end
    end
end
For my machine, this yields the following timings:
RStime =
30.4064
CStime =
29.1075
RFtime =
0.3318
CFtime =
0.3351
The orientation of the vector does not seem to matter much, but the second approach is about a factor of 100 faster on my machine.
In addition to the fast growing method pointed out above (i.e., A(end+1)), you can also get a speed increase by growing the array size by some multiple, so that reallocations become less frequent as the size increases.
On my laptop using R2014b, a conditional doubling of size results in about a factor of 6 speed increase:
>> SO
GATime =
0.0288
DWNTime =
0.0048
In a real application, A would need to be trimmed back to the needed size, or the unfilled entries filtered out in some way.
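For example, a sketch of that trimming step (my addition, assuming the doubling loop below has filled entries 1 through k+1):
% Trim the over-allocated array back to the entries actually written
A = A(1:k+1);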
The code for the SO function is below. I note that I switched to cos(k) since, for some unknown reason, there is a large difference in performance between rand() and rand(1,1) on my machine, but I don't think this affects the outcome too much.
function [] = SO()
    GATime = timeit(@GrowAlways)
    DWNTime = timeit(@DoubleWhenNeeded)
end

function [] = DoubleWhenNeeded()
    A = 0;
    sizeA = 1;
    for k = 1:1E5
        if ((k+1) > sizeA)
            A(2*sizeA) = 0;
            sizeA = 2*sizeA;
        end
        A(k+1) = cos(k);
    end
end

function [] = GrowAlways()
    A = 0;
    for k = 1:1E5
        A(k+1) = cos(k);
    end
end