Repeat elements of vector [duplicate] - matlab

This question already has answers here:
Repeat copies of array elements: Run-length decoding in MATLAB
(5 answers)
Closed 8 years ago.
I have a vector A of values, indexed by i, for example:
A = [0.1 0.2 0.3 0.4 0.5];
and say r = [5 2 3 2 1];
Now I want to create a new vector Anew containing r(i) copies of A(i), such that the first r(1) = 5 entries of Anew equal A(1) and the length of the new vector is sum(r). Thus:
Anew = [0.1 0.1 0.1 0.1 0.1 0.2 0.2 0.3 0.3 0.3 0.4 0.4 0.5]
I am sure this can be done with an elaborate for loop using e.g. repmat, but does anyone know a smoother way?

As far as I'm aware, there is no equivalent built-in function in the MATLAB release I'm using, though R has rep that does exactly this... so jealous.
In any case, the straightforward route is a for loop with repmat, as you suggested. However, you can use arrayfun instead if you want a one-liner... well, technically two lines, because of the post-processing required to get the result into a single vector. As such, you can try this:
Anew = arrayfun(@(x) repmat(A(x), r(x), 1), 1:numel(A), 'uni', 0); % 'uni',0 returns a cell array
Anew = vertcat(Anew{:}); % stack the cells into one column vector
This essentially does the for loop and concatenation of the replicated vectors with less code. We go through each pair of values in A and r and spit out replicated vectors. Each of them will be in a cell array, which is why vertcat is required to put it all into one vector.
We get:
Anew =
0.1000
0.1000
0.1000
0.1000
0.1000
0.2000
0.2000
0.3000
0.3000
0.3000
0.4000
0.4000
0.5000
Take note that other people have tried something similar to what you're doing in this post: A similar function to R's rep in MATLAB. This is essentially mimicking R's rep, which is what you want to do!
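Note also that newer MATLAB releases (R2015a and later) added repelem, which performs exactly this run-length decoding; a minimal sketch, assuming such a release is available:
A = [0.1 0.2 0.3 0.4 0.5];
r = [5 2 3 2 1];
Anew = repelem(A, r)   % each A(i) repeated r(i) times, length sum(r)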
Alternative - Using for loops
Because of @Divakar's benchmarking, I was curious to see how pre-allocating the array and then using an actual for loop to iterate through A and r, populating the array by indexing, would perform. The equivalent code to the above using a for loop and indexing is:
Anew = zeros(sum(r), 1);
counter = 1;
for idx = 1 : numel(r)
Anew(counter : counter + r(idx) - 1) = A(idx);
counter = counter + r(idx);
end
The variable counter keeps track of where the next block of elements should be written in Anew; after each iteration it is advanced by r(idx), the number of copies just written.
As such, this method avoids repmat entirely and uses plain indexing to generate the replicated runs.
Benchmarking (à la Divakar)
Building on top of Divakar's benchmarking code, I actually tried running all of the tests on my machine, in addition to the for loop approach. I simply used his benchmarking code with the same test cases.
These are the timing results I get per algorithm:
Case #1 - N = 4000, max_repeat = 4000
------------------- With arrayfun
Elapsed time is 1.202805 seconds.
------------------- With cumsum
Elapsed time is 1.691591 seconds.
------------------- With bsxfun
Elapsed time is 0.835201 seconds.
------------------- With for loop
Elapsed time is 0.136628 seconds.
Case #2 - N = 10000, max_repeat = 1000
------------------- With arrayfun
Elapsed time is 2.117631 seconds.
------------------- With cumsum
Elapsed time is 1.080247 seconds.
------------------- With bsxfun
Elapsed time is 0.540892 seconds.
------------------- With for loop
Elapsed time is 0.127728 seconds.
In Case #2, cumsum actually beats arrayfun... which is what I originally expected. bsxfun beats everything else except the for loop. My guess for the differing arrayfun times between myself and Divakar is that we are running the code on different architectures. I ran my tests using MATLAB R2013a on a Mac OS X 10.9.5 MacBook Pro.
As we can see, the for loop is much quicker. I know for a fact that when it comes to indexing operations in a for loop, the JIT kicks in and gives you better performance.

First think of forming an index vector [1 1 1 1 1 2 2 3 3 3 4 4 5]. Noticing the regular increments here makes me think of cumsum: we can get these steps by putting ones at the correct locations in a zeros vector, [1 0 0 0 0 1 0 1 0 0 1 0 1], and those locations come from a cumsum of the input list r. After adjusting for end conditions and 1-based indexing, we get this:
B(cumsum(r) + 1) = 1;
idx = cumsum(B) + 1;
idx(end) = [];
A(idx)
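Tracing this on the question's data makes the steps concrete (the commented values follow directly from r = [5 2 3 2 1]):
cumsum(r) + 1            % = [6 8 11 13 14]: where each new value starts (last is one past the end)
% B = [0 0 0 0 0 1 0 1 0 0 1 0 1 1]
% cumsum(B) + 1 = [1 1 1 1 1 2 2 3 3 3 4 4 5 6]; dropping the last element gives idx
% A(idx) = [0.1 0.1 0.1 0.1 0.1 0.2 0.2 0.3 0.3 0.3 0.4 0.4 0.5]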

bsxfun based approach -
A = [0.1 0.2 0.3 0.4 0.5]
r = [5 2 3 2 1]
repeats = bsxfun(@le,[1:max(r)]',r) %// logical 2D array with ones in each column,
%// as many as the repeats for each entry
A1 = A(ones(1,max(r)),:) %// 2D matrix of all entries repeated maximum r times
%// and this resembles your repmat
out = A1(repeats) %// desired output with repeated entries
It could essentially become a two-liner -
A1 = A(ones(1,max(r)),:);
out = A1(bsxfun(@le,[1:max(r)]',r));
Output -
out =
0.1000
0.1000
0.1000
0.1000
0.1000
0.2000
0.2000
0.3000
0.3000
0.3000
0.4000
0.4000
0.5000
Benchmarking
Here are some benchmark results for the solutions presented thus far.
Benchmarking Code - Case I
%// Parameters and input data
N = 4000;
max_repeat = 4000;
A = rand(1,N);
r = randi(max_repeat,1,N);
num_runs = 10; %// no. of times each solution is repeated for better benchmarking
disp('------------------- With arrayfun')
tic
for k1 = 1:num_runs
Anew = arrayfun(@(x) repmat(A(x), r(x), 1), 1:numel(A), 'uni', 0);
Anew = vertcat(Anew{:});
end
toc, clear Anew
disp('------------------- With cumsum')
tic
for k1 = 1:num_runs
B(cumsum(r) + 1) = 1;
idx = cumsum(B) + 1;
idx(end) = [];
out1 = A(idx);
end
toc,clear B idx out1
disp('------------------- With bsxfun')
tic
for k1 = 1:num_runs
A1 = A(ones(1,max(r)),:);
out2 = A1(bsxfun(@le,[1:max(r)]',r));
end
toc
Results
------------------- With arrayfun
Elapsed time is 2.198521 seconds.
------------------- With cumsum
Elapsed time is 5.360725 seconds.
------------------- With bsxfun
Elapsed time is 2.896414 seconds.
Benchmarking Code - Case II [bigger data size but smaller max of r; rest of the code as in Case I]
%// Parameters and input data
N = 10000;
max_repeat = 1000;
Results
------------------- With arrayfun
Elapsed time is 2.641980 seconds.
------------------- With cumsum
Elapsed time is 3.426921 seconds.
------------------- With bsxfun
Elapsed time is 1.858007 seconds.
Conclusions from benchmarks
For Case I, arrayfun seems like the way to go, while for Case II, bsxfun might be the weapon of choice. So it seems that the type of data you are dealing with really dictates which approach to use.

Related

Weighted Random Integers MATLAB [duplicate]

How can I randomly pick N numbers from a vector a, with a weight assigned to each number?
Let's say:
a = 1:3; % possible numbers
weight = [0.3 0.1 0.2]; % corresponding weights
In this case the probability of picking 1 should be 3 times higher than that of picking 2.
The sum of all weights can be anything (they need not be normalized).
R = randsample([1 2 3], N, true, [0.3 0.1 0.2])
randsample is included in the Statistics Toolbox
Otherwise you can use some kind of roulette-wheel selection process. See this similar question (although not MATLAB specific). Here's my one-line implementation:
a = 1:3; %# possible numbers
w = [0.3 0.1 0.2]; %# corresponding weights
N = 10; %# how many numbers to generate
R = a( sum( bsxfun(@ge, rand(N,1), cumsum(w./sum(w))), 2) + 1 )
Explanation:
Consider the interval [0,1]. We assign each element of the list (1:3) a sub-interval whose length is proportional to its weight; therefore 1 gets an interval of length 0.3/(0.3+0.1+0.2) = 0.5, and similarly for the others.
Now if we generate a random number with uniform distribution over [0,1], then any number in [0,1] has an equal probability of being picked, thus the sub-intervals' lengths determine the probability of the random number falling in each interval.
This matches what I'm doing above: pick a number X ~ U[0,1] (or rather N such numbers), then find which interval it falls into, in a vectorized way.
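For instance, with the weights from the question, the cumulative interval boundaries and the index lookup work out as follows (a small illustrative sketch):
w = [0.3 0.1 0.2];
edges = cumsum(w./sum(w))   % = [0.5000 0.6667 1.0000]
x = rand(1);                % suppose x = 0.55: it exceeds only the first edge...
idx = sum(x >= edges) + 1   % ...so idx = 2 and the sample is a(idx)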
You can check the results of the two techniques above by generating a large enough sequence N=1000:
>> tabulate( R )
Value Count Percent
1 511 51.10%
2 160 16.00%
3 329 32.90%
which more or less matches the normalized weights w./sum(w) = [0.5 0.16667 0.33333].
Amro gives a nice answer (which I upvoted), but it will be highly memory-intensive if you wish to generate many numbers from a large set, because the bsxfun operation generates a huge intermediate array that is then summed. For example, suppose you had a set of 10000 values to sample from, all with different weights, and you wanted to generate 1000000 numbers from that distribution.
This will take some work, since it generates a 10000 x 1000000 array internally, with 10^10 elements in it. It is a logical array, but even so, 10 gigabytes of RAM must be allocated.
A better solution is to use histc. Thus...
a = 1:3
w = [.3 .1 .2];
N = 10;
[~,R] = histc(rand(1,N),cumsum([0;w(:)./sum(w)]));
R = a(R)
R =
1 1 1 2 2 1 3 1 1 1
And for a large problem of the size I suggested above, it is fast:
a = 1:10000;
w = rand(1,10000);
N = 1000000;
tic
[~,R] = histc(rand(1,N),cumsum([0;w(:)./sum(w)]));
R = a(R);
toc
Elapsed time is 0.120879 seconds.
Admittedly, my version takes 2 lines to write: the indexing operation must happen on a second line since it uses the second output of histc. Also note that I've used the ability of newer MATLAB releases to put the tilde (~) operator in place of the first output of histc, which discards that output immediately.
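As an aside, on newer MATLAB releases (R2015a and later) discretize returns the same bin indices without the throw-away first output; a sketch, assuming such a release and the same a, w, and N as above:
edges = cumsum([0; w(:)./sum(w)]);   % bin edges from the normalized weights
R = a(discretize(rand(N,1), edges)); % bin index per uniform draw, mapped back to a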
TL;DR
For maximum performance, if you only need a single sample, use
R = a( sum( (rand(1) >= cumsum(w./sum(w)))) + 1 );
and if you need multiple samples, use
[~, R] = histc(rand(N,1),cumsum([0;w(:)./sum(w)]));
Avoid randsample. Generating multiple samples upfront is three orders of magnitude faster than generating individual values.
Performance metrics
Since this showed up near the top of my Google search, I just wanted to add some performance metrics to show that the right solution depends very much on the value of N and the requirements of the application, and that changing the design of the application can dramatically increase performance.
For large N, or indeed N > 1:
a = 1:3; % possible numbers
w = [0.3 0.1 0.2]; % corresponding weights
N = 100000000; % number of values to generate
w_normalized = w / sum(w) % normalised weights, for indication
fprintf('randsample:\n');
tic
R = randsample(a, N, true, w);
toc
tabulate(R)
fprintf('bsxfun:\n');
tic
R = a( sum( bsxfun(@ge, rand(N,1), cumsum(w./sum(w))), 2) + 1 );
toc
tabulate(R)
fprintf('histc:\n');
tic
[~, R] = histc(rand(N,1),cumsum([0;w(:)./sum(w)]));
toc
tabulate(R)
Results:
w_normalized =
0.5000 0.1667 0.3333
randsample:
Elapsed time is 2.976893 seconds.
Value Count Percent
1 49997864 50.00%
2 16670394 16.67%
3 33331742 33.33%
bsxfun:
Elapsed time is 2.712315 seconds.
Value Count Percent
1 49996820 50.00%
2 16665005 16.67%
3 33338175 33.34%
histc:
Elapsed time is 2.078809 seconds.
Value Count Percent
1 50004044 50.00%
2 16665508 16.67%
3 33330448 33.33%
In this case, histc is fastest.
However, in the case where it is not possible to generate all N values up front, perhaps because the weights are updated on each iteration (i.e. N = 1):
a = 1:3; % possible numbers
w = [0.3 0.1 0.2]; % corresponding weights
I = 100000; % number of values to generate
w_normalized = w / sum(w) % normalised weights, for indication
R = zeros(I,1);
fprintf('randsample:\n');
tic
for i=1:I
R(i) = randsample(a, 1, true, w);
end
toc
tabulate(R)
fprintf('cumsum:\n');
tic
for i=1:I
R(i) = a( sum( (rand(1) >= cumsum(w./sum(w)))) + 1 );
end
toc
tabulate(R)
fprintf('histc:\n');
tic
for i=1:I
[~, R(i)] = histc(rand(1),cumsum([0;w(:)./sum(w)]));
end
toc
tabulate(R)
Results:
w_normalized =
0.5000 0.1667 0.3333
randsample:
Elapsed time is 3.526473 seconds.
Value Count Percent
1 50437 50.44%
2 16149 16.15%
3 33414 33.41%
cumsum:
Elapsed time is 0.473207 seconds.
Value Count Percent
1 50018 50.02%
2 16748 16.75%
3 33234 33.23%
histc:
Elapsed time is 1.046981 seconds.
Value Count Percent
1 50134 50.13%
2 16684 16.68%
3 33182 33.18%
In this case, the custom cumsum approach (based on the bsxfun version) is fastest.
In any case, randsample certainly looks like a bad choice all round. It also goes to show that if an algorithm can be arranged to generate all random values upfront then it will perform much better (note that three orders of magnitude fewer values are generated in the N = 1 case in a similar execution time).
Code is available here.
Amro has a really nice answer for this topic. However, one might want a super-fast implementation to sample from huge PDFs whose domain contains several thousand values. For such scenarios, it can be costly to apply bsxfun and cumsum very frequently. Motivated by gnovice's answer, it makes sense to implement the roulette-wheel algorithm with a run-length-encoding scheme. I benchmarked it against Amro's solution:
%% Toy example: generate random numbers from an arbitrary PDF
a = 1:3; %# domain of PDF
w = [0.3 0.1 0.2]; %# Probability Values (Weights)
N = 10000; %# Number of random generations
%Generate using roulette wheel + run length encoding
factor = 1 / min(w); %Compute min factor to assign 1 bin to min(PDF)
intW = int32(w * factor); %Get replicator indexes for run length encoding
idxArr = zeros(1,sum(intW)); %Create index access array
idxArr([1 cumsum(intW(1:end-1))+1]) = 1;%Tag sample change indexes
sampTable = a(cumsum(idxArr)); %Create lookup table filled with samples
len = size(sampTable,2);
tic;
R = sampTable( uint32(randi([1 len],N,1)) );
toc;
tabulate(R);
Some evaluations of the code above for very large data, where the domain of the PDF is very long:
a ~ 15000, n = 10000
Without table: Elapsed time is 0.006203 seconds.
With table: Elapsed time is 0.003308 seconds.
ByteSize(sampTable) 796.23 kb
a ~ 15000, n = 100000
Without table: Elapsed time is 0.003510 seconds.
With table: Elapsed time is 0.002823 seconds.
a ~ 35000, n = 10000
Without table: Elapsed time is 0.226990 seconds.
With table: Elapsed time is 0.001328 seconds.
ByteSize(sampTable) 2.79 Mb
a ~ 35000 n = 100000
Without table: Elapsed time is 2.784713 seconds.
With table: Elapsed time is 0.003452 seconds.
a ~ 35000 n = 1000000
Without table: bsxfun: out of memory
With table : Elapsed time is 0.021093 seconds.
The idea is to create a run-length-encoded table in which frequent values of the PDF are replicated more than infrequent ones. We then draw a uniformly distributed index into this weighted sample table and return the corresponding value.
It is memory-intensive, but with this approach it can scale to PDF lengths in the hundreds of thousands, and lookups are very fast.
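For the toy weights above, the run-length-encoded lookup table works out as follows (a small sketch tracing the code):
w = [0.3 0.1 0.2];
factor = 1/min(w);        % = 10
intW = int32(w * factor)  % = [3 1 2], i.e. bins per value
% sampTable then becomes [1 1 1 2 3 3], so a uniform random index over its
% 6 entries returns 1, 2, 3 with probabilities 3/6, 1/6 and 2/6 respectively.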

Vectorised/faster form of datasample (statistics toolbox) - MATLAB [duplicate]


Find the average value between each element of the array and its immediate neighbor

Suppose I have a 1 x n vector a, and I want to find the average value of each element of a and its immediate neighbors.
What's a smart way to do this?
EX:
If
a=[0 1 2 1 0 1];
Then the "average value matrix" is:
b=[0.5 1 1.33 1 0.67 0.5];
Where the first entry of b is:
b(1) = (0+1)/2 = 0.5
b(2) = (0+1+2)/3 = 1
etc.
I would suggest doing the middle as vector ops and handling the edge conditions as scalars.
b=zeros(size(a));
b(2:end-1)=(a(1:end-2)+a(2:end-1)+a(3:end))/3;
b(1)=(a(1)+a(2))/2;
b(end)=(a(end-1)+a(end))/2;
If you get into bigger averages...
% scale and sum elements with a sliding window 3 long.
b=conv(a,[1,1,1]/3)
%
% remove the tails
b=b(2:end-1)
%
% and rescale the edge cases.
b(1)=b(1)*3/2
b(end)=b(end)*3/2
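For example, running the convolution version on the vector from the question reproduces the expected averages, including the rescaled edges:
a = [0 1 2 1 0 1];
b = conv(a,[1,1,1]/3);
b = b(2:end-1);
b(1) = b(1)*3/2; b(end) = b(end)*3/2
% b = 0.5000 1.0000 1.3333 1.0000 0.6667 0.5000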
I compared the first method above (vector ops), the convolution method, and the hankel method suggested by RDizzl3. (Sorry Luis, I don't have the Statistics Toolbox, though I expect the nanmean method to be slower due to the amount of condition checking.) The comparison used a random a vector of length 10000 to make the timing significant. b was initialized to a zeros vector of the correct size before these timings were done, and the hankel index matrix h of the correct size was precomputed as well.
% hankel method
tic; b(1)=mean(a([1,2])); b(2:(n-1))=mean(a(h),2); b(n)=mean(a([n-1,n])); toc
Elapsed time is 0.001698 seconds.
% convolution method
tic; c=conv(a,[1,1,1]/3) ; b=c(2:(2+n-1)); b(1)=b(1)*3/2; b(n)=b(n)*3/2; toc;
Elapsed time is 0.000339 seconds.
% vector method
tic; b(1)=mean(a([1,2])); b(2:(n-1))=(a(1:(n-2))+a(2:(n-1))+a(3:n))/3; b(n)=mean(a([n-1,n])); toc
Elapsed time is 0.000914 seconds.
I repeated the above 3 more times and sorted the results,
hankel convolution vector
9.2500e-04 3.3900e-04 7.2600e-04
1.3820e-03 5.2600e-04 8.7100e-04
1.6980e-03 5.5200e-04 9.1400e-04
2.1570e-03 5.5300e-04 2.6390e-03
I am a little surprised; I didn't expect the convolution approach to pull ahead until larger window sizes, but it consistently did best here.
Note that if you are using smaller data sets these timings probably aren't representative. I wouldn't be at all surprised if the hankel approach works better when the interest is in large numbers of shorter vectors.
You can use this:
a=[0 1 2 1 0 1];
n = numel(a);
h = hankel(1:(n-2),(n-2):n);
b(1) = mean(a([1 2]))
b(2:(n-1)) = mean(a(h),2);
b(n) = mean(a([n-1 n]))
This will return the vector:
b = [0.5000 1.0000 1.3333 1.0000 0.6667 0.5000]
This takes each element of the vector a and averages it with its neighbors, so:
b(1) = (0+1)/2 = 0.5
b(2) = (0+1+2)/3 = 1
b(3) = (1+2+1)/3 = 1.3333
b(4) = (2+1+0)/3 = 1
b(5) = (1+0+1)/3 = 0.6667
b(6) = (0+1)/2 = 0.5 % last element
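To see why this works, here is the index matrix for this example (n = 6):
h = hankel(1:4, 4:6)
% h =
%     1     2     3
%     2     3     4
%     3     4     5
%     4     5     6
% so a(h) gathers each interior element together with its two neighbours,
% and mean(a(h),2) averages along each row.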
a = [0 1 2 1 0 1]; %// data
n = 1; %// how many neighbours to consider on each side
a2 = [NaN(1,n) a NaN(1,n)]; %// pad with NaN's (which will be ignored by nanmean)
b = arrayfun(@(k) nanmean(a2(k-n:k+n)), n+1:n+numel(a)); %// apply a
%// sliding-window mean ignoring NaN's
Easiest way: use the smooth filter,
output = smooth(a,3,'moving');
where 3 is the window size (it should be an odd value). Note that smooth is part of the Curve Fitting Toolbox, and its handling of the end points may differ from the two-point edge averages above.
Check the documentation for the smooth function:
https://www.mathworks.com/help/curvefit/smooth.html

Weighted random numbers in MATLAB


What's the most efficient/elegant way to delete elements from a matrix in MATLAB?

I want to delete several specific values from a matrix (if they exist). It is highly probable that there are multiple copies of the values in the matrix.
For example, consider an N-by-2 matrix intersections. If the pairs of values [a b] and [c d] exist as rows in that matrix, I want to delete them.
Let's say I want to delete rows like [-2.0 0.5] and [7 7] in the following matrix:
intersections =
-4.0000 0.5000
-2.0000 0.5000
2.0000 3.0000
4.0000 0.5000
-2.0000 0.5000
So that after deletion I get:
intersections =
-4.0000 0.5000
2.0000 3.0000
4.0000 0.5000
What's the most efficient/elegant way to do this?
Try this one-liner (where A is your intersections matrix and B is the row to remove):
A = [-4.0 0.5;
-2.0 0.5;
2.0 3.0;
4.0 0.5;
-2.0 0.5];
B = [-2.0 0.5];
A = A(~all(A == repmat(B,size(A,1),1),2),:);
Then just repeat the last line for each new B you want to remove.
EDIT:
...and here's another option:
A = A((A(:,1) ~= B(1)) | (A(:,2) ~= B(2)),:);
WARNING: The answers here are best used for cases where small floating point errors are not expected (i.e. with integer values). As noted in this follow-up question, using the "==" and "~=" operators can cause unwanted results. In such cases, the above options should be modified to use relational operators instead of equality operators. For example, the second option I added would be changed to:
tolerance = 0.001; % Or whatever limit you want to set
A = A((abs(A(:,1)-B(1)) > tolerance) | (abs(A(:,2)-B(2)) > tolerance),:);
Just a quick heads-up! =)
SOME RUDIMENTARY TIMING:
In case anyone was really interested in efficiency, I just did some simple timing for three different ways to get the subindex for the matrix (the two options I've listed above and Fanfan's STRMATCH option):
>> % Timing for option #1 indexing:
>> tic; for i=1:10000, index = ~all(A == repmat(B,size(A,1),1),2); end; toc;
Elapsed time is 0.262648 seconds.
>> % Timing for option #2 indexing:
>> tic; for i=1:10000, index = (A(:,1) ~= B(1)) | (A(:,2) ~= B(2)); end; toc;
Elapsed time is 0.100858 seconds.
>> % Timing for STRMATCH indexing:
>> tic; for i=1:10000, index = strmatch(B,A); end; toc;
Elapsed time is 0.192306 seconds.
As you can see, the STRMATCH option is faster than my first suggestion, but my second suggestion is the fastest of all three. Note however that my options and Fanfan's do slightly different things: my options return logical indices of the rows to keep, and Fanfan's returns linear indices of the rows to remove. That's why the STRMATCH option uses the form:
A(index,:) = [];
while mine use the form:
A = A(index,:);
However, my indices can be negated to use the first form (indexing rows to remove):
A(all(A == repmat(B,size(A,1),1),2),:) = []; % For option #1
A((A(:,1) == B(1)) & (A(:,2) == B(2)),:) = []; % For option #2
The simple solution here is to look to set membership functions, i.e., setdiff, union, and ismember.
A = [-4 0.5;
-2 0.5;
2 3;
4 0.5;
-2 0.5];
B = [-2 .5;7 7];
See what ismember does with the two arrays. Use the 'rows' option.
ismember(A,B,'rows')
ans =
0
1
0
0
1
Since we wish to delete rows of A that are also in B, just do this:
A(ismember(A,B,'rows'),:) = []
A =
-4 0.5
2 3
4 0.5
Beware that the set membership functions look for an EXACT match. Integers, or multiples of 1/2 such as those in A, satisfy that requirement because they are represented exactly in MATLAB's floating-point arithmetic.
Had these numbers been arbitrary real floating-point values, I'd have been more careful and used a tolerance on the difference. In that case, I might have computed the interpoint distance matrix between the two sets of rows, removing a row of A only if it fell within some given distance of a row of B.
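A minimal sketch of that tolerance-based variant, assuming the Statistics Toolbox is available for pdist2 and using a hypothetical tolerance value:
tol = 1e-6;                  % assumed tolerance; pick one suited to your data
D = pdist2(A, B);            % pairwise distances between rows of A and rows of B
A(any(D < tol, 2), :) = [];  % drop rows of A within tol of any row of B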
You can also abuse the strmatch function to suit your needs (note that strmatch is deprecated in newer MATLAB releases): the following code removes all occurrences of a given row b from a matrix A
A(strmatch(b, A),:) = [];
If you need to delete more than one row, such as all rows from matrix B, iterate over them:
for b = B'
A(strmatch(b, A),:) = [];
end
Not sure when this function was introduced (I'm using R2012b), but you can just do the following (note that setdiff sorts the rows and removes duplicates, so the row order of A may change):
setdiff(A, B, 'rows')
ans =
-4.0000 0.5000
2.0000 3.0000
4.0000 0.5000
Based on:
A = [-4.0 0.5;
-2.0 0.5;
2.0 3.0;
4.0 0.5;
-2.0 0.5];
B = [-2.0 0.5];