Average of different numbers of values in a column - MATLAB

I have a question about MATLAB. I am trying to take the average of different numbers of values in a column. For example, if we have the column below,
X = [1 1 2 3 4 3 8 2 1 3 5 6 7 7 5]
first I want to take the average of 5 values at a time and plot them. In the case above, I should get three averages that I can plot. Then take 10 values at a time, and so on.
I wonder whether I have to write custom code to do this.

The fastest way is probably to rearrange your initial vector X into some matrix, with each column storing the required values to average:
A = reshape(X, N, []);
where N is the desired number of rows in the new matrix, and the empty brackets ([]) tell MATLAB to calculate the number of columns automatically. Then you can average each column using mean:
X_avg = mean(A);
Vector X_avg stores the result. This can be done in one line like so:
X_avg = mean(reshape(X, N, []));
Note that the number of elements in X has to be divisible by N; otherwise you'll have to either pad it first (e.g. with zeros), or handle the "leftover" tail elements separately:
tail = mod(numel(X), N);
X_avg = mean(reshape(X(1:numel(X) - tail), N, [])); % Compute average values
if tail > 0
    X_avg(end + 1) = mean(X(end - tail + 1:end));   % Handle leftover elements
end
Later on you can put this code in a loop, computing and plotting the average values for a different value of N in each iteration.
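Such a loop might look like this (a rough sketch; the block sizes 5, 10 and 15 are just an assumption):
X = [1 1 2 3 4 3 8 2 1 3 5 6 7 7 5];
figure; hold on;
for N = [5 10 15]                                        % assumed block sizes
    tail = mod(numel(X), N);
    X_avg = mean(reshape(X(1:numel(X) - tail), N, [])); % block averages
    if tail > 0
        X_avg(end + 1) = mean(X(end - tail + 1:end));   % leftover elements
    end
    plot(X_avg, 'o-');                                  % one curve per block size
end
hold off;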
Example #1
X = [1 1 2 3 4 3 8 2 1 3 5 6 7 7 5];
N = 5;
tail = mod(numel(X), N);
X_avg = mean(reshape(X(1:numel(X) - tail), N, []))
if tail > 0
    X_avg(end + 1) = mean(X(end - tail + 1:end))
end
The result is:
X_avg =
2.2000 3.4000 6.0000
Example #2
Here's another example (this time the length of X is not divisible by N):
X = [1 1 2 3 4 3 8 2 1 3 5 6 7 7 5];
N = 10;
tail = mod(numel(X), N);
X_avg = mean(reshape(X(1:numel(X) - tail), N, []))
if tail > 0
    X_avg(end + 1) = mean(X(end - tail + 1:end))
end
The result is:
X_avg =
2.8000 6.0000

This should do the trick:
For a selected N (the number of values you want to take the average of):
N = 5;
mean_vals = arrayfun(@(n) mean(X(n-1+(1:N))),1:N:length(X))
Note: this does not guard against an "Index exceeds matrix dimensions" error.
If you want to skip the last numbers, this should work:
mean_vals = arrayfun(@(n) mean(X(n-1+(1:N))),1:N:(length(X)-mod(length(X),N)));
To add the remaining values:
if mod(length(X),N) ~= 0
mean_vals(end+1) = mean(X(numel(X)+1-mod(length(X),N):end))
end
UPDATE: This is a modification of Eitan's first answer (before it was edited). It uses nanmean(), which takes the mean of all values that are not NaN. So, instead of filling the remaining rows with zeros, fill them with NaN, and just take the mean.
X = [X(:); NaN(mod(N - numel(X), N), 1)];
X_avg = nanmean(reshape(X, N, []));
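For example, with the same X and N = 10 as above, this sketch reproduces the earlier result:
X = [1 1 2 3 4 3 8 2 1 3 5 6 7 7 5];
N = 10;
X = [X(:); NaN(mod(N - numel(X), N), 1)];  % pad to a multiple of N with NaN
X_avg = nanmean(reshape(X, N, []))         % 2.8000 6.0000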

It would be helpful if you posted some code and pointed out exactly what is not working.
As a first pointer: if
X = [1 1 2 3 4 3 8 2 1 3 5 6 7 7 5]
the three means in blocks of 5 you are interested in are
mean(X(1:5))
mean(X(6:10))
mean(X(11:15))
You will have to come up with a for loop or maybe some other way to iterate through the indices.
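A minimal version of such a loop might look like this (a sketch, assuming the length of X is a multiple of the block size N):
X = [1 1 2 3 4 3 8 2 1 3 5 6 7 7 5];
N = 5;
nBlocks = numel(X) / N;            % assumes numel(X) is divisible by N
block_means = zeros(1, nBlocks);
for k = 1:nBlocks
    block_means(k) = mean(X((k-1)*N + 1 : k*N));  % mean of the k-th block of N values
end
block_means                        % 2.2000 3.4000 6.0000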

I think you want something like this (I haven't used MATLAB in a while, so I hope the syntax is right):
X = [1 1 2 3 4 3 8 2 1 3 5 6 7 7 5];
currentAmount = 5;
block = 0;
while (currentAmount <= numel(X))
    while (block*currentAmount + currentAmount <= numel(X))
        mean(X(block*currentAmount + 1 : block*currentAmount + currentAmount)) % display the mean of this block
        block = block + 1;
    end
    currentAmount = currentAmount + 5;
    block = 0;
end
This code will first loop through all elements calculating means of 5 elements at a time. Then, it will expand to 10 elements. Then to 15, and so on, until the number of elements from which you want to make the mean is bigger than the number of elements in the column.

If you are looking to average K random samples in your N-dimensional vector, then you could use:
N = length(X);
K = 20; % or 10, or 30, or any integer less than or equal to N
indices = randperm(N, K); % gives you K random indices from the range 1:N
result = mean(X(indices)); % averages the values of X at the K random
% indices from above
A slightly more compact form would be:
K = 20;
result = mean(X(randperm(length(X), K)));
If you are just looking to take every K consecutive samples from the list and average them then I am sure one of the previous answers will give you what you want.

If you need to do this operation a lot, it might be worth writing your own function for it. I would recommend using @EitanT's basic idea: pad the data, reshape, take mean of each column. However, rather than including the zero-padded numbers at the end, I recommend taking the average of the "straggling" data points separately:
function m = meanOfN(x, N)
% function m = meanOfN(x, N)
% create groups of N elements of vector x
% and return their mean
% if numel(x) is not a multiple of N, the last value returned
% will be for fewer than N elements
Nf = N * floor( numel( x ) / N ); % largest multiple of N <= length of x
xr = reshape( x( 1:Nf ), N, []);
m = mean(xr);
if Nf < numel(x)
m = [m mean( x( Nf + 1:end ) )];
end
This function will return exactly what you were asking for: in the case of a 15 element vector with N=5, it returns 3 values. When the size of the input vector is not a multiple of N, the last value returned will be the "mean of what is left".
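For the vector from the original question, a quick usage sketch (assuming meanOfN.m is on the path) would be:
X = [1 1 2 3 4 3 8 2 1 3 5 6 7 7 5];
meanOfN(X, 5)   % three full blocks: 2.2000 3.4000 6.0000
meanOfN(X, 10)  % one full block plus the mean of the 5 leftover elements: 2.8000 6.0000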
Often when you need to take the mean of a set of numbers, it is the "running average" that is of interest. So rather than getting [mean(x(1:5)) mean(x(6:10)) mean(x(11:15))], you might want
m(1) = mean(x(1:N));
m(2) = mean(x(2:N+1));
m(3) = mean(x(3:N+2));
...etc
That could be achieved using a simple convolution of your data with a vector of ones; for completeness, here is a possible way of coding that:
function m = meansOfN(x, n)
% function m = meansOfN(x, n)
% taking the running mean of the values in x
% over n samples. Returns a column vector of length numel(x) - n + 1
% if numel(x) < n, this returns an empty matrix
mv = ones(n,1) / n; % vector of ones, normalized
m = convn(x(:), mv, 'valid'); % perform 1D convolution
With these two functions in your path (save them in files called meanOfN.m and meansOfN.m, respectively), you can do anything you want. In any program you will be able to write
myMeans = meanOfN(1:30, 5);
myMeans2 = meansOfN(1:30, 6);
etc. Matlab will find the function, perform the calculation, return the result. Writing your custom functions for specific operations like this can be very helpful - not only does it keep your code clean, but you only have to test the function once...

Related

Take a random draw of all possible pairs of indices in Matlab

Consider a Matlab matrix B which lists all possible unordered pairs (without repetitions) from [1 2 ... n]. For example, if n=4,
B=[1 2;
1 3;
1 4;
2 3;
2 4;
3 4]
Note that B has size n(n-1)/2 x 2
I want to take a random draw of m rows from B and store them in a matrix C. Continuing the example above, I could do that as
m=2;
C=B(randi([1 size(B,1)],m,1),:);
However, in my actual case, n=371293. Hence, I cannot create B and, then, run the code above to obtain C. This is because storing B would require a huge amount of memory.
Could you advise on how I could proceed to create C, without having to first store B? Comments on a different question suggest to
Draw at random m integers between 1 and n(n-1)/2.
I=randi([1 n*(n-1)/2],m,1);
Use ind2sub to obtain C.
Here, I'm struggling to implement the second step.
Thanks to the comments below, I wrote this
n=4;
m=10;
coord=NaN(m,2);
R= randi([1 n^2],m,1);
for i=1:m
[cr, cc]=ind2sub([n,n],R(i));
if cr>cc
coord(i,1)=cc;
coord(i,2)=cr;
elseif cr<cc
coord(i,1)=cr;
coord(i,2)=cc;
end
end
coord(any(isnan(coord),2),:) = []; %delete NaN rows from coord
I guess there are more efficient ways to implement the same thing.
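For reference, the same rejection idea can be written without the explicit loop (a sketch; as in the loop above, fewer than m rows may survive the rejection):
n = 4;
m = 10;
R = randi([1 n^2], m, 1);
[cr, cc] = ind2sub([n n], R);
keep = cr ~= cc;                                             % reject draws on the diagonal
coord = [min(cr(keep), cc(keep)), max(cr(keep), cc(keep))];  % order each surviving pair as (small, large)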
You can use the function named myind2ind in this post to take random rows of all possible unordered pairs without generating all of them.
function [R , C] = myind2ind(ii, N)
jj = N * (N - 1) / 2 + 1 - ii;
r = (1 + sqrt(8 * jj)) / 2;
R = N -floor(r);
idx_first = (floor(r + 1) .* floor(r)) / 2;
C = idx_first-jj + R + 1;
end
I=randi([1 n*(n-1)/2],m,1);
[C1 C2] = myind2ind (I, n);
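For small n, myind2ind can be sanity-checked against an explicit enumeration (a sketch; only feasible when nchoosek(1:n, 2) fits in memory):
n = 5;
B = nchoosek(1:n, 2);                    % all unordered pairs, one per row
[R, C] = myind2ind((1:size(B,1)).', n);  % decode every linear index
isequal([R, C], B)                       % should return true (logical 1)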
If you look at the odds, for i=1:n-1, the number of combinations where the first value is equal to i is (n-i), and the total number of combinations is n*(n-1)/2. You can use this law to generate the first column of C. The values of the second column of C can then be generated randomly as integers uniformly distributed in the range [i+1, n]. Here is a code that performs the desired tasks:
clc; clear all; close all;
% Parameters
n = 371293; m = 10;
% Generation of C
R = rand(m,1);
C = zeros(m,2);
s = 0;
t = n*(n-1)/2;
for i=1:n-1
if (i<n-1)
ind_i = R>=s/t & R<(s+n-i)/t;
else % To avoid rounding errors for n>>1, we impose (s+n-i)=t at the last iteration (R<(s+n-i)/t=1 always true)
ind_i = R>=s/t;
end
C(ind_i,1) = i;
C(ind_i,2) = randi([i+1,n],sum(ind_i),1);
s = s+n-i;
end
% Display
C
Output:
C =
84333 266452
46609 223000
176395 328914
84865 94391
104444 227034
221905 302546
227497 335959
188486 344305
164789 266497
153603 354932
Good luck!

Matlab get all possible combinations less than a value

I have a matrix as follows:
id value
=============
1 0.5
2 0.5
3 0.8
4 0.3
5 0.2
From this array, I wish to find all the possible combinations that have a sum less than or equal to 1. That is,
result
======
1 2
1 4 5
2 4 5
3 5
1 5
1 4
2 4
2 5
...
In order to get the above result, my idea has been to first compute the number of possible combinations of elements in the array, like so:
for ii = 1 : length(a) % compute number of possibilities
no_of_possibilities = no_of_possibilities + nchoosek(length(a),ii);
end
Once this is done, then loop through all possible combinations.
I would like to know if there's an easier way of doing this.
data = [0.5, 0.5, 0.8, 0.3, 0.2];
required = cell(1, length(data));
subsets = cell(1, length(data));
for k = 2:length(data)-1 % removes trivial cases (all numbers or one number at a time)
% generate all possible k-pairs (if k = 3, then all possible triplets
% will be generated)
combination = nchoosek(1:length(data), k);
% for every triplet generated, this function sums the corresponding
% values and then decides whether then sum is less than equal to 1 or
% not
findRequired = @(x) sum(data(1, combination(x, :))) <= 1;
% generate a logical vector for all possible combinations like [0 1 0]
% which denotes that the 2nd combination satisfies the condition while
% the others do not
required{k} = arrayfun(findRequired, 1:size(combination, 1));
% access the corresponding combinations from the entire set
subsets{k} = combination(required{k}, :);
end
This produces the following subsets:
1 2
1 4
1 5
2 4
2 5
3 5
4 5
1 4 5
2 4 5
This is not an easy way, but it is a faster one, because combinations whose subsets fail the condition are removed early.
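Note that the snippet below assumes A is the [id value] matrix from the question; a hypothetical setup (my assumption, not part of the original answer) would be:
A = [(1:5).', [0.5 0.5 0.8 0.3 0.2].'];   % columns: id, value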
bitNo = length(A); % number of bits
setNo = 2 ^ bitNo - 1; % number of sets
subsets = logical(dec2bin(0:setNo, bitNo) - '0'); % all subsets
subsets = subsets(2:end,:); % all subsets minus empty set!
subsetCounter = 1;
resultCounter = 1;
result = {};
while(1)
if( subsetCounter >= size(subsets,1))
break;
end
if(sum(A(subsets(subsetCounter,:).',2)) <= 1)
result{resultCounter} = A(subsets(subsetCounter,:).',1).';
resultCounter = resultCounter + 1;
subsetCounter = subsetCounter + 1;
else
% remove all bad cases related to the current subset
subsets = subsets(sum((subsets & subsets(subsetCounter,:)) - subsets(subsetCounter,:),2) ~= 0,:);
end
end
Generate the subsets using this method. Then check the condition for each subset; if a subset does not pass the condition, all of its supersets are removed from subsets. This is done with sum((subsets & subsets(i,:)) - subsets(i,:),2) ~= 0, which keeps only the rows of subsets that do not contain all the elements of the failed subset. This way, many bad cases are never considered, although theoretically this code is still Θ(2^n).
Here is a potential solution, using some inefficient steps but borrowing efficient code from various SO answers. Credit goes to the original authors.
data = [0.5, 0.5, 0.8, 0.3, 0.2];
First get all combinations of indices, not necessarily using all values.
combs = bsxfun(@minus, nchoosek(1:numel(data)+numel(data)-1,numel(data)), 0:numel(data)-1);
Then get rid of repeated indices in each combination, regardless of index order
[ii, ~, vv] = find(sort(combs,2));
uniq = accumarray(ii(:), vv(:), [], @(x){unique(x.')});
Next get unique combinations, regardless of index order... NOTE: You can do this step much more efficiently by restructuring the steps, but it'll do.
B = cellfun(@mat2str,uniq,'uniformoutput',false);
[~,ia] = unique(B);
uniq=uniq(ia);
Now sum all values in data based on cell array (uniq) of index combinations
idx = cumsum(cellfun('length', uniq));
x = diff(bsxfun(@ge, [0; idx(:)], 1:max(idx)));
x = sum(bsxfun(@times, x', 1:numel(uniq)), 2); % Produce subscripts
y = data([uniq{:}]); % Obtain values
sums_data = accumarray(x, y);
And finally only keep the index combinations that sum to <= 1
allCombLessThanVal = uniq(sums_data<=1)

Vectorize lookup values in table of interval limits

Here is a question about whether we can use a vectorized operation in MATLAB to avoid writing a for loop.
I have a vector
Q = [0.1,0.3,0.6,1.0]
I generate a uniformly distributed random vector over [0,1)
X = [0.11,0.72,0.32,0.94]
I want to know whether each entry of X is in [0,0.1), [0.1,0.3), [0.3,0.6), or [0.6,1.0), and I want to return a vector containing, for each entry of X, the index of the first element of Q that the entry is less than.
I could write a for loop
Y = zeros(length(X),1)
for i = 1:1:length(X)
Y(i) = find(X(i)<Q, 1);
end
Expected result for this example:
Y = [2,4,3,4]
But I wonder if there is a way to avoid writing a for loop? (I see many very good answers to my question. Thank you so much! Now, going one step further, what if Q is a matrix, so that each entry of X is checked against its own row of Q?)
Y = zeros(length(X),1)
for i = 1:1:length(X)
Y(i) = find(X(i)<Q(i,:), 1);
end
Use the second output of max, which acts as a sort of "vectorized find":
[~, Y] = max(bsxfun(@lt, X(:).', Q(:)), [], 1);
How this works:
For each element of X, test if it is less than each element of Q. This is done with bsxfun(@lt, X(:).', Q(:)). Note each column in the result corresponds to an element of X, and each row to an element of Q.
Then, for each element of X, get the index of the first element of Q for which that comparison is true. This is done with [~, Y] = max(..., [], 1). Note that the second output of max returns the index of the first maximizer (along the specified dimension), so in this case it gives the index of the first true in each column.
For your example values,
Q = [0.1, 0.3, 0.6, 1.0];
X = [0.11, 0.72, 0.32, 0.94];
[~, Y] = max(bsxfun(@lt, X(:).', Q(:)), [], 1);
gives
Y =
2 4 3 4
Using bsxfun will help accomplish this. You'll need to read about it. I also added a 0 at the beginning of Q to handle the small-X case.
X = [0.11,0.72,0.32,0.94 0.01];
Q = [0.1,0.3,0.6,1.0];
Q_extra = [0 Q];
Diff = bsxfun(@minus, X(:)', Q_extra(:)); % vectorized subtraction
logical_matrix = diff(Diff < 0); %find the transition from neg to positive
[X_categories,~] = find(logical_matrix == true); % get indices
% output is 2 4 3 4 1
EDIT: How long does each method take?
I got curious about the difference between each solution:
Test Code Below:
Q = [0,0.1,0.3,0.6,1.0];
X = rand(1,1e3);
tic
Y = zeros(length(X),1);
for i = 1:1:length(X)
Y(i) = find(X(i)<Q, 1);
end
toc
tic
result = arrayfun(@(x)find(x < Q, 1), X);
toc
tic
Q = [0 Q];
Diff = bsxfun(@minus, X(:)', Q(:)); % vectorized subtraction
logical_matrix = diff(Diff < 0); %find the transition from neg to positive
[X_categories,~] = find(logical_matrix == true); % get indices
toc
Run it for yourself: I found that when the size of X was 1e6, bsxfun was much faster, while for smaller arrays the differences varied and were negligible.
Sample: when the size of X was 1e3
Elapsed time is 0.001582 seconds. % for loop
Elapsed time is 0.007324 seconds. % anonymous function
Elapsed time is 0.000785 seconds. % bsxfun
Octave has a function lookup to do exactly that. It takes a lookup table of sorted values and an array, and returns an array with indices for values in the lookup table.
octave> Q = [0.1 0.3 0.6 1.0];
octave> X = [0.11 0.72 0.32 0.94];
octave> lookup (Q, X)
ans =
1 3 2 3
The only issue is that your lookup table has an implicit zero, which can be fixed easily with:
octave> lookup ([0 Q], X) # alternatively, just add 1 to the results
ans =
2 4 3 4
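In MATLAB, a similar one-liner can be obtained with discretize (a sketch; it assumes R2015a or later, where discretize was introduced):
Q = [0.1 0.3 0.6 1.0];
X = [0.11 0.72 0.32 0.94];
Y = discretize(X, [0 Q])   % bin edges [0 0.1 0.3 0.6 1.0] give Y = [2 4 3 4]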
You can create an anonymous function to perform the comparison, then apply it to each member of X using arrayfun:
compareFunc = @(x)find(x < Q, 1);
result = arrayfun(compareFunc, X, 'UniformOutput', 1);
The Q array will be stored in the anonymous function (compareFunc) when the anonymous function is created.
Or, as one line (UniformOutput is the default behavior of arrayfun):
result = arrayfun(@(x)find(x < Q, 1), X);
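As a small illustration of that capture behavior (a sketch): once compareFunc has been created, later changes to Q do not affect it.
Q = [0.1 0.3 0.6 1.0];
compareFunc = @(x)find(x < Q, 1);
Q = [];            % reassigning Q has no effect on the copy stored in compareFunc
compareFunc(0.32)  % still returns 3, using the Q captured at creation time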
Octave does a neat auto-vectorization trick for you if the vectors you have are along different dimensions. If you make Q a column vector, you can do this:
X = [0.11, 0.72, 0.32, 0.94];
Q = [0.1; 0.3; 0.6; 1.0; 2.0; 3.0];
X <= Q
The result is a 6x4 matrix indicating which elements of Q each element of X is less than. I made Q a different length than X just to illustrate this:
0 0 0 0
1 0 0 0
1 0 1 0
1 1 1 1
1 1 1 1
1 1 1 1
Going back to the original example you have, you can do
length(Q) - sum(X <= Q) + 1
to get
2 4 3 4
Notice that I have semicolons instead of commas in the definition of Q. If you want to make it a column vector after defining it, do something like this instead:
length(Q) - sum(X <= Q') + 1
The reason that this works is that Octave implicitly applies bsxfun to an operation on a row and column vector. MATLAB will not do this until R2016b according to @excaza's comment, so in MATLAB you can do this:
length(Q) - sum(bsxfun(@le, X, Q)) + 1
You can play around with this example in IDEOne here.
Inspired by the solution posted by @Mad Physicist, here is my solution.
Q = [0.1,0.3,0.6,1.0]
X = [0.11,0.72,0.32,0.94]
Temp = repmat(X',1,4)<repmat(Q,4,1)
[~, ind]= max( Temp~=0, [], 2 );
The idea is to bring X and Q into the same shape, then use element-wise comparison to obtain a logical matrix whose rows tell whether a given element of X is less than each element of Q, and finally return the first nonzero index of each row of this logical matrix. I haven't tested how fast this method is compared to the other methods.
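The hard-coded 4s tie this to the example sizes; a size-agnostic sketch of the same idea (assuming X and Q are vectors) would be:
Temp = repmat(X(:), 1, numel(Q)) < repmat(Q(:).', numel(X), 1);
[~, ind] = max(Temp, [], 2)   % index of the first element of Q that each entry of X is below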

Replacing zeros (or NaNs) in a matrix with the previous element row-wise or column-wise in a fully vectorized way

I need to replace the zeros (or NaNs) in a matrix with the previous element row-wise, so basically I need this matrix X
[0,1,2,2,1,0;
5,6,3,0,0,2;
0,0,1,1,0,1]
To become like this:
[0,1,2,2,1,1;
5,6,3,3,3,2;
0,0,1,1,1,1],
please note that if the first row element is zero it will stay like that.
I know that this has been solved for a single row or column vector in a vectorized way and this is one of the nicest way of doing that:
id = find(X);
X(id(2:end)) = diff(X(id));
Y = cumsum(X)
The problem is that matrix indexing in MATLAB/Octave is linear and increments column-wise, so this works for a single row or column, but the same concept cannot be applied directly to multiple rows: it needs to be modified, because each row/column starts fresh and must be treated as independent. I've tried my best and searched everywhere but couldn't find a way out. If I apply that same idea in a loop it gets too slow, because my matrices contain at least 3000 rows. Can anyone help me with this, please?
Special case when zeros are isolated in each row
You can do it using the two-output version of find to locate the zeros and NaN's in all columns except the first, and then using linear indexing to fill those entries with their row-wise preceding values:
[ii jj] = find( (X(:,2:end)==0) | isnan(X(:,2:end)) );
X(ii+jj*size(X,1)) = X(ii+(jj-1)*size(X,1));
General case (consecutive zeros are allowed on each row)
X(isnan(X)) = 0; % handle NaN's and zeros in a unified way
aux = repmat(2.^(1:size(X,2)), size(X,1), 1) .* ...
[ones(size(X,1),1) logical(X(:,2:end))]; % positive powers of 2 or 0
col = floor(log2(cumsum(aux,2))); % col index
ind = bsxfun(@plus, (col-1)*size(X,1), (1:size(X,1)).'); % linear index
Y = X(ind);
The trick is to make use of the matrix aux, which contains 0 if the corresponding entry of X is 0 and its column number is greater than 1; or else contains 2 raised to the column number. Thus, applying cumsum row-wise to this matrix, taking log2 and rounding down (matrix col) gives the column index of the rightmost nonzero entry up to the current entry, for each row (so this is a kind of row-wise "cumulative max" function). It only remains to convert from column number to linear index (with bsxfun; could also be done with sub2ind) and use that to index X.
This is valid for moderate sizes of X only. For large sizes, the powers of 2 used by the code quickly approach realmax and incorrect indices result.
Example:
X =
0 1 2 2 1 0 0
5 6 3 0 0 2 3
1 1 1 1 0 1 1
gives
>> Y
Y =
0 1 2 2 1 1 1
5 6 3 3 3 2 3
1 1 1 1 1 1 1
You can generalize your own solution as follows:
Y = X.'; % Make a transposed copy of X
Y(isnan(Y)) = 0;
idx = find([ones(1, size(X, 1)); Y(2:end, :)]);
Y(idx(2:end)) = diff(Y(idx));
Y = reshape(cumsum(Y(:)), [], size(X, 1)).'; % Reshape back into a matrix
This works by treating the input data as a long vector, applying the original solution and then reshaping the result back into a matrix. The first column is always treated as non-zero so that the values don't propagate throughout rows. Also note that the original matrix is transposed so that it is converted to a vector in row-major order.
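As a quick check (a sketch), running that snippet on the X from the question reproduces the expected output:
X = [0 1 2 2 1 0;
5 6 3 0 0 2;
0 0 1 1 0 1];
Y = X.';
Y(isnan(Y)) = 0;
idx = find([ones(1, size(X, 1)); Y(2:end, :)]);
Y(idx(2:end)) = diff(Y(idx));
Y = reshape(cumsum(Y(:)), [], size(X, 1)).'
% Y =
%    0 1 2 2 1 1
%    5 6 3 3 3 2
%    0 0 1 1 1 1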
Modified version of Eitan's answer to avoid propagating values across rows:
Y = X';
tf = Y > 0;
tf(1,:) = true;
idx = find(tf);
Y(idx(2:end)) = diff(Y(idx));
Y = reshape(cumsum(Y(:)),fliplr(size(X)))';
x=[0,1,2,2,1,0;
5,6,3,0,1,2;
1,1,1,1,0,1];
% Doing it column by column is easier
x=x';
rm=0;
while 1
%fields to replace
l=(x==0);
%do nothing for the first row/column
l(1,:)=0;
rm2=sum(sum(l));
if rm2==rm
%nothing to do
break;
else
rm=rm2;
end
%replace zeros
x(l) = x(find(l)-1);
end
x=x';
I have a function I use for a similar problem of filling NaNs. This can probably be cut down or sped up further; it's extracted from pre-existing code that has a bunch more functionality (forward/backward filling, maximum distance, etc.).
X = [
0 1 2 2 1 0
5 6 3 0 0 2
1 1 1 1 0 1
0 0 4 5 3 9
];
X(X == 0) = NaN;
Y = nanfill(X,2);
Y(isnan(Y)) = 0
function y = nanfill(x,dim)
if nargin < 2, dim = 1; end
if dim == 2, y = nanfill(x',1)'; return; end
i = find(~isnan(x(:)));
j = 1:size(x,1):numel(x);
j = j(ones(size(x,1),1),:);
ix = max(rep([1; i],diff([1; i; numel(x) + 1])),j(:));
y = reshape(x(ix),size(x));
function y = rep(x,times)
i = find(times);
if length(i) < length(times), x = x(i); times = times(i); end
i = cumsum([1; times(:)]);
j = zeros(i(end)-1,1);
j(i(1:end-1)) = 1;
y = x(cumsum(j));

Vector of the occurrence number

I have a vector a=[1 2 3 1 4 2 5]'
I am trying to create a new vector that gives, for each row, the occurrence number of the element in a. For instance, with this vector, the result would be [1 1 1 2 1 2 1]': the fourth element is 2 because this is the first time that 1 is repeated.
The only way I can see to achieve that is by creating a zero vector whose number of rows is the number of unique elements (here c = [0 0 0 0 0], because I have 5 unique elements).
I also create a zero vector d of the same length as a. Then, going through the vector a, I add one to the row of c corresponding to the element just read, and write that count from c into the current row of d.
Can anyone think about something better?
This is a nice way of doing it
C=sum(triu(bsxfun(@eq,a,a.')))
My first suggestion was this not-very-nice for loop:
for i=1:length(a)
F(i)=sum(a(1:i)==a(i));
end
This does what you want, without loops:
m = max(a);
aux = cumsum([ ones(1,m); bsxfun(@eq, a(:), 1:m) ]);
aux = (aux-1).*diff([ ones(1,m); aux ]);
result = sum(aux(2:end,:).');
My first thought:
M = cumsum(bsxfun(@eq,a,1:numel(a)));
v = M(sub2ind(size(M),1:numel(a),a'))
On a completely different level, you can look into tabulate to get info about the frequency of the values. For example:
tabulate([1 2 4 4 3 4])
Value Count Percent
1 1 16.67%
2 1 16.67%
3 1 16.67%
4 3 50.00%
Please note that the solutions proposed by David, chappjc and Luis Mendo are beautiful but cannot be used if the vector is big. In this case a couple of naïve approaches are:
% Big vector
a = randi(1e4, [1e5, 1]);
a1 = a;
a2 = a;
% Super-naive solution
tic
x = sort(a);
x = x([find(diff(x)); end]);
for hh = 1:size(x, 1)
inds = (a == x(hh));
a1(inds) = 1:sum(inds);
end
toc
% Other naive solution
tic
x = sort(a);
y(:, 1) = x([find(diff(x)); end]);
y(:, 2) = histc(x, y(:, 1));
for hh = 1:size(y, 1)
a2(a == y(hh, 1)) = 1:y(hh, 2);
end
toc
% The two solutions are of course equivalent:
all(a1(:) == a2(:))
Actually, now the question is: can we avoid the last loop? Maybe using arrayfun?
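One possible way to drop that last loop (a sketch, not taken from the answers above) is to number the elements within each sorted run and then undo the sort:
a = randi(1e4, [1e5, 1]);
[s, order] = sort(a);
first = [true; diff(s) ~= 0];       % marks the first element of each run of equal values
startPos = find(first);             % where each run begins in the sorted vector
runIdx = cumsum(first);             % run number of each sorted element
occSorted = (1:numel(s)).' - startPos(runIdx) + 1;  % rank within its run
occ = zeros(size(a));
occ(order) = occSorted;             % undo the sort: occ(k) is the occurrence number of a(k)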