I have an array of positive numbers and there are some duplicates. I want to find the largest index of the minimum value.
For example, if a = [2, 3, 1, 1, 4, 1, 3, 2, 1, 5, 5] then [v, i] = min(a) returns i=3; however, I want i=9.
Using find and min.
A = [2, 3, 1, 1, 4, 1, 3, 2, 1, 5, 5];
minA = min(A);
maxIndex = max(find(A==minA));
min gets the minimum value, find returns the indices of the values that meet the condition A==minA, and max returns the largest of those indices.
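As an aside, the max and find calls can be collapsed into one using find's optional arguments; a minimal variant:
maxIndex = find(A == min(A), 1, 'last'); % index of the last occurrence of the minimum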
Here's a different idea, which only requires one function, sort. MATLAB's sort is stable, so tied minimum values keep their original order; after a descending sort, the last element of the index output is therefore the largest index of the minimum:
[~,y] = sort(a,'descend');
i = y(end)
i =
     9
You can use imregionalmin as well, with time complexity O(n):
largestMinIndex = find(imregionalmin(A),1,'last');
Note that this works because the last regional minimum of this A is also its global minimum; on a vector where that doesn't hold, it can return the wrong index.
(To anyone who reads this: just so you don't waste your time, I wrote up this question and then came up with a solution right after writing it. I am posting it here to help out anyone who happened to be thinking about something similar.)
I have a vector with elements that I would like to sum up. The elements that I would like to add up are elements that share the same "triggerNumber". For example:
vector = [0, 1, 1, 1, 1]
triggerNumber = [1, 1, 1, 2, 2]
I want to sum up the numbers that share a triggerNumber of 1 (so 0+1+1 = 2) and those that share a triggerNumber of 2 (so 1+1 = 2). Therefore my desiredOutput is the array [2, 2].
accumarray accomplishes this task, and if I give it those two inputs:
output = accumarray(triggerNumber.',vector.').'
which returns [2, 2]. But, while my "triggerNumbers" are always increasing, they are not necessarily always increasing by one. So for example I might have the following situation:
vector = [0, 1, 1, 1, 1]
triggerNumber = [4, 4, 4, 6, 6]
output = accumarray(triggerNumber.',vector.').'
But now this returns the output:
output = [0, 0, 0, 2, 0, 2]
Which is not what I want. I want to just sum up elements with the same trigger number (in order), so the desired output is still [2, 2]. Naively I thought that just deleting the zeros would be sufficient, but then that messes up the situation with the inputs:
vector = [0, 0, 0, 1, 1]
triggerNumber = [4, 4, 4, 6, 6]
which if I deleted the zeroes would return just [2] instead of the desired [0, 2].
Any ideas for how I can accomplish this task (in a vectorized way of course)?
I just needed to turn things like [4, 4, 4, 6, 6] into [1, 1, 1, 2, 2], which can be done with a combination of cumsum and diff.
vector = [0, 0, 0, 1, 1];
triggerNumber = [4, 4, 4, 6, 6];
vec1 = cumsum(diff(triggerNumber)>0); % steps up by 1 at every change of trigger number
append1 = [0, vec1];                  % prepend a 0 for the first element, which diff drops
magic = append1+1;                    % shift to 1-based group indices: [1, 1, 1, 2, 2]
output = accumarray(magic.',vector.').'
which returns the desired [0, 2]... and hopefully my method works for all cases.
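For what it's worth, the same renumbering can also be obtained from the third output of unique, which maps each trigger number to a dense 1-based index; a sketch of that variant (same inputs as above):
[~, ~, denseIdx] = unique(triggerNumber); % maps [4, 4, 4, 6, 6] to [1, 1, 1, 2, 2]
output = accumarray(denseIdx(:), vector(:)).'
Since the trigger numbers are always increasing, unique's sorting cannot reorder the groups.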
I have this vector:
Population = [3, 5, 0, 2, 0, 5, 10, 50, 0, 1];
And I need to fill this vector with random values between 1 and 4, but only where the vector has a 0 value.
How can I do it?
Edit: is there a way to do it using the randperm function?
First, find zero elements, then generate random values, then replace those elements:
Population = [3, 5, 0, 2, 0, 5, 10, 50, 0, 1];
idx = find(Population==0);
Population(idx) = 3 * rand(size(idx)) + 1;
If you need integers (the question didn't specify), you could round the generated random numbers in the last statement, like this: round(3*rand(size(idx))+1). Note, however, that rounding makes the endpoints 1 and 4 half as likely as 2 and 3, so it is better to use randi (as suggested in the answer by @OmG): randi([1,4], size(idx)).
You can use the following code:
ind = find(~Population); % find zero places
Population(ind) = randi(4,1,length(ind)); % replace them with a random integer
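A variant of the same idea with logical indexing, skipping find entirely (nnz counts the zeros to size the random draw):
mask = Population == 0;                         % logical mask of the zero entries
Population(mask) = randi([1, 4], 1, nnz(mask)); % one integer in [1,4] per zero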
I have a matrix like this:
A = [1, 2, 3, 4, 5, NaN, NaN, NaN, NaN, NaN;
1, 2, 3, 4, 5, 6, 7, NaN, NaN, NaN;
1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
I would like to know how I can count the number of values in each row excluding any NaNs.
So I would get an output like:
output = [5;
          7;
          10]
If A is a 2D array, e.g.
A = [1, 2, 3, 4, 5, NaN, NaN, NaN, NaN, NaN;
1, 2, 3, 4, 5, 6, 7, NaN, NaN, NaN;
1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
and you want to count the number of non-NaN entries on each row of A, you can simply use
>> sum(~isnan(A), 2)
ans =
5
7
10
Breakdown
isnan(A) returns a logical array of the same size as A, in which (logical) 1 indicates a NaN and 0 a non-NaN.
Note that you have to use the isnan function here. In particular, the expression A == NaN is useless: it would simply return a logical array of the same size as A but full of (logical) 0's. Why? Because, according to floating-point arithmetic, NaN == NaN always returns "false" (i.e. logical 0, in MATLAB).
Then, by applying MATLAB's not operator (~) to that, you get a logical array of the same size as A, in which 1 indicates a non-NaN and 0 a NaN.
Finally, sum(~isnan(A), 2) returns a column vector in which the i-th entry corresponds to the number of logical 1's on the i-th row of ~isnan(A).
The resulting column vector is exactly what you want: a count, row by row, of the non-NaN entries in A.
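A quick way to see the intermediate mask is to try it on a single row, for example:
>> ~isnan([1, NaN, 3])
ans =
   1   0   1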
I'm trying to program a function (or even better, use one that already exists) in Scilab that calculates regularly timed samples of a signal's values.
I.e., I have a vector 'values' which contains the value of a signal at different times. Those times are in the vector 'times', so at time times(N) the signal has value values(N).
At the moment the times are not regular, so the variables 'times' and 'values' can look like:
times = [0, 2, 6, 8, 14]
values= [5, 9, 10, 1, 6]
This represents that the signal had value 5 from second 0 to second 2, value 9 from second 2 to second 6, etc.
Therefore, if I want to calculate the signal's average value, I cannot just average the vector 'values': the signal may keep a single value for a long time, yet that contributes only one entry to the vector.
One option is to weight each value by its deltaT to calculate the mean, but I will also need other calculations besides the average.
Another option is to create a function that, given a deltaT, samples the times and values vectors to produce an equally spaced time vector and corresponding values. For example, with deltaT=2 and the previous vectors:
[sampledTime, sampledValues] = regularSample(times, values, 2)
sampledTime = [0, 2, 4, 6, 8, 10, 12, 14]
sampledValues = [5, 9, 9, 10, 1, 1, 1, 6]
This is easy if deltaT is small enough to fit exactly with all the times. If deltaT is bigger, then some averaging of the values, or another approximation, must be done...
Is there anything already done in Scilab?
How can this function be programmed?
Thanks a lot!
PS: I don't know if this is the correct forum to post Scilab questions, so any pointer would also be useful.
If you'd like to implement it yourself, you can use a weighted sum:
times = [0, 2, 6, 8, 14]
values = [5, 9, 10, 1, 6]
weightedSum = 0
highestIndex = length(times)
for i=1:(highestIndex-1)
// Get the amount of time a certain value contributed
deltaTime = times(i+1) - times(i);
// Add the weighted amount to the total weighted sum
weightedSum = weightedSum + deltaTime * values(i);
end
totalTimeDelta = times($) - times(1);
average = weightedSum / totalTimeDelta
printf( "Result is %f", average )
Or, if you want functionally the same but less readable code:
timeDeltas = diff(times)
sum(timeDeltas.*values(1:$-1))/sum(timeDeltas)
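And if you do want the regularSample function from the question, here is a minimal sketch in Scilab (zero-order hold, assuming 'times' is sorted; the function name is the one proposed in the question):
function [sampledTime, sampledValues] = regularSample(times, values, deltaT)
    // Regular grid from the first to the last time stamp
    sampledTime = times(1):deltaT:times($);
    sampledValues = zeros(sampledTime);
    for i = 1:length(sampledTime)
        // Last original time stamp at or before this sample (zero-order hold)
        k = find(times <= sampledTime(i));
        sampledValues(i) = values(k($));
    end
endfunction
With the vectors from the question, regularSample([0,2,6,8,14], [5,9,10,1,6], 2) reproduces the sampledTime and sampledValues shown there.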
I have 4 two-dimensional arrays of size m-by-2 (m can vary), stored in a cell array Val:
Val{1} = [1, 280; 2, 281; 3, 282; 4, 283; 5, 285];
Val{2} = [2, 179; 3, 180; 4, 181; 5, 182];
Val{3} = [2, 315; 4, 322; 5, 325];
Val{4} = [1, 95; 3, 97; 4, 99; 5, 101];
I have a subscript vector:
subs = {1,3,4};
What I want to get as output is the average of column 2 of the above 2-D arrays (only 1, 3 and 4), restricted to rows whose first-column value is >=2 and <=4.
The output will be:
{282, 318.5, 98}
This can probably be done by using a few loops, but just wondering if there is a more efficient way?
Here's a one-liner:
output = cellfun(@(x)mean(x(x(:,1)>=2 & x(:,1)<=4,2)),Val(cat(1,subs{:})),'UniformOutput',false);
If subs is a numerical array (not a cell array) instead, i.e. subs=[1,3,4], and if output doesn't have to be a cell array, but can be a numerical array instead, i.e. output = [282,318.5,98], then the above simplifies to
output = cellfun(@(x)mean(x(x(:,1)>=2 & x(:,1)<=4,2)),Val(subs));
cellfun applies a function to each element of a cell array, and the indexing makes sure only the good rows are being averaged.
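For reference, a loop version of the same computation (assuming the numerical subs = [1, 3, 4] form), which spells out what the one-liner's indexing does:
output = zeros(size(subs));
for k = 1:numel(subs)
    M = Val{subs(k)};                 % pick the k-th requested matrix
    rows = M(:,1) >= 2 & M(:,1) <= 4; % rows whose first column is in [2, 4]
    output(k) = mean(M(rows, 2));     % average of column 2 over those rows
end
Running either version on the sample data returns [282, 318.5, 98].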