I have two matrices here: one holds costs and the other determines which elements should be taken into the comparison.
cost = [0.2 0.0 0.3; 0.4 0 0; 0.5 0 0];
available = [1 1 0 ; 1 0 0; 0 0 0];
available = logical(available);
I want to get the index of the minimum available element in the cost matrix. In this case that means comparing 0.2, 0.0 and 0.4 and returning the index of 0.0, which is (1, 2) as a subscript or 4 as a linear index into the cost matrix.
I tried
mul = cost .* available; % Zero if not available, but a zero here could also be a genuine zero cost
mul(~mul) = nan; % Set zeros to NaN so that min skips them
[minVal, minId] = min(mul)
This gets me the minimum non-zero cost, but it goes wrong whenever an available element has a cost of exactly zero.
So is there a better way to do so?
Here are two possible solutions. Both essentially involve converting all non-available costs to Inf.
% Set up an example
Cost = [0.2 0 0.3; 0.4 0 0; 0.5 0 0];
Available = [1 1 0; 1 0 0; 0 0 0];
% Transform non-available costs to Inf
Cost(Available == 0) = Inf;
% Obtain indices using find
[r, c] = find(Cost == min(min(Cost)))
% Obtain linear indices and convert using ind2sub
[~, I1] = min(Cost(:));
[r2, c2] = ind2sub(size(Cost), I1);
Note that if the minimum is not unique, the second (min-based) solution returns only the first occurrence, while find as written returns every occurrence (pass 1 as a second argument to find if you only want the first). Also, the method will fail in the perverse case that all the available costs are Inf (but I guess you've got bigger problems if all your costs are infinite...).
I've done a few speed tests, and the second method is definitely faster, no matter what the dimensions of Cost, so should be strictly preferred. Also, if you only want linear indices and not subscript indices then you can of course drop the call to ind2sub. However, this doesn't give you huge savings in efficiency, so if there is a preference for subscript indices then you should use them.
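If you need this operation repeatedly, the faster (linear-index) method can be wrapped in a small helper; the function name minAvailable below is just my illustrative choice:
function [r, c, minVal] = minAvailable(Cost, Available)
% Return the subscripts and value of the smallest available cost
Cost(~Available) = Inf;            % mask out non-available entries
[minVal, idx] = min(Cost(:));      % linear index of the minimum
[r, c] = ind2sub(size(Cost), idx); % convert to row/column subscripts
end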
I have Matlab code that uses fmincon with some constraints. Since I want to modify the code, I have been wondering whether the row position of a condition within the constraint matrix A makes a difference.
I set up a test file so I can change some variables. It turns out that the position of the condition is irrelevant for the result, but the number of rows in A and b does play a role. I'm surprised by that, because I would expect a row of only zeros in A and b to simply cancel out.
fun = @(x)100*(x(2)-x(1)^2)^2 + (1-x(1))^2;
options1 = optimoptions('fmincon','Display','off');
A=zeros(2,2); %setup A
A(2,2)=1; % x2 <= 0
b=[0 0]'; %setup b
x = fmincon(fun,[-1,2],A,b,[],[],[],[],[],options1);x
%change condition position inside A
A=zeros(2,2);
A(1,2)=1; % x2 <= 0
b=[0 0]';
x = fmincon(fun,[-1,2],A,b,[],[],[],[],[],options1);x
% no change; the position doesn't influence fmincon
%change row size of A
A=zeros(1,2);
A(1,2)=1; % x2 <= 0
b=[0]';
x = fmincon(fun,[-1,2],A,b,[],[],[],[],[],options1);x
%change in x2
%increase size of A
A=zeros(10,2);
A(1,2)=1; % x2 <= 0
b=[0 0 0 0 0 0 0 0 0 0]';
x = fmincon(fun,[-1,2],A,b,[],[],[],[],[],options1);x
%change in x2
Can someone explain to me why fmincon is influenced by the number of rows? What is the "right" number of rows in A and b: the number of variables or the number of conditions?
EDIT
For reasons of completeness:
I agree that different values are possible because of the iteration process. Nevertheless I can find situations where the difference is bigger than the tolerance:
Added +log(x(3)) to the function:
fun = @(x)100*(x(2)-x(1)^2)^2 + (1-x(1))^2+log(x(3));
options1 = optimoptions('fmincon','Display','off');
options = optimoptions('fmincon')
A=zeros(2,3); %setup A
A(2,3)=1; % x3 <= 0
b=[0 0]'; %setup b
x = fmincon(fun,[-1,2,1],A,b,[],[],[],[],[],options1);x
%change row size of A
A=zeros(1,3);
A(1,3)=1; % x3 <= 0
b=[0]';
x = fmincon(fun,[-1,2,1],A,b,[],[],[],[],[],options1);x
%change in x2
%increase size of A
A=zeros(10,3);
A(1,3)=1; % x3 <= 0
b=[0 0 0 0 0 0 0 0 0 0]';
x = fmincon(fun,[-1,2,1],A,b,[],[],[],[],[],options1);x
%change in x2
x =
-0.79876 **0.49156** 2.3103e-11
x =
-0.79921 0.49143 1.1341e-11
x =
-0.80253 **0.50099** 5.8733e-12
MATLAB support told me that the A matrix should not have more rows than there are conditions: each additional row counts as another constraint and makes the problem harder for the algorithm.
Note that fmincon doesn't necessarily give the exact solution but a good approximation of the solution according to certain criteria.
The differences in the results are plausible since fmincon is an iterative algorithm, and these matrix multiplications (even if they mostly involve zeros) will eventually end with slightly different results. MATLAB keeps performing these matrix multiplications until it finds the best result, so all of these results are correct in the sense that they are all close to the solution.
x =
0.161261791015350 -0.000000117317860
x =
0.161261791015350 -0.000000117317860
x =
0.161261838607809 -0.000000077614999
x =
0.161261877075196 -0.000000096088746
The difference in your results is around 1.0e-07, which is a decent result considering you don't specify any stopping criteria. You can see what you get by default with the command
options = optimoptions('fmincon')
My result is
Default properties:
Algorithm: 'interior-point'
CheckGradients: 0
ConstraintTolerance: 1.0000e-06
Display: 'final'
FiniteDifferenceStepSize: 'sqrt(eps)'
FiniteDifferenceType: 'forward'
HessianApproximation: 'bfgs'
HessianFcn: []
HessianMultiplyFcn: []
HonorBounds: 1
MaxFunctionEvaluations: 3000
MaxIterations: 1000
ObjectiveLimit: -1.0000e+20
OptimalityTolerance: 1.0000e-06
OutputFcn: []
PlotFcn: []
ScaleProblem: 0
SpecifyConstraintGradient: 0
SpecifyObjectiveGradient: 0
StepTolerance: 1.0000e-10
SubproblemAlgorithm: 'factorization'
TypicalX: 'ones(numberOfVariables,1)'
UseParallel: 0
For example, I can reach closer results with the option:
options1 = optimoptions('fmincon','Display','off', 'OptimalityTolerance', 1.0e-09);
Result is
x =
0.161262015455003 -0.000000000243997
x =
0.161262015455003 -0.000000000243997
x =
0.161262015706777 -0.000000000007691
x =
0.161262015313928 -0.000000000234186
You can also try playing with other criteria such as MaxFunctionEvaluations, MaxIterations, StepTolerance, etc., to see if you can get even closer results...
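For example, one could tighten several criteria at once; the specific values below are just an illustration I picked, not a recommendation from the documentation, and the snippet reuses fun, A and b from the first test in the question:
options2 = optimoptions('fmincon','Display','off', ...
    'OptimalityTolerance',1e-9, 'StepTolerance',1e-12, ...
    'MaxFunctionEvaluations',10000, 'MaxIterations',5000);
x = fmincon(fun,[-1,2],A,b,[],[],[],[],[],options2);x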
I have a matrix suppX in Matlab with size GxN and a matrix A with size MxN. I would like your help to construct a matrix Xresponse with size GxM with Xresponse(g,m)=1 if the row A(m,:) is equal to the row suppX(g,:) and zero otherwise.
Let me explain better with an example.
suppX=[1 2 3 4;
5 6 7 8;
9 10 11 12]; %GxN
A=[1 2 3 4;
1 2 3 4;
9 10 11 12;
1 2 3 4]; %MxN
Xresponse=[1 1 0 1;
0 0 0 0;
0 0 1 0]; %GxM
I have written code that does what I want.
Xresponsemy=zeros(size(suppX,1), size(A,1));
for x=1:size(suppX,1)
Xresponsemy(x,:)=ismember(A, suppX(x,:), 'rows').';
end
My code uses a loop, which I would like to avoid because in my real case this piece of code sits inside another big loop. Do you have any suggestions for doing this without looping?
One way to do this would be to treat the rows of each matrix as vectors in N-dimensional space and compute the L2 norm (i.e. the Euclidean distance) between every pair of rows, then check whether the distance is 0. If it is, you have a match. Specifically, you can build a matrix whose element (i,j) is the distance between row i of one matrix and row j of the other.
You can then turn this distance matrix into the desired output by mapping a zero distance (identical rows) to 1 and everything else to 0.
This post should be of interest: Efficiently compute pairwise squared Euclidean distance in Matlab.
I would specifically look at the answer by Shai Bagon that uses matrix multiplication and broadcasting. You would then modify it so that you find distances that would be equal to 0:
nA = sum(A.^2, 2); % squared norms of the rows of A
nB = sum(suppX.^2, 2); % squared norms of the rows of suppX
Xresponse = bsxfun(@plus, nB, nA.') - 2 * suppX * A.'; % pairwise squared distances (G x M)
Xresponse = Xresponse == 0;
We get:
Xresponse =
3×4 logical array
1 1 0 1
0 0 0 0
0 0 1 0
Note on floating-point accuracy
Because you are using ismember in your implementation, it's implicit to me that you expect all values to be integer. In this case, you can very much compare directly with the zero distance without loss of accuracy. If you intend to move to floating-point, you should always compare with some small threshold instead of 0, like Xresponse = Xresponse <= 1e-10; or something to that effect. I don't believe that is needed for your scenario.
Here's an alternative to @rayryeng's answer: reduce each row of the two matrices to a unique identifier using the third output of unique with the 'rows' input flag, and then compare the identifiers with singleton expansion (broadcast) using bsxfun:
[~, ~, w] = unique([A; suppX], 'rows');
Xresponse = bsxfun(@eq, w(1:size(A,1)).', w(size(A,1)+1:end));
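As a side note, on R2016b and newer the bsxfun call can be replaced by implicit expansion; this is the same computation written more compactly:
[~, ~, w] = unique([A; suppX], 'rows');
Xresponse = w(size(A,1)+1:end) == w(1:size(A,1)).'; % G-by-M logical array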
I'd like to find a vectorized way to calculate the cumulative sums of a vector, but with upper and lower limits.
In my case, the input only contains 1's and -1's. You can use this assumption in your answer. Of course, a more general solution is also welcome.
For example:
x = [1 1 1 1 -1 -1 -1 -1 -1 -1];
upper = 3;
lower = 0;
s = cumsum(x) %// Ordinary cumsum.
s =
1 2 3 4 3 2 1 0 -1 -2
y = cumsumlim(x, upper, lower) %// Cumsum with limits.
y =
     1     2     3     3     2     1     0     0     0     0
                 ^                       ^
                 |                       |
            upper limit             lower limit
When the cumulative sum reaches the upper limit (at the 3rd element), it won't increase anymore. Likewise, when the cumulative sum reaches the lower limit (at the 7th element), it won't decrease anymore. A for-loop version would be like this:
function y = cumsumlim(x, upper, lower)
y = zeros(size(x));
y(1) = x(1);
for i = 2 : numel(x)
y(i) = y(i-1) + x(i);
y(i) = min(y(i), upper);
y(i) = max(y(i), lower);
end
end
Do you have any ideas?
This is a somewhat hackish solution, but perhaps worth mentioning.
You can do the sum using a signed integer data type, and exploit the inherent saturation limits of that data type. For this to work, the input needs to be converted to that integer type and multiplied by the appropriate factor, and an initial offset needs to be applied. The factor and offset are chosen as a function of lower and upper. After cumsum, the multiplication and offset are undone to obtain the desired result.
In your example, data type int8 suffices; and the required factor and offset are 85 and -128 respectively:
x = [1 1 1 1 -1 -1 -1 -1 -1 -1];
result = cumsum([-128 int8(x)*85]); %// integer sum, with factor and initial offset
result = (double(result(2:end))+128)/85; %// undo factor and offset
which gives
result =
1 2 3 3 2 1 0 0 0 0
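In case it is not obvious where 85 and -128 come from, here is how I would derive the factor and offset for general limits. This is only a sketch, and it assumes that upper - lower divides 255 exactly, so that the int8 saturation points -128 and 127 land precisely on lower and upper:
factor = 255/(upper - lower);  % 255/(3-0) = 85 in the example
offset = -128 - lower*factor;  % -128 here, since lower = 0
result = cumsum([int8(offset), int8(x)*factor]); % saturates at -128 and 127
result = (double(result(2:end)) - offset)/factor; % map back to [lower, upper]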
I won't provide you with a magic vectorized way to do this, but I'll provide you with some data that probably will help you get on with your work.
Your cumsumlim function is very fast!
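x = 2*(rand(1,65000) > 0.5) - 1; % (my assumption) a 65000-element vector of +1/-1 values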
tic
for ii = 1:100
y = cumsumlim(x,3,0);
end
t = toc;
disp(['Length of vector: ' num2str(numel(x))])
disp(['Total time for one execution: ' num2str(t*10), ' ms.'])
Length of vector: 65000
Total time for one execution: 1.7965 ms.
I really doubt this is your bottleneck. Have you tried profiling the code?
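If not, a minimal profiling run looks like this:
profile on
y = cumsumlim(x, 3, 0); % the call you suspect to be the bottleneck
profile viewer          % inspect where the time is actually spent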
Let d and p be two integers. I need to generate a large matrix A of integers with d columns and N = nchoosek(d+p,p) rows. Note that nchoosek(d+p,p) grows quickly with d and p, so it's very important that I can generate A quickly. The rows of A are all the multi-indices with components from 0 to p whose sum is less than or equal to p. This means that, if d=3 and p=3, then A is the N-by-d = 20-by-3 matrix (N = nchoosek(3+3,3) = 20) with the following structure:
A=[0 0 0;
1 0 0;
0 1 0;
0 0 1;
2 0 0;
1 1 0;
1 0 1;
0 2 0;
0 1 1;
0 0 2;
3 0 0;
2 1 0;
2 0 1;
1 2 0;
1 1 1;
1 0 2;
0 3 0;
0 2 1;
0 1 2;
0 0 3]
It is not indispensable to follow exactly the row ordering I used, although it would make my life easier (for those interested, it's called graded lexicographical ordering and it's described here:
http://en.wikipedia.org/wiki/Monomial_order).
In case you are curious about the origin of this weird matrix, let me know!
Solution using nchoosek and diff
The following solution is based on this clever answer by Mark Dickinson.
function degrees = monomialDegrees(numVars, maxDegree)
if numVars==1
degrees = (0:maxDegree).';
return;
end
degrees = cell(maxDegree+1,1);
k = numVars;
for n = 0:maxDegree
dividers = flipud(nchoosek(1:(n+k-1), k-1));
degrees{n+1} = [dividers(:,1), diff(dividers,1,2), (n+k)-dividers(:,end)]-1;
end
degrees = cell2mat(degrees);
You can get your matrix by calling monomialDegrees(d,p).
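For the example in the question (d = 3, p = 3) the call would look like this; the rows come out grouped by total degree, and as far as I can tell the within-degree order matches the listing above, but it is worth verifying for your own case:
A = monomialDegrees(3, 3); % 20-by-3, every row sums to at most 3
size(A)                    % [20 3], i.e. nchoosek(3+3,3) rows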
Solution using nchoosek and accumarray/histc
This approach is based on the following idea: there is a bijection between all k-multicombinations and the matrix we are looking for. The multicombinations give the positions where the entries should be incremented. For example, the multicombination [1,1,1,1,3] is mapped to [4,0,1], as it contains four 1s and one 3. This counting can be done with either accumarray or histc. Here is the accumarray approach:
function degrees = monomialDegrees(numVars, maxDegree)
if numVars==1
degrees = (0:maxDegree).';
return;
end
degrees = cell(maxDegree+1,1);
degrees{1} = zeros(1,numVars);
for n = 1:maxDegree
pos = nmultichoosek(1:numVars, n);
degrees{n+1} = accumarray([reshape((1:size(pos,1)).'*ones(1,n),[],1),pos(:)],1);
end
degrees = cell2mat(degrees);
And here is the alternative using histc:
function degrees = monomialDegrees(numVars, maxDegree)
if numVars==1
degrees = (0:maxDegree).';
return;
end
degrees = cell(maxDegree+1,1);
degrees(1:2) = {zeros(1,numVars); eye(numVars);};
for n = 2:maxDegree
pos = nmultichoosek(1:numVars, n);
degrees{n+1} = histc(pos.',1:numVars).';
end
degrees = cell2mat(degrees(1:maxDegree+1));
Both use the following function to generate multicombinations:
function combs = nmultichoosek(values, k)
if numel(values)==1
n = values;
combs = nchoosek(n+k-1,k);
else
n = numel(values);
combs = bsxfun(@minus, nchoosek(1:n+k-1,k), 0:k-1);
combs = reshape(values(combs),[],k);
end
Benchmarking:
Benchmarking the above codes shows that the diff solution is faster when numVars is low and maxDegree is high; if numVars is higher than maxDegree, the histc solution is faster.
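If you want to reproduce the comparison on your own machine, something along these lines should do; monomialDegreesDiff and monomialDegreesHistc are hypothetical names for the two implementations above saved as separate files:
numVars = 3; maxDegree = 12;  % swap the magnitudes to see the crossover
tDiff  = timeit(@() monomialDegreesDiff(numVars, maxDegree));
tHistc = timeit(@() monomialDegreesHistc(numVars, maxDegree));
fprintf('diff: %.4g s, histc: %.4g s\n', tDiff, tHistc);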
Old approach:
This is an alternative to Dennis' approach of dec2base, which has a limit on the maximum base. It is still a lot slower than the above solutions.
function degrees = monomialDegrees(numVars, maxDegree)
Cs = cell(1,numVars);
[Cs{:}] = ndgrid(0:maxDegree);
degrees = reshape(cat(numVars+1, Cs{:}), (maxDegree+1)^numVars, []); % stack the grids along a new trailing dimension
degrees = degrees(sum(degrees,2)<=maxDegree,:);
I would solve it this way:
ncols=d;
colsum=p;
base=(0:colsum)';
v=@(dm)permute(base,[dm:-1:1]);
M=bsxfun(@plus,base,v(2));
for idx=3:ncols
M=bsxfun(@plus,M,v(idx));
end
L=M<=colsum;
A=cell(1,ncols);
[A{:}]=ind2sub(size(L),find(L));
a=cell2mat(A);
% subtract 1 because indexing is 1-based, while base starts at 0
a=a-1+min(base);
It builds up a d-dimensional array that contains the sums of the indices. The efficiency of this code depends on the ratio sum(L(:))/numel(L); this quotient tells you how much of the created array is actually used for solutions. If it gets low for your input, there probably exists a better solution.
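A quick way to inspect that ratio for your own d and p (run right after the code above, since it reuses L):
fillRatio = sum(L(:))/numel(L) % close to 0 means most of the array is wasted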
Here is a very easy way to do it:
L = dec2base(0:4^3-1,4);
idx=sum(num2str(L)-'0',2)<=3;
L(idx,:)
I think the first line can be very time efficient for creating a list of candidates, but unfortunately I don't know how to reduce the list in an efficient way after that.
So the second line works, but could use some improvement performance-wise.
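One small tweak that might help (an untested suggestion on my part): dec2base already returns a character array, so the num2str call can be dropped, and subtracting '0' at the end gives numeric rows directly:
L = dec2base(0:4^3-1, 4);   % all candidate rows for d=3, p=3 (base p+1)
idx = sum(L - '0', 2) <= 3; % keep rows whose components sum to at most p
A = L(idx,:) - '0';         % numeric result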
Let's say my matrix is x = [0.1 0.2; 0.3 0.4; 0.5 0.6; 0.8 0.9; 1 0.1].
Now I want to check this matrix against some threshold values. The resulting matrix for each condition should have the same size as x, and every value that does not satisfy the condition must be set to zero. For example, for condition 1 (keep only the values below 0.3), x must become x = [0.1 0.2; 0 0; 0 0; 0 0; 0 0.1].
I found the command ind = find(x>0), which gives only the indices of the elements that satisfy the condition, and I can get those values as x(ind), but that is just an array. If I use a logical condition such as > or <, it only gives 1 or 0 based on true or false; it can't give me the actual matrix values.
Can anybody suggest an idea?
You can use logical indexing like so:
x(x>Value) = 0
You can change the logical expression in the brackets to suit your particular requirements. Say you want values equal to or larger than 0.3 to become 0, as you suggest in your post. Then you can write:
x(x>=0.3)=0
You can find out more about logical indexing at the bottom of this page:
http://www.mathworks.co.uk/company/newsletters/articles/matrix-indexing-in-matlab.html
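To tie this back to the matrix in the question, here is a minimal worked example (working on a copy so the original x is preserved):
x = [0.1 0.2; 0.3 0.4; 0.5 0.6; 0.8 0.9; 1 0.1];
y = x;           % keep the original untouched
y(y >= 0.3) = 0; % y is now [0.1 0.2; 0 0; 0 0; 0 0; 0 0.1]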