OpenCV bicubic imresize creates negative values - MATLAB

I noticed that when downsampling matrices in OpenCV using bicubic interpolation, I get negative values even though the original matrix is all positive.
I attach the following code as an example:
// Declaration of variables
cv::Mat M, MLinear, MCubic;
double minVal, maxVal;
cv::Point minLoc, maxLoc;
// Create random values in M matrix
M = cv::Mat::ones(1000, 1000, CV_64F);
cv::randu(M, cv::Scalar(0), cv::Scalar(1));
minMaxLoc(M, &minVal, &maxVal, &minLoc, &maxLoc);
// Printout smallest value in M
std::cout << "smallest value in M = "<< minVal << std::endl;
// Resize M to quarter area with bicubic interpolation and store in MCubic
cv::resize(M, MCubic, cv::Size(0, 0), 0.5, 0.5, cv::INTER_CUBIC);
// Printout smallest value in MCubic
minMaxLoc(MCubic, &minVal, &maxVal, &minLoc, &maxLoc);
std::cout << "smallest value in MCubic = " << minVal << std::endl;
// Resize M to quarter area with linear interpolation and store in MLinear
cv::resize(M, MLinear, cv::Size(0, 0), 0.5, 0.5, cv::INTER_LINEAR);
// Printout smallest value in MLinear
minMaxLoc(MLinear, &minVal, &maxVal, &minLoc, &maxLoc);
std::cout << "smallest value in MLinear = " << minVal << std::endl;
I don't understand why this happens. I noticed that if I choose random values in [0,100], the smallest value after resizing is typically ~ -24, vs. ~ -0.24 for the range [0,1] as in the code above.
As a comparison, in Matlab this doesn't occur (I am aware of a slight difference in weighting schemes as appears here: imresize comparison - Matlab/openCV).
Here's a short Matlab snippet that records the smallest value over 1000 random downsized matrices (each original matrix is 1000x1000):
currentMinVal = 1e6;
for k = 1:1000
    x = rand(1000);
    x = imresize(x, 0.5);
    currentMinVal = min(currentMinVal, min(x(:))); % track the smallest value seen so far
end

As you can see in this answer, the bicubic kernel is not non-negative; in some cases the negative coefficients dominate and produce negative outputs.
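To illustrate, here is a minimal pure-Python sketch of the Keys cubic convolution kernel, assuming the a = -0.5 variant (Matlab's choice; OpenCV uses a slightly different weighting, as the linked comparison discusses). The kernel is negative for 1 < |x| < 2, so a set of non-negative samples can interpolate to a negative value:

```python
def keys_kernel(x, a=-0.5):
    """Keys cubic convolution kernel; negative for 1 < |x| < 2."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    elif x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

# Interpolate midway between s[1] and s[2] using the four nearest
# taps, at distances 1.5, 0.5, 0.5, 1.5 from the target position.
s = [1.0, 0.0, 0.0, 1.0]   # all samples non-negative
w = [keys_kernel(d) for d in (-1.5, -0.5, 0.5, 1.5)]
value = sum(wi * si for wi, si in zip(w, s))
print(value)   # -0.125: negative, despite non-negative input
```

The two outer taps carry weight keys_kernel(1.5) = -0.0625 each, which is exactly the mechanism that drives the minimum below zero after cv::resize with INTER_CUBIC.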
You should also note that Matlab uses 'Antialiasing' by default, which has an effect on the result:
I = zeros(9);I(5,5)=1;
imresize(I,[5 5],'bicubic') %// with antialiasing
ans =
0 0 0 0 0
0 0.0000 -0.0000 -0.0000 0
0 -0.0000 0.3055 0.0000 0
0 -0.0000 0.0000 0.0000 0
0 0 0 0 0
imresize(I,[5 5],'bicubic','Antialiasing',false) %// without antialiasing
ans =
0 0 0 0 0
0 0.0003 -0.0160 0.0003 0
0 -0.0160 1.0000 -0.0160 0
0 0.0003 -0.0160 0.0003 0
0 0 0 0 0

Related

Null space calculation for same matrix with different data type inconsistent

I'm running the following code to find the eigenvector corresponding to eigenvalue of 1 (to find the rotation axis of an arbitrary 3x3 rotation matrix).
I was debugging something with the identity rotation, but I'm getting two different answers.
R1 =
1.0000 -0.0000 0.0000
0.0000 1.0000 0.0000
-0.0000 0 1.0000
R2 =
1 0 0
0 1 0
0 0 1
Running the null space computation on each matrix.
null(R1 - 1 * eye(3))
>> 3x0 empty double matrix
null(R2 - 1 * eye(3))
>>
1 0 0
0 1 0
0 0 1
Obviously the correct answer is the 3x0 empty double matrix, but why is R2 producing a 3x3 identity matrix when R1 == R2 ?
It makes sense that the null space of a zero matrix (rank 0) is returned as the identity matrix, as any vector x in R^3 satisfies A*x = 0.
>> null(zeros(3, 3))
ans =
1 0 0
0 1 0
0 0 1
This would be the case for R2 - eye(3) if R2 is exactly eye(3).
It also makes sense that the null space of a full rank matrix is an empty matrix, as no vector other than 0 satisfies A*x = 0:
>> null(eye(3))
ans = [](3x0)
which is the case for R1 - eye(3) if R1 is not exactly eye(3), so that the difference has full rank 3. For example:
>> R1 = eye(3) + 1e-12*diag(ones(3,1))
R1 =
1.0000 0 0
0 1.0000 0
0 0 1.0000
>> null(R1 - 1 * eye(3))
ans = [](3x0)
>> rank(R1 - 1 * eye(3))
ans = 3
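The underlying mechanism is a rank decision with a tolerance: null treats singular values below a threshold (derived from the matrix's own norm) as zero, so a matrix of exact zeros has rank 0 while a matrix of 1e-12 entries still has full rank. A rough pure-Python sketch of that idea, using Gaussian elimination with an explicit tolerance (a simplification of the SVD-based test MATLAB actually performs):

```python
def rank_with_tol(A, tol):
    """Rank via Gaussian elimination; pivots below tol count as zero."""
    A = [row[:] for row in A]   # work on a copy
    m, n = len(A), len(A[0])
    rank, row = 0, 0
    for col in range(n):
        # find the largest candidate pivot in this column, at or below `row`
        pivot = max(range(row, m), key=lambda r: abs(A[r][col]), default=None)
        if pivot is None or abs(A[pivot][col]) <= tol:
            continue
        A[row], A[pivot] = A[pivot], A[row]
        for r in range(row + 1, m):
            f = A[r][col] / A[row][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[row])]
        rank += 1
        row += 1
    return rank

Z = [[0.0] * 3 for _ in range(3)]                                 # exact zeros
E = [[1e-12 if i == j else 0.0 for j in range(3)] for i in range(3)]

print(rank_with_tol(Z, 1e-15))   # 0 -> null space is all of R^3
print(rank_with_tol(E, 1e-15))   # 3 -> null space is empty
```

With a looser tolerance (say 1e-9), the 1e-12 perturbation is treated as zero as well, which is exactly the knob that separates the two "inconsistent" answers in the question.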

Multiplication of matrices involving inverse operation: getting infinity

In my earlier question asked here, Matlab: How to compute the inverse of a matrix, I wanted to know how to perform the inverse operation on
A = [1/2, (1j/2), 0;
1/2, (-1j/2), 0;
0,0,1]
T = A.*1
Tinv = inv(T)
The output is Tinv =
1.0000 1.0000 0
0 - 1.0000i 0 + 1.0000i 0
0 0 1.0000
which is the same as in the second picture. The first picture is the matrix A
However, for a larger matrix, say 5 by 5, if I don't use the identity matrix I to perform element-wise multiplication, I am getting infinity values. Here is an example:
A = [1/2, (1j/2), 1/2, (1j/2), 0;
1/2, (-1j/2), 1/2, (-1j/2), 0;
1/2, (1j/2), 1/2, (1j/2), 0;
1/2, (-1j/2), 1/2, (-1j/2), 0;
0, 0 , 0 , 0, 1.00
];
T = A.*1
Tinv = inv(T)
Tinv =
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
Inf Inf Inf Inf Inf
So I tried to multiply T = A.*I where I = eye(5), then took the inverse. Even though I don't get infinity values, I am getting entries of 2 which are not there in the picture for the 3 by 3 matrix case. Here is the result:
Tinv =
2.0000 0 0 0 0
0 0 + 2.0000i 0 0 0
0 0 2.0000 0 0
0 0 0 0 + 2.0000i 0
0 0 0 0 1.0000
If I use I = eye(3) in the 3 by 3 case, I again get entries of 2:
Tinv =
2.0000 0 0
0 0 + 2.0000i 0
0 0 1.0000
What is the proper method?
Question: in the general case, for any m by m matrix, should I multiply using I = eye(m)? Using I prevents the infinity values, but introduces the new entries of 2. I am really confused; please help.
UPDATE: Here is the full image. Theta is a vector of 3 unknowns: the scalar-valued parameters Theta1, Theta1*, and Theta2. Theta1 is a complex-valued number, so we represent it in two parts, Theta1 and Theta1*, while Theta2 is a real-valued number. g is a complex-valued function. The derivative of a complex-valued function with respect to Theta evaluates to T^H. Since there are 3 unknowns, the matrix T should be of size 3 by 3.
Your problem is slightly different than you think. The symbols I and 0 in the matrices in the images are not necessarily scalars (only for n = 1); they are actually square matrices: I is an identity matrix and 0 is a matrix of zeros. If you treat them as such, you will get the expected answers:
n = 2;        % size of the sub-matrices
I = eye(n);   % identity matrix
Z = zeros(n); % matrix of zeros
% your T matrix
T = [1/2*I, (1j/2)*I,  Z;
     1/2*I, (-1j/2)*I, Z;
     Z,     Z,         I];
% inverse of T
Tinv1 = inv(T);
% expected result
Tinv2 = [I,     I,    Z;
         -1j*I, 1j*I, Z;
         Z,     Z,    I];
% max difference between computed and expected
maxDist = max(abs(Tinv1(:) - Tinv2(:)))
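As a sanity check of the n = 1 case, here is a small pure-Python sketch (plain nested lists and complex literals, no toolboxes assumed) multiplying T by the expected inverse and confirming the product is the identity:

```python
def matmul(A, B):
    """Multiply two matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

T = [[0.5,  0.5j, 0],
     [0.5, -0.5j, 0],
     [0,    0,    1]]

Tinv = [[1,   1,  0],   # expected inverse from the block formula
        [-1j, 1j, 0],
        [0,   0,  1]]

P = matmul(T, Tinv)
# P is the 3x3 identity (up to floating-point arithmetic)
```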
First, you should decide whether you want
T = A.*eye(...)
or
T = A.*1 %// which actually does nothing
These are completely different things. Be sure what you need, then think about the code.
The reason you get all Inf is that the determinant of your matrix is zero:
det(T) == 0
So from the mathematical point of view your result is correct: building the inverse involves dividing the adjugate of T by det(T), and your matrix cannot be inverted. If it should be invertible, the error is in your input matrix, or again in your understanding of the actual underlying problem to solve.
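Here is a small pure-Python check (no toolboxes assumed) that the 5 by 5 matrix from the question is singular: rows 1 and 3 are identical, as are rows 2 and 4, which forces the determinant to zero:

```python
def det(A):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]   # work on a copy
    n = len(A)
    d = 1
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        if abs(A[p][c]) == 0:
            return 0.0          # a zero pivot column -> singular
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d              # row swap flips the sign
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return d

A = [[0.5,  0.5j, 0.5,  0.5j, 0],
     [0.5, -0.5j, 0.5, -0.5j, 0],
     [0.5,  0.5j, 0.5,  0.5j, 0],   # identical to row 1
     [0.5, -0.5j, 0.5, -0.5j, 0],   # identical to row 2
     [0,    0,    0,    0,    1]]

print(abs(det(A)))   # 0.0 -> singular, so inv(A) cannot exist
```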
Edit
After your question update, it feels like you're actually looking for ctranspose instead of inv.

Multiplying last row of the matrix with a variable of different order

I want to multiply just the last row of a matrix by different powers of a variable:
% A matrix
N = length(a) - 1;                     % order of the matrix
last_row = (a(1:end-1))*(1/a(end));    % create the last row
k = ones(1, N-1);
A = diag(k, 1);                        % diag(v, 1) produces a matrix of zeros with v on the first off-diagonal, in this case 1's
last_row = (wc.^(N:-1:1)).*last_row;   % multiply by the powers of wc
A(end,:) = last_row;                   % insert the last row
Before I multiply by the powers of wc, my last_row is:
last_row =
1.0000 2.6131 3.4142 2.6131
My matrix:
A =
0 1 0 0
0 0 1 0
0 0 0 1
0 0 0 0
After multiplying by the powers:
last_row =
1.0e+009 *
1.2624 0.0175 0.0001 0.0000
When I insert the last row:
A =
1.0e+009 *
0 0.0000 0 0
0 0 0.0000 0
0 0 0 0.0000
1.2624 0.0175 0.0001 0.0000
It is changing elements that are not in my last_row. Where am I going wrong? All those 1's should remain as they are.
Your calculations are correct: only the last row of the resulting matrix is changed. MATLAB chooses a common scale factor (here 1.0e+009) for all entries when formatting a matrix for display, so the 1's print as 0.0000.
So in short there is no problem. For example, check that A(1,2) is still 1.
If you want different formatting, take a look at format or sprintf
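The display effect can be mimicked in a short pure-Python sketch: when every entry is printed relative to a shared 1.0e+009 scale factor with four decimal places, a stored value of 1 renders as 0.0000 even though it is unchanged:

```python
row = [1.2624e9, 1.0, 0.0]   # mixed magnitudes, like the matrix above
scale = 1e9                  # shared scale factor, as in MATLAB's display

rendered = ["%.4f" % (v / scale) for v in row]
print(rendered)              # ['1.2624', '0.0000', '0.0000']

# The underlying value is untouched; only the formatting hides it.
assert row[1] == 1.0
```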

special point distance transform in matlab

I use Matlab to calculate the distance transform of a binary image. bwdist() calculates distances for all points of the image, but I just want the distance at one specific point.
For example, I have a binary image like this:
image =
1 0 0
0 0 1
0 0 0
bwdist() computes the distance transform at all points:
>> bwdist(image)
ans =
0 1.0000 1.0000
1.0000 1.0000 0
2.0000 1.4142 1.0000
But I just want to compute the distance at the point image(3,2), so the function should give me 1.4142.
Is there any function that can do this?
You can use find to get the row and column indices of all 1's, then use pdist2 from the Statistics and Machine Learning Toolbox to calculate the distances of all 1's from the search point (3,2), and finally take the minimum of those distances. Here's the implementation shown as a sample run -
>> image
image =
1 0 0
0 0 1
0 0 0
>> point
point =
3 2
>> [R,C] = find(image);
>> min(pdist2([R C],point))
ans =
1.4142
If you don't have access to pdist2, you can use bsxfun to replace it like so -
min(sqrt(sum(bsxfun(@minus,[R C],point).^2,2)))
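The same find-then-minimize idea in a pure-Python sketch (0-based indices, so the MATLAB point (3,2) becomes (2,1)):

```python
from math import sqrt

image = [[1, 0, 0],
         [0, 0, 1],
         [0, 0, 0]]
point = (2, 1)   # 0-based equivalent of MATLAB's (3, 2)

# "find": collect the coordinates of all 1-pixels
ones = [(r, c) for r, row in enumerate(image)
               for c, v in enumerate(row) if v == 1]

# distance from the point to the nearest 1-pixel
d = min(sqrt((r - point[0])**2 + (c - point[1])**2) for r, c in ones)
print(round(d, 4))   # 1.4142
```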

building a matrix starting from another matrix

I want to build a square matrix. Let's suppose we have this matrix, called nodes:
1 4.3434 3.4565
2 6.2234 5.1234
3 10.4332 2.3243
4 7.36543 1.1434
where columns 2 and 3 represent the x and y positions of node n,
and a matrix called heads whose rows are a subset of the rows of the nodes matrix:
2 6.2234 5.1234
3 10.4332 2.3243
I created this function to build the matrix of distances from every node to the heads:
function [distances] = net_dist(nodes, heads)
nnodes = length(nodes(:,1));
distances = zeros(nnodes);
for i = 1 : nnodes
    for j = 1 : nnodes
        if nodes(i,1) == nodes(j,1) && ismember(nodes(j,1), heads(:,1))
            distances(i,j) = sqrt((nodes(i,2) - nodes(j,2))^2 + (nodes(i,3) - nodes(j,3))^2);
        elseif (nodes(i,1) == nodes(j,1) || nodes(i,1) ~= nodes(j,1)) && ismember(nodes(j,1), heads(:,1))
            distances(i,j) = sqrt((nodes(i,2) - nodes(j,2))^2 + (nodes(i,3) - nodes(j,3))^2);
        elseif (nodes(i,1) == nodes(j,1) || nodes(i,1) ~= nodes(j,1)) && ~ismember(nodes(j,1), heads(:,1))
            distances(i,j) = 1E9;
        end
    end
end
return;
This function should return the distance of every node from the heads. Entries for nodes that aren't heads are filled with 1E9. I don't understand why, when I execute this function, I receive all 0's instead of the sqrt values.
I would expect to obtain something like this:
1 2 3 4
1 1E9 d d 1E9
2 1E9 0 d 1E9
3 1E9 d 0 1E9
4 1E9 d 0 1E9
You do not get zeros; you get correct distances. You probably think you get zeros because 1e9 is a large value, and when you print your matrix you get:
distances =
1.0e+09 *
1.0000 0.0000 0.0000 1.0000
1.0000 0 0.0000 1.0000
1.0000 0.0000 0 1.0000
1.0000 0.0000 0.0000 1.0000
You can see that the two 0 entries are true zeros, while the others merely display as 0 to 4 digits after the decimal point once the shared 1.0e+09 scale factor is applied. Print one of the matrix elements and see that it is not zero:
distances(1,2)
ans =
2.5126
You can also use the nnz function to quickly count the non-zero entries:
nnz(distances)
ans =
14
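A pure-Python sketch of the same computation (no toolboxes assumed) reproduces both checks: distances(1,2) is about 2.5126, and 14 of the 16 entries are non-zero:

```python
from math import sqrt

nodes = [(1, 4.3434,  3.4565),
         (2, 6.2234,  5.1234),
         (3, 10.4332, 2.3243),
         (4, 7.36543, 1.1434)]
head_ids = {2, 3}   # first column of the heads matrix

n = len(nodes)
distances = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        if nodes[j][0] in head_ids:   # node j is a head: store the distance
            distances[i][j] = sqrt((nodes[i][1] - nodes[j][1])**2 +
                                   (nodes[i][2] - nodes[j][2])**2)
        else:                         # node j is not a head: sentinel value
            distances[i][j] = 1e9

print(round(distances[0][1], 4))                          # 2.5126, not zero
print(sum(1 for row in distances for v in row if v != 0)) # 14
```

The only true zeros are the two head-to-itself entries, exactly as in the MATLAB output above.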