I am trying to implement RoI pooling in PyTorch. Here's a minimal demo.
import torch
import torch.nn as nn
import torch.nn.functional as F


def roi_pooling(feature_map, rois, size=(7, 7)):
    """
    :param feature_map: (1, C, H, W)
    :param rois: (1, N, 4), where N is the number of bboxes and the 4 values are (ltx, lty, w, h)
    :param size: output size
    :return: (1, C, size[0], size[1])
    """
    output = []
    rois_num = rois.size(1)
    for i in range(rois_num):
        roi = rois[0][i]
        x, y, w, h = roi
        output.append(F.adaptive_max_pool2d(feature_map[:, :, y:y+h, x:x+w], size))
    return torch.cat(output)


if __name__ == '__main__':
    test_tensor = torch.tensor([
        [0.88, 0.44, 0.14, 0.16, 0.37, 0.77, 0.96, 0.27],
        [0.19, 0.45, 0.57, 0.16, 0.63, 0.29, 0.71, 0.70],
        [0.66, 0.26, 0.82, 0.64, 0.54, 0.73, 0.59, 0.26],
        [0.85, 0.34, 0.76, 0.84, 0.29, 0.75, 0.62, 0.25],
        [0.32, 0.74, 0.21, 0.39, 0.34, 0.03, 0.33, 0.48],
        [0.20, 0.14, 0.16, 0.13, 0.73, 0.65, 0.96, 0.32],
        [0.19, 0.69, 0.09, 0.86, 0.88, 0.07, 0.01, 0.48],
        [0.83, 0.24, 0.97, 0.04, 0.24, 0.35, 0.50, 0.91]
    ])
    test_tensor = test_tensor.view(1, 1, 8, 8)
    rois = torch.tensor([[0, 3, 7, 5]])
    rois = rois.view(1, -1, 4)
    output = roi_pooling(test_tensor, rois, (2, 2))
    print(output)
I implemented this by referencing this website: https://deepsense.ai/region-of-interest-pooling-explained/. The test_tensor and the RoI also come from the website's example. However, as the website shows, the output should be
[ [0.85, 0.84], [0.97, 0.96] ]
instead of my demo's output:
[ [0.85, 0.96], [0.97, 0.96] ]
So, what exactly is wrong with my code? Is the coordinate-splitting step wrong?
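For reference, here is a small check that can be run right after the demo above (it reuses test_tensor and F from the snippet; the crop indices are just the RoI (x=0, y=3, w=7, h=5) written out by hand), so the cropped sub-grid can be compared against the region drawn in the blog post:

# Crop the RoI manually and pool it in isolation.
region = test_tensor[:, :, 3:3 + 5, 0:0 + 7]            # rows 3..7, cols 0..6
print(region.squeeze())                                 # the 5x7 sub-grid being pooled
print(F.adaptive_max_pool2d(region, (2, 2)).squeeze())  # its 2x2 max-pooled output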
C++ has a very efficient routine to calculate eigenvalues and eigenvectors in the MKL library, the function dgeev. But it calculates all eigenvalues and all left and right eigenvectors.
DGEEV Example Program Results

Eigenvalues
 (  2.86, 10.76)  (  2.86,-10.76)  ( -0.69,  4.70)  ( -0.69, -4.70)  -10.46

Left eigenvectors
 (  0.04,  0.29)  (  0.04, -0.29)  ( -0.13, -0.33)  ( -0.13,  0.33)    0.04
 (  0.62,  0.00)  (  0.62,  0.00)  (  0.69,  0.00)  (  0.69,  0.00)    0.56
 ( -0.04, -0.58)  ( -0.04,  0.58)  ( -0.39, -0.07)  ( -0.39,  0.07)   -0.13
 (  0.28,  0.01)  (  0.28, -0.01)  ( -0.02, -0.19)  ( -0.02,  0.19)   -0.80
 ( -0.04,  0.34)  ( -0.04, -0.34)  ( -0.40,  0.22)  ( -0.40, -0.22)    0.18

Right eigenvectors
 (  0.11,  0.17)  (  0.11, -0.17)  (  0.73,  0.00)  (  0.73,  0.00)    0.46
 (  0.41, -0.26)  (  0.41,  0.26)  ( -0.03, -0.02)  ( -0.03,  0.02)    0.34
 (  0.10, -0.51)  (  0.10,  0.51)  (  0.19, -0.29)  (  0.19,  0.29)    0.31
 (  0.40, -0.09)  (  0.40,  0.09)  ( -0.08, -0.08)  ( -0.08,  0.08)   -0.74
 (  0.54,  0.00)  (  0.54,  0.00)  ( -0.29, -0.49)  ( -0.29,  0.49)    0.16
For large matrices this takes a lot of time, especially if eigenvectors have to be computed for a large number of matrices.
So the main question is: how can I compute, as fast as possible, just one eigenvector of a real matrix for the single eigenvalue lambda = 1?
Or, equivalently, how can I solve the system of linear equations (A - E) x = 0, where A is a real matrix and E is the identity matrix?
The eigenvector is guaranteed to exist and consists only of real numbers.
You can't really solve B x = 0 with B = A - I. This equation has infinitely many solutions if A has a unit eigenvalue.
For the simplest solution, put x(n) = 1 for some arbitrary n, say n = N. Then use the first N-1 equations to find x(1:N-1). Effectively, you'll have to solve B' x' = r for x' = x(1:N-1), where B' = B(1:N-1, 1:N-1) and r = -B(1:N-1, N). This can easily be done with the MKL/LAPACK ?gesv routine by specifying appropriate values for the matrix sizes and leading dimensions.
Note that depending on A, B' can also be singular. Then you'll have to fix more x_n components. In the extreme case, when A = I, all x_ns are arbitrary.
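As a quick illustration, here is the same idea sketched in NumPy (np.linalg.solve stands in for the ?gesv call, and the small row-stochastic matrix is just a made-up test case with a known unit eigenvalue):

import numpy as np

def unit_eigenvector(A):
    # Eigenvector of A for lambda = 1: fix x(N) = 1 and solve the first N-1
    # equations of (A - I) x = 0 for the remaining components.
    B = A - np.eye(A.shape[0])
    Bp = B[:-1, :-1]                    # B' = B(1:N-1, 1:N-1)
    r = -B[:-1, -1]                     # r  = -B(1:N-1, N)
    x = np.ones(A.shape[0])
    x[:-1] = np.linalg.solve(Bp, r)     # the ?gesv step
    return x

A = np.array([[0.5, 0.5],
              [0.3, 0.7]])              # row-stochastic, so A x = x for x = (1, 1)
print(unit_eigenvector(A))              # -> [1. 1.]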
Can anyone explain to me how I can apply fminsearch to this equation to find the value of k (the diode ideality factor) using the MATLAB command window?
I = 10^-9 * (exp(38.68*V/k) - 1)
I have data values as follows:
Voltage = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
Current = [0, 0, 0, 0, 0, 0, 0, 0.07, 0.92, 12.02, 158.29]
I used fminsearch and an error message appeared:
Matrix dimensions must agree.
Error in @(k)sum((I(:)-Imodel(V(:),k)).^2)
Error in fminsearch (line 189)
fv(:,1) = funfcn(x,varargin{:});
I used this fminsearch code:
V = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0];
I = [0, 0, 0, 0, 0, 0, 0.07 ,0.92 ,12.02 ,158.29];
Imodel = @(V,k) 1E-9*(exp(38.68*V/k)-1);
k0 = 1;
kmodel = fminsearch(@(k) sum((I(:)-Imodel(V(:),k)).^2), k0)
Can someone please explain what the problem with this code is?
It looks like you are carrying on from this post: Fminsearch Matlab (Non Linear Regression). The linked post is trying to find the coefficient k in your equation that minimizes the sum of squared errors between the current predicted by the diode's current-voltage relation and the current actually measured from the diode. This post is simply trying to get that off the ground.
In any case, this is a very simple error. You're missing an element in your current array I. It's missing one 0. You can verify this by using numel on both V and I. Basically, V and I don't match in size. numel(V) == 11 and numel(I) == 10.
Compare the definition you have at the top of your question with the array you defined in your code: the code version is missing one zero:
I = [0, 0, 0, 0, 0, 0, 0, 0.07, 0.92, 12.02, 158.29];
%// ^
When I run the code with this new I, I get:
>> kmodel
kmodel =
1.4999
Can anyone explain to me how I can apply non-linear regression to this equation to find k using the MATLAB command window?
I = 10^-9 * (exp(38.68*V/k) - 1)
I have data values as follows:
Voltage = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
Current = [0, 0, 0, 0, 0, 0, 0, 0.07, 0.92, 12.02, 158.29]
[NEW]: I have now tried fminsearch as an alternative, and another error message appeared:
Matrix dimensions must agree.
Error in @(k)sum((I(:)-Imodel(V(:),k)).^2)
Error in fminsearch (line 189)
fv(:,1) = funfcn(x,varargin{:});
I used this fminsearch code:
>> V = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0];
>> I = [0, 0, 0, 0, 0, 0, 0.07 ,0.92 ,12.02 ,158.29];
>> Imodel = @(V,k) 1E-9*(exp(38.68*V/k)-1);
>> k0 = 1;
>> kmodel = fminsearch(@(k) sum((I(:)-Imodel(V(:),k)).^2), k0)
>> kmodel = fminsearch(@(k) sum((I(:)-Imodel(V(:),k)).^2), k0);
You want to find the parameter k that minimizes the sum of squared errors of your exponential model (BTW, is that a current/voltage characteristic?) given the current data I and voltage data V as vectors:
Imodel = @(V,k) 1E-9*(exp(38.68*V/k)-1);
k0 = 1;
kmodel = fminsearch(@(k) sum((I(:)-Imodel(V(:),k)).^2), k0);
plot(V(:), I(:), 'ok', V(:), Imodel(V(:),kmodel), '-r');
The anonymous function calculates the sum of squared errors. The search for the parameter k that minimizes the model error starts from the value 1; please change it to a more appropriate value if you have a good guess for it.
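For comparison, the same least-squares fit can be sketched in Python with SciPy's Nelder-Mead minimizer (fminsearch uses the same simplex method); this is only an illustrative sketch, using the 11-element Current vector from the definition at the top of the question:

import numpy as np
from scipy.optimize import minimize

V = np.array([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
I = np.array([0, 0, 0, 0, 0, 0, 0, 0.07, 0.92, 12.02, 158.29])

def imodel(V, k):
    # diode equation with the constants from the question
    return 1e-9 * (np.exp(38.68 * V / k) - 1)

def sse(k):
    # sum of squared errors between measured and modelled current
    return np.sum((I - imodel(V, k[0])) ** 2)

res = minimize(sse, x0=[1.0], method='Nelder-Mead')
print(res.x)   # should land near k ~ 1.5, matching the fminsearch result above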
I have a table (w, alfa, eta):
w = [0, 0.5, 1]
alfa = [0, 0.3, 0.6, 0.9]
eta(0,0.3) = 0.23
eta(0.5,0) = 0.18
eta(0.5,0.6) = 0.65
eta(1,0.9) = 0.47
where eta = f(w, alfa).
How can I interpolate the data to obtain all the values in this table?
I tried griddata, interp2, etc., but I can't get it to work.
It seems like griddata should do the job in your case. However, you should note that your inputs require extrapolation as well as interpolation.
>> [xout yout] = meshgrid( w, alfa ); % output points
>> w_in = [ 0, 0.5, 0.5, 1 ];
>> a_in = [ 0.3, 0, 0.6, 0.9 ];
>> e_in = [ 0.23, 0.18, 0.65, 0.47 ];
>> eta_out = griddata( w_in, a_in, e_in, xout, yout, 'linear' )
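The same thing can be sketched in Python with scipy.interpolate.griddata; note that its 'linear' method returns NaN outside the convex hull of the input points (it does not extrapolate), so those cells have to be filled separately, here crudely with 'nearest':

import numpy as np
from scipy.interpolate import griddata

w    = np.array([0, 0.5, 1.0])
alfa = np.array([0, 0.3, 0.6, 0.9])

pts = np.array([[0.0, 0.3],      # known (w, alfa) pairs from the table
                [0.5, 0.0],
                [0.5, 0.6],
                [1.0, 0.9]])
eta = np.array([0.23, 0.18, 0.65, 0.47])

W, A = np.meshgrid(w, alfa)                               # full 4x3 output grid
eta_lin  = griddata(pts, eta, (W, A), method='linear')    # NaN where extrapolation is needed
eta_near = griddata(pts, eta, (W, A), method='nearest')   # crude fill for those cells
eta_out  = np.where(np.isnan(eta_lin), eta_near, eta_lin)
print(eta_out)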
For a Discrete Time Markov Chain problem, I have the following:
1) Transition matrix:
0.6 0.4 0.0 0.0
0.0 0.4 0.6 0.0
0.0 0.0 0.8 0.2
1.0 0.0 0.0 0.0
2) Initial probability vector:
1.0 0.0 0.0 0.0
So, I wrote the following SciLab code to get to the stationary vector:
P = [0.6, 0.4, 0, 0; 0, 0.4, 0.6, 0; 0, 0, 0.8, 0.2; 1, 0, 0, 0]
PI = [1, 0, 0, 0]
R = PI * P
count = 0;
for i = 1 : 35 // the stationary vector is already reached at iteration 33, but I went further to be sure
    R = R * P;
    count = count + 1
    disp("count = " + string(count))
end
PI // shows the initial probability vector
P  // shows the transition matrix
R  // shows the resulting stationary vector
After iteration number 33, the following resulting stationary vector is obtained:
0.2459016 0.1639344 0.4918033 0.0983607
What manual calculations do I have to perform in order to get the stationary vector above, without having to multiply by the transition matrix 33 times and then multiply the result by the initial vector?
I was told that the calculations are quite simple, but I just could not work out what to do, even after reading some books.
Explanations are of course welcome, but above all I would like to have the exact answer for this specific case.
You can solve the DTMC in Octave by using this short piece of code:
P = [
0.6, 0.4, 0, 0;
0, 0.4, 0.6, 0;
0, 0, 0.8, 0.2;
1, 0, 0, 0
]
pis = [P' - eye(size(P)); ones(1, length(P))] \ [zeros(length(P), 1); 1]
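That one-liner simply solves the balance equations (P' - I) pi = 0 together with the normalization sum(pi) = 1 as one stacked linear system, which is exactly the "manual calculation" that replaces the 33 iterations. For illustration, here is the same computation sketched in NumPy (np.linalg.lstsq standing in for Octave's backslash):

import numpy as np

P = np.array([[0.6, 0.4, 0.0, 0.0],
              [0.0, 0.4, 0.6, 0.0],
              [0.0, 0.0, 0.8, 0.2],
              [1.0, 0.0, 0.0, 0.0]])

n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones((1, n))])   # stack (P' - I) and the row of ones
b = np.concatenate([np.zeros(n), [1.0]])            # right-hand side: zeros and the 1
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # -> approximately [0.2459, 0.1639, 0.4918, 0.0984]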
Or with SAGE with this code:
P = matrix(RR, 4, [
[0.6, 0.4, 0, 0],
[ 0, 0.4, 0.6, 0],
[ 0, 0, 0.8, 0.2],
[ 1, 0, 0, 0]
])
I = matrix(4, 4, 1); # I; I.parent()
s0, s1, s2, s3 = var('s0, s1, s2, s3')
eqs = vector((s0, s1, s2, s3)) * (P-I); eqs[0]; eqs[1]; eqs[2]; eqs[3]
pis = solve([
eqs[0] == 0,
eqs[1] == 0,
eqs[2] == 0,
eqs[3] == 0,
s0+s1+s2+s3==1], s0, s1, s2, s3)
In both cases, the resulting steady-state probability vector is:
pis =
0.245902
0.163934
0.491803
0.098361
I hope it helps.
WBR,
Albert.