How to implement weighted-softmax in chainer

I am reproducing the paper https://arxiv.org/abs/1711.11575, which contains a weighted-softmax formula (a softmax whose terms are scaled by weights w_G^{mn}).
But Chainer only has F.softmax, and there is no way to put a weight on it.
How can I implement that formula?

If you want to add the term w_G^{mn}, then adding log(w_G^{mn}) to each output (logit) before applying the usual softmax should have the same effect, since softmax(x + log w) is proportional to w * exp(x).
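To make that identity concrete, here is a minimal NumPy sketch (in Chainer itself this would be F.softmax(x + F.log(w)); the logits and weight values below are made up for illustration):

```python
import numpy as np

def weighted_softmax(logits, weights):
    # softmax(x + log w) = w * exp(x) / sum(w * exp(x)),
    # i.e. adding log-weights to the logits weights the softmax.
    shifted = logits + np.log(weights)
    shifted -= shifted.max()          # subtract max for numerical stability
    e = np.exp(shifted)
    return e / e.sum()

x = np.array([1.0, 2.0, 3.0])         # example logits
w = np.array([0.5, 1.0, 2.0])         # example weights w_G^{mn}
direct = w * np.exp(x) / (w * np.exp(x)).sum()
print(np.allclose(weighted_softmax(x, w), direct))  # True
```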


How to transform SVM output into probabilities using the 'Platt scaling' method?

I am working on handwritten image recognition, using a support vector machine as the classifier. The score matrix below shows an example of the scores returned by the SVM for 5 samples; the number of classes is also 5. I want to transform this matrix into probabilities.
score=[ 0.2590 -0.6033 -1.1350 -1.2347 -0.9776
       -1.4727 -0.2136 -0.9649  0.1480 -1.4761
       -0.9637 -0.8662  0.0674 -1.0051 -1.1293
       -2.1230 -0.8805 -0.9808 -0.0520 -0.0836
       -1.6976 -1.1578 -0.9205 -1.1101  1.0796]
From research on existing methods, I found that Platt's scaling method is the most appropriate in my case. I found an implementation of this method at the linked Platt scaling page, but I don't understand the third parameter it takes. Please help me understand this implementation and make it executable.
I await your answers and thank you in advance.
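For what it's worth, Platt scaling simply fits a sigmoid P(y=1|f) = 1/(1+exp(A*f+B)) to the classifier's decision values f (if the linked implementation follows Platt's original pseudocode, the extra parameters are typically the counts of positive and negative training examples, used to build regularized targets). Here is a hedged sketch; the toy scores, labels, learning rate, and iteration count are made-up illustration values:

```python
import numpy as np

def fit_platt(scores, labels, n_iter=2000, lr=0.1):
    # Fit p = 1 / (1 + exp(-(a*f + b))) by gradient descent on the
    # cross-entropy loss; Platt's P = 1/(1+exp(A*f+B)) is the same
    # sigmoid with A = -a, B = -b.
    a, b = 0.0, 0.0
    t = labels.astype(float)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        g = p - t                     # gradient of the loss w.r.t. the logit
        a -= lr * np.mean(g * scores)
        b -= lr * np.mean(g)
    return a, b

# toy decision values: positives score high, negatives score low
f = np.array([2.0, 1.5, 1.0, -1.0, -1.5, -2.0])
y = np.array([1, 1, 1, 0, 0, 0])
a, b = fit_platt(f, y)
prob = 1.0 / (1.0 + np.exp(-(a * f + b)))   # calibrated probabilities
```

For a multi-class score matrix like yours, one sigmoid would be fitted per class (one-vs-rest) and the resulting probabilities normalized.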

How to calculate a variable using two different sets of equations selected according to a criterion from a large data set in Matlab

I have to calculate a variable (settling velocity) using two different sets of equations, selected according to a criterion, from a large data set in MATLAB.
I load all sediment diameter (D50) values from a text file and then calculate the settling velocity.
The settling velocity depends on the sediment diameter: for sand I want to use one set of equations, and for silt and clay another. The criterion is D50 > 0.0000625.
I used if and else conditions (please see my script below; I am new to MATLAB, so it may not be efficient).
However, it uses only one set of equations to calculate the settling velocity, so there must be a problem with how the if condition is written. Here is the sample data set.
D50=
0.0002626
0.0002626
0.000003504
0.0000108
0.0000985
The answer I get;
ws=
0.069
0.069
0.0000123
0.000117
0.009717
But the answer should be;
ws=
0.03681
0.03681
0.0000123
0.000117
0.009717
From the results of the following script, I can see that the calculations were done using only the equations after the else condition; the calculations under the if condition were never executed. When I run each set of equations separately, I get the right answers.
But I want to include both sets of equations and select between them according to the criterion defined above. (This calculation is only part of a larger script simulating morphological evolution due to sea-level rise for my PhD work. Previously my script included only the first set of equations, and I want to improve the model by using both.)
Below is my code. I am very grateful to anyone who can help me sort out this problem.
%%% Settling velocity
g=9.81;
psed=2650;
pw=1027;
neu=0.000001004;
p=0.4;
load D50.txt
i=1:length(D50);
if D50(i) > 0.0000625
Dstar=D50(i)*(g*((psed/pw)-1)/neu^2)^(1/3);
Dstarpower=Dstar.^3;
j=1:length(Dstarpower);
idleA=ones(size(D50));
idleB=107.3296*idleA+1.049*Dstarpower;
idleC=idleB.^0.5;
idleD=idleC-10.36*idleA;
idleE=neu*idleD;
ws=idleE./D50
else
ws=1000000*(D50.^2)
end
CoefA=ws/((1-p)*psed);
fileID1=fopen('ws.txt','w');
fprintf(fileID1,'%6.6f\r\n',ws);
fclose(fileID1);
fileID2=fopen('CoefA.txt','w');
fprintf(fileID2,'%6.6f\r\n',CoefA);
fclose(fileID2);
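The root cause is that MATLAB's if applied to a vector condition is true only when every element satisfies it, so only one branch ever runs on mixed data; the MATLAB fix is logical indexing (e.g. sand = D50 > 0.0000625; ws(sand) = ...; ws(~sand) = ...). The same element-wise selection, sketched in NumPy with the first four sample diameters for illustration:

```python
import numpy as np

# Element-wise branch selection that a vectorized MATLAB 'if' cannot do.
g, psed, pw, neu = 9.81, 2650.0, 1027.0, 0.000001004

D50 = np.array([0.0002626, 0.0002626, 0.000003504, 0.0000108])
ws = np.empty_like(D50)

sand = D50 > 0.0000625                           # criterion, per element
Dstar = D50[sand] * (g * ((psed / pw) - 1) / neu**2) ** (1.0 / 3.0)
ws[sand] = neu * (np.sqrt(107.3296 + 1.049 * Dstar**3) - 10.36) / D50[sand]
ws[~sand] = 1e6 * D50[~sand]**2                  # silt and clay
print(ws)   # first two ~0.0368 from the sand equations, last two from the silt one
```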

LIBSVM in MATLAB/Octave - what's the output of libsvmread?

The second output of the libsvmread command is a set of features for each given training example.
For example, in the following MATLAB command:
[heart_scale_label, heart_scale_inst] = libsvmread('../heart_scale');
This second variable (heart_scale_inst) holds content in a form that I don't understand, for example:
<1, 1> -> 0.70833
What is the meaning of it? How is it to be used (I can't plot it, the way it is)?
PS. If anyone could please recommend a good LIBSVM tutorial, I'd appreciate it. I haven't found anything useful and the README file isn't very clear... Thanks.
The definitive tutorial on LIBSVM for beginners is A Practical Guide to Support Vector Classification, available from the site of the LIBSVM authors.
The second output is called the instance matrix. It is a matrix, call it M, where M(1,:) holds the features of data point 1, and so on. The matrix is sparse, which is why it prints strangely; to see it in full, print full(M).
[heart_scale_label, heart_scale_inst] = libsvmread('../heart_scale');
with heart_scale_label and heart_scale_inst you should be able to train an SVM by issuing:
mod = svmtrain(heart_scale_label,heart_scale_inst,'-c 1 -t 0');
I strongly suggest you read the guide linked above to learn how to set the c parameter (and, in the case of an RBF kernel, the gamma parameter), but the line above is how you would train with that data.
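To illustrate why the sparse instance matrix "prints weirdly", here is a small SciPy analogue (the values are made up; MATLAB's full(M) corresponds to toarray() here):

```python
import numpy as np
from scipy.sparse import csr_matrix

# A sparse matrix stores only its nonzero entries, so printing it lists
# (row, col) -> value pairs instead of a rectangular table.
M = csr_matrix(np.array([[0.70833, 0.0, 0.0],
                         [0.0,     0.0, 0.25]]))
print(M)                 # lists stored entries, e.g. "(0, 0)  0.70833"
dense = M.toarray()      # densify, like full(M) in MATLAB
print(dense)
```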
I think it is the probability with which the test case has been predicted to belong to the heart_scale label category.

why am I getting different answers with dsolve?

I have defined the following ODE
syms R1 C1 vc0 Vin
Vc_ode = 'Dvc+vc/(R1*C1)=(Vin)/(R1*C1)';
Vc=dsolve(Vc_ode,'vc(0)=vc0','t');
and the solution I receive is
Vin - (Vin - vc0)/exp(t/(C1*R1))
while solving manually I get
Vin +vc0*exp(-t/(C1*R1))
both are correct solutions, but is there a way to reach my desired solution?
I think the practical answer would be: No, you cannot let MATLAB reach your desired solution.
When looking at the dsolve input there is no option to specify what the output should look like. It is just a guess but this may be because it is hard to translate your desired style into code.
The only thing that might make a difference is the way you write the input formula but I would suspect it is not going to make much difference.
On the other hand, the academic answer would be: Everything is possible, you may need to make your own dsolve function though.
The problem is that the manual solution vc(t) = Vin + vc0*exp(-t/(C1*R1)) is incorrect: it has vc(0) = Vin + vc0, which is not equal to vc0, and that is why your solutions differ. A theorem states that a first-order linear ODE with an initial condition vc(t_0) = ... has exactly one solution. I suggest that you carefully go through your steps.
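A quick numerical check supports this: with arbitrary assumed values for R1, C1, Vin, and vc0, both expressions satisfy the ODE itself, but only dsolve's result satisfies the initial condition vc(0) = vc0:

```python
import numpy as np

R1, C1, Vin, vc0 = 2.0, 3.0, 5.0, 1.0    # arbitrary test values

def dsolve_sol(t):   # MATLAB's answer
    return Vin - (Vin - vc0) * np.exp(-t / (C1 * R1))

def manual_sol(t):   # the hand-derived answer
    return Vin + vc0 * np.exp(-t / (C1 * R1))

def ode_residual(f, t, h=1e-6):
    # residual of Dvc + vc/(R1*C1) - Vin/(R1*C1), derivative by central difference
    dv = (f(t + h) - f(t - h)) / (2 * h)
    return dv + f(t) / (R1 * C1) - Vin / (R1 * C1)

print(dsolve_sol(0.0))   # 1.0 -> equals vc0
print(manual_sol(0.0))   # 6.0 -> Vin + vc0, violates vc(0) = vc0
```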

Niblack algorithm for Document binarization

I have this photo:
and I'm trying to binarize the document using the Niblack algorithm.
I've implemented the basic Niblack algorithm,
T = mean + k * standardDeviation
and this was its result:
The problem is that in some parts of the image the window contains no objects, so the algorithm detects the noise as objects and amplifies it.
I tried applying a blurring filter followed by global thresholding;
this was the result:
which cannot be fixed by any other filter.
I guess the only solution is to prevent the algorithm from detecting global noise when the window is free of objects.
I want to do this with the Niblack algorithm, not another algorithm, so any suggestions?
I tried the Sauvola algorithm from the paper Adaptive document image binarization by J. Sauvola and M. Pietikäinen, section 3.3.
It is a modified version of Niblack that uses a modified thresholding equation,
and it returned pretty good results:
I also tried another modification of Niblack, the one described in this paper
in section 5.5, Algorithm No. 9a: Université de Lyon, INSA, France (C. Wolf, J.-M. Jolion),
which returned good results as well:
Did you look here: https://stackoverflow.com/a/9891678/105037
local_mean = imfilter(X, filt, 'symmetric');
local_std = sqrt(max(imfilter(X .^ 2, filt, 'symmetric') - local_mean .^ 2, 0));
X_bin = X >= (local_mean + k_threshold * local_std);
(Note the subtraction of local_mean.^2: the standard deviation is sqrt(E[X^2] - E[X]^2), not sqrt(E[X^2]).)
I don't see many options here if you insist on using Niblack. You can change the size and type of the filter, and the threshold.
BTW, it seems that your original image is in color. That information can significantly improve black text detection.
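For reference, Niblack with the integral-image speed-up (Shafait-style local mean and standard deviation) can be sketched in NumPy; the window size w and constant k below are assumptions you would tune for your document:

```python
import numpy as np

def niblack(img, w=15, k=-0.2):
    # Local mean/std over a w-by-w window via integral images, the
    # speed-up described by Shafait et al. (2008). w must be odd.
    img = img.astype(np.float64)
    H, W = img.shape
    pad = w // 2
    p = np.pad(img, pad, mode='symmetric')
    # integral images of x and x^2, with a leading zero row/column
    I1 = np.zeros((H + 2 * pad + 1, W + 2 * pad + 1))
    I2 = np.zeros_like(I1)
    I1[1:, 1:] = p.cumsum(0).cumsum(1)
    I2[1:, 1:] = (p ** 2).cumsum(0).cumsum(1)

    def box(I):  # sum over each w-by-w window from 4 corner lookups
        return I[w:w + H, w:w + W] - I[:H, w:w + W] - I[w:w + H, :W] + I[:H, :W]

    n = float(w * w)
    mean = box(I1) / n
    var = box(I2) / n - mean ** 2
    std = np.sqrt(np.clip(var, 0.0, None))   # clip tiny negative rounding errors
    return img > mean + k * std              # Niblack: T = m + k*s

# tiny synthetic check: a bright square on a dark background
img = np.zeros((20, 20))
img[8:12, 8:12] = 255.0
out = niblack(img, w=15, k=-0.2)
```

In empty windows mean and std collapse toward the noise level, which is exactly why plain Niblack amplifies background noise; increasing w (or switching to Sauvola's equation) mitigates this.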
There is a range of methods that can help in this situation:
Of course, you can change the algorithm itself =)
It is also possible to simply apply morphological filters: first apply a maximum over the window, and then a minimum. You should tune the window size to achieve a better result; see wiki.
Or you can choose the hardest but best way and try to improve Niblack's scheme: increase Niblack's window size whenever the standard deviation is smaller than some fixed number (which should be tuned).
I tried the Niblack algorithm with k = -0.99 and a window of 990, using the optimization from
Shafait, "Efficient Implementation of Local Adaptive Thresholding Techniques Using Integral Images", 2008,
with T = mean + k * standardDeviation; I got this result:
The implementation of the algorithm was taken from here.