I am analysing Gafchromic filters in a free program called ImageJ, which uses a simplified form of Java to write macros.
I have a set of data points that I have fitted with several different methods, and I have decided that a third-degree polynomial fits the data best. However, I need to work with the actual curve, so I need to somehow extract the equation/formula of that polynomial. This should be possible, since the parameters defining the polynomial are listed on the generated graph, but I can't seem to find a way to extract them in the code.
Here's my code so far:
n = nResults();
x = newArray(n);
for (i = 0; i < x.length; i++) {
    x[i] = getResult("Grays ", i);   // dose values (column "Grays " in the Results table)
}
y = newArray(n);
for (i = 0; i < y.length; i++) {
    y[i] = getResult("Mean ", i);    // measured mean pixel values (column "Mean ")
}
// Do all possible fits, plot them and add the plots to a stack
setBatchMode(true);
for (i = 0; i < Fit.nEquations; i++) {
    Fit.doFit(i, x, y);
    Fit.plot();
    if (i == 0)
        stack = getImageID;
    else {
        run("Copy");
        close();
        selectImage(stack);
        run("Add Slice");
        run("Paste");
    }
    Fit.getEquation(i, name, formula);
    print(""); print(name + " [" + formula + "]");
    print("    R^2=" + d2s(Fit.rSquared, 3));
    for (j = 0; j < Fit.nParams; j++)
        print("    p[" + j + "]=" + d2s(Fit.p(j), 6));
}
setBatchMode(false);
run("Select None");
rename("Curve Fits");
As hinted above, I already got an answer elsewhere. Nonetheless, I'd like to also keep it here for the record.
Basically, the answer is already included in the original post, as it prints the individual variables into the "Log" window.
For the third-degree polynomial, I could have just used:
Fit.doFit(2, x, y); // 2 is 3rd Degree Polynomial
Fit.plot();
rename("Calibrating curve");
And then the coefficients can be extracted easily, like this:
a = Fit.p(0);   // constant term
b = Fit.p(1);   // coefficient of x
c = Fit.p(2);   // coefficient of x^2
d = Fit.p(3);   // coefficient of x^3 (the fit is y = a + b*x + c*x^2 + d*x^3)
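With the four coefficients in hand, the curve itself can be used directly, for example to evaluate the fit at an arbitrary dose or to invert the calibration and recover the dose that corresponds to a measured mean value. Below is a minimal sketch of that idea in plain Java (the macro language is close enough that it translates almost line for line); the coefficient values and the bracketing interval are placeholders, and, if I remember the macro functions correctly, the built-in Fit.f(x) can also evaluate the fitted curve for you after Fit.doFit.

public class CalibrationCurve {
    // Placeholder coefficients; substitute the values from Fit.p(0)..Fit.p(3).
    static final double a = 0.0, b = 1.0, c = 0.1, d = 0.01;

    // Value of the calibration curve at dose x (in Gray).
    static double f(double x) {
        return a + b*x + c*x*x + d*x*x*x;
    }

    // Invert the curve by bisection: find the dose whose predicted mean equals 'mean'.
    // Assumes the curve is monotonic on [lo, hi], which should hold over the calibrated range.
    static double doseForMean(double mean, double lo, double hi) {
        for (int k = 0; k < 60; k++) {
            double mid = 0.5 * (lo + hi);
            if ((f(mid) - mean) * (f(lo) - mean) <= 0) hi = mid; else lo = mid;
        }
        return 0.5 * (lo + hi);
    }

    public static void main(String[] args) {
        System.out.println("mean at 2 Gy: " + f(2.0));
        System.out.println("dose for mean 3.0: " + doseForMean(3.0, 0.0, 10.0));
    }
}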
I have run into a problem when using MATLAB Coder to generate C code. My MATLAB function contains a statement like this:
function [ B ] = fn1( A )
a = A(:,2);
B = a+1;
end
A is the input parameter, a 4x2 matrix. I got this C code:
void fn1(const float A[8], float B[4])
{
int i0;
for (i0 = 0; i0 < 4; i0++) {
B[i0] = A[4 + i0] + 1.0F;
}
}
B is not the 2nd column of A. In the "Define Input Types" step I changed rows/columns, but it still does not work. I'm using MATLAB R2016b. Is there an additional setting, or any other advice, to solve this problem? Thanks.
Keyword: array layout.
MATLAB arrays use column-major layout while C arrays use row-major layout; this mismatch causes the problem. The generated code indexes A in column-major order (A[4 + i0] is the second column only if the flat array is filled column by column), so the caller has to pass A in that order. MATLAB Coder added an 'array layout' (row-major) option in a later release (R2018b, if I remember correctly), but it is not available in R2016b.
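To make the mismatch concrete, here is a small, self-contained sketch (written in Java purely for illustration, not MATLAB Coder output) of how the same 4x2 matrix flattens under the two conventions, and why A[4 + i0] picks out the second column only in the column-major case:

public class ArrayLayoutDemo {
    public static void main(String[] args) {
        // The 4x2 matrix [1 5; 2 6; 3 7; 4 8] in MATLAB notation
        double[][] A = { {1, 5}, {2, 6}, {3, 7}, {4, 8} };
        int rows = 4, cols = 2;

        double[] colMajor = new double[rows * cols];   // MATLAB / Fortran layout
        double[] rowMajor = new double[rows * cols];   // typical C layout
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                colMajor[c * rows + r] = A[r][c];
                rowMajor[r * cols + c] = A[r][c];
            }
        }

        // colMajor = [1 2 3 4 5 6 7 8] -> elements 4..7 are the 2nd column (5 6 7 8)
        // rowMajor = [1 5 2 6 3 7 4 8] -> elements 4..7 are 3 7 4 8, not a column at all
        for (int i = 0; i < rows; i++) {
            System.out.println("colMajor[4+i] = " + colMajor[4 + i]
                             + "   rowMajor[4+i] = " + rowMajor[4 + i]);
        }
    }
}

So if the C caller fills A[8] row by row, the generated fn1 reads the wrong elements; filling it column by column, as MATLAB expects, gives the second column as intended.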
Why am I getting the wrong answer (err2) from GoodnessOfFit.StandardError? In the code below, I do the computation myself and get the right answer (err3). I get the right answer from GoodnessOfFit.RSquared. Note: esttime and phrf are double[]. Length is 63.
Tuple<double, double> p = Fit.Line(esttime, phrf);
double[] est = new double[esttime.Length];   // fitted values
double ss = 0.0;
for (int j = 0; j < esttime.Length; j++)
{
    est[j] = p.Item1 + p.Item2 * esttime[j];
    ss += Math.Pow(est[j] - phrf[j], 2);
}
double err2 = GoodnessOfFit.StandardError(est, phrf, phrf.Length - 2);
Console.WriteLine(err2.ToString()); // writes 70.91, which is wrong
double err3 = Math.Sqrt(ss / est.Length - 2);   // note: precedence makes this ss/n - 2, not ss/(n - 2)
Console.WriteLine(err3.ToString()); // writes 12.56, which is correct
Answer: The third argument, degrees of freedom, is actually the number of degrees of freedom lost in the regression. So in the example it should be 2 and not phrf.Length - 2. Even so, it does not exactly match my calculation or Excel's.
I get the exact same result with Excel and Math.NET for a polynomial fit when I set the degrees of freedom to be the order of the polynomial fit + 1. So, in the following example:
double[] paramsPoly = MathNet.Numerics.Fit.Polynomial(X.ToArray(),Y.ToArray(),6);
The StandardError function must receive 6 + 1 as its last parameter:
double sey = MathNet.Numerics.GoodnessOfFit.StandardError(Yest, Y, 7);
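Setting the Math.NET argument convention aside, the quantity being computed in all of these snippets is the residual standard error, sqrt(SS_res / (n - p)), where p is the number of fitted parameters (2 for a straight line, polynomial order + 1 for a polynomial). Here is a small reference sketch of that formula, written in Java rather than C# just to keep it library-neutral; whether a library wants p or n - p as its "degrees of freedom" argument is exactly the convention issue described above:

// Residual standard error: sqrt(SS_res / (n - p)), with p = number of fitted parameters.
static double standardError(double[] predicted, double[] observed, int numParams) {
    double ss = 0.0;
    for (int i = 0; i < observed.length; i++) {
        double r = observed[i] - predicted[i];
        ss += r * r;
    }
    return Math.sqrt(ss / (observed.length - numParams));
}

// Line fit from the question:       standardError(est, phrf, 2)
// 6th-order polynomial fit above:   standardError(Yest, Y, 7)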
Currently I am trying to implement Resilient Propagation for my network. I'm doing this based on the encog implementation, but there is one thing I don't understand:
The documentation for RPROP and iRPROP+ says when change > 0: weightChange = -sign(gradient) * delta
The source code in lines 298 and 366 does not have a minus!
Since I assume both are in some case correct: Why is there a difference between the two?
And concerning the gradient: I'm using tanh as the activation function in the output layer. Is this the correct calculation of the gradient?
gradientOutput = (1 - lastOutput[j] * lastOutput[j]) * (target[j] - lastOutput[j]);
After re-reading the relevant papers and looking it up in a textbook, I think the documentation of encog is not correct at this point. Why don't you just try it out by temporarily adding the minus signs in the source code? If the documentation were correct and you used the same initial weights, you would get exactly the same results. In the end it only matters how you use the weightUpdate variable: if the author of the documentation is used to subtracting the weightUpdate from the weights instead of adding it, this will work.
Edit: I revisited the part about the gradient calculation in my original answer.
First, here is a brief explanation of how you can picture the gradient for the weights in your output layer. You start by calculating the error between your outputs and the target values.
What you are now trying to do is to "blame" the neurons in the previous layer that were active. Imagine the output neuron saying: "Well, I have an error here, who is responsible?" Responsible are the neurons of the previous layer. Depending on whether the output is too small or too large compared to the target value, the weight to each neuron in the previous layer is increased or decreased, in proportion to how active that neuron has been.
x is the activation of a neuron in the hidden layer.
o is the activation of the output neuron.
φ is the activation function of the output neuron, φ' its derivative.
Edit2: Corrected the part below. Added matrix style computation of backpropagation.
The error at each output neuron j is:
(1) δ_out,j = φ'(o_j) · (t_j − o_j)
The gradient for the weight connecting hidden neuron i with output neuron j is:
(2) grad_i,j = x_i · δ_out,j
The backpropagated error at each hidden neuron i, with the weights w, is:
(3) δ_hid,i = φ'(x_i) · Σ_j w_i,j · δ_out,j
By repeatedly applying formulas (2) and (3), you can backpropagate all the way to the input layer.
Written out as loops, for a single training sample:
The error at each output neuron j is:
for (int j = 0; j < numOutNeurons; j++) {
    errorOut[j] = activationDerivative(o[j]) * (t[j] - o[j]);
}
The gradient for the weight connecting the hidden neuron i with the output neuron j:
for (int i = 0; i < numHidNeurons; i++) {
    for (int j = 0; j < numOutNeurons; j++) {
        grad[i][j] = x[i] * errorOut[j];
    }
}
The backpropagated error at each hidden neuron i:
for (int i = 0; i < numHidNeurons; i++) {
    errorHid[i] = 0;
    for (int j = 0; j < numOutNeurons; j++) {
        errorHid[i] += weights[i][j] * errorOut[j];   // sum over all output neurons
    }
    errorHid[i] *= activationDerivative(x[i]);
}
In fully connected multilayer perceptrons without convolutions or anything like that, you can use standard matrix operations, which is a lot faster.
Assuming each of your samples is a row in your input matrix and the columns are its attributes, you can propagate the input through your network like this:
activations[0] = input;
for (int i = 0; i < numWeightMatrices; i++) {
    activations[i+1] = activations[i].dot(weightMatrices[i]);
    activations[i+1] = activationFunction(activations[i+1]);
}
Backpropagation then becomes:
n = numWeightMatrices;
error = activationDerivative(activations[n]) * (target - activations[n]);   // element-wise product
for (int l = n-1; l >= 0; l--) {
    gradient[l] = activations[l].transposed().dot(error);
    if (l > 0) {
        error = error.dot(weightMatrices[l].transposed());
        error = activationDerivative(activations[l]) * error;   // element-wise product
    }
}
I omitted the bias neuron in the explanations above. In the literature it is recommended to model the bias neuron as an additional column in each activation matrix which is always 1.0. You will need to deal with some slice assignments. When using the matrix backpropagation loop, do not forget to set the error at the position of the bias to 0 before each step!
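Concretely, the bias handling could look something like the following sketch, written with plain Java double[][] arrays rather than the matrix pseudocode above; withBias and zeroBiasError are hypothetical helper names:

// Append a bias column of 1.0 to an activation matrix (rows = samples).
static double[][] withBias(double[][] a) {
    double[][] out = new double[a.length][a[0].length + 1];
    for (int r = 0; r < a.length; r++) {
        System.arraycopy(a[r], 0, out[r], 0, a[r].length);
        out[r][a[r].length] = 1.0;   // the bias "input" is always 1.0
    }
    return out;
}

// Zero the error column that corresponds to the bias unit before backpropagating
// further: nothing feeds into a bias unit, so it cannot pass an error backwards.
static void zeroBiasError(double[][] error) {
    for (double[] row : error) {
        row[row.length - 1] = 0.0;
    }
}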
The per-weight RPROP update then looks like this (delta is the value that gets added to the weight):
private float resilientPropagation(int i, int j) {
    float gradientSignChange = sign(prevGradient[i][j] * gradient[i][j]);
    float delta = 0;
    if (gradientSignChange > 0) {
        // same gradient sign as in the last step: accelerate
        float change = Math.min(prevChange[i][j] * increaseFactor, maxDelta);
        delta = sign(gradient[i][j]) * change;
        prevChange[i][j] = change;
        prevGradient[i][j] = gradient[i][j];
    }
    else if (gradientSignChange < 0) {
        // sign flipped: the last step was too large, shrink the step size and revert the last update
        float change = Math.max(prevChange[i][j] * decreaseFactor, minDelta);
        prevChange[i][j] = change;
        delta = -prevDelta[i][j];
        prevGradient[i][j] = 0;
    }
    else if (gradientSignChange == 0) {
        // one of the gradients was zero: take a plain step with the current step size
        float change = prevChange[i][j];
        delta = sign(gradient[i][j]) * change;
        prevGradient[i][j] = gradient[i][j];
    }
    prevDelta[i][j] = delta;
    return delta;
}
gradient[i][j] = error[j] * layerInput[i];
weights[i][j] = weights[i][j] + resilientPropagation(i, j);
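The snippets above also assume a sign(...) helper and an activationDerivative(...) that are not shown. Minimal versions, assuming tanh as the activation function (so the derivative can be written in terms of the neuron's output, which is why the question's gradientOutput = (1 - o*o) * (t - o) is correct for tanh):

static float sign(float v) {
    return Math.signum(v);   // -1, 0 or +1
}

// For o = tanh(net), d(o)/d(net) = 1 - tanh(net)^2 = 1 - o*o.
static float activationDerivative(float output) {
    return 1.0f - output * output;
}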
I am searching for a MATLAB implementation of the Moore-Penrose algorithm for computing the pseudo-inverse of a matrix.
I tried several algorithms; this one
http://arxiv.org/ftp/arxiv/papers/0804/0804.4809.pdf
looked good at first glance.
However, the problem is that for matrices with large elements it produces badly scaled intermediate matrices and some internal operations fail. This concerns the following steps:
L=L(:,1:r);
M=inv(L'*L);
I am trying to find a more robust solution that is easy to implement in my other software. Thanks for your help.
I re-implemented one in C# using the Mapack matrix library by Lutz Roeder. Perhaps this, or the Java version, will be useful to you.
/// <summary>
/// The difference between 1 and the smallest exactly representable number
/// greater than one. Gives an upper bound on the relative error due to
/// rounding of floating point numbers.
/// </summary>
const double MACHEPS = 2E-16;
// NOTE: Code for pseudoinverse is from:
// http://the-lost-beauty.blogspot.com/2009/04/moore-penrose-pseudoinverse-in-jama.html
/// <summary>
/// Computes the Moore–Penrose pseudoinverse using the SVD method.
/// Modified version of the original implementation by Kim van der Linde.
/// </summary>
/// <param name="x"></param>
/// <returns>The pseudoinverse.</returns>
public static Matrix MoorePenrosePsuedoinverse(Matrix x)
{
    if (x.Columns > x.Rows)
        return MoorePenrosePsuedoinverse(x.Transpose()).Transpose();

    SingularValueDecomposition svdX = new SingularValueDecomposition(x);
    if (svdX.Rank < 1)
        return null;

    double[] singularValues = svdX.Diagonal;
    double tol = Math.Max(x.Columns, x.Rows) * singularValues[0] * MACHEPS;
    double[] singularValueReciprocals = new double[singularValues.Length];
    for (int i = 0; i < singularValues.Length; ++i)
        singularValueReciprocals[i] = Math.Abs(singularValues[i]) < tol ? 0 : (1.0 / singularValues[i]);

    Matrix u = svdX.GetU();
    Matrix v = svdX.GetV();
    int min = Math.Min(x.Columns, u.Columns);
    Matrix inverse = new Matrix(x.Columns, x.Rows);
    for (int i = 0; i < x.Columns; i++)
        for (int j = 0; j < u.Rows; j++)
            for (int k = 0; k < min; k++)
                inverse[i, j] += v[i, k] * singularValueReciprocals[k] * u[j, k];
    return inverse;
}
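For reference, the same SVD-based approach can be sketched in Java, which may be what the "Java version" mentioned above refers to. This is a condensation of the idea using the JAMA library; treat the exact API calls as an assumption and check them against the JAMA docs:

import Jama.Matrix;
import Jama.SingularValueDecomposition;

public class PseudoInverse {
    static final double MACHEPS = 2.22e-16;

    // Moore-Penrose pseudoinverse via SVD.
    // JAMA's SVD expects rows >= columns, so transpose first if necessary.
    public static Matrix pinv(Matrix x) {
        if (x.getColumnDimension() > x.getRowDimension())
            return pinv(x.transpose()).transpose();

        SingularValueDecomposition svd = x.svd();
        double[] s = svd.getSingularValues();   // sorted in decreasing order

        // Singular values below this tolerance are treated as zero,
        // the same rule the C# code above (and MATLAB's pinv) uses.
        double tol = Math.max(x.getRowDimension(), x.getColumnDimension()) * s[0] * MACHEPS;

        Matrix sPlus = new Matrix(s.length, s.length);   // diag(1/s_i), small values zeroed
        for (int i = 0; i < s.length; i++)
            sPlus.set(i, i, s[i] > tol ? 1.0 / s[i] : 0.0);

        // A = U S V^T  =>  A+ = V S+ U^T
        return svd.getV().times(sPlus).times(svd.getU().transpose());
    }
}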
What is wrong with using the built-in pinv?
Otherwise, you could take a look at the implementation used in Octave. It is not in Octave/MATLAB syntax, but I guess you should be able to port it without much trouble.
Here is the R code I have written (available at http://hamedhaseli.webs.com/downloads) to compute the M-P pseudoinverse. I think it is simple enough to be translated into MATLAB code.
pinv <- function(H) {
    x = t(H) %*% H
    s = svd(x)
    xp = s$d
    for (i in 1:length(xp)) {
        if (xp[i] != 0) {
            xp[i] = 1/xp[i]
        }
        else {
            xp[i] = 0
        }
    }
    return(s$u %*% diag(xp) %*% t(s$v) %*% t(H))
}
I would like to find the inverse of a matrix.
I know this involves an LU factorisation followed by an inversion step, but I cannot find the required functions by searching Apple's docs for 10.7!
This seems like a useful post: Symmetric Matrix Inversion in C using CBLAS/LAPACK, which points out that the sgetrf_ and sgetri_ functions should be used. However, searching for these terms I find nothing in the Xcode docs.
Does anybody have boilerplate code for this matrix operation?
Apple does not document the LAPACK code at all, I guess because they just implement the standard interface from netlib.org. It's a shame that you cannot search for these function names in the built-in Xcode docs; however, the solution is fairly straightforward: just specify the function name in the URL, e.g. for dgetrf_() go to http://www.netlib.org/clapack/what/double/dgetrf.c.
To invert a matrix, two LAPACK functions are needed: dgetrf_(), which performs the LU factorisation, and dgetri_(), which takes the output of the previous function and does the actual inversion.
I created a standard application project using Xcode, added the Accelerate framework, created two C files, matinv.h and matinv.c, and edited the main.m file to remove the Cocoa things:
// main.m
#import "matinv.h"

int main(int argc, char *argv[])
{
    int N = 3;
    double A[N*N];
    A[0] = 1; A[1] = 1; A[2] = 7;
    A[3] = 1; A[4] = 2; A[5] = 1;
    A[6] = 1; A[7] = 1; A[8] = 3;
    matrix_invert(N, A);
    //          [ -1.25  -1.0   3.25 ]
    // A^-1  =  [  0.5    1.0  -1.5  ]
    //          [  0.25   0.0  -0.25 ]
    return 0;
}
Now the header file,
// matinv.h
int matrix_invert(int N, double *matrix);
and then the source file,
// matinv.c
#include <stdlib.h>
#include <Accelerate/Accelerate.h>   // declares the LAPACK routines dgetrf_ and dgetri_
#import <Foundation/Foundation.h>    // for NSLog; requires compiling this file as Objective-C (or use printf instead)
#include "matinv.h"

int matrix_invert(int N, double *matrix) {
    int error = 0;
    int *pivot = malloc(N * sizeof(int));   // LAPACK requires MIN(M,N); here M == N, so N will do fine
    double *workspace = malloc(N * sizeof(double));

    /* LU factorisation */
    dgetrf_(&N, &N, matrix, &N, pivot, &error);
    if (error != 0) {
        NSLog(@"Error 1");
        free(pivot);
        free(workspace);
        return error;
    }

    /* matrix inversion */
    dgetri_(&N, matrix, &N, pivot, workspace, &N, &error);
    if (error != 0) {
        NSLog(@"Error 2");
        free(pivot);
        free(workspace);
        return error;
    }

    free(pivot);
    free(workspace);
    return error;
}