Least squares surface fitting from first principles - MATLAB

I want to do this "by hand" rather than using a surface fitting tool, because depending on the data I have, the form of the fitted surface may vary. So, I first read the data from an Excel sheet, then initialize some coefficients, calculate a 3D surface (f(x,y)), and then calculate the total least-squares sum, which I'd like to minimise. Every time I run the script it tells me that I'm at a local minimum, even when I change the initial values. Changing the tolerance doesn't affect the result either.
This is the code:
% flow function in a separate .m file (an approximation; it's a negative paraboloid, and this function may vary if required):
function Q = flow(P1,P2,a,b,c,d,e,f)
Q1 = a-b.*P1-c.*P1.^2;
Q2 = d-e.*P2-f.*P2.^2;
Q = Q1 + Q2;
% Variable read (in the real script I use xlsread instead)
p1a = [-5, -5, -5, -5, -5, -5, -5, -5, -5, -5];
p2a = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
qa = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1];
p1b = [-6, -6, -6, -6, -6, -6, -6, -6, -6, -6];
p2b = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
qb = [12, 11, 10, 9, 8, 7, 6, 5, 4, 3];
% Variable initialization
coef = [50, 1, 1, 10, 1, 1];
% Function calculation
q1a = flow(p1a,p2a,coef(1),coef(2),coef(3),coef(4),coef(5),coef(6));
q1b = flow(p1b,p2b,coef(1),coef(2),coef(3),coef(4),coef(5),coef(6));
% Least squares
LQa = (qa-q1a).^2;
LQb = (qb-q1b).^2;
Sa = sum(LQa);
Sb = sum(LQb);
St = Sa+Sb;
% Optimization (minimize the least squares sum)
func = @(coef)(St);
init = coef;
opt = optimoptions('fminunc', 'Algorithm', 'quasi-newton', 'Display', 'iter','TolX', 1e-35, 'TolFun', 1e-30);
[coefmin, Stmin] = fminunc(func, init, opt);
If you run this, you should get a result of 15546 for Stmin, but if you change the initial coefficients you'll get another result, and it will also be reported as a local minimum.
What am I doing wrong?

The problem is that your func is just a constant. It simply returns a pre-calculated value, St, which is constant no matter what input you pass to func. Try calling func with a few different inputs to verify this.
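For example, a quick check at the command prompt (the test inputs below are arbitrary):
func(coef)          % returns 15546
func(rand(1,6))     % still 15546
func(zeros(1,6))    % still 15546 -- the input is ignored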
Your objective function needs to contain all the calculations that got you to St. So I suggest you replace your func with a function saved in an m-file looking something like this:
function St = objectiveFunction(coef, p1a, p2a, p1b, p2b, qa, qb)
% Function calculation
q1a = flow(p1a,p2a,coef(1),coef(2),coef(3),coef(4),coef(5),coef(6));
q1b = flow(p1b,p2b,coef(1),coef(2),coef(3),coef(4),coef(5),coef(6));
% Least squares
LQa = (qa-q1a).^2;
LQb = (qb-q1b).^2;
Sa = sum(LQa);
Sb = sum(LQb);
St = Sa+Sb;
end
And then in your script call objectiveFunction using an anonymous function like this:
[coefmin, Stmin] = fminunc(@(coef) objectiveFunction(coef, p1a, p2a, p1b, p2b, qa, qb), init, opt);
The idea is to create an anonymous function that only takes a single parameter, coef, which is the variable that fminunc will perturb and pass back to your objective function. The other parameters that your objectiveFunction needs (i.e. p1a, p2a, p1b, ...) are captured by the anonymous function with their current values, so fminunc never has to know about them.
The rest of your code can stay the same.
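For completeness, the tail of the script would then look something like this sketch (the q1a/q1b lines earlier in the script become redundant, since objectiveFunction recomputes them from coef, but they do no harm):
init = coef;
opt = optimoptions('fminunc', 'Algorithm', 'quasi-newton', 'Display', 'iter');
[coefmin, Stmin] = fminunc(@(coef) objectiveFunction(coef, p1a, p2a, p1b, p2b, qa, qb), init, opt);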

Related

Number of layers vs list(net.parameters())

New to convolutional neural nets so sorry if this doesn't make much sense. I have this code:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
and I think this has 5 layers. However, when I print len(list(net.parameters())) I get 10. Shouldn't this be a list of size 5, with the parameters for each layer?
Quick answer: you get an extra parameter array for each layer, containing the bias vector associated with that layer.
Detailed answer:
I will try to guide you through my process of investigating your question.
It seems like a good idea to see what our 10 parameters are:
for param in net.parameters():
    print(type(param), param.size())
<class 'torch.nn.parameter.Parameter'> torch.Size([6, 3, 5, 5])
<class 'torch.nn.parameter.Parameter'> torch.Size([6])
<class 'torch.nn.parameter.Parameter'> torch.Size([16, 6, 5, 5])
<class 'torch.nn.parameter.Parameter'> torch.Size([16])
<class 'torch.nn.parameter.Parameter'> torch.Size([120, 400])
<class 'torch.nn.parameter.Parameter'> torch.Size([120])
<class 'torch.nn.parameter.Parameter'> torch.Size([84, 120])
<class 'torch.nn.parameter.Parameter'> torch.Size([84])
<class 'torch.nn.parameter.Parameter'> torch.Size([10, 84])
<class 'torch.nn.parameter.Parameter'> torch.Size([10])
We can recognize our 5 layers, plus an extra line for each of them. If we look at a specific layer, for instance the first conv layer, we get:
for param in net.conv1.parameters():
    print(type(param), param.size())
<class 'torch.nn.parameter.Parameter'> torch.Size([6, 3, 5, 5])
<class 'torch.nn.parameter.Parameter'> torch.Size([6])
So now that we know we have two arrays of parameters per layer, the question is why. The first 6*3*5*5 array corresponds to your 6 kernels of size 5*5 with 3 channels; the second one corresponds to the bias associated with each of those kernels. Mathematically speaking, to compute the value at the next layer for a given kernel, you convolve the neighbourhood of the pixel with the kernel and then add a real number. That number is called the bias, and it is empirically known that using a bias gives better results.
You can also create a layer without a bias, and then you will only get one parameter array:
layer = nn.Conv2d(3, 6, 5, bias=False)
for param in layer.parameters():
    print(type(param), param.size())
<class 'torch.nn.parameter.Parameter'> torch.Size([6, 3, 5, 5])

Simple convolution in nd4j

I can't get a simple convolution to work in nd4j, and documentation regarding this specific topic is scarce. What I'm trying to do:
INDArray values = Nd4j.create(new double[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10});
INDArray kernel = Nd4j.create(new double[]{0.5,0.5});
INDArray conv = Nd4j.getConvolution().convn(values, kernel, Convolution.Type.VALID);
No matter the values or the convolution type, I always get the same exception (see below). The error seems to occur when nd4j is trying to transform the array of values into a complex array, presumably to perform a Fourier transform.
I've tried several versions of nd4j (0.9.1, 0.8.0, 0.7.0), but to no avail. Can anyone help?
Exception in thread "main" java.lang.UnsupportedOperationException
at org.nd4j.linalg.api.complex.BaseComplexNDArray.putScalar(BaseComplexNDArray.java:1947)
at org.nd4j.linalg.api.complex.BaseComplexNDArray.putScalar(BaseComplexNDArray.java:1804)
at org.nd4j.linalg.api.complex.BaseComplexNDArray.copyFromReal(BaseComplexNDArray.java:545)
at org.nd4j.linalg.api.complex.BaseComplexNDArray.<init>(BaseComplexNDArray.java:159)
at org.nd4j.linalg.api.complex.BaseComplexNDArray.<init>(BaseComplexNDArray.java:167)
at org.nd4j.linalg.cpu.nativecpu.complex.ComplexNDArray.<init>(ComplexNDArray.java:104)
at org.nd4j.linalg.cpu.nativecpu.CpuNDArrayFactory.createComplex(CpuNDArrayFactory.java:166)
at org.nd4j.linalg.factory.Nd4j.createComplex(Nd4j.java:3345)
at org.nd4j.linalg.convolution.DefaultConvolutionInstance.convn(DefaultConvolutionInstance.java:116)
at org.nd4j.linalg.convolution.BaseConvolution.convn(BaseConvolution.java:66)
at com.example.demo.Main.testing(Main.java:41)
at com.example.demo.Main.main(Main.java:34)
It's a bit trickier, as ND4j currently does not support mathematical convolution out of the box; you have to craft your own implementation.
double[] rawData = {12,10,15,12,10,11,15,12,12};
INDArray data = Nd4j.create(rawData);
double[] rawFilter = {1.0 / 2, 0, 1.0 / 2};
INDArray filter = Nd4j.create(rawFilter);
Nd4jConv1d convolution = new Nd4jConv1d(1, 1, (int) filter.shape()[1], 1, 0);
INDArray output = convolution.forward(data, filter);
As seen in: https://github.com/deeplearning4j/deeplearning4j/blob/af7155d61dc810d3e7139f15f98810e0255b2e17/arbiter/arbiter-deeplearning4j/src/test/java/org/deeplearning4j/arbiter/multilayernetwork/MNISTOptimizationTest.java
Note: you need an additional class, Nd4jConv1d; go to the linked repository to get it.

How to put labels on each data point in a stem plot using MATLAB

So this is my x and y data:
x = [29.745, 61.77, 42.57, 70.049, 108.51, 93.1, 135.47, 52.79, 77.91, 116.7, 100.71, 146.37, 125.53]
y = [6, 6, 12, 24, 24, 12, 24, 8, 24, 24, 24, 48, 8]
stem(x,y);
So I want to label each data point on my stem plot; this is the output I want:
I edited it using Paint. Can MATLAB do this vertical labeling, just like in the image? Please help.
Yes it can! You just need to set the rotation property of the text annotations to 90 and it works fine.
Example:
clear
clc
x = [29.745, 61.77, 42.57, 70.049, 108.51, 93.1, 135.47, 52.79, 77.91, 116.7, 100.71, 146.37, 125.53]
y = [6, 6, 12, 24, 24, 12, 24, 8, 24, 24, 24, 48, 8]
hStem = stem(x,y);
%// Create labels.
Labels = {'none'; 'true';'false';'mean';'none';'';'true';'hints';'high';'low';'peas';'far';'mid'}
%// Get position of each stem 'bar'. Sorry I don't know how to name them.
X_data = get(hStem, 'XData');
Y_data = get(hStem, 'YData');
%// Assign labels.
for labelID = 1 : numel(X_data)
    text(X_data(labelID), Y_data(labelID) + 3, Labels{labelID}, 'HorizontalAlignment', 'center', 'rotation', 90);
end
Which gives the following:
The last label is a bit high so you might want to rescale the axes, but you get the idea.
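For instance, one way to make room for the rotated labels (a sketch; the headroom of 15 data units is just a guess to tune for your data):
ylim([0, max(y) + 15]);   % leave space above the tallest stem for its rotated label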

Index exceeds matrix dimensions encountered when training a model

I have a problem with training a model using the PASCAL dev kit with the discriminatively trained deformable part models system developed by P. Felzenszwalb, D. McAllester, D. Ramanan and their team, which is implemented in MATLAB.
Currently I get this error when I try to train a 1-component model for 'cat' using 10 positive and 10 negative images.
Error:
??? Index exceeds matrix dimensions.
Error in ==> pascal_train at 48
models{i} = train(cls, models{i}, spos{i}, neg(1:maxneg),
0, 0, 4, 3, ...
Error in ==> pascal at 28
model = pascal_train(cls, n, note);
And this is the pascal_train file
function model = pascal_train(cls, n, note)
% model = pascal_train(cls, n, note)
% Train a model with 2*n components using the PASCAL dataset.
% note allows you to save a note with the trained model
% example: note = 'testing FRHOG (FRobnicated HOG)
% At every "checkpoint" in the training process we reset the
% RNG's seed to a fixed value so that experimental results are
% reproducible.
initrand();
if nargin < 3
  note = '';
end
globals;
[pos, neg] = pascal_data(cls, true, VOCyear);
% split data by aspect ratio into n groups
spos = split(cls, pos, n);
cachesize = 24000;
maxneg = 200;
% train root filters using warped positives & random negatives
try
  load([cachedir cls '_lrsplit1']);
catch
  initrand();
  for i = 1:n
    % split data into two groups: left vs. right facing instances
    models{i} = initmodel(cls, spos{i}, note, 'N');
    inds = lrsplit(models{i}, spos{i}, i);
    models{i} = train(cls, models{i}, spos{i}(inds), neg, i, 1, 1, 1, ...
                      cachesize, true, 0.7, false, ['lrsplit1_' num2str(i)]);
  end
  save([cachedir cls '_lrsplit1'], 'models');
end
% train root left vs. right facing root filters using latent detections
% and hard negatives
try
  load([cachedir cls '_lrsplit2']);
catch
  initrand();
  for i = 1:n
    models{i} = lrmodel(models{i});
    models{i} = train(cls, models{i}, spos{i}, neg(1:maxneg), 0, 0, 4, 3, ...
                      cachesize, true, 0.7, false, ['lrsplit2_' num2str(i)]);
  end
  save([cachedir cls '_lrsplit2'], 'models');
end
% merge models and train using latent detections & hard negatives
try
  load([cachedir cls '_mix']);
catch
  initrand();
  model = mergemodels(models);
48:  model = train(cls, model, pos, neg(1:maxneg), 0, 0, 1, 5, ...
                   cachesize, true, 0.7, false, 'mix');
  save([cachedir cls '_mix'], 'model');
end
% add parts and update models using latent detections & hard negatives.
try
  load([cachedir cls '_parts']);
catch
  initrand();
  for i = 1:2:2*n
    model = model_addparts(model, model.start, i, i, 8, [6 6]);
  end
  model = train(cls, model, pos, neg(1:maxneg), 0, 0, 8, 10, ...
                cachesize, true, 0.7, false, 'parts_1');
  model = train(cls, model, pos, neg, 0, 0, 1, 5, ...
                cachesize, true, 0.7, true, 'parts_2');
  save([cachedir cls '_parts'], 'model');
end
save([cachedir cls '_final'], 'model');
I have highlighted the line of code where the error occurs at line 48.
I am pretty sure that the system is reading in both the positive and negative images for training correctly. I have no idea where this error is occurring, since MATLAB does not indicate precisely which index is exceeding the matrix dimensions.
I have tried to tidy up the code as much as possible, so do guide me if I have gone wrong somewhere.
Any suggestions on where I should start looking?
OK, I tried using disp to check the variables in use in pascal_train:
disp(i);
disp(size(models));
disp(size(spos));
disp(length(neg));
disp(maxneg);
So the results returned were:
1
1 1
1 1
10
200
Just replace:
models{i} = train(cls, models{i}, spos{i}, neg(1:maxneg),
with:
models{i} = train(cls, models{i}, spos{i}, neg(1:min(length(neg),maxneg)),
There are several similar statements elsewhere in this script; you should revise them all.
The reason is that your training sample set is small, so your list neg is shorter than maxneg (200).
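A variant of the same fix is to clamp maxneg once, near the top of pascal_train, instead of editing every call site, for example:
maxneg = min(maxneg, length(neg));   % never index past the end of neg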
I don't have an answer to your question, but here is a suggestion that might help you debug this problem yourself.
In the MATLAB menu go to Debug -> Stop if Errors/Warnings... and select "Always stop if error (dbstop if error)". Now run your script again, and this time when you get the error, MATLAB will stop at the line where the error occurred, as if you had put a breakpoint there. At that point you have the whole workspace at your disposal and can check all variables and matrix sizes, to see which variable is giving you the error you are seeing.
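From the command line, the equivalent is (a sketch):
dbstop if error      % pause in the workspace of the function that errors
% ... re-run the training script that raised the error ...
dbclear if error     % restore normal error behaviour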

MATLAB: How to bend lines in an image

I have an image (PNG format) in hand. The lines that bound the ellipses (which represent the nuclei) are overly straight, which is impractical. How could I extract the lines from the image and make them bent, with the precondition that they still enclose the nuclei?
The following is the image:
After bending
EDIT: How can I translate the dilation and filter part in answer 2 into MATLAB? I can't figure it out.
OK, here is a way involving several randomization steps, which are needed to get a "natural", non-symmetrical appearance.
I am posting the actual code in Mathematica, just in case someone cares to translate it to MATLAB.
(* A preparatory step: get your image and clean it*)
i = Import@"http://i.stack.imgur.com/YENhB.png";
i1 = Image@Replace[ImageData[i], {0., 0., 0.} -> {1, 1, 1}, {2}];
i2 = ImageSubtract[i1, i];
i3 = Inpaint[i, i2]
(*Now reduce to a skeleton to get a somewhat random starting point.
The actual algorithm for this dilation does not matter, as long as we
get a random area slightly larger than the original ellipses *)
id = Dilation[SkeletonTransform[
    Dilation[SkeletonTransform@ColorNegate@Binarize@i3, 3]], 1]
(*Now the real random dilation loop*)
(*Init vars*)
p = Array[1 &, 70]; j = 1;
(*Store in w an image with a different color for each cluster, so we
can find edges between them*)
w = (w1 =
WatershedComponents[
GradientFilter[Binarize[id, .1], 1]]) /. {4 -> 0} // Colorize;
(*and loop ...*)
For[i = 1, i < 70, i++,
(*Select edges in w and dilate them with a random 3x3 kernel*)
ed = Dilation[EdgeDetect[w, 1], RandomInteger[{0, 1}, {3, 3}]];
(*The following is the core*)
p[[j++]] = w =
ImageFilter[ (* We apply a filter to the edges*)
(Switch[
Length[#1], (*Count the colors in a 3x3 neighborhood of each pixel*)
0, {{{0, 0, 0}, 0}}, (*If no colors, return bkg*)
1, #1, (*If one color, return it*)
_, {{{0, 0, 0}, 0}}])[[1, 1]] (*If more than one color, return bkg*) &@
Cases[Tally[Flatten[#1, 1]],
Except[{{0.`, 0.`, 0.`}, _}]] & (*But Don't count bkg pixels*),
w, 1,
Masking -> ed, (*apply only to edges*)
Interleaving -> True (*apply to all color channels at once*)]
]
The result is:
Edit
For the Mathematica-oriented reader, a functional version of the last loop could be easier (and shorter):
NestList[
ImageFilter[
If[Length[#1] == 1, #1[[1, 1]], {0, 0, 0}] &@
Cases[Tally[Flatten[#1, 1]], Except[{0.` {1, 1, 1}, _}]] & , #, 1,
Masking -> Dilation[EdgeDetect[#, 1], RandomInteger[{0, 1}, {3, 3}]],
Interleaving -> True ] &,
WatershedComponents@GradientFilter[Binarize[id,.1],1]/.{4-> 0}//Colorize,
5]
What you have as input is the Voronoi diagram. You can recalculate it using another distance function instead of the Euclidean one.
Here is an example in Mathematica using the Manhattan distance (i3 is your input image without the lines):
ColorCombine[{Image[
WatershedComponents[
DistanceTransform[Binarize@i3,
DistanceFunction -> ManhattanDistance] ]], i3, i3}]
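A rough MATLAB sketch of the same idea (assuming, as in the MATLAB answer further below, that I is the indexed image from imread and that label 1 marks the blobs):
BW = (I == 1);                       % blobs as a binary mask
D = bwdist(BW, 'cityblock');         % Manhattan distance to the nearest blob pixel
L = watershed(D);                    % basins grow from the blobs; ridge pixels (label 0) are the recomputed boundaries
imshow(label2rgb(L, @jet, [0 0 0]))  % boundaries shown in black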
Edit
I am working with another algorithm (preliminary result). What do you think?
Here is what I came up with; it is not a direct translation of @belisarius' code, but should be close enough.
%# read image (indexed image)
[I,map] = imread('http://i.stack.imgur.com/YENhB.png');
%# extract the blobs (binary image)
BW = (I==1);
%# skeletonization + dilation
BW = bwmorph(BW, 'skel', Inf);
BW = imdilate(BW, strel('square',2*1+1));
%# connected components
L = bwlabel(BW);
imshow(label2rgb(L))
%# filter 15x15 neighborhood
for i=1:13
    L = nlfilter(L, [15 15], @myFilterFunc);
    imshow( label2rgb(L) )
end
%# result
L(I==1) = 0; %# put blobs back
L(edge(L,'canny')) = 0; %# edges
imshow( label2rgb(L,@jet,[0 0 0]) )
myFilterFunc.m
function p = myFilterFunc(x)
if range(x(:)) == 0
    p = x(1);           %# if one color, return it
else
    p = mode(x(x~=0));  %# else, return the most frequent color
end
end
The result:
and here is an animation of the process: