newff: different numbers of hidden layers give the same result - neural-network

I'm trying to compare the results of newff with different numbers of hidden layers, but the results are the same. I used 1 hidden layer and 2 hidden layers for the comparison:
net = newff( minmax( pn ), [5 1], {'tansig' 'purelin'}, 'trainlm');
net = newff( minmax( pn ), [5 5 1], {'tansig' 'tansig' 'purelin'}, 'trainlm');
Code:
load data.txt;
P = data(1:20,1:3);
T = data(1:20,4);
% premnmx scales inputs and targets to [-1, 1]
[pn,minp,maxp,tn,mint,maxt] = premnmx(P',T');
net = newff( minmax( pn ), [5 1], {'tansig' 'purelin'}, 'trainlm');
net.trainParam.epochs = 10000;
net.trainParam.show = 5;
net = train(net,pn,tn);
y = sim(net,pn);
% undo the target normalization on the network output
x = postmnmx(y',mint,maxt);
plot(x, 'r');
hold on
plot(T);
What is the problem here?

May I suggest you use nprtool, MATLAB's GUI-based tool for neural networks.
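For a scripted alternative: newff, premnmx and postmnmx are deprecated in newer toolbox releases. A rough sketch of the same comparison with the newer API (fitnet is an assumption about your toolbox version; it normalizes inputs and targets internally via mapminmax, so the premnmx/postmnmx steps disappear):
net1 = fitnet(5, 'trainlm');        % one hidden layer of 5 neurons
net2 = fitnet([5 5], 'trainlm');    % two hidden layers of 5 neurons
net1 = train(net1, P', T');
net2 = train(net2, P', T');
y1 = net1(P');                      % compare the two fits on the same data
y2 = net2(P');
Note that each call to train starts from fresh random initial weights, so run each configuration several times before concluding the results are identical.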

Related

How to regularize 'fitcecoc' using templateSVM in MATLAB?

I am using a polynomial SVM in MATLAB on the CIFAR-10 dataset, with HOG features for feature extraction. I want to know how I can tune the regularization parameters of 'fitcecoc' to avoid overfitting the training set.
template = templateSVM(...
    'KernelFunction', 'polynomial', ...
    'PolynomialOrder', 2, ...
    'KernelScale', 'auto', ...
    'BoxConstraint', 1, ...
    'Standardize', true);
SVM_model = fitcecoc(...
    X_train, ...
    Y_train, ...
    'Learners', template, ...
    'Coding', 'onevsone', ...
    'Verbose', 2, ...
    'ClassNames', [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]);
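BoxConstraint is the SVM soft-margin penalty C, and smaller values regularize more strongly. One way to tune it is plain k-fold cross-validation over a grid; the sketch below reuses X_train/Y_train from above, and the grid values are illustrative, not recommendations:
boxGrid = [0.01 0.1 1 10 100];        % illustrative grid of C values
cvLoss = zeros(size(boxGrid));
for k = 1:numel(boxGrid)
    t = templateSVM('KernelFunction', 'polynomial', 'PolynomialOrder', 2, ...
        'KernelScale', 'auto', 'BoxConstraint', boxGrid(k), 'Standardize', true);
    cvModel = fitcecoc(X_train, Y_train, 'Learners', t, ...
        'Coding', 'onevsone', 'KFold', 5);
    cvLoss(k) = kfoldLoss(cvModel);   % 5-fold cross-validated classification error
end
[~, best] = min(cvLoss);              % pick the BoxConstraint with the lowest CV error
Depending on your release, fitcecoc also accepts 'OptimizeHyperparameters', which automates this kind of search.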

Simple denoising autoencoder for 1D data in MATLAB

I'm trying to set up a simple denoising autoencoder in MATLAB for 1D data. As there is currently no specialised input layer for 1D data, the imageInputLayer() function has to be used:
function net = DenoisingAutoencoder(data)
    [N, n] = size(data);
    % setting up the input
    X = zeros([n 1 1 N]);
    for i = 1:n
        for j = 1:N
            X(i, 1, 1, j) = data(j, i);
        end
    end
    % noisy X: roughly 1/10th of the elements are set to 0
    Xnoisy = X;
    mask1 = (mod(randi(10, size(X)), 7) ~= 0);
    Xnoisy = Xnoisy .* mask1;
    layers = [imageInputLayer([n 1 1]) fullyConnectedLayer(n) regressionLayer()];
    opts = trainingOptions('sgdm');
    net = trainNetwork(X, Xnoisy, layers, opts);
However, the code fails with this error message:
The output size [1 1 n] of the last layer doesn't match the
response size [n 1 1].
Any thoughts on how the input / layers should be reconfigured? If the fullyConnectedLayer is left out, the code runs fine, but then I'm obviously left without the hidden layer.
The target output should be a matrix, not a 4D tensor.
Here's a working version of the previous code:
function [net, R] = DenoisingAutoencoder(data)
    [N, n] = size(data);
    X = data;
    Xout = data;                             % targets: one row per observation (N-by-n)
    % corrupting the input: additive noise, plus roughly 1/100th of the elements zeroed
    zeroMask = (mod(randi(100, size(X)), 99) ~= 0);
    X = X + randn(size(X))*0.05;
    X = X .* zeroMask;
    % one observation per slice along the 4th dimension
    % (reshape X', not X, because MATLAB reshapes column-major)
    X4D = reshape(X', [1 n 1 N]);
    layers = [imageInputLayer([1 n]) fullyConnectedLayer(n) regressionLayer()];
    opts = trainingOptions('sgdm');
    net = trainNetwork(X4D, Xout, layers, opts);
    R = predict(net, X4D);                   % denoised reconstructions, N-by-n
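A hypothetical call, just to show the expected shapes (the sine data below is made up for illustration):
% 100 observations of a 20-sample signal, plus noise
data = repmat(sin(linspace(0, 2*pi, 20)), 100, 1) + 0.1*randn(100, 20);
[net, R] = DenoisingAutoencoder(data);   % R is 100-by-20, same layout as data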

How to restrict values of chosen weights in a MATLAB neural network?

Hello everyone!
I know about regularization, but I want to restrict only chosen weights. For example, I have this code:
H = rand(10, 100);
F = rand(1, 100);
net = newff(H, F, [10, 5], { 'tansig' 'tansig'}, 'traingdx', 'learngdm', 'mse');
net = train(net, H, F);
and during training I want to enforce
net.IW{1}(i, i) = 0
or even
a <= net.IW{1}(i - 1 : i + 1, i) <= b
How can I achieve this?
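As far as I know the toolbox has no built-in per-weight constraint. One possible workaround (a sketch under that assumption): train one epoch at a time and re-impose the constraints on net.IW{1} after each call to train. Here i, a and b are the placeholders from the question:
net.trainParam.epochs = 1;
for epoch = 1:200                                   % arbitrary training budget
    net = train(net, H, F);
    net.IW{1}(i, i) = 0;                            % pin one weight to zero
    rows = max(i-1, 1) : min(i+1, size(net.IW{1}, 1));
    net.IW{1}(rows, i) = min(max(net.IW{1}(rows, i), a), b);  % clamp neighbours to [a, b]
end
Be aware that restarting train every epoch resets adaptive state such as traingdx's learning rate, so this only approximates constrained training.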

Simulating a network that was trained with ADAPT

I used ADAPT for incremental training of a simple network, and I know that ADAPT changes the weights and biases. I used this:
clc
clear all
net = linearlayer([0 1 2]);
pi = {[1; 1] [2; 2]};
p = {[3; 4] [5; 6] [7; 8]};
t = {[40; 50; 60] [10; 20; 30] [70; 60; 50]};
net = configure(net, p, t);
net.inputweights{1}.learnparam.lr = 0.001;
net.adaptParam.passes = 10;
for i = 1:1
    [net, y, E, pf, af] = adapt(net, p, t, pi);
end
After that, I simulate the network with the same input:
y1=sim(net,p,pi);
I expect that y = y1, but y1 and y are not equal!
Why is there a difference between the network output from training with ADAPT (y) and the output of the trained network (y1)?
What does ADAPT do?
There you have it:
net = linearlayer([0 1 2]);
pi = {[1; 1] [2; 2]};
p = {[3; 4] [5; 6] [7; 8]};
t = {[40; 50; 60] [10; 20; 30] [70; 60; 50]};
net = configure(net, p, t);
net.inputweights{1}.learnparam.lr = 0.001;
net.adaptParam.passes = 10;
view(net)
[net, y, E, pf, af] = train(net, p, t);
tout = net(p);
You would use adapt() for post-training applications. The MATLAB documentation specifically says that you use adapt after the network has been trained, and that the network adapts as it is simulated (http://www.mathworks.com/help/nnet/ref/adapt.html).
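That also explains the mismatch in the question: adapt updates the weights after each time step, so the y it returns is produced by a network whose weights are still changing, while sim uses only the final weights. A minimal illustration of that reading, reusing the variables above:
[net, y] = adapt(net, p, t, pi);   % y: outputs collected while the weights evolve
y1 = sim(net, p, pi);              % y1: outputs of the finished network
% y and y1 will generally differ unless the learning rate is zero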
Thank you, Brian.
So I should use ADAPT after training? Like this code?
net = linearlayer([0 1 2]);
pi = {[1; 1] [2; 2]};
p = {[3; 4] [5; 6] [7; 8]};
t = {[40; 50; 60] [10; 20; 30] [70; 60; 50]};
net = configure(net, p, t);
net.inputweights{1}.learnparam.lr = 0.001;
net.adaptParam.passes = 10;
view(net)
[net, y, E, pf, af] = train(net, p, t);
tout = net(p);
for i = 1:100
    [net, y, E, pf, af] = adapt(net, p, t, pi);
end
If yes, is this incremental training?

Holes in Gaussian mixture plot

I'm trying to plot a Gaussian mixture model using MATLAB. I'm using the following code/data:
p = [0.048544095760874664 , 0.23086205172287944 , 0.43286598287228106 ,0.1825503345829704 , 0.10517753506099443];
meanVectors(:,1) = [1.3564375381318807 , 5.93145751073734];
meanVectors(:,2) = [3.047518052924292 , 3.0165339699001463];
meanVectors(:,3) = [7.002335595122265 , 6.02432823594635];
meanVectors(:,4) = [6.990841095715846 , 3.5056707068971438];
meanVectors(:,5) = [6.878912868397179 , 1.1054191293515965];
covarianceMatrices(:,:,1) = [1.3075839191466305 0.07843065902827488; 0.07843065902827488 0.3167448334937619];
covarianceMatrices(:,:,2) = [0.642914957488056 0.15638677636129855; 0.15638677636129852 0.382240356677974];
covarianceMatrices(:,:,3) = [0.8216051423486987 0.15225179380145448; 0.15225179380145445 0.37030472711188295];
covarianceMatrices(:,:,4) = [1.064002437166605 0.11798234162403692; 0.11798234162403692 0.2687495955430368];
covarianceMatrices(:,:,5) = [0.6445011493286044 0.15295220981440236; 0.1529522098144023 0.5231676196736254];
obj = gmdistribution(meanVectors', covarianceMatrices, p);
figure(1);
ezcontour(@(x,y)pdf(obj,[x y]), [-10 10], [-10 10]);
figure(2);
ezsurf(@(x,y)pdf(obj,[x y]), [-10 10], [-10 10]);
But the resulting surface appears really "spiky". Am I doing something wrong?
It seems that the problem is in plotting the function.
This piece of code is much slower, but it works for me:
% p, meanVectors, covarianceMatrices and obj are the same as defined above
x = -10:.2:10; y = x;
n = length(x);
a = zeros(n, n);
for i = 1:n
    for j = 1:n
        a(j, i) = pdf(obj, [x(i) y(j)]);  % surf expects Z(row = y index, col = x index)
    end
end
surf(x, y, a, 'FaceColor', 'interp', 'EdgeColor', 'none', 'FaceLighting', 'phong')
The problem is the default grid size, which is 60. Set a higher number and you will get the expected result:
figure(1);
ezcontour(@(x,y)pdf(obj,[x y]), [-10 10], [-10 10], 300);
figure(2);
ezsurf(@(x,y)pdf(obj,[x y]), [-10 10], [-10 10], 300);