How to remove marginal points in a Gaussian power curve? - matlab

I'm trying to generate a power curve which is Gaussian, but in the generated plot I need to remove the marginal values. Could someone please guide me on how to do this? Thanks
Following is the code I've written for the power curve:
function [xgrid,ygrid,Z] = biVariateContourPlotsGMMCopula(givenData,gmmObject,~,numMeshPoints,x_dim,y_dim)
d = 2;
if nargin < 5
    x_dim = 1;
    y_dim = 2;
end
if x_dim == y_dim
    hist(givenData(:,x_dim),10);
    return;
end
numMeshPoints = min(numMeshPoints,256);
givenData = givenData(:,[x_dim y_dim]);
alpha = gmmObject.alpha;
mu = gmmObject.mu(:,[x_dim y_dim]);
sigma = gmmObject.sigma([x_dim y_dim],[x_dim y_dim],:) + 0.005*repmat(eye(d),[1 1 numel(alpha)]);
gmmObject = gmdistribution(mu,sigma,alpha);
bin_num = 256;
for j = 1:2
    l_limit = min(gmmObject.mu(:,j))-3*(max(gmmObject.Sigma(j,j,:))^0.5);
    u_limit = max(gmmObject.mu(:,j))+3*(max(gmmObject.Sigma(j,j,:))^0.5);
    xmesh_inverse_space{j} = (l_limit:(u_limit-l_limit)/(bin_num-1):u_limit);
end
%if isempty(xmesh)||isempty(pdensity)||isempty(cdensity)
% The following for loop does the non-parametric estimation of marginal
% densities if not provided
for i = 1:d
    currentVar = givenData(:,i);
    [bandwidth,pdensity{i},xmesh{i}] = kde(currentVar,numMeshPoints);
    pdensity{i}(pdensity{i}<0) = 0; % clip negative density estimates
    cdensity{i} = cumsum(pdensity{i});
    cdensity{i} = (cdensity{i}-min(cdensity{i}))/(max(cdensity{i})-min(cdensity{i}));
end
[xgrid,ygrid] = meshgrid(xmesh{1}(2:end-1),xmesh{2}(2:end-1));
for k = 1:d
    marginalLogLikelihood_grid{k} = log(pdensity{k}(2:end-1)+eps);
    marginalCDFValues_grid{k} = cdensity{k}(2:end-1);
end
[marg1,marg2] = meshgrid(marginalLogLikelihood_grid{1},marginalLogLikelihood_grid{2});
[xg,yg] = meshgrid(marginalCDFValues_grid{1},marginalCDFValues_grid{2});
inputMatrix = [reshape(xg,numel(xg),1) reshape(yg,numel(yg),1)];
clear xg yg;
copulaLogLikelihoodVals = gmmCopulaPDF(inputMatrix,gmmObject,xmesh_inverse_space);
Z = reshape(copulaLogLikelihoodVals,size(marg1,1),size(marg1,2));
Z = Z+marg1+marg2;
Z = exp(Z); % back from log-likelihood to likelihood
plot(givenData(:,1),givenData(:,2),'k.','MarkerSize',3); hold on
contour(xgrid,ygrid,Z,40);
%title_string = ['GMCM fit (Log-Likelihood = ',num2str(logLikelihoodVal), ')'];
%title(title_string,'FontSize',12,'FontWeight','demi');
axis tight;
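One possible approach, sketched on the givenData used above and under the assumption that "marginal values" means the extreme tails of the data: mask out samples that fall outside chosen percentiles of each dimension before plotting. prctile is from the Statistics Toolbox, and the central 95% band is an arbitrary cutoff here:

% Keep only points inside the central 95% band of each dimension (assumed cutoff).
lo = prctile(givenData, 2.5);    % 1x2 vector of lower bounds
hi = prctile(givenData, 97.5);   % 1x2 vector of upper bounds
keep = givenData(:,1) >= lo(1) & givenData(:,1) <= hi(1) & ...
       givenData(:,2) >= lo(2) & givenData(:,2) <= hi(2);
plot(givenData(keep,1), givenData(keep,2), 'k.', 'MarkerSize', 3);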

How do I eliminate points outside the contour lines?

I have code that generates a plot with points plotted both inside and outside the contour line. I want to eliminate the points outside the outermost contour line. I'm using a Gaussian copula function.
plot(givenData(:,1),givenData(:,2),'b.','MarkerSize',3);
givenData is a 5000x2 matrix and I want only those values that lie inside the outer contour line to be plotted.
[Image: the plot I'm getting]
[Image: the plot I want to generate]
I want to eliminate the points lying outside the red contour, which are shown as green dots.
function [xgrid,ygrid,Z] = biVariateContourPlotsGMMCopula(givenData,gmmObject,~,numMeshPoints,x_dim,y_dim)
d = 2;
if nargin < 5
    x_dim = 1;
    y_dim = 2;
end
if x_dim == y_dim
    hist(givenData(:,x_dim),10);
    return;
end
numMeshPoints = min(numMeshPoints,256);
givenData = givenData(:,[x_dim y_dim]);
alpha = gmmObject.alpha;
mu = gmmObject.mu(:,[x_dim y_dim]);
sigma = gmmObject.sigma([x_dim y_dim],[x_dim y_dim],:) + 0.005*repmat(eye(d),[1 1 numel(alpha)]);
gmmObject = gmdistribution(mu,sigma,alpha);
bin_num = 256;
for j = 1:2
    l_limit = min(gmmObject.mu(:,j))-3*(max(gmmObject.Sigma(j,j,:))^0.5);
    u_limit = max(gmmObject.mu(:,j))+3*(max(gmmObject.Sigma(j,j,:))^0.5);
    xmesh_inverse_space{j} = (l_limit:(u_limit-l_limit)/(bin_num-1):u_limit);
end
for i = 1:d
    currentVar = givenData(:,i);
    [~,pdensity{i},xmesh{i}] = kde(currentVar,numMeshPoints);
    pdensity{i}(pdensity{i}<0) = 0;
    cdensity{i} = cumsum(pdensity{i});
    cdensity{i} = (cdensity{i}-min(cdensity{i}))/(max(cdensity{i})-min(cdensity{i})); % scaling the cdensity value to be between [0 1]
end
[xgrid,ygrid] = meshgrid(xmesh{1}(2:end-1),xmesh{2}(2:end-1));
for k = 1:d
    marginalLogLikelihood_grid{k} = log(pdensity{k}(2:end-1)+eps);
    marginalCDFValues_grid{k} = cdensity{k}(2:end-1);
end
[marg1,marg2] = meshgrid(marginalLogLikelihood_grid{1},marginalLogLikelihood_grid{2});
[xg,yg] = meshgrid(marginalCDFValues_grid{1},marginalCDFValues_grid{2});
inputMatrix = [reshape(xg,numel(xg),1) reshape(yg,numel(yg),1)];
copulaLogLikelihoodVals = gmmCopulaPDF(inputMatrix,gmmObject,xmesh_inverse_space);
Z = reshape(copulaLogLikelihoodVals,size(marg1,1),size(marg1,2));
Z = Z+marg1+marg2;
Z = exp(Z);
plot(givenData(:,1),givenData(:,2),'b.','MarkerSize',3); hold on
contour(xgrid,ygrid,Z,40,'EdgeColor',[1 0 0]);
axis tight;
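Since no level value is given for the outermost line, here is a minimal sketch assuming it sits at a known density level levelZ (for example the 4e-6 used elsewhere in these questions): evaluate the density surface at every data point with interp2 and keep only the points at or above that level.

levelZ = 4e-6;  % assumed density level of the outermost red contour
Zdata = interp2(xgrid, ygrid, Z, givenData(:,1), givenData(:,2), 'linear', 0);
keep = Zdata >= levelZ;  % points on or inside the outermost contour
plot(givenData(keep,1), givenData(keep,2), 'b.', 'MarkerSize', 3); hold on
contour(xgrid, ygrid, Z, 40, 'r');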

Use a variable outside the function in matlab

I've written the following function:
% This function plots the contours of likelihood values on the scatter plot of 2-dimensional data.
function [xgrid,ygrid,Z,xy_matrix] = biVariateContourPlotsGMMCopula(givenData,gmmObject,~,numMeshPoints,x_dim,y_dim)
%INPUT: givenData (MxN, M=number of points, N=Dimension)
%     : plo = binary variable (1 plot contour plot, 0 do not plot)
%OUTPUT: xgrid,ygrid,Z (Z contains the likelihood values of the points defined by xgrid and ygrid)
%load general_info;
d = 2;
if nargin < 5
    x_dim = 1;
    y_dim = 2;
end
if x_dim == y_dim
    hist(givenData(:,x_dim),10);
    return;
end
numMeshPoints = min(numMeshPoints,256);
givenData = givenData(:,[x_dim y_dim]);
alpha = gmmObject.alpha;
mu = gmmObject.mu(:,[x_dim y_dim]);
sigma = gmmObject.sigma([x_dim y_dim],[x_dim y_dim],:) + 0.005*repmat(eye(d),[1 1 numel(alpha)]);
gmmObject = gmdistribution(mu,sigma,alpha);
bin_num = 256;
for j = 1:2
    l_limit = min(gmmObject.mu(:,j))-3*(max(gmmObject.Sigma(j,j,:))^0.5);
    u_limit = max(gmmObject.mu(:,j))+3*(max(gmmObject.Sigma(j,j,:))^0.5);
    xmesh_inverse_space{j} = (l_limit:(u_limit-l_limit)/(bin_num-1):u_limit);
end
%if isempty(xmesh)||isempty(pdensity)||isempty(cdensity)
% The following for loop does the non-parametric estimation of marginal densities if not provided
for i = 1:d
    currentVar = givenData(:,i);
    [~,pdensity{i},xmesh{i}] = kde(currentVar,numMeshPoints);
    pdensity{i}(pdensity{i}<0) = 0;
    cdensity{i} = cumsum(pdensity{i});
    cdensity{i} = (cdensity{i}-min(cdensity{i}))/(max(cdensity{i})-min(cdensity{i})); % scaling the cdensity value to be between [0 1]
end
[xgrid,ygrid] = meshgrid(xmesh{1}(2:end-1),xmesh{2}(2:end-1));
for k = 1:d
    marginalLogLikelihood_grid{k} = log(pdensity{k}(2:end-1)+eps);
    marginalCDFValues_grid{k} = cdensity{k}(2:end-1);
end
[marg1,marg2] = meshgrid(marginalLogLikelihood_grid{1},marginalLogLikelihood_grid{2});
[xg,yg] = meshgrid(marginalCDFValues_grid{1},marginalCDFValues_grid{2});
inputMatrix = [reshape(xg,numel(xg),1) reshape(yg,numel(yg),1)];
clear xg yg;
copulaLogLikelihoodVals = gmmCopulaPDF(inputMatrix,gmmObject,xmesh_inverse_space);
Z = reshape(copulaLogLikelihoodVals,size(marg1,1),size(marg1,2));
Z = Z+marg1+marg2;
Z = exp(Z); % getting the likelihood value from the log-likelihood
plot(givenData(:,1),givenData(:,2),'b.','MarkerSize',5); hold on
[~,h] = contour(xgrid,ygrid,Z,[4e-6,4e-6]);
% Extract the (x, y) coordinates of the contour and concatenate them along the first dimension
xy_matrix = [];
for i = 1:length(h)
    xy = get(h(i), 'XData');
    x = xy(1, :);
    y = xy(2, :);
    xy_matrix = [xy_matrix, [x; y]];
end
% Print the concatenated matrix
disp(xy_matrix);
%title_string = ['GMCM fit (Log-Likelihood = ',num2str(logLikelihoodVal), ')'];
%title(title_string,'FontSize',12,'FontWeight','demi');
axis tight
However, xy_matrix does not show up in the workspace.
How do I return the variable xy_matrix so that I can use it in another function?
The function call is inside a for loop, as below:
for i = 1:d
    for j = 1:d
        subplot(d,d,count); count = count+1;
        [xgrid,ygrid,Z,xy_matrix] = biVariateContourPlotsGMMCopula(power_curve_reference_build_T2,gmcObject_bestfit,0,256,i,j);
    end
end
So when I include xy_matrix as an output in the function call, it generates the following error:
Have I missed anything here?
When you call the function with i==j==1 as the parameters x_dim and y_dim, the function returns early from the following if block:
if x_dim == y_dim
    hist(givenData(:,x_dim),10);
    return;
end
The return values aren't defined at that point. If you define them at the beginning of the function, you won't get the error message.
function [xgrid,ygrid,Z,xy_matrix] = biVariateContourPlotsGMMCopula(givenData,gmmObject,~,numMeshPoints,x_dim,y_dim)
%INPUT: givenData (MxN, M=number of points, N=Dimension)
%     : plo = binary variable (1 plot contour plot, 0 do not plot)
%OUTPUT: xgrid,ygrid,Z (Z contains the likelihood values of the points defined by xgrid and ygrid)
%load general_info;
xgrid = 0;
ygrid = 0;
Z = 0;
xy_matrix = 0;
d = 2;
if nargin < 5
    x_dim = 1;
    y_dim = 2;
end
Below is a suggestion for some changes to your function call. The return values are saved in cell arrays so that you don't overwrite them in the next iteration. The function is also not called when i==j (that is, when x_dim==y_dim).
count = 1;
xgrids = {};
ygrids = {};
Zs = {};
xy_matrices = {};
for i = 1:d
    for j = 1:d
        if i~=j
            subplot(d,d,count); count = count+1;
            [xgrids{i,j},ygrids{i,j},Zs{i,j},xy_matrices{i,j}] = biVariateContourPlotsGMMCopula(power_curve_reference_build_T2,gmcObject_bestfit,0,256,i,j);
        end
    end
end
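As a side note on filling xy_matrix itself: looping over the handle's XData is fragile, because in recent MATLAB releases contour returns a single contour object rather than an array of lines. A sketch that instead parses the contour matrix C, whose columns come in [level; vertex_count] header columns followed by that many vertex columns:

[C,h] = contour(xgrid,ygrid,Z,[4e-6,4e-6]);
xy_matrix = [];
k = 1;
while k < size(C,2)
    n = C(2,k);                              % number of vertices in this segment
    xy_matrix = [xy_matrix, C(:,k+1:k+n)];   % append the 2-by-n block of (x;y) points
    k = k + n + 1;                           % skip past this segment's header and vertices
end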

Why do I get a "wrong number of output arguments" error when converting from Matlab to Scilab?

I'm trying to convert this Matlab code to Scilab, but I have some problems.
N = 101;
L = 4*pi;
x = linspace(0,L,N);
% It has three data sets; 1: past, 2: current, 3: future.
u = zeros(N,3);
s = 0.5;
% Gaussian pulse
y = 2*exp(-(x-L/2).^2);
u(:,1) = y;
u(:,2) = y;
% Plot the initial condition.
handle_line = plot(x,u(:,2),'LineWidth',2);
axis([0,L,-2,2]);
xlabel('x'); ylabel('u');
title('Wave equation');
% Dirichlet boundary conditions
u(1,:) = 0;
u(end,:) = 0;
filename = 'wave.gif';
for ii = 1:100
    disp(['at ii= ', num2str(ii)]);
    u(2:end-1,3) = s*(u(3:end,2)+u(1:end-2,2)) ...
                 + 2*(1-s)*u(2:end-1,2) ...
                 - u(2:end-1,1);
    u(:,1) = u(:,2);
    u(:,2) = u(:,3);
    handle_line.YData = u(:,2);
    drawnow;
    frame = getframe(gcf);
    im = frame2im(frame);
    [A,map] = rgb2ind(im,256);
    if ii==1
        imwrite(A,map,filename,'gif','LoopCount',Inf,'DelayTime',0.05);
    else
        imwrite(A,map,filename,'gif','WriteMode','append','DelayTime',0.05);
    end
end
I get an error for this line:
handle_line = plot(x,u(:,2),'LineWidth',2);
The error states: Wrong number of output arguments
What should I change to fix it?
In Scilab, plot does not return a handle, so the assignment handle_line = plot(...) is what triggers "Wrong number of output arguments". In addition, the line
axis([0,L,-2,2]);
has to be translated in Scilab to
set(gca(),"data_bounds",[0,L,-2,2]);
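If a handle is still needed (for example, to update the curve in place instead of replotting), a sketch of the Scilab idiom, assuming the usual Compound/Polyline entity structure that plot creates:

plot(x, u(:,2), 'LineWidth', 2);
e = gce();                    // the Compound entity created by plot
handle_line = e.children(1);  // the Polyline inside it
// later, inside the loop, update the curve's y-values in place:
handle_line.data(:,2) = u(:,2);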
Try this out (comments use //, pi becomes %pi, and end-indexing becomes $ in Scilab; note that getframe/frame2im/rgb2ind/imwrite are Matlab functions and may need a separate Scilab replacement for the GIF export):
N = 101;
L = 4*%pi;
x = linspace(0,L,N);
// It has three data sets; 1: past, 2: current, 3: future.
u = zeros(N,3);
s = 0.5;
// Gaussian pulse
y = 2*exp(-(x-L/2).^2);
u(:,1) = y;
u(:,2) = y;
// Define a standard plot range for x and y
x_range = [min(x) max(x)];
y_range = [-max(y) max(y)];
// Plot the initial condition.
plot(x,u(:,2),'LineWidth',2);
set(gca(),"data_bounds",[0,L,-2,2]);
xlabel('x'); ylabel('u');
title('Wave equation');
// Dirichlet boundary conditions
u(1,:) = 0;
u($,:) = 0;
filename = 'wave.gif';
for ii = 1:100
    disp("at ii= " + string(ii));
    u(2:$-1,3) = s*(u(3:$,2)+u(1:$-2,2)) ...
               + 2*(1-s)*u(2:$-1,2) ...
               - u(2:$-1,1);
    u(:,1) = u(:,2);
    u(:,2) = u(:,3);
    plot(x,u(:,2),'LineWidth',2);
    set(gca(),"data_bounds",[x_range y_range]);
    frame = getframe(gcf());
    im = frame2im(frame);
    [A,map] = rgb2ind(im,256);
    if ii==1
        imwrite(A,map,filename,'gif','LoopCount',Inf,'DelayTime',0.05);
    else
        imwrite(A,map,filename,'gif','WriteMode','append','DelayTime',0.05);
    end
end
I removed the output assignment and set the axis limits independently.

Matlab neural network for regression

I have implemented 3 functions for neural network regression:
1) a forward propagation function that, given the training inputs and the net structure, calculates the predicted output
function [y_predicted] = forwardProp(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch)
for i = 1:size(Inputs{1},2)
    Activation = (Inputs{1}(:,i))';
    for j = 2:NumberOfLayers - RegressionSwitch
        Activation = 1./(1+exp(-(Activation*Theta{j-1} + Baias{j-1}))); % sigmoid hidden layer
    end
    if RegressionSwitch == 1
        y_predicted(:,i) = Activation*Theta{end} + Baias{end}; % linear output layer
    else
        y_predicted(:,i) = Activation;
    end
end
end
2) a cost function that, given the predicted and the desired output, calculates the cost of the network
function [Cost] = costFunction(y_predicted, y, Theta, Baias, Lambda)
Cost = 0;
for j = 1:size(y,2)
    for i = 1:size(y,1)
        Cost = Cost + (((y(i,j) - y_predicted(i,j))^2)/size(y,2)); % mean squared error term
    end
end
Reg = 0;
for i = 1:size(Theta, 2)
    for j = 1:size(Theta{i}, 1)
        for k = 1:size(Theta{i}, 2)
            Reg = Reg + (Theta{i}(j,k))^2;
        end
    end
end
for i = 1:size(Baias, 2)
    for j = 1:length(Baias{i})
        Reg = Reg + (Baias{i}(j))^2;
    end
end
Cost = Cost + (Lambda/(2*size(y,2)))*Reg; % L2 regularization
end
3) a back propagation function that calculates the partial derivative of the cost function for each weight in the network (numerically, by central differences)
function [dTheta, dBaias] = Deltas(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch, Epsilon, Lambda, y)
for i = 1:size(Theta,2)
    for j = 1:size(Theta{i},1)
        for k = 1:size(Theta{i},2)
            dTp = Theta;
            dTm = Theta;
            dTp{i}(j,k) = dTp{i}(j,k) + Epsilon;
            dTm{i}(j,k) = dTm{i}(j,k) - Epsilon;
            y_predicted_p = forwardProp(dTp,Baias,Inputs,NumberOfLayers,RegressionSwitch);
            y_predicted_m = forwardProp(dTm,Baias,Inputs,NumberOfLayers,RegressionSwitch);
            Cost_p = costFunction(y_predicted_p, y, dTp, Baias, Lambda);
            Cost_m = costFunction(y_predicted_m, y, dTm, Baias, Lambda);
            dTheta{i}(j,k) = (Cost_p - Cost_m)/(2*Epsilon);
        end
    end
end
for i = 1:size(Baias,2)
    for j = 1:length(Baias{i})
        dBp = Baias;
        dBm = Baias;
        dBp{i}(j) = dTp{i}(j) + Epsilon;
        dBm{i}(j) = dTm{i}(j) - Epsilon;
        y_predicted_p = forwardProp(Theta,dBp,Inputs,NumberOfLayers,RegressionSwitch);
        y_predicted_m = forwardProp(Theta,dBm,Inputs,NumberOfLayers,RegressionSwitch);
        Cost_p = costFunction(y_predicted_p, y, Theta, dBp, Lambda);
        Cost_m = costFunction(y_predicted_m, y, Theta, dBm, Lambda);
        dBaias{i}(j) = (Cost_p - Cost_m)/(2*Epsilon);
    end
end
end
I train the neural network with data from an exact mathematical function of the inputs.
The gradient descent seems to work, as the cost decreases each iteration, but when I test the trained network the regression is terrible.
The functions are not meant to be efficient, but they should work, so I am really frustrated to see they don't... The main function and the data are OK, so the problem should be here. Can you please help me to spot it?
Here is the "main":
clear;
clc;
Nodes_X = 5;
Training_Data = 1000;
x = rand(Nodes_X, Training_Data)*3;
y = zeros(2,Training_Data);
for j = 1:Nodes_X
    for i = 1:Training_Data
        y(1,i) = (x(1,i)^2)+x(2,i)-x(3,i)+2*x(4,i)/x(5,i);
        y(2,i) = (x(5,i)^2)+x(2,i)-x(3,i)+2*x(4,i)/x(1,i);
    end
end
vx = rand(Nodes_X, Training_Data)*3;
vy = zeros(2,Training_Data);
for j = 1:Nodes_X
    for i = 1:Training_Data
        vy(1,i) = (vx(1,i)^2)+vx(2,i)-vx(3,i)+2*vx(4,i)/vx(5,i);
        vy(2,i) = (vx(5,i)^2)+vx(2,i)-vx(3,i)+2*vx(4,i)/vx(1,i);
    end
end
%%%%%%%%%%%%%%%%%%%%%% ASSIGN NODES TO EACH LAYER %%%%%%%%%%%%%%%%%%%%%%%%%
NumberOfLayers = 4;
Nodes(1) = 5;
Nodes(2) = 10;
Nodes(3) = 10;
Nodes(4) = 2;
if length(Nodes) ~= NumberOfLayers || (Nodes(1)) ~= size(x, 1)
    WARNING = msgbox('Nodes assigned incorrectly!');
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%% INITIALIZATION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
for i = 1:NumberOfLayers-1
    Theta{i} = rand(Nodes(i),Nodes(i+1));
    Baias{i} = rand(1,Nodes(i+1));
end
Inputs{1} = x;
Outputs{1} = y;
RegressionSwitch = 1;
Lambda = 10;
Epsilon = 0.00001;
Alpha = 0.01;
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% TRAINING %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Epoch = 0;
figure;
hold on;
while Epoch <= 20
    %%%%%%%%%%%%%%%%%%%% FORWARD PROPAGATION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    y_predicted = forwardProp(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%% COST %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    Cost = costFunction(y_predicted, y, Theta, Baias, Lambda);
    scatter(Epoch,Cost);
    pause(0.01);
    %%%%%%%%%%%%%%%%%%%%%%%% BACK PROPAGATION %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    [dTheta, dBaias] = Deltas(Theta,Baias,Inputs,NumberOfLayers,RegressionSwitch, Epsilon, Lambda, y);
    %%%%%%%%%%%%%%%%%%%%%%%% GRADIENT DESCENT %%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    for i = 1:size(Theta,2)
        Theta{i} = Theta{i}-Alpha*dTheta{i};
    end
    for i = 1:size(Baias,2)
        Baias{i} = Baias{i}-Alpha*dBaias{i};
    end
    Epoch = Epoch + 1;
end
hold off;
V_Inputs{1} = vx;
V_y_predicted = forwardProp(Theta,Baias,V_Inputs,NumberOfLayers,RegressionSwitch);
figure;
hold on;
for i = 1:size(vy,2)
    scatter(vy(1,i),V_y_predicted(1,i));
    pause(0.01);
end
hold off;
figure;
hold on;
for i = 1:size(vy,2)
    scatter(vy(2,i),V_y_predicted(2,i));
    pause(0.01);
end
hold off;
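One suspect stands out, going only by the code above: in the bias loop of Deltas, the perturbed values are read from dTp{i}(j) and dTm{i}(j) (the Theta copies left over from the last pass of the preceding loop) rather than from the bias copies, so dBp and dBm never hold Baias{i}(j) plus or minus Epsilon, and the bias gradients come out wrong. A corrected version of those lines would be:

dBp = Baias;
dBm = Baias;
dBp{i}(j) = dBp{i}(j) + Epsilon; % perturb the bias copy itself, not dTp
dBm{i}(j) = dBm{i}(j) - Epsilon;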

Matlab Iris Classification input size mismatch

I am very new to Matlab. What I am trying to do is classify the iris dataset using cross-validation (that means I have to split the dataset in 3: training set, validation set, and test set). In my mind everything I write here is OK (being a beginner is hard sometimes). So I could use a little help...
This is the function that splits the data (the first 35 of each class (70% of the data) are the training set, the rest is the validation set (15%), and 15% I will use later for the test set):
close all; clear;
load fisheriris;
for i = 1:35
    for j = 1:4
        trainSeto(i,j) = meas(i,j);
    end
end
for i = 51:85
    for j = 1:4
        trainVers(i-50,j) = meas(i,j);
    end
end
for i = 101:135
    for j = 1:4
        trainVirg(i-100,j) = meas(i,j);
    end
end
for i = 36:43
    for j = 1:4
        valSeto(i-35,j) = meas(i,j);
    end
end
for i = 86:93
    for j = 1:4
        valVers(i-85,j) = meas(i,j);
    end
end
for i = 136:143
    for j = 1:4
        valVirg(i-135,j) = meas(i,j);
    end
end
for i = 44:50
    for j = 1:4
        testSeto(i-43,j) = meas(i,j);
    end
end
for i = 94:100
    for j = 1:4
        testVers(i-93,j) = meas(i,j);
    end
end
for i = 144:150
    for j = 1:4
        testVirg(i-143,j) = meas(i,j);
    end
end
And this is the main script:
close all; clear;
%% the 3 types of iris
run divinp
% the representation of the 3 classes (their coding)
a = [-1 -1 +1]';
b = [-1 +1 -1]';
c = [+1 -1 -1]';
% training set
trainInp = [trainSeto trainVers trainVirg];
% the targets
T = [repmat(a,1,length(trainSeto)) repmat(b,1,length(trainVers)) repmat(c,1,length(trainVirg))];
%% the training
trainCor = zeros(10,10);
valCor = zeros(10,10);
Xn = zeros(1,10);
Yn = zeros(1,10);
for k = 1:10
    Yn(1,k) = k;
    for n = 1:10
        Xn(1,n) = n;
        net = newff(trainInp,T,[k n],{},'trainbfg');
        net = init(net);
        net.divideParam.trainRatio = 1;
        net.divideParam.valRatio = 0;
        net.divideParam.testRatio = 0;
        net.trainParam.max_fail = 2;
        valInp = [valSeto valVers valVirg];
        valT = [repmat(a,1,length(valSeto)) repmat(b,1,length(valVers)) repmat(c,1,length(valVirg))];
        [net,tr] = train(net,trainInp,T);
        Y = sim(net,trainInp);
        [Yval,Pfval,Afval,Eval,perfval] = sim(net,valInp,[],[],valT);
        % calculate [%] of correct classifications
        trainCor(k,n) = 100 * length(find(T.*Y > 0)) / length(T);
        valCor(k,n) = 100 * length(find(valT.*Yval > 0)) / length(valT);
    end
end
figure
surf(Xn,Yn,trainCor/3);
view(2)
figure
surf(Xn,Yn,valCor/3);
view(2)
I get this error:

Error using trainbfg (line 120)
Inputs and targets have different numbers of samples.

Error in network/train (line 106)
[net,tr] = feval(net.trainFcn,net,X,T,Xi,Ai,EW,net.trainParam);

Error in ClassIris (line 38)
[net,tr] = train(net,trainInp,T);
close all; clear;
load fisheriris;
trainSetoIndx = 1:35;
trainVersIndx = 51:85; % or: trainVersIndx = trainSetoIndx + 50;
trainVirgIndx = 101:135;
colIndx = 1:4;
trainSeto = meas(trainSetoIndx, colIndx);
trainVers = meas(trainVersIndx, colIndx);
trainVirg = meas(trainVirgIndx, colIndx);
valSetoIndx = 36:43;
valVersIndx = 86:93;
valVirgIndx = 136:143;
valSeto = meas(valSetoIndx, colIndx);
valVers = meas(valVersIndx, colIndx);
valVirg = meas(valVirgIndx, colIndx);
testSetoIndx = 44:50;
testVersIndx = 94:100;
testVirgIndx = 144:150;
testSeto = meas(testSetoIndx, colIndx);
testVers = meas(testVersIndx, colIndx);
testVirg = meas(testVirgIndx, colIndx);
I have written it with ":" also, but still the same problem. It's something with repmat... I don't know how to use it properly, or newff :D
Just to get you started, you can rewrite your code loops as follows:
trainSetoIndx = 1:35;
trainVersIndx = 51:85;   % or: trainVersIndx = trainSetoIndx + 50;
trainVirgIndx = 101:135; % or: trainVirgIndx = trainSetoIndx + 100;
colIndx = 1:4; % can't tell if this is all the columns in meas
trainSeto = meas(trainSetoIndx, colIndx);
trainVers = meas(trainVersIndx, colIndx);
trainVirg = meas(trainVirgIndx, colIndx);
Then do the same thing for all the others:
valSetoIndx = 36:43;
etc.
Next, simply type whos at the command prompt and you will see the sizes of all the arrays you have created. See whether the ones that need to be the same size have, in fact, the same dimensions.
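Assuming the Neural Network Toolbox convention that train wants inputs as an R-by-Q matrix (R features, Q samples) and targets as S-by-Q with matching Q, the first sizes to compare are trainInp and T: [trainSeto trainVers trainVirg] concatenates the 35-by-4 blocks side by side into 35-by-12, while T is 3-by-105, which would explain the "different numbers of samples" error. A sketch of the likely fix, stacking vertically and transposing so samples run along the columns:

trainInp = [trainSeto; trainVers; trainVirg]';       % 4-by-105: features down the rows, samples across columns
T = [repmat(a,1,35) repmat(b,1,35) repmat(c,1,35)];  % 3-by-105 target codes, one column per sample
disp(size(trainInp));  % should print 4 105
disp(size(T));         % should print 3 105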