I have four variables a, b, c, d. a can take the values 1,2; b can take 1,2,3; c can take 1,2,3,4; and d can take 1,2,3,4,5. By stepping through every combination I want to assign a running output value, i.e.
a b c d output
1 1 1 1 1
1 1 1 2 2
1 1 1 3 3
1 1 1 4 4
1 1 1 5 5
now c is stepped to its next value and d again runs through all its values, i.e.
a b c d output
1 1 2 1 6
1 1 2 2 7
1 1 2 3 8
1 1 2 4 9
1 1 2 5 10
now change c to 3 and repeat the above, giving outputs 11,12,13,14,15, and c = 4 gives 16,17,18,19,20. When c reaches its maximum, b is incremented, i.e.
a b c d output
1 2 1 1 21
1 2 1 2 22
1 2 1 3 23
1 2 1 4 24
1 2 1 5 25
then
a b c d output
1 2 2 1 26
1 2 2 2 27
1 2 2 3 28
1 2 2 4 29
1 2 2 5 30
I want to proceed like this and get an output value for every combination of a, b, c, d. How can I do this in MATLAB, or is there an equation for it? Above, a, b, c, d take 2, 3, 4, 5 values, i.e. the counts are in increasing order, but in the general case they need not be, e.g. a, b, c, d could take 7, 4, 9, 13 values.
A possible algorithm is to build the combinations column by column, considering the number of times each value has to be repeated, starting from the array d.
Define:
len_a: the length of the array a
len_b: the length of the array b
len_c: the length of the array c
len_d: the length of the array d
The whole array d needs to be tiled len_a * len_b * len_c times.
Each element of the array c needs to be repeated len_d times to cover the combinations "to its right"; the resulting block of len_c * len_d values then has to be tiled len_a * len_b times to account for the combinations "to its left" still to come.
A similar approach applies to the construction of the arrays a and b.
To obtain the set of combinations in a "random" sequence, it is sufficient to use the randperm function.
% Define the input arrays
a=1:2;
len_a=length(a);
b=1:3;
len_b=length(b);
c=1:4;
len_c=length(c);
d=1:5;
len_d=length(d);
% Generate the fourth column of the table
%
d1=repmat(d',len_a*len_b*len_c,1)
%
% Generate the third column of the table
c1=repmat(reshape(bsxfun(@plus,zeros(len_d,1),[1:len_c]),len_c*len_d,1),len_a*len_b,1)
%
% Generate the second column of the table
b1=repmat(reshape(bsxfun(@plus,zeros(len_c*len_d,1),[1:len_b]),len_b*len_c*len_d,1),len_a,1)
%
% Generate the first column of the table
a1=reshape(bsxfun(@plus,zeros(len_b*len_c*len_d,1),[1:len_a]),len_a*len_b*len_c*len_d,1)
%
% Merge the columns and add the counter in the fifth column
combination_set_1=[a1 b1 c1 d1 (1:len_a*len_b*len_c*len_d)']
% Randomize the rows
combination_set_2=combination_set_1(randperm(len_a*len_b*len_c*len_d),:)
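Note that the bsxfun construction above generates the indices 1:len, which coincide with the values only because the example arrays are 1:2, 1:3, etc. For arbitrary value arrays the same repeat-and-tile logic applies if you repeat the values themselves. A minimal sketch in Python/NumPy (used purely for illustration; repelem and repmat are the MATLAB counterparts of repeat and tile):

```python
import numpy as np

# example value arrays; any values work, e.g. a = [7, 4]
a = [1, 2]
b = [1, 2, 3]
c = [1, 2, 3, 4]
d = [1, 2, 3, 4, 5]

la, lb, lc, ld = len(a), len(b), len(c), len(d)

d1 = np.tile(d, la * lb * lc)              # column 4: whole d tiled la*lb*lc times
c1 = np.tile(np.repeat(c, ld), la * lb)    # column 3: each c repeated ld times, block tiled la*lb times
b1 = np.tile(np.repeat(b, lc * ld), la)    # column 2: each b repeated lc*ld times, tiled la times
a1 = np.repeat(a, lb * lc * ld)            # column 1: each a repeated lb*lc*ld times

counter = np.arange(1, la * lb * lc * ld + 1)      # running output value
table = np.column_stack([a1, b1, c1, d1, counter])
```

The first rows of `table` are 1 1 1 1 1, 1 1 1 2 2, ..., matching the enumeration above.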
Output:
1 1 1 1 1
1 1 1 2 2
1 1 1 3 3
1 1 1 4 4
1 1 1 5 5
1 1 2 1 6
1 1 2 2 7
1 1 2 3 8
1 1 2 4 9
1 1 2 5 10
1 1 3 1 11
1 1 3 2 12
1 1 3 3 13
1 1 3 4 14
1 1 3 5 15
1 1 4 1 16
1 1 4 2 17
1 1 4 3 18
1 1 4 4 19
1 1 4 5 20
1 2 1 1 21
1 2 1 2 22
1 2 1 3 23
1 2 1 4 24
1 2 1 5 25
1 2 2 1 26
1 2 2 2 27
1 2 2 3 28
1 2 2 4 29
1 2 2 5 30
1 2 3 1 31
1 2 3 2 32
1 2 3 3 33
1 2 3 4 34
1 2 3 5 35
1 2 4 1 36
1 2 4 2 37
1 2 4 3 38
1 2 4 4 39
1 2 4 5 40
1 3 1 1 41
1 3 1 2 42
1 3 1 3 43
1 3 1 4 44
1 3 1 5 45
1 3 2 1 46
1 3 2 2 47
1 3 2 3 48
1 3 2 4 49
1 3 2 5 50
1 3 3 1 51
1 3 3 2 52
1 3 3 3 53
1 3 3 4 54
1 3 3 5 55
1 3 4 1 56
1 3 4 2 57
1 3 4 3 58
1 3 4 4 59
1 3 4 5 60
2 1 1 1 61
2 1 1 2 62
2 1 1 3 63
2 1 1 4 64
2 1 1 5 65
2 1 2 1 66
2 1 2 2 67
2 1 2 3 68
2 1 2 4 69
2 1 2 5 70
2 1 3 1 71
2 1 3 2 72
2 1 3 3 73
2 1 3 4 74
2 1 3 5 75
2 1 4 1 76
2 1 4 2 77
2 1 4 3 78
2 1 4 4 79
2 1 4 5 80
2 2 1 1 81
2 2 1 2 82
2 2 1 3 83
2 2 1 4 84
2 2 1 5 85
2 2 2 1 86
2 2 2 2 87
2 2 2 3 88
2 2 2 4 89
2 2 2 5 90
2 2 3 1 91
2 2 3 2 92
2 2 3 3 93
2 2 3 4 94
2 2 3 5 95
2 2 4 1 96
2 2 4 2 97
2 2 4 3 98
2 2 4 4 99
2 2 4 5 100
2 3 1 1 101
2 3 1 2 102
2 3 1 3 103
2 3 1 4 104
2 3 1 5 105
2 3 2 1 106
2 3 2 2 107
2 3 2 3 108
2 3 2 4 109
2 3 2 5 110
2 3 3 1 111
2 3 3 2 112
2 3 3 3 113
2 3 3 4 114
2 3 3 5 115
2 3 4 1 116
2 3 4 2 117
2 3 4 3 118
2 3 4 4 119
2 3 4 5 120
Hope this helps.
Qapla'
I have a large dataset of children and their parents (could be one or two),
collected over multiple waves.
The child has a unique ID, but the parents
have just been called parent1 "1" or parent2 "2", so they do not have their own unique ID.
I would like to make a new variable like "New ParentID" below which
gives each parent a unique ID.
> ChildID = c("1","1","1","1","1","2","2","2","2","3","3","3","3","3","3")
> StudyWave = c("1","2","3","4","5","2","3","4","6","1","2","3","4","5","6")
> ParentID = c("1","2","1","2","1","1","1","1","1","1","2","1","2","1","2")
> NewParentID = c("1","2","1","2","1","3","3","3","3","4","5","4","5","4","5")
> data=cbind.data.frame(ChildID, StudyWave, ParentID, NewParentID)
> data
ChildID StudyWave ParentID NewParentID
1 1 1 1 1
2 1 2 2 2
3 1 3 1 1
4 1 4 2 2
5 1 5 1 1
6 2 2 1 3
7 2 3 1 3
8 2 4 1 3
9 2 6 1 3
10 3 1 1 4
11 3 2 2 5
12 3 3 1 4
13 3 4 2 5
14 3 5 1 4
15 3 6 2 5
Many thanks in advance for any suggestions - I am stuck.
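One reading of the requirement: each distinct (ChildID, ParentID) pair should receive the next unused integer, in order of first appearance. A sketch of that grouping logic in Python (for illustration only; in R the same effect can be obtained with match() on the pasted pair of columns):

```python
# data from the question
child  = ["1","1","1","1","1","2","2","2","2","3","3","3","3","3","3"]
parent = ["1","2","1","2","1","1","1","1","1","1","2","1","2","1","2"]

ids = {}                 # (child, parent) pair -> assigned ID
new_parent_id = []
for pair in zip(child, parent):
    if pair not in ids:
        ids[pair] = str(len(ids) + 1)   # first appearance gets the next ID
    new_parent_id.append(ids[pair])
```

This reproduces the NewParentID column shown above: 1 2 1 2 1 3 3 3 3 4 5 4 5 4 5.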
I'm trying to get this for loop to work in MATLAB so I can plot these three histograms. I'm guessing it won't finish because MATLAB warns that variables such as a_M_S1 keep changing size on every loop iteration, so the process is essentially inefficient. Any help? Below is the code.
I'm basically trying to generate 500 samples of 100 readings each so I can then plot a histogram using the estimated parameter values.
clear
clc
% Importing Data
%a = 0.9575
for m=1:500
seed=m;
rng(seed);
syms x
F=((1/atanh(0.9575))*((0.9575^(2*x-1))/(2*x-1)));
for n=1:100
data_1(n)=ceil(vpasolve(F==rand(1)));
end
Data_1(m,:)=data_1;
end
clear
clc
Data_1=[49 1 3 17 13 3 5 51 7 1
9 3 67 1 3 1 1 1 1 99
5 13 21 17 41 1 1 9 23 1
1 5 1 1 41 1 13 1 5 27
5 37 99 1 1 33 1 1 9 1
1 3 47 11 7 1 1 41 21 27
5 1 1 11 45 7 3 5 1 17
13 5 3 3 1 99 1 59 1 13
3 5 1 35 1 1 1 1 5 19
5 1 1 1 79 3 1 1 1 1
31 3 1 1 1 21 69 39 1 29
3 3 1 1 5 1 3 1 1 15
1 1 9 1 7 1 1 1 1 11
27 9 1 3 39 5 1 5 7 1
1 1 7 5 1 1 3 1 3 23
5 1 21 1 1 7 1 17 1 3
11 11 5 1 9 1 1 1 1 37
33 1 9 7 1 1 31 27 1 1
5 5 1 17 3 31 1 45 37 1
1 1 19 47 9 7 5 1 9 1
11 1 61 5 29 1 95 1 1 1
13 19 1 1 13 1 23 7 73 1
1 1 11 1 5 1 3 1 7 1
15 1 9 53 3 7 3 21 7 3
1 7 1 1 23 7 5 1 3 1
1 7 1 3 1 1 1 7 3 5
1 1 1 43 7 3 1 1 21 5
1 39 1 5 13 3 1 5 1 3
1 11 1 1 29 17 25 1 9 1
17 9 13 11 1 5 29 3 3 1
65 5 63 1 1 3 5 1 7 1
21 3 7 1 1 1 27 11 15 3
1 1 1 1 21 1 5 3 1 11
5 1 3 7 1 5 43 5 7 75
29 7 83 1 3 5 15 1 1 3
1 1 9 1 13 1 17 23 1 5
99 1 1 1 5 7 9 3 7 1
1 11 1 11 21 1 5 9 5 1
33 49 3 9 15 1 1 5 1 1
1 17 1 1 1 1 13 1 1 9
5 13 1 1 5 3 1 1 67 1
5 1 1 1 7 27 1 21 47 1
1 1 1 21 3 17 1 5 5 1
1 1 17 29 99 1 9 1 5 15
17 5 1 13 1 1 1 1 1 21
1 21 1 1 1 11 9 35 31 15
99 15 1 1 9 3 1 21 1 1
1 1 9 33 1 1 31 9 29 47
41 99 1 7 17 5 9 3 3 13
1 29 9 5 11 1 1 7 37 15];
Data_2=[1 1 3 3 5 7 1 3 1 1
1 1 1 1 1 1 1 1 1 13
5 1 5 1 1 1 1 3 1 1
1 1 3 1 1 1 1 3 1 1
1 1 13 5 1 3 1 1 5 1
3 3 1 7 3 5 3 1 3 1
1 1 1 1 1 3 3 5 1 1
1 1 1 9 1 1 1 1 5 1
1 1 1 1 1 11 7 1 5 1
17 1 1 7 3 7 3 5 5 1];
for o=1:500
syms a
%Method of Moments (MM)
mean_S1 = mean(transpose(Data_1(o,:)));
a_MM_S1(o) = vpa(vpasolve((a)/((atanh(a))*(1-a.^2)) == mean_S1,a),4);
mean_S2 = mean(transpose(Data_2(o,:)));
a_MM_S2(o) = vpa(vpasolve((a)/((atanh(a))*(1-a.^2)) == mean_S2,a),4);
%Using Lower Quantile (OS)
lower_S1 = floor(quantile(Data_1(o,:),0.25));
a_LQ_S1(o) = vpa(vpasolve((a)/(atanh(a)) == 0.25,a),4);
lower_S2 = floor(quantile(Data_2(o,:),0.25));
a_LQ_S2(o) = vpa(vpasolve((a)/(atanh(a)) == 0.25,a),4);
%Using Median (OSM)
median_S1 = floor(quantile(Data_1(o,:),0.5));
a_M_S1(o) = vpa(vpasolve((a)/(atanh(a)) == 0.5,a),4);
median_S2 = floor(quantile(Data_2(o,:),0.5));
a_M_S2(o) = vpa(vpasolve((a)/(atanh(a)) == 0.5,a),4);
end
a_MM_S1=transpose(a_MM_S1);
a_LQ_S1=transpose(a_LQ_S1);
a_M_S1=transpose(a_M_S1);
a_MM_S2=transpose(a_MM_S2);
a_LQ_S2=transpose(a_LQ_S2);
a_M_S2=transpose(a_M_S2);
figure(1)
histogram([double(a_MM_S1),double(a_MM_S2)],20),title('Method of Moments')
figure(2)
histogram([double(a_LQ_S1),double(a_LQ_S2)],20),title('Using Lower Quartile as Estimator')
figure(3)
histogram([double(a_M_S1),double(a_M_S2)],20),title('Using Median as Estimator')
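The "variable appears to change size on every loop iteration" warning means the arrays are being grown element by element; the usual fix is to allocate them at full size before the loop (e.g. Data_1 = zeros(500,100); a_MM_S1 = zeros(500,1); and so on) and only fill them inside it. The pattern, sketched in Python/NumPy for illustration:

```python
import numpy as np

n_samples, n_readings = 500, 100          # sizes from the question
data = np.zeros((n_samples, n_readings))  # preallocate once, before the loop
for m in range(n_samples):
    # stand-in for the per-sample solver output in the original code
    data[m, :] = np.arange(1, n_readings + 1)
```

Growing an array inside a loop reallocates and copies it on every iteration, which is what makes the original quadratic in cost.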
I have a matrix and each of its columns represents a sequence of points, to be more specific:
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
6 6 6 6 5 5 5 5 4 4 4 4 3 3 3 3 2 2 2 2
5 4 3 2 6 4 3 2 6 5 3 2 6 5 4 2 6 5 4 3
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 2 2 3 2 2 2 3 2 2 2 3 2 2 2 4 3 3 3 4
3 3 4 4 3 3 4 4 3 3 5 5 4 4 5 5 4 4 5 5
4 5 5 5 4 6 6 6 5 6 6 6 5 6 6 6 5 6 6 6
1 stands for point number one, 2 stands for point number two, and so on.
So, as said above, every column represents a different configuration of a set of points (x and y coordinates).
If the set of points is:
(1,9)
(2,5)
(3,7)
(4,2)
(2,1)
(2,3)
then one possible path, according to the first column is:
(1,9)
(2,3)
(2,1)
(1,9)
(2,5)
(3,7)
(4,2)
Is there a way I can compute all these possible configurations and store them?
When I first approached this problem I didn't know about graph theory, that's why so far I am not using it.
I don't understand the logic behind the fourth row of your sequence matrix. It is filled with 1s, but they seem to be completely ignored by your example. Respecting your example, given the points:
(1,9) (2,5) (3,7) (4,2) (2,1) (2,3)
and the first column sequence:
1 6 5 1 2 3 4
the output should be:
(1,9) (2,3) (2,1) (1,9) (2,5) (3,7) (4,2)
and not:
(1,9) (2,3) (2,1) (2,5) (3,7) (4,2)
Since I don't know how your scripts should work or how I should deal with the fourth row, I implemented code that ignores that logic, producing the result that seems the most obvious to me:
seq = [
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
6 6 6 6 5 5 5 5 4 4 4 4 3 3 3 3 2 2 2 2
5 4 3 2 6 4 3 2 6 5 3 2 6 5 4 2 6 5 4 3
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 2 2 3 2 2 2 3 2 2 2 3 2 2 2 4 3 3 3 4
3 3 4 4 3 3 4 4 3 3 5 5 4 4 5 5 4 4 5 5
4 5 5 5 4 6 6 6 5 6 6 6 5 6 6 6 5 6 6 6
];
pts = {
[1 9]
[2 5]
[3 7]
[4 2]
[2 1]
[2 3]
};
paths = pts(seq);
Then, in order to access the paths you can do, for example:
for i = 1:size(paths,2)
disp(cell2mat(paths(:,i)))
end
or:
paths = cell2mat(paths);
for i = 1:2:size(paths,2)
x = paths(:,i);
y = paths(:,i+1);
disp([x y]);
end
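The same indexing trick can be illustrated in NumPy (Python used only for illustration): treating the sequence matrix as an index into the point list yields one path per column.

```python
import numpy as np

# points from the question
pts = np.array([[1, 9], [2, 5], [3, 7], [4, 2], [2, 1], [2, 3]])
# first two columns of the sequence matrix, as an example
seq = np.array([[1, 1], [6, 6], [5, 4], [1, 1], [2, 2], [3, 3], [4, 5]])

paths = pts[seq - 1]           # shape (rows, cols, 2); -1 for 0-based indexing
first_path = paths[:, 0, :]    # the path described by the first column
```

first_path is (1,9) (2,3) (2,1) (1,9) (2,5) (3,7) (4,2), matching the expected output above.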
I have the code below and I want to make it faster, but I don't know how to convert the for-loop syntax into a faster form in MATLAB.
If the user count is 5, the item count is 2 and the time count is 4, I want to create this matrix:
1 1 1
1 1 2
1 1 3
1 1 4
1 2 1
1 2 2
1 2 3
1 2 4
2 1 1
2 1 2
2 1 3
2 1 4
...
result=zeros(userCount*itemCount*timeCount,4);
j=0;
for i=1:userCount
result(j*itemCount*timeCount+1:j*itemCount*timeCount+itemCount*timeCount,1)=ones(itemCount*timeCount,1)*i;
j=j+1;
end
j=0;
h=1;
for i=1:userCount*itemCount
result(j*timeCount+1:j*timeCount+timeCount,2)=ones(timeCount,1)*(h);
j=j+1;
h=h+1;
if h>itemCount
h=1;
end
end
j=0;
for i=1:userCount*itemCount
result(j*timeCount+1:j*timeCount+timeCount,3)=1:timeCount;
j=j+1;
end
for i=1:size(subs,1)
f=(result(:,1)==subs(i,1)& result(:,2)==subs(i,2));
result(f,:)=[];
end
What you are describing is enumerating the Cartesian product of three independent linear ranges. One way to achieve this is to use ndgrid and unroll each output into a single vector:
userCount = 5; itemCount = 2; timeCount = 4;
[X,Y,Z] = ndgrid(1:timeCount,1:itemCount,1:userCount);
result = [Z(:) Y(:) X(:)];
We get:
result =
1 1 1
1 1 2
1 1 3
1 1 4
1 2 1
1 2 2
1 2 3
1 2 4
2 1 1
2 1 2
2 1 3
2 1 4
2 2 1
2 2 2
2 2 3
2 2 4
3 1 1
3 1 2
3 1 3
3 1 4
3 2 1
3 2 2
3 2 3
3 2 4
4 1 1
4 1 2
4 1 3
4 1 4
4 2 1
4 2 2
4 2 3
4 2 4
5 1 1
5 1 2
5 1 3
5 1 4
5 2 1
5 2 2
5 2 3
5 2 4
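For comparison, a NumPy sketch of the same construction (np.meshgrid with indexing='ij' reproduces ndgrid's ordering; Python used purely for illustration):

```python
import numpy as np

user_count, item_count, time_count = 5, 2, 4
# indexing='ij' makes the first output vary slowest, like MATLAB's ndgrid
Z, Y, X = np.meshgrid(np.arange(1, user_count + 1),
                      np.arange(1, item_count + 1),
                      np.arange(1, time_count + 1),
                      indexing='ij')
# unroll each grid into a column: user, item, time
result = np.column_stack([Z.ravel(), Y.ravel(), X.ravel()])
```

As in the MATLAB version, the last column cycles fastest and the first column slowest.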
Can someone check whether I guessed the correct number of neurons in the input/hidden/output layers, and the overall parameters, please?
My idea of this ANN:
Input neurons : 784 (28x28)
Hidden Layers : 1
Size of hidden layer(s) : 25
Activation function : Log-sigmoid
Training method : gradient descent
Data size : 400 + 200
There are 400 bmp images used for training and 200 for checking (however, only the first 50 get guessed at a 100% rate and the others at a 0% rate...)
clear all;
clc
for kk=1:400
pl=ones(28,28); % initialize the 28x28 binary image to all white
m=strcat('b',int2str(kk),'.bmp'); % build the sample file name
x=imread(m,'bmp'); % read in the sample image file
pl=im2bw(x,0.5); % convert the sample image to a binary image
for m=0:27 % form the neural network input vector
p(m*28+1:(m+1)*28,kk)=pl(1:28,m+1);
end
end
% digits corresponding to the handwritten samples (from b1.bmp to b400.bmp, 400 in total):
t=[5 0 4 1 9 2 1 3 1 4 3 6 3 6 1 7 2 8 6 9 4 0 9 1 1 2 4 3 2 7 8 8 6 9 0 5 6 0 7......
6 1 8 7 9 3 9 8 5 9 3 3 0 7 4 9 8 0 9 4 1 4 4 6 0 4 5 6 1 0 0 1 7 1 6 3 0 2 1......
1 7 8 0 2 6 7 8 3 9 0 4 6 7 4 6 8 0 7 8 3 1 5 7 1 7 1 1 6 3 0 2 9 3 1 1 0 4 9......
2 0 0 2 0 2 7 1 8 6 4 1 6 3 4 1 9 1 3 3 9 5 4 7 7 4 2 8 5 8 6 0 3 4 6 1 9 9 6......
0 3 7 2 8 2 9 4 4 6 4 9 7 0 9 2 7 5 1 5 9 1 2 3 1 3 5 9 1 7 6 2 8 2 2 6 0 7 4......
9 7 8 3 2 1 1 8 3 6 1 0 3 1 0 0 1 1 2 7 3 0 4 6 5 2 6 4 7 1 8 9 9 3 0 7 1 0 2......
0 3 5 4 6 5 8 6 3 7 5 8 0 9 1 0 3 1 2 2 3 3 6 4 7 5 0 6 2 7 9 8 5 9 2 1 1 4 4......
5 6 4 1 2 5 3 9 3 9 0 5 9 6 5 7 4 1 3 4 0 4 8 0 4 3 6 8 7 6 0 9 7 5 7 2 1 1 6......
8 9 4 1 5 2 2 9 0 3 9 6 7 2 0 3 5 4 3 6 5 8 9 5 4 7 4 2 7 3 4 8 9 1 9 2 1 7 9......
1 8 7 4 1 3 1 1 0 2 3 9 4 9 2 1 6 8 4 7 7 4 4 9 2 5 7 2 4 4 2 1 9 2 2 8 7 6 9......
8 2 3 8 1 6 5 1 1 0];
% create the BP network
pr(1:784,1)=0;
pr(1:784,2)=1;
t1=clock; % start timing
% set the training parameters
net=newff(pr,[25 1],{'logsig','purelin'},'traingdx','learngdm');
net.trainParam.epochs=5000; % set the number of training epochs
net.trainParam.goal=0.05; % set the performance goal
net.trainParam.show=10; % display progress every 10 iterations
net.trainParam.lr=0.05; % set the learning rate
net=train(net,p,t); % train the BP network
datat=etime(clock,t1) % time to design the network: 66.417 s
% generate the test samples
pt(1:784,1)=1;
pl=ones(28,28); % initialize the 28x28 binary image pixels
for kk=401:600
pl=ones(28,28); % initialize the 28x28 binary image to all white
m=strcat('b',int2str(kk),'.bmp'); % build the sample file name
x=imread(m,'bmp'); % read in the sample image file
pl=im2bw(x,0.5); % convert the sample image to a binary image
for m=0:27 % form the neural network input vector
pt(m*28+1:(m+1)*28,kk-400)=pl(1:28,m+1);
end
end
[a,Pf,Af]=sim(net,pt); % simulate the network
a=round(a) % output the recognition results
% digits corresponding to the test samples (from b401.bmp to b600.bmp, 200 in total):
tl=[2 6 4 5 8 3 1 5 1 9 2 7 4 4 4 8 1 5 8 9 5 6 7 9 9 3 7 0 9......
0 6 6 2 3 9 0 7 5 4 8 0 9 4 1 1 8 7 1 2 6 1 0 3 0 1 1 8 2 0 3 9 4 0 5 0 6 1 7......
7 8 1 9 2 0 5 1 2 2 7 3 5 4 4 7 1 8 3 9 6 0 3 1 1 2 0 3 5 7 6 8 2 9 5 8 5 7 4......
1 1 3 1 7 5 5 5 2 5 8 2 0 9 7 7 5 0 9 0 0 8 9 2 4 8 1 6 1 6 5 1 8 3 4 0 5 5 8......
3 4 2 3 9 2 1 1 5 2 1 3 2 8 7 3 7 2 4 6 9 7 2 4 2 8 1 1 3 8 4 0 6 5 9 3 0 9 2......
4 7 1 1 9 4 2 6 1 8 9 0 6 6 7];
k=0;
for i=1:200
if a(i)==tl(i)
k=k+1;
end
end
rate=1.00*k/200; % final accuracy: 0.495
I might be wrong, since you don't specify the number of output neurons or the number of patterns per class in your dataset. However, it seems that you have created only one output neuron for your network. In that case, the network assigns ALL patterns to the same class, and the classification accuracy you get equals the a priori probability of that class. If, for example, the first 50 patterns of your dataset belong to one class and the rest to different classes, a classifier with one output will assign all patterns to the first class, so you will get the first 50 right.
If this is the case, you should create a classifier with N outputs, where N is the number of classes in your dataset. In this case, the classifier will vote for each class, and the pattern will be assigned to the class with the maximum output. If for example you have 3 classes, and the output for a specific pattern is [0.2, 0.83, 0.6], the pattern will be assigned to the second class.
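The argmax voting rule described above, sketched in Python for illustration:

```python
import numpy as np

# example network outputs for one pattern, one output neuron per class
outputs = np.array([0.2, 0.83, 0.6])
predicted_class = int(np.argmax(outputs)) + 1  # +1 for 1-based class labels
```

Here the second output is the largest, so the pattern is assigned to class 2.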
Moreover, converting the image to black-and-white is probably not the best approach. It would be better to convert the image to grayscale (to preserve the histogram to some extent) and use some normalization to compensate for differences in lighting.
Finally, keep in mind that neural networks essentially detect similarity between input vectors. So, if you need to classify pictures, you need to find a representation such that similar images produce similar input vectors. Feeding the raw pixel values into the classifier is not such a representation: for example, if you turn the image upside down, the input vector changes completely, even though it still shows the same object. You don't want that. You want features that depend on the object shown, not on lighting, angle, etc. Extracting such features is a different matter altogether (see, for example, the image preprocessing and feature extraction examples in the OpenCV framework, the standard image processing and computer vision tool in C++/Python).
If you are interested in neural networks rather than image processing, it would be better to start with some standard classification problems from the UCI repository (e.g. iris flower, Wisconsin breast cancer) and practice with them until you produce good results and feel comfortable with the tools you are using.