I've been thinking about this for a while but I can't seem to find any data on it. When classifying with a neural network, you usually assign regions of the output neuron's activation function to a specific class, e.g. for tanh you could set 0.8 for class 1 and -0.8 for class 2. This is all well and good if you have up to 3 classes (the third class can be around zero), but when you have more classes things can become tricky.
Take an example where you are classifying football players based on their statistics. An attacking midfield player and a striker have similar statistics, but if you assign them to regions on opposite sides of the activation function, the accuracy of the classifier is surely harmed.
Would it not be easier to have a 2-output neural network that outputs arbitrary x and y values, so that the class regions could be represented in 2D rather than 1D? You could essentially have a circle, cut it into the number of classes you want, and use the centre of each slice as the target value for that class. This seems like a good way to classify to me, but the lack of relevant material on the subject leads me to believe there are easier ways to perform classification with a higher number of classes (say 6 classes, for example). The reason I ask is that I am trying to classify football players into certain positions based on their stats. You can see a scatter plot of the top 2 principal component scores for the players below.
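For what it's worth, the circle scheme described above is easy to write down. Here is a minimal sketch (plain NumPy; the function names are my own) of placing each class target at the centre of its slice of the unit circle and classifying by nearest target:

import numpy as np

def circle_targets(num_classes):
    # Centre of each slice of the unit circle, one 2D target per class.
    angles = 2 * np.pi * (np.arange(num_classes) + 0.5) / num_classes
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

def classify(xy, targets):
    # Assign the 2D network output to the nearest class target.
    return int(np.argmin(np.linalg.norm(targets - xy, axis=1)))

targets = circle_targets(6)                      # 6 classes around the circle
print(classify(np.array([0.9, 0.1]), targets))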
The usual approach is to use one output neuron for every class (one-hot targets). You then take the answer with "argmax", i.e. the class whose neuron has the highest activation.
You don't gain much by encoding 2 or 3 classes into a single neuron.
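As a minimal sketch of that scheme (plain NumPy; the numbers are made up): each class gets its own output neuron, and the prediction is the index of the largest activation:

import numpy as np

def softmax(z):
    # Squash the per-class outputs into probabilities that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

outputs = np.array([1.2, -0.3, 0.4, 2.1, 0.0, -1.0])  # one neuron per class (6 classes)
probs = softmax(outputs)
predicted = int(np.argmax(probs))                     # "argmax" picks the winning class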
I understand convolutional neural networks can be used for this problem, but if you look at videos of self-driving cars, like Tesla Autopilot, they still use vision detection and labeling systems as input for their neural networks. I am wondering how self-driving cars handle the problem of having N possible detection objects, where for each input there is a varying amount of information about them. As a neural network structure is very rigid, I would imagine this would cause a problem. Any explanation would be greatly helpful; if you also have a scientific paper, that would be very appreciated!
These networks do not output a single class label such as car, person, or sidewalk, but rather a probability distribution over N object types. The final decision is made later, basically by taking the highest-rated object in terms of probability as the prediction. The model is trained on lots of images, and as you said, these images contain varying numbers of objects; but since the model outputs probabilities for all N object types regardless of how many objects appear in the input, this is already something the model is trained for. It learns to output probabilities close to 0 for object types that are not present in the image.
Since this is something they are trained for, they can also do it during inference. Of course, some problems might occur if a certain object type is very rare in the data, but that is a class-imbalance issue.
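As a rough sketch of that behaviour (plain NumPy; the scores and the 0.5 cut-off are my own illustrative assumptions): the network always emits one score per object type, and types absent from the image simply come out near 0:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

raw_scores = np.array([4.2, -3.1, 0.8, -5.0])  # fixed N outputs: car, person, bike, sidewalk
presence = sigmoid(raw_scores)                  # per-type probability, ~0 when absent

detected = np.where(presence > 0.5)[0]          # final decision made after the network
print(detected)                                 # -> [0 2]: car and bike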
I'm using the Visual Recognition service on IBM Bluemix.
I have created some classifiers, in particular two of them with the following objectives:
first: a "generic" classifier that has to return a confidence score for the recognition of a particular object in the image. I've trained it with 50 positive examples of the object and 50 negative examples of things similar to the object (details of it, its components, images resembling it, etc.).
second: a more specific classifier that recognizes the particular type of the object identified before, if the score of the first classification is high enough. This new classifier has been trained like the first one: 50 positive examples of type A objects, 50 negative examples of type B objects. This second categorization should be more specific than the first one, because the images are more detailed and all similar to one another.
The result is that the two classifiers work well, and the results on a particular set of images match the ground truth in most cases, which should mean that both have been trained well.
But there is one thing I don't understand.
In both classifiers, if I try to classify one of the images that was used in the positive training set, my expectation is that the confidence score should be near 90-100%. Instead, I always obtain a score in the range between 0.50 and 0.55. The same thing happens when I try an image very similar to one from the positive training set (scaled, reflected, cropped, etc.): the confidence never goes above roughly 0.55.
I've tried to create a similar classifier with 100 positive and 100 negative images, but the final result never changes.
The question is: why is the confidence score so low? Why is it not near 90-100% for images that were used in the positive training set?
The scores from Visual Recognition custom classifiers range from 0.0 to 1.0, but they are unitless and are not percentages or probabilities (they do not add up to 100% or 1.0).
When the service creates a classifier from your examples, it is trying to figure out what distinguishes the features of one class of positive_examples from the other classes of positive_examples (and negative_examples, if given). The scores are based on the distance to a decision boundary between the positive examples for the class and everything else in the classifier. It attempts to calibrate the score output for each class so that 0.5 is a decent decision threshold, to say whether something belongs to the class.
However, given the cost-benefit balance of false alarms vs. missed detections in your application, you may want to use a higher or lower threshold for deciding whether an image belongs to a class.
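For example, a minimal sketch of moving that threshold yourself (plain Python; the scores and the threshold value are made up for illustration):

scores = [0.51, 0.62, 0.38, 0.55]   # scores returned by the classifier

# Lower the threshold if missed detections are costly,
# raise it if false alarms are costly; 0.5 is only the default calibration.
THRESHOLD = 0.45

decisions = [s >= THRESHOLD for s in scores]
print(decisions)                    # [True, True, False, True]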
Without knowing the specifics of your class examples, I might guess that there is a significant amount of similarity between your classes, that maybe in the feature space your examples are not in distinct clusters, and that the scores reflect this closeness to the boundary.
I am building a bidirectional LSTM to do multi-class sentence classification.
I have in total 13 classes to choose from, and I am multiplying the output of my LSTM network by a matrix whose dimensionality is [2*num_hidden_unit, num_classes], then applying softmax to get the probability of the sentence falling into one of the 13 classes.
So if we consider output[-1] as the network output:
W_output = tf.Variable(tf.truncated_normal([2 * num_hidden_unit, num_classes]))
bias = tf.Variable(tf.zeros([num_classes]))  # one bias per class
result = tf.matmul(output[-1], W_output) + bias
and I get my [1, 13] matrix (assuming I am not working with batches for the moment).
Now, I also have information that a given sentence definitely does not fall into certain classes, and I want to restrict the number of classes considered for a given sentence. So let's say, for instance, that for a given sentence I know it can fall only into 6 classes, so the output should really be a matrix of dimensionality [1, 6].
One option I was thinking of is to put a mask over the result matrix, where I multiply the rows corresponding to the classes I want to keep by 1 and the ones I want to discard by 0; but in this way I will just lose some of the information instead of redirecting it.
Does anyone have a clue what to do in this case?
I think your best bet is, as you seem to have described, using a weighted cross-entropy loss function where the weights for your impossible classes are 0 and the weights for the other, possible classes are 1. TensorFlow has a weighted cross-entropy loss function.
Another interesting but probably less effective method is to feed whatever information you have about which classes your sentence can or cannot fall into to the network at some point (probably towards the end).
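A related trick for the inference side (a sketch only, not necessarily what the weighted loss above does; the tensor names are mine, and it assumes TF2-style eager execution): add a large negative constant to the logits of the impossible classes before the softmax, so their probabilities come out numerically zero and the remaining probability mass is renormalized over the allowed classes:

import tensorflow as tf

num_classes = 13
logits = tf.random.normal([1, num_classes])  # stand-in for `result` above

# 1 for classes this sentence may fall into, 0 for impossible ones.
allowed = tf.constant([[1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]], tf.float32)

# Adding -1e9 to masked logits drives their softmax output to ~0 and,
# unlike multiplying probabilities by 0, renormalizes the rest.
masked_logits = logits + (1.0 - allowed) * -1e9
probs = tf.nn.softmax(masked_logits)
pred = tf.argmax(probs, axis=-1)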
I have built a neural network model with 3 classes. I understand that the ideal output for a classification task is a boolean 1 for one class and boolean zeros for the others; for example, the best classification result for a certain class, where the classifier output indicating how strongly the data belongs to that class is the first element of the vector, is [1, 0, 0]. But the output on the testing data will not look like that; instead, it will be real numbers like [2.4, -1, 0.6]. So how do I interpret this result? How do I decide which class the testing data belongs to?
I have tried taking the absolute value and turning the maximum element into 1 and the others into zeros; is this correct?
It appears your neural network is badly designed.
Regardless of your structure (the number of input, hidden, and output layers), when you are doing a multi-class classification problem you must ensure that each of your output neurons evaluates an individual class, that is, that each of them has a bounded output, in this case between 0 and 1. Use almost any of the standard bounded activation functions (e.g. sigmoid) on the output layer to achieve this.
Nevertheless, for the neural network to work properly, you must keep firmly in mind that every single neuron path from input to output operates as a classifier, that is, it defines a region of your input space which is going to be classified.
Under this framework, every single neuron has a directly interpretable meaning in the non-linear expansion the NN is defining, particularly when there are few hidden layers. This is ensured by the general expression for neural networks:
Y_out = F_n(Y_{n-1} * w_n - t_n)
...
Y_1 = F_0(Y_in * w_0 - t_0)
For example, with radial basis neurons, i.e. F_n = sqrt(sum_i (Y_{n,i} - R_{n,i})^2), and w_n = 1 (identity):
Y_{n+1} = sqrt(sum_i (Y_{n,i} - R_{n,i})^2)
a classification into dn-dimensional spherical clusters (dn being the dimension of layer n-1) is induced from the first layer; similarly, elliptical clusters can be induced. When two radial basis layers are stacked under that structure of spherical/elliptical clusters, unions and intersections of spherical/elliptical clusters are induced; with three layers, unions and intersections of the previous ones, and so on.
When using linear neurons, i.e. F_n = identity, linear classifiers are induced; that is, the input space is divided by dn-dimensional hyperplanes. When two layers are added, unions and intersections of hyperplanes are induced; with three layers, unions and intersections of the previous ones, and so on.
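As a tiny sketch of the two neuron types just described (plain NumPy; the names and example values are mine):

import numpy as np

def radial_neuron(y, R):
    # Radial basis: distance to the centre R; small inside the spherical cluster.
    return float(np.sqrt(np.sum((y - R) ** 2)))

def linear_neuron(y, w, t):
    # Linear: signed distance to a hyperplane; the sign gives the side.
    return float(y @ w - t)

y = np.array([0.2, 0.7])
print(radial_neuron(y, R=np.array([0.0, 0.5])))
print(linear_neuron(y, w=np.array([1.0, -1.0]), t=0.0))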
Hence, you can see that the number of neurons per layer is the number of classifiers needed per class. So if the geometry of the space is, to put it really graphically, two clusters for class A, one cluster for class B, and three clusters for class C, you will need at least six neurons per layer. Thus, if you could expect anything, you can take as a very rough approximation about n neurons per class per layer, that is, n to n^2 neurons per class per layer as a minimum. This number can be increased or decreased according to the topology of the classification.
Finally, the best advice here, for n outputs (classes) and r inputs, is:
Have r good classifier neurons on the first layers, radial or linear, to segment the space according to your expectations,
Have n to n^2 neurons per layer, adjusted to the difficulty of your problem,
Have 2-3 layers, and only increase this number after getting clear results,
Have n thresholding units on the last layer, only one layer, as continuous functions from 0 to 1 (make the crisp decision in code, as in the sketch below).
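A minimal sketch of that last point (Keras; the layer sizes are arbitrary assumptions): bound the n outputs to (0, 1) and make the crisp class decision in code, instead of reading raw unbounded activations like [2.4, -1, 0.6]:

import numpy as np
import tensorflow as tf

n_classes, n_inputs = 3, 8
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(n_inputs,)),  # hidden classifiers
    tf.keras.layers.Dense(n_classes, activation="sigmoid"),                 # bounded 0..1 outputs
])

scores = model.predict(np.random.rand(1, n_inputs))  # e.g. [[0.71, 0.18, 0.43]]
crisp = int(np.argmax(scores, axis=1)[0])            # crisp decision made in code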
Cheers...
I have a big data set (time series, about 50 parameters/values). I want to use a Kohonen network to group similar data rows. I've read a bit about Kohonen neural networks and I understand the idea, but:
I don't know how to implement a Kohonen network with so many dimensions. I found an example on CodeProject, but only with 2- or 3-dimensional input vectors. When I have 50 parameters, should I create 50 weights in each of my neurons?
I don't know how to update the weights of the winning neuron (how do I calculate the new weights?).
My English is not perfect and I don't understand everything I read about Kohonen networks, especially the descriptions of the variables in formulas; that's why I'm asking.
One should distinguish the dimensionality of the map, which is usually low (e.g. 2 in the common case of a rectangular grid), from the dimensionality of the reference vectors, which can be arbitrarily high without problems.
Look at http://www.psychology.mcmaster.ca/4i03/demos/competitive-demo.html for a nice example with 49-dimensional input vectors (7x7 pixel images). The Kohonen map in this case has the form of a one-dimensional ring of 8 units.
See also http://www.demogng.de for a Java simulator for various Kohonen-like networks, including ring-shaped ones like the one at McMaster. The reference vectors there, however, are all 2-dimensional, but only for easier display; they could have arbitrarily high dimensions without any change to the algorithms.
Yes, you would need 50 weights in each neuron, one per input parameter. Note, however, that the map itself is usually low-dimensional, as described in this self-organizing map article; it is the weight vectors, not the grid, that carry the 50 dimensions.
You have to use an update formula. From the same article: Wv(s + 1) = Wv(s) + Θ(u, v, s) · α(s) · (D(t) - Wv(s))
Yes, you'll need 50 inputs (and hence 50 weights) for each neuron.
You basically do a linear interpolation between each neuron's weight vector and the target (input) vector, using W(s + 1) = W(s) + Θ() · α(s) · (Input(t) - W(s)), with Θ being your neighbourhood function.
And you should update all your neurons, not only the winner.
Which function you use as the neighbourhood function depends on your actual problem.
A common property of such a function is that it has the value 1 when i = k and falls off with the Euclidean distance between the units. Additionally, it shrinks with time (in order to localize clusters).
Simple neighbourhood functions include linear interpolation (up to a "maximum distance") or a Gaussian function.
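A minimal sketch of one such update step (NumPy; the grid shape and the learning-rate and width schedules are my own assumptions), with a Gaussian neighbourhood over a 2D grid and 50-dimensional weight vectors:

import numpy as np

rows, cols, dim = 10, 10, 50
rng = np.random.default_rng(0)
weights = rng.random((rows * cols, dim))          # 50 weights per neuron
coords = np.array([(r, c) for r in range(rows) for c in range(cols)])

def som_step(x, s, alpha0=0.5, sigma0=3.0, tau=200.0):
    # Winner: the neuron whose weight vector is closest to the input row x.
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Both the learning rate and the neighbourhood width shrink with time s.
    alpha = alpha0 * np.exp(-s / tau)
    sigma = sigma0 * np.exp(-s / tau)
    # Gaussian neighbourhood: 1 at the winner, falling off with grid distance.
    d = np.linalg.norm(coords - coords[winner], axis=1)
    theta = np.exp(-(d ** 2) / (2 * sigma ** 2))
    # Update ALL neurons: W(s+1) = W(s) + theta * alpha * (x - W(s)).
    weights[:] = weights + (theta * alpha)[:, None] * (x - weights)

som_step(rng.random(dim), s=0)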