Given a base bitmap s and a set of possible successor bitmaps s1, ..., sN, how can I train a TensorFlow graph to compute a probability distribution over these successors?
Each bitmap sK could be processed as a single input by the same network to give a real value representing its likelihood, and these values could then be mapped through a softmax layer to give a probability distribution.
However, doing this by hand wouldn't let me use backpropagation and the built-in optimizers, and it's not even guaranteed that it would accept variable-length inputs and outputs.
Is this even possible? Input and output tensors seem to necessarily have a fixed size, apart from the batch dimension.
If I understand correctly, you have a base bitmap s and, given that, you want to model the distribution of the sequence s1, s2, ..., sN.
This can be achieved with a sequence-labeling RNN model. To condition on s, concatenate the feature vector of s to every input in the sequence: at time-step t, the input to the RNN is the concatenation of the feature vectors for s_t and s, and after the softmax the expected output is s_{t+1}. The loss should compare the two; e.g. a cross-entropy loss can be used.
Variable-length sequences can very well be used. TensorFlow allows any dimension of a placeholder to be None, i.e. of variable size when running the graph. So along with a variable batch size you can also have variable-length sequences.
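A minimal sketch of that setup in TensorFlow 2 / Keras (the feature size, LSTM width, and number of possible successor bitmaps are assumptions for illustration, not part of the question):

import tensorflow as tf

feat_dim = 64   # assumed size of each bitmap's feature vector
vocab = 256     # assumed number of distinct successor bitmaps

seq = tf.keras.Input(shape=(None, feat_dim))   # variable-length sequence s1..sN
base = tf.keras.Input(shape=(feat_dim,))       # feature vector of the base bitmap s

# Tile the base features across time and attach them to every time step.
def tile_base(args):
    b, s = args
    return tf.tile(b[:, None, :], [1, tf.shape(s)[1], 1])

tiled = tf.keras.layers.Lambda(tile_base)([base, seq])
x = tf.keras.layers.Concatenate(axis=-1)([seq, tiled])
x = tf.keras.layers.LSTM(128, return_sequences=True)(x)
logits = tf.keras.layers.Dense(vocab)(x)       # per-step scores over successors

model = tf.keras.Model([seq, base], logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

At time-step t the target is the index of s_{t+1}, matching the cross-entropy setup described above.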
I’m trying to use a neural network for binary classification. It consists of three layers: the first layer has three input neurons, the hidden layer has two neurons, and the output layer has three neurons that output a binary value of 1 or 0. Actually the output is usually a floating-point number, but it typically rounds to a whole number.
If the network only outputs vectors of 3, then shouldn't my input vectors be the same size? Otherwise, for classification, how else do you map the output to the input?
I wrote the neural network in Excel using VBA based on the following article: https://www.analyticsvidhya.com/blog/2017/05/neural-network-from-scratch-in-python-and-r/
So far it works exactly as described in the article. I don’t have access to a machine learning library at the moment so I’ve chosen to give this a try.
For example:
If the output of the network is [n, n, n], does that mean that my input data has to be [n, n, n] also?
From what I read in here: Neural net input/output
It seems that's the way it should be. I'm not entirely sure though.
To put it simply:
for a regression task, your output usually has dimension [1] (if you predict a single value).
For a classification task, your output dimension should equal the number of classes you have (the outputs are probabilities, and they sum to 1).
So there is no need for the input and output to have equal dimensions; an NN is just a projection from one dimension to another.
For example,
regression, we predict house prices: input is [1, 10] (ten features of the property), the output is [1], the price
classification, we predict a class (will it be sold or not): input is [1, 11] (the same features plus the listed price), output is [1, 2] (the probabilities of class 0 (will not be sold) and class 1 (will be sold); for example [1; 0], [0; 1], or [0.5; 0.5]; it is binary classification)
Additionally, equality of input and output dimensions does occur in more specific tasks, for example autoencoder models (when you need to present your data in another dimension and then map it back to the original dimension).
Again, the output dimension is the size of the output for a single example, not for the whole dataset.
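For instance, a minimal Keras sketch of the binary-classification case above (the hidden-layer size is arbitrary):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(11,)),                     # 10 property features + listed price
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # [P(not sold), P(sold)]
])
print(model.output_shape)  # (None, 2): set by the number of classes, not by the 11 inputs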
I have a theoretical question. I understand the concept of kernel scale with the Gaussian kernel, but when I run 'OptimizeHyperparameters' in fitcsvm in MATLAB, it gives me values different from one, and I would like to understand what that means.
What does a high value of kernel scale mean in a linear-kernel SVM? And in a polynomial one?
Please pay attention to these paragraphs from the MATLAB help:
You cannot use any cross-validation name-value pair argument along with the 'OptimizeHyperparameters' name-value pair argument. You can modify the cross-validation for 'OptimizeHyperparameters' only by using the 'HyperparameterOptimizationOptions' name-value pair argument.
OptimizeHyperparameters values override any values you set using other name-value pair arguments. For example, setting OptimizeHyperparameters to 'auto' causes the 'auto' values to apply.
MATLAB divides all elements of the predictor matrix X by the value of KernelScale. Then, the software applies the appropriate kernel norm to compute the Gram matrix. Therefore, a high kernel scale means all elements of the predictor matrix are divided by a large value.
By default, fitcsvm searches among positive values, log-scaled in the range [1e-3, 1e3].
If you specify KernelScale and your own kernel function, for example 'KernelFunction','kernel', then the software throws an error; you must apply the scaling within your kernel function.
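To see numerically what that division does, here is a rough NumPy sketch (made-up data; it mimics the scaling step, not MATLAB's exact internals). For the linear kernel, K(x, y) = (x/s)·(y/s) = x·y / s², so a high KernelScale simply shrinks every Gram-matrix entry; the polynomial kernel shrinks the dot products the same way before raising them to a power; for the Gaussian kernel, the pairwise distances shrink and the kernel effectively becomes wider:

import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))        # 5 observations, 3 predictors

for scale in (0.1, 1.0, 100.0):
    Xs = X / scale                     # the KernelScale division
    linear = Xs @ Xs.T                 # linear kernel: entries shrink like 1/scale^2
    d2 = ((Xs[:, None, :] - Xs[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-d2)                  # Gaussian kernel: entries approach 1
    print(scale, linear.max().round(4), rbf.min().round(4))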
I want to create a neural network with three 2D matrices as inputs and one 2D matrix as output. The three inputs are:
1. A 2D matrix containing (x, y) coordinates from one device
2. A 2D matrix containing (x, y) coordinates from another, different device
3. A 2D matrix containing the true, exact (x, y) coordinates that I already measured (I don't know whether these should be included among the inputs or not)
Note that each input has its own error, and I want the neural network to minimize that error and choose the best result based on the true exact (x, y).
Notice that I'm working on object tracking: I extract (x, y) coordinates from a camera, and the other device produces the same kind of data. For example, I will simulate the coordinates as follows:
{ (1,2), (1,3), (1,4), (1,5), (1,6), ... }
and so on.
For sure the output is one 2D matrix, the best or true exact (x, y). I'm a beginner, and I want to understand how to create this network with these different inputs, and how to choose the best training method to get the best results.
Thanks in advance.
It sounds like what you want is an HxWx2 input where the first channel (depthwise layer) is your 1st input and the 2nd channel is your 2nd input. Your "true exact" coordinates would be the target that your net's output is compared to, rather than being an input.
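A small sketch of building such an input (NumPy; the 32x32 grid size and random data are placeholders):

import numpy as np

dev1 = np.random.rand(32, 32)        # (x, y) readings from the first device
dev2 = np.random.rand(32, 32)        # (x, y) readings from the second device

x = np.stack([dev1, dev2], axis=-1)  # shape (32, 32, 2): one channel per device
target = np.random.rand(32, 32)      # the measured true (x, y) map, used as the label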
Note that neural nets don't really handle regression (real-valued outputs) very well - you may get better results dividing your coordinate range into buckets and then treating it as a classification problem instead (using a softmax loss instead of a regression mean-squared-error loss).
Expanding on the regression vs classification point:
A regression problem is one where you want the net to output a real value, such as a coordinate value in the range 0-100. A classification problem is one where you want the net to output a set of probabilities that your input belongs to each of the classes it was trained on (e.g. you train a net on images belonging to the classes "cat", "dog" and "rabbit").
It turns out that modern neural nets are much better at classification than regression, because the way they work is basically to subdivide the N-dimensional input space into sub-regions corresponding to the outputs they are being trained to produce. What they are naturally doing is classifying.
The obvious way to turn a regression problem into a classification problem, which may work better, is to divide your desired output range into sub-ranges (aka buckets) which you treat as classes. E.g. instead of training your net to output a single value (or multiple values) in the range 0-100, train it to output class probabilities representing, for example, each of 10 separate sub-ranges (classes): 0-9, 10-19, 20-29, etc.
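A quick sketch of that bucketing (NumPy, assuming the 0-100 range and 10 buckets used above):

import numpy as np

coords = np.array([3.2, 17.9, 54.0, 99.5])           # example coordinate values
classes = np.clip((coords // 10).astype(int), 0, 9)  # class 0 = 0-9, class 1 = 10-19, ...
print(classes)                                       # [0 1 5 9]

one_hot = np.eye(10)[classes]                        # targets for a softmax output layer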
I have a question regarding neural-network back-propagation. Suppose we have a DNN trained on some data. Then we feed corrupted data into the NN and back-propagate the error not just to the first hidden layer, but all the way to the input layer (that is, we calculate deltas for the input neurons). Does the error term show us a mismatch between the "clean" and "corrupted" vectors?
If I'm interpreting the question correctly, you have two input vectors i1 = (a, b, ...) and i2 = (c, d, ...). You then have two corresponding output vectors o1 = (v, w, ...) and o2 = (x, y, ...).
i1 is part of your valid training data and is used to teach the NN the model. After this is done, you want to use the NN to detect the delta between the invalid o2 and the valid output given a correct application of the model to i2?
If this is the case, train the NN as normal with all of your valid input cases, then feed your test cases (input vectors which correspond to known corrupted output vectors) and collect the results with backpropagation disabled. That is, once the NN has learned the correct model, stop training and simply compare the "clean" results to the corrupted results yourself.
Note: You could alternatively train a neural network to accept as input a set of values corresponding to the input of some other process and a set of values corresponding to the (possibly corrupted) output of that process and produce as output the difference between the clean values and the corrupted values, but the extra training and framing of the data just to avoid doing the subtraction yourself is probably not worth it.
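A compact sketch of the "compare the outputs yourself" step (TensorFlow 2; the tiny network, shapes, and noise level are placeholders, and training is omitted):

import tensorflow as tf

# Stand-in for an already-trained network.
model = tf.keras.Sequential([tf.keras.layers.Dense(8, activation="relu"),
                             tf.keras.layers.Dense(2)])

clean = tf.random.normal((16, 3))
corrupted = clean + tf.random.normal((16, 3), stddev=0.5)

# Forward passes only -- no backpropagation, no weight updates.
delta = tf.reduce_mean(tf.abs(model(clean) - model(corrupted)), axis=-1)
print(delta.shape)  # one mismatch score per sample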
I have created an auto-encoder neural network in MATLAB. I have quite large inputs at the first layer, which I have to reconstruct through the network's output layer. I cannot use the large inputs as they are, so I convert them to the range [0, 1] using MATLAB's sigmf function. It gives me a value of 1.000000 for all the large values. I have tried setting the display format, but it does not help.
Is there a workaround to using large values with my auto encoder?
The process of converting your inputs to the range [0, 1] is called normalization; however, as you noticed, the sigmf function is not adequate for this task. This link may be useful to you.
Suppose that your inputs are given by a matrix of N rows and M columns, where each row represents an input pattern and each column is a feature. If your first column is:
vec =
-0.1941
-2.1384
-0.8396
1.3546
-1.0722
Then you can convert it to the range [0,1] using:
% get max and min
maxVec = max(vec);
minVec = min(vec);
% normalize to 0...1
vecNormalized = (vec - minVec) ./ (maxVec - minVec)
vecNormalized =
0.5566
0
0.3718
1.0000
0.3052
As @Dan indicates in the comments, another option is to standardize the data. The goal of this process is to scale the inputs to have mean 0 and variance 1. In this case, you need to subtract the mean value of the column and divide by its standard deviation:
meanVec = mean(vec);
stdVec = std(vec);
vecStandardized = (vec - meanVec) ./ stdVec
vecStandardized =
0.2981
-1.2121
-0.2032
1.5011
-0.3839
Before I give you my answer, let's think a bit about the rationale behind an auto-encoder (AE):
The purpose of an auto-encoder is to learn, in an unsupervised manner, something about the underlying structure of the input data. How does an AE achieve this goal? If it manages to reconstruct the input signal from its output signal (which is usually of lower dimension), it means that it did not lose information and effectively managed to learn a more compact representation.
In most examples it is assumed, for simplicity, that both the input signal and the output signal range in [0..1]. Therefore, the same non-linearity (sigmf) is applied both for obtaining the output signal and for reconstructing the inputs back from the outputs.
Something like
output = sigmf( W*input + b ); % compute output signal
reconstruct = sigmf( W'*output + b_prime ); % notice the different constant b_prime
Then the AE learning stage tries to minimize the training error || input - reconstruct ||.
However, who said the reconstruction non-linearity must be identical to the one used for computing the output?
In your case, the assumption that the inputs range in [0..1] does not hold. Therefore, it seems that you need to use a different non-linearity for the reconstruction. You should pick one that agrees with the actual range of your inputs.
If, for example, your inputs range in (0..inf), you may consider using exp or ().^2 as the reconstruction non-linearity. You may use polynomials of various degrees, log, or whatever function you think may fit the spread of your input data.
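For concreteness, here is a Keras sketch of that idea (the sizes are arbitrary; the point is the exp on the reconstruction layer, so the decoder can cover a (0..inf) input range):

import tensorflow as tf

inp = tf.keras.Input(shape=(100,))
code = tf.keras.layers.Dense(20, activation="sigmoid")(inp)  # output signal, in [0, 1]
recon = tf.keras.layers.Dense(100, activation=tf.exp)(code)  # reconstruction in (0, inf)

ae = tf.keras.Model(inp, recon)
ae.compile(optimizer="adam", loss="mse")  # minimize ||input - reconstruct||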
Disclaimer: I never actually encountered such a case and have not seen this type of solution in literature. However, I believe it makes sense and at least worth trying.