I am going to build a neural network whose architecture has more than one output layer. More specifically, it is designed to run parallel branches on top of a series of convolutional layers. One branch computes classification results (softmax-like); the other produces regression results. However, I'm stuck designing the model as well as choosing the loss functions (criteria).
I. Should I use the torch container nn.Parallel() or nn.Concat() for the branch layers on top of the conv layers (nn.Sequential())? What is the difference, apart from the data format?
II. Because there are two outputs, a classification loss function and a regression loss function need to be combined linearly. I am wondering whether to choose nn.MultiCriterion() or nn.ParallelCriterion(), given the container chosen above, or whether I have to write a custom criterion class.
III. Could anyone who has done similar work tell me whether torch needs additional customization to implement backprop for training? I am concerned about data-structure issues with the torch containers.
Concat and Parallel differ in that each module in Concat gets the entire output of the last layer as input, while each module in Parallel takes only a slice of the output of the last layer. For your purpose you need Concat, not Parallel, since both loss functions need to take the entire output of your sequential network.
Based on the source code of MultiCriterion and ParallelCriterion, they do practically the same thing. The important difference is that in the case of MultiCriterion you provide multiple loss functions but only one target, and they are all computed against that target. Given that you have a classification and a regression task, I assume you have different targets, so you need ParallelCriterion(false), where false enables the multi-target mode (if the argument is true, ParallelCriterion seems to behave identically to MultiCriterion). The target is then expected to be a table of targets for the individual criteria.
If you use Concat and ParallelCriterion, torch should be able to compute the gradients properly for you. Both implement updateGradInput, which properly merges the gradients of the individual branches.
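To make the resulting shape concrete: the question is about Torch7 (Lua), where this would be nn.Concat on top of nn.Sequential plus nn.ParallelCriterion(false), but here is a minimal sketch of the same two-headed layout in PyTorch, purely for illustration; every layer size, every name, and the 0.5 loss weight are made-up placeholders.

```python
# Illustrative PyTorch analogue of the Concat-over-Sequential layout; not the
# Torch7 (Lua) code itself. All sizes and the loss weight are placeholders.
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    def __init__(self, num_classes=10, reg_dim=4):
        super().__init__()
        # shared convolutional trunk (the nn.Sequential part of the question)
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # two branches, each seeing the *entire* trunk output (like nn.Concat)
        self.cls_head = nn.Linear(16, num_classes)  # classification branch
        self.reg_head = nn.Linear(16, reg_dim)      # regression branch

    def forward(self, x):
        h = self.trunk(x)
        return self.cls_head(h), self.reg_head(h)

net = TwoHeadNet()
cls_loss = nn.CrossEntropyLoss()  # applies log-softmax to the logits itself
reg_loss = nn.MSELoss()

x = torch.randn(8, 3, 32, 32)
y_cls = torch.randint(0, 10, (8,))  # one target per task, mirroring the table
y_reg = torch.randn(8, 4)           # of targets ParallelCriterion(false) takes

logits, reg_out = net(x)
# linear combination of the two criteria, as in the question
loss = cls_loss(logits, y_cls) + 0.5 * reg_loss(reg_out, y_reg)
loss.backward()  # gradients from both branches merge at the shared trunk
```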
I have trained a network on two different modalities of the same image. I pass the data together in one layer, but after that it is pretty much two networks in parallel: they don't share a layer, and the two tasks have different sets of labels, so I have two different loss and accuracy layers (I use Caffe, btw). I would like to learn these tasks jointly. For example, the predicted probability of a class in task 1 should be higher in the presence of task 2 predicting a certain class label. I don't want to join them at the feature level but at the prediction level. How do I do this?
Why don't you want to join the predictions at the feature level?
If you really want to stick to your idea of not joining any layers of the network, you can apply a CRF or SVM on top of the overall prediction pipeline to learn cross-correlations between the predictions. For any other method you will need to combine features inside the network, one way or another. However, I would strongly recommend that you consider doing this. It is a general theme in deep learning that doing things inside the network works better than doing them outside.
From what I have learned by experimenting with joint prediction, you will get the most performance gain if you share weights between all convolutional layers of the network. You can then apply independent fc-layers, followed by softmax regression and separate loss functions, on top of the jointly predicted features. This allows the network to learn cross-correlations between features while still being able to make separate predictions.
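The question is about Caffe, but as a hedged illustration of this hard weight sharing (one shared conv stack, two independent fc/softmax heads with separate losses over different label sets), here is a minimal PyTorch sketch; all sizes below are placeholders.

```python
# Hedged PyTorch sketch of hard weight sharing across two classification tasks;
# not Caffe prototxt. Every dimension below is an illustrative placeholder.
import torch
import torch.nn as nn

shared = nn.Sequential(  # conv weights shared by both tasks
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head_a = nn.Linear(32, 5)  # task 1: independent fc layer, 5 classes
head_b = nn.Linear(32, 7)  # task 2: independent fc layer, 7 classes

x = torch.randn(4, 3, 64, 64)    # same image fed to both tasks
y_a = torch.randint(0, 5, (4,))  # task 1 labels
y_b = torch.randint(0, 7, (4,))  # task 2 labels (a different label set)

feats = shared(x)
ce = nn.CrossEntropyLoss()       # includes the softmax step internally
loss = ce(head_a(feats), y_a) + ce(head_b(feats), y_b)
loss.backward()  # both tasks' gradients update the shared conv weights,
                 # which is where the cross-correlations get learned
```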
Have a look at my MultiNet paper as a good starting point. All our training code is on GitHub.
I am trying to fit some input to predict an output in Matlab using fitnet neural networks, but I am interested in finding which input candidate vector correlates the most with the output, as a preprocessing step prior to training my neural network.
In the figure below, the output in yellow has five input candidates, from which I need to choose only one. What command should I use in Matlab, and how should I prepare the data (repeated around 1000 times) so I can get a clear correlation between each input candidate and the output?
To find the correlation between a given feature and the target variable you can use R = corrcoef(A,B), but... do not do it!
This process makes no sense and will probably be harmful to the whole pipeline. You would be removing information from your data, so that only features which have an independent, linear relation to the target variable persist. Then you would apply a highly non-linear model that exploits co-occurrences and feature correlations. These two steps are completely incompatible. The only valid relation is this: if your data is very simple and can pretty much be modeled linearly, then a neural net will work as well. But then there is no point in using a neural net in the first place; just apply linear regression.

Consequently: do not perform feature selection unless you have to. Try to build a good model without it, and if you really must remove some features (maybe obtaining them is an expensive process?), use post-hoc model analysis to remove features which are not used by the model. Do not split your problem into multiple, independent processes unless you can show that the decomposition does not harm the process; in the case of feature selection + regressor this is not true, as you cannot construct a valid feature-selection supervision signal without a trained regressor.
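As an illustration of the post-hoc analysis recommended above (shown in Python/scikit-learn rather than Matlab, which is an assumption for the sketch): fit the non-linear model first, then ask it which inputs it actually used. The synthetic target is built so that the informative inputs have near-zero linear correlation with the output.

```python
# Sketch of post-hoc feature analysis: train first, inspect importances after.
# Python/scikit-learn stands in for Matlab here; the data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))  # five input candidates, ~1000 samples
# non-linear target: inputs 0 and 1 matter, but their *linear* correlation
# with y is near zero, so corrcoef-based preselection would discard them
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=1000)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for i, imp in enumerate(model.feature_importances_):
    print(f"input {i}: linear r = {np.corrcoef(X[:, i], y)[0, 1]:+.3f}, "
          f"importance = {imp:.3f}")
# inputs 0 and 1 show high importance despite near-zero linear correlation
```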
I have run an ANN in Matlab to predict a variable based on several response variables. All variables have numerical values. I could not get desirable results, although I changed the number of hidden neurons several times, ran the model many times, and so on. My question is: should I transform the input variables to get better results? How can I know which transformation to choose? Thanks for any help.
I strongly advise you to use some methods from time-series analysis, like lagged correlation or windowed lagged correlation (with statistical tests). You can find them in most statistical packages (e.g. in R). From one small picture it is hard to deduce whether your prediction is lagged or not. Testing a large amount of data can help you reveal true dependencies and avoid trusting spurious correlations.
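For illustration, here is a minimal lagged-correlation scan in Python/numpy on synthetic data (the answer suggests R; this substitution, and everything in it, is just a sketch):

```python
# Minimal lagged-correlation scan: correlate y[t] against x[t - lag] for a
# range of lags. Synthetic data with a known lag of 3 steps.
import numpy as np

rng = np.random.default_rng(1)
n, true_lag = 500, 3
x = rng.normal(size=n)
y = np.roll(x, true_lag) + 0.3 * rng.normal(size=n)  # y follows x by 3 steps

for lag in range(8):
    r = np.corrcoef(y[lag:], x[:n - lag])[0, 1]
    print(f"lag {lag}: r = {r:+.3f}")
# the peak at lag 3 reveals the true lead-lag structure; a real analysis
# would add the statistical tests mentioned above
```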
I'm trying to get started using neural networks for a classification problem. I chose to use the Encog 3.x library as I'm working on the JVM (in Scala). Please let me know if this problem is better handled by another library.
I've been using resilient backpropagation. I have 1 hidden layer, and e.g. 3 output neurons, one for each of the 3 target categories. So ideal outputs are either 1/0/0, 0/1/0 or 0/0/1. Now, the problem is that the training tries to minimize the error, e.g. turn 0.6/0.2/0.2 into 0.8/0.1/0.1 if the ideal output is 1/0/0. But since I'm picking the highest value as the predicted category, this doesn't matter for me, and I'd want the training to spend more effort in actually reducing the number of wrong predictions.
So I learnt that I should use a softmax function as the output (although it is unclear to me whether this becomes a 4th layer or whether I should just replace the activation function of the 3rd layer with softmax), and then have the training reduce the cross entropy. Now I think that this cross entropy needs to be calculated either over the entire network or over the entire output layer, but the ErrorFunction that one can customize calculates the error on a neuron-by-neuron basis (it reads arrays of ideal and actual outputs and writes an array of error values). So how does one actually do cross-entropy minimization using Encog (or which other JVM-based library should I choose)?
I'm also working with Encog, but in Java, though I don't think that makes a real difference. I have a similar problem, and as far as I know you have to write your own function that minimizes cross entropy.
And as I understand it, softmax should just replace the activation of your 3rd layer rather than becoming a 4th layer.
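To make the softmax/cross-entropy relationship concrete, here is a library-agnostic numpy sketch (this is not Encog API); it also shows the property that can make a per-neuron error function workable after all:

```python
# Library-agnostic numpy sketch of softmax output + cross-entropy loss.
# Not Encog API; just the math the training would need to minimize.
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([1.2, 0.3, -0.8])  # raw outputs of the final layer
target = np.array([1.0, 0.0, 0.0])   # ideal output 1/0/0

p = softmax(logits)
loss = -np.sum(target * np.log(p))   # computed over the whole output layer

# Useful property: with softmax + cross entropy, the gradient of the loss
# w.r.t. the logits is simply (p - target), i.e. an element-wise quantity
# that a per-neuron error function could in principle return.
grad = p - target
print(p, loss, grad)
```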
Beginner on ANNs:
I am implementing a backpropagation neural network to predict the price of gold. I know that I have to split my data into training data, selection data, and test data.
However, I am unsure how to go about using these sets of data. At first I was training the network with my training set; then, after it was trained, I fed a number of inputs from the test set to the network and compared the outputs.
I'm not sure if I'm doing this right, and where does the selection set come in?
thanks in advance!
The general idea is:
1. Train the network for a little while on the training set.
2. Evaluate the network on a second set, often called the validation set. This is probably what you're calling the selection set.
3. Train the network a little more on the training set.
4. Evaluate the new network on the selection set again.
5. Which did better, the old network or the new network? If the new network is better, we're still getting some use out of training, so go to step 3. If the new network is worse, more training will probably only hurt. Use the previous version of the network, since it did better.
In this way, you can tell when to stop training.
One easy modification to this is to always keep track of the best network seen so far, and to stop training only when we see some number (say, three) of training attempts in a row that do worse.
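In code, the loop might look like the following framework-agnostic sketch, where train_one_epoch and evaluate are hypothetical stand-ins for your own training and selection-set evaluation routines (a higher evaluate score is assumed to be better):

```python
# Framework-agnostic early-stopping sketch with a "patience" budget.
# train_one_epoch and evaluate are hypothetical callbacks supplied by you.
import copy

def early_stopping_fit(net, train_one_epoch, evaluate, patience=3, max_epochs=1000):
    best_net = copy.deepcopy(net)        # always track the best network so far
    best_score = evaluate(net)
    bad_rounds = 0
    for _ in range(max_epochs):
        train_one_epoch(net)             # train a little on the training set
        score = evaluate(net)            # evaluate on the selection set
        if score > best_score:           # new network did better: keep going
            best_net = copy.deepcopy(net)
            best_score = score
            bad_rounds = 0
        else:                            # worse: spend some patience
            bad_rounds += 1
            if bad_rounds >= patience:   # e.g. three worse attempts in a row
                break
    return best_net                      # final evaluation still belongs to
                                         # the untouched test set
```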
The third set, the test set, is necessary because the selection set is, if indirectly, involved in the training process. Final evaluation must be done on data that was not used at all during training.
This sort of thing is sufficient for simple experiments, but in general you'll want to use cross-validation to get a better idea of your system's performance.
I wanted to leave a comment just to say that validation sets are a good place for model-dependent hyper-parameter tuning, but I'm new here and hence lack the reputation points to do so. To make this more worthy of a separate posting, I've included an outline of my own train-validate-test process. In practice, my workflow is as follows:
1. Identify, collect, and clean data. Try to limit complaining during the data-munging process.
2. Split the data into three sets: training, validation, test.
3. Establish two "base" models for evaluating the more complex models built later in the process. The first of these is typically a basic linear/logistic regression using all possible features. The second uses only the most obviously informative features (initial identification of informative features depends on the use case, and typically involves a combination of domain knowledge, basic clustering, and simple correlation).
4. Begin more empirical feature selection (e.g. an unsupervised NN, but usually a random forest) and prototype a broad range of models using the training set.
5. Eliminate poorly performing models as well as uninformative features.
6. Compare the performance of the remaining models against each other and against the "base" models, using a modified version of the training set (same data, but sans uninformative features). Toss under-performing models.
7. Using the validation set, tune the appropriate hyper-parameters for each model (either by hand or via grid search). Further reduce the number of models in consideration, ideally to just 2-3 (excluding the base models).
8. Finally, evaluate model performance (with optimized hyper-parameters) on the test set. Again, compare the models among themselves and against the base models. Make the final model choice based on a problem-appropriate combination of computational complexity/cost, ease of interpretation/transparency/"explainability", and improvement over and/or performance versus the base models.
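As a hedged illustration of steps 2 and 7-8 above (the three-way split, validation-set tuning, and the single final pass over the test set), here is a small Python/scikit-learn sketch; the model, the grid, and the synthetic data are all placeholders:

```python
# Placeholder sketch of split -> validate-tune -> test-once, in scikit-learn.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))     # placeholder features
y = rng.integers(0, 2, size=1000)  # placeholder binary labels

# step 2: split into training / validation / test (60/20/20 here)
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# step 7: choose hyper-parameters by validation performance (tiny hand-rolled grid)
best_acc, best_model = -1.0, None
for n_estimators in (50, 200):
    for max_depth in (3, None):
        model = RandomForestClassifier(n_estimators=n_estimators,
                                       max_depth=max_depth, random_state=0)
        model.fit(X_train, y_train)
        acc = accuracy_score(y_val, model.predict(X_val))
        if acc > best_acc:
            best_acc, best_model = acc, model

# step 8: the test set is touched exactly once, for the final report
print(f"validation accuracy: {best_acc:.3f}")
print(f"test accuracy: {accuracy_score(y_test, best_model.predict(X_test)):.3f}")
```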