How to jointly learn two tasks at prediction level?

I have trained a network on two different modalities of the same image. I pass the data in together in one layer, but after that it is pretty much two networks in parallel: they don't share a layer, and the two tasks have different sets of labels, so I have two different loss and accuracy layers (I use Caffe, btw). I would like to learn these tasks jointly. For example, the prediction of a class of task 1 should be higher in the presence of task 2 predicting a certain class label. I don't want to join them at the feature level but at the prediction level. How can I do this?

Why don't you want to join the predictions at the feature level?
If you really want to stick to your idea of not joining any layers of the network, you can apply a CRF or SVM on top of the overall prediction pipeline to learn cross-correlations between the predictions. For any other method you will need to combine features inside the network, one way or another. However, I would strongly recommend that you consider doing this. It is a general theme in deep learning that doing things inside the network works better than doing them outside.
From what I have learned by experimenting with joint prediction, you will get the most performance gain if you share weights between all convolutional layers of the network. You can then apply independent fc layers, each followed by a softmax regression and a separate loss function, on top of the jointly computed features. This allows the network to learn cross-correlations between features while still being able to make separate predictions.
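As a concrete illustration, here is a minimal Keras sketch of that layout (the answer above is framework-agnostic; all layer sizes, names, and loss weights here are illustrative assumptions, not taken from the answer): a shared convolutional trunk with two independent softmax heads whose losses are combined linearly.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES_A, NUM_CLASSES_B = 10, 5  # assumed label-set sizes for the two tasks

inputs = keras.Input(shape=(64, 64, 3))
# Shared convolutional layers: both tasks see the same features.
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Independent fc heads, one per task, each with its own softmax and loss.
head_a = layers.Dense(NUM_CLASSES_A, activation="softmax", name="task_a")(x)
head_b = layers.Dense(NUM_CLASSES_B, activation="softmax", name="task_b")(x)

model = keras.Model(inputs, [head_a, head_b])
model.compile(
    optimizer="adam",
    loss={"task_a": "categorical_crossentropy", "task_b": "categorical_crossentropy"},
    loss_weights={"task_a": 1.0, "task_b": 1.0},  # linear combination of the two losses
)
```

Because the trunk receives gradients from both losses, it learns features that are useful for both tasks, while each head still makes its own prediction.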
Have a look at my MultiNet paper as a good starting point. All our training code is on GitHub.

Related

How to evaluate the quality of a neural network for object detection in Keras?

I've already trained a neural network in Keras to detect two classes of images (cats and dogs) and measured its accuracy on test data. Is that enough for the conclusion of a master's thesis, or should I take further steps to evaluate the quality of the network (for instance, cross-validation)?
Not really; I would expect more than just accuracy from my students in any classification setup. Accuracy only evaluates that particular network on that particular test set, but you should, to some extent, justify the design choices you've made in building that network. Here are some things to consider:
Presumably you have fixed some hyper-parameters; you can investigate how these affect your results. How many filters? How many layers? And, most importantly, why?
An important aspect of object classification is how your model handles noise. Depending on your dataset, one simple way to test this is to pre-process the test data: blur it, invert colours, etc., and you will see your performance drop (see the sketch after this list). Why does that happen? What does the confusion matrix look like then?
What is the computational performance of the network? Is it fast or slow compared to another system, say VGG?
When you evaluate your project as a whole, not just the network, asking why things worked helps a lot, not just why they didn't.
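A minimal sketch of that noise-robustness check, assuming a trained Keras `model` with a softmax output, test images `x_test` scaled to [0, 1], and integer class labels `y_test` from your existing pipeline; the perturbations and names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.metrics import confusion_matrix

def blur(images, sigma=2.0):
    # Blur each image spatially, leaving the channel axis untouched.
    return np.stack([gaussian_filter(img, sigma=(sigma, sigma, 0)) for img in images])

def invert(images):
    # Invert colours, assuming pixel values scaled to [0, 1].
    return 1.0 - images

for name, transform in [("original", lambda x: x), ("blurred", blur), ("inverted", invert)]:
    x = transform(x_test)
    preds = np.argmax(model.predict(x, verbose=0), axis=1)
    print(name, "accuracy:", np.mean(preds == y_test))
    print(confusion_matrix(y_test, preds))
```

Comparing the confusion matrices across the three conditions tells you not only how much performance drops, but which class the model starts to confuse under noise.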

Siamese networks: Why does the network need to be duplicated?

The DeepFace paper from Facebook uses a Siamese network to learn a metric. They say that the DNN that extracts the 4096-dimensional face embedding has to be duplicated in a Siamese network, and that both duplicates share weights. But if they share weights, every update to one of them will also change the other. So why do we need to duplicate them?
Why can't we just apply one DNN to two faces and then do backpropagation using the metric loss? Do they maybe mean this and just talk about duplicated networks for "better" understanding?
Quote from the paper:
We have also tested an end-to-end metric learning approach, known as Siamese network [8]: once learned, the face recognition network (without the top layer) is replicated twice (one for each input image) and the features are used to directly predict whether the two input images belong to the same person. This is accomplished by: a) taking the absolute difference between the features, followed by b) a top fully connected layer that maps into a single logistic unit (same/not same). The network has roughly the same number of parameters as the original one, since much of it is shared between the two replicas, but requires twice the computation. Notice that in order to prevent overfitting on the face verification task, we enable training for only the two topmost layers.
Paper: https://research.fb.com/wp-content/uploads/2016/11/deepface-closing-the-gap-to-human-level-performance-in-face-verification.pdf
The short answer is that yes, I think that looking at the architecture of the network will help you understand what is going on. You have two networks that are "joined at the hip", i.e. sharing weights. That's what makes it a "Siamese network". The trick is that you want the two images you feed into the network to pass through the same embedding function, so to ensure that this happens, both branches of the network need to share weights.
Then we combine the two embeddings into a metric loss (called "contrastive loss"). We can back-propagate as normal; we just have two input branches available, so we can feed in two images at a time.
I think a picture is worth a thousand words; conceptually, a Siamese network is constructed as follows.
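Here is a minimal conceptual sketch in Keras (an assumed toy architecture, not the actual DeepFace network): one embedding model is applied to both inputs, so the weights are shared by construction, and a contrastive loss is applied to the distance between the embeddings.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def build_embedding(input_shape=(64, 64, 3), dim=128):
    # The single embedding function both images must pass through.
    return keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(dim),
    ])

embedding = build_embedding()          # one set of weights...

img_a = keras.Input(shape=(64, 64, 3))
img_b = keras.Input(shape=(64, 64, 3))
emb_a = embedding(img_a)               # ...applied to both branches,
emb_b = embedding(img_b)               # which is all the "duplication" amounts to

# Euclidean distance between the two embeddings.
dist = layers.Lambda(
    lambda t: tf.norm(t[0] - t[1], axis=1, keepdims=True))([emb_a, emb_b])

def contrastive_loss(y_true, d, margin=1.0):
    # y_true = 1 for "same person", 0 for "different person".
    y_true = tf.cast(y_true, d.dtype)
    same = y_true * tf.square(d)
    diff = (1.0 - y_true) * tf.square(tf.maximum(margin - d, 0.0))
    return tf.reduce_mean(same + diff)

model = keras.Model([img_a, img_b], dist)
model.compile(optimizer="adam", loss=contrastive_loss)
# model.fit([images_a, images_b], labels) trains both branches at once;
# because there is only one `embedding`, the weights stay identical.
```

Note that the "duplication" is purely conceptual: there is a single embedding model, called twice in the computation graph.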
The gradients depend on the activation values, so the gradients computed in each branch will be different; the final update to the shared weights then combines the gradient contributions from both branches (e.g. by summing or averaging them).

Use a trained neural network to imitate its training data

I'm in the early stages of designing a prose imitation system. It will read a bunch of prose, then mimic it. It's mostly for fun, so the mimicked prose doesn't need to make too much sense, but I'd like to make it as good as I can with a minimal amount of effort.
My first idea is to use my example prose to train a classifying feed-forward neural network, which classifies its input as either part of the training data or not part of it. Then I'd like to somehow invert the neural network, finding new random inputs that the trained network also classifies as being part of the training data. The obvious and stupid way of doing this is to randomly generate word lists and only output the ones that get classified above a certain threshold, but I think there is a better way, using the network itself to limit the search to certain regions of the input space. For example, maybe you could start with a random vector and use gradient-based optimisation to find a local maximum around the random starting point. Is there a word for this kind of imitation process? What are some of the known methods?
How about Generative Adversarial Networks (GANs, Goodfellow 2014) and their more advanced siblings, like Deep Convolutional Generative Adversarial Networks (DCGANs)? There are plenty of proper research articles out there, and also gentler introductions, like this one on DCGAN and this one on GAN. To quote the latter:
GANs are an interesting idea that were first introduced in 2014 by a group of researchers at the University of Montreal led by Ian Goodfellow (now at OpenAI). The main idea behind a GAN is to have two competing neural network models. One takes noise as input and generates samples (and so is called the generator). The other model (called the discriminator) receives samples from both the generator and the training data, and has to be able to distinguish between the two sources. These two networks play a continuous game, where the generator is learning to produce more and more realistic samples, and the discriminator is learning to get better and better at distinguishing generated data from real data. These two networks are trained simultaneously, and the hope is that the competition will drive the generated samples to be indistinguishable from real data.
(DC)GAN should fit your task quite well.
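To make the adversarial loop concrete, here is a minimal, hedged Keras sketch on toy continuous vectors (not prose; text generation with GANs is considerably harder). Every name and size here (NOISE_DIM, DATA_DIM, the layer widths, the uniform "real" data) is an illustrative assumption:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NOISE_DIM, DATA_DIM = 32, 64  # illustrative sizes

def build_generator():
    # Maps noise vectors to synthetic data samples.
    return keras.Sequential([
        layers.Dense(128, activation="relu", input_shape=(NOISE_DIM,)),
        layers.Dense(DATA_DIM, activation="tanh"),
    ])

def build_discriminator():
    # Outputs the probability that a sample came from the real data.
    return keras.Sequential([
        layers.Dense(128, activation="relu", input_shape=(DATA_DIM,)),
        layers.Dense(1, activation="sigmoid"),
    ])

generator = build_generator()
discriminator = build_discriminator()
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model used to train the generator; the discriminator is frozen here.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

real_data = np.random.uniform(-1, 1, (1000, DATA_DIM))  # stand-in for real samples
BATCH = 64
for step in range(1000):
    # 1) Train the discriminator on a real batch and a fake batch.
    idx = np.random.randint(0, len(real_data), BATCH)
    noise = np.random.normal(size=(BATCH, NOISE_DIM))
    fake = generator.predict(noise, verbose=0)
    discriminator.train_on_batch(real_data[idx], np.ones((BATCH, 1)))
    discriminator.train_on_batch(fake, np.zeros((BATCH, 1)))
    # 2) Train the generator to make the discriminator output "real".
    noise = np.random.normal(size=(BATCH, NOISE_DIM))
    gan.train_on_batch(noise, np.ones((BATCH, 1)))
```

After training, `generator.predict(noise)` plays the role you described: it maps random inputs to samples the discriminator can no longer tell apart from the training data.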

Torch implementation of multi-output-layer neural network

I am going to build a neural network that has more than one output layer. More specifically, it is designed to run parallel branches on top of a series of convolutional layers. One branch computes classification results (softmax-like); the other produces regression results. However, I'm stuck on designing the model as well as choosing the loss functions (criterions).
I. Should I use the Torch container nn.Parallel() or nn.Concat() for the branch layers on top of the conv layers (nn.Sequential())? What is the difference, apart from the data format?
II. Given the outputs, a classification loss function and a regression loss function are to be combined linearly. I am wondering whether to choose nn.MultiCriterion() or nn.ParallelCriterion(), depending on the chosen container, or whether I have to write a custom criterion class.
III. Could anyone who has done similar work tell me whether Torch needs additional customization to implement backprop for training? I am concerned about data-structure issues with Torch containers.
Concat and Parallel differ in that each module inside a Concat receives the entire output of the previous layer as input, while each module inside a Parallel receives only a slice of it. For your purpose you need Concat, not Parallel, since both loss functions need to see the entire output of your sequential network.
Based on the source code of MultiCriterion and ParallelCriterion, they do practically the same thing. The important difference is that in the case of MultiCriterion you provide multiple loss functions but only one target, and they are all computed against that target. Given that you have a classification and a regression task, I assume you have different targets, so you need ParallelCriterion(false), where false enables the multi-target mode (if the argument is true, ParallelCriterion seems to behave identically to MultiCriterion). The target is then expected to be a table of targets for the individual criterions.
If you use Concat and ParallelCriterion, Torch should be able to compute the gradients properly for you. They both implement updateGradInput, which properly merges the gradients of the individual branches.
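For readers more familiar with Python, here is the same idea sketched in PyTorch rather than Lua Torch (plainly a different framework, used only as an analogy): a shared trunk, two branches that both see its full output (the role Concat plays above), and a linear combination of a classification loss and a regression loss against separate targets (the role of ParallelCriterion(false)). All sizes and loss weights are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Shared convolutional trunk (the nn.Sequential part).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.cls_head = nn.Linear(16, num_classes)  # classification branch
        self.reg_head = nn.Linear(16, 1)            # regression branch

    def forward(self, x):
        feats = self.trunk(x)
        # Both heads receive the full trunk output, like modules in a Concat.
        return self.cls_head(feats), self.reg_head(feats)

net = TwoHeadNet()
cls_criterion = nn.CrossEntropyLoss()
reg_criterion = nn.MSELoss()

x = torch.randn(8, 3, 32, 32)
cls_target = torch.randint(0, 10, (8,))  # one target per task, like the
reg_target = torch.randn(8, 1)           # table of targets in ParallelCriterion

cls_out, reg_out = net(x)
loss = 1.0 * cls_criterion(cls_out, cls_target) + 0.5 * reg_criterion(reg_out, reg_target)
loss.backward()  # gradients from both losses merge automatically in the shared trunk
```

The automatic merging of gradients in the shared trunk is exactly what updateGradInput does for you in the Lua containers.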

Where do filters/kernels for a convolutional network come from?

I've seen some tutorial examples, like the UFLDL convolutional net, where they use features obtained by unsupervised learning, and some others where kernels are engineered by hand (using Sobel and Gabor detectors, different sharpness/blur settings, etc.). Strangely, I can't find a general guideline on how one should choose a good kernel for something more than a toy network. For example, considering a deep network with many convolutional-pooling layers, are the same kernels used at each layer, or does each layer have its own kernel subset? If so, where do these deeper layers' filters come from - should I learn them using some unsupervised learning algorithm on the data passed through the first convolution-and-pooling layer pair?
I understand that this question doesn't have a single answer; I'd be happy with just the general approach (some review article would be fantastic).
The current state of the art is to learn all the convolutional layers from the data using backpropagation (ref).
Also, this paper recommends small kernels (3x3) and small pooling (2x2). You should train different filters for each layer.
Kernels in deep networks are usually all trained at the same time in a supervised way (with known inputs and outputs of the network), using backpropagation (which computes the gradients) and some version of stochastic gradient descent (the optimization algorithm).
Kernels in different layers are usually independent. They can have different sizes, and their numbers can differ as well. How to design a network is an open question, and it depends on your data and the problem itself.
If you want to work with your own dataset, you should start with an existing pre-trained network [Caffe Model Zoo] and fine-tune it on your dataset. This way the architecture of the network is fixed, as you have to respect the architecture of the original network. The networks you can download are trained on very large problems, which makes them able to generalize well to other classification/regression problems. If your dataset is at least partly similar to the original dataset, the fine-tuned network should work very well.
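A minimal sketch of that fine-tuning recipe, written in Keras rather than Caffe for brevity (the answer points to the Caffe Model Zoo; VGG16, the input size, and NUM_CLASSES here are illustrative assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 5  # assumed number of classes in your own dataset

# Load a network pre-trained on a very large problem (ImageNet),
# dropping its original classifier head.
base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3))
base.trainable = False  # keep the pre-trained kernels fixed at first

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # fresh head for your labels
])
model.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
# After the new head converges, you can unfreeze some of the top conv blocks
# and continue training with a low learning rate to fine-tune their kernels.
```

This reflects the point above: you inherit the learned kernels and the fixed architecture, and only adapt what your dataset requires.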
A good place to get more information is the Caffe @ CVPR2015 tutorial.