Optical flow input for Caffe - neural-network

I wonder if Caffe can take an optical flow image as input instead of RGB. I am aware that there are libraries like FlowNet that learn optical flow, but that is not what I am aiming at.
Please provide me a pointer if there is one.

Caffe is a very flexible framework. It can process almost any shape of input data you might provide it with.
A very common way to input images to Caffe is via lmdb/leveldb datasets created using the convert_imageset tool.
For more complex input shapes, one can use binary HDF5 files read via an "HDF5Data" layer.
As for optical flow, you can input it as an image via lmdb or as a two-channel tensor via HDF5. Caffe can handle it either way; it is up to you to make sure the net knows how to make sense of the input data.
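For the HDF5 route, here is a minimal sketch in Python (assuming h5py and NumPy; the file names, shapes, and the 'data'/'label' dataset names are placeholders and must match the top blobs of your "HDF5Data" layer):

    import h5py
    import numpy as np

    # Hypothetical flow data: N samples, 2 channels (dx, dy), H x W pixels.
    # Caffe's "HDF5Data" layer expects float32 arrays shaped N x C x H x W,
    # stored under dataset names that match the layer's "top" blobs.
    N, H, W = 100, 224, 224
    flow = np.random.randn(N, 2, H, W).astype(np.float32)  # stand-in for real flow
    labels = np.random.randint(0, 10, size=(N,)).astype(np.float32)

    with h5py.File('train_flow.h5', 'w') as f:
        f.create_dataset('data', data=flow)
        f.create_dataset('label', data=labels)

    # The layer reads a text file listing one .h5 path per line:
    with open('train_flow.txt', 'w') as f:
        f.write('train_flow.h5\n')

In the prototxt, the "HDF5Data" layer's hdf5_data_param source then points at train_flow.txt.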

Related

Neural Network: Is it possible to reverse an output tensor in a layer to an input image in PyTorch?

How can I reverse an output tensor in a layer back to an input image? I can imagine the reconstructed input image will differ from the original because of dropout, etc. However, I want to do an experiment, so I would appreciate a possible method in PyTorch. I am currently using a pre-trained ResNet. If the answer involves knowledge from a paper, kindly provide a citation or link.

Can we make a convolutional network that uses more than one image to make a prediction?

I cropped the following image from a tutorial.
This diagram shows the rough structure of a standard neural network: it takes one image as input and makes a prediction.
What I am thinking about is some kind of parallel structure; think of something like the following image.
Not exactly as in the above image, but you can see I am trying to use two images to make one prediction; the image is just to give you an idea of what I am trying to ask.
Is it possible to use more than one (two, three, ...) images like this, or in any other way, in order to make one prediction? This is not meant for actual photo classification, but I think such a technique could be used in a field like audio classification, where a graphical representation of the data is processed with image classification techniques.
Any advice, guidance, or opinion on this?
If we consider implementing exactly what is in the diagram: with a high-level API like Keras (keras.models.Sequential), all we can do is keep adding layers one after the other.
So what kind of technique can I use to implement the parallel structure?
Yes, you can use more than one image as input. See, for example, the Siamese neural network, which takes 2 images as input and passes them through a shared network architecture.
If instead you want an arbitrary, variable number of images as input, you can use an architecture based on recurrent neural networks, such as the Convolutional LSTM, which essentially applies a CNN to every image of the input sequence inside an LSTM recurrent network.
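For the fixed two-image case, here is a minimal sketch using the Keras functional API (input shape and layer sizes are arbitrary placeholders):

    from keras.models import Model
    from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, concatenate

    # Shared convolutional base: the same weights process both images.
    def make_base():
        inp = Input(shape=(64, 64, 3))
        x = Conv2D(32, (3, 3), activation='relu')(inp)
        x = MaxPooling2D((2, 2))(x)
        x = Flatten()(x)
        return Model(inp, x)

    base = make_base()
    img_a = Input(shape=(64, 64, 3))
    img_b = Input(shape=(64, 64, 3))

    # Apply the shared base to each image, merge the features, and predict.
    merged = concatenate([base(img_a), base(img_b)])
    out = Dense(1, activation='sigmoid')(merged)

    model = Model(inputs=[img_a, img_b], outputs=out)
    model.compile(optimizer='adam', loss='binary_crossentropy')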

How to decide which convolutional neural network architecture will work for my own data set?

I have a data set of chocolates. I need to detect whether they have scratches or not. I am planning to do the detection with a convolutional neural network using Caffe. But how do I decide which network architecture will suit my data set?
Also, how can I generate heat values when there are scratches in an image?
I have tried normal image processing algorithms and they did not work.
Abnormal Image
Normal Image
Based on the little information you provide, the choice of network architecture should be the last of your concerns. Also, "trying normal image processing algorithms" is quite a vague statement.
A few points to consider:
How big is the dataset? Are the chocolate photos taken in a controlled setting where they are always similar to your example photos or are they taken in the wild, i.e. where they could have different lighting conditions, positions, etc.? Is the dataset balanced?
How is the dataset labelled? Is it just one class for the whole image, specifying normal vs. abnormal? If so, you'd just be doing classification, and one way to visualise the location of the scratches (if they turn out to be the most prominent feature for the classification) is to use gradient-weighted class activation maps (Grad-CAM). On the other hand, if your dataset has labelled scratch points over the images, then you can directly train your network to output heatmaps.
Once your dataset is properly set up with a training and a validation set, you can start with a simple, small baseline convolutional architecture, and then try out different and bigger architectures like VGG16, ResNet, etc., and check whether they improve performance on your validation set.
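As a concrete starting point, here is a minimal baseline sketch (shown in Keras for brevity, though the question mentions Caffe; input size and layer widths are placeholders):

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

    # Small baseline: two conv blocks and a binary (normal vs. abnormal) head.
    model = Sequential([
        Conv2D(16, (3, 3), activation='relu', input_shape=(128, 128, 3)),
        MaxPooling2D((2, 2)),
        Conv2D(32, (3, 3), activation='relu'),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(64, activation='relu'),
        Dropout(0.5),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    # model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=10)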

Can I use the next layer's output as the current layer's input in Keras?

In text generation tasks, we usually use the model's last output as the current input to generate the next word. More generally, I want to build a neural network that treats the next layer's final hidden state as the current layer's input, just like the following (what confuses me is the decoder part):
But I have read Keras document and haven't found any functions to achieve it.
Can I achieve this structure by Keras? How?
What you are asking about is an autoencoder; you can find similar structures in Keras.
But there are certain details you will have to figure out on your own, including the padding strategy and the preprocessing of your input and output data. Your input cannot have a dynamic size, so you need a fixed length for inputs and outputs. I don't know what you mean by the arrows that join in one circle, but I suggest you take a look at the merge layers in Keras (basically adding, concatenating, etc.).
You probably need four sequential models and one final model that represents the combined structure.
One more thing: the LSTM decoder setup (the language model) is not dynamic in design. In your model definition you introduce fixed inputs and outputs for it, and you prepare the training data accordingly, so you don't need anything dynamic. Then, at test time, you can predict each decoded word in a loop: run the model once to predict the next output step, then run it again for the next time step, and so on.
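As a sketch of that test-time loop (assuming a hypothetical trained Keras model `model` that maps a fixed-length window of token ids to next-token probabilities; seed_ids, maxlen, and eos_id are placeholders):

    import numpy as np
    from keras.preprocessing.sequence import pad_sequences

    def greedy_decode(model, seed_ids, maxlen, eos_id, max_steps=50):
        # The model expects a fixed input length, so at each step we pad/trim
        # the growing sequence to a window of `maxlen` token ids.
        seq = list(seed_ids)
        for _ in range(max_steps):
            window = pad_sequences([seq], maxlen=maxlen)  # shape (1, maxlen)
            probs = model.predict(window)[0]              # next-token distribution
            next_id = int(np.argmax(probs))
            seq.append(next_id)
            if next_id == eos_id:                         # stop at end-of-sequence
                break
        return seq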
The structure you have shown is a custom one, so Keras doesn't provide any class or wrapper to build it directly. But yes, you can build this kind of structure in Keras.
It looks like you need an LSTM model running in the backward direction. I didn't understand the other part, which probably amounts to incorporating the previous sentence embedding as input at the next time step of the LSTM unit.
I would rather encourage you to work on simple language modeling with an LSTM first; you can then tweak the architecture to build the one depicted in the figure.
Example:
Text generation with LSTM in Keras

How do I use a pre-trained Caffe model?

I have some questions about how to actually interact with a pre-trained Caffe model. In my case I'm using a model for scene recognition.
In the Caffe git repository there are some code examples in Python and C++ implementing image classifiers. However, those do not apply to my use case (since they classify the input image as only ONE class).
My goal is an application that takes an input image (jpg) and outputs the highest-predicted class label for each pixel in the input image (i.e., indices for sky, beach, road, car).
Could anyone give me some pointers on how to proceed?
There already seem to be implementations of this. This demo (http://places.csail.mit.edu/demo.html) is kind of what I want.
Thank you!
What you are looking for is not image classification, but rather semantic segmentation.
A recent work by Jonathan Long, Evan Shelhamer, and Trevor Darrell is based on Caffe and can be found here. It uses a fully convolutional network, that is, a network with no "InnerProduct" layers, only convolutional layers, and is thus capable of producing outputs of different sizes for inputs of different sizes.
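As a rough pycaffe sketch of per-pixel inference with such a network (the prototxt/weights paths and the 'data'/'score' blob names are assumptions that depend on the specific model):

    import numpy as np
    import caffe

    # Placeholder paths for a specific fully convolutional model.
    net = caffe.Net('fcn_deploy.prototxt', 'fcn_weights.caffemodel', caffe.TEST)

    # Load the image as H x W x 3 RGB in [0, 1], then convert to the
    # C x H x W BGR float layout Caffe expects (mean subtraction omitted).
    im = caffe.io.load_image('input.jpg')
    blob = (im[:, :, ::-1].transpose(2, 0, 1) * 255.0).astype(np.float32)

    # Because the net is fully convolutional, the input blob can be
    # reshaped to this image's size before the forward pass.
    net.blobs['data'].reshape(1, *blob.shape)
    net.blobs['data'].data[0] = blob
    net.forward()

    # The output has shape (1, num_classes, H, W); the argmax over the
    # channel axis gives the highest-scoring class label per pixel.
    labels = net.blobs['score'].data[0].argmax(axis=0)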