Can I use the flatten layer in Deep Network Designer (MATLAB)?

I'm trying to translate a neural network that I wrote in Keras (Python), but MATLAB says that I can't use the flatten layer together with the image input layer. How can I solve this problem? Unfortunately, I could not find an answer to my question on the Internet.
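For context, this is the usual Keras pattern being translated; a minimal sketch with illustrative sizes. To my knowledge, MATLAB's fullyConnectedLayer flattens its input implicitly, so the explicit Flatten layer usually has no direct counterpart and can simply be dropped when rebuilding the model behind an imageInputLayer.

from tensorflow import keras
from tensorflow.keras import layers

# the usual Keras idiom: Flatten bridges the conv stack and the dense head
model = keras.Sequential([
    layers.Input(shape=(32, 32, 1)),            # illustrative input size
    layers.Conv2D(16, 3, activation="relu"),
    layers.Flatten(),                           # this layer needs no MATLAB equivalent
    layers.Dense(10, activation="softmax"),
])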

Related

How to create your own autoencoder in MATLAB?

I am trying to make an autoencoder that will work on the ORL dataset. I have the images ready as vectors (1024 × 400), and I was thinking of making an autoencoder with linear (fully connected) layers.
Of course, with a little searching on the Internet, you quickly come across the trainAutoencoder function:
network = trainAutoencoder(fea, 512)
But with this function I can't make an autoencoder with multiple layers. By googling, I found the stacked autoencoder, which solves that problem, but it raises the question of how to change the activation function (for example, to ReLU) instead of the sigmoid that comes by default. So I tried defining the layers directly:
autoenc1 = [featureInputLayer(32*32)            % 1024-dimensional input vectors
fullyConnectedLayer(16*16,"Name","fc_1")        % encoder
reluLayer("Name","relu_1")
fullyConnectedLayer(8*8,"Name","fc_2")          % bottleneck (the code)
fullyConnectedLayer(16*16,"Name","fc_3")        % decoder
reluLayer("Name","relu_2")
fullyConnectedLayer(32*32,"Name","fc_4")        % reconstruction of the input
regressionLayer("Name","output")]               % reconstruction is a regression problem
Is it possible to write an autoencoder in this way? I know a classification output doesn't make sense for an unsupervised network, but MATLAB forces you to end with some output layer; since reconstruction is a regression problem, a regressionLayer (training with trainNetwork and the inputs themselves as targets) seems the better fit than the classificationLayer I had at first. Is it possible to make an autoencoder using Deep Network Designer?

Neural network: Is it possible to reverse a layer's output tensor back to an input image in PyTorch?

How can I reverse a layer's output tensor back to an input image? I can imagine the reconstructed image will differ from the original because of dropout, etc., but I want to do an experiment, so I would appreciate a possible method in PyTorch. I am currently using a pre-trained ResNet. If the answer involves some knowledge from a paper, kindly provide a citation or link.
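One way to run that experiment is feature inversion: keep the network fixed and optimize a fresh input by gradient descent until its features match those of the original image; see Mahendran and Vedaldi, "Understanding Deep Image Representations by Inverting Them" (CVPR 2015). A minimal PyTorch sketch, with random tensors standing in for real preprocessed images:

import torch
import torchvision.models as models

# frozen feature extractor: ResNet-18 up to (and including) the global pooling
model = models.resnet18(pretrained=True).eval()
features = torch.nn.Sequential(*list(model.children())[:-1])
for p in features.parameters():
    p.requires_grad_(False)

target = torch.rand(1, 3, 224, 224)            # stand-in for a real, normalized image
with torch.no_grad():
    target_feat = features(target)

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # the image being reconstructed
opt = torch.optim.Adam([x], lr=0.05)
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(features(x), target_feat)
    loss.backward()
    opt.step()
# x now approximates an input producing the same features; stochastic layers
# such as dropout mean the reconstruction is never exact, as the question notes.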

How to design and train my own convolutional neural network in MATLAB using the Caffe package?

I have a structure for my CNN that will be used in image enhancement, and I want to know how to use the Caffe package in MATLAB to design and train the network. I don't need to import a pretrained network from Caffe, as I have a specific structure for the CNN. Does anyone have a link or example that guides me on how to do that? Any help will be appreciated.
The official Caffe documentation has a simple example showcasing the basic MATLAB interface, including adding convolutional layers to a network:
http://caffe.berkeleyvision.org/tutorial/interfaces.html#matlab
Once you get the basics in MATLAB, the interface is mostly similar to Python's.
Hope that helps.
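For a feel of the workflow, here is a rough pycaffe sketch of training a custom net; the prototxt file names are placeholders, and per the tutorial above the matcaffe calls mirror these:

import caffe

caffe.set_mode_cpu()                      # or caffe.set_mode_gpu()
# 'solver.prototxt' (hypothetical) points at the custom net definition
solver = caffe.SGDSolver('solver.prototxt')
solver.solve()                            # runs the full training loop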

Can we make a convolutional network that uses more than one image to make a prediction?

I cropped the following image from a tutorial. The diagram shows the rough structure of a standard neural network: it takes one image as input and makes a prediction.
What I am thinking about is some kind of parallel structure; think of something like the following image. Not exactly as in the image above, but you can see I am trying to use two images to make one prediction; the image is just to give you an idea of what I am asking.
Is it possible to use more than one (two, three, ...) images like this, or in any other way, to make one prediction? This is not meant for actual photo classification, but I think such a technique could be used in a field like audio classification, where a graphical representation of the data is processed with image classification techniques.
Any advice, guidance, or opinion on this?
If we consider implementing exactly what is in the diagram: with a high-level API like Keras (keras.models.Sequential), all we can do is keep adding one layer after another.
So what kind of technique can I use to implement the parallel structure?
Yes, you can use more than one image as input. See for example the Siamese neural network, which takes two images as input and passes both through a shared network architecture.
If instead you want an arbitrary, variable number of images as input, you can use an architecture based on recurrent neural networks, such as the convolutional LSTM, which essentially applies a CNN to every image of the input sequence and combines the results with an LSTM.
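As a concrete illustration, here is a minimal two-input model in the Keras functional API (the Sequential API cannot express this); the input shapes and layer sizes are illustrative assumptions, and the trunk is shared so both images pass through the same weights, Siamese-style:

from tensorflow import keras
from tensorflow.keras import layers

# shared trunk: the same weights process both images
trunk = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

in_a = keras.Input(shape=(64, 64, 1))
in_b = keras.Input(shape=(64, 64, 1))
merged = layers.concatenate([trunk(in_a), trunk(in_b)])   # fuse the two streams
out = layers.Dense(1, activation="sigmoid")(merged)       # one prediction from two images

model = keras.Model(inputs=[in_a, in_b], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")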

How do I use a pre-trained Caffe model?

I have some questions about how to actually interact with a pre-trained Caffe model. In my case I'm using a model for scene recognition.
In the Caffe git repository, there are some code examples in Python and C++ implementing image classifiers. However, those do not apply to my use case, since they only assign the input image as a whole to ONE class.
My goal is an application that takes an input image (JPG) and outputs the highest-predicted class label for each pixel of the input image (e.g., indices for sky, beach, road, car).
Could anyone give me some pointers on how to proceed?
There already seem to be implementations for this; this demo (http://places.csail.mit.edu/demo.html) is kind of what I want.
Thank you!
What you are looking for is not image classification, but rather semantic segmentation.
A recent work by Jonathan Long, Evan Shelhamer, and Trevor Darrell ("Fully Convolutional Networks for Semantic Segmentation", CVPR 2015) is based on Caffe. It uses a fully convolutional network, that is, a network with no "InnerProduct" layers, only convolutional layers, and is thus capable of producing outputs of different sizes for different input sizes.
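To get per-pixel labels out of such a network with pycaffe, the pattern looks roughly like the sketch below; the file names and the 'data'/'score' blob names follow the FCN reference models and are assumptions here:

import numpy as np
import caffe

net = caffe.Net('fcn_deploy.prototxt', 'fcn.caffemodel', caffe.TEST)
image = np.zeros((3, 500, 500), dtype=np.float32)  # stand-in for a preprocessed image
net.blobs['data'].reshape(1, *image.shape)         # the net accepts arbitrary sizes
net.blobs['data'].data[...] = image
scores = net.forward()['score']                    # shape: (1, n_classes, H, W)
labels = scores[0].argmax(axis=0)                  # (H, W): one class index per pixel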