Autoencoder with Multiple Outputs - neural-network

How do I make an autoencoder with 2 different images at the output layer? I need to feed the average of 2 images into a neural network as input and receive both original images separately as output.
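A minimal sketch of what such a model could look like, assuming a Keras functional-API autoencoder with one shared encoder and two decoder heads; the 64x64 grayscale input size and layer widths are illustrative assumptions, not from the question:

```python
# Sketch only: one averaged image in, two reconstructed images out.
# Input size (64x64x1) and layer widths are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, Model

inp = tf.keras.Input(shape=(64, 64, 1), name="averaged_image")

# Shared encoder
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
latent = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)

def decoder_head(z, name):
    # One decoder per target image; both heads read the same latent code.
    y = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(z)
    y = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(y)
    return layers.Conv2DTranspose(1, 3, strides=2, padding="same",
                                  activation="sigmoid", name=name)(y)

out_a = decoder_head(latent, "image_a")
out_b = decoder_head(latent, "image_b")

model = Model(inp, [out_a, out_b])
model.compile(optimizer="adam", loss={"image_a": "mse", "image_b": "mse"})

# Training: x_avg holds the averaged inputs, x_a and x_b the two originals.
# model.fit(x_avg, {"image_a": x_a, "image_b": x_b}, epochs=..., batch_size=...)
```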

Related

Training a network with sub-networks

I am planning to train the following network from end-to-end. I have two questions:
Question 1
I have 4 ground truths:
Segmentation of distorted images
Parameters
Corrected images
Segmentation of corrected images
My problem is that, as can be seen in the image, the first two outputs come from networks 1 and 2 and the last two from network 3.
Is there any way I can train my network end to end?
Question 2
How can I load the VGG network weights?
I have connections from middle layers to Network 2 and subnetwork 1-2.
I am using TensorFlow 2.4.1.
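A hedged sketch of both points in tf.keras: wrapping everything in a single Model with four named outputs lets gradients flow end to end through all sub-networks, and tf.keras.applications.VGG16 loads the pre-trained weights while exposing middle layers by name. The toy heads standing in for networks 1-3, the tap points, and the loss choices are placeholders, not the questioner's actual architecture:

```python
# Placeholder sketch, not the questioner's architecture: a single tf.keras Model
# with four named outputs trains end to end, and VGG16 supplies pre-trained
# weights plus named middle layers that other sub-networks can read from.
import tensorflow as tf

inputs = tf.keras.Input(shape=(224, 224, 3))

# VGG16 backbone with ImageNet weights; tap intermediate activations by name.
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                  input_tensor=inputs)
mid_feat = vgg.get_layer("block3_conv3").output    # example tap point
deep_feat = vgg.get_layer("block5_conv3").output

# Toy heads standing in for networks 1, 2 and 3 from the figure.
seg_distorted = tf.keras.layers.Conv2D(1, 1, activation="sigmoid",
                                       name="seg_distorted")(mid_feat)
params = tf.keras.layers.Dense(6, name="params")(
    tf.keras.layers.GlobalAveragePooling2D()(deep_feat))
corrected = tf.keras.layers.Conv2D(3, 1, activation="sigmoid",
                                   name="corrected")(mid_feat)
seg_corrected = tf.keras.layers.Conv2D(1, 1, activation="sigmoid",
                                       name="seg_corrected")(mid_feat)

model = tf.keras.Model(inputs, [seg_distorted, params, corrected, seg_corrected])

# One ground truth per named output; all four losses are summed, so a single
# model.fit call backpropagates through every sub-network at once.
model.compile(optimizer="adam",
              loss={"seg_distorted": "binary_crossentropy",
                    "params": "mse",
                    "corrected": "mae",
                    "seg_corrected": "binary_crossentropy"})
```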

How to create a neural network that receives multiple input images in Matlab

I'd like to know if it's possible to create a neural network that receives multiple input images (imageInputLayer).
For example, a Siamese architecture for computing the disparity (stereo correspondence) out of two image patches. The network input is two images and the output is a scalar that represents the disparity.
Currently matlab supports a single imageInputLayer for each neural network.
I'd like to classify a 3D object by projecting it from 3 angles, therefore converting the problem into the classification of 3 images.
I'm trying to create a network that looks like the attached image.
Please let me know what you think and how to work things out with the network input.
This is simply not possible in Matlab 2018B.
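For comparison with frameworks that do support multiple image inputs, here is a rough sketch of the two-patch disparity network described above, written in Keras; the 32x32 patch size and layer widths are assumptions:

```python
# Sketch with assumed 32x32 grayscale patches: two inputs, one shared encoder,
# a single scalar disparity output.
import tensorflow as tf
from tensorflow.keras import layers, Model

def make_encoder():
    return tf.keras.Sequential([
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.GlobalAveragePooling2D(),
    ])

left = tf.keras.Input(shape=(32, 32, 1), name="left_patch")
right = tf.keras.Input(shape=(32, 32, 1), name="right_patch")

encoder = make_encoder()                       # weights shared by both branches
merged = layers.Concatenate()([encoder(left), encoder(right)])
hidden = layers.Dense(64, activation="relu")(merged)
disparity = layers.Dense(1, name="disparity")(hidden)

model = Model([left, right], disparity)
model.compile(optimizer="adam", loss="mse")
# model.fit([left_patches, right_patches], disparities, ...)
```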

How to feed activations with the right dimensions as input to the pooling layer

I am using alexnet; you can see the structure of the network below:
[Image: Alexnet structure with outputs]
I used the activations function in Matlab to get the features of my data from the output of the conv5 layer. The output is a feature vector of dimension 43264 for each image (I have 14000 images).
I did some processing on this output with no change in the dimension, so it is still 43264.
I want to feed the data back into the network starting at pooling layer 5 and train the network.
As you can see in the structure of alexnet, the input to pooling layer 5 should be 13x13x256. So I reshaped each 43264 feature vector into a 13x13x256 matrix, meaning the whole training set is a 14000x1 cell array in which each cell holds a 13x13x256 matrix.
I used the following code to train the network:
net = trainNetwork(Trainingset, labels, Layers, trainingOptions);
I still get an error saying there is an unexpected input to the pooling layer!
Any idea, please?
Thanks in advance.
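One common cause of this kind of error is the data container: trainNetwork expects image-style training data as a single 4-D numeric array (height-by-width-by-channels-by-N, i.e. 13x13x256x14000 here) rather than a 14000x1 cell array, and the layer stack must begin with an input layer matching 13x13x256. The reshape-and-retrain idea itself looks roughly like the following sketch, written here in NumPy/Keras terms; the pooling parameters and the classifier head are assumptions:

```python
# Sketch in NumPy/Keras terms (sizes taken from the question, everything else
# assumed): reshape each 43264-long conv5 vector to 13x13x256, stack the 14000
# samples into one 4-D batch, and train a small head that starts with pooling.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

n_images = 14000
features = np.random.rand(n_images, 43264).astype("float32")  # stand-in for conv5 outputs
labels = np.random.randint(0, 10, size=n_images)               # stand-in class labels

# 43264 = 13 * 13 * 256, so each vector reshapes losslessly into a 13x13x256 map.
x = features.reshape(n_images, 13, 13, 256)

head = tf.keras.Sequential([
    tf.keras.Input(shape=(13, 13, 256)),
    layers.MaxPooling2D(pool_size=3, strides=2),  # plays the role of pool5
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
head.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# head.fit(x, labels, epochs=..., batch_size=...)
```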

How to train a Matlab Neural Network using matrices as inputs?

I am making 8 x 8 tiles of images and I want to train an RBF neural network in Matlab using those tiles as inputs. I understand that I can convert the matrix into a vector and use it. But is there a way to train on them as matrices (to preserve the locality)? Or is there any other technique to solve this problem?
There is no way to use a matrix as an input to such a neural network, but this won't change anything anyway:
Assume you have any neural network with an image as input, one hidden layer, and the output layer. There will be one weight from every input pixel to every hidden unit. All weights are initialized randomly and then trained using backpropagation. The development of these weights does not depend on any local information - it only depends on the gradient of the output error with respect to the weight. Having a matrix input will therefore make no difference to having a vector input.
For example, you could make a vector out of the image, shuffle that vector in any way (as long as you do it the same way for all images) and the result would be (more or less, due to the random initialization) the same.
The way to handle local structure in the input data is to use convolutional neural networks (CNNs).
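A tiny NumPy sketch of the argument above (tile count and layer width are made up): applying one fixed permutation to every flattened 8x8 tile only relabels the first-layer weights, so a fully connected network sees an equivalent problem either way:

```python
# Tiny demonstration (tile count and layer width made up): one fixed permutation
# applied to every flattened 8x8 tile just relabels the first-layer weights.
import numpy as np

rng = np.random.default_rng(0)
tiles = rng.random((1000, 8, 8))        # stand-in for 1000 image tiles
flat = tiles.reshape(1000, 64)          # vector input for the network

perm = rng.permutation(64)              # same shuffle reused for all samples
flat_shuffled = flat[:, perm]           # identical information, columns renamed

# For any first-layer weight matrix W on `flat`, the reordered matrix W[perm, :]
# gives exactly the same pre-activations on `flat_shuffled`, so training on
# either representation leads to equivalent solutions -- locality is ignored.
W = rng.standard_normal((64, 16))
assert np.allclose(flat @ W, flat_shuffled @ W[perm, :])
```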

Classification using Matlab neural networks toolbox - image input?

I have images of 4 different animals and need to do classification using the Matlab neural networks toolbox. In Matlab's examples (Iris), the input data takes the form of a 4*1 vector (sepal width, etc.), but I want the input to be the original images. In other words, I want the neural network to extract the features from the images itself (e.g. the kernels in a convolutional NN), rather than me supplying something like a color histogram or SIFT. Is it possible for the toolbox to do the job? Thanks.
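For reference, the kind of model being asked about, where convolutional kernels learn the features directly from the raw images, looks roughly like this; the sketch is in Keras rather than the Matlab toolbox, and the 128x128 RGB input size and layer sizes are assumptions:

```python
# Sketch of the kind of model described (features learned by convolutional
# kernels, 4 animal classes); Keras is used here, and the 128x128 RGB input
# size and layer sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),
    layers.Conv2D(16, 3, activation="relu"),   # kernels learned from raw images
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),     # one unit per animal class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```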