Neural network that outputs a float vector - autoencoder

Is it possible to input a float vector into a neural network that will output another float vector?
This is very similar to an autoencoder, but with two different vectors.
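Yes — a plain feed-forward network with a linear output layer does exactly this. Below is a minimal sketch in NumPy (illustrative only; the dimensions and weights are arbitrary assumptions, not from any library API):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions are arbitrary for illustration.
d_in, d_hidden, d_out = 4, 16, 3

# Randomly initialized weights of a one-hidden-layer network.
W1 = rng.normal(scale=0.5, size=(d_in, d_hidden))
b1 = np.zeros(d_hidden)
W2 = rng.normal(scale=0.5, size=(d_hidden, d_out))
b2 = np.zeros(d_out)

def forward(x):
    """Map a float vector of size d_in to a float vector of size d_out."""
    h = np.tanh(x @ W1 + b1)   # bounded hidden activation
    return h @ W2 + b2         # linear output: any real values possible

y = forward(rng.normal(size=d_in))
print(y.shape)  # (3,)
```

Unlike an autoencoder, the input and output dimensions need not match, and the targets are an arbitrary second vector rather than a reconstruction of the input.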

Related

Direction of the normal vector to the decision hyper-plane of support vector machine

Given the coefficients of the support vector machine's hyperplane for classifying an m×n dataset into two classes, expressed as an n-dimensional vector, how can we find the direction (e.g. the direction cosines) of the normal vector to that hyperplane?
FYI, the coefficients and the support vectors were calculated by using the svmtrain function in Matlab.
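For a linear SVM with decision function w·x + b = 0, the coefficient vector w is itself normal to the hyperplane, so the direction cosines are just the components of w normalized to unit length. A sketch in NumPy, with a hypothetical coefficient vector standing in for svmtrain's output:

```python
import numpy as np

# Hypothetical coefficient vector w of the hyperplane w.x + b = 0
# (e.g. as returned for a linear kernel by the SVM training routine).
w = np.array([3.0, 4.0])

# w is normal to the hyperplane; its direction cosines are the
# components of the unit normal w / ||w||.
cosines = w / np.linalg.norm(w)
print(cosines)  # [0.6 0.8]
```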

How can I compute kernels in Matlab?

I want to calculate weighted kernels (for use in an SVM classifier) in Matlab, but I'm currently completely confused.
I would like to implement the following weighted RBF and sigmoid kernels:
K_rbf(x, y) = exp(-gamma * SUM_i w_i (x_i - y_i)^2)
K_sigmoid(x, y) = tanh(gamma * SUM_i w_i x_i y_i + b)
x and y are vectors of size n, gamma and b are constants, and w is a vector of size n containing the weights.
The problem is that Matlab's fitcsvm function needs a kernel matrix K(X,Y) computed from two matrices as input. For example, the unweighted RBF and sigmoid kernels can be computed as follows:
K_rbf = exp(-gamma .* pdist2(X,Y,'euclidean').^2);
K_sigmoid = tanh(gamma*X*Y' + b);
X and Y are matrices where the rows are the data points (vectors).
How can I compute the above weighted kernels efficiently in Matlab?
Simply scale your input by the weights before passing it to the kernel equations. Assume you have a weight vector w (of the size of the input dimension) and your data in the rows of X, with features as columns. Multiply X by w with broadcasting over rows (for example using bsxfun). That's all — but do not do the same to Y; multiply only one of the matrices. This holds for every such "weighted" kernel based on a scalar product (like the sigmoid kernel). For distance-based kernels (like the RBF), scale both X and Y by the square root of w.
Short proofs:
scalar-product based
f(<w.*x, y>) = f(SUM_i w_i x_i y_i) (the weighted scalar product — so scaling one argument elementwise by w suffices)
distance based
f(||sqrt(w).*x - sqrt(w).*y||^2) = f(SUM_i (sqrt(w_i)(x_i - y_i))^2)
= f(SUM_i w_i (x_i - y_i)^2)
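The same recipe, mirrored in NumPy for illustration (the question is about Matlab; the matrix shapes and weight values here are made up to show the scaling trick):

```python
import numpy as np

def weighted_rbf(X, Y, w, gamma):
    # Distance-based kernel: scale BOTH X and Y by sqrt(w).
    Xw, Yw = X * np.sqrt(w), Y * np.sqrt(w)
    sq_dists = ((Xw[:, None, :] - Yw[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def weighted_sigmoid(X, Y, w, gamma, b):
    # Scalar-product kernel: scale only ONE of the matrices by w.
    return np.tanh(gamma * (X * w) @ Y.T + b)

rng = np.random.default_rng(1)
X, Y = rng.normal(size=(5, 3)), rng.normal(size=(4, 3))
w = np.array([0.5, 1.0, 2.0])

K = weighted_rbf(X, Y, w, gamma=0.1)
S = weighted_sigmoid(X, Y, w, gamma=0.1, b=0.0)
print(K.shape, S.shape)  # (5, 4) (5, 4)
```

Entry K[i, j] equals exp(-gamma · Σ_k w_k (X[i,k] − Y[j,k])²), i.e. the weighted RBF kernel from the proof above.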

How can I use a neural network to model a quadratic equation?

A lot of examples I've seen of neural networks modelling mathematical functions use sin / cos / etc. These are nicely bounded between -1 and 1.
What if I wanted to model something that was quadratic? y = ax^2 + bx + c? How can I modify my input data to fit this?
Presumably I'll have only one input (x value) and a bias input. The output will be the y. My training data will have negative numbers as well as positive numbers.
Thank you.
You can feed any real number into a neural network, and it can in principle output any real number, as long as the last layer of your network is linear. If it isn't, you can rescale the targets (for example, multiply them all by a really small number) so they fit into the activation function's output range.
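A minimal sketch of this in NumPy: a one-hidden-layer network with a linear output, trained by hand-written gradient descent on a hypothetical quadratic (the coefficients and hyperparameters are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic to fit: y = 2x^2 - 3x + 1.
x = np.linspace(-2, 2, 64).reshape(-1, 1)
y = 2 * x**2 - 3 * x + 1

# One hidden tanh layer, LINEAR output layer -- so negative and
# unbounded targets are fine without any rescaling.
W1 = rng.normal(scale=1.0, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 1)); b2 = np.zeros(1)

lr, losses = 0.01, []
for step in range(2000):
    h = np.tanh(x @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # linear output: any real value
    err = pred - y
    losses.append((err ** 2).mean())  # mean squared error

    # Backpropagation by hand (full-batch gradient descent).
    g_pred = 2 * err / len(x)
    gW2 = h.T @ g_pred;  gb2 = g_pred.sum(0)
    g_h = (g_pred @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ g_h;     gb1 = g_h.sum(0)

    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])  # loss should drop substantially
```

The only x-specific choice is leaving the output layer linear; the negative targets need no preprocessing at all.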

Output Range of Neural Networks in MATLAB

I'm using a Neural Network to solve a regression problem.
I've scaled all the values to fall in the interval [0,1].
Therefore, all the training inputs and outputs are in [0,1].
However, when I run the network for some test examples, the values are going below 0. How can I get over this? I want all the values to be in [0,1].
If by "scaled all the values to [0,1]" you mean normalization of the dataset, then only the input vectors are guaranteed to be in [0,1]. The output of a neuron by itself can take any value; the activation function is what maps the output to the [0,1] or [-1,1] interval. Since some of your outputs are below zero, your network is probably using the tansig activation function. Change it to the logsig function, which has the same shape but gives output in [0,1] instead of [-1,1].
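The range difference is easy to check numerically. A NumPy sketch of the two activation functions (MATLAB's tansig is the hyperbolic tangent; logsig is the logistic sigmoid):

```python
import numpy as np

def tansig(x):
    # MATLAB's tansig: hyperbolic tangent, range (-1, 1)
    return np.tanh(x)

def logsig(x):
    # MATLAB's logsig: logistic sigmoid, range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-5, 5, 101)
print(tansig(x).min() < 0)   # True: tansig output can go below zero
print(logsig(x).min() > 0)   # True: logsig output never does
```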

two dimensional input in neural network

I want to feed a cell array containing two-dimensional inputs to a neural network in Matlab. Each input is a graph represented as a two-dimensional (n*n) matrix. How can I do that?
I guess your network expects one-dimensional input, which means you will have to supply a vector. In your case, this vector would be n²-sized.
Just string your whole input out as a one-dimensional array:
input_vec = reshape(input_mat, numel(input_mat), 1); % works for any matrix size, not just square
So rather than having a 10x10 (or whatever) matrix as input to your network, you would have a 100x1 vector as input. Then train your network on this vector. This approach is commonly used in textbook character-recognition networks; for example, read the section titled The MNIST Data in this tutorial.
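The same flattening in NumPy, for illustration (note that MATLAB's reshape is column-major while NumPy's default is row-major, which matters if you need the two to agree element-for-element):

```python
import numpy as np

n = 10
input_mat = np.arange(n * n, dtype=float).reshape(n, n)  # toy n-by-n "graph" matrix

# Flatten to an (n^2,)-vector: the network then sees 100 scalar
# inputs rather than a 10x10 grid.
input_vec = input_mat.reshape(-1)
print(input_vec.shape)  # (100,)
```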