Matlab: specify Neural Net with no hidden layer using built-in functions

Most likely a trivial question: how can I create a neural network with no hidden layer for regression problems in Matlab using the built-in functions? (I understand this is the same as a multivariate linear regression.) My problem set has 5 predictors and one predictand.
I get an error when I try to fit a net with a hidden layer size of 0, i.e.
net=fitnet(0);
Error using fitnet (line 69)
Parameters.hiddenSizes contains a zero.
Second, if I try to call configure on the net I also get an error, telling me it cannot configure 'net' since it is a structure.
In short, how can I create a net object with no hidden layer, so that I can train and test it on a set of predictor/predictand pairs, just as I would with a net that has a specified number of hidden nodes?
My version of Matlab is R2012a.
Thank you all for your help.

Take a look at the linearlayer function; it creates a very simple network of just one linear layer.
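For a concrete starting point, here is a minimal sketch, assuming the Neural Network Toolbox functions linearlayer, configure and newlind (all present in R2012a) and using random placeholder data shaped like the 5-predictor / 1-predictand problem above:
% A net with no hidden layer is just a single linear layer.
X = rand(5, 100);            % 5 predictors, 100 samples (placeholder data)
T = rand(1, 100);            % 1 predictand
net = linearlayer;           % single layer with a purelin transfer function
net = configure(net, X, T);  % size it to the data: 5 inputs -> 1 output
net = train(net, X, T);
Y   = sim(net, X);           % predictions
% Alternatively, newlind designs the same linear layer directly by
% least squares, which is exactly a multivariate linear regression.
net2 = newlind(X, T);
Y2   = sim(net2, X);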

Related

Layer in keras like regressionlayer in matlab?

I am trying to write code in Keras based on the Matlab model from the example here: https://in.mathworks.com/help/deeplearning/examples/denoise-speech-using-deep-learning-networks.html
They define a layer at the end called regressionLayer. I want to know what to use corresponding to this in Keras or PyTorch.
I have simply added a sigmoid activation in Keras rather than this regressionLayer, but I doubt this is correct because I don't seem to get the desired output, and this seems to be one of the reasons.
model.add(Conv2D(1, (129, 1), strides=(1, 100), padding='same',
                 input_shape=(129, 8, 18), activation='sigmoid'))
In Matlab the regression layer just computes a mean-squared-error loss. Treating losses as layers is the way Caffe works, but not the way Keras works, so the equivalent is not a layer; you just set the loss:
model.compile(loss='mse', optimizer=...)
Note that you should not include an accuracy metric when doing regression, as accuracy is a classification-only metric.

fine-tuning with VGG on caffe

I'm replicating the steps in
http://caffe.berkeleyvision.org/gathered/examples/finetune_flickr_style.html
I want to change the network to the VGG model, which can be obtained at
http://www.robots.ox.ac.uk/~vgg/software/very_deep/caffe/VGG_ILSVRC_16_layers.caffemodel
Does it suffice to simply substitute the model parameter as follows?
./build/tools/caffe train -solver models/finetune_flickr_style/solver.prototxt -weights VGG_ILSVRC_16_layers.caffemodel -gpu 0
Or do I need to adjust learning rates and iterations, i.e. does it come with separate prototxt files?
There needs to be a 1-1 correspondence between the weights of the network you want to train and the weights you use for initializing/fine-tuning. The architectures of the old and new models have to match.
VGG-16 has a different architecture than the model described by models/finetune_flickr_style/train_val.prototxt (FlickrStyleCaffeNet). This is the network that the solver will try to optimize. Even if it doesn't crash, the weights you've loaded don't have any meaning in the new network.
The VGG-16 network is described in the deploy.prototxt file on this page in Caffe's Model Zoo.

Non-linear regression using custom neural network in MatLab

I am very new to Matlab. I have been given a task of modelling non-linear regression using a neural network in Matlab.
I need to create a two-layer neural network where:
The first layer has N neurons with a sigmoid activation function.
The second layer has one neuron with a linear activation function.
Here is how I implemented the network:
net = network(N, 2);
net.layers{1}.transferFcn = 'logsig';
net.layers{1}.size = N
net.layers{2}.size = 1;
Is this implementation correct? How should I assign the linear activation function to the second layer?
A quick reading of the Matlab help on the nntransfer function gives you the list of all possible transfer functions you can use. In your case I think you should use purelin (pure linear); poslin (positive linear) clips negative values, so it is not a truly linear activation.
When you have such questions, the best way is actually to 'ask' Matlab which possibilities you have.
In this case, I just typed net.layers{2} in the Matlab console window. This displays the list of the parameters of the 2nd layer. Then you just click on the TransferFcn link, and the Matlab help with the possible options for this parameter value opens automatically. This works for any parameter of your neural network ;)
You didn't set the transfer function for the second layer:
net.layers{2}.transferFcn = 'purelin';
The rest is OK.
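For comparison, here is a minimal sketch of the same architecture built with feedforwardnet instead of the low-level network call; N and the toy data below are placeholders, not values from the question:
N = 10;                                 % placeholder hidden layer size
net = feedforwardnet(N);                % creates a two-layer network
net.layers{1}.transferFcn = 'logsig';   % sigmoid hidden layer
net.layers{2}.transferFcn = 'purelin';  % linear output layer (the default)
X = rand(3, 200);                       % toy predictors
T = sum(X .^ 2, 1);                     % toy non-linear target
net = train(net, X, T);                 % train configures the layer sizes automatically
Y = sim(net, X);                        % predictions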

Perceptron with sigmoid stuck in local Minimum (WEKA)

I know that you usually don't have local minima in the error surface of a perceptron (no hidden layers) with a linear output. But is it possible to get stuck in a local minimum with a perceptron using a sigmoid function, since it is not linear?
I'm using functions.MultilayerPerceptron in WEKA (which uses a sigmoid activation function and backpropagation) with no hidden layers. I train it on a linearly separable dataset with 4 different classes. When I change the seed for the random generator (used for the initial weights of the nodes), most of the time it classifies only 60% correctly (it doesn't fully learn the target concept). But I found a specific seed where it classifies 90% correctly (which is the optimum). I have already played with momentum, training time and learning rate, but nothing changes. It seems like it gets stuck in a local minimum... or what else could be the explanation?
I'm thankful for any help.
The sigmoid activation function changes nothing; this is still a linear model, so there are no local optima. The only reason for the wrong behavior is some weird stopping criterion and/or errors in data processing or in the method's implementation.

Neural networks: classification using Encog

I'm trying to get started using neural networks for a classification problem. I chose to use the Encog 3.x library as I'm working on the JVM (in Scala). Please let me know if this problem is better handled by another library.
I've been using resilient backpropagation. I have 1 hidden layer, and e.g. 3 output neurons, one for each of the 3 target categories. So ideal outputs are either 1/0/0, 0/1/0 or 0/0/1. Now, the problem is that the training tries to minimize the error, e.g. turn 0.6/0.2/0.2 into 0.8/0.1/0.1 if the ideal output is 1/0/0. But since I'm picking the highest value as the predicted category, this doesn't matter for me, and I'd want the training to spend more effort in actually reducing the number of wrong predictions.
So I learnt that I should use a softmax function as the output (although it is unclear to me whether this becomes a 4th layer or I should just replace the activation function of the 3rd layer with softmax), and then have the training reduce the cross entropy. Now I think that this cross entropy needs to be calculated either over the entire network or over the entire output layer, but the ErrorFunction that one can customize calculates the error on a neuron-by-neuron basis (it reads arrays of ideal and actual outputs and writes an array of error values). So how does one actually do cross-entropy minimization using Encog (or which other JVM-based library should I choose)?
I'm also working with Encog, but in Java, though I don't think that makes a real difference. I have a similar problem, and as far as I know you have to write your own function that minimizes cross entropy.
And as I understand it, softmax should just replace the activation of your 3rd layer.