ANN Show Different Output Each Run on Matlab [duplicate] - matlab

This question already has an answer here:
How to set the same initial seed random numbers in Matlab?
(1 answer)
Closed 6 years ago.
I run an ANN in MATLAB, and its output is different every time I run it, even though I use the same data and the same ANN structure. How can I overcome this problem?
clear;
clc;
load ('C:\USers\ARMA\Desktop\DATA.txt');
data=DATA;
N=length(data);
DT=data;
X=DT(1:N,1:2);
Y=DT(1:N,3);
H=3;
net=newff(minmax(X),[H,1],{'logsig','purelin'},'traingdx');
net=init(net);
net.trainparam.lr=0.9;   % note: 'lr' (learning rate), not 'Ir'
net.trainparam.mc=0.1;
net.trainparam.epochs=10000;
net.trainparam.goal=0.001;
net.trainparam.show=1000;
[net,tr]=train(net,X,Y);
plotperform(tr)

The ANN toolbox initializes the weights and biases with random values, so the results are sensitive to them.
You need to fix the random seed before training to get reproducible results.
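A minimal sketch of that fix, assuming the poster's script above (the seed value 42 is arbitrary): seed MATLAB's global random number generator before the network is created and initialized, so `init` draws the same initial weights and biases on every run.

```matlab
% Fix the random seed BEFORE creating/initializing the network, so the
% random initial weights and biases are identical on every run.
rng(42);   % any fixed seed works; 42 is arbitrary

net = newff(minmax(X), [H,1], {'logsig','purelin'}, 'traingdx');
net = init(net);   % now draws the same initial weights each time
```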


explain backpropagation algorithm bishop codes [duplicate]

This question already has answers here:
How does a back-propagation training algorithm work?
(4 answers)
Closed 6 years ago.
I've recently completed Professor Ng's Machine Learning course on Coursera, but I have some problems understanding the backpropagation algorithm, so I tried to read Bishop's code for backpropagation using the sigmoid function. I searched and found clean code that tries to explain what backpropagation does, but I still have problems understanding it.
Can anyone explain to me what backpropagation really does, and also explain the code?
Here is the code that I found on GitHub and mentioned before.
You have an error of the network, and the first step of backpropagation is to compute each neuron's portion of the blame for that error. The goal is to express the error as a function of the weights (the parameters you can change), so the backprop equations are the partial derivatives of the error with respect to the weights.
First step: error signal = (desired result - output of output neuron) x activation'(x)
where x is the input of the output neuron and activation' is the derivative of the activation function. That is the portion of blame for the output neuron.
The next step is to compute the portion of blame for the hidden units. The first part of this step is a sum over the error signals of the next layer, each multiplied by the weight connecting the hidden unit to that next-layer unit; the rest is again the derivative of the activation function:
error signal = sum(next-layer error signal x weight) x activation'(x)
The final step is the adaptation of the weights:
delta_wij = learning_rate x errorsignal_i x output_of_neuron_j
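The three steps above can be sketched in MATLAB. This is an illustrative toy example (not the Bishop code the question refers to), assuming one logistic-sigmoid hidden layer, a linear output neuron, and arbitrary small sizes:

```matlab
% One gradient step of backpropagation for a tiny 2-3-1 network:
% logistic-sigmoid hidden layer, linear output.
sigmoid  = @(z) 1 ./ (1 + exp(-z));
dsigmoid = @(a) a .* (1 - a);        % derivative, written in terms of the activation

x  = [0.5; -0.2];                    % one input sample (2 features)
t  = 1.0;                            % desired result
W1 = randn(3, 2); b1 = randn(3, 1);  % input  -> hidden (3 hidden units)
W2 = randn(1, 3); b2 = randn(1, 1);  % hidden -> output
eta = 0.1;                           % learning rate

% Forward pass
h = sigmoid(W1 * x + b1);            % hidden activations
y = W2 * h + b2;                     % network output

% Step 1: error signal of the output neuron (linear output, derivative = 1)
delta_out = t - y;

% Step 2: error signals of the hidden units
% (sum of next-layer error signals times connecting weights, times activation')
delta_hid = (W2' * delta_out) .* dsigmoid(h);

% Step 3: weight adaptation, delta_wij = eta * errorsignal_i * output_j
W2 = W2 + eta * delta_out * h';
b2 = b2 + eta * delta_out;
W1 = W1 + eta * delta_hid * x';
b1 = b1 + eta * delta_hid;
```

Repeating the forward pass and these three steps over all training samples, for many epochs, is the whole training loop.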

How am I to predict and extend data that I have acquired as a 1D vector in MATLAB? [duplicate]

This question already has answers here:
How can I extrapolate to higher values in Matlab? [closed]
(2 answers)
Closed 7 years ago.
Currently I have a 1D vector which, when plotted, gives the blue line in the plot below. Now I want to extend this line based on the data values of the vector I already have (as shown by the red line). I am aware that I can apply simple machine learning to this problem, but is there a built-in MATLAB library function which can also achieve this?
What exactly would you call this problem of extending the data? It's not interpolation, and I'm not sure extrapolation is the right concept either. Do not hesitate to ask any questions that would clarify this problem.
Extrapolation is what you're looking for. Since the final part of the curve you want to estimate is rather linear, you can use linear extrapolation.
Let's say our function is f(i)=i for i=1,...,50, with some random noise added.
signal=(1:50)+rand(1,50);
The original signal looks like
Now let's say we want to estimate the following 10 samples, that is for i=51,...,60. By means of linear extrapolation, we can append these 10 samples by the following loop:
for i=51:60
    % the ratio below simplifies to 2, i.e. signal(i) = 2*signal(i-1) - signal(i-2)
    signal(i)=signal(i-2)+((i-(i-2))/((i-1)-(i-2)))*(signal(i-1)-signal(i-2));
end
The original formula has been taken from here, in which x_star=i, x_{k-1}=i-2, x_{k}=i-1, y(x_star) is the value we're estimating, y_{k-1}=signal(i-2), y_{k}=signal(i-1). Obviously you should re-adapt this formula to the function you're using. Basically you're using the previous two values to evaluate the new one.
Now that these newly estimated 10 samples have been appended, signal has the form
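As an alternative to the hand-written loop, MATLAB's built-in interp1 can extrapolate linearly outside the sampled range when called with the 'extrap' option. A short sketch, assuming the same noisy signal as above:

```matlab
% Linear extrapolation of 10 extra samples with interp1 instead of a loop.
signal   = (1:50) + rand(1, 50);                        % noisy linear signal
extended = interp1(1:50, signal, 1:60, 'linear', 'extrap');
plot(1:60, extended);
```

With 'linear' interpolation, the extrapolated points continue the slope of the last measured interval, which gives the same straight-line extension as the loop.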

Perform FFT periodic and aperiodic correlation on 2D arrays with Matlab [duplicate]

This question already has answers here:
Cross-correlation in matlab without using the inbuilt function?
(3 answers)
Closed 7 years ago.
I am writing a Matlab script to perform cross-correlation on a 2D matrix. Does anyone know how to do this with the fft2 function in Matlab?
There seems to be quite a lot of information/discussion about doing this: try here, here, and here.
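The core idea from those discussions can be sketched directly. This is a hedged example, assuming real-valued input matrices A and B of equal size: multiplying one spectrum by the conjugate of the other and inverting gives periodic (circular) correlation, and zero-padding to the full output size gives the aperiodic (linear) correlation:

```matlab
A = rand(8, 8);
B = rand(8, 8);

% Periodic (circular) cross-correlation: no zero-padding
Cper = real(ifft2(fft2(A) .* conj(fft2(B))));

% Aperiodic (linear) cross-correlation: zero-pad to the full output size
[m, n] = size(A);
M = 2*m - 1;  N = 2*n - 1;
Clin = real(ifft2(fft2(A, M, N) .* conj(fft2(B, M, N))));
```

Note that the lag axes come out wrapped (zero lag in the corner); use fftshift or circshift if you want zero lag centered, e.g. when comparing against xcorr2.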

Can I use neural network in this case? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 6 years ago.
Improve this question
Can I use neural networks or SVM etc. if my output data has 27,680 values, all of which are zero except a single one?
I mean, is it right to do this?
when I use SVM I have this error:
Error using seqminopt>seqminoptImpl (line 198)
No convergence achieved within maximum number of iterations.
SVMs are usually binary classifiers. Basically that means they separate your data points into two groups, signalling whether a data point does or doesn't belong to a class. Common strategies for solving multi-class problems with SVMs are one-vs-rest and one-vs-one. In the case of one-vs-rest, you would train one classifier per class, which would be 27,680 for you. In the case of one-vs-one, you would train K(K-1)/2 classifiers, so in your case around 383 million. As you can see, both numbers are rather high, so I would be pessimistic about your chances of successfully solving the problem with SVMs.
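For what it's worth, MATLAB's Statistics and Machine Learning Toolbox wraps both strategies in fitcecoc, so you don't have to code the classifier combination yourself. A minimal sketch, assuming X is an observations-by-features matrix and Y a label vector (both placeholders for your data):

```matlab
% Multi-class SVM via error-correcting output codes.
% 'onevsall' trains one binary SVM per class; the default 'onevsone'
% trains one per class pair.
Mdl    = fitcecoc(X, Y, 'Coding', 'onevsall');
labels = predict(Mdl, X);
```

The combinatorial cost argument above still applies: with 27,680 classes, even one-vs-rest means 27,680 binary SVMs.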
Nevertheless, you can try to increase the maximum number of iterations as described in another Stack Overflow thread. Maybe it still works.
You can use Neural Nets for your task and a 1-of-K output is nothing unusual. However, even with only one hidden layer of 500 neurons (and using the input and output vector sizes mentioned in your comment) you will have (27680*2*500) + (500*27680) = 41,520,000 weights in your network. So I would expect rather long training times (although a Google employee would probably laugh about these numbers). You will also most likely need a lot of training examples, unless your input is really simple.
As an alternative you might look into Decision Trees/Random Forests, Naive Bayes or kNN.

Similarity of data set to a distribution in MATLAB [duplicate]

This question already has answers here:
Test if a data distribution follows a Gaussian distribution in MATLAB
(3 answers)
Closed 8 years ago.
I have a data set and I want to find which distribution fits it best. How can I test different distributions against this data set? Is there any code, or an automatic way, to do that in MATLAB?
Thanks.
I think what you're looking for is called the Bayesian Information Criterion, or BIC. Check it out on Wikipedia. Then pick several distributions, calculate the BIC for each distribution with your data, and finally see which one has the best (lowest) BIC.
Although I make this out to be a simple problem, it actually isn't. For many distributions calculating the BIC requires numerical optimization over the parameters of the distribution. However for some distributions Matlab can calculate the Maximum Likelihood Estimator (MLE) for you automatically, which is part of what you'll need for the BIC.
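For the distributions Matlab handles, this can be sketched with fitdist and negloglik from the Statistics Toolbox. The data vector x and the candidate list below are placeholders for your own data:

```matlab
% Fit several candidate distributions by maximum likelihood and compare BIC.
x     = gamrnd(2, 3, 1000, 1);             % example data (replace with yours)
names = {'Normal', 'Lognormal', 'Gamma'};
n     = numel(x);
for k = 1:numel(names)
    pd  = fitdist(x, names{k});            % MLE fit
    nll = negloglik(pd);                   % negative log-likelihood
    p   = pd.NumParameters;                % number of fitted parameters
    bic = p * log(n) + 2 * nll;            % BIC = p*ln(n) - 2*ln(L)
    fprintf('%-10s BIC = %.1f\n', names{k}, bic);
end
```

The distribution with the lowest printed BIC is the preferred fit under this criterion.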