What is an epoch in a neural network? [closed] - neural-network

What is an epoch in a neural network?
I want a definition of "epoch".
As I understand it, an epoch is when the weights are updated.
So how does it work?
Does it change the "training data (input data)"?
Does it change the "delta rule (activation functions)"?
http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html

This comes up in the context of training a neural network with gradient descent. Since we usually train NNs with stochastic or mini-batch gradient descent, not all of the training data is used at each iterative step.
Stochastic and mini-batch gradient descent use batch_size training examples at each iteration, so at some point you will have used all of the data for training and can start over from the beginning of the dataset.
Given that, one epoch is one complete pass through the whole training set; it takes multiple iterations of gradient descent updates to show all of the data to the NN, and then you start again.
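As a small illustration of the epoch/iteration distinction (the numbers here are made up, not from the question):

num_examples = 1000                                    # size of the whole training set
batch_size = 100                                       # examples used per gradient step
iterations_per_epoch = num_examples // batch_size      # 10 updates to see all data once
num_epochs = 5
total_iterations = iterations_per_epoch * num_epochs   # 50 weight updates in total
print(iterations_per_epoch, total_iterations)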

To put it really simply:
An epoch is the unit within which everything happens. Within one epoch you run forward propagation and back propagation: you activate the neurons, calculate the loss, take the partial derivatives of the loss function, and update the weights with the new values. When all of that is done, you start a new epoch, then another one, and so on. The number of epochs is not really important in itself; what matters is how your loss function, derivatives, etc. change. When you are happy with the results, you can stop iterating over epochs and take your model out :)
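Here is a minimal, runnable sketch of that loop for a toy linear model (my own illustration; the data and model are made up, but the structure - epochs, mini-batches, forward pass, gradients, weight update - is the one described above):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                     # toy inputs
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w = np.zeros(3)
lr, batch_size = 0.1, 10
for epoch in range(20):                           # one epoch = one pass over all 100 examples
    for i in range(0, len(X), batch_size):
        xb, yb = X[i:i + batch_size], y[i:i + batch_size]
        pred = xb @ w                             # forward pass
        grad = 2 * xb.T @ (pred - yb) / len(xb)   # partial derivatives of the MSE loss
        w -= lr * grad                            # weight update
    loss = np.mean((X @ w - y) ** 2)              # stop the epoch loop once you are happy with this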

An epoch is a single pass through the whole training dataset.
Traditional (batch) gradient descent computes the gradient of the loss function with respect to the parameters over the entire training set, for a given number of epochs.

Related

Which activation function should be used at intermediate Layers in a Time Series Prediction Task [closed]

I want to build a time series prediction model using LSTM.
Which activation function should be used in the intermediate layers?
Is a linear activation function good for the final (output) layer?
I am normalising my input data to the range (0, 1) and inverse-normalising after prediction.
Here is my model:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, Activation

# input_n, n_features and output_n are assumed to be defined earlier in the script
model = Sequential()
model.add(LSTM(32, input_shape=(input_n, n_features), return_sequences=True, activation='relu'))
model.add(LSTM(32, return_sequences=True, activation='relu'))  # input_shape is only needed on the first layer
model.add(Dense(output_n))
model.add(Activation("linear"))
model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
Here I have used 'relu' in the intermediate layers and a linear activation at the output layer.
Is this approach correct, or should I also try tanh and sigmoid in the intermediate layers?
What will happen if I don't use any activation function in the intermediate layers; will the LSTM take care of this?
The LSTM actually already has tanh and sigmoid activation functions for its internal gate calculations.
Word of warning: this is my subjective impression, which is mostly (but not completely) backed by scientific research.
I can verify that ReLU and its derivates (PReLU, Leaky ReLU, etc.) have produced the best results for me in the past.
Which of those implementations will produce the best results for you is probably best determined by trying them out, if you can afford to do so.
ReLU is pretty much the standard choice of activation function for deep learning models. It outputs max(0, x), i.e. it clips negative values to zero, and it adds non-linearity.
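To act on the "try them out" advice above, a rough sketch like the following works (assumptions: the tensorflow.keras API, and that X_train, y_train, X_val, y_val, input_n, n_features and output_n already exist; here the second LSTM returns only its final state so the Dense layer makes one prediction per sequence, which differs slightly from the model in the question):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

for act in ['relu', 'tanh', 'sigmoid']:
    model = Sequential([
        LSTM(32, input_shape=(input_n, n_features), return_sequences=True, activation=act),
        LSTM(32, activation=act),                 # last recurrent layer returns only the final state
        Dense(output_n, activation='linear'),     # linear output for regression
    ])
    model.compile(loss='mean_squared_error', optimizer='adam')
    hist = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=20, verbose=0)
    print(act, min(hist.history['val_loss']))     # pick the activation with the lowest validation loss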

How is the loss in an RNN/LSTM calculated?

I'm learning how LSTMs work by practicing with time series training data (the input is a list of features and the output is a scalar).
There is one thing I couldn't understand about calculating the loss for an RNN/LSTM:
How is the loss calculated? Is it calculated each time I give the NN a new input, or accumulated across all the given inputs and then backpropagated?
seed's answer is correct. However, in an LSTM, or any RNN architecture, the loss for each instance is added up across all time steps. In other words, you'll have (L_0 at t_0, L_1 at t_1, ..., L_T at t_T) for each sample in your input batch. Add up those losses separately for each instance in the batch, then average the per-instance losses to get the average loss for the current batch.
For more information please visit: https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-recurrent-neural-networks
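In numpy terms, that description boils down to something like this (a minimal sketch with made-up shapes, not code from the answer):

import numpy as np

batch_size, T = 4, 10
losses = np.random.rand(batch_size, T)   # loss of sample i at time step t (L_i at t)
per_sample_loss = losses.sum(axis=1)     # sum over the T time steps for each sample
batch_loss = per_sample_loss.mean()      # average over the batch -> value used for backprop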
The answer does not depend on the neural network model.
It depends on your choice of optimization method.
If you are using batch gradient descent, the loss is averaged over the whole training set. This is often impractical for neural networks, because the training set is too big to fit into RAM, and each optimization step takes a lot of time.
In stochastic gradient descent, the loss is calculated for each new input. The problem with this method is that it is noisy.
In mini-batch gradient descent, the loss is averaged over each new minibatch - a subsample of inputs of some small fixed size. Some variation of this method is typically used in practice.
So, the answer to your question depends on the minibatch size you choose.
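For concreteness, here is a tiny sketch of what "the loss is computed over" means in each variant (toy data, my own illustration):

import numpy as np

X, y = np.random.rand(1000, 5), np.random.rand(1000)
w = np.zeros(5)
mse = lambda xb, yb: np.mean((xb @ w - yb) ** 2)

batch_loss      = mse(X, y)            # batch GD: the whole training set per step
stochastic_loss = mse(X[:1], y[:1])    # SGD: a single example per step (noisy)
minibatch_loss  = mse(X[:32], y[:32])  # mini-batch GD: e.g. 32 examples per step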

Backpropagation Optimization: How do I use the derivatives for optimizing the weights and biases? [closed]

Given the derivative of the cost function with respect to the weights or biases of the neurons of a neural network, how do I adjust these neurons to minimize the cost function?
Do I just subtract the derivative multiplied by a constant from the individual weight and bias? If constants are involved, how do I know what a reasonable value to pick is?
You're right about how to perform the update; this is what is done in gradient descent in its various forms. Learning rates (the constant you are referring to) are generally very small, e.g. 1e-6 to 1e-8. There are numerous articles on the web covering both of these concepts.
In the interest of a direct answer, though: it is good to start out with a small learning rate (on the order suggested above) and check that the loss is decreasing (by plotting it). If the loss decreases, you can raise the learning rate a bit; I recommend raising it to 3x its current value. For example, if it is 1e-6, raise it to 3e-6 and check again that your loss is still decreasing. Keep doing this until the loss no longer decreases nicely. The learning-rate figure from Stanford's cs231n lecture series gives some nice intuition on how the learning rate affects the loss curve.
You want to raise the learning rate so that the model doesn't take as long to train. You don't want to raise it too much, because then it is possible to overshoot the local minimum you're descending towards and for the loss to increase (the yellow curve in that figure). This is an oversimplification, because the loss landscape of a neural network is very non-convex, but it is the general intuition.
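In code, the update the question asks about is just the following (a self-contained toy sketch of gradient descent on the cost w**2; in a real network the derivative comes from backpropagation and the learning rate is usually much smaller):

w = 5.0                           # toy parameter; cost(w) = w**2, so dcost/dw = 2*w
learning_rate = 0.1               # the "constant" being asked about
for step in range(100):
    grad = 2 * w                  # derivative of the cost with respect to w
    w = w - learning_rate * grad  # subtract derivative times learning rate
# w is now very close to 0, the minimum of the cost; if a plotted loss keeps
# decreasing nicely, try raising the learning rate by ~3x, as suggested above.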

Does "epoch" have different meanings with respect to HMMs and Neural Networks?

I am building a speech-to-text system with N sample sentences, using Hidden Markov Models for re-estimation. In the context of neural networks, I understand that the concept of an epoch refers to a complete training cycle. I assume this means "feeding the same data to the same network, whose weights and biases are updated and therefore different every time" - correct me if I am wrong.
Would the same logic work while performing re-estimation (i.e. training) of HMMs from the same sentences? In other words, if I have N sentences, can I repeat the input samples 10 times each to generate 10 * N samples? Does this mean I am performing 10 epochs on the HMMs? Furthermore, does this actually help obtain better results?
From this paper, I get the impression that epoch in the context of HMMs refers to a unit of time:
Counts represent a device-specific numeric quantity which is generated by an accelerometer for a specific time unit (epoch) (e.g. 1 to 60 sec).
If not a unit of time, an epoch at the very least sounds like something different. In the end, I would like to know:
What is an epoch in the context of HMMs?
How is it different from an epoch in neural networks?
Considering the definition of an epoch as a training cycle, would multiple epochs improve the re-estimation of HMMs?
What is an epoch in the context of HMMs?
Same as in neural networks: one round of processing the whole dataset.
How is it different from an epoch in neural networks?
There is no difference, except that the term "epoch" is not widely used for HMMs; people just call it an "iteration".
From this paper, I get the impression that epoch in the context of HMMs refers to a unit of time
"Epoch" in that paper is not related to the HMM context at all; it is a separate idea specific to that paper, and you should not generalize the term's usage from it.
Considering the definition of an epoch as a training cycle, would multiple epochs improve the re-estimation of HMMs?
It is not the case that more epochs always improve re-estimation, neither for neural networks nor for HMMs. Each epoch improves the accuracy up to a certain point; after that, overtraining sets in, the validation error starts to grow, and the training error keeps falling towards zero. There is an optimal number of iterations, usually depending on the model architecture. An HMM usually has fewer parameters and is less prone to overtraining, so extra epochs are not that harmful, but there is still a number of epochs you need for optimal results.
In speech recognition it is usually 6-7 iterations of the Baum-Welch algorithm. Fewer epochs give you a less accurate model; more epochs could lead to overtraining or simply not improve anything.
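If it helps to see the knob in code, here is a small sketch assuming the hmmlearn library (the data and parameter values are made up; only the idea that the number of re-estimation iterations is an explicit, bounded setting comes from the answer above):

import numpy as np
from hmmlearn import hmm   # assumption: hmmlearn is installed and used for the HMM

X = np.random.rand(300, 2)                                   # toy observations: 300 frames, 2 features
model = hmm.GaussianHMM(n_components=3, n_iter=7, tol=1e-2)  # n_iter caps the Baum-Welch iterations
model.fit(X)   # runs at most n_iter re-estimation iterations, stopping early when the
               # gain in log-likelihood per iteration drops below tol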

Single Neuron Neural Network - Types of Questions? [closed]

Can anybody think of a real(ish) world example of a problem that can be solved by a single neuron neural network? I'm trying to think of a trivial example to help introduce the concepts.
Using a single neuron for classification is basically logistic regression, as Gordon pointed out.
Logistic regression is the appropriate regression analysis to conduct when the dependent variable is dichotomous (binary). Like all regression analyses, the logistic regression is a predictive analysis. Logistic regression is used to describe data and to explain the relationship between one dependent binary variable and one or more metric (interval or ratio scale) independent variables. (statisticssolutions)
This is a good case to apply logistic regression:
Suppose that we are interested in the factors that influence whether a political candidate wins an election. The outcome (response) variable is binary (0/1); win or lose. The predictor variables of interest are the amount of money spent on the campaign, the amount of time spent campaigning negatively and whether or not the candidate is an incumbent. (ats)
For a single-neuron network, I find solving logic functions a good example. Assuming, say, a sigmoid neuron, you can demonstrate how the network solves the AND and OR functions, which are linearly separable, and how it fails to solve the XOR function, which is not.
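A runnable sketch of that demonstration (my own toy code, not from the answer): one sigmoid neuron trained with full-batch gradient descent on a cross-entropy loss learns AND and OR, but for XOR its outputs stay stuck around 0.5.

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = {'AND': np.array([0., 0., 0., 1.]),
           'OR':  np.array([0., 1., 1., 1.]),
           'XOR': np.array([0., 1., 1., 0.])}

for name, y in targets.items():
    w, b = np.zeros(2), 0.0
    for _ in range(5000):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # output of the single sigmoid neuron
        grad = p - y                         # gradient of cross-entropy w.r.t. the pre-activation
        w -= 0.5 * X.T @ grad                # gradient descent on the weights
        b -= 0.5 * grad.sum()                # ... and on the bias
    print(name, np.round(p, 2))              # AND/OR: close to 0/1; XOR: stuck near 0.5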