Are word2vec embeddings the same if I re-train on the same sentences? - neural-network

If I give the same sentences to a word2vec model and train it twice (with the same vector size, of course), do I obtain the same embeddings for the words?

There are several stochastic processes during word2vec training. First, the embeddings are randomly initialized; second, negative sampling is used to approximate the denominator in the softmax term. Only if those random processes start from the same seed will the vectors be exactly the same.
Otherwise, the training will converge to totally different vectors; however, the distances between the vectors will always be approximately the same.
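To make the first source of randomness concrete, here is a minimal, library-free sketch of word2vec-style random initialization (the function name and vocabulary are made up for illustration). Re-running with the same seed reproduces the vectors exactly; a different seed gives different ones:

```python
import random

def init_embeddings(vocab, dim, seed=None):
    # word2vec-style initialization: small random values for every word vector
    rng = random.Random(seed)
    return {w: [(rng.random() - 0.5) / dim for _ in range(dim)] for w in vocab}

vocab = ["the", "quick", "fox"]
a = init_embeddings(vocab, dim=4, seed=42)
b = init_embeddings(vocab, dim=4, seed=42)  # same seed: identical vectors
c = init_embeddings(vocab, dim=4, seed=7)   # different seed: different vectors
print(a == b, a == c)  # True False
```

In practice, with gensim's `Word2Vec` you would pass a fixed `seed=`; per the gensim documentation, full run-to-run determinism additionally requires single-threaded training (`workers=1`) and a fixed `PYTHONHASHSEED`.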

Related

Kalman filter: how do the measurement noise covariance matrix and the process noise help the filter work? Can someone explain intuitively?

How do the process noise covariance and the measurement noise covariance help the Kalman filter function better?
Can someone explain intuitively, without significant equations and math, please?
Well, it's difficult to explain mathematical things (like Kalman filters) without mathematics, but here's my attempt:
There are two parts to a Kalman filter: a time update part and a measurement part. In the time update part we estimate the state at the time of observation; in the measurement part we combine (via least squares) our 'predictions' (i.e. the estimate from the time update) with the measurements to get a new estimate of the state.
So far, no mention of noise. There are two sources of noise: one in the time update part (sometimes called process noise) and one in the measurement part (observation noise). In each case what we need is a measure of the 'size' of that noise, i.e. its covariance matrix. These covariances are used when we combine the predictions with the measurements. When we view our predictions as very uncertain (that is, they have a large covariance matrix), the combination will be closer to the measurements than to the predictions; on the other hand, when we view our predictions as very good (small covariance), the combination will be closer to the predictions than to the measurements.
So you could look upon the process and observation noise covariances as saying how much to trust (the parts of) the predictions and observations. Increasing, say, the variance of a particular component of the predictions is to say: trust this prediction less; while increasing the variance of a particular measurement is to say: trust this measurement less. This is mostly an analogy, but it can be made more precise. A simple case is when the covariance matrices are diagonal. In that case the cost, i.e. the contribution to what we are trying to minimise, of a difference between a measurement and the computed value is the square of that difference, divided by the observation's variance. So the higher an observation's variance, the lower the cost.
Note that out of the measurement part we also get a new state covariance matrix; this is used (along with the process noise and the dynamics) in the next time update when we compute the predicted state covariance.
I think the question of why the covariance is the appropriate measure of the size of the noise is rather a deep one, as is why least squares is the appropriate way to combine the predictions and the measurements. The shallow answer is that Kalman filtering and least squares have been found, over decades (centuries in the case of least squares), to work well in many application areas. In the case of Kalman filtering I find the derivation of it from hidden Markov models convincing (From Hidden Markov Models to Linear Dynamical Systems by T. Minka, though this is rather mathematical). In hidden Markov models we seek the (conditional) probability of the states given the measurements so far; Minka shows that if the measurements are linear functions of the states, the dynamics are linear, and all probability distributions are Gaussian, then we get the Kalman filter.
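The trust-weighting described above can be seen in a scalar (one-dimensional) Kalman step, sketched below under a deliberately simple constant-state model (all names are illustrative). With the same prediction and the same measurement, a small measurement variance pulls the new estimate toward the measurement, and a large one keeps it near the prediction:

```python
def kalman_step(x, p, z, q, r):
    """One scalar Kalman update for a constant-state model.

    x, p : prior state estimate and its variance
    z    : new measurement
    q    : process noise variance (added in the time update)
    r    : measurement noise variance
    """
    # Time update: state unchanged, uncertainty grows by the process noise
    p = p + q
    # Measurement update: blend prediction and measurement by their variances
    k = p / (p + r)          # Kalman gain: how much to trust the measurement
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Same prediction (x = 0) and measurement (z = 10), different measurement noise:
x_trust, _ = kalman_step(0.0, 1.0, 10.0, q=0.0, r=0.1)   # small r: estimate near z
x_doubt, _ = kalman_step(0.0, 1.0, 10.0, q=0.0, r=10.0)  # large r: estimate near x
print(round(x_trust, 2), round(x_doubt, 2))  # 9.09 0.91
```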

How to train large dataset for classification in MATLAB

I have a large features dataset of around 111 MB for classification, with 217000 data points where each point has 1760000 features. When used for training with an SVM in MATLAB, it takes a lot of time.
How can this data be processed in MATLAB?
It depends on what sort of SVM you are building.
As a rule of thumb, with such big feature sets you need to look at linear classifiers, such as an SVM with a linear kernel (or no kernel), or logistic regression with various regularizations, etc. In MATLAB, `fitclinear` is designed for exactly this high-dimensional case.
If you're training an SVM with a Gaussian kernel, the training algorithm has O(max(n,d) * min(n,d)^2) complexity, where n is the number of examples and d the number of features. In your case this ends up being O(d*n^2), which is quite big.
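Plugging the question's numbers into that bound shows just how big "quite big" is (a rough operation count, not a precise runtime estimate):

```python
n = 217_000        # number of examples
d = 1_760_000      # number of features
ops = d * n ** 2   # d > n, so O(max(n,d) * min(n,d)^2) = O(d * n^2)
print(f"{ops:.2e}")  # about 8.29e+16 elementary operations
```

At even a generous billion operations per second, that is on the order of years of compute, which is why a linear method is the practical choice here.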

Evaluating performance of Neural Network embeddings in kNN classifier

I am solving a classification problem. I train my unsupervised neural network on a set of entities (using the skip-gram architecture).
The way I evaluate is to search for the k nearest neighbours of each point in the validation data among the training data. I take a weighted sum (weights based on distance) of the labels of the nearest neighbours and use that as the score for each point of the validation data.
Observation: as I increase the number of epochs (model 1: 600 epochs, model 2: 1400 epochs, model 3: 2000 epochs), my AUC improves at smaller values of k but saturates at similar values.
What could be a possible explanation of this behaviour?
[Reposted from CrossValidated]
To cross-check whether imbalanced classes are an issue, try fitting an SVM model. If that gives better classification (possible if your ANN is not very deep), it may be concluded that the classes should be balanced first.
Also, try some kernel functions to check whether such a transformation makes the data linearly separable.
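For reference, the distance-weighted kNN scoring described in the question can be sketched as follows (the helper name and the toy 2-D "embeddings" are made up; real embeddings would come from the trained network):

```python
import math

def weighted_knn_score(query, train_points, train_labels, k):
    """Distance-weighted kNN score: weight each of the k nearest
    neighbours' labels by 1/distance, then normalise."""
    dists = sorted((math.dist(query, p), y)
                   for p, y in zip(train_points, train_labels))
    neighbours = dists[:k]
    eps = 1e-12  # avoid division by zero on exact matches
    weights = [1.0 / (d + eps) for d, _ in neighbours]
    score = sum(w * y for w, (_, y) in zip(weights, neighbours)) / sum(weights)
    return score  # in [0, 1] for binary labels; feed these scores into AUC

# toy embeddings: two clusters of 2-D points with binary labels
train = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.1)]
labels = [0, 0, 1, 1]
print(weighted_knn_score((0.05, 0.0), train, labels, k=3))  # close to 0
```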

Interpreting neurons in the neural network

I have come up with a solution for a classification problem using neural networks, and I have obtained the weight vectors as well. The data is 5-dimensional and there are 5 neurons in the hidden layer.
Suppose neuron 1 has input weights w11, w12, ..., w15.
I have to explain the physical interpretation of these weights: as a combination, what do they represent in the problem? Does any such interpretation exist, or does a neuron have no specific interpretation as such?
A single neuron will not give you any interpretation, but looking at a combination of a couple of neurons can tell you which pattern in your data is captured by that set of neurons (assuming your data is complicated enough to have multiple patterns, yet not so complicated that there are too many connections in the network).
The weights corresponding to neuron 1, in your case w11...w15, are the weights that map the 5 input features to that neuron. The weights quantify the extent to which each feature affects its neuron (which, in turn, represents some higher-level feature). Each neuron's value is a weighted sum of its inputs, usually passed through an activation function.
Mathematically, the hidden-layer values are computed by matrix multiplication of the feature matrix and the weight matrix. The loss function, in the simplest case the sum of the squares of the differences between the network's output and the actual labels, is then minimised by adjusting the weight matrix's values via stochastic gradient descent.
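A single neuron's value is easy to compute by hand, which helps when reasoning about the weights. A minimal sketch (the weight values and inputs here are hypothetical, not from the question's trained network):

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of inputs passed through a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical weights w11..w15 for hidden neuron 1 on a 5-D input
w1 = [0.8, -0.3, 0.0, 0.0, 0.5]
x = [1.0, 2.0, 0.5, 0.1, 1.0]
print(round(neuron_output(x, w1), 3))  # 0.668
```

With weights like these, features 3 and 4 are ignored entirely, feature 1 excites the neuron and feature 2 inhibits it, which is about as much "interpretation" as raw weight inspection gives. Note that comparing weight magnitudes across features is only meaningful if the input features are on comparable scales.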

ANN training: what happens in each epoch?

I have a question about the training of ANNs.
I want to ask how the training is done for a set of input samples. Is there some relation between the size of the training set and the number of epochs needed, or are they totally independent?
For example, if my ANN has 4 inputs and 2000 training samples, I get an input matrix of size 4x2000. For each epoch of training, is the whole matrix used, or is just one sample (one column of the training matrix) used per epoch?
In each epoch, the network passes over the training data and the weight values of all neurons, all the nodes, are updated. Usually, the more neurons, layers, and data you have, the more epochs you need to reach good weight values, but there is no equation relating the number of epochs to the number of neurons.
For the training, the backpropagation algorithm is usually used (check Wikipedia for a great example), which computes the update for each weight. The more epochs, the more closely your NN fits the training data. Usually for the training you set two stopping criteria: a maximum number of epochs and a target accuracy, and when either one is reached you stop iterating.
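The epoch/sample distinction and the two stopping criteria can be sketched with a single linear neuron trained by stochastic gradient updates (all names are illustrative; a real ANN would use backpropagation through its layers). Note that each epoch passes over all samples, updating the weights after every sample:

```python
def train(samples, weights, lr, max_epochs, tol):
    """Minimal epoch loop: one epoch = one pass over ALL samples,
    with a weight update after each individual sample."""
    for epoch in range(max_epochs):
        total_err = 0.0
        for x, target in samples:
            pred = sum(w * xi for w, xi in zip(weights, x))
            err = target - pred
            total_err += err * err
            # gradient step on each weight for this one sample
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
        if total_err / len(samples) < tol:  # accuracy criterion reached
            break                           # ...before hitting max_epochs
    return weights, epoch + 1

# toy data: learn y = 2*x1 + 1*x2 (3 samples, 2 inputs)
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), 1.0), ((1.0, 1.0), 3.0)]
w, epochs_used = train(data, [0.0, 0.0], lr=0.1, max_epochs=2000, tol=1e-6)
print([round(v, 2) for v in w])  # weights approach [2.0, 1.0]
```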