Calculate weighted average in PyTorch - neural-network

How can I calculate a weighted average in PyTorch for a neural network?
I need to code c
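The question is cut off above, but assuming it is simply asking how to compute a weighted average of tensor values in PyTorch (the tensor names and weight values below are made up for illustration), a minimal sketch would be:

```python
import torch

# Made-up example values: per-sample scores and their weights
values = torch.tensor([0.2, 0.5, 0.9])
weights = torch.tensor([1.0, 2.0, 3.0])

# Weighted average: sum(w_i * x_i) / sum(w_i)
weighted_avg = (weights * values).sum() / weights.sum()
print(weighted_avg)

# The same idea for a batch of feature vectors, averaging over the batch dimension
batch = torch.randn(4, 10)                      # 4 samples, 10 features
batch_weights = torch.tensor([0.1, 0.2, 0.3, 0.4])
weighted_mean = (batch * batch_weights.unsqueeze(1)).sum(dim=0) / batch_weights.sum()
```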

Related

How to calculate 95% confidence interval for AUC from confusion matrix?

From a classification model in the Weka software I get the sample size, the confusion matrix, and the AUC (area under the ROC curve).
How can I calculate the 95% confidence interval for the AUC?
I think you have everything you need, so use the following equation.
Note: N1 and N2 are the sample sizes of the two groups.
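The equation itself did not survive here (it was presumably an image). Since it only requires the AUC plus N1 and N2, it is most likely the Hanley & McNeil (1982) standard error formula; under that assumption, a sketch of the 95% CI computation is:

```python
import math

def auc_confidence_interval(auc, n1, n2, z=1.96):
    """95% CI for the AUC using the Hanley & McNeil (1982) standard error.

    auc -- estimated area under the ROC curve
    n1  -- sample size of group 1 (e.g. number of positive cases)
    n2  -- sample size of group 2 (e.g. number of negative cases)
    """
    q1 = auc / (2 - auc)
    q2 = 2 * auc ** 2 / (1 + auc)
    se = math.sqrt(
        (auc * (1 - auc)
         + (n1 - 1) * (q1 - auc ** 2)
         + (n2 - 1) * (q2 - auc ** 2)) / (n1 * n2)
    )
    return auc - z * se, auc + z * se

# Example: AUC = 0.85 with group sizes 60 and 140
print(auc_confidence_interval(0.85, 60, 140))
```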

Can I normalise subsets of training data for a neural network?

Say I have a training set with 50 vectors. I split this set into 5 sets each with 10 vectors and then I scale the vectors in each subset and normalise the subsets. Then I train my ANN with each vector from each subset.
After training is complete, I group my test set into subsets of 10 vectors each, scale the features of the vectors in each subset, normalise each subset, and then feed them to the neural network to attempt to classify them.
Is this the right approach? Is it right to scale and normalise each subset, each with its own minimum, maximum, mean and standard deviation?
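For concreteness, here is a small NumPy sketch of the per-subset scheme described in the question (the array shapes are made up); whether this is the right approach is exactly what is being asked, the code only illustrates the procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(50, 8))        # 50 training vectors with 8 features (made-up shape)

def normalise_subset(subset):
    """Standardise each feature using the subset's own mean and standard deviation,
    i.e. the per-subset scaling/normalisation described in the question."""
    mean = subset.mean(axis=0)
    std = subset.std(axis=0) + 1e-12    # guard against division by zero
    return (subset - mean) / std

# Split into 5 subsets of 10 vectors and normalise each one independently
subsets = [normalise_subset(s) for s in np.split(train, 5)]
```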

How to train ANN from PCA results?

I have performed PCA on all the images in my database, so I have the different vectors returned by PCA, such as the eigenfaces, the mean face, etc.
My question is: which vector should I use to train my NN in MATLAB? And how would I train the NN for 5 classes?
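The original question is about MATLAB, but to illustrate the general idea in Python (with made-up data and scikit-learn standing in for the NN toolbox): the vectors fed to the network are the PCA projection coefficients of each image, not the eigenfaces themselves, and the 5 classes are simply 5 distinct labels.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
images = rng.random((100, 64 * 64))      # 100 flattened face images (made-up data)
labels = rng.integers(0, 5, size=100)    # 5 classes, labelled 0..4

# Project the images onto the eigenfaces; the resulting coefficients
# are the feature vectors used to train the network.
pca = PCA(n_components=20)
features = pca.fit_transform(images)     # shape (100, 20)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
clf.fit(features, labels)

# Test images must be projected with the same PCA model before classification
test_features = pca.transform(rng.random((5, 64 * 64)))
print(clf.predict(test_features))
```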

Spectral clustering distance/similarity

All papers about spectral clustering use a similarity matrix as the input to the spectral clustering algorithm.
Is it also possible to use a pairwise distance matrix? I haven't seen any version of spectral clustering code that uses pairwise distances.
I am implementing spectral clustering in MATLAB, which has the function pdist, and the output of this function is a pairwise distance matrix.
A similarity (or affinity) matrix gives an idea of how close the data points are to each other; a distance matrix, on the other hand, measures how dissimilar they are. The easiest and most frequently used way to turn pairwise distances into a similarity matrix is to apply a Gaussian kernel to get the affinity measure.
For two points a and b, let D be their pairwise distance (e.g., obtained from pdist). The corresponding entry of the similarity matrix is then sim_ab = exp(-D/f), where f is a scaling factor.
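A minimal Python sketch of that idea (using scikit-learn rather than MATLAB, with the scaling factor heuristically set to the median distance, and the common exp(-D²/(2σ²)) variant of the Gaussian kernel described above): convert the pairwise distance matrix into an affinity matrix and pass it to spectral clustering as a precomputed affinity.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import SpectralClustering

rng = np.random.default_rng(0)
X = rng.random((30, 2))                  # toy data points

# Pairwise distances, converted to a square distance matrix
D = squareform(pdist(X))                 # shape (30, 30)

# Gaussian kernel turns distances into similarities/affinities
sigma = np.median(D)                     # one common heuristic for the scaling factor
S = np.exp(-D ** 2 / (2 * sigma ** 2))

# Spectral clustering on the precomputed affinity matrix
labels = SpectralClustering(n_clusters=3, affinity="precomputed",
                            random_state=0).fit_predict(S)
print(labels)
```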

Data in LoLiMot sub-spaces have linear distribution or gaussian?

I'm confused: does LoLiMoT approximate data by a sum of linear models or by a sum of Gaussian models? I see here on page 184 that LoLiMoT is a model that divides the input space into linear sub-spaces, but the structure of LoLiMoT is a sum of weighted Gaussian models. What I actually mean is: do the data in every subspace have a linear distribution or a Gaussian distribution?
Thanks
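As a sketch of how the two views fit together (a toy 1-D example with made-up parameters): each local model in LoLiMoT is linear, and the Gaussians act only as normalised validity functions that weight (blend) those local linear models, so the output is a Gaussian-weighted sum of linear models rather than a sum of Gaussian densities fitted to the data.

```python
import numpy as np

# Toy 1-D LoLiMoT with two local linear models (all parameters made up)
centres = np.array([0.25, 0.75])   # centres of the Gaussian validity functions
sigmas  = np.array([0.20, 0.20])   # widths of the validity functions
w0      = np.array([0.0, 1.0])     # offsets of the local linear models
w1      = np.array([2.0, -1.0])    # slopes of the local linear models

def lolimot_predict(x):
    """y(x) = sum_i Phi_i(x) * (w0_i + w1_i * x), with Phi_i the normalised
    Gaussian validity functions: the Gaussians only weight the local models,
    while each local model itself is linear."""
    phi = np.exp(-0.5 * ((x[:, None] - centres) / sigmas) ** 2)
    phi = phi / phi.sum(axis=1, keepdims=True)   # normalise the validity functions
    local = w0 + w1 * x[:, None]                 # outputs of the local linear models
    return (phi * local).sum(axis=1)

x = np.linspace(0.0, 1.0, 5)
print(lolimot_predict(x))
```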