Measuring "single strongest peak" in a distribution - cluster-analysis

I'd like to automatically detect whether data have a very strongly discernible peak, without assuming any particular underlying distribution. The data can otherwise be quite noisy, or there might be several 'false' peaks. Here are a few examples of how I'd expect a good measure to behave, such that higher is better:
Multimodal: measure scores low
Flat: measure scores low
Jagged with no real high point: measure scores low
A well-defined peak, regardless of tail thickness or other considerations: measure scores high
Could Density Peak Clustering be a solution, particularly HDBSCAN? Or is there another clustering algorithm that's computationally faster if dedicated to finding a single peak of values?
I've also thought that this might be more of a pattern recognition problem, potentially making a neural network useful.
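One non-clustering baseline worth trying before anything heavier: estimate a density and score how much the tallest peak dominates the rest. Below is a minimal sketch of that idea using SciPy's gaussian_kde and find_peaks; the grid size and the prominence-ratio score are ad hoc choices, not an established measure, and any threshold would need tuning on real data. Hartigan's dip test (e.g., the diptest package on PyPI) is a more formal test of unimodality that could complement it.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import gaussian_kde

def peak_strength(samples, grid_size=512):
    """Score in (0, 1]: near 1 when one peak dominates, lower when
    the density is flat, jagged, or multimodal."""
    grid = np.linspace(samples.min(), samples.max(), grid_size)
    density = gaussian_kde(samples)(grid)
    # Prominence = how far a peak rises above its surroundings.
    peaks, props = find_peaks(density, prominence=0)
    if len(peaks) == 0:
        return 0.0
    prominences = np.sort(props["prominences"])[::-1]
    # Fraction of total prominence captured by the single strongest peak.
    return float(prominences[0] / prominences.sum())

rng = np.random.default_rng(0)
unimodal = rng.normal(0, 1, 2000)
bimodal = np.concatenate([rng.normal(-3, 1, 1000), rng.normal(3, 1, 1000)])
print(peak_strength(unimodal))  # close to 1
print(peak_strength(bimodal))   # roughly 0.5 for two equal peaks
```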

Related

Mutate weights and biases in a neural network through genetic algorithm

I have a genetic algorithm evolving a population of neural networks.
Until now I have mutated the weights and biases using random.randn (Python), which draws a random value from a normal distribution with mean = 0.
It works "well" and I managed to complete my project with it, but wouldn't it be better to use a uniform distribution over a given interval?
My intuition is that it would lead to more variety in my networks.
I don't think this question has a simple answer. With a normal distribution, numbers around the mean have a higher chance of being "selected" by your number generator; a uniform distribution gives almost equal chance to all numbers in the interval. That much is clear, but whether equal chances lead to better results can, in my view, only be settled empirically. So I suggest you run experiments with both the normal and the uniform distribution and judge by the results.
About variety: I assume you create a random vector that represents the weights, and at the mutation stage you add a random number to some of its elements. With a zero-mean normal distribution that number will most likely lie in a small interval around zero, so with high probability the mutation changes each element only a little; you get small improvements to the vector, and occasionally something big shows up. With a uniform distribution the changes are more random, which leads to more different individuals. Will those individuals be better? I don't know, but I offer you another view: I see genetic algorithms as an analogy to the theory of evolution, and from that point of view cumulative small improvements, with a small probability of some big change, seem more appropriate. Consider what happens with a uniform distribution when a child has low fitness because of a big change: it will not be selected when the new generation is created, and you may wait a long time for the one tiny improvement that makes your network perform well.
Maybe one more thing: your experiments may show that the uniform (or the normal) distribution is better, but such a result may hold only for your current problem, not in general.
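For concreteness, here is a minimal sketch of the two mutation schemes being compared, assuming the genome is a NumPy vector of weights; the mutation rate and scale/bound parameters are illustrative values, not recommendations.

```python
import numpy as np

rng = np.random.default_rng()

def mutate_gaussian(weights, rate=0.1, scale=0.5):
    """Add zero-mean Gaussian noise to a random subset of weights:
    mostly small nudges, with the occasional large jump from the tails."""
    mask = rng.random(weights.shape) < rate
    return weights + mask * rng.normal(0.0, scale, weights.shape)

def mutate_uniform(weights, rate=0.1, bound=0.5):
    """Add noise drawn uniformly from [-bound, bound] to a random subset:
    every step size in the interval is equally likely."""
    mask = rng.random(weights.shape) < rate
    return weights + mask * rng.uniform(-bound, bound, weights.shape)

genome = rng.normal(0.0, 1.0, size=20)  # a flattened weight vector
child_a = mutate_gaussian(genome)
child_b = mutate_uniform(genome)
```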

Encog - Using Hybrid Neural Networks

How is using simulated annealing in conjunction with a feed-forward neural network different from simply resetting the weights (and placing the hidden layer into a new error valley) when a local minimum is reached? Is simulated annealing used by the FFNN as a more systematic way of moving the weights around to find a global minimum, so that only one iteration is performed each time the validation error begins to increase relative to the training error... slowly moving the current position across the error function? In that case, the simulated annealing is independent of the feed-forward network, and the feed-forward network depends on the simulated annealing output. If not, and the simulated annealing depends directly on results from the FFNN, I don't see how the simulated annealing trainer would receive this information in terms of how to update its own weights (if that makes sense). One of the examples mentions a cycle (multiple iterations), which doesn't fit my first assumption.
I have looked at different examples where network.fromArray() and network.toArray() are used, but I only see network.encodeToArray() and network.decodeFromArray(). What is the most current way (v3.2) to transfer weights from one type of network to another? Is it the same when using genetic algorithms, etc.?
Neural network training algorithms, such as simulated annealing, are essentially searches. The weights of the neural network are essentially vector coordinates that specify a location in a high-dimensional space.
Consider hill-climbing, possibly the simplest training algorithm. You adjust one weight, thus moving in one dimension, and see if it improves your score. If the score improves, then great: stay there and try a different dimension next iteration. If the score does NOT improve, retreat and try a different dimension next time. Think of a hiker looking at every point they can reach in one step and choosing the step that lowers their altitude the most (for a neural network, altitude is error, and we are minimizing it). If no step leads downward, you are standing at the bottom of a valley and you're stuck. This is a local minimum.
Simulated annealing adds one critical component to hill-climbing: we might move to a worse location (it is not greedy). The probability that we will accept a move to a worse location is determined by the decreasing temperature.
If you look inside of the NeuralSimulatedAnnealing classes you will see calls to NetworkCODEC.NetworkToArray() and NetworkCODEC.ArrayToNetwork(). These are how the weight vector is directly updated.
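For illustration, here is a minimal sketch of the annealing loop described above, in Python rather than Encog's Java; the score function, temperature schedule, and step size are placeholder assumptions, not Encog's actual implementation.

```python
import math
import random

def anneal(weights, score, steps=10_000, t_start=10.0, t_end=0.01, step=0.1):
    """Minimize score(weights) by simulated annealing over the weight vector."""
    current, current_score = list(weights), score(weights)
    for i in range(steps):
        # Geometric cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / steps)
        # Like hill-climbing, propose a move along one random dimension...
        candidate = list(current)
        candidate[random.randrange(len(candidate))] += random.uniform(-step, step)
        delta = score(candidate) - current_score
        # ...but, unlike hill-climbing, sometimes accept a WORSE position,
        # with probability exp(-delta / t) that shrinks as t falls.
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_score = candidate, current_score + delta
    return current

# Toy usage: recover weights that minimize a quadratic error surface.
target = [1.0, -2.0, 0.5]
best = anneal([0.0, 0.0, 0.0],
              lambda w: sum((a - b) ** 2 for a, b in zip(w, target)))
```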

Best way to extract neuronal spike times from a noisy signal / voltage measurement

I'm a neuroscientist, and not a very good one. My colleague has kindly provided me with noisy voltage measurements of the PY neuron of the Stomatogastric Ganglion of the lobster.
The activity of this neuron is characterised by slow depolarised plateaux with fast spikes on top (a burst).
Both idealised and noisy versions are presented here for you to peruse at your leisure.
It's my job to extract the spike times from the noisy signal but this is so far beyond my experience level I have no idea where to begin. Fortunately, I am a total ninja at Matlab.
Could someone kindly provide me with the name of the procedure, filter, or smoothing function best suited to this task, or even the appropriate forum in which to ask such an asinine question?
Presumably, it needs to increase the signal-to-noise ratio? The problem here seems to be telling noise apart from a bona fide spike, as the margin between the two is quite small.
UPDATE: 02/07/2013
I have tried the following filters in Matlab with mixed results. It's still very hard to say what is noise and what is a spike.
Lowpass Butterworth filter,
median filter,
gaussian,
moving weighted window,
moving average filter,
smooth,
sgolay filter.
This may not be an adequate response for Stack Overflow, but one way of increasing the signal-to-noise ratio in your case is to average parts of the signal:
low-pass your signal to remove noise (and spikes), and find the minima of the filtered signal (from your image, one minimum every 600 data points). Keep the index of each minimum,
on the noisy signal, for each minimum index, select the 700 consecutive data points that follow. If you have 50 minima, you should end up with a 50-by-700 matrix,
average the rows of your matrix. You should get a 1-by-700 vector.
By averaging parts of the signal (minimum-locked potentials), you take advantage of two properties: noise is zero-mean (well, it should be), and the signal of interest is repetitive. The noise will therefore shrink as you pile up potentials, while the repeated signal is reinforced. With this process, however, you lose the spike times of each individual slow wave, but you at least recover them for blocks of 50 minima.
This technique is known in neuroscience as the event-related potential (http://en.wikipedia.org/wiki/Event-related_potential). It may not fit your signal perfectly, and the result may not show nice clean spikes, but you should be able to extract spike times for periods of interest (given the nature of your signal, I would say you need 5 or 10 potentials to see an emerging mean activity).
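As a rough sketch of those steps (in Python/SciPy rather than MATLAB, with the cutoff frequency, epoch length, and minimum spacing as guesses that depend on your sampling rate):

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def event_locked_average(signal, fs, cutoff=5.0, epoch_len=700, spacing=600):
    """Average epoch_len-sample epochs locked to minima of the slow wave."""
    # 1. Low-pass to suppress noise and spikes, leaving the slow plateau.
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")
    slow = filtfilt(b, a, signal)
    # 2. Minima of the slow wave are the peaks of its negation.
    minima, _ = find_peaks(-slow, distance=spacing)
    # 3. One epoch of the RAW signal per minimum, stacked and averaged:
    #    zero-mean noise cancels while the repeated waveform reinforces.
    epochs = np.array([signal[m:m + epoch_len]
                       for m in minima if m + epoch_len <= len(signal)])
    return epochs.mean(axis=0)  # the 1-by-epoch_len averaged potential
```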
Some toolboxes, such as EEGLAB or FieldTrip, do part of the job (though given the complexity of the task I would program it myself). They also have a bunch of filter/decomposition options, as well as some statistical features.

PIV Analysis, Interrogation Area of The Cross Correlation

I'm running a PIV analysis on two consecutive images taken during an experiment to get the vector field. But I would like to know: on what criteria should I base my choice of the percentage of overlap between the two images for the cross-correlation process? 50%, 75%? The PIVlab_GUI tool designed for MATLAB chooses a 50% overlap by default, but it allows changing it.
I just want to know the criteria by which I can judge how much overlap is best. Do the vectors become less accurate, more dependent, etc., as we increase/decrease the overlap?
My book "Fluid Mechanics Measurements" does not explain how to choose the overlap amount in the cross-correlation process, and I could not find any helpful online reference.
Any help is appreciated.
I suggest you read up on spectral estimation - which is basically equivalent to cross correlation when you segment the data and average the correlation estimates calculated from each segment (the cross correlation is the inverse Fourier transform of the cross spectrum). There's a book chapter on this stuff here, but you may want to find a more complete resource if you are unclear on the basics.
A short answer: increasing the overlap gives you more segments to average over, so your estimate will have a lower variance. But there are diminishing statistical returns as you push the overlap past 50%, while the computational cost keeps rising (more segments = more calculations). Hence most people just choose 50% and have done with it.
It's important to note that you don't get any more information by using overlapping segments; you are simply sampling the same data more densely (for PIV, a denser vector grid; for correlation, a finer grid of lag estimates), much as zero-padding a signal before taking its Fourier transform interpolates the spectrum, and this has statistical effects due to the way estimation of this type works.
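As a small illustration of that trade-off, here is a sketch using Welch-style cross-spectral estimation in Python/SciPy (rather than PIVlab's MATLAB); the signal and segment parameters are made up. With a fixed segment length, raising the overlap only increases how many segments get averaged.

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + rng.normal(0, 1, t.size)
y = np.roll(x, 5)  # a shifted copy, standing in for the second frame

nperseg = 256
for frac in (0.0, 0.5, 0.75):
    noverlap = int(frac * nperseg)
    # Welch-style averaging: overlap controls how many segments are averaged.
    f, Pxy = csd(x, y, fs=fs, nperseg=nperseg, noverlap=noverlap)
    n_seg = (t.size - noverlap) // (nperseg - noverlap)
    print(f"overlap {frac:.0%}: {n_seg} segments averaged")
# Going from 0% to 50% roughly doubles the averaging; pushing to 75% doubles
# it again, but the overlapping segments are highly correlated, so the
# variance reduction per extra segment is much smaller.
```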

Neural Network learning rate and batch weight update

I have programmed a Neural Network in Java and am now working on the back-propagation algorithm.
I've read that a batch update of the weights gives a more stable gradient search than an online weight update.
As a test I've created a time series function of 100 points, such that x = [0..99] and y = f(x). I've created a Neural Network with one input and one output and 2 hidden layers with 10 neurons for testing. What I am struggling with is the learning rate of the back-propagation algorithm when tackling this problem.
I have 100 input points so when I calculate the weight change dw_{ij} for each node it is actually a sum:
dw_{ij} = dw_{ij,1} + dw_{ij,2} + ... + dw_{ij,p}
where p = 100 in this case.
Now the weight updates become really huge, so my error E bounces around and it is hard to find a minimum. The only way I got proper behaviour was by setting the learning rate y to something like 0.7 / p^2.
Is there some general rule for setting the learning rate, based on the amount of samples?
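One way to see why the summed update explodes: if each per-sample term dw_{ij,k} has a similar typical magnitude d, then dw_{ij} = dw_{ij,1} + ... + dw_{ij,p} is roughly p * d, so the step grows linearly with p and the effective learning rate is really y * p. Scaling y like 1/p (which is exactly what averaging the per-sample updates does) keeps the step size independent of the sample count; the empirically found 0.7 / p^2 is more conservative still.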
http://francky.me/faqai.php#otherFAQs :
Subject: What learning rate should be used for backprop?

In standard backprop, too low a learning rate makes the network learn very slowly. Too high a learning rate makes the weights and objective function diverge, so there is no learning at all. If the objective function is quadratic, as in linear models, good learning rates can be computed from the Hessian matrix (Bertsekas and Tsitsiklis, 1996). If the objective function has many local and global optima, as in typical feedforward NNs with hidden units, the optimal learning rate often changes dramatically during the training process, since the Hessian also changes dramatically. Trying to train a NN using a constant learning rate is usually a tedious process requiring much trial and error. For some examples of how the choice of learning rate and momentum interact with numerical condition in some very simple networks, see ftp://ftp.sas.com/pub/neural/illcond/illcond.html

With batch training, there is no need to use a constant learning rate. In fact, there is no reason to use standard backprop at all, since vastly more efficient, reliable, and convenient batch training algorithms exist (see Quickprop and RPROP under "What is backprop?" and the numerous training algorithms mentioned under "What are conjugate gradients, Levenberg-Marquardt, etc.?").

Many other variants of backprop have been invented. Most suffer from the same theoretical flaw as standard backprop: the magnitude of the change in the weights (the step size) should NOT be a function of the magnitude of the gradient. In some regions of the weight space, the gradient is small and you need a large step size; this happens when you initialize a network with small random weights. In other regions of the weight space, the gradient is small and you need a small step size; this happens when you are close to a local minimum. Likewise, a large gradient may call for either a small step or a large step. Many algorithms try to adapt the learning rate, but any algorithm that multiplies the learning rate by the gradient to compute the change in the weights is likely to produce erratic behavior when the gradient changes abruptly. The great advantage of Quickprop and RPROP is that they do not have this excessive dependence on the magnitude of the gradient. Conventional optimization algorithms use not only the gradient but also second-order derivatives or a line search (or some combination thereof) to obtain a good step size.

With incremental training, it is much more difficult to concoct an algorithm that automatically adjusts the learning rate during training. Various proposals have appeared in the NN literature, but most of them don't work. Problems with some of these proposals are illustrated by Darken and Moody (1992), who unfortunately do not offer a solution. Some promising results are provided by LeCun, Simard, and Pearlmutter (1993), and by Orr and Leen (1997), who adapt the momentum rather than the learning rate. There is also a variant of stochastic approximation called "iterate averaging" or "Polyak averaging" (Kushner and Yin 1997), which theoretically provides optimal convergence rates by keeping a running average of the weight values. I have no personal experience with these methods; if you have any solid evidence that these or other methods of automatically setting the learning rate and/or momentum in incremental training actually work in a wide variety of NN applications, please inform the FAQ maintainer (saswss@unx.sas.com).
References:

Bertsekas, D. P. and Tsitsiklis, J. N. (1996), Neuro-Dynamic Programming, Belmont, MA: Athena Scientific, ISBN 1-886529-10-8.

Darken, C. and Moody, J. (1992), "Towards faster stochastic gradient search," in Moody, J.E., Hanson, S.J., and Lippmann, R.P., eds., Advances in Neural Information Processing Systems 4, San Mateo, CA: Morgan Kaufmann, pp. 1009-1016.

Kushner, H.J. and Yin, G. (1997), Stochastic Approximation Algorithms and Applications, NY: Springer-Verlag.

LeCun, Y., Simard, P.Y., and Pearlmutter, B. (1993), "Automatic learning rate maximization by online estimation of the Hessian's eigenvectors," in Hanson, S.J., Cowan, J.D., and Giles, C.L., eds., Advances in Neural Information Processing Systems 5, San Mateo, CA: Morgan Kaufmann, pp. 156-163.

Orr, G.B. and Leen, T.K. (1997), "Using curvature information for fast stochastic search," in Mozer, M.C., Jordan, M.I., and Petsche, T., eds., Advances in Neural Information Processing Systems 9, Cambridge, MA: The MIT Press, pp. 606-612.
Credits:
Archive-name: ai-faq/neural-nets/part1
Last-modified: 2002-05-17
URL: ftp://ftp.sas.com/pub/neural/FAQ.html
Maintainer: saswss@unx.sas.com (Warren S. Sarle)
Copyright 1997, 1998, 1999, 2000, 2001, 2002 by Warren S. Sarle, Cary, NC, USA.
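To make the FAQ's point about RPROP concrete, here is a simplified sketch of the sign-based update it describes. This shows the core idea only; the original algorithm also includes weight backtracking, and the hyperparameter values below are just the commonly cited defaults.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
    """One sign-based update: each weight keeps its own step size, which
    grows while its gradient sign is stable and shrinks when it flips."""
    step = np.where(grad * prev_grad > 0,
                    np.minimum(step * eta_plus, step_max), step)
    step = np.where(grad * prev_grad < 0,
                    np.maximum(step * eta_minus, step_min), step)
    # Only the SIGN of the gradient enters the move, never its magnitude.
    return w - np.sign(grad) * step, step
```

Across iterations you carry the weights, the previous gradient, and the per-weight step array (initialized to something like 0.1); the step size, not the gradient magnitude, determines how far each weight moves.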
A simple solution would be to take the average weight change over the batch instead of the sum. This way you can just use a learning rate of 0.7 (or any other value you like), without having to worry about optimizing yet another parameter.
More interesting information about batch updating and learning rates can be found in this article by Wilson (2003).
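A minimal sketch of that averaging (in Python rather than the poster's Java; the names are illustrative):

```python
import numpy as np

def batch_update(weights, per_sample_deltas, learning_rate=0.7):
    """One batch update using the MEAN of the per-sample weight changes.

    per_sample_deltas has shape (p, n_weights): one row per training sample.
    Dividing by p keeps the step size independent of the batch size, so the
    learning rate no longer needs to shrink as more samples are added.
    """
    return weights + learning_rate * np.mean(per_sample_deltas, axis=0)
```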