Looking for some advice. I am playing around with an accelerometer, combined with the machine learning app in MATLAB. Clearly there are many ways to extract features from the received data, in both the time and frequency domains. However, I have recently come across time-frequency analysis, specifically using wavelets.
Has anyone got any advice on using wavelet analysis for classifying accelerometer (or similar) data, and the benefits of using it? Or whether this would indeed be a valid way of extracting features? I'm not too sure what sort of data I should be extracting using this method.
Thanks in advance.
A few points to note:
1) You can transform a number of samples (it should be a dyadic number, and how many depends on your sampling frequency) into the wavelet domain and classify that data (e.g. if you transform 64 accelerometer samples, you also get 64 points in the wavelet domain).
2) Apart from time-frequency information, the wavelet transform has a sparsity property (https://en.wikipedia.org/wiki/Sparse_approximation) that can be useful for your classification model.
3) Also, you can try different wavelet basis functions (mother wavelets) and figure out which basis is most suitable for your data. You could start with the Haar basis, as it is well suited to capturing singular behaviour in your data; see the sketch after this list.
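To make points 1) and 3) concrete, here is a minimal MATLAB sketch; the variable name accel, the 64-sample window, and the per-level energy features are illustrative choices, not the only option:

x = accel(1:64);                    % one dyadic-length window (hypothetical 'accel')
[c, l] = wavedec(x, 6, 'haar');     % full 6-level Haar DWT: c also holds 64 coefficients
E = zeros(1, 6);
for k = 1:6
    d = detcoef(c, l, k);           % detail coefficients at level k
    E(k) = sum(d.^2);               % per-level energy as a compact feature
end
a = appcoef(c, l, 'haar', 6);       % level-6 approximation coefficient
features = [E, sum(a.^2)];          % 7-element feature vector for the classifier

One row of such features per window can then go straight into the Classification Learner app.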
I am a new MATLAB user and I would be grateful for your help. I have converted a set of time series into pictorial representations using the CWT (continuous wavelet transform) and trained a deep learning network to quite reasonable accuracy. I have made use of classify to check the trained network's performance on a single image. Now I want to apply it to a series of images consecutively taken from the main time series; how should I use classify in this case?
Regards
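Without knowing the exact setup, a hedged sketch of one common pattern is below: slide a window along the time series, turn each segment into a scalogram image, and call classify once per image. Here net, signal, and winLen are assumed names, and the resize/channel handling depends on your network's input layer:

inSize = net.Layers(1).InputSize;               % e.g. [224 224 3]
winLen = 1024;                                  % hypothetical window length
nWin   = floor(numel(signal)/winLen);
labels = strings(nWin, 1);
for k = 1:nWin
    seg = signal((k-1)*winLen+1 : k*winLen);
    cfs = abs(cwt(seg));                        % CWT magnitude (scalogram)
    img = imresize(mat2gray(cfs), inSize(1:2)); % scale to the network input size
    if numel(inSize) == 3 && inSize(3) == 3
        img = repmat(img, 1, 1, 3);             % grayscale -> 3 channels if required
    end
    labels(k) = string(classify(net, img));     % one prediction per window
end

Alternatively, you can stack the images into a single H-by-W-by-C-by-N array and make one classify call for the whole batch.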
I am quite new to wavelet analysis as well as Stack Overflow and would like some help. I am performing a spatio-temporal analysis of rainfall data.
With PCA, I can reduce the dimension of the rainfall data into a few leading modes, yielding EOFs (which explain spatial variability)
and principal components (explaining temporal variability).
I would like to perform a similar analysis with wavelets using the MATLAB Wavelet Toolbox. As of now, I am able to decompose 2D data (spatial decomposition) but unable to take into account the temporal variability in the data.
My first course of action has been to compress the data with PCA and then perform wavelet decomposition of the leading modes in both the spatial (EOFs) and temporal (PCs) domains.
I am wondering if this is the right way to perform such an analysis and would like suggestions as to how to proceed.
Thanks a lot.
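For what it's worth, a minimal sketch of the course of action described above, assuming rain is a time-by-space matrix and nLat/nLon are the grid dimensions (all names hypothetical, as are the wavelet, decomposition levels, and number of modes):

[EOFs, PCs] = pca(rain);               % EOF columns: spatial modes; PC columns: time series
nModes = 4;                            % illustrative number of leading modes
for k = 1:nModes
    [c, l] = wavedec(PCs(:,k), 5, 'db4');  % 1-D DWT of the k-th temporal mode
    % ... inspect the per-level coefficients in c for temporal variability ...
end
E1 = reshape(EOFs(:,1), nLat, nLon);   % leading EOF back on the spatial grid
[C, S] = wavedec2(E1, 2, 'db4');       % 2-D DWT of the spatial pattern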
Is there any way I can convert the SURFPoints object, generated by MATLAB, into a matrix with x and y positions, for feeding into a neural network?
I am pretty much a complete beginner, but from what I can tell by looking at the documentation, it isn't clear whether there is a way to get SURFPoints into a neural network.
Many thanks,
Hugh
SURFPoints has a field, Location, that is an n-by-2 matrix containing the (x,y) coordinates of each SURF point detected in the image.
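For instance (img is an assumed input image; detectSURFFeatures expects grayscale):

points = detectSURFFeatures(rgb2gray(img));  % omit rgb2gray if img is already grayscale
xy = points.Location;                        % n-by-2 single matrix of (x,y) coordinates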
Note, however, that SURF points have other attributes besides their location (such as scale and orientation). If you only take into account the (x,y) locations, you are throwing away a lot of information.
Also, it's unclear how you would feed this information into a neural network. A neural network, like many other machine learning models, expects a fixed-length feature vector for each example. If your task is something like image classification, you'll have to come up with some way to convert the list of SURF points into a feature vector that captures the properties you want your classifier to care about (one common option is sketched below). Depending on your application, a neural network may or may not be the best way to go. In the context of computer vision and image processing, neural networks these days are more commonly used for unsupervised feature discovery (see "deep learning"). For supervised learning tasks, other models like boosted decision trees and SVMs give better theoretical guarantees and have fared much better in practice.
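As a hedged sketch of that last point: one standard way to turn a variable number of SURF points into a fixed-length vector is a bag-of-features encoding (Computer Vision Toolbox); the folder layout here is an assumption:

imds = imageDatastore('trainingImages', 'IncludeSubfolders', true, ...
                      'LabelSource', 'foldernames');  % hypothetical labelled image folders
bag = bagOfFeatures(imds);                  % builds a SURF-based visual vocabulary
featVec = encode(bag, readimage(imds, 1));  % fixed-length histogram for one image

Each such histogram can then serve as the input vector for a neural network, an SVM, or boosted trees.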
I did some reading this afternoon about SVMs, and it looks very promising.
I am currently working on a problem where I'm looking for a pattern in the Fourier spectrum. I have been looking at spectra for days, hoping to find some repeating patterns. I found some criteria that match a certain pattern, but with the next sample the whole pattern can look slightly different. There is always slight deviation, which makes it hard to describe; or, put another way, I might be overlooking something. But I can clearly say which data form the training set.
I was hoping to use an SVM to train on this and predict the classification. That means if I have another set of new data, it would tell me whether it matches the training data or falls into the "other" group, which could be anything (no need to know what).
Is that something an SVM is able to do, or am I completely off? I couldn't find any good examples of input data to see if my problem is something I could feed to an SVM.
Currently using Matlab.
There has actually been a great deal of research done on this particular topic, especially with the wavelet transform. Google "wavelet transform" and "SVM" and you will find a number of papers. From there, you can easily adapt your model from the wavelet to the FFT spectrum.
I don't have experience with SVM, but I do have experience with related techniques, and here's what I can say:
In all likelihood, you can't simply go from a spectrum to SVM to decision. You need to determine what it is about the spectra that distinguishes your various inputs. For example, if it's the way the data changes over time or the relationship between the high and low frequencies that makes the inputs different, you need to encode that as a single parameter. E.g., you could make a parameter that's the ratio of some of your higher frequencies to some of your lower frequencies. You may also want to use parameters like the frequency centroid and zero-crossing rate, which are simpler than the full spectrum but may still carry useful information (these are used in audio and speech; not sure if they apply to whatever you are looking at). Once you have these derived parameters, feed them to the SVM analysis, which will do the sorting; a sketch follows.
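A minimal sketch of deriving such parameters in MATLAB, assuming x is one signal frame and fs its sampling rate; the 1 kHz band split is purely illustrative:

N    = numel(x);
X    = abs(fft(x));
f    = (0:N-1) * (fs/N);
keep = 1:floor(N/2);                               % one-sided spectrum
P    = X(keep); f = f(keep);

bandRatio = sum(P(f >= 1000)) / sum(P(f < 1000));  % high/low band energy ratio
centroid  = sum(f(:) .* P(:)) / sum(P);            % frequency (spectral) centroid
zcr       = mean(abs(diff(sign(x)))) / 2;          % zero-crossing rate per sample

feat = [bandRatio, centroid, zcr];                 % one row per training example
% With a matrix of such rows and a label vector y:
% mdl  = fitcsvm(featMatrix, y);                   % train the SVM
% yhat = predict(mdl, newFeat);                    % classify new data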
Other techniques you might want to examine (which also have the same requirements) include HMM (Hidden Markov Models), K-Means, and Logistic Regression.
I have some dynamic light scattering data. The machine pumps out the autocorrelation function, and a count-rate.
I can do a simple fit to the ACF
ACF(t) = exp(-D*q^2*t)
and obtain the diffusion coefficient.
I want to obtain the same D from the power spectrum. I have been able to create a power spectrum in two ways -- from the Fourier transform of the ACF, and from the count rate. Both agree, but the power spectrum does not look like the one in the books, so I'm not sure how to use it to work out the line width.
Attached is an image from a PDF that shows what you should get, and what I get from MATLAB. Can anyone make sense of what's going on?
I have used the code from answer #3 on this question. The resulting autocorrelation comes out exactly the same as both what the machine gives me and what MATLAB's autocorr command produces on the photon-count data.
Thank you for your time.
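For reference, the spectrum corresponding to ACF(t) = exp(-D*q^2*t) is a Lorentzian whose half-width at half maximum is Gamma = D*q^2, so one route is to fit that shape directly. A sketch under stated assumptions: acf is the one-sided autocorrelation sampled at interval dt, q is known, and lsqcurvefit requires the Optimization Toolbox:

a   = acf(:);                           % one-sided ACF as a column vector
sym = [a; flipud(a(2:end))];            % even extension: the ACF is symmetric in t
S   = real(fft(sym));                   % spectrum of the ACF (real by symmetry)
S   = S(1:numel(a));                    % keep positive frequencies
w   = 2*pi * (0:numel(a)-1)' / (numel(sym)*dt);  % angular frequency axis

lorentz = @(p, w) p(1)*p(2) ./ (p(2)^2 + w.^2);  % amplitude p(1), width p(2)
p0 = [max(S), 1/(numel(a)*dt)];                  % rough initial guesses
p  = lsqcurvefit(lorentz, p0, w, S(:));

Gamma = p(2);                           % half-width at half maximum
D     = Gamma / q^2;                    % recover the diffusion coefficient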
When you compute the Fourier transform from short sequences of data it often looks very noisy. There are a number of reasons for this. One reason is that the statistics of individual Fourier components are not Gaussian, and so averaging the spectra across multiple samples of data will only slowly improve the quality of the estimate.
Another cause of "noisiness" in empirical spectra is that you are applying, to a finite data sample, a transform which assumes an infinite-length signal; the implicit rectangular truncation convolves the true spectrum with a pathological sinc function. To diminish this problem, it helps to apply a window function to your data before computing the Fourier transform. One of the more complicated but also more powerful windowing approaches is the use of so-called Slepian tapers.
MATLAB conveniently implements well-known windows in functions such as hamming and hann.
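A short sketch of both options, assuming x is your count-rate record and fs its sampling rate (pmtm is in the Signal Processing Toolbox):

N  = numel(x);
w  = hann(N);                           % Hann window to suppress leakage
Xw = fft(x(:) .* w);
Pw = abs(Xw(1:floor(N/2))).^2;          % one-sided windowed periodogram

% Multitaper estimate using Slepian tapers:
[pxx, f] = pmtm(x, 4, [], fs);          % time-bandwidth product NW = 4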