MATLAB program to estimate the energy spectrum

MATLAB program to estimate the energy spectrum by non-parametric and parametric methods in the case of EEG signals from epilepsy patients with focal and non-focal area recordings
I have to do this and I have no idea where to start. I have this reference link, https://www.upf.edu/web/ntsa/downloads/-/asset_publisher/xvT6E4pczrBw/content/2012-nonrandomness-nonlinear-dependence-and-nonstationarity-of-electroencephalographic-recordings-from-epilepsy-patients?inheritRedirect=false&redirect=https%3A%2F%2Fwww.upf.edu%2Fweb%2Fntsa%2Fdownloads%3Fp_p_id%3D101_INSTANCE_xvT6E4pczrBw%26p_p_lifecycle%3D0%26p_p_state%3Dnormal%26p_p_mode%3Dview%26p_p_col_id%3Dcolumn-1%26p_p_col_count%3D1#.X751wmUzaUk
If someone could give me some advice, I would appreciate it.

I have done both parametric and non-parametric spectrum estimation. The Signal Processing Toolbox and System Identification Toolbox documentation is very good on this topic.
Here is a paper I published on this topic. It is about point processes rather than continuous signals, but it does have sections on parametric versus non-parametric spectrum analysis.
Non-parametric is the normal way. If your algorithm uses the fast Fourier transform, it is non-parametric.
Parametric means pole-and-zero type models. Spectra estimated parametrically tend to be a lot smoother than those estimated non-parametrically. A key question that arises in parametric estimation, and which does not arise in non-parametric estimation, is what model structure and model order to use. There are ways to answer those questions: the Akaike information criterion (AIC) is a standard way to choose the model order, although there are certainly alternatives to AIC that are better in some situations.
As you read the documentation, you will see numerous references to 'estimation'. This may seem strange at first, but it is a very important concept: when you determine the spectrum from a signal, whether you do it parametrically or non-parametrically, you are estimating the true underlying spectrum based on a sample of the process. The data you recorded is that sample.
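To make this concrete, here is a minimal sketch comparing the two approaches in MATLAB. It assumes the Signal Processing Toolbox; the sampling rate and the random test signal are placeholders, so substitute one channel of your EEG recording for x.

% Compare a non-parametric (Welch) and a parametric (Burg AR) PSD estimate.
% fs and x are placeholders -- substitute your EEG channel and its true
% sampling rate.
fs = 512;                              % hypothetical EEG sampling rate (Hz)
x  = randn(4096, 1);                   % stand-in for one EEG channel

% Non-parametric: Welch's averaged, windowed periodogram.
[pxxWelch, fWelch] = pwelch(x, hamming(512), 256, 1024, fs);

% Parametric: Burg autoregressive model. The order (20 here) should be
% chosen with a criterion such as AIC rather than fixed a priori.
[pxxBurg, fBurg] = pburg(x, 20, 1024, fs);

plot(fWelch, 10*log10(pxxWelch), fBurg, 10*log10(pxxBurg));
xlabel('Frequency (Hz)'); ylabel('Power/frequency (dB/Hz)');
legend('Welch (non-parametric)', 'Burg AR (parametric)');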

Related

Fourier transform and filtering the MD trajectory data instead of PCA for dimensionality reduction?

I was using PCA for dimensionality reduction of MD (molecular dynamics) trajectory data from some protein simulations. Basically, my data is the xyz coordinates of the protein atoms, which change with time (that means I have a lot of frames of these xyz coordinates). The dimensions of this data are something like 20000 frames of 200x3 (atoms by coordinates). I implemented PCA using the princomp command in MATLAB.
I was wondering if I can do an FFT on my data. I have experience doing FFTs on audio signals (1-D signals). Here my data has both time and space in the picture. It must be theoretically possible to apply an FFT to my data and then filter it using an LPF (low-pass filter), but I am unsure.
Can someone give me some direction/code snippets/references towards implementing FFT on my data?
Why do people prefer PCA more often compared to FFT and filtering? Is it because of the computational efficiency of the algorithm, or because of the statistical nature of the underlying data?
For the first question "Can someone give me some direction/code snippets/references towards implementing FFT on my data?":
I should say that fft is implemented in MATLAB and you do not need to implement it on your own. Also, for your case you should use fftn (see the fft documentation) to transform, apply low-pass filtering designed with designfilt (MATLAB's filter designer), and then apply ifftn (MATLAB's inverse FFT) to invert the transform.
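Here is a minimal sketch of the frequency-domain low-pass idea along the trajectory's time axis. Note that designfilt designs 1-D filters; for brevity this sketch uses a simple hard frequency mask instead, and all sizes and the cutoff are placeholders.

% Low-pass an MD trajectory along its time axis by masking high
% frequencies. Sizes and cutoff are placeholders; a hard mask stands in
% for a properly designed filter.
nFrames = 20000;
traj = randn(nFrames, 200, 3);         % stand-in for xyz trajectory data

X = fft(traj, [], 1);                  % FFT along the time dimension only

cutoff = 500;                          % number of low-frequency bins to keep
mask = false(nFrames, 1);
mask(1:cutoff) = true;                 % DC and positive low frequencies
mask(end-cutoff+2:end) = true;         % matching conjugate-symmetric bins
X(~mask, :, :) = 0;

trajFiltered = real(ifft(X, [], 1));   % back to the time domain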
For the second question "Why people are preferring PCA more often compared to FFT and filtering ...":
I should say that because the filtering is done in the frequency domain, after filtering you cannot localize events in the time domain. You can find more details about this drawback in this article:
But Fourier analysis also has some other serious drawbacks. One of them may be that time information is lost in transforming to the frequency domain. When looking at a Fourier transform of a signal, it is impossible to tell when a particular event has taken place. If it is a stationary signal, this drawback isn't very important. However, most interesting signals contain numerous non-stationary or transitory characteristics: drift, trends, abrupt changes, and beginnings and ends of events. These characteristics are often the most important part of the signal, and Fourier analysis is not suitable for detecting them.

What is the structure of an indirect (error-state) Kalman filter and how are the error equations derived?

I have been trying to implement a navigation system for a robot that uses an Inertial Measurement Unit (IMU) and camera observations of known landmarks in order to localise itself in its environment. I have chosen the indirect-feedback Kalman Filter (a.k.a. Error-State Kalman Filter, ESKF) to do this. I have also had some success with an Extended KF.
I have read many texts and the two I am using to implement the ESKF are "Quaternion kinematics for the error-state KF" and "A Kalman Filter-based Algorithm for IMU-Camera Calibration" (pay-walled paper, google-able).
I am using the first text because it better describes the structure of the ESKF, and the second because it includes details about the vision measurement model. In my question I will be using the terminology from the first text: 'nominal state', 'error state' and 'true state'; which refer to the IMU integrator, Kalman Filter, and the composition of the two (nominal minus errors).
The diagram below shows the structure of my ESKF implemented in Matlab/Simulink; in case you are not familiar with Simulink I will briefly explain the diagram. The green section is the Nominal State integrator, the blue section is the ESKF, and the red section is the sum of the nominal and error states. The 'RT' blocks are 'Rate Transitions' which can be ignored.
My first question: Is this structure correct?
My second question: How are the error-state equations for the measurement models derived?
In my case I have tried using the measurement model of the second text, but it did not work.
Your block diagram combines two indirect methods for bringing IMU data into a KF:
You have an external IMU integrator (in green, labelled "INS", sometimes called the mechanization, and described by you as the "nominal state", but I've also seen it called the "reference state"). This method freely integrates the IMU externally to the KF and is usually chosen so you can do this integration at a different (much higher) rate than the KF predict/update step (the indirect form). Historically I think this was popular because the KF is generally the computationally expensive part.
You have also fed your IMU into the KF block as u, which I am assuming is the "command" input to the KF. This is an alternative to the external integrator. In a direct KF you would treat your IMU data as measurements, but in order to do that, the state would have to model (position, velocity, and) acceleration and (orientation and) angular velocity; otherwise there is no possible H such that Hx can produce estimated IMU output terms. If you instead feed your IMU measurements in as a command, your predict step can simply act as an integrator, so you only have to model as far as velocity and orientation.
You should pick only one of those options. I think the second one is easier to understand, but it is closer to a direct Kalman filter, and requires you to predict/update for every IMU sample, rather than at the (I assume) slower camera framerate.
Regarding measurement equations for version (1), in any KF you can only predict things you can know from your state. The KF state in this case is a vector of error terms, and thus you can only predict things like "position error". As a result you need to pre-condition your measurements in z to be position errors. So make your measurement the difference between your "estimated true state" and your position from "noisy camera observations". This exact idea may be represented by the xHat input to the indirect KF. I don't know anything about the MATLAB/Simulink stuff going on there.
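To make the pre-conditioning concrete, here is a minimal sketch of an error-state measurement update for a position-only camera observation; every matrix and value here is a placeholder, not your Simulink model's actual tuning.

% Error-state measurement update for a position-only camera observation.
% All values are placeholders.
z_cam = [1.02; 2.98; 0.51];            % noisy camera-derived position
x_nom = [1.00; 3.00; 0.50];            % nominal position from the INS integrator

dz = z_cam - x_nom;                    % the measurement is a position *error*

H = eye(3);                            % error-state observation model
P = 0.1  * eye(3);                     % error-state covariance (placeholder)
R = 0.05 * eye(3);                     % camera measurement noise (placeholder)

K  = P * H' / (H * P * H' + R);        % Kalman gain
dx = K * dz;                           % estimated position error
P  = (eye(3) - K * H) * P;             % covariance update

x_corrected = x_nom + dx;              % fold the error back into the nominal state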
Regarding real-world considerations for the summing block (in red) I refer you to another answer about indirect Kalman filters.
Q1) Your Simulink model looks appropriate. Let me shed some light on quaternion-mechanization-based KFs, which I've worked on for navigation applications.
Since the Kalman filter is an elegant mathematical technique that borrows from the science of stochastics and measurement, it can help you reduce the noise in the system without the need to model that noise elaborately.
All KF systems start with some preliminary understanding of the model that you want to make free of noise. The measurements are fed back to evolve the states better (the measurement equation Y = CX). In your case, the states that you are talking about are errors in the quaternions, which would be the four values dq1, dq2, dq3, dq4.
A KF working well in your application would accurately determine the attitude/orientation of the device by controlling the error around the quaternion. A quaternion describes the spatial orientation of a body using a scalar and a vector, more specifically an angle and an axis.
The error equations that you are talking about are covariances which contribute to the Kalman gain. The covariances denote spread around the mean, and they are useful in understanding how the central/average behavior of the system changes with time. Low covariances denote little deviation from the mean behavior of the system. As the KF cycles run, the covariances keep getting smaller.
The Kalman Gain is finally used to compensate for the error between the estimates of the measurements and the actual measurements that are coming in from the camera.
Again, this elegant technique first ensures that the errors in the quaternion values converge around zero.
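As an illustration of how the error quaternion folds back into the nominal attitude, here is a minimal sketch using a small-angle error quaternion; it assumes the Aerospace Toolbox for quatmultiply/quatnormalize, and all values are placeholders.

% Compose a nominal quaternion with a small-angle error quaternion
% (scalar-first convention). Assumes the Aerospace Toolbox.
q_nom = [cos(pi/8), 0, 0, sin(pi/8)];  % nominal attitude estimate

dtheta = [0.01, -0.02, 0.005];         % small-angle attitude error (rad)
dq = [1, dtheta/2];                    % first-order error quaternion

q_true = quatnormalize(quatmultiply(q_nom, dq));  % corrected attitude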
Q2) The EKF is a great technique to use as long as you have a non-linear measurement construction technique. Be very careful in using the EKF if there are too many transformations in your system, i.e. don't try to reconstruct measurements by transforming your states: this seriously affects the model's validity, and since the noise covariances would not undergo similar transformations, there is a chance of hitting a singularity as soon as the matrices become non-invertible.
You could look at constant-gain KF schemes, which would save you from covariance propagation and save substantial computational effort and time. These techniques are quite new and look very promising. They actively absorb P (error covariance), Q (model noise covariance) and R (measurement noise covariance) and work well with EKF schemes.

Fourier spectral analysis with Support Vector Machines

I did some reading this afternoon about SVMs, and I have the hope that this looks very promising.
I am currently working on a problem where I'm looking for a pattern in the Fourier spectrum. What I'm saying is that I have been looking at spectra for days, hoping to find some repeating patterns. I found some criteria that match a certain pattern, but with the next sample the whole pattern can look slightly different. So there is always slight deviation, which makes it hard to describe. Or, put another way, I might be overlooking something. But I can clearly say which data is the training data.
I was hoping to use an SVM to train on this and predict the classification. That means if I have a new set of data, it would tell me whether it matches the training data or falls into the "other" group, which could be anything (no need to know what).
Is that something an SVM is able to do, or am I completely off? I couldn't find any good examples of input data to see if my problem is something I could feed to an SVM.
Currently using Matlab.
There has actually been a ton of research done on this particular topic, especially with the wavelet transform. Google "wavelet transform and SVM" and you will find a number of papers. From there, you can easily go ahead and adjust your model from the wavelet to the FFT spectrum.
I don't have experience with SVM, but I do have experience with related techniques, and here's what I can say:
In all likelihood, you can't simply go from a spectrum to an SVM to a decision. You need to determine what it is about the spectra that distinguishes your various inputs. For example, if it's the way the data changes over time, or the relationship between the high and low frequencies, that makes the inputs different, you need to encode that as a single parameter. E.g., you could make a parameter that's the ratio of some of your higher frequencies to some of your lower frequencies. You may also want to use parameters like the frequency centroid and the zero-crossing rate, which are simpler than the full spectrum but may still carry useful information (these are used in audio and speech; not sure if they apply to whatever you are looking at). Once you have these derived parameters, feed them to the SVM analysis, which will do the sorting.
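For instance, here is a minimal sketch of deriving such scalar features and training an SVM on them with fitcsvm; the signals, labels, and the 100 Hz band split are placeholders, and it assumes the Statistics and Machine Learning Toolbox.

% Derive scalar spectral features and train an SVM on them.
% Signals, labels, and band edges are placeholders.
fs = 1000;
nObs = 40;
X = zeros(nObs, 3);                    % one feature row per signal
y = [ones(20, 1); -ones(20, 1)];       % placeholder class labels

for k = 1:nObs
    s = randn(1024, 1);                % stand-in for one recorded signal
    [pxx, f] = periodogram(s, [], [], fs);
    X(k, 1) = sum(pxx(f >= 100)) / sum(pxx(f < 100));  % high/low band ratio
    X(k, 2) = sum(f .* pxx) / sum(pxx);                % frequency centroid
    X(k, 3) = mean(abs(diff(sign(s))) > 0);            % zero-crossing rate
end

model = fitcsvm(X, y, 'KernelFunction', 'rbf', 'Standardize', true);
label = predict(model, X(1, :));       % classify a (new) feature vector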
Other techniques you might want to examine (which have the same requirements) include HMMs (Hidden Markov Models), k-means, and logistic regression.

Wavelet de-noising routine using the wden function in MATLAB

I was reading a report today which looked at measuring the heat storage of a lake from temperature measurements. To reduce the impact of temperature fluctuations that can confound estimates of short-term changes in heat storage, a wavelet de-noising routine was used (Daubechies 4 wavelet, single rescaling, minimax thresholds, via the wden function in the Wavelet Toolbox), in which two levels of wavelet filtering were applied. This technique results in smoother temporal variations in water temperature while preserving patterns of diurnal heat gain and loss.
From this description, consider that my temperature measurements are similar to
load sumsin;
s = sumsin;
plot(s);
How would I apply the techniques described using the wden function in MATLAB?
Apologies for the vagueness of this post, but seeing as I am clueless about how to complete this task, I would be very grateful for some advice.
I assume you're talking about de-noising by thresholding the detail coefficients of the wavelet transform. wden does do this. You've not specified, however, whether hard or soft thresholding is wanted.
Not wanting to reproduce MATLAB's help here,
help wden
will give you what you need on how to use the function. Given the information you've provided, and assuming that soft thresholding is appropriate (as it is with most methods except Donoho's VisuShrink, referred to by wden as 'sqtwolog'),
[s_denoised, ~, ~] = wden(s, 'minimaxi', 's', 'sln', 2, 'db4');
should give you what you want. This also assumes you're not interested in the decomposed wavelet tree.
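For completeness, here is a short end-to-end sketch using the sumsin example signal from the question; it assumes the Wavelet Toolbox.

% De-noise the example signal with two levels of db4 filtering,
% soft minimax thresholding, and single-level noise rescaling ('sln').
load sumsin;
s = sumsin;
s_denoised = wden(s, 'minimaxi', 's', 'sln', 2, 'db4');

plot(1:length(s), s, 1:length(s), s_denoised);
legend('original', 'de-noised');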

Power spectrum from autocorrelation function with MATLAB

I have some dynamic light scattering data. The machine pumps out the autocorrelation function, and a count-rate.
I can do a simple fit to the ACF
ACF = exp(-D*q^2*t)
and obtain the diffusion coefficient.
I want to obtain the same D from the power spectrum. I have been able to create a power spectrum in two ways -- from the Fourier transform of the ACF, and from the count rate. Both agree, but the power spectrum does not look like the one in the books, so I'm not sure how to use it to work out the line width.
Attached is an image from a PDF that shows what you should get, and what I get from MATLAB. Can anyone make sense of what's going on?
I have used the code from answer #3 on this question. The resulting autocorrelation comes out exactly the same as what the machine gives me, and the same as what MATLAB's autocorr command gives on the photon-count data.
Thank you for your time.
When you compute the Fourier transform from short sequences of data it often looks very noisy. There are a number of reasons for this. One reason is that the statistics of individual Fourier components are not Gaussian, and so averaging the spectra across multiple samples of data will only slowly improve the quality of the estimate.
Another cause of "noisiness" in empirical spectra is that you are applying (to a finite data sample) a transform which involves a pathological sinc function and which assumes an infinite-length signal. To diminish this problem, it helps to apply a windowing function to your data before computing the Fourier transform. One of the more complicated but also more powerful windowing approaches is the use of so-called 'Slepian tapers'.
MATLAB conveniently implements well-known windows in functions such as hamming and hann.
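As an illustration, here is a minimal sketch comparing an unwindowed periodogram, a Hann-windowed periodogram, and a Slepian-taper (multitaper) estimate via pmtm; the test signal is a placeholder for your count-rate data, and the Signal Processing Toolbox is assumed.

% Compare unwindowed, Hann-windowed, and multitaper (Slepian) PSD estimates.
% The test signal is a placeholder.
fs = 1000;
t  = (0:2047).' / fs;
x  = sin(2*pi*80*t) + 0.5*randn(size(t));   % stand-in for count-rate data

[pxxRaw,  f] = periodogram(x, rectwin(length(x)), [], fs);
[pxxHann, ~] = periodogram(x, hann(length(x)), [], fs);
[pxxMtm,  ~] = pmtm(x, 4, [], fs);          % time-bandwidth product NW = 4

semilogy(f, pxxRaw, f, pxxHann, f, pxxMtm);
xlabel('Frequency (Hz)'); ylabel('PSD');
legend('rectangular (no window)', 'Hann window', 'multitaper (Slepian)');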