MATLAB machine learning: find the best model to merge 3 signals into 1 known outcome

I have ten measurements, each containing 4 signals, with a length of 6-8 hours at a sample rate of 10 Hz (200k-300k samples per measurement).
3 signals are the x-, y- and z-axis of an accelerometer (measuring the acceleration m/s^2). This sensor is positioned on a sphere. The sphere is inflated and deflated. The air-fluctuation in the sphere is the 4th signal.
The accelerometer values are thus a representation of the air fluctuation in the sphere. If the sensor is positioned on top of the sphere, a good correlation is seen between the air fluctuation and the Z-axis of the accelerometer. In a perfect situation the X- and Y-axes would be 0, but they are not, and they contain noise.
I have tried many methods of combining the three signals into one and comparing the result to the air-fluctuation signal. At some points this does give good outcomes, but when the accelerometer is not moving along a single axis, the signal-to-noise ratio seems to be too low (e.g. when the sensor is at a 45-degree angle).
I am wondering whether machine learning in MATLAB can be used here to automatically generate a model that makes the best fit: combining the 3 axis signals into one signal that best represents the 4th signal. I assume this is what machine learning is about?
I can provide the signals in filtered format, the integrated signals, the angle of the sensor at a given time, full signal or just a minute of data, etc.
I have, however, no idea how to start or which toolbox to use to tackle this problem. Also, what signals should I feed into the algorithm, and at what length (the full signal vs. a couple of seconds/minutes)?
Can you help me get started on the machine learning process, to turn the 3 signals (and/or formatted versions of them) into one signal that closely matches the fourth signal?
I'm not asking for full code, just the steps to take to tackle this problem, e.g. "have a look at that toolbox, and function x".
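
For what it's worth, a minimal sketch of how such a supervised-regression experiment might look in MATLAB (assuming the Statistics and Machine Learning Toolbox; ax, ay, az and air are hypothetical variable names for your four signals):

    % Learn a mapping from the 3 accelerometer axes to the air signal.
    X = [ax ay az];     % predictors: one row per sample
    y = air;            % response: the 4th (air-fluctuation) signal

    % Hold out the last 30% of the recording for validation
    n     = numel(y);
    idxTr = 1:round(0.7*n);
    idxTe = round(0.7*n)+1:n;

    % Bagged regression trees are a reasonable first model to try
    mdl  = fitrensemble(X(idxTr,:), y(idxTr), 'Method', 'Bag');
    yHat = predict(mdl, X(idxTe,:));
    rmse = sqrt(mean((yHat - y(idxTe)).^2))   % validation error

The Regression Learner app (opened with the regressionLearner command) automates exactly this kind of model comparison, so it may be the easiest place to start.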

Related

Impossibility to apply closed-loop filtering techniques modelling a thin flexible structure

Model approach:
I am modelling in Matlab-Simulink a very thin flexible structure. All points of the model are linked with each other with springs and dampers in this way (without the tethers in the center):
[Figure: mesh description]
The general equation of my model, applied at each point of the mesh, is the classic mass/spring/damper dynamic:
$m\ddot{x} + c\dot{x} + kx = F$
with k the spring stiffness and c the damper damping.
To match the physical properties of the material I want to model, the spring stiffness has been set to a very high value, around k = 5000. This means that my spring links are highly reactive to any deformation.
Problem:
This leads to my problem: the high-stiffness links induce high-frequency displacements that I can consider as noise in the simulation.
The simulation is much slower, as the variable time step I am using must be very small.
These high-frequency displacements (around 160 Hz, which is the resonance frequency of the springs) persist throughout the simulation.
Here is a simulation of my structure rotating at a constant angular speed:
[Figure: in-time evolution of a random point of my structure in spherical coordinates]
We can see that R is vibrating at a very high frequency. However, the displacement amplitude is clearly negligible.
To speed up the simulation, I want to suppress those vibrations!
Investigation:
To suppress them, I investigated signal-filtering techniques, mainly low-pass filtering, to apply on every loop of the simulation; what enters the filter is the data of all my points on all axes.
[Figure: Simulink low-pass filter block]
The continuous version of the low-pass filter in the Simulink library has been tested on the acceleration, the speed and the position, with several cut-off frequencies from 100 Hz to 500 Hz.
For example, for a cut-off frequency of 200 Hz, filtering the position starting at t = 0.6 s, I get:
[Figure: in-time filtered evolution of a random point of my structure in spherical coordinates]
It is an in-plane movement, so there is no elevation angle, but the azimuth angle and the point's distance from the center diverge completely.
The problem might come from:
The fact that I am in a closed-loop system
The fact that, for the mesh we have, the filter receives 81 vectors of size 3×1 at each time step, and maybe the filter block is not made to work with that.
Main question:
Are there filtering techniques for closed-loop, multiple-input systems that could solve my problem?
The Digital Filter Designer works with SISO signals, so just demux your signals and apply some low-pass filters. You gave a lot of info, which made it harder to understand the core problem; if there is anything else, please reiterate. I'd start with a 3rd-order Butterworth LPF with a cutoff around 100 Hz for your needs.
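
As a rough sketch of that suggestion (assuming the signals are exported to MATLAB; fs here is only a placeholder sample rate, and the 100 Hz cutoff is the one suggested above):

    fs = 2000;                        % placeholder sample rate [Hz]
    fc = 100;                         % suggested cutoff [Hz]
    [b, a] = butter(3, fc/(fs/2));    % 3rd-order Butterworth low-pass
    xFilt  = filtfilt(b, a, x);       % zero-phase filtering of one demuxed signal x

filtfilt operates column-wise, so the demuxed channels can also be stacked as columns of one matrix and filtered in a single call.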

Modeling a relationship between sensor values and position (angle and distance) to a target

I want to derive a simple model that can predict the current position of an object with respect to a target.
To be more specific, I have a head with 4 identical light sensors placed 90 degrees apart. There is a light source (an LED) emitting visible light. Since each sensor has an angular response (maximum at 90 degrees, with sensitivity decreasing as the angle of incidence of the light increases), the value received at each sensor is determined by the angle and distance of the head with respect to the target.
I measured the values at four sensors at various angles and distances.
Each sensor reads a maximum value of around 9.5 when the incoming light is low (either the sensor is far from the target or it faces away from the target), while the value decreases as the sensor gets closer to the target or faces it more directly.
My inputs and outputs look like:
[0.1234 0.0124 8.342 9.232] = [angle, distance]: an example of the head placed next to the light, facing toward it.
Four inputs from the sensors and two outputs, for the angle and the distance.
What strategy can I implement to derive an equation that predicts the angle and distance from the current incoming sensor values?
I was thinking of multivariate regression, but my output is not a single scalar (it's more of a vector), so I am not sure it will work.
Therefore, I am writing here to ask for some help.
Any help would be appreciated.
Thanks
Your idea about multivariate regression looks reasonable.
IMHO you need to train two models instead of one: the first one will predict the angle, and the second will predict the distance.
Why would you want to combine these two models? That looks strange from the point of view of the optimization metric: when you build the angle model you minimize an error in radians, and when you build the distance model you minimize an error in meters. So what metric would you minimize in the single-model case?
I believe the following links will be useful for you:
https://www.mathworks.com/help/curvefit/surface-fitting.html
https://www.mathworks.com/help/matlab/math/example-curve-fitting-via-optimization.html
Note: in some cases data normalization (for example via zscore) greatly improves the fitting performance.
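
As a minimal sketch of the two-model approach (assuming S is an N-by-4 matrix of sensor readings, ang and dist are N-by-1 vectors of measured angles and distances, and fitrgp is just one possible choice of regressor):

    % Normalize the inputs first, as suggested above
    Z = zscore(S);                  % standardized sensor readings

    % One model per output quantity
    mdlAngle = fitrgp(Z, ang);      % Gaussian-process regression for the angle
    mdlDist  = fitrgp(Z, dist);     % a separate model for the distance

    % Predict for a new 1-by-4 reading s, reusing the training statistics
    zNew    = (s - mean(S)) ./ std(S);
    angHat  = predict(mdlAngle, zNew);
    distHat = predict(mdlDist,  zNew);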
P.S. Try also asking at https://stats.stackexchange.com/

How to calculate displacement from Accelerometer reading?

I have accelerometer readings for three axes X, Y and Z, and will be getting data at a frequency of 62 records per second. Could you please suggest how I can calculate the displacement?
Data in hand:
Accelerometer readings with respect to time.
Do I need to calculate the displacement using time-domain data, or do I need to convert into the frequency domain? Which one will give accurate results?
You can double integrate the acceleration vector over time to obtain the displacement. In theory this is a perfectly sensible solution.
But in practice, there will always be a component of g (acceleration due to gravity) acting on at least one of the axes all the time. Let's say you subtract the g component from your xyz vectors. The problem is that any slight error in the readings (even one of small magnitude), when double-integrated, will accumulate over time, rendering the displacement wildly inaccurate.
According to the integrated values, you will most likely see even an idle object fly off into space. You'll need an additional sensor to tell you the orientation, like a gyroscope, and some point of reference (the Wiimote does this with an IR sensor).
This is primarily a time domain problem, but you could have a frequency domain stage where some amount of filtering is done to remove measurement error or process error.
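
To make the drift concrete, here is a minimal sketch of the naive time-domain double integration (assuming acc holds one gravity-compensated axis in m/s^2, sampled at the 62 Hz rate from the question):

    fs  = 62;                  % samples per second
    dt  = 1/fs;
    t   = (0:numel(acc)-1).' * dt;

    vel = cumtrapz(t, acc);    % first integration: velocity
    pos = cumtrapz(t, vel);    % second integration: displacement

    % Any constant bias b left in acc appears as 0.5*b*t.^2 in pos,
    % which is why the estimate "flies off" after a short time.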
tl;dr Positional tracking with acceleration sensors alone is a hard problem.

On how to apply k-means clustering and outlining the clusters

I am reading about applications of clustering in human motion analysis. I started out with random numbers and applied the k-means clustering algorithm, but I wanted to have graphs that circle the clusters, as shown in the picture. Basically, the lines represent the motion trajectory. I would appreciate ideas on how to obtain the motion trajectory of a person. The application is patient monitoring, where the trajectory will be used for abnormal-behavior detection.
I will be using a Kinect and recording the motion trajectory based on skeleton tracking. I will record the 4 quaternion values of the Head, Shoulder and Torso joints, plus the RGBD (red, green, blue, depth) reading combined into 1 value for each of these joints: a total of 4*3 + 3 = 15 time series, so there are 15 variables. How do I convert them to represent the trajectories shown below, and then apply clustering to cluster the trajectories? The clusters will then allow classification.
Can somebody please show how to obtain a diagram similar to the one attached? And how do I fuse and convert the 15 time series from each person into a single trajectory?
The picture illustrates the number of clusters that are generated for the time series. Thank you in advance.
K-means is a bad fit for trajectories.
It needs to be able to compute the mean (which is why it is called "k-means"), and having a stable, sensible mean is important. But how meaningful is the mean of a set of time series, even if you could define one (and even then, the series may differ in length and movement speed)?
Try hierarchical clustering, and multivariate dynamic time warping.
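
As a minimal sketch of that combination in MATLAB (assuming traj is a cell array of trajectories, each a d-by-T matrix with one variable per row, and that dtw from the Signal Processing Toolbox and linkage/cluster from the Statistics and Machine Learning Toolbox are available):

    n = numel(traj);
    D = zeros(n);                           % pairwise DTW distances
    for i = 1:n-1
        for j = i+1:n
            D(i,j) = dtw(traj{i}, traj{j}); % dtw handles multivariate series
            D(j,i) = D(i,j);
        end
    end

    tree   = linkage(squareform(D), 'average');  % hierarchical clustering
    labels = cluster(tree, 'maxclust', 4);       % cut into e.g. 4 clusters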

Trying to filter (tons of) noise from accelerometers and gyroscopes

My project:
I'm developing a slot car with a 3-axis accelerometer and gyroscope, trying to estimate the car's pose (x, y, z, yaw, pitch), but I have a big problem with vibration noise: while the car is running, the gears induce vibration and the track makes it worse, so the noise takes values between ±4 g (where g = 9.81 m/s^2) on the accelerometers, for example.
I know (because I observe it) that the noise is correlated across all of my sensors.
In my first attempt, I tried to work it out with a Kalman filter, but it didn't work because the values of my state vector were extremely noisy.
EDIT2: In my second attempt I tried a low-pass filter before the Kalman filter, but it only slowed down my system and didn't filter out the low-frequency components of the noise. At this point I realized the noise might be composed of both low- and high-frequency components.
I was learning about adaptive filters (LMS and RLS), but I realized I don't have a noise reference signal, and if I use one accelerometer axis to filter another axis, I don't get absolute values, so it doesn't work.
EDIT: I'm having problems trying to find some example code for adaptive filters. If anyone knows of something similar, I would be very thankful.
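
For reference, a minimal adaptive-noise-cancellation sketch using dsp.LMSFilter from the DSP System Toolbox; it assumes a hypothetical reference signal ref that is correlated with the noise only, which is exactly what is missing here (the motor-RPM idea in the answer below could provide one):

    % noisy = signal + noise; ref is correlated with the noise only
    lms = dsp.LMSFilter('Length', 32, 'StepSize', 0.01);
    [noiseEst, cleaned] = lms(ref, noisy);   % cleaned = noisy - noiseEst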
Here is my question:
Does anyone know about a filter or have any idea about how I could fix it and filter my signals correctly?
Thank you so much in advance,
XNor
P.S. I apologize for any mistakes I may have made; English is not my mother tongue.
The first thing I would do would be to run a DFT on the sensor signals and see if there actually are high- and low-frequency components in your accelerometer signals.
With a DFT you should be able to determine an optimal cutoff frequency for your low-pass/band-pass filter.
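
As a minimal sketch of that inspection (assuming one accelerometer axis ax sampled at fs Hz; pwelch from the Signal Processing Toolbox gives a DFT-based spectral estimate):

    [pxx, f] = pwelch(ax, [], [], [], fs);   % Welch power spectral density
    plot(f, 10*log10(pxx));
    xlabel('Frequency [Hz]'); ylabel('PSD [dB/Hz]');
    % Peaks well above the car's maneuvering bandwidth suggest where to
    % place the cutoff of the low-pass/band-pass filter.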
If you have a constant component on the Z axis, there is a chance that you haven't filtered out gravity. Note that if there is significant pitch or roll, this constant can appear on your X and Y axes as well.
Generally, pose estimation with an accelerometer alone is not a good idea, as you need to integrate the acceleration signals twice to get a pose. If the signal is noisy, you will be in trouble after just a couple of seconds unless the noise is 100% evenly distributed between + and -.
Even if we assume that there is no noise coming from your gears, the conversion accuracy of the accelerometer alone might start to corrupt your pose after a couple of minutes.
I would definitely use a second sensor, e.g. a compass/encoder, in combination with your mathematical model, and combine all your sensor data in a Kalman filter (sensor fusion).
You might also be able to derive a black-box model of your noise by assuming that it is correlated with your motor's RPM (Box-Jenkins/ARMA/ARIMA).
I had similar problems with low- and high-frequency noise, and I managed to remove it decently, without removing good signal, by using a universal microphone shock mount. It does a good job with the gyroscope too, especially if you find one that fits it (or you can put the sensor in a small case and then mount that).
It basically uses elastic strings to remove shocks and vibration.
Have you tried a simple low-pass filter on the data? I'd guess that the vibration frequency is much higher than the frequencies in normal car acceleration data. At least in normal driving. Crashes might be another story...