Least squares in Matlab

A deployment of some (20 or so) sensors has detected a signal arriving from a certain direction. The inter-sensor distance is 50 meters. The signal is observed in the sensors' data with a move-out that depends on the arrival direction (top-right picture).
I need to find the location of the signal source in an area mapped into grid cells surrounding the sensors. I am aware of array signal processing algorithms (beamforming, ...) for location estimation, but I am trying to solve this as a least-squares problem.
Cross-correlating the sensor data (each sensor with the next one), I find the delays between the observed signal's arrival times at the sensors (3rd picture). Now, in the simplest scenario of a homogeneous propagation velocity, say 2 km/s, and using the observed time differences and the positions of the sensors, how do I set up my cost function and optimize it so as to find the grid position most likely to contain the source? My problem is really setting up the cost function in Matlab.
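A minimal sketch of one way to set up such a cost function as a grid search, assuming sensor positions in sensPos (N-by-2, meters), the observed consecutive-pair delays in dtObs ((N-1)-by-1, seconds), and candidate grid coordinate vectors xg and yg (all placeholder names, not from the original post):

```matlab
v = 2000;                              % propagation velocity [m/s] (2 km/s)
[X, Y] = meshgrid(xg, yg);             % candidate source positions on the grid
J = zeros(size(X));                    % least-squares misfit per grid cell
for i = 1:numel(X)
    d = hypot(sensPos(:,1) - X(i), sensPos(:,2) - Y(i)); % cell-to-sensor distances
    t = d / v;                         % predicted travel times
    dtPred = diff(t);                  % predicted delays between consecutive sensors
    J(i) = sum((dtPred - dtObs).^2);   % misfit against observed delays
end
[~, iBest] = min(J(:));
srcEst = [X(iBest), Y(iBest)];         % grid cell minimizing the misfit
```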
In a more difficult scenario, where the velocity changes across the grid, I need to calculate travel times from each sensor to every grid cell. I do that using an FD solver (see 4th and 5th pictures). Again, given the travel-time grids, the observed time differences, and the sensor positions, how do I find the most probable source?
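With precomputed travel-time grids, the same misfit applies; only the predicted times come from the FD solver's tables instead of distance divided by velocity. A sketch, assuming T is an nCells-by-nSensors matrix with T(i,s) the travel time from sensor s to cell i (again a placeholder name):

```matlab
J = zeros(size(T,1), 1);               % misfit per grid cell
for i = 1:size(T,1)
    dtPred = diff(T(i,:)).';           % predicted consecutive-sensor delays at cell i
    J(i) = sum((dtPred - dtObs).^2);   % compare with observed delays
end
[~, iBest] = min(J);                   % index of the most probable source cell
```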
Thanks


Impossibility to apply closed-loop filtering techniques modelling a thin flexible structure

Model approach:
I am modelling in Matlab-Simulink a very thin flexible structure. All points of the model are linked to each other with springs and dampers in this way (without the tethers in the center):
Mesh description
The general equation of my model, applied at each point of the mesh, is the following:
Dynamic formula of a mass/spring/damper system (of the canonical form m·ẍ + c·ẋ + k·x = F)
with k the spring stiffness and c the damper coefficient.
To adapt the physical properties of the material I want to model, the spring stiffness has been set to a very high value, around k = 5000. This means that my spring links are highly reactive to any deformation.
Problem:
This leads to my problem: the high-stiffness links induce high-frequency displacements that I can consider as noise in the simulation.
The simulation is much slower because the variable time step I am using must be very small.
These high-frequency displacements (around 160 Hz, which is the resonance frequency of the springs) persist throughout the simulations.
Here is a simulation of my structure rotating at a constant angular speed:
In-time evolution of a random point of my structure in spherical coordinates
We can see that R is vibrating at a very high frequency. However, the displacement amplitude is clearly negligible.
To speed up the simulation, I want to suppress those vibrations!
Investigation:
To suppress them, I investigated signal filtering techniques, mainly low-pass filtering. On every loop of the simulation, what should enter the filter is the data of all my points on all axes.
Simulink low-pass filter block
The continuous version of the low-pass filter block in the Simulink library has been tested on the acceleration, the velocity, and the position, with several cut-off frequencies from 100 Hz to 500 Hz.
For example, for a cut-off frequency of 200 Hz and filtering the position, at t = 0.6 s I have:
In-time filtered evolution of a random point of my structure in spherical coordinates
It is an in-plane movement, so there is no elevation angle, but the azimuth angle and the point's distance from the center diverge completely.
The problem might come from:
The fact that I am in a closed-loop system
The fact that, for the mesh we have, the filter receives 81 vectors of size 3×1 at each time step, and maybe the filter block is not designed to handle that.
Main question:
Are there filtering techniques for closed-loop, multiple-input systems that could solve my problem?
The digital filter designer works with SISO signals. Just demux your signals and apply a lowpass filter to each. You gave a lot of info, which made it harder to understand the core problem; feel free to reiterate if anything is missing. I'd start with a 3rd-order Butterworth LPF with wc at around 100 Hz for your needs.
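For instance, a minimal sketch of that suggestion in MATLAB (butter and filter are from the Signal Processing Toolbox), assuming one demuxed scalar signal x and a sampling rate fs derived from the simulation step; fs = 1000 Hz here is an assumption:

```matlab
fs = 1000;                         % sampling frequency [Hz] (assumed)
fc = 100;                          % cut-off frequency [Hz], as suggested
[b, a] = butter(3, fc/(fs/2));     % 3rd-order Butterworth lowpass
xFilt = filter(b, a, x);           % filter one demuxed scalar signal
```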

Modeling a relationship between sensor values and position (angle and distance) to a target

I want to derive a simple model that can predict the current position of an object with respect to a target.
To be more specific, I have a head that has 4 identical light sensors placed 90 degrees apart. There is a light source (LED) emitting visible light. Since each sensor has an angular response (maximum at 90 degrees, with sensitivity decreasing as the angle of incidence of the light increases), the value received at each sensor is determined by the angle and distance of the head with respect to the target.
I measured the values at four sensors at various angles and distances.
Each sensor reads a maximum value of around 9.5 when the incoming light is low (either the sensor is far from the target or it faces away from the target), while the value decreases as the sensor gets close to the target or faces directly toward it.
My inputs and outputs look like:
[0.1234 0.0124 8.342 9.232] = [angle, distance]: an example with the head placed next to the light and facing toward it.
Four inputs from the sensors and two outputs for the angle and distance.
What strategy can I implement to derive an equation that I can use to predict the angle and distance from the current incoming sensor values?
I was thinking of multivariate regression, but my output is not a single scalar (it is a vector). I am not sure it will work.
Therefore, I am writing here to ask for some help.
Any help would be appreciated.
Thanks
Your idea about multivariate regression looks reasonable.
IMHO you need to train two models instead of one: the first one will predict the angle, and the second one will predict the distance.
Why would you want to combine these two models? That looks strange from the point of view of the optimization metric. When you build the angle model you minimize the error in radians; when you build the distance model you minimize the error in meters. So which metric would you minimize in the single-model case?
I believe the following links will be useful for you:
https://www.mathworks.com/help/curvefit/surface-fitting.html
https://www.mathworks.com/help/matlab/math/example-curve-fitting-via-optimization.html
Note: in some cases data normalization (for example, via zscore) greatly improves the fitting performance.
P.S. Also try asking at https://stats.stackexchange.com/
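For example, a minimal sketch of the two-model approach with fitlm (Statistics and Machine Learning Toolbox), assuming training data S (N-by-4 sensor readings), ang (N-by-1 angles), and dst (N-by-1 distances); all names are placeholders:

```matlab
mu = mean(S);  sg = std(S);            % normalization stats from the training data
Sz = (S - mu) ./ sg;                   % z-score the inputs (see the note above)
mdlAngle = fitlm(Sz, ang);             % model 1: sensor values -> angle
mdlDist  = fitlm(Sz, dst);             % model 2: sensor values -> distance

% Predict from a new reading (the example values from the question),
% normalized with the same training statistics:
sNew   = ([0.1234 0.0124 8.342 9.232] - mu) ./ sg;
angHat = predict(mdlAngle, sNew);
dstHat = predict(mdlDist,  sNew);
```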

How to calculate displacement from accelerometer readings?

I have accelerometer readings on three axes, X, Y, and Z, and will be getting data at a rate of 62 records per second. Could you please suggest how I can calculate the displacement?
Data in hand:
Accelerometer readings with respect to time.
Do I need to calculate the displacement using time-domain data, or do I need to convert it into the frequency domain? Which one will give accurate results?
You can double integrate the acceleration vector over time to obtain the displacement. In theory this is a perfectly sensible solution.
But in practice, there will always be a component of g (acceleration due to gravity) acting on at least one of the axes all the time. Let's say you subtract the g component from your xyz vectors. The problem is that any slight error in the readings (even if off by a small order of magnitude) will, under double integration, add up over time, rendering the displacement wildly inaccurate.
According to the integrated values, you will most likely see even an idle object fly off into space. You'll need an additional sensor to tell you the orientation, like a gyroscope, and some point of reference (the Wiimote does this with an IR sensor).
This is primarily a time domain problem, but you could have a frequency domain stage where some amount of filtering is done to remove measurement error or process error.
tl;dr Positional tracking with acceleration sensors alone is a hard problem.
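For reference, the double integration step itself is only a few lines in MATLAB; a minimal sketch, assuming acc is an N-by-3 matrix [ax ay az] in m/s^2 with gravity already removed (the hard part, as noted above), sampled at the stated 62 records per second:

```matlab
fs  = 62;                            % sample rate [records per second]
t   = (0:size(acc,1)-1).' / fs;      % time vector [s]
vel = cumtrapz(t, acc);              % velocity: first trapezoidal integration
pos = cumtrapz(t, vel);              % displacement: second integration
% Expect pos to drift badly over time; any bias in acc grows roughly as t^2.
```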

MATLAB machine learning: find the best model to merge 3 signals into 1 known outcome

I have ten measurements, each containing 4 signals, with a length of 6-8 hours each at a sample rate of 10 Hz (200k-300k samples per measurement).
3 of the signals are the x-, y-, and z-axes of an accelerometer (measuring acceleration in m/s^2). This sensor is positioned on a sphere. The sphere is inflated and deflated. The air fluctuation in the sphere is the 4th signal.
The accelerometer values are thus a representation of the fluctuation of air in the sphere. If the sensor is positioned on top of the sphere, a good correlation is seen between the air fluctuation and the Z-axis of the accelerometer. The X- and Y-axes should in a perfect situation be 0, but they are not, and they contain noise.
I have tried many methods of combining the three signals into one and comparing that to the air-fluctuation signal. At some points this does give good outcomes, but when the accelerometer is not moving along a single axis, the signal-to-noise ratio seems to be too poor (e.g., when the sensor is at a 45-degree angle).
I am wondering if machine learning in Matlab can be used here to automatically generate a model that makes the best fit: combine the 3 axis signals into one that best represents the 4th signal. I assume this is what machine learning is about?
I can provide the signals in filtered format, the integrated signals, the angle of the sensor at a given time, full signal or just a minute of data, etc.
I have, however, no idea how to start and which toolbox to use to tackle this problem. Also, what signals should I feed into the algorithm, and at what length (the full signal vs. a couple of seconds/minutes)?
Can you help me get started with the machine learning process, to turn the 3 signals (and/or formatted versions of them) into one signal that closely matches the fourth signal?
I'm not asking for full code, just what steps to take to tackle this problem, like "have a look at this toolbox, and at function x."
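One way to get started is to frame this as supervised regression from the 3 accelerometer axes to the 4th signal. A minimal sketch, assuming time-aligned arrays acc (N-by-3) and air (N-by-1) and using fitrensemble from the Statistics and Machine Learning Toolbox; the variable names and the 70/30 split are assumptions, not from the post:

```matlab
n = size(acc, 1);
idxTrain = 1:floor(0.7*n);            % first 70% of samples for training
idxTest  = floor(0.7*n)+1:n;          % remaining 30% for validation
mdl    = fitrensemble(acc(idxTrain,:), air(idxTrain)); % tree-ensemble regression
airHat = predict(mdl, acc(idxTest,:));                 % reconstructed 4th signal
rmse   = sqrt(mean((airHat - air(idxTest)).^2));       % fit quality on held-out data
```

The Regression Learner app in the same toolbox lets you compare several such model types interactively before committing to one.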

On how to apply k-means clustering and outline the clusters

I am reading about applications of clustering in human motion analysis. I started out with random numbers and applied the k-means clustering algorithm, but I want graphs that circle the clusters, as shown in the picture. Basically, the lines represent the motion trajectory. I would appreciate ideas on how to obtain the motion trajectory of a person. The application is patient monitoring, where the trajectory will be used to detect abnormal behavior.
I will be using a Kinect and recording the motion trajectory based on skeleton tracking. So, I will be recording the 4 quaternion values of the Head, Shoulder, and Torso joints, plus the RGBD (red, green, blue, depth) value, which is combined into 1 value for each of these joints. So there are a total of 4*3 + 3 = 15 time series, i.e., 15 variables. How do I convert them to represent the trajectories shown below and then apply clustering to cluster the trajectories? The clusters will then allow classification.
Can somebody please show how to obtain a diagram similar to the one attached? And how do I fuse and convert the 15 time series from each person into a single trajectory?
The picture illustrates the number of clusters that are generated for the time series. Thank you in advance.
K-means is a bad fit for trajectories.
It needs to be able to compute the mean (which is why it is called "k-means"), and having a stable, sensible mean is important. But how meaningful is the mean of a set of time series, even if you could define one (and the series weren't, e.g., of different lengths and different movement speeds)?
Try hierarchical clustering and multivariate dynamic time warping.
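A minimal sketch of that combination, assuming trajs is a cell array of multivariate time series (each D-by-T_i, so columns are samples); dtw is from the Signal Processing Toolbox, linkage/cluster are from the Statistics and Machine Learning Toolbox, and the cluster count of 4 is an arbitrary assumption:

```matlab
n = numel(trajs);
D = zeros(n);                             % pairwise DTW distance matrix
for i = 1:n
    for j = i+1:n
        D(i,j) = dtw(trajs{i}, trajs{j}); % DTW copes with different lengths/speeds
        D(j,i) = D(i,j);
    end
end
Z = linkage(squareform(D), 'average');    % agglomerative hierarchical clustering
labels = cluster(Z, 'maxclust', 4);       % cut the dendrogram into 4 clusters
```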