modeling a relationship between sensor values and position (angle and distance) to a target - matlab

I want to derive a simple model that can predict the current position of an object with respect to a target.
To be more specific, I have a head with 4 identical light sensors placed 90 degrees apart. There is a light source (LED) emitting visible light. Since each sensor has an angular sensitivity profile (maximum at 90 degrees, with sensitivity decreasing as the angle of incidence of the light increases), the value received at each sensor is determined by the angle and distance of the head with respect to the target.
I measured the values at four sensors at various angles and distances.
Each sensor reads a maximum value of around 9.5 when the incoming light is low (either the sensor is far from the target or it faces away from the target), while the value decreases as the sensor gets close to the target or faces directly toward it.
My inputs and outputs look like this:
[0.1234 0.0124 8.342 9.232] = [angle, distance]: an example of the head placed next to and facing toward the light.
There are four inputs from the sensors and two outputs for the angle and distance.
What strategy can I implement to derive an equation that I can use to predict the angle and distance from the current incoming sensor values?
I was thinking of multivariate regression, but my output is not a single scalar (it is more of a vector), and I am not sure it will work.
Therefore, I am writing here to ask for help.
Any help would be appreciated.
Thanks

Your idea about multivariate regression looks reasonable.
IMHO you need to train two models instead of one: the first one will predict the angle, and the second one will predict the distance.
Why do you want to combine these two models? That looks strange in the sense of the optimization metric: when you build the angle model you minimize the error in radians, and when you build the distance model you minimize the error in meters. So which metric would you minimize in the single-model case?
I believe the following links will be useful for you:
https://www.mathworks.com/help/curvefit/surface-fitting.html
https://www.mathworks.com/help/matlab/math/example-curve-fitting-via-optimization.html
Note: in some cases data normalization (for example via zscore) greatly improves the fitting performance.
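As a minimal sketch of the two-model idea in MATLAB (the synthetic data below is only a stand-in for your calibration measurements, and fitlm is just a placeholder for whatever regressor you settle on):

N = 200;                             % synthetic stand-in data; replace
S    = 9.5 - rand(N,4);              % with your N-by-4 sensor readings,
ang  = rand(N,1)*pi;                 % measured angles,
dist = rand(N,1)*2;                  % and measured distances

Sz = zscore(S);                      % normalize inputs, as noted above
mdlAngle = fitlm(Sz, ang);           % model 1: sensors -> angle
mdlDist  = fitlm(Sz, dist);          % model 2: sensors -> distance

s  = [0.1234 0.0124 8.342 9.232];    % a new reading to predict from
sz = (s - mean(S)) ./ std(S);        % apply the same normalization
angHat  = predict(mdlAngle, sz);
distHat = predict(mdlDist,  sz);

If a linear fit is too crude, the same two-model split works with any of the fitr* regressors or with the surface-fitting tools linked above.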
P.S. Also try asking at https://stats.stackexchange.com/

Related

How to correct (remove bias from) IMU data from accelerometer and gyroscope measurements?

I am currently working on a project to fuse GNSS and IMU data for a more accurate navigation system for autonomous vehicles. I am very familiar with using GNSS to get an accurate position; however, I'm a newbie at using IMU sensors. I've read several pieces of literature but am still confused about the best way to remove bias from the accelerometer and gyroscope measurements.
I have 2 kinds of raw measurement data from an MPU-9250: acceleration data (m/s^2) on the x, y, and z axes, and angular velocity data (deg/s), also on the x, y, and z axes. I have tried to feed these data into my sensor fusion program. Unfortunately, I was not satisfied with the accuracy. Hence I think I should first correct the raw IMU data (remove the bias), and then the corrected IMU data can be input to my fusion program.
I couldn't find an answer that my brain could understand or that fits my situation. Can someone please share some information about this? Can I use a high-pass filter or a low-pass filter in this situation?
I would really appreciate it if someone could explain this to me in detail without using complex math formulas/symbols; I'm not a mathematician, and this is one of my problems when looking for information.
Thank you in advance
Accelerometers and gyroscopes usually have substantial bias. You can break the bias down into factors like:
Constant bias
Bias induced by temperature variation
Bias instability
The static part of the bias is easy to subtract out. If the unit starts from a level orientation and without any movement, you can take samples for ~1 s, average them, and subtract the average from your readings. Although this step removes a big chunk of the bias, it still cannot fully remove it (due to the leveling not being perfect).
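A minimal sketch of that static step, assuming imuRaw holds readings logged while the unit sits level and motionless (the sample rate fs and the stand-in data are assumptions):

fs = 100;                              % sample rate in Hz (assumption)
imuRaw = 0.05 + 0.01*randn(5*fs, 3);   % stand-in for logged gyro/accel data
nStill = round(1*fs);                  % ~1 s of stationary samples
bias = mean(imuRaw(1:nStill,:), 1);    % average over the still period
imuCorrected = imuRaw - bias;          % subtract the static bias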
If you observe that the temperature of the IMU die varies during operation (even 5-10 degrees matters), log the bias together with the temperature (the MPU-9250 has a built-in temperature sensor). Fit a linear or quadratic curve that captures bias against temperature. Later on, use the temperature reading to estimate the bias and subtract it out.
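A sketch of that compensation, with tempLog/biasLog standing in for values recorded during a calibration run and tempNow for the current temperature reading:

tempLog = 20 + 15*rand(100,1);         % calibration temperatures (stand-in)
biasLog = 0.01 + 2e-3*(tempLog - 20);  % observed bias at each (stand-in)
p = polyfit(tempLog, biasLog, 2);      % quadratic bias-vs-temperature fit
tempNow = 27;                          % current die temperature
biasEst = polyval(p, tempNow);         % estimated bias right now
% corrected = raw - biasEst;           % subtract it from the readings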
Even after implementing steps 1 and 2, there will still be some stubborn bias left. If that data is used in a fusion algorithm like a Kalman filter that is not formulated to estimate bias, the resulting position and orientation estimates will be biased too.
Bias can be estimated along with the important states (like position) using some external reference/sensor like GNSS or a camera.
A complementary filter (low-pass + high-pass) or a Kalman filter can be formulated for this purpose.
Kalman filter approach:
A good amount of intuition, along with some mathematics, is needed to use this approach. Basically, the work involves formulating a prediction model and a measurement model and then providing rough noise variances for your measurements and prediction. An important thing to understand is that the Kalman filter assumes that the errors follow a normal distribution without any bias. So the formulation should deliberately include the bias terms as unknown states to be estimated too (do not assume that the sensor is bias-free in the formulation).
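A minimal single-axis sketch of such a formulation, with the gyro bias as an extra state; dt, the noise values, and the gyro/accAngle logs are all illustrative assumptions:

% gyro: logged angular rate (biased), accAngle: noisy angle from accelerometer
dt = 0.01;
F  = [1 -dt; 0 1];                   % state transition for [angle; gyroBias]
H  = [1 0];                          % we measure the angle only
Q  = diag([1e-5 1e-8]);              % process noise (tune for your sensor)
R  = 1e-2;                           % measurement noise
x  = [0; 0];  P = eye(2);
for k = 1:numel(gyro)
    x = F*x + [dt*gyro(k); 0];       % predict: angle += dt*(gyro - bias)
    P = F*P*F' + Q;
    K = P*H' / (H*P*H' + R);         % Kalman gain
    x = x + K*(accAngle(k) - H*x);   % correct with the accelerometer angle
    P = (eye(2) - K*H)*P;
end
angleEst = x(1);  biasEst = x(2);    % the bias comes out as an estimated state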
You could check out my other answer to gain a detailed understanding of this approach.
Complementary filter approach:
A complementary filter is simpler, for simpler problems :P
The idea is that we use a low-pass filter on the noisy measurement and a high-pass filter on the biased measurement, then add them up and call it a day.
Make sure that the LPF and HPF are complements of each other (the transfer function of the HPF should be 1 - LPF). Typically, first-order filters with the same time constant are used. Additionally, the filter equations have to be converted from the continuous Laplace domain to discrete form (read about ZOH, Tustin's approximation, ...).
The final form is scattered around the internet too.
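For reference, a minimal discrete first-order form (tau and dt are tuning assumptions; gyro and accAngle are assumed logged as in the Kalman sketch above):

tau   = 5;  dt = 0.01;               % time constant and sample period
alpha = tau/(tau + dt);              % same constant for the LPF and HPF
angle = accAngle(1);
for k = 2:numel(gyro)
    % HPF on the integrated gyro + LPF on the accelerometer angle:
    angle = alpha*(angle + gyro(k)*dt) + (1 - alpha)*accAngle(k);
end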
Personally, I would use a Kalman filter for this purpose, but a complementary filter can be used with a similar amount of effort. You could do this:
Assume that the body is not accelerating on average over the long term (1-10 s or so). Then you can say that, over the long term, the accelerometer measures the direction of gravity relative to the IMU, and arctan(accy, accz) can be used to obtain an estimate of pitch and roll. These pitch and roll readings will suffer from substantial noise, so implement a low-pass filter on them with a time constant of ~5 seconds or so. Additionally, add dt*transformationMatrix*gyroscope to the latest pitch/roll to get another pitch and roll estimate; these suffer from bias instead, so implement an HPF over the gyro-based pitch and roll. Add the two together to get pitch and roll. Let's call these IMU_PR.
Now forget our original no-acceleration assumption. The accelerometer gives specific force (which is net acceleration minus gravity). Since we have the pitch and roll angles (IMU_PR), we know gravity's direction, so add gravity to the accelerometer readings to get an estimate of acceleration. Apply the proper frame conversion to bring this acceleration into the same coordinate frame as the GPS (you will need an estimate of yaw to do so; fuse a magnetometer with the gyroscope for this purpose). Then do vel = vel + acc*dt, and integrate again to get an estimate of position from the IMU. This will drift due to the bias in the accelerometer (and in pitch and roll), so implement a high-pass filter over this position and a low-pass filter over the GPS position to get the final estimate.
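A 1-D illustrative sketch of that final fusion step, assuming acc (gravity-compensated, GPS-frame acceleration) and gpsPos are logged at the same rate; tauPos is a tuning assumption:

dt = 0.01;  tauPos = 2;
beta = tauPos/(tauPos + dt);
vel = 0;  posImu = gpsPos(1);  posImuPrev = posImu;  posHat = gpsPos(1);
for k = 2:numel(acc)
    vel    = vel + acc(k)*dt;                      % integrate acceleration
    posImu = posImu + vel*dt;                      % integrate velocity
    posHat = beta*(posHat + (posImu - posImuPrev)) ...
           + (1 - beta)*gpsPos(k);                 % HPF(IMU pos) + LPF(GPS pos)
    posImuPrev = posImu;
end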

How to create a proper feedforward neural network with an evolutionary algorithm

I've created a 2D game where you use a map editor to place cars, obstacles, and a destination point where the cars should go.
The idea is that these cars will be controlled by generations of feedforward neural networks. But I'm not sure how the information should be represented in the input layer, or what exactly the evolution should look like, so I'll explain my idea; it would be great to have advice on how to make it better, especially if something will not work at all.
Input layer values:
Neurons in (0,1) representing the distance to the obstacle
A neuron in (-1,1) representing the speed of the car and its direction (-1: max speed backwards, 0: no speed, 0.5: half of max speed forward)
Two neurons in (-1,1) representing the cos and sin (or 2*asin/Pi and 2*(acos-pi/2)/Pi) of the vector from the car to the destination, relative to some constant canvas (map) axis
Output layer values:
A neuron in (-1,1) representing the acceleration of the car and its direction
A neuron in (-1,1) representing which way and how fast the car will turn
Looking at these values, I'm thinking about using the tanh function everywhere. But is it a good idea to use a single negative/positive value to determine a direction (as input or output)? Or is it better to use two neurons to tell the neural network where it should go (a direction obviously can't be conveyed with a single value), etc.?
I imagine the evolution itself to be about swapping some weight and bias values, mostly between the best networks (depending on fitness), and adding small random numbers to some weights and biases to mutate them (where the magnitude of the random number depends on fitness, in order to avoid destructively large changes in good networks).
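A minimal sketch of the fitness-scaled mutation and weight-swapping ideas described above (all names and constants are illustrative; fitness is assumed in (0,1], higher being better):

w = randn(20,1);  fitness = 0.8;           % stand-in weights and fitness
sigma = 0.1*(1 - fitness);                 % better networks mutate less
mask  = rand(size(w)) < 0.2;               % mutate ~20% of the weights
w(mask) = w(mask) + sigma*randn(nnz(mask),1);

% Crossover: swap a random subset of weights between two parents:
wA = randn(20,1);  wB = randn(20,1);
swap = rand(size(wA)) < 0.5;
[wA(swap), wB(swap)] = deal(wB(swap), wA(swap));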

How to calculate displacement from accelerometer readings?

I have accelerometer readings for three axes, X, Y, and Z, and will be getting data at a frequency of 62 records per second. Could you please suggest how I can calculate the displacement?
Data in hand:
Accelerometer readings with respect to time.
Do I need to calculate the displacement using the time-domain data, or do I need to convert to the frequency domain? Which one will give more accurate results?
You can double integrate the acceleration vector over time to obtain the displacement. In theory this is a perfectly sensible solution.
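A minimal sketch of that double integration (cumtrapz is in base MATLAB; the random acc here stands in for logged data with gravity already removed, which is exactly the hard part discussed next):

fs  = 62;                            % sample rate from the question
acc = 0.01*randn(1000,3);            % stand-in for N-by-3 readings in m/s^2
t   = (0:size(acc,1)-1)'/fs;         % time vector
vel = cumtrapz(t, acc);              % first integration: velocity
pos = cumtrapz(t, vel);              % second integration: displacement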
But in practice, there will always be a component of g (acceleration due to gravity) acting on at least one of the axes all the time. Let's say you subtract the g component from your xyz vectors. The problem is that any slight error in the readings, even a small one, will add up over time when double-integrated, rendering the displacement wildly inaccurate.
Going by the integrated values, you will most likely see even an idle object fly off into space. You'll need an additional sensor to tell you the orientation, like a gyroscope, and some point of reference (the Wiimote does this with an IR sensor).
This is primarily a time-domain problem, but you could have a frequency-domain stage where some amount of filtering is done to remove measurement error or process error.
tl;dr Positional tracking with acceleration sensors alone is a hard problem.

MATLAB machine learning: find the best model to merge 3 signals into 1 known outcome

I have ten measurements, each containing 4 signals, with a length of 6-8 hours at a sample rate of 10 Hz (200k-300k samples per measurement).
3 signals are the x-, y-, and z-axes of an accelerometer (measuring acceleration in m/s^2). This sensor is positioned on a sphere. The sphere is inflated and deflated. The air fluctuation in the sphere is the 4th signal.
The accelerometer values are thus a representation of the fluctuation of air in the sphere. If the sensor is positioned on top of the sphere, a good correlation is seen between the air fluctuation and the Z-axis of the accelerometer. The X- and Y-axes should in a perfect situation be 0, but they are not and contain noise.
I have tried many methods of combining the three signals into one and comparing that to the air-fluctuation signal. At some points this does give good outcomes, but when the accelerometer is not moving along a single axis (e.g. when the sensor is at a 45-degree angle), the signal-to-noise ratio seems to be too poor.
I am wondering whether machine learning in Matlab can be used here to automatically generate a model that makes the best fit: combining the 3 axis signals into one that best represents the 4th signal. I assume this is what machine learning is about?
I can provide the signals in filtered format, the integrated signals, the angle of the sensor at a given time, the full signal or just a minute of data, etc.
However, I have no idea how to start or which toolbox to use to tackle this problem. Also, what signals should I feed into the algorithm, and at what length (the full signal vs. a couple of seconds/minutes)?
Can you help me get started on the machine learning process, using the 3 signals (and/or formatted versions of them) to produce one signal that closely matches the fourth signal?
I'm not asking for full code, but for the steps to take to tackle this problem, like "have a look at that toolbox, and function x".
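As one possible starting point (not something prescribed by the post; fitrensemble from the Statistics and Machine Learning Toolbox is just one candidate regressor), the task can be framed as regression from the 3 axes to the 4th signal:

X = randn(1000,3);                   % stand-in for N-by-3 accelerometer axes
y = X(:,3) + 0.1*randn(1000,1);      % stand-in for the air-fluctuation signal
mdl  = fitrensemble(X, y);           % ensemble of regression trees
yHat = predict(mdl, X);              % reconstructed 4th signal
r    = corrcoef(yHat, y);            % how well does it match?

The Regression Learner app in the same toolbox lets you try several such models interactively before committing to one.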

Least squares in Matlab

A deployment of some (20 or so) sensors has detected a signal arriving from a certain direction. The sensors' inter-distance is 50 meters. The signal is observed in the sensors' data with a move-out that depends on the arrival direction (top right picture).
I need to find the location of the signal source in an area mapped into grid cells surrounding the sensors. I am aware of array signal processing algorithms (beamforming, ...) for location estimation, but I am trying to solve this as a least-squares problem.
By cross-correlating the sensor data (every sensor with the next), I find the delays between the observed signal's arrival times at the sensors (3rd picture). Now, in the simplest scenario of a homogeneous propagation velocity, say 2 km/s, and using the observed time differences and the positions of the sensors, how do I set up my cost function and optimize it so as to find the grid position most likely to contain the source? My problem is really the setting up of the cost function in Matlab.
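For concreteness, a minimal sketch of one way such a cost function could be set up (sensorPos, tdoaObs, and gridPos are illustrative names; tdoaObs holds the M-1 consecutive-pair delays as a column, and vecnorm needs R2017b or later):

v = 2000;                                        % propagation velocity in m/s
cost = @(src) sum(( ...
    (vecnorm(sensorPos(2:end,:)   - src, 2, 2) ...
   - vecnorm(sensorPos(1:end-1,:) - src, 2, 2))/v - tdoaObs).^2);

costs = zeros(size(gridPos,1), 1);               % gridPos: K-by-2 cell centers
for i = 1:size(gridPos,1)
    costs(i) = cost(gridPos(i,:));               % evaluate every grid cell
end
[~, best] = min(costs);
srcEst = gridPos(best,:);                        % cell with the smallest misfit

The same cost handle could also be passed to fminsearch for a continuous (off-grid) estimate.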
In a more difficult scenario, where the velocity changes across the grid, I need to calculate travel times from each sensor to every grid cell. I do that using an FD solver (see 4th and 5th pictures). Again, given the travel-time grids, the observed time differences, and the sensor positions, how do I find the most probable source?
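With precomputed travel-time grids the same residual idea would apply; a sketch assuming T is K-by-M (travel time from each of the K grid cells to each of the M sensors), with the other names as above:

res   = (T(:,2:end) - T(:,1:end-1)) - tdoaObs(:)';  % K-by-(M-1) residuals
costs = sum(res.^2, 2);                             % one least-squares cost per cell
[~, best] = min(costs);
srcEst = gridPos(best,:);                           % most probable source cell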
Thanks