Uncertainty of sun-earth distance in PyEphem

I wonder if anybody has ever assessed the uncertainty of the sun-earth distance calculations in PyEphem? Since the distance might be used as an input to other calculations, its uncertainty would be relevant for further uncertainty analysis.
Cheers,
F

Yes! The accuracy is well defined.
The libastro library that PyEphem relies on uses the VSOP87 planetary theory — here it is in the PyEphem source code:
https://github.com/brandon-rhodes/pyephem/blob/master/libastro-3.7.7/vsop87.c
And here is a Wikipedia article giving more information:
https://en.wikipedia.org/wiki/VSOP_(planets)#VSOP87
The Wikipedia article states that for the Earth-Moon barycenter, VSOP87 provides one arcsecond (1") of accuracy. If you are curious how far off the Earth-Moon barycenter's position can be while still remaining within an arcsecond of accuracy, simply compute the circumference of a circle whose radius is the distance between the Earth-Moon system and the Sun (149.6 billion meters, according to Google) and divide by 360 (degrees), then by 60 (to get arcminutes), and then by 60 again (to get arcseconds):
149.6e9 m * 2 * pi / 360 / 60 / 60
-> 725 km
So over the thousands of years for which VSOP87 provides good numbers, the position of the Earth-Moon barycenter relative to the Sun might be off by as much as the width of Germany.
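If you want to see the number PyEphem itself produces, here is a minimal sketch (assuming PyEphem is installed; earth_distance is the attribute PyEphem exposes for the distance, in astronomical units):

import ephem

# Compute the Sun's position as seen from Earth for a given date.
sun = ephem.Sun()
sun.compute('2024/1/1')

print(sun.earth_distance)                # distance in astronomical units
print(sun.earth_distance * 149597870.7)  # the same distance in kilometers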

Related

Is there any numerical-accuracy difference between calculating sin(pi/2-A) and cos(A) in MATLAB?

I am reading a MATLAB function for calculating great-circle distance, written by a senior colleague of mine.
The distance between two points on the earth surface should be calculated using this formula:
d = r * arccos[sin(lat1) * sin(lat2) + cos(lat1) * cos(lat2) * cos(long2 - long1)]
However, the script has the code like this:
dist = (acos(cos(pi/180*(90-lat2)).*cos(pi/180*(90-lat1))+sin(pi/180*(90-lat2)).*sin(pi/180*(90-lat1)).*cos(pi/180*(diff_long)))) .* r_local;
(-180 < long1,long2 <= 180, -90 < lat1,lat2 <= 90)
Why are sin(pi/2-A) and cos(pi/2-A) used to replace cos(A) and sin(A)?
Doesn't using the constant pi introduce an additional error source?
Since lat1 and lat2 might be very close to zero in my work, is this a trick to improve the numerical accuracy of MATLAB's sin() and cos() functions?
I look forward to answers that explain how the trigonometric functions in MATLAB work and that analyze their error when the argument is close or equal to 0 or pi/2.
If the purpose is to increase accuracy, this is a very poor idea. When the angle A is small, computing 90-A destroys accuracy; it can even make tiny angles vanish outright (in floating point, 90-ε rounds to exactly 90 once ε is small enough).
Conversely, the sine of a tiny angle is very close to the angle itself (in radians) and is for that reason computed quite accurately, while the cosine is virtually 1, or 1 - A²/2. For top accuracy on tiny angles, you can resort to the versine, versin(A) := 1 - cos(A) = 2 sin²(A/2), and rework the equations in terms of 1 - versin(A) instead of cos(A).
If the angle is close to 90°, accuracy is lost anyway; computing 90°-A will not restore it.
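To see the effect numerically, here is a minimal sketch in plain Python (IEEE doubles; this is the radian equivalent of the 90°-A detour):

import math

a = 1e-17  # a tiny angle, in radians

# Direct evaluation: the sine of a tiny angle is essentially exact.
print(math.sin(a))                # 1e-17

# The pi/2 - a detour: a is below half an ulp of pi/2, so the
# subtraction rounds to pi/2 and the tiny angle vanishes entirely.
print(math.cos(math.pi / 2 - a))  # ~6.12e-17, the angle is lost

# Versine route for cosines near 1: the naive 1 - cos(a) underflows
# to zero, while 2*sin(a/2)**2 keeps the significant digits.
print(1 - math.cos(a))            # 0.0
print(2 * math.sin(a / 2) ** 2)   # 5e-35, the correct versine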
I very much doubt this has to do with accuracy. Or at least, I don't think this helps any when it comes to accuracy.
The maximum of both |sin(pi/2-A) - cos(A)| and |cos(pi/2-A) - sin(A)| is 1.1102e-16, which is very small. This is just basic floating-point accuracy, and there's really no way of telling which of the two numbers is more correct. Note that cos(pi/2) = 6.1232e-17 in double precision. So if theta = 0, your colleague's cos(pi/2-0) will be off by 6.1232e-17, while the obvious sin(0) is exactly correct.
If you need numbers that are more accurate than this then you can try vpa.
I guess this is either because your colleague found another formula and implemented that, or he/she's confused and has attempted to increase the accuracy.
The latter might be the case if he/she tried to avoid the approximations sin(theta) ≈ theta and cos(theta) ≈ 1 for small values of theta. However, this doesn't make sense, since cos(pi/2-theta) ≈ theta and sin(pi/2-theta) ≈ 1 for small values of theta.
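The claims above are easy to check yourself; here is a quick sketch that samples both differences over the first quadrant (plain Python; the sample count is arbitrary):

import math

angles = [i * (math.pi / 2) / 100_000 for i in range(100_001)]

d1 = max(abs(math.sin(math.pi / 2 - a) - math.cos(a)) for a in angles)
d2 = max(abs(math.cos(math.pi / 2 - a) - math.sin(a)) for a in angles)
print(d1, d2)                 # both on the order of 1e-16 (about one ulp of 1.0)

print(math.cos(math.pi / 2))  # 6.123233995736766e-17, not exactly zero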
Your best chance is to ask the author of the code where those expressions come from, if that is possible.
It may be that the original expressions come from navigation formulae written when calculations were done manually: pencil, paper, and ruler, with no computers or calculators.
Tables and graphs were then used to speed up the work: pi-x was equivalent to reading the table from the other side, or reading the graph upside-down.

Sound source localization with Matlab

I have three recordings of a signal taken with an array of three hydrophones (one sound source).
I would like to estimate the source localization using the time of arrival differences for the three recordings. In Matlab, I started the following, by estimating the time of arrival differences with the GCC-PHAT algorithm (Generalized cross-correlation):
% Read the three recordings; signal 1 serves as the reference.
[sig1, fs] = audioread('signal1.wav');
sig2 = audioread('signal2.wav');
sig3 = audioread('signal3.wav');
refsig = sig1;

% Estimate the delay of sig2 and sig3 relative to refsig with GCC-PHAT.
[tau_est, R, lags] = gccphat([sig2, sig3], refsig, fs);
disp(tau_est * fs)  % the estimated delays, expressed in samples
This gives me the time-of-arrival differences (tau) of signals 2 and 3 relative to signal 1.
Now I would like to get the direction of arrival estimates (DOAs) and proceed with triangulation to assess source position.
Any help will be greatly appreciated!
Some additional information is needed to convert the time-of-arrival-difference estimates to directions of arrival:
Spatial coordinates of each mic / hydrophone in the array.
Speed of sound. In seawater it varies significantly with temperature, salinity, and depth; see this Wikipedia article for an empirical formula.
Once you know this, use equation (1) from
https://people.engr.ncsu.edu/kay/msf/sound.htm to estimate the angle of arrival θ for each mic pair:
θ = sin⁻¹(speed_of_sound * time_difference_of_arrival / distance_between_mics)
A limitation is that the angle of arrival estimated from a single mic pair has front-back ambiguity. For instance an angle at θ = 30 degrees would have the same time difference of arrival as θ = 150 degrees. You can think of the sin⁻¹ inverse function as multivalued to represent this ambiguity. Also, keep in mind that the formula derivation assumes plane wave propagation from a source at an infinite distance, and it ignores shadowing and diffraction effects around the mic, so obviously there is some inaccuracy. But it is simple and often works decently well.
From there, you can combine angle estimates from the three mic pairs to triangulate the source position.
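Putting the formula into code, a minimal sketch (NumPy; the sound speed, hydrophone spacing, and TDOA values below are placeholders, with tau_est standing in for the output of the gccphat step above):

import numpy as np

c = 1500.0  # assumed speed of sound in seawater, m/s (site-dependent)
d = 1.0     # assumed spacing of each hydrophone pair, m
tau_est = np.array([1.2e-4, -3.4e-4])  # example TDOAs from gccphat, s

# Equation (1): theta = arcsin(c * tau / d). The clip guards against
# |c*tau/d| slightly exceeding 1 due to estimation noise.
theta = np.degrees(np.arcsin(np.clip(c * tau_est / d, -1.0, 1.0)))
print(theta)          # angle of arrival for each pair, in degrees
print(180.0 - theta)  # the front-back ambiguous alternatives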

fitness in inverted pendulum

What is the fitness function used to solve an inverted pendulum?
I am evolving neural networks with genetic algorithm. And I don't know how to evaluate each individual.
I tried minimizing the angle of the pendulum and maximizing the distance traveled by the end of the evaluation time (10 s), but this doesn't work.
The inputs to the neural network are the cart velocity, cart position, pendulum angular velocity, and pendulum angle at time t. The output is the force applied at time t+1.
Thanks in advance.
I found this paper, which lists their objective function [the equation itself is given in the paper], where "Xmax = 1.0, thetaMax = pi/6, X'max = 1.0, theta'Max = 3.0, N is the number of iteration steps, T = 0.02 * TS and Wk are selected positive weights." (Those are the paper's specific values for angles, velocities, and positions; you will want to use your own values depending on the boundary conditions of your pendulum.)
The paper also states: "The first and second terms determine the accumulated sum of normalised absolute deviations of X1 and X3 from zero and the third term when minimised, maximises the survival time."
That should be more than enough to get you started, but I HIGHLY recommend you read the whole paper. It's a great read, and I found it quite educational.
You can make your own fitness function, but I think the idea of using the position, velocity, angle, and rate of change of the angle of the pendulum is a good one. You can, however, use those variables in very different ways than the paper's authors chose to; a sketch in the paper's spirit follows below.
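For concreteness, here is a minimal sketch of such a fitness function (this is not the paper's exact equation, just my paraphrase of its description; the weights, limits, and sampling scheme are placeholders you would tune):

def fitness(states, dt, t_total=10.0,
            x_max=1.0, theta_max=3.14159 / 6,
            w1=1.0, w2=1.0, w3=1.0):
    """Lower is better. states = [(x, x_dot, theta, theta_dot), ...],
    one tuple per control step of dt seconds, collected until the
    pendulum falls over or the evaluation time t_total runs out."""
    n = len(states)
    # Accumulated normalised absolute deviations of position and angle
    # from zero (the first and second terms in the paper's description).
    dev_x = sum(abs(s[0]) / x_max for s in states) / n
    dev_theta = sum(abs(s[2]) / theta_max for s in states) / n
    # The third term: minimising it maximises the survival time.
    survival = (t_total - n * dt) / t_total
    return w1 * dev_x + w2 * dev_theta + w3 * survival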
It wouldn't hurt to read up on harmonic oscillators either. They take the general form:
m*x'' + B*x' + k*x = A*cos(w*t)
(where B or A may be 0, depending on whether the oscillator is damped/undamped or driven/undriven, respectively).
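For reference, a minimal sketch integrating that equation numerically (SciPy; the coefficients here are arbitrary examples):

import numpy as np
from scipy.integrate import solve_ivp

m, B, k, A, w = 1.0, 0.2, 4.0, 0.5, 1.5  # arbitrary example coefficients

# m*x'' + B*x' + k*x = A*cos(w*t), rewritten as a first-order system.
def oscillator(t, y):
    x, v = y
    return [v, (A * np.cos(w * t) - B * v - k * x) / m]

sol = solve_ivp(oscillator, (0.0, 20.0), [1.0, 0.0], max_step=0.01)
print(sol.y[0, -1])  # displacement at t = 20 s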

How to convert distance into probability?

Can anyone shed some light on my MATLAB program?
I have data from two sensors, and I'm doing a kNN classification for each of them separately.
In both cases the training set looks like a set of vectors, 42 rows in total, like this:
[44 12 53 29 35 30 49;
54 36 58 30 38 24 37;..]
Then I get a sample, e.g. [40 30 50 25 40 25 30], and I want to classify it to its closest neighbor.
As the criterion of proximity I use the Euclidean metric, sqrt(sum(Y.^2)), where Y is the element-wise difference between the sample and a training vector; this gives me an array of distances between the sample and each class of the training set.
So, two questions:
Is it possible to convert the distances into a probability distribution, something like: Class 1: 60%, Class 2: 30%, Class 3: 5%, Class 5: 1%, etc.?
Added: up to this moment I have been using the formula probability = distance / sum of distances, but I cannot plot a correct CDF or histogram from it.
This gives me a distribution of sorts, but I see a problem there: if the distance is large, for example 700, the closest class still gets the biggest probability, which would be wrong because such a distance is too big for the sample to be comparable with any of the classes.
If I could get two probability density functions, I guess I would then take some product of them. Is that possible?
Any help or remark is highly appreciated.
I think there are multiple ways of doing this:
as Adam suggested, use (1/d) / sum(1/d)
use the square, or an even higher power, of the inverse distance, e.g. (1/d^2) / sum(1/d^2). This makes the class probability distribution more skewed: if 1/d gives a 40%/60% prediction, 1/d^2 may give 10%/90%.
use softmax (https://en.wikipedia.org/wiki/Softmax_function) on the negated distances
use exp(-d^2/sigma^2) / sum(exp(-d^2/sigma^2)), which imitates Gaussian likelihoods. Sigma could be the average within-cluster distance, or simply set to 1 for all clusters.
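All four variants in one minimal sketch (NumPy; the distance values are made up for illustration):

import numpy as np

dists = np.array([700.0, 900.0, 1200.0])  # example distances to each class

inv = (1.0 / dists) / np.sum(1.0 / dists)         # inverse distance
inv2 = (1.0 / dists**2) / np.sum(1.0 / dists**2)  # squared inverse, more skewed

# Softmax of the negated distances; shifting by the minimum distance keeps
# the exponentials numerically stable. Note that raw distances this large
# saturate the softmax, so some rescaling is usually needed in practice.
e = np.exp(-(dists - dists.min()))
soft = e / e.sum()

# Gaussian-likelihood style, with sigma set to the mean distance here.
sigma = dists.mean()
gauss = np.exp(-dists**2 / sigma**2)
gauss /= gauss.sum()

print(inv, inv2, soft, gauss, sep='\n')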
You could try inverting your distances to get a likelihood measure: the bigger the distance x, the smaller its inverse. Then you can normalize, as in probability = (1/distance) / sum(1/distance).
Hi, have you tried the formula probability = 1 - distance, assuming that you are using a standardized distance between 0 and 1?

Accelerometer signal segmentation

I have a 1D accelerometer signal (one axis only). I would like to create a robust algorithm, which would be able to recognize some shapes in the signal.
At first I apply a moving average filter to the raw signal. On the attached picture the raw signal is coloured red and the averaged signal is black. As seen from the picture, some trends are visible from the averaged (black) signal - the signal contains 10 repetitions of a peak like pattern, where acceleration climbs to a maximum and then drops back down. I have marked the beginnings and endings of those patterns with a cross.
So my goal is to find the marked positions automatically. The problems making the pattern extraction difficult are:
the start of the pattern could have a different y value than the end of the pattern
the pattern could have more than one peak
I do not have any concrete time information (from start to the end of the pattern it takes A time units)
I have tried different approaches, which are pretty much home-brew, so I won't mention them - I don't want you to be biased by my way of thinking. Are there some standard or by the books approaches for doing that kind of pattern extraction? Or maybe does anyone know how to tackle the problem in a robust way?
Any idea will be appreciated.
Keep it simple!
It appears the moving average is a good enough damping device; keep it as is, perhaps only increasing or decreasing its sample count if you notice that it either leaves too much noise or removes too much signal, respectively. You then work off this averaged signal exclusively.
The pattern markers you seek seem relatively easy to detect. Expressed in English, these markers are:
Targets = the valley points (local minima) of the averaged readings curve, where the slope goes from markedly negative to markedly positive.
You should therefore be able to detect this situation by comparing slope values, calculated along with the moving average, as each new reading becomes available (with a short delay, of course, since the slope at a given point can only be calculated once the averaged readings for the next point or two are available).
To avoid false detections, however, you need to define a few parameters aimed at filtering out undesirable patterns. These parameters define more precisely the meaning of "markedly" in the above target definition.
Tentatively, the formula for detecting a point of interest could be as simple as this:
(S(t) - S(t-1)) > Min_delta_Slope
where
S(t-1) and S(t) are the slopes (more on these below) at times t-1 and t, respectively
Min_delta_Slope is a parameter defining how "sharp" a change in slope we require at a minimum.
Assuming normalized t and Y units, we can set the Min_delta_Slope parameter close to or even past 1. Intuitively, a value of 1 (again, in normalized units) would target points where the curve "arrived" with a downward slope of, say, 50% and left with an upward slope of 50% (or 40% and 60%, or even 10%, i.e. almost flat, and 90%, i.e. almost vertical).
To avoid detecting points that are merely small dips in the curve, we can take more points into consideration, with a fancier formula such as:
(Pm2 * S(t-2) + Pm1 * S(t-1) + P0 * S(t) + Pp1 * S(t+1)) > Min_delta_Slope
where
Pm2, Pm1, P0, and Pp1 are coefficients giving relative importance to the slope at various points before and after the point of interest (Pm2 and Pm1 would typically be negative, unless we use only positive parameters and put the negative signs in the formula itself)
S(t±n) is the slope at the various times
and Min_delta_Slope is, as before, the parameter defining how "sharp" a change in slope we require at a minimum.
Intuitively, this four-point formula takes into account the shape of the curve a couple of readings before and after the point of interest (in addition to the points right next to it). Given proper parameter values, the formula requires that the curve come steadily "down" for two time slices and then go steadily up for the following two, thereby avoiding marking smaller dips in the curve.
An alternative way to achieve this may be to compute the slope from the difference in Y value between the [averaged] reading two (or more) time slices ago and the current [averaged] reading. The two approaches are similar but produce slightly different results; generally, the Pm2, Pm1, P0, and Pp1 parameters give us more say over the desired shape of the curve. A minimal code sketch of the simpler two-slope test is given below.
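Here is that sketch (NumPy; the window length and threshold are placeholders to tune, and the signal is assumed to be roughly normalized as discussed above):

import numpy as np

def find_pattern_starts(raw, window=15, min_delta_slope=0.5):
    """Return indices where the smoothed slope jumps from negative
    to positive by more than min_delta_slope (normalized units)."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(raw, kernel, mode='valid')  # moving average
    slope = np.diff(smoothed)                          # S(t), one per step
    hits = []
    for t in range(1, len(slope)):
        # Negative-to-positive sign change with a sharp enough delta.
        if slope[t - 1] < 0 < slope[t] and slope[t] - slope[t - 1] > min_delta_slope:
            hits.append(t + window // 2)  # rough shift back to raw indices
    return hits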
You might want to look at watershed segmentation, which does a related kind of thing (dividing landscapes into their separate catchment basins). Oddly enough, I'm actually writing a PhD thesis which uses watershed a lot at the moment (seriously :))