I would like to use the astropy package to compute the times of equinoxes and solstices. I have worked with the pyephem package before, and it provides convenient functions for exactly this: one can, for example, say
>>> print(ephem.next_equinox(ephem.now()))
2019/9/23 07:50:14
and get the time of the next equinox. However, there are no such functions in astropy, so I thought I might try to compute the times from the definition: the vernal equinox is the moment when the ecliptic longitude of the Sun is zero, the summer solstice is the moment when the ecliptic longitude of the Sun is 90°, and so on.
So it seems that getting the ecliptic longitude of the Sun would be the essential step, and then I could somehow solve that function for time:
import astropy.coordinates

def sunEclipticLongitude(t):
    sun = astropy.coordinates.get_body('sun', t)
    eclipticOfDate = astropy.coordinates.GeocentricTrueEcliptic(equinox=t)
    sunEcliptic = sun.transform_to(eclipticOfDate)
    return sunEcliptic.lon.deg
My first thought was to use something from scipy.optimize to solve this function for time, but at this point I got stuck. The Sun's longitude is an angle, so there are obviously many solutions for lon=0 (this year's equinox, next year's equinox, ...). How do I find the next time (from a particular origin, for example now) when the Sun's longitude is zero? How do I find the previous time when it was zero? Also, the vernal equinox seems to be a particularly nasty case for solving, since the function has a discontinuity at that point: it jumps from 360 to 0. How do I handle that?
To answer your first question: the optimization strategy will still work for periodic functions, but the solution will depend on your starting point, so just pick a point that's already pretty close. You know that the equinoxes fall around March 20 and September 22, and the solstices around June 21 and December 21. Pick an arbitrary time of day on those dates, and you'll find the right solution.
As for the discontinuity in the angle, it isn't really there. It's just that in this particular convention you elect to represent angles as numbers between 0 and 360 degrees. But mathematically, an angle of 361 degrees means exactly the same thing as 1 degree and has just as much right to exist.
As a result, continuous periodic functions like sin(), cos() etc. will not show any discontinuity at that value of the angle (or any other value of the angle that you pick as minimum or maximum).
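You can check this numerically; a quick Python sketch:

import math

# sin() passes smoothly through the 0/360 wrap-around point:
for lon in (359.9, 360.0, 0.1):
    print(lon, math.sin(math.radians(lon)))
# the printed values pass smoothly through zero; there is no jump at 360 -> 0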
As you expected, the syzygies can be located with scipy.optimize, via a root-finding function, as in the code below. But rather than looking for the next syzygy, it looks for the nearest one, by picking points 44 days before and after the given date and expecting that there will be exactly one syzygy in between. Since Earth doesn't have seasons shorter than 88 days within plus-or-minus 50,000 years, that's hopefully a pretty safe bet, and the astropy ephemerides aren't accurate over that span anyway. I employed sin(angle*2) in the (poorly named) linearize(angle) function below to convert ecliptic longitude into a function that crosses zero at each quadrant.
The code is also in a gist, which might get refined: see find syzygy / equinox / solstice with astropy
# Find the nearest syzygy (solstice or equinox) to a given date.
# Solution for https://stackoverflow.com/questions/55838712/finding-equinox-and-solstice-times-with-astropy
# TODO: need to ensure we're using the right sun position functions for the syzygy definition....
import math

from astropy.time import Time, TimeDelta
import astropy.coordinates
from scipy.optimize import brentq
from astropy import units as u

# We'll usually find a zero crossing if we look this many days before and after
# the given time, except when it is within a few days of a cross-quarter day.
# But going over 50,000 years back, season lengths can vary from 85 to 98 days!
# https://individual.utoronto.ca/kalendis/seasons.htm#seasons
delta = 44.0

def mjd_to_time(mjd):
    "Return a Time object corresponding to the given Modified Julian Date."
    return Time(mjd, format='mjd', scale='utc')

def sunEclipticLongitude(mjd):
    "Return ecliptic longitude of the sun in degrees at given time (MJD)"
    t = mjd_to_time(mjd)
    sun = astropy.coordinates.get_body('sun', t)
    # TODO: Are these the right functions to call for the definition of syzygy? Geocentric? True?
    eclipticOfDate = astropy.coordinates.GeocentricTrueEcliptic(equinox=t)
    sunEcliptic = sun.transform_to(eclipticOfDate)
    return sunEcliptic.lon.deg

def linearize(angle):
    """Map angle values in degrees near the quadrants of the circle
    into smooth functions crossing zero, for root-finding algorithms.

    Note that for angles near 90 or 270, increasing angles yield decreasing results.
    >>> linearize(5) > 0 > linearize(355)
    True
    >>> linearize(95) > 0 > linearize(85)
    False
    """
    return math.sin(math.radians(angle * 2))

def map_syzygy(t):
    "Map times (MJD) into linear functions crossing zero at each syzygy"
    return linearize(sunEclipticLongitude(t))

def find_nearest_syzygy(t):
    """Return the precise Time of the nearest syzygy to the given Time,
    which must be within 43 days of one syzygy.
    """
    syzygy_mjd = brentq(map_syzygy, t.mjd - delta, t.mjd + delta)
    syzygy = mjd_to_time(syzygy_mjd)
    syzygy.format = 'isot'
    return syzygy

if __name__ == '__main__':
    import doctest
    doctest.testmod()
    t0 = Time('2019-09-23T07:50:10', format='isot', scale='utc')
    td = TimeDelta(1.0 * u.day)
    seq = t0 + td * range(0, 365, 15)
    for t in seq:
        try:
            syzygy = find_nearest_syzygy(t)
        except ValueError as e:
            print(f'{e=}, {t.value=}, {t.mjd-delta=}, {map_syzygy(t.mjd-delta)=}')
            continue
        print(f'{t.value=}, {syzygy.value=}, {sunEclipticLongitude(syzygy.mjd)=}')
I am trying to get a remainder using Swift's truncatingRemainder(dividingBy:) method, but I am getting a non-zero remainder even when the value is evenly divisible by the divisor. I have tried a number of solutions available here, but none worked.
P.S. The values I am using are Double (I tried Float as well).
Here is my code.
let price = 0.5
let tick = 0.05
let remainder = price.truncatingRemainder(dividingBy: tick)
if remainder != 0 {
    return "Price should be in multiple of tick"
}
I am getting 0.049999999999999975 as remainder which is clearly not the expected result.
As usual (see https://floating-point-gui.de), this is caused by the way numbers are stored in a computer.
According to the docs, this is what we expect:
let price = //
let tick = //
let r = price.truncatingRemainder(dividingBy: tick)
let q = (price/tick).rounded(.towardZero)
tick*q+r == price // should be true
In the case where it looks to your eye as if tick evenly divides price, everything depends on the inner storage system. For example, if price is 0.4 and tick is 0.04, then r is vanishingly close to zero (as you expect) and the last statement is true.
But when price is 0.5 and tick is 0.05, there is a tiny discrepancy due to the way the numbers are stored, and we end up with the odd situation where r, instead of being vanishingly close to zero, is vanishingly close to tick! And of course the last statement is then false.
You'll just have to compensate in your code. Clearly the remainder cannot be the divisor, so if the remainder is vanishingly close to the divisor (within some epsilon), you'll just have to disregard it and call it zero.
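Sketched in Python here, since IEEE-754 arithmetic behaves the same way as in Swift (the function name and epsilon are my illustrative choices, not a library API):

import math

def is_multiple_of_tick(price, tick, eps=1e-9):
    # The remainder may land just above 0 *or* just below tick,
    # so treat both ends as "vanishingly close to zero".
    r = math.fmod(price, tick)
    return min(abs(r), abs(tick - r)) < eps

print(math.fmod(0.5, 0.05))             # 0.049999999999999975 -- the same issue
print(is_multiple_of_tick(0.5, 0.05))   # True
print(is_multiple_of_tick(0.52, 0.05))  # False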
You could file a bug on this but I doubt that much can be done about it.
Okay, I put in a query about this and got back that it behaves as intended, as I suspected. The reply (from Stephen Canon) was:
That's the correct behavior. 0.05 is a Double with the value 0.05000000000000000277555756156289135105907917022705078125. Dividing 0.5 by that value in exact arithmetic gives 9 with a remainder of 0.04999999999999997501998194593397784046828746795654296875, which is exactly the result you're seeing.
The only rounding error that occurs in your example is in the division price/tick, which rounds up to 10 before your .rounded(.towardZero) has a chance to take effect. We'll add an API to let you do something like price.divided(by: tick, rounding: .towardZero) at some point, which will eliminate this rounding, but the behavior of truncatingRemainder is precisely as intended.
You really want to have either a decimal type (also on the list of things to do) or to scale the problem by a power of ten so that your divisor becomes exact:
1> let price = 50.0
price: Double = 50
2> let tick = 5.0
tick: Double = 5
3> let r = price.truncatingRemainder(dividingBy: tick)
r: Double = 0
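For comparison, here is what a decimal type buys you, sketched with Python's decimal module (0.05 is exactly representable in base 10, so the remainder comes out exactly zero):

from decimal import Decimal

print(Decimal("0.5") % Decimal("0.05"))  # 0.00 -- no binary rounding involved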
I searched the internet for a function to find the exact square root of a BigInt using the Scala programming language. I didn't find one, but I saw a Java program and converted that function into a Scala version. It works, but I am not sure whether it can handle very large BigInts, and it returns only a BigInt, not a BigDecimal, as the square root. The code does some bit manipulation with hard-coded numbers like shiftRight(5), BigInt("8") and shiftRight(1). I can understand the logic clearly, but not the hard-coding of these bit-shift amounts and the number 8. Maybe these bit-shift functions are not available in Scala, and that's why the Java code converts to java BigInteger in a few places. These hard-coded numbers may impact the precision of the result. I just converted the Java code into Scala, copying the exact algorithm. Here is the code I have written in Scala:
def sqt(n: BigInt): BigInt = {
  var a = BigInt(1)
  var b = (n >> 5) + BigInt(8)
  while ((b - a) >= 0) {
    val mid: BigInt = (a + b) >> 1
    if (mid * mid - n > 0) b = mid - 1
    else a = mid + 1
  }
  a - 1
}
My points are:
Can't we return a BigDecimal instead of BigInt? How can we do that?
How are these hard-coded numbers shiftRight(5), shiftRight(1) and 8 related to the precision of the result?
I tested one number in the Scala REPL: the function sqt gives the exact square root of a squared number, but not of the original number, as below:
scala> sqt(BigInt("19928937494873929279191794189"))
res9: BigInt = 141169888768369
scala> res9*res9
res10: scala.math.BigInt = 19928937494873675935734920161
scala> sqt(res10)
res11: BigInt = 141169888768369
scala>
I understand that shiftRight(5) means divide by 2^5, i.e. by 32, and so on. But why is 8 added after the shift operation? Why exactly 5 shifts, as a first guess?
Your question 1 and question 3 are actually the same question.
How [do] these bitshifts impact [the] precision of the result?
They don't.
How [are] these hardcoded numbers ... related to precision of the result?
They aren't.
There are many different methods/algorithms for estimating/calculating the square root of a number (as can be seen here). The algorithm you've posted appears to be a pretty straightforward binary search:
Pick a number a guaranteed to be smaller than the target (square root of n).
Pick a number b guaranteed to be larger than the target (square root of n).
Calculate mid, the whole number mid-point between a and b.
If mid is larger than (or equal to) the target then move b to mid (-1 because we know it's too large).
If mid is smaller than the target then move a to mid (+1 because we know it's too small).
Repeat 3,4,5 until a is no longer less than b.
Return a-1 as the square root of n rounded down to a whole number.
The bitshifts and hardcoded numbers are used in selecting the initial value of b. But b only has to be greater than the target. We could have just done var b = n. Why all the bother?
It's all about efficiency. The closer b is to the target, the fewer iterations are needed to find the result. Why add 8 after the shift? Because 31>>5 is zero, which is not greater than the target. The author chose (n>>5)+8, but could equally have chosen, say, (n>>6)+16, which is also always at least the square root. There are trade-offs.
Can't we return a BigDecimal instead of BigInt? How can we do that?
Here's one way to do that.
def sqt(n: BigInt): BigDecimal = {
  val d = BigDecimal(n)
  var a = BigDecimal(1.0)
  var b = d
  while (b - a >= 0) {
    val mid = (a + b) / 2
    if (mid * mid - d > 0) b = mid - 0.0001 // adjust down
    else a = mid + 0.0001 // adjust up
  }
  b
}
There are better algorithms for calculating floating-point square root values. In this case you get better precision by using smaller adjustment values but the efficiency gets much worse.
Can't we return a BigDecimal instead of BigInt? How can we do that?
This makes no sense if you want exact roots: if a BigInt's square root can be represented exactly by a BigDecimal, it can be represented by a BigInt. If you don't want exact roots, you'll need to specify precision and modify the algorithm (and for most cases, Double will be good enough and much much much faster than BigDecimal).
I understand that shiftRight(5) means divide by 2^5, i.e. by 32, and so on. But why is 8 added after the shift operation? Why exactly 5 shifts, as a first guess?
These aren't the only options. The point is that for every positive n, n/32 + 8 >= sqrt(n) (where sqrt is the mathematical square root). This is easiest to show by a bit of calculus (or just by building a graph of the difference). So at the start we know a <= sqrt(n) <= b (unless n == 0 which can be checked separately), and you can verify this remains true on each step.
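If you'd rather not do the calculus, here is a spot-check in Python (math.isqrt needs Python 3.8+), with the algebra in the comments:

import math

# With x = sqrt(n):  x*x/32 - x + 8 = (x - 16)**2 / 32 >= 0,
# so n/32 + 8 >= sqrt(n), with equality only at n = 256.
# (More generally, n/2**k + 2**(k-2) >= sqrt(n) for any k >= 2.)
assert all(n // 32 + 8 >= math.isqrt(n) for n in range(1, 1_000_000))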
If you simply create a FixedBody, give it a set of coordinates, and then ask for them back, you get a different position:
>>> import ephem
>>> TestStar = ephem.FixedBody()
>>> TestStar._ra, TestStar._dec = '12:43:20', '-45:34:12'
>>> TestStar.compute()
>>> print TestStar.ra, TestStar.dec
12:44:15.34 -45:39:46.8
I now understand that this is because, as documented, a FixedBody's coordinates are by default for the J2000 epoch, whereas an observer's epoch defaults to the moment the observer is created; that also seems to be the default when you don't specify an observer at all.
However if I try to compensate for that:
>>> TestStar4 = ephem.FixedBody()
>>> TestStar4._ra, TestStar4._dec, TestStar4._epoch = '12:43:20', '-45:34:12', '2000/01/01 12:00:00'
>>> TestSite2 = ephem.Observer()
>>> TestSite2.lat, TestSite2.lon, TestSite2.date = 0,0,'2000/01/01 12:00:00'
>>> TestStar4.compute(TestSite2)
>>> print TestStar4.ra, TestStar4.dec
12:43:19.42 -45:33:51.9
You get an almost identical RA, but a DEC that is different by 20 arcseconds for this example.
I'm specifically trying to get the J2000 coordinates of some stars in a WEBDA catalog, which provides relative coordinates for most stars.
For example see this random cluster:
http://www.univie.ac.at/webda/cgi-bin/frame_list.cgi?ic0166
The "Coordinates J2000" only has information on 9 stars and almost all stars have information in the "XY positions" link. The center and scale of these XY positions is a bit arbitrary but can be found in the site.
However, if I don't know why that 20-arcsecond difference in coordinates is there, I don't know when my system will fail.
Ok at this point I imagine the discrepancy is due to some correction factor.
I know now I want to use the Astrometric Geocentric Position, so:
>>> import ephem
>>> TestStar = ephem.FixedBody()
>>> TestStar._ra, TestStar._dec = '12:43:20', '-45:34:12'
>>> TestStar.compute()
>>> print TestStar.a_ra, TestStar.a_dec
12:43:20 -45:34:12
Simple enough (just hadn't understood that part of the manual, sorry).
I'm still curious as to which of all the corrections affects this the most, but I can carry on without knowing for now.
I'm looking for a function or solution to the following:
For a chart in SQL Reporting I need to multiply the values from a column A. For summation I would use =SUM(COLUMN_A) in the chart, but what can I use for multiplication? I was not able to find a solution so far.
Currently I am calculating the value of the stacked column as follows:
=ROUND(SUM(Fields!Value_Is.Value)/SUM(Fields!StartValue.Value),3)
Instead of SUM I need something to multiply the values.
Something like this:
=ROUND(MULTIPLY(Fields!Value_Is.Value)/MULTIPLY(Fields!StartValue.Value),3)
EDIT #1
Okay, I tried to get this thing running.
The expression for the chart looks like this:
=Exp(Sum(Log(IIf(Fields!Menge_Ist.Value = 0, 10^-306, Fields!Menge_Ist.Value)))) / Exp(Sum(Log(IIf(Fields!Startmenge.Value = 0, 10^-306, Fields!Startmenge.Value))))
If I calculate my 'needs' manually, I should get the following result:
In my SQL Report I get the following result:
To make it easier, these are the raw values:
and the chart can be grouped by CW, CQ or CY.
(The values in the first pictures are sums aggregated from the raw values by FertStufe.)
EDIT #2
Tried your expression, which results in this:
Just to make it clear:
The values in the column
=Value_IS / Start_Value
in the first picture are multiplied by each other:
0,99474 x 1,0000 x 0,59041 = 0,58730
Diffusion, calendar week 44 sums:
Startvalue: 1900,00  Value Is: 1890,00  == yield: 0,99474
Waffer unbestrahlt, calendar week 44 sums:
Startvalue: 620,00  Value Is: 620,00  == yield: 1,0000
Pellet, calendar week 44 sums:
Startvalue: 271,00  Value Is: 160,00  == yield: 0,59041
yield Diffusion x yield Wafer x yield Pellet = needed value in chart = 0,58730
EDIT #3
The raw values look like this:
The chart is grouped, as in the image, on these fields:
CY (calendar year), CM (calendar month), CW (calendar week)
You can download the data as xls here:
https://www.dropbox.com/s/g0yrzo3330adgem/2013-01-17_data.xls
The expression I use (copied/pasted from the edit window):
=Exp(Sum(Log(Fields!Menge_Ist.Value / Fields!Startmenge.Value)))
I've exported the whole report result to Excel; you can get it here:
https://www.dropbox.com/s/uogdh9ac2onuqh6/2013-01-17_report.xls
It's actually a workaround, but I am pretty sure it is the only solution to this infamous problem :D
This is how I did it: a product can be computed as Exp(∑(Log(X))), so what you should use is:
Exp(Sum(Log(Fields!YourField.Value)))
Who said math was worth nothing? =D
EDIT:
Corrected the formula.
By the way, it's tested.
Addressing Ian's concern:
Exp(Sum(Log(IIf(Fields!YourField.Value = 0, 10^-306, Fields!YourField.Value))))
The idea is to replace 0 with a very small number. Just an idea.
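The trick is easy to sanity-check outside SSRS; a Python sketch (the function name is mine, and it assumes non-negative values):

import math

def product_via_logs(values, tiny=1e-306):
    # Mirrors Exp(Sum(Log(IIf(x = 0, 10^-306, x)))): log(0) is undefined,
    # so zeros are swapped for a tiny positive number first.
    return math.exp(sum(math.log(v if v != 0 else tiny) for v in values))

yields = [0.99474, 1.0, 0.59041]  # the three week-44 yields from the question
print(product_via_logs(yields))   # ~0.58730, the value wanted in the chart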
EDIT:
Based on your updated question this is what you should do:
Exp(Sum(Log(Fields!Value_IS.Value / Fields!Start_Value.Value)))
I just tested the above code and got the result you hoped for.
I'm converting a string date/time to a numerical time value. In my case I'm only using it to determine whether something is newer or older than something else, so this little decimal problem is not a real problem; it doesn't need to be precise to the second. But it still has me scratching my head, and I'd like to know why.
My date comes in a string format of @"2010-09-08T17:33:53+0000". So I wrote this little method to return a time value. Before anyone jumps on how many seconds there are in months with 28 days or 31 days: I don't care. In my math it's fine to assume all months have 31 days and years have 31*12 days, because I don't need the difference between two points in time, only to know whether one point in time is later than another.
- (float)uniqueTimeFromCreatedTime:(NSString *)created_time {
    float time;
    if ([created_time length] > 19) {
        time = ([[created_time substringWithRange:NSMakeRange(2, 2)] floatValue] - 10) * 535680; // max for 12 months is 535680.. uh oh y2100 bug!
        time = time + [[created_time substringWithRange:NSMakeRange(5, 2)] floatValue] * 44640; // to make it easy and since it doesn't matter we assume 31 days
        time = time + [[created_time substringWithRange:NSMakeRange(8, 2)] floatValue] * 1440;
        time = time + [[created_time substringWithRange:NSMakeRange(11, 2)] floatValue] * 60;
        time = time + [[created_time substringWithRange:NSMakeRange(14, 2)] floatValue];
        time = time + [[created_time substringWithRange:NSMakeRange(17, 2)] floatValue] * .01;
        return time;
    }
    else {
        //NSLog(@"error - time string not long enough");
        return 0.0;
    }
}
When passed that very string listed above the result should be 414333.53, but instead it is returning 414333.531250.
When I toss an NSLog in between each time= to track where it goes off I get this result:
time 0.000000
time 401760.000000
time 413280.000000
time 414300.000000
time 414333.000000
floatvalue 53.000000
time 414333.531250
Created Time: 2010-09-08T17:33:53+0000 414333.531250
So that last floatValue returned 53.0000 but when I multiply it by .01 it turns into .53125. I also tried intValue and it did the same thing.
Welcome to floating point rounding errors. If you want accuracy to a fixed number of decimal places, multiply by 100 (for 2 decimal places), round() it, and then divide by 100. So long as the number isn't obscenely large (occupies more than 53 bits of mantissa), you should be fine and not have any rounding problems on the division back down.
EDIT: I should note that the 53 bits of mantissa assumes double; floats have far less precision. Do as another reader suggests and switch to double if possible.
IEEE floats only have 24 effective bits of mantissa (roughly between 7 and 8 decimal digits). 0.00125 is the 24th-bit rounding error between 414333.53 and the nearest float representation, since the exact number 414333.53 requires 8 decimal digits. 53 * 0.01 by itself will come out a lot more accurately before you add it to the bigger number and lose precision in the resulting sum. (This shows why addition/subtraction between numbers of very different sizes is not a good thing from a numerical point of view when calculating with floating point arithmetic.)
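You can see the nearest-float effect directly; here is a small Python sketch that round-trips the value through a 32-bit float (struct format 'f', which behaves like a C/Objective-C float):

import struct

def to_float32(x):
    # Round x to the nearest 32-bit float, then widen it back to a double.
    return struct.unpack('f', struct.pack('f', x))[0]

print(to_float32(414333.53))  # 414333.53125 -- off by the 0.00125 described above
print(to_float32(53 * 0.01))  # 0.5299999713897705 -- alone, accurate to ~7 digits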
This is a classic floating point error resulting from how the number is represented in bits. First, use double instead of float, as it is quite fast on modern machines. When the result really, really matters, use a decimal type (such as NSDecimalNumber), which is around 20x slower but 100% accurate.
You can create NSDate instances from those NSString dates using the +dateWithString: method. It takes strings formatted as YYYY-MM-DD HH:MM:SS ±HHMM, which is what you're dealing with. Once you have two NSDates, you can use the -compare: method to see which one is later in time.
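For illustration, the same compare-dates-not-floats idea sketched in Python (strptime's %z directive accepts the +0000 suffix):

from datetime import datetime

fmt = "%Y-%m-%dT%H:%M:%S%z"
a = datetime.strptime("2010-09-08T17:33:53+0000", fmt)
b = datetime.strptime("2010-09-08T17:40:00+0000", fmt)
print(a < b)  # True -- no homemade float encoding needed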
You could try multiplying all your constants by 100 so you don't have to divide. The division is what's causing the problem, because dividing by 100 (multiplying by .01) produces a repeating pattern in binary.