ISO emacs/elisp to determine dates corresponding to first/last days of a week, e.g. ISO 8601 2013-WW47 - emacs

I like creating headings that look like
** WW47 (Monday November 18 - Sunday November 24, 2013)
I know how to use format-time-string, etc., in emacs/elisp to determine various definitions of the week-number (%U, %V, %W).
Q: how can I go backwards? In emacs/elisp, determine the dates of the first and last days of the week, given a year and week number?
More generally, how can I parse a time string such as an ISO 8601 week date, e.g. 2006-W52-7, or a week without a day-within-week, e.g. 2013-W46?
More generally still - many date and time representations imply date intervals. Weeks and months in particular imply intervals, although I suppose almost any time representation of a given precision can be interpreted as corresponding to an interval of time units smaller than that precision.
Q: are there (reasonably standard) emacs/elisp functions for determining first and last dates of a month, e.g. in terms of year/day-of-year format? Etc.
--
This post is mainly a "how do I do this in emacs/elisp?" question.
This sort of question appears to be quite common - there are similar questions on stackoverflow asking "how do I do this in Javascript/C#/..." etc., etc., etc.
I can do the math myself. But (a) better if there is a standard emacs/elisp function to do this, and (b) it is apparent from googling that there are many gotchas and issues, further emphasizing the goodness of using a standard library function, if one exists.
E.g. Getting first date in week given a year and weeknumber

I had a similar problem (see https://emacs.stackexchange.com/questions/43984/convert-between-iso-week-and-a-normal-date).
Using my solution from that question, the answer to your first question is below.
Example:
(iso-header 2018 32)
"** WW32 (Monday August 06 - Sunday August 12, 2018)"
(iso-header 2018 53)
"** WW01 (Monday December 31 - Sunday January 06, 2019)"
;; Relies on `iso-beginning-of-week' and `iso-end-of-week' from the linked answer.
(defun iso-header (year week)
  (concat
   (format-time-string "** WW%V (%A %B %d - " (iso-beginning-of-week year week))
   (format-time-string "%A %B %d, %Y)" (iso-end-of-week year week))))


`uuuu` versus `yyyy` in `DateTimeFormatter` formatting pattern codes in Java?

The DateTimeFormatter class documentation says about its formatting codes for the year:
Symbol  Meaning      Presentation  Examples
u       year         year          2004; 04
y       year-of-era  year          2004; 04
…
Year: The count of letters determines the minimum field width below which padding is used. If the count of letters is two, then a reduced two digit form is used. For printing, this outputs the rightmost two digits. For parsing, this will parse using the base value of 2000, resulting in a year within the range 2000 to 2099 inclusive. If the count of letters is less than four (but not two), then the sign is only output for negative years as per SignStyle.NORMAL. Otherwise, the sign is output if the pad width is exceeded, as per SignStyle.EXCEEDS_PAD.
No other mention of “era”.
So what is the difference between these two codes, u versus y, year versus year-of-era?
When should I use something like this pattern uuuu-MM-dd and when yyyy-MM-dd when working with dates in Java?
It seems that example code written by those in the know uses uuuu, but why?
Other formatting classes such as the legacy SimpleDateFormat have only yyyy, so I am confused about why java.time brings in this uuuu.
Within the scope of the java.time package, we can say:
It is safer to use "u" instead of "y" because DateTimeFormatter will otherwise insist on having an era in combination with "y" (= year-of-era). So using "u" avoids some possible unexpected exceptions in strict formatting/parsing. See also this SO post. Another minor thing improved by the "u" symbol compared with "y" is printing/parsing negative gregorian years (in the far past).
Otherwise we can clearly state that using "u" instead of "y" breaks long-standing habits in Java programming. It is also not intuitively clear that "u" denotes any kind of year, because a) the first letter of the English word "year" does not agree with this symbol and b) SimpleDateFormat has used "u" for a different purpose since Java 7 (the ISO day number of the week). Confusion is guaranteed - forever?
We should also note that using eras (symbol "G") in an ISO context is in general dangerous if we consider historic dates. If "G" is used with "u" then the two fields are unrelated to each other. And if "G" is used with "y" then the formatter is satisfied but still uses the proleptic gregorian calendar even when the historic date mandates different calendars and date handling.
Background information:
When developing and integrating JSR 310 (the java.time packages), the designers decided to use the Common Locale Data Repository (CLDR)/LDML spec as the basis of the pattern symbols in DateTimeFormatter. The symbol "u" was already defined in CLDR as the proleptic gregorian year, so this meaning was adopted for the then-upcoming JSR-310 (but not for SimpleDateFormat, for backwards-compatibility reasons).
However, this decision to follow CLDR was not quite consistent, because JSR-310 also introduced new pattern symbols which didn't and still don't exist in CLDR; see also this old CLDR ticket. The suggested symbol "I" was changed by CLDR to "VV" and finally taken over by JSR-310, together with the new symbols "x" and "X". But "n" and "N" still don't exist in CLDR, and since that old ticket is closed, it is not clear at all whether CLDR will ever support them in the sense of JSR-310. Furthermore, the ticket does not mention the symbol "p" (a padding instruction in JSR-310, not defined in CLDR). So we still have no perfect agreement between pattern definitions across different libraries and languages.
And about "y": we should also not overlook the fact that CLDR associates this year-of-era with at least some kind of mixed Julian/Gregorian year, and not with the proleptic gregorian year as JSR-310 does (leaving the oddity of negative years aside). So there is no perfect agreement between CLDR and JSR-310 here either.
The javadoc section Patterns for Formatting and Parsing for DateTimeFormatter lists the following three relevant symbols:
Symbol  Meaning       Presentation  Examples
------  ------------  ------------  ------------------
G       era           text          AD; Anno Domini; A
u       year          year          2004; 04
y       year-of-era   year          2004; 04
Just for comparison, these other symbols are easy enough to understand:
D       day-of-year   number        189
d       day-of-month  number        10
E       day-of-week   text          Tue; Tuesday; T
The day-of-year, day-of-month, and day-of-week are obviously the day within the given scope (year, month, week).
So, year-of-era means the year within the given scope (era), and right above it era is shown with an example value of AD (the other value of course being BC).
year is the signed year, where year 0 is 1 BC, year -1 is 2 BC, and so forth.
To illustrate: When was Julius Caesar assassinated?
March 15, 44 BC (using pattern MMMM d, y GG)
March 15, -43 (using pattern MMMM d, u)
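A small sketch reproducing these two outputs, assuming JDK 8 or later and an English locale:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class CaesarDemo {
    public static void main(String[] args) {
        // 44 BC is proleptic year -43, because year 0 is 1 BC
        LocalDate idesOfMarch = LocalDate.of(-43, 3, 15);
        System.out.println(idesOfMarch.format(
                DateTimeFormatter.ofPattern("MMMM d, y GG", Locale.ENGLISH))); // March 15, 44 BC
        System.out.println(idesOfMarch.format(
                DateTimeFormatter.ofPattern("MMMM d, u", Locale.ENGLISH)));    // March 15, -43
        System.out.println(idesOfMarch.format(
                DateTimeFormatter.ofPattern("MMMM d, y", Locale.ENGLISH)));    // March 15, 44 - misleading without G
    }
}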
The distinction will of course only matter if year is zero or negative, and since that is rare, most people don't care, even though they should.
Conclusion: If you use y you should also use G. Since G is rarely used, the correct year symbol is u, not y, otherwise a non-positive year will show incorrectly.
This is known as defensive programming:
Defensive programming is a form of defensive design intended to ensure the continuing function of a piece of software under unforeseen circumstances.
Note that DateTimeFormatter is consistent with SimpleDateFormat:
Letter  Date or Time Component  Presentation  Examples
------  ----------------------  ------------  --------
G       Era designator          Text          AD
y       Year                    Year          1996; 96
Negative years have always been a problem, and java.time has now fixed this by adding u.
Long story short
For 99 % of purposes you can toss a coin; it will make no difference whether you use yyyy or uuuu (or whether you use yy or uu for a 2-digit year).
It depends on what you want to happen in case a year earlier than 1 CE (1 AD) occurs. The point being that in 99 % of programs such a year will never occur.
Two other answers have already presented the facts of how u and y work very nicely, but I still felt something was missing, so I am contributing this slightly more opinion-based answer.
For formatting
Assuming that you don’t expect a year before 1 CE to be formatted, the best thing you can do is to check this assumption and react appropriately in case it breaks. For example, depending on circumstances and requirements, you may print an error message or throw an exception. One very soft failure path might be to use a pattern with y (year of era) and G (era) in this case and a pattern with either u or y in the normal, current era case. Note that if you are printing the current date or the date your program was compiled, you can be sure that it is in the common era and may opt to skip the check.
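One possible shape for that soft failure path, as a sketch only; the method name, patterns, and locale below are illustrative choices, not a prescribed API:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class SafeFormat {
    private static final DateTimeFormatter NORMAL =
            DateTimeFormatter.ofPattern("uuuu-MM-dd");
    private static final DateTimeFormatter WITH_ERA =
            DateTimeFormatter.ofPattern("yyyy-MM-dd G", Locale.ENGLISH);

    // Format with the plain year normally, but fall back to year-of-era plus era
    // if a date before 1 CE slips through.
    static String format(LocalDate date) {
        return date.getYear() >= 1 ? date.format(NORMAL) : date.format(WITH_ERA);
    }

    public static void main(String[] args) {
        System.out.println(format(LocalDate.of(2018, 9, 29))); // 2018-09-29
        System.out.println(format(LocalDate.of(-43, 3, 15)));  // 0044-03-15 BC
    }
}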
For parsing
In many (most?) cases parsing also means validating, meaning you have no guarantees about what your input string looks like. Typically it comes from the user or from another system. An example: a date string arrives as 2018-09-29. Here the choice between uuuu and yyyy should depend on what you want to happen in case the string contains a year of 0 or a negative year (e.g., 0000-08-17 or -012-11-13). Assuming that this would be an error, the immediate answer is: use yyyy so that an exception is thrown in this case. Finer still: use uuuu and, after parsing, perform a range check on the parsed date. The latter approach allows both a finer validation and a better error message in case of a validation error.
Special case (already mentioned by Meno Hochschild): if your formatter uses the strict resolver style and contains y without G, parsing will always fail, because strictly speaking year-of-era is ambiguous without an era: 1950 might mean 1950 CE or 1950 BCE (1950 BC). So in this case you need u (or you can supply a default era, which is possible through a DateTimeFormatterBuilder; see the sketch below).
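A sketch of both parsing options described above, assuming JDK 8 or later; the class name and the example input are arbitrary:

import java.time.LocalDate;
import java.time.chrono.IsoEra;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.format.ResolverStyle;
import java.time.temporal.ChronoField;

public class ParseAndValidate {
    public static void main(String[] args) {
        // Option 1: parse with u, then range-check the parsed date yourself
        DateTimeFormatter plainYear = DateTimeFormatter.ofPattern("uuuu-MM-dd")
                .withResolverStyle(ResolverStyle.STRICT);
        LocalDate d = LocalDate.parse("2018-09-29", plainYear);
        if (d.getYear() < 1) {
            throw new IllegalArgumentException("Year before 1 CE: " + d);
        }
        System.out.println(d); // 2018-09-29

        // Option 2: keep y, but supply a default era through DateTimeFormatterBuilder
        DateTimeFormatter withDefaultEra = new DateTimeFormatterBuilder()
                .appendPattern("yyyy-MM-dd")
                .parseDefaulting(ChronoField.ERA, IsoEra.CE.getValue())
                .toFormatter()
                .withResolverStyle(ResolverStyle.STRICT);
        System.out.println(LocalDate.parse("2018-09-29", withDefaultEra)); // 2018-09-29
    }
}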
Long story short again
An explicit range check of your dates, specifically your years, is better than relying on the choice between uuuu and yyyy to catch unexpected, very early years.
Short comparison, if you need strict parsing:
Examples with the invalid date 31.02.2022:
System.out.println(DateTimeFormatter.ofPattern("dd.MM.yyyy").withResolverStyle(ResolverStyle.STRICT).parse("31.02.2022"));
prints "{MonthOfYear=2, DayOfMonth=31, YearOfEra=2022},ISO"
System.out.println(DateTimeFormatter.ofPattern("dd.MM.uuuu").withResolverStyle(ResolverStyle.STRICT).parse("31.02.2022"));
throws java.time.DateTimeException: Invalid date 'FEBRUARY 31'
So you must use 'dd.MM.uuuu' to get the expected behaviour.

unexpected result converting date using datestr

Can anyone tell me why if I type in MATLAB
datestr('17-03-2016','dd-mmmm-yyyy')
I get
06-September-0022
From the datestr docs
DateString = datestr(___,formatOut) specifies the format of the output text using formatOut. You can use formatOut with any of the input arguments in the above syntaxes.
So in your example the 'dd-mmmm-yyyy' is specifying the output format, not the input format.
Also
DateString = datestr(DateStringIn) converts DateStringIn to text in the format, day-month-year hour:minute:second. All dates and times represented in DateStringIn must have the same format.
where
'dd-mm-yyyy' is not in the list of allowed DateStringIn formats, and the documentation explicitly recommends using datenum to ensure correct behaviour.
So Sandar_Usama's answer of
datestr(datenum('17-03-2016','dd-mmmm-yyyy'))
is the officially correct method straight out of the docs.
Bottom line, always read the documentation.
Use this instead: datestr(datenum('17-03-2016','dd-mmmm-yyyy'))
To address the last unanswered point in this question, why does datenum behave like this?
>> datestr(datenum('17-03-2016'))
ans =
06-Sep-0022
Without being told explicitly how to treat the input, datestr and datenum will try to match it against the expected formats. Since none of the documented formats match (see dan's answer above), that fails.
Although what it does next is undocumented, at least up to whatever version of MATLAB we are running, it falls back to a "last resort" attempt to give you a date number.
MATLAB will try to parse month names out of your input, remove non-numeric characters, and then extract the date elements from the string. In your case, they are 17, 03, and 2016. The first is expected to be either a month or a year. Since there is no 17th month, it is treated as the year. Then 03 is the month, and 2016 is the day.
Now, the 2016th day of March in year 17 is not a valid date, but MATLAB cuts it some slack and reads it as 1985 days past March 31st of year 17. And that gives us September 6th of year 22.
Because MATLAB's timestamp is a floating-point number counting days since its epoch, you can trigger that same answer using valid dates, like so:
>> datestr(datenum('0017-03-31') + 1985)
ans =
06-Sep-0022

Editing the year in date field in EXTJS 4.0 is defaulting the year to 2020

I have a date field in a form
xtype: 'datefield',
id: 'dateId',
maskRe: /[0-9\/]/,
format : 'm/d/Y',
For example, if the date populated in that field is 07/30/2014 and I manually edit it by pressing backspace twice, leaving 07/30/20, and then click somewhere else in the form, the year gets defaulted to 2020 (07/30/2020). How can I stop the year from being defaulted to 2020?
I answered this in my comment, but I will try to expand on that comment as much as possible, and make it as clear as possible. Here is the original comment:
"07/30/20 is equivalent to 07/30/2020. This is expected and normal behavior. When a year is only two digits, it is always the last two digits, so 20 == 2020, 00 == 2000, 14 == 2014, etc."
So, let me step you through an example.
You type 08/08/2014 into the datefield.
Hit backspace twice, removing the 1 and the 4.
Now you have 08/08/20.
When writing dates, the year can be written as 2 digits or 4. If written as two, the last two digits are used. So, the year 20 is the same as the year 2020.
The datefield has logic to handle 2-digit years, so it knows that 08/08/20 is actually 08/08/2020.
That is why when you have 08/08/20 in a datefield, it interprets it as 08/08/2020.
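For what it's worth, this two-digit-year behaviour is not unique to ExtJS; the java.time DateTimeFormatter discussed earlier in this document does the same thing, since two-letter year patterns parse against a base of 2000. A tiny sketch, assuming JDK 8 or later:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class TwoDigitYearDemo {
    public static void main(String[] args) {
        DateTimeFormatter f = DateTimeFormatter.ofPattern("MM/dd/uu");
        // A two-letter year parses against the base value 2000, so "20" means 2020
        System.out.println(LocalDate.parse("08/08/20", f)); // 2020-08-08
        // ...and formatting prints only the rightmost two digits
        System.out.println(LocalDate.of(2014, 8, 8).format(f)); // 08/08/14
    }
}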

Age calculation in common LISP from date of birth

I am trying to calculate a person's age in Common Lisp using a given date of birth (a string of the form YYYY-MM-DD) but I got the following error:
Error: "2013-12-10"' is not of the expected typenumber'
My code is as follows
(defun current-date-string ()
  "Returns current date as a string."
  (multiple-value-bind (sec min hr day mon yr dow dst-p tz)
      (get-decoded-time)
    (declare (ignore sec min hr dow dst-p tz))
    (format nil "~A-~A-~A" yr mon day)))
(defvar (dob 1944-01-01))
(let ((age (- (current-date-string) dob))))
Can anyone give me help in this regard? I suspect that the current date is in string format like my input, but how can we convert it into the same date format as my input?
Your code
There are two immediate problems:
The subtraction function - takes numbers as arguments. You're passing the result of (current-date-string), and that's a string produced by (format nil "~A-~A-~A" yr mon day).
This code doesn't make any sense:
(defvar (dob 1944-01-01))
(let ((age (- (current-date-string) dob))))
It's not indented properly, for one thing. It's two forms, the first of which is (defvar (dob 1944-01-01)). That's not the right syntax for defvar. 1944-01-01 is a symbol, and it's not bound, so even if you had done (defvar dob 1944-01-01), you'd get an error when the definition is evaluated. While (let ((age (- (current-date-string) dob)))) is syntactically correct, you're not doing anything after binding age. You probably want something like (let ((age (- (current-date-string) dob))) <do-something-here>).
Date arithmetic
At any rate, universal times are integer values that are seconds:
25.1.4.2 Universal Time
Universal time is an absolute time represented as a single
non-negative integer---the number of seconds since midnight, January
1, 1900 GMT (ignoring leap seconds). Thus the time 1 is 00:00:01 (that
is, 12:00:01 a.m.) on January 1, 1900 GMT.
This means that you can take two universal times and subtract one from the other to get the duration in seconds. Unfortunately, Common Lisp doesn't provide functions for manipulating those durations, and it's non-trivial to convert them because of leap years and the like. The Common Lisp Cookbook mentions this:
Dates and Times
Since universal times are simply numbers, they are easier and safer to
manipulate than calendar times. Dates and times should always be
stored as universal times if possible, and only converted to string
representations for output purposes. For example, it is
straightforward to know which of two dates came before the other, by
simply comparing the two corresponding universal times with <. Another
typical problem is how to compute the "temporal distance" between two
given dates. Let's see how to do this with an example: specifically,
we will calculate the temporal distance between the first landing on
the moon (4:17pm EDT, July 20 1969) and the last takeoff of the space
shuttle Challenger (11:38 a.m. EST, January 28, 1986).
* (setq *moon* (encode-universal-time 0 17 16 20 7 1969 4))
2194805820
* (setq *takeoff* (encode-universal-time 0 38 11 28 1 1986 5))
2716303080
* (- *takeoff* *moon*)
521497260
That's a bit over 520 million seconds, corresponding to 6035 days, 20
hours and 21 minutes (you can verify this by dividing that number by
60, 60 and 24 in succession). Going beyond days is a bit harder
because months and years don't have a fixed length, but the above is
approximately 16 and a half years.
Reading date strings
It sounds like you're needing to also read dates from strings of the form YYYY-MM-DD. If you're confident that the value that you receive will be legal, you can do something as simple as
(defun parse-date-string (date)
  "Read a date string of the form \"YYYY-MM-DD\" and return the
corresponding universal time."
  (let ((year (parse-integer date :start 0 :end 4))
        (month (parse-integer date :start 5 :end 7))
        (date (parse-integer date :start 8 :end 10)))
    (encode-universal-time 0 0 0 date month year)))
That will return a universal time, and you can do the arithmetic as described above.
Being lazy
It's generally good practice as a programmer to be lazy. In this case, we can be lazy by not implementing these kinds of functions ourselves, but instead using a library that will handle it for us. Some Common Lisp implementations may already provide date and time functions, and there are many time libraries listed on Cliki. I don't know which of these is most widely used, or has the coverage of the kinds of functions that you need, and library recommendations are off-topic for StackOverflow, so you may have to experiment with some to find one that works for you.
No, you can't just subtract one string from another and expect the computer to magically know that these strings are dates and handle them accordingly. Whatever gave you that idea?
Use the library local-time.
Installation with Quicklisp:
(ql:quickload "local-time").
To parse a date:
(local-time:parse-timestring "2013-12-10")
This produces a timestamp object.
To get the current timestamp:
(local-time:now)
To get the difference in seconds:
(local-time:timestamp-difference one-timestamp another-timestamp)
To get the difference in years (rounded down):
(local-time:timestamp-whole-year-difference one-timestamp another-timestamp)

Bug in Zend_Date (back in time)

I have a very strange problem, Zend_Date is converting my timestamp to a year earlier.
In my action:
// Timestamp
$intTime = 1293922800;
// Zend_Date object
$objZendDate = new Zend_Date($intTime);
// Get date
echo date('Y-m-d',$intTime).'<br>';
echo $objZendDate->get('YYYY-MM-dd');
This outputs:
2011-01-02
2010-01-02
Can anyone tell me what I'm doing wrong?
From the ZF issue tracker it seems this is a known issue:
Recently a lot of ZF users are filing a bug that Zend_Date returns the wrong year, 2009 instead of 2008. This is however expected behaviour, and NOT A BUG!
From the FAQ:
When using your own formats in your code, you could come to a situation where you get, for example, 29.12.2009 but you expected to get 29.12.2008.
There is one year difference: 2009 instead of 2008. You should use the lower-cased year constant. See this example:
$date->toString('dd.MM.yyyy');
instead of
$date->toString('dd.MM.YYYY');
From the manual
Note that the default ISO format differs from PHP's format, which can be irritating if you have not used it before. In particular, the format specifiers for Year and Minute are often not used in the intended way.
For the year there are two specifiers available which are often mistaken: the Y specifier for the ISO year and the y specifier for the real year. The difference is small but significant. Y calculates the ISO year, which is often used for calendar formats. Take, for example, 31 December 2007. The real year is 2007, but it is the first day of week 1 of the year 2008. So, if you use 'dd.MM.yyyy' you will get '31.December.2007', but if you use 'dd.MM.YYYY' you will get '31.December.2008'. As you see, this is not a bug but expected behaviour depending on the specifiers used.
For the minute the difference is not as big. ISO uses the specifier m for the minute, unlike PHP, which uses i. So if you are getting no minutes in your format, check whether you have used the right specifier.
To add to zwip's answer, what happens behind the scenes is that your date format YYYY-MM-dd is actually translated into o\-m\-d, which is then passed to PHP's date() function internally with the timestamp you provided.
As mentioned in the other answer, and in the documentation for the o format on PHP's date format page, calculating the year based on the ISO week can sometimes make the year come out one different from the value that you expect.
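The same Y-versus-y trap exists outside Zend Framework: in Java's DateTimeFormatter (discussed earlier), uppercase Y is the localized week-based year. A rough sketch of the 31 December 2007 example, assuming JDK 8 or later and a locale whose week rules match ISO:

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class WeekBasedYearDemo {
    public static void main(String[] args) {
        // 31 December 2007 is a Monday and belongs to week 1 of 2008
        LocalDate d = LocalDate.of(2007, 12, 31);
        Locale de = Locale.GERMANY; // week starts Monday, minimum 4 days, i.e. ISO rules
        System.out.println(d.format(DateTimeFormatter.ofPattern("dd.MM.yyyy", de))); // 31.12.2007
        System.out.println(d.format(DateTimeFormatter.ofPattern("dd.MM.YYYY", de))); // 31.12.2008
    }
}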