The most consistent complaint about the AVR-Libc time implementation has been that it does not use 'UNIX time'.
Some have even alleged that since it does not use the Unix epoch, it is not standards compliant.
Misconception 1 : 'C time' == 'Unix time'
This is probably the most common misconception about the C standard time library.
The C standard makes no reference to 'Unix time'... it does not mention Unix at all.
Section 7.23.1 of the C standard states...
- time_t is an arithmetic type capable of representing time (paragraph 3)
- "The range and precision of times representable in clock_t and time_t are implementation-defined." (paragraph 4)
Thus, the remaining characteristics of time_t are left up to the implementation. The standard only specifies that consistent results
are returned when values of type time_t are passed to the standard functions.
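For example, code that sticks to the standard functions never needs to know what the epoch is. The sketch below (a generic illustration, not code from AVR-Libc) measures an interval with difftime() and prints a calendar date through localtime(), and it behaves the same whether time_t counts from 1970, from 2000, or from anything else.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t start = time(NULL);

        /* ... some work happens here ... */

        time_t now = time(NULL);

        /* difftime() reports the difference in seconds without the
           caller ever knowing what absolute value time_t holds. */
        printf("elapsed: %.0f seconds\n", difftime(now, start));

        /* Calendar output goes through struct tm, again with no
           epoch assumption in the calling code. */
        printf("now: %s", asctime(localtime(&now)));

        return 0;
    }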
The only reference to anything that could possibly be construed as an epoch is in the specification of the year 1900 offset in tm_year.
Section 7.23.1 paragraph 4 states that tm_year represents the number of years elapsed since 1900.
So it would seem that, if the C standard is construed to endorse an epoch for time_t, that epoch would be 1900.
That strikes a hard blow to the idea that C time is Unix time, since 1970 is a bit off from 1900.
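For illustration, the only arithmetic the standard actually pins down is that offset: recovering a calendar year from a broken-down time means adding 1900 back, not assuming anything about 1970. A minimal, generic sketch:

    #include <time.h>

    /* Return the calendar year for a given time_t, using only the
       1900 offset that the standard specifies (or -1 if the value
       cannot be converted). */
    int calendar_year(time_t t)
    {
        struct tm *tmp = gmtime(&t);
        return (tmp != NULL) ? (tmp->tm_year + 1900) : -1;
    }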
Misconception 2 : It is not standards compliant
The answer is a question: compliant with which standard?
The AVR-Libc time library was not designed to meet the Unix operating system standard.
It was designed to meet the 'C' programming language standard.
Why Y2K?
The choice of the Y2K Epoch was not arbitrary at all.
The crucial items considered in the development process were:
- C-90 Standard compliance, as closely as possible
- Conservation of memory (Flash and RAM).
- Conservation of clock cycles.
Initially we actually did use the Unix epoch. But after getting it working, we were disappointed.
The code size was a bit larger than we had hoped, but worse than that, the performance was dismal.
Much of the execution time was spent in converting from time_t to struct tm (and back).
And the bulk of that was in accounting for leap years, using the traditional 'loop and test' algorithm.
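For reference, that traditional conversion looks roughly like the sketch below (a simplified reconstruction assuming a 1970 epoch, not the actual earlier code): the year is found by stepping forward one year at a time, testing for a leap year on every pass, so the cost grows with the distance from the epoch.

    #include <stdint.h>

    #define IS_LEAP(y)  ((((y) % 4 == 0) && ((y) % 100 != 0)) || ((y) % 400 == 0))

    /* Find the calendar year for a count of seconds since 1970-01-01
       by the traditional 'loop and test' method. */
    static uint16_t year_of(uint32_t seconds)
    {
        uint32_t days = seconds / 86400UL;
        uint16_t year = 1970;

        for (;;) {
            uint16_t year_days = IS_LEAP(year) ? 366U : 365U;
            if (days < year_days)
                break;
            days -= year_days;
            year++;
        }
        return year;
    }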
We reasoned that if we could do away with those loops, we would greatly enhance the performance.
Therefore, a review of the leap year calculus was in order.
The Gregorian Calendar (specified by the C standard) establishes a 400 year 'leap cycle'. That cycle is subdivided into 100 year century cycles, which are in turn subdivided into 4 year leap cycles.
The Gregorian calendar was established in 1582, with the first full 400 year cycle beginning in 1600.
That 400 year cycle ended, and the second began, in... the year 2000!
Beginning to get the picture?
Using the beginning of the leap cycle as our Epoch vastly simplifies our code, and thus reduces its 'footprint'. Unlike the majority of implementations, there are no loops in converting between
the two time types... it is a straightforward calculation that completes within a fixed number of CPU clock cycles.
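Concretely, with the epoch at the start of a 400 year cycle, the number of leap days in any span of whole years collapses to three integer divisions. Something along these lines (an illustration of the calculus, not the AVR-Libc source):

    #include <stdint.h>

    /* Leap days contained in the first 'years' whole years counted
       from 2000-01-01. Because 2000 opens a Gregorian 400 year cycle,
       the rule reduces to a loop-free, fixed-time calculation. */
    static uint32_t leap_days_since_2000(uint32_t years)
    {
        return (years + 3) / 4 - (years + 99) / 100 + (years + 399) / 400;
    }

The same idea, applied in both directions, is what removes the loops from the time_t / struct tm conversions.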
Being faster is always a plus. Completing within a deterministic time is a plus. Consuming less Flash is a plus. So what are the detractions?
- Can't represent dates before the year 2000 (but AVR-Libc was introduced after Y2K).
- Can't represent dates after 2136 (but 32-bit Unix time can't represent dates after 2038).
As always, send questions and comments, rants and raves to