#include <sys/time.h>   // for gettimeofday
#include <stdint.h>

typedef uint64_t Msec;  // "Msec" is a 64-bit unsigned integer type

static Msec EpochMsec(void)
{
    // This routine returns the epoch time in microseconds.
    struct timeval tval;
    gettimeofday(&tval, 0);
    return (((Msec) tval.tv_sec) * 1000000ull) + ((Msec) tval.tv_usec);
}
The above code will not survive the Y2038 transition. Does anyone know of a drop-in replacement which will handle the Y2038 problem? The best I've seen so far is some kludge which maps a post-2038 date into the 1970-2038 era, does the calculation, and then maps the result back. There must be a better way.
In the years immediately preceding 2000, Apple Computer's marketers made a big deal about Mac OS 9 being immune to Y2000 problems. The same guys are strangely silent about Y2038.
sje wrote:The above code will not survive the Y2038 transition.
By that point in time we will have 128-bit registers. Problem solved for a few zillion more years.
sje wrote:The above code will not survive the Y2038 transition.
I see no reason for them not to be silent, especially now that we know how wrong the Y2000 predictions were; that is why people do not believe similar claims about the Y2038 bug.
Note also that people did not start talking about the Y2000 bug 24 years in advance, so there is no reason to be talking about the Y2038 bug in 2014, 24 years early.
sje wrote:The above code will not survive the Y2038 transition.
Yes, it will. I just checked on my Linux machine and tv_sec has type time_t, which is a 64-bit integer. It has already been solved, at least on some implementations. It is likely the case that some implementations today still use 32-bit integers for time_t, but I don't think there is much risk of this actually being a problem in 2038.
It won't work on Mac OS/X on Intel, because sizeof(time_t) there is four.
This may not be too important for Apple as by 2038 their only products might be fashion and jewelry.
It will be a big problem for embedded systems which outnumber desktops and notebooks and which can't be easily upgraded.
For regular computers, fixing the libraries and recompiling applications is not enough; all, or nearly all, of the filesystems currently in use will have to be converted, along with all their backed-up data.
I think in 64-bit Windows time_t is a 64-bit integer and you should be fine. You are as likely to be using 32-bit Windows in 2038 as you are to be using a 16-bit OS now.
If you want a hack for your 32-bit systems, this will probably do:
gettimeofday should not be used to measure elapsed time. You should use clock_gettime(CLOCK_MONOTONIC) or an equivalent function of your OS instead. If the system time is adjusted (by a user or automatically), gettimeofday can return a wrong value.
On my Linux machines, I run the network time daemon ntpd, so there's never any need for a manual adjustment of the time. The Mac OS/X machines are the same. There are automatic adjustments, but these are done with the utmost subtlety, so that interval measurements using gettimeofday() can be trusted.
The people who maintain and improve ntpd and the network time system have an almost unnatural affection for accuracy and precision, so I have faith in their work. I'd need a personal atomic time clock on my desk to get better results.
If you let users adjust the system time, you already have a major security problem. Far worse than the potential time wrap problem Steven was talking about.