Tuesday, 03 March 2020
We perceive time as ordered and logical, so it feels natural to order events by timestamp. Our measurement of time, however, is imperfect and timestamps do not provide the strict order you might assume.
When you sort by timestamp, you risk confusing the real chain of events and perhaps introducing significant bugs. This page describes a few sources of confusion you might encounter.
What if two events have the same timestamp?
A log file presents no problem because the lines of the file provide a strict order, but import those events into a database and the original line order is lost. Any rows with equal timestamps are now in an undefined order.
The risk of a duplicate timestamp is affected by:

- The resolution of the hardware clock. A lower-resolution clock increases the number of collisions, so running your code on another platform may significantly alter the chance of duplicate timestamps.
- The resolution of timestamps in software. Do you store seconds since epoch (resolution 1 second)? Nanoseconds? A date string with HH:MM (resolution 1 minute)?
The lower your resolution, and the more frequent your events, the more likely you are to record identical timestamps for events in close proximity, potentially causing them to appear shuffled when sorted.
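You can see the collision happen in a few lines of Python (the event names and timestamp values here are purely illustrative):

```python
# Three events generated 100 ms apart, recorded at two different resolutions.
raw = [("debit", 1583193600.1), ("credit", 1583193600.2), ("refund", 1583193600.3)]

# At sub-second resolution, a sort recovers the true order...
hi = sorted(raw, key=lambda e: e[1])

# ...but truncated to whole seconds (as an integer "seconds since epoch"
# column would store them), all three events collapse to one timestamp,
# and a sort is free to return them in any order.
lo = [(name, int(ts)) for name, ts in raw]
timestamps = {ts for _, ts in lo}
print(len(timestamps))  # 1 distinct timestamp shared by 3 events
```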
What if the clock jumps backwards?
A clock is merely a device to measure time and, as such, requires calibration and adjustment. Manual adjustments - like a user naively changing a timezone or correcting a slow clock - are the most likely cause of a jump backwards, but automatic changes can also be to blame.
An automatic daylight-saving change can jump an event backwards a whole hour if you handle timezones incorrectly. We have to be particularly careful in the UK, where GMT can happily masquerade as UTC for half the year.
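A quick sketch of the UK pitfall using Python's zoneinfo module (assuming the IANA timezone database is available): London local time matches UTC in winter, but is an hour ahead in summer, so naively treating local timestamps as UTC silently shifts half the year's events.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

london = ZoneInfo("Europe/London")

# In winter, London time (GMT) happens to coincide with UTC...
winter = datetime(2020, 1, 15, 12, 0, tzinfo=london)
print(winter.utcoffset())  # 0:00:00

# ...but in summer (BST) it is an hour ahead, so a local timestamp
# mislabelled as UTC places the event an hour away from the truth.
summer = datetime(2020, 7, 15, 12, 0, tzinfo=london)
print(summer.utcoffset())  # 1:00:00
```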
Services like ntpd (the Network Time Protocol daemon) can also cause dramatic clock changes. Depending on configuration, a large drift in system time can cause ntpd to step immediately to the correct time (possibly backwards). Devices like the Raspberry Pi - frequently disconnected and lacking a real-time clock - are particularly vulnerable.
Monotonic clocks - guaranteed to never run backwards - do exist, but a timestamp from a monotonic clock is of little use between reboots, and useless to compare between machines. They are generally used to measure an interval on a single machine.
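Python exposes such a clock as time.monotonic(). A minimal sketch of the one job it is good at - measuring an interval on a single machine:

```python
import time

# time.time() reads the wall clock, which NTP or a user can move backwards.
# time.monotonic() is guaranteed never to run backwards on this machine,
# making it the right tool for measuring an elapsed interval.
start = time.monotonic()
time.sleep(0.1)  # ...do some work...
elapsed = time.monotonic() - start  # always >= 0, even if the wall clock jumped
print(f"elapsed: {elapsed:.3f}s")

# Note the absolute value is meaningless: the reference point is unspecified
# (often boot time), so it cannot be compared across reboots or machines.
```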
Jumps in time can cause problems, so services like ntpd often prefer to slow down or speed up the system clock until it gradually approaches the correct time (this is called 'slew' correction).
Google uses a similar approach for leap seconds, 'smearing' an extra second over a 24 hour period, instead of bamboozling software with a 61 second minute.
Even if you could start a timer on multiple machines at a known instant and stop them at another, they would likely measure subtly different elapsed times, because the clocks run at different speeds. The longer the interval, the more apparent manufacturing tolerances become. Adafruit advises that one PCF8523-based RTC, for example, "may lose or gain a second or two per day".
What about multiple machines?
You may be attracted to timestamps because they're easy to collect at multiple sites and add to an ordered series later. However, in addition to all of the above, you must now consider the disparity between multiple system clocks.
Replying to a chat message on a different machine, you might easily record a timestamp before the original message.
Sorting data by timestamp implies a causal relationship - that, say, a message happened before its reply, or a credit happened before a debit. Techniques that provide a strict - or at least causal - ordering of events should therefore be preferred.
What should you use instead?
The most foolproof alternative to timestamps is an incremental counter stored on a single machine. If there is only one instance of the software, or clients always submit to a central server, this is often the best choice.
Most databases provide an auto increment or sequence type that can provide a suitable value.
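In SQLite, for example, an INTEGER PRIMARY KEY column is an alias for the rowid, which the database assigns in increasing order as rows arrive - a ready-made event counter. A minimal sketch (table and column names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
# The id column aliases SQLite's rowid: each insert gets the next value,
# so id records true arrival order with no clock involved.
db.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, body TEXT)")
for body in ["message", "reply", "edit"]:
    db.execute("INSERT INTO events (body) VALUES (?)", (body,))

# Sorting by id recovers the insertion order exactly.
rows = db.execute("SELECT id, body FROM events ORDER BY id").fetchall()
print(rows)  # [(1, 'message'), (2, 'reply'), (3, 'edit')]
```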
If you need to generate points in a sequence at multiple sites, then you may need a more complex series of counters like Lamport timestamps or a vector clock. Distributed clocks like these provide a partial causal ordering of events and a means to detect conflicts (i.e. events that are seen as concurrent because they extend a shared point in history).
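A Lamport clock is small enough to sketch in full (class and method names here are my own): each site increments a counter on every local event and, on receiving a message, jumps past the sender's counter. The guarantee is one-way - if event A could have caused event B, A's timestamp is strictly smaller than B's.

```python
class LamportClock:
    """A minimal Lamport clock for one site in a distributed system."""

    def __init__(self):
        self.counter = 0

    def local_event(self):
        # Every event at this site ticks the counter.
        self.counter += 1
        return self.counter

    def send(self):
        # Sending is a local event; ship the counter with the message.
        return self.local_event()

    def receive(self, sender_counter):
        # Jump past everything the sender has seen, then count this event.
        self.counter = max(self.counter, sender_counter)
        return self.local_event()

alice, bob = LamportClock(), LamportClock()
msg_ts = alice.send()           # Alice's chat message is stamped 1
reply_ts = bob.receive(msg_ts)  # Bob's reply is stamped 2 - after the
print(msg_ts, reply_ts)         # message, even if Bob's wall clock is slow
```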
If your clients generate timestamps locally, but the data is only integrated by a central server (not shared peer-to-peer), your logical clock can be relatively simple, requiring only two peers.
Distributed clocks only help you detect concurrent events. Once detected, the problem of resolving conflicts is often domain-specific. Using the appropriate clock or data type will force you to handle these conflicts early. Remember, the conflicts were always present with timestamps - they were just not apparent.
Detecting and resolving conflicts can be as fancy and complex as you like - but, before you reach for a full version-control system like git, I suggest you try a distributed clock or simple counter first.
When are timestamps okay?
I'm only suggesting timestamps are a bad way to order causally linked events. Timestamps are still useful for:
- Presentation. Logical clocks don't mean a lot to humans. Adding a timestamp as part of the presentation (but not ordering) of data is often a good idea, as it lets us place entries in a wider context outside of a single application.
- Statistical analysis. Such data is often collected ad-hoc from multiple sources, and strictly ordering measurements in close proximity may not be important. Ask yourself: "If I shuffled a few events around, would my conclusions still be sound?"