Unix Timestamps in Scheduled Tasks and Cron Jobs

Cron jobs are simple on the surface: run this command at this time. But production scheduling gets complicated quickly. You need to track when a job last ran, detect if it's overdue, handle overlapping runs, and log everything with enough precision to debug failures after the fact. Unix timestamps are the practical tool for all of this.

Use the Unix Timestamp Converter to convert any timestamp to a human-readable date or to get the current Unix time. This article covers how timestamps are used in scheduling systems — from basic cron to more complex job queues.

Why Scheduled Tasks Use Unix Timestamps

A cron expression like 0 2 * * * tells a scheduler when to run a job. But it doesn't tell you anything about what happened during the run. For that, you need timestamps recorded at runtime: when the job started, when it ended, whether it succeeded.

Unix timestamps are the natural format for this because they're:

  • Unambiguous — no timezone confusion, no locale-dependent formatting
  • Comparable — you can subtract two timestamps to get elapsed seconds
  • Compact — a single integer stores in any database column or log line
  • Sortable — integer ordering equals chronological ordering

A scheduled task that writes started_at: 1712534400 to a database is recording something precise and queryable. A task that writes started_at: "April 8, 2026 at 2:00 AM" is recording something that requires parsing before it's useful.
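The "comparable" property is worth seeing concretely: duration is plain integer subtraction, with no date parsing involved.

```python
import time

started_at = int(time.time())        # record start as a Unix timestamp
# ... work happens here ...
completed_at = int(time.time())      # record completion the same way

elapsed = completed_at - started_at  # duration is plain integer subtraction
print("job took {}s".format(elapsed))
```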

Storing Last-Run Timestamps

The most common use of timestamps in scheduled tasks is tracking the last successful run. A simple pattern:

1. Job starts, reads last_run timestamp from storage
2. Job does its work
3. On success, writes current Unix timestamp to last_run
4. On failure, leaves last_run unchanged (or writes to a separate last_failed_at field)

# Shell example
LAST_RUN=$(cat /var/run/myjob.timestamp 2>/dev/null || echo 0)
NOW=$(date +%s)

run_the_job   # placeholder for the job's actual command; $? below reads its exit status
if [ $? -eq 0 ]; then
    echo "$NOW" > /var/run/myjob.timestamp
fi

This pattern makes it trivial to answer "when did this job last succeed?" — just read the file and convert the timestamp. It also makes it easy to detect staleness: if now - last_run > expected_interval, the job is overdue.
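The same pattern in Python, as a minimal sketch (the state-file path and helper names here are illustrative, not from any particular library):

```python
import os
import tempfile
import time

# Hypothetical state path; a real deployment might use /var/run/myjob.timestamp
STATE_FILE = os.path.join(tempfile.gettempdir(), "myjob.timestamp")

def read_last_run(path=STATE_FILE):
    """Return the stored last-run timestamp, or 0 if the job has never succeeded."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return 0

def record_success(path=STATE_FILE):
    """On success, overwrite the stored timestamp with the current Unix time."""
    with open(path, "w") as f:
        f.write(str(int(time.time())))
```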

Detecting Overdue or Missed Jobs

Cron doesn't detect its own failures. If a server is down during a scheduled run, cron doesn't retry and doesn't alert. If a job runs but exits with an error, cron doesn't mark it as failed. Detecting these situations requires external monitoring that checks timestamps.

A simple overdue check: if the current time minus last_run exceeds a threshold, something went wrong.

import time

EXPECTED_INTERVAL_SECONDS = 3600  # job should run hourly
TOLERANCE_SECONDS = 300           # allow 5 minutes of drift

last_run = get_last_run_timestamp()  # from DB or file
now = int(time.time())

if now - last_run > EXPECTED_INTERVAL_SECONDS + TOLERANCE_SECONDS:
    alert("Job overdue — last ran at {}".format(last_run))

The tolerance matters. Cron jobs don't always start exactly on schedule — system load, clock drift, and startup time mean a job scheduled for 02:00:00 might actually start at 02:00:04. Without a tolerance, a monitoring check that runs at 02:59:56 might see the job as overdue even though it ran fine at 02:00:04 and is due again in 4 seconds.

For hourly jobs, 5 minutes is a reasonable tolerance. For daily jobs, 30–60 minutes is typical.

Preventing Overlapping Runs With Timestamps

Long-running jobs can overlap if the next scheduled run starts before the current one finishes. A daily backup job that takes 2 hours is fine. One that takes 26 hours starts overlapping and eventually causes cascading failures.

Timestamps solve this with a simple lock pattern:

1. Job starts, reads started_at from a lock record
2. If started_at exists and now - started_at < timeout, another run is in progress — exit
3. If no lock or lock is expired, write current timestamp as started_at
4. Do work
5. Clear the lock on completion

import sys
import time

LOCK_TIMEOUT = 7200  # 2 hours — if job runs longer, assume it's stuck

lock_time = get_lock_timestamp()
now = int(time.time())

if lock_time and (now - lock_time) < LOCK_TIMEOUT:
    print("Job already running, started at {}".format(lock_time))
    sys.exit(0)

set_lock_timestamp(now)
# ... do work ...
clear_lock()

The timeout handles the case where a job crashes without clearing its lock. Without a timeout, a crashed job would block all future runs indefinitely. With a timestamp-based lock, a crashed run's lock expires after the timeout, and the next scheduled run can proceed.
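One caveat: the read-then-check-then-write sequence above leaves a small race window if two workers start at the same moment. A database can close it by doing the check and the write in a single atomic statement. A minimal sketch using SQLite, where the job_lock table and its schema are assumptions for illustration (a row with started_at = 0 means unlocked):

```python
import sqlite3
import time

LOCK_TIMEOUT = 7200  # seconds; matches the lock example above

conn = sqlite3.connect(":memory:")  # a real deployment would use a shared database
conn.execute("CREATE TABLE job_lock (job_name TEXT PRIMARY KEY, started_at INTEGER)")
conn.execute("INSERT INTO job_lock VALUES ('backup-job', 0)")  # 0 = unlocked

def try_acquire(job_name):
    """Atomically claim the lock: succeeds only if no unexpired lock exists."""
    now = int(time.time())
    cur = conn.execute(
        "UPDATE job_lock SET started_at = ? "
        "WHERE job_name = ? AND started_at < ?",
        (now, job_name, now - LOCK_TIMEOUT),
    )
    conn.commit()
    return cur.rowcount == 1  # exactly one row updated means we won the lock

def release(job_name):
    """Clear the lock on completion."""
    conn.execute("UPDATE job_lock SET started_at = 0 WHERE job_name = ?", (job_name,))
    conn.commit()
```

Because the expiry check and the write happen in one UPDATE, two simultaneous workers can't both win: the database serializes the statements, and only one sees an expired lock.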

Scheduling Future Jobs With Timestamps

Job queues (Sidekiq, Celery, BullMQ, RQ) often schedule future tasks by storing a Unix timestamp for when the job should execute. The queue worker polls for jobs where run_at <= current_timestamp.

-- Find jobs ready to run
SELECT * FROM scheduled_jobs
WHERE run_at <= EXTRACT(EPOCH FROM NOW())::int
  AND status = 'pending'
ORDER BY run_at ASC;

This is more flexible than cron for dynamic scheduling. Instead of "run every hour," you can say "run 24 hours after the user signs up" or "run 15 minutes after this payment failed." The job is inserted with run_at = NOW_UNIX + delay_seconds, and the worker picks it up when the time comes.
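As a sketch, the insert-and-poll cycle can be modeled in Python with an in-memory list standing in for the scheduled_jobs table (the schedule and due_jobs helpers are illustrative):

```python
import time

def schedule(jobs, name, delay_seconds):
    """Insert a job with run_at = now + delay, mirroring the SQL pattern above."""
    jobs.append({
        "name": name,
        "run_at": int(time.time()) + delay_seconds,
        "status": "pending",
    })

def due_jobs(jobs):
    """The polling query: pending jobs whose run_at has passed, oldest first."""
    now = int(time.time())
    return sorted(
        (j for j in jobs if j["status"] == "pending" and j["run_at"] <= now),
        key=lambda j: j["run_at"],
    )
```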

Retry logic also works this way. After a failure, reschedule with exponential backoff:

attempt = job.attempt_count  # 1 after the first failure
delay = min(2 ** (attempt - 1) * 60, 3600)  # 1min, 2min, 4min... up to 1hr
job.run_at = int(time.time()) + delay
job.save()

After 1 attempt: retry in 60 seconds. After 2: 120 seconds. After 3: 240 seconds. The cap at 3600 prevents indefinite delays.
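A common refinement, not shown above, is adding random jitter so that many jobs failing at the same moment don't all retry in the same second. A sketch (next_retry_at is a hypothetical helper):

```python
import random
import time

def next_retry_at(attempt, base=60, cap=3600):
    """Exponential backoff with full jitter: delay is uniform in [0, capped backoff]."""
    backoff = min(base * 2 ** (attempt - 1), cap)
    return int(time.time()) + random.randint(0, backoff)
```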

Logging Job Runs With Unix Timestamps

Job run logs are most useful when they include precise timing. A log line like:

[1712534400] backup-job started
[1712534447] backup-job completed in 47s, 3.2GB written

is immediately useful for debugging. You can convert 1712534400 to a human-readable date using the Unix Timestamp Converter, correlate it with other system logs, and calculate the exact duration without parsing date strings.

The alternative — logging formatted strings like "2026-04-08 02:00:00 UTC" — is readable to humans but fragile for machines. Different log shippers format dates differently, timezone conversions introduce errors, and string comparison is slower than integer comparison for time-range queries.

A common pattern is to log both: the raw timestamp for machine readability and the formatted date for human readability.

started_at=1712534400 started_at_human="2026-04-08T02:00:00Z" duration_s=47
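Producing that dual format takes a few lines in Python (log_line is an illustrative helper, not a standard API):

```python
import time
from datetime import datetime, timezone

def log_line(job, duration_s):
    """Emit both the raw timestamp and an ISO 8601 rendering of the same instant."""
    ts = int(time.time())
    human = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return 'job={} started_at={} started_at_human="{}" duration_s={}'.format(
        job, ts, human, duration_s
    )
```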

Measuring Job Duration and Performance Over Time

Timestamps enable performance tracking across runs. Record started_at and completed_at for each run, and you can query:

  • Average duration over the last 30 runs
  • Longest run in the past week
  • Whether duration is trending up (possible performance regression)

SELECT
    AVG(completed_at - started_at) AS avg_duration_seconds,
    MAX(completed_at - started_at) AS max_duration_seconds,
    COUNT(*) AS run_count
FROM job_runs
WHERE job_name = 'nightly-report'
  AND started_at > EXTRACT(EPOCH FROM NOW())::int - (30 * 86400);

This kind of query is only possible because durations are stored as simple integer subtraction. If you'd stored formatted dates, you'd need string parsing before arithmetic.

Timestamp Precision: Seconds vs Milliseconds in Scheduling

For most scheduled tasks, second-level precision is sufficient. A job scheduled to run at 1712534400 doesn't need sub-second accuracy.

But for high-frequency task queues — jobs that might run hundreds of times per second — millisecond timestamps matter. If two jobs are inserted at the same second and you sort by timestamp to determine processing order, second-level timestamps produce ties. Millisecond timestamps preserve insertion order within each second.

JavaScript's Date.now() returns milliseconds. Python's time.time() returns a float with microsecond precision. Most databases support microsecond-precision timestamps. The choice of precision should match your scheduling frequency — milliseconds for high-volume queues, seconds for standard cron-like jobs.
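For reference, the two precisions side by side in Python:

```python
import time

seconds = int(time.time())        # second precision: fine for cron-style jobs
millis = int(time.time() * 1000)  # millisecond precision: far fewer ordering ties
```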

Handling Daylight Saving Time in Scheduled Tasks

This is where many scheduling bugs hide. A cron job configured as 0 2 * * * runs at 2:00 AM local time. When clocks spring forward, 2:00 AM doesn't exist — the clock goes from 1:59 AM to 3:00 AM. The job either skips or runs at 3:00 AM depending on the cron implementation.

When clocks fall back, 2:00 AM occurs twice. The job might run twice.

The solution: run cron in UTC. 0 2 * * * in UTC is always 2:00 UTC — no ambiguity, no DST surprises. The job time shifts relative to local time twice a year, but it always runs exactly once.

Unix timestamps are UTC by definition, which is why they're immune to this problem. A job scheduled to run at timestamp 1712534400 runs at that exact moment regardless of what timezone the server is in or what DST offset applies at that time. This is the core argument for using timestamp-based schedulers over cron for anything where exact timing matters.

For jobs where local time semantics matter — "run at 9 AM business hours" — you need explicit timezone handling, not just UTC. Store the target timezone alongside the schedule, convert to UTC at schedule creation time, and recompute if the timezone's DST rules change.
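A sketch of that schedule-creation step in Python, using the standard-library zoneinfo module (the 9 AM target and the America/New_York zone are example choices):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_9am_utc_timestamp(tz_name="America/New_York"):
    """Return the Unix timestamp of the next 9:00 AM in the given timezone.

    The zone name is stored with the schedule; zoneinfo applies the
    correct DST offset when converting to a Unix timestamp.
    """
    tz = ZoneInfo(tz_name)
    now_local = datetime.now(tz)
    target = now_local.replace(hour=9, minute=0, second=0, microsecond=0)
    if target <= now_local:
        target += timedelta(days=1)  # 9 AM already passed today; take tomorrow's
    return int(target.timestamp())
```

Because the conversion happens at schedule-creation time, a "9 AM" job stays at 9 AM local even across DST transitions; the stored UTC timestamp shifts instead.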