Nanoseconds, Microseconds, and Milliseconds Explained — With Real-World Examples
Most people have an intuitive sense of seconds, minutes, and hours. But the sub-second time units — milliseconds, microseconds, and nanoseconds — are where computing, networking, and precision measurement live. These aren't just smaller versions of a second: each unit represents a completely different scale of activity, and confusing them in the wrong context can mean the difference between a fast system and a broken one.
The Time Converter handles conversions between all time units including milliseconds and seconds. This article covers the sub-second units in detail: what they measure, how they relate to each other, and where each one actually shows up.
The Units and Their Relationships
Starting from a second and working down:
| Unit | Symbol | Fraction of a second | Power of 10 |
|---|---|---|---|
| Millisecond | ms | 1/1,000 | 10⁻³ |
| Microsecond | µs | 1/1,000,000 | 10⁻⁶ |
| Nanosecond | ns | 1/1,000,000,000 | 10⁻⁹ |
| Picosecond | ps | 1/1,000,000,000,000 | 10⁻¹² |
Each step is exactly 1,000× smaller. 1 millisecond = 1,000 microseconds. 1 microsecond = 1,000 nanoseconds. So 1 millisecond = 1,000,000 nanoseconds.
To put 1 nanosecond in perspective: light travels approximately 30 centimeters (about 1 foot) in 1 nanosecond. In the time it takes light to cross a room (roughly 10 nanoseconds), a modern CPU can execute 10–30 instructions.
Milliseconds (ms): Human-Scale Computing
A millisecond is 1/1,000 of a second. This is the unit where human perception begins to matter in technology.
Human reaction time averages 150–250 ms to a visual stimulus. This sets the bar for "instant" in user interface design — anything under 100 ms feels immediate to a person, 100–300 ms is noticeable but acceptable, and above 300 ms users perceive a delay.
Web page load times are measured in milliseconds. A page that loads in 500 ms feels fast; 2,000 ms (2 seconds) starts to feel slow. Google's Core Web Vitals target a Largest Contentful Paint under 2,500 ms.
Network latency within a data center is typically 1–5 ms. Cross-continent latency (New York to London) is typically 70–90 ms. New York to Tokyo is around 150–170 ms. These numbers set the floor for how fast any network operation can be, regardless of how fast the server processes it.
Database queries in a well-optimized system complete in 1–50 ms. A query taking 500 ms is slow; above 1,000 ms (1 second) is usually a problem.
Audio buffers in music production software are set in milliseconds. A buffer of 10 ms gives a 10 ms latency between playing a note and hearing it — imperceptible. A 50 ms buffer is noticeable when playing a keyboard instrument live. Most recording setups target 5–20 ms.
Frame time in games is the milliseconds per frame: a 60 fps game renders one frame every 16.67 ms. A 120 fps game renders one frame every 8.33 ms. Frame time is where game developers measure performance.
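Both of those figures fall out of simple arithmetic, and it is worth seeing it once. A minimal Python sketch converting a frame rate to a per-frame budget and an audio buffer size to latency (the 256-sample buffer and 48 kHz sample rate are illustrative values, not taken from any particular product):

```python
def frame_time_ms(fps: float) -> float:
    """Milliseconds available to render one frame at a given frame rate."""
    return 1_000 / fps

def buffer_latency_ms(buffer_samples: int, sample_rate_hz: int) -> float:
    """Latency in milliseconds added by an audio buffer of a given size."""
    return buffer_samples / sample_rate_hz * 1_000

print(f"{frame_time_ms(60):.2f} ms per frame at 60 fps")         # 16.67
print(f"{frame_time_ms(120):.2f} ms per frame at 120 fps")        # 8.33
print(f"{buffer_latency_ms(256, 48_000):.2f} ms audio latency")   # ~5.33
```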
Microseconds (µs): Network and Kernel Time
A microsecond is 1/1,000,000 of a second — 1,000 times smaller than a millisecond. This is where high-performance computing, low-latency networking, and operating system internals operate.
High-frequency trading (HFT) operates at microsecond latency. Co-located trading systems at major exchanges typically achieve round-trip latencies of 1–10 µs. A 100 µs advantage can translate to millions in profit in fast markets.
SSD (solid-state drive) access time is typically 50–150 µs for random reads. This compares to 5–10 ms for HDDs — SSDs are 50–100× faster in this measure. NVMe SSDs are even faster, with access times of 20–50 µs.
CPU context switches — the operating system switching between processes — take 1–10 µs. This overhead is why high-performance systems minimize context switching through event-driven architectures or dedicated CPU cores.
Memory access (RAM) takes about 50–100 ns (nanoseconds), so a single cache miss to RAM is still a nanosecond-scale event. What does land in the microsecond range is the operating system's memory management: a minor page fault handled entirely in the kernel costs on the order of 1–10 µs, and a major page fault that has to read a page from an SSD costs tens to hundreds of microseconds.
Ethernet frame transmission at 10 Gbps takes about 1.2 µs for a 1,500-byte frame. The propagation delay (signal travel time) between two devices in the same data center rack is only tens of nanoseconds; across a large data center hall it can reach a few hundred nanoseconds.
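Both numbers follow directly from the link speed and the cable length. A rough Python sketch (the frame size and link speed are the values from the paragraph above; the 2/3-of-light-speed figure for copper is an approximation):

```python
def serialization_delay_us(frame_bytes: int, link_gbps: float) -> float:
    """Time to clock a frame onto the wire, in microseconds."""
    return frame_bytes * 8 / (link_gbps * 1e9) * 1e6

def propagation_delay_ns(cable_m: float, fraction_of_c: float = 2 / 3) -> float:
    """Signal travel time through a cable, in nanoseconds."""
    c = 299_792_458  # speed of light in m/s
    return cable_m / (fraction_of_c * c) * 1e9

print(f"{serialization_delay_us(1_500, 10):.2f} µs for a 1,500-byte frame at 10 Gbps")  # ~1.20
print(f"{propagation_delay_ns(3):.1f} ns through a 3 m cable")                          # ~15.0
```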
Nanoseconds (ns): Hardware-Level Timing
A nanosecond is 1/1,000,000,000 of a second — 1,000 times smaller than a microsecond. At this scale, the physical speed of light becomes a meaningful constraint.
CPU clock cycles are measured in nanoseconds. A 3 GHz processor completes one clock cycle every 0.33 ns. A single instruction typically takes 1–4 clock cycles (0.33–1.3 ns) in a modern pipelined CPU.
CPU cache access times:
- L1 cache: ~1 ns (2–4 cycles)
- L2 cache: ~4 ns (10–15 cycles)
- L3 cache: ~10–40 ns (30–100 cycles)
- RAM: ~50–100 ns (150–300 cycles)
This is why CPU cache design is critical for performance — a program that fits in L1 cache runs dramatically faster than one that regularly misses to RAM.
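The nanosecond figures above come from dividing cycle counts by the clock frequency. A small sketch using the 3 GHz figure from earlier in this section and rough mid-range cycle counts from the list (both are approximations, not measurements):

```python
def cycles_to_ns(cycles: float, clock_ghz: float = 3.0) -> float:
    """Convert a cycle count into nanoseconds at a given clock frequency."""
    return cycles / clock_ghz

# Rough mid-range cycle counts from the list above, at 3 GHz
for level, cycles in [("L1", 3), ("L2", 12), ("L3", 60), ("RAM", 200)]:
    print(f"{level}: {cycles} cycles ≈ {cycles_to_ns(cycles):.1f} ns")
```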
Memory bus speed is rated in nanoseconds. DDR5-6400 memory transfers data at 6,400 MT/s on a 3,200 MHz clock, so one clock cycle lasts about 0.31 ns. Memory timings such as "CL32" mean the column address strobe (CAS) latency is 32 clock cycles, which works out to roughly 10 ns before the first data arrives.
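That 10 ns figure is just the CAS cycle count multiplied by the clock period. A sketch (DDR transfers twice per clock, so the clock frequency is half the MT/s rating; the CL values are typical examples, not a spec lookup):

```python
def cas_latency_ns(cl_cycles: int, transfer_rate_mt_s: int) -> float:
    """CAS latency in nanoseconds: CL cycles times the clock period."""
    clock_mhz = transfer_rate_mt_s / 2       # DDR: two transfers per clock
    clock_period_ns = 1_000 / clock_mhz
    return cl_cycles * clock_period_ns

print(cas_latency_ns(32, 6_400))  # 10.0 ns for DDR5-6400 CL32
print(cas_latency_ns(16, 3_200))  # 10.0 ns for DDR4-3200 CL16
```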
Network cables and PCIe propagation delays are in the nanosecond range. Signal travels through a copper cable at about 2/3 the speed of light — approximately 20 cm/ns. A 1-meter cable introduces about 5 ns of propagation delay.
GPS timing requires nanosecond precision. GPS satellites need to synchronize their clocks to within 20–30 ns to achieve meter-level positioning accuracy. An error of just 10 ns in timing translates to about 3 meters of position error.
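The 10 ns to 3 meter relationship is the speed of light again. A one-line check in Python:

```python
C = 299_792_458  # speed of light in m/s

def ranging_error_m(timing_error_ns: float) -> float:
    """Approximate distance error caused by a clock error of the given size."""
    return C * timing_error_ns * 1e-9

print(f"{ranging_error_m(10):.2f} m for a 10 ns timing error")  # ≈ 3 m
print(f"{ranging_error_m(1):.2f} m for a 1 ns timing error")    # ≈ 0.3 m
```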
Converting Between Sub-Second Units
The conversions follow the 1,000× pattern:
| From | To | Multiply by |
|---|---|---|
| Seconds | Milliseconds | × 1,000 |
| Milliseconds | Seconds | ÷ 1,000 |
| Milliseconds | Microseconds | × 1,000 |
| Microseconds | Milliseconds | ÷ 1,000 |
| Microseconds | Nanoseconds | × 1,000 |
| Nanoseconds | Microseconds | ÷ 1,000 |
| Seconds | Nanoseconds | × 1,000,000,000 |
Common conversions:
- 1 second = 1,000 ms = 1,000,000 µs = 1,000,000,000 ns
- 100 ms = 100,000 µs = 100,000,000 ns
- 50 µs = 0.05 ms = 50,000 ns
- 500 ns = 0.5 µs = 0.0005 ms
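In code, the least error-prone approach is to convert everything to a single base unit and derive other units from it. A minimal sketch using nanoseconds as the base (the function name and unit keys are illustrative, not from any particular library):

```python
_NS_PER_UNIT = {
    "s": 1_000_000_000,
    "ms": 1_000_000,
    "us": 1_000,
    "ns": 1,
}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert between s, ms, µs ("us"), and ns by going through nanoseconds."""
    return value * _NS_PER_UNIT[from_unit] / _NS_PER_UNIT[to_unit]

print(convert(100, "ms", "us"))  # 100000.0
print(convert(50, "us", "ms"))   # 0.05
print(convert(500, "ns", "us"))  # 0.5
```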
For conversions from these small units up to minutes, hours, and days, the Time Converter handles the full chain automatically.
Why Precision Matters in Code
When working with timestamps and timing in programming, unit confusion is a frequent source of bugs. The most common:
Seconds vs milliseconds in APIs. Unix timestamps are in seconds; JavaScript's Date.now() returns milliseconds. Passing a milliseconds value where seconds are expected creates dates tens of thousands of years in the future; passing seconds where milliseconds are expected creates dates in January 1970.
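One defensive habit is to normalize every timestamp to a single unit at the boundary of your system. A sketch in Python (the magnitude threshold is a heuristic that only works for timestamps in the current era, not a standard API):

```python
import time
from datetime import datetime, timezone

def to_unix_seconds(ts: float) -> float:
    """Normalize a Unix timestamp that may be in seconds or milliseconds.

    Heuristic (assumption): anything above 1e11 is too large to be a
    plausible seconds-since-1970 value, so treat it as milliseconds.
    """
    return ts / 1_000 if ts > 1e11 else ts

now_ms = time.time() * 1_000          # milliseconds, like JavaScript's Date.now()
print(datetime.fromtimestamp(to_unix_seconds(now_ms), tz=timezone.utc))
```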
Sleeping for the wrong duration. time.sleep(1) in Python sleeps for 1 second. In some frameworks, the equivalent call takes milliseconds or microseconds. Always check the documentation for the expected unit.
Insufficient timer resolution. If you're benchmarking code with a timer that has millisecond resolution but your operations take 10–50 µs, the measurements are meaningless; you need a higher-resolution clock. time.perf_counter() in Python and performance.now() in JavaScript both provide sub-millisecond resolution.
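For operations in the microsecond range, the usual pattern is to repeat the operation many times against a high-resolution clock and divide. A sketch using Python's nanosecond counter (the workload and iteration count are placeholders):

```python
import time

def average_call_us(fn, iterations: int = 100_000) -> float:
    """Average time per call in microseconds, measured with a high-resolution clock."""
    start = time.perf_counter_ns()
    for _ in range(iterations):
        fn()
    return (time.perf_counter_ns() - start) / iterations / 1_000  # ns -> µs

# Placeholder workload: a dictionary lookup
table = {i: i * i for i in range(1_000)}
print(f"{average_call_us(lambda: table[500]):.3f} µs per call")
```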
Network timeout misconfiguration. A timeout set to 100 might mean 100 milliseconds or 100 seconds depending on the library. 100 seconds is a very long timeout for a web request; 100 milliseconds may be too short for a database query under load.
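One way to avoid the ambiguity is to put the unit in the constant's name and convert explicitly at the call site. A sketch using Python's requests library, which expects its timeout in seconds (the URL is a placeholder):

```python
import requests

TIMEOUT_MS = 100  # the unit lives in the name

response = requests.get(
    "https://example.com/api/health",   # placeholder endpoint
    timeout=TIMEOUT_MS / 1_000,         # requests expects seconds
)
print(response.status_code)
```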
Getting the unit right isn't pedantic — it's the difference between a system that behaves correctly and one that fails in subtle, hard-to-debug ways.


