
From Copper to Light: How We're Still Fighting the Speed of Light

Scott Morrison, November 15, 2025
Tags: fiber optics, network latency, bandwidth, wavelength division multiplexing, speed of light, copper networking, hollow core fiber, bandwidth-delay product, network infrastructure, physical layer
Network physical media evolved from copper carrying kilobits at a fraction of light speed to fiber carrying terabits at 67% of light speed, with hollow core fiber now approaching 99% by carrying light through air instead of glass. Bandwidth grew exponentially through wavelength division multiplexing and advanced modulation, but latency improvements remain limited by physics, creating fundamental constraints on TCP throughput through bandwidth-delay product and making those extra microseconds worth millions for latency-sensitive applications.

The history of networking physical media is a race against physics. We started with copper wires carrying electrical signals at a fraction of the speed of light, moved to fiber optics carrying light at two-thirds the speed of light in glass, and now we're developing hollow-core fiber that gets us to 99% of light speed in vacuum. Along the way we invented increasingly clever multiplexing schemes: first cramming multiple signals into the same wire by dividing time, then splitting light into different colors and sending dozens of wavelengths down the same fiber, and now we're even using the physical space inside fiber to carry multiple independent signals. Bandwidth has exploded from kilobits per second in the 1960s to terabits per second today. Yet latency, the time it takes signals to propagate across distance, remains stubbornly bound by the speed of light. This fundamental limit affects everything from TCP throughput to high-frequency trading, and it's why we're still innovating in physical media despite having "enough" bandwidth for most purposes. Because when you're moving bits across continents, physics matters more than protocols.

Let's explore the evolution of physical networking media, the clever tricks we've used to multiply capacity, why latency matters more than you might think, and how we're still fighting to get closer to the ultimate speed limit.

Copper: The Beginning

The earliest data communications used what was available: copper telegraph and telephone wires. These systems carried electrical signals, and their characteristics fundamentally shaped early networking.

Coaxial Cable: The Original High-Speed Link

Coaxial cable (a center conductor surrounded by insulating layer, metallic shield, and outer jacket) was the first medium specifically designed for high-frequency signals. Different types served different purposes:

Thicknet (10BASE5): The original Ethernet used thick coaxial cable (0.4 inch diameter) that could run 500 meters. It was rigid, expensive, and painful to work with. Installers drilled through the outer shield to attach "vampire taps" that pierced the cable to make connections. These taps frequently caused problems. 10BASE5 ran at 10 Mbps, which seemed blazing fast in 1980.

Thinnet (10BASE2): A more flexible coaxial cable (0.2 inch diameter) that made installation easier. Maximum distance dropped to 185 meters, but it was cheaper and easier to work with. Still 10 Mbps. Networks used BNC connectors and T-junctions to daisy-chain devices. Remove one connector and the entire segment went down.

Cable Broadband: Coaxial cable found a second life in cable television and cable Internet. Modern DOCSIS 3.1 can push 10 Gbps downstream over coax, though this requires sophisticated modulation and signal processing. The key insight was that coax has wide bandwidth (up to 1 GHz+), and if you can modulate signals cleverly enough, you can pack in enormous amounts of data.

Why coax eventually lost: Coax works, but it has fundamental limitations. Attenuation increases with frequency and distance, limiting both speed and reach. Installation is more difficult than twisted pair. Most critically, switching topologies (hubs and switches) didn't work well with coax's daisy-chain architecture. When twisted pair Ethernet arrived, coax's days were numbered for data networking.

Twisted Pair: The Workhorse

Twisted pair copper wiring (pairs of insulated copper wires twisted together to reduce electromagnetic interference) became the dominant copper medium and remains so today for short-range networking.

The Evolution:

Cat3 (Category 3): Used for 10BASE-T Ethernet (10 Mbps) and telephone systems. Still found in old building infrastructure for voice lines. Maximum 100 meters.

Cat5/Cat5e: The explosion of Fast Ethernet (100 Mbps) and later Gigabit Ethernet (1 Gbps) drove Cat5 and its enhanced version Cat5e. Cat5e supports 1 Gbps over 100 meters using all four pairs. This became the standard for enterprise and residential installations in the 2000s.

Cat6/Cat6a: Designed for 10 Gigabit Ethernet (10GBASE-T). Cat6 does 10 Gbps for 55 meters, Cat6a extends this to 100 meters. The cables are thicker (tighter twists, better shielding) and more expensive. Installation requires more care to maintain twist geometry.

Cat7/Cat8: Cat8 supports 25 and 40 Gbps (25GBASE-T and 40GBASE-T), but only over runs up to about 30 meters; Cat7, rated to 600 MHz, saw little adoption for Ethernet. At this point you're really pushing copper's limits. Used in data centers for short runs, but most high-speed applications move to fiber.

Why twisted pair succeeded: It's cheap, flexible, easy to terminate (RJ-45 connectors are simple), and supports Power over Ethernet (delivering power and data on the same cable). For runs up to 100 meters at speeds up to 10 Gbps, it's hard to beat.

The copper ceiling: Twisted pair hits fundamental physical limits around 10-25 Gbps at 100 meters. Higher frequencies suffer more attenuation, crosstalk between pairs increases, and electromagnetic interference becomes problematic. To go faster or farther, you need fiber.

Time Division Multiplexing in Copper

Before we had fiber, telecommunications companies needed to carry multiple phone calls over the same copper line. Time Division Multiplexing (TDM) was the solution.

How TDM works: Divide time into slots. Each user gets a fixed slot in a repeating cycle. If you have 8 users and 1 Mbps capacity, each user gets 125 kbps in their time slot, whether they're sending data or not.
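The fixed-slot behavior is easy to sketch in Python. This is a toy model, not a real T-carrier framer; the slot size and user data are illustrative:

```python
def tdm_frames(users, slot_bytes=2, n_frames=3):
    """Fixed-slot TDM: every user owns one slot per frame.
    Idle users still consume their slot (padded with zero bytes)."""
    queues = [bytearray(u) for u in users]
    frames = []
    for _ in range(n_frames):
        frame = bytearray()
        for q in queues:
            slot = bytes(q[:slot_bytes]).ljust(slot_bytes, b"\x00")
            del q[:slot_bytes]
            frame += slot  # the slot is transmitted even when empty
        frames.append(bytes(frame))
    return frames

frames = tdm_frames([b"hello!", b"", b"hi"])
print(frames[0])  # b'he\x00\x00hi' -- user 2's slot is wasted
```

The padded middle slot is exactly the inefficiency that statistical multiplexing avoids.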

T-Carrier System: The classic TDM hierarchy developed by Bell Labs:

  • T1: 24 voice channels, 1.544 Mbps (each channel is 64 kbps)
  • T3: 28 T1s multiplexed together, 44.736 Mbps
  • T4 and beyond: Further multiplexing, but rarely deployed

E-Carrier System: The European equivalent with different multiplexing ratios:

  • E1: 32 channels, 2.048 Mbps
  • E3: 16 E1s, 34.368 Mbps
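The rates in both hierarchies fall straight out of the channel arithmetic; a quick sanity check:

```python
FRAMES_PER_SEC = 8000  # one frame per 125 us (8 kHz voice sampling)

# Each channel: 8 bits per frame x 8000 frames/s = 64 kbps
assert 8 * FRAMES_PER_SEC == 64_000

# T1: 24 channels x 8 bits per frame, plus 1 framing bit per frame
t1_bps = (24 * 8 + 1) * FRAMES_PER_SEC
assert t1_bps == 1_544_000

# E1: 32 time slots x 8 bits per frame (slots 0 and 16 carry framing
# and signalling, leaving 30 usable voice channels)
e1_bps = 32 * 8 * FRAMES_PER_SEC
assert e1_bps == 2_048_000

print(t1_bps, e1_bps)
```

(T3's 44.736 Mbps is more than 28 × 1.544 Mbps because the multiplexing stage adds its own framing overhead.)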

The TDM problem: Fixed allocation wastes bandwidth. If a user isn't transmitting during their time slot, that capacity sits idle. Statistical multiplexing (what packet switching does) is far more efficient. This is why packet-switched networks like Ethernet and IP replaced circuit-switched TDM networks.

TDM was critical for the telephone network for decades, but it's a dead-end technology. Modern networks use packet switching for efficiency and fiber for capacity.

Fiber Optics: The Light Revolution

Fiber optic cables carry signals as light pulses instead of electrical signals. This fundamental change unlocked enormous capacity and range.

The Basics of Fiber

A fiber optic cable has:

  • Core: Hair-thin glass (8-62.5 microns diameter) that carries light
  • Cladding: Glass with lower refractive index, confining light to the core through total internal reflection
  • Coating: Protective layers around the glass

Single-Mode Fiber (SMF): Core diameter about 8-10 microns (incredibly thin). Light travels in essentially a straight line down the fiber. Can run tens to hundreds of kilometers without amplification. Used for long-distance and high-bandwidth applications.

Multi-Mode Fiber (MMF): Larger core (50 or 62.5 microns). Light bounces down the fiber at various angles (multiple modes). Cheaper transceivers (LEDs instead of lasers), but limited distance (300-550 meters typically) due to modal dispersion. Used for short runs in buildings and data centers.

Why fiber wins:

  1. Bandwidth: Fiber's usable bandwidth is measured in terahertz. Copper maxes out in gigahertz. This gives fiber orders of magnitude more capacity.
  2. Distance: Fiber can run 40-100+ km between amplifiers/repeaters. Copper needs repeaters every 100 meters for high-speed signals.
  3. No electromagnetic interference: Light isn't affected by EMI, so fiber works in electrically noisy environments.
  4. Security: Fiber doesn't radiate signal, making it harder to tap than copper.
  5. Weight and size: Fiber cables are thinner and lighter than equivalent copper for the same capacity.

Why copper still exists: Fiber transceivers are more expensive than copper Ethernet ports, fiber can't carry power (no PoE equivalent), termination requires specialized equipment, and for short runs (under 100m) at lower speeds (1-10 Gbps), copper is cheaper and simpler.

Wavelength Division Multiplexing: The Fiber Multiplier

Early fiber systems used one wavelength (color) of light per fiber. This works, but it's wasteful: the fiber has terahertz of usable bandwidth, and a single channel occupies maybe 10 GHz of it. Enter Wavelength Division Multiplexing (WDM).

WDM basics: Send multiple wavelengths (colors) of light down the same fiber simultaneously. Each wavelength is an independent channel. At the receiving end, prisms or diffraction gratings split the light back into individual wavelengths.

The Evolution:

DWDM (Dense WDM): The ITU-T defined a grid of wavelengths (channels) typically in the C-band (1530-1565 nm) with 50 or 100 GHz spacing between channels. Originally 8-16 channels, modern systems support 80-96 channels. Each channel can carry 100-800 Gbps, giving total fiber capacity of 10-50+ terabits per second.

CWDM (Coarse WDM): Wider spacing (20 nm) between wavelengths, cheaper optics, but fewer channels (usually 8-18). Used for metro and shorter-distance applications where cost matters more than maximum capacity.

The Math of DWDM: Let's say you have 80 channels with 100 GHz spacing, each running at 400 Gbps. That's 32 terabits per second down a single hair-thin fiber. This is why fiber completely dominates long-haul networking.
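That multiplication is worth making concrete. A sketch using the channel counts and per-channel rates quoted above:

```python
def fiber_capacity_tbps(channels, per_channel_gbps):
    """Total DWDM fiber capacity: channel count x per-channel rate."""
    return channels * per_channel_gbps / 1000

# Early DWDM: 8 channels at 2.5 Gbps each
print(fiber_capacity_tbps(8, 2.5))   # 0.02 Tbps
# Modern system: 80 channels at 400 Gbps each
print(fiber_capacity_tbps(80, 400))  # 32.0 Tbps
```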

How it works in practice: Each wavelength is generated by a laser tuned to a specific frequency. These are combined with a wavelength multiplexer onto a single fiber. At the far end, a demultiplexer splits them apart. In between, optical amplifiers (typically EDFAs, Erbium-Doped Fiber Amplifiers) boost all wavelengths simultaneously without converting to electrical signals.

The gain: WDM turned a single fiber into 40, 80, or 96 independent channels. Submarine cables laid in the 1990s with 2-8 wavelengths have been upgraded to 80+ wavelengths by replacing only the terminal equipment, not the cable itself. This multiplied capacity by 10-40x without laying new cable.

Coherent Optical Transmission: More Bits Per Symbol

Modern fiber systems don't just send on/off light pulses. They use coherent optical modulation, encoding data in the phase and amplitude of the light wave, not just intensity.

Traditional On-Off Keying (OOK): Light on = 1, light off = 0. Simple but inefficient, only 1 bit per symbol.

Phase and Amplitude Modulation (QPSK, 16-QAM, 64-QAM): Encode data in the phase and amplitude of the carrier wave. QPSK (Quadrature Phase-Shift Keying) encodes 2 bits per symbol. 16-QAM encodes 4 bits per symbol. 64-QAM encodes 6 bits per symbol.

Combined with polarization multiplexing (using both polarization states of light independently), modern systems achieve spectral efficiencies of 6-8 bits per second per hertz or more.

This is the same idea as WiFi and 4G/5G using complex modulation schemes, but applied to fiber. It's how we got from 10 Gbps per wavelength in the early 2000s to 400-800 Gbps per wavelength today.
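Bits per symbol follows directly from the constellation size, and dual polarization doubles the per-symbol payload. A sketch (the 64 Gbaud symbol rate is an illustrative assumption, and real systems give some of this raw rate back to forward error correction):

```python
import math

def bits_per_symbol(constellation_points, polarizations=2):
    """log2(constellation points) bits per symbol,
    times the number of independent polarization states."""
    return polarizations * math.log2(constellation_points)

assert bits_per_symbol(4) == 4.0    # dual-polarization QPSK
assert bits_per_symbol(16) == 8.0   # dual-polarization 16-QAM

# Raw line rate at an assumed 64 Gbaud symbol rate:
symbol_rate = 64e9
print(symbol_rate * bits_per_symbol(16) / 1e9)  # 512.0 Gbps before FEC
```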

Spatial Division Multiplexing: Using Physical Space

Even with DWDM maxing out the wavelength space and coherent modulation squeezing more bits per wavelength, we want more capacity. Enter Spatial Division Multiplexing (SDM).

Multi-Core Fiber: Instead of one core, put multiple cores in the same fiber cladding. Each core carries independent signals. Think of it as bundling multiple fibers together but in a single physical cable. Experimental systems have demonstrated 19 cores or more.

Few-Mode Fiber: Use multiple spatial modes within the same core. This requires sophisticated signal processing to separate the modes, but it multiplies capacity.

The status: SDM is still mostly research and early deployment. The challenges are manufacturing (making multi-core fiber with low crosstalk), connectors (aligning multiple cores simultaneously), and signal processing complexity. But it represents the next frontier for fiber capacity growth.

The Speed of Light Matters: Latency Fundamentals

Bandwidth grabs headlines, but latency matters just as much and is harder to fix.

The speed of light in vacuum: 299,792,458 meters per second, or about 300,000 km/s. This is the universe's speed limit.

Light in fiber: Fiber has a refractive index around 1.47, meaning light travels at c/1.47 = about 200,000 km/s, or roughly 67% of light speed in vacuum.

Latency from distance: New York to London is about 5,600 km. At 200,000 km/s, the minimum theoretical one-way latency is 28 milliseconds, or 56 ms round-trip. Add in:

  • Cable routing (not perfectly straight, might be 6,000-6,500 km of actual fiber)
  • Optical-electrical conversion at amplifiers and switches
  • Processing delays in routers and switches
  • Queuing delays

Real-world latency is 70-80 ms round-trip between New York and London. Roughly three-quarters of that is pure physics (the 56 ms minimum round-trip through glass fiber); the rest is equipment processing and path inefficiency.
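The propagation numbers are one division away. A sketch, using the approximate great-circle and cable-route distances from this section:

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_ms(distance_km, refractive_index=1.47):
    """One-way propagation delay through a medium of the given index."""
    return distance_km / (C_KM_PER_S / refractive_index) * 1000

print(round(one_way_ms(5600), 1))          # ~27.5 ms: NY-London, glass fiber
print(round(one_way_ms(6500), 1))          # ~31.9 ms: realistic cable routing
print(round(one_way_ms(5600, 1.0003), 1))  # ~18.7 ms: radio through air
```

The same function covers air and hollow-core paths: just drop the refractive index toward 1.0.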

Why this matters: You can't fix speed-of-light latency by buying faster routers. Physics is physics. This creates fundamental limits for interactive applications.

The Bandwidth-Delay Product

The bandwidth-delay product (BDP) is the number of bits "in flight" on the network at any given time:

BDP = Bandwidth × Round-Trip Time

For a 10 Gbps link with 80 ms RTT:

BDP = 10,000,000,000 bits/sec × 0.08 sec = 800,000,000 bits = 100 megabytes

Why this matters for TCP: TCP uses a sliding window for flow control. The window size limits how much unacknowledged data can be outstanding. If your TCP window is 64 KB (default on many systems) but your BDP is 100 MB, you can only fill 0.064% of your bandwidth. Your 10 Gbps link runs at 6.4 Mbps because TCP is waiting for acknowledgments.
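Both quantities are one-liners (64 KB is taken as 64,000 bytes to match the figures above):

```python
def bdp_bytes(bandwidth_bps, rtt_s):
    """Bandwidth-delay product: bits in flight, expressed in bytes."""
    return bandwidth_bps * rtt_s / 8

def window_limited_bps(window_bytes, rtt_s):
    """Max TCP throughput when the window, not the link, is the bottleneck."""
    return window_bytes * 8 / rtt_s

print(bdp_bytes(10e9, 0.08) / 1e6)             # ~100 (MB in flight)
print(window_limited_bps(64_000, 0.08) / 1e6)  # ~6.4 (Mbps from a 64 KB window)
```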

The fix: TCP window scaling (RFC 1323) allows windows up to 1 GB. But many systems default to smaller windows, and many firewalls or middleboxes strip window scaling options, crippling throughput on high-latency links.

High-frequency trading impact: Financial firms pay millions to shave microseconds off latency between exchanges. A 1 millisecond advantage means your order arrives first. This drives demand for the lowest-latency paths, not the highest-bandwidth paths.

Why Wireless Has Lower Latency (In Theory)

Radio waves travel at the speed of light in air/vacuum, essentially 300,000 km/s. This is about 50% faster than light in fiber (200,000 km/s).

For the same distance, wireless has lower propagation latency:

  • Fiber (New York to London, 6,500 km): 32.5 ms one-way
  • Direct wireless (5,600 km straight line): 18.7 ms one-way
  • Difference: 13.8 ms (wireless is faster)

Microwave networks for trading: High-frequency trading firms built microwave networks between financial centers (Chicago to New York, London to Frankfurt) because even though bandwidth is limited (maybe 1 Gbps), latency is 20-30% lower than fiber. For trading order transmission (small data, latency critical), this is worth it.

Why we don't use wireless for everything:

  1. Capacity: Fiber can carry terabits. Wireless is limited by spectrum (measured in megahertz or low gigahertz). A single fiber has thousands of times more capacity than a microwave link.
  2. Weather sensitivity: Rain, fog, and snow attenuate microwave signals. Fiber is immune to weather.
  3. Line of sight: Microwave requires direct line of sight with relay towers every 50-100 km. Fiber can follow any path.
  4. Interference: Wireless spectrum is shared and regulated. Fiber is dedicated.
  5. Security: Wireless can be intercepted. Fiber is hard to tap.
  6. Reliability: Fiber has better uptime. Wireless has more failure modes (weather, interference, equipment on towers).

For point-to-point latency-critical links where bandwidth isn't the constraint, wireless wins. For everything else, fiber dominates.

Hollow Core Fiber: Getting Closer to Light Speed

Standard fiber carries light through glass, limiting speed to c/1.47. What if we could carry light through air instead?

Hollow Core Fiber (HCF): The core is literally hollow (filled with air or vacuum). Light travels through air with refractive index close to 1.0, meaning light speed approaches 300,000 km/s instead of 200,000 km/s.

The gain: Roughly 30% reduction in latency compared to standard fiber. For New York to London, this is about 10 ms one-way, or around 20 ms round-trip.

The challenge: Making hollow core fiber that actually works is hard:

  • Confining light in a hollow core requires carefully designed microstructure in the cladding
  • Loss (attenuation) is higher than standard fiber, requiring more amplifiers
  • Manufacturing is more difficult and expensive
  • Splicing and connecting hollow core fiber is tricky

Current status: Hollow core fiber is in limited commercial deployment for niche applications (high-frequency trading, data centers where latency matters more than cost). Ongoing research is reducing loss and improving manufacturability.

The future: If hollow core fiber becomes cost-competitive with standard fiber, it could become the default for long-haul links. The latency improvement is meaningful for many applications beyond trading.

The Latency Tax on Throughput

Let's work through a real example of how latency kills throughput:

Scenario: 1 Gbps satellite link with 600 ms round-trip time (geostationary orbit), default TCP configuration with 64 KB window.

BDP = 1,000,000,000 bits/sec × 0.6 sec = 600,000,000 bits = 75 MB
TCP Window = 64 KB = 0.064 MB
Actual Throughput = Window Size / RTT = 64 KB / 0.6 sec = 106.7 KB/sec = 853 kbps

Your 1 Gbps link runs at 0.09% of capacity because TCP can't fill the pipe. The fix requires:

  • Window scaling enabled
  • Window size increased to at least 75 MB
  • Both endpoints and all middleboxes supporting this
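Sweeping the window size over this satellite link shows when the pipe finally fills. A sketch (the intermediate window sizes are illustrative, and the link itself caps throughput at 1 Gbps):

```python
LINK_BPS = 1e9  # 1 Gbps satellite link
RTT_S = 0.6     # geostationary round-trip time

for window_mb in (0.064, 1, 16, 75):
    window_limited = window_mb * 1e6 * 8 / RTT_S  # window / RTT, in bits/s
    achieved = min(LINK_BPS, window_limited)      # can't exceed the link rate
    print(f"{window_mb:>7} MB window -> {achieved / 1e6:8.2f} Mbps")
```

Only at the full 75 MB (the BDP) does throughput reach line rate.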

Modern TCP improvements:

  • Window scaling: Allows windows up to 1 GB
  • TCP BBR: Google's congestion control algorithm that explicitly measures bandwidth and RTT to optimize throughput
  • CUBIC: Default Linux congestion control, designed for high-bandwidth, high-latency networks

These help, but they require deployment at both ends and no middleboxes interfering.

QUIC's approach: QUIC (HTTP/3) runs over UDP and implements its own congestion control at the application layer, avoiding kernel TCP limitations and middlebox interference. It's one reason QUIC performs better on high-latency links.

From Megabits to Terabits: The Capacity Explosion

Let's look at the capacity growth over time:

1960s-1970s: Kilobits per second over copper. Modems and early computer networks.

1980s: Ethernet at 10 Mbps over coax and twisted pair. T1 lines at 1.544 Mbps. Early fiber at 140 Mbps.

1990s: Fast Ethernet (100 Mbps), ATM (155 Mbps, 622 Mbps), early SONET/SDH fiber (2.5 Gbps, 10 Gbps). First WDM systems multiplying fiber capacity.

2000s: Gigabit Ethernet standard, 10 Gigabit Ethernet emerging. DWDM with 40-80 channels at 10 Gbps each, giving 400-800 Gbps per fiber. Coherent optical transmission starting.

2010s: 40/100 Gigabit Ethernet. DWDM channels at 100-200 Gbps, total fiber capacity reaching multiple terabits. 400G interfaces emerging late decade.

2020s: 400 Gigabit and 800 Gigabit Ethernet standardized. DWDM channels at 400-800 Gbps. Multiple terabits per fiber becoming common. Hollow core fiber in limited deployment.

Growth rate: Fiber capacity has doubled approximately every 12-18 months for the past 30 years, similar to Moore's Law for transistors. This growth came from:

  • Increasing channel speeds (10G → 100G → 400G per wavelength)
  • More wavelengths (8 → 80+ channels)
  • Better modulation (more bits per symbol)

The future: 1.6 Tbps per wavelength interfaces are in development. With 80+ wavelengths, single fiber capacity will exceed 100 Tbps. SDM could multiply this further.

Why Latency Improvements Lag Behind Bandwidth

While bandwidth has grown exponentially, latency improvements have been incremental:

1980s baseline: New York to London via transatlantic cable, about 80-90 ms RTT with older electronics and routing.

2020s reality: Same route, about 70-80 ms RTT. The improvement is modest.

Why latency is harder:

  1. Physics dominates: The majority of long-haul latency is speed-of-light propagation in fiber. You can't optimize this away with better equipment.
  2. Path length matters: Cables don't run in straight lines. They follow routes determined by geography, regulations, and installation practicality. Optimizing routes requires new cable installations (expensive, slow).
  3. Processing delays: Modern routers are faster, but higher speeds mean more complex processing per packet (deeper buffers, more sophisticated queuing). These somewhat cancel out.
  4. Economic misalignment: Bandwidth sells (customers want faster speeds). Latency is harder to market and less noticeable for most applications.

Where latency has improved:

  • Submarine cable routes have been optimized (straighter paths)
  • Router processing is somewhat faster
  • Hollow core fiber provides meaningful reduction where deployed
  • Microwave links for specialized applications

But fundamental latency (distance / speed of light) hasn't changed. You can't break physics.

Data Center Implications

In data centers, latency becomes critical because round-trips add up:

Microservices architecture: A user request might trigger 20-50 internal service calls. If each call adds 1 ms latency, that's 20-50 ms before the user sees a response. This drives:

  • Collocation (services in same data center)
  • Low-latency networking (25/100 Gbps Ethernet with microsecond switching)
  • Avoiding unnecessarily small MTUs (more packets and per-packet overhead for the same data)
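The way internal calls stack up is easy to model: sequential calls pay the per-call latency once each, while a parallel fan-out pays it roughly once. A toy model (call counts and latencies are illustrative):

```python
def chain_latency_ms(per_call_ms, n_calls, parallel=False):
    """Total added latency: n sequential calls stack up,
    while a parallel fan-out costs roughly one round trip."""
    return per_call_ms if parallel else per_call_ms * n_calls

print(chain_latency_ms(1.0, 30))                 # 30.0 ms, sequential chain
print(chain_latency_ms(1.0, 30, parallel=True))  # 1.0 ms, fanned out
```

This is why service dependency graphs, not just raw link latency, determine user-visible response time.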

Storage networks: NVMe over Fabrics requires latencies under 100 microseconds for optimal performance. This demands:

  • Direct-attach copper or very short fiber runs
  • Switching hardware with minimal latency
  • RDMA (Remote Direct Memory Access) to bypass OS overhead

High-performance computing: Latency between compute nodes affects parallel application performance. A 10 microsecond latency difference can mean 10% performance impact on communication-heavy workloads.

This is why data center operators care deeply about low-latency switches, short cable runs, and network topology (leaf-spine designs minimize hop count).

The Future: Physics and Economics

Future physical media developments will focus on:

Capacity growth: Continuing to increase bits per fiber through better modulation, more wavelengths, and spatial division multiplexing. 100+ Tbps per fiber is achievable.

Latency reduction: Hollow core fiber becoming cost-effective. Optimized cable routes. Better processing in equipment. But diminishing returns, we're approaching physical limits.

Power efficiency: Moving terabits requires significant power for lasers and amplification. Making this more efficient is critical for sustainability.

Reach extension: Longer distances between amplifiers/repeaters reduce cost and complexity. Research on ultra-low-loss fiber continues.

Cost reduction: Bandwidth demand grows faster than revenue, so cost per bit must decrease continually. Simpler optics, better integration, economies of scale.

Wireless for access: 5G and future wireless for last-mile and mobility. But backhaul remains fiber dominated.

The economics matter as much as the physics. Hollow core fiber is technically superior but costs more. It will only see wide adoption when the price premium drops enough that the latency benefit justifies the cost for typical applications, not just high-frequency trading.

Living with Physical Limits

We've come a long way from 56k modems over copper phone lines. Modern fiber systems carry terabits per second, and we're still finding ways to squeeze more capacity from the same physical fiber by exploiting every degree of freedom (time, wavelength, phase, amplitude, polarization, spatial modes).

But latency remains stubbornly tied to physics. We've gotten 10-20% better over 30 years by optimizing routes and using hollow core fiber. That's it. The speed of light is the speed of light.

This creates an interesting asymmetry:

  • Bandwidth: Practically unlimited, continuously improving, increasingly cheap
  • Latency: Bounded by physics, improving slowly, expensive to optimize

For most applications, bandwidth is sufficient and latency is acceptable. But for interactive applications (gaming, video conferencing, trading, real-time control), latency matters enormously. This drives specialized infrastructure: hollow core fiber for trading, edge computing to reduce round-trips, microwave links where latency trumps bandwidth.

The future of physical networking media isn't about revolutionary new technologies (we're already moving light at nearly light speed through nearly nothing). It's about:

  • Incremental improvements (better fiber, more wavelengths, smarter modulation)
  • Economic optimization (making advanced techniques affordable)
  • Deployment at scale (hollow core fiber won't matter if it's only in a few links)
  • Working around latency constraints (edge computing, better protocols, application design)

We're fighting physics, and physics is winning. But we're getting better at making the most of what physics allows. From copper carrying kilobits to hollow core fiber carrying terabits with sub-light-speed latency, networking physical media evolution is the story of human ingenuity pushing against fundamental limits, and occasionally finding clever ways to get a little bit closer to what the universe allows.