
The Last Mile Problem: Why Your Gigabit Internet Feels Like Dial-Up at 8 PM

Scott Morrison, January 03, 2026
Tags: last mile, DOCSIS, fiber optics, PON, DSL, GPON, cable internet, fiber to the home, wireless backhaul, bandwidth contention
Your ISP promises gigabit speeds, your speed test confirms it at 2 PM on a Tuesday, yet Netflix still buffers during prime time and your Zoom calls pixelate like it's 2005. The problem isn't your router; it's the physics and economics of the last mile, where thirty years of incremental technology improvements collide with the fundamental challenge of sharing expensive infrastructure among people who all want bandwidth at the exact same time.

You're paying for gigabit internet. Your ISP's marketing department has assured you that you're getting gigabit internet. The speed test you ran at lunch confirms you're getting gigabit internet. Yet here you are at 7 PM, watching your video conference freeze mid-sentence while your download crawls along at speeds that would make a 56k modem nostalgic.

Welcome to the last mile, the final stretch of infrastructure between your ISP's network and your home, where the laws of physics, the economics of infrastructure deployment, and the behavior of your neighbors who all decided to stream 4K video at the same time conspire to make your "gigabit" connection feel like a polite suggestion rather than a guarantee.

The last mile is called the last mile because it's the hardest mile. Building fiber backbone networks between cities is relatively straightforward. You dig a trench, drop in fiber, and you're done. Building infrastructure to every single home in a city requires digging up every street, navigating easements, dealing with a hundred different property owners, and somehow making the economics work when each connection costs thousands of dollars to install and generates maybe $50-100 per month in revenue. This is why we've spent the last thirty years trying to squeeze more performance out of copper phone lines and coax cable originally designed for analog TV.

DOCSIS: Squeezing Gigabits Out of Cable TV Infrastructure

Cable internet is arguably the most successful hack in telecommunications history. Take infrastructure designed to broadcast analog TV signals in one direction, add some clever modulation and multiplexing, and suddenly you're delivering internet at speeds that would have seemed like science fiction when the cable was originally installed in the 1970s.

DOCSIS (Data Over Cable Service Interface Specification) is the standard that makes cable internet work. The fundamental architecture is simple: the cable company has a head-end facility with a CMTS (Cable Modem Termination System) that talks to cable modems in customer homes over the same coax cable that delivers TV channels. The genius is that TV channels occupy specific frequency bands, and DOCSIS carved out different frequency bands for internet data.

DOCSIS 1.0 and 1.1: The Beginning (1997-2001)

DOCSIS 1.0, released in 1997, was the first standardized approach to cable internet. It offered a whopping 40 Mbps downstream and 10 Mbps upstream, though real-world deployments typically did 10-30 Mbps down. The cable was shared among all customers on the same segment (typically 200-500 homes), so your actual speed depended heavily on what your neighbors were doing.

The downstream used 6 MHz channels (in North America) in the 50-750 MHz range, using 64-QAM or 256-QAM modulation. Upstream used narrower channels in the 5-42 MHz range, which is why upstream was always slower. This frequency split made sense for cable TV (where everything was downstream), but it created an asymmetric internet experience that persists to this day.

DOCSIS 1.1 (2001) added QoS (Quality of Service) so your VoIP call wouldn't get destroyed by someone's torrent download. It also added better security (BPI+, baseline privacy interface plus), because DOCSIS 1.0's encryption was approximately as secure as writing your password on a sticky note.

DOCSIS 2.0: Symmetric Dreams (2002)

DOCSIS 2.0 focused on improving upstream throughput, bumping it to 30 Mbps. The trick was advanced modulation (ATDMA, advanced time division multiple access) and better use of the upstream spectrum. This was critical for business customers and the emerging reality that people wanted to upload things, not just download them.

The problem was that the upstream spectrum (5-42 MHz) is inherently noisy. It's where all the RF interference lives: ham radios, CB radios, power line noise, and that one house on the block with terrible wiring that radiates interference like a malicious radio station. Cable companies spent enormous effort on ingress noise mitigation, essentially trying to keep customers' electrical problems from polluting the shared upstream channel.

DOCSIS 3.0: Channel Bonding Changes Everything (2006)

DOCSIS 3.0 was the game-changer. Instead of using a single 6 MHz channel, it introduced channel bonding: you could combine multiple downstream channels for aggregate throughput. A typical deployment might bond 8 downstream channels (yielding ~300 Mbps) and 4 upstream channels (yielding ~100 Mbps). High-end deployments could do 32 downstream channels for over 1 Gbps.

The math is straightforward. Each 6 MHz downstream channel using 256-QAM modulation can carry about 38 Mbps after overhead. Bond 32 of them and you get 1.2 Gbps. The cable modem negotiates with the CMTS to determine which channels to bond, and traffic is split across them.
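
Since it's just multiplication, the bonding math is easy to sanity-check yourself. A quick Python sketch using the ~38 Mbps per-channel figure from above (real deployments lose a little more to management traffic):

```python
MBPS_PER_CHANNEL = 38  # usable rate of one 6 MHz, 256-QAM channel after overhead

def bonded_throughput(channels: int) -> int:
    """Aggregate downstream throughput in Mbps for N bonded channels."""
    return channels * MBPS_PER_CHANNEL

for n in (8, 16, 24, 32):
    print(f"{n:2d} channels: {bonded_throughput(n):4d} Mbps")
#  8 channels:  304 Mbps
# 32 channels: 1216 Mbps (the "gigabit" tier)
```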

This is where the "gigabit cable internet" marketing came from. Technically true, if you had a DOCSIS 3.0 modem that supported 32-channel bonding, you could get gigabit speeds. In practice, most cable companies deployed 16 or 24 channels, and your actual speed still depended on how many neighbors were online.

DOCSIS 3.0 also introduced IPv6 support (finally) and better energy management. Your cable modem could now power down unused channels, saving a few watts. Not much, but multiply by millions of modems and it adds up.

DOCSIS 3.1: OFDM and the Gigabit Future (2013)

DOCSIS 3.1 ditched the 6 MHz channel structure entirely and moved to OFDM (orthogonal frequency division multiplexing), the same modulation used in WiFi and LTE. Instead of discrete channels, you have a continuous spectrum divided into subcarriers. This is massively more efficient because you can adapt to channel conditions on a per-subcarrier basis. If part of the spectrum has interference, you use higher-order modulation on the clean parts and lower-order modulation on the noisy parts.

The theoretical maximum is 10 Gbps downstream and 1-2 Gbps upstream. Real deployments typically do 1-2 Gbps down and 50-100 Mbps up, limited more by business decisions than technology. Cable companies could offer symmetric multi-gigabit if they wanted to, but they've decided that most customers don't need more than 100 Mbps upstream and would rather use the spectrum for more downstream capacity.

DOCSIS 3.1 also introduced active queue management (AQM) to reduce bufferbloat. Cable modems have historically had enormous buffers to handle bursty traffic, but this creates latency spikes under load. With AQM, the buffers are managed intelligently, dropping packets earlier to signal congestion rather than buffering everything and adding hundreds of milliseconds of latency.

DOCSIS 4.0: Full Duplex and Beyond (2017-Present)

DOCSIS 4.0 is the current bleeding edge, supporting up to 10 Gbps downstream and, critically, 6 Gbps upstream. The magic is full-duplex DOCSIS (FDX), which allows simultaneous transmission and reception on the same frequencies. This requires sophisticated echo cancellation (you have to subtract your own transmitted signal from what you're receiving), but it means you can use the entire spectrum for both directions.
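
Real FDX echo cancellation works across wide spectrum with frequency-dependent, time-varying echo paths, but the core idea fits in a toy: you know exactly what you transmitted, so you can learn the leakage and subtract it. A minimal sketch with a single-tap LMS canceller (all signal values here are made up):

```python
import random

random.seed(0)
n = 5000
tx  = [random.choice((-1.0, 1.0)) for _ in range(n)]        # symbols we transmit
far = [0.1 * random.choice((-1.0, 1.0)) for _ in range(n)]  # weak far-end signal we want
ECHO = 0.8                                                  # our own leakage, much stronger
rx  = [f + ECHO * t for f, t in zip(far, tx)]               # what the receiver sees

# LMS adaptation: learn the echo path from the known tx signal, then subtract.
h_est, mu = 0.0, 0.01
for t, r in zip(tx, rx):
    err = r - h_est * t      # residual after cancelling the estimated echo
    h_est += mu * err * t    # nudge the estimate toward the true echo gain

print(f"learned echo gain: {h_est:.2f} (true: {ECHO})")  # converges near 0.80
```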

Alternatively, DOCSIS 4.0 supports extended spectrum DOCSIS (ESD), which extends the usable frequency range to 1.8 GHz instead of the traditional 1.0 GHz. More spectrum means more channels means more throughput. The catch is that you need new amplifiers and infrastructure that can handle the higher frequencies, and older cable plants might not be compatible.

Deployments are just starting as of 2025. Most cable operators are still on DOCSIS 3.1, but the migration path is clear: more upstream capacity, lower latency, and the ability to compete with fiber on symmetric speeds.

EuroDOCSIS: The European Variant

EuroDOCSIS is functionally identical to DOCSIS but uses 8 MHz channels instead of 6 MHz because European TV standards were based on PAL rather than NTSC. This means slightly higher throughput per channel (about 50 Mbps per 8 MHz channel with 256-QAM instead of 38 Mbps for 6 MHz), but the overall architecture is the same. The two standards are compatible in the sense that the same modulation and protocol are used, just with different channel widths.

DSL: Squeezing Megabits Out of Phone Lines

DSL (Digital Subscriber Line) is the other great infrastructure hack, taking copper phone lines designed for 4 kHz analog voice and using them for multi-megabit data. The trick is using frequencies above the voice band (above 4 kHz, up to several MHz) for data, so you can have internet and phone service on the same line simultaneously.

ADSL: The Asymmetric Pioneer (1999-2003)

ADSL (Asymmetric DSL) was the first widely deployed DSL technology. ADSL1 offered up to 8 Mbps downstream and 1 Mbps upstream over distances up to about 18,000 feet (5.5 km) from the DSLAM (DSL Access Multiplexer) at the phone company's central office. The asymmetry made sense because most people download more than they upload.

The physics of DSL are brutal. Signal attenuation increases with frequency and distance. At higher frequencies, you get more throughput but shorter range. At lower frequencies, you get longer range but less throughput. DSL modems negotiate with the DSLAM to figure out what frequencies are usable given the line conditions, then use DMT (discrete multi-tone) modulation, which is basically OFDM before OFDM was cool.

Each tone (subcarrier) is modulated independently with QAM, and the modem adapts the modulation order based on the signal-to-noise ratio. Clean tones might use 256-QAM (8 bits per symbol), noisy tones might use QPSK (2 bits per symbol) or get disabled entirely. This adaptive modulation is why DSL sync speeds vary wildly depending on line quality.
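
Here's roughly what that bit-loading decision looks like, using the textbook Shannon-gap approximation (the 9.8 dB gap and the SNR values are illustrative, not from any particular modem). The same per-subcarrier logic later shows up in DOCSIS 3.1's OFDM:

```python
import math

GAP_DB = 9.8    # SNR "gap" for practical QAM at a target error rate
MAX_BITS = 15   # ADSL caps the bits loaded onto a single tone
MIN_BITS = 2    # tones that can't even carry QPSK get disabled

def bits_for_tone(snr_db: float) -> int:
    gamma = 10 ** ((snr_db - GAP_DB) / 10)
    bits = int(math.log2(1 + gamma))
    return 0 if bits < MIN_BITS else min(bits, MAX_BITS)

for snr in (6, 15, 25, 35, 45):
    print(f"SNR {snr:2d} dB -> {bits_for_tone(snr)} bits/symbol")
# SNR  6 dB -> 0 (disabled), 15 dB -> 2 (QPSK), 35 dB -> 8 (256-QAM)
```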

ADSL2 (2002) bumped speeds to 12 Mbps down and 1.3 Mbps up, and ADSL2+ (2003) doubled downstream to 24 Mbps (with up to 3.5 Mbps up) by using twice the bandwidth (2.2 MHz instead of 1.1 MHz). The catch is that these speeds are only achievable on short loops. At 12,000 feet, you're down to 10-15 Mbps. At 18,000 feet, you're lucky to get 5 Mbps. Physics is undefeated.

VDSL: Very High Speed, Very Short Range (2001-2006)

VDSL (Very-high-bit-rate DSL) pushed frequencies up to 12 MHz or even 30 MHz (VDSL2), enabling speeds up to 52 Mbps (VDSL) or 100 Mbps (VDSL2). The price is range: you need to be within 4,000 feet of the DSLAM to get the advertised speeds. Beyond that, performance drops rapidly.

This is why phone companies started deploying fiber to the neighborhood (FTTN) or fiber to the curb (FTTC). Run fiber most of the way, put a DSLAM in a street cabinet, and use VDSL for the final few thousand feet to each home. This hybrid approach let them offer higher speeds without running fiber all the way to every house.

VDSL2 with vectoring (ITU G.993.5) was the final evolution. Vectoring is crosstalk cancellation across multiple lines in the same cable bundle. Phone lines are twisted pairs, and signals from one pair induce interference in adjacent pairs (FEXT, far-end crosstalk). Vectoring measures this crosstalk and pre-compensates by transmitting inverted interference that cancels it out. It's like active noise cancellation for DSL.

With vectoring, VDSL2 could hit 100 Mbps at 500 meters or 50 Mbps at 1 km. This was competitive with cable internet circa 2010-2015, but it required all lines in the cable bundle to be using compatible DSLAMs (you can't vector against a line connected to a different DSLAM, so mixed deployments don't work well).
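
A minimal sketch of the precoding idea behind vectoring, using a zero-forcing precoder on a made-up 3-line crosstalk matrix (real vectoring runs this per-tone, across dozens or hundreds of lines, under power constraints):

```python
import numpy as np

# Diagonal = each line's direct channel; off-diagonal = FEXT coupling.
H = np.array([[1.00, 0.15, 0.08],
              [0.12, 1.00, 0.10],
              [0.05, 0.14, 1.00]])

x = np.array([1.0, -1.0, 1.0])  # symbols intended for each of the 3 lines

# Without vectoring: each line receives its symbol plus neighbors' crosstalk.
print("no vectoring:  ", H @ x)          # [ 0.93 -0.78  0.91]

# Zero-forcing precoder: pre-distort so the channel's crosstalk cancels out.
P = np.linalg.inv(H)
print("with vectoring:", H @ (P @ x))    # [ 1. -1.  1.]
```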

G.fast: The Last Hurrah for Copper (2014)

G.fast pushed DSL to its absolute limits, using frequencies up to 106 MHz (G.fast) or even 212 MHz (G.fast+) to achieve speeds up to 1 Gbps. The catch is that you need to be within 250 meters of the DSLAM, and even then, you're only getting those speeds on perfect, short copper loops.

G.fast deployments are rare because if you're running fiber to within 250 meters of every home anyway (to deploy the DSLAM), you might as well run fiber the last 250 meters and be done with it. G.fast made sense as a bridge technology for phone companies with extensive FTTC deployments who wanted to offer gigabit speeds without a full fiber rollout, but it's not a long-term solution.

PON: Passive Optical Networks and the Fiber Future

Passive Optical Networks are how most fiber-to-the-home (FTTH) deployments work. The "passive" means there's no active electronics in the field, just optical splitters. One fiber from the OLT (Optical Line Terminal) at the provider's office splits passively to serve 32, 64, or even 128 homes. Each home gets an ONT (Optical Network Terminal) that converts optical signals to Ethernet.

APON and BPON: The ATM Era (1995-2001)

The first PON standards used ATM (asynchronous transfer mode) as the transport protocol. APON (ATM PON) and BPON (Broadband PON) offered 155 Mbps or 622 Mbps downstream, split among all customers on the splitter. This was great for 1990s standards but underwhelming by modern expectations.

ATM used fixed-size 53-byte cells, which was supposed to be great for QoS but turned out to be terrible for IP traffic. Encapsulating variable-size IP packets into fixed-size ATM cells wastes bandwidth (padding small packets) and adds complexity. The industry quickly moved on.

GPON: Gigabit PON Becomes Standard (2003)

GPON (Gigabit-capable PON, ITU-T G.984) was the breakthrough. It delivers 2.488 Gbps downstream and 1.244 Gbps upstream, shared across up to 64 or 128 customers on a splitter. Each customer gets a fraction of that based on the DBA (dynamic bandwidth allocation) algorithm that the OLT uses to schedule transmissions.

GPON carries Ethernet frames efficiently using GEM (GPON Encapsulation Method) framing, supports up to 20 km reach from OLT to ONT, and includes strong encryption (AES-128) to prevent neighbors from eavesdropping on each other's traffic. The downstream is broadcast (all ONTs receive all traffic, but each decrypts only its own), and the upstream is TDMA (time division multiple access), where each ONT is assigned time slots to transmit.

In practice, a GPON split 32 ways means each customer gets a maximum of about 75 Mbps downstream in the best case, assuming equal sharing. Most ISPs overprovision (they sell you 100 Mbps or even 300 Mbps service on a GPON network) betting that not everyone will use their full allocation simultaneously. This works until everyone does use it simultaneously, which is when you discover that "up to 100 Mbps" means something very different at 8 PM than at 2 AM.
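
The arithmetic behind that bet is worth making explicit. A sketch (the split sizes and the 300 Mbps tier are just the examples from this article):

```python
GPON_DOWN_MBPS = 2488  # GPON downstream capacity, shared by the whole split

for split in (128, 64, 32, 16):
    share = GPON_DOWN_MBPS / split
    print(f"1:{split:<3d} split -> {share:6.1f} Mbps per home if everyone is active")

# Oversubscription: 32 homes each sold a 300 Mbps tier on one PON.
sold = 32 * 300
print(f"sold {sold} Mbps against {GPON_DOWN_MBPS} Mbps real"
      f" -> {sold / GPON_DOWN_MBPS:.1f}x oversubscribed")
```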

EPON: The Ethernet Alternative (2004)

EPON (Ethernet PON, IEEE 802.3ah) took a simpler approach: just use Ethernet, no fancy encapsulation. It offers a 1.25 Gbps symmetric line rate (roughly 1 Gbps usable after 8b/10b line coding), which is lower than GPON but simpler to implement. EPON became popular in Asia, especially Japan and South Korea, while GPON dominated in Europe and North America.

10G-EPON (IEEE 802.3av, 2009) bumped speeds to 10 Gbps downstream and 1 or 10 Gbps upstream, making it competitive with XG-PON. The symmetric 10 Gbps variant is particularly attractive for business customers and data centers.

XG-PON and XGS-PON: 10 Gigabit PON (2010-2016)

XG-PON (10 Gigabit-capable PON, ITU-T G.987) delivers 10 Gbps downstream and 2.5 Gbps upstream. It's backwards compatible with GPON (you can run both on the same fiber using wavelength division multiplexing), which made migration easier.

XGS-PON (ITU-T G.9807.1) is the symmetric variant: 10 Gbps both directions. This is what modern fiber deployments use for multi-gigabit residential service. A 32-way split means each customer can theoretically get 300+ Mbps, and with lighter splits (16-way or even 8-way for premium areas), you can deliver consistent gigabit service.

NG-PON2 and 25G/50G PON: The Bleeding Edge (2015-Present)

NG-PON2 (next-generation PON 2) uses tunable wavelengths and DWDM (dense wavelength division multiplexing) to support multiple 10 Gbps channels on the same fiber. Instead of all customers sharing one downstream channel, you might have 4 or 8 different wavelengths, each serving a subset of customers. This dramatically increases total capacity.

25G-PON and 50G-PON are the latest standards, designed for symmetrical 25 Gbps or 50 Gbps service. These are overkill for residential customers but make sense for 5G backhaul, business customers, and future-proofing. The technology exists, but deployments are limited because 10G-PON is already more than enough for most markets.

Dedicated Fiber and DWDM: When Shared Isn't Good Enough

PON is shared infrastructure, which means you're sharing bandwidth with your neighbors. For residential customers, this is fine. For businesses that need guaranteed throughput, it's unacceptable. Enter dedicated fiber and DWDM.

Dedicated Dark Fiber

Dark fiber is literally just fiber optic cable with no electronics on it. An ISP runs a fiber from their POP (point of presence) directly to your location, and you put whatever equipment you want on the ends. Want 100 Gbps? Install 100G transceivers. Want to run your own DWDM system? Go for it. The fiber is yours (or leased exclusively to you), and nobody else's traffic affects your performance.

The catch is cost. Running dedicated fiber can cost $10-50 per meter depending on location (underground in a city is expensive), so a 1 km run might cost $10,000-50,000. Then you pay monthly lease fees. This is why dark fiber is for enterprises, data centers, and anyone who needs guaranteed capacity and is willing to pay for it.

DWDM: Multiplying Capacity with Wavelengths

DWDM (dense wavelength division multiplexing) is how you run dozens or hundreds of channels over a single fiber. Each channel uses a different wavelength (color) of light, and they're multiplexed together using optical combiners. At the receiving end, optical splitters separate the wavelengths, and each channel goes to a different receiver.

A typical DWDM system might have 40 or 80 channels (wavelengths), each running at 10 Gbps, 100 Gbps, or even 400 Gbps. That's 4-32 Tbps on a single fiber pair. This is how internet backbone networks work, and it's also how ISPs deliver dedicated point-to-point connections to large customers.

The ITU-T G.694.1 standard defines a frequency grid for DWDM, with channels spaced 50 GHz or 100 GHz apart in the C-band (around 1550 nm wavelength). Each channel is a specific frequency, and transceivers are tunable or fixed to those frequencies. Add an EDFA (erbium-doped fiber amplifier) every 80-100 km to boost the signal, and you can run DWDM over hundreds of kilometers.
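
The grid itself is simple arithmetic: channels sit at 193.1 THz plus integer multiples of the spacing. A sketch for the 50 GHz grid:

```python
C = 299_792_458.0   # speed of light, m/s
ANCHOR_THZ = 193.1  # G.694.1 grid anchor frequency
SPACING_THZ = 0.05  # 50 GHz channel spacing

def channel(n: int) -> tuple[float, float]:
    """(frequency in THz, wavelength in nm) for grid index n."""
    f_thz = ANCHOR_THZ + n * SPACING_THZ
    return f_thz, C / (f_thz * 1e12) * 1e9

for n in (-2, 0, 2):
    f, wl = channel(n)
    print(f"n={n:+d}: {f:.2f} THz = {wl:.2f} nm")
# n=+0 lands at 1552.52 nm, right in the C-band
```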

Wireless Last Mile: Point-to-Point and Point-to-Multipoint

When running fiber or cable isn't practical (rural areas, difficult terrain, temporary deployments), wireless backhaul becomes attractive. There are two basic topologies: point-to-point (PTP) and point-to-multipoint (PTMP).

Point-to-Point Wireless: Line of Sight, High Throughput

PTP wireless uses highly directional antennas to create a radio link between two locations. Think of it as a wireless cable: you point antennas at each other across a distance, and you get a dedicated link. Frequencies range from 5 GHz (license-free, but crowded and subject to interference) to 60 GHz, 70 GHz, and 80 GHz (licensed or lightly licensed, high capacity, short range).

Modern PTP systems at 60 GHz or higher can deliver multi-gigabit speeds (1-10 Gbps) over distances of 1-5 km, assuming clear line of sight. The catch is that higher frequencies have shorter range and are more susceptible to rain fade (water droplets absorb the signal). At 60 GHz, heavy rain can drop your link capacity by 50% or more.

PTP is popular for connecting buildings in a campus environment, connecting cell towers to the fiber network, and providing last-mile service to businesses in areas where fiber isn't available. You need roof rights (somewhere to mount the antenna), clear line of sight, and acceptance that bad weather will affect performance.

Point-to-Multipoint Wireless: Shared Spectrum, Shared Pain

PTMP uses a hub-and-spoke model: one base station serves multiple customer premises. This is how fixed wireless ISPs (WISPs) operate. The base station has a sector antenna covering 90-120 degrees, and customer sites have subscriber modules (SMs) that connect to it.

The spectrum is shared among all customers on that sector. If the base station has 500 Mbps of capacity and serves 50 customers, they're sharing that 500 Mbps. This is conceptually similar to DOCSIS or PON, except the medium is air instead of coax or fiber.

Technologies include pre-WiMAX proprietary systems, WiMAX (IEEE 802.16), LTE-based fixed wireless (CBRS in the US, using 3.5 GHz), and 5G fixed wireless (using mmWave frequencies like 28 GHz or 39 GHz for high capacity). Each generation brings higher spectral efficiency, better modulation schemes, and more capacity.

5G fixed wireless at mmWave frequencies can deliver hundreds of megabits or even gigabits per customer, but with the same line-of-sight and weather sensitivity issues as PTP. Deployments in urban areas can work well (short distances, lots of cell sites), but rural deployments struggle with range and capacity.

Why Speed Tests Lie: Contention, Bufferbloat, and Protocol Overhead

You run a speed test and see 950 Mbps on your gigabit connection. Great! Then you try to download a file and get 200 Mbps. What happened?

Contention: You're Not Alone

Every shared-medium last-mile technology (cable, PON, PTMP wireless) has a contention ratio: the ratio of total sold capacity to actual available capacity. Your ISP might sell 100 homes 300 Mbps service on a GPON split that can only deliver 2.5 Gbps total. The math doesn't add up (100 × 300 = 30,000 Mbps sold, against only 2,500 Mbps of actual capacity), but it works because not everyone uses their full allocation simultaneously.

This is called statistical multiplexing, and it's the same principle as airline overbooking. It works great until it doesn't. At 2 PM on a Tuesday, you're one of five people online, and you get full speed. At 8 PM on a Saturday when everyone is streaming Netflix, you're competing with 80 active users, and suddenly your "300 Mbps" connection feels like 50 Mbps.
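
You can put numbers on "until it doesn't" with a tiny Monte Carlo: 100 homes sold 300 Mbps on 2.5 Gbps of real capacity, each home pulling its full rate with some probability (the activity probabilities are invented for illustration):

```python
import random

HOMES, RATE_MBPS, CAPACITY_MBPS = 100, 300, 2500
TRIALS = 20_000

def congestion_probability(p_active: float) -> float:
    congested = 0
    for _ in range(TRIALS):
        demand = sum(RATE_MBPS for _ in range(HOMES) if random.random() < p_active)
        congested += demand > CAPACITY_MBPS
    return congested / TRIALS

for p in (0.05, 0.08, 0.12):  # think 2 PM, early evening, prime time
    print(f"p_active={p:.2f}: congested {congestion_probability(p):.0%} of the time")
# A small shift in how many neighbors are active flips the segment
# from almost-never congested to congested most of the time.
```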

Speed tests are designed to saturate your connection, and they work because they're short bursts. When you run a 30-second speed test, you get prioritized by the ISP's traffic shaping (they know what Ookla's servers are, and making speed tests look good is in their interest). When you download a 10 GB file, you're subject to the real contention ratio and QoS policies.

Bufferbloat: When Buffers Attack

Bufferbloat is the phenomenon where large buffers in network equipment add latency under load. Your cable modem, your ISP's CMTS, your router: they all have buffers to handle bursty traffic. When these buffers fill up, latency spikes.

You start a large download, the buffers fill with download traffic, and suddenly your ping times go from 20 ms to 200 ms or more. Your video call stutters because the real-time traffic is stuck behind a buffer full of bulk download traffic. This is why your connection feels laggy under load even though you have plenty of bandwidth.
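
The damage is easy to quantify: once a buffer is full, it adds a delay of buffer size divided by link rate. A sketch with buffer sizes in the range of older cable modems (the exact figures are illustrative):

```python
def queue_delay_ms(buffer_kbytes: float, uplink_mbps: float) -> float:
    """Worst-case queueing delay once the buffer is full."""
    # KB -> kilobits; Mbps is kilobits per millisecond, so the ratio is ms.
    return buffer_kbytes * 8 / uplink_mbps

for buf in (64, 256, 1024):
    print(f"{buf:4d} KB buffer on a 20 Mbps uplink: +{queue_delay_ms(buf, 20):.0f} ms")
#   64 KB:  +26 ms    256 KB: +102 ms    1024 KB: +410 ms
```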

Modern solutions include active queue management (AQM) algorithms like fq_codel or CAKE that intelligently drop packets to prevent bufferbloat, and traffic shaping that prioritizes interactive traffic over bulk transfers. Low Latency DOCSIS (an extension of DOCSIS 3.1) and routers with SQM (smart queue management) help, but many deployments still use older equipment with terrible bufferbloat characteristics.

Protocol Overhead: TCP Isn't Free

When your ISP says you have 1 Gbps, they mean 1 Gbps at the physical layer. By the time you account for Ethernet framing (26 bytes per frame), IP headers (20 bytes), TCP headers (20+ bytes), and application protocol overhead (HTTP headers, TLS encryption), your actual application throughput is lower.

For large transfers, this overhead is maybe 5-10%. For small packets (like VoIP or gaming), overhead can be 30-50% of the packet. This is one reason why "gigabit" internet doesn't actually deliver 125 MB/s (1 Gbps = 125 MB/s) of application throughput; you get more like 110-115 MB/s.
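
The per-packet math for a full-size TCP transfer over Ethernet looks like this (the 38-byte figure includes the preamble, header, FCS, and inter-packet gap; the 26 bytes quoted above excludes the gap):

```python
MTU = 1500                   # IP packet size
IP_HDR, TCP_HDR = 20, 20     # no TCP options, for simplicity
ETH_OVERHEAD = 38            # preamble/SFD 8 + header 14 + FCS 4 + gap 12

payload = MTU - IP_HDR - TCP_HDR   # 1460 bytes of application data
on_wire = MTU + ETH_OVERHEAD       # 1538 bytes actually on the wire
efficiency = payload / on_wire

print(f"efficiency: {efficiency:.1%}")  # ~94.9%
print(f"1 Gbps -> {1000 * efficiency:.0f} Mbps goodput "
      f"({1000 * efficiency / 8:.0f} MB/s), before HTTP/TLS overhead")
```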

WiFi: The Last Six Feet That Ruin Everything

Your ISP delivered gigabit fiber to your ONT. Your router is connected to the ONT via Ethernet. Everything should be perfect, except your laptop is connected via WiFi, and WiFi is a shared medium where the only guarantee is that nothing is guaranteed.

WiFi uses CSMA/CA (carrier sense multiple access with collision avoidance). Before transmitting, devices listen to see if the channel is busy. If it is, they wait. If it's clear, they transmit. If two devices transmit at the same time, you get a collision, and both back off and retry with random delays. This is horribly inefficient.
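
How inefficient? A toy slot model shows how quickly collisions grow with the number of stations: everyone picks a random backoff slot, and if two stations pick the same earliest slot, that transmission is a collision (the 16-slot window roughly matches 802.11's minimum contention window; everything else is simplified away):

```python
import random

CW = 16  # backoff slots, roughly 802.11's minimum contention window

def collision_rate(stations: int, trials: int = 50_000) -> float:
    collisions = 0
    for _ in range(trials):
        slots = [random.randrange(CW) for _ in range(stations)]
        if slots.count(min(slots)) > 1:  # two stations fire in the same slot
            collisions += 1
    return collisions / trials

for n in (2, 5, 10, 20):
    print(f"{n:2d} stations: {collision_rate(n):.0%} of transmissions collide")
# 2 stations: ~6%; 20 stations: roughly half. Real 802.11 doubles the
# window after each collision, which helps but costs airtime.
```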

Add in interference from neighboring WiFi networks (every apartment building has 20+ SSIDs visible), interference from Bluetooth, interference from microwave ovens, and the fact that WiFi range decreases rapidly with distance and obstacles (walls, furniture, your body), and it's impressive that WiFi works at all.

Modern WiFi (WiFi 6, 802.11ax, and WiFi 7, 802.11be) adds MU-MIMO (multi-user multiple-input multiple-output) and OFDMA to reduce contention, but the fundamental problem remains: WiFi is a shared, half-duplex, interference-prone medium. Your gigabit internet connection is only as good as your WiFi link, and your WiFi link is probably doing 200-400 Mbps under real-world conditions, maybe 600-800 Mbps if you're close to the AP with WiFi 6 or 7.

The Uncomfortable Economics of Last Mile Infrastructure

Here's the truth nobody wants to say: the last mile is expensive to build and doesn't generate enough revenue to justify the investment. Running fiber to every home in a city costs $500-2,000 per home (more in difficult terrain), and those homes generate $50-100/month in revenue. The payback period is measured in years or decades, assuming no competition drives prices down.

This is why we have DOCSIS, DSL, and PON instead of dedicated fiber to every home. Shared infrastructure spreads the cost across multiple customers. The trade-off is contention and variable performance, but the economics work out.

It's also why rural areas have terrible internet. The population density is too low to justify the infrastructure investment. Running fiber 5 miles to serve 10 homes that will generate $500/month total doesn't make financial sense. Fixed wireless helps, but it has capacity limits. Starlink and other LEO satellite constellations are a solution, but with latency penalties and capacity constraints.

The last mile will always be a compromise between performance, coverage, and economics. We're getting better technology (DOCSIS 4.0, XGS-PON, 5G fixed wireless), but the fundamental problem remains: connecting individual homes is expensive, and someone has to pay for it.

What Actually Matters for Your Connection

If you want to understand why your internet feels slow, look at these factors:

Technology: Fiber (PON) is best, cable (DOCSIS 3.1+) is good, DSL is acceptable if you're close to the DSLAM, and anything older is struggling. Fixed wireless can be great or terrible depending on line of sight and congestion.

Contention ratio: How many people are you sharing with? A 16-way PON split is better than 64-way. A cable segment with 200 homes is better than 500. You can't usually find this out, but ISPs with lower prices often have higher contention.

Time of day: Prime time (7-11 PM) is when everyone is online. Your performance will be worst then, best at 3 AM. If speed tests are great at noon and terrible at night, you're seeing contention.

WiFi quality: Your WiFi link is often the bottleneck. 5 GHz is faster than 2.4 GHz, WiFi 6 is better than WiFi 5, wired Ethernet eliminates the problem entirely. If your laptop gets 100 Mbps on WiFi but 900 Mbps on Ethernet, the ISP is fine, your WiFi is the limit.

Bufferbloat: Run a bufferbloat test that measures latency under load (Waveform's online bufferbloat test is a good option; the old DSLReports speed test pioneered this before it shut down). If your latency under load spikes to 100+ ms, you have bufferbloat. Solutions include better routers with SQM, ISP equipment upgrades, or just accepting that downloads will make your connection laggy.

The next time your ISP promises gigabit speeds, remember that they're promising the best-case scenario under ideal conditions with no contention. The actual experience will vary, and physics, economics, and your neighbors all get a vote. Welcome to the last mile, where the only guarantee is that it's more complicated than the marketing suggests.