Enterprise Firewalls in the 2020s: How Security Improvements Made Security Impossible
The enterprise firewall used to have a very simple job. Packets came in, the firewall looked at the IP addresses and ports, decided if it liked them, and let them through or dropped them. This worked because most traffic was plaintext, users sat at desks on the corporate LAN, applications lived in the corporate data center, and the boundary between "inside the network" and "outside the network" was a literal wall with a cable running through it. That world is gone. It's been gone for most of a decade, actually, but the firewall is still there, trying to do its job in a landscape where the traffic is encrypted end-to-end, the users are on residential Wi-Fi in three time zones, the applications are SaaS, and the very concept of "inside the network" has been replaced with a vague shrug and the word "zero trust." The firewall is still expected to work, and when it doesn't, someone loses their job. Welcome to enterprise security in the 2020s.
Let's talk about enterprise firewalling, the challenges of securing networks where the network no longer really exists, and the uncomfortable truth that every improvement in security over the last fifteen years has made the firewall's job harder. TLS 1.3 was a massive win for user privacy. It also broke most of the inspection techniques that enterprise firewalls depend on. DNS over HTTPS is beautiful from a privacy standpoint. It also means your DNS firewall is now a suggestion. Zero trust is architecturally elegant. It also means the perimeter firewall is no longer the main line of defense, but you still have to run one anyway because compliance. Every one of these improvements is good for the Internet as a whole and bad for the people trying to keep a specific company's network from being pwned.
The Firewall's Original Job, or Why It Worked for a Couple of Decades
In the 1990s and early 2000s, an enterprise firewall could get real work done just by looking at Layer 3 and Layer 4. A typical rule set said something like "allow outbound TCP 80 and 443 to anywhere, allow outbound TCP 25 to the mail relay, block inbound everything except the web server." That was mostly it. Stateful inspection (pioneered by Check Point in the early 90s) let the firewall remember which connections were established, so return traffic for outbound connections was automatically allowed. Packet filtering plus connection tracking plus NAT, that was the whole product.
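The whole era can be captured in a few lines. Here is a toy sketch of that product, under illustrative addresses and rules (not any vendor's syntax):

```python
# Toy model of a classic stateful firewall: L3/L4 rules plus connection
# tracking. Addresses and the rule set are illustrative.

ALLOWED_OUTBOUND = {80, 443}        # web
MAIL_RELAY = "192.0.2.25"           # outbound SMTP allowed only to the relay

class StatefulFirewall:
    def __init__(self):
        # established outbound connections: (src, sport, dst, dport)
        self.conntrack = set()

    def check(self, direction, src, sport, dst, dport):
        if direction == "out":
            if dport in ALLOWED_OUTBOUND or (dport == 25 and dst == MAIL_RELAY):
                self.conntrack.add((src, sport, dst, dport))
                return "allow"
            return "drop"
        # inbound: only return traffic for a tracked outbound connection
        if (dst, dport, src, sport) in self.conntrack:
            return "allow"
        return "drop"
```

Return traffic is allowed because the outbound connection was remembered; unsolicited inbound traffic is dropped. That really was most of the product.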
This worked because a few things were true. Applications used well-known ports. If it was on port 80, it was HTTP. If it was on port 22, it was SSH. Traffic was mostly plaintext, so if you wanted to see what was going on, you could just look at it. Users were physically on the corporate LAN, so the firewall at the edge saw literally all their traffic. Applications lived in the corporate data center, so east-west traffic stayed inside the perimeter and didn't require firewall enforcement. The concept of "trusted" and "untrusted" networks made sense because you actually had a line between the two.
Everything in that paragraph is now wrong. Applications run on port 443 regardless of what they are, because port 443 is the one port that's open everywhere. Traffic is almost entirely encrypted, and increasingly encrypted in ways that actively resist inspection. Users are on coffee shop Wi-Fi, their home networks, cellular hotspots, and maybe in the office two days a week if you're lucky. Applications are SaaS, which means your users connect directly from their laptops to Salesforce and Workday, never touching anything the company owns. The trust boundary has been distributed into so many pieces that the old perimeter is, at best, one of a dozen enforcement points, and not necessarily the most important one.
The firewall is still there. The firewall still has rules. The rules are mostly lies.
TLS Inspection, or Man-in-the-Middling Your Own Users
The first big fight in the 2020s enterprise firewall is TLS inspection. Also called SSL inspection, deep packet inspection (when it involves decryption), break-and-inspect, or, if you're feeling honest, "legal MITM on the corporate network."
The premise is that your firewall needs to see inside TLS traffic to do anything useful. Malware downloads happen over HTTPS. Data exfiltration happens over HTTPS. Command and control traffic happens over HTTPS. If your firewall can't see inside encrypted traffic, it can't detect any of this, which leaves it with roughly the same capabilities as a 1997 packet filter. So enterprise firewalls do what enterprise firewalls have always done: they decrypt the traffic, inspect it, and re-encrypt it before forwarding.
The mechanism is straightforward and architecturally offensive. Your client is configured to trust a corporate certificate authority. Your firewall has a private key matching that CA. When a user goes to https://example.com, the firewall intercepts the TLS handshake, terminates it locally, generates a fake certificate for example.com signed by the corporate CA, and presents that to the client. The client accepts it (because it trusts the corporate CA), establishes TLS with the firewall, and sends its traffic in the clear (to the firewall). The firewall inspects it, opens a separate TLS connection to the real example.com, forwards the traffic, and mirrors responses back. Your users see a padlock. Your firewall sees everything. Everyone is happy, except for the principle of end-to-end encryption, which is now dead.
Why this is getting harder every year
TLS inspection worked fine when TLS was simple. TLS 1.2 had predictable handshakes, server certificates that exposed the hostname, and cipher suites that the firewall vendor knew how to implement. Then TLS 1.3 arrived (RFC 8446, 2018), and everything that made inspection easy went away.
TLS 1.3 encrypts the server certificate in the handshake. Under TLS 1.2, the firewall could read the hostname out of the certificate and the cleartext SNI field without decrypting anything, making hostname-based policy enforcement trivial. With the certificate encrypted, the only hostname clue left is the Server Name Indication field in the ClientHello. Then came Encrypted Client Hello (ECH, still an in-progress IETF draft, with its keys distributed via the SVCB/HTTPS DNS records of RFC 9460), which encrypts the SNI too. At that point the firewall can see that you're connecting to some IP address on port 443 using TLS 1.3, and literally nothing else. It cannot tell if you're going to Gmail or going to a C2 server that happens to share a CDN with Gmail. For policy purposes, they are indistinguishable.
TLS 1.3 also has 0-RTT (zero round-trip time) handshakes, which let clients send application data in the first packet. This breaks inspection boxes that assumed they'd have a leisurely handshake to analyze before the data started flowing. It also has perfect forward secrecy as a requirement, so you can't capture traffic now and decrypt it later if the server key leaks. This is an unambiguous security win for users and an unambiguous operational loss for inspection.
Then there's certificate pinning. Modern applications, particularly mobile apps and browsers, increasingly pin specific certificates or public keys and refuse to accept anything else. This is great for preventing actual attacks, and it is fatal for corporate TLS inspection. When Zoom or Slack or a banking app sees your firewall's fake certificate, it does not see a trusted corporate CA. It sees a wrong certificate, and it refuses to connect. The enterprise gets a helpdesk ticket that says "Zoom doesn't work on the corporate Wi-Fi," and the security team has to add Zoom's entire infrastructure to the bypass list. Over time the bypass list gets longer, and the inspection coverage gets smaller, until you are inspecting approximately the traffic that wasn't going to attack you anyway.
The bypass list, or how TLS inspection slowly turns into a really expensive loophole
Every enterprise with TLS inspection has a bypass list. The bypass list is the set of destinations where inspection is skipped, because inspecting them breaks them. It starts small. Banking sites, which use client certificate authentication and will not tolerate an intermediate CA. Microsoft 365, because Microsoft has explicit documentation saying not to inspect their traffic and will blame you when things break. Apple services, because Apple pins things aggressively. Zoom and Webex, because video codecs and fake certificates do not mix. Then it grows. Salesforce. Workday. ServiceNow. Box. Dropbox. GitHub. Slack. Any app the CEO uses. Any app the lawyers use. Any app that generates a ticket.
By year three of a TLS inspection deployment, the bypass list has hundreds of entries, and it covers most of the traffic that matters. Your inspection engine is now inspecting the long tail: weird SaaS apps nobody has complained about yet, personal webmail, random CDNs. The malware and data exfiltration that were the reason for inspection in the first place are, of course, happening over the major SaaS platforms that are now on the bypass list. Attackers figured out years ago that if you host your C2 on AWS or a major CDN, and you look like an application the enterprise has bypassed, you are effectively invisible.
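A minimal sketch of how the bypass decision is typically evaluated, assuming longest-suffix matching of the observed hostname against a bypass list (the domains here are illustrative examples, not a recommendation):

```python
# Sketch of TLS-inspection bypass evaluation: suffix match on the
# hostname. The bypass set below is illustrative.

BYPASS_SUFFIXES = {
    "zoom.us", "slack.com", "office.com", "microsoft.com",
    "apple.com", "salesforce.com", "github.com",
}

def inspection_decision(hostname):
    labels = hostname.lower().split(".")
    # check every meaningful suffix of the hostname against the bypass set
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in BYPASS_SUFFIXES:
            return "bypass"
    return "inspect"
```

Every entry added to the set above shrinks the traffic the inspection engine actually sees, which is how the loophole grows one helpdesk ticket at a time.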
This is the unspoken truth of TLS inspection in 2025: it costs a fortune in hardware, it breaks things constantly, it generates tickets every week, and it does approximately nothing to catch sophisticated attacks. It does catch a lot of low-effort malware and some careless insiders, which is better than nothing, but the return on investment is not what the vendor pitch deck promised.
DNS Firewalling, or the Best First Line That's Full of Holes
If TLS inspection is the expensive option with diminishing returns, DNS firewalling is the cheap option that gets you most of the way there, most of the time. The idea is beautiful in its simplicity. Every malicious website, every C2 server, every data exfiltration endpoint has a name. Before the client connects, it has to resolve that name. If you control DNS resolution, you can block the name before the connection ever starts, and you don't need to decrypt anything.
DNS firewalling (implemented via RPZ, DNS sinkholes, commercial products like Cisco Umbrella, Cloudflare Gateway, NextDNS, and so on) works by intercepting DNS queries from your users, comparing the requested hostname against a threat intelligence feed, and returning NXDOMAIN or a sinkhole IP for anything on the block list. It is cheap, fast, scales horizontally, and provides real protection against unsophisticated threats, which is most threats.
For a receptionist clicking a phishing link in an email, DNS firewalling is excellent. The link resolves to a known malicious domain, the DNS query gets blocked, the connection never happens, the receptionist calls IT and says the site is down, everyone moves on with their life. DNS firewalling catches a meaningful percentage of opportunistic threats, and if you deploy nothing else at the network layer, deploy this.
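The mechanism can be sketched in a few lines, assuming a hostname blocklist and a sinkhole address (both are placeholders here, standing in for a real RPZ feed):

```python
# Sketch of an RPZ-style DNS firewall: check the queried name against a
# blocklist before resolving. Blocklist and sinkhole IP are placeholders.

BLOCKLIST = {"evil-phish.example", "c2.badcdn.example"}
SINKHOLE_IP = "10.66.66.66"     # internal sinkhole that logs the hit

def resolve(qname, upstream):
    name = qname.rstrip(".").lower()
    parts = name.split(".")
    # block the exact name and any subdomain of a blocked name
    for i in range(len(parts)):
        if ".".join(parts[i:]) in BLOCKLIST:
            return SINKHOLE_IP          # or NXDOMAIN, depending on policy
    return upstream(name)
```

The connection never starts, nothing was decrypted, and the sinkhole hit gives the SOC a clean signal about which machine clicked the link.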
How a determined attacker bypasses DNS firewalling in fifteen different ways
But if the attacker is paying attention, DNS firewalling is a speed bump. The techniques to bypass it have been documented for years and keep getting easier.
Hardcoded IPs. If your malware doesn't need to resolve a name, you don't need DNS. Modern C2 frameworks happily connect to an IP address directly. The DNS firewall sees no query, has nothing to block, and life goes on.
DNS over HTTPS (DoH, RFC 8484) and DNS over TLS (DoT, RFC 7858). Your corporate DNS server is where your DNS firewall lives. If the malware (or the user, for that matter) queries 1.1.1.1 over HTTPS directly, your DNS firewall never sees the query. The client gets back whatever answer it wants, your DNS logs show nothing, and your DNS firewall is a monument to a world that used to exist. Browsers have been making DoH the default for consumer users, which is a privacy win and a corporate security headache. Firefox, Chrome, and Edge all support DoH, and while they have enterprise policies to disable it, those policies only work on managed devices, and only if the admin remembered to configure them.
DNS over QUIC (DoQ) and Oblivious DoH. The protocol engineers are not done. DoQ (RFC 9250) runs DNS over QUIC, which is encrypted inside a UDP protocol that looks a lot like a generic HTTPS connection. Oblivious DoH (ODoH, RFC 9230) routes DNS queries through a proxy so that even the DoH resolver doesn't know who asked. All of this is good for user privacy and an ever-escalating problem for enterprise visibility.
Domain generation algorithms (DGAs). Malware doesn't hardcode a single C2 domain anymore. It generates thousands of candidate domains per day using a seeded algorithm, tries each one, and talks to whichever one resolves. Your threat feed cannot keep up. Some DNS firewalls try to detect DGAs with machine learning pattern matching, which works for known DGA families and fails for new ones. This is an endless arms race, and the defenders are not winning it.
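A sketch of the pattern, using a hash chain as the seeded generator. Real DGA families vary widely; this one is purely illustrative of why a static block list cannot keep up:

```python
# Illustrative DGA: a seed and the current date deterministically
# generate candidate C2 domains. Both sides (malware and operator)
# run the same algorithm, so no domain needs to be hardcoded.

import hashlib

def dga(seed, day, count=5):
    domains = []
    state = f"{seed}:{day}".encode()
    for _ in range(count):
        state = hashlib.sha256(state).digest()
        # map the first 12 hash bytes to lowercase letters
        name = "".join(chr(ord("a") + b % 26) for b in state[:12])
        domains.append(name + ".example")
    return domains
```

The operator registers one of today's candidates; the malware walks the list until one resolves. Tomorrow the whole list is different, and yesterday's threat feed is already stale.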
Fast flux and domain fronting. Attackers register a domain, point it at an ever-rotating set of IPs (often on major CDNs), and tear down infrastructure faster than block lists can update. Domain fronting (now largely shut down by major CDN providers, but still sometimes possible) hides the real destination behind a CDN's front hostname.
Encoded queries over non-DNS channels. If your malware wants to be really sneaky, it encodes C2 traffic into DNS queries of its own, using a domain the attacker controls. The query abc123xyz.bad.example.com is the literal payload, and the DNS response is the command. Or it skips DNS entirely and embeds commands in the TLS SNI field, or in HTTP headers, or in the subject lines of IMAP searches, or in Twitter posts, or in Gmail drafts. At some point every protocol becomes a C2 channel, because any bidirectional communication can be repurposed for it.
DNS firewalling is still worth deploying, because it stops a large fraction of unsophisticated attacks at essentially zero cost. It is a complement to the rest of your stack, not a replacement for any of it. And it is not going to catch anything an actually-motivated attacker does. If you think DNS firewalling is your primary defense, the determined malware on your network right now thanks you for your service.
User Context, or Why "Is This Normal" Beats "Is This Allowed"
Here is one of the biggest shifts in modern firewalling: the rule isn't "is this connection allowed" anymore, it's "does this connection make sense for this user."
Consider two SSH sessions, both outbound to the same IP address in an AWS region at 3 AM. They are, from the network's perspective, identical. Same protocol, same port, same destination, same encryption, same packet sizes. But one of them is from Alice, a senior SRE who runs the production infrastructure, and the other is from Bob, a receptionist who has never opened a terminal in his life. Alice's SSH session is almost certainly fine. Bob's SSH session is either (a) the result of malware running on his laptop, or (b) the most interesting HR conversation of the year. Either way, the firewall needs to care.
The old firewall model can't see this distinction. It sees IP addresses, ports, and protocols. The new model (variously called "identity-aware firewalling," "user-based policy," or whatever buzzword the vendor has this quarter) ties network decisions to the identity of the user. This requires the firewall to know who is behind every connection, which requires integration with identity providers (Active Directory, Okta, Entra ID), which requires the endpoints to assert identity to the firewall in a trustworthy way.
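Under illustrative roles and users, the decision the old model couldn't make looks like this:

```python
# Toy identity-aware rule: the same flow is judged differently depending
# on who is behind it. Users, roles, and port sets are illustrative.

ROLE_ALLOWED_PORTS = {
    "sre":          {22, 443, 6443},   # SSH, HTTPS, Kubernetes API
    "receptionist": {443},             # web only
}

USERS = {"alice": "sre", "bob": "receptionist"}

def verdict(user, dport):
    role = USERS.get(user)
    if role and dport in ROLE_ALLOWED_PORTS[role]:
        return "allow"
    return "alert"   # anomalous for this identity: flag for review
```

The same destination port 22 is routine for Alice and an incident for Bob, which no amount of L3/L4 rule tuning could ever express.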
Where identity comes from
There are several mechanisms for getting user identity into firewall decisions:
802.1X authentication. When the user connects to the network (wired or wireless), they authenticate via 802.1X to a RADIUS server, which ties the MAC address and switch port to a user identity. The firewall can query the RADIUS server (or consume its logs) to know which IP belongs to which user. This works on the LAN. It doesn't work when the user is at home. And 802.1X has its own pile of problems (more on that shortly).
Agent-based identification. The endpoint has a client agent (Palo Alto GlobalProtect, Cisco AnyConnect, Zscaler Client Connector, and so on) that talks to the firewall or proxy and asserts the user's identity. This works anywhere the agent is installed, which is ideally every managed device, which is in practice most of them but not all of them.
Proxy-based identification. All web traffic goes through a proxy that requires authentication before forwarding. The proxy knows who you are because you logged in. This works for web traffic, poorly for everything else, and terribly for applications that don't handle proxy auth gracefully.
SAML/OIDC federation at the application layer. The identity is actually carried by the application itself via SSO. The firewall doesn't see it directly but can consume logs from the identity provider. This is weak for real-time enforcement but useful for audit and anomaly detection.
The honest truth is that most environments use some combination of all four, and the integration between them is a mess. The firewall has a view of identity that is usually correct, sometimes stale, occasionally wrong, and always more complicated than the vendor diagrams suggest.
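One way to picture that reconciliation is a trust ordering plus per-source freshness windows. The source names, TTLs, and record shape below are all assumptions for illustration:

```python
# Sketch of merging IP-to-user mappings from multiple identity sources.
# Precedence and freshness windows are illustrative assumptions.

PRECEDENCE = ["agent", "dot1x", "proxy", "idp_logs"]     # most to least trusted
MAX_AGE_SECONDS = {"agent": 300, "dot1x": 3600, "proxy": 900, "idp_logs": 86400}

def current_user(ip, mappings, now):
    """mappings: list of {'ip', 'user', 'source', 'ts'} dicts."""
    fresh = [m for m in mappings
             if m["ip"] == ip and now - m["ts"] <= MAX_AGE_SECONDS[m["source"]]]
    if not fresh:
        return None
    # prefer the most trusted source among the fresh records
    fresh.sort(key=lambda m: PRECEDENCE.index(m["source"]))
    return fresh[0]["user"]
```

The "sometimes stale, occasionally wrong" part lives in those TTLs: a DHCP lease rolls over, the agent record expires, and the firewall quietly attributes Bob's traffic to whoever had the IP an hour ago.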
Behavioral baselines, or the machine learning promise
The next step past identity-aware policy is behavioral analysis. If you know that Bob is a receptionist whose traffic normally consists of Outlook, SharePoint, the HR system, and some excessive Amazon shopping, you can flag the SSH session because it's anomalous. If Alice suddenly starts pulling 50 GB from an internal database at 2 AM when she normally doesn't work nights, you can flag that too.
Vendors have been selling this as "UEBA" (User and Entity Behavior Analytics) for a decade now. It mostly works. It also generates a staggering number of false positives, because user behavior is genuinely weird. People work weekends. People travel. People try new tools. Every legitimate deviation from baseline generates an alert, and the SOC analysts learn to ignore alerts, and then the one real alert gets buried. This is the same alert fatigue problem that has plagued every SIEM since the invention of SIEMs, and putting "AI" in the product name has not solved it.
The goal isn't for the machine to detect all bad behavior automatically. The goal is for the machine to reduce the search space enough that a human analyst can investigate the interesting things. When it works, it's genuinely valuable. When it doesn't work, it's a very expensive noise generator.
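Strip away the branding and the core of a baseline check is something like the sketch below, where the threshold is the knob that trades missed detections against alert fatigue:

```python
# Minimal baseline-deviation check of the kind UEBA products elaborate
# on: flag a value far outside the user's own history (z-score test).

import statistics

def is_anomalous(history, value, threshold=3.0):
    """history: this user's past daily byte counts (at least two values)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0   # guard against zero variance
    z = abs(value - mean) / stdev
    return z > threshold
```

Lower the threshold and you bury the SOC in weekend workers and travelers; raise it and Alice's 50 GB pull slides under it. The narrow band in between is where the entire product category lives.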
802.1X, or Network Access Control's Complicated Middle Child
802.1X is the standard for port-based network access control. Conceptually, it's simple: before a device is allowed on the network, it authenticates. If it passes, it gets network access, possibly in a specific VLAN with specific policies. If it fails, it gets blocked or placed in a restricted "remediation" VLAN.
In practice, 802.1X is one of those technologies that looks clean on a slide deck and turns into a multi-month project the moment you actually try to deploy it.
Why 802.1X is harder than it looks
First, not every device supports 802.1X. Printers sometimes do, sometimes don't, and even when they do the implementation is often buggy. IP phones have their own weirdness with voice VLAN handoff. IoT devices, industrial control systems, medical devices, cameras, HVAC controllers, and the long tail of embedded weird-stuff mostly don't support 802.1X at all. For these, you fall back to MAC Authentication Bypass (MAB), which is exactly as strong as it sounds. You're trusting a MAC address. A MAC address. The thing that can be spoofed with a single command on any laptop. MAB is ubiquitous because it's the only option for a huge swath of devices, and every 802.1X deployment is full of MAB exceptions that reduce the overall security to "do you happen to know a printer's MAC address."
Second, certificate management for 802.1X is a perpetual operational problem. You want EAP-TLS, which is the strong option and requires certificates on every endpoint. Deploying certs, renewing them, revoking them, handling devices that can't enroll, handling users who wipe their laptops and lose certs, all of this is work. The weaker alternative, PEAP or EAP-TTLS with password authentication, is easier to deploy and trivially phishable. Organizations that don't want to deal with certificate management end up with password-based 802.1X, which means a rogue AP with the same SSID name can capture credentials and the network access control is a fiction.
Third, 802.1X only authenticates the device's admission to the network. Once the device is on, 802.1X is done. If the device is compromised, 802.1X has no opinion. It doesn't inspect traffic, doesn't check device posture continuously, doesn't re-evaluate based on behavior. It's a one-time check at the door, like showing ID at a bar. Once you're in, you're in.
Fourth, 802.1X exists in a world where most user traffic is now wireless, and wireless 802.1X (WPA2-Enterprise, WPA3-Enterprise) has its own pile of quirks. Roaming between access points, re-authentication timing, driver compatibility, OS-specific behavior, guest networks that coexist with the corporate SSID. Every one of these is a ticket waiting to happen.
Fifth, 802.1X is an enterprise LAN technology. It doesn't help when the user is at home. For work-from-home, you replace 802.1X with something else entirely, usually a VPN or a zero-trust agent, and now you have two access control systems, neither of which has complete coverage of your user population.
NAC plus posture assessment
Modern NAC systems (Cisco ISE, Aruba ClearPass, Forescout) extend 802.1X with device posture assessment. Before granting access, the NAC checks whether the endpoint is running antivirus, whether patches are up to date, whether disk encryption is enabled, whether the OS is a supported version. If the posture is bad, the device goes into a remediation VLAN where it can fix itself and try again. This is called "NAC with posture," and when it works it's fantastic.
When it doesn't work, it means the CFO can't get on the network because her laptop's antivirus definitions are four hours old, and now you're getting yelled at. NAC posture is a powerful tool and a political minefield. The engineering challenge is making the checks accurate enough to stop actual bad posture and lenient enough to not block legitimate users on edge cases. The sweet spot is narrow, and it moves over time as operating systems and security tools evolve.
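A sketch of a posture policy with explicit grace windows, so a four-hour-old antivirus definition file does not strand anyone. All field names and thresholds here are illustrative, not any NAC product's schema:

```python
# Sketch of NAC posture evaluation with grace windows. Hard failures
# and thresholds are illustrative policy choices.

def posture_vlan(device):
    failed = []
    if not device.get("disk_encrypted"):
        failed.append("encryption")                 # hard requirement
    if device.get("av_definitions_age_hours", 999) > 72:
        failed.append("av")                         # 72h grace, not zero
    if device.get("patch_age_days", 999) > 30:
        failed.append("patching")
    return "corp" if not failed else "remediation"
```

The engineering work is entirely in those thresholds: tight enough to mean something, loose enough that the CFO's slightly stale AV definitions land in "corp" rather than in your inbox.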
The Endpoint Agent, or the Firewall Has a Little Friend Now
Here is one of the biggest shifts in enterprise security over the last decade: the network firewall is no longer the primary enforcement point. The endpoint is. Modern enterprise security relies heavily on endpoint detection and response (EDR) agents: CrowdStrike Falcon, SentinelOne, Microsoft Defender for Endpoint, Palo Alto Cortex XDR, and their peers. These agents run on every managed device, have deep visibility into the operating system, and enforce policy locally. They see the process that opened the connection. They see the file that was downloaded. They see the command line. They see everything the firewall wishes it could see but can't, because it's on the wire and the interesting stuff happens above the wire.
The endpoint agent and the firewall are supposed to integrate. In theory, the agent reports user identity, device posture, and process information to the firewall, which uses that to make smarter decisions. The firewall reports suspicious traffic to the agent, which correlates it with local activity and responds. In practice, the integration is partial, vendor-specific, and usually the subject of a PowerPoint deck rather than working code.
What actually works
User and host identity injection. The agent asserts to the firewall "this connection is from user Alice on device LAPTOP-1234," and the firewall applies user-aware policy. This works reasonably well for managed devices.
Posture signaling. The agent tells the firewall whether the device meets policy (AV running, patches current, disk encrypted, firmware updated). The firewall gates access based on this. This is the core value proposition of zero-trust network access (ZTNA) platforms, and when the integration is clean it genuinely raises the bar for attackers.
Process attribution. When the firewall sees a suspicious connection, it asks the agent "what process on that device made this connection." If it's curl with a suspicious command line, that's a signal. If it's Chrome, probably fine. This attribution used to be impossible from the network. Now it's possible if your agent and firewall speak the same integration protocol.
Response actions. When the firewall detects something bad, it can tell the agent to isolate the host, kill a process, or remove a file. This is a much faster response than having a human analyst log in and investigate. It's also a much faster way to brick the CEO's laptop if you're wrong, so approvals and sanity checks are important.
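The process-attribution step in particular can be sketched as a 5-tuple join between the firewall's flow record and the agent's connection table. Field names and the suspicious-process list below are assumptions for illustration, not any vendor's API:

```python
# Sketch of firewall-to-agent process attribution: look up the flow's
# 5-tuple in the agent's connection table. Schema is an assumption.

SUSPICIOUS_PROCESSES = {"curl", "powershell", "certutil"}

def attribute(flow, agent_conn_table):
    key = (flow["src"], flow["sport"], flow["dst"], flow["dport"], flow["proto"])
    proc = agent_conn_table.get(key)
    if proc is None:
        return "unknown"        # unmanaged device, or a stale table
    if proc["name"] in SUSPICIOUS_PROCESSES:
        return "investigate"
    return "benign"
```

The "unknown" branch is the whole unmanaged-device problem in one line: no agent, no table entry, no attribution.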
What doesn't work
Multi-vendor integration. If your firewall is vendor A and your endpoint is vendor B, the integration is almost certainly worse than if they were both vendor A. Shared vocabularies like the MITRE ATT&CK framework and standards like OpenC2 try to provide common ground, but the actual data plane between a firewall and an endpoint is still mostly vendor-specific APIs. Picking a single vendor for both makes the integration work better and the procurement more expensive and the vendor lock-in worse. Pick your pain.
Unmanaged devices. The entire endpoint agent story falls apart the moment you have devices you don't manage. Contractors, BYOD users, IoT, OT, and so on all have network access and no agent. For these, you're back to IP addresses and MAC addresses and hoping, or you need to segment them into a zone where the network rules are strict enough to compensate for the lack of endpoint visibility. This is why "segment the network" is always on every security consultant's list, and why nobody fully does it.
Work From Anywhere, or The Day the Perimeter Died
The pandemic killed whatever was left of the perimeter. Before March 2020, the majority of a typical enterprise's users were on the corporate LAN for most of the workday. After March 2020, overnight, they were on their home Wi-Fi, with their kids' Xboxes, their smart thermostats, and their neighbor's leaky Ring camera sharing the same network segment. Every threat model that assumed "the user is on a trusted network" was suddenly invalid. Every firewall rule that assumed "inbound from the Internet" was suddenly seeing traffic that was nominally trusted, from the CEO's laptop, coming from a residential IP in a state you didn't know you had employees in.
The classical response was VPNs. Route everything through a corporate VPN concentrator, tunnel all traffic back to the data center, and the firewall sees traffic like it always did. This had a bad day on March 16, 2020, when a decade of capacity planning got obliterated in an afternoon, and every enterprise's VPN concentrators were running at 300% of their rated capacity. Some of them held up. Most did not. The entire IT industry learned in real time that remote-access VPN capacity sized for the fraction of employees who occasionally worked from home does not survive the entire workforce connecting at once.
Split tunneling, or the compromise nobody wanted to make
The technical fix for VPN overload was split tunneling. Instead of routing all traffic through the VPN, only route traffic destined for corporate resources through the VPN. Everything else (YouTube, the user's personal banking, the million ads that load on every news site) goes directly out through the home Internet. This keeps the VPN concentrator from exploding and keeps Teams video from routing through a data center 2,000 miles away just to get to Microsoft's servers 800 miles away.
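The split-tunnel decision itself is just a routing-table membership check. A sketch, with illustrative corporate prefixes:

```python
# Sketch of the split-tunnel routing decision: only corporate prefixes
# ride the VPN; everything else exits directly. Prefixes illustrative.

import ipaddress

CORP_PREFIXES = [ipaddress.ip_network(p) for p in
                 ("10.0.0.0/8", "172.16.0.0/12", "192.0.2.0/24")]

def route(dst_ip):
    addr = ipaddress.ip_address(dst_ip)
    if any(addr in net for net in CORP_PREFIXES):
        return "vpn"        # backhauled, inspected by the corporate stack
    return "direct"         # home Internet, invisible to the firewall
```

Everything that takes the "direct" branch is traffic the network security stack will never see, which is the trade the next paragraph is about.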
It also means the firewall sees a fraction of what it used to see. Traffic to SaaS applications (which is, at this point, most traffic) never touches the VPN, never touches the firewall, and is entirely invisible to the network security stack. You've preserved VPN capacity at the cost of visibility. If the user's laptop is compromised and the malware talks to a SaaS C2 channel, the firewall sees nothing because the traffic never goes through it.
This is the moment zero trust stopped being a buzzword and started being a necessity. The premise is that you can no longer trust the network location. Every access to every resource has to be authenticated and authorized independently, regardless of whether the user is on the corporate LAN, at home, or at Starbucks. The firewall is no longer the enforcement point. The identity provider, the endpoint agent, the application, and the data plane proxy are all enforcement points.
Zero trust network access, or the new perimeter that isn't a perimeter
Zero Trust Network Access (ZTNA) products (Zscaler Private Access, Cloudflare Access, Palo Alto Prisma Access, Netskope, Tailscale, Twingate, Cisco Secure Access, and probably six more by the time you read this) replace the VPN with an identity-aware proxy. The user authenticates to the proxy (usually via the corporate IdP with MFA), the proxy authorizes access to specific applications based on user and device posture, and the proxy forwards traffic to the application without giving the user direct network access to the internal network. If the user's device is compromised, the malware can access only the applications the user is currently authorized for, not the entire corporate LAN.
This is architecturally much better than VPN. It also requires every application to be fronted by the ZTNA proxy, which is fine for web applications and complicated for legacy protocols. It requires the ZTNA proxy to be available, which makes it a single point of failure. It requires the IdP to be available, which makes it another single point of failure. It shifts the trust from network position to cryptographic identity and device posture, which is great until the identity or the device is compromised, at which point the attacker has the keys to exactly the applications that user was authorized for, which might be a lot.
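The per-request decision at the heart of ZTNA can be sketched like this, with illustrative users, apps, and entitlements (no real product's policy model):

```python
# Sketch of a ZTNA authorization check: identity, device posture, and
# per-app entitlement evaluated on every connection. All names are
# illustrative.

ENTITLEMENTS = {"alice": {"jenkins", "wiki"}, "bob": {"wiki"}}

def ztna_authorize(user, app, mfa_passed, device_posture_ok):
    if not (mfa_passed and device_posture_ok):
        return "deny"
    if app not in ENTITLEMENTS.get(user, set()):
        return "deny"
    return "allow"   # proxy opens a connection to this one app only
```

Note what a compromised-but-healthy device gets: exactly the apps in the user's entitlement set, which is the blast radius zero trust promises and also its ceiling.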
Zero trust is not a silver bullet. It is a better architecture for a world where users are everywhere and the network is no longer a trust boundary. But it introduces its own failure modes, its own integrations to get wrong, its own operational complexity. The firewall hasn't gone away. It's been joined by a half-dozen other enforcement points, each with its own policy engine and its own failure modes, and the security team's job is now to keep them all consistent.
Connecting Untrusted Networks to Trusted Networks, or the Eternal Problem
There's a specific category of problem that has never been solved well and probably never will be: joining networks that have different trust levels.
The corporate network is trusted (ish). The home office is not trusted. The field office that was acquired three months ago is unknown. The partner network that has to integrate with SAP is only trusted for certain flows. The OT network that runs the factory floor is a special snowflake that nobody wants to touch. Every one of these networks has traffic that needs to reach the corporate network, or vice versa, and every one of them is a potential threat vector.
The classical answer is site-to-site VPNs with firewall rules between the segments. This works, in the sense that it does what it says. It also grows into a rat's nest of exceptions over time. The home office users need access to the intranet, and the intranet needs access to the license server, and the license server needs access to the update server, and the update server needs access to the vendor, and suddenly your DMZ has 47 holes in it and nobody has time to audit them.
SD-WAN, or the software layer over the rat's nest
SD-WAN is supposed to help here, and sometimes does. The pitch is that you deploy SD-WAN edges at each site, they build a mesh of encrypted tunnels to each other, and you manage policy centrally. In practice, SD-WAN works well for the connectivity problem and less well for the security problem. The tunnels are encrypted and reliable. The policies are centrally managed. But the firewall rules inside those tunnels still need to be right, and SD-WAN doesn't magically make them right. It just makes them easier to apply from a single pane of glass, which is not nothing, but it's also not the security transformation the sales pitch implied.
The real value of SD-WAN in security terms is that it lets you put security services at the edge (at the SD-WAN gateway) rather than backhauling everything to a central firewall. This is how SASE (Secure Access Service Edge) became a product category. Gartner named it in 2019, and the entire vendor ecosystem pivoted. SASE combines SD-WAN with cloud-delivered security (firewall, secure web gateway, CASB, ZTNA) and promises to make the whole problem of site-to-site and user-to-site and user-to-cloud security uniform. Whether it actually delivers depends on how much you buy into a single vendor: SASE is much more pleasant to operate when everything comes from one vendor, and most real deployments are not single-vendor.
Device Lockdown vs. User Productivity, or the Permanent Tug of War
Here is the other permanent tension in enterprise security: how much do you restrict the user's device, and how much do you let them do whatever they want?
On one end is the security team's dream, which is a device where the user cannot install software, cannot change settings, cannot access anything not explicitly approved, cannot bypass any control, cannot connect to networks not on the approved list, and basically has a terminal into the corporate SaaS apps and nothing else. This is extremely secure. It is also extremely unpleasant to use, and anyone whose work requires flexibility will quietly work around it by using their personal device, their phone, a non-corporate laptop a friend lent them, or some other shadow IT workaround. The security you gained by locking down the corporate device was lost because the user routed around it.
On the other end is the laissez-faire model, which is basically a BYOD policy with some advisory guardrails. Users have local admin, can install whatever, can use personal accounts for corporate work, can connect to whatever, and can copy data wherever. This is very productive. It is also terrible for security, because when the user's personal habits interact with corporate data, the corporate data loses every time.
Every real enterprise sits somewhere in the middle, and the position is the result of negotiation between security, IT, HR, legal, and the loudest department that wants an exception. Every so often, the position moves. After a breach, it moves toward more restrictive. After a round of layoffs and a lot of helpdesk tickets, it moves toward less restrictive. The equilibrium is not a single point; it's a continuous oscillation that never settles.
What actually helps
A few things do move the needle without breaking productivity. None of them are silver bullets.
Application allowlisting with sensible exceptions. Tools like App Control for Business (formerly Windows Defender Application Control, WDAC), AppLocker, or commercial equivalents let you define which applications can run on managed devices. Unsigned code doesn't execute. Scripts from untrusted sources don't execute. Office macros don't execute by default (finally). This catches a huge fraction of commodity malware at essentially no cost to productivity, because most users don't actually need to run arbitrary binaries.
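The core mechanism is simple enough to show in miniature: a default-deny check against a list of known-good identities before anything executes. A toy hash-based sketch, not how WDAC or AppLocker actually work internally (they match on publisher signatures and path rules as well as hashes, and the policy itself is signed):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Approved binaries, keyed by content hash. In a real deployment this
# list is built and distributed by signed policy, not hard-coded.
approved_tool = b"#!/bin/sh\necho deploy\n"
ALLOWED = {sha256_of(approved_tool)}

def is_allowed(binary: bytes) -> bool:
    # Default deny: anything whose hash is not on the list does not run.
    return sha256_of(binary) in ALLOWED

print(is_allowed(approved_tool))             # True: the approved binary runs
print(is_allowed(b"#!/bin/sh\nrm -rf /\n"))  # False: unknown binary is blocked
```

The important property is the default: the question is never "is this known bad?" but "is this known good?", which is why commodity malware fails even when it's brand new.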
Data loss prevention, but only where it matters. Broad DLP is a productivity disaster. Targeted DLP (preventing upload of certain file types to non-corporate cloud storage, detecting specific patterns like credit card numbers being emailed outside the company) is tractable and useful. The difference is scope. Broad DLP becomes a political battle with every department. Targeted DLP is a technical control with a clear purpose.
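The credit-card case shows why targeted DLP is tractable: you can pair a loose pattern match with a checksum so the rule fires on real card numbers and not on order numbers or phone numbers. A minimal sketch (the function names are illustrative; a real DLP engine would scan mail and upload streams, not strings):

```python
import re

# Candidate: 13-16 digits, optionally separated by spaces or hyphens.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    # Luhn checksum: double every second digit from the right,
    # subtract 9 from anything over 9, and sum.
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def flag_card_numbers(text: str) -> list[str]:
    hits = []
    for m in CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(m.group())
    return hits

print(flag_card_numbers("card: 4111 1111 1111 1111"))   # flagged: Luhn-valid test number
print(flag_card_numbers("order #1234 5678 9012 3456"))  # not flagged: fails Luhn
```

This is the difference in practice: the scope is one data type, the false-positive filter is a checksum rather than a judgment call, and nobody has to argue with a department about it.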
Browser isolation. For high-risk activities (downloading files from unfamiliar sites, opening links in email, accessing partner portals), running the browser session in an isolated environment (either a cloud-hosted remote browser or a local sandboxed instance) lets the user do their job while keeping any compromise confined. This used to be awkward and slow. Modern implementations are fast enough that most users don't notice.
Just-in-time privileged access. Instead of giving admins permanent elevated access, give them time-bounded elevated access on request. The attacker who compromises an admin's laptop during off-hours finds a user account, not a domain admin. The admin asks for elevation when they need it and gets it with MFA and logging. This is the kind of control that costs very little operationally and dramatically reduces blast radius when things go wrong.
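The shape of the control can be sketched in a few lines: grants are MFA-gated, time-bounded, and logged, and the authorization check fails closed on expiry. The names here (grant_elevation, is_elevated) are hypothetical, not any real PAM product's API:

```python
import time

GRANTS = {}     # user -> grant expiry timestamp
AUDIT_LOG = []  # every decision is recorded, granted or not

def grant_elevation(user: str, minutes: int, mfa_verified: bool) -> bool:
    if not mfa_verified:
        AUDIT_LOG.append((time.time(), user, "denied: no MFA"))
        return False
    GRANTS[user] = time.time() + minutes * 60
    AUDIT_LOG.append((time.time(), user, f"elevated for {minutes}m"))
    return True

def is_elevated(user: str) -> bool:
    # Fail closed: no grant, or an expired grant, means no privilege.
    return time.time() < GRANTS.get(user, 0.0)

grant_elevation("alice", minutes=30, mfa_verified=True)
print(is_elevated("alice"))    # True while the grant is live
print(is_elevated("mallory"))  # False: no standing privilege to steal
```

The security payoff is the resting state: between grants, there is simply nothing elevated for an attacker to find.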
Monitoring, and then actually looking at the monitoring. The single most underrated control is "have logs of what's happening and have someone whose job is to look at them." Nearly every sophisticated attack involves weeks or months of activity that would have been visible in logs, if anyone had been watching. The tooling exists. The process of looking at the outputs is what most organizations are actually missing.
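One way to make "actually looking" tractable is to surface the rare events instead of reading every line: common (user, action, source) combinations are routine, and the long tail is where a service account suddenly logging in interactively from a new country lives. A toy sketch with hypothetical field names, standing in for a SIEM query:

```python
from collections import Counter

# Illustrative log events; real input would be a SIEM export.
EVENTS = [
    {"user": "alice", "action": "login", "src_country": "US"},
    {"user": "alice", "action": "login", "src_country": "US"},
    {"user": "bob", "action": "login", "src_country": "US"},
    {"user": "svc-backup", "action": "login", "src_country": "US"},
    {"user": "svc-backup", "action": "interactive_login", "src_country": "RO"},
]

def rare_events(events, threshold=1):
    # Anything seen `threshold` times or fewer is worth a human's attention.
    counts = Counter((e["user"], e["action"], e["src_country"]) for e in events)
    return [key for key, n in counts.items() if n <= threshold]

for key in rare_events(EVENTS):
    print("review:", key)
```

The point is not this specific heuristic; it's that the review queue must be small enough that someone actually reads it, which is also the argument for the ruthless alert-noise reduction below.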
The Uncomfortable Truth About Modern Enterprise Firewalling
Let's be honest: every broad improvement in Internet security has made enterprise security operations harder.
TLS 1.3 and Encrypted Client Hello are unambiguously better for the Internet and unambiguously harder for inspection. DNS over HTTPS is better for user privacy and worse for DNS-based enforcement. Certificate pinning makes banking apps safer and corporate proxies useless. Zero trust is better architecturally and requires rebuilding an entire enterprise security stack. Work from anywhere is better for the humans and terrible for the network perimeter. Endpoint agents provide visibility the network can't and only work on managed devices. SaaS consolidates tools and removes them from the places the security team controls. Every one of these changes is correct and good, and every one of them makes the job of the enterprise security team harder.
The firewall is not going away. It still has a job. The job is smaller than it used to be. Stopping unsolicited inbound traffic is still mostly the firewall's problem. Enforcing network segmentation between zones of different trust levels is still the firewall's problem. Catching the most obvious attack traffic, the low-effort scanners, the legacy exploit attempts, the credential stuffing, is still the firewall's job and the firewall is still good at it.
What the firewall can no longer do is be the primary enforcement point. It can't see encrypted traffic it's been told to bypass. It can't understand user context without integration with identity. It can't see traffic that never traverses it because the user is at home talking to SaaS. The firewall is one of a dozen controls, and the mature security team designs for a world where any single control can fail without the whole system failing.
The failure mode of enterprise security in 2025 is not usually "the firewall didn't block it." It's "the firewall couldn't see it, and the endpoint agent wasn't installed, and the IdP accepted the compromised credentials because the user was fatigued into approving the MFA prompt, and the DLP didn't catch it because it was in a category nobody had defined, and the SOC didn't notice because alerts had been muted after three months of false positives." Every control worked as designed. The design assumed the other controls would also work. None of them did.
The practical advice, if you're running enterprise security in 2025, is uncomfortable and boring. Assume the firewall can see less than it used to. Invest in endpoint agents and identity more than in network inspection. Segment aggressively where you can, and accept that complete segmentation is not achievable. Deploy DNS firewalling because it's cheap and catches the lazy attacks. Deploy TLS inspection only where the value exceeds the operational cost, which is a smaller set of places than the vendor will tell you. Instrument everything. Monitor what you instrument. Reduce alert noise ruthlessly, because an alert nobody reads is worse than no alert. Assume compromise, plan for response, test the response before you need it, and don't lie to yourself about what the firewall is actually doing.
Enterprise firewalling is terrible, enterprise firewalling is essential, enterprise firewalling is fundamentally harder than it was in 2015 because security everywhere else got better. We're not going backward on TLS, DNS privacy, zero trust, or work from anywhere. The firewall has to adapt, or be one of several controls, or be replaced by something that better fits the new shape of the problem. Welcome to the defender's dilemma. The attackers are not waiting.