Zero Trust, Explained
If you've read any cybersecurity article in the last few years, chances are it referenced the concept of "Zero Trust". What is Zero Trust, and why has it captured the attention of cybersecurity practitioners across industry and the government? These questions are addressed in this article, which covers the following topics:
- What is Zero Trust?
- Where did Zero Trust come from?
- Why do we need Zero Trust? What's wrong with today's security?
- What is the difference between Zero Trust and SASE?
- What is Zero Trust Network Access (ZTNA)?
- Aren't all Zero Trust solutions the same?
- What are the properties of an ideal Zero Trust architecture?
- What's the difference between Zero Trust and Micro-Segmentation?
- What is Secure Access 2.0, and why does it include both ZTNA and Micro-Segmentation?
What is Zero Trust?
Today's network security is oriented around controlling access to a network. Once you are "on the network," you can access any of the resources (applications and data) on that same network. Gaining access to the network means you are implicitly trusted to access everything on that network. The same principle works for malicious actors - getting network access opens the door to all kinds of attacks - including ransomware.
Zero Trust aims to eliminate that implicit trust in every network access. The core tenet of Zero Trust is "never trust, always verify" - no access should be allowed without first verifying the identity of the requestor and the resource, and verifying that the requestor has the authority to access the resource.
How it Works
To use a departmental server, Alice needs to be "on the same network" as the server. The server accepts network traffic from Alice, just as it would accept traffic from anyone else on the "trusted" network. If an attacker finds a way onto that trusted network, there are no controls left to protect the server.
Zero Trust adds a layer of network security controls that check Alice's identity and authorize her access against policy before she can send any packets to the server. The attacker on the same network fails the identity and policy checks, and can't probe or attack the server.
The Zero Trust methodology shifts the attention away from the network. With Zero Trust, having network access is useless to the attacker.
Where did Zero Trust come from?
The principles of Zero Trust aren't new - security visionaries noted the looming problem back in 2004, in the Jericho Forum. Forrester analyst John Kindervag coined the term "Zero Trust" in 2010; since then, interest in Zero Trust has grown as corporations became increasingly dependent on digital technology in ever-more complex infrastructure, increasing exposure to threats like industrial espionage and ransomware.
Recognizing the power of Zero Trust to protect data, the Biden Administration recently issued an executive order directing federal agencies to adopt Zero Trust by the end of 2024.
In the Zero Trust model, we don't trust traffic implicitly because it comes from a "trusted" network; instead, we can follow a 4-step process to validate and secure that traffic:
- Deny all access by default
- Authenticate users, endpoints, and applications, to validate they are who they claim to be
- Authorize every access to ensure it is allowed by policy
- Continuously monitor for changes in authentication or policy that might affect a session already in progress
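The four steps above can be sketched as a single access-decision function. This is a minimal illustration, not a real product API; the identity store, policy table, and all of the names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user: str
    device: str
    resource: str

# Hypothetical stand-ins for an identity provider and a policy store.
TRUSTED_IDENTITIES = {("alice", "alice-macbook")}
POLICY = {"alice": {"build-server:22"}}

def allow(req: AccessRequest) -> bool:
    # Step 1: default deny -- being "on the network" grants nothing.
    # Step 2: authenticate the user and endpoint together.
    if (req.user, req.device) not in TRUSTED_IDENTITIES:
        return False
    # Step 3: authorize this specific access against explicit policy.
    if req.resource not in POLICY.get(req.user, set()):
        return False
    # Step 4: a real system would re-evaluate these checks continuously,
    # so a change in authentication or policy can end a live session.
    return True
```

An attacker who reaches the network but cannot authenticate fails at step 2 and never gets the chance to probe the resource.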
It's much easier to manage security and risk when it doesn't matter whether your network is breached. That's the power and the promise of a proper Zero Trust framework.
Why do we need Zero Trust? What's wrong with today's security?
The traditional enterprise security model is based on a faulty premise: that you can keep hackers out of the network. Enterprises have invested incredible amounts of time and effort to harden the perimeter with next-generation firewalls, ensure that VPN access uses multi-factor authentication, and continuously hunt for threats on the internal network. Yet the news carries daily reports of organizations getting hacked or falling victim to ransomware - and those are only the ones we hear about!
Why does this happen? Well, simply put, there are too many ways to get into the network. Contrary to their Hollywood image, hackers don't just rely on misconfigured devices or 0-day vulnerabilities - leaked chats from ransomware gangs show how easy it is for them to get VPN credentials through social engineering, phishing, or even simply bribing an employee or contractor to leave the back door open.
Although so much attention has been put on keeping hackers out, relatively little emphasis has been placed on making the internal network more secure. The traditional network security approach is to break the network into smaller and smaller networks, or segments, so that security controls can be inserted between the segments - but this makes it very hard to implement granular policies or change them once they're in place.
One way of doing this involves creating VLANs for each application. In practice, this is very hard to do for more than the most sensitive assets... which is why ransomware and cybersecurity attacks still constantly make headlines.
Additionally, modern enterprise applications run in an increasingly complex environment. Applications and data have moved beyond the traditional corporate perimeter into the public cloud. Add to that a mobile workforce, growing requirements for ecosystem collaboration, and multi-cloud initiatives – it's clear IT has their work cut out for them.
The tried-and-true tools in IT's arsenal – namely, firewalls, VPNs and VLANs – are now several decades old. They were built for a simpler time, when networks didn't change much, and they struggle to support the complex and dynamic requirements of today's enterprise.
Access from point A to point B is defined by the programming of IP addresses and all of the routers and firewalls along the path. It's difficult enough to visualize what's connected to what, let alone try to manage the risk based on the interaction of these separately-managed components.
Consider the connection shown below, between a cloud server and resources in a factory environment.
The policy allowing the cloud server to connect to machines on the factory floor is controlled by as many as 6-12 different routers, switches, and firewalls along the path, each potentially managed by a different team. Often the best way to validate the connectivity is to try it – with a ping. It's significantly more difficult to prove that the policy doesn't expose too much!
Zero Trust dramatically simplifies this problem, by reframing the distributed policy into a single statement of who can access what. For the example above, a Zero Trust architecture can allow you to dramatically simplify the cross-border connectivity, with just one place to check to configure policies and verify access.
What is the difference between Zero Trust and SASE?
Zero Trust is a security strategy: it defines how authentication should be performed (granular and identity-based) and how authorization should be performed (always), but it does not define a specific implementation.
SASE, or Secure Access Service Edge, proposes a new deployment model for security services. SASE incorporates Zero Trust, insofar as it recommends that its security services access layer follow Zero Trust principles, but it focuses more on how connectivity and security should be managed.
For more information on SASE, check out our explainer article.
What is Zero Trust Network Access (ZTNA)?
Zero Trust Network Access (ZTNA) is the most secure method to allow remote users to access applications or endpoints, and is rapidly replacing traditional legacy VPNs as the remote access solution of choice.
The primary purpose of a VPN is connectivity - not security. With a VPN, a user's device is virtually added to the corporate network, and the user can access everything on that network. The rush of corporations to increase capacity to support work from home explains the recent emphasis hackers have put on new VPN exploits and phishing attacks – lots of corporate data is accessible with the right set of credentials.
ZTNA solutions, on the other hand, are built from the ground up with Zero Trust security. They typically use multiple trust factors, such as geolocation and other metadata in combination with standard user authentication and MFA, to authenticate both the user and the device at the same time. This reduces the risk that phished credentials could be used to access corporate data.
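As a sketch of how multiple trust factors might combine, consider a simple scoring scheme. The signals, weights, and threshold here are invented for illustration; real ZTNA products use their own models:

```python
ALLOWED_REGIONS = {"US", "DE"}          # assumed geofence for this sketch
REGISTERED_DEVICES = {"alice-macbook"}  # devices enrolled with the service

def trust_score(ctx: dict) -> int:
    """Accumulate trust signals beyond the password alone."""
    score = 0
    if ctx.get("mfa_passed"):
        score += 2
    if ctx.get("geo") in ALLOWED_REGIONS:
        score += 1
    if ctx.get("device_id") in REGISTERED_DEVICES:
        score += 2
    return score

def authenticate(ctx: dict, threshold: int = 4) -> bool:
    # Phished credentials presented from an unknown device in an
    # unexpected location score too low, even if the password is valid.
    return trust_score(ctx) >= threshold
```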
ZTNA solutions provide granular policy controls that define which services or endpoints the user is allowed to access, and which clients may be used to access them. For example, a typical ZTNA policy might specify something like:
"Alice, from her identified Mac, can use ssh to connect to the build server, using TCP port 22."
"Bob, from his Windows laptop, can use a proprietary client software to connect to the building management system, using either TCP port 80 or 9987-9989."
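Those two example policies could be encoded as data and evaluated with a default-deny match. The field names below are illustrative, not taken from any particular ZTNA product:

```python
# Hypothetical encoding of the two example policies above.
POLICIES = [
    {"user": "alice", "device": "mac", "target": "build-server",
     "protocol": "tcp", "ports": {22}},
    {"user": "bob", "device": "windows", "target": "building-mgmt",
     "protocol": "tcp", "ports": {80, 9987, 9988, 9989}},
]

def is_allowed(user, device, target, protocol, port):
    """Default deny: a connection is permitted only by an explicit rule."""
    return any(
        p["user"] == user and p["device"] == device
        and p["target"] == target and p["protocol"] == protocol
        and port in p["ports"]
        for p in POLICIES
    )
```

Anything not matched by a rule - a different port, an unregistered device, an unknown user - is simply refused.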
Following Zero Trust principles, a good ZTNA implementation must provide a default block for the applications and data it protects. If it relies on an external block, such as a perimeter firewall, then the solution works only for remote access and cannot protect applications and data from on-premises access.
Aren't all Zero Trust solutions the same?
In short - no. As interest in Zero Trust has grown, many network security vendors have rebranded existing solutions as Zero Trust. Some participate in the broader Zero Trust market but offer only pieces of a solution.
Other solutions, like Software-Defined Perimeters (SDP), function as a perimeter "door." They may connect users to a resource, but without segmentation capabilities – once inside, they provide little to no control over how that access may be abused.
While there isn't yet an industry standard for a Zero Trust architecture, NIST is rapidly working to change that through architecture guidelines such as SP 800-207.
Because there are a wide variety of implementations, customers defining their Zero Trust strategy are advised to evaluate factors such as:
- The completeness of the Zero Trust vision
- What kinds of network connections the solution applies to (web apps only? ssh? any TCP/UDP traffic?)
- Whether the solution is natively integrated with segmentation controls
- The ease of implementation and end-to-end policy management
- Whether the solution will require an infrastructure upgrade to be effective in both on-premises and cloud environments.
What are the properties of an ideal Zero Trust architecture?
Simple Policy Definition
Enterprises need tools that keep policy definition simple. Security policies should be human-readable; they should map easily to business requirements to streamline implementation and increase auditability for compliance, and they should support automation for repeatability.
Access That Follows Least Privilege
Hybrid environments are complex, yet most corporate resources (VMs, containers, etc.) are service-oriented and single-purpose – the in-house git server is not also hosting the Active Directory service. As a result, the ideal solution should have strong mechanisms for defining whitelist policies that minimize the attack surface.
Supports Policy Discovery and Evolution
It's difficult for admins to guarantee that policies are properly defined when a service is first turned on. Applications can have undocumented behavior, and even "well understood" applications might have surprise dependencies somewhere in the network. Look for Zero Trust tools that help you discover these dependencies and suggest actions that increase security over time.
Portable to Any Environment
Streamlining cloud adoption and enabling user mobility means companies need to be able to secure their applications and data wherever they happen to be. The ideal Zero Trust solution must shift the focus from network-level architecture to the application-level, operating in every environment and across traditional security boundaries. It should not be dependent on firewalls.
Co-Exists with Existing Networks and Applications
Enterprises cannot switch to this new paradigm overnight. It is important that a Zero Trust solution be deployable to protect critical applications without requiring a forklift upgrade, not only for cost reasons, but also to avoid disrupting the enterprise’s infrastructure and operations.
CoIP Access Platform satisfies all of the properties of an ideal Zero Trust solution.
What's the difference between Zero Trust and Micro-Segmentation?
Zero Trust and Micro-segmentation are aligned topics, but don't refer to the same thing. Zero Trust has to do with security and the factors that drive security policies; micro-segmentation has more to do with how and where security functions are inserted to protect workloads. For more detail, check out the explanation on our resource page, Micro-Segmentation, Explained.
What is Secure Access 2.0, and why does it include both ZTNA and Micro-Segmentation?
Secure Access 2.0 is a new paradigm that designates specific resources (cloud servers, VMs in a datacenter, or manufacturing devices on a shop floor) as targets for secure access, and couples that access with micro-segmentation controls to protect the resource. The result is policy-driven, fine-grained access to specific resources that can be implemented quickly, completely decoupled from the existing network infrastructure.
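One way to picture the pairing is as two independent checks that must both pass: a ZTNA rule governing who may reach the resource, and a micro-segmentation rule governing what the resource itself accepts. The rule tables and names below are hypothetical, a sketch of the idea rather than any product's behavior:

```python
# ZTNA side: which (user, resource) pairs may use which services.
ZTNA_RULES = {("alice", "plc-7"): {"tcp/502"}}
# Micro-segmentation side: which services each resource accepts at all,
# regardless of where the request originates.
SEGMENT_RULES = {"plc-7": {"tcp/502"}}

def secure_access(user: str, resource: str, service: str) -> bool:
    if service not in ZTNA_RULES.get((user, resource), set()):
        return False  # user is not authorized to reach this resource
    # Even an authorized path is limited to services the resource exposes.
    return service in SEGMENT_RULES.get(resource, set())
```

Because the segmentation check sits at the resource, it still applies when a request arrives from somewhere other than the ZTNA access path.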
For our view on Secure Access 2.0, check out our blog post on this topic.