Zero Trust: Go ahead and open apps on the internet


This article is part of a series that explores zero trust, cyber resilience, and similar topics.

The Federal Zero Trust Strategy recently released by the Office of Management and Budget (OMB) and the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) has one area of action that has raised some eyebrows within the zero trust community: go ahead and open your apps on the internet. Wait, what?

Described in Applications: Action 4, OMB states that agencies “must select at least one FISMA [Federal Information Security Management Act] Moderate system that requires authentication and is currently not accessible over the Internet,” and then allow access to that system over the internet. This very direct mandate may, frankly, seem absurd and counter-intuitive. With a little more background, however, it makes sense within a zero-trust architecture.

There are several reasons why OMB mandates this particular action. Primarily, it is about taking advantage of modern cloud protection services and reducing reliance on self-managed firewalls, which are always only moments away from misconfiguration. It also forces segregation of traffic, especially in the modern cloud era, where it is easier than ever for any administrator with the proper permissions to open an organization’s cloud application to the world. But while a zero-trust architecture can change how defense-in-depth is applied, protecting resources at the edge is still very important. After all, just because an agency takes an application to zero trust doesn’t mean the adversary will stop throwing malicious payloads at it.

What changes with zero-trust defense-in-depth is how an authorized user traverses those defensive measures, or more specifically how that risk is now managed and shared. With Zero-Trust Network Access (ZTNA) and Secure Access Service Edge (SASE) architectures, we no longer connect directly to an internal application or to the security device in front of it. Instead, we negotiate through a second party, such as an internal component or IT division, or a third-party system that enforces ZTNA principles. So, in most cases, we transfer some risk to a vendor or internal component, such as a software-as-a-service based web application protection platform.
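To make that broker-in-the-middle pattern concrete, here is a minimal sketch in Python, assuming a hypothetical internal application address and a placeholder token check rather than any particular vendor’s product. The point is only the flow: the access proxy verifies the caller before a single request is forwarded to the internal app.

```python
# Minimal sketch of the "negotiate through a second party" pattern: clients
# never reach the internal app directly; the access proxy checks identity
# first and only then forwards the request. INTERNAL_APP, the X-Access-Token
# header, and verify_token() are illustrative assumptions, not a real product.
from flask import Flask, request, abort, Response
import requests

app = Flask(__name__)
INTERNAL_APP = "http://10.0.0.15:8080"   # hypothetical internal application

def verify_token(token: str) -> bool:
    """Placeholder identity check; a real deployment would call an identity provider."""
    return token == "expected-demo-token"

@app.route("/", defaults={"path": ""}, methods=["GET"])
@app.route("/<path:path>", methods=["GET"])
def broker(path):
    token = request.headers.get("X-Access-Token", "")
    if not verify_token(token):
        abort(401)                        # unauthenticated traffic never touches the app
    upstream = requests.get(f"{INTERNAL_APP}/{path}", timeout=5)
    return Response(upstream.content, status=upstream.status_code)

if __name__ == "__main__":
    app.run(port=8443)
```

In a real deployment this role is played by the ZTNA or SASE service itself, which is exactly where the shared risk described above comes from.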

If we were to take OMB’s mandate at face value and simply open any internal application we have at hand that meets the definition of “requires authentication”, the results could be disastrous.

Take, for example, the on-premises Microsoft Exchange hack of early 2021, which affected any Microsoft Exchange mail server running Outlook Web Access. This is a system that requires authentication; however, the attackers bypassed its login screen by exploiting a previously unknown vulnerability. So what requirement should we add? The answer is to validate a user’s identity and access before their computer can communicate with the application at all.

To illustrate, let’s take the classic example of the train ticket. A train conductor will often check your ticket once you are already on the train and comfortably seated. If your ticket is not in order in any way, or if you don’t have one at all, the conductor will put you off the train.

The problem is that you have already boarded the train! You may have traveled a station or two before the problem is caught. However, if there is a subway-style gate that checks your ticket before you reach the platform, you never even make it to the train. Your ticket is pre-validated, and only then are you allowed to approach the train.

So how do we pre-validate and open our apps securely on the internet? The answer is relatively simple in concept, if not in application: pre-authentication. Whether self-managing or using a cloud service approved under the Federal Risk and Authorization Management Program (FedRAMP), pre-authentication plays a major role in zero trust.

Pre-authentication, or pre-auth for short, is nothing new. The concept and supporting technologies have been around for decades. It’s exactly what it sounds like: a way to authenticate something, most often users, before they reach the intended target. One of the most basic examples of pre-authentication would be familiar to anyone who has ever used Windows Remote Desktop to connect to another computer.

In the Windows XP era, the normal mode of operation was to connect to the remote computer, be presented with its desktop, and then enter your password with the session already established; for example, you would type your password at a prompt on the Windows login screen. A communication channel had already been set up, which meant that if the computer was vulnerable due to missing patches, exploits could be leveraged against it with relative ease.

The common practice today is to force users to authenticate first and only then present them with the remote desktop session. This is a subtle but very important difference, because a communication channel with the remote computer cannot be established without prior authentication, which greatly reduces the risk.
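As a conceptual sketch only (this is not RDP or Network Level Authentication itself), the toy Python server below shows the same idea: nothing resembling a session exists until the very first message authenticates the client. The pre-shared secret is purely illustrative.

```python
# Conceptual sketch of "authenticate first, then establish the channel":
# the server drops the connection unless the opening message proves identity,
# so an unauthenticated client never gets a working session to attack.
import socket

EXPECTED_SECRET = b"pre-shared-demo-secret"   # stand-in for real credentials

def serve(host: str = "127.0.0.1", port: int = 9000) -> None:
    with socket.create_server((host, port)) as srv:
        conn, _addr = srv.accept()
        with conn:
            first_message = conn.recv(1024)
            if first_message.strip() != EXPECTED_SECRET:
                return                        # connection closed before any session exists
            conn.sendall(b"session established\n")  # only now does the "desktop" appear

if __name__ == "__main__":
    serve()
```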

In the context of applications, pre-authorization follows the same principles. It leverages various technologies and associated protocols, which are implemented through a number of proprietary and open source products. It is also ubiquitous in everyday technology, such as choosing to log into Zoom with a Google or Facebook account.

In this case, you used another party, Google or Facebook, to authenticate with Zoom. This is a key concept in pre-authentication: a second or third party proves your identity, not the app itself. It is also more convenient, because one set of credentials can be used for multiple applications. However, if those credentials can be used for many applications, the service issuing them had better be secure, right? Absolutely!
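A minimal sketch of that delegated check, assuming the PyJWT library and placeholder issuer, audience, and key-set values rather than any real provider’s configuration, might look like this: the application never sees a password; it only verifies that the presented token was signed by the trusted identity provider.

```python
# Sketch of verifying an externally issued ID token before granting access.
# The JWKS URL and audience are hypothetical placeholders.
import jwt                       # pip install "PyJWT[crypto]"
from jwt import PyJWKClient

JWKS_URL = "https://idp.example.com/.well-known/jwks.json"   # hypothetical provider
EXPECTED_AUDIENCE = "my-protected-app"                        # hypothetical client ID

def is_token_valid(id_token: str) -> bool:
    """Return True only if the token was signed by the trusted identity provider."""
    try:
        signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(id_token)
        jwt.decode(
            id_token,
            signing_key.key,
            algorithms=["RS256"],
            audience=EXPECTED_AUDIENCE,
        )
        return True
    except jwt.PyJWTError:
        return False
```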

Although the term “zero trust” relates specifically to not trusting any user or device to implicitly connect to another device or application, the authentication and authorization service must be trusted – and highly trusted – to let that happen.

This service can be a cloud-based identity provider such as Okta, Azure AD, Ping Identity, OneLogin and more. Or it can be an on-premises system or self-maintained, cloud-based infrastructure as a service, such as Red Hat SSO/Keycloak, AD FS, Shibboleth, and others.

Regardless of vendor or technical solution, using a trusted, proven, and mature solution is essential, as a code flaw in the authentication system itself can be catastrophic. In fact, given the possibility of Golden SAML (Security Assertion Markup Language) attacks, it may no longer be a good idea to run your own federation service.

Considerations for OMB Application Action 4 may include:

Is your application safe to open on the Internet?

More than likely, “no”, at least not in its native form. If an application has any type of input form field and is not capable of pre-authentication, choose something else.

As more agencies continue their journey to cloud and software as a service, this mandate potentially leaves legacy on-premises applications as the “guinea pig” for that action.

Regardless of your authentication provider, don’t pick your decades-old legacy enterprise resource planning system as the candidate. Remember that defense in depth still counts.

Use common sense: allowing internet access does not necessarily mean being globally open to the internet. Use conditional access methods, policies, firewall rules, and more to filter traffic to the United States, other known offices, or other criteria, as sketched after this consideration.

At a bare minimum, this will reduce noise in the logs when trying to identify events.
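As a minimal sketch of that kind of conditional filtering, using only Python’s standard library and made-up documentation CIDR ranges rather than real agency networks:

```python
# Sketch of a source-network allowlist checked before any authentication step.
# The CIDR ranges are illustrative examples, not real office networks.
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # hypothetical headquarters range
    ipaddress.ip_network("198.51.100.0/24"),  # hypothetical field-office range
]

def is_source_allowed(client_ip: str) -> bool:
    """Return True if the requester's IP falls inside an allowed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in network for network in ALLOWED_NETWORKS)

print(is_source_allowed("203.0.113.42"))   # True
print(is_source_allowed("192.0.2.7"))      # False
```

In practice this logic lives in a firewall rule, a conditional access policy, or the cloud protection service itself, but the effect is the same: less exposure and far less noise in the logs.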

Is an identity provider enough?

In short, no, because applications still require a network protection solution. Some identity providers have one built in or as an optional add-on; however, it is more common to use a network protection solution such as Zscaler Private Access or Palo Alto Prisma Access and then integrate identity verification into it.

If time and budget permit, consider using two separate vendors.

For example, consider two distinct high-value asset systems. If one uses Vendor A and the other uses Vendor B, then a zero-day code flaw at Vendor A will only affect one of those systems. That halves this particular measure of risk; however, it also doubles the complexity.

Finally, once we consider pre-authorization as part of a zero-trust architecture, OMB’s recommendation starts to seem reasonable. But success and safety depend entirely on how well our solutions are implemented. Implementation is where the devilish details live, and it is critical that we focus on the give-and-take involved in creating a zero-trust architecture. Once you understand this, implementing pre-authorization and opening your apps to the internet starts to make a lot of sense.

Dan Schulman is founder and CTO of Mission: Cyber LLC. Dan and his team develop and implement holistic zero-trust solutions focused on the people executing the mission, whether they do so using modern or legacy technology. Mission: Cyber is currently engaged with several organizations that are actively defining their individual paths toward implementing and optimizing zero trust. He is a member of AFCEA’s Zero Trust Strategies Sub-Committee.

Any reference to individual product suppliers is for illustrative purposes only and does not constitute an endorsement.
