The New TLS 1.3 Standard: Ready or Not, Changes Are Coming

Editor’s note: This article was originally published on the Gigamon Blog.

Over the past few years, there have been several serious attacks on TLS, the widely used encryption standard that protects data exchanged over application protocols such as HTTP, SMTP, IMAP, POP, SIP, and XMPP. For this reason, the Internet Engineering Task Force (IETF) will be voting in the next couple of months on whether to approve an updated version of the standard: TLS 1.3.

Cryptographers believe the new standard will be faster and more secure. Enterprises, on the other hand, are concerned about the implementation and stability issues it might cause.

The difference between TLS 1.2 and TLS 1.3

TLS 1.2 provides a predictable way to negotiate a secure connection, using either an RSA key exchange or an ephemeral (Diffie-Hellman) exchange that offers perfect forward secrecy (PFS), which protects past sessions against future compromises: one compromised session key can’t compromise others. With a TLS 1.2 RSA key exchange, anyone who holds a copy of the server’s private key can decrypt captured traffic passively, simply by observing it, either in transit or at a later date. This practice is widely used across various enterprises for compliance reasons.

With TLS 1.3, the RSA key exchange will be removed from the specification, which means that every conversation generates a unique, ephemeral key. Because that key is generated fresh for every new connection, it will no longer be possible to decrypt a session using a stored copy of the server’s private key. Unless you are a man in the middle of every conversation, you will not be able to decrypt traffic, either out of band or at a later time.
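The per-connection ephemeral exchange can be illustrated with a toy finite-field Diffie-Hellman sketch. This is illustrative only, not TLS 1.3 itself (which uses vetted groups such as X25519 and a full handshake), but the shape is the same: each side generates a fresh private value per connection, so no long-term key ever unlocks recorded traffic.

```python
import secrets

# RFC 3526 2048-bit MODP group (group 14); generator 2.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
    "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
    "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
    "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
    "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
    "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
    "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
    "3995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFF"
    "FFFFFFFF", 16)
G = 2

def ephemeral_keypair():
    """Fresh private/public pair, generated anew for each connection."""
    priv = secrets.randbelow(P - 3) + 2
    return priv, pow(G, priv, P)

# Connection 1: both sides derive the same secret; neither sends it.
client_priv, client_pub = ephemeral_keypair()
server_priv, server_pub = ephemeral_keypair()
client_secret = pow(server_pub, client_priv, P)
server_secret = pow(client_pub, server_priv, P)
assert client_secret == server_secret

# Connection 2 uses brand-new private values, so recording traffic and
# later obtaining one session's key does not unlock other sessions.
c2_priv, c2_pub = ephemeral_keypair()
s2_priv, s2_pub = ephemeral_keypair()
assert pow(s2_pub, c2_priv, P) != client_secret
```

Because the private values exist only for the life of the connection, there is no durable key for an out-of-band decryption appliance to hold, which is precisely the operational problem described above.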

This change is especially problematic in large infrastructures where organizations need to tap lines across hundreds of different points to listen in on conversations, decrypt traffic, and, ultimately, separate good traffic from bad. Moreover, in the name of making communications even more secure, there’s a chance that implementers will drop backwards compatibility, preventing clients that speak only outdated protocol versions from connecting. Google, for example, recently announced that it would stop supporting certain older ciphers. If your browser supports only those older ciphers, you won’t be able to download or use newer versions of Google Chrome. I recently experienced this when I installed Windows XP to run a legacy application: Google refused the version of Internet Explorer that ships with XP, and I could neither run Google searches nor download Chrome.

This issue will become seriously non-trivial once all endpoints migrate to TLS 1.3. Decrypting copies of traffic out of band will become not merely difficult but effectively impossible. You will instead need to sit inline, terminating the connection, decrypting it for analysis, and then re-establishing it. For example, if you opened a secure connection from a computer to a website, you would not be able to decrypt the traffic unless the connection passed directly through a piece of infrastructure holding a copy of the website’s private key.
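As a small, hedged illustration of where endpoints stand today: with Python’s standard ssl module (assuming an OpenSSL build with TLS 1.3 support, i.e. OpenSSL 1.1.1 or later), you can pin a client context to TLS 1.3 and use it to probe which of your servers can already negotiate the new version, and which would fall off the compatibility cliff described above.

```python
import ssl

# Report whether the local OpenSSL build supports TLS 1.3 at all.
print("TLS 1.3 available locally:", ssl.HAS_TLSv1_3)

# Build a client context that refuses anything older than TLS 1.3.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Wrapping a socket with this context and connecting to a 1.3-capable
# server would report "TLSv1.3" from ssock.version(); servers stuck on
# older versions fail the handshake outright.  (Network probing omitted
# here; the context configuration is the point.)
print("Minimum version enforced:", ctx.minimum_version)
```

The same `minimum_version` knob, set on server contexts across an estate, is one way to stage a gradual migration rather than a flag-day cutover.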

Why stronger encryption now?

There are different schools of thought.

The more philosophical one argues that just because something is safe today doesn’t mean it will be tomorrow; that the abuse of data privacy has spread too far; and that the new standard will better protect privacy.

The operational one worries that if privacy protection is taken too far, it could create implementation and operational issues that introduce costs and jeopardize business. The implication is that if you’re inline everywhere, your systems become less resilient. And if you need to be in the middle of every single connection—as opposed to inspecting a copy of the traffic out of band, without disrupting production traffic—you will also be adding latency, efficiency risk, and overhead expense.

The first camp is thinking about how to make the world a better, more secure, and private place; the latter needs to be sure they meet the needs of shareholders and maintain compliance with a wide variety of industry standards and regulations.

For instance, regulated industries like healthcare and financial services, which have to comply with HIPAA or PCI-DSS, may face certain challenges when moving to TLS 1.3 if they have controls that say, “None of this data will have X, Y, or Z in it” or “This data will never leave this confine and we can prove it by inspecting it.” In order to prove compliance with those controls, they have to look inside the SSL traffic. However, if their infrastructure can’t see inside the traffic, or isn’t positioned inline everywhere that PCI-DSS-scoped data flows, they can’t show that their controls are working. And if they’re out of compliance, they might also be out of business.

For other, perhaps newer companies, this might not be a huge concern. But for institutions contending with hefty investments in legacy infrastructure, the change could be difficult to manage. It also raises the question: Is it reasonable to introduce new costs and potential reliability risks in the name of better security? Security usually takes a back seat to operational efficiency until a compromise occurs. Then the lament is, “We could have avoided this.”

Preparing for TLS 1.3

The good news is that adoption of TLS 1.3 and the deprecation of older standards is several years away. Nonetheless, short-term preparations for TLS 1.3 can be made, but will differ depending on business type and risk tolerance. In some environments, there are tight controls that must use decryption while, in others, the issue is less pressing.

For those who have been on the fence about what they want to decrypt, this could be a forcing function as there is so much complexity and so many different places where data resides. This is a new opportunity to begin looking for threats under the lens of, “If I have to pick certain parts of communications to decrypt, what would they be?” And much of this will be driven by regulations, brand reputation, or data volume.

You can always do business. It’s a matter of, at what cost? Can you put something inline that will terminate the connections and not add latency or risk to your operations? Or, do you have to put termination inline in hundreds of places and hire new people to make sure it all works? Or, could technology come along that enables you to do this in a way that still maintains some level of privacy and security while also allowing you to meet required controls?

What’s metadata got to do with it?

Gigamon’s GigaSECURE® Security Delivery Platform produces metadata on what it sees on the network that can be used to make decisions about whether something is good or bad. For instance, if you’re not inline or unable to decrypt, but still need to understand whether there is malware on your network or if people are doing things they shouldn’t be, you can use metadata to figure things out. You don’t need to see inside the private phases of conversations. You can, instead, get close enough with traffic metadata to make an approximation.
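As a hedged illustration of metadata-driven triage (the field names, thresholds, and rules below are hypothetical, not Gigamon’s), even without decrypting payloads, flow metadata such as destination port, outbound byte counts, and connection duration can drive a first-pass good-versus-bad approximation:

```python
# Hypothetical flow records: metadata only, no decrypted payload.
flows = [
    {"dst": "10.0.0.5",    "port": 443,  "bytes_out": 4_200,     "duration_s": 3},
    {"dst": "203.0.113.9", "port": 443,  "bytes_out": 9_800_000, "duration_s": 14_000},
    {"dst": "10.0.0.7",    "port": 6667, "bytes_out": 1_200,     "duration_s": 52_000},
]

def suspicious(flow):
    """Crude triage rules on metadata alone (thresholds are illustrative):
    large outbound transfers, very long-lived sessions, unusual ports."""
    reasons = []
    if flow["bytes_out"] > 1_000_000:
        reasons.append("large outbound transfer")
    if flow["duration_s"] > 36_000:
        reasons.append("long-lived connection")
    if flow["port"] not in (80, 443):
        reasons.append("unusual port")
    return reasons

for f in flows:
    hits = suspicious(f)
    if hits:
        print(f["dst"], "->", ", ".join(hits))
```

None of these rules requires seeing inside the encrypted session; they approximate intent from the shape of the traffic, which is exactly the fallback that matters once decryption is off the table.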

Most systems are overwhelmed with data that isn’t meaningful to the particular problem they are trying to solve. Typically, organizations look at traffic as it leaves or enters the network and far less often at what is internal to the company (e.g., traffic between users, between a user and a file server, or between a user and some application). Network boundaries are disintegrating, and there really is no longer a clean internal-versus-external split.

Looking at east-west traffic is important and can be indicative of good behavior or a malicious attack. With TLS 1.3, the potential risk is that as east-west traffic becomes more and more difficult to decrypt, organizations will stop looking at it—and that could prove risky.

If you’d like to learn more, join us on Thursday, October 27, at the NYC Cybersecurity Summit. I will be moderating a panel discussion with cybersecurity experts from DTCC, UCI Cybersecurity Research Institute, Area 1 Security, and The New York Times that covers the privacy and TLS 1.3 issues. And Kevin Mitnick, the world’s most famous hacker, will be delivering a compelling keynote guaranteed to generate further debate.

More information can also be found at the IETF.
