TLS is DEAD! Long Live TLS!
As most of you are probably aware, TLS 1.3 (draft 28) was recently accepted by the Internet Engineering Task Force (IETF) as an official standard.
What ramifications will TLS 1.3 have on tried and true network operations like URL filtering or passively load balancing to specific servers based on hostname?
Curious about the new standard and its repercussions, I downloaded the latest version of Firefox, which has built-in, default-enabled support for TLS 1.3 draft 28, and found the website from the fine folks at Mozilla that runs only TLS 1.3 draft 28 and can be used for testing: https://tls13.crypto.mozilla.org/
Interestingly enough, I could not get the latest version of Chrome (July 2018) to work with this website, even after manually going into chrome://flags/ and turning on TLS 1.3. My version of Chrome is 68.0.3440.75 (Official Build) (64-bit), and its TLS flag shows TLS 1.3 draft 23, but I’m sure Chrome will release an update with draft 28 compatibility soon.
Also, I grabbed the latest version of Wireshark, 2.6.2, which has dissectors for TLS 1.3 draft 28 (older versions will flag these packets as TLS 1.2).
So, we are all set to capture some packets and “See what we can see”.
The SNI Data is In Clear Text
As you can see, the SNI Extension is still in clear text in the Client Hello packet.
So, it is still possible for Server Name Indication (SNI) based traffic steering to work without SSL offloading.
A layer 7 rule on a load balancer or network device can read the “Server Name” field in the CLIENT HELLO SNI extension and, based on the hostname presented, steer traffic to a specific server, without SSL offloading or proxying the TLS 1.3 connection.
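To make the claim concrete, here is a minimal sketch of how a passive device could pull the SNI hostname out of a raw ClientHello record. The `build_client_hello` helper is my own construction for testing, not a valid handshake a real server would accept; the offsets follow the ClientHello layout in the TLS specification.

```python
import struct
from typing import Optional

def build_client_hello(hostname: str) -> bytes:
    """Build a minimal, synthetic ClientHello carrying an SNI extension
    (illustrative only -- not a complete, valid handshake message)."""
    name = hostname.encode("ascii")
    sni_entry = b"\x00" + struct.pack("!H", len(name)) + name        # type 0 = host_name
    sni_list = struct.pack("!H", len(sni_entry)) + sni_entry
    sni_ext = struct.pack("!HH", 0x0000, len(sni_list)) + sni_list   # extension type 0 = server_name
    extensions = struct.pack("!H", len(sni_ext)) + sni_ext
    body = (
        b"\x03\x03"             # legacy_version (TLS 1.3 keeps 0x0303 here)
        + b"\x00" * 32          # random
        + b"\x00"               # empty session_id
        + b"\x00\x02\x13\x01"   # one cipher suite: TLS_AES_128_GCM_SHA256
        + b"\x01\x00"           # one compression method: null
        + extensions
    )
    handshake = b"\x01" + struct.pack("!I", len(body))[1:] + body    # handshake type 1 = ClientHello
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

def extract_sni(record: bytes) -> Optional[str]:
    """Passively read the server_name out of a raw ClientHello record."""
    if len(record) < 6 or record[0] != 0x16 or record[5] != 0x01:
        return None                                   # not a ClientHello
    pos = 9 + 2 + 32                                  # skip headers, version, random
    pos += 1 + record[pos]                            # session_id
    pos += 2 + struct.unpack("!H", record[pos:pos + 2])[0]  # cipher_suites
    pos += 1 + record[pos]                            # compression_methods
    ext_end = pos + 2 + struct.unpack("!H", record[pos:pos + 2])[0]
    pos += 2
    while pos + 4 <= ext_end:
        etype, elen = struct.unpack("!HH", record[pos:pos + 4])
        pos += 4
        if etype == 0x0000:                           # server_name extension
            nlen = struct.unpack("!H", record[pos + 3:pos + 5])[0]
            return record[pos + 5:pos + 5 + nlen].decode("ascii")
        pos += elen
    return None
```

A steering rule would then just map the returned hostname to a server pool; the connection itself is never decrypted.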
Now, as to the completeness of URL filtering, that is another story, but SNI-based filtering could certainly work by passively detecting the hostname being requested in the ClientHello from a man-in-the-middle position.
The problem with SNI and URL filtering (as I see it):
With TLS 1.3, the certificate presented by the server is encrypted and therefore not passively viewable or verifiable by a filtering system. This is a change from TLS 1.2, where anyone could see the server’s certificate, sent in clear text in the server’s handshake response.
So, with TLS 1.3, a URL filter or security appliance has NO way to see the certificate the server presents to the client, and therefore no way to verify the authenticity of the server’s identity. We are left to “trust the end user to make good decisions”.
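A quick sketch of why the passive view changes: in TLS 1.2, the Certificate message travels as a plaintext handshake record (content type 0x16, handshake type 0x0B), while in TLS 1.3 everything after ServerHello is encrypted inside records whose outer content type is 0x17 (application_data). The sample byte strings below are hand-built for illustration, not real captures.

```python
def visible_certificate(records: list) -> bool:
    """Return True if any raw record carries a plaintext Certificate message
    (content type 0x16 handshake, handshake type 0x0B = Certificate)."""
    for rec in records:
        if len(rec) > 5 and rec[0] == 0x16 and rec[5] == 0x0B:
            return True
    return False

# TLS 1.2 style: Certificate message in the clear (header + handshake type).
tls12_cert = bytes([0x16, 0x03, 0x03, 0x00, 0x04, 0x0B, 0x00, 0x00, 0x00])

# TLS 1.3 style: same message, but wrapped in an encrypted record whose
# outer content type is application_data -- the payload is opaque.
tls13_enc = bytes([0x17, 0x03, 0x03, 0x00, 0x04]) + b"\xde\xad\xbe\xef"
```

Run the same check over both and the TLS 1.2 capture exposes the certificate while the TLS 1.3 capture exposes nothing, which is exactly the passive filter’s problem.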
Now, imagine if I hacked your workstation and modified your hosts file, or somehow changed the DNS response for www.mybank.com, to point to a phishing site at 18.104.22.168 that I set up to harvest customer credentials.
Yes, the client would get a security warning, but how many users just click past that? (Trust the user?) A lot, I think.
Side note: if I have enough access to modify the victim’s hosts file, I should have enough access to get them to trust my fake CA, but I digress.
The client hello would still show www.mybank.com in the SNI field and would therefore be allowed by a URL filter that is passively filtering on SNI alone.
What is a “Good” URL Filter?
I think that a good URL filter should:
- Intercept the server’s response
- Examine the certificate returned by the server
- Check the SNI in the client hello against the server’s returned certificate
- Verify that the returned certificate is valid and signed by a trusted, authoritative CA, not a forged, self-signed certificate
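The checklist above can be sketched as logic over an already-parsed certificate. The dictionary shape, the trusted-CA names, and `filter_verdict` are all assumptions for illustration; a real appliance would run a full X.509 chain validator (and handle wildcards, expiry, revocation, and so on).

```python
# Example trust anchors -- placeholder names, not a real trust store.
TRUSTED_CAS = {"DigiCert Global Root CA", "ISRG Root X1"}

def filter_verdict(sni: str, cert: dict) -> str:
    """Apply the 'good URL filter' checks to a parsed certificate.
    cert is an assumed shape: subject, subject_alt_names, issuer, root_ca."""
    # 1. The SNI hostname must appear among the names the cert covers.
    if sni not in cert.get("subject_alt_names", []):
        return "block: SNI/certificate mismatch"
    # 2. Reject self-signed certs (issuer == subject).
    if cert.get("issuer") == cert.get("subject"):
        return "block: self-signed certificate"
    # 3. The chain must terminate at a trusted, authoritative CA.
    if cert.get("root_ca") not in TRUSTED_CAS:
        return "block: untrusted issuer"
    return "allow"

good_cert = {
    "subject": "www.mybank.com",
    "subject_alt_names": ["www.mybank.com", "mybank.com"],
    "issuer": "DigiCert TLS RSA SHA256 2020 CA1",
    "root_ca": "DigiCert Global Root CA",
}
forged_cert = {
    "subject": "www.mybank.com",
    "subject_alt_names": ["www.mybank.com"],
    "issuer": "www.mybank.com",   # self-signed
    "root_ca": None,
}
```

The point of the sketch is that every one of these checks needs the certificate in hand, which, under TLS 1.3, means the filter must be in the connection, not beside it.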
So, we might see security vendors offer filtering on the client SNI record as a feature in their products, but we should not rely on it for any true or “real” security.
For true URL filtering, I believe that full TLS 1.3 termination or proxying will be the only way to securely service TLS 1.3 from a filtering/security perspective.