I agree with most of this. If every website followed these, the web would be heaven (again)...
But why this one?
>I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
What is wrong with redirecting 80 to 443 in today's world?
Security wise, I know that something innocuous like a personal blog is not very sensitive, so encrypting that traffic is not that important. But as a matter of security policy, why not just encrypt everything? Once upon a time you might have cared about the extra CPU load from TLS, but nowadays it seems trivial. Encrypting everything arguably helps protect the secure stuff too, as it widens the attacker's search space.
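For what it's worth, the redirect being discussed is a one-liner in most servers; a minimal sketch for nginx (hostname is hypothetical):

```nginx
# Minimal sketch: answer on port 80 only to send clients to HTTPS.
server {
    listen 80;
    listen [::]:80;
    server_name example.com;   # hypothetical hostname
    return 301 https://$host$request_uri;
}
```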
These days, browsers are moving towards treating HTTP as a bug and throwing up annoying propaganda warnings about it. Just redirecting seems like the less annoying option.
Some old-enough browsers don't support SSL. At all.
Also, something I often see non-technical people fall victim to is that if your clock is off, the entirety of the secure web is inaccessible to you. Why should a blog (as opposed to say online banking) break for this reason?
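The clock problem is mechanical: certificate validity is an absolute time window, and verifiers compare it against the local clock. A toy sketch (the dates are hypothetical, standing in for a cert's notBefore/notAfter fields):

```python
# Why a wrong clock breaks TLS: validation compares the certificate's
# absolute validity window against whatever the local clock says.
from datetime import datetime, timezone

not_before = datetime(2024, 1, 1, tzinfo=timezone.utc)  # hypothetical cert
not_after = datetime(2025, 1, 1, tzinfo=timezone.utc)

def clock_ok(local_now):
    """True if a verifier with this local clock would accept the cert."""
    return not_before <= local_now <= not_after

real_now = datetime(2024, 6, 1, tzinfo=timezone.utc)
dead_cmos = datetime(2001, 1, 1, tzinfo=timezone.utc)  # CMOS battery died

print(clock_ok(real_now))   # True
print(clock_ok(dead_cmos))  # False: every HTTPS site looks invalid
```

With a dead CMOS battery the machine boots into 2001, every certificate appears "not yet valid", and the entire secure web fails identically.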
Even older browsers that support SSL often lack up-to-date root certificates, which prevents them from establishing trust with modern SSL/TLS certificates.
Fairly recently I attempted to get an (FPGA-emulated) Amiga, a G4 Power Macintosh running System 9.2, and a Win2000sp4 Virtual Machine online (just for very select downloads of trusted applications, not for actual browsing). It came as a huge surprise to find that the Win2K VM was the biggest problem of the three.
So? If they still power on and are capable of talking HTTP over a network, and you don't require the transfer of data that needs to be secured, why shouldn't you "let" them online?
Usually it's browsers on hobbyist legacy operating systems, to which modern browsers haven't been or can't be ported, never mind keeping root certificates up to date. Or even if they do support SSL, it's only older algorithms and older versions of the protocol. It's nice to still be able to browse at least part of the web with those.
The problem usually isn't SSL support itself; the problem is that older SSL and TLS versions are being disabled.
I actually have an example myself - an iPad 3. Apple didn't allow anyone other than themselves to provide a web browser engine, and at some point they deliberately stopped updates. This site used to work, until some months ago. I currently use it for e-books; if that weren't the case, I think by now it would essentially be software-bricked.
I acknowledge that owning older Apple hardware is dumb. I didn't pay for it, though.
When you force TLS/HTTPS, you are committing both yourself (the server) and the reader (the client) to a perpetual treadmill of upgrades (a.k.a. churn). This isn't a value judgement, it is a fact; a positive statement, not a normative one. Roughly speaking, the server and client software need to be within, say, 5 years of each other, maybe 10 years at maximum, or else they are not compatible.
For both sides, you need to continually agree on root certificates (think of how the ISRG had to gradually introduce itself to the world - first through cross-signing, then as a root), protocol versions (e.g. TLSv1.3), and cipher suites.
For the server operator specifically, you need to find a certificate authority that works for you and then continually issue new certificates before the old one expires. You might need to deal with ordering a revocation in rare cases.
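That renewal treadmill boils down to periodically checking how close the current cert is to expiry. A sketch using the openssl CLI (the self-signed cert here is a stand-in for a real CA-issued one):

```shell
# Generate a 30-day self-signed cert as a stand-in for a CA-issued one.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -days 30 -subj "/CN=demo.example" 2>/dev/null

# -checkend N exits 0 if the cert is still valid N seconds from now;
# this is the kind of check a renewal cron job performs.
if openssl x509 -in /tmp/demo-cert.pem -checkend 604800; then
  echo "cert still valid next week"
else
  echo "renew now"
fi
```

In practice tools like certbot automate exactly this loop, but the underlying check is the same.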
I can think of a few reasons for supporting unsecured HTTP: People using old browsers on old computers/phones (say Android 4 from 10 years ago), extremely outdated computers that might be controlling industrial equipment with long upgrade cycles, simple HTTP implementations for hobbyists and people looking to reimplement systems from scratch.
I haven't formed a strong opinion on whether HTTPS-only is the way to go or dual HTTP/HTTPS is an acceptable practice, so I don't really make recommendations on what other people should do.
For my own work, I use HTTPS only because exposing my services to needless vulnerabilities is dumb. But I understand if other people have other considerations and weightings.
Except it's not actually true. https://www.ssllabs.com/ssltest/clients.html highlights that many clients support standard SSL features without having to update to fix bugs. How much SSL you choose to allow and what configurations is between you and your... I dunno, PCI-DSS auditor or something.
I'm not saying SSL isn't complicated, it absolutely is. And building on top of it for newer HTTP standards has its pros and cons. Arguably though, a "simple" checkbox is all you would need to support multiple types of SSL with a CDN. Picking how much security you need is then left as an exercise for the reader.
... that said, is weak SSL better than "no SSL"? The lock icon appearing on older clients that aren't up to date is misleading, but then many older clients didn't mark non-SSL pages as insecure either, so there are tradeoffs either way. But enabling SSL by default doesn't have to exclude clients necessarily. As long as they can set the time correctly on the client, of course.
I've intentionally not mentioned expiring root CAs, as that's definitely an inherent problem to the design of SSL and requires system or browser patching to fix. Likewise https://github.com/cabforum/servercert/pull/553 highlights that some browsers are very much encouraging frequent expiry and renewal of SSL certificates, but that's a system administration problem, not technically a client or server version problem.
As an end user who tries to stay up to date, I've just downloaded recent copies of Firefox on older devices to get an updated list of SSL certificates.
My problem with older devices tends to be poor compatibility with IPv6 (an add-on in XP SP2/SP3, not enabled by default), and that web developers tend to use very modern CSS and web graphics that aren't supported on legacy clients. On top of that, you've got HTML5 form elements, the question of what displays when responsive layouts aren't available (how big is the font?), etc.
Don't get me wrong, I love the idea of backwards compatibility, but it's a lot more work for website authors to test pages in older or obscure browsers and fix the issues they see. Likewise, with SSL you can test on a legacy system to see how it works, or run Qualys' SSL checker, for example. Browsers maintain backwards compatibility but only to a point (see ActiveX, Flash in some contexts, Java in many places, the <blink> tag, framesets, etc.)
So ultimately compatibility is a choice authors make based on how much time they put into testing for it. It is not a given, even if you use a subset of features. Try using Unicode on an early browser, for example. I still remember the rails snowman trick to get IE to behave correctly.
People fork TLS libraries and make changes that are supposed to be transparent (well, they should be), and suddenly they don't have compatibility anymore. Any table with the actually relevant data would be huge.
One imagines though that with enough clients connecting to your site you’ll end up seeing every type of incompatible client eventually.
The point I was trying to make is that removing SSL doesn’t make your site compatible and the number of incompatible clients is small compared to the number of compatible ones.
Compatibility alone is arguably not a reason to avoid SSL. The list of incompatibilities doesn't stop at SSL; there's still DNS, IPv6 and so on.
SSL is usually compatible for most people - enough that it has basically become the de facto default for the web at large. Though there are still issues: CMOS batteries dying and leaving clients with a bad clock is the one that comes to mind first, and certificate chain issues too. SSL is complex, no doubt, especially for a server-side implementation to remain compatible client-side. That's why tools like Qualys' exist in the first place!
Imagine if every single device or machine in your life had to be designed within 5 years of every other one, or they wouldn't work together.
We would be perpetually rebuilding just to have a home we could actually use, and forget about anything on the timescale of fruit orchards or timber forestry.
There's something deeply broken about computers. And that's from someone deeply in the camp of "yes, everybody must use TLS on the web".
> There's something deeply broken about computers.
It's just not that mysterious: if we want our communications to be secure (we do), then we can't reasonably use ciphers that have been broken, since any adversary can insert themselves in the middle and negotiate both sides down to their most insecure common denominator, if they allow it.
What about governments? In my country they perform MITM attacks against unencrypted HTTP, while the best they can do with HTTPS is to block the site. I'd much prefer everyone enforcing HTTPS at all times.
Those are some of the most pedantic grasping at straws reasons I've ever read. It's like they know there's nothing wrong with http so they've had to invent worst case nightmare scenarios to make their "It's so important" reasons stick.
Https is great. I use it. That website is pathetic though.
And so what if my webpage about an obscure 1994 Australian rock band gets a few ads injected into it?
Everything else in my life gets ads injected into it (TV, Music, Movies)
Such a silly argument.
this is the statement of someone who wasn't around in 2013 when the Snowden leaks happened and Google's datacenters got owned. everyone switched to https shortly thereafter
Both Chrome and Firefox will get you to the HTTPS website even though the link starts with "http://", and it works, what more do you want?
You have to type "http://" explicitly, or use something that is not a typical browser, to get the unencrypted HTTP version. And if that's what you are doing, that's probably what you want. There are plenty of reasons why, some you may not agree with, but the important part is that the website doesn't try to force you.
That's the entire point of this article: users and their browsers know what they are doing, so just give them what they ask for, no more, no less.
I also have a personal opinion that SSL/TLS played a significant part in "what's wrong with the internet today". Essentially, it is the cornerstone of the commercial web, and the commercial web, as much as we love to criticize it, brought a lot of great things. But also a few not so great ones, and for a non-commercial website like this one, I think having the option of accessing it the old (unencrypted) way is a nice thing.
I understand the thinking, backwards compatibility of course, and why encrypt something that is already freely available? But this means I can setup a public wifi that hijacks the website and displays whatever I want instead.
TLS is about securing your identity online.
I think with AI forgeries we will move more towards each person online having a secure identity, starting with well-known personas and content creators.
HTTP/2 doesn't matter in this case, there are only 4 files to transfer. The webpage itself (html), then the style sheet (css), then the feed icon and favicon. You can do with only the html, the css makes it look better, and the other two are not very important.
It means that HTTP/2 will likely degrade performance because of the TLS handshake, and you won't benefit from multiplexing because there is not much to load in parallel. The small improvement in header size won't make up for what TLS adds. And this is just about network latency and bandwidth. HTTP/2 takes a lot more CPU and RAM than plain HTTP/1.1. Same thing for HTTP/3.
Anyways, it matters even less here because this website isn't lacking SSL/TLS, it just doesn't force you to use it.
I have pings in excess of 300 ms to her site. TCP connections need a lot of time to "warm up" before speeds become acceptable. It's easy to say things like "http2 does not matter" when you're single digit milliseconds away from all major datacenters.
HTTP/2 matters on bloated websites with tons of external resources, it is not the case here. HTTP/2 will not get you the first HTML page faster and this is the only thing needed here to start showing you something.
In terms of round trips, HTTP/1.1 without TLS will do one less than HTTP/2 with TLS, and as much as HTTP/3 with TLS.
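Using the ~300 ms ping mentioned upthread, the round-trip arithmetic works out roughly like this (a back-of-envelope sketch that ignores TCP slow start and assumes TLS 1.3's one-round-trip handshake):

```python
RTT = 0.3  # seconds; ~300 ms ping, as mentioned upthread

# Round trips before the first HTML byte arrives (rough model):
plain_http11 = 2 * RTT  # TCP handshake + request/response
https_http2 = 3 * RTT   # TCP handshake + TLS 1.3 handshake + request/response
https_http3 = 2 * RTT   # QUIC merges the transport and TLS handshakes

print(f"HTTP/1.1 plain: {plain_http11 * 1000:.0f} ms")
print(f"HTTP/2 + TLS:   {https_http2 * 1000:.0f} ms")
print(f"HTTP/3 + TLS:   {https_http3 * 1000:.0f} ms")
```

At 300 ms per round trip, that one extra handshake is a very noticeable 300 ms added before anything renders.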
My first impulse is to scream obscenities at you, because I've seen this argument repeated so many times that I tend to just keep quiet. I don't think it's that you can't understand; I think you refuse to.
You're basically saying "oh, _YOUR_ usecase is wrong, so let's take this away from everybody because it's dangerous sometimes"
But yeah, I have many machines which would work just fine online except they can't talk to the servers anymore due to the newer algorithms being unavailable for the latest versions of their browsers (which DO support img tags, gifs and even pngs)