I really wanted to like OAuth but implementing it is a nightmare!
When can we just have client side certificates? That would be a great way to deal with most of the problems that emailing a "magic login link" (or just normal email based accounts) doesn't solve.
> I really wanted to like OAuth but implementing it is a nightmare!
Not only is the spec itself challenging, it leaves enough ambiguity and rough edges that most providers end up extending it in some way that makes it hard to standardize. Most commonly: how to get refresh tokens (an `offline_access` scope? an `access_type=offline` parameter?), and how/when they expire (as soon as you get a new one? as soon as you've received 10 new ones? on a set schedule?).
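To make that concrete, here's the kind of per-provider quirk table you end up encoding somewhere. The parameter names reflect common provider behavior (Google's `access_type=offline`, Microsoft's `offline_access` scope), but treat the details as illustrative, not an authoritative mapping:

```python
# Illustrative sketch: per-provider OAuth refresh-token quirks.
# The expiry notes are paraphrases of provider behavior, not exact policy.
PROVIDER_QUIRKS = {
    "google": {
        "refresh_token_hint": {"access_type": "offline", "prompt": "consent"},
        "refresh_token_expiry": "old tokens revoked after enough new grants",
    },
    "microsoft": {
        "refresh_token_hint": {"scope": "offline_access"},
        "refresh_token_expiry": "rolling window, revocable by tenant policy",
    },
}

def authorize_params(provider: str, base_params: dict) -> dict:
    """Merge a provider's refresh-token quirk into the authorize request."""
    quirks = PROVIDER_QUIRKS.get(provider, {})
    return {**base_params, **quirks.get("refresh_token_hint", {})}
```

None of this is in the OAuth spec itself, which is exactly the standardization problem.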
And that's not to mention how OAuth gets extended to handle organization-wide access. Anyone that's dealt with GSuite/Workspace Service Accounts or Microsoft Graph Application permissions knows what a pain that is.
This is exactly why I built [Xkit](https://xkit.co), which abstracts away the complexity of dealing with all the different providers and gives you a single API to retrieve access tokens (or API keys) that are always valid. Not everyone should have to be an OAuth expert to build a 3rd party integration.
Have you set up client side certs? I'd love to hear your experience if so.
BTW, I'd defer implementing OAuth to a library or a specialized piece of software (full disclosure: I work for a company providing this). There are a number of options out there, paid and open source.
Interesting! Why does the distinction of a country matter here? I mean - why would using client side certs be something a country as a whole uses, as opposed to a certain type of company or something? Does it have to do with some sort of national firewalls or anti-encryption laws?
It has to do with widespread deployment and a central trust authority - one that attests that the specific citizen holds that specific citizen's cert. Service providers don't have to deal with the massive pain that is identity verification; there's no cumbersome process like faxing someone a gas bill to prove their identity.
In my opinion, client certificates are great: you can let existing crypto infrastructure deal with the problem of "who is this user?".
The biggest problem is around revocation. You need to have some central revocation list and make sure that all of the users of your PKI are keeping that list up-to-date in production, which can be difficult if you do not plan for that from the start.
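The operational risk is easy to model. In a real PKI this would be a CRL or an OCSP responder, but the shape of the problem is just this (names illustrative):

```python
# Minimal sketch of the revocation problem: every verifier must consult an
# up-to-date revocation list, here modeled as an in-memory set of serial
# numbers. Keeping every deployment current with the list is the hard part.
revoked_serials: set[int] = set()

def revoke(serial: int) -> None:
    revoked_serials.add(serial)

def is_cert_acceptable(serial: int) -> bool:
    # A stale copy of revoked_serials silently re-admits revoked certs,
    # which is exactly the operational risk described above.
    return serial not in revoked_serials
```

Any verifier holding a stale copy of that set keeps accepting a cert you thought you'd killed.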
Not sure if you're referring to a particular spec or something, but we used client certificates as a 2nd factor to control access to an extranet web app, almost 15 years ago, long before OAuth, and when 2FA was only just beginning to come into existence.
From a security standpoint, it's pretty great. But the reality of generating keys and signing and distributing certificates was horrible, and our users were confused and hated it.
How would you solve key generation even now - assuming the client generates the key, is it locked to that browser on that machine? How do you generate a CSR (certificate signing request)? How do you send the signed certificate to the user? How does the user install the certificate? Again, does that mean the user can only access your app from the machine they installed the certificate to?
PKI is hard, mainly because of the distribution problem.
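The enrollment questions above (generate a key, build a CSR, get it signed, install the result) can be sketched as a toy pipeline. Real X.509 signing is replaced by an HMAC with a CA secret purely to keep this runnable; every name here is hypothetical:

```python
import hashlib
import hmac
import secrets

CA_SECRET = secrets.token_bytes(32)  # stands in for the CA's private key

def make_keypair() -> dict:
    # Placeholder: a real client would generate an RSA/EC keypair in the
    # browser or OS keystore, which is where the per-machine lock-in comes from.
    priv = secrets.token_hex(16)
    return {"private": priv, "public": "pub:" + priv}

def sign_csr(public_key: str, subject: str) -> dict:
    # The "CA" issues a cert binding the subject to the public key.
    tag = hmac.new(CA_SECRET, f"{subject}|{public_key}".encode(), hashlib.sha256).hexdigest()
    return {"subject": subject, "public": public_key, "sig": tag}

def server_verifies(cert: dict) -> bool:
    expected = hmac.new(CA_SECRET, f"{cert['subject']}|{cert['public']}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cert["sig"])
```

The code is trivial; the distribution problem is every step between `sign_csr` and the user actually having the cert installed on each of their devices.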
I'm not sure exactly what client-side certificates means here, but I have long wondered why we can't just use public key/private key authentication for most logins. Is it the same?
Before Chrome, all popular web browsers had a user interface for installing client side HTTPS certificates for user authentication, and a very small number of websites supported it. After Chrome became popular, those sites were forced to switch to a different form of second factor authentication, and it's fallen almost completely out of use.
Part of the reason was that the user interfaces for installing certificates were terrible, and websites needed to have guides on how to use it in each browser.
Thanks. I'm still not clear what the authentication method is, but I don't see why we can't have a one-click browser button "give this site my public key" and another "authenticate to this site with my private key".
Who gives you the private key? Is it generated on the device? How do you move the keys to your different devices? I can end up working at any of 20 computers on a given day, not counting my personal devices.
Not sure what the best solution is, but here are some thoughts. First, you definitely want a different keypair per device.
One approach is to just supplement passwords. You could use a password (2FA, etc) to log in, then the site gives you the option of adding that device's public key and from then on you can log in on that device automatically. The site would maintain a list of public keys associated with your account, just like github does for repositories.
Of course, if you don't trust those 20 work computers, you wouldn't want it set up so that anyone using them can log in to all your accounts. One thing the browser could do is password-protect your private key, so you have to enter the master password when you start the browser, and as long as you remember to exit out of the browser, the next person to use it won't be able to use your logins.
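The "supplement passwords" idea above is mostly bookkeeping: an account keeps a list of device public keys (like GitHub's SSH key list), added only after a password/2FA login. Key generation and signature verification are out of scope in this sketch; names are illustrative:

```python
# Model of per-account device-key lists. A real server would issue a
# challenge and verify a signature with the stored key; here we only model
# the account-side bookkeeping.
accounts: dict[str, dict] = {}

def register(user: str, password_hash: str) -> None:
    accounts[user] = {"password_hash": password_hash, "device_keys": []}

def add_device_key(user: str, public_key_pem: str) -> None:
    # Call only from a session already authenticated by password + 2FA.
    accounts[user]["device_keys"].append(public_key_pem)

def device_has_key(user: str, public_key_pem: str) -> bool:
    return public_key_pem in accounts[user]["device_keys"]
```

The untrusted-computer case then reduces to never calling `add_device_key` from a machine you don't control, plus the browser-side master password protecting the private key.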
Last I checked it could only install from the filesystem, and not directly generate a key and install a certificate through the web. Do you have an example site where this works with Chrome?
Has the goalpost been moved? Upthread you compared Chrome to "all popular web browsers [which] had a user interface for installing client side HTTPS certificates for user authentication". I just opened up Firefox, and found a very similar menu to Chrome's: the only option was to "import" a cert from the filesystem. I agree that we should expect more from our tools, but has any popular browser ever allowed the user to generate a new cert? If one were to do so, how would the generated cert be connected to PKI -- who would sign the cert and how would they do that?
Yes, browsers other than Chrome can generate keys, submit the public key to a site you're logged into and install the certificate you get in response (usually after a second factor verification). I am not aware of any site that still does this, so I can't show it to you. Skandiabanken in Norway used to do it before Chrome.
You won't be able to see this in Firefox in any way other than visiting such a site.
Now I'm curious. This seems like a procedure that would need to be precisely defined. Is there a standard protocol for this? Does it have an RFC or similar I could read? If nothing else, it would be nice to have a short bumper-sticker "Chrome destroyed protocol X!" complaint.
I did some digging, and I believe this was implemented with the <keygen> element and the generateCRMFRequest and importUserCertificate JavaScript functions.
Thanks for the information. I don't remember ever learning anything about <keygen>. It looks as though most popular browsers (not IE; shocking!) supported it in the past, but most have now removed that support. [0] Perhaps there were some security or usability issues with this functionality? (Off the top of my head, if user certs are a single factor how do we ensure that desktops with more than one user don't install them?) ISTM the PKI world is moving to more short-lived, or even ephemeral, certificates. A complicated user-driven certificate generation process in the browser doesn't really fit that trend.
This is exactly how TLS client certificates work - except that instead of the server storing the public keys of clients, the clients present a cryptographic proof generated by the server/CA that they represent some identity (ie., a certificate).
They normally store the User Principal Name from the cert and use the public/private key as part of the handshake. Specifically, the client sends its certificate during the TLS handshake and proves possession of the corresponding private key by signing part of the handshake (the CertificateVerify message).
It doesn't necessarily need to store the public key, but it does need to store which certificate goes with which account. And the certificate is validated by checking that it's been issued by a CA the server trusts.
The server doesn't need to store the certificate, or even a mapping from certificate to identity. Just retrieving information encoded in the DN or SANs of a certificate presented by a client is enough to tie the connection originator/client to an identity. I mean, it's a design decision, whether you want to have a layer of indirection there - but keeping it without one allows TLS client certificates to be fully stateless, and be used across multiple backends that do not share any session/mapping store between them.
In addition, if I'm being picky, TLS 1.3 changes how client certificates are used, and they are now not part of the initial handshake.
You're familiar I'm sure with your browser authenticating that a TLS certificate is within its valid date range and assigned to the hostname of the server? You're probably also aware that your OS and/or browser have a list of certificate authorities one of which must have signed the certificate (or via a chain of CAs from a trusted root, with each CA cert signed by one closer to the root). Client certs work the same way except it's the server verifying all of this for the client (browser, mail user agent, whatever).
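As a concrete sketch of the server side of this, Python's stdlib `ssl` module can require exactly that verification of the client. The `load_*` calls are commented out because the file paths are hypothetical:

```python
import ssl

# Build a server-side context and require clients to present a valid cert
# chaining to a CA we trust - the mirror image of normal server-cert checks.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert

# With real files (paths are hypothetical) you would also:
#   ctx.load_cert_chain("server.pem", "server.key")       # the server's own cert
#   ctx.load_verify_locations(cafile="client-ca.pem")     # CAs whose client certs we accept
```

After the handshake, `conn.getpeercert()` exposes the client cert's subject and SANs, which is where the hostname/identity checks come from.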
At $work we have several systems in which the server only accepts requests, or only accepts certain kinds of requests, from clients with client certificates meeting specific restrictions. Depending on the application and its authN/authZ needs, the cert requirement might be combined with a username/password, a time-based token, a JWT, IP range restrictions, an API key, or whatever else - or sometimes the cert is sufficient by itself. Some systems just trust anything that was issued by the right CA and is within its valid date range. Sometimes we also verify that the certificate matches an assigned hostname of the client. Sometimes we trust certs from the right CA to connect, but parse the hostname out of the cert and check whether that client's hostname, or the subdomain it's in, has authorization to do certain things. Semantic hostnames might look long and confusing at first, but they can be used very easily for things like that. Semantic hostnames and naming schemes could be their own article.
This isn't a general use case for the general public because of deployment headaches. Which CAs do we trust? Are they the same ones issuing server certs? Will services be set up especially to issue client certs? Who's supporting the users to get the certs installed, many of whom enter the wrong email when signing up for services? We can do this in a corporate network pretty easily. We have automation systems for servers. We have another, different automation system for administration of laptop, desktop, and mobile client devices. We just put whatever cert we want everywhere we want it.
A big problem I see with client certs and the general public is multi-device users. If you're logging into Google from your home desktop, your work desktop, a couple of laptops, a tablet, and a phone, that's one email address but half a dozen different certificates. Some applications, especially cross-platform ones, insist on their own certificate store even if the OS provides a shared one. So for mail, a couple of browsers, and two other native apps, congratulations, that's maybe two dozen. One can export and import client certs, but there's no simple way to get less technical end users to do that. So do we make it easier to configure multiple client devices and all their applications with the same certificate and key? Are end users going to remember to update them all when one is lost in a breach or it expires? Or do we expect all the service providers to trust multiple certificates signed by multiple different CAs for each user, then have the user upload the public (but not the private!) part of each cert/key pair to all of those services to say they should be trusted? Or does every service require its own CA to sign your cert for its own service, so you need an Apple cert, a Google cert, an Amazon cert... ad nauseam?
Tools like Bitbucket or Gitlab let you upload your public SSH key in the web UI to provide auth for the git repos. You can also have (hopefully with separate keys) automated applications that interact with git auth against a repository or all the repos in a project. That's the sort of interface one might expect a web application to offer for TLS certificates. *
* A certificate is basically the public key portion of a public/private key pair that's been signed by some CA. Preferably that CA is a broadly trusted one, except in very particular circumstances.
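A Bitbucket/GitLab-style "upload your cert" endpoint is not much code. Here's a sketch that accepts a PEM-encoded certificate (the public half only), derives a SHA-256 fingerprint with the stdlib, and records it against the account; names are illustrative and there's no PEM validation beyond the base64 decode:

```python
import base64
import hashlib

def pem_fingerprint(pem: str) -> str:
    # Strip the -----BEGIN/END----- armor lines, decode the base64 body
    # (the DER bytes), and fingerprint it.
    body = "".join(
        line for line in pem.strip().splitlines()
        if not line.startswith("-----")
    )
    der = base64.b64decode(body)
    return hashlib.sha256(der).hexdigest()

trusted_cert_fingerprints: dict[str, set] = {}

def upload_cert(user: str, pem: str) -> str:
    fp = pem_fingerprint(pem)
    trusted_cert_fingerprints.setdefault(user, set()).add(fp)
    return fp
```

The hard part isn't this code; it's getting ordinary users to produce and manage the PEM in the first place.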
Thanks, this explanation and discussion is very helpful. It does confirm how I suspected things to work.
I have trouble understanding the need for the "signing" part of client-side certificates. Currently if I create an account at a website with a username/password, there's no need to get my account signed by a trusted third party. So why not let me create the account with a username/publickey instead? Why does a third party CA need to be involved?
And actually (as I mentioned in another post just now), one thought is to have keypairs supplement passwords rather than replace them. Basically when I move to a new device, I can still log in to the website with a password, and then the site will give me the option to add the device's public key so I can seamlessly log in automatically next time.
Ideally they would run some kind of CA and users would generate keys local to the device with some kind of alternate authentication when setting up the device initially.
Generating keys is cheap and the fact that your key could be tracked across services is a problem you'd want to avoid upfront. This is already an issue with things like SSH where you can finger-print devices when they present their public keys.
When signing up the user could be given the option to enable alternate authentication via FIDO2, Password, or Passwordless (email). Otherwise authenticating another device works by approving a new device from an existing one.
> Ideally they would run some kind of CA and users would generate keys local to the device with some kind of alternate authentication when setting up the device initially.
This presents a bit of a chicken and egg problem for how to secure the initial signup. Most services now use "control of email address" which has its own issues.
> Generating keys is cheap and the fact that your key could be tracked across services is a problem you'd want to avoid upfront. This is already an issue with things like SSH where you can finger-print devices when they present their public keys.
There are concerns about tracking, but that can be done without a private key. There are pros and cons to both single and multiple keypairs; one pro of a single keypair is that it identifies you as you. That's bad for tracking, but it's good for more trustworthy authentication. Ideally, for some uses you could as a person get a signed personal cert and it'd be as good as government ID.
> When signing up the user could be given the option to enable alternate authentication via FIDO2, Password, or Passwordless (email). Otherwise authenticating another device works by approving a new device from an existing one.
A good first step for reused or unique-per-service or even unique-per-service-and-device keypairs would be to allow a user authenticated by password or whatever to upload a public cert (possibly via web form) and enable access to the account (or portions of the account) going forward only to sessions initiated with that client cert and key.
S/MIME handles some issues of key propagation pretty simply, in that a sender, A, signing a message is sufficient for the recipient, B, to then send encrypted email back to A. The initial installation of the user's own S/MIME certificate is still more involved. However, web apps and other apps that use a web or web-like remote API might similarly offer an option to auto-trust, going forward, certs that are used to log in to the account by other means (like password and Yubikey).
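The auto-trust-then-require flow described above boils down to pinning: once a cert fingerprint has been seen alongside a strong login, record it and require it for future sessions. A sketch, with fingerprints as opaque strings and all names hypothetical:

```python
# user -> required cert fingerprint, set after a strong (password + key) login
pinned: dict[str, str] = {}

def on_strong_login(user: str, cert_fp: str) -> None:
    # Called after password + Yubikey auth succeeded over a connection that
    # also presented this client cert: auto-trust it going forward.
    pinned[user] = cert_fp

def session_allowed(user: str, cert_fp=None) -> bool:
    required = pinned.get(user)
    if required is None:
        return True  # nothing pinned yet; other auth methods still apply
    return cert_fp == required
```

The open design question is recovery: what happens when the pinned device is lost, which circles back to the revocation and multi-device problems upthread.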