A TPM holds a secret. It's useful for establishing identity, performing signing, etc. Think about the fact that an attacker on your system can take a session cookie, move to another computer, and connect as you. A TPM can prevent this by establishing a cryptographic, hardware backed identity.
This is a very significant win and critical to 'zero trust', which is now something the government is telling people to embrace.
The whole security panic is silly. They said the same about UEFI - and yet I run Linux on UEFI systems all the time. I also have systems with TPMs that run Linux. Works fine.
edit: People are downvoting you now, though I think your question is very reasonable and worth answering. Such is HN, though.
> They said it about UEFI - and yet I run Linux on UEFI systems all the time.
Not just any UEFI, UEFI with Secure Boot enabled specifically. On Secure Boot-enabled systems, you're booting a Microsoft-signed executable (the shim) that chainloads GRUB, which then loads Linux. This has given Microsoft a power they should not have: the power to refuse to sign future versions of that first executable.
As others have said, this is either FUD or limited to specific systems with broken UEFI implementations.
I think the whole "MS-signed Grub" is a way by Ubuntu (and possibly others) to facilitate Linux installation by "non pros". And it's not that bad, I think.
I, personally, use Arch Linux, and the installer isn't signed with anything. They have a "do your own signing" policy. I had to disable Secure Boot to install it, because I couldn't be bothered to create a new ISO, but then I enrolled my own keys in the UEFI firmware and boot my own signed kernel directly [0], without using GRUB at all.
I did all this on multiple generation HP "enterprise" laptops and desktops and it worked without any issue. And those systems don't come with any sort of Linux support from HP.
---
[0] For the curious, this is EFISTUB combined with sbsign, that bundles the kernel image, initrd and CPU microcode in a single EFI binary and signs it. This is then booted directly by the UEFI, and you can even register multiple binaries and have them show up in the UEFI boot selection menu.
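A minimal sketch of that flow, assuming systemd's EFI stub and sbsigntools are installed; every path, key name, and section offset below is illustrative and machine-specific, not something to run verbatim:

```shell
# Concatenate CPU microcode and initramfs (the kernel consumes them in order).
cat /boot/intel-ucode.img /boot/initramfs-linux.img > /tmp/initrd-combined.img

# Bundle kernel, cmdline, and combined initrd into one EFI binary
# using systemd's linuxx64.efi.stub as the carrier.
objcopy \
  --add-section .cmdline=/etc/kernel/cmdline --change-section-vma .cmdline=0x30000 \
  --add-section .linux=/boot/vmlinuz-linux   --change-section-vma .linux=0x2000000 \
  --add-section .initrd=/tmp/initrd-combined.img --change-section-vma .initrd=0x3000000 \
  /usr/lib/systemd/boot/efi/linuxx64.efi.stub /tmp/unified.efi

# Sign the result with your own enrolled db key so the UEFI will boot it.
sbsign --key db.key --cert db.crt --output /boot/EFI/Linux/arch.efi /tmp/unified.efi
```

The signed binary can then be registered with `efibootmgr` so it shows up in the firmware's boot menu directly.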
Uhm, but on my ThinkPad I have to add keys to load Linux in trusted mode (the installer asks me to choose a password to lock in these keys, in order to establish their legitimacy when rebooting).
Also Ventoy needs to install user-defined keys into UEFI to make it work with Secure Boot.
So my assumption is that - at least on ThinkPads - you can add your own user-defined keys; the only choke point (the "Caudine Forks") where a Microsoft-blessed loader might be involved is the one used by the distro installer.
When this was first discussed in the early 2000s, the very legitimate worry was that it would be introduced as optional and gradually become mandatory, terminating in machines where the user had as little control over the hardware and software as on a game console.
There is no reason to believe this hasn't been implemented thus far because of high ethical standards that Microsoft folks have historically adhered to, and while we are asked to believe Microsoft has turned over a new leaf, much of the current leadership was involved at the executive level in the old, bad Microsoft.
...So my session cookie gets tied to a crypto key that (for all intents and purposes of what I've gleaned from the marketing speak) is burned into the TPM at the factory and is unique?
I thought we were trying to get away from identifiable fingerprints? This seems like the most unique one you could get.
I'm trying to learn about it, but all Google wants to throw up these days is news articles on Win11's requirement of it, marketing non-speak about "securing the platform" (which, to a layman, does read similar to the other comments' cry of lock-in), or dense whitepapers on threat models and cryptographic math.
So, first off, the entire point of a session cookie is to uniquely identify you. Like, if you log into a website, it should be you, right?
Second, you're close but just a bit off. While the TPM has a secret internally, it wouldn't expose that - it would be used to generate other secrets, or to sign things. Basically, it's sort of like putting a crypto library into your device so that you can perform cryptographic operations without knowing the secrets involved. But yes, there's a seed that gets burned into the device itself.
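Conceptually, the idea is that usable keys are derived from an internal seed that never leaves the device. A toy sketch of that shape (this is NOT the actual TPM 2.0 key derivation, just an HKDF-style illustration with hypothetical names):

```python
import hashlib
import hmac

def derive_key(seed: bytes, label: bytes) -> bytes:
    # The seed stays "inside the device"; callers only ever see
    # derived keys, or signatures made with them.
    return hmac.new(seed, label, hashlib.sha256).digest()

# Hypothetical factory-provisioned secret; on real hardware this
# value is never readable by software at all.
seed = b"factory-provisioned secret"

storage_key = derive_key(seed, b"storage")
signing_key = derive_key(seed, b"signing")
assert storage_key != signing_key  # distinct purposes, distinct keys
```

The point of the shape: software asks the device to *do* things with keys, and the seed itself is never an input or output of any call the software can make.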
I use TPMs to ensure that no one can access our AWS environments from devices other than those we own, for example. I'm not really an expert on this sort of thing, though I do work in security professionally - hopefully I'm not butchering this. There are quite a few experts on the matter who comment on HN though, maybe one of them can speak more authoritatively on this.
edit: Actually, maybe I misread - you aren't off. The session cookie is tied to the TPM, either existing within it or requiring re-validation via the TPM.
I don't want most session cookies to know I'm the same person from a previous cookie. And even if they know I'm the same person, I'd prefer they still not know whether the device I just logged in on is the same as a previous device.
Separately from that, using the TPM to lock the session cookie to the device sounds useful.
I think we should differentiate a TPM backed cookie from, say, a tracking cookie. It isn't like an arbitrary website can track you with the TPM. TPMs aren't exposed to the web by any browser, at least to my knowledge - except through intermediary protocols provided by something like webauthn or an extension (Extended Verification, for example).
Again, maybe someone more knowledgeable can explain more - I'm a consumer of TPMs, and I understand the concept, but I'm no expert.
For a web exposed use case: TPMs are used as part of Windows's FIDO2 implementation, to make sure that the secret actually cannot be exfiltrated to other hardware.
That doesn't come with any particular privacy concerns however.
Right, I meant that there's no javascript API to communicate with a TPM directly as far as I know. You can still use a TPM as part of your auth to a website, it just has to go through a protocol where the browser handles the interaction. So a website can't leverage the TPM for tracking purposes afaik.
But web is really not so much my thing, so I'm now at the point where I'm probably going to start saying incorrect things.
Well you can't secure normal cookies against extraction with a TPM, because you have to extract them to use them.
So the question is how a particular protocol works, and whether a site can still identify you after you've deleted your session cookie. As far as I understand it, if you were wantonly using webauthn for all your session cookies, it would be relatively easy for a site to check if you're the same person as before.
You would have the TPM hold a secret that's used to repeatedly re-authenticate the session. Though today I don't know that this is often the supported workflow - for example, most of my TPM-based policies only enforce device-authentication on sign-in and not on every API call.
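The "repeatedly re-authenticate" idea above can be sketched as a challenge-response loop. This is a stdlib-only illustration with made-up names; a real TPM would sign with an asymmetric key it never reveals, whereas HMAC here needs a shared secret (which is exactly why real protocols like FIDO2 use asymmetric signatures):

```python
import hashlib
import hmac
import os

# Stand-in for a TPM-resident key; on real hardware this never
# leaves the chip, and only signing operations are exposed.
device_key = os.urandom(32)

def answer_challenge(challenge: bytes) -> bytes:
    # Proof of possession: only a party holding the device key
    # can produce this value for a fresh challenge.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

# Server issues a fresh random challenge per session (or per request,
# in the stricter policy described above).
challenge = os.urandom(16)
proof = answer_challenge(challenge)
```

Because the challenge is fresh each time, a stolen proof (unlike a stolen bearer cookie) is useless on its own: it answers only one challenge, and the key needed to answer the next one stays in the hardware.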
OK? So you log into a website with an email and password, and you're OK with moving the cookie to another computer and using it there, rather than having to re-prove that it's you? I don't think you know what a TPM does, honestly; there's a clear ignorance in this thread as to how this identity functions - it does not have privacy or tracking implications.
I don't necessarily care about moving a valid session to other hardware as much as I care about being able to create a valid session on any random hardware without having to trust any 3rd parties or rely on any 3rd party's permission.
TPM would be fine if 2 things were true:
1. Only the end user manages the secrets inside. Or at least they could, and without any penalty like "this device is insecure and so you can't use it for X". It's fine if there are default secrets and most people never change them. I'm not sure about this last bit, but I think ideally any factory-default secrets should also be erasable: no indelible unique serial number.
2. The interior implementation is fully open and disclosed: how the secrets are generated and managed. Not just a description and a promise, but an actual full set of public specs, such that any manufacturer could produce a functioning device - or even I could, out of other parts.
These things would allow the end user to benefit from the mechanism, while leaving the end user in ultimate, final control over their property, their identity, and their information. And it requires both: 1 is hot air without 2.
Any objections to this are automatically invalid, as they depend on something they have no right to.
"I need to know that your computer is not doing something I don't want." is just fundamentally invalid.
I think there might have been some disconnect on how we view the term "session cookie" (a cookie for a single browser session vs. one specifically for an authenticated user), but the rest of this is starting to make sense.
No. Every TPM has a factory-supplied unique key called the endorsement key, but every other key on the TPM is randomly generated on the TPM. The keys you actively use /can/ be tied back to the endorsement key, but there's no need for them to be - for most cases, any cryptographic material you generate on the TPM will be indistinguishable from material you generate on the CPU, other than to the local client.
It's like having a YubiKey soldered into your machine.
Each identity you want to protect has its own key (so it's not necessarily de-anonymizing), with physical access - PIN, biometric verification - required for use.
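The per-identity key model can be sketched in a few lines: each site gets its own independently generated credential, so two relying parties have nothing to link. Stdlib-only illustration with hypothetical site names; a real TPM or security key generates and stores these internally:

```python
import os

# Registry of per-site credentials; on real hardware the private keys
# would live inside the TPM/security key, not in a Python dict.
credentials: dict[str, bytes] = {}

def register(site: str) -> bytes:
    key = os.urandom(32)  # fresh key per identity, never reused across sites
    credentials[site] = key
    return key

bank_key = register("bank.example")
shop_key = register("shop.example")
assert bank_key != shop_key  # nothing ties one identity to another
```

This is why such a scheme isn't automatically a tracking supercookie: linkability across sites only appears if keys are deliberately tied back to a shared identifier.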
It's potentially very useful, although I 100% expect it to be backdoored by every geopolitical bloc.
The problem is that the TPM holds a secret that even the user cannot access despite being the owner.
> This is a very significant win and critical to 'zero trust', which is now something the government is telling people to embrace.
If the government is really saying the equivalent of "trust us, but not the people you usually trust", that should be a huge warning sign. The analogies to 1984 are disturbingly close.
Let's say you don't want users to view foobar.com, so you convince Microsoft to configure its browser not to resolve foobar.com.
No problem: the users install their own browser.
Now we ask Microsoft to limit what software it signs, effectively granting permission to run some software but not others. The alternative browser is right out.
The user installs his own OS. Now we ask Dell to only allow its computers to boot software that implements the same restrictions Microsoft has agreed to.
So the user gets his own computer. Now we simply ask your bank, or better yet your ISP, not to connect to "insecure" devices that don't follow the standards described.
Now you need a substantial unpatched vulnerability to be able to resolve foobar.com.
At its root: to make a user's own device effectively restrict what they are allowed to do, you need something difficult to defeat that you control and they do not.
At one point a US senator wanted this used to allow music labels to remotely destroy users' computers if they believed you were pirating music, and every other restriction described has been implemented or discussed for user devices belonging to users - most of them implemented.
You can access TPMs on Linux just fine so this entire "but then what if" chain kinda makes no sense. A TPM restricts nothing on its own, it's just a hardware crypto token.
I mean sure, if Microsoft decided that it wanted to do all of the things you said, and could convince all of the OEMs to help, and a bunch of services and ISPs or whatever, yes, that would suck. It's a totally fabricated hypothetical that would never play out, but sure.
Linux is "insecure", Society^TM supports Windows only, please install Windows to continue using Bank^TM, School^TM, and irs.gov. 'Legacy' tax paying is now unsupported, please use new and improved windows.irs.gov
When trusted computing was first discussed, everything but the "trusted" browser blocking a particular website was discussed as a feature, not a threat model, and you can absolutely see how many consumer devices are currently locked down in just such a fashion.
Accessing the TPM doesn't mean what you think it means. You can't, by design, get the key out of the TPM, nor necessarily control which keys are trusted by local software or remote machines.
Control of what software is allowed to run is ultimately control over everything else because one kind of control gives you every other variety. The fact that you can disable secure boot or control which keys are trusted is a nicety that could be withdrawn next year along with your ability to run Linux.
Your vendor approved software can then implement any sort of restrictions it pleases and remotely attest to the efficacy of such restrictions.
Yes sir, Mr. Amazon, sir: user is running trusted Windows 11 and can't save or share this video per your contract with Disney, nor are they allowed to view any known pirate sites, or sites that look like pirate sites according to our heuristics.
Alternatively. Please upgrade to a trusted windows 11 machine to view this content.
It's not a silly hypothetical: there have already been devices that used secure boot to lock out alternative OSes, and allowing third parties a say in what software is run is literally the entire point of remote attestation.
The last sentence is particularly disappointing. The article's analysis is correct and not particularly complicated; it stands on its own regardless of whatever bias you possess.
Let's not throw words like 'bias' around when linking to Stallman.
> allowing third parties a say in what software is run is literally the entire point of remote attestation
No it isn't.
We're clearly of two very different minds, there's no way we're finding common ground on this, and I've already done the work of explaining to many people what TPM technology is, so I feel like I've done my part.
There are quite a few computers already where Linux only runs as long as Microsoft provides a signed shim for the boot loader. There was quite a bit of a controversy because of this when secure boot came out.
Really? Which computers? I don't know of any. To my knowledge the entire secure boot thing was blown way out of proportion - Linux users still can install Linux, they can even sign their own bootloaders.
At least HP sold some where it seemingly intentionally disabled setup mode on consumer-grade hardware[1]. ARM-based Surface devices were locked down to Windows RT using secure boot[2]. The Windows RT case was basically the foundation of the outrage: not only did Microsoft claim it would not abuse its position as the only entity able to sign code that would run on nearly all secure-boot-enabled devices, it immediately released a secure-boot-enabled product that locked out Linux.
> And how is HP fucking their eufi up a Microsoft problem?
My claim was that there were computers where Linux only ran with Microsoft's signature. I listed two systems (HP/Surface) where this is the case - mainly because finding more is a pain, with support requests more likely to go off to nowhere instead of outright showing that no support for it exists.
One might make the case that Microsoft claimed it would require the ability to add alternative keys and conveniently failed to enforce that claim, if the goal were to blame Microsoft instead of just showing that secure boot can easily be used to lock users into one top-down-approved system.
Not to do the 2021 version of Godwining the discussion but this is like getting vaccinated and concluding covid was overblown.
There is absolutely at least some money to be made locking down machines, and corporate America would sell our kidneys for a quarter if they could get away with it.
There would likely have been lawsuits against Microsoft if they had taken further steps on that road but that doesn't mean they won't be interested in boiling us all slowly now or in the future.
It's a token with the ability to attest to others that you are running software signed with a key you don't control, making it the root of any effective set of controls.
Without it how do you keep someone from running whatever they please and lying to the machine on the other side of the connection?
You absolutely have to trust whoever manufactured the TPM module not to have backdoors.
You have to trust every manufacturer of every other chip the same way also, and we have in fact found incredible breaches of that trust already many times over, and so it's too late to suggest it's a crazy thing to worry about.
The only saving grace with other chips is that they are generic, and at least you usually have your choice of some range of suppliers.
Will I be able to buy a laptop with an intentionally defective or fake TPM if I want, which would allow me to use modern software without trusting anyone?
Probably not. Plenty of Chinese manufacturers would probably be willing to produce them, like all the HDCP-defeating HDMI switches, but software vendors will probably have some way to detect and invalidate them.
That would depend on future implementations. For example, should Microsoft one day decide to tie session authentication caching to TPM, then a backdoor in the TPM adds a universal chain for a state actor to log into the OS. Worried about state actors? No, but they hire contractors that I probably don't trust to keep this stuff secure. The OS could of course already have a back door, but reversing their code and finding it is higher risk than a closed access chip. I could envision a mandate to relocate the lawful intercept code from the OS to the hardware.
Another implementation could be used to tie DRM to hardware and then brick any hardware that is in license violation. Actually, I can think of many scenarios where a unique identifier on the hardware could be used/abused by corporations.
It should be noted that I lived through a time when corporations wanted this same setup on televisions. Each TV would have a unique chip that would give corporations control over what a person could watch, or even brick their TV. This was called the V-Chip. [0] And then of course, there was also the Clipper-Chip [1]. In terms of boiling frogs [2], TPM could be the base framework to slowly reintroduce the Clipper-Chip under a different name. i.e. used to boot-strap the pieces of code required to watch movies, buy things, browse the web, anything you can imagine.
The point is that the TPM is an essential element of a set of standards called trusted computing that includes secure boot and more.
It's not that secure boot or the TPM inherently keeps Linux from working. It's that it enables the OEM to decide what software is allowed to boot on the computer, and to enforce this.
Microsoft typically gives OEMs a substantial break and the OEMs in turn sell most SKUs to 99% windows users and make only a small margin on those devices. A small discount on the cost of windows could make devices disproportionately more profitable and MS might be apt to see a percentage in offering such a discount to help lock out the competition.
They have in fact over the years engaged in far more unethical behavior including investing tens of millions in a "partner's" fraudulent lawsuit/pump and dump scheme against various Linux vendors bankrolling an entire list of felonies.