Hacker Times | invokestatic's comments

The privacy points in general are valid, but what irritates me is using this rationale against kernel mode anti cheats specifically.

You do not need kernel access to make spyware that takes screenshots. You do not need a privileged service to read the user’s browser history.

You can do all of this, completely unprivileged on Windows. People always seem to conflate kernel access with privacy which is completely false. It would in fact be much harder to do any of these things from kernel mode.
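To make that concrete: here is a rough, self-contained sketch of how an unprivileged process could read Chromium-style browser history. It runs against a throwaway database built with a simplified version of Chromium's `urls` schema rather than a real profile, but the point stands: the real file lives in the user's own profile directory and needs no kernel or admin rights to open.

```python
import sqlite3, tempfile, os

# Chromium stores browsing history in an unprotected SQLite file inside the
# user's own profile directory, e.g. (path varies by install):
#   %LOCALAPPDATA%\Google\Chrome\User Data\Default\History
# Any process running as that user can read it -- no kernel, no admin rights.

def read_history(db_path):
    # Open read-only and dump (url, title) pairs from the 'urls' table.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    rows = conn.execute("SELECT url, title FROM urls").fetchall()
    conn.close()
    return rows

# Demo against a throwaway database with a simplified copy of Chromium's
# schema, so this sketch doesn't touch a real profile.
demo = os.path.join(tempfile.mkdtemp(), "History")
conn = sqlite3.connect(demo)
conn.execute("CREATE TABLE urls (id INTEGER PRIMARY KEY, url TEXT, title TEXT)")
conn.execute("INSERT INTO urls (url, title) VALUES ('https://example.com', 'Example')")
conn.commit()
conn.close()

print(read_history(demo))  # [('https://example.com', 'Example')]
```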


Kernel access is related to privacy though, and it's the most well-documented abuse of such things. Kernel-level access can help obfuscate the fact that it's happening. However, it is also useful for significantly worse, and given track records, that must be assumed to be true. The problem is kernel-level AC hasn't even solved the problem, so the entire thing is risky, unnecessary, and unfit for purpose, making it an entirely unnecessary risk to force onto unsuspecting users. The average user does not understand the risks and is not made aware of them either.

There are far better ways to detect cheating, such as calculating statistics on performance and behaviour and simply binning players with those of similar competency. This way, if cheating gives god-like behaviour, you play with other god-like folks. No banning required. Detecting the thing cheating allows is much easier than detecting the ways in which people gain that thing; it creates a single point of detection that is hard to avoid and can be done entirely server-side, with multiple tiers of how much server-side calculation a given player consumes. Milling around in bronze levels? Why check? If you aren't performing so well that you can leave low ranks, perhaps we need cheats as a handicap, unless you're consistently performing well out of distribution, at which point you catch smurfing as well.

Point is, focusing on detecting the thing people care about, rather than one of the myriad ways people may gain that unfair edge, is going to be easier and more robust while asking less egregious things of users.
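The binning idea above can be sketched in a few lines. This is purely illustrative (the metric, player names, and tier count are made up): rank players by mean per-match performance and split them into tiers, so god-like output lands in the top tier regardless of how it was achieved.

```python
import statistics

# Hedged sketch: server-side skill binning by outcome statistics alone.
# 'matches' maps player -> list of per-match scores (kills, objective points,
# whatever the game actually measures); all names here are illustrative.

def skill_bins(matches, n_bins=3):
    # Rank players by mean performance and split into equal-width tiers,
    # so god-like output (legitimate or cheated) lands in the top tier.
    means = {p: statistics.mean(s) for p, s in matches.items()}
    lo, hi = min(means.values()), max(means.values())
    width = (hi - lo) / n_bins or 1.0
    return {p: min(int((m - lo) / width), n_bins - 1) for p, m in means.items()}

matches = {
    "casual":  [3, 4, 2, 5],
    "decent":  [9, 11, 10, 8],
    "aimbot?": [38, 41, 40, 39],   # far out of distribution
}
print(skill_bins(matches))  # {'casual': 0, 'decent': 0, 'aimbot?': 2}
```

A real system would use per-mode distributions and flag sudden jumps in a player's own history, but the shape of the computation is the same: it sees only outcomes, entirely server-side.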


>This way, if cheating gives god-like behaviour, you play with other godlike folks.

Anti-cheat is not used to "protect" bronze level games. FACEIT uses a kernel level anti cheat, and FACEIT is primarily used by the top 1% of CS2 players.

A lot of the "just do something else" crowd neglects to realize that anticheat is designed to protect the integrity of the game at the highest levels of play. If the methods you described were adequate, the best players wouldn't willingly install FACEIT - they would just stick with VAC which is user-level.


> kernel level AC hasnt even solved the problem

> There are far better ways to detect cheating, such as calculating statistics on performance

Ask any CS player how VAC’s statistical approach compares to Valorant’s Vanguard and you will stop asserting such foolishness

The problem with what you are saying is that cheaters are extremely determined and skilled, and so the cheating itself falls on a spectrum, as do the success of various anticheat approaches. There is absolutely no doubt that cheating still occurs with kernel level anticheats, so you’re right it didn’t “solve” the problem in the strictest sense. But as a skilled player in both games, only one of them is meaningfully playable while trusting your opponents aren’t cheating - it’s well over an order of magnitude in difference of frequency.


There is no need for irritation. I condemn all sorts of anticheating software. As far as I'm concerned, if the player wants to cheat he's just exercising his god given rights as the owner of the machine. The computer is ours, we can damn well edit any of its memory if we really want to. Attempts to stop it from happening are unacceptable affronts to our freedom as users.

Simply put, the game companies want to own our machines and tell us what we can or can't do. That's offensive. The machine is ours and we make the rules.

I single out kernel level anticheats because they are trying to defeat the very mitigations we're putting in place to deal with the exact problems you mentioned. Can't isolate games inside a fancy VFIO setup if you have kernel anticheat taking issue with your hypervisor.


> As far as I'm concerned, if the player wants to cheat he's just exercising his god given rights as the owner of the machine.

By this same logic: As far as I'm concerned, if the game developer only wants to allow players running anticheat to use their servers then they're just exercising their god given rights as the owner of the server.


This is just yet another example of the remote attestation nonsense where your computer is only "trusted" if it's corporate owned. If you own your machine, you "tampered" with it and as a result you get banned from everything. You get ostracized from digital society.

My position is this is unfair discrimination that should be punished with the same rigor as literal racism. Video games are the least of our worries here. We have vital services like banks doing this. Should be illegal.


This take sucks. The anticheat software in this context is for competitive games. No one cares about people cheating in isolation in single player games. The anticheat is to stop 1 guy from ruining it for the 9 others he's playing with online.

You can argue about the methods used for anticheat, but your comment here is trying to defend the right to cheat in online games with other people. Just no.


> The anticheat is to stop 1 guy from ruining it for the 9 others he's playing with online.

Don't play with untrusted randoms. Play with people you know and trust. That's the true solution.


I wish that were an option. Nowadays many non-competitive games that you play with friends you trust still use EAC (yet accept non-kernel-mode operation on Linux). I suppose other than VAC you can't buy a usermode anticheat middleware now.

That is not the solution if you want to play competitively or whenever you feel like it.

Kernel-level AC is a compromise for sure, and it's the gamer's job to assess whether the game is worth the privacy risk. But I'd say it's much more their right to take that risk than it is the cheater's right to ruin 9 other people's time for their own selfish amusement.


Cheating may not be moral but it's better to put up with it than to cede control of our computers to the corporations that want to own it.

If it kills online gaming, then so be it. I accept that sacrifice. The alternative leads to the destruction of everything the word hacker ever stood for.


I'm sorry, but you are fighting a crusade you cannot win by definition. If I am free to use my computer for anything I want, then I am also free to lock it down to enjoy my favorite game. If I care about my freedom I will have a dedicated machine for this game that I accept I will not have control over.

You are hijacking this thread about VOLUNTARY ceding of freedom, as if the small community even willing to install these is a slippery slope to something worse. You have a point when it comes to banking apps on rooted phones and I'm with you on that, but this is not the thread for it.


I'm starting to think you've never actually played an online game before

This is the most asinine take I've seen on the subject in a while.

You may think it's your "god-given right" to cheat in multiplayer games, but the overwhelming majority of rational people simply aren't going to play a game where every lobby is ruined by cheaters.


I don't like cheaters either. I just respect their power over their machine and wouldn't see that power usurped by corporations just to put a stop to it.

The computers are supposed to be ours. What we say, goes. Cheating may not be moral but attempts to rob us of the power that enables cheating are even less so.


Actually, it is completely true. The TPM threat model has historically focused on software-based threats and physical attacks against the TPM chip itself - crucially NOT the communications between the chip and the CPU. In the over 20 year history of discrete TPMs, they are largely completely vulnerable to interposer (MITM) attacks and only within the last few years is it being addressed by vendors. Endorsement keys don’t matter because the TPM still has to trust the PCR commands sent to it by the CPU. An interposer can replace tampered PCR values with trusted values and the TPM would have no idea.
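The extend-based PCR model behind this can be sketched in a few lines (illustrative hashing only, not the real TPM wire protocol): the TPM only ever sees the digests the CPU sends over the bus, so an interposer that replays the known-good measurements produces PCR state identical to a trusted boot.

```python
import hashlib

def extend(pcr, measurement):
    # TPM2-style PCR extend: new = H(old || measurement). The TPM never sees
    # the code being measured, only the digest delivered over the
    # (historically unauthenticated) LPC/SPI bus.
    return hashlib.sha256(pcr + measurement).digest()

GOOD = [b"bootloader-v1", b"kernel-v1"]   # measurements of the trusted chain
EVIL = [b"bootkit", b"kernel-v1"]         # tampered chain actually running

def boot(measurements):
    pcr = b"\x00" * 32
    for m in measurements:
        pcr = extend(pcr, hashlib.sha256(m).digest())
    return pcr

# Honest measurement of the tampered chain yields a different PCR value...
assert boot(EVIL) != boot(GOOD)
# ...but an interposer on the bus simply drops the real measurements and
# replays the known-good digests, so the TPM's PCR matches the trusted value
# exactly and any quote it signs attests to a "clean" boot.
print(boot(GOOD).hex())
```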

Technically yes, but it would produce an untrusted remote attestation signature (quote). This is roughly equivalent to using TLS with a self-signed certificate — it’s not trusted by anyone else. TPMs have a signing key that’s endorsed by the TPM vendor’s CA.

No, this is not true at all. Microsoft requires their system vendors (Dell, HP, etc) to allow users to enroll their own Secure Boot keys through their “Designed for Windows” certification.

Further, many distributions are already compatible with Secure Boot and work out of the box. Whether or not giving Microsoft the UEFI root of trust was a good idea is questionable, but what they DO have is a long, established history of supporting Linux secure boot. They sign a UEFI shim that allows distributions to sign their kernels with their own, distribution-controlled keys in a way that just works on 99% of PCs.


Is it possible to un-enroll the Microsoft certificates, and just trust the efi shim?


> Is it possible to un-enroll the Microslop certificates

Technically yes, with a massive fucking asterisk: some option ROMs are signed with the MS certs, and if your motherboard doesn't support not loading those (whether needed or not), you will sometimes not even be able to POST.

https://github.com/Foxboron/sbctl/wiki/FAQ#option-rom


With almost all modern motherboard firmware you can enter Setup mode and use KeyTool to configure the trust store however you want, starting from enrolling a user PK (Platform Key) upwards.

It’s generally a lot more secure to avoid the use of any shims (since they leave you vulnerable to what happened in this article) and just build a UEFI Kernel Image and sign that.

Some systems need third party firmware to reach the OS, and this can get a bit more complicated since those modules need to load with the new user keys, but overall what you are asking is generally possible.


> just build a UEFI Kernel Image and sign that.

examples and documentation welcome
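A minimal sketch, assuming systemd's `ukify` and the sbsigntools package are installed and that you have already generated and enrolled your own db key via KeyTool or sbctl. All paths, the kernel/initrd names, and the cmdline below are placeholders for your own setup:

```shell
# Build a unified kernel image (kernel + initrd + cmdline in one EFI binary)
ukify build \
    --linux /boot/vmlinuz-linux \
    --initrd /boot/initramfs-linux.img \
    --cmdline "root=/dev/nvme0n1p2 rw" \
    --output /boot/EFI/Linux/linux.efi

# Sign it with your own db key so the firmware accepts it without any shim
sbsign --key db.key --cert db.crt \
    --output /boot/EFI/Linux/linux.efi /boot/EFI/Linux/linux.efi

# Verify the signature before rebooting
sbverify --cert db.crt /boot/EFI/Linux/linux.efi
```

Point an EFI boot entry at the signed image (e.g. with `efibootmgr`) and the firmware validates it directly against your db key, with no shim in the chain.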




I have a slow burn project where I simulate a supply chain attack on my own motherboard. You can source (now relatively old) Intel PCH chips off Aliexpress that are “unfused” and lack certain security features like Boot Guard (simplified explanation). I bought one of these chips and I intend to desolder the factory one on my motherboard and replace it with the Aliexpress one. This requires somewhat difficult BGA reflow but I have all the tools to do this.

I want to make a persistent implant/malware that survives OS reinstalls. You can also disable Intel (CS)ME and potentially use Coreboot as well, but I don’t want to deal with porting Coreboot to a new platform. I’m more interested in demonstrating how important hardware root of trust is.


I don't want Boot Guard or any of that DRM crap. I want freedom.

> I want to make a persistent implant/malware that survives OS reinstalls.

Look up Absolute Computrace Persistence. It's there by default in a lot of BIOS images, but won't survive a BIOS reflash with an image that has the module stripped out (unless you have the "security" of Boot Guard, which will effectively make this malware mandatory!)

> I’m more interested in demonstrating how important hardware root of trust is.

You mean more interested in toeing the line of corporate authoritarianism.


Well, this project is literally about me circumventing/removing Boot Guard so I don’t know how it’s corporate authoritarianism. I’m literally getting rid of it. In doing so I get complete control of the BIOS/firmware down to the reset vector. I can disable ME. To me, that’s ultimate freedom.

As a power user, do I want boot guard on my personal PC? Honestly, no. And we’re in luck because a huge amount of consumer motherboards have a Boot Guard profile so insecure it’s basically disabled. But do I want our laptops at work to have it, or the server I have at a colocation facility to have it? Yes I do. Because I don’t want my server to have a bootkit installed by someone with an SPI flasher. I don’t want my HR rep getting hidden, persistent malware because they ran an exe disguised as a pdf. It’s valuable in some contexts.


Some days you’re the anarchist, some days you’re the corporate authority. :D


I want an equivalent of Boot Guard that I hold the keys to. Presented with only a binary choice, certainly having Boot Guard is better than not having it if physical device security is in question. But that ought to be a false dichotomy. Regulation has failed us here.


that defeats the point, having the "keys" allows malicious actors to perform the same kind of attacks... trust is protected by trusted companies...

certificate companies sell trust, not certificates.


Me managing my own (for example) secure boot keys does not inherently enable malicious actors. Obviously unauthorized access to the keys is an attack vector that whoever holds them needs to account for. Obviously it's not risk free. There's always the potential that a user could mismanage his keys.

There's absolutely no excuse for hardware vendors not to provide end users the choice.

> trust is protected by trusted companies...

The less control of and visibility into their product you have the less trustworthy they are.


the hardware is made by asus, asus signs with their key backed by a trusted company.

asus gives out keys to sign bios firmware, now aliexpress can not only counterfeit, but provide tampered hardware.

you can enroll your own secure boot keys so that's not really relevant.


Secureboot was being used as an example to illustrate the issue with your claim that a user controlling the keys must necessarily undermine security.

I'll grant that if the user is given control then compromise within the supply chain does become possible. However the same hypothetical malicious aliexpress vendor could also enroll a custom secure boot key, install "definitely totally legit windows", and unless the user inspects he might well never realize the deception. Or the supply chain could embed a keylogger. Or ...


you don't have to trust software, but you have to trust your firmware and hardware.


> You mean more interested in toeing the line of corporate authoritarianism.

That’s not what I got from their post. After all, they’re putting in some effort to hardware backdoor their motherboard, physically removing BootGuard. I read it as “if your hardware is rooted then your software is, no matter what you do.”


> persistent implant/malware that survives OS reinstalls

Try attacking NIC, server BMC or SSD firmware. You will achieve your goal without any hardware replacement needed.


Yeah, but that doesn’t give me a reason to use the hot air station and hot plate collecting dust on my desk ;)


Nothing drives more creativity from me than a tool in need of a project.


I mean, you could also do smartphone repairs.


> I want to make a persistent implant/malware that survives OS reinstalls.

You want to look into something called "Windows Platform Binary Table" [1]. Figure out a way to reflash the BIOS or the UEFI firmware for your target device ad-hoc and there you have your implant.

[1] https://hackertimes.com/item?id=19800807


> You want to look into something called "Windows Platform Binary Table" [1].

Is this how various motherboard manufacturers are embedding their system control software? I was helping a family friend with some computer issues and we could not figure out where the `armoury-crate` (asus software for controlling RGB leds on motherboard :() program kept coming from


That most likely comes from Windows Update though. It now has the ability to download "drivers". It actually had said ability for a long time (back from Vista days if I remember right), but back then it was only downloading the .inf file and associated .sys files/etc, whereas nowadays it actually downloads and runs the full vendor bloatware.


Have your friend grab https://github.com/seerge/g-helper which can disable Armoury Crate. It’s also a lot lighter on your system - I was having constant gradual frame drops (games would start fine and performance would slowly degrade) until I tried this and used the option to disable the AC processes.


Likely so. I think that’s actually the intended use of this “feature”


Only works if the target is running Windows (paranoid people might be on Linux), so you'd probably want to slip in a malicious UEFI driver directly. Tools like UEFITool can be used to analyze and modify the filesystem of a UEFI firmware image.


Death approaches. Slow burn until then. When Death arrives, what you are doing now will be obviously irrelevant.


Calling it a “kill switch” buries the lede here. What these politicians call a kill switch is technology to passively detect drunk driving. In 2021, Congress passed a law (HALT Drunk Driving Act) requiring NHTSA to eventually require auto makers to install passive drunk driver detection systems. NHTSA missed their statutory November 2024 deadline to finalize the regulations on this so it’s not like this amendment failing has a substantial impact. This technology is still many model years (maybe 2029? 2030?) away. I make no claims to the merits of this technology, I just feel the need to clarify the current situation.


No. It's Orwellian tech that won't work.


This is conceptually interesting to me because I see this as almost a more generic TI WEBENCH. I’m curious why you focus on the sized “grid” blocks (presumably for placement directly on the PCB layout) instead of doing the same for the schematic. That way I still have the flexibility of laying out the board how I want to meet e.g. mechanical constraints instead of working around a 12.7mm grid.


I saw routing as just as big a headache as the schematic, so formalizing the layout to a grid means layout becomes a compilation problem, not a design problem.

My intent for phaestus isn't to design PCBs, it's to design entire products, and also to be friendly to non-technical users who don't know what a PCB is, let alone do layout themselves.


I’ve been paying for Google Workspace for my custom domain for years basically just so I can use Gmail. For just $7 more a month, I upgraded my plan to access Gemini Pro, which has guaranteed enterprise-grade privacy controls. I think this is currently the best value platform for anyone who values their privacy for LLMs. If Apple and the DoD trust Google’s internal controls, I do too.


This too sounds like an ad.


Because Red Hat pays the salaries of dozens (hundreds?) of kernel maintainers all over different subsystems. So they’re subject matter experts, and know exactly which ones are relevant to Red Hat.


This is the right answer.

Source: 10+y long (past) tenure at RH in a team adjacent to the kernel team.

EDIT: also because companies like RH tend to know, and are happy to know, the details of their customers' deployments. Compare the article:

> Always remember, kernel developers:

> - do not know your use case.

> - do not know what code you use.

> - do not want to know any of this.


https://bugzilla.redhat.com/show_bug.cgi?id=1708775

https://www.openwall.com/lists/oss-security/2020/06/23/2

Even RHEL misses things that don't get announced. This is a big issue for LTS kernels and downstreams, although RHEL does a much better job than most due to the nature of the company/products.

I don't have tons of examples off hand but Spender and Project Zero have a number of examples like this (not necessarily for RHEL! Just in general where lack of CVE led to downstreams not being patched).

https://googleprojectzero.github.io/0days-in-the-wild/0day-R...

Who is helped by this, for example? https://x.com/grsecurity/status/1486795432202276864

> Always remember, kernel developers:

> - do not know your use case.

> - do not know what code you use.

> - do not want to know any of this.

I just found this part so odd. You don't need to know how users are deploying code to know that a type confusion in an unprivileged system call that leads to full control over the kernel is a vulnerability. If someone has a very strange deployment where that isn't the case, okay, they can choose not to patch.

It's odd for every distro to have to think "is this patch for a vulnerability?" with no insight from upstream on the matter. Thankfully, researchers can go out of their way to get a CVE assigned.


Paying maintainers doesn't give Red Hat a magic oracle for "which commits matter for security". What you actually end up with is cherry-picking + backporting. Backporting is inherently messy, you can introduce new bugs (including security bugs) while trying to transplant fixes, and omissions are inevitable. And CVEs don't save you here: plenty of security relevant fixes never get a tidy CVE in the first place, and vendors miss fixes because they often pretend the CVE stream is "the security feed".

Greg is pretty blunt about this in the video linked in the article: "If you are not using the latest stable / longterm kernel, your system is insecure" (see 51:40-53:00 in [1]). He also calls out Red Hat explicitly for ending up "off in the weeds" with their fixes.

RHEL as an entire distribution may provide good enough security for most environments. But that is not the same claim as "the RHEL kernel is secure" or "they know exactly which commits are relevant". It is still guesswork plus backports, and you're still running behind upstream fixes (many of which will never get pulled in). It is a comfortable myth.

[1] https://www.youtube.com/watch?v=sLX1ehWjIcw&t=3099s


I have an almost identical story except the state in question was Nevada. I’m curious what “dubious” domain it was, for me it was video game cheats. Maybe I’m actually the co-owner you’re talking about. :)


This made me curious. Like selling cheats for games?


Yes, in both their case and mine.

