The strength—and also the weakness—lies in how WASM is consumed in the browser. During instantiation, JavaScript engines validate the module and reject it if it uses unsupported instructions or features. In practice, due to browser compatibility differences, WASM modules often need to be built in multiple variants, such as a baseline version, a SIMD version, a SIMD+threads version, and so on. This is a significant pain compared to native binaries, which can rely on runtime feature detection and dynamic dispatch.
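The multi-variant dance is usually handled at load time with tiny probe modules, which is the approach the wasm-feature-detect library takes: ask the engine to validate a few bytes that use the feature in question, then fetch the matching build. A minimal sketch, assuming hypothetical `app.*.wasm` filenames (the SIMD probe bytes follow wasm-feature-detect's technique and should be treated as illustrative):

```javascript
// Runtime WASM feature detection: WebAssembly.validate() returns false
// (rather than throwing) when a module uses unsupported instructions,
// so tiny probe modules can select the right build variant.

// Smallest valid module: just the "\0asm" magic and version 1.
const EMPTY_MODULE = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);

// Probe module containing v128 (SIMD) instructions; illustrative bytes
// in the style of the wasm-feature-detect library.
const SIMD_PROBE = new Uint8Array([
  0, 97, 115, 109, 1, 0, 0, 0, 1, 5, 1, 96, 0, 1, 123,
  3, 2, 1, 0, 10, 10, 1, 8, 0, 65, 0, 253, 15, 253, 98, 11,
]);

function supportsWasm() {
  return typeof WebAssembly === "object" && WebAssembly.validate(EMPTY_MODULE);
}

function supportsSimd() {
  // Safe to call on any engine: validation failure yields false, not a throw.
  return supportsWasm() && WebAssembly.validate(SIMD_PROBE);
}

// Pick the build variant before fetching it (filenames are assumptions).
const variant = supportsSimd() ? "app.simd.wasm" : "app.baseline.wasm";
console.log(variant);
```

The same pattern extends to threads, relaxed-simd, and so on, with one probe module per feature; it is the closest WASM gets to native cpuid-style dispatch, except the dispatch has to happen before instantiation rather than inside the binary.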
Before modern standardization, maintaining calendars and clocks was typically the responsibility of states or similar authorities, often guided by astronomers. Now it seems that international organizations are effectively following the early UNIX/POSIX model, and astronomers no longer have the same authority over timekeeping.
Yuck. I’ve already noticed compilation times increasing from C++17 to C++20, and this feature makes it much worse. I guess I’ll need to audit any reflection usage in third-party dependencies.
Please check the article again -- I made a mistake in the original measurements (the Docker image I used had GCC compiled in debug mode) and now the (correct) times are ~50% faster across the board.
Not free, still need to audit, but much better than before. Sorry.
Fxxk off, to all political actors pretending this is about child protection. Protecting children is not the job of the OS, the device manufacturer, or the internet service provider. It is the parent’s job. If you cannot supervise, monitor, and discipline your child’s internet use, that is your failure, not theirs.
They can provide tools, sure. But restricting adults because some parents fail at parenting is insane. That is how a totalitarian state grows: by demanding the power to monitor and control every individual.
If you cannot control your children, that is your fault. And if that is the case, you should think twice before having kids.
Cops tracking what people do on the internet, checking every image to ensure it's not pornographic, and every online transaction to ensure it's not criminal!
Sounds great! Let's just start by rolling out the program to target elected officials and their families as a trial. If every congressional or senate representative wants to undergo a few years of scrutiny to make sure the system works well, maybe the people will follow gladly.
Sorry, the point I am trying to make is that bullshit laws should be tested on the group of people advocating and passing those laws, because maybe they wouldn't like the law when it applies to them.
This reminds me of a voting method I've seen some anarchists advocate for: the rules passed by votes should only be enforced on those who voted for it.
This whole thing is part of building a mechanism to restrict free speech down the line, to cover for a certain "greatest ally" of the United States. Make no mistake: the "not a genocide" of the last two years and the recent "not a war" are very much related to this.
How does mandating every OS to have a parental controls API lead to wholesale suppression of speech? Will they mandate it to always be set to the most restrictive setting?
This isn't "parental controls"; this is a mandate to verify your age, and subsequently your identity, to an external third party. Can't you see how this is a slippery slope to de-anonymizing the internet and being able to restrict access for reasons that won't be revealed until later?
In general, I argue for less state control over everything. But your argument seems flawed at its core. If someone is a bad parent, should we simply ignore it and let the children turn out idiots as well? And the line is often blurry; that's why we designed schools, which should compensate even for dumb parents.
And, just to be clear on this topic, I think these age restriction laws are mostly bullshit, but I'm deeply against the concept of putting all the responsibility of raising children onto the parents.
> we simply ignore it and let the children turn out idiots as well
There is not a lot of safeguarding against this in the real world tbh. At the very least I think the OS or internet age verification is not the place to start improving this.
There is some. Bars won't serve minors. The standardisation of parental controls law (the CA/CO one) is much closer to "bars won't serve minors" than it is to "camera drones will follow minors around to make sure they don't drink alcohol"
> should we simply ignore it and let the children turn out idiots as well?
Just because you're an idiot at 18 doesn't mean you are one for life.
> so that's why we designed schools that should compensate even for dumb parents.
Does that actually work?
> against the concept of putting all the responsibility of raising children onto the parents.
Then how do you feel about requiring parents to get a license before they have a child? If you wish to invite yourself into their responsibilities, shouldn't you also invite yourself into their bedroom first?
> If you wish to invite yourself into their responsibilities shouldn't you also invite yourself into their bedroom first?
You're turning a question of degree (how much society should be involved in raising children) into an all-or-nothing debate, which I explicitly want to reject.
> Does that actually work?
Yes. Because of mass education, almost every adult you meet can read and write, something that has only been true for roughly the last 100 years. Just because a system has (currently huge) faults doesn't mean we should remove it entirely.
what about children being fed unhealthy things? childhood obesity is dangerous and also affects their mental and physical health.
let's install cameras in all supermarkets that ensure parents cannot buy unhealthy things for their children.
of course, adults can continue to purchase anything they want for "themselves". but the facial scanning in supermarkets is imperative for child safety!
This is right on the money and really highlights how short-sighted these proposals are.
We're perfectly willing to destroy our privacy for things that don't matter, but then the stuff that does, we don't touch.
Realistically, seeing some boobies on instagram is NOTHING compared to childhood obesity. Nothing. We're talking lifetime of suffering and early death versus boobies.
You make a good point that society may bear responsibility as well. However, we are arguing over using technology to solve meatland problems, and this one should never be automated into tech, ever. It puts the burden on artists and engineers to solve things they aren't causing and aren't really responsible for.
It’s compelled speech. A transmission of expression required by law. The argument settled in 1791. The First Amendment does not permit the government to compel a person’s speech just because the government believes the expression thereof furthers that person’s interests.
It's also a consumer product regulation, of which many already exist. The government compels you to speak about the ingredients in a food product you manufacture, and we don't seem to have a problem with that.
A better analogy would be regulation of addictive activities like gambling and regulation of addictive substances like painkillers. Given that the platforms being regulated were intentionally engineered to maximize addictive potential, this seems a fair and reasonable response.
I am a parent. The devices my child uses have root certs that allow me to decrypt traffic that must pass through my proxy to be relayed to the internet. Voila. Problem solved with current tech.
Yes, and the next battle is ECH-pinned params in apps. The browser can at least signal that ECH isn't supported. For apps, you'll just have to strip the ECH, downgrade the connection, and live with the server dropping you. But that's fine. My kids don't need TikTok if I, the parent, can't decrypt the info.
I assume you live in the free world. Some socialist states in history, such as East Germany, pushed child-rearing and early education much further into the hands of the state through extensive state-run childcare and kindergarten systems. That model is gone, and for good reason.
Even with schools in place, the basic responsibility for raising children still belongs to the parents. Schools can support, educate, and compensate to some extent, but they cannot replace parental responsibility.
I also see far too much awful news — in my country, Korea, for example — about terrible parents harassing school teachers because their children are out of control.
I was born in a communist country in Eastern Europe, which is now crony capitalist. The issue is extremely complex, and all I can say in such a short paragraph is that ideologically-driven implementations are doomed to fail. It doesn't matter whether you believe in "free markets", "the state", "free speech", "socialism" or "equality", if you put these above the concrete reality of modern parenting and how much harder it's getting compared to previous generations.
To be fair, if the parent is garbage, there isn't anything the state today can do to truly prevent the child from being corrupted, short of taking the child away. We ensure that vaccine laws are difficult to enforce; we ensure that the child cannot have any privacy from the parent, codified at school. At every stage we gave parents essentially absolute authority over their children, with the possible exception of physical abuse. And I say possible because even with a physically abusive parent, it can be difficult for the child to advocate for themselves and escape. They can ask to be emancipated, but the odds are stacked against proving you can support yourself financially.
All this to say: while I think the OP is being mean about it, they are not wrong. The law argues heavily that the parent is supreme, at least in the US. But this specific law pushes the responsibility of being the supreme authority off of parents. I know you don't like that concept, but I think it is very easy to argue that any other model is going to be unacceptable to a plurality of parents. That's not to be confused with a parent being responsible for everything their child does, because that's not true. But the consequence of that thinking is that children ultimately bear some responsibility for the things they do, over the parent, which I think the authors of this law would be sweating at.
Personally I think the biggest issue for children is impulse control around social media, and to be frank I don't think adults are necessarily able to deal with the onslaught of endless-feed short-form video content either. I don't think our brains are built to withstand it very well. I don't know what the solution is, but I think what made YouTube (without Shorts) different from TikTok is the absence of the endless scroll. The added friction actually protected people's attention, and something that adds even minimal friction to interactions would be massively beneficial to society at large.
Okay, assuming that’s the case for the sake of argument, that’s still a huge problem right? Kids raised by bad parents suffer, which is inhumane. And if you don’t care about that, they also cause problems or costs for society at large (especially if there are a lot of them).
Those are bad outcomes. So is it any wonder that we look for policy/regulatory issues to mitigate the harms of bad parenting?
Well, it’s a clever idea. Discord seems to have intentionally softened its age-verification steps so it can tell regulators, “we’re doing something to protect children,” while still leaving enough wiggle room that technically savvy users can work around it.
But in practice, this only holds if regulators are either inattentive or satisfied with checkbox compliance. If a government is competent and motivated, this approach won’t hold up—and it may even antagonize regulators by looking like bad-faith compliance.
I’ve also heard that some governments are already pushing for much stricter age-verification protocols, precisely because people can bypass weaker checks—for example, by using a webcam with partial face covering to confuse ID/face matching. I can’t name specific vendors, but some providers are responding by deploying stronger liveness checks that are significantly harder to game. And many services are moving age verification into mobile apps, where simple JavaScript-based tricks are less likely to work.
> Discord seems to have intentionally softened its age-verification steps so it can tell regulators, “we’re doing something to protect children,” while still leaving enough wiggle room that technically savvy users can work around it.
...source?
I sincerely doubt that Discord's lawyers advocated for age verification that was hackable by tech savvy users.
It seems more likely that they are trying to balance two things:
1. Age verification requirements
2. Not storing or sending photos of people's (children's) faces
Both of these are very important, legally, to protect the company. It is highly unlikely that anyone in Discord's leadership, let alone compliance, is advocating for backdoors (at least for us.)
Usually in cases like this, there is no source, there can’t be. Long long ago, long enough to be past the statute of limitations, I was involved in a similar regulatory compliance situation. We specifically communicated in such a way that “actual effectiveness” wasn’t talked about, and we set that up with a single, verbal only and without recording, meeting between the team and one of the lawyers.
Point is, these kinds of schemes, where internal communication is deliberately hobbled so the company can maliciously comply with requirements while staying completely in the clear as far as any recorded evidence goes, do happen. And there's always at least one person piping in with a naïve "source?", as if people would keep recorded evidence of their criminal conspiracies.
Unless the governments come out with a first party national digital ID that can convey age of majority, they had better make themselves happy with a checkbox because nothing else is realistically possible.
A few days ago, Notepad++ got compromised—apparently by a state actor (or a proxy). And now, today, Windows’ built-in Notepad has a fresh CVE. What a life.
At this point, what am I supposed to do other than uninstall Windows completely? No real sandboxing, a mountain of legacy…
telnetd CVE-2026-24061. It's an embarrassingly simple exploit, but it took years to be discovered.
> When telnetd invokes /usr/bin/login, it passes the USER value directly. If an attacker sets USER=-f root and connects using telnet -a or --login, the login process interprets -f root as a flag to bypass authentication, granting immediate root shell access.
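The bug class here is classic argument injection: a user-controlled value lands where the child program parses options. A minimal sketch of one way to guard against it (note the `--` end-of-options marker is an assumption; not every program, `login` included, necessarily honors it, so the explicit prefix check does the real work):

```javascript
// Sketch of the argument-injection class behind this CVE.
// Vulnerable pattern (conceptually what telnetd did):
//   spawn("/usr/bin/login", ["-p", untrustedUser]);
// With USER="-f root", login parses the value as its -f (skip
// authentication) flag instead of as a username.

// Safer: refuse anything that could be parsed as an option, and
// terminate option parsing explicitly where the tool supports "--".
function loginArgs(untrustedUser) {
  if (untrustedUser.startsWith("-") || /\s/.test(untrustedUser)) {
    throw new Error(`refusing suspicious username: ${untrustedUser}`);
  }
  // "--" is an assumption; check the target program's manual.
  return ["-p", "--", untrustedUser];
}

console.log(loginArgs("alice").join(" ")); // prints: -p -- alice
try {
  loginArgs("-f root");
} catch (e) {
  console.log(e.message); // prints: refusing suspicious username: -f root
}
```

The general rule is that any value crossing a trust boundary into an argv must be validated as data, never trusted to stay data once a child process starts parsing it.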
Why does every Linux distro under the sun try so hard to protect the garbage under /usr/bin/ and /etc/ when literally the only files that matter to me are in /home, which is a free-for-all?
Because Linux (and other *nixes) have their roots in multiuser time-sharing systems and servers. Protecting the system from the users was important, and protecting users from other users equally so. Protecting the user's $HOME from themselves or from user-level programs wasn't as much of a concern; the user was assumed to be responsible enough to manage it themselves.
>Why does every Linux distro under the sun try so hard to protect the garbage under /usr/bin/ and /etc
Because a compromised user could infect shared executables and spread the infection. A bit harder to do with /etc, but certainly possible. The main target would be infecting bash; then you are done from the get-go.
>when literally the only files that matter to me are in /home, which is a free-for-all?
The home folder's read/write access is usually restricted to its owner. The only exception I know of is Ubuntu, where other users can read it, but that's just a huge flaw in Ubuntu that almost no other distro has.
> when literally the only files that matter to me are in /home, which is a free-for-all?
> The home folder's read write is usually restricted to the user.
Yeah, and that is the point. All the user's programs (curl, wget, the web browser, anything else that connects to the network) run as the user, and by default they all have access to everything inside ${HOME}.
Most people don't really care if /bin gets obliterated, but they do care dearly when /home/joe/photos/annies-2nd-birthday gets wiped.
Protecting a user from himself is hard. Protecting a user from others is easy. Linux is influenced by Unix, and a lot of installations are servers, where most programs run under their own accounts.
You can always have two user accounts, oblio and unsafe-oblio, and have a shared folder between the two for transferring files. Or invest in some backup software.
Just make another user bro. If you can't even create a user to run a program you distrust, the issue is not that windows doesn't provide sandboxes, it's that you don't use them
And no, it's not "a lot of work" it's the bare minimum
Yet 99% of the planet doesn't do "the bare minimum", bro.
We supposedly have all the smartest minds in the world working in tech, and they haven't been able to create a simple, cheap, reliable, cross-platform solution for user data protection, backup, and restore.
Yes, because the users are in fact the problem. The options are either to trust the user to make decisions (and technically illiterate users will screw things up for themselves), or lock down the system so that the user isn't allowed to do anything the corporate overlord doesn't let them. There is no middle ground.
There is one, toward which desktops are slowly being remade, and which Windows and macOS are failing at. Have application repositories, but open ones like Debian's (or Linux's in general), so that application developers can publish without giving up a cut of every sale. Sandbox all new desktop applications over the years and publish long roadmaps until everything is sandboxed by, say, 2035.
Provide more education and guidance for users and more corporate controls.
If they had really started doing this in 2005, we would have been there by now. Instead we get more UI toolkits, more UI refreshes, and AI everywhere.
I rolled out a home-made backup script in PowerShell: just a wrapper around wbadmin that backs up an entire system image, plus a standard "Backup and Restore" backup to an external disk whenever I plug it in.
Yeah, yeah. It's not purely about installing apps. It's primarily about sandboxing them.
I always thought Americans were "nanny state this, nanny state that". Doesn't this also apply to huge state sized corporations mandating a cut of every app sold and forcing everyone to only install apps from them?
Linux /home is far from a free for all. flatpak, landlock, selinux, podman, firejail, apparmor, and systemd sandboxing all exist and can and do apply additional restrictions under /home
> At this point, what am I supposed to do other than uninstall Windows completely?
Uninstall Windows completely 4 years ago, when Windows 11 was released, ushering in a new era of absolutely insane, self-destructive, unnecessary and unwanted shit?
There is no valid excuse for this vulnerability. Its existence is a category error that's only possible because Microsoft has completely jumped the shark. Continuing to use /any/ of their products is a choice to accept pure insanity as a default.
That was a CCP-linked group compromising Notepad++'s underlying hosting provider; not much to be done there aside from switching hosting providers. The update validation has also been improved, and there's scoop if you don't trust the built-in updater. Fortunately the attack was narrowly targeted and the IOCs are known.
It was not compromised a few days ago; that's just when the attack was disclosed. The actual compromise and exploitation happened months ago and went on for several weeks.
- Windows Sandbox (consumer-level sandbox)
- Creating a separate User (User folders are permission locked to their user by default, system binaries cannot be modified without admin access)
- HyperV (VM hypervisor)
- Edge Browsers
Don't get me wrong, MSFT quality is dropping steeply, but this is still a strong point. For comparison, on Ubuntu the user folder can by default be read by all users.
>Creating a separate User (User folders are permission locked to their user by default, system binaries cannot be modified without admin access)
Common practice, and even encouraged by Windows itself, is having the administrator account be the only account. This misuse is a common thread running through Windows systems and security breaches alike.
Windows has garbage defaults, but if you read through their documentation on enterprise architecture they definitely do not recommend having admin be the only account. They do in fact encourage separate accounts, multiple level of privileges with login restrictions across different types of machines, etc.
Many Linux distros are also guilty of this, disabling the root account by default and having the only user have sudo privileges, just like Windows.
Yes, but much more can be done within the user's own directory on Unix systems. Needing sudo raises some eyebrows, whereas most Windows users don't really understand UAC and almost never think twice about pressing "Yes" on the popups, which are seen more as an annoyance than as something critical for safety. Some even disable UAC entirely.
> Common practice, and even encouraged by Windows itself, is having the administrator account be the only account.
This hasn't been true since Vista. Kind of even before that with XP, it really showcased using multiple accounts to home users with a much more stylized user selection screen.
Sorry, the era of free communication is fading. Across middle powers, developed countries, and increasingly North America, governments are tightening the rules around online speech—and often jawboning platforms into going further than the law strictly requires. The list of examples is so long I can’t even begin to type them all.
Instead of "free communication" I would say "free large public social media", because without going all DPRK, there's no stopping people from using the internet, a means of free communication.
I tested it a bit yesterday, and it looks good—at least from a structural perspective. Separating the LLM invocation from the apply step is a great idea. This isn’t meant to replace our previous deterministic GitHub Actions workflow; rather, it enables automation with broader possibilities while keeping LLM usage safer.
Also, a reminder: if you run Codex/Claude Code/whatever directly inside a GitHub Action without strong guardrails, you risk leaking credentials or performing unsafe write actions.
From my years of iOS development, and based on https://xcodereleases.com, Apple typically ships two major Xcode updates each year:
- X.0 (September): bumps Swift, SDK versions, etc. It also tends to have a noticeably longer beta cycle than other releases.
- X.3 or X.4 (around March): bumps Swift again and raises the minimum required macOS version.
Other releases in between are usually smaller updates that add features or fix bugs, but they don’t involve major toolchain-level or fundamental changes.
Today’s release doesn’t bump the Swift version, which suggests the core toolchain is essentially the same as Xcode 26.2—so it makes sense that the minimum macOS version wasn’t raised either.