> ...stored in the global StorageDatabaseNameHashtable.
> This mapping:
> - Is keyed only by the database name string
> ...
> - Is shared across all origins
Why is this global keyed only by the database name string in the first place?
The post mentions a generated UUID; why not use that instead, and have a per-origin mapping of database names to UUIDs somewhere? Or even just have separate hash tables for each origin? Seems like a cleaner fix to me than sorting (though admittedly a more complex one, involving architectural changes).
Seems to me that having a global hashtable that shares information from all origins is asking for trouble, though I'm sure there is a good explanation for this (performance, historical reasons, some benefits of this architecture I'm not aware of, etc.).
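For what it's worth, the per-origin alternative is easy to sketch. A minimal illustration in Python (purely hypothetical; WebKit's real implementation is C++, and the class and method names here are made up):

```python
import uuid

class DatabaseRegistry:
    """Hypothetical per-origin registry: each origin gets its own
    name -> UUID table, so identical database names from different
    origins can never collide in one shared global hashtable."""

    def __init__(self):
        self._by_origin = {}  # origin string -> {db name -> UUID}

    def database_id(self, origin, name):
        names = self._by_origin.setdefault(origin, {})
        if name not in names:
            names[name] = uuid.uuid4()  # generated once, then stable
        return names[name]

registry = DatabaseRegistry()
a = registry.database_id("https://a.example", "Databases")
b = registry.database_id("https://b.example", "Databases")
assert a != b  # same name, different origins: distinct databases
assert a == registry.database_id("https://a.example", "Databases")
```

The point being that lookups are always scoped by origin first, so no cross-origin state is shared through the name string alone.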
Someone from my high school added me on LinkedIn and works at Palantir.
What I find interesting is that a few months after joining, he scrubbed all posts, descriptions, and mentions of the word "Palantir" from his profile, replacing them by saying he works at an unnamed company as "a Forward Deployed Engineer". Judging by his activity reacting to other posts, it seems his coworkers also use the same term and have removed mentions of "Palantir".
I suppose it was to avoid backlash from others, or perhaps other companies would be hesitant to hire someone from Palantir (?). Or perhaps it is just a company policy to prevent scammers from finding employees.
But in any case, the hiding of the word is something I find interesting.
Back when I was in university, one of the units touching Assembly[0] required students to use subtraction to zero out the register instead of using the move instruction (which also worked), as it used fewer cycles.
I looked it up afterwards, and xor was also a valid instruction in that architecture for zeroing out a register, and it used even fewer cycles than the subtraction method; but it was not listed in the subset of assembly instructions we were allowed to use for that unit. I suspect it was deemed a bit off-topic, since you would need to explain what the mathematical XOR operation is (if you hadn't already learned about it in other units) when the unit was about something else entirely. But everyone knows what subtraction is, and that subtracting a number from itself gives zero.
[0] Not x86, I do not recall the exact architecture.
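The identities both idioms rely on are trivial to sanity-check (Python here just to illustrate the arithmetic, not the assembly itself):

```python
# Both zeroing idioms rest on simple identities:
#   sub r, r  works because x - x == 0
#   xor r, r  works because x ^ x == 0
for x in (0, 1, 0xDEADBEEF, 2**32 - 1):
    assert x - x == 0  # the subtraction method the unit taught
    assert x ^ x == 0  # the xor method, not in our allowed subset
```

(On x86, at least, xor-zeroing became the canonical idiom, to the point that modern cores special-case it; I can't speak for whatever architecture the unit used.)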
It increases the attack surface of the browser. Even if you do need to "accept" a connection for a device, this isn't foolproof. I imagine adding WebUSB is a non-trivial amount of code; who's to say there isn't a bug or exploit introduced there somewhere, or a bypass for accepting device connections?
This would still be better than downloading random native programs since it's under the browser's sandbox, but not everyone will _ever_ need to do something that requires WebUSB/USB, so this is just added attack surface for a feature only a small percentage of people would ever use.
The solution is to use a smaller separate _trusted_ native program instead of bloating the web with everything just for convenience. But I understand that most are proprietary.
I say all this, but a part of me does think it's pretty cool I can distribute a web-app to people and communicate via WebUSB without having the user go through the process of downloading a native app. I felt the same way when I made a page on my website using WebBluetooth to connect to my fitness watch and make a graph of my heart rate solely with HTML and Javascript (and no Electron).
I'm just not too happy about the implications. Or maybe I'm just a cynic, and this is all fine.
I do not understand the appeal of the workflow of working on separate things in parallel, then splitting it off into branches/commits. imo, isn't it better to fully focus on one thing at a time, even if it is "simple"?
I imagine if I follow this workflow, I might accidentally split it off in a way that branch A is dependent on some code changes in branch B, and/or vice versa. Or I might accidentally split it off in a way that makes it uncompilable (or introduce a subtle bug) in one commit/branch because I accidentally forgot there was a dependency on some code that was split off somewhere else. Of course, the CI/CD pipeline/reviewers/self-testing can catch this, but this all seems to introduce a lot of extra work when I could have just been working on things one at a time.
I'm open to changing my mind, I'm sure there are lots of benefits to this approach, since it is popular. What am I missing here?
From practical experience of using jj daily and having (disposable) mega merges:
When I have discrete, separate units of work, but some may not merge soon (or ever), being able to use mega merges is so amazing.
For example, I have some branch that has an experimental mock-data-pipeline thingy. I have yet to devote the time to convince my colleagues to merge it. But I use it.
Meanwhile, I could be working on two distinct things that can merge separately, but I would like to use Thing A while also testing Thing B, but ALSO have my experimental things merged in.
Simply run `jj new A B C`. Now I have it all.
Because jj's conflict resolution is fundamentally better, and rebases are painless, this workflow is natural and simple to use.
> Because jj's conflict resolution is fundamentally better
I don't know jj well, so its merge algorithm may well be better in some respects, but it currently can't merge changes to a file in one branch with that file being renamed in another branch. Git can do that.
I don't think I really understand the way jujutsu is doing this, but if it's what I think, one example would be that you realize while working that some changeset is getting too big and makes sense to split it. So B would depend on A and be on top eventually, but you don't know the final form of B until you've finished with both. I've always just done this with rebasing and fixups, but I could see it being easier if you could skip that intermediate step.
Sometimes you want to work on something, and as a prerequisite that needs X. Then you realise that once X is in place you can actually build a number of useful things against X. And so forth. There’s no good way to merge sequentially, other than a multi-merge.
I’ve found megamerge really helpful in cases where I’m working on a change that touches multiple subsystems. As an example, imagine a feature where a backend change supports a web change and a mobile change. I want all three changes in place locally for testing and development, but if I put them in the same PR, it becomes too hard to review—maybe mobile people don’t want to vouch for the web changes.
You’re right that I have to make sure that the backend changes don’t depend on the mobile changes, but I might have to be mindful of this anyway if the backend needs to stay compatible with old mobile app versions. Megamerge doesn’t seem to make it any harder.
A real case from my work: I had to work on an old Python project that used Poetry and some other stuff that was just not working correctly on my computer. I did not want to touch the CI/CD pipeline by switching fully to uv.
But I created a special uv branch that moved my local setup to uv. Then I went back up the tree to main and created a feature branch from there. I merged them together and worked out of that merged branch, moving all the real changes to the feature branch.
Now whenever I enter that project I have this uv branch that I can merge in with all the feature branches to work on them.
>I do not understand the appeal of the workflow of working on separate things in parallel, then splitting it off into branches/commits. imo, isn't it better to fully focus on one thing at a time, even if it is "simple"?
because agents are slow.
I use a SOTA model (latest Opus/ChatGPT) to first flesh out all the work. Since a lot of agent harnesses use some black magic, I use this workflow:
1. Collect all issues
2. Make a folder
3. Write each issue as a file with complete implementation plan to rectify the issue
After this, I switch from the SOTA model to a Mini model.
Then I loop through each issue, or run agents in parallel, implementing one issue at a time.
I usually need to do 3 iteration runs to implement full functionality.
I had a big feature I was working on. jj made it easy to split it into 21 small commits, so I could give reviewers smaller things to review while I continued to work. It wasn't perfect, and maybe git can do all of this by itself, but that hasn't been my experience.
In other words, I effectively was working on one thing, but at a quicker easier pace.
But in general you are one of the few that does. Many devs do find git rebases scary. Having done it in both, I can say that with jj it is much, much simpler (especially with jjui).
It does seem to introduce a lot of complexity for its own sake. This kind of workflow only survives on the absorb command and like you said it doesn't really cover all the interplay of changes when separated. It's a more independent version of stacked diffs, with worse conceptual complexity.
As a jujutsu user, I don't disagree. I can see the appeal of doing a megamerge, but routinely working on the megamerged version and then moving the commits to the appropriate branch would be the exception, not the norm.
I gather one scenario is: You do a megamerge and run all your tests to make sure new stuff in one branch isn't breaking new stuff in another branch. If it does fail, you do your debug and make your fix and then squash the fix to the appropriate branch.
I wouldn't do it this exact way either but the benefit is "having any local throwaway integration branch" vs. having none at all. You don't need to do it this exact way to have one.
I wonder if someone more creative than me would be able to push this to do things it was not designed to do. I recently found a video where someone exploited some properties of certain transcript file formats to make a primitive drawing app with YouTube's video player's closed captions.[0]
Since a brush's code can see the state of the canvas and draw on it, perhaps there can be a brush that does the opposite here, and instead renders a simple "video" when you hold down the mouse? Or even a simple game, like Tic-Tac-Toe.
I understand that obviously isn't the purpose of the brush programs, but I think it is an interesting challenge, just for fun.
[0] The video I am thinking of is by a channel named Firama, but they did not explain how they accomplished it. Another channel, SWEet, made their own attempt, which wasn't as full-featured as the original, but they did document how they did it.
> Never follow a shortened link without expanding it using a utility like Link Unshortener from the App Store,
I am unfamiliar with the Apple ecosystem, but is there anything special about this specific app that makes it trustworthy (e.g. a reputable dev, made by Apple, etc.)? Looking it up, it seems to be an $8 link-unshortener app.
In any case, there have been malicious sites that return different results based on the request headers (e.g. the user agent: if the script is downloaded with a web browser's user agent, return a benign script; if it is curl, return the malicious one). But I suppose this wouldn't be a problem if you directly inspect and use the unshortened link.
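That user-agent cloaking trick takes only a few lines server-side, which is part of why it's so common. A toy illustration in Python (the returned strings are placeholders, not real scripts):

```python
def response_for(headers: dict) -> str:
    """Return different content depending on who is asking.

    A malicious server can hand browsers (and link-preview bots)
    a harmless page, while `curl ... | sh` style fetches receive
    the real payload. Placeholder strings stand in for scripts.
    """
    ua = headers.get("User-Agent", "").lower()
    if "curl" in ua or "wget" in ua:
        return "<malicious shell script>"
    return "<benign page>"

assert response_for({"User-Agent": "curl/8.5.0"}) == "<malicious shell script>"
assert response_for({"User-Agent": "Mozilla/5.0 (X11; Linux)"}) == "<benign page>"
```

Which is why previewing a link in a browser proves little about what a `curl`-based fetch of the same URL would receive.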
> Terminal isn’t intended to be a place for the innocent to paste obfuscated commands
Tale as old as time. Isn't there an attack that started getting popular last year on Windows, of a "captcha" asking you to hit Win + R and paste a command to "verify" your captcha? But I suppose this type of attack has been going on for a long, long time. I remember Facebook and some other websites used to have a big warning in the developer console asking users not to paste scripts they found online there, as they are likely scams and will not do what they claim to do.
---
Side-Note: Is the layout of the website confusing for anyone else? Without borders on the images (and with the images being the same width as the paragraph text), they seemed like part of the page, and I found myself trying to select text in an image and briefly wondering why I could not do so. Turning on my Dark Reader extension helped a little, since the screenshots were on a white background, but it still felt a bit jarring.
Agreed, the lack of borders or indentation on the screenshots is very confusing. It's hard to understand what text comes from the malicious website and what is from the author.
> they should provide built-in anti-cheat support in the OS.
As much as I dislike anti-cheat in general (why incorporate it instead of just having proper moderation and/or private servers? Do you need a sketchy third-party kernel-level driver policing you to make sure you're "browsing the internet properly, in a way that is compliant with company XYZ's policies", or when running other software like a photo editor or word processor? It's _your_ software that you bought.), something similar is already happening with, e.g., Widevine bundled in browsers for DRM-ed video streaming.
I agree that having some first-party or reputable anti-cheat driver or system is probably preferable to having different studios roll out their own anti-cheat drivers. (I am aware there are already studio-level or common third-party anti-cheat solutions, such as Denuvo or Vanguard. But I would prefer something better.)
> why incorporate it instead of just having proper moderation and/or private servers?
No one wants to become a moderator; they do it out of necessity. So it's pretty much the other way around: a lot of anticheats were, and are, originally developed by community members for private servers (because you're not deploying a third-party anti-cheat onto first-party servers). BattlEye was originally for Battlefield games. PunkBuster for Team Fortress. EasyAntiCheat for Counter-Strike. I even remember the StarCraft: Brood War third-party server ICCup with a custom 'anti-hack' client requirement.
You still see this today with Counter-Strike 2 private servers on FACEIT: they have additional anti-cheat, not less. Same with modded GTA V private servers; FiveM has an anti-cheat they call Adhesive.
And then game developers saw that players were doing this, so they integrated the anti-cheat so that players would not have to download and install it separately. Quake 3 Arena added PunkBuster in an update, for example.
>why incorporate it instead of just having proper moderation and/or private servers?
Because game studios these days are all about global matchmaking. Private servers aren't really a thing any more except in more niche games. Instead you (optionally with a party) queue for matchmaking. Every game has to have a ranked ladder these days, it seems.
I miss the days of Tribes 2 or CS1.6 when games had server browsers
> Because game studios these days are all about global matchmaking
Why not have moderation then? When participating in an online forum, you are essentially "matchmaking" into a topic or corner of the internet with similar interests. Have some moderators (be they members of the community or staff) ban players for obvious hacking/cheating or rule-breaking behaviour, and allow members to report any instances of it (I believe this is already a thing in modern video games; I have seen videos of "influencers" getting enraged when losing and reporting players for "stream sniping").
Sure, this might cause the usual issues of creating an echo chamber where mods and admins might unfairly ban members of the community. But you could always just join a different server in that case.
I believe Minecraft has a system similar to what I described: you enter the address of a server to join, each hosted on its own independent instance (not necessarily hosted by Mojang, the studio behind Minecraft), each with its own unique set of rules and culture, and being banned on one server does not ban you from every other server. Incidentally, Minecraft also does not have kernel-level anticheat, and still very successfully manages to be one of the most popular games around (by some accounts, the top-selling game of all time).
> I miss the days of Tribes 2 or CS1.6 when games had server browsers
>I believe Minecraft has a system similar to what I described
Except every big server has to run an anticheat. Some servers required clients with client side anticheats even. Some servers required you to screen share with a moderator and they would go through the files on your computer to look for cheats. Exploiting people for free labor to moderate servers was never enough to stop cheating. Even with these volunteers, anticheat was essential for seeing which players were flagging checks, to know who to watch.
> Except every big server has to run an anticheat. Some servers required clients with client side anticheats even.
I am fine with anticheat on the server side to help volunteers/moderators find issues, since it does not force the user to install any sketchy kernel-level software. As for the servers that require client-side anticheats, I was unaware there are Minecraft servers that do this (though I do not doubt they exist), and can't speak to it.
> Some servers required you to screen share with a moderator and they would go through the files on your computer to look for cheats.
I was not aware this is a practice some servers engage in. It is beyond ridiculous to ask for a screen share just to verify no cheats were involved, imo, and a major invasion of privacy. The only scenario where I can see this being okay is a physically hosted event where players are playing on devices provided by the organisers, so there would be no expectation of privacy in any case, in the same way you have no expectation of privacy on a work device.
In both cases, you could always find a different server that does not run anticheat, or even start your own server (if you were willing to do that). This isn't something that can even be done in other modern games that employ anticheat drivers and only allow connecting to their single official server.
Re: exploiting people for free labor to moderate servers
Nobody is forcing them to do it; I imagine they do it because they enjoy it and want to give back to the community, the same way someone would contribute to open source or moderate a forum in their spare time. In any case, is it always "free labor"? I have heard of paid transactions and/or donations, sponsors, or servers hosted by streamers who have other sources of income to pay moderators. Though admittedly, I am not familiar with Minecraft in particular, or whether this is actually the case on most servers.
>the same way someone would contribute to open source or moderate a forum in their spare time
It would be like an open-source business where the owner makes millions of dollars a month off the software and then tries to get people to work for him for free to make him even more money. The volunteers do all the work and the owner makes all of the money.
> I agree that having some first-party or reputable anti-cheat driver or system is probably preferable to having different studios roll out their own anti-cheat drivers. (I am aware there are already studio-level or common third-party anti-cheat solutions, such as Denuvo or Vanguard. But I would prefer something better.)
Only Apple really has enough platform lockdown to achieve that. Whatever Microsoft ships would have more holes than swiss cheese (not that I'm opposed to that or anything).
Would that not create the issue that you would only need to find one bypass for said official anti-cheat that then works for all games out there?
I heard that with Denuvo, reverse-engineering work needs to be done for each individual target to unprotect it, but I'm not sure whether that would be the case with a first-party anti-cheat driver.
While I don't like that the executable's update URL is using just plain HTTP, AMD does explicitly state in their program that attacks requiring man-in-the-middle or physical access are out-of-scope.
Whether you agree with whether this rule should be out-of-scope or not is a separate issue.
What I'm more curious about is the presence of both a Development and a Production URL for their XML files, and their use of the Development URL in production. Even though, as the author said, that URL uses TLS/SSL so it's "safe", I would be curious to know whether the executable URLs are the same in both XML files; if not, I would perform binary diffing between the two executables.
I imagine there might be some interesting differential there that could lead to a bug bounty. For example, maybe some developer debug tooling that is present only in the development version but is not safe to use in production and could lead to exploitation, and since they seemed to use the Development URL in production for some reason...
For paying out, maybe, but this is 100% a high priority security issue regardless of AMD's definition of in scope, and yet because they won't pay out for it they also seem to have decided not to fix it.
I already said I do not like that it is just using HTTP, and yes, it is problematic.
What I am saying is that the issue the author reported, and the fact that AMD considers man-in-the-middle attacks out-of-scope, are two separate issues.
If someone reports that a homeowner has left their keys visible on top of the mat in front of their front door, and the homeowner replies that they do not consider intruders entering their home a problem, those are two separate issues, with the latter having wider ramifications (since it determines whether other methods and vectors of MITM attack, besides the one the author of the post reported, are declared out-of-scope as well). But that doesn't mean the former issue is unimportant; it just means that it was already acknowledged, and the latter issue is what should be focused on (at least on AMD's side; it still presents a problem for users who disagree with AMD about it being out-of-scope).
The phrasing of your first two sentences in your first post makes it sound like you're dismissing the security issue. If you mean that it's a real security issue with another, separate issue on top, you should word it very differently.
> The phrasing of your first two sentences in your first post makes it sound like you're dismissing the security issue.
Genuine question: how does it sound like I'm dismissing it? My first sentence begins with the phrase
> I don't like that the executable's update URL is using just plain HTTP
And my second sentence
> Whether you agree with whether this rule should be out-of-scope or not is a separate issue.
which, with context that AMD reported MITM as out-of-scope, clearly indicates that I think of it as an issue, albeit, a separate one from the one the author already reported.