I was in camp 'boolean', but I think this has convinced me. I always had a problem that there were developers who didn't really understand the code, but would click 'approve' anyhow because they didn't see any problems in the parts they understood.
This meant that they were completely unable to actually 'approve' a review, but were only able to reject it. They were juniors, so they'd eventually get to that point, but by then, everyone would be used to just ignoring their approvals.
I haven't used this kind of +/-2 scheme, but I think it could be a good way for reviewers to communicate with each other.
Sometimes I review something and say "approved", but sometimes I can only review part of it, and really need someone else to check what's out of my wheelhouse.
Sort of "partially approved".
I have used systems that can set things like "requires 2 reviewers" or "bob, fred are required reviewers, elon and sam are optional reviewers".
We also had thumbs up / thumbs down, and some comments could have a "task" associated with them as a required fix before approval.
Optionally, before you say "approved" you leave an overall comment and can see the other reviewers' comments.
Sure, it depends on more than just code review, but the code review itself is still a boolean flag, i.e. BOOLEAN getsMerged = codeReview && passesCI && passesTests....
Unless you're implying codeReview is a score and a low code review score can be offset by higher scores elsewhere, e.g. passing more tests?
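To spell out the distinction, here's a throwaway sketch of both readings; all the names are made up for illustration, not any real merge system:

```python
# Throwaway sketch of the two readings above; every name here is illustrative.
def gets_merged_boolean(code_review: bool, passes_ci: bool, passes_tests: bool) -> bool:
    # The "boolean flag" reading: every gate is pass/fail and nothing offsets anything.
    return code_review and passes_ci and passes_tests

def gets_merged_scored(review_score: int, ci_score: int, test_score: int) -> bool:
    # The (strange) "score" reading: a weak review could be offset by strong tests.
    return review_score + ci_score + test_score >= 3
```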
The nuance is comments on the PR itself, rather than the state of the approval, which is binary (or ternary, if you want to count leaving it in an unknown state for extended periods of time).
The PR needs to have someone who knows the whole thing.
Having several people each review separate parts without understanding the others' can cause interaction bugs. If such bugs cannot happen (say, due to modularity, type safety guarantees, etc.), then you don't need a partial approve in the first place.
I am not a fan of partial approve. Either you think the code is approvable, or it isn't.
Domains of expertise are a thing. E.g. Google had "readability", which was the code style and opinionated language expertise that one person might have even without the deep system knowledge for a PR.
You can require approvals from N domains from (potentially) different people.
This could also solve the problem GitHub has where anyone with an account can "approve" a PR, but if you aren't a maintainer for the project your approval doesn't mean anything as far as actually getting the PR merged; at best it is a signal to the original author that it is probably good, and to the actual maintainer that the PR is worth considering.
But with this, a non-maintainer could be allowed to give a +1 or -1, but not a +2 or -2, and it would be much clearer that a "+1" isn't sufficient for actually merging the PR.
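A rough sketch of how that could gate merging, assuming Gerrit-like semantics (any -2 blocks, at least one +2 from a maintainer is required, +1/-1 are advisory); everything below is illustrative, not any particular tool's API:

```python
# Illustrative Gerrit-style vote gating; roles and rules here are assumptions.
from dataclasses import dataclass

@dataclass
class Vote:
    reviewer: str
    score: int           # -2, -1, +1, or +2
    is_maintainer: bool  # only maintainers may cast +2/-2

def can_merge(votes: list[Vote]) -> bool:
    for v in votes:
        if abs(v.score) == 2 and not v.is_maintainer:
            raise ValueError(f"{v.reviewer} may only vote +1/-1")
    if any(v.score == -2 for v in votes):
        return False                          # a veto always blocks
    return any(v.score == 2 for v in votes)   # needs at least one full approval

# A drive-by +1 signals "looks good" but is not sufficient on its own.
print(can_merge([Vote("alice", +1, False)]))                          # False
print(can_merge([Vote("alice", +1, False), Vote("bob", +2, True)]))   # True
```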
I think we are normalizing the PR process here; in reality it's more convoluted and a good reflection of the team/organization itself. The relationship between the author and the reviewer can have a negative impact on the rationale and desired outcome of the PR itself.
To run the process smoothly, one can only hope that the team/tech lead is an ideal developer. Otherwise they end up in a position where no one more senior than them is available for code review, and anyone junior would just rubber-stamp their PRs.
I like the idea of being able to merge a PR that is a partial solution, while keeping the issue open to reflect that it is only partially done. It kinda makes sense to do this in a single action.
Also:
> If [a person is not suitable to make the decision of whether the PR should be approved] then the person should remove themselves from the list of reviewers.
This doesn't reflect what sometimes happens in real life. Someone could have sufficient specialized knowledge to be able to veto a PR, without having sufficient broader knowledge to approve a PR. That person should definitely be left on the reviewer list, with the ability to veto, the necessity to remark if he has vetoed or not, and the inability to definitively approve.
It is necessary for this specialist to notate "I have finished examining this PR, and there is nothing within my expertise that would cause me to veto it" before the PR is advanced.
Unfortunately, in a binary system, that often equates to him having to say "I approve" even though this does not truly capture the intent. Then you wind up with hacky work-arounds, like requiring a minimum number of approvals.
Why wild bet? It seems to be a safe bet that intermittent solar and wind availability will continue to create huge price swings, which storage can exploit and reduce to almost everyone's benefit. The investment just takes a while to break even, that's all. And with LFP or other long-life battery technology, it will pay off for quite a while after break-even.
BESS asset life isn't great, so a longer payback period runs up against product life. Without running the actual model (volume, frequency and price differential) it's tough to tell how quickly it happens, though I'm sure I could build that relatively quickly if I wanted to (assuming sufficiently granular public data).
My main point is that it's a very large asset, so you can't have external forces come along and mess up the financials (such as policy or regulatory changes, or a large infrastructure jump in that area) if you want to make good on that bet. Certainly some public dollars are being put to work to de-risk the bet.
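A back-of-the-envelope version of that model; every number below is an assumption purely for illustration (capex, spread, cycle count and efficiency are all made up):

```python
# Toy BESS arbitrage payback estimate; all inputs are illustrative assumptions.
capex_per_kwh = 250.0      # $/kWh installed (assumed)
spread_per_kwh = 0.10      # $/kWh average buy-low/sell-high spread (assumed)
round_trip_eff = 0.88      # LFP round-trip efficiency (assumed)
cycles_per_year = 330      # roughly one full cycle per day (assumed)

revenue_per_kwh_year = spread_per_kwh * round_trip_eff * cycles_per_year
payback_years = capex_per_kwh / revenue_per_kwh_year
print(f"simple payback: {payback_years:.1f} years")   # ~8.6 years with these numbers
```

With these made-up inputs the simple payback lands at roughly 8-9 years, which is exactly the tension with asset life the comment above describes.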
When the system is so bad that fighting it improves it. It's a standard feature of a decent, non-totalitarian political system that it improves that way. Just don't tell Putin. But he wouldn't get it anyway. Totalitarian and democratic regimes often have basic problems understanding each other.
PREEMPT_LAZY triggering on page faults seems like a bad idea in light of this. It is probably not a good idea to suspend processes right when they get unexpectedly bogged down. The logic makes a little more sense for syscalls that are expected to take long compared to a scheduling quantum (a few milliseconds). But page faults are mostly invisible and unplannable.
It only took a few decades for Linux to get a good CPU scheduler and good I/O schedulers, too. I don't get how such an important area can be so bad for so long. But then, bad scheduling is everywhere. I find it to be a pretty fun area to work in, but, judging by how it's not even half-assed in much existing software, most developers seem to hate dealing with it?
One thing I miss from using Windows is that the desktop didn't just freeze completely if you ran out of RAM.
At first I thought that maybe Linux doesn't have ways to give priority to the desktop environment (a.k.a. "graphical shell") which is why running out of RAM means your cursor starts lagging, clicking on things stops working, etc.
But maybe Linux is just bad at that in general and a single process eating too much RAM can simply bring the whole system to a halt as it tries to move and compress RAM to a pagefile on an HDD (not SSD).
Every time it happens to me I just find it so incredible. Here I am with a PC with multiple cores, multiple processors, and a single process eating all the RAM can bottleneck ALL of them at once? Am I misunderstanding something? Shouldn't it, ideally, work in such a way that so long as one processor is free, the system can process mouse input, render the cursor and do all the desktop stuff no matter what I/O is happening in the background?
Since it's Linux maybe it's just my DE/distro (Cinnamon/Mint). Maybe it does allocations under the assumption there will always be a few free bytes in RAM available, so it halts if RAM runs out while some other DE wouldn't. But even then you'd think there would be a way to just reserve "premium" memory for critical processes so they never become unresponsive.
I wonder if other people have the same experience as me. This part of Linux just always felt fundamentally poor for me.
This issue is much worse if you don't have swap. What happens, I think, is that as memory allocated by processes grows to fill the available RAM, it starts to push out memory that doesn't technically need to be in RAM, like cached file pages. Which accounts for some of the slowdown, until it reaches the code itself, which is 'just' a memory mapped file. So eventually most of the code that is actively trying to run is being pushed out of RAM and must be loaded in as it executes, slowing everything to a crawl and generally creating a death spiral. If you have swap the kernel can decide to put other pages onto disk and keep the more important stuff in memory. Or you can run something like early-oom which stops things from getting to that point in the first place (albeit in a somewhat brute-force manner).
Dealing with low-memory situations elegantly is pretty hard: firstly, Linux uses memory overcommit by default, in part because the semantics of fork imply very large memory commitments which are almost never realised, and in part because a lot of software does the same because it's the default. Secondly, managing allocation failures is often tricky and ill-tested, and often requires co-ordination between different systems. The DE could, though, in principle, put running applications in a container which would prevent them from using more than a certain amount of memory, but the results are similar to early-oom in that the result of reaching the limit is almost certainly the termination of the process using the most memory.
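As a sketch of that last point: a DE could, in principle, cap an application with a cgroup v2 memory limit something like the below. The paths, app name and limits are all illustrative assumptions (it also assumes cgroup v2 is mounted at /sys/fs/cgroup, the parent group has the memory controller enabled, and the DE has write permission there):

```python
# Illustrative cgroup v2 memory cap for a launched application; paths, names and
# limits are assumptions, not what any real DE does today.
import os, subprocess

cg = "/sys/fs/cgroup/apps/heavy-app"   # assumes cgroup v2 here, write permission,
os.makedirs(cg, exist_ok=True)         # and memory controller enabled in the parent

with open(os.path.join(cg, "memory.high"), "w") as f:
    f.write(str(3 * 1024**3))          # soft cap: reclaim/throttle beyond 3 GiB
with open(os.path.join(cg, "memory.max"), "w") as f:
    f.write(str(4 * 1024**3))          # hard cap: beyond 4 GiB the OOM killer acts in this group

proc = subprocess.Popen(["some-heavy-app"])   # hypothetical application
with open(os.path.join(cg, "cgroup.procs"), "w") as f:
    f.write(str(proc.pid))             # move the new process into the limited cgroup
```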
Yes, but the problem, I feel, is the priority of what gets pushed out of RAM on Linux.
You could split the processes into 2 categories:
1: applications that are doing tasks the user wants.
2: OS processes that the user needs to interact with in order to terminate applications.
There is an argument for applications taking priority: the user wants to do a task, and if you move the application out of RAM, the task is going to take longer.
But to me OS processes, including the graphical shell (taskbar, windowing system, etc.), should have priority: if an application hangs on I/O, the user NEEDS to be able to use the taskbar in order to terminate the application, otherwise they're going to have to wait who knows how long for the application to finish its task (or just hard reset the computer).
I don't know anything about how Linux handles memory, but the impression I have is that it has its priorities wrong, or it may not even have a way to configure priorities (unlikely), or maybe there is a way to prioritize what is kept in memory but it only distinguishes kernel/userspace memory, so DEs that sit in userspace don't get priority (i.e. it's inadequate for a graphical operating system).
To be frank, as a desktop Linux user my biggest fear is that the Linux kernel is perfectly capable of prioritizing kernel/userspace memory, but it has no way to prioritize DE's. In other words, that the "graphical OS" use case of Linux is a second-class citizen, a feature bolted on top of GNU/Linux/Systemd. Because that would mean a lot of things are considered only from the perspective of a Linux server. This is only my imagination talking, since I'm not really involved with how Linux works. But to be fair I was never involved with how Windows worked either, and I never doubted it considered desktop a primary use case.
A specific process can use mlockall to keep all its mapped memory resident and prevent swapping or page cache eviction. That's what earlyoom does so that it can stay responsive when memory gets low. It's unfortunately underutilized in other infrastructure. It's also all-or-nothing: everything stays resident until it's munlocked regardless of how frequently it's used.
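For reference, mlockall is essentially a one-liner from C, and reachable from Python via ctypes too; a minimal sketch, assuming Linux with glibc (without CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK it will typically fail with EPERM/ENOMEM):

```python
# Minimal mlockall sketch for Linux via ctypes; constants are from <sys/mman.h>.
import ctypes, os

MCL_CURRENT = 1   # lock pages currently mapped
MCL_FUTURE  = 2   # lock pages mapped in the future

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))
# From here on, this process's pages stay resident and cannot be swapped out
# or evicted, until munlockall() is called or the process exits.
```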
I had hoped that something like Linux Pressure Stall Information (PSI) would become more useful for low-memory scenarios. E.g. you could put critical processes in a cgroup that could rate-limit swap-outs/evictions so that it was always responsive. There are some cgroup knobs that affect reclamation, but you need a really good guess about how much memory something needs, which makes it hard to use.
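PSI is at least trivial to read: /proc/pressure/memory exposes "some" and "full" stall averages, and a watchdog can react to them. A rough sketch, where the 10% threshold and 2-second poll are arbitrary assumptions:

```python
# Rough PSI watchdog sketch; the threshold and polling interval are arbitrary.
import time

def memory_pressure_full_avg10() -> float:
    # /proc/pressure/memory looks like:
    #   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
    #   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
    with open("/proc/pressure/memory") as f:
        for line in f:
            if line.startswith("full"):
                return float(line.split()[1].split("=")[1])
    return 0.0

while True:
    if memory_pressure_full_avg10() > 10.0:
        print("memory pressure high: time to reclaim or kill something")
    time.sleep(2)
```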
The issue is, if you don't have swap, then it's not a matter of prioritisation: there are some things in RAM that can't be pushed out, regardless of how unimportant they are (so the only recourse is to terminate some processes, usually the ones using the most RAM. The Linux kernel by default tries really hard not to do this, which is why there are userspace applications like earlyoom to do it, which is probably the better location for such logic).
And yeah, you can adjust the priorities and the latency/throughput tradeoff (even per-application to some extent), but it's a difficult thing to get right in general (what works for one use-case might make another a lot worse). I don't know of any DE that really tries to adjust this, though (not because the kernel can't, but probably because no-one on the DE side has really prioritised it, or they have tried and it hasn't made a noticeable difference).
It’s even worse than that… you can hard lock a system with significant freeable memory left if you have insane vm.dirty_* settings (which is of course the case by default)
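Those knobs are plain sysctls, so they are at least easy to inspect; a small sketch (the absolute values mentioned in the comments are only an example, not a recommendation):

```python
# Read the writeback knobs the parent comment refers to; for illustration only.
def vm_sysctl(name: str) -> str:
    with open("/proc/sys/vm/" + name) as f:
        return f.read().strip()

for knob in ("dirty_ratio", "dirty_background_ratio",
             "dirty_bytes", "dirty_background_bytes"):
    print(knob, "=", vm_sysctl(knob))

# The defaults are percentage-based (commonly 20%/10% of RAM), so on a large-memory
# machine gigabytes of dirty pages can accumulate before writeback kicks in, and a
# slow disk then stalls everything. One common tweak is switching to absolute limits
# (as root), e.g. writing 512 MiB / 128 MiB into dirty_bytes / dirty_background_bytes.
```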
zram + no disk swap is a surprisingly workable workaround IME. The system slows down by a factor of 100 or so instead of 100,000 or so, which allows you to kill the offending process in a few seconds or have it killed by the OOM killer faster than a reboot.
The two mitigations are to:
- (somewhat counterintuitively) have swap enabled
- run something like earlyoom to stop the system from reaching a low-RAM situation in the first place.
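The second mitigation is small enough to sketch: roughly what earlyoom does is poll available memory and kill the largest process before the kernel ever has to. A simplified, illustrative version (the 5% threshold and the "kill biggest RSS" policy are assumptions; real earlyoom is more careful):

```python
# Simplified earlyoom-style watchdog; threshold and victim policy are illustrative.
import os, signal, time

def mem_available_pct() -> float:
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, val = line.split(":", 1)
            info[key] = int(val.split()[0])   # values are in kB
    return 100.0 * info["MemAvailable"] / info["MemTotal"]

def biggest_process() -> int:
    best_pid, best_rss = -1, -1
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/statm") as f:
                rss_pages = int(f.read().split()[1])   # resident set size in pages
        except (OSError, ValueError):
            continue
        if rss_pages > best_rss:
            best_pid, best_rss = int(pid), rss_pages
    return best_pid

while True:
    if mem_available_pct() < 5.0:             # act well before the kernel OOM killer would
        victim = biggest_process()
        if victim > 0:
            os.kill(victim, signal.SIGTERM)   # earlyoom escalates to SIGKILL if needed
    time.sleep(1)
```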
Depends on how strong a moat really is, but it can be "enshittify and lose", too. Enlightened (as opposed to short-term) self-interest may pay off after two years or twenty, depending, and in the latter case it may as well not pay off at all as far as a public company is concerned.
I think Microsoft’s home game is “monopolize and enshittify”. They are the masters and know exactly what amount of enshittification is too much. E.g. Hashimoto quitting GH is probably totally worth the 10 SREs they fired. Us plebs cannot go anywhere.
I'd rather have the open core one running on my own servers (i.e. GitLab), but performance is a few orders of magnitude away from acceptable for both Git**b.
-2: This is a bad idea, don't do that
-1: This is a good idea but needs improvement
+1: LGTM but I don't have enough knowledge or authority to approve
+2: Approved