My uncle Richard is one of the inventors on Honeywell’s early phase-detect autofocus, patent US4333007A, which figures out both the direction and the amount the lens needs to move instead of hunting.
Modern systems like Canon’s Dual Pixel AF in bodies such as the EOS R5 are very direct descendants of that idea, just implemented on‑sensor with far more processing power.
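As a rough illustration of the principle (my toy sketch, not the patent's actual method): two sub-aperture views of the same scene shift in opposite directions when the image is out of focus, and cross-correlating them yields both the sign and the size of the lens correction in a single measurement.

```python
import numpy as np

# Toy 1-D model only: a random "scene" seen through the left and right halves
# of the lens. Out of focus, the two views are shifted in opposite directions.
rng = np.random.default_rng(0)
scene = rng.normal(size=256)
defocus = 7                                   # unknown to the camera
left  = np.roll(scene,  defocus)
right = np.roll(scene, -defocus)

# Cross-correlate the two strips; the lag of best match is the phase offset.
lags = np.arange(-20, 21)
scores = [np.dot(np.roll(left, -lag), right) for lag in lags]
offset = int(lags[np.argmax(scores)])

# Its sign says which way to drive the lens, its size says how far -- no hunting.
print("phase offset:", offset)                # 14 here, i.e. twice the per-strip shift
```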
Every time I see an article such as this, I beam with pride. (Pun intended).
I remember an anecdote our robotics lecturer told our university class in 1995, about how in the West we try to build expensive things at the absolute cutting edge of technology, while the other side didn't have that luxury and relied on ingenuity instead.
He described a cold war Russian missile they had somehow obtained and were tasked with trying to reverse engineer. Ostensibly, it was thought to be a heat seeking missile, but there seemed to be no control or guidance circuitry at all. There was a single LDR (light dependent resistor) attached to a coil which moved a fin. That was it. Total cost for the guidance system maybe a couple of dollars, compared to hundreds of thousands for the cheapest guidance systems we had at the time.
The key insight was that if you shined a light at it, the fin moved one way and if there was no light the fin moved the opposite way. That still didn't explain how this was able to guide a missile, but the next realisation was that the other fins were angled so when this was flying (propelled by burning rocket fuel), the missile was inherently unstable - rotating around the axis of thrust and wobbling slightly. With the moveable fin in place, it was enough to straighten it up when it was facing a bright light, and wobble more when there was no bright light. Because it was constantly rotating, you could think of it as defaulting to exploring a cone around its current direction, and when it could see a light it aimed towards the centre of that cone. It was then able to "explore the sky" and latch on to the brightest thing it could see, which would hopefully be the exhaust from a plane, and so it would be able to lock on, and adjust course on a moving target with no "brain" at all.
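Here's a toy 2-D sketch of that control loop in Python (all constants invented, no aerodynamics), just to show how a bang-bang photocell plus spin can converge on a light source:

```python
import math

# Toy model of the mechanism described above. The seeker cones around the
# flight direction as the missile spins; seeing light nudges the heading
# toward wherever it was looking, darkness widens the wobble to search more.
target = math.radians(40)        # bearing of the hot exhaust
heading = 0.0                    # current flight direction
wobble = math.radians(15)        # half-angle of the search cone
spin = 0.0

for _ in range(400):
    spin += math.radians(30)                     # canted fins keep it spinning
    look = heading + wobble * math.sin(spin)     # where the photocell points
    if abs(look - target) < math.radians(5):     # it sees light
        heading += 0.3 * (look - heading)        # fin kick: straighten toward the light
        wobble = max(math.radians(3), wobble * 0.98)
    else:                                        # darkness: wobble more, search wider
        wobble = min(math.radians(45), wobble * 1.01)

print(f"heading {math.degrees(heading):.1f} deg, target {math.degrees(target):.1f} deg")
```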
This is really funny. My wife and I watched all of New Scandinavian Cooking over a few months and there was an episode where he made butter. It blew our minds at how simple it was. We had no idea!
So we bought a couple of liters of cream (35% fat), put it in the stand mixer and made butter. There's a Serious Eats page about it.
The butter we made was better than what we normally buy. We live in Switzerland so the normal grocery store butter is very good. Our butter had less water in it (you can tell in a frying pan) and more flavor. Plus we take the resulting buttermilk and make ricotta cheese and then we take the leftover whey and make Norwegian cheese (more like fudge). So we get three products from one batch of cream. The butter comes out to be about 20 cents cheaper per 250g than store bought and then the ricotta and "fudge" are free, so financially you come out ahead. The cleanup is a bit of a pain though.
We've also made cultured butter from crème fraiche. It's tasty but even when the crème fraiche is on sale it's still like 2x the cost of using cream so probably not worth it other than gifts and special occasions. We made mandarin sorbet with the sour buttermilk after the crème fraiche butter and that was excellent.
When I tell old Swiss people (people in their 70s/80s) that we make butter they think it's hilarious. They tell me about how when they were kids their parents made their own butter and also at parties/gatherings the parents would give the kids a jar of cream and it was their job to shake it and pass it around until it was butter.
If you have an hour on the weekend and you have a stand mixer, I suggest just trying it. Start with the balloon whisk, and when the peaks start forming, switch to the paddle. Watch it, because when the butter forms it happens quickly and you get a big clump of butter rattling around in the mixer, knocking it off balance. It takes maybe 25 minutes, and then you have to wash it in ice water, mold it, then clean up. About an hour.
That was back when AltaVista, one of the first web search engines, was in downtown Palo Alto.
Brian Reid was behind that. It was intended as a demo for the DEC Alpha CPU. They wanted to show that a large number of little machines could do a big job, which was a radical idea at the time.
They were leasing an old telco building, on Bryant St. behind the Walgreens on University Avenue. The telco had moved to a larger building nearby when they went from crossbar to 5ESS, leaving behind the very tall racks typical of electromagnetic central offices.
That's where the modern data center began. Before this, data centers were raised floor operations. This one was racks and racks of identical servers, with cable trays overhead. This was the first one to look like a telephone central office. Because that's what it was before.
The building is still some kind of data center. For a while, it was PAIX, the Palo Alto Internet Exchange, the peer meeting point for west coast ISPs. Equinix has it now; it's their SV8 location, offering colocation services. Small by modern standards, but close to the early HQs of many famous startups, including Facebook.
The grease problem was written up in the local newspaper, back when Palo Alto had one. Palo Alto Utilities (the city owns its power company) got the report, and quickly realized someone was dumping grease into their transformer vault. So they put someone on stakeout, watching all night. The offending restaurant employee was caught. The restaurant was fined and billed for the cleanup.
In 2006, there was another grease dumping incident in a transformer vault a block further north. This one did result in a grease fire.[1]
Palo Alto Fire Department has a CO2 truck, and dumped enough CO2 in to put out the fire. Power was out for most of the night.
One of my favorite quotes:
“There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies.”
I think about this a lot because it’s true of any complex system or argument, not just software.
He came to give a lecture at UT Austin, where I did my undergrad. I had a chance to ask him a question: "what's the story behind inventing QuickSort?". He said something simple, like "first I thought of MergeSort, and then I thought of QuickSort" - as if it were just natural thought. He came across as a kind and humble person. Glad to have met one of the greats of the field!
Fun story - at Oxford they like to name buildings after important people. Dr Hoare was nominated to have a house named after him. This presented the university with the dilemma of having a literal `Hoare House` (Hoare being pronounced like "whore").
I can't remember what Oxford did to resolve this, but I think they settled on `C.A.R. Hoare Residence`.
It looks like this is a community continuation of the axlsx gem which was maintained back in the day by Randy Morgan (randym) over at https://github.com/randym/axlsx. One of my earliest open source contributions was adding support so that you could output spreadsheets with "conditional" formatting (color something red if it is below some value, for instance). I remember Randy being extremely supportive of new contributors and it made me want to be a part of the ruby community.
FidoNet was a simply wonderful innovation, and it was a reflection of the creativity of its author - Tom Jennings - and his views of community and identity.
https://grokipedia.com/page/tom_jennings
Tom was working on FidoNet in 1984, the same time my Iris co-founders and I had begun work on what became Lotus Notes. Architecturally, those of us who were working on collaborative systems in that era were shaped by the decentralized architecture of USEnet - inspired and motivated by the observation that a community could be brought together by something technologically as simple as uucp.
Both were dial-up focused: Tom took this in the direction of a decentralized BBS, while I took it in the direction of the masterless replicated NoSQL databases we called 'notefiles'. Identity being at the core, Tom was focused more on public community while we focused on private collaboration.
It was such an exciting time for emergent decentralization, shaped by a strong dose of 60's idealism.
I have such strong nostalgia for that era, but man, every time I try to go back and experience a BBS like this, it just feels so empty. There really isn't a way to recapture the feelings of back then. I admire them for keeping it alive, but the magic was long ago dispelled by ubiquitous internet connectivity.
I can go play a retro videogame and be taken back, but I've never felt that way with a BBS. Maybe it's just the intensity of what the BBS world was back then. It was a way into another world.. an exclusive world.. the first taste of digital life, long before it was taken over by the masses. An intimate community, but also a gateway to esoteric and faraway lands.
I was 12 when I got my first modem in '87. Suddenly I was no longer trapped in my town but connected to something secret yet global. Sure, long-distance charges kept things local for the most part, but it wasn't long before I found a way around that. Stolen calling cards, open PBXes, then Tymnet/Telenet and then in '90 an internet gateway of a local university. Wardialing, finding strange systems in the night... poking around until something gave way. Arrested. Reset. Probation. No computers. It all came to a halt. Then one day at Boeing Surplus I found an old green screen terminal and a 300 baud acoustic modem. Back online.. but the world began to change. MBBS, multi-line systems, and the world began to open. The world wide web began to take shape, Yahoo awoke, and the old steamship rolled into port for the last time.
The few times I've baked there, it's been a pretty good experience. There's a full height proving cabinet, yeast works really well at altitude, the ovens have steam injectors, there are good mixers, a commercial fryer. In many ways much easier than baking at home, but probably not a patch on a good bakery.
We almost ran out of sugar in 2021 and Rothera sent us a bag of Tate and Lyle in a break-glass-in-emergency box on one of the early transit flights the following summer. That's still hanging in the galley. Cream also goes pretty quickly, and forget about eggs. But you only need "egg product" anyway.
The foods that tend to be avoided are pasta and beans, or really anything that has to be boiled. There's a massive pressure cooker but it's a pain to use and clean. It's also hard to brew coffee if you like to use water straight off the boil; the best you'll get is about 93 C. Espresso is fine as it's pressurised anyway.
Jay here: this is a transition I've been working towards for a while, and I'm looking forward to advancing the vision and ecosystem as CIO (Chief Innovation Officer). Toni has been an advisor to us for years, and I personally recruited him to take over as CEO while I focus on new projects within the company. It's an honor to have him on board to lead us into this next stage of growth.
I think Steve Lemay is a good guy. I kind of fought with him when I was an engineer and he was a young, new designer (at Apple). But I always respected his point of view—even when we argued.
When Jobs came back to Apple in the late 1990s, "Design" slowly came to have an outsized role. I was one half of the engineering team that owned Preview (the application) when Steve Lemay became a seemingly regular presence in the hallway. As the new "Aqua" UI elements arrived in the OS, like the "drawer" and toolbar, Steve and his boss (forgetting his name right now—Greg Somebody?) were often making calls about our UI implementation.
The bigger argument I remember with Steve revolved around the drawer UI element. With regard to PDFs (the half of Preview that I worked on; another engineer handled images), the drawer was to display thumbnails for each page. If the PDF had a TOC (table of contents), the drawer is where we would display that as well.
So when you opened a PDF in Preview, the PDF content of course would appear in the large window—thumbnails, TOC (later search) would be relegated to a vertical strip of drawer real estate alongside the window—the user could open/close the drawer if they liked to focus perhaps on the content.
Steve Lemay insisted the drawer live on the right side of the window [1]. This was inexplicable to me. I saw the layout of Preview as hierarchical: the left side of the content driving the right side. You click a thumbnail on the left (in the drawer) the window content on the right changes to reflect the thumbnail clicked on.
Steve said, no, drawer on the right.
"Why? Why the hell would we do that?"
Steve was quick: "The Preview app is about the content. The content is king."
I admit that I still disagreed with him after the exchange, but I had a new respect for him as a designer because he was able to articulate a rationale for his decision. I suppose I was prejudiced to expect hand-waving from designers.
(Coda: some years later, after I had left the Preview team, an engineer still on the app let me know that the thumbnails, etc., were at last moving to the left side of the app. The "drawer" as a UI element had by this time gone away: resizable split-views were the replacement.)
(Addendum: Steve also invented the early Safari URL text field that also doubled as a progress bar. Instant hate from me when I saw it: it was as if the text of the URL you entered was being selected as the page loaded. So I'm old-school and Steve had some new ideas…)
[1] Localization was such that in countries where right-to-left was dominant, the drawer would of course follow suit.
I grew up in a small village on a small island.
The yogurt lady was an essential part of the community.
Many stay-at-home moms (including my mom) seemed to enjoy her visit.
She and my mom talked a lot, sometimes for hours (I still can't figure out how she completed her job when she spent so much time with one person).
They chatted about recent events: the fisherman's daughter giving birth, the great-grandpa at the liquor shop dying of cancer, a newly opened restaurant in the nearest town being terrible. And sometimes they even shared personal struggles or family matters.
It really helped a lot of people combat mental struggles caused by the isolation of being traditional stay-at-home wives in a super rural area.
The only downside was anything you shared with her would be spread in the entire village before dawn.
Depending on the author, 17th-century English can also be very close to modern English. A couple of phrases will be off and the spelling is different, but most of the difficulty comes from the author using constructions that have fallen out of use or "showing off" with overly complicated sentences.
For example, here's an excerpt from 1688's "Oroonoko":
I have often seen and convers'd with this great Man, and been a Witness to many of his mighty Actions; and do assure my Reader, the most Illustrious Courts cou'd not have produc'd a braver Man, both for Greatness of Courage and Mind, a Judgment more solid, a Wit more quick, and a Conversation more sweet and diverting. He knew almost as much as if he had read much: He had heard of, and admir'd the Romans; he had heard of the late Civil Wars in England, and the deplorable Death of our great Monarch; and wou'd discourse of it with all the Sense, and Abhorrence of the Injustice imaginable. He had an extream good and graceful Mien, and all the Civility of a well-bred great Man.
I've told this story before on HN, but my biz partner at ArenaNet, Mike O'Brien (creator of battle.net) wrote a system in Guild Wars circa 2004 that detected bitflips as part of our bug triage process, because we'd regularly get bug reports from game clients that made no sense.
Every frame (i.e. ~60FPS) Guild Wars would allocate random memory, run math-heavy computations, and compare the results with a table of known values. Around 1 out of 1000 computers would fail this test!
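The Guild Wars code isn't public, but the shape of the idea is simple enough to sketch: run the same deterministic, memory- and FPU-heavy computation repeatedly and compare it against a reference; on healthy hardware the answer never changes, so any mismatch points at flaky RAM, overclocking, heat, or power. A rough illustration (my sketch, not their implementation):

```python
import math
import random

def workload(seed: int) -> float:
    # Deterministic, memory- and FPU-heavy work: fill a fresh buffer from a
    # fixed seed, then churn through it with transcendental math.
    rng = random.Random(seed)
    buf = [rng.random() for _ in range(50_000)]
    acc = 0.0
    for x in buf:
        acc += math.sin(x) * math.cos(acc)
    return acc

reference = workload(1234)        # the "known value", computed once

failures = 0
for _ in range(50):               # in a game this would run every frame
    if workload(1234) != reference:
        failures += 1             # same inputs, different answer => flaky hardware

print("hardware stability failures:", failures)
```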
We'd save the test result to the registry and include the result in automated bug reports.
The common causes we discovered for the problem were:
- overclocked CPU
- bad memory wait-state configuration
- underpowered power supply
- overheating due to under-specced cooling fans or dusty intakes
These problems showed up because Guild Wars was rendering outdoor terrain, and so pushed a lot of polygons compared to many other 3D games of that era (which could clip extensively using binary-space partitioning, portals, and other techniques that don't work so well for outdoor scenes). So the game caused computers to run hot.
Several years later I learned that Dell computers had an unreasonably high rate of analog-component problems because Dell sourced the absolute cheapest parts for their computers; I expect that was also a cause.
And then a few more years on I learned about RowHammer attacks on memory, which was likely another cause -- the math computations we used were designed to hit a memory row quite frequently.
Sometimes I'm amazed that computers even work at all!
Incidentally, my contribution to all this was to write code to launch the browser upon test-failure, and load up a web page telling players to clean out their dusty computer fan-intakes.
In the 1990s, in the UK, my secondary school English teacher—who had Shakespearian actor vibes and wore dark tweed trousers and a plain white shirt; imagine Patrick Stewart if you will—brought this poem to life in my class by vividly re-enacting a soldier dying from mustard gas poisoning, falling onto a desk and flailing about in front of the stunned students sitting at it. I've never forgotten the closing line since.
I attached a generator with some supercaps and an inverter to a stationary bicycle a few years ago, and even though I mostly use it as a way to feel less guilty watching Youtube videos, it does give me a quite literal feel for some of the items on the lower end of the scale.
- Anything even halfway approaching a toaster or something with a heater in it is essentially impossible (yes, I know about that one video).
- A vacuum cleaner can be run for about 30 seconds every couple minutes.
- LED lights are really good, you can charge up the caps for a minute and then get some minutes of light without pedaling.
- Maybe I could keep pace with a fridge, but not for a whole day.
- I can do a 3D printer with the heated bed turned off, but you have to keep pedaling for the entire print duration, so you probably wouldn't want to do a 4 hour print. I have a benchy made on 100% human power.
- A laptop and a medium sized floor fan is what I typically run most days.
- A modern laptop alone, with the battery removed and playing a video is "too easy", as is a few LED bulbs or a CFL. An incandescent isn't difficult but why would you?
- A cellphone you could probably run in your sleep
Also gives a good perspective on how much better power plants are at this than me. All I've made in 4 years could be made by my local one in about 10 seconds, and cost a few dollars.
As others have said, keep the battery in the 30%-80% range. Use the `batt` CLI tool to hard-limit your max charge to 80%. Sadly, if you're already down to <2 hrs, this might not make sense for you. Also prevent it from being exposed to very hot or cold temps (even when not in use).
I type this from an M3 Max 2023 MBP that still has 98% battery health. But admittedly it's only gone through 102 charge cycles in ~2 years.
(use `pmset -g rawbatt` to get cycle count or `system_profiler SPPowerDataType | grep -A3 'Health'` to get health and cycles)
These sorts of core-density increases are how I win cloud debates in an org.
* Identify the workloads that haven't scaled in a year. Your ERPs, your HRIS, your dev/stage/test environments, DBs, Microsoft estate, core infrastructure, etc. (EDIT, from zbentley: also identify any cross-system processing where data will transfer from the cloud back to your private estate to be excluded, so you don't get murdered with egress charges)
* Run the cost analysis of reserved instances in AWS/Azure/GCP for those workloads over three years
* Do the same for one of these high-core "pizza boxes", but amortized over seven years (a rough sketch with made-up numbers follows this list)
* Realize the savings to be had moving "fixed infra" back on-premises or into a colo versus sticking with a public cloud provider
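To make the second and third bullets concrete, here's a back-of-envelope version of that comparison with every number invented purely for illustration (plug in your own quotes):

```python
# All figures are hypothetical placeholders -- substitute real quotes.
vm_monthly_ri = 260        # 3-yr reserved instance, $/month per VM-equivalent
vms_replaced  = 40         # steady-state VMs one dense 2U box could absorb

cloud_3yr = vm_monthly_ri * vms_replaced * 36

box_capex  = 55_000        # dense 2U server, fully loaded
colo_month = 900           # rack share, power, cooling, remote hands ($/month)
support_yr = 6_000         # vendor support contract ($/year)
onprem_7yr = box_capex + colo_month * 84 + support_yr * 7
onprem_3yr = onprem_7yr * 36 / 84      # 3-year share of the 7-year amortization

print(f"cloud, 3 years:   ${cloud_3yr:,.0f}")     # ~$374k with these numbers
print(f"on-prem, 3 years: ${onprem_3yr:,.0f}")    # ~$74k with these numbers
```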
Seriously, what took a full rack or two of 2U dual-socket servers just a decade ago can be replaced with three 2U boxes with full HA/clustering. It's insane.
Back in the late '10s, I made a case to my org at the time that a global hypervisor hardware refresh and accompanying VMware licenses would have an ROI of 2.5yrs versus comparable AWS infrastructure, even assuming a 50% YoY rate of license inflation (this was pre-Broadcom; nowadays, I'd be eyeballing Nutanix, Virtuozzo, Apache Cloudstack, or yes, even Proxmox, assuming we weren't already a Microsoft shop w/ Hyper-V) - and give us an additional 20% headroom to boot. The only thing giving me pause on that argument today is the current RAM/NAND shortage, but even that's (hopefully) temporary - and doesn't hurt the orgs who built around a longer timeline with the option for an additional support runway (like the three-year extended support contracts available through VARs).
If we can't bill a customer for it, and it's not scaling regularly, then it shouldn't be in the public cloud. That's my take, anyway. It sucks the wind from the sails of folks gung-ho on the "fringe benefits" of public cloud spend (box seats, junkets, conference tickets, etc...), but the finance teams tend to love such clear numbers.
A couple years back John Reilly posted on HN "How I ruined my SEO" and I helped him fix it for free. He wrote about the whole thing here: https://johnnyreilly.com/how-we-fixed-my-seo
Happy to do the same for you if you want.
The quickest win in your case: map all the backlinks the .net site got (happy to pull this for you), then email every publication that linked to it. "Hey, you covered NanoClaw but linked to a fake site, here's the real one." You'd be surprised how many will actually swap the link. That alone could flip things.
Beyond that there's some technical SEO stuff on nanoclaw.dev that would help - structured data, schema, signals for search engines and LLMs. Happy to walk you through it.
update: ok this is getting more traction than I expected so let me give some practical stuff.
1. Google Search Console - did you add and verify nanoclaw.dev there? If not, do it now and submit your sitemap. Basic but critical.
2. I checked the fake site and it actually doesn't have that many backlinks, so the situation is more winnable than it looks.
3. Your GitHub repo has tons of high quality backlinks which is great. Outreach to those places, tell the story. I'm sure a few will add a link to your actual site. That alone makes you way more resilient to fakers going forward. This is only happening because everything is so new. Here's a list with all the backlinks pointing to your repo:
4. Open social profiles for the project - Twitter/X, LinkedIn page if you want. This helps search engines build a knowledge graph around NanoClaw. Then add Organization and sameAs schema markup to nanoclaw.dev connecting all the dots (your site, the GitHub repo, the social profiles). This is how you tell Google "these all belong to the same entity." (A minimal markup example is sketched after this list.)
5. One more thing - you had a chance to link to nanoclaw.dev from this HN thread but you linked to your tweet instead. Totally get it, but a strong link from a front-page HN post with all this traffic and engagement would do real work for your site's authority. If it's not crossing any rule (specific use case here, so maybe check with the mods haha), drop a comment here with a link to nanoclaw.dev. I don't think anyone here would mind if it gets you a few steps closer to beating that fake site.
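For point 4, here's a minimal sketch of what that Organization/sameAs markup could look like; the repo and profile URLs are placeholders, and it's emitted from Python only to keep it runnable. The JSON goes inside a <script type="application/ld+json"> tag on nanoclaw.dev:

```python
import json

# Placeholder URLs -- swap in the real repo and profiles before publishing.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "NanoClaw",
    "url": "https://nanoclaw.dev",
    "sameAs": [
        "https://github.com/example/nanoclaw",   # placeholder
        "https://x.com/example_nanoclaw",        # placeholder
    ],
}
# Paste the output into a <script type="application/ld+json"> tag on the site.
print(json.dumps(org, indent=2))
```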
Oh, this is really interesting to me. This is what I worked on at Amazon Alexa (and have patents on).
An interesting fact I learned at the time: The median delay between human speakers during a conversation is 0ms (zero). In other words, in many cases, the listener starts speaking before the speaker is done. You've probably experienced this, and you talk about how you "finish each other's sentences".
It's because your brain is predicting what they will say while they speak, and processing an answer at the same time. It's also why, when they say something you didn't expect, you say "what?" and then answer half a second later, once your brain corrects.
Fact 2: Humans expect a delay on their voice assistants, for two reasons. One reason is because they know it's a computer that has to think. And secondly, cell phones. Cell phones have a built in delay that breaks human to human speech, and your brain thinks of a voice assistant like a cell phone.
Fact 3: Almost no response from Alexa is under 500ms. Even the ones that are served locally, like "what time is it".
Semantic end-of-turn is the key here. It's something we were working on years ago, but didn't have the compute power to do it. So at least back then, end-of-turn was just 300ms of silence.
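For the curious, the naive silence-gate version is only a few lines; the frame size and 300 ms threshold below are just the round numbers mentioned above, nothing Alexa-specific. A semantic detector would instead ask a model whether the partial transcript already reads like a complete request.

```python
FRAME_MS = 20        # assume a VAD emits one speech/no-speech flag per 20 ms frame
SILENCE_MS = 300     # the naive end-of-turn threshold described above

def end_of_turn(frames):
    """frames: booleans, True = speech in that frame. Yields indexes where a
    silence-only detector would declare the speaker's turn finished."""
    silent = 0
    for i, is_speech in enumerate(frames):
        silent = 0 if is_speech else silent + FRAME_MS
        if silent >= SILENCE_MS:
            yield i
            silent = 0

# "what time ... is it": a 240 ms mid-utterance pause, then a 400 ms final pause.
frames = [True] * 30 + [False] * 12 + [True] * 20 + [False] * 20
print(list(end_of_turn(frames)))   # fires once, only after the final pause
```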
This is pretty awesome. It's been a few years since I worked on Alexa (and everything I wrote has been talked about publicly). But I do wonder if they've made progress on semantic detection of end-of-turn.
Edit: Oh yeah, you are totally right about geography too. That was a huge unlock for Alexa. Getting the processing closer to the user.
Author of LFortran here. The historical answer is that both LFortran and Flang started the same year, possibly the very same month (November 2017), and for a while we didn't know about each other. After that both teams looked at the other compiler and didn't think it could do what they wanted, so continued on their current endeavor. We tried to collaborate on several fronts, but it's hard in practice, because the compiler internals are different.
I can only talk about my own motivation to continue developing and delivering LFortran. Flang is great, but on its own I do not think it will be enough to fix Fortran. What I want as a user is a compiler that is fast to compile itself (under 30s for LFortran on my Apple M4, and even that is at least 10x too long for me, but we would need to switch from C++ to C, which we might later), that is very easy to contribute to, that can compile Fortran codes as fast as possible (LLVM is unfortunately the bottleneck here, so we are also developing a custom backend that does not use LLVM that is 10x faster), that has good runtime performance (LLVM is great here), that can be interactive (runs in Jupyter notebooks), that creates lean (small) binaries, that fully runs in the browser (both the compiler and the generated code), that has various extensions that users have been asking for, etc. The list is long.
Finally, I have not seen Fortran users complaining that there is more than one compiler. On the contrary, everybody seems very excited that they will soon have several independent high-quality open source compilers. I think it is essential for a healthy language ecosystem to have many good compilers.
The way I write code with AI is that I start with a project.md file, where I describe what I want done. I then ask it to make a plan.md file from that project.md to describe the changes it will make (or what it will create if Greenfield).
I then iterate on that plan.md with the AI until it's what I want. I then ask it to make a detailed todo list from the plan.md and attach it to the end of plan.md.
Once I'm fully satisfied, I tell it to execute the todo list at the end of the plan.md, and don't do anything else, don't ask me any questions, and work until it's complete.
I then commit the project.md and plan.md along with the code.
So my back and forth on getting the plan.md correct isn't in the logs, but that is much like intermediate commits before a merge/squash. The plan.md is basically the artifact an AI or another engineer can use to figure out what happened and repeat the process.
The main reason I do this is so that when the models get a lot better in a year, I can go back and ask them to modify plan.md based on project.md and the existing code, on the assumption they might catch their own earlier mistakes.
A 'secret weapon' that has served me very well for learning classifiers is to first learn a good linear classifier. I am almost hesitant to give this away (kidding).
Use the non-thresholded version of that linear classifier output as one additional feature-dimension over which you learn a decision tree. Then wrap this whole thing up as a system of boosted trees (that is, with more short trees added if needed).
One of the reasons it works so well is that it plays to their strengths:
(i) Decision trees have a hard time fitting linear functions (they have to stair-step a lot, therefore need many internal nodes) and
(ii) linear functions are terrible where equi-label regions have a recursively partitioned structure.
In the decision tree building process the first cut would usually be on the synthetic linear feature added, which would earn it the linear classifier accuracy right away, leaving the DT algorithm to work on the part where the linear classifier is struggling. This idea is not that different from boosting.
One could also consider different (random) rotations of the data to form a forest of trees built using the steps above, but that was usually not necessary. Or rotate the axes so that all are orthogonal to the linear classifier learned.
One place where DTs struggle is when the features themselves are very (column-)sparse: there aren't many places to put the cut.
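Putting the recipe together, here's a minimal sklearn sketch (synthetic data, untuned hyperparameters, and my own model choices rather than anything from the original setup):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1. Learn a good linear classifier first.
lin = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# 2. Append its non-thresholded score as one extra feature dimension.
X_tr_aug = np.column_stack([X_tr, lin.decision_function(X_tr)])
X_te_aug = np.column_stack([X_te, lin.decision_function(X_te)])

# 3. Boosted *short* trees over the augmented features; the first split
#    usually lands on the linear score, so the trees spend their capacity
#    where the linear model struggles.
gbt = GradientBoostingClassifier(max_depth=3, n_estimators=200, random_state=0)
gbt.fit(X_tr_aug, y_tr)
print("held-out accuracy:", gbt.score(X_te_aug, y_te))
```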
Lots of logs contain uninteresting information, so they easily pollute the context. Instead, my approach uses a TF-IDF classifier plus a BERT model on GPU to classify log lines further and reduce the number of logs that then get fed to an LLM. The total size of the models is 50MB, and the classifier is written in Rust, so it can classify >1M lines/sec. And it finds interesting cases that can be missed by simple grepping.
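The real pipeline is Rust plus a small BERT model, but the cheap first-pass idea can be sketched in a few lines of Python (toy data, purely illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: 1 = worth a closer look, 0 = routine noise.
lines = [
    "GET /healthz 200 2ms",
    "connection pool exhausted, retrying",
    "GET /metrics 200 1ms",
    "panic: nil pointer dereference in handler",
]
labels = [0, 1, 0, 1]

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
clf.fit(lines, labels)

# Cheap first pass: score new lines, and only the high scorers move on to the
# heavier BERT / LLM stages.
for line in ["GET /healthz 200 3ms", "connection pool exhausted for db-7"]:
    print(f"{clf.predict_proba([line])[0][1]:.2f}  {line}")
```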
No problem. You might be better off moving back, yes.
My understanding of immix-style collection is that it divides the heap into blocks and lines. A block is only compacted/reused if every object in it is dead, and so if you mix lifetimes (e.g. lots of short-lived requests, medium-life sessions, long-life db connections/caches/interned symbols) then you tend to fill up blocks with a mix of short and long-lived objects as users log in and make requests.
When the requests get de-allocated the session remains (because the user closed the tab but didn't log out, for example, so the session is still valid) and so you end up with a bunch of blocks that are partially occupied by long-lived objects, and this is what drives fragmentation because live objects don't get moved/compacted/de-fragged very often. Eventually you fill up your entire heap with partially-allocated blocks and there is no single contiguous span of memory large enough to fit a new allocation and the allocator shits its pants.
So if that's what the HN backend looks like architecturally (mixed lifetimes), then you'd probably benefit from the old GC because when it collects, it copies all live objects into new memory and you get defragmentation "for free" as a byproduct. Obviously it's doing more writing so pauses can be more pronounced, but I feel like for a webapp that might be a good trade-off.
Alternatively you can allocate into dedicated arenas based on lifetime. That might be the best solution, at the expense of more engineering. Profiling and testing would tell you for sure.
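To make that fragmentation story concrete, here's a toy simulation of the behaviour described above: fixed-size blocks, bump allocation, and a block reclaimed only when every object in it is dead. The constants are invented and this isn't any real collector, but a few percent of long-lived objects end up pinning a large number of mostly-empty blocks.

```python
import random
random.seed(0)

BLOCK_SLOTS = 64          # objects per block (a stand-in for immix blocks/lines)
SHORT, LONG = 5, 3000     # lifetimes in ticks: requests vs sessions/caches

blocks, current = [], []  # a block is (latest_expiry, [expiry ticks of its objects])

for tick in range(10_000):
    for _ in range(8):                        # allocate a few objects per tick
        life = LONG if random.random() < 0.02 else SHORT
        current.append(tick + life)
        if len(current) == BLOCK_SLOTS:       # bump-allocate into the open block
            blocks.append((max(current), current))
            current = []
    # collect: a block is reclaimed only once *every* object in it is dead
    blocks = [b for b in blocks if b[0] > tick]

live = sum(sum(t > tick for t in b[1]) for b in blocks)
print(f"{len(blocks)} mostly-empty blocks pinned by {live} live objects "
      f"({live / (len(blocks) * BLOCK_SLOTS):.0%} average occupancy)")
```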
As someone who worked on kindle power consumption many years ago: One of the (by far) biggest consumers of power is the WiFi connection. It has to wake up and respond to the AP in order to not get disconnected every x seconds.
Off the top of my head, I think the 'on' average current draw was ~700 uA without WiFi, and about 1.5 mA give or take with WiFi. This is from over a decade ago, so my memory is fuzzy though...
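To put those currents in perspective, assuming a ~1,500 mAh battery (my round number, not an actual Kindle spec):

```python
battery_mah = 1500            # assumed capacity, not an actual Kindle spec
for label, idle_ma in [("wifi off", 0.7), ("wifi on", 1.5)]:
    hours = battery_mah / idle_ma
    print(f"{label}: ~{hours:,.0f} h of idle time (~{hours / 24:.0f} days)")
```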
Obviously, page changes used relatively large amounts of power. I don't recall the exact amounts, but it was 100s of mA for seconds.
There is also an "every x pages, do a full screen refresh (black to white)" to fix up the ghosting issue that the article writer saw.
Older HN users may recall when busy discussions had comments split across several pages. This is because the Arc [1] language that HN runs on was originally hosted on top of Racket [2] and the implementation was too slow to handle giant discussions at HN scale. Around September 2024 Dang et al finished porting Arc to SBCL, and performance increased so much that even the largest discussions no longer need splitting. The server is unresponsive/restarting a lot less frequently since these changes, too, despite continued growth in traffic and comments: