I’m happy to be wrong about this globally, but in my neck of the woods the readily exploited hydro resources have been developed to 90% of their capacity, and have been for 100 years. Hydro is in many ways the ultimate renewable energy, but that’s been true since electrification, and we’ve been using it as part of the energy mix since then. I’d love to be wrong, but my understanding is that there isn’t a huge amount of untapped new hydro capacity available without severe impacts on ecosystems.
What’s probably also used in your country is pumped-storage hydroelectricity.
During the day you pump water up into the reservoir using wind/solar energy, and discharge it at night, for example.
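Back of the envelope, the recoverable energy is density × gravity × head × volume, times round-trip efficiency. A rough sketch (the reservoir size, head, and efficiency below are made-up illustrative numbers, not figures for any real plant):

```python
# Pumped-storage energy estimate: E = rho * g * head * volume * efficiency
RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def storable_mwh(head_m: float, volume_m3: float, round_trip_eff: float = 0.8) -> float:
    """Energy recoverable from one full discharge, in MWh."""
    joules = RHO * G * head_m * volume_m3 * round_trip_eff
    return joules / 3.6e9  # 1 MWh = 3.6e9 J

# A hypothetical 1 km^2 reservoir, 10 m deep (1e7 m^3), with 300 m of head:
print(round(storable_mwh(head_m=300, volume_m3=1e7)))
```

Note that head multiplies straight through: with a tenth of the elevation drop you need ten times the reservoir volume for the same storage, which is why flat terrain is such a poor fit.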
Elsewhere in the country yes but lol not so much in the very flat part of Western Canada. I pulled out some topographic maps a few years ago and was quite dismayed at the lack of elevation change suitable for pumped hydro.
In the last decade or so, hydro generation has grown by about as much as solar and wind have (each has grown by roughly the same amount as total global nuclear generation: hydro by doubling, wind and solar exponentially from basically zero).
So it's not going to take off like solar, but it's a big chunk of relatively clean electricity production, and it's often basically a byproduct of managing water supplies. It also pairs really well with renewables, since even without pumps it has a degree of flexibility and storage.
Not who you were asking and not explicitly looking for vulnerabilities... I have gotten a ton of mileage from getting Claude to reverse engineer both firmware and applications with Ghidra and radare2. My usual prompt starts with "Here's the problem I'm having [insert problem]. @foo.exe is the _installer for the latest firmware update_. Extract the firmware and determine if there's a plausible path that could cause this problem. Document how you've extracted the firmware, how you've done the analysis, and the ultimate conclusions in @this-markdown-file.md"
I have gotten absolutely incredible results out of it. I have had a few hallucinations/incorrect analyses (hard to tell which it was), but in general the results have been fantastic.
The closest I've come to security vulnerabilities was a Bluetooth OBD-II reader. I gave Claude the APK and asked it to reverse engineer the BLE protocol so that I could use the device from Python. There was apparently a ton of obfuscation in the APK, with the actual BLE logic buried inside an obfuscated native code library instead of Java code. Claude eventually asked me to install the Android emulator so that it could use https://frida.re to do dynamic instrumentation instead of relying entirely on static analysis. The output was impressive.
One of the things that I've been chewing on lately is the sync problem. Having a CI job that identifies places where the docs have drifted from the implementation seems pretty valuable.
> To check that a module’s docstrings are up-to-date by verifying that all interactive examples still work as documented.
> To perform regression testing by verifying that interactive examples from a test file or a test object work as expected.
> To write tutorial documentation for a package, liberally illustrated with input-output examples. Depending on whether the examples or the expository text are emphasized, this has the flavor of “literate testing” or “executable documentation”.
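For concreteness, the "executable documentation" mode looks like this (a toy function, not from any real package): the examples in the docstring double as tests, so a CI run of doctest fails the moment the documented behavior and the implementation disagree.

```python
def parse_port(spec: str) -> int:
    """Parse a "host:port" string and return the port number.

    >>> parse_port("localhost:8080")
    8080
    >>> parse_port("no-port-here")
    Traceback (most recent call last):
        ...
    ValueError: expected host:port, got 'no-port-here'
    """
    host, sep, port = spec.partition(":")
    if not sep or not port.isdigit():
        raise ValueError(f"expected host:port, got {spec!r}")
    return int(port)

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # non-zero failure count if examples have drifted
```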
I appreciate doctest very much, but those aren’t the kind of documentation I’m worried about drifting. I’m thinking more about questions like “how does the communication protocol between this server and this client work?”, which is generally terrible to summarize in doctests. If you want to take the idea to the extreme, imagine a CI test that answers “does this server implementation conform to this RFC?”
> Having a CI job that identifies places where the docs have drifted from the implementation seems pretty valuable.
Testing with lat isn't about ensuring that code stays consistent with public API documentation. It's about:
* letting you quickly see what tests were added or changed by reading the English description
* helping you spot when an agent randomly drops or alters an important functional/regression test
The problem with coding agents is that they produce enormous diffs, and while reading test code is very important, in practice your focus and attention drift and you can't do a thorough analysis.
This isn't a new problem, though; the same thing applies to classic code reviews -- coding is rarely the bottleneck, it's getting humans to review and vet the change.
Lat shifts the focus from reading test code to understanding the semantics of the tests. And because instead of reviewing 2000 lines of code you can focus on reviewing only a 100-line change in lat.md, you'll be able to control your tests and implementation more tightly.
For projects where code quality isn't paramount, I now just glance over the code to spot anti-patterns and models failing to DRY, resorting instead to duplicating large swaths of code.
I woke up early the other day. The house was perfectly silent until I got near the kitchen, when I heard a ping followed by an odd sound. As I got closer to see what was going on, an empty beer can casually rolled past my feet. The cats were nowhere to be seen.
One of my first real experiences with Border Collies was at a family reunion. There were a bunch of kids running around playing in the park. At one point someone showed up with a border collie and I watched with delight and amazement as the dog did the herding thing and slowly and carefully pushed the group of children closer together. The kids didn't even realize it until they were way too close to each other to comfortably play tag. The owner called the dog back and the games continued.
Later on I ended up with a sheltie with a very strong herding instinct. She mostly just acted like the Fun Police with the other dog and cats, though. Lovely creatures!
Herding sheep is such an interesting experience too. The best way I can describe it is that each sheep has a really large soap bubble around them. You need to push gently on the bubble to get them to go where you want them to. If you push too hard and the bubble pops, they'll scatter and you have to step back and let the bubble reform.
> Do you really need an automated tool to tell you whether you're breaking common sense guidelines?
I say this on behalf of all of my neurospicy friends… sometimes, yes. Especially having taken a look at the whole list of guidelines, I definitely am friends with people who could struggle to determine whether a given comment fits or not.
Edit: TIL “Apple makes up 89% of the company's revenue in 2025”