Hacker News | new | past | comments | ask | show | jobs | submit | dpflan's comments

“The world is too much with us” - W. Wordsworth

The world is too much with us; late and soon,
Getting and spending, we lay waste our powers;—
Little we see in Nature that is ours;
We have given our hearts away, a sordid boon!
This Sea that bares her bosom to the moon;
The winds that will be howling at all hours,
And are up-gathered now like sleeping flowers;
For this, for everything, we are out of tune;
It moves us not. Great God! I’d rather be
A Pagan suckled in a creed outworn;
So might I, standing on this pleasant lea,
Have glimpses that would make me less forlorn;
Have sight of Proteus rising from the sea;
Or hear old Triton blow his wreathèd horn.

https://www.poetryfoundation.org/poems/45564/the-world-is-to...


Yes, probably a good one to Pump and Dump, Pump and Gump, Gump and Dump.

Sometimes you don't know what needs to be built until you build it. These end-to-end prototypes are how you enhance your understanding and develop deeper intuition about what is possible, where the risks lie, etc.


“””

Much of my career has been spent in teams at companies with products that are undergoing the transition from "hip app built by scrappy team" to "profitable, reliable software" and it is painful. Going from something where you have 5 people who know all the ins and outs and can fix serious bugs or ship features in a few days to something that has easy clean boundaries to scale to 100 engineers with a wide range of familiarity with the tech and the problem domain, skill levels, and opinions is just really hard. I am not convinced yet that AI will solve the problem, and I am also unsure it doesn't risk making it worse (at least in the short term).

“””

This perspective is crucial. Scale is the great equalizer / demoralizer: scale of the org and scale of the systems. Systems become complex quickly, and verifying their correctness and function becomes harder. For companies that built with AI from day one and have AI influencing them as they scale, where does complexity begin to run up against the limitations of AI and cause regression? Or, if all goes well, amplification?


The more verifiable the domain, the better suited it is. We see similar reports of benefits in advanced mathematics research from Terence Tao; granted, some of those reports seem to amount to cases where very few people knew that data relevant to the proof existed, but the LLM had it in its training corpus. Still, verifiably correct domains are well-suited.

So the concept of formal verification is as relevant as ever; when building interconnected programs, complexity rises and verifiability becomes more difficult.


> The more verifiable the domain the better suited.

Absolutely. It's also worth noting that in the case of Tao's work, the LLM was producing Lean and Python code.
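Lean is the interesting case here: a proof emitted by an LLM is checked by the Lean kernel, so its correctness does not rest on trusting the model. A minimal illustration (my own toy example, not from Tao's work — the theorem name is made up):

```lean
-- A statement the Lean kernel verifies mechanically. If an LLM emits
-- this and it compiles, it is correct regardless of how it was produced.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the model hallucinates a step, the proof simply fails to check — that's the verification signal the rest of this thread is circling around.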


I think the solution in harder-to-verify cases is to give AI (sub-)agents a really good set of instructions: detailed guidelines for what to do, how to think, and how to explore and break down problems. Potentially tens of thousands of words of instructions, to get the LLM to act as a competent employee in the field. The models then need to be good enough at instruction-following to actually explore the problem in the right way and apply basic intelligence to solve it. Basically, treat the LLM as a competent general knowledge worker that is unfamiliar with the specific field, and give it detailed instructions on how to succeed in that field.

For easy-to-verify fields like coding, you can train "domain intuitions" directly into the LLM (and some of this training should generalize to other knowledge work), but for other fields you would need to supply them in the prompt, since those abilities cannot be trained into the LLM directly. This will need better models but might become doable in a few generations.


> I think the solution in harder-to-verify cases is to provide AI (sub-)agents a really good set of instructions on a detailed set of guidelines of what it should do and in what ways it should think and explore and break down problems

Using LLMs to validate LLMs isn't a solution to this problem. If the system can't self-verify, then there is no signal to tell the LLM that it's wrong. The LLM is fundamentally unreliable; that's why you need a self-verifying system to guide and constrain the token generation.


Do you mind adding more color and detail to your closing thought? I’m curious whether you know of projects that exist to help with this.


I found this to be an interesting analysis:

“””

What has changed is where the durable value actually lives. It is increasingly useful to separate the stack into a few layers:

- The computing, IO, and compiler kernel libraries based on CUDA, compiler frameworks like MLIR or JAX’s XLA, and of course Apache Arrow.

- The database systems and caching layers, ideally connected with ADBC’s zero-serialization connectivity.

- The language bindings and orchestration layers that expose those capabilities.

- The application or agent interfaces that sit on top.

When viewed this way, most of the long-term value clearly resides in the first two layers (compute and data access), not the last two.

“””


"""

Claude Code with Opus 4.5 is a watershed moment, moving software creation from an artisanal, craftsman activity to a true industrial process.

It’s the Gutenberg press. The sewing machine. The photo camera.

"""

- Sergey Karayev

> https://x.com/sergeykarayev/status/2007899893483045321


Just pointing out here that "rue" means "to regret," often emphatically. Perhaps it is not the best name for a programming language.


That’s part of the reason for the name! “Rust” has negative interpretations as well. A “rue” is also a kind of flower, and a “rust” is a kind of fungus.


Fair enough! I do like how others are framing this as "write less code" -- if Rue makes one think more about the code that finally makes it to production, that can be a real win.


Sounds fitting to me. Every line of code I wrote for something that ultimately didn't need code to begin with is basically codified regret checked into git.


The best code is the code not written, so perhaps it is the best name for a programming language?


With "servant leadership" in its current form being attributed to Greenleaf, here is the "source of truth" on servant leadership: https://greenleaf.org/what-is-servant-leadership/

"Growth" of those being led seems to be a key concept, which I would think is really only possible when the leader doesn't do everything themselves as a die-hard servant, but uses the "leadership" part to help subordinates learn to lead themselves.

Granted, this realm of ideas can be a gray area, but it seems like servant leadership as presented by the author here does not incorporate the concept of growing those they lead -- as indicated by the fact that they have invented a new "buzzword" which actually seems to involve the behaviors laid out by servant leadership. Am I missing something?

