
You’re not missing much.

I used Cursor for the second half of last year. If you’re hand-editing code, its autocomplete is super nice, basically like reading your mind.

But it turns out the people who say we’re moving to a world where programming is automated are pretty much right.

I switched to Claude Code about three weeks ago and haven’t looked back. Being CLI-first is just so much more powerful than IDE-first, because tons of work that isn’t just coding happens there. I use the VSCode extension in maybe 10% of my sessions when I want targeted edits.

So having a good autocomplete story like Cursor's is either not useful, or actively anti-useful, because it tempts you to keep your hands on the code.


When your hardware is in the physical custody of the attacker, the threat model changes significantly. Designing a console that takes years for attackers to crack is an impressive feat of engineering.

First, recognize that, for the first time ever, having good docs actually pays dividends. LLMs love reading docs and they're fantastic at keeping them up to date. Just don't go overboard, and don't duplicate anything that can be easily grepped from the codebase.

Second, for #3, it's a new hire's job to make sure the docs are useful for new hires. Whenever they hit friction because the docs are missing or wrong, they go find the info, and then update the docs. No one else remembers what it's like to not know the things they know. And new hires don't yet know that "nobody writes anything" at your company.

In general, like another poster said, docs must live as close as possible to the code. LLMs are fantastic at keeping docs up to date, but only if they're in a place that they'll look. If you have a monorepo, put the docs in a docs/ folder and mention it in CLAUDE.md.
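
For what it's worth, the CLAUDE.md pointer can be as simple as this (a hypothetical sketch; the paths and wording are assumptions, adjust them to your repo):

```
# CLAUDE.md (repo root)

## Documentation
- Architecture and design notes live in docs/. Read the relevant doc
  before changing a subsystem, and update it in the same PR when
  behavior changes.
- Don't duplicate anything that can be grepped from the code itself.
```

The point is just that the agent sees the pointer on every session, so the docs stay in its working context without you pasting them in.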

ADRs (architecture decision records) aren't meant to be maintained, are they? They're basically RFCs: a tool for communicating a proposal and driving a discussion. If someone writes a nontrivial proposal in a Slack thread, say "I won't read this until it's in an ADR."

IMHO, PRs and commits are a pretty terrible place to bury this stuff. How would you search through them, dump all commit descriptions longer than 10 words into a giant .md and ask an LLM? No, you shouldn't rely on commits to tell you the "why" for anything larger in scope than that particular commit.

It's not magic, but I maintain a rude Q&A document that basically has answers to all the big questions. Often the questions were asked by someone else at the company, but sometimes they're to remind myself ("Why Kafka?" is one I keep revisiting because I want to ditch Kafka so badly, but it's not easy to replace for our use case). But I enjoy writing. I'm not sure this process scales.


The Q&A doc you're maintaining is fascinating: you've essentially hand-built the thing I'm trying to automate. The "Why Kafka?" entry is exactly the kind of decision that disappears when you leave. The search problem you raised is the core of what I'm solving — not dumping commits into a .md, but extracting structured decisions from the conversation that surrounded the commit: the Slack debate, the PR review, the ticket context. Then making it queryable by the code it relates to. You said you're not sure your process scales. What happens to that Q&A doc if you leave tomorrow?

If I leave, the Q&A doc probably never gets updated again.

We're in the process of getting as much as possible into source control (we use Google Docs a lot, so we'll set up one-way replication of our ADRs and such from there to git). That way, as LLMs get better, whatever doc gets materialized from those bits and pieces will automatically get better too.


I’ve been following that post too since I started using Claude (about two weeks now) and it’s great. Sometimes for small changes I shorten research+plan into one step but I always tell it to stop and wait before it writes the code.

One thing I’ve learned: if you notice the agent spinning its wheels, throw the work away, identify a fix (usually to CLAUDE.md), and start over. Context rot is real.


Here's the IANA time zone mailing list thread where this is being discussed:

https://lists.iana.org/hyperkitty/list/tz@iana.org/thread/66...

Bad timing on BC's part. The tz project just tagged release 2026a today.


Eh, they're still keeping the impending switch to PDT, just ditching the future switch back to PST (and all future changes). That should give around 7-8 months for a new timezone file update to percolate.


Sure, it’ll make it to all the distros, but how many sysadmins won’t patch in time?


Not sure why you're getting downvoted. When I read the headline, my first thought was "how are they going to update the tz database on all Linuxes in the world in time?" I expect some confusion on November 1.

Here's the thread on IANA time zone mailing list where this is being discussed: https://lists.iana.org/hyperkitty/list/tz@iana.org/thread/66...

BC should've timed this better. The tz project just released 2026a.

In the future, you can check whether your tz database has been updated with this (it should show no transition in November 2026):

    zdump -v America/Vancouver | grep 2026
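
If you'd rather check from a script than eyeball zdump output, here's a minimal sketch using Python's standard-library zoneinfo (3.9+), which reads the same system tz database. The function name is mine, not from any library:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def vancouver_offset_hours(year, month, day):
    """UTC offset (in hours) that America/Vancouver observes at noon UTC on a date."""
    t = datetime(year, month, day, 12, tzinfo=timezone.utc)
    local = t.astimezone(ZoneInfo("America/Vancouver"))
    return local.utcoffset().total_seconds() / 3600

# The March 2026 spring-forward to PDT (-07:00) exists in every tzdata
# version; what the change removes is the November fall-back to PST.
print(vancouver_offset_hours(2026, 7, 1))    # -7.0 (PDT) in any recent tzdata
print(vancouver_offset_hours(2026, 11, 15))  # -8.0 on old tzdata, -7.0 once updated
```

If that last call still returns -8.0 after 2026a ships, your system's tzdata hasn't been updated.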


I’ve had a concept like this in the back of my mind for years. Happy to see someone executing it so well.

For me, it started when I spent a year and a half reading and digesting books for and against young earth creationism, then eventually for Christianity itself (its historical truth claims). It struck me that the books were just a serialization of some knowledge structure that existed in the authors’ heads, and by reading I was trying to recreate that structure in my own head. And that’s a super inefficient way to go about this business. So there must be a shortcut, some more powerful intermediate representation than just text (text is too general and powerful, and you can’t compute over it… until now with LLMs?)

That kind of knowledge graph felt a lot like code to me: there’s no unique representation of knowledge in a graph, but some representations are much more useful than others; building a well-factored graph takes time and taste; and graphs are composable and reusable in a way that feels like it could help you discover layers of abstraction in your arguments.


Yes - currently, each argument/graph is independent, but I've designed it in a way that should be compatible with future plans to "transclude" parts of other public graphs. Like if some lemma is really valuable to your own unrelated argument, being able to include it.

I do think there's quite a lot that could be done with LLM assistance here, like finding "duplicate" candidates: statements with the same semantic meaning that could be merged. It's really complicated to think through the side effects, though, so I'm going slow. :)


If it was successful beyond all expectations, why aren't we seeing more?


That's the question I was leading to, yes. Maybe the upfront cost per unit of volume is simply too high.


It’s also in their best interest to set the price so as to maximize their own profits. If switching costs or monopoly power allow them to set a higher price, they will do so.

Have we learned nothing from a decade of subscription services?


Nobody said we should allow monopolies?


Especially Adam Smith. The claims are scattered throughout The Wealth of Nations, but he hated them with specificity. He said they raise prices and lower quality, misallocate capital, and corrupt politics, among other things.


Did you have a particularly bad experience? Things have changed _a little_ since 1992.

I switched from Windows in 2018 because I was trying to install some Python packages, and it took hours of work to find the specific Visual C++ runtimes needed to get them working.

On Linux: pip install, done.

