It looks to me like someone spent a long back-and-forth with an LLM refining a design - everything they wrote screams "over-engineered, lots of moving parts, creating tiny little sub-problems that then need to be solved".
I find it very hard to believe that a human designed their process around a "Daytona Sandbox" (whatever the fuck that is) at 100x markup over simply renting a VPS (a DO droplet is what, $6/m? $5/m?) and either containerising it or using FreeBSD with jails.
I'm looking at their entire design and thinking that, if I needed to do something like this, I'd either go with a FUSE-based design or (more flexibly) intercept calls like the exec* family, posix_spawn, open, etc. using LD_PRELOAD.
What sort of human engineer comes up with this sort of approach?
> Do you not feel like there's a similar hit from switching full screen windows?
I feel like there should be, but in practice there isn't.
Sounds counter-intuitive, I know, but switching between windows on the same screen has near-zero context loss.
I also use a 3x3 grid of workspaces (center one is browser, all the others are dedicated to a single project/context/session/task each), and navigating workspaces (modifier+shift+arrows) also has near-zero contextual hit.
Even more counter-intuitively, while a second screen produces a large and irritating context-switch cost, using a little notepad next to me incurs even less context-switch loss than switching windows or workspaces does. It happens without me even realising it - sometimes I'll come out of a long coding session and be surprised at notes I made along the way.
There's probably something learnable about the human mind in all of this.
> Honestly, just from this question, I think you know enough that I’d go spend $20/month for a subscription to Codex, Claude Code, or Cursor, and ask them to teach you all this.
Paying $20/m sounds like overkill. I have tabs open for all of the most well-known AI chatbots, and despite trying my hardest, I've found it impossible to exhaust the free tiers just by learning.
Hell, just on the chatbots alone, small projects can be vibe-coded too! No $20/m necessary.
> I feel like Anthropic is going down a bad path here with billing things this way.
What do you expect them to do? You're looking at a business currently running at a loss and complaining about its billing, even though this isn't even a price rise.
Unrelated: is it still possible to use $10k/m worth of tokens on their $200/m plan?
> The fact that anything got leaked is a serious breach of best practices and security also at this point, something a company that used to work for DoD(W) shouldnt be doing, it can even be considered a national threat at this point.
Wasn't it CC itself that leaked, well, itself? It's completely vibe-coded, which I assume means it does its own build step too, which means it leaked itself.
The only breach of best-practice I see here is using an LLM for coding.
I'm probably missing something. Pre-PMF code by definition is not yet proven to solve a specific pain point, so why does it matter?
I think the crux here is the OP means the "quality of code" doesn't matter until PMF, only the utility matters (to the extent it helps you find PMF), in which case you're both in violent agreement.
But even then you don't need code. I briefly worked for a startup that found PMF by calling people, sending text messages, creating social media posts, measuring engagement to create reports, and sending invoices... all manually. The "code" as such was a bunch of templates in a doc for each of those. Once they actually started getting paid they moved to writing code.
> I briefly worked for a startup that found PMF by calling people, sending text messages, creating social media posts, measuring engagement to create reports, and sending invoices... all manually.
Right, and in that case there is no step-by-step recipe for the product. When all that is implemented in code, that is a set of step-by-step instructions for solving the pain points.
But the manual workflow was the step-by-step recipe the founders iterated on until they got traction; the product that came later was just an embodiment of that workflow as code.
I don't see how that is relevant? I thought the point under discussion was that code does not matter until PMF, and that this would be an illustrative example because there was no code until PMF.
Like, from the users' perspectives they were interacting via text messages both before and after PMF, until later down the line they were migrated to an app. At this point, the change was largely aesthetic, the core idea was the same.
Maybe we're using different definitions of terms like "PMF" here?
> Maybe we're using different definitions of terms like "PMF" here?
No, maybe different perspectives on what "the code matters" means.
In this context, I took it to mean that "code being closed/open does not matter", because the context included leaking the source.
I see you're taking it to mean "code being good/bad does not matter", because the context included a startup product.
We're talking at cross-purposes. There are two assertions here:
1. The code being good/bad (or even existing) is irrelevant.
2. The code being closed is important.
Both those assertions can be true at the same time.
My point is that the code, if it exists, is a recipe for solving some profitable pain point. In that case, there is absolutely no upside to making it open, and so the code does indeed matter.
I follow that too, when I try a new venture, but what does that have to do with "The Code Is/Isn't Important"?
What you listed is important, but those findings are distilled into the source code of the product. If you open the source, you are providing step-by-step instructions on solving some problem that other people are prepared to pay to solve.
Basically, you come up with a recipe for success for $FOO - why would you give that recipe away unless you've already capitalised on it?
None of what I said even speaks about source code. It could just as well be run by a bunch of people with notebooks, sending emails from time to time.
All that matters is that the product works and that we have a scaling path once we find market fit. Yes, that's likely to involve source code, but I've no qualms about tossing out the MVP code and starting over from scratch. Or adapting the quickie we put together, if it happens to contain a kernel of value. Whether I open-source it or not depends on where our moat lies.
But regardless, the new reality is that anyone can decompile your code with an AI and duplicate it. Merely putting an app out there opens the door for someone to copy it, so your moat had better be something besides the code!
Nope. That’s what self-important engineers will tell themselves, but it doesn’t make it remotely true. You’re patting yourself on the back for throwing together a CRUD app and burning through a bajillion dollars on AWS.
>> Counterpoint: Code does matter, in the early days too!
>> It matters more after you have PMF, but that doesn't mean it doesn't matter pre-PMF.
>> After all, the code is a step-by-step list of instructions on solving a specific pain point for a specific target market.
----------------------------------
> Nope. That’s what self-important engineers will tell themselves, but it doesn’t make it remotely true. You’re patting yourself on the back for throwing together a CRUD app and burning through a bajillion dollars on AWS.
> If the accusation is that I am an inference engine pumping out words based on a trailing context window then I am guilty as charged. It’s just that I run on Fe + C6H12O6 + O2 (a bloodstream charged with lunch and air) instead of y/C/N2 -> Si+e- (sunlight, coal, and wind turned into silicon electrons.)
This sort of tells me that you are pro-LLM, and most pro-LLM people simply paste the contents of their ChatGPT output and try to pass it off as their own.
Given that you say you aren't, the most likely explanation is that you're spending a lot of time reading LLM prose and starting to write like it too.