this resonates hard. it's exactly what we're trying to solve at elvex.
the missing pieces we are building around are:
- unified auth/permissions across multiple AI providers
- secure data connections without exposing credentials to agents
- audit trails for compliance
- team collaboration (who can deploy what agents where)
the article's right that the tech is ready but the architecture isn't. most companies are duct-taping OpenAI API + LangChain + custom auth + manual governance
elvex gives you the platform layer: multi-provider AI, data integrations, team permissions, workflow orchestration. not saying it's the only option but we're solving the "how do we actually deploy this" problem
yeah the gap between "chatbot that writes code" and "actual multi-agent workflow" is real
built elvex to solve this with:
- multi-provider access (Claude, GPT, Gemini, etc.) so different agents can use different models
- actual team permissions so agents don't step on each other
- workflow orchestration without duct-taping APIs together
the parallel execution thing you mentioned - elvex handles that. you can spin up multiple agents with different contexts, they share a knowledge base, and you're not manually managing git worktrees or containers
not saying it's magic but it definitely solves the "how do i go from 1 agent to 10 agents without chaos" problem.
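The parallel-agent pattern described above can be sketched generically. This is an illustrative asyncio sketch under assumed names (`call_model`, `run_agent`, the shared `kb` dict are hypothetical placeholders, not elvex's actual API):

```python
import asyncio

async def call_model(model: str, prompt: str) -> str:
    # Stand-in for a real provider call (Claude, GPT, Gemini, ...).
    await asyncio.sleep(0)  # simulate network I/O
    return f"[{model}] answer to: {prompt}"

async def run_agent(name: str, model: str, task: str, kb: dict) -> None:
    # Each agent combines the shared knowledge base with its own task context,
    # then writes its result back where other agents can read it.
    context = kb.get("shared_notes", "")
    answer = await call_model(model, f"{context}\n{task}")
    kb[name] = answer

async def main() -> dict:
    kb = {"shared_notes": "project conventions go here"}
    agents = [
        ("reviewer", "claude", "review the diff"),
        ("tester", "gpt", "write tests"),
        ("doc-writer", "gemini", "update docs"),
    ]
    # All agents run concurrently; no worktrees or containers to juggle.
    await asyncio.gather(*(run_agent(n, m, t, kb) for n, m, t in agents))
    return kb

if __name__ == "__main__":
    result = asyncio.run(main())
    print(sorted(k for k in result if k != "shared_notes"))
    # prints ['doc-writer', 'reviewer', 'tester']
```

The point of the sketch is just the shape: independent agents, different models, one shared state, run in parallel.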
Luck is a huge factor in startup success. But you can't capture luck if you're not around to grab it. Persistence is what keeps you in the game long enough for luck to find you. Intelligence is commoditized in this industry. Persistence is rare.
I saw a tweet from Andrej Karpathy that's been sitting with me: he said he's never felt this behind as a programmer. I've been thinking about this through the marshmallow challenge, where kindergartners beat MBAs. The kids just build and iterate. Most of us are the MBAs right now with AI tools.
Some of it is that the radical acceleration in productivity isn't real. See Brooks's "No Silver Bullet". You certainly have those moments where you describe a bug, ask if it can understand it, and get an answer in two minutes, but when you consider everything that goes into the "definition of done", 10x just isn't realistic.
My take at work is that I'm not running much faster, but I am getting better quality. Some of it is my attitude, but with AI I am more likely to go back and forth and ask things until I really understand what is going on, write tests even when it is a hassle to write tests, ask the IDE questions about the dependencies I use so I can really understand how they work, try two or three possible solutions and pick the best, etc.
When it comes to things like that memory leak, it is very hit and miss. If you give it a try it might solve the problem, or it might not. It's worth trying, but you can't count on something like that working all the time.
I think you're right that 10x isn't realistic for most work, and Brooks is still mostly correct. The "No Silver Bullet" argument holds because most of software development isn't typing code faster.
But you're describing exactly the shift that matters. You're not running faster, you're getting better quality. You're more likely to understand dependencies, write tests, try multiple solutions. That's the actual productivity gain.
The marshmallow challenge point isn't about whether AI makes you 10x faster. It's about the mindset shift. The MBAs didn't lose because they were slower. They lost because they spent their time planning the perfect approach instead of iterating.
The memory leak example from Boris Cherny isn't about AI being reliable. It's about his coworker not having the baggage of "this is how you debug memory leaks." They just tried asking Claude first. Sometimes it works, sometimes it doesn't. But the willingness to try it first is what creates the gap.
Personally I think it's that I know how to do software development and I always have my eyes on getting the project done.
I don't think about AI tools a lot. I use Junie because it is integrated with my favorite IDE and I like sending more money JetBrains' way. I don't read blogs or tweets about AI coding.
What I do do is try little things that are always oriented to the work in front of me. I work for an MBA who is great at what he does, but when he tried "vibe coding" he got nowhere, and he felt the same puzzlement a lot of people express at the gap between the results they get and the results influencers claim. I've learned AI-assisted coding by doing. From square one I realized it was going to work some of the time and fail some of the time, and I have always prioritized not getting stuck. It's certainly fair to make some wild (but well-formed) request and see what you can get.
Appreciate the feedback (author, here). It is optimized for our qualified leads, which are the top online publishers on the web. However, we've heard your feedback a few times in the past. We're working on a revamp to make it more descriptive for anyone who hits the site.
The trouble is that developing most software is a group activity. The result is more than the sum of individual contributions; it is also a function of how well the team works together.
Someone might be a competent developer in their own right, but not a ninjarocksuperstar. Still, if that person can understand three conflicting ninjarocksuperstars' points of view and see how to reconcile them and build a consensus, it might be the most valuable contribution of anyone on the team that day.
One of the problems with managing something like a software project is that it's very hard to measure this effect. If you're in an office where people interact regularly, there's usually an easy alternative to measurement: ask everyone who helped them do their job the most lately, and the person 75% of your team names is the quiet one who works in the background to keep everything ticking over. If everyone is 100% distributed, that person might not be contributing at the same level as everyone else, but that's because you're not taking advantage of his skills and your team is weaker for it. That's your mistake, not his.
(I try to be the consensus guy regardless of coding skill, but I'm not sure how good I am at it. That's why I appreciate it when there is someone on the team who really is good at it, even if their personal coding skills are merely average.)
I work at Parse.ly, whose office is literally right next door to these pizza shops. Last week, when they changed their prices, I asked the owner of the Indian shop how low he'd go, and he said two very interesting things:
1. He hates two bros and wants to go low enough to make them leave the area.
2. He didn't make money from pizza even when it was priced at one dollar. The pizza barely pays for the cost of the labor to make it. So why do it at all? Because it acts as lead gen for his Indian food, which has a much higher margin. He essentially has a freemium model that works to beat his competitor!
An acquaintance of mine who once worked at a Mexican restaurant told me that there was hardly any profit made on the food -- the profit came from the drinks the restaurant sold along with it. The taco as a loss leader ;-)
Same with inkjet printers nowadays. They cost almost nothing ($35), but when you run out of ink, you realize where the manufacturers get their money.
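The loss-leader logic in these examples can be sketched with toy numbers (the margins and conversion rate below are made-up assumptions for illustration, not the shop's actual figures):

```python
# Toy loss-leader arithmetic: the pizza loses a little per sale, but a
# small conversion rate into a high-margin product can more than cover it.
# All three numbers below are assumptions, not real data.

pizza_margin = -0.25     # assumed loss per $1 slice sold
conversion_rate = 0.05   # assumed fraction of pizza buyers who also order Indian food
indian_margin = 8.00     # assumed profit per Indian-food order

expected_profit_per_pizza_customer = pizza_margin + conversion_rate * indian_margin
print(round(expected_profit_per_pizza_customer, 2))  # prints 0.15
```

Under these assumed numbers, each pizza customer is worth a positive expected profit even though every slice sells at a loss, which is the freemium/loss-leader argument in miniature.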
interesting logic, but honestly it just doesn't seem very prudent [not saying that you are defending them]. doing something out of emotion, while old school, isn't a reason on its own. and the freemium model just doesn't make a ton of sense. if somebody is going in for cheap pizza, are they really going to pick up some indian food in the same order? if they want indian food, are they going to go to a place they consider a cheap pizza shop, or are they going to pick an "actual" indian restaurant?
the whole article just sort of reminded me of old school business practices that aren't well thought out, are based on emotion rather than metrics, and don't seem to work out in the end except in rare cases of luck.
I'd give the guy the benefit of the doubt. He's been running this joint for a while now, and I bet he actually is making decisions based on metrics he has. Though I agree that conversion from pizza to Indian food is probably pretty low, it may just be that the margins on the Indian food are high enough to warrant selling the pizza. I don't think he would stay in the pizza business unless it had promise in one way or another.
You are right. The guy is there every day. If he's like a typical small business owner (I've dealt with thousands), he knows exactly what he is doing and has it all in his head.
If you're speaking with investors and are good about networking with funded startups, chances are you can get office space for free for several months.
Regarding rent, you'll be paying a lot, and finding an apartment in NYC is a grueling process. But, there's a reason rent is high -- people love NYC!
Don't worry, soon the stylistic tides will turn and the analogy will be war/raising kids/making the perfect turkey sandwich instead of dating, and the articles will flow just as freely.