Our sales and marketing people have started building their own tools. This week they actually launched a terminal app.
They've hit a wall with deployment, for now, but it's amusing to watch.
And since I wouldn’t trust their stuff (or Claude’s) with a 10-mile-long stick, I strongly suggested we put it on Cloudflare behind eight layers of Access / Zero Trust. Easy deployment, and it "solves" (if we can call it that) many of the security issues (or not; maybe I’m wrong).
Why would the synthesis round get more expensive than the regular rounds?
> and quickly realized throwing 5 mediocre models at a problem just makes them argue in circle.
What was your selection strategy? My current issue is more that the more models I add, the less likely any specific one is to win two rounds in a row. Which would make perfect sense no matter the model quality, no? Unless there’s a huge gap.
> For brainstorm mode maybe weight models by past accuracy instead of pure voting?
By adding outputs history and a way to track the actual outcomes?
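A minimal sketch of that idea, assuming a simple tracked-outcomes history per model. The class name, the Laplace smoothing, and the vote-aggregation scheme are my own choices for illustration, not anything from the thread:

```python
from collections import defaultdict

class WeightedVoter:
    """Weight each model's vote by its historical accuracy instead of one-model-one-vote."""

    def __init__(self):
        self.wins = defaultdict(int)   # outcomes later judged correct, per model
        self.tries = defaultdict(int)  # total tracked outcomes, per model

    def record(self, model, correct):
        """Log one actual outcome for a model's past answer."""
        self.tries[model] += 1
        if correct:
            self.wins[model] += 1

    def weight(self, model):
        # Laplace smoothing: a model with no history gets 0.5, not 0
        return (self.wins[model] + 1) / (self.tries[model] + 2)

    def vote(self, answers):
        """answers: {model_name: answer}. Returns the answer with the highest total weight."""
        scores = defaultdict(float)
        for model, answer in answers.items():
            scores[answer] += self.weight(model)
        return max(scores, key=scores.get)
```

The smoothing matters: without it, a brand-new model would have zero weight and could never break into the rotation.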
More than that. Building a throwaway, single-use web app for one annoying task kind of makes sense now, sometimes.
I had to create a bunch of GitHub and Linear apps. Without me even asking, Codex whipped up a web page and a local server to set them up: collecting the OAuth credentials and forwarding them to the actual app.
Took two minutes, I used it to set up the apps in three clicks each, and then just deleted the thing.
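The local-server part of that flow is tiny to sketch. This is not what Codex generated — the port, the placeholder client ID, and the function names are all hypothetical — just a minimal stand-in showing how a throwaway local server can catch an OAuth redirect and hand back the authorization code:

```python
import http.server
import urllib.parse
import webbrowser

# Hypothetical values: the real client_id and redirect URI come from the app's OAuth config
AUTH_URL = (
    "https://github.com/login/oauth/authorize"
    "?client_id=YOUR_CLIENT_ID"
    "&redirect_uri=http://localhost:8765/callback"
)

class CallbackHandler(http.server.BaseHTTPRequestHandler):
    code = None  # filled in when the provider redirects back to us

    def do_GET(self):
        # The provider redirects to /callback?code=... ; grab the code from the query string
        query = urllib.parse.urlparse(self.path).query
        params = urllib.parse.parse_qs(query)
        CallbackHandler.code = params.get("code", [None])[0]
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You can close this tab.")

    def log_message(self, *args):
        pass  # keep the terminal quiet

def collect_code(port=8765):
    """Open the auth page, serve exactly one request, return the captured code."""
    webbrowser.open(AUTH_URL)
    with http.server.HTTPServer(("localhost", port), CallbackHandler) as server:
        server.handle_request()
    return CallbackHandler.code
```

Once you have the code you exchange it for a token server-side, and the whole thing can be deleted — which is the throwaway part.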
This. So much.
Nobody cares whether it’s AI or goblins under the hood. Just like nobody cares how smartphones or the internet work. The only thing that matters to the majority of users is what it does for (or to) them.
Apple’s marketing was (is?) textbook this.
Also, I’d bet most people building with LLMs don’t care, or even know about, PyPI.
It’s truly amazing. This is why I’m not surprised people are ‘blown away’ by LLMs. They were never truly intrinsically intelligent; they were expert regurgitators of knowledge on demand.
Steve already carried immense scar tissue from starting out with the technology. And yet… this wisdom goes right over people’s heads. More fool them.
> how OpenAI is not-so-subtly adopting a social-network-esque model, in how it's fine-tuned its chat system to always suggest another question that the user might want to ask.
There’s that, but it could also be adaptation to the fact users… just don’t know what to do with it.
Just like the prompt suggestions they added for new conversations shortly after releasing the first app. Those seem to be mostly gone now, at least on mobile.