There are real productivity gains from using these tools right now. Instead of doing 1x your normal work, you can do 5x while still maintaining quality. Refusing them is like an accountant sticking to pen and paper because calculators are big and clunky.
Also, if your AI has a 20% error rate, you're not holding it right. You need to spend more time keeping it on rails: unit tests, integration tests, e2e tests, local dev plus browser use, preview deployments, staging environments, phased rollouts, AI PR reviews, rolling releases. With those in place, the error rate gets much closer to 0%.
That wasn't what the comment you responded to was referring to. I guess it makes sense since you are kind of like an LLM with how you respond to input.
> I feel the same way about the current crop of AI tools. I've tried a bunch of them. Some are good. Most are a bit shit. Few are useful to me as they are now. [...] If this tech is as amazing as you say it is, I'll be able to pick it up and become productive on a timescale of my choosing not yours.
I think the point the author is making is not that it's all useless, but against the overly simplistic idea that the plot of Amount of AI vs. Productivity in All Situations is a hockey-stick chart.
It's aggravating to be told to get excited about something when all you're saying is "it works sometimes, other times not so much. I'll keep checking, and when it's good enough for me I'll get on board."
Zod is installed in nearly every project I use. It's an essential part of my toolkit. I adore this library. It's near perfect as-is, and these additions make it even better. Thanks for all the hard work.
I used to use Zod until I realised it’s rather big (for many projects this isn’t an issue at all, but for some it is). Now I use Valibot which is basically the same thing but nice and small. I do slightly prefer Zod’s API and documentation, though.
Edit to add: aha, now I read further in the announcement, it looks like @zod/mini may catch up with Valibot -- it uses the same function-based design at least, so unused code can be stripped out.
Though exciting, it looks like there's still room for shrinkage: the linked article puts a bare-minimum gzipped @zod/mini at 1.88 kB, while Valibot is at 0.71 kB [1].
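The size gap comes down to that function-based design. Here's a minimal sketch (the validator names are illustrative, NOT the real Valibot or @zod/mini API) of why free-function validators tree-shake where a method-chaining class can't: each validator is an independent export, so a bundler can statically drop the ones a project never imports, whereas a single class with dozens of chained methods ships whole.

```typescript
// Hypothetical free-function validators, loosely in the style of
// Valibot / @zod/mini. Only the factories you import end up bundled.

type Validator<T> = (input: unknown) => T;

const string = (): Validator<string> => (input) => {
  if (typeof input !== "string") throw new Error("expected string");
  return input;
};

const number = (): Validator<number> => (input) => {
  if (typeof input !== "number") throw new Error("expected number");
  return input;
};

// Infer the parsed object type from the shape of validators passed in.
type Infer<T> = { [K in keyof T]: T[K] extends Validator<infer U> ? U : never };

const object =
  <T extends Record<string, Validator<any>>>(shape: T): Validator<Infer<T>> =>
  (input) => {
    const obj = input as Record<string, unknown>;
    const out = {} as Infer<T>;
    for (const key in shape) (out as any)[key] = shape[key](obj[key]);
    return out;
  };

// Compose from free functions rather than chained methods.
const User = object({ name: string(), age: number() });
const parsed = User({ name: "Ada", age: 36 }); // typed { name: string; age: number }
```

A chained API like `z.string().min(1)` hangs every method off one object, so the bundler can't prove which methods are unused; the function style makes dead-code elimination trivial.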
I know this is a port, but I really hope the team builds in performance-debugging tools from the outset. Being able to understand _why_ a build or typecheck is taking so long is sorely missing from today's TypeScript.
Yes, 100% agree. We've spent so much time chasing down what makes our build slow. Obviously that is less important now, but hopefully they've laid the foundation for when our code base grows another 10x.
Agreed. I first saw it at Stripe (along with prefixing every ID). Whoever at Stripe (or wherever it was invented) deserves a good pat on the back. Its adoption has been a huge win for DX generally.
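For anyone who hasn't seen the convention: the idea is to embed the resource type in the ID itself, so any ID pasted into a log, URL, or support ticket immediately tells you what kind of object it names. A hedged sketch (the prefixes and helper here are illustrative, not Stripe's actual scheme):

```typescript
import { randomBytes } from "node:crypto";

// Hypothetical helper: a short type prefix plus random hex suffix.
function prefixedId(prefix: string, bytes = 12): string {
  return `${prefix}_${randomBytes(bytes).toString("hex")}`;
}

const customerId = prefixedId("cus"); // e.g. "cus_3f9a1c..."
const paymentId = prefixedId("pay");
```

A side benefit is cheap sanity checks: a handler can reject an obviously wrong ID (a `pay_` where a `cus_` was expected) before ever touching the database.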
Sure, but with this new predictive model we will have better predictions to work backwards from.
OC was saying (I'm going to paraphrase) that this is the death of understanding in meteorology, but it's not, because we can always work backwards from accurate predictions.
Comparing the difference between correct predictions and incorrect predictions, especially with a high accuracy predictive model, could give insight into both how statistical models work and how weather works.
Lex Fridman has a long interview [1] with Marc Raibert, CEO of Boston Dynamics, which is really excellent. It might partially or wholly answer your question.