Roughly 80% of the time, currently. Which is great! But if you don't keep that in mind, you risk making a fool of yourself.
It'll most likely improve in the future, I'm sure. The amount of real-world progress this will bring to humanity boggles the mind. I think we'll see progress in many areas, some as yet unimagined, because the power of mathematics, algorithms, and advanced systems will become available to far more people thanks to this "helpful uncle".
I do hope they work on improving its math skills, though, especially so it can explain the steps taken when solving equations or algorithms, or even how algorithms could be "translated" and implemented in various languages. In my opinion this is key to understanding mathematics, and a whole host of other systems as well. Moreover, it helps people who previously didn't have the means to understand these things, so the effect on "world knowledge" is cumulative.
Personally, I've found ChatGPT a great help in understanding many things much better, everything from mathematical equations to subsystems in Linux. And of course, it has helped me improve my coding.
I'd say it has approximately as high a chance of working as "random code from the internet". Also, it often relies on an old version of an API or a library.
This isn't really my point. My point is that you can't ask the documentation to expand upon what little is already in it. Stack Overflow works a little better for this purpose, because you can ask follow-up questions and get more than one answer. But even there you have to wait days for answers, and sometimes you don't get any at all. That's really inefficient, and perhaps you forgot why you asked in the first place, or just gave up and moved on.
Not so with an AI chat system. You get the answer immediately, and you can ask as many follow-up questions as you like, to your heart's content. This means that, sure, even if the code you get from ChatGPT has approximately as high a chance of working as "random code from the internet," you can't ask that "random code from the internet" to explain why it's good or bad, why it was written a particular way, whether there are better approaches, or to error-check it one more time, or complain, "Hey, random code, your dumb code didn't work!" In that way ChatGPT is already leaps and bounds better than "random code from the internet."
Also, I assume here that you aren't just throwing darts at the wall, but testing out code with a clear goal in mind. Then it can be wise to ask ChatGPT about the underlying system, which it will gladly, and immediately, explain.
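As a concrete illustration of the "old version of an API" problem: one pattern I've seen generated code fall into (my own example, not something from this thread) is the pre-3.10 import `from collections import Iterable`, which raises an ImportError on Python 3.10 and later. The fix, and the kind of explanation a chat system can give you on the spot, looks like this:

```python
# Generated code often targets an old API. For example:
#     from collections import Iterable   # worked up to Python 3.9
# raises ImportError on Python 3.10+, because the ABC aliases
# were removed from `collections`. The current location is:
from collections.abc import Iterable

def flatten(items):
    """Recursively flatten nested iterables (treating strings as atoms)."""
    for x in items:
        if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
            yield from flatten(x)
        else:
            yield x

print(list(flatten([1, [2, [3, 4]], 5])))  # → [1, 2, 3, 4, 5]
```

With a static snippet from the internet, that ImportError is a dead end; with a chat system, you can paste the traceback and ask what changed.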