What holds me back the most from using these tools is the vagueness of the copyright situation. I wouldn't straight out want to use it at work. For hobby projects that I might want to use for something serious one day, I'm on the fence.
Developers who use this for serious stuff, what's your reasoning? Is it just a calculated risk, where the reward outweighs the risk?
Google v. Oracle is still being fought, a decade on.
What hope does the legal and legislative system have of keeping up here? The horse will be well and truly out of the barn by the time anything is resolved, and if AI keeps becoming increasingly useful, there will be no option but to bend the resolution to fit what has already happened.
> Developers who use this for serious stuff, what's your reasoning? Is it just a calculated risk, where the reward outweighs the risk?
It seems obvious that AI is the future, and that ChatGPT is the most advanced AI ever created.
For me (in the medical industry), if something goes wrong and someone dies a horrible death, I can just say that I didn't write that code, ChatGPT did. Not my fault.
Next time you are at the hospital getting an MRI, I hope you think about how it's entirely possible that ChatGPT wrote the majority of the mission-critical code.
I can't believe people are taking your comment seriously. How far into parody has reality moved, if a comment saying you'll happily blame ChatGPT when people die isn't enough to give the sarcasm away?
Maybe they're just replying without having properly read your post, but that's not great either.
Sad but true: these days so many news stories that read like satire turn out to be real that sarcastic comments are increasingly difficult to recognize as such without an explicit notice.
It's not hard to believe. Over the past few weeks, I've read several very serious comments that express a similar attitude, although not quite as blatantly.
I can assure you, as a former quality engineer at a medical device development facility, that there is absolutely, positively zero chance that anyone there will use any AI-powered coding tools to write code that goes onto any device that is ISO 13485, CE, or otherwise compliant with existing medical device regulations (I speak of the US and European markets; I cannot speak for others). There is literally a defined waterfall development cycle for FDA-regulated devices that requires software features to be very precisely specified, implemented, validated, tested, and manufactured. Anyone suggesting using AI at such a facility would be laughed out of the room, and perhaps even re-trained on the procedures. Anyone caught using such tools would probably be fired immediately, and all their code patches would be put under intense scrutiny and possibly rewritten; of course the device software they were working on would remain in development, unreleased, until that was fixed.
The above two comments show the difference between software "engineers" and "developers"... and none of the major social media platforms (or other consumer-level applications) employ engineers.
Other projects can't use waterfall development because they would like to actually produce something useful instead of what was decided at the start of the project.
This isn't the way pharmaceuticals are developed; we don't require the pharma companies to know how they work (and we shouldn't, because we don't know how many common safe drugs work). We validate them by testing them instead.
> Other projects can't use waterfall development because they would like to actually produce something useful instead of what was decided at the start of the project.
It's a whole different world of software development. If you set out to build flight control software because it is needed to run on a new airplane, you're not going to pivot midstream and build something else instead.
> For me (in the medical industry), if something goes wrong and someone dies a horrible death, I can just say that I didn't write that code, ChatGPT did. Not my fault.
Liability doesn't work that way. Your view is so naive I'm having doubts about whether you're an adult or not.
If you delivered the product, you're liable, regardless of where you got the product from.
After getting sued, you might be able to convince a judge that the supplier is liable. But getting sued is expensive, and the judge may not rule in your favour.
And even if it goes in your favour, OpenAI is simply going to turn around and point to the license you agreed to, in which no guarantee of fitness for purpose is specified, and all liability falls to the user.
I would trust ChatGPT code about as much as I trust code produced by any human. All the Therac-25 code was written by a human, so what is the argument here exactly? At least when you tell ChatGPT that its code is wrong, it agrees and tries to fix it. OK, it usually fails at fixing it, but it doesn't refuse to acknowledge that there is a problem at all, unlike in the case of the Therac-25.
I like to think that it is not about who (or what) writes the code in the first place, it is about the review and testing procedures that ensure the quality of the final product. I think. Maybe it is just hopeless.
In general we would like developers/engineers to know as much as possible about the things they're engineering. ChatGPT-based development encourages the opposite.
So because ChatGPT exists now, less experienced programmers will be hired to develop critical software under the assumption that they can use ChatGPT to fill the gaps in their knowledge?
Even in that case, I would argue that is entirely a problem of the process, and should be fixed at that level. An experienced programmer doesn't become any less experienced just because they use ChatGPT.
I honestly have an issue with using ChatGPT to write medical software. I don't know what your exact process is like, but I hope you're giving the code it generates extra scrutiny to make sure it really does what you put in the prompt. It kinda feels like the judge who used ChatGPT to decide whether to deny or grant bail.
> I honestly have an issue with using ChatGPT to write medical software.
GP is talking nonsense. No developer is ever going to be able to say "not my fault, I used what ChatGPT gave me" because without even reading the OpenAI license I can all but guarantee that the highly paid lawyers made sure that the terms and conditions include discharging all liability onto the user.
GP appears to think that if he sells a lethally defective toaster, he can simply tell his buyer to make all claims against an unknown and impossible-to-reach supplier in China.
Products don't work like that, especially in life-critical industries (I worked in munitions, which has similar if not more onerous regulations).
I'm sure it happens all the time; all people and processes are fallible.
But that's also why documentation is so important in this space.
I spent 15+ years building software for pharmas that was subject to GxP validation so I know the effort it takes to "do it right", but also that it's never infallible. The main point of validation is to capture the evidence that you followed the process and not that the process is infallible.
Let me provide a counterpoint: ChatGPT made the code base more readable, it was able to integrate a few useful solutions the devs didn't know about, and it helped write tests, even coming up with a few good ones on its own.
Going meta for a bit: before you can use a tool to produce medical device software, that tool must be qualified for use. I'd really like to see what the qualification process for ChatGPT would look like.
What is the qualification for using StackOverflow or a library book? What's the qualification for the keyboard that might introduce errors (hello, MacBook) or the monitor that might render improperly?
Not answering for medical industry, but answering for the similar realm of aerospace systems:
One big question is, does the proposed software tool assist a human engineer, or does it replace a human engineer?
If a tool replaces a human -- the phrase used often is "takes a human out of the loop" -- then that tool is subject to intense scrutiny.
For example, it would be useful to have a tool that evaluates the output of an avionics box and compares the output to expected results, to automatically prepare a test passed/failed log. Well, this would amount to replacing a human who would otherwise have been monitoring the avionics box and recording test results. So the tool has to be verified that it works correctly, in the specific operating environment (including things like operating system version, computer hardware type, etc.)
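To make the contrast concrete: a comparison tool like the one described is verifiable precisely because it is deterministic, so the same inputs always yield the same pass/fail log. A minimal sketch of that idea (the signal names and tolerance here are illustrative assumptions, not from any real test procedure):

```python
# Hypothetical sketch of a test-log comparison tool: check recorded
# avionics outputs against expected values and produce pass/fail entries.
# Signal names and the tolerance value are made up for illustration.

def compare_results(expected, actual, tolerance=0.01):
    """Return a list of (test_id, passed) entries, one per expected signal."""
    log = []
    for test_id, exp_value in expected.items():
        act_value = actual.get(test_id)
        passed = act_value is not None and abs(act_value - exp_value) <= tolerance
        log.append((test_id, passed))
    return log

expected = {"ALT_HOLD": 10000.0, "HDG_SEL": 270.0}
actual = {"ALT_HOLD": 10000.004, "HDG_SEL": 269.5}
log = compare_results(expected, actual)
# ALT_HOLD is within tolerance and passes; HDG_SEL is off by 0.5 and fails.
```

Because this behaves identically on every run, you can qualify it once for a given environment and trust its logs thereafter; that repeatability is exactly what an LLM lacks.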
So what about ChatGPT? One big hurdle is that, given the same input, ChatGPT will not necessarily provide the same output. There's really no way to verify its accuracy in a repeatable way. Thus I doubt that it would ever become a tool that replaces a human in aerospace engineering.
How about using it then to assist an aerospace engineer? Depending on the assistance, this should not necessarily be materially different than getting help from StackOverflow.
Book or StackOverflow: this isn't a "tool," and the developer is expected to have sufficient skill to evaluate the information provided. If they can't do this then they're not qualified for that project.
A keyboard would be an example of a source whose output is 100% verified: we assume that you can see what you're typing. A process with 100% verification does not need to be separately qualified or validated.
I'm not sure how monitor errors could factor into this, can you elaborate?
If I've learned one thing as an adult, it's do whatever the fuck you want (that my morals allow) and, if necessary, ask forgiveness later. Never ask for permission first; just do it.
I've been gravitating towards projects that wouldn't have to care what a court thought anyway. If a project can't operate in the presence of that kind of adversary, I'm better off treating that as a bug and fixing it instead of worrying about the particulars of why they're upset in the first place.