Hacker News | new | past | comments | ask | show | jobs | submit | login

I'm on the same page. Do people not analyze the problems themselves? Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works?

I don't get it.



> Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works?

That is exactly what they are doing, yes


That's my take as well. I've had my unPRed branches grabbed up and blindly merged by an agent twice now. The guy doing it was shocked both times that his PR had my change sets in it.

Also, one engineer is treating the code as assembly. I've asked some pointed questions about code in his PR, and the response was "yeah, I don't know, that's what the agent did".

Edit:

To everyone freaking out about the second guy: yeah, I think being unable to answer questions about the code you're PRing is ill-advised. But requirements gathering, codebase untangling, and acceptance testing are all nontrivial tasks that surround code gen. I'm a bit surprised that having random change sets slurped up into someone else's rubber-stamped PR isn't the thing that people are put off by.


My friend is a CTO at a non-tech company and he's now dealing with code from non-SWEs trying to self serve with LLMs.

But it's like a kid running a lemonade stand. Total DIY weekend project quality stuff that they are demanding go live. Hardcoded credentials, no concept of dev/qa/prod environments, no logging, no tests, no source control.

I'm not really sure teaching basic SWE practices / SDLC / system design to people whose day job is, like, accounting makes sense compared to just accelerating developer productivity.


It’s the same dilemma as of old: it’s easier to teach a doctor UML than to teach a coder doctoring. But, critically, that’s about building doctor-facing IT systems, not performing their skilled jobs.

Bringing code does not help, but a validated user story with flow diagrams, a UI suggestion, and a valid ticket could. That’s the bridge across the gap.

Were I that CTO, I’d explain that code carries liability: SWEs can end up in jail for malfeasance, and fines, penalties, and lawsuits are what await us for eff-ups. “Coders” get fired if their code doesn’t work. Same speech to the devs: do exactly as much unsolicited accounting as you wanna get fired for. Talk fences, good neighbours.


The ROI on teaching UML to a doctor is pretty low though right?

Non-technical people are not writing tickets, they are just slinging slop.

Another anecdote of things I've seen: a non-technical person setting up some web-scraping monstrosity with 200k lines of code. They beat their chest about how they didn't need the IT org. One month goes by, and of course it breaks as soon as anything on the website changes, and now they have a gun to IT's head to "fix it" and take it over.

This outcome for a DIY brittle web scraper is obvious to anyone that's ever written code, but shocking to someone who thinks LLMs are magic.
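The failure mode is easy to demonstrate. Below is a hypothetical Python sketch; the function name, CSS class, and HTML snippets are all invented for illustration, but this is the basic shape of any scraper that hardcodes the page's current markup:

```python
import re

def scrape_price(html: str) -> str:
    # Brittle assumption: the price always lives in a span with this exact class.
    match = re.search(r'<span class="price-tag">([^<]+)</span>', html)
    if match is None:
        raise RuntimeError("page layout changed; scraper needs fixing")
    return match.group(1)

old_page = '<div><span class="price-tag">$9.99</span></div>'
print(scrape_price(old_page))  # works against today's markup

# After a purely cosmetic redesign the data is still there,
# but scrape_price(new_page) now raises RuntimeError:
new_page = '<div><span class="product-price">$9.99</span></div>'
```

The scraper encodes no understanding of the data, only of the markup, so any change to the markup is fatal, which is exactly what surprises people who expect the LLM-generated version to be magic.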


No, you should have forward-deployed engineers sitting and working right beside these traditional non-SW roles if you need to fully integrate AI into their mix.


Right, unfortunately a lot of orgs are quickly letting loose some combination of non-tech self-serve AI coding and tech org staffing reductions rather than ADDING forward deployed engineers.


So he's being paid and is sitting there letting an AI tool do his work for him? Insanity.


We didn’t mind when typesetting was automated. Or when compilers were invented. Why is this different?


Because he's paid to deliver code that works. Letting an AI agent do everything would be fine if it didn't make any mistakes, but that's far from reality.


Compilers and typesetters make mistakes. Fewer as time goes on, but that’s not a categorical difference.

Do typesetters inexplicably change the meaning of the book or document being typeset? Do compilers alter the behavior intended by the programmer, sometimes in ways that are not immediately obvious? Did the invention of typesetters lead to investments so massive that the investors had to herald the end of handwriting (no equivalent analogy for compilers)?


It reminds me of the guy who replaced his static blog deployment scripts with asking ChatGPT to generate the HTML from his text based on a template, and said that he isn't sure the LLM isn't changing his writing but hopes it isn't.


On compilers, you know they do! Compilers have bugs and some languages have undefined behavior.

On typesetters and investment: the WYSIWYG word processor is on almost every home and office desk in the world.


So I take it we can soon replace coders entirely. Just fire all of them, and let some intern under a VP prompt the whole thing?


Resistance to technological change has been a thing since farming was invented. Socrates thought that writing would ruin everyone's memory, and that people who just relied on the written word would appear knowledgeable while actually knowing nothing.

The only difference is that this is happening to us.


Do typesetters or compilers write the code for you? Or are you perhaps using a disingenuous analogy?


A compiler writes the ASM code for you, and the typesetter does the layout for you, yes absolutely.

The high level language code is a prompt for the compiler. Consider that there is parsable C code whose behavior is not even defined. There are still bugs in compilers today, where the code produced is not what you intended. And further, modern compilers do lots of work to optimize performance. You usually don’t even look at the resulting code; you just gratefully accept the rewrite for the extra oomph.
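The "you never look at the resulting code" point holds even for Python: CPython compiles every function to bytecode, and the stdlib `dis` module shows what the compiler emitted. A minimal sketch (the function itself is just a throwaway example):

```python
import dis

# CPython compiled this function to bytecode the moment it was defined;
# almost nobody ever reads that bytecode in practice.
def add(a, b):
    return a + b

dis.dis(add)  # prints the compiler's output for inspection
```

The exact opcodes printed vary between Python versions, which is itself the point: the lower-level representation keeps changing underneath you, and you gratefully accept whatever the compiler produces.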


To that last guy, as the manager I would say "What is it that you do here??"


That's just a straight-shooter with "upper management" written all over him.


He signs the TPS reports.


I then just basically space out for a while.


“I’m the prompter.”


I take the prompts to the AI so the manager doesn't have to! I have prompting skills!!

I just can't make the joke work. There really are people that think they can get paid to press the agent's on button. How long before their checks stop clearing and it "just works itself out naturally"?


That's literally how some Meta AI jobs looked a few years back - set up a few parameters, push a button, wait until training and evals are finished; repeat if needed. $500k+/year.


> I take the prompts to the AI so the manager doesn't have to! I have prompting skills!!

This is honestly the mindset of the people on here who proudly proclaim that they haven't written a line of code in six months and are excited about what programming is "evolving" into. Naturally, _their_ AI skills aren't something that an "idea guy" can use to build a product without looping in a developer, so _his_ job is safe and will never go away -- "I understand system design, an LLM will never be able to do that!" Sure thing buddy.



do you... honestly not believe that system design is real???


That's not what I said. I said system design is not the exclusive domain of humans, so anyone thinking they possess some special knowledge of system design that an LLM isn't capable of obtaining is fooling themselves.

What color is your stapler?


"I write the prompts"


It's bizarre to me that people being paid to use their brains, with a job title including the word "engineer" (which essentially means "clever thought thinker" in Latin), are just offloading all of their thinking to a bot instead of using it as a way to ensure clean execution and faster understanding of the structures of underdocumented projects.


There are some people who are offloading all of their thinking to a bot, and I agree with you that I don't really understand this. But the good version of it is to offload some of your thinking to a bot so you can focus your own thinking on the parts that matter. My time is much better spent on "ah, there is a scalability tradeoff here" than "I guess I have to initialize the FooBarProviderServiceProvider in a different spot so that I can pass a mock to the FooBarProvisionConsumer unit tests".


And why wouldn't they? Companies are quite literally instructing them to do so. I work at such a company and have heard similar anecdotes from colleagues that work at other companies.


Why wouldn't you do this even if not instructed to do so?

I can do so much more with my spare time now. I throw agents at problems and get way more done.

$1k in tokens every day is easy to hit.


What exactly are you “getting done”? I’m really curious what you’re doing with so many tokens.


To be fair, taking an average SWE at $160k/y, spending $1k/m, and offloading mechanical ticket work from their working set sounds like a bargain to me. They could be spending the time on design and planning, working on new things, and figuring out how to save costs through optimizations. In fact, for every soul-sucking mechanical task you offload, the better off you are overall.

It’s not like AI is the first time this has happened. CI/CD and extensive preflight, integration, and canary testing are also ways of saving engineer time and improving throughput at the cost of latency and compute resources. This is just moving up the semantic stack.

Obviously, as engineers we say “awesome, more features and products!” but management says “awesome, fewer engineers!” Either way, pasting the ticket in and letting a machine do the work for a fraction of the cost was the right choice. There’s no John Henry award.


> pasting the ticket in and letting a machine do the work for a fraction of the cost was the right choice

If it were producing equivalent outcomes, sure. So far I haven't personally seen strong evidence for that. LLMs do write code pretty competently at this point, but actually solving the correct problem, and without introducing unintended consequences, is a different matter entirely.


This. LLMs are terrible at planning/architecture and maintaining clarity of vision across a project. There are lots of tools that mitigate these issues but they're going to keep coming up regardless because of the fundamental nature of LLMs.

If you're not doing the design of the solutions for problems as an engineer or at least making the decisions and owning the maintenance of that architecture/design, what even is your job at that point?


I’ve found LLMs very good at two things: 1. Recommending paths forward, 2. Following established architecture. Your job is to be able to treat the LLM and code as sheep

> LLMs are terrible at planning/architecture and maintaining clarity of vision across a project.

So are many corporations but that doesn't stop them from being economically successful.


> and offloading mechanical ticket work from their working set sounds like a bargain to me

Unfortunately the people who offload the work of understanding and interacting with tickets just end up offloading the consequences to everyone else who has to do extra work to make sure their LLM understands the task, review the work to make sure they built the right thing, and on and on.

The same thing happens when people start sending AI bots to attend meetings: The person freed up their own time, but now everyone else has to work hard to make sure their AI bot gets the right message to them and follow up to make sure what was supposed to happen in the meeting gets to them.


If someone sends a bot to a meeting, warn them the first time. Fire them the second, for exactly the reason that you said in your last paragraph: They're pushing their work onto other people.


Managers have processes for correcting for these behaviors and they fall into the second bucket of outcomes I mentioned.


That'd be crazy. The agent has a skill configured to fetch ticket descriptions from Jira by itself. Copy-pasting feels like manual labor.


Not what I do. I'll reformulate the ticket description so that the purpose and as many details as possible about the solution are made clear from the start. Then I tell Opus to go and research the relevant parts of the codebase and what needs to be done, and write its findings to a research.md file. Then I'll review that file, bring answers to any open questions and hash out more details if any parts seem fuzzy. When the research is sound I'll ask Opus to produce a plan.md document that lists all the changes that need to be made as actionable steps (possibly broken into phases). Then I'll let Sonnet execute the steps one by one and quickly review the changes as we go along.


You are making it too hard on yourself. Most people would just paste the ticket URL and type "fix this", then spend the next 3 hours on social media.

OTOH, I try hard to provide all possibly relevant context, manually copy/paste logs to reduce context overhead, always ask to produce an implementation plan and review it before making any code changes. Yet I often feel like a dinosaur here, all coworkers who tout "LLM productivity" just type a few words in and let the agent spin for hours without any guidance.


I'd call that irresponsible use. One of the principles I try to stick by is to never offload any major decisionmaking to the LLM without oversight, because some percentage of the decisions it makes are going to be wrong (and more often just against my taste).

> Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works?

There's also the pattern of creating an army of agents to solve problems. Human write a plan. One agent elaborates on it. Another reviews it and makes changes. Another splits it up into tasks and delegates out to multiple agents who make changes. Yet another agent reviews the changes, and on and on. All working around the clock.
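That elaborate-review-split-delegate loop can be sketched as a toy pipeline. Everything below is invented for illustration: each stub function stands in for a separate LLM agent call, and a real setup would replace the string munging with actual model invocations:

```python
# Toy sketch of the multi-agent pattern. All names here are hypothetical;
# each function is a stand-in for one agent in the chain.

def elaborate(plan: str) -> str:
    return plan + " [elaborated]"          # agent 1: flesh out the human's plan

def review(text: str) -> str:
    return text + " [reviewed]"            # agent 2/N: review and amend

def split_into_tasks(plan: str) -> list[str]:
    return [f"task {i}: {plan}" for i in range(3)]  # agent 3: delegate work

def worker(task: str) -> str:
    return task + " [done]"                # worker agents: make the changes

def pipeline(human_plan: str) -> list[str]:
    plan = review(elaborate(human_plan))
    tasks = split_into_tasks(plan)
    changes = [worker(t) for t in tasks]   # fan out to multiple workers
    return [review(c) for c in changes]    # final review pass over each change

print(pipeline("add rate limiting"))
```

The structural point is the fan-out and the layered review, not the stubs; the "around the clock" part comes from running `pipeline` continuously against a queue of plans.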


> Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works?

"Their ticket" = that was AI-generated too. After which they will wait for their AI-generated PR to be checked by an automated AI QA that will validate it against the AI-generated spec.

It feels like an important metric of "corporate AI adoption" should be how effective the human is at steering the AI.

IF THE HUMAN ISN'T EFFECTIVE, THE HUMAN NEEDS TO GO.


Actually no. We ask business analysts to supply documentation for whole products. We use AI to analyze that documentation and after that we use AI to create tasks in Jira. Business analysts will review them.

After that we use AI to translate the tasks to a more technical view.

After that we use AI to implement the tasks.

After that we use AI to review the tasks.

After that a human QA tests the tasks.

If all is good, the code is merged and lands in production.

And yes, we burn a lot of tokens but the process is very fast. It takes months instead of years.


You should.

If it manages to produce a working solution - then it's great! Why would you waste your time on it?

If it fails - then it's also great! You find your value by solving the ticket, which can be a great example of where a human can still prevail over the AI (joke: AI companies might be interested in buying such examples).

(All assuming that your time costs more than the token spending. Totally different story if your wage is less than the token cost.)


If your ticket description is large enough to put a dent in your context window, or otherwise end up as a meaningful share of total token expenditure, then something is very, very wrong with your ticket workflow. Nobody needs a 50-page ticket.

Why would you do anything else? It’s still faster than me doing it as long as I’m parallelizing. I can regularly get up to 5-6 things running in parallel at the moment with no downtime. I suspect by EOY I’ll figure out how to run more


