Ironically making the case for the thesis of the piece - what happens when you let A.I. do all the thinking instead of exercising competent judgement. Disclaimers and leaving real thought to others do not make it much better. Pangram is confidently wrong here.

I read the piece before coming to the comments and had a similar feeling as OP - hence my comment. If AI wasn't used then my apologies, I didn't mean to diminish your work.

I agree with the premise of your post, just felt it was a bit long and the section headers read a little weird.


Then why is the title photo very obviously AI? I feel like it puts people on alert right off the bat.

Great question! I had A.I. critique what I wrote, and wherever it gave me suggestions like “this sentence runs too long” or “this can be more punchy”, I considered whether the criticism was warranted. Notably, when I thought a criticism was valid, I typed out my own revision rather than taking the LLM’s direct suggestion. I’d then ask it to critique my revision. I stopped when I had read and re-read the final drafts end to end a couple of times and was reasonably happy with the flow myself. All the core ideas, the analogies, the choice of structure, etc. are authentically my thoughts and my message. The thing A.I. reined in the most was my tendency toward run-on sentences in early drafts. The concepts percolated in my head for weeks before I decided to blog about it - writing it end to end and revising it over and over took about 3 hours.

The one thing I can tell you is that Pangram is confidently wrong in this instance. And I now worry about how many people may have relied on such assessments blindly in consequential places (school essays?). Which ties back to the thesis of my piece - where do you rely on AI and where do you rely on your own intelligence.

On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read. My school’s librarian wrote ambiguously “write this in your own words”. I asked her what she had meant by that. She had thought I’d copied it from somewhere even though it was all my own words. I went on to become the school topper in my final year for English (and one spot shy of being the school topper for Computer Science).


Thanks for sharing your process! It's helpful and refreshing to hear from someone about how they engage with AI when writing, and where / when the detection tools may fail.

(We obviously live in a more nuanced world than most social media interactions might make you think :P)

> On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read.

My first experience with plagiarism was in first grade, when we were told to write a book report about a subject during our library time. I diligently took my book on the musk ox and copied three pages word-for-word into my notebook as my report. I can't remember when or how we learned this wasn't "right", but I still think back on that and laugh.


Taking you at your word, your A.I. revision process nonetheless seems to have yielded content that might as well have been generated from the start, given how difficult it is to get through.

    The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
This is a list of six things, disguised as an actual paragraph. Of sentence fragments disguised as actual sentences. Etc. Either you wrote this yourself and the AI didn't tell you "this is repetitive and list-y", or...

    "The software engineers who will be most valuable in the future are not the ones who do everything themselves. They are the ones who refuse to spend time on work that A.I. can do for them, while still understanding everything that is done on their behalf."

    "The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence."

    "In that world, the engineer is not replaced by A.I. The engineer becomes more leveraged because they are operating above the level of raw output."

    "The ability to explain why something works, not just that it appears to work."

    "That process is not optional. It is how engineers acquire and elevate their competency."

    "The support system may make you look functional, but it does not make you capable."

    "The challenge is not merely adopting A.I. tools. It is protecting the conditions under which real thinking, learning, and craftsmanship continue to thrive."

    "They will need interview loops that test reasoning, not just polished answers."

    "The organizations that handle this well will not be the ones that simply push A.I. adoption hardest. They will be the ones that learn to separate leverage from dependency, acceleration from imitation, and genuine capability from convincing output."
^ Which of these are your thoughts? They all look like slop to me.

Sorry but it's very obvious you used an LLM for more than just suggestions. Ironic given the point of the article.

Can you explain why? I'm getting better at detecting some kinds of AI writing, but I constantly see comments like this on HN for things I'm much less suspicious of, and I want to understand why people make them.

See my comment in this thread for what jumped out the most to me.

Can you link to it? Your comment history does not show such a comment.

"Taking you at your word..."

Oh I'm sorry, I didn't see you were different than the commenter I was responding to.

Don't do it, man. Just stop. You lose something of yourself when you turn to AI.

Thank you for sharing this. We are all less rational than we imagine ourselves to be, even if we're hyper-critical of ourselves and exercise a lot of intellectual humility.

The better question may be "What value did that person acting as a glorified front-end for Claude create?" (vs. what they were expected to).

> Is the author actually claiming that that person must not deliver any results beyond their 10 units?

No, I'm claiming that if someone or something else produced your 10 units of work, you had better be able to verify that those 10 units are of at least the same quality as if you had produced them yourself. This is the bare minimum and not something to shift onto other people reviewing your work.

Beyond that, if that's all you do, you are basically proving you're replaceable. If you're smart, you'll reallocate intellectual capacity that was freed up by A.I. onto something A.I. can't do today.


100% agree. The key difference now, though, is that it's no longer a 'sink or swim immediately' situation - which used to be a forcing function against intellectual laziness where it was a choice.

> "not doing cognitive push-ups leads to cognitive atrophy" This is one of the points being made in the post, at least in reference to people who already have some mastery of their craft. If they outsource their thinking without elevating it, they aren't exercising that metaphoric muscle between their ears.
