Hacker Times | new | past | comments | ask | show | jobs | submit | login

I was surprised not to see any discussion on whether the author used AI to help write this post. As many people say, writing is thinking.

I started getting that "I'm reading another AI-written blog post" feeling around 1/3 of the way through, but I don't consider myself super calibrated on this.

Pangram seems pretty confident it's AI (https://www.pangram.com/history/e9f6eb77-86f9-46d0-a6c1-e57c...). But I know these tools aren't perfect. I'd love to hear from the author what their process was in writing this piece!

Related question (I'm trying to work this out for myself):

If you believe using AI to write an email or blog post for you isn't okay, but using AI to write code for you is... what's the difference?

Right now my instinct is something like:

- Code can be verifiably correct (especially w/ good tests) so it's less of a purely-creative act than writing.

- But always, always double-check the tests!

- I still wouldn't submit a PR where I can't vouch for every line of code.

- AI-written documentation and specs are mostly still bad and should be looked down upon. But mostly because the quality, at least today, is poor. (Lots of duplication, lack of a clear understanding of the reader's intent and needs, no thoughtful curation, etc.)

- Be psychologically ready to update these priors as models change.
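The "double-check the tests" point is easy to illustrate: a test suite only verifies the cases it actually exercises. A minimal sketch (the `add` function and its bug are hypothetical, invented for illustration):

```python
def add(a, b):
    # Hypothetical AI-generated implementation with a subtle bug:
    # it only behaves like addition when a is non-negative.
    return a + b if a >= 0 else a - b

def test_add():
    # A vacuous test: it exercises only the happy path,
    # so it passes even though add(-1, 1) returns -2, not 0.
    assert add(2, 3) == 5
```

Here `test_add` passes, so the code looks "verifiably correct" — until you double-check what the test actually covers.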

I'd love to hear from anyone who's thought more about this.




Great question! I had A.I. critique what I wrote, and wherever it gave suggestions like “this sentence runs too long” or “this can be more punchy”, I considered them and changed direction if I thought it was warranted. Notably, when I found a criticism valid, I typed out my own fix rather than taking the LLM’s direct suggestion, then asked it to critique my revision. I stopped when I had read and re-read the final drafts end to end a couple of times and was reasonably happy with the flow myself. All the core ideas, the analogies, the choice of structure, etc. are authentically my thoughts and my message. The thing A.I. reined in the most was my tendency toward run-on sentences in early drafts. The concepts percolated in my head for weeks before I decided to blog about it; writing it end to end and revising it over and over took about 3 hours.

The one thing I can tell you is that Pangram is confidently wrong in this instance. And I now worry about how many people may have relied blindly on such assessments in consequential places (school essays?). That ties back to the thesis of my piece: where do you rely on A.I., and where do you rely on your own intelligence?

On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read. My school’s librarian wrote, ambiguously, “write this in your own words”. I asked her what she meant. It turned out she thought I’d copied it from somewhere, even though it was all my own words. I went on to become the school topper in my final year for English (and one spot shy of being the school topper for Computer Science).


Thanks for sharing your process! It's helpful and refreshing to hear from someone about how they engage with AI when writing, and where / when the detection tools may fail.

(We obviously live in a more nuanced world than most social media interactions might make you think :P)

> On a lighter note, decades ago, in middle school, we had an exercise to summarize a book we read.

My first experience with plagiarism was in first grade, when we were told to write a book report about a subject during our library time. I diligently took my book on the musk ox and copied three pages word-for-word into my notebook as my report. I can't remember when or how we learned this wasn't "right", but I still think back on that and laugh.


Taking you at your word, your A.I. revision process nonetheless seems to have yielded content that may as well have been generated from scratch, given how difficult it is to get through.

    The valuable engineer is the one who sees the hidden constraint before it causes an outage. The one who notices that the team is solving the wrong problem. The one who reduces a vague debate into crisp tradeoffs. The one who identifies the missing abstraction. The one who can debug reality, not just read code. The one who can create clarity where everyone else sees noise.
This is a list of six things, disguised as an actual paragraph. Of sentence fragments disguised as actual sentences. Etc. Either you wrote this yourself and the AI didn't tell you "this is repetitive and list-y", or...

    "The software engineers who will be most valuable in the future are not the ones who do everything themselves. They are the ones who refuse to spend time on work that A.I. can do for them, while still understanding everything that is done on their behalf."

    "The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence."

    "In that world, the engineer is not replaced by A.I. The engineer becomes more leveraged because they are operating above the level of raw output."

    "The ability to explain why something works, not just that it appears to work."

    "That process is not optional. It is how engineers acquire and elevate their competency."

    "The support system may make you look functional, but it does not make you capable."

    "The challenge is not merely adopting A.I. tools. It is protecting the conditions under which real thinking, learning, and craftsmanship continue to thrive."

    "They will need interview loops that test reasoning, not just polished answers."

    "The organizations that handle this well will not be the ones that simply push A.I. adoption hardest. They will be the ones that learn to separate leverage from dependency, acceleration from imitation, and genuine capability from convincing output."
^ Which of these are your thoughts? They all look like slop to me.

Sorry but it's very obvious you used an LLM for more than just suggestions. Ironic given the point of the article.

Can you explain why? I'm getting better at detecting some kinds of AI writing, but I constantly see comments like this on HN for things I'm much less suspicious of, and I want to understand why people make them.

See my comment in this thread for what jumped out the most to me.

Can you link to it? Your comment history does not show such a comment.

"Taking you at your word..."

Oh, I'm sorry, I didn't see you were different from the commenter I was responding to.

Don't do it, man. Just stop. You lose something of yourself when you turn to AI.

Right at the top: "That distinction matters more than people think." That's basically telltale AI :)

Also the entire framing around "judgment" and "taste" is what LLMs love to parrot about the topic.

There are fair arguments in the post but I totally agree that "writing is thinking" and also holding myself to "if you didn't bother to write it, why would I bother to read it"?


One of the many things that has been strange to me is how often people will label written thoughts as AI slop when the "signs" are just normal phrases. Sure, that's a tired expression, and I 100% agree we should be critical of writing that leans on pointless, trite expressions. But people wrote in that way for years before LLMs.

I find it very interesting that widespread discourse around the quality of prose and rhetoric has only emerged now that LLMs have become ubiquitous.


> I was surprised not to see any discussion on whether the author used AI to help write this post.

It is definitely AI-written, far beyond "AI assisted". This is a shower thought turned into a needlessly long machine-generated essay that doesn't say anything a chatbot wouldn't say if you said "hey ChatGPT, write me a thought-provoking essay on <x> for HN".

I made a comment about it, along with several other folks, but the thing is... we get these AI-written "AI is bad" / "AI is great" articles multiple times a day. Debating them doesn't scale, but neither does complaining about them, and especially not complaining in a thoughtful way. Most people on HN are content to argue with a machine.


What counts as AI help, and therefore should be disclosed? For example, I often use Grammarly to edit some of my more important writing (though not this post, obviously) because it finds grammar mistakes, gives good readability suggestions (I have a tendency to be wordy), and makes the process quicker. I don't always take its advice, as many of its suggestions are not in my voice, but it is a useful tool. So do I disclose?

I did not read it all, but it's most likely AI. I read the beginning and that was just garbage in my opinion. I also ran the whole piece through my shortening tool, and the text came back larger than normal, which is another tell that it wasn't written by a two-legged.

Using AI to write code makes more sense because the grammar is almost infinitesimally smaller, with all the practical repercussions that come with that.

Verifiability, yes, but also tone, terseness, etc., and how big those "problem spaces" are in written language versus programming language.


This article looks entirely written by AI to me. It's difficult to buy that it served as a mere light editing tool.

It hit me in the very second sentence:

"The danger is not that A.I. will make people lazy in some vague moral sense. It is that it makes it easy to simulate competence without building competence."

Ironically, given the article's point, using AI so heavily in writing makes us worse writers. There's something in polishing words by hand, over many essays and many years, that makes us good writers.

I see it in AI-generated UI as well. The AI lacks a theory of mind about its users. It seems fine for functional stuff -- I wish a lot of crummy local state government websites were built with AI, because AI still seems to have a better theory of mind than the lowest-cost bidders -- but it leaves an "I didn't care enough to polish this myself" feel.

In that way AI UI feels very similar to AI writing.

I think the author still has good ideas here, but the problem with having AI write out your good ideas is that it generates the smell of thoughtless AI slop so strongly that even if your thoughts are good, the odor completely masks it.

I suspect people will start learning to not have AI assist too much with their writing, because it will actively hurt the interpretation of their ideas, no matter how good their ideas are. Better to be a bit rough around the edges but with good ideas than slop, because readers will make their judgment early.



