
> nobody got into it because programming in Perl was somehow aesthetically delightful.

To this day I remember being delighted by Perl on a regular basis. I wasn't concerned with its aesthetics, though I was aware it was considered inscrutable, and the fact that I could read & write it filled me with pride. So yeah, programming Perl was delightful.


Yes, this is what I thought, too. I did program in Perl because it was beautiful. No other computer language compares so favorably with human language, including in its ambiguity. Not everyone considers this a good feature :)

You missed something much more important than all 4 of those points:

- what does the human behind the keyboard think

If you want us to understand you, post your prompts.

Some might suggest that the output of an LLM has value on its own, disconnected from whatever the human operating it was thinking, but I disagree.

Every single person you speak with on HN has the same LLM access that you do. Every single one has access to whatever insights an LLM might have. You contribute nothing by copying its output; anyone here can do that. The only differentiator between your LLM output and mine is what was used to prompt it.

Don't hide your contributions, your one true value - post your prompts.


The prompt & any follow-ups do have notable effects, but IMO this just means that most of the actual meaning you wanted to convey is in those prompts. If I was your interlocutor, I'd understand you & your ideas better if you posted your prompts as well as (or instead of) whatever the LLM generated.

> The prompt & any follow-ups do have notable effects, but IMO this just means that most of the actual meaning you wanted to convey is in those prompts.

If you mean in the sense of differentiating meaning from the base model, I take your point. But in another sense, take GPT-OSS 120b as an example: the weights are around 60 GB, while my prompt + conversation are, say, under 10K. What can we say then? One central question seems to be: how many of the model's weights were used to answer the question? (This is an interesting research question.)

> If I was your interlocutor, I'd understand you & your ideas better if you posted your prompts as well as (or instead of) whatever the LLM generated.

Indeed, yes, this is a good practice for intellectual honesty when citing an LLM. It does make me wonder though: are we willing to hold human accounts to the same standard? Some fields and publications encourage authors to disclose conflicts of interest and even their expected results before running the experiments, in the hopes of creating a culture of full disclosure.

I enjoy real human connection much more than LLM text exchanges. But when it comes to specialized questions, I seek any sources of intelligence that can help: people, LLMs, search engines, etc. I view it as a continuum that people can navigate thoughtfully.


> how many of the model's weights were used to answer the question? (This is an interesting research question.)

That’s not the point. Every one of your conversation partners has the same access to the full 60 GB weights as you do. The only things you have to offer that your conversation partners don’t already have are your own thoughts. Post your prompts.

> I enjoy real human connection much more than LLM text exchanges. But when it comes to specialized questions, I seek any sources of intelligence that can help: people, LLMs, search engines, etc. I view it as a continuum that people can navigate thoughtfully.

We are all free to navigate that continuum thoughtfully when we are not in conversation with another human who expects to be talking to another human.

If you believe that LLM conversation is better, that’s great. I’m sure there’s a social media network out there featuring LLMs talking to other LLMs. It’s just not this one.


I want to point out two conversational disconnects and offer some feedback, person to person. I edited my post a bit, so maybe you replied to a previous draft of mine. Anyhow, in terms of what we can see now, I want to clear up a few things:

---

>>> aB: The prompt & any follow-ups do have notable effects, but IMO this just means that most of the actual meaning you wanted to convey is in those prompts.

>> xpe: If you mean in the sense of differentiating meaning from the base model, I take your point.

(I clarified; seems like we agree on this.)

> aB: That’s not [my] point.

(Conversational disconnect #1)

---

>>> aB: If I was your interlocutor, I'd understand you & your ideas better if you posted your prompts as well as (or instead of) whatever the LLM generated.

>> xpe: Indeed, yes, this is a good practice for intellectual honesty when citing an LLM.

(I clarified; seems like we agree on this.)

> aB: Post your prompts.

(Conversational disconnect #2)

---

> Post your prompts.

This feels abrasive. In another comment you repeat this line pretty much verbatim several times.

It is unclear if you are accusing me of using an LLM. I'm not.

---

> If you believe that LLM conversation is better, that’s great.

I hope you recognize that is not what I said, nor how I would say it, nor representative of what I mean.

> I’m sure there’s a social media network out there featuring LLMs talking to other LLMs. It’s just not this one.

This doesn't reply substantively to what I wrote; it feels like a caricature of it.

> That’s not the point.

It would be kinder to the reader to say "That's not my point". Otherwise it can sound like you get to decide what the point is.

Overall, we agree on many things. But somehow that got lost. Also, the tone of the comment above (and its grandparent too) feels a bit brusque and condescending.


“Like hot cakes” is relative.

> The 512GB Mac Studio was not a mass-market machine—adding that much RAM also required springing for the most expensive M3 Ultra model, which brought the system’s price to a whopping $9,499.

The number of people willing to spend $10,000 on a computer is pretty tiny. Maybe they are common enough in HN circles, but I doubt anyone at Apple is losing sleep over them.


Of course, for a corporation working on AI products, a $10,000 workstation might just be a necessary tool.

Just a guess, but I think it's entirely possible that Apple sold through the full production run they intended for this generation of the machine, and they don't want to order a new batch before the next generation of processors comes out.

I have to think that Apple is close to replacing the M3 Ultra with an M5 Ultra or something of the sort.


A retailer told me they sold more 512 GB RAM Mac Studios than any other type. N=1, I know, but still...

There is a $6,000 value-add service to configure your Mac Mini with AI and make it accessible over iMessage.

Yup. In my experience, average non-nerd folk have very little feel for this stuff. I suspect some believe the energy consumption of a phone vs. a car is basically a toss-up.

This submission might be an HN record for the highest % of commenters who skipped reading the article. I'm sure it's always high, but so far there are 125 comments and maybe 3 or 4 referencing what was in the actual article.

Yes, from the title and first few comments I thought it was about getting customer support and having to talk to a chatbot first. For anyone else who didn't read, this article is about how mindlessly copy-pasting LLM output is comparable to "making me talk to your chatbot".

Honestly, I pretty much never actually read the articles, and I don't see anything wrong with that. HN is more like a discussion club, and the article is just a starting point for the discussion. If the discussion stays on topic, that's great; if it moves onto better topics, that's even better.

"Don't make me read your blog"

Do us all a favor and downvote those posts. I do, because I agree with the author, and it doesn't matter whether the text was human- or AI-generated. And if you're reading this and confused by my comment, try to RTFA; it's not long.

The article wasn't about customer support chat bots at all.

The article wasn't about this at all. It wasn't about customers, about AI customer service, or about seeking help.


Oh good catch. I did not go back far enough. They should create a new account if they are chill now.

No. This person should not try to circumvent moderation by creating new accounts. They should ask the moderation team for reinstatement of normal posting privileges, but be willing to accept a refusal. They've behaved appallingly.

I haven't seen this before. Does (dead) mean they got downvoted, or that everything they write is voided? How do you know what they wrote is appallingly bad?

Turn on "showdead" in your profile and you can see what they wrote. Without it, you just see that it's dead.

Can be either. We know because receipts were provided upthread.

We can accept the other side of the curve, but it has consequences that many are unwilling to discuss, let alone face.

Many social services & benefits are designed under an assumption of growth. Of course we can change the assumptions, but this also requires changing the programs, either making them less generous or raising taxes. Neither of those is a vote-getter, so politicians & govt staff feel a strong incentive to try anything but those. We can try to make it up with immigration too, but at the scale required, immigration is also unpopular.

I don't think there's any silver bullet here.


I think we're in the same space. It's about what kind of landing we have, not whether we can avoid coming down to the ground at all.

Merkel and immigration to Germany come to mind.

