
Even a dog can vibe-code! And the apps kinda, sorta work most of the time, like most apps vibe-coded by people!

I'm reminded of the old cartoon: "On the Internet, nobody knows you're a dog."[a]

Maybe the updated version should be: "AI doesn't know or care if you're a dog, as long as you can bang the keys on a computer keyboard, even if you only do it to get some delicious treats."

This is brilliant as social commentary.

Thank you for sharing it on HN.

--

[a] https://en.wikipedia.org/wiki/On_the_Internet%2C_nobody_know...




Thanks for the kind words. I'm blown away by the response and positivity here.

There's definitely some social commentary to be had in the whole project. I decided it's best left to the reader to find their own rather than assigning mine to it.


A few days before this article was posted to HN, I had commented (https://hackertimes.com/item?id=47086836) on a post triggered by the drop in quality or engagement of the Show HN page. I was playing with the phrase "everyone and their dog" that the person I was responding to used, saying the "and their dog" part was more problematic than the "everyone" part, and drawing a parallel between the dog and an LLM: the solutions they both produce lack any guarantee of human intentionality and ownership.

And then your dog read my comment and said "hold my biscuits," I guess.


If we strip all euphemisms, Stockholm syndrome and hype from this, it is "AI doesn't know or care if you're a dog, as long as you pay for the tokens".

The human built the system, the AI did the implementation, and the dog provided the "intent," even if that intent was just treat-seeking randomness. It turns software creation into something that looks less like writing and more like cultivating.

Cultivating is a great word choice. It would fit in nicely in a Brave New World.

This reaction feels like an LLM wrote it.

Just because LLMs habitually make negative comparisons doesn't mean we have to go around accusing anyone who uses them of having used an LLM.

I had to smile to myself when I noticed contrastive negation—with em dashes!—in a text written in the 90s. LLMs don't dictate what good writing is; people do.

we will almost certainly see a mass reactionary movement across disciplines to adopt styles out of distribution for LLMs, a cultural arms race that will be discussed in depth in the decades to come.

the new punk rock.


“I learned it from watching you dad!” —An LLM

Funny idea, but this proves my point that these tools are actually just slot machines. Except the house in this case takes the money you give them and lights it on fire.

Notice how people also have weird superstitious habits when using LLM tools: "You gotta write the prompt this way, say this first," without having any way to prove it works. It's very similar to the behavior of gamblers: "push the buttons in this order for the best outcome."

Also notice how LLM tools allow you to multiply the output x2, x3, x4 to compare the outputs. This is literally UX straight outta a casino.

Many of the users also exhibit excited, almost manic states, addicted to the dopamine the output from their prompt produces...

This is going to be a weird trend to look back on; the hype is on par with the same gambling trends found in crypto/NFTs.


I think this is more a statement about human behavior under uncertainty and non-determinism than about the tools themselves. Perhaps the ease of use brings it closer to the funny analogy you made, but I think you will find this in any system where users interact with a partially opaque mechanism that produces outcomes of varying quality contingent on their input...

Sorry, this can't be anything but an intentionally obfuscating comment that I need to call out.

> more of a statement of human behavior under uncertainty and non-determinism rather than the tools themselves.

This is basically saying "It's not gambling, it's just the psychological underpinnings that form the foundation of all gambling enterprises". Who cares to split this difference other than casino owners?


I was not actually defending LLM tools or casinos. Not every system with variable outcomes and ritualized user behavior is meaningfully equivalent to wagering money against probabilistic loss (slots). If the same reasoning were applied to video games or to running scientific experiments of any kind, we'd end up labeling most uncertainty-laden interaction as gambling. I just did not find the comparison specific enough.

>uncertainty and non-determinism

When you play slots in a casino, the certain things are that the casino determines the house edge, and the house always wins.


> these tools are actually just slot machines

Slot machines that are biased toward producing jackpots.

And "jackpots" are a metaphor for "training distribution".


Yeah. You always know you are doing something pretty unique when the LLM can conceptualize it (produce the right English description) but not put it into code.

That makes you think. It's surely harder to hide your dog identity nowadays than it was when this was drawn.

[flagged]


Are you OK? (Or is this AI?) Either way it'd be good for you to articulate a bit better so others understand.


