soo... i have not kept up with what's gone on in russia/ukraine. Are those drone videos what i think they are – drones sneaking up on humans and, presumably, ending their lives?
edit: Ok, I googled the guy
> I have read the works of authors such as Jean Baudrillard, Desmond Morris,
and Ted Kaczynski who believe that technology is harming us and the world.
https://wiki.opensourceecology.org/wiki/User:Alisherkhojayev
Both Russia and Ukraine build millions of drones per year, most of them FPV drones that are basically remote-controlled flying grenades. There's plenty of electronic warfare with radio jamming, so in some places they use drone-mounted spools of fiber optic cable to control them. It's probably been the most impactful weapon type in the war for the past few years.
"between 400,000 and 1.5 million estimated casualties (killed and wounded) during the Russian invasion of Ukraine from 24 February 2022 to November 2025"
Mostly due to artillery. Both sides are firing in the region of 10,000 155mm shells per day. For years.
I think this is likely quite outdated by now - a lot of artillery is definitely still in use now, but there is also a very large gray zone dozens of kilometers around the front line where remote controlled UAVs (usually single-use FPVs and reusable bomber drones) will quickly identify and strike anything that moves.
Due to that, many people monitoring the war estimate that drones are now causing more casualties than artillery - both due to being much more precise & by forcing artillery to move further back & fire less into the gray zone to avoid itself being destroyed by drones.
In any case things are moving pretty quickly & the current state is very different than just a year or two ago.
A reasonable estimate for the Russo-Ukrainian war is that there have been half a million casualties due to drones. I would not recommend looking for the videos; many tens of thousands of those strikes have live footage of them occurring.
Their point is that quoting chatgpt is a bad comment.
What's your point? It would be just as bad for someone to google a question and copy the first result snippet verbatim. So you've successfully brought up another bad way to comment.
I'm a scuba-diver and qualified marine archaeologist with a long-standing interest in archaeology and history.
I used Google to find suitable lay-descriptions/citations for the topics I already knew about (UK law on treasure and maritime law on salvage), and to understand more about applicable laws in the USA.
If you don't believe what otherwise sounds like a reasonable take, I don't know what to tell you. I mentioned it as a good starting point if s/he cares to read further.
Please do so. And, forgive me if I speak heresy, but there has to be more proof of work (friction) to create accounts. I was shocked at how easy it is for something like chatgpt atlas to create new accounts on the fly.
More than once I have seen the author or a significant party to a story chime in through a fresh green account, having been alerted one way or another to the story being posted here. And usually when they do, it's very interesting.
As such I would find it detrimental if they had to jump through too many hoops so they don't bother or it takes too long so the thread dies before they can participate.
Indeed. Here is a recent litmus test: https://hackertimes.com/item?id=47051852. How can we filter the lightweight stuff while still benefiting from posts like these?
One thing we did at reddit for a while was put posts from new people in "jail". They would show up in a special yellow box at the top of the home page to accounts that tended to be early upvoters of things that became successful later (our Nostradamuses, so to speak), and then if it got enough upvotes from that group it got out of jail and placed on the regular /new page.
So maybe some sort of filter like that? Only show it to those kinds of accounts at first?
The downside is that if that group isn't big enough you get a lot of groupthink, but if your sample is wide enough, it can be avoided. To be honest, I don't recall why we stopped doing it.
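The jail mechanism described above can be sketched roughly like this. Everything here is hypothetical: the function names, the thresholds, and the data shapes are invented for illustration, not how reddit actually implemented it.

```python
RELEASE_THRESHOLD = 5  # predictor upvotes needed to leave jail (invented value)

def is_predictor(vote_history, min_early_hits=10):
    """A user counts as a 'predictor' if they often upvoted posts early
    that later became popular. vote_history is a list of dicts describing
    that user's past votes."""
    early_hits = sum(1 for v in vote_history
                     if v["upvoted_early"] and v["post_became_popular"])
    return early_hits >= min_early_hits

def should_release(post_id, predictor_votes):
    """Release a jailed post to the public /new page once enough
    predictors have upvoted it. predictor_votes maps post id -> count."""
    return predictor_votes.get(post_id, 0) >= RELEASE_THRESHOLD
```

The interesting design choice is that the gatekeepers aren't chosen by karma but by demonstrated taste: a track record of upvoting things before they got big.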
Just sharing observations; it may help, it may not…
What I’m seeing is new or sleeper accounts that have been idle for over a decade with low (<99) karma getting into comment circles. Over the last couple of weeks I’ll see several top comments on articles with back and forth between other similar accounts… it’s got to the point that I check a user habitually before I even bother reading… and I have never hidden so many comments before getting to something substantive in the comments…
Like many here, I don’t wish to limit new users, but this does seem from my armchair perspective to be a pattern to be on the look out for.
Maybe have a signup flow where you can skip the new-account restriction by putting a verification file on the website of a currently trending link. And then the restriction is lifted temporarily for the thread linking to it?
Not every post is from the website of the person who is the topic of it. It's common to have e.g. a blogpost about $thing and then a new account chimes in with "Hey, I authored $thing 10 years ago when I was working for $company, someone linked me this post. [some contributions to the topic]"
I have often heard that vote rigging is detectable on HN because the site software penalizes voting from accounts at the same IP address.
Rumor had it that there is also some kind of social-network metric detecting when socially adjacent accounts (or alts) are engaged in astroturfing, the practice where a small cabal tries to pass themselves off as a broader grassroots campaign.
Flip that around though and the same metrics might allow new accounts to be meaningfully vouched for by existing ones.
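A toy version of the same-IP heuristic might look like the following. This is purely illustrative; nothing here reflects HN's actual software, which nobody outside the moderation team has seen.

```python
def discounted_score(votes):
    """votes: list of (username, ip) pairs for one post. Count at most
    one vote per IP address, so a ring of alts voting from the same
    network only moves the score by one."""
    seen_ips = set()
    score = 0
    for _user, ip in votes:
        if ip not in seen_ips:
            seen_ips.add(ip)
            score += 1
    return score
```

The vouching idea is the same machinery run in reverse: instead of discounting socially adjacent votes, you could let a vote (or explicit vouch) from a trusted, unrelated account count for more when it lands on a green account's post.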
Sorry, I need to ask the dumb question: Is that Show HN (AsteroidOS) post written by an LLM or not? Honestly, I cannot tell.
A few people in these comments seem wildly confident that it is written by an LLM. If anything, I hope it was written by a human as an elaborate troll to trigger these so-called immaculate LLM detectors.
Interesting litmus test, as the post isn't just from a green account, it's riddled with LLM copyediting. It doesn't read as if originally composed by an LLM, so there's that.
Would seem to require some discernment to classify. Not all assistive use is slop.
> I am sooo tired of statements like "No x. No y. No z." and then optionally "Just Foo.". Who aside from Fred fucking Durst writes like that?
I disagree. This is a classic humor template in popular magazines from the 1990s and 2000s. The New Yorker's "Talk of the Town" probably has/had this style frequently. Also, (Timothy) McSweeney's Quarterly Concern is basically an extended trope of exactly this type of writing from 1990s and 2000s.
The discussion about the LLM assisted/written submission at the time, with replies by the author: https://hackertimes.com/item?id=47055300 The defence given was essentially "just reformatted it for better grammar"
It obviously says LLM to me at first read-through.
I suspect that:
a) fewer people are willing to expend a bit of energy to notice LLM usage given how much of it there is. ("we've lost" theory)
b) people are losing the ability to detect LLM submissions. ("we're cooked" theory)
or c) people don't care about the use of LLMs. ("who cares" theory).
Personally I've been feeling less invested, because it seems as if most users don't care and even the main users of the site don't notice it.
I should clarify and revise my thoughts and initial comment. I do not think that not being able to detect it leads to lack of care. I actually think that many things have passed me by, and in the future this will happen even more as LLMs improve ("we're cooked").
As to "what do we do when we spot it" - you hit the nail on the head about the feelings I had as I was writing the comment. What do we actually do, what can we change, and should we attempt futile things?
And even in the example dang gave, the actual submission was very good. Is any amount of LLM use okay, and what's the threshold? I use LLMs at work, but I don't like writing readmes or blog posts with them. Others might like writing code at work by hand but don't like writing text, so they use LLMs for that. Maybe I should lower my expectations!
You would need, say, a StackExchange-like crowdsourced moderation system whereby users with relatively high karma are randomly selected to check posts from new accounts, by casting votes to reject or keep.
>How can we filter the lightweight stuff while still benefiting from posts like these?
Well, the simplest automated method would be to run the post and comment together through an LLM with a prompt that's roughly:
"Is this person claiming to be the author or co-creator of the work discussed in this submission?"
Only green accounts would be subject to it. I predict you'd have a very low false positive and false negative rate.
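Wiring that check up might look something like this. The model call itself is left abstract (any LLM client would do), and all the names here are invented; the only part taken from the comment above is the prompt wording.

```python
def build_prompt(comment_text, submission_title):
    """Wrap the comment and submission in the classification question."""
    return (
        "Is this person claiming to be the author or co-creator of the "
        "work discussed in this submission?\n\n"
        f"Submission: {submission_title}\n"
        f"Comment: {comment_text}\n\n"
        "Answer with a single word: yes or no."
    )

def parse_verdict(model_reply):
    """Treat any reply starting with 'yes' as a positive classification."""
    return model_reply.strip().lower().startswith("yes")

def exempt_from_green_filter(comment_text, submission_title, ask_model):
    # ask_model is any callable that sends a prompt string to an LLM
    # and returns its text reply.
    return parse_verdict(ask_model(build_prompt(comment_text, submission_title)))
```

Passing the model in as a callable keeps the filter logic testable without a network round trip.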
It's of course a terribly slippery slope. My perhaps overly cynical take is that once the infra is in place, some of your bosses would be prone to eventually abusing it.
Personally I'm here for it: Dang, moderator turned whistleblower—on the run from dark VC money—in a race against time to save freedom. Still working on a title for the film.
Responding from a new account is different from posting from a new account. You aren’t vetting people by making accounts have a minimum age to post articles. That’ll just cause people to make accounts before they need them.
Reddit has forums where you need a minimum karma to post to certain subreddits and that is typically upvotes on your comments, but it could also be upvotes on someone else’s moderated subreddit.
I think the right people will stick around. There is a certain kind of individual that has the patience to understand that a system restricting new accounts from posting is a good thing. Recently, there have been a lot of posters who come here from the open web just to try to slant opinion.
But sticking around doesn't solve the scenario mentioned by parent.
1. some interesting projects gets to HN main page
2. author of the project is not on HN so creates a green account and interacts
even if that person would have the patience to stick around, by the time they would be able to respond, it would be too late for it to be relevant to the (now stale) discussion.
This is one of the best things about HN. The sheer number of times someone has posted a link and the author or someone significant to the project deep within some megacorp makes a green account and starts answering questions that you never thought would get answered. Some of the most golden replies come from greenies.
Yes, and we've always gone out of our way to protect those. It's perhaps the thing I hate the most about our software that sometimes it kills such posts.
> even if that person would have the patience to stick around, by the time they would be able to respond, it would be too late for it to be relevant to the (now stale) discussion
This is a fundamental part of how HN sees its own functioning; they refer to it as "rate limiting".
I am only that kind of individual when I'm inclined to post unconstructively – not that I know that at the time. When I'm feeling constructive, friction is likely to make me take my constructive energies elsewhere.
The key is that both were randomly assigned to users - you’d never know if you’d open a thread and be a moderator. If you posted in the thread you couldn’t moderate.
And about the same frequency you’d be assigned to metamoderate, basically being asked if a moderator’s “vote” was a good one or not (you didn’t have to fully agree you’d do the same, just that it wasn’t bad).
Someone who scored low in meta moderation would get less or no moderator chances.
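The scoring loop described above could be sketched like this. A rough sketch under invented thresholds; the real (Slashdot-style) system's weights are not public.

```python
def metamod_score(reviews):
    """reviews: list of booleans from metamoderators, True meaning the
    moderator's vote was judged fair. No history gives a neutral prior."""
    if not reviews:
        return 0.5
    return sum(reviews) / len(reviews)

def moderation_chances(base_chances, score, cutoff=0.3):
    """Scale how often a user is offered moderation duty; users who
    score poorly in metamoderation get fewer or no chances."""
    if score < cutoff:
        return 0
    return round(base_chances * score)
```

The random assignment is the load-bearing part: because you never know in advance which thread you'll be asked to moderate, you can't position yourself to push an agenda.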
I'm surprised posts aren't restricted a bit more. Maybe that's just my old school "lurk moar" mentality, but I feel like I really need to understand the vibes of a community before I start to contribute posts to it.
Yeah, exactly. Thirteen years ago, I was a lurker. No account, because why would I make an account just to read? But when I wanted to say something badly enough, I made an account. (I think the first thing I did was post an Ask HN about functional programming, so "no posting for X time" might have turned me away.)
I'd suggest: new accounts are read-only for at least a week. Then they can comment (rate limited at first, gradually relaxed) and vote, and then after some additional amount of time and/or karma they can submit a post. Maybe some of these mechanisms are already in place? Bots can probably game this too but drive-by bots maybe won't be patient enough.
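Spelled out as code, that ladder might look like the sketch below. All thresholds (a week, thirty days, 50 karma) are invented for illustration; the comment above deliberately leaves them vague.

```python
from datetime import timedelta

def privileges(account_age, karma):
    """Map account age and karma to a hypothetical privilege ladder:
    read-only at first, then comment/vote, then submit."""
    p = {"read": True, "comment": False, "vote": False, "submit": False}
    if account_age >= timedelta(days=7):
        p["comment"] = True   # rate limited at first in the proposed scheme
        p["vote"] = True
    if account_age >= timedelta(days=30) or karma >= 50:
        p["submit"] = True
    return p
```

Note the asymmetry: age gates are cheap for patient humans and annoying for drive-by bots, while the karma alternative lets unusually good contributors skip the wait.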
Immediate comment privileges are really important. Lots of examples, but to give a silly one, someone pastes their clipboard without realizing it includes their API key or their email. Good Samaritans should be able to say, "Hey, I just caught something."
And, as another commenter mentions, if someone shares your work, you should be able to comment on that thread without delay.
This is the only reason I got myself a HN account: someone posted a link to a blog post of mine, and I happened to see the increased traffic on my VPS.
(And I stuck around after, a few posts are interesting enough. All the AI stuff isn't, and there is too much of that unfortunately.)
You reminded me how infuriating it was not to be able to post comments on StackOverflow. Felt like getting those few upvotes required was taking forever, and all without ability to ask for clarification.
Goodness, that is rough; and then they instantly own your posts, where even blanking edits count as vandalism (obviously great for the internet, albeit at potential occasional individual cost).
It seems easy enough to circumvent: "We're launching our product in 2 weeks, so let the AI create and 'warm up' 20 new HN users so they're ready to shill".
It's really not a problem that can be solved easily :(
If someone is going to put that much effort into it, let them. I think the ideas here are to try to get some low-hanging fruit to see if that works “good enough”. You’ll never block all AI-generated accounts, but you may not have to in order to still get the desired effect.
But if someone wants to plant 20 new accounts, grow them out with karma votes, so that they can game the voting, there are probably other ways to detect that.
Any amount of friction reduces the amount of slop. What proportion of clankers are going to realize that they need to warm up the accounts two weeks in advance? Answer: a proportion that you're never going to see with that barrier in place.
With a few layers of defense, you'll weed out almost all of the bad actors. Without strong monetary incentives for spamming, you also avoid most persistent actors.
With enough layers you will also weed out almost all of the good actors. Normal people are busy and have neither the time nor the patience to jump through too many hoops to promote their cool new research, or to respond in a thread where someone linked it.
Which in itself is annoying, IMO. It creates a whole separate set of problems. You need karma, so people post in karma-farming subs to get a few crumbs. Then you get auto-banned from a dozen of the top subreddits preemptively for farming.
Reddit hasn't been as overrun by bots yet, for the most part, although how long they can hold out I don't know.
We live with GenAI, and the human to bot ratio is now leaning in a different direction. The old norms are dead, because the old structures that held them up are gone.
This idea in this thread that “more hoops means losing participation” keeps assuming that the community is unaffected by the macro trends.
It’s weirdly positing that HN posts and users are somehow immune to those trends.
Requiring accounts to be a certain age does not help and will only affect legitimate users. The slopsters will simply create accounts, wait a bit and start posting then.
Actually, cross the "will" out. They are already doing this to avoid the green smell. This account replied to me today: 4 months old, but it only started posting today.
https://hackertimes.com/user?id=BelVisgarra
Oh damn, that's the one who posted the AskHN about the verified job portal on the frontpage today. Either this is some shilling still in build-up, or it's an actual human being with severe LLM slop impersonation derangement syndrome.
Yeah, unfortunately there are bots here that are much better at hiding that and even make language mistakes on purpose.
It's still a small minority of comments, but it's definitely becoming a problem, and just the chance — even if it's a small one — of talking to a bot rather than a human causes inhibition. Finding out that one has been talking to a bot is like finding out you've been scammed. You invest time and human emotions into something for another human to read, even if it's just a quick HN comment, just to find out that it was all for nothing. It sucks the humanity out of it and thereby out of oneself. You get tricked into spending your valuable, limited human social energy on soulless machines with an infinite capacity for generating worthless slop instead of on other humans.
If most people are like me on that topic, then they use HN without an account until they want to post or comment something, and only then try to find out how to create an account. If they can't post or comment at that point, they will just not create or retain that account.
I was able to have discussions where one party has significantly unpopular opinions. Such discussions are unique to HN, please don't kill them.
But don’t worry, HN has been thoughtful about links from new accounts for months and months (can’t speak for longer, but maybe/probably). The effort could well be duplicative unless I’m unaware of some more granular detail.
This problem can be solved by an invite/vouch for system.
A new account can be invited or vouched for by an old account with good karma. If an account you vouched for starts spamming and/or slopposting, you lose your vouching abilities for a period of time, or forever.
I didn't know anybody here before I joined. (I have been here for a few years, and I still don't know anybody here.) How would a person like me get invited or vouched?
That looks interesting, but I feel like it’s likely to be close to impossible to join. Feels like it would be weird asking someone you know for an invite.
Same here, I don't know anyone who might send me an invite unfortunately. It's unlikely for this topic to come up organically in a conversation as in "hey by the way are you on lobste.rs" so my previous attempts were by sending messages in my company's notice board asking if someone is there. But in the last few years I have worked in smaller startups so the sample size is too small for this strategy to succeed.
FWIW, folks on lobste.rs are (mostly) friendly and willing to extend invites if you seem like a real person. My understanding is that the invite system is primarily in use to avoid drive-by spammers and the like.
Feel free to send me an email (findable via my HN profile) mentioning that you found it via this thread, and I’m happy to extend an invite.
Perhaps more proof of work is necessary, but it makes me sad.
I still remember creating my HN account. It stands out in my memory, because it was the smoothest, simplest, easiest, and quickest account creation of my life.
I had lurked here for around a decade before finally creating an account. Any urge to participate was thwarted by my resistance toward creating accounts (I just hate account creation for some reason). But HN's account creation process was a breath of fresh air. "You mean it can be this easy? Why isn't it this easy everywhere? If I had known how simple it was, I would have created an HN account years earlier, lol."
It was especially stunning to me, because I think the discourse on HN is generally of a higher quality than most other sites (which I wouldn't naturally associate with such an easy account creation process).
It's my only fond memory of account creation (along with maybe when I created an account on America-Online back in the 90s, since that was my first ever account and it was all so novel). Just a few quick seconds, and then I'm already commenting on HN. It was beautiful. I remain delighted.
Somehow I've been browsing HN since ~2019 without ever wanting to reply so much that I was willing to make an account (and start receiving emails, etc) but your comment made me curious how easy it could be, and wow. Now I have an account.
I kind of assumed it was hard to make an account (maybe even an invite-only situation) based purely off of how unique most handles were, and how well curated/moderated everything was. So I guess you could say, the quality of the usernames and the quality of the posts :)
I rotate accounts on "social media" (mostly Reddit and Hacker News, the others don't interest me) every few weeks or months to make sure not too much of my post history accumulates in one account. I would dislike it very much if there would be high friction to create new accounts. On the other hand my behavior is probably a major outlier.
Same, though I'm also surprised how easily I can make new accounts for this site. But I love that. I hope it doesn't require me to jump through a bunch of hoops in the future.
I appreciate the anonymity. Posting as throwaway is often useful to distance the poster from $work or $ex or other situations yet contribute to a conversation.
But will it continue under all the login id surveillance laws coming up?
You are aware of the guidelines? (You are not fostering community)
> Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.
Thanks, I was not aware. They do seem to be guidelines, though, not rules. I find my privacy, and preventing anyone from building a full profile of me (especially given how easy that is now in the age of LLMs), a bit more important than the vague concept of "fostering community". I am sorry.
Just like how HN itself can't be immune from macro trends, neither can its users, and macro trends have unfortunately made this a necessity for many of them.
On Reddit and Hacker News, I don't need an email address to sign up. But also I use SimpleLogin to have a separate email address per website/account. Quite necessary these days when personal data is leaked by some company or other every day.
I do the same. It simply means there's less accidental leakage / self-doxing that could be pieced together if you (or an LLM) read every comment on the account.
Suggestion: Pick a long term account, dump the comments, and see what an llm could figure out about the target
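That experiment is easy to run against HN's public Algolia search API, which serves any user's comment history as JSON. A minimal sketch; the endpoint and the `comment_text` field are from the public API, while the function names and page counts are arbitrary.

```python
import json
from urllib.request import urlopen

def comments_url(username, page=0):
    # HN's public Algolia search API; the tags parameter narrows
    # results to one author's comments.
    return ("https://hn.algolia.com/api/v1/search_by_date"
            f"?tags=comment,author_{username}&hitsPerPage=100&page={page}")

def extract_comments(payload):
    """Pull the raw comment texts out of one page of parsed API results."""
    return [hit["comment_text"] for hit in payload.get("hits", [])
            if hit.get("comment_text")]

def dump_user_comments(username, pages=3):
    texts = []
    for page in range(pages):
        with urlopen(comments_url(username, page)) as resp:
            texts.extend(extract_comments(json.load(resp)))
    # Paste the result into an LLM and ask what it can infer about the author.
    return "\n\n".join(texts)
```

A few hundred comments is usually enough for a model to guess profession, rough location, and long-running interests, which is exactly the point being made.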
Same, but also for the opposite reason: a new account gives me a chance to do better. If I post lame comments, I accept the lameness of the posts attached to a particular user name and the hesitation I feel to post more lame comments decreases. With a fresh identity, I am more likely to avoid lame posting sort of like how you avoid going out in the mud in brand new sneakers. A sort of repentance; being born again in the digital realm.
You can build quite an extensive profile of someone given enough post history. More post history means more details. Especially nowadays with LLMs it's trivial. This can lead to all sorts of issues. One is people I know in real life being able to identify me. Another is that through various means my account may be linked to my personal identity (e.g. through matching usernames or emails across platforms) and oppressive regimes (now or in the future) may use my post history to take action against me.
Honestly, it's probably good if platforms disincentivize this. If you know creating a new account is high friction, you are more likely to take care of the account you have, and be a higher quality member.
If you intend your accounts to be thrown away, you will likely behave worse.
*I'm using "you" generically, I don't mean you specifically.
Your behavior is only an outlier because we don't teach kids basic security practices and so they don't grow up into adults who think like that. We also don't teach kids how to avoid "Internet addiction" dopamine chasing, so seeing a number (eg: karma score) get smaller instead of bigger hurts feefees.
I'm well aware that the cyberlibertarian ethos endemic here opposes any form of regulation. But when the status quo clearly isn't working, something has to change. Parents have failed to step up and do their jobs. Somebody else has to.
Never got banned for it, though my "rotations" tend to be "a few weeks every year".
even if they did ban me: the account was going to be deleted in a short while regardless. So that fear isn't present for what's essentially a longer lasting throwaway.
I was going to suggest emotional leetcode, but LLMs do well on this.
When given a conversation about Alice and Suzy having a one-upmanship conversation (my husband rich, my kid is a genius) and what emotions they are feeling, and what Suzy could have said instead to improve the conversation, it gave accurate responses (e.g. they're feeling insecure, competitive, envy).
That type of question could also turn people off. We already have too many discussions where people are quick to jump to conclusions and attribute intent, rather than asking basic questions.
The standard solution is using an email to register account, maybe a cloudflare captcha, and then using good network logging to group accounts by IPs and chainbanning abusive accounts when they are caught by other mechanisms.
But is there a connection between the front page being full of "AI" slop and "AI" worship and these new accounts? Or are the old timers also upvoting those submissions to the detriment of other, more interesting topics?
I echo this sentiment for all social media platforms today...
At least new accounts are more obvious here. This pattern has been increasingly used for scams, spam and AI slop on Instagram, X and Facebook for years.
It's pretty interesting to see. My very first real software job was working on ground processing algorithms for the US Navy's Maritime Domain Awareness system, which is the "real" version of something like this that actually gives centimeter-scale live activity detections of basically the entire world. The engineering effort that goes into something like that is immense. Bush announced it in like 2004 or something and we didn't reach full operational capability until 2015. Thousands of developers across intel, military, and commercial contractors, for over a decade, inventing and launching new sensor platforms, along with build-outs of the data centers to collect, process, store, and make sense of all this.
I wish these weekend warriors would work on a project like that someday, to see what capabilities truly take. You want to know what's happening in the world, you need to place physical sensors out there, deal with the fact that your own signals are being jammed and blocked, the things you're trying to see are also trying to hide and disguise themselves.
The attention to detail is something I've never seen replicated outside. Every time we changed or put out a new algorithm, we had to process old data with it and explain to analysts and scientists every single pixel that changed in the end product and why.
I'm tired of arguments like this. If AI is helping you do work that you would otherwise have had to pay people to do, then it is replacing white collar work.
The goalposts are becoming narrower and these posts are becoming more frequent. It is almost like a therapy session for those facing an existential crisis while they continue to train the very thing that will replace them, by giving it more training data to do their work.
This is a bit tricky, though. You could say the goalposts for self-driving cars are becoming narrower, but some things require complete automation to make a significant change.
That's because in the 1% of cases it fails it could result in someone dying. In fields where there isn't the same level of risk or regulation involved it shouldn't be as resistant to change.
Can we please, please move past this generation of bean-counter CEOs. Google and Apple have done great things under Pichai and Cook, and yet I couldn't be less excited for what either is doing.
Experimental culture is inherently risky though, and risk is not something you want too much of as a public company as your shareholders can and will be very loud.
They do still experiment but in their R&D divisions so as to shield their cash cows from risk as well as to be able to better conceal how much money they’re pouring into moonshots so as not to spook investors.
Waymo is the most recent moonshot I can remember going out of Google X, and they’re arguably a leader in the space. There are other projects in the works, and many more failed ones.
I like this. It reminds me of the interesting type of experimentation that was done with LLMs before agentic coding took over as the primary use case.
I am interested in seeing a personal version of this. Help people work out their own brain knots to make decision-making easier. I'm actually decent at mending fences with others. But making decisions myself? Impossible.
You can actually register now (with a waiting list) and make your own private graphs, if that's what you meant by a personal version. (You'd be like member #4 haha)
I've actually had a lot of fun hooking it up to LLM. I have a private MCP server for it. The tools tell it how to read a concludia argument and validate it. It's what generated all the counterpoints for the "carbon offset" argument (https://concludia.org/step/9b8d443e-9a52-3006-8c2d-472406db7...) .
And yeah... when I've tried to fully justify my own conclusions that I was sure were correct... it's pretty humbling to realize how many assumptions we build into our own beliefs!