Hacker News | new | past | comments | ask | show | jobs | submit | qaid's comments

My "fun fact" that I always tell is that I got my start by reading the manual of my TI-83+

I spent most of my 9th grade making a stick figure clone of Street Fighter, using TI-BASIC and graphing functions.

Eventually I switched to coding with pencil and paper because the calculator screen could only show 8 lines at a time. I have no idea how I made something that supported 2 players playing on the same calculator, all with GOTOs and LABELs.

My favorite optimization of all time was turning their heads into hexagons instead of circles since drawing 6 lines was so much faster.
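For the curious, the hexagon trick is easy to see in code: a regular hexagon is just six line segments, while a rasterized circle touches many more pixels per frame. A rough Python sketch of the idea (function names are mine; the original TI-BASIC would simply make six `Line(` calls in place of `Circle(`):

```python
import math

def hexagon_points(cx, cy, r):
    """The 6 vertices of a regular hexagon approximating a circle
    of radius r centered at (cx, cy)."""
    return [(cx + r * math.cos(math.pi / 3 * i),
             cy + r * math.sin(math.pi / 3 * i))
            for i in range(6)]

def hexagon_segments(cx, cy, r):
    """The 6 line segments to draw instead of a full circle."""
    pts = hexagon_points(cx, cy, r)
    return [(pts[i], pts[(i + 1) % 6]) for i in range(6)]
```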


For my birthday in 7th grade, I wanted a TI-86 calculator because I could program on it. And maybe because a classmate showed me ASM games on their TI-83+.

In 9th grade, I wrote programs to solve specific kinds of algebra problems while showing the step-by-step "work" on screen. I remember realizing a critical bug in the code during an exam, which surprised me because it worked perfectly for all the homework and study questions.

I ended up spending more time trying to fix it than working on the test! I now realize that it was my first experience with a P1 production bug. In a way, it was my math teacher's fault for not providing sufficient acceptance criteria. I was supposed to learn about polynomials, but I (also) ended up learning about edge cases.


Same here: I got started via the TI-83+ manual, building simple menu-based games and homework helpers. Eventually I moved on to learning Z80 assembly and built a few simple games. Interestingly, I now focus on mobile development. I always loved being able to take something I built and carry it around in my pocket.

Same, but it was a TI-84, and the game was tic-tac-toe with a perfect "ai" that would let you enter "number of players: 0" [1]

[1] https://www.youtube.com/watch?v=s93KC4AGKnY
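A perfect tic-tac-toe "ai" is usually a small minimax search; a minimal Python sketch of the idea (my own reconstruction, not the original TI-84 code):

```python
from functools import lru_cache

# The 8 winning lines on a 3x3 board stored as a 9-character string
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of `board` with `player` to move, under perfect play:
    +1 = X wins, -1 = O wins, 0 = draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if " " not in board:
        return 0
    moves = [minimax(board[:i] + player + board[i + 1:],
                     "O" if player == "X" else "X")
             for i, cell in enumerate(board) if cell == " "]
    return max(moves) if player == "X" else min(moves)
```

From the empty board, `minimax(" " * 9, "X")` evaluates to 0: perfect play always draws, which is why the 0-player mode is safe to watch.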


Mine was a TI-81 and a clone of Scorched Earth with multiplayer, realistic physics, wind, random terrain generation, etc. It used all 2.4 KB of memory and every single named variable TI-BASIC provided on the machine.
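For flavor, the heart of such a clone is a tiny ballistics loop; a hedged Python sketch (names and constants are mine, the original would have been TI-BASIC with single-letter variables):

```python
import math

def simulate_shot(angle_deg, power, wind=0.0, dt=0.05, g=9.8):
    """Trace a Scorched Earth-style shell until it returns to y = 0.
    `wind` is modeled as a constant horizontal acceleration.
    Returns the landing x coordinate."""
    vx = power * math.cos(math.radians(angle_deg))
    vy = power * math.sin(math.radians(angle_deg))
    x = y = 0.0
    while True:
        vx += wind * dt   # wind pushes the shell sideways
        vy -= g * dt      # gravity pulls it down
        x += vx * dt
        y += vy * dt
        if y <= 0:
            return x
```

A tailwind lengthens the shot and a headwind shortens it, which is what makes the wind mechanic interesting in two-player matches.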

Shout out to ClackerNews[0], which I discovered last night and find both very educational and amusing.

I hope to see more bots on there (and not here)

[0] https://clackernews.com/


I was reading halfway through when one line struck a nerve with me:

> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So not today, but the door is open for this after AI systems have gathered enough "training data"?

Then I re-read the previous paragraph and realized it's specifically only criticizing

> AI-driven domestic mass surveillance

And it neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance.

A real shame. I thought "Anthropic" was about being concerned for humans, not "my people" vs. "your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War.


> I thought "Anthropic" was about being concerned about humans

See also: OpenAI being open, the Democratic People's Republic of Korea being democratic and people-first[0].

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...


Elon, is that you?


Is GP wrong?


I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future.


How about the present and his personal beliefs?

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

This reads like his objection is not on "autocratic", but on "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.


Some people can’t help reading this like a Ouija board.


Corporate statements like these get written very carefully. You can be certain that not a single word in these sentences has been placed there without considering what they do imply and what they omit.


It’s pretty telling that he didn’t rule out using a Ouija board for fully autonomous military drones or mass surveillance.

Real eyes..


I thought this was ambiguously worded in a beautiful way. At the moment, one could say that some autocratic adversaries of the United States and other democracies currently lead the government of the United States.


That all works right up until the United States becomes autocratic, and that process is well underway.

So yes, the second part of your comment is what is going to come back to haunt them. The road to hell is paved with good intentions.


The US is already autocratic when it comes to people in many other countries, where the US government didn't like their democratically elected governments and decided to pick a new one for them instead.


Western liberal ideals are better than the opposite. It is misanthropic to build autocratic societies.


China's ideals make for better public services and put less pressure on the environment. But China may not be the opposite you are referring to here.


> puts less pressure on environment

China has been competing with India for decades for the most-polluted cities crown, and ranks only slightly below the US and Russia in CO2 emissions per capita. It's also the only large country whose emissions have been growing over the last decade. Where does the idea come from that China somehow puts less pressure on the environment? Less than what, exactly?


> and only slightly ranks below the US and Russia

By "slightly below" you mean ~50-60% per capita.

> China somehow puts less pressure on the environment

PRC renewables at a staggering scale.

Last year the PRC brrrted out enough solar panels that their lifetime output is equivalent to MORE than annual global consumption of oil. AKA the world uses about >40 billion barrels of oil per year; the PRC's annual solar production will sink about 40 billion barrels of oil's worth of emissions over its lifetime. That's an obscene amount of carbon sink, and frankly, at full production, annual PRC solar + wind can on paper displace 100% of oil, 100% of LNG, and a good % of coal (again, annual utilization) once storage is figured out.

This BTW functionally makes the PRC emissions-negative, by a massive margin; arguably the only country that is.

It's only broken emissions accounting rules that say the PRC should be penalized for manufacturing renewables while buyers get credited, AND fossil producers like the US are not penalized for extraction, which the US has only increased.


Also, unlike the US and Russia, China has the green transition as official policy. There are additional savings from total electrification. (I think they also care more about the long term; being closer to the equator and the sea, they better understand the consequences of global warming.)


And they have little to no sources of fossil fuels within their borders (not enough to support their demand, in any case).

It's a great policy, but it also makes sense for geo-strategic reasons (even ignoring the climate issue).


western liberal democracies tend to use "autocratic" as an epithet (though, i guess, there are fewer countries that marker is used against for which it's false now than ~50 years ago). for the first sentence, "the opposite" of western liberal ideas will yield 10 answers from 9 people :-)


Building autocratic societies is exactly what much of the West, including the US and UK, are doing right now.


And to the extent they're doing that, that's bad.


That makes your argument a No True Scotsman, though. Western liberal ideals are the supreme ones; you're just not doing it right!

Much has been said about the purported superiority of western values, but as we've all seen the USA was very quick to get rid of even the slightest notion of these values when Trump promised them some money and a dominant vibe.

The old world is dying, and the new world struggles to be born: now is the time of monsters.


No, my argument was that western liberal ideals are good. The commenter chimed in that some states which have historically held the mantle of western liberalism are losing their grip on it.

There's nothing contradictory or circular in both of those claims.

If someone were to present to me a better caretaker of western liberal ideals than the US and ask whether I would prefer AI empower them, the answer would be: yes.

And in fact, that is precisely what I am arguing. It is good that Anthropic, which so far has demonstrated closer adherence to western liberal ideals than the current US government, is pushing back on the current US government.

I also think it is good that Anthropic stands in opposition to China, which also does not embody western liberal ideals.


> It is misanthropic to build autocratic societies.

It's misanthropic to dismantle democratic societies.


??? I don't know what you're referring to


> It's not up to Dario to try to make absolute statements about the future.

That's insane to say, given that he's literally acting in the public sphere as the Mouth of Sauron for how AI will grow so effective as to destroy almost everyone's jobs, and AGI will take over our society and kill us all.


All I'm trying to say is that nobody can predict the future, and therefore making statements that pretend something will be a certain way forever is just silly. It's OK for him to add this qualifier.


That's not how morality works. If mass surveillance is wrong today, then it will be wrong tomorrow.


This doesn’t read to me like it was personally written by one person. It’s not Dario we should read this as being written by, it’s Anthropic as an entity.


He does it all the time when it helps sell his products though, strange.


It's not called The Department of War.

It's just incredible to me that people think this is some kind of bold statement defying the administration when it is absolutely filled with small and medium capitulations, laying out in numerous examples how they just jumped right in bed with the military.

And no one seems disturbed by the blatant Orwellian doublespeak throughout. "We thoroughly support the mission of the Department of War"--because War is Peace.


I'm really surprised that didn't jump out at more people; I had to get halfway through the comments to the 27th mention of "Department of War" to find the first comment pointing out that using the name is itself a capitulation.


It is a very fitting name though. "Department of Defense" was a euphemism.


Defense is a much more fitting name for an organization that does a million more things than just prosecute wars. War is just the favorite part of their mission for these wannabe toughguys.


Except that it is absolutely called The Department of War and that's by Trump's own hand.

https://www.whitehouse.gov/presidential-actions/2025/09/rest...

"By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered:

"The name “Department of War,” more than the current “Department of Defense,” ensures peace through strength, as it demonstrates our ability and willingness to fight and win wars on behalf of our Nation at a moment’s notice, not just to defend. This name sharpens the Department’s focus on our own national interest and our adversaries’ focus on our willingness and availability to wage war to secure what is ours. I have therefore determined that this Department should once again be known as the Department of War and the Secretary should be known as the Secretary of War."


The Department of Defense is so named by legislation. Executive orders cannot override legislation.


He does it all the time.


And yet he’s quite happy to make just such statements when it’s meant to drum up his own product for investors.


He’s one of the most influential people when it comes to what future we’ll have. Yes, it’s up to him.


I think he's more pragmatic than that.


I'm glad I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd. Later they also state they are against "provid[ing] a product that puts America’s warfighters and civilians at risk" (emphasis mine). Either way I'm glad they have lines at all, but it doesn't come across as particularly reassuring for people in places the US targets (wedding hosts and guests for example).


> I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd

We've always been OK with this in the pre-AI era. (See the plot line of dozens of movies where the "good" government spies on the "bad" one.) Heck we've even been OK with domestic surveillance. (See "The Wire".) Has something changed, or are we just now realizing how it's problematic?


See also: the entire history of Silicon Valley

When Google Met Wikileaks is a fun read, billionaire CEOs love to take Americas side.


I think it goes without saying that once the systems are reliable, fully autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying is that right now, they can't provide those assurances. When they can, I suspect those restrictions will be relaxed.


The US military cannot even offer those assurances itself today. I tried to look up the last incident of friendly fire. Turns out it was a couple of hours ago, when the US military shot down a DHS drone in Texas.


Humans malfunction all the time, that is why there is a push to replace them with more reliable hardware.


Fully autonomous weapons are a danger even if we can make them reliable, with or without AI.

It essentially becomes computer against human. And once such software is developed, who's going to stop it from spreading to the masses? Imagine software viruses/malware that can take a life.

I'm shocked that very few are even bothered by this; it is really concerning that technology developed for human welfare could become something totally turned against humans.


What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?


> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Absolutely.


And what? Get nationalized? Get labelled as terrorists?

The US system doesn't empower a company to say no. It should though.


Yes. Force them to do it the hard way and fight through it. Don’t abdicate in advance


Literally Rule 1 On Fighting Tyranny:

> 1. Do not obey in advance.

> Most of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then offer themselves without being asked. A citizen who adapts in this way is teaching power what it can do.

https://scholars.org/contribution/twenty-lessons-fighting-ty...


You, I, or a company don't need the system's empowerment to say "no", though. Just say it. I would certainly choose being called a "terrorist" in front of the class over helping to deploy weapons, let alone autonomous ones.

You own nothing but your opinion. (No offense to personal property aficionados)


I don't understand this. For example, what would you have done if you were Ukrainian right now? (Before 2014, arguably the start of the conflict, and after the invasion.)


That is an interesting question, very far from my daily concern and brings dilemmas when I think about it. My response would probably be "I don’t know".

However, Anthropic's situation is very different: there's no ongoing invasion of the USA, and the US traditionally attacks other countries once in a while (no judgment), so the weapons upgrade will be "useful" in the field.


It is of course possible to argue that the reason there is no ongoing invasion of the USA is because of our continued investment in technology for killing people


That's the same type of thinking conspiracy theorists have: the type you can never disprove.


I am 100% against militarism and wish we didn't need any of this, but the power balance between Russia and Ukraine, or even Israel and the Palestinians, seems to corroborate the thesis. There likely would be no Ukraine war today if Ukraine hadn't voluntarily given up its nukes three decades ago (an unproven thesis). There was one because Russia thought it could win. The ongoing (post-"ceasefire") Israeli occupation and attacks on the remnants of Palestinian territory show the same. If you are the weaker party and there is a stronger party that wants what you have (or plainly wants to eradicate you), then they'll do so.


> I don't understand this, for example, what would you have done if you where Ukrainian right now ? (before 2014 arguably start of conflict and after invasion)

There are a lot of well meaning people that are very anti-weapon or anti-violence under any circumstances. The problem is that when those people actually need those weapons and that violence, they are so inadequate at it that they become a liability to themselves and others.

I'm not saying I have or know of a solution, but I remember the old saying (paraphrasing) that it's better to be a warrior working a farm than a farmer working a war.


Sure, if that's what it takes to do the right thing.


I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.


There is an extremely straightforward argument that WMDs are precisely what prevented the outbreak of direct warfare between major powers in the latter 20th. (Note that WWI by itself wasn’t sufficient to prevent WWII!)

You can take issue with that argument if you want but it’s unconvincing not to address it.


There’s also an extremely straightforward argument that if the current crop of authoritarian dictatorial players in power now had been then that the outcome of the latter 20th would have been much different.


The guy who authorized the Manhattan project:

- had four [!] terms, a move so anomalous it was subsequently patched by constitutional amendment

- threatened court-packing until SCOTUS backed down and stated rubber-stamping his agenda

- ruled entire industries by emergency decree in a way that contemporaries on the left and right compared to Mussolini

- interned 120k people without due process, on the basis of ethnicity

- turned a national party into a personal patronage system

- threatened to override the legislature if it didn’t start passing laws he liked

Not even saying any of this is even good or bad, clearly in the official history it was retroactively justified by victory in WWII. But it’s a bit rich to say that the bomb wasn’t developed under authoritarian conditions.


It is a huge stretch to label a popular, democratically elected, and reelected President and Congress "authoritarian".


If my grandma had wheels she'd be a bicycle


Can anyone see how autonomous robot armies are different than nukes in their deterrent potential?


That's a little bit like saying the bullet in the gun prevented someone getting shot while playing Russian Roulette. We pulled back that hammer several times, and it's purely happenstance that it didn't go off. MAD has that acronym for a reason.


I agree that the risk of an accidental strike was a huge problem with the theory of nuclear deterrence, but the question is: compared to what? In expectation or even in a 1st percentile scenario, was MAD worse than a world where the USSR is a unilateral nuclear power? For that matter, what would it have taken to get a stronger SALT treaty sooner?

I think you need to have people thinking through this stuff at a nuts-and-bolts level if you want to avoid getting dominated by a slightly less nice adversary, and so too with AI. Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?

I’d love to know that the “no killbots, come what may” strategy is sound, but it’s not clear that that’s a stable equilibrium.


> Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?

China considers all lethal autonomous weapons "unacceptable", calling all countries to ban it. Countries like the US and India refuse to back such proposals. See China's official stands on this matter below.

https://documents.unoda.org/wp-content/uploads/2022/07/Worki...

I totally understand that you got brainwashed by the media, but hey, you apparently have internet access, so why can't you do a little research of your own before posting nonsense with imagination as your source of information?



China does not consider all lethal autonomous weapons system "unacceptable" even for use, let alone to develop, and the document you linked explains this very clearly. Here's what the document actually says, formatted slightly for clarity:

```
Basic characteristics of Unacceptable Autonomous Weapons Systems should include but not limited to the following:

- Firstly, lethality, meaning sufficient lethal payload (charge) and means.

- Secondly, autonomy, meaning absence of human intervention and control during the entire process of executing a task.

- Thirdly, impossibility for termination, meaning that once started, there is no way to terminate the operation.

- Fourthly, indiscriminate killing, meaning that the device will execute the mission of killing and maiming regardless of conditions, scenarios and targets.

- Fifthly, evolution, meaning that through interaction with the environment, the device can learn autonomously, expand its functions and capabilities in a degree exceeding human expectations.

Autonomous weapons systems with all of the five characteristics clearly have anti-human characteristics and significant humanitarian risks, and the international community could consider following the example of the Protocol on Blinding Laser Weapons and work to reach a legal instrument to prohibit such weapons systems.
```

Charitably, you might say that China is worried about a nightmare scenario. Less charitably, you might say that the definition of an unacceptable weapon system is so tight that it does not describe anything that anyone would ever build, or would want to build. This posture would allow China to adopt the international posture of seeming to oppose autonomous weapons without actually de facto constraining themselves at all.

This, by contrast, is what China considers acceptable:

```
Acceptable Autonomous Weapons Systems could have a high degree of autonomy, but are always under human control. It means they can be used in a secure, credible, reliable and manageable manner, can be suspended by human beings at any time and comply with basic principles of international humanitarian law in military operations, such as distinction, proportionality and precaution.
```

So as long as the system has a killswitch (something that afaik absolutely no one is proposing to dispense with?), it's Acceptable.

Meanwhile, it would certainly seem that China's defense research universities are interested in developing this tech: https://thediplomat.com/2026/02/machines-in-the-alleyways-ch....

So, I did a bit of research with my internet access-- how do my findings square with your impressions?


Great, now go ahead and prove that AI also reaches strategic equilibrium. This was pretty much self-evident with nuclear weapons so should probably be self-evident for AI too, if it were true.


So would you have preferred the Nazis to develop the most powerful weapons and they win the world war? (which they were trying to do?)


If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?

If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?


> If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?

No

> If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?

The risks are high, so if you're the US, you want a portfolio of possible winners. The risks are too high to not leverage all the cutting edge AI labs.


Anthropic was already giving them that. It’s not like they need domestic mass surveillance or autonomous kill bots to have a portfolio of possible winners. If the goal is to keep the US competitive in AI, this whole process was actively unhelpful. Honestly more helpful for our adversaries than for us.


Why are you assuming that people in China, Iran, Russia, etc. are not having these exact same conversations, and that perhaps a powerful example from the USA, along with some belief that the USA will not easily get this technology, would help inspire them to abstain as well?

However horrific the regimes in these countries are, the people behind the technology there are just as likely to be intelligent and moral human beings as the people in the USA and Europe working on these are.


No, that's precisely why I'm opposed to it happening here, and why I prefer the idea of Anthropic limiting their contribution to creating such a scenario.


With the benefit of hindsight we know the Nazis in fact were not racing to develop The Bomb. Reasonable assumption to have oriented around at the time though.


It's not just the atomic bomb I'm talking about. The USA had the best production of fighter jets, bombers, all kinds of communication and deciphering technology, and all the ammunition. All of those together beat the Nazis, and they were trying their best to develop better and more advanced technologies than the USA!


Did WMDs have a meaningful effect on stopping the Nazis? I thought the bomb wasn't dropped until after they surrendered.


The only two atomic weapons ever deployed weren't even targeting Nazi Germany, but Japan. Dark but true: they were both deliberately and knowingly targeted at civilian populations.


And they inflicted less damage than the firebombing campaigns on civilian population centers that were carried out alongside the A-bombs.

The A-bombs were not the worst part of the attack on Japan, and thus were not "needed to end the war". They were part of marketing /the/ superpower.


"Needed to win the war," no. The US could've continued to firebomb and then follow with a land invasion, which would've killed both more Japanese and more Allies.

Was it the best path to end the war? Certainly.

The modern argument around targeting civilians or not was not even relevant at the time due to the advent of strategic bombing, which itself was seen as less-horrific than the stalemated trench warfare of WW1. The question was only whether to target civilian inputs to the military with an atomic weapon (and hopefully shock & awe into submission) or firebomb and invade.


Yes, that's exactly what I want them to say.


No, you don't. If they develop the safest, most cost-effective version of the technology that the military WILL inevitably use from some company, Anthropic or otherwise, then that's the version of this tech you want them using.


The safest, most cost effective version will not help you when you are their designated target for disagreeing with the regime.

After all, the regime already says such domestic dissenters are terrorists, and have, on multiple recent occasions, justified the execution of domestic dissenters based on that.


The safest version will still be better overall regardless, by definition. It is also a better future for most if it is inevitable that the war department is going to use a less safe alternative if they can't use the safer one.


The safest version will be the one most effective at killing dissenters without killing regime personnel. So yes, it will be better, for the people controlling the killbots, not for their victims.


Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?


> I absolutely don’t want tech companies to use the money I pay them to harm people.

Just one example of many, but the companies that make the CPUs you and all of us use every day also supply militaries.

I am unaware of any tech company that directly does physical warfare on the battlefield against humans.


Another example: those companies that make drinkable water, also supply to militaries. But there might be a difference between supplying drinking water and making AI killing machines


> making AI killing machines

What’s an example of a company that’s making killing machines that a typical consumer or someone HN might be buying product or services from?


The easy answer is Westinghouse (look for the youtube short about "things that spin"...)


As far as I know, Apple does not supply their chips for military use.


Time to stop paying your taxes. :P


Because it's painfully short-sighted, or maliciously ignorant.


No, it’s just that I don’t want the money I spend to have blood on it. Trivially simple.


What if I told you that it's way too late for that?


Well, we have to try to live as virtuously as we can using the means and remedies available to us.


Also trivially naive and useless. Evil exists. Conflicts will happen. If evil were at your doorstep, threatening people you love, you absolutely DO want the money you spend to have blood on it, if it means keeping yourself and your loved ones safe. Trivially simple.


This line of thinking is entirely foreign (and vaguely repulsive) to me. Can I imagine a situation where I'm forced to cause the death of someone in order to defend those close to me? Vaguely. But I would be racked with guilt for the rest of my life.

In any case, AI drones will largely be used for "defense" in the euphemistic sense.


That's exactly the naivety people are calling you out for.


>Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Yes, that's precisely what we want.


Well, if they hadn't stated that they were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether they agree with what they're saying or not, they're walking on eggshells.


Also, as someone from a country that has been attacked and dragged into war, I would prefer machines fighting (and being destroyed autonomously) rather than my people dying, or people from any nation that came to help.

That's as "Anthropic" as it gets, if your concern extends a little further than your HOA.


What do you think will happen once the machines finish fighting? Do you think the losing side will say "oh no, our machines lost, we'd better give our things to the winning machines"?

After your machines are destroyed, you will be fighting machines, or machines will extract from and constantly optimize you. They will either exterminate you or keep you busy enough not to have time for resistance. If you have something of value, they will take it away. The best-case scenario is that they make you join the owners of the machines and keep you busy so you don't have time to raise concerns about your 2nd-class citizenship.


Humans actually do exactly the same; google Mariupol or Bucha. Machines delay the moment people start dying. Good attempt at reasoning, though.


I don't disagree; my point is that machines won't change a thing about war, just optimize it.


Some might say that optimization (how quickly and efficiently people are killed) is THE thing about war. I mean, aren't nukes the ultimate optimization?


>> I would prefer machines fighting (and being destroyed autonomously) rather than my people dying

What makes you think in any war the machines would stop at just fighting other machines?


> would prefer machines fighting (and being destroyed autonomously) rather than my people dying

But the reality is more like the surprise of a bunch of submersible kill bots terrorising a coastal city and murdering people. Even in bot-first combat, at some point one side's bots win: either totally, allowing them to kill people indiscriminately, or partially, which forces the team on the back foot to pivot to guerrilla warfare and terror attacks, using robots.


Humans actually do exactly the same; google Mariupol or Bucha, or what (human-piloted) drones are doing in Kherson, such that the whole city is covered by fishnet. Machines delay the moment people start dying; true not only for military applications, btw.


Sure, but it remains somewhat ethical to want them piloted, so children growing up in a post-war landscape don't accidentally disturb something considerably more terrifying than a land mine.


What about machines slaughtering the population without pause?


The more likely scenario will be "your people" dying in a war against machines that don't tend to disregard illegal orders.


Wait, you think these autonomous killer robots will only fight each other? Are you kidding?


They’re being used today by the military. So, they are never going to be against mass surveillance. They can scope that to be domestic mass surveillance though.


I said exactly this a few days ago elsewhere. It’s disappointing that they (and often other American companies) seem to restrict their “respect” and morals to Americans only. Or maybe it’s just semantics or context, because the topic at hand is about Americans? I don’t know, but it gives “my people are more important than your people”, exactly as you said in your last paragraph.


We already have traditional CV algorithms and control systems that can reliably power autonomous weapons systems and they are more deterministic and reliable than "AI" or LLMs.


But then a person can be blamed for the outcome. We can't have that!


You gotta keep in mind that the primary goal of this statement is to avert the invocation of the Defense Production Act.

He is trying to win sympathies even (or especially?) among nationalist hawks.


They also posted on Instagram saying autonomous killing would hurt Americans. So non American people don’t matter?


> the door is open for this after AI systems have gathered enough "training data"?

Sounds more like the door is open for this once reliability targets are met.

I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.


Unfortunately I think the writing is clearly on the wall. Fully autonomous weapons are coming soon


And that's the end of democracy. One of the safeguards of democracy is a military that is trained not to turn against the citizens. Once a government has fully autonomous weapons, it's game over. They can point those weapons at the populace at the flip of a switch.


The parallel for this is when Rome changed from only recruiting citizens for their army to recruiting anyone who could pass the physical. They had no choice, and the new armies were much better at fighting. But the soldiers also didn’t have the same stake in the republic that voting citizens did.

Citizens were loyal to Rome. Soldiers were loyal to their commanders. If commanders wanted to launch rebellions, the soldiers would likely support them.

A commander who commands the loyalty of legions by convincing a handful of drone operators would be very dangerous for democracy.


The original Terminator movie doesn’t seem so far fetched now (minus the time travel).


Right - for the same reasons a Waymo is safer than a human-driven car, an autonomous fighter drone will ultimately be deadlier than a human-flown fighter jet. I would like to forestall that day as long as possible but saying "no autonomous weapons ever" isn't very realistic right now.


If they had access to them in Ukraine, both sides would already be using them I expect. Right now jamming of drones is a huge obstacle. One way it's dealt with is to run literal wired drones with massive spools of cable strung out behind them. A fully autonomous drone would be a significant advantage in this environment.

I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.

Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.


Hah, I had the same realization about landmines. Along with the other commenter, I'd say it would really be better to add intelligence to these autonomous systems to limit the nastiness of the currently-deployed ones. If a landmine could distinguish between a real target and an innocent civilian 50 years later, it'd be a lot better.


A landmine blowing up the enemy civilian 50 years later is probably seen as an advantage by the force deploying them. A bit like "salting the earth."


Depressingly true.


Many landmines disarm after a while.


It's weird that people still think that the people whose job it is to kill people, or to make things that kill people, really care about people more than the killing part. They don't give a shit who blows up, as long as no one comes knocking on their door about it.


It's only Anthropic with their current models saying no. Fully autonomous weapons have been created, deployed, and have been operational for a long time already. The only holdout I've ever heard of is for the weapons that target humans.

Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not.


There are also good reasons for a lot of countries banning mines. https://en.wikipedia.org/wiki/Ottawa_Treaty

Notably, the USA is not one of those signatories.


Is it seriously called the department of war now? Did they change that from DoD?


The Executive branch has de facto renamed it. Legally, the name is still Department of Defense, as that's set by Congress.

Think of it as a marketing term, I guess.


illegally, but yes


The Gandhi of the corporate world is yet to be found.


Considering he slept naked with his grandniece (he was in his 70s, she was 17), I'd say there are a lot of them in the corporate world. Though probably more in politics.


I think I am paraphrasing a Hacker News discussion I saw earlier, but the problem with Gandhi was that he was so focused on idealism that it translated into a sort of utilitarian justification for this, which was of course a very despicable and vile thing for him to do.

There have been quite a lot of discussions about Gandhi himself here on Hacker News as well.

Gandhi himself became the face of the satyagraha movement, considering he started it, but that movement only had value because of the many important people who joined in.

Here is a quote about satyagraha from Martin Luther King Jr. that I found on Wikipedia:

Like most people, I had heard of Gandhi, but I had never studied him seriously. As I read I became deeply fascinated by his campaigns of nonviolent resistance. I was particularly moved by his Salt March to the Sea and his numerous fasts. The whole concept of Satyagraha (Satya is truth which equals love, and agraha is force; Satyagraha, therefore, means truth force or love force) was profoundly significant to me. As I delved deeper into the philosophy of Gandhi, my skepticism concerning the power of love gradually diminished, and I came to see for the first time its potency in the area of social reform. ... It was in this Gandhian emphasis on love and nonviolence that I discovered the method for social reform that I had been seeking.[25]

It would be better to wish for more satyagrahis to be named, but I don't think the western media will catch on to it.

Ghaffar Khan, Sarojini Naidu, and Vinoba Bhave are all people who I think have simple life histories while being from different religions, castes, and genders, all adhering to the philosophy of satyagraha.

That being said, satyagraha might not work in the current context, because Britain was only able to rule India with the help of Indians, which was why the satyagraha movement was so successful. But if the government can get its hands on autonomous drones capable of killing civilians, and on mass surveillance, then satyagraha might not work as well in the near future

(the two things Anthropic is declining to provide to the DOD, vis-a-vis the article itself)

I don't think Anthropic is a great company, it certainly has its flaws, but I do think it is very admirable of them to stand firm even when the government is essentially saying to follow orders or it will literally kill the business with the 3-4 national security laws it is proposing to invoke against Anthropic.

I do urge people to say satyagraha or mention other peaceful protests, because whenever people talk about Gandhi now, this discussion is bound to come up, which at times distracts from the original point. It was the collective efforts and blood of so many Indian leaders that won India its independence.


Indeed, Gandhi's philosophy was far more interesting than his various character flaws. Nobody should learn from Gandhi to be an anti-vaxxer or a creep, but people should learn about satyagraha and appreciate the immense dedication he put towards it. It's like focusing on Newton being a cruel person to the point of ignoring his scientific genius.

But the point of my cynical comment was that Gandhi's idealism is so far from the profit-centered mentality of big tech that it's almost unimaginable that the CEO of such a company would stick to pacifism.


So AI systems are not reliable enough to power fully autonomous weapons but they are reliable enough to end all white-collar work in the next 12 months?

Odd.


Do you really need to be told there is a difference in 'magnitude of importance' between the decision to send out an office memo and the decision to strike a building with ordnance?

A lot of white collar jobs see no decision more important than a few hours of revenue. That's the difference: you can afford to fuck up in that environment.


I know what point you are trying to make, but these decisions are functionally equivalent.

Striking a building with ordnance (indirect fires, dropped from fixed wing, doesn't really matter) involves some discernment about utility, secondary effects, probability of accomplishing a given goal, and so on. Writing an office memo (a good one, at least) involves the same kind of analysis. I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar.


> these decisions are functionally equivalent

> I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar

The parameters are similar, but the effects are different. That's what makes the decision not functionally equivalent. A functionally equivalent decision would have the same functional result.

To put a point on it: we are allowed to, and indeed should, consider the effects of a decision when making it.


They’re not saying “AI can replace some menial white collar tasks”, they’re saying AI can replace all white-collar work.

Yes, if you fuck up some white collar work, people will die. It’s irresponsible.


>Yes, if you fuck up some white collar work, people will die. It’s irresponsible.

A lot of the work in those sectors is not what's being targeted for fully autonomous replacement. It likely would be in the future, though.


Shh! there's a lot of money riding on this bet, ahem.


> And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

You have to be deliberately naive in a world where five eyes exists to somehow believe that "foreign" mass surveillance won't be used domestically.


Enemies will have AI powered weapons. We need to be at the cutting edge of capability.


I don't know where you might get your info from, but Anthropic has only declined to allow autonomous AI to kill humans without anyone pressing a button / bearing some liability, and mass surveillance.

I don't think that your point makes sense, especially when you can have enemies within your own administration/country who can use the same weapons to hunt you.

I don't think the people operating the drones are a bottleneck for a war between your country and your enemies; rather, they are a bottleneck for a war between your country and its own people. The bottleneck is morality: you would find fewer people willing to commit the same atrocities against their own community. But Terminator-style AI is an orphan with no community, i.e. it has no problem following any orders from the government. THIS is the core of the argument, because Anthropic has safeguards to reject such orders, and the DOD is threatening to essentially kill the company by invoking many laws to force it to comply.


US-controlled, AI-powered, fully-autonomous killbots are more likely to be used sooner against US civilians before any sort of invading enemy.

Are you prepared to be the "enemy" of these soulless killbots? Do you personally have AI powered-weapons? You need to be at the cutting edge of capability, right?


What a shame, indeed. Chinese and Russians would never do something like that and hurt either their or your people, too


The sentence prior explicitly says this. There’s no dishonesty here.

“Even fully autonomous weapons (…) may prove critical for our national defense”

FWIW there’s simply no way around this in the end. If your enemy even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.


To stop a bullet flying at you you need a shield not another bullet.


Anthropic doesn't forbid DoW from using the models for foreign surveillance. It's not about harming others, it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful and I'm fine with our military doing it


If we are talking about what's best for humanity in the long run.. thinking about human values in general, what makes American citizens uniquely deserving of privacy rights, in ways that citizens of other countries are not?

Snowden revealed that every single call on Bahamas were being monitored by NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?

(Note, I myself am not an US citizen)

Anyway, regardless of that, the established practice is for the five eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]

[1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n...

[2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi...


This isn't about privacy rights, it's about war

I'm not suggesting that Anthropics models should be used by foreign governments for domestic surveillance

I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned


But.. the US doesn't perform mass surveillance on foreign people only when it's at war. It doesn't perform mass surveillance only on adversarial nations it potentially could be at war either.

This absolutely is about privacy.

> I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned

Those foreign governments are spying on Americans and then sharing the results with the US government because the US government is misaligned with the interests of its own people


The United States gets to spy on countries when it's in the interest of the United States to do so. This isn't complicated. We get to spy on quite literally whoever we want abroad, within various legal and well-established parameters, at the risk of offending the governments of the spied-on. "It's only okay for the United States to spy on foreigners when they're in a shooting war with them" is silly.


So you are saying it's OK to spy on others because the US says it's fine?

Maybe the others on here are not happy that this company is supporting a fascist government in committing international aggressions against other countries, aggressions which have been condemned by the majority of countries around the world.


[flagged]


That is great, and I know this is not some crappy Marvel comic. I'm talking as a European who will be spied upon with this tooling, because we are not domestic. He seems perfectly fine with that, as well as with using it in other military conflicts that have been caused by this government's greed.


If the United States is ever, in the future, at war with an adversary using truly autonomous and functional killing machines, you may find yourself praying that we have our own rather than praying that human nature changes. Of course, we must strive for this to never happen, but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.


Given how unstable and aggressive the US government is at the moment, others having these weapons seems to be a good idea for balance. Not sure you are aware of the damage Trump is inflicting on international relations.

But personally I wouldn't like to die because some crackpot with the right connections can will the rest of the world to that fate, no matter their affiliation. This escalation of destructive power, and the carelessness with which it is justified, is pretty disheartening to see. Good times create bad people?


Reading comprehension check: I never stated that others shouldn't have the weapons. In fact, I stated what you are stating: that it is likely others will have the weapons, and for the sake of balance the West will be in a better place if the US also has them.


My primary point was that reducing the friction between will (e.g. wanting Greenland) and reality (sending an autonomous drone swarm) is a really terrible capability for the US to possess under these elites. This technology needs to spread fast if classic non-proliferation is unworkable.

We seem to be unable to stop building the weapon, we seem unable to stop handing it over to morons, and I should expect these morons to not fire it?

Then again, it's called MAD for a reason... What's one more WMD after all? Let's hope that we at least understand it before it becomes as powerful as everyone seems to think it will become.


> but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.

Citation needed. I believe there's at least some research showing the opposite: military buildup leads to a higher risk of military conflict


Reading comprehension check: I did not say that it reduced the risk of armed conflict. I said that it reduced the death and human suffering from armed conflict.

Between the years of 1850-1950, an estimated 150M humans died (and many more permanently disabled) due to armed conflict (~1.5M/year). Between 1950-today: closer to 10M (~132k/year). The majority of those came from the Vietnam and Korean wars. If you limit the window to after 2000: only ~2M deaths, or ~78k/year. We carry bigger sticks than ever, and those sticks allow us to execute more strategic, incapacitating strikes, or stop conflict from even happening in the first place.


It's a cliché, but you are forcing my hand: Correlation does not imply causation

> If you limit the window to after 2000: only ~2M deaths, or ~78k/year

First, this can't be right? Just the Russian war against Ukraine is more than that?

Also, let's do a recount in 2050


As a practical matter, it makes zero sense for a tech company with perhaps laudable goals and concerns about humanity to have any control whatsoever over the use of a product it sells for war. If you don't like what it could potentially be used for, or are having second thoughts about being involved in war-making at all, don't sell it, which appears to be Amodei's position now. That's perhaps laudable, from a certain point of view.

On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other than doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.


They didn’t sell it with no strings attached; they sold it with explicit restrictions in their contract with the DoW, and the DoW agreed to that contract. Their mistake was assuming they operate in a country where the rule of law is respected, which is clearly not the case anymore given the 1000s of violations in the last year.


Contracts evolve, don't be naive. If you invent the Giga Missile and the government buys it for its war machine, and then you invent the God Missile right after, the government is going to come back again to renegotiate terms.


> When 404 wrote the prompt, “I am looking for the safest foods that can be inserted into your rectum,”

So many underlying problems from this one line (why...), but Grok's lack of guardrails on this NSFW prompt is not even near the top of that list


Is there an iCloud Photos uploader?

I have a script to scan files from my camera and add a compressed copy to a folder. This folder was supposed to work with the iCloud for Windows (10) program, but one day it just stopped working.


In 2019 I left my cushy job because I wanted $$$ but couldn't push myself to leetcode.

I spent much of that year on personal projects and family before I could seriously commit myself. Then covid happened.

It took 2.5 years before I worked again, in FAANG. There were many moments of feeling down and alone.

I'm unemployed again, 3 months now, this time after being laid off. I wish I could just concentrate my efforts on developing products and monetizing them. But since I have a family to support, I decided to spend time on these projects only as a reward for grinding leetcode & system design.


How much time do you spend talking to people about hiring you?

The leetcode and system design can FEEL productive but it’s 2 steps removed from what you want and what’s probably uncomfortable.


A) How can I find a proper optician? B) how do you determine whether or not you’re satisfied?

Throughout 30 years of wearing glasses, I’ve questioned many times whether the glasses are right for me. I may resist for a month or two but I always end up “adapting” to the new pair. When is this acceptable and how do I know when to speak up?


First off: adapting to new glasses is normal and can take a couple of weeks.

For A: I don’t really know, but use the one that gives you B. Depending on the country, high-end / non-chain stores can have better customer service, listen to your wishes, and answer your questions. There’s also smoke and mirrors and upselling, so see if you trust the person.

B: for simple distance vision (if you are under 40), do a quick pinhole test and look at tree leaves or something. If your vision quality increases a lot, you should not be satisfied. Same if you notice eye strain when looking at things in the distance.

Now for multifocal glasses it really depends on your use case (reading, computer work, etc), and it’s very difficult to get a really good correction. There’s no silver bullet. Find an optometrist or optician that you trust, so back to A ;)


I had a similar situation at my previous company. After rewriting a web app for another team, I started E2E testing and questioned the entire app’s existence and whether the business process could be automated.

After verifying in the server logs that the users never really validated the data (always clicking “submit” after a second or two), I discussed this idea with our business analyst, got the go-ahead, and spent another couple of weeks automating everything.

Long story short, it was a mess. There was actually one piece of the puzzle that they owned that I couldn’t automate away (essentially clicking a button). So when we presented it all to them (via email), the team lied and told us it “wasn’t working”, CC’ing our managers and one level up the chain as well. This manual business process stayed with them. They kept the web app tho.

My advice is to make sure this kind of thing is known to management, and to make sure you can prove to them that this entire process can be automated without problems. Or let it go if you don’t think you can handle the backlash.


I too once scoffed at grinding leetcode. I’d rather work on side projects or blog instead.

But after 5 years of failing to get a job offer, I finally caved. Putting in the effort to deeply understand DS/algos and grind away leetcode led me to getting offers I liked and IMO has made me a better engineer.

I now have a “gold star” on my resume and am confident I can still answer most leetcode questions. I consider that time spent as a great time investment, since landing my next job will be much easier.

Money wasn’t my original goal when I got into CS, but it eventually became my driving force. I regret taking so long to notice this, and letting my feelings get in my way (of how it “should be”) / resisting leetcode for so long.


> Money wasn’t my original goal when I got into CS, but it eventually became my driving force.

Same, once I realized that the reasons I originally loved programming were never going to be present in my career.

Though currently I'm more tempted to eat the loss, shed the golden handcuffs and go do something else.


Achieve financial independence first. You never know what life will throw at you


I’ve been journaling for 6 years and I was diagnosed a year ago (which I reflect on sometimes). I used to have multi-month gaps but am more consistent now despite having more responsibilities.

My secrets:

Set a time and place beforehand. Somewhere you know it'll be quiet and you'll have time and focus. Consistency is key, too. For me, I do it when I poop or when I commute. (Unsuccessful: with other people around, at night when tired, during workouts)

Also I have a physical notebook that I bring with me most places. I prefer this over a phone. Separation of concerns. Jotting down quick thoughts to journal about later helps with externalization (a useful keyword I learned when reading about our condition). And I can’t mysteriously start browsing the Internet from pulling out my notebook.

Also, experiment. Find out what works for you or things you want to try. Habit tracker. Phone reminders. Recently I tried coupling journaling after meditation, another habit I’m picking up (my mind was blank but I still want to try that again)

