That doesn’t sound right. Surely there’s a big difference between Anthropic selling the government direct access to its models, and an unrelated contractor that sells pencils to the government and happens to use Anthropic’s services to help write the code for their website.
Let me put it this way: DoD needs a new drone and they want some gimmicky AI bullshit. They contract the drone from Lockheed. Lockheed is not allowed to source the gimmicky AI bullshit from Anthropic because they have been declared a supply-chain risk on the basis that they have publicly stated their intention to produce products which will refuse certain orders from the military.
Let’s put it this way: the DoD is buying pencils from a company. Should that company be prohibited from using Claude?
You are confusing the need to avoid Anthropic as a component of something the DoD is buying, with prohibitions against any use.
The DoD can already sensibly require providers of systems not to incorporate certain companies' components, or restrict them to components from a list of vetted suppliers.
It can do that without prohibiting entire companies from uses unrelated to what the DoD purchases, or from uses where they aren't a component in something it buys.
There seems to be a massive misunderstanding here - I'm not sure on whose side. In my understanding, if the DoD orders an autonomous drone, it would probably write in the ITT that the drone needs to be capable of doing autonomous surveillance. If Lockheed uses Anthropic under the hood, it does not meet those criteria, and cannot reasonably join the bid?
What the declaration of supply-chain risk does, though, is ensure that nobody at Lockheed can use Anthropic in any way without risking exclusion from any DoD bids. That effectively costs Anthropic half or more of the business in the US.
And maybe to take a step back: Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?
> Who in their right minds wants to have the military have the capabilities to do mass surveillance of their own citizens?
Who in their right minds wants to have the US military have the capability to carry out an unprovoked first strike on Moscow, thereby triggering WW3, bringing about nuclear armageddon?
And yet, do contracts for nuclear-armed missiles (Boeing for the current LGM-30 Minuteman ICBMs, Northrop Grumman for its replacement the LGM-35 Sentinel expected to enter service sometime next decade, and Lockheed Martin for the Trident SLBMs) contain clauses saying the Pentagon can't do that? I'm pretty sure they don't.
The standard for most military contracts is "the vendor trusts the Pentagon to use the technology in accordance with the law and in a way which is accountable to the people through elected officials, and doesn't seek to enforce that trust through contractual terms". There are some exceptions – e.g. contracts to provide personnel will generally contain explicit restrictions on their scope of work – but historically, contracts for classified computer systems/services haven't contained field-of-use restrictions.
If that's the wrong standard for AI, why isn't it also the wrong standard for nuclear weapons delivery systems? A single ICBM can realistically kill millions directly, and billions indirectly (by being the trigger for a full nuclear exchange). Does Claude possess equivalent lethal potential?
Anthropic doesn't object to fully autonomous AI use by the military in principle. What they're saying is that their current models are not fit for that purpose.
That's not the same thing as delivering a weapon that has a certain capability but then putting policy restrictions on its use, which is what your comparison suggests.
The key question here is who gets to decide whether or not a particular version of a model is safe enough for use in fully autonomous weapons. Anthropic wants a veto on this and the government doesn't want to grant them that veto.
Let me put it this way–if Boeing is developing a new missile, and they say to the Pentagon–"this missile can't be used yet, it isn't safe"–and the Pentagon replies "we don't care, we'll bear that risk, send us the prototype, we want to use it right now"–how does Boeing respond?
I expect they'll ask the Pentagon to sign a liability disclaimer and then send it anyway.
Whereas, Anthropic is saying they'll refuse to let the Pentagon use their technology in ways they consider unsafe, even if Pentagon indemnifies Anthropic for the consequences. That's very different from how Boeing would behave.
Why are we calibrating our ethical barometer against the actions of existing companies and DoD contractors? The military-industrial apparatus has been insane for far too long, as Eisenhower warned.
When we're entering the realm of "there isn't even a human being in the decision loop, fully autonomous systems will now be used to kill people and exert control over domestic populations" maybe we should take a step back and examine our position. Does this lead to a societal outcome that is good for People?
The answer is unabashedly No. We have multiple entire genres of books and media, going back over 50 years, that illustrate the potential future consequences of such a dynamic.
* private defense contractor leverages control over products it has already sold to set military doctrine.
The second one is at least as important as the first one, because handing over our defense capabilities to a private entity which is accountable to nobody but its shareholders and executive management isn't any better than handing them over to an LLM afflicted with something resembling BPD. The first problem absolutely needs to be solved, but the solution cannot be to normalize the second problem.
> Surely there’s a big difference between Anthropic selling the government direct access to its models, and an unrelated contractor that sells pencils to the government and happens to use Anthropic’s services to help write the code for their website.
Yes, this is the part where I acknowledge that there might be overreach in my original comment, but it's not nearly as extreme or obvious as the debate rhetoric is implying. There are various exclusion rules. This particular rule was (speculating here!) probably chosen because of a) the evocative name (sigh), and b) the fact that it allows broader exclusion, in that "supply chain risks" are something you wouldn't want allowed in at any level of procurement, for obvious reasons.
Calling canned tomatoes a supply chain risk would be pretty absurd (unless, I don't know...they were found to be farmed by North Korea or something), but I can certainly see an argument for software, and in particular, generative AI products. I bet some people here would be celebrating if Microsoft were labeled a supply chain risk due to a long history of bugs, for example.
Over the last 10 years or so in SF and LA, I’ve seen countless POS systems at restaurants and small businesses, so it’s difficult to believe that Square is anything more than one player in an enormous field.
And businesses I frequent over many years seem to change their POS systems often. I’ve always assumed that every year there are a bunch of new startups using their VC funds to give away free iPad minis. When the cheap hardware breaks or the software company goes under, there’s always a new one to take its place.
That sounds like a spurious distinction. Pretty sure you can’t say “Person X is a murderer” and then say “well I’m only expressing my opinion, and in my opinion if you do something that annoys me that qualifies as murder.”
Nope, not in the US. It is perfectly legal to say, for example, "Kyle Rittenhouse is a murderer" despite him being acquitted. You're entirely free to disagree with the result, that is an opinion. Any opinion based on public knowledge is ok. It doesn't even have to be reasonable or rational.
What you can't do is imply non-public knowledge, aka "I heard from my cousin who works in law enforcement that Kyle murdered a hobo when he was 12 but the records were sealed", or state specific facts that can be proven true or false: "Kyle murdered a hobo on September 11, 2018 out back of the 7-11 in Gainesville, FL"
The standard for libel/slander is much, much higher than people think. It's extremely difficult to meet, and for public figures, it's almost impossible.
Is “opinion versus fact” relevant to that example? My impression is that Kyle Rittenhouse wouldn’t have a strong defamation case against a random person tweeting that he’s a murderer, but the reason isn’t that “it’s a statement of opinion.” The reason is that it’s a high-profile and controversial homicide case, and it would be very difficult for Rittenhouse to show that the random Twitter user had “actual malice.”
Sure it is, that's how the 1A works. Saying he was convicted of murder is not true, but calling him a murderer is an opinion. Your opinion doesn't even have to be reasonable. It just has to be based on facts that both you and I have.
1A rights are construed really broadly. The courts don't do the 'he wasn't legally convicted therefore it's illegal to call him one' thing.
If that were true, news organizations wouldn't be as careful as they are to preface the word "alleged" before the behavior -- before or after a trial. I don't think you'll find any reputable commercial newsgathering organization that makes a plain statement that Kyle Rittenhouse is a murderer.
The First Amendment doesn't protect the speaker against all forms of defamation (though it does put some barriers up that make it harder to win in some circumstances). If it did, defamation as a cause of action wouldn't exist at all.
As a practical matter, though, this is largely theoretical. Once you've been through the rigamarole of arrest, prosecution, and trial, even if you're found not guilty of the crimes committed, the reputational damage is just too widespread. You're not going to go after the defamers: there are just too many, and if you tried, there would be a fair question as to whether you have any positive reputation left to injure. Your life is pretty much ruined. It's a pretty terrible situation for the wrongly accused.
Nope. News companies avoid it, but for different reasons.
For one, it's an opinion, and traditional journalism likes to pretend it doesn't have those.
The bigger reason is that anyone can sue for anything in the US. Litigation can be ruinously expensive, and it can cost hundreds of thousands of dollars just to get a suit thrown out. Hundreds of lawsuits get expensive, even if you win or hand out $25K "get lost" settlements.
(That's why SLAPP laws are so important -- a strong SLAPP law like CA kills this behavior)
Whether or not something is prudent behavior has nothing to do with legality.
> While the nation is known abroad for minimalist lifestyles, their websites are oddly maximalist.
I’m not aware of this stereotype of Japanese minimalism. I guess there’s Marie Kondo, and some Japanese high-end dining tends towards minimalism. But then there’s manga, anime, kawaii, Nintendo, Sega, Miyazaki, etc., a lot of which is closer to maximalism than minimalism.
Having attended a lot of conferences in Japan, I would have said signage and the like tends towards the amateur and garish. Which isn't inconsistent with what you wrote. I've always found Japan a weird mix of refined/minimalist and kitsch.
A subset of Japanese people these days use minimalism as a justification for lesser purchasing power.
That said, I think the Japanese commercial ecosystem is still less wasteful than the one in the US, except for the excessive plastic wrapping. I hope one day they realize that doesn't count as "Omotenashi".
The wet Japanese climate necessitates sealed packaging. It's not as extreme as Southeast Asia, but things do get soggy in a matter of hours. So "excessive" aluminized plastic wrapping is just a necessity.
I think a lot of what looks like too much wrapping can be explained by high humidity year round. The wrapping protects products from spoiling or being damaged in such an environment.
Well, if each unit leaves its own scent trail, that’s a lot of per-unit state, and little to nothing that you can pre-compute for the entire map. You could have all units' trails on a global “scent layer” that all units read from, but then you’re basically just building up a graph of common paths that could have been precomputed for the entire map.
It also doesn’t at all address inter-unit collisions, which is a big topic in RTS pathfinding.
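To make the trade-off concrete, here's a minimal sketch of the shared "scent layer" idea being discussed: one global grid that every unit deposits into and reads from, so the per-unit state shrinks to just a position. The class name, the decay constant, and the greedy neighbor lookup are all illustrative assumptions, not taken from any particular engine, and this deliberately ignores the inter-unit collision problem mentioned above.

```python
class ScentLayer:
    """One shared grid of scent values. All units write to and read from
    the same layer, so per-unit pathing state stays small.
    (Hypothetical sketch; names and constants are illustrative.)"""

    def __init__(self, width, height, decay=0.95):
        self.width, self.height = width, height
        self.decay = decay
        self.grid = [[0.0] * width for _ in range(height)]

    def deposit(self, x, y, amount=1.0):
        # A unit standing at (x, y) strengthens the trail under it.
        self.grid[y][x] += amount

    def update(self):
        # Exponential decay each tick: stale trails fade away,
        # while frequently travelled paths stay strong.
        for row in self.grid:
            for x in range(self.width):
                row[x] *= self.decay

    def strongest_neighbor(self, x, y):
        # Greedy steering: move toward the adjacent cell with the
        # most scent. Ties are broken by scan order.
        best, best_val = (x, y), -1.0
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nx, ny = x + dx, y + dy
                if (dx or dy) and 0 <= nx < self.width and 0 <= ny < self.height:
                    if self.grid[ny][nx] > best_val:
                        best, best_val = (nx, ny), self.grid[ny][nx]
        return best
```

Note how this illustrates the commenter's point: because the layer is global, what emerges over many ticks is effectively a popularity map of common routes, i.e. information a static map analysis could have precomputed, and nothing here prevents two units from greedily steering into the same cell.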
It seems to me that the open society is in a perpetual state of trying to outrun an endless sequence of problems: would-be invaders from closed societies, internal activists who would rather close down the society in the name of stability, exhaustion of resources on the planet, solar system, and so on, the inevitable asteroid impact or supernova, etc.
And the idea is that this endless sequence of problems exists regardless of how open your society is. So even if you were able to implement a perfect set of authoritarian rules to establish a stable closed society with the technology to capture all the resources from the solar system and redirect all dangerous asteroids, well crap, you still weren't innovative enough to stop the supernova from killing everyone 200 million years later.
Not really. Most of the cultural notion about the remarkable effects of placebos came from flawed studies in the 1950s. As far as I can tell, the modern consensus is that there's no clinically significant placebo effect except for conditions that can only be measured by a subject self-reporting their own perception (like pain and fatigue).