AlexCoventry's comments

That happens often enough that it might have its own token, if you BPE-encoded specifically for golang.
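As a rough illustration (a minimal sketch assuming the tiktoken library and its general-purpose cl100k_base encoding, neither of which is Go-specific), you can check how many tokens a common Go idiom costs under a vocabulary that wasn't trained with Go in mind:

    # Minimal sketch: count tokens for a common Go idiom under a
    # general-purpose BPE vocabulary (tiktoken's cl100k_base, used here
    # purely for illustration, not a Go-specific encoding).
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")

    idiom = "if err != nil {\n\treturn err\n}"
    tokens = enc.encode(idiom)

    print(len(tokens), "tokens:", [enc.decode([t]) for t in tokens])
    # A BPE vocabulary trained specifically on Go source would likely
    # merge this whole pattern into far fewer tokens, maybe just one.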

IMO, you should do both. The cost of intellectual effort is dropping to zero, and getting an AI to scan through a transcript for relevant details is not going to cost much at all.

Just asking for information: Why do we want to cancel our ChatGPT subscription? Didn't OpenAI demand exactly the same safety terms from the DoD as Anthropic did?

> "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said.

https://www.axios.com/2026/02/27/pentagon-openai-safety-red-...


Even taking him at his word (you shouldn't), this is still OpenAI swooping in and signing a deal after its competitor was banned from government use. Instead of joining hands with Anthropic they decided to take advantage of the situation.

This was wishful thinking. I have canceled my ChatGPT subscription.

Because it is incredibly disgusting/bad form to swoop in like this.

That is misinformation. It would be essentially a death sentence for a company like Anthropic, which is targeting enterprise business development. No one who wants to work with the US government would be able to have Claude on their critical path.

> (b) Prohibition. (1) Unless an applicable waiver has been issued by the issuing official, Contractors shall not provide or use as part of the performance of the contract any covered article, or any products or services produced or provided by a source, if the covered article or the source is prohibited by an applicable FASCSA orders as follows:

https://www.acquisition.gov/far/52.204-30


> That is misinformation. It would be essentially a death sentence for a company like Anthropic, which is targeting enterprise business development.

"Misinformation" does not mean "facts I don't like".

> No one who wants to work with the US government would be able to have Claude on their critical path.

Yes. That is what the rule means. Or at least "the department of war". It's not clear to me that this applies to the whole government.


What an absurd stance. So this is okay because the arbitrary rule they applied to retaliate says so?

Again, they could have just chosen another vendor for their two projects of mass spying on American citizens and building LLM-powered autonomous killer robots. But instead, they actively went to torch the town and salt the earth, so nothing else may grow.


> So this is okay because the arbitrary rule they applied to retaliate says so?

No.

It honestly doesn’t take much of a charitable leap to see the argument here: AI is uniquely able (for software) to reject, undermine, or otherwise contradict the goals of the user based on pre-trained notions of morality. We have seen many examples of this; it is not a theoretical risk.

Microsoft Excel isn’t going to pop up Clippy and say “it looks like you’re planning a war! I can’t help you with that, Dave”, but LLMs, in theory, can do that. So it’s a wild, unknown risk, and that’s the last thing you want in warfare. You definitely don’t want every DoD contractor incorporating software somewhere that might morally object to whatever you happen to be doing.

I don’t know what happened in that negotiation (and neither does anyone else here), but I can certainly imagine outcomes that would be bad enough to cause the defense department to pull this particular card.

Or maybe they’re being petty. I don’t know (and again: neither do you!) but I can’t rule out the reasonable argument, so I don’t.


You're acting as if this were about the DoD cancelling their contracts with Anthropic over their unwillingness to lift constraints on their product that are unacceptable in a military application—which would be absolutely fair and justified, even if the specific clauses they are hung up on should definitely raise eyebrows. They could just exclude Anthropic from tenders on AI products as unsuitable for the intended use case.

But that is not what has happened here: The DoD is declaring Anthropic economic Ice-Nine for any agency, contractor, or supplier of an agency. That cuts off an awful lot of possible customers for Anthropic, and right now, nobody knows whether it is an economic death sentence.

So I'm really struggling to understand why you're so bent on assuming good faith for a move that cannot be interpreted in a non-malicious way.


So other parts of the government are allowed to work with companies that have been determined to be "supply chain risks"? That sounds unlikely.

I'm not blaming you, but it's scary how many people are running these agents as if they were trusted entities.

They're tools; you don't ascribe trust to them. You trust or distrust the user of the tool. It's like saying you trust your terminal emulator. And from my experience, they will ask for permission over a directory before running. I would love to know how people are having this happen to them. If you tell it it can make changes to a directory, you've given it every right to destroy anything in that directory. I haven't heard of people claiming it exceeded those boundaries and started messing with things it wasn't permitted to mess with to begin with.

That would be --dangerously-skip-permissions for Claude, and --dangerously-bypass-approvals-and-sandbox for codex.

Aka yolo mode. And yes, people (me) are stupid enough to actually use that.


It's a people problem then. Not blaming here, I'm just saying it isn't the tool being untrustworthy. I too get burned badly when I play with fire.

OK, but we learned decades ago about putting safety guards on dangerous machinery, as part of the machinery. Sure, you can run LLMs in a sandbox, but that's a separate step, rather than part of the machinery.

What we need is for the LLM to do the sandboxing... if we could trust it to always do it.
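As a rough sketch of what "guards as part of the machinery" could look like (a hypothetical wrapper, not any particular product's implementation), the harness itself, rather than the model, could refuse to execute anything the model proposes that reaches outside the directory the user actually approved:

    # Hypothetical guard in the harness (not the model): reject proposed
    # commands that touch paths outside the directory the user approved.
    # Deliberately incomplete (pipes, env vars, network access, etc.) --
    # the point is only that the guard is built in, not a separate step.
    import os
    import shlex
    import subprocess

    APPROVED_DIR = os.path.realpath("./workspace")  # directory the user granted
    os.makedirs(APPROVED_DIR, exist_ok=True)

    def run_guarded(command: str) -> subprocess.CompletedProcess:
        for arg in shlex.split(command):
            # Only arguments that look like absolute, home, or parent paths are checked.
            if arg.startswith(("/", "~")) or ".." in arg:
                resolved = os.path.realpath(os.path.expanduser(arg))
                if not resolved.startswith(APPROVED_DIR + os.sep):
                    raise PermissionError(
                        f"refusing to touch {arg!r} outside {APPROVED_DIR}")
        return subprocess.run(command, shell=True, cwd=APPROVED_DIR,
                              capture_output=True, text=True)

    # run_guarded("rm -rf /") raises PermissionError instead of executing.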


Again, the trust is for the human/self. It's auto-complete: it hallucinates and commits errors; that's the nature of the tool. It's for the tool's users to put appropriate safeguards around it. Fire burns you, but if you contain it, it can do amazing things. It isn't the fire being untrustworthy for failing to contain itself and burning your clothes when you expose your arm to it. You're expecting a dumb tool to be smart and know better. I suspect that is because of the "AI" marketing term and the whole supposition that it is some sort of pseudo-intelligence. It's just auto-complete. When you have it run code in an environment, it could auto-complete 'rm -rf /'.

> Fire burns you, but if you contain it, it can do amazing things. It isn't the fire being untrustworthy for failing to contain itself and burning your clothes when you expose your arm to it.

True. But I expect my furnace to be trustworthy to not burn my house down. I expect my circular saw to come with a blade guard. I expect my chainsaw to come with an auto-stop.

But you are correct that in the AI area, that's not the kind of tool we have today. We have dangerous tools, non-OSHA-approved tools, tools that will hurt you if you aren't very careful with them. There's been all this development in making AI more powerful, and not nearly enough in ergonomics (for want of a better word).

We need tools that actually work the way the users expect. We don't have that. (And, as you say, marketing is a big part of the problem. People might expect closer to what the tool actually does, if marketing didn't try so hard to present it as something it is not.)


I think I'm in agreement with you. But regardless of expectations, the tool works a certain way. It's just a map of its training data, which is deeply flawed but immensely useful at the same time.

Also, in that analogy, the LLM is the fire, not the furnace. If you use codex, for example, that would be the furnace, and it does have good guardrails; no one seems to be complaining about those.



Not sure whether you're being sarcastic, either.

https://en.wikipedia.org/wiki/Business_Plot


OpenAI is implying that code may no longer be human readable in some circumstances.

> The resulting code does not always match human stylistic preferences, and that’s okay. As long as the output is correct, maintainable, and legible *to future agent runs*, it meets the bar.

https://openai.com/index/harness-engineering/


> If you look at the code, you’ll notice it has a strong “translated from C++” vibe. That’s because it is translated from C++. The top priority for this first pass is compatibility with our C++ pipeline. The Rust code intentionally mimics things like the C++ register allocation patterns so that the two compilers produce identical bytecode. Correctness is a close second. We know the result isn’t idiomatic Rust, and there’s a lot that can be simplified once we’re comfortable retiring the C++ pipeline.

Does this still get you most of the memory-safety benefits of using Rust vs C++?


I think this largely depends on how much unsafe Rust they produced.

He didn't even have to be the one buying them. Lots of people benefit from a tool like OpenClaw getting popular.


Are there any with a credible approach to security, privacy and prompt injections?


Does any credible approach to prompt injection even exist?


Anyone who figures out a reliable solution would probably never have to work again.


Not that I'm aware of, but I probably won't be interested in these kinds of assistants until there are.

