
Why does whether the agent "commits" to a rule cryptographically matter?

Surely what matters is just the enforcement, and maybe the measuring of sentinel events -- how far does it wander off course?

How is cryptography an important part of this, given that we're talking about a layer that sits on top of an LLM without an adversary in-between?

I know you mention non-repudiation, but ... there's no kind of real non-repudiation here in this environment.




Very fair question. If you control the whole stack (your agent, your middleware, and your logs), then cryptography doesn't add much. You already trust yourself.

But, it matters when there are multiple parties. An enterprise deploys an agent that can handle customer data. The customer wants proof the agent has followed the rules. The regulator wants proof that the logs were not just edited after an incident. Without cryptographic signatures and hash chains, the enterprise can just say "trust us." With them, the proof is independently verifiable.

It's the difference between "we followed the rules" and "here's a mathematically verifiable proof we followed the rules." For internal use, it's overkill. For anything with external accountability, that's the point.
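To make the hash-chain part of this concrete: a minimal sketch of a tamper-evident log, where each entry's hash covers the previous entry's hash, so editing any earlier record breaks verification of everything after it. The names here (`append_entry`, `verify_chain`) are illustrative, not any particular library's API, and a real deployment would also sign entries with a private key (e.g. Ed25519) so a third party can verify who produced them, which this stdlib-only sketch omits.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def entry_hash(prev_hash: str, record: dict) -> str:
    # Canonical JSON so the same record always hashes identically.
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"record": record, "hash": entry_hash(prev, record)})

def verify_chain(log: list) -> bool:
    prev = GENESIS
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["record"]):
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, {"action": "read_customer_record", "allowed": True})
append_entry(log, {"action": "export_data", "allowed": False})
assert verify_chain(log)

# Quietly editing an earlier record after an incident is detectable:
log[0]["record"]["allowed"] = False
assert not verify_chain(log)
```

Note that, as the reply below argues, verification only proves the log itself wasn't rewritten; it says nothing about whether the logging code faithfully recorded what actually happened.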


There's no mathematically verifiable proof that anyone followed the rules. There's a cryptographic chain, but it just means "this piece of the stack, at some point, was convinced to process this and recorded that it did this." -- not whether that actually happened, what code was running, etc.

It doesn't tell you anything about what code was running there or whether it was really enforced.

Look, it's cool that this is an area that interests you. But I want you to know that AI agents are sycophantic and will claim your ideas are good and will not necessarily steer you in good directions. I have patents in the area of non-repudiation dating back 25 years and am doing my best to give you good feedback.

Non-repudiation, policy enforcement, audit-readiness, ledgers: these are all good things. As far as I can tell, there's nothing too special about doing this with LLMs, either. The same kinds of code that a bank uses to ensure its ledger isn't tampered with and that the right software is running in the right places would work for this job -- and that code wasn't vibe coded and mostly specified by AI.


You’re correct. The cryptographic chain proves “this middleware processed this action and recorded it,” not that the enforcement logic itself was correct or that the code running was what you think it was. Those are different guarantees, and I have been conflating them.

On “nothing too special about doing this with LLMs,” also fair. The primitives (policy enforcement, audit trails, non-repudiation) aren’t new. The bet is that AI agents will need these at a scale and standardization level that does not exist yet, and having it as a composable library matters when every framework (LangChain, CrewAI, Vercel AI SDK) is building agents differently. But the underlying cryptography isn’t novel.


Proving policy controls are in place and that actions were taken is a fairly universal problem.

Cryptography doesn't really do as much to improve it as one would think. Yes, evidence of sequence, or that something happened before a certain time, is a helpful tool to have in the toolbox.

The earliest human writings date to about 3000-3500 BCE, and are almost entirely ledgers on clay tablets.

I want to point out a little asymmetry. It's a little rude to generate a bunch of stuff, including writing, using LLMs, and then expect actual humans to interact with it. If it wasn't worth your time to do, understand, and say, why should it be worth others' time to read and respond to?




