
Thanks. In practice, access control is enforced centrally by AxonFlow, not delegated to the orchestrator.

Each LLM or tool call is evaluated at execution time against the active policy context, which includes the user, workflow, step, and tenant. That allows different steps in the same workflow to run under different credentials, providers, or cost and permission constraints if needed.
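A rough sketch of what that per-call evaluation might look like. All names here are illustrative assumptions, not AxonFlow's actual API; the point is that the decision keys on the full (user, workflow, step, tenant) context at execution time, so different steps can resolve to different permissions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyContext:
    # Hypothetical context fields, mirroring the ones described above.
    user: str
    workflow: str
    step: str
    tenant: str

def evaluate(ctx: PolicyContext, rules: dict) -> bool:
    # Rules key on (tenant, workflow, step); unknown combinations deny.
    allowed_users = rules.get((ctx.tenant, ctx.workflow, ctx.step), set())
    return ctx.user in allowed_users

# Two steps of the same workflow can carry different allow-lists.
rules = {
    ("acme", "invoice", "extract"): {"alice"},
    ("acme", "invoice", "approve"): {"bob"},
}
ctx = PolicyContext(user="alice", workflow="invoice", step="extract", tenant="acme")
assert evaluate(ctx, rules)  # allowed at this step only
```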

In gateway mode, the orchestrator still issues the call, but AxonFlow pre-authorizes it and records the decision so the policy is enforced consistently. In proxy mode, AxonFlow holds and applies the credentials itself and routes the call to the appropriate provider.
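To make the contrast concrete, here is a minimal sketch of the two modes, assuming a stubbed policy check and provider; none of these function names come from AxonFlow itself. In gateway mode the caller still sends the request after the pre-check is recorded; in proxy mode the enforcement layer holds the credentials and sends it.

```python
def authorize(call):
    # Stub for the central policy check described above.
    return call.get("user") == "alice"

audit_log = []

def gateway_precheck(call):
    # Gateway mode: pre-authorize and record the decision; the
    # orchestrator then issues the provider call itself.
    decision = authorize(call)
    audit_log.append((call["step"], decision))
    if not decision:
        raise PermissionError("denied by policy")

def proxy_call(call, creds, providers):
    # Proxy mode: the enforcement layer applies the credentials
    # and routes the call to the right provider directly.
    if not authorize(call):
        raise PermissionError("denied by policy")
    provider = providers[call["provider"]]
    return provider(call["prompt"], creds[call["provider"]])

providers = {"openai": lambda prompt, key: f"response({prompt})"}
creds = {"openai": "placeholder-key"}  # held centrally, never by the orchestrator

call = {"user": "alice", "step": "extract", "provider": "openai", "prompt": "hi"}
gateway_precheck(call)                     # gateway: check, then caller sends
result = proxy_call(call, creds, providers)  # proxy: enforcement layer sends
```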

The key point is that credentials and access rules are defined once and enforced centrally, while orchestration logic remains separate.



What kind of latency does this generate? I guess for LLM operations the extra latency might not be that important. Is that correct?


Good question. The overhead is designed to be low enough for inline enforcement. For the fast, rule-based checks we typically see single-digit millisecond evaluation time, and in gateway mode the end-to-end pre-check usually adds around 10 to 15 ms.

You’re right that relative to an LLM call this is usually negligible, but we still treat it seriously because policy checks also sit in front of tool calls and other non-LLM operations where latency matters more. That’s why the static checks are compiled and cached and the gateway path is kept tight.
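As a toy illustration of the compile-and-cache idea (my own sketch, not the actual implementation): the rule set is flattened into a lookup table once, and repeated identical checks hit a memoization cache, so the hot path is a dictionary probe rather than a full policy walk.

```python
import functools

# Flattened rule table built once ahead of time ("compiled" step).
RULES = {("acme", "extract"): {"alice", "bob"}}

@functools.lru_cache(maxsize=4096)
def check(tenant: str, step: str, user: str) -> bool:
    # Cached static check: repeated (tenant, step, user) triples
    # are answered without re-evaluating anything.
    return user in RULES.get((tenant, step), set())

assert check("acme", "extract", "alice")
assert not check("acme", "extract", "mallory")
```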

If you want more detail, I have a longer architecture walkthrough that goes into the execution path and performance model: https://youtu.be/hvJMs3oJOEc


Understood. Pretty cool, good luck with the project!




