Hacker News | new | past | comments | ask | show | jobs | submit | criticalpudding's comments

This is perfect for my use case! I'm building an MCP tool that can take 4 to 10 minutes to complete, and I'm currently working around the async problem with exactly what you describe in the README (exposing another MCP tool that the model polls for results), which is not ideal. Hope this gets adopted more widely!
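The polling workaround described above can be sketched as a pair of tool functions: one that starts the job and returns an id immediately, and one the model calls repeatedly to check on it. This is a minimal, hypothetical illustration (the names `start_long_task` and `poll_result` and the in-memory job store are mine, not from the comment or any MCP SDK):

```python
import threading
import time
import uuid

# In-memory job store; a real server would persist this across requests.
_jobs = {}

def start_long_task(payload: str) -> str:
    """First tool: kick off the long-running work, return a job id immediately."""
    job_id = uuid.uuid4().hex
    _jobs[job_id] = {"status": "running", "result": None}

    def worker():
        time.sleep(0.1)  # stand-in for the real 4-10 minute task
        _jobs[job_id] = {"status": "done", "result": f"processed:{payload}"}

    threading.Thread(target=worker, daemon=True).start()
    return job_id

def poll_result(job_id: str) -> dict:
    """Second tool: the model calls this until status flips to 'done'."""
    return _jobs.get(job_id, {"status": "unknown", "result": None})
```

The awkward part the commenter is pointing at: the LLM itself has to keep deciding to call `poll_result` again, burning a model turn per check.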


I'm curious how you're making the polling-for-results approach work right now. Is it conditional logic that depends on the result from the MCP tool, or do you let the LLM keep deciding what to do next?


I'm working on a unified MCP server that can search and use a large number of tools. The current way of using MCP servers (adding each one directly) simply doesn't scale: if your AI agent needs 100 tools, you have to manually configure a lot of servers, and when you feed all those tools to the LLM it can get confused and tool-calling accuracy starts to drop.

This is why I'm building a unified MCP server with just two meta tools:

- Search available tools
- Execute a tool

When I want to send an email, I ask the LLM to use the Search meta tool to find Gmail-related tools in the backend; the meta tool returns the descriptions of the relevant tools. The LLM then uses the Execute meta tool to actually call the Gmail tool it found. https://github.com/aipotheosis-labs/aci
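The search-then-execute flow above can be sketched as two functions over a tool registry. This is a toy illustration under my own assumptions (the registry contents, `search_tools`, and `execute_tool` are hypothetical names, not the ACI project's actual API):

```python
# Hypothetical registry; the real backend would index tools from many MCP servers.
TOOL_REGISTRY = {
    "gmail.send_email": {
        "description": "Send an email through Gmail",
        "fn": lambda to, subject, body: f"sent to {to}: {subject}",
    },
    "slack.post_message": {
        "description": "Post a message to a Slack channel",
        "fn": lambda channel, text: f"posted to {channel}",
    },
}

def search_tools(query: str) -> list:
    """Meta tool 1: return names and descriptions of tools matching the query."""
    q = query.lower()
    return [
        {"name": name, "description": meta["description"]}
        for name, meta in TOOL_REGISTRY.items()
        if q in name.lower() or q in meta["description"].lower()
    ]

def execute_tool(name: str, **kwargs):
    """Meta tool 2: look up a tool by name and invoke it with the given arguments."""
    if name not in TOOL_REGISTRY:
        raise KeyError(f"unknown tool: {name}")
    return TOOL_REGISTRY[name]["fn"](**kwargs)
```

The point of the design is that the LLM only ever sees these two tool schemas, so its context stays small no matter how many concrete tools sit behind the registry.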

