
The `stdio` approach for local services makes complete sense to me, including the use of JSON-RPC.
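
For the unfamiliar: over stdio the client just writes newline-delimited JSON-RPC messages to the server's stdin and reads responses from stdout. A `tools/list` request, for instance, is just:

```json
{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}
```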

But for remote HTTP MCP servers there should be a dead simple solution. A couple of years ago OpenAI launched plugins via `.well-known/ai-plugin.json`: the manifest contained a link to your API spec, ChatGPT could read it, and voila. All you needed to implement was this one endpoint and ChatGPT could use your whole API. It was pretty cool.
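
From memory, the manifest was just a small JSON file along these lines (trimmed, and the URLs are placeholders):

```json
{
  "schema_version": "v1",
  "name_for_human": "Example Plugin",
  "name_for_model": "example",
  "description_for_model": "Use this to look up and manage example.com invoices.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  }
}
```

The `description_for_model` field was effectively the prompt, and everything else came from the OpenAPI spec.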

ChatGPT Plugins failed, however. I'm confident it wasn't because of the tech stack; it was because the integration demand wasn't really there yet: companies were still in the early stages of building their own LLM stacks, and the ChatGPT desktop app didn't exist. It also wasn't marketed as a developer-first global integration solution: there was little to no consistent developer advocacy around it. It was marketed to consumers, and it was pretty unwieldy.

IMO a single-endpoint solution that adheres to existing paradigms is the simplest and most robust option. For MCP, I'd advocate that this is what the `mcp/` endpoint should become.
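
Purely hypothetically (none of these fields exist in the spec today), a single discovery document could be as small as:

```json
{
  "mcp_version": "1.0",
  "endpoint": "https://example.com/mcp",
  "description_for_model": "Tools for managing example.com invoices.",
  "openapi_url": "https://example.com/openapi.yaml"
}
```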

Edit: Also, tool calling in models circa 2023 was not nearly as good as it is now.



I agree. What OpenAI did was simple and beautiful.

Also, I think there is a fundamental misunderstanding that MCP services are plug-and-play. They are not. Function names and descriptions are literally prompts, so it is almost certain you will need to modify the names or descriptions to add some nuance to how you want them to be called. Since MCP servers are not really meant to be extensible in that way, the only other alternative is to add more context to the prompt, which is not easy unless you have a ton of experience. Most of our customers fail at prompting.
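
To make that concrete: with the official Python SDK, the docstring *is* the description the model sees, so tuning the prompt means editing server code. A minimal sketch (names are illustrative):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("billing")

@mcp.tool()
def search_invoices(query: str) -> str:
    """Search invoices by free-text query.

    Prefer this over get_account for questions about
    amounts, refunds, or receipts.
    """
    # Stub result; a real server would query the billing backend.
    return f"results for {query!r}"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default
```

Change one word of that docstring and you have changed the model's behavior, and there is no sanctioned place for the consumer of the server to do it.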

The reason I like the ai-plugin.json approach is that you don't have to change the API to make a function's description a little bit different. One day MCP might support this, but it will add another layer of complexity that could have been avoided with a remotely hosted JSON/YAML file.
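
Something as simple as a hosted overrides file would do it. Hypothetical format, not anything in the spec:

```json
{
  "tools": {
    "search_invoices": {
      "description": "Search invoices. Prefer this over get_account for questions about amounts, refunds, or receipts."
    }
  }
}
```

The client merges this over whatever the server advertises, and nobody has to redeploy the API to fix a prompt.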


The good thing to note is that (AFAIK) MCP is intended to be a collaborative, industry-wide effort, whereas plugins were OpenAI-specific.

So, hopefully, we can contribute and help direct the development! I think this dialogue is helpful and I'm hoping the creators respond via GitHub or otherwise.


It's not just about passing prompts. In production systems like Ramp's, the team had to build a custom ETL pipeline to process data from their endpoints, and host a separate database to serve structured transaction data into the LLM context window effectively.

We've seen similar pre-processing strategies in many efficient LLM-integrated APIs, whether it's GraphQL shaping data precisely, SQL transformations for LLM compatibility, or LLM-assisted data shaping like Exa does for search.

https://engineering.ramp.com/ramp-mcp
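
The shaping step itself is often mundane: pick the columns the model needs, drop the rest, and cap the row count so a big result set can't blow the context window. A hedged sketch (column names and limits are illustrative, not Ramp's actual pipeline):

```python
def shape_transactions(rows: list[dict], limit: int = 50) -> list[dict]:
    """Keep only model-relevant columns and cap the row count."""
    keep = ("date", "merchant", "amount", "category")
    shaped = [{k: row[k] for k in keep if k in row} for row in rows]
    return shaped[:limit]
```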

PS: When building agents, prompt and context management becomes a real bottleneck. You often need to juggle dynamic prompts, tool descriptions, and task-specific data, all without blowing the context window or inducing hallucinations. MCP servers help here by acting as a "plug-and-play" prompt loader, dynamically fetching task-relevant prompts or tool wrappers just in time. This leads to more efficient tool selection, less prompt bloat, and better overall reasoning in agent workflows.
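
The spec does have a prompts primitive for exactly this. A minimal sketch with the Python SDK (names are illustrative):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("workflows")

@mcp.prompt()
def triage_ticket(ticket_id: str) -> str:
    """A template the client fetches just-in-time for a triage task."""
    return (
        f"You are triaging support ticket {ticket_id}. "
        "Classify its severity, then choose exactly one tool to call next."
    )
```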



