Interesting thoughts regarding MCPs being the future App Store/Platform. I don't know that I agree but I don't necessarily disagree either. Time will certainly tell.
To me, MCP feels more like an implementation detail, not something that most people would ever use directly. I'd expect the future to be some app distributed through existing channels, which bundles the MCP client into it, then uses a server-side component (run by the vendor of course) to get the real work done. As much as I like the idea of people installing the servers locally, that future seems like a Linux nerd/self-hosted type of activity. I just can't imagine a typical Mac or Windows non-power-user installing one directly. Just the idea that they would need to install "two apps" is enough to confuse them immensely. It's possible some might bundle the server too and run it locally as needed, but even in that case I think MCP is completely invisible to the user.
Agree that for mainstream use it needs to be and will be hidden from the user entirely.
Will be much more like an app store where you can see a catalog of the "LLM Apps" and click to enable the "Gmail" plugin or "Shopping.com" plugin. The MCP protocol makes this easier and lets server authors write it once and appear in multiple clients (with some caveats, I'm sure).
MCP has a remote protocol. You don't need to install anything to add an MCP server, or rather, you won't once client support catches up to the spec. It will be a single click in whatever chat interface you use.
I'd expect "local MCP servers" will be generally installed as part of something else. Photoshop, or Outlook, or whatever could come with a local MCP server to allow chat clients to automate them. Maybe printer drivers or other hardware would do similar. I don't think there's much reason to install a cloud service MCP server to run locally; you'd just use the one provided in the cloud.
More like npm, not app store.
MCP servers will be run by the service providers, and you'll have the ability to "link" them, just like today you can link a Google account to give access to Calendar, GDrive, and so on. In the future you'll be able to give a model access to the Google MCP server for your account.
i wonder how granular the permissions will get though. giving model-level access to something like Gmail sounds powerful, but also like a privacy minefield if not done carefully. curious to see how trust and isolation get handled.
MCP is perhaps the biggest attack vector I've seen people willingly adopt simply for FOMO. Nothing about implementing it is defined or tractable. Even logging its use is extremely complicated.
> Think of MCPs as standardized APIs—connectors between external data sources or applications and large language models (LLMs) like ChatGPT or Claude.
This is incorrect.
MCP is Model Context Protocol.
You didn't "build an MCP", you implemented an MCP server. Lighttpd is not "an HTTP", it's an HTTP server. wget is also not "an HTTP", it's an HTTP client. Lighttpd and wget are different enough that it's useful to make that distinction clear when labeling them.
dnsmasq is not "a DHCP", it's a DHCP server.
This distinction also matters because it is certain that we will see further protocol iterations so we will indeed have multiple different MCPs that may or may not be compatible.
> You didn't "build an MCP"
The author explicitly states he built 2 MCP servers, not 2 MCPs, so I don’t know where your beef is coming from
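The client/server distinction is concrete at the protocol level: MCP is JSON-RPC 2.0 underneath, with the client sending requests like `tools/call` and the server dispatching them to tool implementations. A toy sketch of both halves (this omits initialization, capability negotiation, and transports, and the `add` tool is invented; it's illustrative, not the real SDK):

```python
import json

def make_tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 'tools/call' request, as an MCP *client* would."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Toy server-side tool registry -- a real server advertises these via tools/list.
TOOLS = {"add": lambda args: args["a"] + args["b"]}

def handle(raw: str) -> str:
    """Dispatch a request as an MCP *server* would, returning a JSON-RPC response."""
    req = json.loads(raw)
    result = TOOLS[req["params"]["name"]](req["params"]["arguments"])
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": str(result)}]},
    })

response = handle(make_tools_call(1, "add", {"a": 2, "b": 3}))
```

The thing people "build" is the `handle` side; the chat app they plug it into is the `make_tools_call` side.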
I’d just like to interject for a moment. What you’re referring to as Linux, is in fact, GNU/Linux, or as I’ve recently taken to calling it, GNU plus Linux. Linux is not an operating system unto itself, but rather another free component of a fully functioning GNU system made useful by the GNU corelibs, shell utilities and vital system components comprising a full OS as defined by POSIX.
Many computer users run a modified version of the GNU system every day, without realizing it. Through a peculiar turn of events, the version of GNU which is widely used today is often called Linux, and many of its users are not aware that it is basically the GNU system, developed by the GNU Project.
There really is a Linux, and these people are using it, but it is just a part of the system they use. Linux is the kernel: the program in the system that allocates the machine’s resources to the other programs that you run. The kernel is an essential part of an operating system, but useless by itself; it can only function in the context of a complete operating system. Linux is normally used in combination with the GNU operating system: the whole system is basically GNU with Linux added, or GNU/Linux. All the so-called Linux distributions are really distributions of GNU/Linux!
Honestly can't tell if this is very dry sarcasm or not
Hey look I found the individual willing to die on the "ATM Machine" / "NIC card" hill!
“The map is not the territory.”
MCP might be one of the few technology pieces where more articles have been written about it than the actual use-cases being built.
It is like the ERC20 era all over again.
This particular way of seeing MCP that the article describes came up a lot during the early voice assistant days - and I guess Amazon did kind of attempt an app store approach to giving Alexa more capabilities. In theory I like it! But in practice most people won't be using any one integration enough to buy it - like why go through the hoops to buy a "plane ticket purchasing app" when you do it maybe 4 times a year. I just don't see it playing out the way the author describes
Remember “push technology”?
I don't feel that way. Maybe the first examples have all been related to what software people do, but I think an MCP for a travel site would be a game changer.
There are so many things I want to tell a travel site that just don't fit into filters, so I end up spending more time searching all kinds of permutations.
These could be done with an MCP-augmented agent.
There is no saying that they will expose more functionality through the MCP API than their web site. I imagine the API will be more limited.
No, but let me be more specific.
For example, when I search for flights, there might be situational things (like, "can you please find me a flight that has at least a 2 hour layover at <X> airport because last time I had a hard time finding the new terminal", etc.).
Or an agent that will actually even READ that information from the airport website to see notices like "expect long delays in the next 3 months as we renovate Terminal 3"
Right?
The agent could have this information, and then actually look at the flight arrival/departure times and actually filter them through.
Other things like, "I can do a Tuesday if cheaper, or, look through my calendar to see if I have any important meetings that day and then decide if I can pick that day to save $400"
These are all things that synthesize multiple pieces of data to ultimately arrive at something as simple as a date filter.
that kind of synthesis is where current search interfaces fall short. the pieces exist in isolation like flight data, personal calendars, and airport notices, but nothing ties them together in a way that's actually useful. an agent using MCP could help connect those dots if the APIs are deep enough and the UX avoids feeling like a black box. the real challenge might not be the tech but getting providers to share enough useful data and trust whatever sits between them and the user.
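Once an agent has pulled those pieces through MCP, the "arrive at a date filter" step is plain code. A rough sketch of the calendar-plus-price synthesis described above (all dates, prices, and the threshold are made up):

```python
from datetime import date

# Invented data an agent might have fetched via MCP tools:
prices = {date(2025, 6, 3): 420, date(2025, 6, 4): 820, date(2025, 6, 5): 510}
important_meetings = {date(2025, 6, 5)}  # days the user can't fly
savings_threshold = 100                  # only switch days if it saves at least this

def pick_departure(preferred: date) -> date:
    """Pick the cheapest meeting-free date, if it beats the preferred date enough."""
    candidates = {d: p for d, p in prices.items() if d not in important_meetings}
    cheapest = min(candidates, key=candidates.get)
    if prices.get(preferred, float("inf")) - candidates[cheapest] >= savings_threshold:
        return cheapest
    return preferred

best = pick_departure(date(2025, 6, 4))  # June 3 saves $400 and is meeting-free
```

The interesting part isn't this logic; it's that the agent can assemble the inputs (prices, calendar, airport notices) on demand instead of the user doing it by hand.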
So, Yahoo! Pipes, but with magic and wishful thinking
People said similar things about smart contracts, yet here we are, with them being rather niche. I do agree that once the Alexas and Siris are LLM-powered with MCP (or similar) support, these kinds of use cases will become more valuable, and I do feel it will happen and gain widespread use eventually. I just wonder how much other software it will actually replace in reality vs how much of it is hype.
ERC20 stood the test of time and is ubiquitous today.
Who knows what MCP looks like in a decade?
It's very funny to see people talking about an extremely thin protocol like this.
It's a matter of organizing developer effort around a set of standards. Good architecture makes it easy to contribute to the ecosystem, and currently agentic tooling is the wild west, with little in terms of standardization. Now we are seeing more developer momentum around making our everyday tools accessible to agents.
Yeah, it's a good thing to be talking about pie in the sky ideas, most of which won't really work. The few good ideas that survive internet critics picking apart the smallest details could be interesting
My prediction: there will be no standard protocol, clients will do whatever works for them, and devs will do whatever it takes to be installable on those clients. Just like mobile.
>>MCP Affiliate Shopping Engines
As someone else once said, I want a Grocery Shopping Engine. "Here's my shopping list, taking into consideration delivery times and costs, please buy this for the lowest cost from any combination of supermarkets and deliver by day after tomorrow at the latest."
If MCPs gave the LLMs a window into all the major supermarkets home shopping sites that looks like it's a step closer.
> If MCPs gave the LLMs a window into all the major supermarkets home shopping sites that looks like it's a step closer.
And how exactly will they do that?
There are existing comparison services that keep track of prices and locations of grocery items. MCP is just the glue code.
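If MCP is just the glue, the optimisation itself is ordinary code. A hedged sketch of the grocery-engine idea, assuming per-store price feeds are already available (stores, prices, and fees below are invented; delivery is charged once per store actually used):

```python
from itertools import product

# Invented price feeds, as comparison-service MCP tools might return them:
prices = {
    "StoreA": {"lettuce": 1.20, "tomatoes": 2.50, "bread": 1.80},
    "StoreB": {"lettuce": 0.90, "tomatoes": 3.00, "bread": 1.50},
}
delivery_fee = {"StoreA": 3.00, "StoreB": 4.00}

def cheapest_split(shopping_list):
    """Brute-force the store assignment minimising item cost plus delivery fees."""
    best_cost, best_plan = float("inf"), None
    for combo in product(prices, repeat=len(shopping_list)):
        plan = dict(zip(shopping_list, combo))
        cost = sum(prices[store][item] for item, store in plan.items())
        cost += sum(delivery_fee[store] for store in set(plan.values()))
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_cost, best_plan

cost, plan = cheapest_split(["lettuce", "tomatoes", "bread"])
```

Here splitting the order never wins because the second delivery fee outweighs the per-item savings, so everything lands at StoreA. Brute force is fine for a shopping list; a real engine would also weigh delivery times and substitutions.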
Not OP but perhaps the following mcp tools: google maps api, nearest supermarkets, puppeteer their product listing pages?
Though to be honest not sure why you would need so much info - if I need lettuce or tomatoes for example, I know they're gonna be at essentially every supermarket in my area....
The Master Control Program has no future!
MCP in this context means "Model Context Protocol"
I thought it might be "managed cloud providers", but perhaps I'm too optimistic for a change
Or: Master Control Program
Are there any open-source MCP 'app stores' currently available? I am considering building one for my employer. A place to host internally built MCPs. It would be useful to know if there is something already out there.
I just use an OCI registry to host all my MCP modules (the way I chose to extend my MCP server's capabilities) - WASM plugins.
OCI registries are available everywhere and probably already present in your infrastructure. You get to use all the OCI tools/security controls you already have with it.
To add new tools, you just have to update the config file to include new plugins and restart.
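The config ends up being just a list of OCI references. A hypothetical sketch (field names invented, not hyper-mcp's actual schema; the registry host is a placeholder):

```yaml
# Hypothetical plugin manifest: each tool is a WASM module pulled from
# an OCI registry. Add an entry, restart the server, new tool appears.
plugins:
  - name: weather
    image: oci://registry.internal.example/mcp/weather:1.2.0
  - name: jira-readonly
    image: oci://registry.internal.example/mcp/jira:0.9.1
```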
https://github.com/tuananh/hyper-mcp
This is a registry I know of: https://smithery.ai but it's just a listing
But any self-hosted npm registry backend (e.g. the GitHub npm registry) should serve as a private MCP server registry?
seen a lot with uvx as well (apparently a package manager for python, but i try to stay as far away as possible from python)
If you're sold on MCP, what was your "wow" moment? I've read the docs and tinkered a bit but it was a decidedly "meh" experience personally. It seems very similar to ChatGPT Plugins, and that was a flop. I don't really like the fuzzy nature of the architecture, where I never know what server will be invoked. And I have to manually opt-in to each server I want to use? To be unexpectedly useful, it seems like I would have to opt-in to tens or hundreds of servers. Yet I've heard that clients start to struggle once you have more than 20 servers plugged in...? Please excuse any fundamental errors I've repeated here, if any...
The first use case I found relevant and useful was the Supabase MCP server, allowing Cursor's agent to query my Supabase project. It meant no longer describing my database to Cursor, it could simply go and get the information it needed, as needed.
My "wow" moment was when I wrote an internal MCP server so that Claude Code could access our test databases. It was a tiny amount of code, simple to connect up, and immediately gave Claude Code a way to directly validate queries. It's been useful in numerous scenarios since then and got me thinking about additional MCP-based tools it might be nice to have.
Writing an internal MCP server to link our API layer to Augment/VSCode so that our frontend developers can ask in plain language about API details. With over 1000 endpoints, it lets the devs find the endpoint, and more importantly the GQL fields, quickly. After some dogfooding we plan to open it up to our clients as well.
Wrote an MCP server to hook into my logging so I could get Claude + Cursor to quickly answer "hey, why did request 20394 from yesterday evening fuck up?". It goes into the logs and finds the exception, hunts down the line and then tells me what's up. Of course, left unchecked it tries to fix the problem too, but I've spent countless lines of prompt engineering to have it never attempt to "just start writing code".
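The core of such a tool is tiny. A sketch of the request-id lookup, with a made-up log format (real log schemas and the exception-locating logic would be more involved):

```python
import re

# Canned log excerpt standing in for whatever the logging backend returns:
LOG = """\
2025-04-02 21:14:03 INFO  req=20393 GET /cart 200
2025-04-02 21:14:05 ERROR req=20394 POST /checkout 500
2025-04-02 21:14:05 ERROR req=20394 KeyError: 'price' at billing.py:88
2025-04-02 21:14:06 INFO  req=20395 GET / 200
"""

def explain_request(request_id: int, log_text: str = LOG) -> list[str]:
    """Return every ERROR line tagged with the given request id."""
    pat = re.compile(rf"ERROR\s+req={request_id}\b")
    return [line for line in log_text.splitlines() if pat.search(line)]

errors = explain_request(20394)
```

The LLM's value-add is the step after this: reading the `KeyError` line, opening `billing.py:88`, and explaining what went wrong.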
For me it was implementing a simple `execute_terminal_command` tool along with hooking up to my company's Jira and GitLab (don't worry security gurus, for the command line, I have a hardcoded list of allowed read-only commands that the LLM can execute, and both the Jira and GitLab servers likewise have read-only options.)
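A minimal sketch of that kind of allowlist check (the command set is invented, and checking only the first token is deliberately naive - e.g. `git push` would slip through, so a real version should inspect subcommands too). Shell metacharacters are rejected outright to stop `ls; rm -rf /` style chaining:

```python
import shlex

READ_ONLY = {"ls", "cat", "git", "grep", "head", "tail"}  # invented allowlist
FORBIDDEN = set(";|&`$><")  # block shell chaining/redirection outright

def is_allowed(command: str) -> bool:
    """Permit only commands whose first token is on the read-only allowlist."""
    if any(ch in FORBIDDEN for ch in command):
        return False
    try:
        tokens = shlex.split(command)
    except ValueError:  # unbalanced quotes etc.
        return False
    return bool(tokens) and tokens[0] in READ_ONLY
```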
What I will say is I agree there should be an option to get rid of the chat confirmations of every single new tool call in a chat - as well as building a set of "profiles" of different tools depending on what I'm working on. Also strongly agree there needs to be a way to explicitly tell the LLM via an internal prompt what tool(s) to favor and how to use them (in addition to the descriptions / schemas of the tools themselves). I opened an issue on the Anthropic repo exactly about this: https://github.com/modelcontextprotocol/typescript-sdk/issue...
Open WebUI lets you do all that. You set up single-model agents and assign specific tools. You can also beef it up with system prompts.
On that note, the various agents libraries will let you create that same setup.
Yep. It is currently a meh experience, as said in the OP, because the UX sucks. The idea is to take a step back and imagine what it could become if those things are fixed.
Btw, one of my favorite MCPs is a Whois MCP so I can ask Claude Desktop to brainstorm domain names and then immediately check if they are available :).
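A real tool like that queries a whois server over TCP port 43; the sketch below shows only the response-parsing step, with canned responses, since registry formats vary by TLD (Verisign-style `.com` responses contain "No match for" when a domain is unregistered):

```python
def looks_available(whois_response: str) -> bool:
    """Heuristic for Verisign-style .com whois responses."""
    return "no match for" in whois_response.lower()

# Canned responses standing in for actual port-43 replies:
taken = "Domain Name: EXAMPLE.COM\nRegistrar: Example Registrar, Inc."
free = 'No match for domain "TOTALLYFREE-DOMAIN-XYZ.COM".'
```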
It’s clunky but I am still using it :)
MCP tries too hard to position itself as "XHR for AI", but nobody wants to earnestly deal with the consequences of AI interfacing in a wider context of mis-/disinformation, hallucination, and generally letting it talk to stuff in a semi-unprompted manner.
we’ve been building most of what OP has written about with https://mcp.run
We started doing this the day Anthropic released MCP in November last year. Our company has always been devoted to secure plug-in system technology, having built Extism, a WebAssembly plugin framework.
We immediately saw MCP as the plugin system for AI and knew it would be significant, but were concerned about the security implications of running MCP servers from untrusted parties and using the STDIO transport which makes user systems vulnerable in ways we weren’t ok with.
So we built mcp.run which is a secure implementation of the protocol, running servers in fully isolated & portable wasm modules. They must be allow-listed to access files & network hosts, and cannot access any part of your system without your explicit permission.
They also run everywhere. Each server (we call them servlets) on mcp.run is automatically available via SSE (soon HTTP streaming) as well as STDIO, but can also be embedded directly into your AI apps, no transport needed, and can run natively on mobile!
We are excited about MCP and glad so many are too - but we really need more security-oriented implementations before it’s too late and someone suffers a seriously bad exploit - which could tarnish the ecosystem for everyone.
Cool platform. I got some errors while exploring your website. Searching for tools to use works on mobile but not on desktop for some reason.
we just had a minor outage — sorry about that. It should be fully back online.