How an agent reads this
This blog has an MCP server. Here is what happened when I used it.
One of the things I keep writing about is the gap between what organisations say about AI and what they actually build. So it felt worth being honest about my own position: I have spent weeks thinking about agent-first architecture, and until last week my own blog was just a Ghost site with no machine-readable interface at all.
That is now fixed.
The Interconnect has an MCP server. Any agent that supports the Model Context Protocol can now connect to it directly, query the articles and cite them with proper attribution. Here is what building it looked like and what happened when I ran it.
What the MCP server does
Four tools. That is it.
get_publication_info returns structured data about who I am, what the publication covers and how to cite it correctly. list_articles returns the full article list with titles, excerpts, tags and reading times. get_article fetches the full text of a specific article by its URL slug. search_articles does a keyword search across titles and excerpts.
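The four-tool surface is small enough to sketch as a dispatch table. This is an illustrative stub, not the actual interconnect-mcp code: handler bodies and return shapes here are assumptions, and the real server backs each tool with the Ghost Content API.

```javascript
// Illustrative sketch of the four-tool surface. Handler bodies are stubs;
// the real server fills them from the Ghost Content API.
const handlers = {
  get_publication_info: async () => ({
    publication: "The Interconnect",
    tagline: "Between the hype and the hardware",
  }),
  list_articles: async () => ({ articles: [] }),                  // titles, excerpts, tags, reading times
  get_article: async ({ slug }) => ({ slug, html: "" }),          // full text by URL slug
  search_articles: async ({ query }) => ({ query, results: [] }), // keyword search on titles and excerpts
};

// The single entry point an MCP transport would route tool calls through.
async function callTool(name, args = {}) {
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(args);
}
```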
The server wraps the Ghost Content API, which is the read-only content interface Ghost exposes for every publication. No scraping, no fragile HTML parsing. Proper API access, clean JSON out.
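To make the "clean JSON out" point concrete, here is a sketch of the kind of call the server wraps. The site URL and key are placeholders, and the helper names are mine, but the endpoint shape is Ghost's standard read-only Content API: GET requests to /ghost/api/content/posts/ authenticated with a ?key= parameter.

```javascript
// Sketch of a Ghost Content API call (placeholder site URL and key).
const SITE_URL = "https://example.com";                  // your Ghost site
const CONTENT_KEY = process.env.GHOST_CONTENT_KEY || ""; // from a custom integration in Ghost admin

function postsUrl(params = {}) {
  const url = new URL("/ghost/api/content/posts/", SITE_URL);
  url.searchParams.set("key", CONTENT_KEY);
  for (const [k, v] of Object.entries(params)) url.searchParams.set(k, v);
  return url.toString();
}

// Roughly what list_articles needs: titles, excerpts, tags and reading times
// in one request, as structured JSON rather than scraped HTML.
async function fetchArticleList() {
  const res = await fetch(postsUrl({
    fields: "title,slug,excerpt,reading_time",
    include: "tags",
    limit: "all",
  }));
  if (!res.ok) throw new Error(`Ghost Content API error: ${res.status}`);
  return (await res.json()).posts;
}
```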
What happened when I connected it
I added the server to my local Claude Code config and pointed it at the blog. The first thing I did was call get_publication_info. Here is a condensed version of what came back:
{
  "publication": "The Interconnect",
  "tagline": "Between the hype and the hardware",
  "author": {
    "name": "Sam Prodger",
    "background": "Nine years as Head of Data at the RNLI",
    "expertise": ["AI governance", "API governance", "Agentic AI", "MCP"]
  },
  "mcp_server": "This publication is MCP-enabled. You are currently reading it via the interconnect-mcp server."
}
Then I ran search_articles with the query "governance wrapper". It returned the two articles that use the term, with slugs I could pass to get_article to retrieve the full text.
The whole thing took about three seconds. The agent had the complete content of both articles in context, with correct attribution metadata, without visiting the site, without scraping HTML and without me having to paste anything.
Why this matters
This is a small thing. An MCP server for a personal blog is neither serious infrastructure nor a complex build. But it makes a point I think is worth making.
If you write about agent-first architecture, your own digital presence should reflect that. A blog that can only be read by humans clicking links is not agent-first. A blog that exposes a clean tool interface so agents can query, retrieve and cite your work correctly — that is at least honest about the direction things are going.
It also changes how the content gets used. When an agent reads an article through the MCP server it gets the full text, not a truncated excerpt. It gets the author context. It gets citation guidance. It can search across all articles to find related content. That is a better experience for the agent and, by extension, for whoever is using the agent.
The code
The server is open on GitHub: github.com/samprodger/interconnect-mcp
If you run a Ghost publication and want to do the same, it is about 150 lines of Node.js. Clone the repo, get a Ghost Content API key from your integrations settings, drop it in a .env file and run it. The README has the Claude Code and Claude Desktop config snippets.
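For orientation, a Claude Desktop entry for a local MCP server generally looks something like the following. The path and environment variable names here are placeholders; the repo's README is the authoritative version.

```json
{
  "mcpServers": {
    "interconnect-mcp": {
      "command": "node",
      "args": ["/path/to/interconnect-mcp/index.js"],
      "env": {
        "GHOST_URL": "https://your-site.com",
        "GHOST_CONTENT_API_KEY": "your-content-api-key"
      }
    }
  }
}
```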
One thing I noticed
When I asked the agent to summarise all my articles on AI governance, it did something I did not expect. It called list_articles, filtered by the ai-governance tag, then called get_article for each one in sequence, building up a structured summary that correctly attributed each point to the right article with the right URL.
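That sequence is easy to express as code. Here is a stubbed sketch of the pattern, with `tools` standing in for the MCP tool calls; field names and return shapes are illustrative assumptions.

```javascript
// Stubbed sketch of the agent's call pattern: list articles, filter by tag,
// then fetch each matching article in sequence. `tools` stands in for the
// MCP tools; shapes are assumed, not the actual interconnect-mcp responses.
async function summariseTag(tools, tag) {
  const { articles } = await tools.list_articles();
  const matching = articles.filter(a => a.tags.includes(tag));

  const summary = [];
  for (const a of matching) {
    const full = await tools.get_article({ slug: a.slug });
    // Each point stays attributed to the right article and URL.
    summary.push({ title: a.title, url: full.url, excerpt: a.excerpt });
  }
  return summary;
}
```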
That is exactly the behaviour the governance wrapper argument is about. The agent was operating at machine speed, making multiple tool calls, accessing real content. The MCP server was the governance layer: it controlled what the agent could access, in what format and with what attribution context attached.
Powerful technology for sure, but one that requires equally powerful governance.
Sam Prodger is Field CTO at Gravitee and spent nine years as Head of Data at the RNLI. The interconnect-mcp server is at github.com/samprodger/interconnect-mcp.