In my previous post, I introduced the Model Context Protocol (MCP) and explained how it enables AI agents to interact with external systems through a standardized interface. Today, I want to share a practical example that demonstrates why MCP matters – and how it solved a real problem I was facing with AI-assisted content management.
Each of my blog posts gets tags and categories associated with it. These help readers find related content and improve discoverability. As my blog grew, I wanted to automate this process using Copilot to select appropriate tags. What seemed like a straightforward task turned into an unexpected lesson about how AI agents process information.
First attempt: Learning from similar posts
My initial approach was simple: let the AI evaluate a new post against existing articles with similar tags. The problem? With hundreds of posts in my blog, this created significant “noise” in the context. AI models have finite context windows – a limited amount of information they can consider at once. More irrelevant content means less focus on the details that actually matter. Even as models grow to support more context, this fundamental challenge remains.
Second attempt: Custom instructions with metadata
To address the noise problem, I added custom instructions that explained my tag and category metadata structure. Each tag has a name and a description stored in a dedicated folder. Now Copilot had guidance on what each tag means and how to apply it.
This helped, but a new problem emerged. Despite having explicit instructions, the AI still frequently explored other folders and sampled files to “learn” about available tags. This sampling is non-deterministic – different runs can examine different files, leading to inconsistent results. For example, the AI sometimes suggested tags that didn’t exist, or picked inappropriate tags simply because they appeared frequently in the sampled files.
Third attempt: Task-specific prompts
Task-specific prompts provide a more efficient approach, pulling in relevant guidance only when it’s needed. This made each request leaner, but the core problem persisted. The AI still wasn’t relying solely on the metadata I provided. It continued sampling the file system, so I still saw incorrect tags being applied.
Understanding the root cause
The issue wasn’t that the AI was ignoring my instructions. It was trying to follow them while wading through all the additional content in the context. When an AI agent gathers its own context, there’s no guarantee about what it retrieves or keeps. As data volume increases, the model relies more heavily on samples and subsets, making judgment calls about what seems most relevant. More context means more noise and less predictable outcomes.
I needed to change my approach. Instead of hoping the AI would find the right information, I needed to give it exactly what it needed – nothing more, nothing less.
The MCP solution
I needed deterministic behavior: Copilot should know exactly what tags and categories exist, understand their purpose, and know which ones are currently applied to an article. The Model Context Protocol (MCP) provides exactly that mechanism – it lets the AI call specific tools that return the precise data it needs. No sampling, no exploration, no guessing.
Why tools over resources?
MCP offers two main ways to provide information: resources and tools. Resources are added to context by the user – you explicitly select what the AI should see. They’re a better fit for interactive workflows where a person picks the dataset to hand to the AI, which wouldn’t work here: I wanted tagging to be part of an automated review workflow.
Tools, on the other hand, can be automatically invoked by the AI when needed. By exposing tag and category information through tools, the AI retrieves the data it requires at the moment it needs it.
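To make that concrete, here’s roughly what crosses the wire when the model decides it needs the tag list (list_tags is one of the tools described in the next section). This is a hand-written, representative exchange, not a capture from my setup:

```bash
# The client writes a JSON-RPC request to the server's stdin when the model
# invokes a tool (representative message, not captured traffic):
printf '%s\n' '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"list_tags","arguments":{}}}'
# The server replies on stdout; MCP tool results carry their payload in a
# content array, e.g.:
# {"jsonrpc":"2.0","id":3,"result":{"content":[{"type":"text","text":"{\"tags\":[...]}"}]}}
```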
Building a Bash MCP server
I asked Copilot to implement a simple MCP server in Bash. MCP servers communicate over standard input/output using JSON-RPC, so any language that can read stdin and write stdout works fine.
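Here’s a simplified sketch of the core read loop, trimmed for the post – assume jq is installed, and let tools.json and list_tags.sh stand in for the full implementations:

```bash
#!/usr/bin/env bash
# Simplified MCP stdio server loop: one newline-delimited JSON-RPC message
# per line. tools.json and list_tags.sh are stand-ins sketched elsewhere
# in this post; error handling is trimmed for brevity.

reply() {  # $1 = request id (JSON value), $2 = result (JSON object)
  jq -cn --argjson id "$1" --argjson result "$2" \
    '{jsonrpc: "2.0", id: $id, result: $result}'
}

while IFS= read -r line; do
  method=$(jq -r '.method // empty' <<<"$line")
  id=$(jq -c '.id // empty' <<<"$line")

  case "$method" in
    initialize)
      reply "$id" '{"protocolVersion":"2024-11-05","capabilities":{"tools":{}},"serverInfo":{"name":"blog-taxonomy","version":"0.1"}}'
      ;;
    tools/list)
      reply "$id" "$(cat tools.json)"   # {"tools":[...]} – names, descriptions, input schemas
      ;;
    tools/call)
      name=$(jq -r '.params.name' <<<"$line")
      case "$name" in
        list_tags) reply "$id" "$(./list_tags.sh)" ;;  # sketched further below
        *)         reply "$id" '{"content":[{"type":"text","text":"unknown tool"}],"isError":true}' ;;
      esac
      ;;
    *) : ;;  # notifications carry no id and need no reply
  esac
done
```

The interesting part isn’t the plumbing – it’s that every branch returns a fixed, structured payload.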
The server implements four tools:
- list_tags – Returns all available tags with their names and descriptions
- list_categories – Returns all available categories with their names and descriptions
- get_post_taxonomy – Retrieves the tags and categories currently applied to a specific article
- modify_post_taxonomy – Updates an article’s tags or categories
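Each tool ships a description and a JSON Schema for its input, which is how the AI knows what a call expects. Here’s roughly how the modify_post_taxonomy declaration could look in tools.json – the field names (name, description, inputSchema) follow MCP’s tool schema, while the parameter names here are simplified stand-ins:

```bash
# Illustrative tools.json entry, returned verbatim from tools/list.
# inputSchema is standard JSON Schema; parameter names are simplified.
cat > tools.json <<'EOF'
{
  "tools": [
    {
      "name": "modify_post_taxonomy",
      "description": "Update the tags or categories applied to a blog post.",
      "inputSchema": {
        "type": "object",
        "properties": {
          "post":       { "type": "string", "description": "Path to the article" },
          "tags":       { "type": "array", "items": { "type": "string" } },
          "categories": { "type": "array", "items": { "type": "string" } }
        },
        "required": ["post"]
      }
    }
  ]
}
EOF
```

Because the schema spells out the expected shape, the model doesn’t have to guess at file formats when it updates an article.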
Here’s a simplified example of how the list_tags response looks:
1 "tags": [
2 {
3 "tag": "#AI",
4 "title": "AI and Machine Learning",
5 "description": "Insights and resources on AI and machine learning technologies, including LLMs and MCP."
6 },
7 {
8 "tag": "DevOps",
9 "title": "DevOps",
10 "description": "Articles related to general development practices, operations, deployments, and CI/CD."
11 }]Every time the AI calls list_tags, it gets the exact same structured data. Because the AI calls the tool for specific information it needs, it doesn’t have to sample files and evaluate their contents.
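And since each tag already lives as metadata in a dedicated folder (see the second attempt above), a response like this can be assembled fresh on every call. Sketching list_tags.sh under the assumption of one JSON file per tag:

```bash
#!/usr/bin/env bash
# Sketch of list_tags.sh: slurp every tag definition (tags/<name>.json, each
# holding tag/title/description fields) and wrap the combined list in MCP's
# content envelope. The one-file-per-tag layout is assumed for illustration.
jq -s '{content: [{type: "text", text: ({tags: .} | tojson)}]}' tags/*.json
```

The output is a pure function of the metadata files: same inputs, same response, every call.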
The results
With MCP tools providing context, several things improved:
- Deterministic context – The AI receives the exact same list of tags and categories every time. No variation based on which files happened to be sampled.
- Focused data – The AI only sees what it needs for a request, such as the specific article content, its current tags, and the available tags. Again, no extra noise from gathering additional context.
- Correct updates – The AI knows exactly how to modify tag values because the tool interface defines the expected format. No more guessing about file structures or YAML syntax.
- Better outcomes – Tag suggestions now reliably match existing categories and align with their documented purposes. The AI even provides reasoning for its choices, sometimes arguing for new tags or against applying existing ones.
- Improved performance – Fewer tokens in the context means faster responses and, if you’re paying per token, lower costs.
Lessons learned
This experience taught me something valuable about working with AI agents. When you ask an AI to gather its own context, you’re introducing variability. The agent will do its best, but “best” can mean different things on different runs.
MCP tools make context deterministic. Instead of hoping the AI finds the right information, you define exactly what’s available and how to access it. The AI still makes decisions, but based on consistent, reliable inputs. The AI can also make more surgical calls for updating local files, reducing the chance of errors.
This pattern applies beyond blog tags. Any time you have structured data or processes that an AI needs to understand – configuration options, API endpoints, user preferences, available resources – consider whether an MCP tool might provide more reliable results than letting the AI explore and discover. Sometimes providing deterministic, structured data gives the AI exactly what it needs to succeed.
