What are Agent Skills and why do they matter?
Agent Skills were released by Anthropic in October 2025 and have since attracted significant attention. Companies and individual developers are using them to improve how they work with Large Language Models (LLMs) and agents, moving beyond generic prompting toward more structured, task-specific interactions.
At Umbraco, we’ve already introduced a set of Agent Skills to support backoffice extension development. Throughout the year, we’ll expand this collection with skills focused on our approach to content modelling and practical Umbraco implementation - codifying our expertise in a way that AI can apply consistently.
In this post, I’ll explore what Agent Skills are, how they work, and why they matter. I’ll also compare them with the Model Context Protocol (MCP), another approach to providing LLMs with structured context and control, and discuss where each fits in a modern AI-enabled architecture.
Everything through the lens of context engineering
If you want better results from Large Language Models, you need to think in terms of context engineering.
LLMs don’t just respond to prompts; they respond to the total context they are given. When that context includes irrelevant or overly broad information, the model is more likely to become confused, misinterpret intent, or hallucinate details that weren’t required for the task.
In the early days of tools like ChatGPT, this was relatively straightforward. Each conversation started fresh. If you needed persistent guidance, you added it through a system prompt.
Over time, system prompts became the default mechanism for storing reusable knowledge. They often grew into large, centralized blocks of instructions — best practices, testing requirements, architectural rules — regardless of whether all of that information was relevant to the current task.
The alternative is manual context engineering: breaking up that monolithic CLAUDE.md or system prompt and deliberately deciding which pieces of context to include for each session. Instead of giving the model everything, you curate what it needs for the task at hand.
It works: for a long time, this kind of manual curation was the most dependable way to get the highest-quality results. But it demands ongoing effort, introduces friction, and becomes increasingly difficult to maintain at scale.
Automating Context Engineering
Manual discipline doesn’t scale. Context engineering needed an automated solution — and that solution has two parts.
Progressive discovery
Imagine walking into a bookshop. You scan the titles on the shelves, pick up a book that looks relevant, read the cover, skim the table of contents, and only then dive into the chapter that matches what you want to learn.
Agent Skills work the same way for an LLM. Each session starts by loading only each skill’s brief name and description into the context. If the current task matches a skill, the model loads more of that skill’s content, and continues drilling down until it has the level of detail it needs. Just like working through a book.
This approach is highly efficient. Hundreds of skills can exist in parallel without overwhelming the context window because only lightweight summaries are loaded initially. The deeper detail is pulled in only when relevant.
The result is focused context, reduced confusion, and more reliable and higher quality outcomes.
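The mechanics above can be sketched in a few lines of Python. The data structures and function names here are purely illustrative (they are not Anthropic’s implementation); the point is the two tiers of detail: a cheap summary that is always in context, and a full body that is loaded only on demand.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str   # always loaded: a one-line summary
    body: str          # loaded only on demand: the full instructions

def initial_context(skills: list[Skill]) -> str:
    """Only lightweight summaries enter the context window at session start."""
    return "\n".join(f"- {s.name}: {s.description}" for s in skills)

def load_skill(skills: list[Skill], name: str) -> str:
    """The full body is pulled in only once the task matches the skill."""
    return next(s.body for s in skills if s.name == name)

skills = [
    Skill("pdf-forms", "Fill in PDF forms programmatically", "...full guidance..."),
    Skill("umbraco-backoffice", "Build Umbraco backoffice extensions", "...full guidance..."),
]

print(initial_context(skills))          # a few dozen tokens per skill
print(load_skill(skills, "pdf-forms"))  # detail enters context only when relevant
```

Hundreds of skills can coexist this way because only the summary tier scales with the number of skills; the body tier scales with the task.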
Active context management
Crucially, this process is automated.
The LLM determines which skills are relevant, when to load additional information, and how much context is required. Instead of relying on humans to curate this manually, the model dynamically manages its own working knowledge for the task at hand.
That’s what Agent Skills fundamentally are: modular, task-oriented units of context that are loaded only when needed. They allow LLMs to operate with specificity rather than generality — and that shift is what drives better results.
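To make the selection step concrete, here is a deliberately crude sketch of automated context assembly. In reality the model itself judges which skills are relevant; the naive keyword match below is only a stand-in for that judgment, and the skill names and bodies are hypothetical.

```python
# Summary tier: always in context. Body tier: added only for matching skills.
SKILLS = {
    "pdf-forms": ("Fill in PDF forms programmatically", "...full PDF guidance..."),
    "umbraco-backoffice": ("Build Umbraco backoffice extensions", "...full backoffice guidance..."),
}

def build_context(task: str) -> str:
    # Every skill's one-line summary is cheap enough to include permanently.
    context = [f"- {name}: {summary}" for name, (summary, _) in SKILLS.items()]
    # A naive relevance check stands in for the model's own decision-making.
    for name, (_, body) in SKILLS.items():
        if any(word in task.lower() for word in name.split("-")):
            context.append(body)
    return "\n".join(context)

print(build_context("help me extend the Umbraco backoffice"))
```

The important property is that no human decided what to include for this task: the selection happens at runtime, per request.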
What do Agent Skills look like?
One of the strengths of Agent Skills is their simplicity.
At the heart of every skill is a SKILL.md file.
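A SKILL.md pairs a short YAML frontmatter block (the name and description that are always loaded) with a Markdown body that is read only when the skill is triggered. The skill below is a hypothetical illustration, not one of the actual Umbraco skills:

```markdown
---
name: umbraco-content-modelling
description: Guidance for modelling content types in Umbraco following our conventions
---

# Umbraco content modelling

When creating or reviewing document types:

1. Prefer compositions over deep inheritance chains.
2. Keep property editor choices consistent across similar document types.
3. Name properties from the editor's perspective, not the developer's.
```

Only the two frontmatter lines cost context at session start; the body, and any further files the skill references, are loaded progressively as needed.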