
The Agent-Ready CMS: What are Agent Skills?

And why they matter for the future of your Umbraco project

Written by Phil Whittaker

At Umbraco, we believe your CMS should meet your team where they work. By offering AI-augmented tools, we’re turning manual, repetitive tasks into intelligent actions. As AI moves beyond static assistants towards autonomous agents, we are positioning Umbraco within this emerging ecosystem. Following the release of Umbraco MCP, we are introducing Agent Skills: a structured pattern for shaping AI agent behaviour to help you build projects. In this post, we dive into what these skills are and how they provide the context AI needs to help you build your next Umbraco project. Most importantly, we’ll get you up to speed with these new standards so your agency can build smarter with the latest in agentic AI.

We’ve observed a fundamental shift: the project lifecycle no longer begins with a local installation; it begins within the context of an agent (like Claude or Cursor). Umbraco is among the first to establish a dedicated AI track focused on Model Context Protocol (MCP) and Agent Skills.

Our goal is simple: When your team opens an AI tool, Umbraco is already there, ready to guide, support, and empower you.

The Agent-Ready CMS - a 4-part series

This post is the first in a series exploring the external AI tools we’re building at Umbraco HQ. Over the coming weeks, we’ll look at how these tools can transform your workflow, provide more value to your clients, and give your agency a competitive edge.

In this post, we define Agent Skills, why they are the key to better AI context, and how they differ from and complement Model Context Protocol.

Coming up in this series:

  • The Backoffice Extension Skill: For rapid development and live prototyping. (ETA: mid-March)

  • Skills for Content Modelling and Implementation (ETA: mid-April)

  • Enable the Ecosystem: Our base MCP implementation for partners and package developers. (ETA: end-April)

But first, let’s dive into the world of Agent Skills:

What Are Agent Skills and Why Do They Matter?

Agent Skills were released by Anthropic in October 2025 and have since attracted significant attention. Companies and individual developers are using them to improve how they work with Large Language Models (LLMs) and agents, moving beyond generic prompting toward more structured, task-specific interactions.

At Umbraco, we’ve already introduced a set of Agent Skills to support backoffice extension development. Throughout the year, we’ll expand this collection with skills focused on our approach to content modelling and practical Umbraco implementation - codifying our expertise in a way that AI can apply consistently.

In this post, I’ll explore what Agent Skills are, how they work, and why they matter. I’ll also compare them with the Model Context Protocol (MCP), another approach to providing LLMs with structured context and control, and discuss where each fits in a modern AI-enabled architecture.

Everything through the lens of context engineering

If you want better results from Large Language Models, you need to think in terms of context engineering.

LLMs don’t just respond to prompts, they respond to the total context they are given. When that context includes irrelevant or overly broad information, the model is more likely to become confused, misinterpret intent, or even hallucinate details that weren’t required for the task.

In the early days of tools like ChatGPT, this was relatively straightforward. Each conversation started fresh. If you needed persistent guidance, you added it through a system prompt.

Over time, system prompts became the default mechanism for storing reusable knowledge. They often grew into large, centralized blocks of instructions — best practices, testing requirements, architectural rules — regardless of whether all of that information was relevant to the current task.

The alternative is manual context engineering: breaking up that monolithic CLAUDE.md or system prompt and deliberately deciding which pieces of context to include for each session. Instead of giving the model everything, you curate what it needs for the task at hand.

This approach works. It has been the most dependable way to achieve high-quality results. But it demands ongoing effort, introduces friction, and becomes increasingly difficult to maintain at scale.

Automating Context Engineering

Manual discipline doesn’t scale. Context engineering needed an automated solution — and that solution has two parts.

Progressive discovery

Imagine walking into a bookshop. You scan the titles on the shelves, pick up a book that looks relevant, read the cover, skim the table of contents, and only then dive into the chapter that matches what you want to learn.

Agent Skills work the same way for LLMs. Each session starts by loading just a brief name and description for each skill into the context. If the current task matches a skill, the model loads more information from it, and continues to do so until it has the level of detail it requires. Just like a book.

This approach is highly efficient. Hundreds of skills can exist in parallel without overwhelming the context window because only lightweight summaries are loaded initially. The deeper detail is pulled in only when relevant.

The result is focused context, reduced confusion, and more reliable and higher quality outcomes.

Active context management

Crucially, this process is automated.

The LLM determines which skills are relevant, when to load additional information, and how much context is required. Instead of relying on humans to curate this manually, the model dynamically manages its own working knowledge for the task at hand.

That’s what Agent Skills fundamentally are: modular, task-oriented units of context that are loaded only when needed. They allow LLMs to operate with specificity rather than generality — and that shift is what drives better results.
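The two mechanisms above can be sketched in a few lines of Python. This is an illustrative model only, not Umbraco’s or Anthropic’s actual implementation: the skill names and helper functions are hypothetical, and the point is simply that cheap summaries are loaded up front while full skill bodies are pulled in on demand.

```python
# Illustrative sketch of progressive discovery and active context management.
# Skill names and bodies here are hypothetical stand-ins.

# Lightweight summaries, as an agent might read them from each
# SKILL.md's frontmatter at the start of a session.
SKILL_SUMMARIES = {
    "building-backoffice-extensions": "Guides backoffice extension development.",
    "modelling-content": "Guides document type and content modelling decisions.",
}

def load_skill_body(name):
    """Stand-in for reading the full SKILL.md body (and its referenced files) from disk."""
    return f"(detailed instructions for {name})"

def build_context(task):
    """Start with summaries only; expand matching skills to full detail."""
    context = dict(SKILL_SUMMARIES)  # cheap: names + descriptions only
    for name in SKILL_SUMMARIES:
        # Naive keyword match standing in for the model's own relevance judgement.
        if any(word in task.lower() for word in name.split("-")):
            context[name] = load_skill_body(name)  # loaded only when relevant
    return context

ctx = build_context("add a dashboard to the backoffice")
print(ctx["building-backoffice-extensions"])
# The modelling-content skill stays as a one-line summary; it wasn't needed.
```

In a real agent, the relevance decision is made by the model itself rather than by keyword matching, but the shape is the same: hundreds of skills can coexist because only the matched ones are ever expanded.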

What do Agent Skills look like?

One of the strengths of Agent Skills is their simplicity.

At the heart of every skill is a SKILL.md file.

An infographic titled “SKILL.md Structure” showing a Markdown template with YAML frontmatter containing name and description fields, followed by a section for instructions. Three labelled boxes explain the formatting requirements: the name must be a gerund under 64 characters, the description must be in the third person under 1024 characters, and the instructions should provide step-by-step guidance.

This file begins with a small block of structured metadata containing the Agent Skill’s name and description. This is the part that is loaded into the model’s context at the start of a session, giving the Agent a clear summary of what each skill does before any detailed context is retrieved.

Just as with the book in the bookshop, the Agent Skill’s name functions like the title on the spine, and its description like the summary on the back cover. Before opening the book, you already know what it’s about — and whether it’s relevant.

Below the Agent Skill’s name and description sits the main markdown content. It can hold the core knowledge itself or serve as an index to more detailed files — the body of the book or its table of contents.

Each Agent Skill sits within its own directory, which simply acts as a container.

An infographic explaining that SKILL.md acts as a table of contents to point to detailed files. It features a file directory diagram for a pdf/ folder containing SKILL.md (Overview), FORMS.md (Loaded when needed), and reference.md. A warning at the bottom advises keeping references one level deep because the AI may only preview nested files.

Alongside the SKILL.md, the directory can include additional referenced files that provide more detailed or specialised information - like individual chapters. When needed, the model can load one or more of these files, expanding its context from within the same skill.
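To make this concrete, here is a minimal sketch of what a SKILL.md might look like, following the structure described above (gerund name under 64 characters, third-person description under 1024 characters, then instructions). The skill itself, and the files it references, are hypothetical examples:

```markdown
---
name: generating-sales-reports
description: Generates consistent sales reports from raw order data. Use when a task involves summarising or reporting on sales figures.
---

# Generating sales reports

1. Read FORMAT.md for the required report layout.
2. Run scripts/analyze.py against the order data to obtain exact totals.
3. Present the results using the layout from FORMAT.md.
```

Here the body acts as a table of contents: FORMAT.md and scripts/analyze.py live alongside SKILL.md in the skill’s directory and are only loaded or executed when the task calls for them.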

Deterministic Scripts - Accuracy Rather than Guesstimates

There are certain tasks where variability is a liability.

LLMs are probabilistic by design. The same prompt can produce slightly different outputs each time. That flexibility is powerful for reasoning and language generation, but it becomes a problem when precision and consistency are required.

Reporting is a good example. Asking an LLM to calculate the latest sales figures directly would be unreliable. The numbers could vary depending on interpretation, rounding, or context.

For these scenarios, Agent Skills can include small, sandboxed scripts written in any programming language. These scripts execute deterministically and return concrete, verifiable data.

Instead of estimating or inferring, the model calls the script and receives exact results - which it can then use to generate accurate reports or structured output.
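As a hedged illustration of the sales-reporting example, here is what a small bundled script might look like. The file name echoes the analyze.py shown in the diagram below, but the script itself is a hypothetical sketch, not part of any shipped skill:

```python
# Hypothetical analyze.py, bundled in a skill's scripts/ folder.
# The agent runs this instead of "calculating" totals itself,
# so the numbers are exact and reproducible on every run.
import csv
import io

def total_sales(csv_text):
    """Sum the 'amount' column deterministically, accumulating in integer cents
    to avoid floating-point drift."""
    reader = csv.DictReader(io.StringIO(csv_text))
    cents = sum(round(float(row["amount"]) * 100) for row in reader)
    return cents / 100

orders = "amount\n19.99\n5.01\n100.00\n"
print(total_sales(orders))  # always 125.0, never a model estimate
```

The model then weaves the returned figure into its report, combining probabilistic language generation with deterministic arithmetic.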

A technical diagram illustrating a file structure where SKILL.md serves as a central table of contents for more detailed files. The directory shows a pdf/ folder containing Markdown files for overview and forms, along with a scripts/ sub-folder containing analyze.py for executable code. A footer note emphasizes keeping references one level deep to ensure the AI can properly preview nested content.

Where Can You Use Agent Skills?

In December last year, Anthropic donated Agent Skills to the Agent AI initiative under the Linux Foundation. Since then, adoption has accelerated rapidly.

In a relatively short space of time, many organisations have begun supporting Agent Skills within their own tools and platforms. What started as a vendor-specific capability has quickly evolved into a broader industry pattern.

This pace of adoption signals something important: structured, modular context is becoming a standard expectation in AI tooling — not an experimental feature.

What About Model Context Protocol?

When Agent Skills were first introduced, there was speculation that they might replace MCP tools - particularly because early MCP implementations were token-heavy. Each tool was fully loaded into context at the start of a session, which raised efficiency concerns.

Since January 2026, that has become less of an issue. Platforms such as Claude Code have introduced active context management and progressive discovery for MCP tools as well. As a result, the technical gap between Skills and MCP has narrowed, and the decision of when to use each is no longer obvious.

At Umbraco, we see a clear distinction.

Skills are lightweight and easy to evolve. They are flexible and relatively unstructured, designed to provide focused knowledge when a task requires it.

MCP, by contrast, is structured and durable. Built from an SDK and capable of being hosted, it provides shared logic and operational stability.

If Skills provide knowledge, MCP provides capability. Together, they allow an LLM not only to understand what to do, but to execute reliably within a defined architecture.

A conceptual diagram featuring an "LLM" center point where two distinct areas overlap: a "Structured Toolbox" (MCP) on the left and an "Adaptive Information Cloud" (Agent Skills) on the right. The toolbox contains icons for a gear, ruler, and wrench, while the cloud contains icons for a magnifying glass, book, and lightbulb. The graphic uses a minimalist blue and light pink color scheme.

The Future of CMS Isn’t Just Human-Friendly

Agent Skills represent a shift in how we think about working with LLMs. Instead of relying on ever-expanding prompts or manual discipline, they introduce a structured, modular approach to context — one that scales.

Through progressive discovery and active context management, Agent Skills allow models to operate with focus rather than overload. Paired with deterministic scripts and complementary infrastructure like MCP, they form part of a broader architectural pattern: separating knowledge from capability, and designing systems where both can evolve independently.

At Umbraco, we see this as more than a technical feature. It’s an evolution in how we think about AI and the platform: not only adding AI into Umbraco, but also embedding Umbraco in AI — making the platform something agents can understand, reason about, and work with directly.

In the next two blog posts, we’ll explore how we’re applying Agent Skills within the world of Umbraco — implementation best practice, embedding content modelling guidance, and platform knowledge directly into AI workflows.

Our goal is simple: to make Umbraco as intuitive and frictionless for agents and LLMs as it is for developers.

Because the future of CMS isn’t just human-friendly. It’s also agent-ready.