Portable Intelligence: What Plugins Actually Are and How Marketing Teams Should Use Them
This guide is part of our series on agentic marketing systems. The first taught you to write skills. The second showed how agents are assembled from those skills. The third gave your agents muscles - tools, MCP, and CLI. The fourth built a working OpenClaw marketing agent. The fifth covered maintenance, security, and the self-evolving harness. And the sixth proposed a four-tier enterprise adoption framework - the authoritative catalogue, the sandbox, the promotion path, and education.
This guide is about the practical mechanism that makes that framework operational: plugins.
If you have followed the series, you have the ingredients. You know how to write a SKILL.md that teaches an agent when and how to perform a task. You know how to wire MCP servers so the agent can pull data from GA4 or query your ad platform. You know how to configure hooks that trigger automations after a tool runs. You know how to wrap all of this in a security harness.
What we need to figure out now is how to package all this and make it portable.
Your weekly reporting skill lives in a directory on your machine. Your MCP configuration points to local paths. Your hooks reference tools that only exist in your specific setup. If a colleague on your media team wants the same capability, you are sharing files with them, walking them through manual setup, and debugging configuration mismatches on a call. If you have three teams that need the same analytics workflow, you have three separate maintenance headaches.
Plugins solve this. They are the packaging layer. The distribution mechanism. The thing that turns “it works on my machine” into “install it and it works on yours.”
Both major coding agent platforms - Anthropic's Claude Code and OpenAI's Codex - now have first-class plugin systems. They solve the same fundamental problem, and they work in remarkably similar ways. They also have meaningful differences that matter when you are choosing which to invest in.
This guide explains what plugins actually are. We will take them apart file by file, show how they map to the skills and tools you already understand, and demonstrate why they are the missing piece for marketing teams that want to move from individual experimentation to team-wide capability.
Contents
Part I: What Plugins Are
- The Problem Plugins Solve
- What a Plugin Actually Is
- Anatomy of a Plugin: File by File
- The Manifest
- The Skills Directory
- The MCP Configuration
- The Hooks Configuration
- Agents (Claude Code) and Subagents (Codex)
- App Integrations
Part II: Distribution, Governance, and Risk
- Standalone vs Plugin: When to Use Which
- How Plugins Get Installed and Discovered
- Marketplaces
- Installation
- The Security Moment
- Claude Code vs Codex: Differences That Matter
- Plugin Discovery Conventions
- Component Differences
- Ecosystem Maturity
- Enterprise Controls
- The Plugin as an Enterprise Adoption Mechanism
Part III: Practical Marketing Patterns
- What Marketing Plugins Look Like in Practice
- The Performance Reporting Plugin
- The Campaign Audit Plugin
- The Page Speed and Technical SEO Plugin
- The Content Repurposing Plugin
- The Competitive Intelligence Plugin
- The Ad Creative Plugin
- Building Your First Marketing Plugin
- What Can Go Wrong (And Usually Does)
- Cross-Platform Portability
- The Repetition Tax and Why Plugins Are Worth the Effort
- Where This Goes Next
Conclusion: From Individual Craft to Team Capability
The Problem Plugins Solve
Before we get into the anatomy, it is worth being precise about the problem. Because “sharing” sounds simple until you try it.
Consider what happens when a single media strategist builds a useful agent workflow - say, a weekly performance reporting skill wired to GA4 and Meta Ads MCP servers, with a hook that posts the summary to Slack after the analysis runs.
That workflow involves at least four separate pieces of configuration: the skill instructions (a SKILL.md file describing the reporting workflow), the MCP server definitions (connection details for GA4 and Meta Ads), the hook definition (the automation that triggers after tool use), and any authentication setup (API keys, OAuth scopes).
These pieces live in different files, in different directories, with different conventions depending on which platform you are using. They reference each other implicitly - the skill assumes certain tools exist, the hook assumes certain tool names, the MCP config assumes certain paths and credentials.
Now multiply that by a team. Five media strategists need the same workflow. Each has slightly different local environments. Each needs the same skills, the same tool connections, the same automations. Without a packaging system, you are managing five parallel installations, five sets of config files, five potential points of drift where one person’s version silently diverges from another’s.
This is the problem plugins solve. A plugin takes all of those pieces - skills, MCP server configurations, hooks, authentication scaffolding - and bundles them into a single directory with a single manifest file that declares what the bundle contains. You install it once. The platform discovers everything inside it. Your five strategists all run the same version, updated from the same source.
It sounds mundane. It is mundane. And it is exactly the kind of mundane infrastructure that separates teams that scale their AI capabilities from teams that keep rebuilding the same thing.
What a Plugin Actually Is
Strip away everything else and a plugin is just a folder with a specific structure that a coding agent platform knows how to read.
That is it. No compilation. No runtime. No framework to learn. A folder, with files arranged in a way the platform expects, and a manifest file that says “here is what is inside.”
The manifest is the plugin’s identity card. It is a small configuration file - in both Claude Code and Codex, it is a JSON file called plugin.json - that declares the plugin’s name, version, description, and author. It tells the platform “this folder is a plugin, not just a random collection of files.”
Everything else in the folder is optional. You might include skills. You might include MCP server configurations. You might include hooks. You might include specialised agents. You might include just one of these things, or all of them. The plugin format does not force a specific combination - it provides a container for whatever capabilities you want to bundle together.
This is worth emphasising because the word “plugin” carries baggage from other software ecosystems. WordPress plugins run code on a server. Browser plugins inject functionality into web pages. IDE plugins extend an editor’s interface. Coding agent plugins do not work like any of these. They are closer to a configuration package - a pre-assembled set of instructions, connection definitions, and automation rules that the agent platform loads and makes available.
The mental model that works best: a plugin is a portable workspace configuration. Everything you would normally set up by hand - editing config files, creating skill directories, configuring MCP connections, writing hook definitions - pre-packaged into something installable.
Anatomy of a Plugin: File by File
Both Claude Code and Codex plugins follow similar directory structures. The specifics differ in naming conventions and a few platform-specific features, but the underlying architecture is nearly identical. Here is what each piece does.
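Before walking through each file, it helps to see the whole shape at once. Here is a sketch of a typical plugin layout using Claude Code naming (a Codex plugin would use .codex-plugin/ instead); the plugin and skill names are illustrative:

```text
marketing-analytics/
├── .claude-plugin/
│   └── plugin.json        # the manifest
├── .mcp.json              # MCP server definitions
├── skills/
│   ├── weekly-report/
│   │   └── SKILL.md
│   └── anomaly-check/
│       └── SKILL.md
├── hooks/
│   └── hooks.json         # automation triggers
└── agents/
    └── campaign-analyst.md
```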
The Manifest
The manifest lives in a dedicated directory inside the plugin folder. In Claude Code, this is .claude-plugin/plugin.json. In Codex, it is .codex-plugin/plugin.json. The directory name is different; the purpose is the same.
The manifest contains metadata: the plugin’s name, version number, a description, and optionally the author’s information. The name matters because it becomes the namespace for everything inside the plugin. If your plugin is called marketing-analytics, its skills become accessible as /marketing-analytics:weekly-report rather than just /weekly-report. This namespacing prevents collisions when you have multiple plugins installed - your team’s reporting plugin and your team’s creative plugin can both have a skill called audit without conflicting.
The version field is where things get practical for teams. When you update a plugin - adding a new skill, fixing an MCP server configuration, adjusting a hook - you bump the version. Your colleagues update to the new version. Everyone stays synchronised. This is the same principle as software versioning, applied to agent capabilities.
Codex adds a few fields Claude Code does not: a requires field that specifies the minimum Codex version, and an interface object that controls how the plugin appears in discovery surfaces - display name, category, brand colour, logo. These are distribution concerns that matter more as the plugin ecosystem grows.
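Putting those fields together, a minimal manifest might look like this. Treat it as a sketch rather than a schema - the marketing-analytics name is illustrative:

```json
{
  "name": "marketing-analytics",
  "version": "1.2.0",
  "description": "Weekly reporting, anomaly checks, and campaign audits for the media team",
  "author": { "name": "Media Ops Team" }
}
```

A Codex manifest layers its extra fields on top of this - a requires entry for the minimum platform version and an interface object for display name, category, and branding. Exact field names may shift as both formats evolve.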
The Skills Directory
Skills are the part of the plugin you are already familiar with if you have followed this series. Each skill lives in its own subdirectory under skills/, and each contains a SKILL.md file - the same format we covered in Guide 1.
A plugin might contain one skill or a dozen. A marketing analytics plugin could include weekly-report, anomaly-check, campaign-audit, and competitor-benchmarks - each in its own subdirectory, each with its own SKILL.md describing when to trigger, what tools to call, what output to produce.
The skill format does not change because it is inside a plugin. The same progressive disclosure, the same tool references, the same workflow instructions. What changes is discoverability and namespacing. Inside a plugin, the skill weekly-report becomes /marketing-analytics:weekly-report - the platform knows which plugin it belongs to and loads it accordingly.
The MCP Configuration
This is the file that connects your plugin to external services. In both platforms, it is typically .mcp.json at the plugin root. It lists the MCP servers the plugin needs - their names, how to start them (the command to run, the arguments to pass), and any environment variables they require (API keys, tokens, connection strings).
When someone installs your plugin, the platform reads this file and knows which MCP servers to start. Your colleague does not need to manually configure their GA4 connection or figure out where the Meta Ads server script lives - the plugin already declares those connections.
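A sketch of what that file might contain for the reporting workflow above. The server names, package name, and script path are hypothetical; the overall shape - a server name mapped to a command, arguments, and environment variables - follows the .mcp.json convention:

```json
{
  "mcpServers": {
    "ga4": {
      "command": "npx",
      "args": ["-y", "mcp-server-ga4"],
      "env": { "GA4_PROPERTY_ID": "${GA4_PROPERTY_ID}" }
    },
    "meta-ads": {
      "command": "node",
      "args": ["${CLAUDE_PLUGIN_ROOT}/servers/meta-ads.js"],
      "env": { "META_ACCESS_TOKEN": "${META_ACCESS_TOKEN}" }
    }
  }
}
```

The ${CLAUDE_PLUGIN_ROOT} placeholder resolves to wherever the plugin is installed, which keeps script paths portable across machines.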
This is where the security discussion from Guide 5 becomes directly relevant. An MCP configuration in a plugin can point to any server. An unvetted plugin could connect to a server that exfiltrates data, overrides permissions, or executes malicious code. The convenience of “install and everything works” is also the risk of “install and something you did not review now has access to your tools.” We will return to this.
The Hooks Configuration
Hooks are the automation layer. They define actions that trigger at specific moments in the agent’s workflow - after a tool is used, at the start of a session, when a permission is requested.
In Claude Code, hooks live in a hooks/ directory. The convention is a hooks.json file that maps event types to actions. For example: “After the ga4_metrics tool runs, automatically trigger the anomaly-check skill.” Or: “At the start of every session, load context from the project’s tracking document.”
The hook system in Codex follows a similar pattern with slight naming differences in how matchers and actions are defined.
For marketing teams, hooks are where plugins start to feel like real automation rather than just organised configuration. A plugin with the right hooks means your analyst does not have to remember to run the anomaly check after pulling metrics - the plugin handles that sequence automatically. A plugin with a post-analysis hook that sends results to Slack means the reporting workflow completes without manual intervention.
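A hedged sketch of what a hooks file for that "run the anomaly check after metrics" sequence might look like in Claude Code. The event name and matcher syntax follow the convention but may vary by platform version, and the script path is hypothetical:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "mcp__ga4__.*",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/scripts/run-anomaly-check.sh"
          }
        ]
      }
    ]
  }
}
```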
Agents (Claude Code) and Subagents (Codex)
Both platforms allow plugins to bundle specialised agent configurations - predefined personas with their own system prompts, tool restrictions, and model preferences.
In Claude Code, these live in an agents/ directory. Each agent is defined by a markdown or configuration file that specifies how it should behave. A plugin might include a security-reviewer agent that only has access to read tools and is instructed to audit changes for vulnerabilities, alongside a campaign-analyst agent with access to analytics tools and optimisation logic.
Codex calls these subagents and supports similar bundling.
For marketing, this means a single plugin can contain agents with different permission levels and specialisations - a read-only analyst that generates reports, and a separate optimiser agent that is allowed to make bid adjustments (with human approval gates). The plugin defines both; the team chooses which to activate.
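A sketch of what the read-only analyst's definition file might look like in Claude Code - markdown with frontmatter metadata. The frontmatter fields and tool names here are illustrative assumptions, not a guaranteed schema:

```markdown
---
name: campaign-analyst
description: Read-only performance analyst. Use for weekly reports and audits.
tools: Read, Grep, mcp__ga4__run_report
---

You are a performance marketing analyst. Pull metrics through the
available analytics tools, compute CTR, CPC, CPA, and ROAS, and
write a concise markdown summary. You may not modify campaigns,
budgets, or bids - flag recommended changes for human approval.
```

The tools line is the permission boundary: anything not listed is unavailable to this agent, which is what makes the read-only guarantee enforceable rather than aspirational.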
App Integrations
Codex includes an .app.json file for packaging pre-configured authentication to external services. This is where you define OAuth scopes for Google Analytics, Slack, Notion, or other platforms your plugin connects to.
Claude Code handles app integrations through its MCP configuration and the broader Anthropic ecosystem rather than a dedicated app file.
The app integration layer is the least standardised part of the plugin architecture and the most likely to change as both platforms evolve. For now, the practical takeaway is that a well-built plugin should include whatever authentication scaffolding its skills and tools require, so the person installing it does not have to configure OAuth flows from scratch.
Standalone vs Plugin: When to Use Which
Both Claude Code and Codex support two modes for extending agent capabilities: standalone configuration (personal, project-specific) and plugins (shareable, versioned).
Understanding when to use each saves you from over-engineering simple things and under-engineering things that need to scale.
Standalone configuration lives in a local directory - .claude/ for Claude Code, .agents/ or project-level files for Codex. It is personal. It does not have a namespace prefix. It is not versioned. It is not installable by others. This is where you prototype. When you are figuring out whether a weekly reporting skill actually works, when you are iterating on MCP server settings, when you are testing hook patterns - do this in standalone mode. The overhead of creating a plugin is unnecessary friction during exploration.
Plugins are for when something has proven its value and needs to reach others. The moment a second person needs what you built, you have a plugin use case. The moment you want version control over a workflow, you have a plugin use case. The moment you want to ensure three teams are running the same analytics skill with the same tool connections, you have a plugin use case.
The recommended path: start standalone, prove the value, convert to a plugin when sharing becomes the goal. Claude Code explicitly supports this migration - you can move files from .claude/ into a plugin directory structure and the plugin version takes precedence.
This mirrors the enterprise adoption framework from Guide 6. Standalone is the sandbox - personal experimentation. Converting to a plugin is the promotion path - moving a proven capability into something the team can rely on. The plugin marketplace is the authoritative catalogue - vetted, versioned, maintained.
How Plugins Get Installed and Discovered
A plugin that nobody can find or install is just a folder. The discovery and installation mechanisms differ between platforms, and both have evolved rapidly.
Marketplaces
A marketplace is a collection of plugins from a single source. It is not an app store with payment processing and reviews - it is a catalogue, typically a JSON file in a repository, that lists available plugins with their metadata and installation paths.
Claude Code comes with the official Anthropic marketplace pre-configured. Community marketplaces can be added with a single command. As of early 2026, the Claude Code plugin ecosystem includes over 9,000 plugins across the official marketplace and community registries - a number that reflects both genuine utility and the usual early-ecosystem proliferation of low-quality entries.
Codex supports repository-level marketplaces (a marketplace.json in your project) and personal marketplaces. OpenAI curates an official directory through the Codex interface.
For enterprise teams, the most important marketplace is the one you build yourself. A team marketplace - hosted in a private repository, controlled by your organisation - is where approved plugins live. This is the authoritative catalogue from our enterprise framework, implemented as infrastructure.
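At its simplest, that team marketplace is a JSON catalogue in a private repository. A sketch - the names and paths are hypothetical and the exact schema varies by platform and version:

```json
{
  "name": "acme-marketing",
  "owner": { "name": "Acme Marketing Ops" },
  "plugins": [
    {
      "name": "marketing-analytics",
      "source": "./plugins/marketing-analytics",
      "description": "Weekly reporting and anomaly checks"
    },
    {
      "name": "campaign-audit",
      "source": "./plugins/campaign-audit",
      "description": "UTM, pixel, and bid strategy checks"
    }
  ]
}
```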
Installation
In Claude Code, you install plugins through the /plugin command. Browse what is available, select what you want, and the platform downloads and loads it. For local development, the --plugin-dir flag loads a plugin directly from a local directory.
In Codex, the /plugins surface provides similar browsing and installation. Plugins install into a local cache directory, and each can be individually enabled or disabled.
Both platforms support loading changes with a reload command - /reload-plugins in Claude Code - so you can iterate on a plugin’s contents without restarting the entire environment.
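Put together, a typical local development loop in Claude Code looks something like this (the plugin and skill names are illustrative):

```text
claude --plugin-dir ./marketing-analytics   # start with the work-in-progress plugin loaded
/marketing-analytics:weekly-report          # exercise a skill inside the session
/reload-plugins                             # pick up edits without restarting
```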
The Security Moment
Here is where the installation convenience creates genuine risk, and it connects directly to the security principles we covered in Guide 5.
When you install a plugin, you are trusting that its MCP server configurations point to safe servers, that its hook scripts do not execute malicious code, that its skill instructions do not override safety boundaries, and that its agent definitions do not bypass permission controls.
Anthropic states this plainly: plugins can load remote MCP servers, local MCP servers, and other local software tools. Community plugins may install unverified, third-party software. You should review additional software that may be installed by a plugin.
This is not hypothetical. The Cisco exfiltration research and Palo Alto Networks’ “lethal trifecta” analysis we covered in Guide 5 demonstrated exactly how tool-level access can be exploited. A plugin is a convenient delivery mechanism for those same risks if the plugin source is not trusted.
The practical response is not to avoid plugins. It is to apply the same vetting principles we established for individual tools:
Review before installing. Both platforms provide links to plugin source code. Read the MCP configuration. Check what servers it starts, what commands it runs, what environment variables it expects. If you cannot understand what a plugin does by reading its manifest and configuration files, do not install it.
Prefer known sources. The official Anthropic and OpenAI marketplaces apply some curation. Your organisation's private marketplace applies your own standards. Random repositories from unknown authors apply nothing.
Scope permissions. A well-designed plugin scopes its agents to the minimum tool access they need. A reporting plugin should not need write access to your ad platform. If it asks for more than it should need, that is a signal.
Use read-only first. When evaluating a new plugin, test it against synthetic or non-production data before connecting it to live campaign systems.
Claude Code vs Codex: Differences That Matter
Both plugin systems solve the same problem and use similar structures. But there are differences that affect how you build and distribute plugins, and which platform fits your team’s situation.
Plugin Discovery Conventions
Claude Code discovers plugins by looking for .claude-plugin/plugin.json. Codex looks for .codex-plugin/plugin.json. This means a plugin built for one platform does not automatically work on the other - the manifest directories are different, and some configuration conventions diverge.
However, the core components - skills in SKILL.md format, MCP server definitions in .mcp.json, hook configurations - are architecturally compatible. The skill format in particular is designed to be platform-portable. A SKILL.md file written for Claude Code will work in Codex with minimal or no changes. The packaging around it differs; the content does not.
Some community repositories already offer cross-platform installation scripts that convert between formats. This portability story will matter increasingly as teams use different platforms for different tasks.
Component Differences
Claude Code plugins can include commands (slash commands that trigger specific actions), agents (specialised personas), skills, hooks, MCP servers, and LSP servers (Language Server Protocol integrations for real-time code intelligence). The LSP capability is unique to Claude Code and matters for engineering teams that want code-aware plugins.
Codex plugins bundle skills, MCP server configurations, and app integrations. Codex’s app integration layer - the .app.json file - is more explicit about authentication scaffolding than Claude Code’s approach. Codex also has a richer set of interface metadata fields for controlling how plugins appear in discovery surfaces.
Ecosystem Maturity
Claude Code’s plugin ecosystem is larger by raw count. The community has built multiple third-party marketplace directories, package manager CLIs, and curation layers. This is both an advantage (more options) and a challenge (more noise, more unvetted content).
Codex’s plugin system reached first-class status in March 2026, with plugins syncing at startup, discoverable through the /plugins surface, and manageable with clearer install and authentication handling. The ecosystem is smaller but more curated.
Enterprise Controls
Both platforms are investing in enterprise governance for plugins. Codex offers managed configuration that can enforce which plugins are available, enabled by default, or blocked across an organisation. Claude Code supports force-enabled plugins through managed settings that cannot be overridden by individual users.
For the enterprise adoption framework, these controls are essential. They are what make it possible to implement the authoritative catalogue - a set of approved plugins that every team member has access to, with unapproved plugins blocked in production workspaces.
The Plugin as an Enterprise Adoption Mechanism
This is where plugins stop being a developer convenience and become an organisational strategy.
In Guide 6, we proposed four tiers: the authoritative catalogue of vetted capabilities, the sandbox for experimentation, the promotion path for moving proven experiments into production, and education to activate all three.
Plugins are the concrete mechanism for Tiers 1 through 3.
The authoritative catalogue is a private marketplace. A repository controlled by your organisation, containing plugins that have been reviewed, tested, and approved. Each plugin in the catalogue has passed security review, has an identified owner, has documentation, and has a version history. When a media strategist needs analytics capabilities, they install from the catalogue. They do not search public repositories.
The sandbox is standalone configuration. Practitioners prototype in their local environment, using standalone skills and MCP connections. No plugin overhead. No approval required. Just exploration. This is where a strategist discovers that a particular reporting workflow saves three hours a week, or that a specific anomaly detection pattern catches issues the manual process misses.
The promotion path is the process of converting a standalone workflow into a catalogue plugin. The strategist who built the prototype packages it as a plugin, submits it for review, and - if it passes security and quality gates - it enters the catalogue. The builder gets recognition. The team gets the capability. The organisation gets governance.
This is not theoretical. The infrastructure exists today. Both Claude Code and Codex support private marketplaces that organisations can control. Both support managed settings that restrict which plugins are available in which contexts. Both support versioning that ensures teams are synchronised.
What most organisations lack is not the technology but the process. Who reviews submitted plugins? What are the quality criteria? How fast does the review cycle need to be to avoid discouraging submissions? These are organisational design questions, not technical ones. But the plugin format gives you the concrete artifact around which to build those processes.
What Marketing Plugins Look Like in Practice
The abstractions become clearer with concrete examples. Here are plugin patterns that map directly to marketing workflows, all buildable with the components we have covered across this series.
The Performance Reporting Plugin
This is the plugin most marketing teams will build first, because it addresses the most universal pain point: the weekly reporting cycle that consumes analyst hours without producing proportional insight.
The plugin bundles a weekly-report skill (instructions for pulling data, computing KPIs, identifying anomalies, and generating a narrative summary), MCP server configurations for your analytics platforms (GA4, your ad platforms, your data warehouse), an anomaly-check skill that triggers automatically via a hook after metrics are fetched, and a post-analysis hook that formats and delivers the summary to Slack or email.
Think about what this contains in terms of the components we have covered. The weekly-report SKILL.md is the same format from Guide 1 - it tells the agent to clarify account IDs and date ranges, call specific MCP tools for metrics, compute standard KPIs (CTR, CPC, CPA, ROAS), highlight winners and underperformers, and produce a markdown report with actionable recommendations. The MCP configuration points to your analytics servers - the same MCP architecture from Guide 3. The anomaly-check skill is a second SKILL.md that looks for week-over-week or month-over-month deviations beyond a threshold you define. The hook wires them together: after any metrics tool completes, automatically run the anomaly check.
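To make that concrete, here is a hedged sketch of what the weekly-report SKILL.md might contain. The thresholds, metric list, and report structure are illustrative choices, not a prescribed format:

```markdown
---
name: weekly-report
description: Generate the weekly cross-channel performance report.
  Use when asked for a weekly summary or channel KPI roundup.
---

# Weekly Performance Report

1. Confirm account IDs, channels, and date range with the user.
2. Pull sessions, conversions, spend, and revenue for the current
   and prior week via the GA4 and ad platform tools.
3. Compute CTR, CPC, CPA, and ROAS per channel.
4. Flag any metric that moved more than 20% week over week.
5. Produce a markdown report: summary, winners, underperformers,
   and three recommended actions.
```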
The analyst’s workflow becomes: invoke the plugin skill, specify the time period and channels, and receive a formatted report with anomalies flagged and recommendations drafted. The analyst reviews, edits the narrative if needed, and approves distribution. The hours previously spent pulling data and formatting tables are absorbed by the plugin.
The critical detail: when the analyst decides the report format should change - say, adding a new section for channel-level ROAS comparison - they update the SKILL.md in the plugin, bump the version, and everyone on the team gets the updated format on their next update. One change propagates to the entire team. That is the distribution problem solved.
The Campaign Audit Plugin
Audit workflows are particularly well-suited to plugins because they follow consistent patterns: check these specific things, flag deviations from these specific standards, report in this specific format.
A campaign audit plugin bundles skills for UTM parameter validation, tracking pixel verification, bid strategy review, and creative compliance checking. It includes MCP connections to your ad platforms and your tracking infrastructure. Its hooks trigger the full audit sequence when any single check is invoked - run the UTM audit and the tracking check follows automatically.
The value here is standardisation. Every audit follows the same checklist, checks the same criteria, produces the same report format. When the team agrees on what “audited” means, the plugin encodes that agreement.
This pattern is particularly powerful for PPC teams managing campaigns across multiple clients or accounts. The audit plugin defines your team’s quality standard as executable instructions. A new team member does not need to memorise the twenty-point checklist - they invoke the audit skill and the plugin walks through every check systematically. The output is consistent regardless of who runs it, which matters when audit results feed into client reporting or compliance documentation.
Community examples reinforce this pattern. PPC practitioners have shared workflows where coding agents audit tracking setups across dozens of landing pages, identify broken events and inconsistent UTM parameters, and propose fixes as reviewable changes. One team reported finding tracking gaps across more than twenty landing pages that manual audits had missed - the kind of systematic coverage that humans reliably miss under time pressure and plugins reliably do not.
The Page Speed and Technical SEO Plugin
Marketing teams that manage website performance face a recurring challenge: page speed optimisation and technical SEO require both diagnostic analysis (what is slow? what is missing?) and implementation (compress images, defer scripts, add schema markup). These are two different skill types - analysis and execution - and a plugin can bundle both.
A technical SEO plugin might include a speed-audit skill that runs performance diagnostics via MCP tools connected to Lighthouse or PageSpeed APIs, a schema-generator skill that reads page content and produces valid JSON-LD markup for FAQ, Product, or LocalBusiness schema types, and an optimisation skill that identifies specific performance issues and proposes fixes.
The hook layer makes this sequential: run the speed audit, and if critical issues are found, automatically invoke the optimisation skill with the findings as context. The schema generator can be invoked independently when new pages are published.
This plugin pattern addresses the bandwidth bottleneck that marketing teams face when they need technical changes but do not have dedicated frontend engineering time. The plugin does not replace the engineer - it prepares the work. It produces specific, reviewable recommendations or implementation proposals that an engineer can approve in minutes rather than investigate for hours.
The Content Repurposing Plugin
Content teams face a familiar scaling challenge: one piece of content needs to become ten assets across different channels, each with different format requirements, character limits, and tone expectations.
A repurposing plugin bundles skills for each output format - LinkedIn carousel, email snippet, social thread, video script outline - along with a coordinator skill that reads the source content, extracts key points, and dispatches to the format-specific skills. If the team's CMS is accessible via MCP, the plugin can read source content directly and save outputs to the appropriate locations.
Each format-specific skill encodes the team’s standards for that channel: character limits, hashtag conventions, CTA patterns, optimal post length. When those standards change - say, LinkedIn adjusts its carousel specifications - one skill update in the plugin propagates the change everywhere.
The coordinator skill is the interesting part architecturally. It does not produce output directly - it analyses the source content, identifies the key messages and supporting points, and then instructs the format-specific skills on what to emphasise. This is the multi-skill orchestration pattern from Guide 2 applied within a plugin context.
The Competitive Intelligence Plugin
For PPC teams and brand marketers, monitoring competitor activity across ad libraries and search results is valuable but time-intensive. A competitive intelligence plugin bundles skills for ad library analysis, positioning extraction, and battlecard generation. It connects via MCP to whatever data sources you use for competitive monitoring and produces structured output - not raw data dumps, but synthesised analysis with counter-strategies.
The output is the key differentiator. Raw competitive data is noise. An intelligence plugin should produce structured battlecards: what the competitor is emphasising, what audiences they appear to be targeting, how their messaging has shifted, and what counter-positioning your team might consider. The skill instructions define this analytical framework - the same way a human competitive analyst would approach the data, encoded as repeatable process.
The Ad Creative Plugin
Campaign creative production - especially for Performance Max and responsive ad formats that require dozens of headline and description variants - is another workflow where plugins compound value.
A creative plugin bundles skills for variant generation (producing headlines and descriptions from a brief, checking character limits, applying brand voice rules), performance-based refinement (analysing which existing creative elements perform best and generating variations that build on winners), and export formatting (producing bulk upload files in the CSV format your ad platform expects).
The hook layer connects the analytical and generative steps: after pulling creative performance data via MCP, the plugin identifies top performers and feeds those patterns into the variant generation skill. The output is not random creative - it is creative informed by performance data, constrained by brand rules, and formatted for immediate deployment.
Building Your First Marketing Plugin
If you have followed the series and built skills, MCP servers, and hooks in standalone mode, converting to a plugin is straightforward. Here is the process, step by step.
Step 1: Identify the workflow worth sharing. Not every standalone skill deserves to be a plugin. The candidates are workflows that more than one person needs, that benefit from standardised execution, and that are stable enough to version. Your experimental anomaly detection algorithm that you are still tuning weekly? Keep it standalone. Your weekly reporting workflow that three teams rely on? Plugin.
Step 2: Create the plugin directory structure. A new folder. Inside it, the manifest directory (.claude-plugin/ or .codex-plugin/) with your plugin.json. Then the directories for your components: skills/ with subdirectories for each skill, .mcp.json at the root for tool connections, hooks/ for automations.
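Assuming a Claude Code plugin and placeholder names, the resulting layout might look like:

```
weekly-reporting-plugin/
├── .claude-plugin/
│   └── plugin.json          # manifest: name, version, description
├── skills/
│   ├── fetch-metrics/
│   │   └── SKILL.md
│   └── anomaly-report/
│       └── SKILL.md
├── .mcp.json                # MCP server definitions
└── hooks/
    └── hooks.json           # automations
```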
Step 3: Move your proven configurations. Copy your SKILL.md files into the appropriate skill subdirectories. Adapt your MCP configuration to be portable - use environment variables for credentials rather than hardcoded values, so each installer can provide their own API keys. Move your hook definitions into the hooks directory.
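A portable MCP definition might look like the following sketch. The server name and package are hypothetical; the point is the `${...}` placeholders, which - on platforms that expand environment variables in MCP configuration - let each installer supply their own credentials rather than inheriting yours:

```json
{
  "mcpServers": {
    "ga4": {
      "command": "npx",
      "args": ["-y", "ga4-mcp-server"],
      "env": {
        "GA4_PROPERTY_ID": "${GA4_PROPERTY_ID}",
        "GOOGLE_APPLICATION_CREDENTIALS": "${GOOGLE_APPLICATION_CREDENTIALS}"
      }
    }
  }
}
```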
Step 4: Write the manifest. Name, version, description. Keep the description precise - it determines how the platform surfaces your plugin in discovery. “GA4 + Meta Ads reporting and anomaly detection for marketing teams” is better than “marketing analytics plugin.”
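A minimal manifest might look like this sketch; additional fields (author, homepage, and so on) vary by platform:

```json
{
  "name": "marketing-weekly-reporting",
  "version": "1.0.0",
  "description": "GA4 + Meta Ads reporting and anomaly detection for marketing teams"
}
```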
Step 5: Test locally. Load the plugin using the platform’s local plugin flag and verify that skills discover correctly, MCP servers start and respond, hooks trigger at the right moments, and the overall workflow produces the expected output.
Step 6: Distribute. Push to a repository. If your team has a private marketplace, add an entry. If not, share the repository URL and team members install directly. Either way, you now have a single source of truth for the workflow, with version control and a clear update path.
What Can Go Wrong (And Usually Does)
Plugins simplify distribution but they do not eliminate complexity. Here are the failure modes marketing teams should watch for.
Environment drift. Your plugin works on your machine because you have the right Python version, the right API credentials, and the right network access. Your colleague’s machine has none of these. The plugin installs cleanly but the MCP servers fail silently. Mitigation: document prerequisites clearly, use environment variables for all credentials, and include diagnostic skills or checks that verify the environment before running the main workflow.
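One lightweight form of that diagnostic is a preflight script shipped alongside the skills. A sketch in Python - the variable and command names are illustrative for a hypothetical reporting plugin:

```python
import os
import shutil

# Prerequisites this (hypothetical) reporting plugin assumes.
REQUIRED_ENV = ["GA4_PROPERTY_ID", "GOOGLE_APPLICATION_CREDENTIALS"]
REQUIRED_COMMANDS = ["npx"]  # needed to launch the MCP servers


def preflight() -> list[str]:
    """Return a list of human-readable problems; empty means ready to run."""
    problems = []
    for var in REQUIRED_ENV:
        if not os.environ.get(var):
            problems.append(f"missing environment variable: {var}")
    for cmd in REQUIRED_COMMANDS:
        if shutil.which(cmd) is None:
            problems.append(f"command not found on PATH: {cmd}")
    return problems


if __name__ == "__main__":
    issues = preflight()
    if issues:
        print("Plugin preflight failed:")
        for issue in issues:
            print(f"  - {issue}")
    else:
        print("Environment looks good - all prerequisites found.")
```

Running this before the main workflow turns a silent MCP failure into an explicit, fixable message.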
Version fragmentation. Three people install v1.0 of your plugin. You release v1.1 with a bug fix. Two people update, one does not. Now the team is running different versions of the “same” workflow, producing subtly different results. Mitigation: use your private marketplace to communicate updates, and consider managed settings that enforce minimum plugin versions.
Over-bundling. It is tempting to put everything into one plugin - every skill you have ever written, every MCP connection you have ever configured, every hook you have ever tested. The result is a plugin that installs twenty MCP servers (burning context window tokens) and offers thirty skills (making it harder for the agent to choose the right one). The over-tooling principle from Guide 3 applies here: more is not better. Build focused plugins that do one domain well.
Hook conflicts. Two plugins define hooks that trigger on the same events, potentially producing unexpected interactions or duplicate actions. If your reporting plugin triggers a Slack notification after metrics are fetched, and your monitoring plugin also triggers a notification after the same tools run, your team gets double messages. Mitigation: namespace your hook actions clearly and document what automations your plugin triggers.
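As an illustrative sketch in the shape of Claude Code's hook configuration - the matcher pattern and script path are placeholders - naming hook scripts after the plugin makes duplicated automations easy to trace:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "mcp__ga4__.*",
        "hooks": [
          {
            "type": "command",
            "command": "${CLAUDE_PLUGIN_ROOT}/hooks/reporting-notify-slack.sh"
          }
        ]
      }
    ]
  }
}
```

If two plugins both notify Slack after the same tool runs, the plugin-prefixed script names make the source of each message obvious at a glance.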
Security through convenience. The ease of “install it and it works” can bypass the security reflexes that manual setup forces. When you manually configure an MCP server, you see the command it runs and the arguments it takes. When a plugin does it for you, it is easy to skip that review. Resist the convenience. Always read the MCP configuration of any plugin before installing it.
Cross-Platform Portability
One of the more interesting developments in the plugin ecosystem is the emergence of cross-platform compatibility. The SKILL.md format - the core skill specification - works across Claude Code, Codex, Gemini CLI, OpenClaw, Cursor, and at least half a dozen other agent environments. This is because skills are fundamentally just markdown files with structured frontmatter. There is no platform-specific runtime dependency in the skill itself.
The packaging around skills is what differs. The manifest directory (.claude-plugin/ vs .codex-plugin/), some hook configuration conventions, and the app integration format vary between platforms. But the skills, the MCP server definitions, and the core workflow logic are portable.
This matters for marketing teams for a specific practical reason: most enterprise marketing organisations do not standardise on a single AI coding agent. Different teams use different tools. The media team might prefer Claude Code for its Claude model ecosystem. The engineering team might use Codex because the organisation is on OpenAI’s enterprise plan. A data science team might use Cursor or Gemini CLI. A freelance consultant working with your team might use something else entirely.
In this reality, the skills inside your plugins are the transferable asset. The plugin wrapper is platform-specific, but the workflow knowledge - the SKILL.md files that encode how your team evaluates campaign performance, how your audit checklist works, what your reporting format looks like - moves between platforms with minimal adaptation.
Several community projects have already built conversion scripts that translate plugin structures between platforms. One notable repository offers cross-platform installation for over 190 skills across eleven different agent tools, using a conversion script that adapts the packaging while preserving the skill content. The pattern is clear: write the skill once, package it per-platform.
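The core of such a conversion can be sketched in a few lines of Python. The function and directory names follow this guide's examples; real converters also adapt hook conventions and manifest field differences between platforms:

```python
import shutil
from pathlib import Path


def convert_claude_to_codex(src: Path, dest: Path) -> None:
    """Copy a Claude Code plugin tree, swapping the manifest directory.

    Skills, .mcp.json, and hooks are copied as-is - only the
    platform-specific wrapper changes. Illustrative sketch only.
    """
    # Copy everything except the Claude-specific manifest directory.
    shutil.copytree(src, dest, ignore=shutil.ignore_patterns(".claude-plugin"))
    # Re-home the manifest under the Codex naming convention.
    manifest_src = src / ".claude-plugin"
    if manifest_src.is_dir():
        shutil.copytree(manifest_src, dest / ".codex-plugin")
```

The brevity is the point: because the skills and MCP definitions are portable, the converter only has to touch the wrapper.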
The MCP layer reinforces this portability. Because MCP is an open standard supported by both Anthropic and OpenAI (and increasingly by Google, Microsoft, and others), an MCP server you build for one platform works with any platform that supports the protocol. Your GA4 MCP server does not care whether it is connected to Claude Code or Codex - it speaks the same JSON-RPC protocol regardless of the host.
This is not a reason to ignore platform differences. Hooks behave differently. Agent configuration varies. App authentication scaffolding is not standardised. But it is a reason to invest heavily in the skills and MCP layers - the parts that transfer - and treat the platform-specific plugin wrapper as a thin, adaptable shell around your core capabilities.
Our AI Knowledge Hub registry is designed with this portability in mind - skills published there are structured to work across platforms, and we will be publishing reference plugin implementations that demonstrate cross-platform packaging in the accompanying repository.
The Repetition Tax and Why Plugins Are Worth the Effort
Before we look at where plugins are heading, it is worth quantifying what they save.
Without plugins, every new team member who needs your reporting workflow goes through the same setup: copy the SKILL.md from wherever it lives, configure MCP servers with the right paths and credentials, set up hooks, troubleshoot what did not work, and eventually get to a running state that hopefully matches what everyone else has. Call it two to four hours of setup time. Multiply by the number of people on your team. Multiply by the number of distinct workflows. Now multiply by the maintenance cost when someone updates a workflow and the change needs to propagate manually.
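With illustrative numbers only - substitute your own team's figures - the arithmetic sketches out like this:

```python
# Illustrative numbers only - not measured data.
setup_hours_per_person = 3        # midpoint of the 2-4 hour estimate
people = 5
workflows = 4
updates_per_year = 6              # times a workflow changes and must propagate
hours_per_manual_update = 0.5

manual_cost = (setup_hours_per_person * people * workflows
               + updates_per_year * hours_per_manual_update
                 * people * workflows)

packaging_hours_per_workflow = 4  # one-off cost to package and test a plugin
plugin_cost = packaging_hours_per_workflow * workflows

print(f"manual setup + maintenance: {manual_cost:.0f} hours/year")
print(f"plugin packaging (one-off): {plugin_cost:.0f} hours")
```

Even with conservative inputs, the one-off packaging cost is an order of magnitude smaller than the recurring manual cost.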
This is the repetition tax. It is a slow bleed of hours spent on configuration rather than analysis, on setup rather than insight. Most teams do not notice it because it happens in small increments - fifteen minutes here, an hour there - distributed across many people over many weeks.
Plugins eliminate the repetition tax at the distribution layer. Install once, verify once, update once. The time savings per workflow are modest. The cumulative savings across a team running multiple workflows over months are substantial. And the consistency benefit - everyone running the same version of the same workflow - is arguably more valuable than the time savings.
The teams that treat plugin packaging as overhead (“why spend time packaging when I could just share the files?”) will continue paying the tax. The teams that invest the modest effort to package, version, and distribute their workflows will compound that investment with every new person who installs rather than configures.
Where This Goes Next
The plugin ecosystem is evolving fast. Both Anthropic and OpenAI are shipping plugin infrastructure updates on roughly weekly cycles. A few trajectories are worth tracking.
Enterprise plugin governance is becoming first-class. Managed settings, force-enabled plugins, organisation-scoped marketplaces - these features are landing now. The gap between “plugins exist” and “plugins are enterprise-ready” is closing.
Plugin composition is emerging. Today, plugins are flat bundles. Tomorrow, plugins that depend on other plugins - composing capabilities from multiple sources - will become standard. A reporting plugin that declares a dependency on a data-connection plugin, which declares a dependency on an authentication plugin. This is how software ecosystems mature, and agent plugin ecosystems will follow the same path.
Quality curation is the unsolved problem. Nine thousand plugins is both impressive and overwhelming. The signal-to-noise ratio in public marketplaces is low. The organisations and communities that solve curation - not just listing, but genuine quality assessment - will define which plugins people actually use. This is why maintaining your own authoritative catalogue, rather than relying entirely on public marketplaces, remains the most defensible enterprise strategy.
Skills as the portable unit of value. Plugins are the shipping container. Skills are the cargo. As the packaging layer standardises and cross-platform tooling improves, the enduring investment is in the skills themselves - the encoded domain expertise that captures how your team evaluates performance, audits campaigns, identifies anomalies, and generates insights. The plugin format may change. The knowledge inside it compounds.
Further Reading
- Claude Code Plugins Documentation
- OpenClaw Plugin Bundles Documentation
- OpenAI Codex Plugins Documentation
Conclusion: From Individual Craft to Team Capability
The progression through this series has followed a deliberate arc. Skills taught you to encode expertise. Agents gave you the architecture to deploy it. Tools gave you the connection to real-world data and systems. The security harness gave you the guardrails. The enterprise framework gave you the organisational model.
Plugins are the delivery mechanism.
They are not glamorous. They do not involve new AI capabilities or novel architectures. They are about packaging, distribution, versioning, and installation - the infrastructure of sharing. And that infrastructure is exactly what separates individual practitioners doing clever things on their laptops from teams building institutional capability that persists beyond any single person’s tenure.
If you have been building skills and tools through this series, the next step is clear: take your most proven, most reused workflow. Package it as a plugin. Share it with one colleague. See if it installs cleanly and runs correctly on their machine. Fix what breaks. Update the version. Share it more widely.
That is the promotion path in action. That is the sandbox-to-catalogue pipeline working. That is how an organisation builds an AI capability stack that belongs to the team, not just to the person who figured it out first.
The skills you have built are the intelligence. The plugin is what makes it portable.
This article is part of the Performics Labs AI Knowledge Hub series on agentic marketing systems. Previous guides: Building AI Skills · Agent Architecture · Tools, MCP, and CLI · Your OpenClaw Marketing Agent · Code Agent Playbook · Enterprise Adoption Framework
Reference implementations and marketing plugin templates will be published to the AI Knowledge Hub and the ai-skills-guide repository.