Google Cloud Next 2026: What Changes for Marketing, Commerce, and Ad Tech

Google Cloud Next is an infrastructure conference. Most of the announcements from Las Vegas were about agent runtimes, TPU chips, data fabrics, security operations, and enterprise deployment.

That sounds far away from media planning and e-commerce until you follow the chain:

agents need tools → tools need identity → commerce needs product data → measurement needs first-party signal → marketing teams need governance before any of this touches live campaigns.

This piece isolates the parts of Google Cloud Next ’26 that matter for performance marketing, e-commerce, and ad-tech engineering. It also connects them to the patterns we have been tracking in the Chrome AI Mode article, the Meta Moltbook analysis, and the skills / agents / tools series.

The short version: Google is no longer talking about agents as demos. It is packaging the runtime, governance, identity, data, and security layers needed to run them at enterprise scale. For marketers, the immediate question is not “should we build an agent?” It is “which parts of our marketing stack are ready for agents to operate on?”

The Big Shift: From Model Platform to Agent Platform

The headline announcement was the Gemini Enterprise Agent Platform. Google describes it as a developer platform for building, governing, scaling, and optimizing agents, bringing Vertex AI model and agent-building capabilities together with new features for integration, security, DevOps, and operations.

The important change is architectural. The enterprise AI problem is moving from “how do we call a model?” to “how do we manage a fleet of agents that can use tools, access data, and act on behalf of teams?”

Google’s platform answer includes:

  • agent authoring through Agent Studio and the Agent Development Kit
  • access to Gemini models and third-party models through Model Garden
  • agent runtime and orchestration
  • Agent Registry for discovery and lifecycle control
  • Agent Identity for per-agent credentials and auditability
  • Agent Gateway for policy-controlled external access
  • observability, evaluation, and simulation for deployed agents

Google’s own Cloud Next recap frames this as the shift into the “agentic era.” That phrase is broad, but the practical implication is precise: agents are becoming first-class workloads, not chat widgets bolted onto SaaS tools.

For marketing and ad-tech teams, this maps directly to the architecture we described in The Anatomy of a Marketing Agent: role, memory, tools, skills, identity, and guardrails. Cloud Next shows the managed enterprise version of that stack.


The Commerce Story: UCP Needs Enterprise Plumbing

In the Chrome AI Mode article, we covered the Universal Commerce Protocol: the emerging standard for letting agents interact with merchant systems across discovery, product selection, checkout, and account-linked benefits.

Cloud Next did not turn UCP into a finished universal checkout layer overnight. The more accurate reading is this: Google is building the enterprise plumbing that agentic commerce will need if it moves beyond pilots.

What Changed Around UCP

Since the initial UCP launch, Google and the broader commerce ecosystem have added more practical capabilities. Search Engine Journal’s coverage of the March update notes new Cart and Catalog capabilities for multi-item baskets and real-time product details, plus Identity Linking so shoppers can carry loyalty or account benefits into agent-mediated commerce flows.[SEJ]

The UCP Identity Linking specification is especially important. It uses OAuth 2.0 to authorize a platform or agentic service to act on behalf of a user with a merchant. That matters because commerce agents cannot be treated as anonymous crawlers once they start handling loyalty status, checkout sessions, subscriptions, or post-purchase actions.
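The UCP specification defines the actual endpoints, scopes, and metadata; the sketch below only illustrates the shape of a standard OAuth 2.0 authorization-code exchange as it would apply here. Every URL, scope string, and parameter value is hypothetical.

```python
from urllib.parse import urlencode

# Hypothetical endpoint: the real value comes from the merchant's UCP
# Identity Linking metadata, not from this sketch.
MERCHANT_AUTH_URL = "https://merchant.example/oauth/authorize"

def build_authorization_url(client_id: str, redirect_uri: str,
                            scope: str, state: str) -> str:
    """Step 1: the agent platform sends the user to the merchant to consent."""
    params = {
        "response_type": "code",   # standard OAuth 2.0 authorization-code flow
        "client_id": client_id,    # identifies the agent platform, not the user
        "redirect_uri": redirect_uri,
        "scope": scope,            # e.g. loyalty status, checkout, order history
        "state": state,            # CSRF protection, echoed back on redirect
    }
    return f"{MERCHANT_AUTH_URL}?{urlencode(params)}"

def build_token_request(code: str, client_id: str, client_secret: str,
                        redirect_uri: str) -> dict:
    """Step 2: the platform exchanges the one-time code for an access token."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "client_id": client_id,
        "client_secret": client_secret,
        "redirect_uri": redirect_uri,
    }
```

The point of the flow is delegation with consent: the merchant issues a token scoped to what the user approved, and the agent platform acts under that token rather than as an anonymous crawler.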

Shopify is moving in the same direction. Its Agentic Storefronts documentation describes AI channels such as ChatGPT, Google AI Mode and Gemini, and Microsoft Copilot as places where eligible merchants can make products discoverable and, in some cases, purchasable inside the AI channel. Shopify’s docs are careful about rollout scope: ChatGPT storefronts are available to eligible stores, while storefronts for Google AI Mode, Gemini, and Microsoft Copilot are described as in early access.

That caveat matters. The infrastructure is real, but availability is not universal.

Why Performance Marketers Should Pay Attention

If AI Mode, Gemini, Copilot, and other assistant surfaces become shopping interfaces, the product feed becomes more than a campaign input. It becomes the machine-readable representation of the product itself.

For teams running shopping, retail media, or performance campaigns, the immediate risk is not “we rank lower.” The sharper risk is “the agent cannot verify enough product information to include us in the decision set.”

The practical work is familiar but more urgent:

  • complete and consistent Merchant Center attributes
  • accurate product titles, variants, pricing, availability, and shipping data
  • structured descriptions that answer comparison questions
  • schema and feed alignment across site, ads, and commerce platforms
  • clear brand and product identifiers so agents can resolve entities correctly

This is the same two-layer problem we described in The Two-Layer Problem: brands need both human-readable persuasion and machine-readable access. The product page still matters, but the feed and protocol layer increasingly decide whether the product is visible to the agent at all.

The Microsoft and Shopify Angle

Our internal audience also lives inside Microsoft 365, so Copilot matters. Shopify’s docs explicitly mention Microsoft Copilot as an AI channel for built-in checkout experiences in early access.[Shopify]

That means the commerce layer is becoming cross-surface. Google has AI Mode and Gemini. Microsoft has Copilot. OpenAI has ChatGPT shopping flows. Shopify wants one merchant-side control plane for all of them.

The pattern is the same one we keep returning to in the skills guide and the tools guide: invest in portable standards and clean interfaces first. Vendor-specific features matter, but the durable work is in structured data, tool contracts, identity, and governance.


The Marketing Operations Story: Google’s Own Numbers

Google used Cloud Next to show its own internal adoption of AI-assisted work. Two claims are especially relevant to marketing and engineering leaders.

75 Percent of New Google Code Is AI-Generated

In Sundar Pichai’s Cloud Next post, Google says 75 percent of all new code at Google is now AI-generated and approved by engineers, up from 50 percent last fall. Google also says a complex migration done by agents and engineers was completed six times faster than was possible a year earlier with engineers alone.

This should not be read as “engineers are obsolete.” The better read is that the valuable human work is moving up the stack: specification, constraints, evaluation, architecture, and review.

For marketing teams, the parallel is direct. Campaign platforms already automate parts of bidding, targeting, placement, and creative assembly. Human value moves into the brief, the guardrails, the data quality, the experiment design, and the evaluation criteria.

That is why the SKILL.md pattern matters. A skill is not a clever prompt. It is an operational specification: when to use the workflow, which tools to call, what constraints apply, what output shape is expected, and what failure modes to avoid.
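A SKILL.md file is markdown, but the fields it should pin down map cleanly onto a data structure. The sketch below renders those same five elements as a Python dataclass; the field names and the example skill are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class SkillSpec:
    """The elements an operational skill spec pins down before an agent runs it."""
    name: str
    trigger: str                                     # when to use the workflow
    tools: list = field(default_factory=list)        # which tools to call
    constraints: list = field(default_factory=list)  # brand, policy, budget limits
    output_shape: str = ""                           # what artifact is expected
    failure_modes: list = field(default_factory=list)  # what to refuse or escalate

# Hypothetical marketing skill, for illustration only.
budget_pacing = SkillSpec(
    name="budget-pacing-review",
    trigger="daily, or when spend deviates more than 15% from plan",
    tools=["ads_reporting.read", "pacing_sheet.write"],
    constraints=["never change bids directly", "flag issues, do not fix them"],
    output_shape="markdown summary with a per-campaign pacing table",
    failure_modes=["missing cost data", "currency mismatch across accounts"],
)
```

Anything the spec leaves blank is a decision the agent will improvise at run time, which is exactly what the pattern is meant to prevent.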

Google’s Marketing Team Used Gemini for Creative Production

Pichai also says Google’s marketing teams used Gemini models to generate thousands of creative variations for the Gemini in Chrome launch, reporting 70 percent faster turnaround and a 20 percent increase in conversions compared with previous manual processes.[Google]

This is a first-party Google claim, not an independently audited benchmark. It is useful as a directional signal, not as a guarantee that every brand will see the same uplift.

The practical question for marketing teams is more grounded:

  • Can we generate enough creative variation to test meaningful hypotheses?
  • Do we have guardrails for brand, policy, and claims compliance?
  • Can we connect generated variants to performance data cleanly?
  • Can we reject weak variants quickly without slowing the whole workflow?

If the answer is no, the bottleneck is probably not the model. It is the harness around the model.
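What that harness does is mechanically simple: run every generated variant through guardrail checks, reject failures with a reason, and rank the survivors so testing budget goes to the strongest hypotheses. The sketch below assumes hypothetical guardrails; real brand, policy, and claims checks are far richer.

```python
def run_creative_harness(variants, guardrails, score):
    """Filter generated variants through guardrail predicates, then rank survivors.

    `guardrails` is a list of predicate functions; `score` maps a surviving
    variant to a number so weak variants can be rejected quickly.
    """
    passed, rejected = [], []
    for v in variants:
        failures = [g.__name__ for g in guardrails if not g(v)]
        (rejected if failures else passed).append((v, failures))
    ranked = sorted((v for v, _ in passed), key=score, reverse=True)
    return ranked, rejected

# Hypothetical guardrails, for illustration only.
def no_superlative_claims(v): return "best ever" not in v.lower()
def within_length(v): return len(v) <= 90

ranked, rejected = run_creative_harness(
    ["Best ever shoes!", "Lightweight trail shoe, 240 g", "Shoe"],
    [no_superlative_claims, within_length],
    score=len,
)
```

The rejected list keeps the failure reasons, which is what makes the rejection step fast instead of a manual review bottleneck.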


The Measurement Story: First-Party Tagging Moves Up the Priority List

Google Tag Gateway for advertisers lets teams deploy Google tags using their own first-party domain. Instead of tags being requested from a Google domain, the tag loads from the advertiser’s domain and measurement events route through that domain before being forwarded to Google.

Google’s help documentation positions this as a way to improve conversion measurement accuracy, campaign insights, and signal recovery. A separate Google Ads resource says advertisers who configured Tag Gateway saw an 11 percent uplift in signals.[Google Ads Help]

The January Google Cloud integration makes setup easier for teams already on Google Cloud. Coverage from Search Engine Land describes the Google Cloud path as a one-click workflow using Google’s Global External Application Load Balancer to route tag traffic through the advertiser’s first-party domain before sending it to Google.
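The routing idea can be sketched as a simple forwarding rule: the browser only ever talks to the advertiser's domain, and the load balancer forwards matching measurement paths upstream to Google. This is an illustrative model of the pattern, not Google's implementation; the path prefix, hostnames, and upstream are all hypothetical.

```python
# Illustrative forwarding table: requests to the first-party domain under
# this path prefix are relayed upstream to Google.
UPSTREAMS = {
    "/metrics/": "https://www.googletagmanager.com",  # hypothetical prefix
}

def route(request_path: str, first_party_host: str = "shop.example.com") -> dict:
    """Decide where a request to the advertiser's own domain is forwarded."""
    for prefix, upstream in UPSTREAMS.items():
        if request_path.startswith(prefix):
            # The browser sees only first_party_host; the upstream call
            # happens server-side at the load balancer.
            return {
                "served_from": first_party_host,
                "forwarded_to": upstream + request_path[len(prefix) - 1:],
            }
    return {"served_from": first_party_host, "forwarded_to": None}
```

The measurement consequence is the useful part: tag requests are first-party from the browser's point of view, which is what improves signal durability.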

Why This Connects to Agentic Commerce

First-party tagging does not solve agent-mediated attribution. If a user discovers, compares, and buys through an AI channel without visiting the site in the traditional way, there may be no normal site session to measure.

But it does solve an adjacent problem: preserving the measurement signals you still control.

In practice, teams need both:

  • stronger first-party measurement on owned surfaces
  • new reporting logic for AI-mediated discovery and commerce surfaces

That is the same lesson from the AI Commerce Lab: synthetic signals are easy to produce, but observed validation is hard. As commerce shifts into agent surfaces, the measurement layer needs to become more explicit, more server-side, and more honest about uncertainty.


The Security Story: Agent Identity Becomes a Real Requirement

This is where Cloud Next becomes immediately relevant to ad-tech engineering.

Marketing teams want agents that can access Google Ads, DV360, BigQuery, analytics exports, creative repositories, CRM data, and commerce feeds. Many of those systems still rely on OAuth flows tied to human accounts. That creates a familiar problem: the agent inherits a person’s scope.

That is manageable for a one-person prototype. It is not acceptable for a multi-client, multi-account, enterprise agent fleet.

The risks are specific:

  • you cannot revoke the agent without affecting the human account
  • you cannot audit the agent separately from the human
  • you cannot enforce least privilege for the agent
  • credential rotation is tied to a person’s workflow
  • a compromised agent may inherit the full blast radius of the user’s access

EchoLeak showed why this matters. The Microsoft 365 Copilot vulnerability CVE-2025-32711 was reported as a zero-click flaw where a crafted email could influence Copilot and exfiltrate sensitive data without the user taking action. Microsoft mitigated the issue, but the architectural lesson remains: agents that read user context and act inside enterprise systems need their own boundaries.

What Cloud Next Announced That Addresses This

Google’s Agent Platform includes Agent Identity, Agent Gateway, Agent Registry, and Agent Observability as governance features for enterprise agents. In practical terms, this is the move from “an agent borrowed a person’s credentials” to “an agent has its own identity, its own permission boundary, and its own audit trail.”

That distinction matters for campaign and data workflows. If a campaign optimization agent accesses Google Ads, DV360, BigQuery, or a warehouse-backed reporting tool, the action should be logged against the agent’s identity, not hidden inside a human OAuth session. The agent should be revocable without revoking the human. It should have a narrower scope than the person who created it. It should be possible to answer a basic audit question: what did the agent do, when, and under which policy?

Agent Gateway is the second important piece. It is designed to sit between agents and the tools, services, or other agents they call. That is where policy enforcement belongs: which tools can be called, which protocols are allowed, which requests are blocked, and which high-risk actions require approval. Coverage from SiliconANGLE describes Agent Gateway as enforcing policy on agent-to-agent and agent-to-tool connections, with awareness of MCP and Agent2Agent.[SiliconANGLE]
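The decision logic a gateway enforces can be made concrete with a toy policy table. This is a sketch of the pattern, not Google's Agent Gateway API; the agent names, tool names, and policy fields are all hypothetical.

```python
# Per-agent policy table, for illustration only. A real gateway has richer
# semantics (account mapping, data classes, time windows, protocol awareness).
POLICIES = {
    "campaign-optimizer": {
        "allowed_tools": {"ads.report", "ads.update_budget"},
        "requires_approval": {"ads.update_budget"},  # high-risk actions gated
        "blocked_protocols": {"smtp"},
    },
}

def gateway_decision(agent_id: str, tool: str, protocol: str = "https") -> str:
    policy = POLICIES.get(agent_id)
    if policy is None:
        return "deny: unknown agent"          # unregistered agents never pass
    if protocol in policy["blocked_protocols"]:
        return "deny: protocol blocked"
    if tool not in policy["allowed_tools"]:
        return "deny: tool not allowed"
    if tool in policy["requires_approval"]:
        return "hold: human approval required"
    return "allow"
```

Two properties matter here: default deny for anything unregistered, and a "hold" state that routes high-risk actions to a human instead of silently allowing or blocking them.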

Model Armor adds runtime inspection around model and agent interactions. Google’s Model Armor documentation describes detection for prompt injection, sensitive data leakage, unsafe URLs, harmful content, and related risks. This is the relevant control category for the EchoLeak pattern: if an agent is coerced into embedding sensitive data into an outbound URL or generated response, the platform needs a layer that inspects the request before it leaves the system. Model Armor does not eliminate that risk on its own, but it is aimed at the right failure mode.
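To see why outbound inspection targets the EchoLeak failure mode, consider a minimal filter over URLs an agent wants to emit. The two detectors below are toys for illustration; a production inspection layer uses far more than a couple of regexes, and this is not Model Armor's implementation.

```python
import re

# Toy detectors for sensitive values smuggled into an outbound URL.
SENSITIVE = [
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # email address
    re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{8,}\b"),             # key-like string
]

def inspect_outbound_url(url: str) -> bool:
    """Return True if the URL looks safe to emit, False if it should be blocked."""
    return not any(p.search(url) for p in SENSITIVE)
```

The structural point survives the toy detectors: the check runs on the agent's output before it leaves the system, regardless of what prompt coerced the agent into producing it.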

There is also a behavioural layer. Google lists Agent Observability and evaluation as part of the agent platform, and third-party reporting from Cloud Next describes broader monitoring and governance capabilities around deployed agents.[Google Cloud][CRN] The important principle is not a specific anomaly-detection label. It is that agent systems need to detect unusual patterns: unexpected tool calls, changed reasoning traces, abnormal outbound destinations, or access patterns that do not match the agent’s normal job.

Finally, Wiz matters. Google announced its agreement to acquire Wiz in 2025 for $32 billion, and the Wiz team used Cloud Next to expand AI Application Protection Platform coverage across AI studios, agent platforms, cloud environments, and agent-assisted remediation.[Google Cloud][Wiz] The direction is clear: agent security is becoming part of cloud security operations, not a separate AI side project.

This does not magically remove agent risk. It does show the shape of the control plane:

  • agents need separate identities
  • tool and API access should be gateway-mediated
  • prompts and responses need runtime inspection
  • agent behavior needs observability and evaluation
  • high-risk actions need policy gates and human approval

That is the same architecture we explored in the security harness article and the AI Harness Lab: agents can plan and execute, but the platform must enforce what is allowed.

The companion AI Harness Lab control-plane module now turns this architecture into concrete mechanics. The patterns are not abstract:

  • Approval State Machine shows how proposed agent actions move through approval, execution, rejection, and audit states.
  • Token Bucket shows how a gateway prevents runaway tool calls and protects API quotas.
  • Lease-Locked Queue shows how autonomous runtimes avoid duplicate execution when multiple workers process approved actions.
  • RBAC / ABAC Permission Graph shows how agent identity, role, account scope, action type, and data class combine into an allow/deny decision.
  • Audit Hash Chain shows how agent action logs can become tamper-evident.

Together, these are the deterministic mechanics behind the control-plane pattern: identity defines who the agent is, policy defines what it can do, the gateway enforces the decision, and the audit layer records what happened.
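Of those mechanics, the audit hash chain is the most compact to show end to end: each log entry commits to the hash of the previous one, so any edit to history breaks verification. This is a minimal sketch of the general technique, not the Lab module's exact code.

```python
import hashlib
import json

def append_entry(chain: list, action: dict) -> None:
    """Append an agent action to a tamper-evident log."""
    prev = chain[-1]["hash"] if chain else "genesis"
    payload = json.dumps({"prev": prev, "action": action}, sort_keys=True)
    chain.append({
        "prev": prev,
        "action": action,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    })

def verify_chain(chain: list) -> bool:
    """Recompute every link; any edited or reordered entry fails verification."""
    prev = "genesis"
    for entry in chain:
        payload = json.dumps({"prev": prev, "action": entry["action"]}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

Tamper evidence is the honest claim here: the chain cannot prevent an attacker with write access from appending, but it makes silent rewriting of past agent actions detectable.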

The OpenClaw Gateway Pattern

For teams that want to understand the pattern before buying a managed platform, an open gateway architecture is still useful. The AI Harness Lab is evolving around the same idea: agent actions should pass through a governed runtime with scoped permissions, audit logs, and approval boundaries.

The practical translation for ad-tech is simple. Do not give campaign agents personal OAuth tokens and hope audit logs will be enough. Put a gateway between the agent and the advertising APIs. Let the gateway own credentials, scopes, account mapping, and logging.

That is more engineering work upfront. It is also the difference between a useful internal copilot and an ungoverned automation risk.
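Part of that upfront work is deterministic call-rate control at the gateway, the token bucket pattern mentioned above. A minimal sketch, assuming one bucket per agent per tool:

```python
import time

class TokenBucket:
    """Gateway-side rate limiter: each tool call spends a token, and tokens
    refill at a fixed rate, capping both burst and sustained call volume."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill lazily based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Placed in the gateway rather than in the agent, the limit holds even when a misbehaving agent loops: the runaway tool calls are denied before they hit the advertising API's quota.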


The Platform Landscape

Cloud Next clarifies where Google is positioning itself, but it does not eliminate the rest of the market.

Google is building the broadest enterprise agent stack around Gemini Enterprise, Vertex AI capabilities, Agent Platform, Model Garden, Google Cloud data services, and security controls. Its strength is the integrated stack: models, data, runtime, identity, security, and infrastructure in one cloud.

Microsoft is strongest where the work already lives inside Microsoft 365. Copilot Studio and Azure AI Foundry are attractive when the workflow is Teams, Outlook, SharePoint, Power BI, or Entra-governed enterprise data.

OpenAI is strongest as a high-quality agent and coding experience. Codex is not trying to be a commerce protocol or fleet management layer. It is more useful as a specialist execution environment for software and workflow building.

Open-source and self-hosted stacks remain important classrooms. They make the architecture visible: skills, tools, gateways, sandboxes, memory, identity, and approval gates. Even if the production endpoint is managed, understanding the open pattern makes you a better buyer and a safer builder.

The strategic point: these are not exclusive choices. Most teams will use several. The portable layers matter most:

  • skill definitions
  • tool schemas
  • MCP servers
  • product data contracts
  • gateway and identity patterns
  • evaluation and audit logs

Those are the assets that survive platform changes.


Customer Signals: What Unilever, Walmart, and Google Are Doing

Google’s Cloud Next customer material is broad, but several stories are relevant to marketing and commerce teams.

Google’s enterprise agents post highlights companies using Gemini agents across procurement, customer experience, retail operations, and internal data access. The details vary by customer, but the pattern is consistent: agents are being attached to real enterprise data and operational workflows.

For marketing teams, the interesting lesson is the operating model:

  • agents are being used where workflows are repeatable
  • value depends on enterprise data access
  • governance and auditability are part of the deployment, not an afterthought
  • the system is useful only when it connects to action, not just analysis

Google’s own marketing example is the clearest marketing-specific signal. The company says its teams used Gemini to produce thousands of creative asset variations for the Gemini in Chrome launch, with faster turnaround and stronger conversion outcomes than previous manual processes.[Google]

Again: treat the numbers as a Google-reported case, not a universal benchmark. But the workflow direction is hard to ignore.


What to Do Next

If You Work in Performance Marketing or E-Commerce

Audit your product data for agent readability. Can a model or commerce agent resolve the product title, category, variant, price, inventory, shipping details, returns policy, compatibility, and comparison attributes without guessing?

Start with Merchant Center, structured data, and Shopify or commerce-platform feed settings. Then compare what your site says, what your feed says, and what AI surfaces say about the same product. Gaps between those layers are where agentic commerce visibility breaks.
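That audit can start as a trivial completeness check over the feed. The required-attribute list below is hypothetical and deliberately short; the authoritative list is the Merchant Center product data specification for your feed type and country.

```python
# Hypothetical required attributes, for illustration only.
REQUIRED = ["title", "price", "availability", "shipping", "gtin_or_mpn", "link"]

def feed_gaps(product: dict) -> list:
    """Return the attributes an agent could not verify for this product."""
    return [a for a in REQUIRED if not product.get(a)]

gaps = feed_gaps({
    "title": "Trail Shoe X1, size 42",
    "price": "89.00 EUR",
    "availability": "in_stock",
    "link": "https://shop.example.com/x1",
})
# Missing shipping and identifier data is exactly the kind of gap that keeps
# a product out of an agent's verified decision set.
```

Running a check like this across the whole catalog, then diffing the results against what the site and AI surfaces say, makes the visibility gaps measurable instead of anecdotal.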

Evaluate Tag Gateway if you depend heavily on Google Ads, GA4, or Floodlight measurement. It will not solve every attribution problem, but it can improve the durability of the Google measurement signals you still control.

If You Work in Ad Tech or Marketing Engineering

Build or review your MCP layer. Google’s platform supports open protocols such as MCP and A2A, and the broader ecosystem is moving in the same direction.[Google Cloud] An MCP server you build for campaign metrics, creative QA, or warehouse access should be designed as a reusable capability, not as a one-off bot integration.
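What "designed as a reusable capability" means in practice is a tool contract: name, input schema, output shape, and required scope, written down once and reused across surfaces. The sketch below is in the spirit of MCP-style tool definitions but is NOT the MCP wire format; every field name and value is illustrative.

```python
# Illustrative tool contract for a campaign-metrics capability.
CAMPAIGN_METRICS_TOOL = {
    "name": "campaign_metrics.query",
    "description": "Read-only daily metrics for one campaign.",
    "input_schema": {
        "type": "object",
        "properties": {
            "campaign_id": {"type": "string"},
            "date_range": {"type": "string", "enum": ["7d", "30d", "90d"]},
        },
        "required": ["campaign_id"],
    },
    "output": "rows of {date, impressions, clicks, cost, conversions}",
    "required_scope": "ads.reports.read",   # least privilege: no write scope
}

def validate_call(tool: dict, args: dict) -> bool:
    """Minimal check that a call supplies the contract's required inputs."""
    required = tool["input_schema"]["required"]
    return all(k in args for k in required)
```

The contract, not the bot that first used it, is the reusable asset: the same definition can back a chat integration today and a fleet-managed agent tomorrow.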

Then review identity. For every agent-like workflow, ask:

  • what account does it use?
  • what systems can it access?
  • can access be revoked independently?
  • is there a gateway between the agent and external services?
  • are prompts, responses, and tool calls logged?
  • which actions require approval?

If the answers are unclear, the next engineering task is not “make the agent smarter.” It is “make the agent governable.”

If You Are Evaluating Platforms

Do not choose a platform as a single bet. Choose which layers you want to manage yourself and which layers you want to delegate.

Most organizations will end up with a mix:

  • Copilot for Microsoft 365 workflows
  • Codex or similar tools for code and automation
  • Google or Azure for managed agent runtime, data, identity, and security services
  • open-source harnesses for learning, experimentation, and portable patterns

Invest first in the layers that travel: skills, tool contracts, clean data, gateway patterns, and evaluation.


What We Do Not Know Yet

Consumer adoption of agent-mediated purchasing is still early. Shopify, Google, Microsoft, and OpenAI are building the infrastructure, but user behavior will decide how quickly these surfaces become material revenue channels.

Attribution is not solved. First-party tagging improves part of the measurement stack, but it does not fully explain what happens when discovery and checkout happen inside an AI conversation.

UCP availability is uneven. The protocol and related commerce tooling are real, but rollout depends on merchant eligibility, platform support, geography, and channel-specific terms.

Security controls are still being tested in production reality. Agent Identity, Gateway, Model Armor, and observability are the right categories of control. They still need implementation discipline, red teaming, and policy design.

The competitive landscape is fluid. Google, Microsoft, OpenAI, Shopify, Meta, and commerce platforms are all positioning for agent-mediated transactions. Their incentives are different. The advertising and attribution models are not settled.

These unknowns are not reasons to wait. They are reasons to build literacy now: product data discipline, tool contracts, governed agents, and measurement systems that do not assume the web journey still looks like 2016.


Reference Reading


Previous in this series: Chrome’s AI Mode Upgrade · Meta’s Moltbook Acquisition · The Agent Architect’s Playbook · The Code Agent’s Playbook

Companion resources: AI Harness Lab: Agent Control Plane · Control Algorithm Visualizers · github.com/ai-knowledge-hub/ai-harness-lab · github.com/ai-knowledge-hub/ai-skills-guide