Google Cloud Next '26 recap: The AI infrastructure shift marketing teams need to understand


Key takeaways

  • Google is building AI into every layer of its stack, which means the infrastructure for agentic AI is maturing faster than most teams realize.
  • BigQuery has evolved from a data warehouse into the foundation for AI-powered marketing workflows, and having your data there puts you in a much stronger position to take advantage of what's coming next.
  • The biggest challenge with AI agents is making their outputs trustworthy and consistent enough that teams will actually rely on them.
  • AI agents work best when an experienced human is driving them. A knowledgeable practitioner guiding an agent outperforms either one operating alone.
  • Getting AI-ready is all about connecting your systems, cleaning your data, and building the organizational habits that let humans and AI work together.

As AI solutions continue to mature, marketing and data teams are wrestling with how to move from the AI they're using today — chat interfaces, one-off prompts, manual copy-paste workflows — to something that operates inside their systems and acts on what it learns.

That gap was the defining theme of Google Cloud Next this year. The event’s big message was that the infrastructure for agentic AI is here, and the teams who understand what to do with it are going to have a real advantage over those still figuring out where to start.

Here's what stood out from our time there.

Agents are no longer a pilot program

We already know that AI agents can do impressive things. Gartner projects that by 2027, more than 50% of business decisions will be automated or augmented by AI agents, a figure highlighted at Google Cloud Next that only underscores the point. Now, marketing and data teams are working out how to move AI agents from proof of concept into production across the whole organization.

What I noticed at the event this year is that the sessions weren't really about what agents can do anymore. Instead, they focused on architecture, governance, evaluation, and scale, helping answer questions like:

  • How do you build a multi-agent system that works reliably?
  • How do you know when an agent is performing well enough to trust?
  • How do you keep it from going off-script?

The infrastructure Google announced to support this — the Gemini Enterprise Agent Platform (the evolution of Vertex AI), Agent Studio for low-code agent building, and pre-built Agent Starter Pack templates — is notable mostly because it signals that Google is building the on-ramp for widespread agentic AI.

With these solutions, you no longer need a dedicated AI engineering team to get a first agent into production. The way I think about it is that Google is handing everyone the keys to an F1 car. Our job — for our own team and for our clients — is learning to drive it well.

Building agents is the easy part

One conversation that kept coming up in the sessions I attended was around what it takes to make an agent trustworthy.

The challenge is that most AI outputs are probabilistic, not deterministic. A model might give you a slightly different answer each time you run it. That’s fine for a lot of use cases, but it becomes a serious problem when you're trying to build a workflow that a marketing team is going to depend on for campaign decisions, anomaly detection, or budget reporting. The closer AI moves to those kinds of consequential tasks, the more the consistency question matters.

To address this, Google introduced agent evaluation capabilities as part of the Gemini Enterprise Agent Platform. These are essentially tools that measure how well an agent is performing, track improvement over iterations, and surface inconsistencies before they become a problem in production. It's a meaningful step toward treating agent development more like software engineering: with testing, benchmarking, and iterative improvement. This is something we think about a lot in our own work, as our clients need accuracy and trust before they'll hand anything important to an agent.
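The details of Google's evaluation tooling aside, the underlying idea is easy to sketch. The short Python below (a toy illustration, not Google's API; `toy_agent` and the scoring approach are hypothetical) runs the same prompt through an agent several times and measures how stable the answer is before trusting it:

```python
from collections import Counter

def consistency_score(agent, prompt, runs=5):
    """Run the same prompt several times and report the agent's most
    common answer plus how often it gave that answer.

    `agent` is any callable that maps a prompt string to an answer."""
    answers = [agent(prompt) for _ in range(runs)]
    most_common, count = Counter(answers).most_common(1)[0]
    return most_common, count / runs

# A toy deterministic "agent", purely to illustrate the harness.
def toy_agent(prompt):
    return "pause campaign" if "overspend" in prompt else "no action"

answer, score = consistency_score(toy_agent, "overspend detected on campaign 12")
# A production gate might require, say, score >= 0.9 before the
# agent's answer is allowed to drive a consequential action.
```

The point isn't the scoring method itself; it's that agent quality becomes something you measure against a threshold, not something you eyeball.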

The teams who will get real value from agents are the ones who approach it like bringing on a new team member. Define the scope clearly. Test it against a standard before giving it more responsibility. And don't hand it the wheel on anything consequential until it has demonstrated it can handle it. An agent driven by someone who deeply understands the domain will produce better results than an agent left to run on its own, which means the expertise your team has built up matters more, not less.

BigQuery is now the foundation for AI-powered marketing workflows

I spent most of my time at Next in BigQuery sessions, which probably isn't surprising. What is maybe surprising is that BigQuery is no longer being positioned solely as a place to store and query data. Instead, it’s increasingly becoming the foundation on which AI-powered marketing workflows get built. Agents need data to be grounded in reality, and BigQuery is where that grounding happens.

A few announcements made this concrete:

Move from batch queries to live data streams

Continuous queries are now generally available, which means BigQuery can process streaming data in real time instead of running batch queries against a static snapshot. For marketing teams, that's the difference between an agent that tells you what happened yesterday and one that's reacting to what's happening now. And because continuous queries can trigger downstream events or agents, that reaction can kick off an entire workflow automatically.
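Continuous queries themselves are written in SQL inside BigQuery, but the batch-versus-streaming distinction is easy to sketch in plain Python (a toy illustration, not BigQuery's actual interface): act on each event as it arrives and fire a downstream action immediately, instead of waiting to scan yesterday's snapshot.

```python
def process_stream(events, on_anomaly, threshold=0.5):
    """Process events one at a time (streaming) rather than all at
    once (batch), triggering a downstream action the moment a
    condition is met -- e.g. kicking off an agent or an alert."""
    for event in events:
        if event["conversion_rate"] < threshold:
            on_anomaly(event)

alerts = []
process_stream(
    [{"campaign": "a", "conversion_rate": 0.9},
     {"campaign": "b", "conversion_rate": 0.2}],
    on_anomaly=alerts.append,
)
# Only the underperforming campaign lands in `alerts`, and it lands
# there as soon as its event arrives, not at the next batch run.
```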

The Knowledge Catalog announcement is also significant, as it gives agents a dynamic map of your organization's data estate so they can find and use the right information rather than guessing or hallucinating.

Query your data where it lives, regardless of cloud

The announcement that got the most reaction in our group was the cross-cloud lakehouse capability. Using Apache Iceberg, BigQuery can now query data sitting in AWS S3 or Azure Data Lake without moving it first.

This comes up often with our clients, some of whom have data living in AWS or Azure while the rest of the stack is in Google Cloud. Previously, getting an agent to work with that data meant paying egress costs to bring it into Google Cloud first. Now you can point the agent at it where it lives.

Keep AI outputs aligned with your existing data definitions

There's also a meaningful implication here for how agents access and interpret data. The LookML Agent announcement addresses a problem anyone who's worked with data teams will recognize: getting an AI to use the same metric definitions your analysts use, rather than calculating things its own way.

By grounding agent outputs in the existing Looker semantic layer, you get answers that match what your dashboards show. This sounds basic, but it’s a significant step toward making agents something finance and marketing leadership will trust.
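To make the semantic-layer idea concrete, here's a minimal Python sketch. It's a hypothetical stand-in, not LookML itself: metric formulas live in one shared place, and both dashboards and agents compute from those definitions instead of improvising their own.

```python
# Hypothetical shared metric definitions, standing in for a semantic
# layer like LookML. Anything that reports a metric -- dashboard,
# agent, ad-hoc script -- pulls the formula from here.
METRICS = {
    "conversion_rate": lambda row: row["conversions"] / row["sessions"],
    "cpa": lambda row: row["spend"] / row["conversions"],
}

def compute(metric, row):
    """Evaluate a named metric against a row of campaign data."""
    return METRICS[metric](row)

row = {"conversions": 50, "sessions": 1000, "spend": 2500.0}
compute("conversion_rate", row)  # 0.05 -- the same value a dashboard shows
compute("cpa", row)              # 50.0
```

An agent answering "what's our CPA?" from this layer can't drift from the number leadership sees, which is exactly the trust problem the LookML Agent is aimed at.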

Taken together, these announcements make a strong case that having your data in BigQuery — or at least queryable by BigQuery — is becoming a prerequisite for taking advantage of what Google is building on top of it. This isn't new thinking, exactly. But the pace at which the AI layer is maturing makes it more urgent.

More to come soon on this topic: we're working on a dedicated post covering what your BigQuery environment needs to look like before you start building agents on top of it.

MCP integrations: The missing link between AI and your marketing stack

One of the more technically interesting discussions at Cloud Next centered on Model Context Protocol (MCP), a standard that allows AI agents to connect directly to the tools and platforms your team already uses. Google has expanded its list of supported MCP integrations substantially, which matters because the real power of an agent is what it can do when it's plugged into your actual systems.

When an agent is connected to your ad platforms, your analytics stack, or your CRM, it can act on what it learns rather than just surfacing it for a human to act on later. Here are a few examples of what that could look like in practice:

  • A campaign monitoring agent watches your ad platforms continuously and surfaces anomalies the moment they happen, without anyone having to log in and check.
  • An agent connected to your BigQuery marketing data can flag a drop in conversion rate before it shows up in the weekly report.
  • A media agent that detects a pacing issue can alert the right person immediately rather than waiting for a scheduled review.
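As a concrete (and entirely hypothetical) illustration of that last example, here's a minimal pacing check in Python: compare spend-to-date against linear pacing through the month and flag anything drifting beyond a tolerance. The function name and thresholds are ours, not any platform's API.

```python
import calendar
from datetime import date

def pacing_alert(spend_to_date, monthly_budget, today, tolerance=0.10):
    """Flag a media pacing issue by comparing actual spend against the
    linear spend expected this far into the month."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    expected = monthly_budget * today.day / days_in_month
    drift = (spend_to_date - expected) / monthly_budget
    if abs(drift) > tolerance:
        return f"Pacing off by {drift:+.0%} of budget"
    return None  # within tolerance, no alert

pacing_alert(9000, 10000, date(2025, 6, 15))  # overspend -> alert string
pacing_alert(5000, 10000, date(2025, 6, 15))  # on pace -> None
```

An agent wired up via MCP would run a check like this continuously against live platform data and route the alert to the right person, rather than waiting for a scheduled review.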

Where we go from here

Google Cloud Next made it clear that Google is playing a long game on AI infrastructure by building out the full stack, from the hardware layer up to the interfaces that business users will touch. For marketing and data teams, the foundational work can start right now. Get your data in order, figure out which of your systems can connect to agents, and start with one workflow worth automating.

If you want to talk through how any of this applies to your specific setup, we'd love to have that conversation. And stay tuned for our follow-up post on all things BigQuery.

Your first-party data works harder when it's connected

AI agents are only as reliable as the data behind them. Data clean rooms are one of the most powerful ways to close the gap between your marketing spend and real business outcomes. Our introduction to data clean rooms eBook walks through how to get started, from foundational measurement to full media optimization.

Let's work together