The State of AI in B2B Marketing Report
by The Growth Syndicate
24 November 2025

Key highlights:

  • Most teams believe AI will help their work, yet current use lags behind potential, with benefits rated 8.8 out of 10 on average and execution at 6.4 out of 10, and only 26 percent rating execution at 8 or higher.
  • Self‑reported knowledge sits between belief and execution, averaging 7.4 out of 10, with about half rating 8 or higher, which signals a steep drop from conviction to practical capability.
  • Trust in AI outputs is moderate at 5.8 out of 10, so a verify‑before‑publishing workflow is the norm rather than the exception.
  • Adoption concentrates on tactical tasks, with 91 percent using AI for content creation and 79 percent for productivity, while more strategic uses such as lead scoring at 31 percent and personalization near 25 percent trail significantly.
  • Skill gaps are the primary brake on progress, with 60 percent citing the need for training and 47 percent citing a lack of internal expertise, while only 25 percent point to budget constraints.
  • Many worry about sameness, with 63 percent concerned that AI increases noise and reduces differentiation as output volume rises.
  • Governance is thin at this stage, with around 50 percent lacking formal AI policies and roughly 20 percent reporting no clear owner for AI adoption, which weakens consistency and measurement.
  • Spending is rising without clear ROI visibility, as approximately 83 percent expect higher AI spend in the next 12 months while about 60 percent have no dedicated AI budget line.
  • Tool usage clusters around general‑purpose systems, with reported adoption of ChatGPT at 97 percent, Claude at 64 percent, Perplexity at 57 percent, and Gemini at 49 percent, while niche applications see far less use.

Introduction

AI has moved from experimental to operational in B2B marketing faster than most organizations were prepared for. If you lead a marketing function, own a P&L, or manage portfolio companies, you're already making decisions about where AI fits into your operations and where it doesn't belong.
The question is no longer whether to adopt AI, since for the vast majority of organizations this choice is no longer on the table. The priority now is understanding how to execute when everyone has access to the same capabilities.
What we've learned over the past year, implementing AI across our own operations and working with dozens of companies at different stages, is that the gap between adoption and value is wider than it should be.
Most teams have integrated AI into their workflows. When you ask what's actually improved, they point to time saved on first drafts, not revenue growth or customer acquisition.
Marketing leaders are increasing AI budgets in the same quarter they're being asked to prove ROI on current tools. Practitioners use ChatGPT daily but rate their overall AI capability as low.
The way we see it, this adoption is driven by a fear of falling behind and belief in potential, not by systematic capability building or clear measurement frameworks.
Speed improvements are real, but speed alone doesn't create competitive advantage. What matters is where that speed compounds into better decisions, stronger positioning, or more efficient go-to-market motion.

Why we wrote this

We firmly believe that the conversation about AI in marketing has been stuck between two useless extremes.
On the one hand, vendors promise transformation and 10x productivity gains, conveniently leaving out the infrastructure, training, and process changes required to get there. On the other hand, influencers promote sophisticated future use cases while teams struggle with basic implementation.
Neither perspective helps you make better decisions.
We wanted to provide evidence instead of opinions. Over the past year, we've implemented AI across our own operations and client work, not as experiments but as production systems delivering actual outcomes.
We've surveyed 110 B2B marketing leaders spanning Series A startups to PE-backed companies doing $500M+ in revenue, capturing data on adoption patterns, capability levels, and business impact.
We interviewed expert practitioners who've integrated AI into real operations under real constraints, extracting patterns about what separates teams that scale from teams that stay stuck.
We studied academic research on AI competencies in B2B marketing to ground our observations in broader evidence.
This report pulls all of that together.

What this report covers

Section 1: What 110 marketing practitioners told us
  • The confidence-competence gap. Why marketers believe AI is critical but can't extract value
  • The differentiation paradox. Why most think AI creates sameness, not competitive advantage
  • Usage patterns. What teams are actually doing versus what they say they're doing
  • Barriers. Why adoption is high but implementation is shallow
  • The measurement crisis. Why 70% can't prove ROI but keep increasing spend
Section 2: AI tool map
  • A graphic map of the AI tools most commonly used by our 110 respondents, highlighting where adoption concentrates around core platforms and how niche apps show up.
Section 3: Insights from expert practitioners
  • What separates teams that scale from teams that stay stuck
  • Real implementation lessons from founders, fractional CMOs, and growth leaders
  • Why some practitioners achieve high AI capability while most remain stuck experimenting
  • Patterns across successful integrations (not vendor case studies with perfect results)
Section 4: Where we stand on AI
  • The specific decisions we've made about where AI creates value and where it doesn't
  • Boundaries we've set around what AI shouldn't touch
  • Three organizational paths based on where you're starting from
  • Two critical capabilities every marketer needs to develop
  • The choice between creative strategic leadership and technical AI orchestration
  • How to avoid getting stuck in the dangerous middle

What we're actually seeing

The state of AI in B2B marketing right now is messy. Adoption is high but competence is low. Investment is increasing while ROI remains unclear. Everyone believes AI matters but far too many can't figure out how to use it effectively.
Most marketing leaders are stuck in the dangerous middle. Not creative or strategic enough to differentiate through taste and judgment. Not technical enough to extract maximum value through systematic implementation. They're trying to be equally proficient at both paths when the data shows you need to choose one and maintain the other at a competency level just high enough to collaborate effectively.
But messy doesn't mean hopeless.
There's genuine opportunity for teams that can execute while everyone else is still figuring out the basics.
The question is how you execute when everyone has access to the same tools, the same models, the same capabilities.
Here's what we've learned.

1. Our survey: How B2B marketing teams view and use AI

Most of what we hear about AI in marketing comes from two sources that aren't particularly useful for making decisions: vendor case studies that highlight perfect implementations under ideal conditions, and individual success stories that may or may not transfer to your context.
What's missing is a broader view of what's actually happening across different types of organizations, particularly in the environment where most B2B marketing work actually gets done.
We surveyed 110 B2B marketing leaders to fill that gap. The sample skews toward smaller operations, which turns out to be useful rather than limiting. Sixty-five percent work at companies under 50 employees. About two-thirds hold VP-level positions or above.

This represents the context where most marketing teams operate. Small groups, constrained budgets, limited infrastructure, real tradeoffs about where to invest time and attention.
What we found was less a story of smooth transformation and more a story of meaningful gaps. There's distance between what people believe AI can do and what they can actually get it to do. Investment is rising while measurement remains unclear. The tools teams adopt easily often differ from the ones that could truly reshape strategic work.
Our priority in analyzing the data was identifying patterns that might help explain these gaps and what they reveal about the current state of AI adoption in practice. The full dataset will be made available on our Substack for those interested in examining the underlying evidence more closely.
Here are some key patterns we identified:
  • There's considerable distance between confidence in AI's potential, knowledge of how to use it effectively, and actual execution capability. The drop-off at each stage is significant.
  • When you ask practitioners directly whether AI makes differentiation easier or harder, more say harder. That's worth understanding given how much of the adoption narrative assumes AI creates competitive advantage.
  • Usage concentrates heavily on content creation and productivity tools while strategic applications like lead scoring and personalization lag substantially.
  • The main barrier to deeper adoption isn't budget constraints, it's skills and expertise.
  • Most teams report increasing AI spending over the next twelve months despite being unable to clearly measure returns from current investments.
All five patterns are consistent with how technology adoption unfolds in practice, as opposed to how it appears in theory. These disconnects aren't necessarily immediate problems that require a solution right away.

They might represent natural stages in how organizations build new capabilities over time. However, they do suggest that the path from experimentation to systematic value creation is longer and more complex than early adoption narratives typically acknowledge.
What follows is an examination of each pattern, what the data shows, why it might be happening, and what it suggests about how teams should think about their own AI implementation.

The belief-reality gap

Let's start with what appears to be a straightforward question: How confident are B2B marketers that AI will benefit their work?
The answers show plenty of optimism: about 86 percent of respondents rated AI's potential benefits at 8 out of 10 or higher, with an average of 8.8. By most measures, this looks like consensus.
To what extent do you agree that AI developments will be beneficial to your work?
8.8 average rating (110 respondents)
But confidence in potential doesn't necessarily translate to confidence in capability.
When the question shifts to how well marketers understand AI, the numbers soften. Half of respondents rate their knowledge at 8 or higher. That's still substantial, but it represents a notable drop from the 86% who believe in its benefits. The average knowledge rating is 7.4.
How up to speed do you feel with developments in AI?
7.4 average rating (110 respondents)
The gap widens further when we move to execution. Only 26% rate their team's current AI leverage at 8 or higher. That's less than one-third of those who believe strongly in AI's potential. The average execution rating sits at 6.4.
How well do you feel your team leverages AI in its work today?
6.4 average rating (110 respondents)
So we have a pattern: 86% believe strongly, 50% rate their knowledge highly, and 26% report strong execution. This could be read as a natural progression from conviction to capability. Or it might indicate that expectations are outpacing what teams can deliver today.
Another way to read the belief–execution pattern is by role. Belief is high everywhere, but capability tightens or loosens depending on structure. Fractional leaders show the closest alignment: knowledge at 7.81 and execution at 7.25 against belief of 9.12. CMOs are similar, at 7.50 knowledge and 6.60 execution with belief at 8.60.
The belief-execution gaps between roles
How different roles navigate from confidence to capability (from 0 to 10)
Marketing Managers land in the middle, with knowledge at 7.35 and execution at 6.40 versus belief at 9.20. The widest gap appears for Marketing Leaders (VP/Head), where belief sits at 8.70 but execution trails at 5.55.

In short: belief is a constant, while execution changes based on operating model. Fractional leaders execute highest on average, while VP-level leaders show the most distance between conviction and what teams can do today.
That pattern can feel counterintuitive. Conventional thinking suggests full‑time, senior roles should translate into stronger capability, but the data points the other way. There are two plausible explanations: economic pressure pushes fractional leaders to prioritize speed and outcomes, and working across multiple companies compresses feedback loops, accelerating both learning and execution.
Respondents also surfaced a gap between public perception and internal reality. LinkedIn timelines showcase wins; inside teams, implementations can feel clunky, outputs need heavy editing, and time savings are hard to quantify. One respondent called this the “LinkedIn versus reality” problem.
The belief–execution gap isn't closing on its own. Belief is uniformly high and likely to remain so, while execution appears bounded by skills and process maturity, which often lag tool adoption. Nor is the gap evenly distributed: operating models matter, with fractional leaders closest to closing it and VP-level leaders showing the most distance between conviction and current capability.

The differentiation paradox

Here's a question that gets at something fundamental about how marketers view AI's competitive implications: Will AI make it easier or harder to differentiate your brand?
The most common answer, chosen by 47%, was "both, depends on execution." That's a reasonable position. Essentially, this view recognizes that results hinge more on implementation than on the tool itself.
Among those who took a clearer stance, the split is less even but not as dramatic as it might first appear. Thirty-four percent said AI will make differentiation harder. Sixteen percent believe it will create new opportunities for standing out. Another 3% aren't sure yet.
What impact do you expect AI to have on differentiation in B2B marketing?
110 respondents
Both — depends on execution
It will make differentiation harder (more sameness)
It will create new opportunities for standing out
Not sure yet
So the "harder" camp outnumbers the "opportunities" camp by about 2 to 1 among those with a definitive view. But that's still only a third of all respondents. The plurality seems to be withholding judgment, waiting to see how implementation plays out in practice.
The concern about sameness appears more prominently elsewhere in the data. When asked about barriers and challenges, "increased noise and less differentiation" ranks among the top responses. Sixty-three percent cite it as a major concern. That's higher than budget limitations, quality control, or data privacy.
So while only a third explicitly say AI makes differentiation harder when asked directly, nearly two-thirds flag noise and sameness as a challenge they're already experiencing or anticipating.
This might reflect something real about the current moment. Teams may not have seen enough evidence yet to form strong conclusions about whether AI ultimately helps or hurts competitive positioning. Both outcomes could be happening simultaneously. Some organizations might be using AI to handle routine work and create capacity for more strategic thinking. Others might be using it primarily to increase output of relatively standard content.
The 47% who chose "both" might be observing exactly that pattern. The technology doesn't determine the outcome. How teams deploy it does.
What seems clear is that if AI makes basic execution easier to replicate, the value equation shifts. What becomes scarce isn't the ability to produce content at scale. It's the judgment about what's worth producing in the first place.
Teams that treat AI as an execution accelerator appear to be freeing up capacity for work that's harder to automate, like strategy, positioning, and creative direction. The kind of thinking that determines whether you're saying something distinctive or just saying more.
But there's a constraint built into how teams actually use these tools. When asked how much they trust AI-generated outputs in their marketing work, respondents averaged 5.8 out of 10. That's moderate skepticism, not confidence. The distribution clusters around the middle of the scale, suggesting most teams operate with a "trust but verify" mindset.
Trust in AI outputs: the "verify everything" mindset
How much marketers trust AI-generated content without human review (0-10 scale)
Average trust sits at 5.8 out of 10: moderate skepticism, not confidence. This explains why human judgment remains critical for differentiation.
This level of trust might actually be appropriate. It means human judgment remains in the loop. Someone still needs to evaluate whether the output is accurate, on-brand, and worth publishing. That verification step is where taste and strategic thinking come into play. The technology can accelerate production, but it doesn't eliminate the need for editorial judgment about what's worth producing.
Thoughtful brand-building becomes essential in this equation. When competitors can match your content output and basic quality within weeks, brand becomes the primary basis for standing out. Not brand as logo and color palette, but brand as a coherent and cohesive point of view that shapes everything you put into market.

Content dominance, strategic lag

In day-to-day work, AI shows up mostly in a handful of places. Content creation sits at 90.91%. Meetings, notes, and productivity tools follow at 79.09%. Creative design is 52.73%. These are the highest adoption areas in the survey.
After that, usage drops. Lead scoring comes in at 30.91%. Sales enablement is 20.00%. Personalization and conversion rate optimization cluster near 25%. The applications that change how go-to-market connects to revenue tend to trail the ones that scale execution.
Where teams are actually using AI
Current use cases ranked by adoption rate (n=110)
Content creation (91%) is adopted about 4.5x more than sales enablement (20%). AI is primarily a content engine, not yet a strategic GTM tool.
Content creation is adopted about 2.9 times more than lead scoring. That ratio suggests AI is being used primarily as a content engine rather than as a system that re‑architects funnels, targeting, or handoffs. If you compare content creation with sales enablement, the gap widens to roughly 4.5 times.
Tool choices paint a similar picture. General‑purpose LLMs dominate: ChatGPT at 97.27%, Claude at 63.64%, Perplexity at 57.27%, Gemini at 49.09%. These tools map cleanly to the high‑adoption use cases above.
Specialized tools show up much lower, which likely reflects lower switching costs and minimal integration needs for general LLMs compared with the data plumbing and workflow redesign required for more strategic applications.
Teams appear to adopt what is fast to try and easy to measure. You can scale content workflows quickly and attribute outputs to hours saved. Building lead scoring or personalization that moves pipeline asks for data readiness, governance, and new processes. Those conditions are not universally in place.
This creates a strategic exposure. If most teams apply AI to the same execution tasks, outputs converge on similar patterns. The adoption curve rises, but the advantage curve may not. The opportunities that could create separation remain underused because they are harder to implement.
Whether this is a waypoint or a destination isn’t clear in the data. It could be a staging pattern that precedes deeper investment in strategic use cases. It could also indicate a skills and infrastructure ceiling that holds for longer. The persistence of this gap shows up in the next section on barriers.

Skills over budget

When asked what's holding back AI adoption, leaders point to skills more than spending. About 60% say the main issue is the need for new training. Another 47% cite a lack of internal expertise. Budget constraints come in at just 25%.
Skills, not budget: what actually blocks AI adoption
Top barriers cited by 110 B2B marketing leaders
Skills and expertise barriers (60% and 47%) far outweigh budget concerns (25%). The constraint is capability, not capital.
The skills gap is roughly 2.4 times larger than the budget constraint. That challenges a common assumption about technology adoption. The barrier appears to be less about affording the tools and more about feeling equipped to use them effectively.
"Increased noise and less differentiation" ranks highest at 63%, above even the skills and expertise concerns noted earlier. That suggests teams are navigating two connected problems at once: they worry about contributing to sameness while also reporting they lack the skills to avoid it.
How teams learn may help explain why the gap persists. The most common approach is self-directed. Seventy-two percent report learning by doing as their primary method. Sixty-four percent follow people on LinkedIn. About 55 percent learn directly from ChatGPT itself. Only 38 percent use formal training programs or courses.
So 60% identify skills as a barrier, but fewer than 40% are using structured training to address it. The gap between recognizing the problem and accessing systematic solutions appears significant. Teams seem to be relying heavily on experimentation and peer learning rather than formal skill development.
How marketing teams are learning AI
Primary learning sources: informal methods dominate (n=110)
72% learn by doing, while only 38% use structured training programs.
This DIY learning culture might explain why adoption is happening faster than competence is developing. You can experiment your way to basic usage relatively quickly. Creating a ChatGPT workflow for content drafts doesn't require formal training. But building more sophisticated capability, the kind that moves beyond content creation into strategic applications, likely requires more structured development. The data suggests most teams aren't getting that.
Infrastructure gaps compound the skills problem:
  • ~60% have no dedicated AI budget
  • ~50% have no formal AI usage policies
  • ~20% have no one formally owning AI adoption in their organization.
The foundation for sophisticated AI use, both technical and organizational, isn't in place.
AI skills being developed, by company size (110 respondents)
The sequence matters here. Skills enable infrastructure. Infrastructure enables governance. Governance enables scale. Budget helps at every stage, but it's not the binding constraint. Teams appear to be trying to solve this in reverse order, acquiring tools before building the capabilities and structures to use them effectively.
That might explain why the belief-execution gap we saw earlier persists. Investment is happening. Tools are being purchased. But the underlying conditions for effective use aren't being addressed in the right sequence.
This also clarifies why certain roles seem to execute better despite tighter constraints. Fractional leaders often operate with limited budgets and no formal training infrastructure. But they're forced to solve the skills problem immediately. Learn fast, apply directly, move to the next challenge. That compressed feedback loop, driven by economic necessity, might actually create an advantage in this environment.
It's fair to assume that throwing budget at AI adoption without first addressing capability and structure may not close the execution gap. The constraint isn't primarily financial, it's organizational and developmental. And those problems require different solutions than simply buying more tools.

Budgets, ROI, and who actually owns this

The relationship between AI investment and returns appears somewhat disconnected in the data. About 60% of teams report having no dedicated AI budget. Yet when asked about future spending, roughly 65% expect it to increase moderately over the next 12 months.
That combination suggests AI expenses are being absorbed into existing budget lines rather than tracked as a separate category. Teams are spending, but they're doing it through general software budgets, productivity tools, or individual subscriptions rather than through formal AI allocations.
What makes this pattern more interesting is what happens when you ask about returns. The data on ROI clarity is less definitive than the spending trajectory. Teams describe benefits in qualitative terms (faster content production, time saved on routine tasks, improved output quality), but measurable revenue impact appears harder to pin down.
This creates a somewhat unusual dynamic. Spending is increasing despite unclear returns. That's not necessarily irrational. Early-stage technology adoption often works this way. You invest before you can measure precisely because waiting for perfect measurement means falling behind. But it does suggest that current spending is driven more by competitive pressure and perceived necessity than by demonstrated ROI.
About 83% plan to increase AI spend (moderately or significantly), while roughly two-thirds say ROI is not yet clearly measured.
110 respondents
The ownership picture reinforces this interpretation. About 20% of teams report that no one formally owns AI adoption in their organization. Where ownership does exist, it tends to sit with marketing leadership or technically inclined individuals rather than with a dedicated function or role. That fragmentation likely contributes to the measurement challenge. When responsibility is distributed, systematic tracking becomes harder.
Governance follows a similar pattern. Roughly half report having no formal AI usage guidelines. Without clear policies about what tools to use, how to use them, or what standards outputs should meet, usage tends to be ad hoc. Individual team members experiment with different tools. Practices vary by person. Institutional learning accumulates slowly because there's no central mechanism to capture and distribute what works.
Do you have formal AI usage guidelines or policies in place?
110 respondents
Yes
No
In progress
This matters for the ROI question. If every person is using AI differently, comparing results becomes difficult. You can't easily determine which approaches generate value and which don't. The lack of standardization makes measurement harder, which in turn makes it harder to justify increased investment through traditional budget processes.
Yet spending continues to rise. One way to interpret this is that the perceived risk of not adopting outweighs the discomfort of spending without clear metrics. Teams seem to be betting that AI capability will become essential, even if current returns are hard to quantify. That's a reasonable bet in a rapidly changing environment, but it does create tension with traditional budget discipline.
The teams that appear to be navigating this more effectively are those that have established some form of ownership and governance early. Not necessarily elaborate structures. Just clear accountability and basic standards. That seems to enable more systematic learning, which in turn makes it easier to identify what's working and allocate resources accordingly.

2. AI tool map

Among these respondents, tool usage concentrates heavily around a small set of general-purpose systems. ChatGPT is near-universal at 97 percent, with Claude (64 percent), Perplexity (57 percent), Gemini (49 percent), and Canva’s AI features (48 percent) forming a clear first tier. Below that, adoption diffuses into workflow and data tools such as n8n, Zapier AI, Clay, and Apollo.io, each used by roughly a third or less of the sample.

3. Expert perspectives

The survey captured where teams sit today. What it couldn't fully answer is why some teams seem to be pulling ahead while others remain stuck in pilot mode, or what separates effective adoption from activity that doesn't compound into advantage.
We spoke with ten people whose work gives them distinct vantage points on this transition.
Some build AI-native companies, testing what becomes possible when you design operations around these tools from scratch.
Others advise multiple organizations, watching patterns emerge across contexts.
A few research how work reorganizes when AI turns from productivity tool to autonomous executor.
We're deeply grateful to these experts for their contributions.
Their responses revealed patterns that help explain what the survey data couldn't fully capture. Three themes emerged consistently across conversations:
  • The capability chasm: Teams aren't progressing along a smooth adoption curve. Instead, they're clustering into distinct maturity stages with fundamentally different operating models. The distance between traditional, augmented, and automated teams appears to be widening rather than closing.
  • Execution to orchestration: The role of the marketer is changing from doing the actual work to designing systems that execute it. This changes what skills matter, how teams are structured, and what separates high performers from those at risk of obsolescence.
  • Infrastructure as prerequisite: Speed and automation only create advantage when built on deliberate choices about data quality, integration architecture, and human oversight. Without this foundation, AI adoption adds complexity without compounding value.

The capability chasm

The survey shows that most teams are in early adoption mode. AI use is concentrated in content creation, but integration into core workflows remains thin and policy coverage is sparse.
This suggests movement without compounding advantage. In conversations with the experts, we found that progress isn't linear across marketing departments. Teams aren't just at different points on the same path; they appear to be operating in fundamentally different modes.
There are three maturity levels that showed up in our conversations, and where you sit seems to matter more than the tools you’re using.
  • Traditional teams have humans executing workflows end to end. Campaigns get assembled manually. SDRs research accounts, write outreach, and update CRMs by hand. This is still where most teams operate.
  • Augmented teams have humans working alongside copilots. AI drafts content, summarizes calls, proposes next actions, and updates systems. Humans approve, steer, and maintain quality. The work moves faster, but the fundamental structure of roles hasn't shifted yet.
  • Automated teams have interconnected systems handling execution. Humans design the system, set strategic direction, run quality checks, and invest their time in relationships and creative work. The role structure has changed.
David Arnoux describes the growing separation between these levels as a capability chasm, and he's pretty straightforward about its trajectory:

"The teams that have been able to augment themselves will greatly surpass those that haven't. As long as you have the right guardrails and maintain taste and strategic thinking, it's a powerful tool. I don't think the gap here is linear. I think it's going to be exponential."

David Arnoux
Fractional CxO & Co-Founder at Humanoidz
The practical implication is that when execution moves from manual work to orchestrated systems, the ceiling of what's possible changes along with it.
In many ways, this is different from just doing the same work faster. Teams that used to plan in quarters can now test and iterate in days or hours.
The constraint moves from execution capacity to decision-making speed and strategic judgment. Wes Bush describes this transition in terms of what becomes the basic unit of work:

"AI will turn B2B marketing from campaign-driven to always-on. Instead of launching quarterly campaigns, teams will run thousands of micro-tests in real time."

Wes Bush
CEO & Founder of ProductLed
In essence, what we might see in the future is a totally different operating model for marketing teams. When you can run thousands of tests, the bottleneck moves from "can we execute this?" to "what should we test next?" and "how do we interpret signals fast enough to act on them?"
Moving from campaigns to continuous testing changes what you're optimizing for. Volume of experiments matters more than perfection of any single one. Speed to insight becomes more valuable than depth of pre-launch research. The skill set required shifts accordingly.
Anuj Adhiya extends this logic from testing into orchestration, describing how teams can personalize at scale without adding headcount proportionally:

"Always-on programs tailor copy, timing, and channel at the account and contact level. AI drafts assets, proposes next-best actions, summarizes calls, updates the CRM, and surfaces recommended plays—while humans approve and steer."

Anuj Adhiya
Expert-in-Residence at Techstars and Author of "Growth Hacking for Dummies"
What Adhiya describes is a system where AI handles the mechanical work of personalization, the drafting and updating and tracking, while humans focus on approval and direction. This division of labor only works if the underlying data is clean, the systems are integrated, and the human oversight points are clearly defined.
When cycle times collapse and systems scale the work, smaller teams can carry significantly more weight. Here’s how Jean Bonnenfant describes this in practice:

"Revenue per employee is going to increase drastically at AI-first companies. The idea-to-experiment time is now shorter than ever. Your idea can be out there really fast, cut off or doubled down on quicker than ever."

Jean Bonnenfant
Head of Growth at Lleverage
But speed alone doesn't create advantage. It needs to be built on solid infrastructure, and that's where many teams stall out.
Our survey showed 60% cite skills as their primary barrier and 49% have no formal guidelines in place. That combination tends to produce fragmented pilots that don't scale.
Skills, not budget: what actually blocks AI adoption
Top barriers cited by 110 B2B marketing leaders
Skills and expertise barriers (60% and 47%) far outweigh budget concerns (25%). The constraint is capability, not capital.
Lisa Brouwer, speaking from an investor perspective, offers a useful lens for distinguishing real progress from activity:

"AI is significantly compressing the time it takes to go from idea to execution to iteration. Campaigns that once required weeks can now be launched in days, and with fewer people. In crowded software markets, the teams that move fastest and can test, learn, and scale quickly will gain a strong advantage."

Lisa Brouwer
Principal at Curiosity VC
Two conditions show up repeatedly when teams actually capture that advantage.
First, they measure learning, not just output. Metrics like experiments per person, editor edit rates, decision latency, and qualified meetings from AI-assisted work become more important than traditional campaign metrics.
Second, they invest in clean data, proper integration, and human oversight before trying to automate high-stakes decisions. Without those foundations, speed just amplifies noise.
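To make the first of those conditions concrete, here is a minimal sketch of how those learning metrics might be computed from a simple log of AI-assisted work. The record fields and example values are illustrative assumptions, not data from our survey.

```python
from datetime import datetime
from statistics import mean

# Illustrative log of AI-assisted work items; field names and values are assumptions.
records = [
    {"owner": "amy", "draft_words": 800, "edited_words": 180,
     "proposed": datetime(2025, 11, 3), "decided": datetime(2025, 11, 5)},
    {"owner": "ben", "draft_words": 600, "edited_words": 90,
     "proposed": datetime(2025, 11, 4), "decided": datetime(2025, 11, 4)},
]

# Editor edit rate: share of AI-drafted words a human had to rewrite.
edit_rate = mean(r["edited_words"] / r["draft_words"] for r in records)

# Decision latency: days between an idea being proposed and a go/no-go call.
latency_days = mean((r["decided"] - r["proposed"]).days for r in records)

# Experiments per person: how many tests each contributor ships.
experiments_per_person = len(records) / len({r["owner"] for r in records})

print(f"edit rate: {edit_rate:.0%}, decision latency: {latency_days:.1f} days, "
      f"experiments per person: {experiments_per_person:.1f}")
```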
There's a final point worth noting from the people actually building AI-first companies. Quality improves through iteration, not by waiting for perfection. Ricardo Ghekiere's experience at BetterPic illustrates this:

"People think AI is perfect and should be perfect. It's not. It evolves over time, and it's something you need to start perfecting your craft for today, not tomorrow."

Ricardo Ghekiere
Co-Founder at BetterPic and BetterStudi
In their case, that meant shipping early, measuring carefully, and steadily improving. Their refund rate dropped from around 10% to under 3% over roughly eighteen months. The broader lesson is that crossing the chasm isn't about a single tool purchase or hire. It's a sequence.
Start by augmenting work, instrument what you're learning, then automate with appropriate guardrails where the data quality and stakes make sense.
The throughline across all these perspectives is consistent. Traditional teams optimize individual campaigns. Augmented teams optimize cycle times. Automated teams optimize entire systems.
The further you move in that direction, the more your advantage comes from how quickly you learn and how reliably you can act on what you've learned at scale.

Execution to orchestration

Our survey data suggests that teams are experimenting broadly but integrating thinly. Most AI use concentrates in content creation, which tends to optimize individual tasks rather than entire workflows.
Where teams are actually using AI
Current use cases ranked by adoption rate (n=110)
Content creation (91%) is adopted about 4.5x more than sales enablement (20%). AI is primarily a content engine, not yet a strategic GTM tool.
However, in conversation with the experts we identified a different position. In essence, the marketer's job appears to be moving from executing work to designing the systems that execute work.
The transition shows up in how work gets organized. Instead of polishing a single email, you define how emails get generated, versioned, approved, and measured across accounts and moments. The focus moves from the artifact to the system that produces artifacts.
Here’s how David Arnoux describes this change in mindset:

"It's no longer about executing the tasks. It's about designing the system that does the work. Make your job obsolete by building the system, then lead it. We still need humans in the loop for taste and strategy."

David Arnoux
Fractional CxO & Co-Founder at Humanoidz
Arnoux’s framing—"make your job obsolete"—is deliberate. It pushes against the instinct to protect current responsibilities and toward the practice of systematically automating whatever can be automated. But Arnoux also warns against outsourcing thinking entirely:

"Marketers who outsource all the thinking to AI will experience metacognitive decline. Struggle with the problem for 10–20 minutes first, then use AI as the double checker."

The goal isn't to remove humans from the process. It's to remove humans from the mechanical parts so they can focus on judgment, creativity, and strategic thinking.
As a result, systems thinking becomes more valuable than task optimization. You need to see how steps connect, where state needs to be maintained, and how handoffs create friction or opportunity. In that context, prompt engineering also has to move beyond single interactions. Arnoux frames it this way:

"Study prompt engineering. It's mastering the creative brief for machines. Move beyond single prompts to building autonomous campaigns. Document and template everything, because that's how you automate it."

Which brings us to an important point: data and retrieval literacy become critical prerequisites. Marketers need to understand how to feed, condition, and govern knowledge bases. Familiarity with concepts like retrieval-augmented generation (RAG) matters because it determines whether your AI outputs are grounded in actual information or just hallucinating plausibly.
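As a rough illustration of what grounding means in practice, the sketch below assembles a drafting prompt from passages retrieved out of a versioned knowledge base rather than letting the model answer from memory. The knowledge base entries, the naive keyword-overlap retrieval, and the prompt shape are simplified assumptions, not a recommended production setup.

```python
# Minimal retrieval-augmented generation (RAG) pattern: retrieve passages from a
# versioned knowledge base, then ask the model to draft only from those passages.
# The knowledge base content and scoring are toy assumptions for illustration.

KNOWLEDGE_BASE = [
    {"id": "kb-012", "version": "2025-11-01",
     "text": "Positioning: we sell to RevOps leads at Series B SaaS companies."},
    {"id": "kb-044", "version": "2025-10-20",
     "text": "Tone of voice: plain language, no superlatives, cite sources."},
]

def retrieve(query: str, k: int = 2) -> list[dict]:
    """Rank passages by naive keyword overlap with the query."""
    scored = sorted(
        KNOWLEDGE_BASE,
        key=lambda doc: len(set(query.lower().split()) & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(task: str) -> str:
    """Build a drafting prompt grounded in retrieved, versioned sources."""
    passages = retrieve(task)
    sources = "\n".join(f"[{p['id']} @ {p['version']}] {p['text']}" for p in passages)
    return (
        "Use ONLY the sources below. Cite source ids. "
        "If the sources do not cover the task, say so.\n\n"
        f"Sources:\n{sources}\n\nTask: {task}"
    )

print(build_prompt("Draft a LinkedIn post about our positioning for RevOps leads"))
```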
Quality assurance and creative direction move from optional polish to core competencies. When AI handles the mechanical work, maintaining brand voice, correctness, and strategic alignment becomes the primary human contribution.
Anuj Adhiya translates these principles into a concrete first build:

"Start with an AI Content & Ops Copilot tied to your CRM/MAP. Pull a minimal account/contact view, use a versioned knowledge base, generate on-brand assets with citations, summarize calls, draft follow-ups, update CRM, recommend next steps. Humans approve and everything is logged."

Anuj Adhiya
Expert-in-Residence at Techstars and Author of "Growth Hacking for Dummies"

The logging piece is easy to overlook but critical. Without it, you can't tell what's working, where errors emerge, or how to improve prompts and workflows.
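To show what that logging could look like in its simplest form, here is a sketch of an append-only run log. The field names and example values are illustrative assumptions rather than a prescribed schema.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_work_log.jsonl")  # append-only log, one JSON record per line

def log_run(workflow: str, prompt: str, output: str,
            approved: bool, edited_chars: int, result: str) -> None:
    """Record one AI-assisted run so edit rates and outcomes can be audited later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "workflow": workflow,          # e.g. "call-summary", "follow-up-email"
        "prompt": prompt,
        "output": output,
        "approved": approved,          # did a human sign off before it shipped?
        "edited_chars": edited_chars,  # how much the reviewer had to change
        "result": result,              # e.g. "published", "discarded", "meeting-booked"
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_run("follow-up-email", "Summarize the call and draft a follow-up...",
        "Hi Dana, thanks for the call...", approved=True, edited_chars=120,
        result="sent")
```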
Bernardo Nunes offers a framework for thinking systematically about where humans stay in the loop. He references the Human Agency Scale, which classifies tasks from H1 (minimal human oversight needed) to H5 (high human agency required). Low-agency tasks like running reports can automate safely. Higher-agency work like interpreting experimental results requires collaboration with explicit review points.
Nunes also raises a practical concern:

"Time savings often don't translate into impact because teams fail to redesign the process. Decide what to do with the 'free time' and optimize the review process, otherwise there's almost no impact."

Bernardo Nunes
Data & AI Transformation @ Workera.ai
This connects to what the survey showed about the confidence–capability gap. The value only materializes if the time freed up gets redirected toward higher‑leverage work.
Earlier, Wes Bush described how this plays out in practice: marketing moves away from a few big, calendar‑driven campaigns and toward a rhythm of continuous, always‑on experimentation. Many small tests running in parallel, with teams learning and adjusting in near real time.
That kind of cadence calls for people who can orchestrate systems, rather than just execute tasks manually. David Arnoux describes how these roles are starting to shift:

"Marketing managers become AI strategy leads. SDRs become sales AI orchestrators. Content teams become content AI architects and evaluators."

David Arnoux
Fractional CxO & Co-Founder at Humanoidz
It's also important to point out that this is more than a change of title prompted by new technology. In essence, a content AI architect isn't writing blog posts. They're designing the system that generates blog posts, defining quality criteria, building feedback loops, and maintaining the knowledge base that grounds outputs.
Building on that, Maja Voje sketches the practical target state:

"We're moving from 70% admin and 30% sales to an 80/20 inversion, with AI executing the repetitive work in the background."

Maja Voje
Go-To-Market Strategist and Author
Shifting the mix from admin to value work is less about adding tools and more about changing how work flows.
Someone has to own what gets automated, where human judgment stays, and how the team learns from each run. That ownership shows up in cadence, not slogans. It is visible in the way decisions are logged, review points are defined, and improvements are shipped weekly.
Once those basics are in place, leadership behaviors and operating habits start to compound. Here’s what Iliya Valchanov suggests leadership should do differently:

"Appoint one owner for 3–6 months. Do weekly usage reviews. Commit long enough to learn, then course-correct. Leaders need to be in the tools, adaptable, and strong on human connection."

Iliya Valchanov
CEO at Juma, formerly Team-GPT
The "be in the tools" point matters because leaders can't make good decisions about what to automate if they don't understand what the tools can and can't do.
Lisa Brouwer describes what separates teams that compound value:

"Move fast with compressed idea→execution→iteration cycles, but pair it with clean data, integrated stacks, and governance. That's where speed turns into durable advantage."

Lisa Brouwer
Principal at Curiosity VC
Orchestration only works if the underlying systems are integrated, the data is clean enough to trust, and there are guardrails to prevent automation from creating problems at scale.
Fundamentally, the experts converge on a pragmatic approach. Start by augmenting high-volume tasks with clear quality assurance. Instrument what you're learning with logging, metrics, and feedback loops. Then automate with guardrails, moving tasks to lower human agency only when data quality and oversight mechanisms are ready.
Skipping steps or trying to jump straight to full automation without the foundational work tends to produce disappointing results.
Orchestration changes what "good" looks like. Traditional teams optimize campaigns. Augmented teams optimize cycle times. Automated teams optimize systems. The further you move in that direction, the more your advantage comes from how quickly you learn and how reliably you can act on what you've learned at scale.

Infrastructure as prerequisite

The survey shows plenty of momentum but little foundation. Most teams use AI for content, around 50% report having no policies, and skills remain the primary barrier. In practice, that mix often produces pilots that work locally but do not scale.
Expert inputs seem to agree with this view. Speed and automation compound only when data is clean, systems are integrated, and oversight is explicit.
The practical constraint frequently sits in data plumbing rather than model choice. Stale account records, unjoined product events, and unversioned knowledge bases tend to surface as unreliable outputs. Integration gaps are a frequent blocker, and they often appear before model performance becomes the limiting factor. When your systems are better connected, how you use AI changes too.
Baptiste “Baba” Hausmann describes what becomes possible once signals are connected across the stack:

"AI will start to act more like ambient intelligence, a quiet layer connecting internal and external context. The real value lies in turning that context into proactive understanding and more confident, faster decision‑making."

Baptiste “Baba” Hausmann
Founder at Baba SEO
To make that transition safe, teams need to establish clear boundaries for where humans review and how outputs are audited. A good way to do that is to map workflows by required human involvement, as Bernardo Nunes recommended in the previous section. Automate the low-risk tiers, add explicit review gates for higher-stakes steps, and include a quick "how will this land?" check so automation is both correct and acceptable.
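One way to operationalize that mapping is sketched below: workflows are tagged with a Human Agency Scale tier and routed to automation, a review gate, or humans only. The specific workflows and routing rules are hypothetical examples, not recommendations from the experts.

```python
# Tag workflows with a Human Agency Scale tier (H1 = minimal oversight needed,
# H5 = high human agency required) and route them accordingly.
# The tasks and routing rules below are illustrative assumptions.

WORKFLOWS = {
    "weekly-report":             "H1",  # run and send automatically
    "crm-field-updates":         "H2",  # automate, spot-check samples
    "outbound-email-drafts":     "H3",  # human review gate before send
    "experiment-interpretation": "H4",  # human leads, AI assists
    "positioning-decisions":     "H5",  # human only
}

AUTOMATE = {"H1", "H2"}
REVIEW_GATE = {"H3", "H4"}

def route(workflow: str) -> str:
    """Decide how a workflow is handled based on its agency tier."""
    tier = WORKFLOWS[workflow]
    if tier in AUTOMATE:
        return f"{workflow}: automate (tier {tier}), log outputs for audit"
    if tier in REVIEW_GATE:
        return f"{workflow}: generate draft, hold for human approval (tier {tier})"
    return f"{workflow}: keep fully human (tier {tier})"

for wf in WORKFLOWS:
    print(route(wf))
```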
With oversight defined, teams can stage risk and add complexity over time. Anuj Adhiya recommends moving to higher‑stakes automation only after proving early value:

"Once you're shipping value and your data is clean and labeled, add predictive lead qualification as Phase 2 with explanations for scores, a sales feedback loop and holdout tests to prove incremental lift."

Anuj Adhiya
Expert-in-Residence at Techstars and Author of "Growth Hacking for Dummies"



That phased approach also intersects with a practical question most teams face early: whether to build custom tools or buy off the shelf. The answer often depends less on ambition than on organizational constraints and where you sit in the maturity curve.
Ricardo Ghekiere offers a rule that many organizations arrive at through experience:

“SMBs typically opt for out-of-the-box, enterprises build their own in-house solutions. Just like regular software, nothing new there.”

Ricardo Ghekiere
Co-Founder at BetterPic and BetterStudi
The build-versus-buy calculus changes in product-led contexts, where the richest signals often live inside the product itself. When usage data flows directly into communication systems, personalization can happen automatically rather than through manual campaign work. Wes Bush points to what that integration unlocks:

"AI takes PLG from static to adaptive. Imagine every user getting a personalized onboarding flow, the right upgrade nudge at the perfect time, and in‑app content tailored to their exact use case—all without a sales call."

Wes Bush
CEO & Founder of ProductLed
Foundations don't maintain themselves. They require operating discipline. As Iliya Valchanov outlined in the orchestration section, leadership needs to appoint an accountable owner, run weekly usage reviews, and commit to learning over several months before changing course. That cadence supports the architectural evolution that follows.
As foundations mature, team structure changes. Execution concentrates in an integrated core—what David Arnoux calls "AI swarms or AI orchestra"—while humans focus on high-level direction and quality control at defined checkpoints.
This architectural change also raises the stakes for knowledge quality. As discovery fragments across search engines, AI chat interfaces, and voice assistants, having structured, cited, and current knowledge becomes strategic infrastructure rather than nice-to-have documentation. Baptiste Hausmann's recommendation is to strengthen the substance before chasing new channel labels:

"Teams should deepen their understanding of search behavior across evolving platforms… Treat 'GEO' as the same core discipline—search behavior—expressed in new UX."

Baptiste “Baba” Hausmann
Founder at Baba SEO
Strong foundations show up in outcomes and capability development, not tool counts. Lisa Brouwer's investor checklist captures what durable adoption looks like in practice:

"Operational efficiency gains… Enhanced pipeline contribution… Capability building: upskilling talent so that AI isn't just a tool but… a core part of the process and way of working."

Lisa Brouwer
Principal at Curiosity VC
Drawing from the expert inputs, a practical baseline teams can implement quickly includes:
  • Connect the data you already have: reconcile a minimal account and contact view across CRM, MAP, support, and product events, and version a source‑of‑truth knowledge base with citations.
  • Map human agency by workflow: tag tasks H1 to H5, automate H1 and H2, and define review gates, escalation, and rollback for H3 to H5.
  • Instrument for learning: log prompts, inputs, outputs, approvals, edits, and results, and track editor edit rate and decision latency.
  • Standardize: use templates for prompts, creative briefs, acceptance criteria, error taxonomies, and red‑flag lists.
  • Gate automation on readiness: advance a workflow only after it passes defined quality thresholds with audit logs in place.
For build‑versus‑buy decisions: when urgency is high, data is dispersed, and risk is low, buy first and integrate later. When data is sensitive, compliance is strict, or there is proprietary advantage, prototype off‑the‑shelf and plan to bring it in‑house once data is clean and governance is defined. In any path, track trendlines for quality and adoption, not snapshots.
Signals you are ready to scale automation (a minimal sketch of this gating follows below):
  • Data readiness: core entities are reconciled, critical fields are populated, and duplicate rate is trending down.
  • Quality readiness: editor edit rate and factual corrections are trending down across four to eight weeks.
  • Oversight readiness: review gates are documented, audit logs are complete, and issues are resolved within agreed timelines.
  • Adoption readiness: weekly usage reviews are running and blockers are removed within a sprint.
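As a minimal sketch of how that gating could work, the snippet below checks a few of the readiness signals above against illustrative thresholds before promoting a workflow to fuller automation. The metric names, example values, and cutoffs are assumptions, not prescribed targets.

```python
# Gate automation on readiness: promote a workflow only when quality and
# oversight trendlines clear agreed thresholds. All numbers are illustrative.

weekly_metrics = [  # oldest to newest, e.g. the last four weekly reviews
    {"edit_rate": 0.42, "duplicate_rate": 0.09, "open_issues_overdue": 2},
    {"edit_rate": 0.35, "duplicate_rate": 0.07, "open_issues_overdue": 1},
    {"edit_rate": 0.31, "duplicate_rate": 0.06, "open_issues_overdue": 1},
    {"edit_rate": 0.27, "duplicate_rate": 0.05, "open_issues_overdue": 0},
]

def trending_down(series: list[float]) -> bool:
    """True if the series never rises week over week."""
    return all(later <= earlier for earlier, later in zip(series, series[1:]))

def ready_to_scale(metrics: list[dict]) -> bool:
    edit_rates = [m["edit_rate"] for m in metrics]
    dupes = [m["duplicate_rate"] for m in metrics]
    return (
        trending_down(edit_rates) and edit_rates[-1] < 0.30  # quality readiness
        and trending_down(dupes)                             # data readiness
        and metrics[-1]["open_issues_overdue"] == 0          # oversight readiness
    )

print("promote to automation" if ready_to_scale(weekly_metrics) else "keep human review gate")
```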
Seen this way, infrastructure shouldn't be treated as a back‑office issue. It is what turns faster execution into faster learning, and faster learning into advantage that compounds.

4. Where we stand on AI

The principles that guide our work with AI

AI is moving fast enough that no one can keep up with every new tool, feature, or use case.

Trying to track everything is a losing game. What is realistic is having a clear philosophy for how you use AI: a small set of principles that guide how you learn, where you adopt, and how you apply it to commercial work.

We use AI extensively, but not indiscriminately. We have explicit boundaries on where we won’t use it, even if we technically could.
Our priority is thoughtful, outcome-focused deployment of AI.
Essentially, we only use it when it demonstrably improves results, not because it's available or because competitors are using it.
Our approach is built on one core logic: humans lead, AI executes.
What follows are the six principles that guide how we deploy AI across client work and internally. These are the constraints and decision frameworks we use every day to determine where AI adds value and where it creates risk.

Principle #1:
Human strategy sets direction; AI handles execution

Strategy cannot be delegated. Not to junior team members, not to agencies, and certainly not to algorithms. The moment you hand strategic decisions to an algorithm, you've eliminated the part of your role that creates the most value.
And our survey data validates this instinct.
  • 66% of marketers say human oversight is essential for AI deployment
  • 71% cite AI's lack of creativity and contextual knowledge as a fundamental barrier
  • Average trust in AI outputs sits at 5.8 out of 10, which represents moderate confidence, not high

"AI is not about delegation. It's about implementing, amplifying, and distributing the core value that you create. If you delegate strategy, you're no longer performing your role."

Ferdinand Goetzen
Founding Partner of The Growth Syndicate
At The Growth Syndicate, we set campaign strategy, target audience definition, positioning, and core messaging.
AI operates within those parameters by generating content variations, processing research data, and handling repetitive analysis. We make the final decisions on strategic direction and output quality.

"AI should handle execution better than humans ever could. But it should never take over the experience, the strategic thinking, or the authenticity. Those stay human."

Joliene van Grieken
Founding Partner of The Growth Syndicate
Strategy, taste, and creative direction remain, at all times, human domains for us. AI executes within the boundaries these elements establish.
When that relationship inverts, quality collapses and differentiation disappears.
Teams that let AI drive strategy produce competent but indistinguishable output. Teams that use AI to execute human strategy maintain their distinctive voice while dramatically increasing output volume.
The challenge is capability, not understanding. Most marketers are stuck in the dangerous middle: not strategic enough to set clear direction, and not technical enough to get maximum value from AI execution (but more on that later).
Mediocre strategy executed at scale is worse than mediocre strategy executed slowly because at least the latter is containable.

Principle #2:
Scale through intelligent automation, not headcount

The traditional agency model is broken. More clients meant more hires, creating a linear relationship between revenue and headcount. Over time, that forces a drop in standards to maintain margins, and quality suffers. AI breaks that constraint.
Consider that:
  • 60% of founders in our survey have stopped hiring roles they previously would have filled, choosing AI-assisted productivity instead.
  • AI-native startups like Swan AI are generating $10M in pipeline per employee, an order of magnitude beyond traditional benchmarks.

"The old model was simple: more clients meant more hires. AI breaks that. We automate the time-consuming, low-impact work and redirect that capacity to strategic execution. As a result, we can serve more clients or deliver better outcomes without adding people."

Clement Dumont
Founding Partner of The Growth Syndicate
We automate time-consuming, low-impact work: reporting, data entry, initial research, basic content drafting. These tasks consume hours but rarely require strategic judgment. AI handles them faster and more consistently than humans. That freed time gets redirected to strategy, creativity, relationship-building, and execution that actually moves metrics.
Revenue per employee becomes a key metric alongside traditional measures. Not because headcount reduction is the goal, but because capital efficiency reveals whether AI is creating value or just adding complexity.
Execution-heavy agencies face an existential question: if clients can achieve similar output with one AI-assisted senior specialist instead of a team of five, why hire the agency?
The answer must be expertise that clients cannot access elsewhere. Pattern recognition, strategic judgment, and taste developed across years of experiences and hundreds of similar challenges.
We offer the kinds of people you can't get, not just the capacity you can't afford.

Principle #3:
First-draft assistance, final-draft human refinement

At this moment in time, AI-generated content has a quality ceiling. That ceiling sits well below the publication standard for any brand that cares about differentiation.
Quality and brand voice require human judgment because AI cannot distinguish interesting from uninteresting, cannot assess strategic fit, and cannot judge whether output serves the actual goal.

"Bad thought leadership starts with bad thinking, not bad writing. An interesting insight will always survive a poor articulation. A hollow insight doesn't improve with better articulation. AI amplifies whatever you put in, value or slop."

Ferdinand Goetzen
Founding Partner of The Growth Syndicate
The insight must come from human expertise. AI structures that insight, generates variations, and handles the mechanical work of drafting.
Humans then edit for brand voice, factual accuracy, strategic alignment, and what we call "interestingness," the quality that makes content worth reading rather than just grammatically acceptable.
We target an edit rate below 30% for good AI output. If we're rewriting more than a third of what AI produces, the input brief wasn't detailed enough or the strategic foundation wasn't clear.
The more context AI has about a client's voice, positioning, and past content, the better its outputs become. Early drafts require heavy editing. By the tenth piece, AI understands patterns well enough that editing focuses on refinement rather than reconstruction.
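To make the 30% threshold concrete, here is a minimal sketch of how an edit rate could be approximated, assuming the AI draft and the published version are both available as plain text. The word-level similarity from Python's difflib is one reasonable proxy among many; this is an illustration, not our internal tooling.

```python
# Minimal sketch (not production tooling): approximate the edit rate
# between an AI draft and the human-edited final as the share of the draft
# that did not survive into the published version.
from difflib import SequenceMatcher


def edit_rate(ai_draft: str, final_version: str) -> float:
    """Return the changed fraction: 0.0 means untouched, 1.0 means fully rewritten."""
    similarity = SequenceMatcher(None, ai_draft.split(), final_version.split()).ratio()
    return 1.0 - similarity


# Hypothetical usage: a rate above the 30% target usually signals a thin brief
# or an unclear strategic foundation rather than a bad model.
rate = edit_rate("ai draft text goes here", "final edited text goes here")
print(f"Edit rate: {rate:.0%}")
```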

Principle #4:
AI as creative sparring partner, not original thinker

AI cannot create original strategy. It can help you develop one.
The distinction matters. AI recombines existing patterns. It doesn't create new ones. It iterates within parameters. It doesn't reframe problems. But used correctly, AI becomes a valuable sparring partner for human creativity.

"In a world where everything can be copied, the only thing that makes you different is your brand. Everything else in the next five to ten years will probably be replaceable. We use AI as sparring partner that helps us iterate and explore our own ideas."

Joliene van Grieken
Founding Partner of The Growth Syndicate
Instead of asking AI to generate ideas, present your rough thinking and ask AI to question it. The dialogue helps you develop depth, identify gaps, and articulate what you already know but haven't yet structured.
This works for campaign concepts, positioning frameworks, process design, and strategic planning. Start with human insight. Use AI to explore implications, test assumptions, and generate variations. Maintain ownership of the core creative direction.
Asking AI to generate campaign ideas from scratch produces generic concepts that could apply to any brand in any category. Differentiation requires human creativity grounded in specific market understanding, customer empathy, and strategic context that AI cannot access.

Principle #5:
Data discipline and privacy compliance are non-negotiable

AI systems make things up. Frequently. Confidently. Convincingly.
Our survey reveals the foundational problem: 67% of teams lack a proper data setup for AI deployment. Without clean data inputs, AI outputs are unreliable at best and dangerously misleading at worst. 20% of practitioners cite trust and data security as their top concern.
Teams treat AI outputs like early internet search results: if the system says it, it must be true. No source checking. No verification. No critical assessment.
This creates two categories of risk. First, fabricated statistics and misrepresented sources that undermine credibility when published. Second, data privacy violations when client information gets fed into AI systems without proper safeguards.
Our protocols address both.
For factual accuracy: always verify sources for statistics and data claims. Check that sources are authoritative and accurately represented. Don't accept AI-provided citations without validation. Assume AI may have misread, misinterpreted, or invented the source entirely.
For client data: create client-specific knowledge bases with clear boundaries. AI systems access only specified sources, not everything. Data gets anonymized before processing where required, and NDA adherence and audit trails apply to all client work.
Client trust depends on data protection. One breach, or one piece of fabricated data in a client presentation, destroys relationships that took years to build.
When working with client data, limit what AI can access. Don't prompt with "use 500 sources." Specify exactly which client documents, meeting transcripts, and research AI should reference. This improves output relevance while maintaining data hygiene.
Experience and domain expertise serve as the safety net. If AI claims "95% of B2B companies with $100M+ revenue use ABM," your industry knowledge should trigger skepticism. Verify before using. Critical thinking is the competitive advantage.
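As an illustration of what "specify exactly which documents" can look like in practice, here is a minimal sketch of a client-scoped context builder. The folder layout, file names, and redaction patterns are hypothetical; a production setup would need proper PII handling, access controls, and audit logging.

```python
# Minimal sketch of a client-scoped knowledge base. The folder layout,
# file names, and redaction patterns below are hypothetical.
import re
from pathlib import Path

# Explicit allowlist: the model only ever sees these documents per client.
ALLOWED_SOURCES = {
    "acme": ["positioning-brief.md", "q3-meeting-notes.md", "icp-research.md"],
}

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def build_context(client: str, base_dir: str = "clients") -> str:
    """Load only allowlisted documents for one client and redact obvious PII."""
    chunks = []
    for name in ALLOWED_SOURCES.get(client, []):
        text = Path(base_dir, client, name).read_text(encoding="utf-8")
        text = EMAIL.sub("[email removed]", text)
        text = PHONE.sub("[phone removed]", text)
        chunks.append(f"--- {name} ---\n{text}")
    return "\n\n".join(chunks)


# The returned string is what gets passed to the model as context,
# instead of pointing the system at everything the agency has on file.
```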

Principle #6:
Outcome-focused deployment

Technology for technology's sake is waste.
AI deployment must improve measurable outcomes or it shouldn't exist.
The data shows how rare this discipline is. According to our survey:
  • 70% of teams cannot measure their current AI ROI. Most teams are using AI because it's available, not because it demonstrably improves results.
  • Time savings without process redesign create zero business impact.
  • Efficiency gains that don't translate to better outcomes or freed capacity are accounting fiction.
The 12-18 month accountability window is closing. Leadership will ask what AI actually improved. Teams without clear answers will lose budget and credibility.

"If you can't measure whether AI improved the outcome, don't use it. Deploy where you can prove impact, abandon where you can't."

Ferdinand Goetzen
Founding Partner of The Growth Syndicate
We use AI only when we can define success metrics before deployment, track improvement across three dimensions (efficiency, quality, business impact), and abandon applications that don't demonstrate clear value.
We deploy AI for research acceleration where we can measure speed improvement while maintaining quality. We use it for content operations where we can track throughput gains and brand consistency. We build AI into strategic thinking support where we can assess whether it helps us reach better decisions faster.
We don't use AI for applications where we cannot measure whether it improved the outcome. If success criteria are unclear or improvements are unmeasurable, deployment is premature.
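A minimal sketch of what "define success metrics before deployment" can look like, assuming each use case is scored on the three dimensions named above. The use case, metric units, and thresholds below are hypothetical examples, not a prescribed measurement framework.

```python
# Minimal sketch of "define success before deployment". The use case,
# metric units, and thresholds are hypothetical examples, not a framework.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    efficiency_gain: float   # e.g. hours saved per week vs. the pre-AI baseline
    quality_score: float     # e.g. reviewer rating 0-10 on the finished output
    business_impact: float   # e.g. qualified pipeline influenced, in euros

    def keep(self, min_efficiency: float, min_quality: float, min_impact: float) -> bool:
        """Keep the deployment only if all three dimensions clear a pre-agreed bar."""
        return (self.efficiency_gain >= min_efficiency
                and self.quality_score >= min_quality
                and self.business_impact >= min_impact)


research = AIUseCase("research acceleration", efficiency_gain=6.0,
                     quality_score=8.0, business_impact=25_000.0)
print(research.keep(min_efficiency=4.0, min_quality=7.0, min_impact=10_000.0))
```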
AI provides speed. Humans provide quality assessment. The instinct that comes from seeing a situation a hundred times cannot be replicated by AI. That pattern recognition helps identify the one or two opportunities in a sea of thousands that will drive disproportionate impact. AI can help validate and execute on those insights. It cannot generate them.

Principle #1: Human sets direction; AI executes
  • Application: campaign strategy execution
  • Key results: strategic frameworks translated into multi-channel content, positioning maintained across all touchpoints, quality consistency at scale

Principle #2: Scale through intelligent automation
  • Application: operations and delivery infrastructure
  • Key results: 60% reduction in hiring needs, revenue per employee 3-10x traditional benchmarks, capacity redirected to high-value strategic work

Principle #3: First-draft assistance
  • Application: content operations and thought leadership
  • Key results: 2-3x throughput, under 30% edit rate, maintained brand consistency

Principle #4: Creative sparring
  • Application: strategic thinking support
  • Key results: faster scenario planning, assumption testing, framework development

Principle #5: Data discipline
  • Application: research acceleration and institutional memory
  • Key results: 3-5x speed improvement, client-specific knowledge bases that compound over time

Principle #6: Outcome focus
  • Application: GTM engineering and workflow automation
  • Key results: automated audits, lead routing, scoring systems with minimal human intervention

Our AI boundaries

Original strategy development
  • Why we don't use AI here: AI recombines existing patterns; it cannot reframe problems or identify unmet market needs from first principles.
  • What's required instead: market intuition, customer empathy, first-principles thinking, accumulated pattern recognition from hundreds of similar challenges.

Brand voice and positioning
  • Why we don't use AI here: authentic differentiation requires human taste and judgment; AI can replicate style but not the source of what makes a brand distinctive.
  • What's required instead: human taste, creative judgment, authentic voice development, strategic positioning rooted in specific market context.

High-stakes client communication
  • Why we don't use AI here: trust-building, complex negotiations, and relationship capital require contextual understanding AI cannot access.
  • What's required instead: relationship-building skills, contextual judgment, stakeholder management, accumulated trust and credibility.

Quality judgment ("knowing what good looks like")
  • Why we don't use AI here: experience creates pattern libraries AI cannot replicate; distinguishing interesting from generic requires accumulated expertise.
  • What's required instead: pattern recognition from years of practice, domain expertise, ability to assess strategic fit and "interestingness".

The two skills that matter

In our vision, there are two capabilities that separate leaders from everyone else.
Everyone who was mediocre is going to become competent. Everyone who was already excellent will stay excellent and do it in half the time. The middle is rising, but the top isn't getting caught.
What stops being a differentiator: volume of content, volume of ads, product feature differentiation, integration capabilities, execution speed and efficiency. All of these are becoming too easy to replicate.

Skill one: Get really good at what AI can't do

These are appreciating assets.
Taste. Knowing what's good versus mediocre. AI can execute but cannot judge quality. Most content people are operational rather than creative or original, which is exactly how people get stuck in the middle: they're not creative enough to get leadership excited about what they're doing.
Creativity and originality. Original thinking, not iteration. AI recombines existing patterns. It doesn't create new ones. There's something around authenticity and human connection and empathy that's very far from being automatable.
Strategy and first principles. Reframing problems, not just finding better solutions. Most marketers have moved away from first principles. They've gotten obsessed with playbooks and tactics and channels. The first principles don't change. AI can work within parameters. Humans set the parameters.
Judgment and pattern recognition. Knowing when to intervene. If you've seen something a hundred times, you develop instinct. That instinct helps you identify the one or two opportunities that are going to have the biggest impact. Experience creates pattern libraries AI cannot replicate.

Skill two: Get really good at using AI

This is technical fluency, not basic ChatGPT usage.
The future team structure looks different. You'll have a chief marketing officer-type person focused on who we're selling to, why, what our story is, what our brand means. Then you have one or two or three people under that person. The rest will be agents. You don't need large teams anymore. Maybe you'll have a couple of heads running departments by themselves without people reporting to them.
Required capabilities include:
  • Prompt engineering and workflow design
  • Process mapping and automation architecture
  • Quality assessment and human oversight: knowing what to check
  • Managing AI agents, not just using tools (a rough sketch of such a workflow follows below)
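As a rough illustration of workflow design with human oversight, the sketch below chains a few placeholder AI steps and gates the result behind an explicit human review. The step names, the stubbed model call, and the review callback are all hypothetical.

```python
# Rough illustration of a human-in-the-loop content workflow. Step names,
# the stubbed "model call", and the review callback are all hypothetical.
from typing import Callable, List, Optional


def ai_step(name: str) -> Callable[[str], str]:
    """Wrap a single automated step; the real model call is stubbed out here."""
    def run(payload: str) -> str:
        return f"{payload}\n[{name} completed]"
    return run


PIPELINE: List[Callable[[str], str]] = [
    ai_step("initial research summary"),
    ai_step("first draft"),
    ai_step("brand-voice variation"),
]


def run_workflow(brief: str, human_review: Callable[[str], bool]) -> Optional[str]:
    """Run the automated steps, then gate publication on an explicit human check."""
    output = brief
    for step in PIPELINE:
        output = step(output)
    return output if human_review(output) else None  # nothing ships unreviewed


# The reviewer callback is where taste and judgment live; here it is a stub.
result = run_workflow("Brief: ABM campaign for mid-market fintech",
                      human_review=lambda draft: len(draft) > 0)
```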

The dangerous middle

The gap between individual contributors and leaders is widening. Most marketing people are in for a difficult awakening: they're not strategic, creative, or original enough to move to the strategy side, and not technically inclined or practical enough to really adopt these tools.
Our data confirms this pattern: 86% have strong confidence in AI’s benefits, about 50% rate their AI knowledge as high, and only 26% rate their current team’s ability to actually use AI as high.
The separation is already happening. Top performers master taste and creativity or AI orchestration, ideally both.
People stuck in the middle are mediocre at strategy and mediocre at AI usage. Bottom performers have neither strategic nor technical capability.
A lot of people who currently call themselves head of marketing or head of growth are going to have to choose. You're no longer going to be running teams of six or seven people. Entry-level positions and junior roles will largely cease to exist. On the execution side, you can probably build an AI agent that's cheaper, faster, and better.
The question is where you stand.
If you can't articulate your brand's taste better than AI, and you can't design workflows or manage AI agents, you're in the dangerous middle. And the middle is disappearing.
Decision Point

Path 1: Strategic leadership
  • What you master: what AI can't do
  • Your value proposition: taste, creativity, strategy, judgment, brand building
  • Your future role: CMO with board influence, strategic direction, creative leadership
  • Skills to develop: taste (good vs mediocre), original thinking, first principles, pattern recognition, authenticity
  • Team you'll lead: 1-3 AI orchestrators (no traditional reports)
  • Self-assessment question: Can you articulate your brand's taste better than AI? Can you reframe problems, not just solve them?
  • Evidence of success: leadership gets excited about your ideas; you influence business direction, not just execute it
  • Example roles: Chief Marketing Officer (true strategic role), VP Brand, Creative Director

Path 2: Technical execution
  • What you master: how to use AI
  • Your value proposition: AI orchestration, workflow automation, force multiplication
  • Your future role: AI-assisted IC running entire functions solo
  • Skills to develop: prompt engineering, workflow design, process automation, AI agent management, quality protocols
  • Team you'll lead: AI agents with human oversight
  • Self-assessment question: Can you design workflows? Manage AI agents? Build automation architecture?
  • Evidence of success: you can run an agency by yourself with the right agents; you deliver team-level output solo
  • Example roles: AI Marketing Engineer, AI Content Orchestrator, GTM Automation Specialist

Warning signs you're in the middle
  • What you master: neither sufficiently well
  • Your value proposition: generic execution, team management, operational tasks
  • Your future role: eliminated (the role ceases to exist)
  • Skills to develop: stuck developing neither
  • Team you'll lead: 6-7 person teams (a structure that is disappearing)
  • Self-assessment question: Are you average at both?
  • Evidence of success: you manage people but don't influence strategy; you use AI but can't build systems
  • Example roles: Head of Marketing (glorified manager), Marketing Manager, Content Manager

The bottom line

Most of the evidence in this report points in one direction. B2B marketing teams are using AI frequently and with clear optimism, yet the distance between belief, capability, and outcomes remains large. Adoption concentrates around content and productivity. Strategic, revenue-linked applications are far less common. Skills and infrastructure show up more often as constraints than budget. Governance and ownership are inconsistent. Taken together, this describes an environment that is moving quickly without a fully defined operating model.
The survey and expert perspectives suggest that AI is not just another tool added to existing workflows. It is changing what “doing the work” means. Traditional teams still rely on humans to execute end to end. Augmented teams use AI to accelerate tasks while keeping role structures largely intact. More automated teams treat marketing as a system, where humans design and supervise interconnected workflows rather than manually own each step. Where an organization sits on this spectrum appears to shape both its ceiling for impact and its exposure to risk.
A similar pattern appears at the individual level. The work that seems most durable clusters into two areas. On one side sit skills that AI does not perform well today: original strategy, taste, judgment, and brand building grounded in specific contexts. On the other side sit skills that make AI useful at scale: workflow design, data and retrieval literacy, orchestration, and quality oversight. Roles focused on generic execution with shallow AI usage sit in a narrowing middle, as more of that work can be automated or consolidated.
These findings come from a sample that skews toward smaller teams and earlier adoption stages, so they should be read as indicative rather than definitive. The underlying technology is also evolving quickly. Even so, the patterns here align with other research and with many practitioners’ lived experience, which suggests they capture something real about the current transition.
If there is a practical throughline, it is that sequence and clarity matter more than enthusiasm. Teams that accumulate tools without clear ownership, policies, or definitions of success tend to increase AI activity without a corresponding increase in advantage. Teams that start with a limited set of use cases, attach them to measurable outcomes, and invest in skills, infrastructure, and governance before deeper automation appear better positioned to close the gap between belief and value.
This report does not settle what AI will ultimately mean for B2B marketing. It does narrow the questions that matter. Not whether to adopt AI, but where it should sit in the work. Not whether AI will replace marketers, but which combinations of human judgment and machine capability are likely to compound over time. In this data, the binding constraint is rarely access to models. It is the discipline to build the skills, systems, and standards that turn broad potential into specific, defensible advantage.

Ready for your marketing to drive actual revenue?

Book a strategy session to discuss how we can build your growth engine.
Let’s grow your business