Most of what we hear about AI in marketing comes from two sources that aren't particularly useful for making decisions: vendor case studies that highlight perfect implementations under ideal conditions, and individual success stories that may or may not transfer to your context.
What's missing is a broader view of what's actually happening across different types of organizations, particularly in the environment where most B2B marketing work actually gets done.
We surveyed 110 B2B marketing leaders to fill that gap. The sample skews toward smaller operations, which turns out to be useful rather than limiting. Sixty-five percent work at companies under 50 employees. About two-thirds hold VP-level positions or above.
This represents the context where most marketing teams operate. Small groups, constrained budgets, limited infrastructure, real tradeoffs about where to invest time and attention.
What we found was less a story of smooth transformation and more a story of meaningful gaps. There's distance between what people believe AI can do and what they can actually get it to do. Investment is rising while measurement remains unclear. The tools teams adopt easily often differ from the ones that could truly reshape strategic work.
Our priority in analyzing the data was identifying patterns that might help explain these gaps and what they reveal about the current state of AI adoption in practice. The full dataset will be made available on our Substack for those interested in examining the underlying evidence more closely.
The five patterns we found are characteristic of how technology adoption unfolds in practice, as opposed to how it appears in theory. These disconnects aren't necessarily problems that demand an immediate fix.
They might represent natural stages in how organizations build new capabilities over time. However, they do suggest that the path from experimentation to systematic value creation is longer and more complex than early adoption narratives typically acknowledge.
What follows is an examination of each pattern, what the data shows, why it might be happening, and what it suggests about how teams should think about their own AI implementation.
The belief-reality gap
Let's start with what appears to be a straightforward question: How confident are B2B marketers that AI will benefit their work?
The answers show a lot of optimism: about 86% of respondents rated AI's potential benefits at 8 out of 10 or higher, with an average of 8.8. By most measures, this looks like consensus.
[Chart: "To what extent do you agree that AI developments will be beneficial to your work?" Average rating: 8.8 (n=110)]
But confidence in potential doesn't necessarily translate to confidence in capability.
When the question shifts to how well marketers understand AI, the numbers soften. Half of respondents rate their knowledge at 8 or higher. That's still substantial, but it represents a notable drop from the 86% who believe in its benefits. The average knowledge rating is 7.4.
[Chart: "How up to speed do you feel with developments in AI?" Average rating: 7.4 (n=110)]
The gap widens further when we move to execution. Only 26% rate their team's current AI leverage at 8 or higher. That's less than one-third of those who believe strongly in AI's potential. The average execution rating sits at 6.4.
[Chart: "How well do you feel your team leverages AI in its work today?" Average rating: 6.4 (n=110)]
So we have a pattern: 86% believe strongly, 50% feel they know strongly, and 26% report executing strongly. This could be read as a natural progression from conviction to capability. Or it might indicate that expectations are outpacing what teams can deliver today.
Another way to read the belief–execution pattern is by role. Belief is high everywhere, but capability tightens or loosens depending on structure. Fractional leaders show the closest alignment: knowledge at 7.81 and execution at 7.25 against belief of 9.12. CMOs are similar, at 7.50 knowledge and 6.60 execution with belief at 8.60.
[Chart: The belief-execution gaps between roles. How different roles navigate from confidence to capability (0 to 10)]
Marketing Managers land in the middle, with knowledge at 7.35 and execution at 6.40 versus belief at 9.20. The widest gap appears for Marketing Leaders (VP/Head), where belief sits at 8.70 but execution trails at 5.55.
In short: belief is a constant, while execution changes based on operating model. Fractional leaders execute highest on average, while VP-level leaders show the most distance between conviction and what teams can do today.
That pattern can feel counterintuitive. Conventional thinking suggests full‑time, senior roles should translate into stronger capability, but the data points the other way. There are two plausible explanations: economic pressure pushes fractional leaders to prioritize speed and outcomes, and working across multiple companies compresses feedback loops, accelerating both learning and execution.
Respondents also surfaced a gap between public perception and internal reality. LinkedIn timelines showcase wins; inside teams, implementations can feel clunky, outputs need heavy editing, and time savings are hard to quantify. One respondent called this the “LinkedIn versus reality” problem.
The belief-execution gap isn't closing on its own. Belief is uniformly high and likely to remain so. Execution seems bounded by skills and process maturity, which often lag tool adoption. And the gap isn't evenly distributed: operating models matter, with fractional leaders closest to closing it and VP-level leaders furthest from it.
The differentiation paradox
Here's a question that gets at something fundamental about how marketers view AI's competitive implications: Will AI make it easier or harder to differentiate your brand?
The most common answer, chosen by 47%, was "both, depends on execution." That's a reasonable position: it recognizes that results hinge more on implementation than on the tool itself.
Among those who took a clearer stance, the split is uneven. Thirty-four percent said AI will make differentiation harder. Sixteen percent believe it will create new opportunities for standing out. Another 3% aren't sure yet.
[Chart: "What impact do you expect AI to have on differentiation in B2B marketing?" Both, depends on execution: 47%; will make differentiation harder (more sameness): 34%; will create new opportunities for standing out: 16%]
So the "harder" camp outnumbers the "opportunities" camp by about 2 to 1 among those with a definitive view. But that's still only a third of all respondents. The plurality seems to be withholding judgment, waiting to see how implementation plays out in practice.
The concern about sameness appears more prominently elsewhere in the data. When asked about barriers and challenges, "increased noise and less differentiation" ranks among the top responses. Sixty-three percent cite it as a major concern. That's higher than budget limitations, quality control, or data privacy.
So while only a third explicitly say AI makes differentiation harder when asked directly, nearly two-thirds flag noise and sameness as a challenge they're already experiencing or anticipating.
This might reflect something real about the current moment. Teams may not have seen enough evidence yet to form strong conclusions about whether AI ultimately helps or hurts competitive positioning. Both outcomes could be happening simultaneously. Some organizations might be using AI to handle routine work and create capacity for more strategic thinking. Others might be using it primarily to increase output of relatively standard content.
The 47% who chose "both" might be observing exactly that pattern. The technology doesn't determine the outcome; how teams deploy it does.
What seems clear is that if AI makes basic execution easier to replicate, the value equation shifts. What becomes scarce isn't the ability to produce content at scale. It's the judgment about what's worth producing in the first place.
Teams that treat AI as an execution accelerator appear to be freeing up capacity for work that's harder to automate, like strategy, positioning, and creative direction. The kind of thinking that determines whether you're saying something distinctive or just saying more.
But there's a constraint built into how teams actually use these tools. When asked how much they trust AI-generated outputs in their marketing work, respondents averaged 5.8 out of 10. That's moderate skepticism, not confidence. The distribution clusters around the middle of the scale, suggesting most teams operate with a "trust but verify" mindset.
[Chart: Trust in AI outputs: the "verify everything" mindset. How much marketers trust AI-generated content without human review (0 = no trust, 10 = complete trust). Average: 5.8.]
This level of trust might actually be appropriate. It means human judgment remains in the loop. Someone still needs to evaluate whether the output is accurate, on-brand, and worth publishing. That verification step is where taste and strategic thinking come into play. The technology can accelerate production, but it doesn't eliminate the need for editorial judgment about what's worth producing.
Thoughtful brand-building becomes essential in this equation. When competitors can match your content output and basic quality within weeks, brand becomes the primary basis for standing out. Not brand as logo and color palette, but brand as a coherent and cohesive point of view that shapes everything you put into market.
Content dominance, strategic lag
In day-to-day work, AI shows up mostly in a handful of places. Content creation sits at 90.91%. Meetings, notes, and productivity tools follow at 79.09%. Creative design is 52.73%. These are the highest adoption areas in the survey.
After that, usage drops. Lead scoring comes in at 30.91%. Sales enablement is 20.00%. Personalization and conversion rate optimization cluster near 25%. The applications that change how go-to-market connects to revenue tend to trail the ones that scale execution.
[Chart: Where teams are actually using AI. Current use cases ranked by adoption rate (n=110). Content creation (91%) is adopted about 4.5x more than sales enablement (20%).]
Content creation is adopted about 2.9 times more than lead scoring. That ratio suggests AI is being used primarily as a content engine rather than as a system that re‑architects funnels, targeting, or handoffs. If you compare content creation with sales enablement, the gap widens to roughly 4.5 times.
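If you want to sanity-check those multiples, here's a minimal sketch in Python that recomputes them from the adoption rates reported above (the variable names are ours, not from the survey instrument):

```python
# Adoption rates from the survey (percent of 110 respondents)
adoption = {
    "content_creation": 90.91,
    "meetings_notes_productivity": 79.09,
    "creative_design": 52.73,
    "lead_scoring": 30.91,
    "sales_enablement": 20.00,
}

# How many times more common is content creation than the
# more strategic, revenue-facing use cases?
for use_case in ("lead_scoring", "sales_enablement"):
    ratio = adoption["content_creation"] / adoption[use_case]
    print(f"content_creation vs {use_case}: {ratio:.1f}x")

# Prints:
# content_creation vs lead_scoring: 2.9x
# content_creation vs sales_enablement: 4.5x
```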
Tool choices paint a similar picture. General‑purpose LLMs dominate: ChatGPT at 97.27%, Claude at 63.64%, Perplexity at 57.27%, Gemini at 49.09%. These tools map cleanly to the high‑adoption use cases above.
Specialized tools show up much lower, which likely reflects lower switching costs and minimal integration needs for general LLMs compared with the data plumbing and workflow redesign required for more strategic applications.
Teams appear to adopt what is fast to try and easy to measure. You can scale content workflows quickly and attribute outputs to hours saved. Building lead scoring or personalization that moves pipeline asks for data readiness, governance, and new processes. Those conditions are not universally in place.
This creates a strategic exposure. If most teams apply AI to the same execution tasks, outputs converge on similar patterns. The adoption curve rises, but the advantage curve may not. The opportunities that could create separation remain underused because they are harder to implement.
Whether this is a waypoint or a destination isn’t clear in the data. It could be a staging pattern that precedes deeper investment in strategic use cases. It could also indicate a skills and infrastructure ceiling that holds for longer. The persistence of this gap shows up in the next section on barriers.
When asked what's holding back AI adoption, leaders point to skills more than spending. About 60% say the main issue is the need for new training. Another 47% cite a lack of internal expertise. Budget constraints come in at just 25%.
[Chart: Skills, not budget: what actually blocks AI adoption. Top barriers cited by 110 B2B marketing leaders (% of respondents). Skills and expertise barriers (60% and 47%) far outweigh budget concerns (25%).]
The skills gap is roughly 2.4 times larger than the budget constraint. That challenges a common assumption about technology adoption. The barrier appears to be less about affording the tools and more about feeling equipped to use them effectively.
"Increased noise and less differentiation" ranks highest at 63%, above even the skills and expertise concerns noted earlier. That suggests teams are navigating two connected problems at once: they worry about contributing to sameness while also reporting they lack the skills to avoid it.
How teams learn may help explain why the gap persists. The most common approach is self-directed. Seventy-two percent report learning by doing as their primary method. Sixty-four percent follow people on LinkedIn. About 55 percent learn directly from ChatGPT itself. Only 38 percent use formal training programs or courses.
So 60% identify skills as a barrier, but fewer than 40% are using structured training to address it. The gap between recognizing the problem and accessing systematic solutions appears significant. Teams seem to be relying heavily on experimentation and peer learning rather than formal skill development.
[Chart: How marketing teams are learning AI. Primary learning sources (n=110): informal methods dominate, with 72% learning by doing while only 38% use structured training programs.]
This DIY learning culture might explain why adoption is happening faster than competence is developing. You can experiment your way to basic usage relatively quickly. Creating a ChatGPT workflow for content drafts doesn't require formal training. But building more sophisticated capability, the kind that moves beyond content creation into strategic applications, likely requires more structured development. The data suggests most teams aren't getting that.
Infrastructure gaps compound the skills problem:
~60% have no dedicated AI budget
~50% have no formal AI usage policies
~20% have no one formally owning AI adoption in their organization
The foundation for sophisticated AI use, both technical and organizational, isn't in place.
[Chart: AI skills being developed, by company size]
The sequence matters here. Skills enable infrastructure. Infrastructure enables governance. Governance enables scale. Budget helps at every stage, but it's not the binding constraint. Teams appear to be trying to solve this in reverse order, acquiring tools before building the capabilities and structures to use them effectively.
That might explain why the belief-execution gap we saw earlier persists. Investment is happening. Tools are being purchased. But the underlying conditions for effective use aren't being addressed in the right sequence.
This also clarifies why certain roles seem to execute better despite tighter constraints. Fractional leaders often operate with limited budgets and no formal training infrastructure. But they're forced to solve the skills problem immediately. Learn fast, apply directly, move to the next challenge. That compressed feedback loop, driven by economic necessity, might actually create an advantage in this environment.
It's fair to assume that throwing budget at AI adoption without first addressing capability and structure may not close the execution gap. The constraint isn't primarily financial; it's organizational and developmental. And those problems require different solutions than simply buying more tools.
Budgets, ROI, and who actually owns this
The relationship between AI investment and returns appears somewhat disconnected in the data. About 60% of teams report having no dedicated AI budget. Yet when asked about future spending, roughly 65% expect it to increase moderately over the next 12 months.
That combination suggests AI expenses are being absorbed into existing budget lines rather than tracked as a separate category. Teams are spending, but they're doing it through general software budgets, productivity tools, or individual subscriptions rather than through formal AI allocations.
What makes this pattern more interesting is what happens when you ask about returns. The data on ROI clarity is less definitive than the spending trajectory. Teams describe benefits in qualitative terms (faster content production, time saved on routine tasks, improved output quality), but measurable revenue impact appears harder to pin down.
This creates a somewhat unusual dynamic. Spending is increasing despite unclear returns. That's not necessarily irrational. Early-stage technology adoption often works this way. You invest before you can measure precisely because waiting for perfect measurement means falling behind. But it does suggest that current spending is driven more by competitive pressure and perceived necessity than by demonstrated ROI.
[Chart: About 83% plan to increase AI spend (moderately or significantly), while roughly two-thirds say ROI is not yet clearly measured]
The ownership picture reinforces this interpretation. About 20% of teams report that no one formally owns AI adoption in their organization. Where ownership does exist, it tends to sit with marketing leadership or technically inclined individuals rather than with a dedicated function or role. That fragmentation likely contributes to the measurement challenge. When responsibility is distributed, systematic tracking becomes harder.
Governance follows a similar pattern. Roughly half report having no formal AI usage guidelines. Without clear policies about what tools to use, how to use them, or what standards outputs should meet, usage tends to be ad hoc. Individual team members experiment with different tools. Practices vary by person. Institutional learning accumulates slowly because there's no central mechanism to capture and distribute what works.
[Chart: "Do you have formal AI usage guidelines or policies in place?" Roughly half report no formal guidelines.]
This matters for the ROI question. If every person is using AI differently, comparing results becomes difficult. You can't easily determine which approaches generate value and which don't. The lack of standardization makes measurement harder, which in turn makes it harder to justify increased investment through traditional budget processes.
Yet spending continues to rise. One way to interpret this is that the perceived risk of not adopting outweighs the discomfort of spending without clear metrics. Teams seem to be betting that AI capability will become essential, even if current returns are hard to quantify. That's a reasonable bet in a rapidly changing environment, but it does create tension with traditional budget discipline.
The teams that appear to be navigating this more effectively are those that have established some form of ownership and governance early. Not necessarily elaborate structures. Just clear accountability and basic standards. That seems to enable more systematic learning, which in turn makes it easier to identify what's working and allocate resources accordingly.