AI has quietly moved from experiment to everyday tool in marketing: roughly 92% of marketing professionals now use AI in their workflows. For brands this has translated into faster campaign launches, greater operational efficiency and measurable lifts in engagement. Yet at the same time shoppers are growing wary — especially about how their personal data is handled — creating a widening personalisation gap that threatens to undermine the long-term value of AI-driven marketing.

Why marketers love AI
Marketers report real, practical wins. Around 71% say AI speeds up campaign creation, saving on average more than two hours per campaign, while 72% say it frees teams to focus on strategic and creative work rather than repetitive tasks. Companies also see tangible business results: majorities report higher customer engagement and improved loyalty after adopting AI. Investment intentions remain bullish, with 64% planning to increase AI marketing spend next year.
Taken together, these statistics show AI delivering on the classic promises of automation: scale, efficiency and the ability to personalise at speed.
Why consumers are pulling back
But customers aren’t responding in lockstep. Surveys show a striking disconnect: many shoppers feel brands still “don’t get them” (the share of consumers who feel misunderstood has risen from 25% to 40% in a year), and 60% say the marketing emails they receive are mostly irrelevant. Most concerning of all, trust in AI’s handling of personal data has fallen: 63% of consumers globally say they do not trust AI with their information (up from 44% in 2024), with UK distrust particularly acute at 76%.
That decline in trust threatens the very currency of personalised marketing: access to accurate, consented customer data. When consumers hesitate to share or when they disengage because recommendations feel intrusive or off-base, the ROI of AI personalisation can evaporate.
What’s driving distrust
Several forces are converging to erode confidence:
- Opacity: Consumers don’t understand how AI models use their data or why a particular recommendation appeared.
- Irrelevance: Poorly tuned models generate irrelevant or repetitive messaging, undermining perceived value.
- Privacy fears: High-profile data incidents and vague data practices make people cautious.
- Ethical anxieties: Concerns about manipulation, profiling and bias raise moral questions about automated persuasion.
Regulatory change is already reshaping behaviour. In the wake of new rules such as the EU AI Act, many marketers — especially in the UK — are retooling how they use AI; 37% of UK marketers report having changed their AI approach, and 44% say their AI use is now more ethical. At the same time, 28% worry that strict regulations could stifle creativity, highlighting the tension between guardrails and innovation.
What success looks like: people-first AI
Examples from the field show the path forward. Some brands treat AI as an amplifier of human creativity rather than a replacement. For instance, creative-focused companies use tools to free staff for higher-level strategic tasks, while retailers harness predictive models to retain customers: one retailer used AI to identify likely churners and re-engaged nearly half (48%) of those customers within three months. The common thread is utility: AI wins trust when it solves a clear customer problem and is applied with sensitivity.
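To make the churn-prevention pattern concrete, here is a minimal sketch of scoring customers for churn risk and flagging the riskiest for a win-back campaign. The feature names, the tiny training set, the model choice and the 0.7 cut-off are all illustrative assumptions, not a description of the retailer's actual system.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical behavioural features per customer; names and values are
# illustrative assumptions for this sketch.
history = pd.DataFrame({
    "days_since_last_order": [3, 120, 45, 200, 10, 90, 150, 5],
    "orders_last_90d":       [6,   0,  2,   0,  5,  1,   0, 7],
    "support_tickets":       [0,   2,  1,   3,  0,  1,   2, 0],
    "churned":               [0,   1,  0,   1,  0,  1,   1, 0],  # observed past outcome
})

X = history.drop(columns="churned")
y = history["churned"]

# Train a simple classifier on past behaviour and outcomes.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Score the customer base and flag likely churners for re-engagement.
history["churn_risk"] = model.predict_proba(X)[:, 1]
at_risk = history[history["churn_risk"] > 0.7]  # threshold is a tunable assumption
print(at_risk[["days_since_last_order", "churn_risk"]])
```

In practice the flagged segment would feed the win-back campaign, and the threshold would be tuned against the cost of unnecessary outreach.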
Practical steps to close the trust gap
To reconcile aggressive AI adoption with rising consumer concern, marketers should pursue a trust-first strategy that includes:
- Transparency and consent: Be explicit about what data you collect, why, and how AI uses it. Offer easy opt-outs and granular controls.
- Data minimisation: Collect only what’s necessary to deliver clear, demonstrable value. Less data means less risk and often better customer acceptance.
- Human oversight: Keep people in the loop for high-impact decisions; use AI for suggestion, humans for judgment.
- Explainability: Build simple explanations for recommendations so customers understand why a product was suggested (see the sketch after this list).
- Rigorous measurement: Track not just short-term engagement metrics but long-term trust and retention signals.
- Ethics-by-design: Embed fairness, privacy and safety into model development and campaign workflows.
- Cross-functional governance: Align legal, security, product and marketing teams to enforce consistent standards.
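To make the explainability point above concrete, here is a minimal sketch of attaching a human-readable reason to each recommendation at the moment it is generated. The signal names and rule set are assumptions for illustration; a real system would derive the reasons from the model's actual features.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    product: str
    reason: str  # shown to the customer alongside the suggestion

def explain(product: str, signals: dict) -> Recommendation:
    # Map the dominant signal behind a suggestion to plain language.
    # The signal names here are illustrative assumptions.
    if signals.get("repurchase_due"):
        reason = "You usually reorder this around now."
    elif signals.get("viewed_recently"):
        reason = "Based on items you viewed this week."
    else:
        reason = "Popular with shoppers like you."
    return Recommendation(product, reason)

rec = explain("Espresso beans 1kg", {"repurchase_due": True})
print(f"{rec.product}: {rec.reason}")
# Espresso beans 1kg: You usually reorder this around now.
```

Even a one-line reason like this gives customers a handle on why they are seeing a suggestion, which is most of what "explainability" means at the point of sale.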
Regulation and innovation: finding the balance
Policymakers face a delicate job: protecting consumers without freezing creativity. A pragmatic approach is to prioritise outcomes-based rules and sandbox environments where companies can test responsible AI in real-world settings. Standards and certification for data handling and model safety can help restore public confidence while allowing marketers room to innovate.

Bottom line
AI is already a workhorse for marketing — it speeds work, surfaces insights and can materially improve performance. But the technology’s long-term value depends on rebuilding consumer trust. That means shifting from indiscriminate personalisation to purposeful, transparent experiences that respect privacy and provide clear value. Brands that make trust a strategic priority will not only avoid reputational risk; they’ll earn the richer, sustainable relationships that make AI-powered marketing truly worthwhile.