The method
1. Built our ICP and persona foundations
Before creating any content, we needed to ensure the AI models we used had the right context about our audience. We compiled our persona documentation and customer insight library: real data on how our ICPs think, search, and evaluate agencies. This ensured that when we generated content through AI, it would be grounded in authentic audience behaviour, not generic language.
2. Created an AI-powered prompt simulation engine
Using those insights, we built a Claude project for persona synthesis: an intelligent prompt engine capable of thinking like our ICPs. This system allowed us to input a topic or industry and generate realistic, human-like search prompts that reflected how our target buyers would query AI engines. It effectively acted as a bridge between traditional keyword research and modern AI prompt simulation.
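To make this concrete, here is a minimal sketch of what a persona-driven prompt simulator can look like using the Anthropic API. The persona summary, model name, and topic are illustrative stand-ins, not Blend's actual project configuration:

```python
# Minimal sketch of a persona-driven prompt simulator via the Anthropic API.
# The persona summary, model name, and example topic are illustrative only.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

PERSONA_CONTEXT = """
You are simulating a VP of Marketing at a B2B company who is evaluating
agencies. Using the persona research provided, generate realistic
questions this buyer would type into an AI assistant.
"""

def simulate_prompts(topic: str, n: int = 10) -> str:
    """Ask the model for n realistic buyer queries about a topic."""
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        system=PERSONA_CONTEXT,
        messages=[{
            "role": "user",
            "content": f"Generate {n} search prompts this buyer would ask "
                       f"an AI engine about: {topic}",
        }],
    )
    return message.content[0].text

print(simulate_prompts("website redesign agencies"))
```

In a real Claude project, the persona research would sit in the project's knowledge base rather than a hard-coded string, but the shape of the workflow is the same: persona context in, simulated buyer prompts out.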
3. Set up tracking to measure AI visibility
Once our personas and prompt models were in place, we configured our AEO tool, Scrunch, to track the prompts that mattered most. We created a formulaic tracking framework with consistent prompt structures across our key industries. This gave us a scalable, repeatable way to monitor which prompts we appeared for and measure progress as we deployed more content.
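As an illustration of what a formulaic tracking framework can look like, the sketch below crosses a small set of consistent prompt templates with target industries to produce a repeatable prompt matrix. The template wording and industry list are hypothetical:

```python
# Hypothetical prompt-matrix generator: consistent templates crossed with
# target industries yield a repeatable set of prompts to register for
# tracking in an AEO tool.
PROMPT_TEMPLATES = [
    "best {industry} marketing agencies",
    "who should I hire for a {industry} website redesign",
    "top HubSpot partners for {industry} companies",
]

INDUSTRIES = ["SaaS", "fintech", "healthcare", "manufacturing"]

tracked_prompts = [
    template.format(industry=industry)
    for template in PROMPT_TEMPLATES
    for industry in INDUSTRIES
]

for prompt in tracked_prompts:
    print(prompt)  # each of these would be registered for monitoring
```

Because every industry shares the same templates, visibility results stay comparable across verticals as new content ships.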
4. Reverse-engineered how AI retrieves and cites content
Before producing content, we analysed how AI engines source and prioritise information. From hundreds of test prompts, we identified clear patterns: AI engines prefer structured, answer-first content with clear lists, entities, and supporting context.
Using these insights, we created standardised wireframes for AEO content: outlines designed for maximum AI retrievability, built so large language models could easily parse, cite, and reference Blend's content in answers.
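One way such a wireframe might be encoded is as a simple outline a writer (or a model) fills in section by section. The section names below are illustrative, not Blend's actual template:

```python
# Illustrative AEO wireframe: an answer-first outline that a writer or
# content model completes. Section names are hypothetical.
AEO_WIREFRAME = {
    "direct_answer": "2-3 sentence answer to the prompt, stated up front",
    "key_entities": "named services, tools, and industries the page covers",
    "supporting_list": "bulleted criteria, steps, or options with context",
    "proof_points": "case studies, metrics, and client results with links",
    "faq": "related questions phrased the way buyers actually ask them",
}

for section, purpose in AEO_WIREFRAME.items():
    print(f"{section}: {purpose}")
```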
5. Built an AI content engine for scale
Next, we developed dedicated Claude projects for content creation at scale. To ensure quality and consistency, we created an enhanced instruction guide, effectively our internal AEO playbook, which included the relevant wireframe structure, brand and tone guidelines, and a curated knowledge base of Blend's case studies, portfolio pieces, and proof points. This ensured the AI not only wrote in our voice but also embedded real data, links, and results, which are critical for citation and credibility in AI retrieval systems.
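A hedged sketch of how such an instruction guide might be assembled: the wireframe, tone guidelines, and a curated knowledge base combined into a single system prompt for the content project. The file names and directory layout are illustrative:

```python
# Hypothetical assembly of a content-project instruction guide: wireframe,
# tone guidelines, and a curated knowledge base merged into one system
# prompt. Paths and file names are illustrative only.
from pathlib import Path

def build_instructions(wireframe_file: str, tone_file: str, kb_dir: str) -> str:
    wireframe = Path(wireframe_file).read_text()
    tone = Path(tone_file).read_text()
    knowledge = "\n\n".join(
        p.read_text() for p in sorted(Path(kb_dir).glob("*.md"))
    )
    return (
        "Follow this wireframe exactly:\n" + wireframe +
        "\n\nWrite in this voice:\n" + tone +
        "\n\nCite only facts, links, and results from this knowledge base:\n" +
        knowledge
    )

system_prompt = build_instructions("wireframe.md", "tone.md", "case_studies/")
```

Keeping proof points in a versioned knowledge base, rather than relying on the model's memory, is what makes the embedded data, links, and results trustworthy enough to cite.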
6. Produced content with embedded technical optimisation
With our framework in place, we began generating and publishing content: tens of pages across key industries, each built with precise, persona-driven copy, inline case study references, and embedded schema markup. This further strengthened our site's structured data profile and helped AI systems better understand the relationships between Blend's services, industries, and results.
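FAQPage markup is one common schema pattern for answer-first pages. Below is an illustrative JSON-LD snippet built as a Python dict; the question and answer text are placeholders, not content from Blend's pages:

```python
# Illustrative FAQPage JSON-LD (schema.org) for an answer-first page.
# The question and answer text are placeholders.
import json

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What does a HubSpot website redesign involve?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "A typical redesign covers strategy, UX, build, and "
                    "migration; see linked case studies for real outcomes.",
        },
    }],
}

# Embedded in the page head as: <script type="application/ld+json">...</script>
print(json.dumps(faq_schema, indent=2))
```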
7. Monitored, measured, and iterated continuously
Once live, all content and schema were indexed for Scrunch tracking and cross-checked against AI results. We monitored which prompts began surfacing Blend content or citations, traffic from AI referral sources in HubSpot, changes in self-reported attribution mentioning AI tools, and pipeline generated from those leads. These signals provided concrete data on the direct link between AEO optimisation and measurable AI visibility.
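For the traffic signal specifically, AI referrals can be separated from other sources by matching session referrers against a list of known AI assistant domains. The sketch below is illustrative; the domain list is a reasonable starting point but needs maintaining as new AI surfaces appear:

```python
# Sketch of classifying session referrers as AI-assistant traffic before
# reconciling with CRM attribution. The domain list is illustrative.
from urllib.parse import urlparse

AI_REFERRER_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "perplexity.ai",
    "gemini.google.com", "copilot.microsoft.com", "claude.ai",
}

def is_ai_referral(referrer_url: str) -> bool:
    """Return True if the referrer host belongs to a known AI assistant."""
    host = urlparse(referrer_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in AI_REFERRER_DOMAINS)

sessions = [
    "https://chatgpt.com/",           # AI referral
    "https://www.perplexity.ai/",     # AI referral (subdomain match)
    "https://www.google.com/search",  # classic organic
]
for url in sessions:
    print(url, "->", "AI" if is_ai_referral(url) else "non-AI")
```

Pairing this referrer classification with self-reported attribution fields catches the sessions AI assistants send without a referrer at all.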