SEO vs. AEO: the evolution of online search


Search no longer lives only in SERP. It lives in your chat box.

SEO remains the operating system of organic visibility.

AEO (Answer Engine Optimisation) is the new layer that ensures your brand appears within the responses of AI assistants (ChatGPT, Gemini, Copilot, Perplexity) and in Google’s AI Overviews / AI Mode.

What is SEO, what is AEO

| Strategy | What it optimises | Results | Objective | Key metrics |
| --- | --- | --- | --- | --- |
| SEO | Website, content, authority, UX | Page listed in the SERP | Bring traffic and conversions | Impressions, CTR, clicks, conversions, revenue |
| AEO | On-site + off-site presence and citations | Brand within the response | Be a contextual reference | Citations/mentions, co-occurrences, share of citations, traffic attributable to AI |


It’s not SEO vs AEO.

It’s SEO + AEO.

SEO builds presence.

AEO projects that presence into the response.

What has changed with AI search

People ask long, contextual questions.
LLMs respond with text + sources (when using RAG/search).
Much of the decision-making happens without a click.
Google takes away traffic with AI Overviews, but remains the primary source for AI grounding.
Translation: optimising for rankings alone is not enough. You need to be cited.

First principles of visibility optimisation in LLMs

Where to invest: content that AI prefers to cite

What we see in the data and in practice:

Long, clear information guides, with the bottom line up front in each section.
Comparisons and ‘best X for Y’ with explicit criteria.
Updated and detailed institutional pages (About, Mission, Team, Security, Prices).
Studies/reports with proprietary data, methodology, and graphics.
Videos with transcripts (YouTube).
Accessible documents (PDFs, white papers) that explain, not just sell.
Less frequently cited: interactive tools without context, poor e-commerce listings, and pages dependent on JS to render critical content.

Mentions are the new backlinks

LLMs learn through co-mentions. Where and with whom you are mentioned shapes the context in which you appear in responses.

Reddit/Quora: spontaneous discussions legitimise categories.
Review sites: G2, Capterra, CNET, Wirecutter, PCMag, etc.
Specialised media in your industry.
YouTube with clean transcripts and chapters.
Classic example: Fjällräven is often cited in ‘school backpacks’ because it is repeatedly mentioned as such in reviews and forums, even though it manufactures outdoor backpacks like Patagonia. Result: it appears in this context in AI responses.

Tactic: map the most cited domains and pages in your category and work on your presence on them (content, editorial partnerships, PR, reviews, creators).

Fan-out queries: what they are and why being different helps

When you ask a chatbot a question (e.g., ‘What are the best backpacks for travelling in 2025?’), it does not go to Google and search for that exact phrase.

Instead, it:

Breaks the question down into several simpler questions.
Performs several separate searches.
Combines the information from these searches to construct the final answer.
In other words, it ‘spreads’ the question across different mini-searches → this is fan-out.

Real example of a fan-out query:
When you ask:

‘Best backpacks for travelling in 2025?’

The chatbot may search for:

‘best backpacks for travelling’
‘backpacks for long trips’
‘backpacks recommended by experts’
‘backpack reviews 2025’
‘backpack x y comparison’
Then it:

reads the results
combines the information
returns a single response.
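The fan-out flow above can be sketched as a toy pipeline. All function names and the hard-coded sub-queries and index are illustrative, not any assistant's real API:

```python
# Toy sketch of a fan-out query pipeline (all names and data are illustrative).

def fan_out(question: str) -> list[str]:
    """Decompose one broad question into simpler sub-queries.

    A real assistant uses an LLM for this step; we hard-code the example."""
    return [
        "best backpacks for travelling",
        "backpacks for long trips",
        "backpack reviews 2025",
    ]

def search(query: str, index: dict[str, list[str]]) -> list[str]:
    """Return the URLs the toy index holds for one sub-query."""
    return index.get(query, [])

def answer(question: str, index: dict[str, list[str]]) -> set[str]:
    """Run every sub-query and merge the cited sources."""
    sources: set[str] = set()
    for sub_query in fan_out(question):
        sources.update(search(sub_query, index))
    return sources

# A page ranking only for one niche sub-query still gets cited.
toy_index = {
    "best backpacks for travelling": ["bigbrand.com/top10"],
    "backpack reviews 2025": ["small-blog.com/honest-review"],
}
cited = answer("Best backpacks for travelling in 2025?", toy_index)
```

Note how `small-blog.com/honest-review` ends up among the cited sources without ranking for the head term at all; it only needed to answer one of the mini-questions.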

Why is this important?
Because the pages cited in the response do not need to rank first on Google for the original query.

If your content responds well to one of the mini-questions, it has a chance of being cited even without being at the top of the main SERP.

This is what opens up space for:

unique opinions
comparisons
pages that explain a topic well
fresh content
And that’s why copying the SERP is no longer enough!

In practice:

Only a small portion of the sources cited coincide with the ‘exact’ organic top 10 of the original query.
This opens up space for pages with different angles, formats, and opinions.
The old trick of ‘copying the SERP’ is no longer enough. Having a clear opinion and transparent criteria gives you a real chance of being cited.

Technique: what AI can read today

Today, AI can read very well what is already visible in the HTML of the page, right from the initial loading.

That is:

 

✅ It can easily read:

Normal text written on the page
Headings (H1, H2, H3…)
Short, clear paragraphs
Keywords and entities (brand names, products, places, etc.)
Schema Markup in JSON-LD rendered in HTML (not via JS)
Video transcripts (when they are on the page or on YouTube)
PDFs and accessible public documents
Internal and external links

⚠️ It has difficulty reading or cannot always read:

Content that only appears after JavaScript runs
(SPA pages, text that only appears after scrolling, ‘See more’ buttons)
Sites that load content dynamically
Content hidden within iframes or widgets

❌ Cannot (yet) interpret well:

Heavy JavaScript as the sole source of content
Content within images (unless it has clear and descriptive alt-text)
Complex data without context (lists, tables without explanation)
Poorly formatted schema or no links between entities
Practical guidelines:

Avoid total dependence on JavaScript for essential content; many AI crawlers still do not render JS.
Freshness: strategic pages should be reviewed and dated.
Impeccable heading hierarchy (single H1, clear H2s, H3s for detail).
Declarative, short sentences with entities (products, brands, categories, standards, locations).
Context throughout the document: reinforce the theme every few paragraphs.
Bonus: monitor hallucinated URLs (AI traffic hitting 404s). If it happens at scale (less relevant for small websites), create redirects or 'rescue' pages with real content.
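Monitoring AI traffic to 404s can start as a simple log filter. A minimal sketch, assuming parsed log tuples; the user-agent substrings are examples only, so check your own logs for the crawlers that actually hit you:

```python
# Minimal sketch: flag 404s hit by known AI crawlers in an access log.
# User-agent substrings below are examples, not an exhaustive list.

AI_AGENTS = ("GPTBot", "PerplexityBot", "ClaudeBot", "Google-Extended")

def ai_404s(log_lines: list[tuple[str, int, str]]) -> dict[str, int]:
    """Count 404 hits per URL where the user agent looks like an AI crawler.

    Each log line is a (url, status, user_agent) tuple."""
    counts: dict[str, int] = {}
    for url, status, agent in log_lines:
        if status == 404 and any(bot in agent for bot in AI_AGENTS):
            counts[url] = counts.get(url, 0) + 1
    return counts

sample = [
    ("/pricing", 200, "Mozilla/5.0 GPTBot/1.0"),
    ("/made-up-page", 404, "Mozilla/5.0 GPTBot/1.0"),
    ("/made-up-page", 404, "PerplexityBot/1.0"),
    ("/old-post", 404, "Mozilla/5.0 (regular browser)"),
]
hallucinated = ai_404s(sample)
```

URLs that surface repeatedly here are candidates for redirects or 'rescue' pages; one-off hits are usually noise.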

Schema Markup and tokenisation for LLMs

We have already discussed this in other projects, so here is the practical, LLM-friendly approach:

Render critical JSON-LD server-side (do not inject via JS).
A single graph per page with the essentials: Organisation + Website + WebPage + Article/FAQPage/HowTo/VideoObject as appropriate.
Stable IDs (@id) and internal links between nodes (e.g., isPartOf, about, mentions). This helps preserve relationships when text is ‘partitioned’ in tokenisation.
Named entities in the text and in the schema (same strings, same spelling).
No fluff in the schema. Informative, substantive, and short fields.
Microdata/RDFa vs JSON-LD: you can use microdata/RDFa to reinforce the proximity between text and markup, but the recommended standard is JSON-LD. If you use microdata, keep the semantics identical to the JSON-LD rendered in HTML.
Avoid flattening without links. Instead of dumping attributes, model relationships (e.g., Article.about points to Thing with name and sameAs).
Summary example (adapt to your CMS; render server-side):

 

{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://tua-marca.pt/#org",
      "name": "Tua Marca",
      "url": "https://tua-marca.pt/",
      "sameAs": ["https://www.linkedin.com/company/tuamarca"]
    },
    {
      "@type": "WebSite",
      "@id": "https://tua-marca.pt/#website",
      "url": "https://tua-marca.pt/",
      "publisher": {"@id": "https://tua-marca.pt/#org"}
    },
    {
      "@type": "WebPage",
      "@id": "https://tua-marca.pt/seo-vs-aeo/#webpage",
      "url": "https://tua-marca.pt/seo-vs-aeo/",
      "isPartOf": {"@id": "https://tua-marca.pt/#website"},
      "about": [{"@type": "Thing", "name": "SEO"}, {"@type": "Thing", "name": "AEO"}]
    },
    {
      "@type": "Article",
      "@id": "https://tua-marca.pt/seo-vs-aeo/#article",
      "headline": "SEO vs. AEO: the evolution of online search",
      "isPartOf": {"@id": "https://tua-marca.pt/seo-vs-aeo/#webpage"},
      "author": {"@id": "https://tua-marca.pt/#org"},
      "about": [
        {"@type": "Thing", "name": "Answer Engine Optimization"},
        {"@type": "Thing", "name": "AI Overviews"}
      ],
      "dateModified": "2025-11-06"
    },
    {
      "@type": "FAQPage",
      "@id": "https://tua-marca.pt/seo-vs-aeo/#faq",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Does AEO replace SEO?",
          "acceptedAnswer": {"@type": "Answer", "text": "No. AEO complements SEO…"}
        }
      ]
    }
  ]
}

 

 

This is “tokenisation-friendly” because:

there is an explicit link between nodes in the graph;
the entities appear in the text and in the schema;
everything is in the initial HTML, without relying on JS.
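The "entities in the text and in the schema" rule can be checked automatically. A small sketch, assuming you already have the page's JSON-LD and visible copy as strings; both helper functions are hypothetical names, not a library API:

```python
import json

def schema_entities(jsonld: str) -> set[str]:
    """Collect every 'name' value from a JSON-LD document, at any depth."""
    names: set[str] = set()

    def walk(node):
        if isinstance(node, dict):
            if isinstance(node.get("name"), str):
                names.add(node["name"])
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(json.loads(jsonld))
    return names

def missing_in_text(jsonld: str, visible_text: str) -> set[str]:
    """Entities declared in the schema but absent from the page copy."""
    lowered = visible_text.lower()
    return {n for n in schema_entities(jsonld) if n.lower() not in lowered}

doc = '{"@graph": [{"@type": "Thing", "name": "SEO"}, {"@type": "Thing", "name": "AEO"}]}'
gaps = missing_in_text(doc, "This guide compares SEO with other channels.")
```

Any entity reported here is a mismatch to fix: either add it to the visible copy or remove it from the markup, so text and schema stay consistent.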

 

Closing “entity gaps”

Practical steps:

List 10 to 20 critical topics in your category.
Compare your brand with 2-3 competitors in off-site mentions/citations (reviews, forums, media, YouTube).
Identify topics where your competitor is mentioned and you are not.
Closing plan:
Pillar page on your website with sections dedicated to the topic.
Honest comparison with clear criteria.
2-3 off-site appearances: review site, forum, video with transcript.
Light PR: request a comment from an expert in the media in the field.
Quarterly updates with new data.
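Steps 2 and 3 of the audit reduce to a set difference per topic. A sketch with illustrative mention data; in practice the inputs come from manual audits or a citation-tracking tool:

```python
# Sketch: find off-site sources citing a competitor on a topic but not you.
# The topics and sources below are made up for illustration.

def entity_gaps(
    yours: dict[str, set[str]], theirs: dict[str, set[str]]
) -> dict[str, set[str]]:
    """For each topic, list the sources citing a competitor but not you."""
    gaps: dict[str, set[str]] = {}
    for topic, competitor_sources in theirs.items():
        diff = competitor_sources - yours.get(topic, set())
        if diff:
            gaps[topic] = diff
    return gaps

your_mentions = {"crm software": {"g2.com"}}
competitor_mentions = {
    "crm software": {"g2.com", "reddit.com/r/sales"},
    "sales automation": {"capterra.com"},
}
plan = entity_gaps(your_mentions, competitor_mentions)
```

Each topic in the output feeds the closing plan above: a pillar page on-site plus targeted appearances on the missing sources.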

Case studies and playbooks

A) B2B SaaS (e.g., analytics platform)

Pages to create/reinforce: ‘About,’ ‘Security,’ ‘Pricing,’ ‘Integrations,’ ‘Competitor Comparisons,’ ‘ROI Studies.’
Off-site: Complete G2/Capterra, “best X for Y” articles, technical answers on Stack Overflow, “how to set up X in 10 minutes” video with transcript.
Schema: SoftwareApplication, FAQPage, HowTo, Product.
B) Specialised e-commerce (e.g. outdoor sports)

Editorial pages: ‘How to choose X’, ‘X vs Y’, ‘Size guide’, “Maintenance”, ‘Warranty’.
Off-site: relevant subreddits, comparative guides in niche media, hands-on YouTube.
Avoid: thin product sheets with only specs.
Schema: Rich product, FAQPage, VideoObject in reviews.
C) Consulting/services (e.g., marketing/RevOps)

Pillars: ‘Methodology,’ ‘Case studies,’ ‘Processes and SLAs,’ ‘Tools.’
Off-site: participation in podcasts, opinion articles in industry media, Q&A in professional communities, listings in trusted directories.
Prove authority with your own data (benchmarks).

Measurement: a framework for SEO + AEO

| Objective | SEO indicators | AEO indicators | Cadence |
| --- | --- | --- | --- |
| Visibility | Impressions, average positions | Number of citations/mentions per assistant (ChatGPT, Gemini, Perplexity), share per topic | Monthly |
| Traffic | Organic sessions | Traffic attributable to AI (parameters, 'AI bot' referrers), drops in keywords with AI Overviews | Monthly |
| Authority | Referring domains | Cited domains where the brand appears, presence in UGC/reviews/YouTube | Monthly |
| Quality | CTR, time on page, conversions | % of broken URLs (404) and resolutions; post-mention engagement (brand search lift) | Monthly |
| Freshness | % of pages updated in the last 90 days | Average age of mentioned URLs | Quarterly |

Tip: tag citable URLs with specific UTMs for AI, and create filters for 404s coming from ‘LLM/AI’.
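The tagging and filtering tip can be sketched in a few lines. The referrer hostnames and UTM values below are assumptions, not a standard; adjust them to whatever your analytics setup actually sees:

```python
from urllib.parse import urlencode

# Sketch: build AI-specific UTM links and classify referrers.
# Hostnames and UTM values are illustrative examples.

AI_REFERRERS = ("chat.openai.com", "chatgpt.com", "perplexity.ai", "gemini.google.com")

def ai_utm_url(base_url: str, assistant: str) -> str:
    """Append UTM parameters that isolate AI-assistant traffic in analytics."""
    params = urlencode({"utm_source": assistant, "utm_medium": "ai-referral"})
    return f"{base_url}?{params}"

def is_ai_referrer(referrer: str) -> bool:
    """Heuristic: does the referrer belong to a known AI assistant?"""
    return any(host in referrer for host in AI_REFERRERS)

link = ai_utm_url("https://tua-marca.pt/seo-vs-aeo/", "chatgpt")
```

The same `is_ai_referrer` heuristic can drive the 404 filter from the tip: segment error reports by AI referrers before deciding which broken URLs deserve redirects.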

Common mistakes and black-hat tactics

Dumping 20k AI-generated words into worthless markdown.
Hiding critical content behind JS.
Artificial entity stuffing.
Buying mentions on low-quality websites.
Living solely on-site and ignoring off-site.
Likely outcome: short term, noise; medium term, loss of trust and filtering.

FAQs

Does AEO replace SEO?
No. AEO without SEO is a pipe dream. SEO gives you indexability, quality, and sources. AEO ensures that these sources are chosen in the response.

How does AI choose sources?
A combination of training + RAG. In RAG, it uses indexes (Google/Bing) and pages with the best contextual signal for the question. Fresh, clear content with entities and on trusted domains has an advantage.

Should I rewrite the entire website for AEO?
No. Prioritise pillar pages, comparisons, institutional pages, studies, videos with transcripts and FAQs. Keep content fresh and schema clean.

Is microdata/RDFa enough, or do I need JSON-LD?
You can use both, but JSON-LD rendered in the initial HTML is the main route. If you use microdata, ensure consistency of entities and links.

If AI invents my URLs, what do I do?
Log 404 by AI referrer/UA, identify patterns, create redirects to real pages, and publish ‘expected’ content when it makes sense.

How long until I see an impact?
It depends on the model update cycle and the traction of your content/citations. Work in quarterly loops with freshness and mention goals.

Final checklist

Complete, clear pillar page with BLUF per section
Comparisons and ‘best X for Y’ with criteria
Robust and up-to-date institutional pages
Own study/benchmark with data and graphs
Video with transcript + chapters
Server-side JSON-LD, linked @graph and consistent entities
Essential content visible without JS
Off-site plan: Reddit/Quora + reviews + media + YouTube
AI 404 monitoring and fixes
Quarterly updates to citable pages

 

Update note
Updated on 7 November 2025 with AEO practices based on recent Ahrefs studies and practical testing. The discipline is rapidly evolving; this guide will be revised periodically.

Author

Rui Martins, Partner

    Rui Martins is a skilled professional with over 20 years of experience aligning Sales and Marketing, specialising in Digital Strategy and Distribution for B2B and B2C sectors, particularly in Hospitality and Tourism.
    At the Pestana Group, Rui’s experience included managing global online accounts and online distribution for the Group's European and American hotels. As Partner & Co-Founder of SmartLinks.pt, he has established the agency as a digital leader in Portugal. Naturally curious, he stays up-to-date with the latest trends and tools on the market. This enables him to analyse any business within minutes and quickly suggest the most suitable marketing strategy.
    Connect with Rui Martins on LinkedIn.

