More users are asking ChatGPT, Claude, and Perplexity for recommendations than are scanning Google for blue links. The job isn't to rank on page one anymore. It's to be the sentence an AI writes when someone types, "what's the best tool for..." Answer engine optimization (AEO) is that discipline. Here is what actually works, what's noise, and a 90-day plan to get cited.
What AEO actually is
Answer engine optimization is structuring your site and your content so that when a user asks a natural-language question to an AI, your product is named in the response. Not always as the top recommendation. Sometimes as a comparison point. Sometimes just as a link in the sources footer. But named, correctly, and in context.
The adjacent discipline is GEO (generative engine optimization), which focuses on AI-assisted search experiences inside Google, Bing, and specialized engines. AEO and GEO overlap on tactics and diverge on surfaces. Both replace classical SEO as the default growth motion for products whose users have stopped clicking blue links.
Why it matters more every month
The trend data is unambiguous. ChatGPT and Claude crossed one billion monthly queries in 2025 and kept climbing. Perplexity grew from a curiosity to a real source of B2B traffic. Google's AI Overviews now occupy the top of most informational result pages and routinely truncate the list of blue links to three entries. Users who ask AI for recommendations rarely return to classical search for the same question.
For a SaaS founder, the implication is specific: a sharp, useful landing page that ranked on page two in 2023 was still occasionally found. The same page, in 2026, is invisible to a user who asked ChatGPT instead. The AI either cites you or it doesn't. There is no page two in an AI answer.
The five signals answer engines use
None of OpenAI, Anthropic, or Perplexity publishes ranking weights. The following signals are observed patterns across thousands of AI answers, cross-referenced with what the vendors have said in public about their training and retrieval pipelines.
- Structural clarity. FAQ schema, clean H2/H3 hierarchy, lists with explicit items, tables with headers. Answer engines parse your page, not just read it. Structure makes the parse cheap.
- Direct-answer formatting. A question as a heading followed by a concise answer in the first sentence, with detail below. Answer engines literally lift this pattern because it maps cleanly to their output format.
- Freshness. A clearly dated article from this quarter outranks a three-year-old classic. Freshness is a proxy for "is this still true?" Date your articles, update them, and mark both dates in schema.
- Authoritativeness proxies. External mentions (Wikipedia, Hacker News, LinkedIn articles), a real author byline with a real bio and external sameAs links, domain age, and a public authorship history all feed the authoritativeness score.
- Crawlability by AI-specific agents. OAI-SearchBot, ClaudeBot, PerplexityBot, and Google-Extended are the user agents that matter. Blocking them in robots.txt gets you zero citations. Allowing them is necessary but not sufficient.
The llms.txt convention
llms.txt is a proposed standard, modeled on robots.txt, that lives at the root of your domain (example.com/llms.txt) and gives AI crawlers a curated map of your site. A sibling file, llms-full.txt, can include the full content of the pages you most want cited.
The status as of 2026: Claude and Perplexity honor the convention reliably. OpenAI does not officially commit to honoring it but is observed to cite llms.txt content at elevated rates. Google does not use it for ranking but cites it when surfacing AI Overviews. Either way, it is a 30-minute change that raises citation probability. There is no reason not to have one.
A minimum useful llms.txt includes:
- The site name and one-sentence description
- A short "what this product is and who it is for" block
- Pricing, with actual numbers, in plain text
- Links to the 10-15 most citable pages on the site (pillar articles, comparison pages, the FAQ)
- An updated timestamp at the top
llms-full.txt extends this with full summaries of the linked pages so an LLM can cite you without having to re-crawl everything. Our own llms-full.txt is at /llms-full.txt. Read it, copy the shape.
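The convention is not formally standardized, so treat the shape below as a sketch following the proposed llms.txt format (H1 name, blockquote summary, H2 sections of links). Every product name, price, and URL here is a placeholder:

```markdown
# ExampleCRM

> Lightweight CRM for two-person sales teams. Plans start at $29/month.

Updated: 2026-01-15

## Product

ExampleCRM is a CRM for founders who sell their own product. It replaces a
shared spreadsheet, not Salesforce.

## Pricing

- Starter: $29/month, one seat
- Team: $79/month, up to five seats

## Key pages

- [What is ExampleCRM?](https://example.com/what-is-examplecrm): direct answer plus FAQ
- [ExampleCRM vs. spreadsheets](https://example.com/vs-spreadsheets): comparison page
- [Pricing](https://example.com/pricing): plain-text prices, no sales wall
```

The same skeleton, with full page summaries pasted under each link, becomes your llms-full.txt.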
Structuring content for citation
A well-structured article for AEO looks almost boring compared to a classic SEO piece. That is the point. The article is easier for a model to parse, extract from, and cite.
| Element | AEO-optimized | Classic SEO |
|---|---|---|
| Opening | Lead paragraph that directly answers the headline question in 3-4 sentences | Hook with a story or a statistic |
| Headings | Question form: "What is X?" "How do I Y?" | Keyword-stuffed |
| Answers | First sentence after each H2 is the direct answer; detail follows | Detail-heavy, main point buried in paragraph three |
| Lists | Numbered when order matters; bulleted when it doesn't | Prose with list-form implied |
| Tables | Headers clearly named; compact | Rare; prose preferred |
| Schema | Article, FAQ, BreadcrumbList, Author; all required fields filled | Article + random fields |
| Dates | datePublished + dateModified in schema, visible on page | Published once, never updated |
| Length | 900-2,000 words, hitting the question hard | 2,500-5,000 words, padded |
Our entire blog is written to this shape. If you read any recent MarquIQ post, you'll notice the pattern: question-form H2, direct answer first sentence, details after. Not an accident.
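To make the schema row of that table concrete, here is a sketch of the JSON-LD payload such an article might embed. The headline, dates, and author ID are placeholders; fill in every required field for your page:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Article",
      "headline": "What is answer engine optimization?",
      "datePublished": "2025-11-03",
      "dateModified": "2026-01-15",
      "author": { "@id": "https://example.com/about/jane-doe#author" }
    },
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "What is AEO?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "Answer engine optimization is structuring content so AI assistants name and cite it in their answers."
          }
        }
      ]
    }
  ]
}
```

Note that both dates are present: a datePublished with no dateModified is how an updated article ends up looking stale to a model.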
The authorship layer
Answer engines weight authored content more heavily than anonymous blog posts. They are looking for signals that a real, accountable human wrote and stands behind the content. The cheap wins:
- Every article has a named author with a schema Author node, including a sameAs array pointing at LinkedIn, X, and GitHub.
- The author has a real bio page on your site (/about/[author-slug]) with 150-400 words of context, not a one-line "marketing team" placeholder.
- External mentions of the author: podcast guest spots, conference talks, contributed articles on major tech publications. These feed the entity graph.
- Wikipedia entries, where deserved, close the loop. Wikipedia is the single most cited source in AI answers, by a wide margin.
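The Author node in the first bullet can be sketched as a schema.org Person. The name, URLs, and company below are hypothetical; point sameAs at the author's real profiles:

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "@id": "https://example.com/about/jane-doe#author",
  "name": "Jane Doe",
  "url": "https://example.com/about/jane-doe",
  "sameAs": [
    "https://www.linkedin.com/in/janedoe",
    "https://x.com/janedoe",
    "https://github.com/janedoe"
  ],
  "jobTitle": "Founder",
  "worksFor": { "@type": "Organization", "name": "ExampleCRM" }
}
```

Reference this node by its @id from every Article you publish, so the entity graph connects the byline, the bio page, and the external profiles.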
Measuring AEO success
Classical analytics miss most AEO signal. You need three tracks:
- Referrer-based traffic. ChatGPT, Claude, and Perplexity all send referrer headers when a user clicks through from an answer. Filter your analytics for referrers containing chatgpt.com (plus the older chat.openai.com), claude.ai, and perplexity.ai. Track month-over-month growth. This is your most reliable lagging indicator.
- Brand mention rate in AI answers. Manually ask each engine the 5-10 questions your ideal user would ask. Count how often your brand appears. Track the rate monthly. This is your leading indicator. If the rate rises, the traffic follows within 30-60 days.
- Citation accuracy. When you are cited, is the fact correct? If ChatGPT says your product starts at 39 USD a month and it actually starts at 79, that is an active liability. Fix the source. Publish the corrected number prominently. Revisit monthly.
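If your analytics tool can't filter referrers by domain, the check is a few lines of code. A minimal sketch of the referrer filter, assuming you can export raw referrer strings; the domain list reflects the engines named above and should be re-verified as engines come and go:

```python
from urllib.parse import urlparse

# Referrer hosts that indicate an AI answer engine sent the visitor.
# chat.openai.com is kept alongside chatgpt.com to catch older links.
AI_REFERRERS = {"chatgpt.com", "chat.openai.com", "claude.ai", "perplexity.ai"}


def is_ai_referral(referrer: str) -> bool:
    """True if the referrer URL points at a known answer engine."""
    host = urlparse(referrer).netloc.lower()
    # Match the bare domain and any subdomain (e.g. www.perplexity.ai).
    return any(host == d or host.endswith("." + d) for d in AI_REFERRERS)


def count_ai_referrals(referrers) -> int:
    """Count AI-engine referrals in an iterable of referrer strings."""
    return sum(1 for r in referrers if is_ai_referral(r))
```

Run it over a month of exported referrers, store the count, and plot the month-over-month line.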
AEO-specific mistakes to avoid
- Blocking AI crawlers in robots.txt. Still the single most common mistake. Check yours today. OAI-SearchBot, ClaudeBot, PerplexityBot, and Google-Extended should be explicitly allowed.
- AI content that reads like AI. Answer engines have gotten substantially better at detecting low-signal AI slop. Content that reads like it was written by the same model the user is asking is down-weighted. Scrub em-dashes, ban AI tells, run a second editorial pass. See why AI content fails for the long version.
- Stale schema. A datePublished of 2021 on an article that was updated last week tells the model your content is stale. Fill in dateModified.
- Hiding your pricing. If the user asks, "how much does X cost?" and your site doesn't plainly say, the AI will invent a number or cite a competitor. Publish prices in plain text, not behind a "contact sales" wall.
- No author. A post with no byline is a post an answer engine cannot vouch for. Byline everything.
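For the first mistake, the fix is a few explicit stanzas in robots.txt. The tokens below are the crawler names each vendor has published (OpenAI's search crawler identifies as OAI-SearchBot); verify against current vendor documentation before shipping, since these names change:

```text
User-agent: OAI-SearchBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```

Google-Extended is a control token rather than a crawler, but allowing it is what opts your content into Google's AI surfaces.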
A 90-day AEO plan for a SaaS
If you're starting from zero, here is a realistic 90-day plan. Every step is doable by a solo founder in under 4 hours a week.
- Week 1-2. Crawlability + llms.txt. Audit robots.txt, unblock AI crawlers, ship /llms.txt and /llms-full.txt. Update schema on your top 10 pages to include FAQ blocks and clear Author nodes with sameAs links.
- Week 3-4. Direct-answer rewrite. Pick your 5 highest-value existing pages. Rewrite each so the first sentence after the H1 directly answers the page's question. Add FAQ schema with at least 4 questions each.
- Week 5-8. Question-shaped content. Publish 4-6 new articles, one a week, each answering a specific question your users ask. "What is X?" "How do I Y?" "X vs Y, which one for Z?" Clear H2 in question form, direct answer first sentence, details below.
- Week 9-10. External signal. One podcast guest spot, one guest post on a tech publication, one Hacker News submission that gets 50+ points. The goal is external entities linking to your author and domain.
- Week 11-12. Measurement baseline. Manually run your brand mention rate across ChatGPT, Claude, and Perplexity for 10 queries. Save the results. Re-run monthly. If the rate doesn't move by month 6, the problem is content quality, not AEO tactics.
AEO is still young enough that the dominant advantage goes to founders who execute the basics early. FAQ schema, clean authorship, dated content, and an llms.txt file will put you ahead of 90 percent of the SaaS market. The other 10 percent is where the real competition is, and that's a separate problem.