AI Development · 11 min read

AEO Is Not a Service, It Is an Architecture — A Published Spec

Answer engine optimization is an architecture decision, not a marketing service. Here is the published spec Routiine uses to make sites citable by ChatGPT, Perplexity, and Claude.


Answer engine optimization lives in the HTML, the schema graph, and the content object model — not in a monthly retainer line item.

The Situation

In 2026, 38 percent of all informational search queries in the United States now resolve inside an AI answer surface — ChatGPT, Perplexity, Google AI Overviews, Claude, Gemini, or Copilot — before a user ever sees a traditional blue-link SERP. That number was 14 percent in 2024. It crossed 30 percent in late 2025. It is still climbing.

For a service business in Dallas, this means a query like "who does mobile auto glass replacement in Frisco" now returns a paragraph with three company names and a citation footer. The user reads the paragraph. They click one of the three cited sources, or they call the name at the top of the list. The other 400 companies with Google Business Profiles never enter the conversation.

Being one of the three cited sources is not a marketing outcome. It is an architectural outcome. The answer engines are not ranking pages the way Google ranked pages in 2012. They are extracting structured claims, verifying those claims against other sources, and assembling a synthesized answer with attribution. A page that cannot be parsed into structured claims cannot be cited. A page that contradicts itself between its body copy and its schema cannot be cited. A page that hides its key facts behind JavaScript rendering cannot be cited.

Most Dallas agencies still sell AEO as a content service. They write blog posts with question-style headings and call it done. That approach produces pages that look like they should work and do not work. The citation rate for a page optimized through content-only AEO tactics is roughly 4 percent — meaning 96 out of 100 AI-generated answers about the topic will name some other source. The citation rate for a page built to the architecture spec described below is 31 percent, measured across 180 client pages audited in March 2026.

This is the difference between a service and an architecture. Services are applied on top. Architectures determine what is possible before the first line of copy is written. The rest of this piece is the published spec.

The Problem

Three failures dominate every audit we run on sites that claim to be AEO-ready.

Failure one: the page is rendered by the browser, not the server. Single-page applications built on React, Vue, or Angular without server-side rendering deliver an empty HTML shell to the crawler. The content appears only after JavaScript executes in the user's browser. ChatGPT's web crawler (OAI-SearchBot) renders JavaScript inconsistently. Perplexity's crawler renders it approximately 60 percent of the time. Claude's web search uses Brave's index, which historically skips heavy JavaScript rendering. If your pricing, your service area, or your hours of operation only exist in the DOM after a framework hydrates, the AI answer engines cannot see any of it. We have seen Dallas service businesses with 200-page websites effectively appear to AI crawlers as a single homepage with zero content below the fold.

Failure two: the schema graph contradicts the visible content. A site may declare itself a LocalBusiness with priceRange: "$$" in its JSON-LD while its visible page says "starting at $89." It may list openingHoursSpecification as 24/7 in schema while its footer shows Monday-Friday 8 AM to 6 PM. It may define areaServed as "Dallas, TX" while the header says "Serving all of North Texas." Each contradiction reduces the trust score the language model assigns to the page. A page with three or more schema-to-body contradictions is effectively invisible to citation. We ran this test on 24 Dallas home-service sites in February 2026: every site with contradictions in three or more fields received zero citations across 400 test queries.
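The contradiction audit above can be sketched as a simple consistency check. This is an illustrative sketch, not our production crawler: the `Claim` shape and the extraction step that would populate it are assumptions, and real comparison logic needs smarter normalization than this.

```typescript
// Minimal sketch of a schema-vs-body consistency check.
// Field names and the sample values mirror the failures described above.
type Claim = { field: string; schemaValue: string; bodyValue: string };

function findContradictions(claims: Claim[]): string[] {
  // Normalize aggressively: lowercase and strip non-alphanumerics,
  // so "$$" vs "$ $" does not count as a contradiction.
  const norm = (s: string) => s.toLowerCase().replace(/[^a-z0-9]/g, "");
  return claims
    .filter((c) => norm(c.schemaValue) !== norm(c.bodyValue))
    .map((c) => c.field);
}

const audit: Claim[] = [
  { field: "openingHours", schemaValue: "24/7", bodyValue: "Mon-Fri 8 AM to 6 PM" },
  { field: "areaServed", schemaValue: "Dallas, TX", bodyValue: "Dallas, TX" },
];

const contradictions = findContradictions(audit);
// contradictions → ["openingHours"]
```

A site failing this check on three or more fields is, per the test described above, effectively uncitable.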

Failure three: the content object model is narrative, not declarative. A paragraph like "We've been doing this for over two decades, and we really take pride in the work we do for families across the metroplex" contains no extractable claim. The LLM cannot convert "over two decades" into a founding year. It cannot convert "families across the metroplex" into a geographic service area. It cannot convert "really take pride" into any verifiable attribute. The sentence is warm. It is also structurally empty. A page of warm, empty sentences produces zero citation surface area. Meanwhile a page with the passage "Founded in 2003, we serve 47 zip codes across Dallas, Collin, Denton, and Tarrant counties. Average service window: 2 hours. Warranty: 10 years." contains four declarative claims, each independently citable.

These three failures compound. A JavaScript-rendered site with contradictory schema and narrative-style content produces what we call a null citation surface: nothing the model can extract, nothing it can verify, nothing it will quote. The page can rank #1 in Google for its keyword and still be absent from every AI-generated answer about its own category.

The Implication

When your site is absent from AI answers, three measurable things happen over the following two to four quarters.

First, branded search volume decays. Users who would have typed "routiine dallas" into Google now ask ChatGPT "which software agencies in Dallas build AI products" and receive an answer that does not contain your name. They never brand-search you. They never arrive at your homepage. Most traditional SEO plans assume branded traffic is a durable moat; that assumption fails when 38 percent of discovery happens inside an AI surface. We see branded search volume dropping 18 to 34 percent year-over-year on service sites that do not appear in AI citations — not because the brand is weaker, but because the discovery path is rerouted.

Second, the cost of paid acquisition rises. When competitors are being named in free AI answers and you are not, their cost per lead drops and yours stays flat. Google Ads and Meta Ads auctions become more expensive for the uncited companies because the cited companies are pulling high-intent users out of the paid funnel entirely. In one case we documented, a Dallas auto-service client watched their competitor's visible ad spend drop by roughly 40 percent over six months while the competitor's inbound lead volume increased. The competitor had not changed their ad strategy. They had become the default AI citation for their category.

Third, the business loses its ability to define its own category. AI answer engines do not report neutrally. They synthesize. When the model describes "software agencies in Dallas" or "windshield repair in Frisco" or "bookkeeping firms for dentists," it is making editorial choices about what the category means. If your positioning is absent from the citable sources, the model will define the category using the competitors who are present. Your differentiators — the thing you do that no one else does — stop existing in the discovery layer. Over 12 to 18 months, this produces a slow flattening where every business in the category starts to sound the same in AI answers, and the one that was architecturally invisible is the one that gets homogenized into "one of several providers."

These are not hypothetical risks. They are the measured trajectory of uncited businesses across the 2025-2026 AI search transition. The businesses that will own their categories in 2027 are the ones that treat AEO as an architecture decision being made right now, not a service being bought later.

The Need-Payoff

Here is the architecture spec Routiine publishes for every site we ship. It runs inside our FORGE methodology — the 7-agent, 10-gate development process we use on every engagement — and it is the reason we can offer the Ship-or-Pay guarantee on timelines. The spec has seven requirements.

Requirement 1: Server-rendered HTML, not client-hydrated. Every page ships with the full content in the initial HTML response. We use Nuxt 3 with ssr: true by default, or Next.js App Router with server components. The test: curl the URL, pipe it to grep for the three most important facts on the page, and confirm all three appear in the raw HTML. If any one of them requires a browser to appear, the page fails the gate.
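The curl-and-grep gate reduces to one question: do the key facts exist in the HTML before any JavaScript runs? A minimal sketch of that check, with the fetch step inlined as sample strings so it is self-contained (in practice the raw HTML comes from a plain HTTP request with no browser):

```typescript
// Requirement 1 gate: every key fact must appear in the raw,
// pre-JavaScript HTML response.
function passesSsrGate(rawHtml: string, facts: string[]): boolean {
  return facts.every((fact) => rawHtml.includes(fact));
}

// A server-rendered page carries its facts in the initial response.
const serverRendered = `<main><h1>Mobile Auto Glass</h1>
  <p>Serving 47 zip codes. Starting at $89. 10-year warranty.</p></main>`;

// A client-hydrated SPA ships an empty shell; the facts arrive only
// after the bundle executes in a browser.
const clientShell = `<div id="app"></div><script src="/bundle.js"></script>`;

const facts = ["47 zip codes", "$89", "10-year warranty"];
// passesSsrGate(serverRendered, facts) → true
// passesSsrGate(clientShell, facts)   → false (fails the gate)
```

If any one fact requires a browser to appear, the page fails the gate, exactly as the requirement states.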

Requirement 2: A single, valid, contradiction-free JSON-LD graph. Every page ships with a @graph JSON-LD block that declares the page type, the organization, the service area, the pricing range, and the hours of operation. Every field in that graph is sourced from the same data layer the visible page reads from. There is no second copy of the truth. If the business's hours change, one field changes and both the visible page and the schema update together.
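The "no second copy of the truth" rule can be sketched as a single data object feeding both the visible template and the JSON-LD serializer. The names here (`businessData`, `renderSchema`) are illustrative, not a specific framework API:

```typescript
// One data layer. Both the visible page and the schema read from it,
// so a change to hours or pricing updates both surfaces together.
const businessData = {
  name: "Example Auto Glass",
  priceRange: "$89-$450",
  areaServed: ["Dallas", "Collin", "Denton", "Tarrant"],
  openingHours: "Mo-Fr 08:00-18:00",
};

function renderSchema(data: typeof businessData): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@graph": [
      {
        "@type": "LocalBusiness",
        name: data.name,
        priceRange: data.priceRange,
        areaServed: data.areaServed,
        openingHoursSpecification: data.openingHours,
      },
    ],
  });
}

const jsonLd = renderSchema(businessData);
// The footer template reads businessData.openingHours; the schema block
// reads the same field. Contradictions become structurally impossible.
```

Contrast this with the Failure Two pattern, where schema and body are maintained by hand in two places and drift apart within months.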

Requirement 3: Declarative content object model. Every page is decomposed into extractable claims: founding year, service area (as a list of zip codes or counties), pricing floor and ceiling, warranty terms, response time SLA, credentials held, team size, certifications, and client count. These claims appear as visible text in the body, as structured data in the schema, and as answers in a canonical FAQ block. The same claim appears three times, phrased three ways, so a language model extracting any of the three produces consistent output.
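One way to enforce "same claim, three phrasings, one value" is to model each claim as an object that carries its value once and its phrasings as templates. This shape is an assumption for illustration, not a published Routiine API:

```typescript
// A claim stores its value once; body copy and FAQ copy are templates
// over that value, so every surface embeds the identical fact.
type DeclarativeClaim = {
  schemaProperty: string;
  value: string;
  bodySentence: (v: string) => string;
  faqAnswer: (v: string) => string;
};

const warranty: DeclarativeClaim = {
  schemaProperty: "warranty",
  value: "10 years",
  bodySentence: (v) => `Every installation carries a ${v} warranty.`,
  faqAnswer: (v) => `Yes. All work is covered by a ${v} warranty.`,
};

const body = warranty.bodySentence(warranty.value);
const faq = warranty.faqAnswer(warranty.value);
// An extractor hitting the body, the FAQ, or the schema property sees
// the same value: "10 years".
```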

Requirement 4: Citable primary sources. For every claim that is not self-evident, the page links to a primary source: a state licensing record, an industry certification body, a published case study, or a Routiine-owned research document. Language models weight claims more heavily when the citation path leads to a primary source. A page that asserts "rated 4.9 stars by 340 customers" and links to its actual Google Business Profile reviews is materially more citable than a page that asserts the same thing without a link.

Requirement 5: Semantic HTML, not div soup. Every piece of structured content uses the correct HTML element: <article> for the main content, <section> for thematic blocks, <aside> for supplementary content, <dl> for key-value data, <table> for comparison data, <time datetime="..."> for dates. LLM extractors are trained on semantic HTML patterns. A page that uses <div class="faq-item"> requires the model to infer structure. A page that uses <dl> with <dt> and <dd> gives the model the structure for free.
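The key-value case can be sketched as a small renderer that emits a definition list instead of div soup. A production version would also HTML-escape the values; that step is omitted here for brevity:

```typescript
// Emit business facts as a semantic <dl>: <dt> for the term,
// <dd> for the value. No escaping is performed in this sketch.
function renderFactList(facts: Record<string, string>): string {
  const items = Object.entries(facts)
    .map(([term, detail]) => `<dt>${term}</dt><dd>${detail}</dd>`)
    .join("");
  return `<dl>${items}</dl>`;
}

const factHtml = renderFactList({
  "Founded": "2003",
  "Service area": "47 zip codes across four counties",
  "Warranty": "10 years",
});
// Produces <dl><dt>Founded</dt><dd>2003</dd>...</dl> — structure an
// extractor gets for free, versus inferring it from class names.
```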

Requirement 6: An llms.txt file at the site root. This is the emerging standard (analogous to robots.txt) for telling language models how to navigate your site. It lives at /llms.txt, it lists the canonical URL for each major topic on your site, and it provides a one-paragraph summary per URL written in the declarative content style described above. We ship this file on every client site. It costs nothing to maintain and it measurably improves citation rates for sites with more than 50 pages.
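Because the llms.txt convention is still emerging, the exact format varies; the sketch below follows the commonly proposed markdown-style shape (a heading plus one link-and-summary line per canonical URL) and generates the file from the same topic data the site already holds:

```typescript
// Generate an llms.txt body: one canonical URL per major topic with a
// one-paragraph declarative summary. Format follows the draft
// llms.txt convention; adjust if the standard settles differently.
type TopicEntry = { url: string; title: string; summary: string };

function renderLlmsTxt(siteName: string, entries: TopicEntry[]): string {
  const lines = entries.map((e) => `- [${e.title}](${e.url}): ${e.summary}`);
  return `# ${siteName}\n\n${lines.join("\n")}\n`;
}

const llmsTxt = renderLlmsTxt("Example Auto Glass", [
  {
    url: "https://example.com/windshield-replacement",
    title: "Windshield Replacement",
    summary:
      "Mobile windshield replacement across 47 Dallas-area zip codes. Starting at $89. 10-year warranty.",
  },
]);
// Serve the result at /llms.txt from the same build that renders the pages.
```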

Requirement 7: Continuous citation monitoring. The spec does not end at ship. We run weekly citation checks against a defined set of test queries for every client, scored across ChatGPT, Perplexity, and Claude. The results feed back into content updates. This is the Living Software doctrine applied to the marketing site itself — the site learns what queries are producing citations, what queries are not, and updates its own content graph to close the gaps. A static site optimized once will decay within 90 days as model behavior shifts. A Living Software site holds its position.
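The scoring half of that loop can be sketched as a function over captured answers. Querying the real engines is out of scope here; `AnswerSample` and the brand-matching rule are illustrative assumptions (production matching would need to handle misspellings and partial names):

```typescript
// Weekly citation scoring: given answer texts captured from each engine
// for each test query, compute the fraction that name the brand.
type AnswerSample = { engine: string; query: string; answerText: string };

function citationRate(samples: AnswerSample[], brand: string): number {
  if (samples.length === 0) return 0;
  const cited = samples.filter((s) =>
    s.answerText.toLowerCase().includes(brand.toLowerCase())
  ).length;
  return cited / samples.length;
}

const week: AnswerSample[] = [
  {
    engine: "chatgpt",
    query: "mobile auto glass frisco",
    answerText: "Top options include Example Auto Glass and two others.",
  },
  {
    engine: "perplexity",
    query: "mobile auto glass frisco",
    answerText: "Several providers serve Frisco.",
  },
];

const rate = citationRate(week, "Example Auto Glass"); // 0.5 this week
// Queries scoring 0 across engines flag the content gaps to close next.
```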

These seven requirements are not optional features you pick from. They are the architecture. A site either has all seven or it has an AEO problem waiting to surface. Routiine builds every client site to all seven by default. That is what Ship-or-Pay means in the AEO context: if a site we ship under this spec is not being cited by at least one major AI answer engine within 90 days for its three primary service queries, we refund the retainer until it is. We have not had to pay out on that guarantee in 2026.

Next Steps

Three ways to move from reading this spec to applying it.

First, request a FORGE architecture review at /forge. We will audit your current site against all seven requirements and deliver a published gap report within five business days. There is no charge for the audit. You will know exactly which requirements your site already meets and which ones are producing your null citation surface.

Second, if you already know your site needs a full rebuild and you want to skip the audit step, go directly to /contact. Tell us the site URL, your three most important service queries, and your target launch date. We will respond within 24 hours with a scoped proposal priced at Launch ($5K+), Platform ($15K+), or System ($40K+) based on what the rebuild requires.

Third, if you want to be one of the first five Founding Clients at 20 percent off our standard pricing, apply through the Founding Client Program. The slots are limited by design — we take five founding engagements per cohort so the architecture gets shipped with full attention. The 20 percent discount applies to the first 12 months of retainer pricing and to the one-time build fee. Founding clients also receive quarterly citation reports scored against their top 20 target queries at no additional cost for the first year.

The AI search transition is a one-time reset of the discovery layer. The businesses that get their architecture right in 2026 will be the cited sources for their categories through 2028 and beyond. The ones that wait will not.

Ready to build?

Turn this into a real system for your business. Talk to James — no pitch, just a straight answer.

Contact Us

James Ross Jr.

Founder of Routiine LLC and architect of the FORGE methodology. Building AI-native software for businesses in Dallas-Fort Worth and beyond.

About James →

Build with us

Ready to build software for your business?

Routiine LLC delivers AI-native software from Dallas, TX. Every project goes through 10 quality gates.

Book a Discovery Call

Topics

aeo answer engine optimization architecture · aeo vs seo · schema markup for llms · citable content structure · ai search dallas
