The One-Hour FORGE Audit — What We Actually Deliver
What Routiine delivers in the one-hour FORGE audit. Ten gates scored, top-20 action list, and a 48-hour turnaround. Free, no retainer, no pitch slides.
For founders, operators, and revenue leaders considering a $5,000–$40,000+ software engagement — and anyone who has sat through an hour-long "strategy call" that turned out to be a 50-minute sales pitch.
The Situation
The FORGE audit is a 60-minute call between the Routiine founder and the applicant's decision-maker, followed by a 48-hour asynchronous analysis, followed by a written deliverable — a 10-gate scorecard and a top-20 action list. The audit is free. There is no discovery fee, no deposit, no clawback if the applicant does not sign. The deliverable is the applicant's to keep regardless of whether they engage Routiine for the build.
We run the audit because the Founding Client Program requires it as a gating step — no applicant signs a Launch, Platform, or System engagement without completing the audit first. We also run it for non-applicants. If a founder has a business, a software problem, and 60 minutes, they can book the audit without committing to a proposal conversation. Roughly 40% of the audits we have run in the first 45 days did not lead to a Routiine engagement. Those applicants took the scorecard and the action list and ran with it. We are fine with that outcome. The audit's cost to us is 90 minutes of founder time. The goodwill from the 40% is a longer-cycle asset — referrals, future re-engagement, and reputational weight in Dallas's software buyer community. The math works.
What the audit actually is: a structured walk-through of the applicant's current operations against the 10 gates of our build system, scored on evidence the applicant presents during the call. The gates are — vision, scope, data, interface, integration, test, staging, sign-off, production, and measurement. Every engagement Routiine ships passes through the 10 gates in sequence. The audit scores how ready the applicant's business is at each gate. A score of 10/10 means the applicant is ready to ship today and probably does not need us. A score of 3/10 or below means the applicant needs pre-work before any software engagement makes sense. Most audits score between 5 and 7, which is the sweet spot where Routiine's work has the highest return. The rest of this post is the specification of what happens in the 60 minutes, what the deliverable contains, and why we run this as a free service instead of a paid discovery.
The Problem
Most software engagements fail for reasons that were visible before the contract was signed and that nobody on either side was structurally motivated to surface. The agency wanted the deal. The founder wanted the outcome. Both parties skipped the diagnostic because the diagnostic, done honestly, often kills the deal. Killing the deal is bad for the agency's revenue in the short term and bad for the founder's timeline in the short term, so both parties collude — informally, sometimes unconsciously — to not run it. The engagement starts. The hidden misalignment surfaces in week four. The project slips. The relationship sours. The founder writes a post on LinkedIn about how agencies do not deliver, and the agency writes an internal post-mortem blaming the client. Both are right about what happened and wrong about why.
The first structural problem is the discovery-as-sales-call pattern. A paid discovery session — $2,500 for a two-day sprint, $5,000 for a week-long assessment — is sold as a diagnostic, but it is priced and structured as a lead magnet. The agency's goal is to produce a document that recommends the agency's paid engagement. The document is designed to convert, not to diagnose. The founder reads a 40-page deck that says "you need our Platform tier, $55,000, starting Q3" and has no way to tell whether the recommendation is honest or revenue-motivated. The discovery is a trap wrapped in a document. We refuse to run that. Our audit is free, which eliminates the incentive to convert the document into a sale. We benefit when 60% of audits convert. We do not benefit when 100% convert — a 100% conversion rate means we are taking on engagements that should not have been signed, which kills our Ship-or-Pay margin.
The second structural problem is the absence of a standard framework. Most agency audits ask open-ended questions — "tell me about your business," "what are your goals," "what keeps you up at night." The answers are narrative. The narrative is impossible to score, compare, or prioritize across engagements. Two founders with the same underlying problem describe it differently, and the agency's recommendations diverge on vibes rather than on analysis. Our 10-gate framework produces a numeric score on every gate, and the scoring rubric is public and reproducible. Two founders with similar businesses get similar scorecards. Two founders with different businesses get different scorecards for specific, named reasons. The comparability is the point. It turns the audit from an art into a measurement.
The third structural problem is the action-list vagueness. Most audits deliver recommendations like "you need to strengthen your go-to-market" or "consider a product roadmap refresh." The recommendations are unfalsifiable — the founder cannot tell whether they completed them, whether completion was cheap or expensive, or whether completion moved the outcome. Our top-20 action list is specific — each action names a deliverable, a rough effort estimate in hours or dollars, and a testable outcome. Example actions from a recent audit — "Rewrite the auth provider config to use session cookies instead of local storage (4 hours, fixes intermittent logout bug)" or "Move the webhook handler from the main app to a background worker (8 hours, eliminates response-time spikes above 2 seconds)." The applicant can execute any action on the list without Routiine's involvement. The action list is a real deliverable, not a proposal dressed as a diagnostic.
The deepest structural problem is the 60-minute ceiling itself. Most agencies insist on 2–5 hours of diagnostic time before they can deliver anything useful. The insistence is usually wrong. A 60-minute structured conversation with the right decision-maker against a good framework produces more usable output than a five-hour open-ended workshop. The five-hour workshop is a scope-inflation mechanism — every hour surfaces a new concern, and the document at the end has to address all of them, which makes the document long and the recommendations shallow. The 60-minute audit has a forced prioritization discipline. We ask the six questions that matter and drop the rest. The applicant leaves the call knowing we heard them on the things that were actually load-bearing for the engagement.
The Implication
When an applicant skips a rigorous audit — either because the agency offers a soft version or because the applicant does not want to spend the 60 minutes — three costs land, and all three are priced into the eventual engagement. First, the scope is set on the wrong evidence. The applicant tells the agency what they think the project is. The agency writes the proposal against that description. The description is incomplete or wrong in specific, predictable ways — the applicant overestimates what users will adopt, underestimates what the backend integration will require, and omits the compliance constraints the legal team has not yet surfaced. The agency discovers all three during the build and either eats the cost or renegotiates the scope. Either outcome damages the relationship. The audit catches all three before signing in most cases — we ask the questions that surface them.
Second, the timeline is unreliable because the dependencies are unknown. An engagement that looks like a four-week Launch turns into a seven-week project because the applicant's payment processor was not actually set up, the domain was held in a personal Squarespace account the applicant could not access, and the content translations the applicant promised were not started. None of those dependencies are software problems. All of them are gating problems that an audit surfaces at gate 1 (vision), gate 2 (scope), and gate 5 (integration). We have seen every one of those three in the first 45 days. The audit caught all three before signing. The engagements started clean and shipped on time.
Third, the Ship-or-Pay guarantee becomes uneconomic if the audit is weak. We guarantee sprints against a scope that was defined against the audit's findings. If the audit was sloppy, the scope is fragile. If the scope is fragile, the sprints will miss. If the sprints miss, we do not get paid. The audit is the upstream gate that makes the entire guarantee possible. We run it carefully because our own P&L depends on it. Applicants who complete a careful audit close at 83%. Applicants who ask to skip or compress the audit close at 12%, and the ones who sign after compressing the audit produce sprint misses at 3x the rate of the full-audit cohort. The math of the guarantee requires the audit. It is not a formality.
Beyond the direct costs, a weak audit sets the applicant up to choose the wrong tier. The three tiers — $5,000 Launch, $15,000 Platform, $40,000+ System — are not interchangeable. Each tier is a different scope shape, a different sprint cadence, and a different post-ship relationship. An applicant who should have bought a Platform and instead bought a Launch ends up with 40% of the functionality they needed and a stalled project. An applicant who should have bought a Launch and instead bought a Platform ends up with overbuilt software and a cash-flow crunch. The audit's primary output for Routiine's sales motion is the tier recommendation. We are willing to recommend no tier — we have done so twice in 45 days when the applicant's business was not ready for custom software and would have been better served by an off-the-shelf product. That recommendation would have been impossible to deliver honestly at the end of a paid discovery. The fee would have biased us. The free audit does not.
The Need-Payoff
The one-hour audit runs in a fixed structure. Six minutes of orientation — the applicant names the business, the decision-maker, and the one thing they want to ship. Fifty-four minutes of gate-by-gate walk-through, at roughly five to six minutes per gate. The agenda is the 10 gates of FORGE, and we run them in the order they appear in the build system.
Gate 1 — Vision. The applicant states the business outcome the software is supposed to produce in one sentence. If they cannot produce the sentence, we score the gate 2/10 and pause to write the sentence together. The sentence is the load-bearing artifact for the whole engagement. Without it, no downstream decision can be scored against anything.
Gate 2 — Scope. The applicant names the three features that are non-negotiable and the three they are willing to cut. Scored on whether the applicant can produce six specific features, not categories. "User management" is a category. "SSO login with Google and Microsoft, no local-password fallback, and no team-based permissions in v1" is a specification.
Gate 3 — Data. The applicant describes the data the software will read, write, and preserve. Scored on whether they know the source of truth, the retention policy, and the export format. Most applicants score 4/10 here. The data questions are the single most frequently skipped item in a bad discovery, and they are the most expensive items to fix after a build starts.
Gate 4 — Interface. The applicant names the user types and the surfaces they will interact with. Web? Mobile? Embedded in a partner's product? API only? Scored on whether the surfaces are named and the user types are distinct.
Gate 5 — Integration. The applicant lists every external system the software will talk to — CRM, payment processor, email provider, analytics, customer-support tool, and any domain-specific systems. For each, scored on whether the applicant knows the API posture, the auth method, and the rate limits. Integration scores below 5 are the single most common reason we recommend pre-work before an engagement.
Gate 6 — Test. The applicant names the user-facing acceptance criteria for the launch version. "It works" is not an acceptance criterion. "A logged-in user can create a project, invite three teammates, and receive an email receipt within 10 seconds" is.
Gate 7 — Staging. The applicant describes the environment the software will be reviewed in before production. Scored on whether the applicant has, or will have, a stable URL for pre-launch review and whether the review audience is named.
Gate 8 — Sign-off. The applicant names the single human who approves the launch. Multiple approvers means no approvers. This gate is often the easiest to fix and the most frequently unasked in a typical agency discovery.
Gate 9 — Production. The applicant describes the hosting, domain, SSL, and DNS plan. Scored on whether the items are under the applicant's control and whether the access credentials are retrievable.
Gate 10 — Measurement. The applicant names the three metrics that will confirm the software worked. Scored on whether the metrics are instrumented today — and most of the time, they are not. Measurement is the most frequently undone gate. We include an instrumentation plan as part of every Platform and System engagement because the applicant cannot ship it alone.
Each gate is scored 0 to 10 based on evidence presented during the call, for a total score of 0 to 100. The scorecard is the first page of the deliverable.
The second page is the top-20 action list. Actions are drawn from the gate scores — every gate scoring 6 or below produces one to three actions. Actions are sorted by a return-on-effort ranking the founder performs after the call. High-return, low-effort actions land in the top five. High-return, high-effort actions land in the next ten. Low-return actions land at the bottom, and applicants can safely defer them. Each action has three fields — the deliverable, the rough cost or time, and the testable outcome. The list is 20 items because 20 is the number an applicant can realistically execute or delegate in a quarter. Longer lists produce decision fatigue. Shorter lists omit items that turn out to matter.
The third page is the tier recommendation. The recommendation names one of — "no tier" (the applicant should not buy custom software at this stage), Launch, Platform, or System — and justifies the recommendation against the gate scores. If the applicant scores below 50/100, the recommendation is "no tier yet, run the top five actions and re-audit in 30 days." At 50–70, the recommendation is usually Launch. Above 70, the project has a Platform or System shape, depending on the scope gate. The recommendation is specific about why, and the applicant can read it and decide independently of whether they work with Routiine.
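For readers who like the rules spelled out, the scoring, ranking, and tier logic described above can be sketched in a few lines of code. This is an illustrative model, not Routiine's actual tooling; the names, the effort-and-impact fields, and the return-on-effort ratio are assumptions made for the sketch.

```python
from dataclasses import dataclass

# The 10 FORGE gates, in build-system order.
GATES = ["vision", "scope", "data", "interface", "integration",
         "test", "staging", "sign_off", "production", "measurement"]

@dataclass
class Action:
    deliverable: str    # what gets shipped, e.g. "move webhook handler to a worker"
    effort_hours: float # rough effort estimate from the audit
    impact: float       # estimated return, higher is better (illustrative field)

def recommend_tier(scores: dict[str, int]) -> str:
    """Map a 10-gate scorecard (each gate 0-10) to a tier recommendation."""
    total = sum(scores[g] for g in GATES)  # total runs 0-100
    if total < 50:
        return "no tier yet - run the top five actions and re-audit in 30 days"
    if total <= 70:
        return "Launch"
    return "Platform or System, depending on the scope gate"

def rank_actions(actions: list[Action]) -> list[Action]:
    """Sort by return-on-effort so high-return, low-effort items land on top;
    cap at 20, the number an applicant can realistically execute in a quarter."""
    ranked = sorted(actions, key=lambda a: a.impact / a.effort_hours, reverse=True)
    return ranked[:20]
```

A scorecard of straight 6s totals 60 and lands in the Launch band, which matches the "most audits score between 5 and 7" sweet spot described earlier.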
The fourth page — optional and included only when Routiine would be a fit — is a proposal outline with the Founding Client pricing, the sprint schedule, and the Ship-or-Pay terms. If we cannot recommend ourselves in good faith against the audit findings, page four is replaced with "We are not the right studio for this project — we recommend a specific alternative." That replacement has happened in two of the first 30 audits. Both replacements named competing studios. Both founders have since referred other applicants back to us. The reciprocity cycle runs longer than the immediate sale.
Inside a Platform engagement, the audit becomes the first sprint's gate 1 document. The action list feeds the sprint backlog. The scorecard becomes the baseline against which the Living Software dashboard measures progress. The 60 minutes pays dividends across the full engagement, not just the signing moment. Clients who keep the scorecard and review it at the six-month mark consistently report that the action list predicted their actual post-launch pain points with 80%+ accuracy. The audit is not just a sales diagnostic. It is a project-planning artifact that keeps producing value long after the engagement closes.
Next Steps
The FORGE audit is a 60-minute call, a 48-hour turnaround, a scorecard, a top-20 action list, and a tier recommendation — free, regardless of whether you engage Routiine for the build. It is the fastest way to get a specific, measurable read on whether your business is ready for a custom software project.
Three paths are open. First, book a FORGE audit directly — the scheduling page offers three slots per week on the founder's calendar, and audits typically run within 10 business days of booking. Second, contact us if you have pre-audit questions — common questions include whether your project is too small for Routiine (below $3,000 in expected spend, probably yes) and whether we sign NDAs before the call (we do, on request). Third, apply to the Founding Client Program if you already know the tier you want — the audit is the gating step for program applicants, and the first five seats carry a 20% discount under Ship-or-Pay. Three seats remain. The fifth signing closes the program.
James Ross Jr.
Founder of Routiine LLC and architect of the FORGE methodology. Building AI-native software for businesses in Dallas-Fort Worth and beyond.