
WindowsUSA — Paid Media Scenario Planner

Pilot Mar–Oct 2026 · Google Search · Meta · Brand/Competitor · PMax

Agency-Controllable Levers (Steady-State)

Blended performance at maturity (Sep–Oct). Includes Google Search non-brand, brand/competitor, PMax, and Meta. Earlier months interpolate via learning curve.

CVR (Click → Lead): 11.0%
Set Rate: 33.0%

Adjustable Assumptions

🔒 Locked Rates — W.USA Operational

Dependent on W.USA sales org, scheduling, driver availability, and call center operations.

🔒 Set → Pitch Rate: 48.0%
🔒 Pitch → Close (Sales %): 38.0%
🔒 Close → Install %: 85.0%

Steady-State Monthly Funnel (Oct)

Installed Cost of Marketing (COM)

COM slider: 0%–100% range, with the 15% target marked.

CPA Stack: Scenario vs. Max Allowable

8-Month Pilot Projection — Mar–Oct 2026

Pilot Summary — 6-Month Active Spend (May–Oct 2026)

Key Definitions

ROAS (Return on Ad Spend)
Installed Revenue ÷ Ad Spend. Measures media efficiency only — excludes agency fees. A 6.5× ROAS means every $1 of ad spend generates $6.50 in installed revenue.
ROMI (Return on Marketing Investment)
Installed Revenue ÷ Total Marketing Cost (ad spend + agency fees). The all-in efficiency metric. Always lower than ROAS because the denominator includes the fixed agency retainer.
Installed COM (Cost of Marketing)
Total Marketing Cost ÷ Installed Revenue, expressed as a percentage. The inverse of ROMI. 15% COM = 85¢ of every revenue dollar remains before COGS. At pilot spend levels with a fixed retainer, 15% COM requires very high ROAS. As spend scales and the fixed fee amortizes, COM drops.
Blended CPC
Weighted average cost-per-click across all channels (Google Search non-brand, brand/competitor, PMax, and Meta), factoring in each channel's spend share and CPC. Google non-brand runs $18–30; brand/competitor $3–8; PMax $8–15; Meta $3–8. With ~30% of budget on Meta and some brand/PMax, blended CPC lands $8–12.
Blended CPL (Ad Spend) vs. CPL (All-In)
Ad Spend CPL = Ad Spend ÷ Leads. The media-only cost to generate one lead. All-In CPL = (Ad Spend + Agency Fees) ÷ Leads. Includes the agency retainer. Both are reported in this tool.
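The definitions above reduce to a few lines of arithmetic. A minimal sketch in Python — the spend, fee, lead, and channel-mix figures here are hypothetical placeholders, not the planner's inputs:

```python
# Minimal sketch of the metric definitions; all inputs are hypothetical.

def blended_cpc(mix):
    """Weighted-average CPC from (spend_share, cpc) pairs."""
    return sum(share * cpc for share, cpc in mix)

def metrics(ad_spend, agency_fees, leads, installed_revenue):
    cpl_media = ad_spend / leads                    # Ad Spend CPL
    cpl_all_in = (ad_spend + agency_fees) / leads   # All-In CPL
    roas = installed_revenue / ad_spend             # media-only efficiency
    romi = installed_revenue / (ad_spend + agency_fees)
    com = 1 / romi                                  # Installed COM (inverse of ROMI)
    return cpl_media, cpl_all_in, roas, romi, com

# Hypothetical mix: 35% non-brand @ $20, 15% brand @ $5, 20% PMax @ $11,
# 30% Meta @ $5 → blended CPC near $11.45, inside the $8–12 band.
mix = [(0.35, 20.0), (0.15, 5.0), (0.20, 11.0), (0.30, 5.0)]

# Hypothetical month: $100k ad spend, $15k retainer, 1,000 leads,
# $650k installed revenue → ROAS 6.5×, ROMI ≈ 5.65×, COM ≈ 17.7%.
cpl_media, cpl_all_in, roas, romi, com = metrics(100_000, 15_000, 1_000, 650_000)
```

Note how COM and ROMI are exact inverses, and why ROMI is always below ROAS: with a fixed retainer, COM falls as spend scales only because the fee amortizes over more revenue.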
Steady-State vs. Blended Period
Steady-state = performance at campaign maturity (modeled as Oct, month 6 of active spend). This is where optimized audiences, proven creative, tested landing pages, and enhanced conversion signals compound into peak efficiency.

Blended period = the average across all active months (May–Oct), including early learning-phase months. Blended efficiency will always read worse than steady-state because the average includes ramp-up costs. Blended is the honest pilot number; steady-state is what you are building toward and what informs the scaling decision.
Set Rate %
Percentage of leads that convert to scheduled appointments. Partially agency-controllable through lead quality — higher-intent leads from better targeting and landing pages set at higher rates. Also partially operational — depends on W.USA call center speed-to-lead and scheduling capacity.
Set → Pitch, Sales %, Install %
Downstream operational rates controlled by W.USA. Set → Pitch (48%) depends on driver/estimator availability. Sales % (38%) depends on in-home sales team effectiveness. Install % (85%) reflects the ~15% of signed jobs that cancel before completion. These are modeled at client-provided baselines.
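The downstream rates chain multiplicatively. A short sketch using the locked baselines above and an illustrative 33% set rate:

```python
# Funnel multiplication sketch. Pitch/close/install rates are the locked
# W.USA baselines quoted above; the 33% set rate is illustrative.

def funnel(leads, set_rate, pitch_rate=0.48, close_rate=0.38, install_rate=0.85):
    sets = leads * set_rate
    pitches = sets * pitch_rate
    sales = pitches * close_rate
    installs = sales * install_rate
    return sets, pitches, sales, installs

# 1,000 leads at a 33% set rate → 330 sets, ~158 pitches, ~60 sales,
# ~51 completed installs.
sets, pitches, sales, installs = funnel(1_000, 0.33)
```

Because the stages multiply, a proportional gain at any single stage lifts installs by the same proportion — which is why small-looking rate improvements compound.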
Learning Curve Model
Campaigns don't launch at full efficiency. This tool models a 6-month ramp from conservative starting assumptions to steady-state targets. The curve reflects the compounding effect of audience signals, bid strategy maturation, creative testing, and conversion data accumulation. The biggest performance inflection occurs in months 4–5 (Aug–Sep) as enhanced conversion signals feed back into algorithms.
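One way to sketch that ramp is a logistic curve rescaled so month 1 sits at the conservative start and month 6 at the steady-state target. The midpoint and steepness below are illustrative guesses, not the planner's fitted parameters:

```python
import math

# Illustrative S-curve ramp over the 6 active months. A midpoint near 4
# places the steepest improvement in months 4–5, matching the inflection
# described above. Parameters are hypothetical.
def ramp(month, start, steady, midpoint=4.0, steepness=1.2):
    logistic = lambda m: 1 / (1 + math.exp(-steepness * (m - midpoint)))
    # Rescale so month 1 returns `start` exactly and month 6 returns `steady`.
    t = (logistic(month) - logistic(1)) / (logistic(6) - logistic(1))
    return start + t * (steady - start)

# e.g. CVR ramping from a conservative 8% start toward a 12.5%
# steady-state target (both endpoints hypothetical):
cvr_by_month = [ramp(m, 0.08, 0.125) for m in range(1, 7)]
```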

Strategic Analysis: What Each Side Controls

AIMCLEAR Agency-Controllable Levers

The agency's primary impact is at the top of the funnel: traffic quality, lead volume, and cost-per-lead. Every downstream metric is a multiplier on what enters the funnel — improvements here cascade through the entire model.

  • Landing page optimization and CRO. The single highest-leverage move. Converting 12% of clicks to leads instead of 8% delivers 50% more leads at the same spend. Purpose-built landing pages per market, quiz-funnel architectures that qualify intent before form submission, and continuous testing of headlines, CTAs, and social proof.
  • Audience targeting and geo-fencing. Serving ads only in zip codes with W.USA driver capacity and installation bandwidth. Prevents wasted spend on un-serviceable leads and improves downstream set rates. Demand-trend overlays (permit data, home sale velocity, housing age) sharpen targeting.
  • Competitive positioning on non-brand queries. Identifying high-opportunity, low-competition queries — long-tail variations, comparison terms, problem-aware searches — where CPC pressure is lower and intent is higher. Dedicated landing pages per query cluster.
  • Creative and copywriting improvements. Especially on Meta, where creative fatigue is the primary efficiency killer. A structured testing cadence (new concepts every 2–3 weeks) with hypothesis tracking prevents performance decay.
  • Enhanced conversion signals. Feeding offline conversion data (sets, pitches, sales, installs) back into Google and Meta algorithms. This shifts automated bidding from optimizing for form fills to optimizing for leads that actually become revenue. Typically takes 60–90 days of data — the biggest efficiency gains appear in months 4–6.
  • Strategic channel allocation. Shifting spend between Google Search, Meta, brand/competitor, and PMax based on per-channel performance by market. Meta delivers cheaper leads; Google delivers higher-intent leads. The optimal blend depends on W.USA's close rates by lead source, which will become visible once offline data flows back.

W.USA Client-Controlled Levers

Everything from set-to-pitch onward is driven by W.USA's operational capacity. The agency delivers leads — conversion to revenue depends on the client's infrastructure.

  • Speed-to-lead. The single most impactful operational variable. Leads contacted within 5 minutes set at 2–3× the rate of leads contacted after 30 minutes. Requires adequate call center staffing and CRM with real-time routing.
  • Driver and estimator availability by market. If a lead can't get a pitch appointment within 5–7 days, set-to-pitch rate drops significantly. Geographic expansion requires personnel in those markets.
  • Sales team training and scripting. The 38% close rate is modeled as constant, but trained reps with refined scripts, competitive rebuttals, and financing options can push this to 42–45%. Even a 4pp improvement at the close stage creates meaningful downstream impact on ROAS.
  • Drip and nurture funnels. Automated email/SMS sequences that re-engage leads at 7, 14, and 30 days can recapture 5–10% of initially lost opportunities — effectively free volume.
  • Install completion rate. Reducing the ~15% cancellation rate between close and install through better expectations-setting, faster scheduling, and proactive communication about timelines.

Sequence of Events by Scenario

Good · ~5.5× ROAS

Achievable if fundamentals execute without major breakthroughs. Requires: dedicated landing pages live by May, basic geo-targeting to W.USA's top 5–8 markets, standard conversion tracking, and a functional lead handoff process. On W.USA's side, current operational performance holds steady. The agency drives blended CPC to ~$10 through channel mix optimization (Meta at 30% of budget pulling the blend down) and CVR to ~10% through LP optimization.

Better · ~6.5× ROAS

Requires everything in Good plus: quiz-funnel LP architecture raising both CVR and set rate, enhanced conversion signals feeding back into bid strategies (inflection around August), and tighter geo-fencing to high-demand zip codes. CPC drops to ~$9 as algorithms optimize toward higher-value signals, CVR reaches 11%, set rate improves to 33%. W.USA needs consistent speed-to-lead under 10 minutes and no capacity bottlenecks in active markets.

Best · ~7.5× ROAS

Requires everything in Better plus: offline conversion data at the sale/install level (not just lead), mature creative testing on Meta, aggressive competitive conquest on low-CPC non-brand queries with dedicated LPs per query cluster, and budget flexibility to shift spend toward highest-performing markets in real time. CPC drops to ~$8, CVR hits 12.5%. W.USA needs speed-to-lead under 5 minutes, a drip nurture sequence for unconverted leads, and ideally improved close rates through enhanced sales training.


On the 15% COM Target

Bayesian Posterior Probability Analysis

This analysis uses Bayes' theorem to estimate the probability that WindowsUSA's paid media pilot achieves each ROAS target, given observable evidence from the channel mix, historical conversion benchmarks, and the planned optimization trajectory. The posterior updates a conservative prior with the likelihood of observed conditions producing each outcome.

P(ROAS target | evidence) = P(evidence | ROAS target) × P(ROAS target) / P(evidence)

Where P(ROAS target) is our prior belief in achieving each target before campaign data, P(evidence | ROAS target) is the likelihood of the channel mix, CPCs, and conversion rates we observe given that each target is achievable, and P(evidence) is the normalizing constant across all outcomes.
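The update itself is a few lines of code. A minimal sketch — the priors and likelihoods below are placeholder values for illustration, not the figures this analysis uses:

```python
# Bayes update sketch: P(H|E) ∝ P(E|H) · P(H), normalized over all
# hypotheses. All numbers are hypothetical placeholders.

def posteriors(priors, likelihoods):
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(joint.values())  # P(E), the normalizing constant
    return {h: j / evidence for h, j in joint.items()}

post = posteriors(
    priors={"good": 0.50, "better": 0.35, "best": 0.15},
    likelihoods={"good": 0.60, "better": 0.50, "best": 0.30},
)
# Posteriors sum to 1; with these placeholder inputs "good" stays most probable.
```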

Prior Beliefs — P(H)

Base rates before campaign data, informed by home services PPC benchmarks and W.USA's historical funnel rates. These are conservative, reflecting a new campaign with no optimization history.

Observable Evidence — E

Factors that update our priors. Each piece of evidence either increases or decreases the likelihood of each ROAS target being achievable.

Likelihoods — P(E|H)

Probability of observing this specific evidence set given that each hypothesis (ROAS target) is true. Higher likelihood = the evidence is more consistent with that outcome.

Posterior Probabilities — P(H|E)

Updated probability of achieving each ROAS target after incorporating the evidence. This is the key output — it tells you how likely each scenario is given what we know.

Incremental Lift at Realistic Budget Levels

Inputs That Would Improve Future Models

The posterior probabilities above are based on industry benchmarks, the channel mix structure, and W.USA's self-reported funnel rates. The following data points, once available, would significantly tighten the confidence intervals and allow the model to update from informed estimates to empirically grounded projections.

  1. Actual close rates by lead source (High impact). If Google Search leads close at 42% but Meta leads close at 28%, the channel allocation math changes substantially. Currently the model applies a flat 38% across all sources.
  2. Speed-to-lead data by market (High impact). Average time from form fill to first contact, broken out by call center location and time of day. This is the #1 operational predictor of set rate — a 5-minute vs. 30-minute response can shift set rates by 15–20pp.
  3. Historical CPC and CVR by DMA (High impact). Market-level performance data from any prior paid campaigns (even other agencies). Allows the model to use actual geo-specific rates instead of national averages.
  4. Average ticket by job type and market (Medium impact). If certain markets or job types (e.g., full-home replacements vs. single-window jobs) have significantly different ticket sizes, the ROAS math per market changes. Currently modeled at a flat $16,222.
  5. Cancellation reason codes (Medium impact). Understanding why the 15% of closed jobs cancel before install — financing, scope changes, buyer's remorse, scheduling delays — would identify whether the agency can influence this rate through better pre-qualification or whether it's purely operational.
  6. Seasonal demand curves (Medium impact). Monthly install volume over the past 2–3 years. Allows the spend ramp to align with demand peaks rather than using a generic curve. Window replacement has seasonal patterns that vary by region.
  7. CRM data export, lead to install (High impact). A full-funnel data pull from the CRM with timestamps at each stage (lead, set, pitch, close, install) for the last 12 months. This would allow us to validate or adjust every conversion rate in the model with real data instead of self-reported rates.
  8. Competitive landscape by market (Lower impact; refines CPC assumptions). Auction Insights or third-party competitive data showing who's bidding on non-brand queries in each DMA, their estimated spend, and impression share. Identifies markets where AIMCLEAR can achieve outsized efficiency vs. saturated markets where CPCs will stay high regardless of optimization.
  9. Attribution and multi-touch data (Medium impact). If W.USA has any existing analytics showing how customers interact with multiple touchpoints before converting (e.g., see a Meta ad, then search branded, then convert), it would inform the brand/PMax budget allocation and allow proper credit across channels.