

© 2024-2026 College Hoops Data. All rights reserved.

Technical Deep Dive

CHD Scout Prediction Model

Nine factors. Thousands of games. Every prediction frozen at tip-off. This is the definitive technical reference for how CHD Scout predicts college basketball game outcomes — every component, every version, every number.

  • V21: current version (conference adjustments)
  • 76%+: winner accuracy across all games
  • 9: prediction factors (contextual + statistical)
  • 4,000+: games analyzed (2025-26 season)

Architecture Overview

CHD Scout is a multi-factor prediction model implemented as a SQL function (calculate_score_prediction) running directly in Supabase's Postgres database. Rather than running a separate ML service, the entire prediction engine lives inside the database — right next to the data it consumes. Every prediction is deterministic and reproducible.

How It Works

A Cloud Function (capturePredictionForGame) fires before each game goes live. It gathers all contextual data — rest days, travel distance, conference membership, player form — and passes 11 parameters to the SQL function. The result is frozen at tip-off time: the model never revises a prediction once the game starts.

The SQL function accepts 11 parameters, 8 of which are optional with sensible defaults. This makes it backward compatible across model versions — older call sites continue to work when new parameters are added. Every prediction is stored with its model_version, enabling historical accuracy tracking across versions.
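To illustrate the backward-compatible signature, here is a hypothetical Python analogue of the `calculate_score_prediction` SQL function's shape: 11 parameters, 8 of them optional with defaults. The parameter names and defaults below are illustrative assumptions, not the actual SQL signature.

```python
def calculate_score_prediction(
    home_team_id,            # required
    away_team_id,            # required
    game_id,                 # required
    home_rest_days=3,        # optional, V18+ (names below are assumed)
    away_rest_days=3,        # optional, V18+
    travel_miles=0.0,        # optional, V18+
    is_neutral_site=False,   # optional
    home_net_rank=None,      # optional, used by the nudge system
    away_net_rank=None,      # optional, used by the nudge system
    home_form_score=0.0,     # optional, V3 PRA form model
    away_form_score=0.0,     # optional, V3 PRA form model
):
    # The real logic lives in Postgres; this stub just echoes the inputs
    # along with the model_version tag stored with every prediction.
    return {"model_version": "v21", "game_id": game_id}

# A call site written against an older model version, passing only the
# required parameters, still works because the new inputs have defaults:
calculate_score_prediction(101, 202, 9001)
```

This is the same pattern Postgres itself supports via `DEFAULT` values on function parameters, which is what keeps older call sites working as versions add inputs.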

Prediction Data Pipeline

ESPN API → Data Sync → Efficiency Calc → Context Factors → Prediction Capture → Accuracy Grading

ESPN game data syncs every 5 minutes during live games. Predictions are captured once per game, before tip-off.

The Nine Prediction Factors

Every CHD Scout prediction combines nine distinct factors. Some are statistical (efficiency, NET rankings), some are contextual (rest, travel, venue), and some are behavioral (player form, competitive dynamics). Here is every factor in detail, with real numbers from the V21 implementation.

Adjusted Efficiency Margin

AdjEM
60-70%

The foundation of every prediction. Compares each team's offensive and defensive efficiency adjusted for opponents faced.

  • The predicted margin starts here: Home AdjEM minus Away AdjEM
  • Offensive efficiency = points scored per 100 possessions, adjusted for opponent defense
  • Defensive efficiency = points allowed per 100 possessions, adjusted for opponent offense
  • Accounts for roughly 60-70% of the prediction's weight in non-toss-up games
  • A team with +15 AdjEM vs a +5 AdjEM opponent starts with a ~10-point predicted margin before any adjustments
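The starting margin is pure arithmetic, which a one-line sketch makes concrete (AdjEM here is adjusted offensive efficiency minus adjusted defensive efficiency, both per 100 possessions):

```python
def raw_margin(home_adjem, away_adjem):
    # The prediction chain starts with the difference in Adjusted
    # Efficiency Margin, before any contextual adjustments.
    return home_adjem - away_adjem

# The example from the text: +15 AdjEM hosting +5 AdjEM starts at ~10.
raw_margin(15.0, 5.0)  # 10.0
```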

NET Rankings

NET
50% in toss-ups

The NCAA's official team quality metric, integrated as the dominant signal in close games.

  • In toss-ups (predicted margin < 3), NET ranking accounts for 50% of the nudge signal
  • V21 raised this weight from V17's 40% after backtest research (the V22 study) showed NET is the strongest toss-up signal
  • Higher NET rank = higher quality signal when efficiency margins are nearly identical
  • NET combines team value index, net efficiency, winning percentage, adjusted win percentage, and scoring margin
  • Most impactful in late-season games when NET rankings have stabilized with larger sample sizes

Player Form (Hot/Cold)

PRA
30% in toss-ups

V3 PRA-based model tracking Points + Rebounds + Assists over a 5-game rolling window with recency weighting.

  • Recency weights across 5 games: 40% (most recent), 25%, 18%, 10%, 7% (oldest)
  • Core PRA signal contributes up to +/-0.18 to the form score
  • 3PT shooting bonus adds between -0.06 and +0.10 based on recent shooting
  • Flat turnover penalty, 5% win bonus, trend dampener, and peak performance bonus
  • Multiplicative factors: opponent quality (1.0-1.20x), consistency (0.80-1.0x)
  • Thresholds: score >= 0.12 = HOT, <= -0.12 = COLD, clamped to [-0.5, +0.5]
  • In the toss-up nudge system, form accounts for 30% of the signal weight
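A simplified sketch of the core PRA signal, using the published recency weights, clamp, and HOT/COLD thresholds. The bonus and penalty terms (3PT shooting, turnovers, win bonus, trend dampener, multiplicative factors) are omitted, and scaling recent PRA as a relative deviation from the player's season average is an assumption, so treat this as illustrative only:

```python
RECENCY_WEIGHTS = [0.40, 0.25, 0.18, 0.10, 0.07]  # most recent -> oldest

def form_label(pra_last5, season_avg_pra):
    # Weighted relative deviation of the last 5 games' PRA from the
    # season average (assumed normalization).
    deviation = sum(w * (p - season_avg_pra) / season_avg_pra
                    for w, p in zip(RECENCY_WEIGHTS, pra_last5))
    score = max(-0.18, min(0.18, deviation))  # core PRA signal band
    score = max(-0.5, min(0.5, score))        # overall clamp
    if score >= 0.12:
        return score, "HOT"
    if score <= -0.12:
        return score, "COLD"
    return score, "NEUTRAL"

# A player averaging 30 PRA who posts 40, 38, 35, 30, 28 (newest first)
# saturates the core signal and grades out HOT.
form_label([40, 38, 35, 30, 28], 30.0)
```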

Home Court Advantage

HCA
~3.5 pts baseline

Venue-specific HCA calculation with recency weighting — not a flat number applied uniformly.

  • Base HCA recalibrated in V18 from 2.0 to 3.5 points (closer to observed home win margins)
  • V21 added recency weighting: 0.92 decay factor per game in the rolling window
  • Recent home games weighted approximately 2.3x more than oldest games in the window
  • Actual observed home win rate: roughly 60-65% in non-conference, ~52% in conference play
  • Road games receive a negative HCA penalty; neutral site games receive zero HCA
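The recency weighting can be sketched directly. The 0.92 decay factor comes from the text; the 11-game window length is an assumption, chosen because it reproduces the stated ~2.3x newest-to-oldest weight ratio:

```python
DECAY = 0.92  # per-game decay in the rolling window (V21)

def hca_weights(n_games):
    # Index 0 is the most recent home game; weights are normalized to sum to 1.
    raw = [DECAY ** i for i in range(n_games)]
    total = sum(raw)
    return [w / total for w in raw]

w = hca_weights(11)          # window length assumed, not published
ratio = w[0] / w[-1]         # newest vs oldest weight, ~2.3x
```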

Conference HCA Dampener

Conf Damp
0.70x multiplier

V21's key innovation: reduces home court advantage for conference games where teams know each other well.

  • Applies a 0.70 multiplier to HCA specifically for conference games
  • Reasoning: teams play each opponent twice per season, reducing the home environment's impact
  • Reduces predicted HCA from approximately 4.1 points to 2.9 points for conference games
  • Conference detection uses teams.conference_id comparison (home vs away), not the games.is_conference_game flag
  • This single change improved conference game accuracy by +7.14 percentage points (66.48% to 73.71%)
  • Also indirectly dampens V19's competitive boost since it uses the dampened HCA value
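A minimal sketch of the dampener, assuming conference detection by comparing conference IDs as described:

```python
CONF_HCA_DAMPENER = 0.70  # V21 multiplier for conference games

def effective_hca(venue_hca, home_conf_id, away_conf_id):
    # Conference membership is detected by comparing teams.conference_id
    # values, not the games.is_conference_game flag.
    if home_conf_id == away_conf_id:
        return venue_hca * CONF_HCA_DAMPENER
    return venue_hca

# A ~4.1-point venue HCA drops to ~2.9 against an in-conference opponent.
effective_hca(4.1, 12, 12)
```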

Conference Margin Compression

Conf Comp
0.90x for 8+ margins

Prevents the model from predicting blowouts in conference matchups between familiar opponents.

  • Applies a 0.90 factor to predicted margins exceeding 8 points in conference games
  • Applied LAST in the calculation chain, after all other adjustments are finalized
  • Addresses the observation that conference games are systematically closer than non-conference games
  • A predicted 14-point margin becomes 12.6 points after compression
  • Does not activate for non-conference games or conference games with tight predicted margins
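The compression rule is a single conditional, sketched here with the published threshold and factor:

```python
def compress_margin(margin, is_conference):
    # Applied last in the chain: conference games with a predicted margin
    # above 8 points are compressed by a 0.90 factor.
    if is_conference and abs(margin) > 8.0:
        return margin * 0.90
    return margin

compress_margin(14.0, True)   # the text's example: 14 becomes 12.6
compress_margin(14.0, False)  # non-conference games are untouched
compress_margin(6.0, True)    # tight conference margins are untouched
```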

Rest & Travel Adjustments

R&T
Contextual

Accounts for days between games and travel fatigue, computed at prediction time by the Cloud Function.

  • Added in V18 to capture scheduling-related advantages and disadvantages
  • Teams on short rest (1 day between games) receive a fatigue penalty
  • Teams on extended rest (4+ days) receive a small preparedness boost
  • Travel fatigue factors in for road games with significant geographic distance
  • Computed by capturePredictionForGame Cloud Function at tip-off, not stored statically
  • Most impactful during mid-week conference games and back-to-back road trips
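The text specifies the triggers (1-day rest penalty, 4+ day boost, long road travel penalty) but not the point values, so every magnitude and threshold in this sketch is an illustrative assumption:

```python
SHORT_REST_PENALTY = -1.0   # assumed magnitude, rest_days <= 1
LONG_REST_BOOST = 0.5       # assumed magnitude, rest_days >= 4
TRAVEL_PENALTY = -0.75      # assumed magnitude, long road trips
TRAVEL_THRESHOLD_MILES = 1000  # assumed distance cutoff

def rest_travel_adjustment(rest_days, travel_miles, is_road):
    # Computed at prediction time per team; the home/away difference
    # feeds step 2 of the calculation chain.
    adj = 0.0
    if rest_days <= 1:
        adj += SHORT_REST_PENALTY
    elif rest_days >= 4:
        adj += LONG_REST_BOOST
    if is_road and travel_miles > TRAVEL_THRESHOLD_MILES:
        adj += TRAVEL_PENALTY
    return adj
```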

Competitive Game Boost

Comp Boost
Up to +/-1.5 pts

Venue-scaled adjustment for close predicted games where crowd energy amplifies home court advantage.

  • V19 addition: activates when predicted margin is under 9.0 points
  • Maximum boost capped at +/-1.5 points, scaled linearly by venue factor
  • Placed between HCA application and the nudge system in the calculation chain
  • Rationale: competitive games at home see increased crowd engagement and energy
  • Backtest showed this improved overall accuracy from 75.91% to 76.40% across 4,122 games
  • Venue scaling means the boost is proportional to the team's observed home court strength
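The activation threshold (9.0) and cap (1.5) come from the text; the exact venue-scaling formula is not published, so scaling linearly by a [0, 1] venue factor is an assumption consistent with the description:

```python
def competitive_boost(predicted_margin, venue_factor):
    # No boost outside the competitive band.
    if abs(predicted_margin) >= 9.0:
        return 0.0
    # Linear venue scaling (assumed), capped at +/-1.5 points, applied
    # in the direction of the current favorite.
    boost = 1.5 * max(0.0, min(1.0, venue_factor))
    return boost if predicted_margin >= 0 else -boost
```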

Nudge System (Toss-Up Resolution)

Nudge
Toss-ups only

The decision engine for close games: blends efficiency, NET rank, and form to break ties when the margin is near zero.

  • Activates for toss-up games where the predicted margin is near zero
  • V21 weights: 20% efficiency, 50% NET rank, 30% player form (changed from V17's 40/40/20)
  • Mid-range dampener: 0.80 factor applied to predicted margins between 4 and 8 points
  • Toss-up dampener: 0.85 factor applied to predicted margins under 4 points
  • Nudge caps: +/-2.0 points for mid-range games, +/-1.5 points for toss-ups
  • The V22 research finding: NET rank is the single best predictor of winners in close games
  • The shift from 40% to 50% NET weight was the largest accuracy improvement in the nudge system
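A sketch of the nudge with the V21 weights, dampeners, and caps from the text. Each input signal is assumed to be normalized to [-1, +1] in the home team's favor, and scaling the blended signal by the band's cap is an assumption; only the weights, dampeners, and caps are published:

```python
def nudge(margin, eff_signal, net_signal, form_signal):
    # V21 blend: efficiency 20%, NET rank 50%, player form 30%.
    blended = 0.20 * eff_signal + 0.50 * net_signal + 0.30 * form_signal
    if abs(margin) < 4.0:        # toss-up band
        margin *= 0.85           # toss-up dampener
        cap = 1.5
    elif abs(margin) <= 8.0:     # mid-range band
        margin *= 0.80           # mid-range dampener
        cap = 2.0
    else:
        return margin            # no nudge for clear favorites
    return margin + max(-cap, min(cap, blended * cap))
```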

Calculation Order: From Raw Margin to Final Prediction

The order of operations matters. Each step modifies the predicted margin in a specific sequence. Conference detection must happen before HCA because the dampener depends on it. The nudge comes after the competitive boost. Margin compression is always last. Here is the exact chain:

  1. Raw Efficiency Margin: Home AdjEM - Away AdjEM
  2. Rest & Travel: fatigue and preparation adjustments
  3. Conference Detection: compare team conference IDs
  4. Home Court Advantage: HCA, with the conference dampener if applicable
  5. Competitive Boost: if margin < 9, apply the venue-scaled boost
  6. Nudge System: if toss-up, blend NET 50%, form 30%, efficiency 20%
  7. Conference Margin Compression: if conference game and margin > 8, multiply by 0.90
  8. Final Prediction: predicted margin and winner

Example Walkthrough

Duke (home) vs UNC (away), conference game:

1. Raw margin: Duke +18.5 AdjEM minus UNC +14.2 AdjEM = +4.3

2. Rest & travel: Duke on 3 days' rest, UNC on 2 = +0.2 adjustment, running total +4.5

3. Conference detected: both ACC = yes

4. HCA: 3.5 baseline x 0.70 conf dampener = +2.45, total = +6.95

5. Competitive boost: margin < 9, so a small venue-scaled boost of +0.45 applies = +7.4

6. Nudge: not a toss-up (margin > 4) = +7.4

7. Conf compression: margin < 8, does not apply = +7.4

8. Final: Duke by 7.4 points
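The walkthrough above can be sketched end to end. The competitive boost value (+0.45) is taken from the example rather than derived, since the venue-scaling formula is not published, and the nudge step is skipped here because the example's margin is not a toss-up:

```python
def predict_margin(home_adjem, away_adjem, rest_adj, base_hca,
                   same_conference, comp_boost):
    margin = home_adjem - away_adjem                      # 1. raw margin
    margin += rest_adj                                    # 2. rest & travel
    hca = base_hca * (0.70 if same_conference else 1.0)   # 3-4. conf-dampened HCA
    margin += hca
    if abs(margin) < 9.0:                                 # 5. competitive boost
        margin += comp_boost
    # 6. nudge skipped: margin is not a toss-up in this example
    if same_conference and abs(margin) > 8.0:             # 7. compression
        margin *= 0.90
    return margin                                         # 8. final prediction

# Duke (+18.5 AdjEM) hosting UNC (+14.2), conference game: Duke by ~7.4.
duke_margin = predict_margin(18.5, 14.2, 0.2, 3.5, True, 0.45)
```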

Version History: V17 to V22

The model has evolved through six versions in February 2026 alone, with each iteration informed by rigorous backtesting. Not every idea ships — V20 was rejected after testing proved it hurt accuracy, and V22 remains research-only. This disciplined approach ensures every change earns its place.

V17 · Feb 9, 2026 · Superseded

Nudge Architecture

Introduced the nudge system to resolve toss-up games. Fixed V16's margin distribution compression bug. Parameters: mid dampener 0.80, toss-up dampener 0.85, nudge weights efficiency 40%, NET 40%, form 20%.

V18 · Feb 10, 2026 · Superseded

Contextual Adjustments

Added rest days, travel fatigue, HCA recalibration from 2.0 to 3.5 points baseline, star player absence detection, and conference dampening. Introduced the 11-parameter calculate_score_prediction function (8 optional, backward compatible).

V19 · Feb 11, 2026 · Superseded

Competitive HCA Boost

Added venue-scaled competitive boost for close games (threshold 9.0, max 1.5 points). Backtest across 4,122 games: accuracy improved from 75.91% to 76.40%, MAE from 9.338 to 9.268.

V20 · Feb 12, 2026 · Rejected

NET Trajectory

Tested a 7-day NET rank trajectory (rate of change) signal. Games where the away team's NET rank was rising saw only 64.6% model accuracy, a roughly 12pp gap below average. But the trajectory signal itself was only ~55% accurate in toss-ups, flipping more predictions wrong than right. Overall accuracy dropped from 76.40% to 76.30%. Rejected.

V21 · Feb 15, 2026 · Current

Conference Adjustments

Addresses conference play accuracy collapse (66.48% Feb 8-15 vs 75-78% season average). Conference HCA dampener 0.70, margin compression 0.90 for margins > 8, HCA recency weighting with 0.92 decay. Validation on 350 games: +7.14pp improvement.

V22 · Feb 16, 2026 · Research Only

Literature Review Backtest

Tested 9 ideas from academic papers against 2,275 games. The best combination gained +0.88pp (72.57% vs V21's 71.69% over that period). Key finding: dynamic nudge weights with NET at 50% dramatically outperform equal weighting. A variance dampener and conference-specific HCA also proved valuable. The research informs future versions.

Accuracy Deep Dive

Accuracy varies dramatically by game type and prediction confidence. A model that claims “76% overall accuracy” is hiding a wide distribution: near-certain in blowouts, near-random in toss-ups. Here is the honest breakdown.

76%+
Overall Accuracy
4,000+ games
73.7%
Conference Games
Improved from 66.5%
~78%
Non-Conference
Larger quality gaps
~8.2
Mean Absolute Error
Points (score prediction)
Strong Picks · Margin 12+ · 85-93%

Games where the model sees a clear quality gap. Typically non-conference games between teams from different tiers, or lopsided conference matchups. V21 improved strong pick accuracy from 80.85% to 92.96% on conference games.

Moderate Picks · Margin 6-12 · ~77%

Solid favorites where the model has meaningful confidence. These make up the bulk of conference play. V21 improved this tier from 69.74% to 77.03% on conference games through margin compression and HCA dampening.

Lean Picks · Margin 3-6 · ~65%

Slight favorites where contextual factors become critical. The nudge system is partially active. Rest, travel, and player form have outsized impact in this tier. These are the games that separate good models from great ones.

Toss-Ups · Margin < 3 · ~55%

Essentially coin flips with a slight edge. The nudge system is fully active, blending NET rankings (50%), form (30%), and efficiency (20%). 55% is near the theoretical ceiling for games this close. V20's failure proved that adding more signals doesn't help here.

Seasonal Accuracy Pattern

Early-season games (pre-January) achieve 78.2% accuracy. Post-January games drop to ~72%. This is not a model weakness — it reflects the nature of the sport. Early-season schedules feature large quality gaps (Kentucky vs. a low-major). By January, every game is a conference matchup between similar-quality teams. The model is naturally better when the gap between teams is larger.

What the Model Cannot Do

Transparency about limitations is as important as explaining strengths. No prediction model is omniscient. Here are the known boundaries of what CHD Scout can and cannot predict.

In-Game Injuries

Cannot predict injuries that occur during a game. A star player going down in the first half invalidates the pre-game prediction.

Referee Tendencies

Cannot account for referee crew assignments or tendencies. Some crews call significantly more fouls, which affects game pace and free throw rates.

Toss-Up Ceiling

Games with a predicted margin under 3 points are fundamentally unpredictable. 55% accuracy is near the theoretical ceiling — no model can reliably pick coin-flip games.

Streaks Beyond 5 Games

The player form window is intentionally limited to 5 games. Longer hot or cold streaks beyond that window are not captured — by design, to avoid overfitting to noise.

Early-Season Data Gaps

First games of the season have minimal data. Early November predictions are the weakest because efficiency metrics and NET rankings haven't stabilized yet.

Score Prediction vs. Winner Prediction

The model optimizes for picking the correct winner, not the exact score. The Mean Absolute Error of ~8 points means predicted scores are approximate — don't use them as exact forecasts.

Frequently Asked Questions

What version is the CHD Scout prediction model currently on?

CHD Scout is currently on V21, released February 15, 2026. V21 introduced conference-specific adjustments including a 0.70 HCA dampener for conference games, margin compression for predicted blowouts, and recency-weighted home court advantage calculation. V22 was a research-only backtest that identified NET rankings as the dominant toss-up signal.

How accurate is CHD Scout at predicting game winners?

CHD Scout correctly predicts the winner in over 76% of all games across the 2025-26 season (4,000+ games). Accuracy varies by confidence: strong picks (predicted margin 12+) hit 85-93%, moderate picks (6-12 points) around 77%, lean picks (3-6 points) around 65%, and toss-ups (under 3 points) around 55%. Conference games are harder at 73.7%, while non-conference games reach approximately 78%.

What is the most important factor in the CHD Scout prediction model?

Adjusted Efficiency Margin (AdjEM) is the foundation, accounting for roughly 60-70% of the prediction weight in non-toss-up games. It compares each team's offensive and defensive efficiency adjusted for strength of schedule. However, in toss-up games where efficiency margins are close, NET rankings become the dominant signal at 50% of the nudge weight.

Why are conference games harder to predict than non-conference?

Conference opponents play each other twice per season, which reduces information asymmetry and home court advantage. Before the V21 adjustments, the model predicted conference games at just 66.5% accuracy. Familiarity means the home win rate drops from roughly 60-65% in non-conference play to approximately 52% in conference play. V21's conference dampener (a 0.70 multiplier on HCA) corrected this, improving conference accuracy to 73.7%.

What is the nudge system in CHD Scout?

The nudge system activates for toss-up games where the predicted margin is near zero. It blends three signals with specific weights: NET rankings (50%), player form (30%), and efficiency (20%). The nudge is capped at plus or minus 1.5 points for toss-ups and plus or minus 2.0 for mid-range games. This system was introduced in V17 and refined in V21 after research showed NET rank is the best predictor of winners in close games.

Why was V20 (NET Trajectory) rejected?

V20 tested whether 7-day NET rank trajectory (rate of change) could improve predictions. While it correctly identified that rising-away teams had lower model accuracy (64.6%), the trajectory signal was only ~55% accurate in toss-up games — barely above a coin flip. Applying it flipped more games wrong than right in the tier with the most games, dropping overall accuracy from 76.40% to 76.30%. The lesson: directional signals below 60% accuracy in toss-ups will always hurt overall performance.

Is CHD Scout better than KenPom for predicting game outcomes?

CHD Scout and KenPom use different approaches. KenPom uses tempo-adjusted efficiency ratings as its core. CHD Scout layers contextual factors (rest, travel, conference familiarity, player form, competitive boost) on top of efficiency metrics. CHD Scout is free and available at college-hoops-data.com, while KenPom requires a subscription. Both achieve similar accuracy ranges (mid-to-high 70s percent), but CHD Scout's conference-specific adjustments address a known weakness in pure-efficiency models.

Related Pages

  • Prediction Model Overview
  • NET Rankings Explained
  • Efficiency Ratings Guide
  • Home Court Advantage Guide
  • Free KenPom Alternative
  • Today's Predictions