Bracket Prediction Accuracy
How our projections compared to the actual NCAA Tournament field
Overall Field: 55 / 68 (80.9%)
Avg. Seed Delta: 0.78 lines off
Exact Seed Matches: 20 / 55
Conf. Tourney Winners: 20 / 33 (unpredictable)
The 2026 NCAA Tournament bracket has been set, and it is time to measure our bracketology model against the committee's final selections. Over the course of the season, our algorithm projected the 68-team field using a composite bracket score built from NET rankings, quad quality, strength of record, efficiency margins, and road performance.
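To make that concrete, here is a minimal Python sketch of how a composite bracket score along these lines could be assembled. The weights, scales, and field names are illustrative assumptions for demonstration, not the model's actual formula:

```python
# Illustrative only: the weights, scales, and normalizations below are
# assumptions for demonstration, not the model's published formula.
from dataclasses import dataclass

@dataclass
class TeamMetrics:
    net_rank: int              # NET ranking (1 is best)
    quad1_wins: int            # wins over Quadrant 1 opponents
    strength_of_record: float  # assumed 0-100 scale
    efficiency_margin: float   # adjusted margin per 100 possessions
    road_win_pct: float        # 0.0-1.0

# Hypothetical weights; a production model would tune these against past brackets.
WEIGHTS = {"net": 0.35, "quad1": 0.20, "sor": 0.20, "eff": 0.15, "road": 0.10}

def bracket_score(m: TeamMetrics) -> float:
    """Collapse the component metrics into one 0-100 composite score."""
    net = max(0.0, 100.0 - (m.net_rank - 1))    # rank 1 -> 100
    quad1 = min(m.quad1_wins, 10) * 10.0        # cap the credit at 10 wins
    eff = (min(max(m.efficiency_margin, -20.0), 20.0) + 20.0) * 2.5  # [-20, 20] -> [0, 100]
    return (WEIGHTS["net"] * net
            + WEIGHTS["quad1"] * quad1
            + WEIGHTS["sor"] * m.strength_of_record
            + WEIGHTS["eff"] * eff
            + WEIGHTS["road"] * m.road_win_pct * 100.0)

print(f"{bracket_score(TeamMetrics(3, 8, 92.0, 18.5, 0.75)):.1f}")
```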
The headline number: our model correctly identified 29 of 31 at-large selections, a 93.5% accuracy rate on the picks the committee actually deliberates over. When you add in the 26 conference tournament champions we correctly projected, the overall field accuracy reaches 55 of 68 teams (80.9%). The gap between those two numbers tells the real story of bracketology: predicting which teams deserve a bid is a solved problem. Predicting which team will win a conference tournament is closer to a coin flip.
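Under the hood, scoring a projection against the real bracket is simple set arithmetic. A minimal sketch, with a handful of hypothetical teams standing in for the full 68-team fields:

```python
# Simple sketch of the accuracy bookkeeping. Team names here are a small
# hypothetical sample; the real comparison runs over the full 68-team fields.
projected_field = {"Duke", "Michigan", "Arizona", "Houston"}
actual_field = {"Duke", "Michigan", "Arizona", "Florida"}

hits = projected_field & actual_field      # teams we got right
misses = actual_field - projected_field    # teams we failed to project
accuracy = len(hits) / len(actual_field)

print(f"Field accuracy: {len(hits)} / {len(actual_field)} ({accuracy:.1%})")
print(f"Missed: {sorted(misses)}")
```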
Three of our four projected No. 1 seeds matched the committee's picks: Duke, Michigan, and Arizona were locked in as top seeds by virtually every metric. The one miss was instructive. Our model slotted Houston as the fourth No. 1 seed based on its NET ranking and bracket score, but the committee went with Florida, whose deeper SEC resume and higher overall record earned them the nod. Houston landed on the 2-line, just a single seed off from our projection.
This miss highlights an important dynamic: when two teams are separated by fractions of a point in composite metrics, the committee's eye test and conference-strength bias can tip the balance. Florida's 27-8 record in the nation's toughest conference ultimately outweighed Houston's higher NET ranking.
Last Four In: 3/4 Correct
First Four Out: 3/4 Correct

Our bubble calls were among the strongest elements of the projection. Of the four teams we tagged as the "Last Four In," three made the field: UCF earned a 10-seed, SMU drew an 11-seed in the First Four, and Texas also landed on the 11-line through the play-in round. Only New Mexico missed entirely after a late-season slide.
On the "First Four Out" side, we correctly predicted three of four teams would be left out. The lone miss: Missouri, which we had just outside the field, snuck in as a 10-seed on the strength of its SEC schedule. The committee gave Mizzou credit for playing in the deepest conference in the country, a judgment call that pure metrics alone did not fully capture.
Full Field: Seed-by-Seed Comparison
The seeding accuracy tells a compelling story about the model's calibration. Of the 55 teams appearing in both our projection and the actual bracket, 20 landed on the exact seed line we predicted. Another 28 were within a single seed, bringing the total to 48 out of 55 within one seed (87%).
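Those calibration numbers fall out of a straightforward seed-delta calculation. A sketch using a few illustrative entries rather than the full 55-team dataset:

```python
# Sketch of the seed-calibration check. The sample seeds below are
# illustrative stand-ins, not the full 55-team overlap.
projected_seeds = {"Duke": 1, "Houston": 1, "Utah State": 6}
actual_seeds = {"Duke": 1, "Houston": 2, "Utah State": 9}

overlap = projected_seeds.keys() & actual_seeds.keys()
deltas = [abs(projected_seeds[t] - actual_seeds[t]) for t in overlap]

exact = sum(d == 0 for d in deltas)
within_one = sum(d <= 1 for d in deltas)
avg_delta = sum(deltas) / len(deltas)

print(f"Exact: {exact}, within one seed: {within_one}, avg delta: {avg_delta:.2f}")
```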
The largest miss was Utah State, which we projected as a 6-seed but which the committee placed on the 9-line. The three-seed gap reflected the committee's traditional skepticism of the Mountain West's strength relative to power conferences.
All 13 teams we missed in the field were automatic qualifiers from smaller conferences where the conference tournament was won by a team other than the one with the best regular-season NET ranking. This is an inherent limitation of any metrics-based model: our algorithm correctly identifies the best team in each conference by resume, but conference tournaments are single-elimination events where upsets are the norm. Predicting those outcomes would require an entirely different kind of model, as sketched below.
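For a sense of what that kind of model might look like, here is a rough Monte Carlo sketch of a single-elimination bracket. The Elo-style ratings and win-probability formula are assumptions for illustration, not part of our projection model:

```python
# A rough Monte Carlo sketch of a single-elimination conference tournament.
# Ratings and the Elo-style win probability are illustrative assumptions.
import random

def win_prob(rating_a: float, rating_b: float) -> float:
    """Probability that team A beats team B, Elo-style (assumed 400-point scale)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def simulate_bracket(ratings: list, trials: int = 10_000) -> list:
    """Estimate each team's chance of winning a single-elimination bracket."""
    titles = [0] * len(ratings)
    for _ in range(trials):
        alive = list(range(len(ratings)))      # team indices still alive
        while len(alive) > 1:
            winners = []
            for i in range(0, len(alive), 2):  # pair off adjacent teams
                a, b = alive[i], alive[i + 1]
                winners.append(a if random.random() < win_prob(ratings[a], ratings[b]) else b)
            alive = winners
        titles[alive[0]] += 1
    return [t / trials for t in titles]

# An 8-team field with a clear favorite: even the top seed usually falls short of 50%.
odds = simulate_bracket([1700, 1620, 1600, 1550, 1540, 1500, 1480, 1450])
print([f"{p:.1%}" for p in odds])
```

Even with a clear favorite, three or four rounds of near-coin-flip games leave every team a real path to the title, which is exactly why our field misses cluster in these tournaments.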
Conference Representation: Predicted vs Actual
At the conference level, our model was remarkably accurate across the board. We were exact on 4 of the 6 major conferences: the Big Ten (9), the Big 12 (8), the Big East (3), and the WCC (3). The SEC earned one more bid than projected (10 vs 9), and the ACC outpaced our model by one (8 vs 7).
The only notable conference-level misses came from the American Athletic Conference (projected 2, actual 1) and the Mountain West (projected 2, actual 1), where we had Tulsa and New Mexico as at-large selections that the committee ultimately left out.
Looking ahead to 2027: Our model's 93.5% at-large accuracy and sub-one-seed average deviation demonstrate that composite metrics are a reliable foundation for bracketology. The remaining challenge is not whether a team belongs in the field, but which team emerges from an unpredictable conference tournament. Every mid-major conference tournament delivers upsets, and those upsets account for virtually all of our field misses. That is not a flaw in the model; it is the nature of March.