2026 NCAA Tournament Bracket Projection
Generated Sunday, April 12, 2026
Field: 68 teams
Auto bids: 31
At-large bids: 37
Projected #1 Seeds
The top seeds in our projected NCAA Tournament bracket are led by Michigan, which boasts an impressive 36-3 overall record and a 19-1 mark in the Big Ten. According to our model, Michigan's bracket score of 99.2 is the highest among all teams, reflecting a 21-3 record in Quad 1 games, and its NET ranking of 1 underscores its status as the top contender. Duke follows closely at 35-3 overall and 17-1 in the ACC, and its 19-3 Quad 1 record and NET ranking of 2 demonstrate exceptional strength of its own.
Arizona and Houston round out the top line. Arizona earns a bracket score of 96.8, thanks in part to a 19-3 Quad 1 record and a 16-2 conference mark, and its NET ranking of 3 further solidifies its position. Houston's bracket score of 91.2 is the lowest of the four, and its 10-7 Quad 1 record and NET ranking of 6 are the weakest credentials on the top line, but a 14-4 conference mark and a strong overall resume are enough to secure the final top seed. All four teams are well positioned for a deep tournament run, with Michigan and Duke standing out as particularly formidable.
Last Four In
The last four teams projected in the field are holding on to their spots by thin margins. NC State sits at a 74.5 bracket score, according to our model, with a 10-8 record in the ACC; a 5-9 Quad 1 record is a concern, but a 20-14 overall record and a No. 36 NET ranking are keeping the Wolfpack in the field. Tulsa is also clinging to a spot at 73.7, with a 13-5 record in the American Athletic Conference, but a 1-3 Quad 1 record and a No. 51 NET ranking leave the Golden Hurricane vulnerable to being pushed out.
Oklahoma and Auburn round out the last four in, with bracket scores of 73.5 and 73.2, respectively. Oklahoma's 6-10 Quad 1 record is a significant concern, and its 7-11 record in the SEC does not help its case. Auburn's 4-13 Quad 1 record is an even bigger issue, though a 7-2 Quad 2 record and a No. 37 NET ranking are keeping the Tigers afloat. NC State's 6-4 and Tulsa's 5-4 Quad 2 records also factor into their cases, and a loss by any of these four teams could push it out of the field.
First Four Out
The first four teams on the outside looking in are New Mexico, UCF, San Diego State, and SMU, with bracket scores of 72.3, 72.0, 71.9, and 71.6, respectively, according to our model. New Mexico, at No. 46 in the NET, needs to improve its 2-7 Quad 1 record; the Lobos' 13-7 conference record is respectable, but they must show more consistency against top-tier opponents. UCF has a more balanced resume, with a 5-8 Quad 1 record and a 6-3 Quad 2 record, but its 9-9 conference record is a concern.
San Diego State and SMU face similar challenges. San Diego State's 3-8 Quad 1 record is the biggest gap in its resume, and its 14-6 conference record, while strong, may not be enough to offset those struggles against top opponents. SMU, at No. 40 in the NET, is 4-9 in Quad 1 and 5-5 in Quad 2, underscoring its need to beat higher-level competition. To play their way into the NCAA Tournament field, all four teams need to close those gaps with wins over Quad 1 and Quad 2 opponents.
The bracket remains relatively stable at the top, with Michigan, Duke, Arizona, and Houston holding steady as the top seeds; according to our model, none of the four has seen a significant shift in bracket score. The bubble is likewise unchanged, with no teams entering or exiting the last four in, indicating stability among the middle-tier teams. The overall field of 68 teams (31 auto-bids and 37 at-large bids) is set, and the lack of bubble movement suggests the selection committee's remaining decisions will hinge on the top seeds and the auto-bid winners rather than any late-season surges by bubble teams.
How Our Bracket Model Works
NET ranking: Normalized 0–100 from rank position. The NET is the NCAA's own evaluation tool combining wins/losses and game-level efficiency across all Division I opponents.
Quad record score: Weighted quality score — Q1 wins +5, Q1 losses −1, Q2 wins +2.5, Q2 losses −2.5, Q3 wins +0.5, Q3 losses −5, Q4 wins 0, Q4 losses −8. Normalized 0–100.
Strength of Record: SoR rank normalized 0–100. Measures how impressive a team's record is given the difficulty of its schedule — a 20-win team in a weak conference scores lower than a 20-win team in the ACC.
Efficiency margin: Adjusted offensive minus defensive efficiency (points per 100 possessions). Captures how dominant a team is regardless of pace. Normalized 0–100 across the field.
Road/SOS composite: 60% road record value + 40% SOS rank, both normalized. Rewards teams that schedule tough and win away from home — factors the committee explicitly values.
Final bracket score = weighted sum of all five components, scaled 0–100.
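The pipeline above can be sketched in code. The quad win/loss values and the 60/40 road/SOS blend come straight from the component descriptions; the `COMPONENT_WEIGHTS` dictionary and the example quad records beyond Michigan's published 21-3 Quad 1 mark are illustrative assumptions, since the article does not disclose the model's actual weights.

```python
# Sketch of the bracket-score calculation described above.
# Quad values and the 60/40 road/SOS blend come from the text;
# COMPONENT_WEIGHTS is a hypothetical placeholder.

# Per-quad (win value, loss value), from the quad record component.
QUAD_VALUES = {1: (5.0, -1.0), 2: (2.5, -2.5), 3: (0.5, -5.0), 4: (0.0, -8.0)}

def quad_quality_score(records):
    """records maps quad -> (wins, losses); returns the raw weighted score."""
    return sum(wins * QUAD_VALUES[q][0] + losses * QUAD_VALUES[q][1]
               for q, (wins, losses) in records.items())

def normalize(value, lo, hi):
    """Scale a raw value to 0-100 across the field's observed range."""
    return 100.0 * (value - lo) / (hi - lo)

def road_sos_component(road_norm, sos_norm):
    """60% road record value + 40% SOS rank, both already normalized 0-100."""
    return 0.6 * road_norm + 0.4 * sos_norm

# Hypothetical component weights (sum to 1.0) -- NOT the model's
# published values.
COMPONENT_WEIGHTS = {"net": 0.30, "quad": 0.30, "sor": 0.15,
                     "efficiency": 0.15, "road_sos": 0.10}

def bracket_score(components):
    """components maps name -> value on a 0-100 scale; returns final score."""
    return sum(COMPONENT_WEIGHTS[name] * components[name]
               for name in COMPONENT_WEIGHTS)

# Example: Michigan's 21-3 Quad 1 record (Quad 2-4 records hypothetical).
raw = quad_quality_score({1: (21, 3), 2: (8, 0), 3: (5, 0), 4: (2, 0)})
print(raw)  # 21*5 - 3*1 + 8*2.5 + 5*0.5 + 0 = 124.5
```

Because every component is normalized to 0–100 before the weighted sum, the final bracket score lands on the same 0–100 scale regardless of the raw units of each input.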
Our Model vs. The Selection Committee
The NCAA Selection Committee uses the same core inputs — NET rankings, quad records, strength of schedule, and road record — but applies subjective judgment to each case. Committee members can weigh injuries, recent form, head-to-head results, conference tournament performance, and what is often called the “eye test.”
Our model is purely data-driven: the same formula applied consistently to every team, with no adjustments for narrative or circumstance. That removes human bias — but it also means we can't account for context that only humans can evaluate. When the model and the committee diverge, it's often because of factors that don't yet show up in the numbers.