Updated: Apr 4, 2019
The Cross Country Rating Index (CCRI) is USTFCCCA’s attempt at an objective ranking, created in response to feedback from the 2017 USTFCCCA Convention. The system is built on the RPI model used in other collegiate sports such as basketball, with additional contributing factors that are more applicable to cross country. It uses individual rankings to build team rankings.
The team rankings are broken into two components: the Team Potential Index (TPI) and the Actual Varsity Contest Index (AVC). Using these components, each Division I team is given a score that is used to rank every team. There is not much subjectivity to this system; coaches are not polled for these results.
Here is some quick background information before we jump into the actual article...
How does it work?
The individual rankings are determined using every Division I race throughout the season. Each athlete is rated according to how well he or she performs in relation to the rest of the field, with adjustments for how competitive the race was and the overall strength of the field. This all combines to create the Individual Contest Performance Rating (ICPR). All of an athlete’s ICPRs are then averaged to create an Individual Season Performance Rating (ISPR). The individual’s CCRI is then computed by comparing each athlete’s ICPR for every race against every other runner.
Or in really basic terms, individuals are scored, averaged, and then compared.
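The “scored, averaged, and compared” pipeline for one athlete might look like the following minimal sketch. The ICPR values here are invented for illustration, since USTFCCCA has not published the exact ICPR formula:

```python
from statistics import mean

# Hypothetical sketch: an athlete's Individual Season Performance Rating (ISPR)
# is the average of their per-race Individual Contest Performance Ratings (ICPRs).
# These ICPR values are made up; the real ICPR calculation (race competitiveness
# and field-strength adjustments) has not been released.
icprs = [1180.0, 1225.5, 1198.0, 1241.0]
ispr = mean(icprs)
print(ispr)  # → 1211.125
```

The final comparison step (ranking every athlete’s ICPRs against every other runner’s) would then operate on these ISPR/ICPR values across the whole division.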
The team rankings are formulated using the individual rankings as well as the AVC Index and the TPI. These two indexes are combined to give each team a final score.
What are the AVC Index and TPI?
Let’s start with the Team Potential Index (TPI), as it’s the easier of the two concepts to grasp. In 2018, the TPI measured the “win share” of a team against every other team in NCAA Division I. The individual CCRI rankings identify the top seven runners on each team, which can then be used to predict the dual meet results for all teams in Division I.
For purposes of the TPI, dual meets are scored and then measured with respect to a perfect meet (15-50, a 35 point win). So, a 15-50 dual meet victory equates to 1000 win share points for the winning team with 0 points going to the losing team. Any other victory is compared by margin of victory using the following formula.
Note: Margin of Victory would be negative for a loss.
500 + 500 ( (Margin of Victory) / 35 ) = Win Share
The TPI is then “based on a formula that weighs the opponent with the calculated margin of victory or defeat (also known as ‘win share’),” according to USTFCCCA’s official release of the ranking system. This weighting system is likely based on a comparison of the raw individual CCRIs, but has not been officially released.
Or in really basic terms, they find the top seven of each team, simulate a dual meet between every team in the country, and calculate scores for teams based on how well they did compared to competition.
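As a sketch, the win-share formula above can be written out directly. The convention that margin of victory is negative for a loss comes from the text; the function name is mine:

```python
def win_share(team_score: int, opponent_score: int) -> float:
    """Win-share points for a simulated dual meet (hypothetical sketch).

    Implements the published formula: 500 + 500 * (margin of victory / 35),
    where the margin is negative for a loss. Lower scores win in cross country.
    """
    margin_of_victory = opponent_score - team_score
    return 500 + 500 * (margin_of_victory / 35)

# A perfect 15-50 victory earns the full 1000 win-share points...
print(win_share(15, 50))  # → 1000.0
# ...while the losing side of that same meet earns 0.
print(win_share(50, 15))  # → 0.0
```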
The Actual Varsity Contest Index (AVC) is more akin to the current qualifying procedure that focuses on "A" squads as determined by who competes at the NCAA Regional Championships. Prior to those meets, a varsity squad is determined by the top seven athletes in the individual CCRI rankings or top nine at a conference meet.
Like the current procedure, an “A” squad in the regular season needs four runners from the official regional team or the aforementioned early-season determinations. Then, all head-to-head meetings where both teams ran “A” varsity squads are scored as dual meets. A matchup’s AVC score is created by taking the opponent’s TPI plus the “win-share” margin of victory:
Opponent’s TPI + 500 ( (Margin of Victory) / 35 ) = Matchup AVC
So if Teams A and B both have TPIs of 800 and Team B wins the dual meet 20-35, Team B’s win-share margin would be approximately 214, giving Team B a matchup AVC of 1014 and Team A a matchup AVC of 586. The final AVC is the average of all matchup AVCs.
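The worked example above can be reproduced with a short sketch (again, the function name is mine; the formula follows the text):

```python
def matchup_avc(opponent_tpi: float, team_score: int, opponent_score: int) -> float:
    # Matchup AVC = opponent's TPI + 500 * (margin of victory / 35),
    # where the margin is negative for the losing team (lower score wins).
    margin_of_victory = opponent_score - team_score
    return opponent_tpi + 500 * (margin_of_victory / 35)

# The article's example: Teams A and B both have TPIs of 800,
# and Team B wins the dual meet 20-35.
print(round(matchup_avc(800, 20, 35)))  # Team B → 1014
print(round(matchup_avc(800, 35, 20)))  # Team A → 586
```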
Finally, the final CCRI is a combination of the TPI and the AVC score, meshing the hypothetical results with actual head-to-head finishes.
Having gone through some of the current ranking procedures, there are still a lot of questions about how the CCRI will be used, how it will evolve, and how it exactly functions. Ben Weisel, Michael Weidenbruch, and Sean Collins weigh in on some of those questions and more.
After researching the CCRI and working through the scoring procedure, I think there still needs to be a little more transparency into how exactly the formulas, rankings, and procedures are run. The USTFCCCA has been incredibly thorough and transparent up to now, especially with their full explanation of the system and the further expansion on it throughout the publication of team resumes and the additional details presented in their subsequent rankings releases. Regardless, the ambiguity in a few areas is still troubling should the CCRI ever move from a ranking system to a qualifying procedure, but we will get to that in a bit.
Looking purely at the ranking system as is, USTFCCCA has already announced a change for the 2019 version that will amplify the current system’s margin-of-victory component. Not only will it award points based on the “win share” procedure noted above, it will supplement those with an average individual CCRI bonus.
The example given by the USTFCCCA is that Team A beats Team B 15-50, but also has an average individual CCRI of 1200, compared to team B’s 900. Team A would earn the maximum 500 points* on win share from the perfect victory PLUS a 300 point bonus from the individual CCRI difference (1200-900 = 300). What other things might you like to see in future rankings?
*Note: USTFCCCA published that example with a maximum of 400 points from the win share, but that appears to be a typo based on their other statements on TPI.
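Under the announced 2019 change, the win-share term appears to top out at 500 points and be supplemented by the difference in average individual CCRI. A hedged sketch of that example (the function name and the exact way the two terms combine are assumptions, since the full 2019 formula has not been published):

```python
def tpi_points_2019(team_score: int, opponent_score: int,
                    team_avg_ccri: float, opponent_avg_ccri: float) -> float:
    # Assumed 2019 TPI scoring: a win-share term worth up to 500 points
    # for a perfect 15-50 victory, plus the average individual CCRI difference.
    margin_of_victory = opponent_score - team_score   # negative for a loss
    win_share = 500 * (margin_of_victory / 35)
    ccri_bonus = team_avg_ccri - opponent_avg_ccri
    return win_share + ccri_bonus

# USTFCCCA's example: Team A beats Team B 15-50, with average
# individual CCRIs of 1200 (Team A) and 900 (Team B).
print(tpi_points_2019(15, 50, 1200, 900))  # → 800.0 (500 win share + 300 bonus)
```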
The question of how much weight postseason races carry would need to be addressed. I don’t think it would be too difficult to give conference and regional results a bigger impact on the rankings, but it would need to be done deliberately. Otherwise, a team could have a great first half of the season, turn in mediocre postseason performances, and still be ranked highly based on its earlier results. Similarly, teams that heat up late in the season would need to be rewarded appropriately.
The question of how regional results directly affect the ranking would need to be answered as well. A few regions only have one team in the top 31 as the current rankings stand. Assuming the number and makeup of regions remain the same, this would be a foreseeable issue every year. Would the priority be bringing in the 31 best teams, or would there still be an emphasis on having at least two teams from each region? If it’s the latter, the ranking would be significantly affected.
Princeton was the top team from the Mid-Atlantic, ranked 30th, but adding a second team from both the Northeast and South would bump Princeton out of qualifying contention. The Mid-Atlantic would then need both of its teams added in, which would bump out the last two teams that were 3rd or worse in their region but still top 31 nationally. That would result in teams that were originally solidly in the top 31 being bumped out because of the regional requirement.
For this reason, I can’t see the CCRI and a two-teams-per-region requirement working together. It would need to be one or the other.
Another question I have is, how are teams that do not qualify for Nationals affected in the post-nationals rankings? For example, Virginia finished the season ranked 13th by the CCRI. They moved up two spots after NCAA's concluded. How does this happen?
Presumably, teams that were previously ranked ahead of them slipped back, but I struggle to see how a team that missed qualifying for Nationals has somehow improved despite not even making the meet. There was also a lot of movement in the bottom half of the rankings, where none of the teams raced after regionals.
In our Over/Under Reaction article this year, we discussed whether it is a good idea to realign the regions. One suggestion a coach made to me was reducing the number of regions to four. If the number of regions is reduced, then teams would not automatically earn qualifying spots at regionals. The CCRI could potentially be a way to determine which teams go to which regions, as well as a way to balance out the four regions. With an objective ranking system in place, there could be a fair process that keeps all of the regions balanced. The geographic regions would need to be replaced by regions similar to the NCAA basketball tournament regions. While it is nice to put teams in their geographic region, the emphasis is on creating four equal regions.
The obvious problem with this is that the system needs to be more transparent so teams know how to give themselves the best chance at qualifying for regionals. In addition, the system is not exact, so drawing a line at an arbitrary number could unfairly exclude teams.
Another issue with using four equal regions is the amount of travel some teams could face. This could cost schools more money, which could prevent some teams from attending regionals even if they qualify. The hardest part of this proposal is determining which individuals are invited to the regional competition.
The benefits would be more competitive regional races and a higher likelihood of the 31 best teams in the country earning a spot at Nationals. This would also increase the overall competition of Nationals. The at-large bids could be eliminated along with the complex Kolas point system. Instead, the top eight teams from each region would automatically qualify. While this system could make qualifying for regionals more complex, it would certainly make the national qualifying system much easier.
I think if the NCAA were to consider regional realignment with the CCRI, it would need to reconsider regional meets altogether. Trying to balance regions based on historic CCRI would only make the more competitive regions less so, and would not make any weaker regions significantly more competitive, in my estimation (should the number of regions not change).
The larger issue I have with regional realignment by CCRI would be the logical contradiction by the NCAA. By doing so, the NCAA would admit it’s looking either for the top 31 teams in the nation or for a greater distribution of teams across regions. In the first scenario, realignment by CCRI would only partially adjust the procedure and not solve the issue. If you believe the CCRI determines the top 31 teams in the nation, then the CCRI should pick those teams outright without regard to region.
With regard to the latter, realignment does not alter the distribution of teams from across the nation. It might change the way we think about the selection process, but it would not significantly alter the geographic distribution. UTEP will be in El Paso regardless of whether they compete in the South Central or the Mountain region. Very few consistent team qualifiers would be affected and it would not significantly alter the results.
Moving to four regions (or five, or six) could help with both of those issues, but I’m struggling with how that makes a large difference. The eastern regions (Northeast, Mid-Atlantic, and Southeast) are some of the largest by number in the current system and any realignment would also have to stand up to scrutiny on equally proportioned regions by quality and quantity. This would likely lead to an even more stacked West region that would include most of the current Mountain region, or a CCRI balancing would gerrymander some odd looking regions that could not be embraced.
I don’t think it would be a terrible idea to use the CCRI rankings to help determine national qualifying spots. The ranking is based entirely on performance throughout the season, so every meet matters. It would also take the weight off of teams that underperform at regionals, as of right now regional performance is a heavy influencer when it comes to qualifying and I think that is somewhat flawed.
The Kolas system does a good job picking the at-large teams, but sometimes weaker teams that have strong regional performances get pushed in and end up keeping other teams out. For example, Tulsa was not expected to get pushed into NCAA's this year, and had Oklahoma State finished 3rd in the Midwest, Georgetown would have qualified. Georgetown is ranked 11 spots higher than Tulsa in the CCRI.
I like the idea of reducing the number of regions, but I agree with Sean that this would not fix some of the perceived issues with the current system. Ben’s idea of teams needing to qualify for regionals based on the CCRI rating is interesting. If the number of regions is reduced, the number of teams in each will increase, so it would not make sense to have every team compete in what would be a massive race. Breaking the CCRI down by region would be easy, and the top X teams could qualify for regionals. That seems fair to me.
The problem with this is for individuals. The ICPR could be used to determine this, but I would be worried that this could be a messy system for determining which individuals on low-ranked teams get a spot at regionals.
To piggyback on the individual ranking issues, I’ve been worried about athletes who don't race until late in the season. While I cannot come up with an immediate example, I’m wondering how the ranking applies to someone who runs unattached early in the season and then competes officially later. Do the unattached races count toward their ranking (or should they)?
Another possible, but less likely, issue is that a team could decide to run slower athletes in order to lower the mean time of a race and bolster their teammates’ gap scores. It likely would not have a large effect, but entering speed-oriented 800 runners in 6k/8k races would likely put them toward the back of the pack and benefit faster athletes. This could have an especially large impact in a dual meet between mismatched teams, or could bolster an athlete’s case for individual qualifying if the CCRI were used that way.
I admit, there are a number of rather drastic negative assumptions one has to make in order to reach that conclusion, but it appears that bad-natured racing tactics and decisions could positively impact some athletes, and that does not sit well with me.
To close out this discussion, how does everyone feel about the rankings results? Do you think they align with your views of the top teams in the nation?
Looking at the top 10 teams, those teams matched exactly with the top ten teams at NCAA's, so I’d have to say that these rankings do a relatively good job overall. Moving further down the list, it becomes difficult to have a ton of faith in the rankings. For example, the Buffalo men finished 7th in the Northeast region while the Yale men finished 17th, but the final rankings have them separated by only three positions, 85th and 88th. Neither team had any major issues at regionals to my knowledge.
So overall, I’m glad that there’s a new ranking in place and I think that it’s doing alright especially for the first year, but I’m not quite confident in it yet to make close calls between teams.
Overall, these rankings seem pretty fair. There’s nothing I have a major objection to in the top 50. It’s really interesting to see how well some teams placed in the rankings despite not qualifying for nationals. Virginia stands out to me here. They are ranked 13th, which I think is appropriate (maybe a tiny bit of an overrating, but not much).
It is refreshing to see teams getting the recognition they deserve despite not necessarily performing on the biggest stage. Syracuse is another team that I think is appropriately recognized in the rankings. A bad performance at NCAA's doesn’t have to ruin a team’s position. However, I agree with Sean that the rankings become less reliable further down the list.
Lehigh is ranked 110th compared to Bucknell in 120th. Bucknell beat Lehigh at the Patriot League Championship as well as the Mid-Atlantic regional. The two barely met during the regular season, and the postseason results tell me that Bucknell is the better team.
Despite the flaws in the lower end, I think this new ranking system is a good addition. I will be interested to see if it is eventually used to determine national qualifying spots because it could dramatically change what the fields for NCAA's look like. If regionals is no longer an all-or-nothing affair, some teams may get the benefit of the doubt and have another chance to prove themselves at Nationals. This would also likely increase the level of competition.
Like Michael and Sean, I think the rankings have done a good job at rating the top teams in the country. The top 50 teams are about right, with just a few exceptions. As you go further down the list, there are more errors. In the South region, 5th-place Georgia Tech is ranked 20 spots ahead of 2nd-place Florida State, while 3rd- and 4th-place Tennessee and Belmont are also ranked ahead of the Seminoles. The CCRI system needs to at least have similar teams within 10 spots of each other in order for the system to be used as the qualifying metric for regionals or Nationals.