Producing a True Champion

In 1998, the powers-that-be introduced the Bowl Championship Series (BCS) formula to determine the best two teams in the country at season's end. Many saw this as an upgrade over the Super Alliance of an earlier era and the prior agreements that lacked both Big 10 and Pac-10 support. For several years, college football's elite tinkered with the formula, until, after last season, the denizens dismantled the concept.

Oh, it is still there, but now the overwhelming preponderance of the weight falls on the human voters: writers and coaches. To make matters worse, the preseason polls from either sector now carry increased weight; it is that much more unlikely that a team with a strong schedule can play its way to the top -- as Colorado nearly did in 2001.

Unfortunately, matters eroded even further, as both margin of victory (following Florida State's ascension over Miami in 2000) and strength of schedule (following a series of complaints from USC supporters in 2003) are no longer part of the system.

College football is an oligarchy of the elite, and the vast majority of that muscle is derived from reputation, as determined through preseason polling -- which, for several reasons, tends to hold up quite nicely.

In the first place, prognostication is thorough, excepting a few lapses, and most July/August national championship contenders will at least enjoy successful seasons. A problem arises when one realizes that this is also tantamount to a self-fulfilling prophecy -- in other words, if a team is ranked fairly high, all it is required to do is prevail on the football field, which is not the case with teams ranked below it.

Any sway that the former BCS possessed dissolved in the wake of 2003, when Pete Carroll's USC Trojans were forced to Pasadena, despite holding the No. 1 spot in both polls, because of their piddling No. 3 rating in the BCS.

Why was USC damaged? Because Auburn, Washington, Arizona State, and Notre Dame were not as good as originally forecast; thus USC's schedule did not achieve parity with then-No. 1 Oklahoma. Perhaps, if the margin-of-victory component were still in vogue...

All media personnel and all collegiate coaches are biased in some respect, as are all computer-formula gurus, and as is any fan. However, for a good, solid, and, oh yes, objective formula for determining a national champion, only two criteria appear to suffice: Strength of Schedule (SOS) and Margin of Victory (MOV).

In theory, these two components should balance each other out, thus providing a true, and fair, rendering of America's best college football teams. Every team in Division 1-A, nearly 120 schools, deserves a chance to play for the championship in early January.

College football is not the National Football League and cannot provide for an adequate playoff system given its arrangement. Besides, given the nature of the college game, a four- or eight-team playoff would prove more exclusionary than the current setup.

Only members of the oligarchy, defined currently as the Big Six conferences and Notre Dame (although the Big East is teetering and ND does not currently meet the standard), have an opportunity to play for the national championship. The reasons for this are sound: these are the only 50-some schools that play a schedule rigorous enough to legitimize their annual campaign, and thus from this collective quarrel two teams are found.

However, lesser-known and lesser-funded schools occasionally possess teams on par with the nation's elite -- but are routinely dismissed as lightweights, when they do not routinely dismiss themselves with decisive losses on hallowed ground (i.e. a Big Six campus) that has borne witness to Heisman Trophy winners and championship teams.

One of the best ways of determining which school had the better Saturday is to compare its competition with its scoreboard. Consider this hypothetical: one favorite struggled at home while another team, an underdog, pounded away in hostile territory. Is there any question which team was more proficient that day? Why we cannot extrapolate this analysis over the course of an entire season baffles this observer.

We must repeat: this is not the NFL, and a win is not a win. There are too many teams and home field is too staggered for this comparison to be made. Complex though it may sound, the best method for determining the two best college football teams in a given season is to rate them by SOS and MOV -- however, this remains inadequate because it does not distinguish between friendly settings and those of a decidedly more onerous sort.

If a team wins on the road, all else being equal, it has performed better than it would have at home. This has never been explicitly accounted for in college football. To some observers, such a performance is defined as CLUTCH, particularly when it comes at the expense of an imposing foe.

Taken by itself, the CLUTCH Index [.01 for each road win, escalating to .02 if the beaten team otherwise won 70% of its games and to .03 if the beaten team otherwise triumphed in 80% of its games, with half of each value allocated to neutral-site games; a bonus of .05 is offered if the vanquished team otherwise did not lose a game (even if the winner prevailed at home), so that the total reaches a maximum of .08 when an otherwise-unbeaten team flops at home: see Oregon 2001, North Carolina 1997] is not a sufficient mechanism for establishing the best two teams in America.
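For the technically inclined, here is a minimal sketch of how that index might be tallied. The Game structure and function names are hypothetical illustrations, not an official formula; the point values and thresholds are lifted straight from the bracketed definition above.

```python
# A minimal sketch of the CLUTCH Index described above. The Game fields
# and function names are hypothetical; the point values come from the
# bracketed definition in the text.
from dataclasses import dataclass

@dataclass
class Game:
    site: str                      # "road", "neutral", or "home"
    opp_other_win_pct: float       # opponent's winning pct in its other games
    opp_otherwise_unbeaten: bool   # opponent lost no other game all season

def clutch_value(win: Game) -> float:
    """CLUTCH credit earned for a single victory."""
    # Base credit scales with how good the beaten team otherwise was.
    if win.opp_other_win_pct >= 0.80:
        base = 0.03
    elif win.opp_other_win_pct >= 0.70:
        base = 0.02
    else:
        base = 0.01

    if win.site == "road":
        value = base
    elif win.site == "neutral":
        value = base / 2.0         # neutral sites earn half credit
    else:
        value = 0.0                # ordinary home wins earn no base credit

    # Bonus for beating a team that otherwise did not lose a game,
    # even when the winner prevailed at home; a road win over such a
    # team reaches the .08 maximum (.03 base + .05 bonus).
    if win.opp_otherwise_unbeaten:
        value += 0.05

    return min(value, 0.08)

def clutch_index(wins: list[Game]) -> float:
    """Season total: the sum of credits over every win."""
    return sum(clutch_value(w) for w in wins)
```

Under this reading, beating an otherwise-unbeaten team on its own field is worth the full .08, a neutral-site win over the same team .065, and a home win over it .05.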

Originally, it was conceived for the Big Six oligarchy (and ND), but why not expand it to every Div. 1-A school?

Consider: lesser conferences inherently play larger ones (usually at the beginning of the season), and this will drag down their percentages. There is a much greater likelihood, therefore, of a team from the SEC actually having a chance to play excellent football teams and defeat them (particularly on the road) than a team from the WAC.

The lesser conferences simply are not as good, but we do not require that they be, nor profess to care; we ought to be concerned only that the best WAC team is not viewed as inherently inferior to the best SEC team. Is this usually the case? Most certainly, but we must entertain the possibility, however remote, that a non-Big Six oligarch can play for all the marbles.

Interestingly enough, this both helps and hurts non-Big Six schools. It can assist them greatly should they defeat a conference opponent (or someone else) that does not otherwise lose; it can destroy their ranking, however, should they not even play a .700+ team on the road, let alone beat one.

Strength of Schedule is correlated with CLUTCH, but the difference is palpable: CLUTCH recognizes exemplary play away from familiar locales. So here is what one should do: accumulate the data for all Div. 1-A schools following the end of the regular season (the standard SOS may be used if so desired). To stoke the fire, wizardly mathematicians could run the numbers every Saturday, thus giving us our poll, bearing in mind that these numbers would vary weekly -- sometimes to an extraordinary degree.

Three categories, each afforded equal weight, summed and divided by three, could produce the best two teams in America, regardless of whatever pollsters believe they see or even the number of losses each particular school carries.

I will go further: we should not overly concern ourselves with a team's W-L record; in fact, the three formulaic components espoused above should predict that statistic for us. If a team finishes first, following an aggregate of its SOS, MOV, and CLUTCH totals, it is, by any real definition, the best team in the country.
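As a rough illustration, that equal-weight aggregate might be computed as follows. The min-max normalization to a common 0-1 scale is my own assumption; the proposal above only specifies that the three categories carry equal weight and are divided by three.

```python
# A hedged sketch of the equal-weight SOS/MOV/CLUTCH aggregate. The
# normalization step is an assumption filled in for illustration.

def normalize(raw: dict[str, float]) -> dict[str, float]:
    """Rescale a raw component so every team falls between 0 and 1."""
    lo, hi = min(raw.values()), max(raw.values())
    span = (hi - lo) or 1.0
    return {team: (v - lo) / span for team, v in raw.items()}

def championship_rating(sos: dict[str, float],
                        mov: dict[str, float],
                        clutch: dict[str, float]) -> dict[str, float]:
    """Equal-weight average of the three components for every team."""
    sos_n, mov_n, clt_n = normalize(sos), normalize(mov), normalize(clutch)
    return {team: (sos_n[team] + mov_n[team] + clt_n[team]) / 3.0
            for team in sos}

def title_game(ratings: dict[str, float]) -> list[str]:
    """The top two rated teams, regardless of won-loss record."""
    return sorted(ratings, key=ratings.get, reverse=True)[:2]
```

Run after the final regular-season Saturday (or every Saturday, as suggested above), the two names returned would be the title-game pairing.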

A team's won-loss record is often seen as the starting point, which can lead to voter error in wrongly awarding accolades to a team that might possess a suspect SOS, a less-than-stellar MOV, and no real CLUTCH score of any kind. Many may persist in their belief that a W-L mark reveals the nation's elite, save for the small schools, but in college football, not all wins and losses are created equal.

Take the example of 2002: prior to their bowl games, USC and Ohio State were strikingly similar, except for the fact that one school (USC) had played two teams on the road that were otherwise 10-2 (Kansas State) and 9-3 (Washington State), respectively. Not so coincidentally, the Trojans lost both contests, but the larger point is that the Buckeyes did not play any .700 teams in hostile territory.

Do we need to provide for a "good loss" mechanism, as well, to ensure that teams are not penalized for losing tough, long-shot, football games, when others do not participate in such contests?

Probably, but I have yet to normalize precisely how a home game differs from a neutral or away game for a given team -- all I can conjecture is that home games are easier, over the long run perhaps far easier, to win. In this respect, the SOS/MOV/CLUTCH formula is flawed -- but in my judgment, if applied in its aforementioned form, it would be far more becoming than what we currently "enjoy."
