Feb 22, 2017

Trends: The Least-Quantified Metric

Since I left you in suspense over the "most talked about… least quantified" metric for the entire season, I thought I would end the suspense right away by putting it in the title. This way, we can focus entirely on the idea instead of the mystery. If you have ever watched a basketball game (or any sporting event), or even the unveiling of the brackets, I guarantee that you have heard a specific team's current trend "qualified" in no uncertain terms: "This team is on a 6-game winning streak," "This team seems to be in a shooting slump," or "This team's average points per game is X, but over their last 4 games, it is higher." Although these statements contain numerical evidence suggestive of a trend for a specific team, I would say these qualitative statements amount to nothing more than plain factual evidence. Yes, that team has won 6 straight games. Yes, that team is missing a lot of open shots. Yes, that team's points per game over its last 4 games is higher than its season average. To see why these statements are deceptive when it comes to the concept of trends, let's jump right in.
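To make the "factual evidence" point concrete, here is a minimal sketch (with invented game scores, not real team data) of the exact computation behind the broadcast line "their points per game is X, but over their last 4 games, it is higher." Note that the code only states a fact about two averages; it says nothing about whether a genuine trend exists.

```python
# Hypothetical game-by-game scoring for one team (made-up numbers).
def scoring_averages(points_per_game, window=4):
    """Return (season average, average over the last `window` games)."""
    season_avg = sum(points_per_game) / len(points_per_game)
    recent_avg = sum(points_per_game[-window:]) / window
    return season_avg, recent_avg

games = [68, 75, 70, 62, 80, 84, 79, 88]
season, recent = scoring_averages(games)
print(f"Season: {season:.2f} PPG, last 4: {recent:.2f} PPG")
# Prints: Season: 75.75 PPG, last 4: 82.75 PPG
```

The output is exactly the kind of statement quoted above: numerically true, but by itself no more a "trend" than saying the team won its last game.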

Feb 15, 2017

The New Standard - Changes to the Selection Process

It's good to be back here on another Wednesday, writing another article and preparing to pick a perfect bracket this year. If you have followed recent announcements in the college basketball landscape, you will know that the NCAA scheduled a meeting on Jan 20 concerning the use of advanced metrics in the Selection Process. Though the NCAA specifically stated that this meeting was exploratory and that such changes wouldn't be implemented for the 2017 tournament, the possibility is definitely worth exploring. For this article, I will record the known details of the meeting, examine the old rating system (the RPI rankings), and investigate the new meta of bracket prediction (when the selection committee knows/values what we have known/valued for the last decade).

"Mr. Secretary, I propose an official reading of the Minutes of the last meeting"

If you've ever been to a business meeting, my advice is not to attend one sleepy. In my opinion, the reading of the minutes of the previous session is the absolute worst part. You are essentially going over the same material you covered in full detail at the last meeting, but this time purely for record-keeping. Here are the minutes of the meeting, though I'll try to present them in a way that won't put my readers to sleep.

Who attended the meeting?
  • Dan Gavitt - NCAA Senior Vice President for Basketball: He ran the meeting
  • David Worlock - NCAA Director of Media Coordination and Statistics: He was the statistics expert from the NCAA's side of the table.
  • Jim Schaus - Ohio University Athletic Director: Member of the 2016-17 NCAA Selection Committee - He represented the Selection Committee.
  • Ken Pomeroy - Advanced Metrics Statistician for College Basketball (Link): KenPom Ratings uses a predictive approach.
  • Jeff Sagarin - Advanced Metrics Statistician for College Basketball (Link): Sagarin Ratings uses a predictive approach.
  • Ben Alamar - Advanced Metrics Statistician for College Basketball (Link): BPI Ratings uses a predictive approach.
  • Kevin Pauga - Advanced Metrics Statistician for College Basketball (Link): KPI Ratings uses a results-based approach.
  • Others attended, but I believe these are the most relevant.

Feb 1, 2017

2017 Quality Curve - February Edition

It has been exactly one month since the last Quality Curve (QC) analysis, and that can only mean one thing: it's time to do another one. With half of the conference schedule completed and more than 97% of the non-conference schedule also completed, we are somewhere between the 2/3- and 3/4-mark of the 2016-2017 pre-tournament season. Most importantly, our data is better now than it was in January, and it is a little bit closer to its eventual final form (which is locked in on Selection Sunday). Without further ado, let's get started with some preliminary factors before we hit the main event.

Preliminary Factors

You are about to enter the February Edition of the Quality Curve Analysis. Please follow these simple instructions before proceeding.
  1. If you haven't read the January Edition, follow the link and read the section on "Reviewing The Changes." The efficiency data typically used for this analysis changed formats between this season and the previous season. Understanding these changes is important to understanding the data.
  2. In typical QC analyses, the current data is compared to pre-tourney data from previous years in order to maximize predictive value. Since the method of calculation changed (see Step 1), I do not have the pre-tourney data for any of the previous years. Instead, I am approximating those years using their post-tourney data, which happens to be readily available. However, I have noticed sizable movements from the pre-tourney data to the post-tourney data, so if anyone has the post-tourney data using the old methodology (Pythag), I could possibly construct a workable substitute.
  3. The QC uses the efficiency ratings for the Top 50 teams, which is a close approximation for the 1-12 seeds in the tournament. More often than not, teams in the Top 50 of the efficiency ratings make the tournament, and teams outside the Top 50 do not. The QC gives us a picture of the college basketball landscape, whereas the seed curve (not produced until the bracket is revealed) gives us a picture of the quality of the NCAA tournament field.
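For readers who want to see the mechanics, here is a minimal sketch of how a Quality Curve can be formed, using randomly generated efficiency ratings in place of the real data (the distribution and team count are assumptions for illustration only): sort every team by rating, keep the Top 50, and pair each rating with its rank.

```python
import random

# Hypothetical adjusted efficiency ratings for 351 Division I teams.
# Real QC analyses use published efficiency data, not simulated values.
random.seed(17)
ratings = sorted((random.gauss(0.0, 10.0) for _ in range(351)), reverse=True)

# The Top 50 serve as a stand-in for the eventual 1-12 seeds.
top50 = ratings[:50]

# The Quality Curve is the shape of rating vs. rank over those 50 teams.
quality_curve = list(enumerate(top50, start=1))  # (rank, rating) pairs
```

Comparing this season's curve to previous seasons means comparing these rating-vs-rank shapes, which is why the changed calculation method in Step 1 matters: curves built under two different methodologies are not directly comparable.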