Dec 18, 2017

Upsets in the Making (Part 2) - Historical Analysis of Upsets

It's time to upset my readers even more with the next article in this 3-part series on Upsets. In Part 1, a theoretical framework was built for analyzing upsets based entirely on probability. It was constructed using all possible combinations of upset-potential match-ups (UPMs) for each of the six rounds of the bracket. It also examined the mathematical underpinnings of seed differentials for each of the six rounds. While Part 1 covered the vast range of what could happen in the tournament (the theoretical perspective), this article will cover what has actually happened in the tournament: The Historical Perspective. This article will examine the historical perspective through two viewpoints: the big picture (the overall counts on upsets and UPMs) and the individual games (a breakdown of the seed-by-seed match-ups). Furthermore, this article will be more descriptive than explanatory -- recording and summarizing the results rather than providing explanations and insights for the results. Let's get to it.
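The UPM-counting idea described above can be sketched in code. To be clear, this is a hypothetical reconstruction rather than the article's actual method: it assumes the standard region layout and treats any possible pairing of two different seeds as an upset-potential match-up, which may differ from Part 1's exact definition.

```python
from itertools import product

# First-round slots in one region, in standard bracket order.
REGION = [1, 16, 8, 9, 5, 12, 4, 13, 6, 11, 3, 14, 7, 10, 2, 15]

def region_rounds():
    """Possible seed pairings for rounds 1-4 (played inside one region)."""
    slots = [{s} for s in REGION]  # seeds that can reach each slot
    rounds = []
    while len(slots) > 1:
        pairings = set()
        next_slots = []
        for a, b in zip(slots[::2], slots[1::2]):
            # Every seed in slot a could meet every seed in slot b.
            pairings |= {frozenset((x, y)) for x, y in product(a, b)}
            next_slots.append(a | b)  # winner's slot can hold either side's seeds
        rounds.append(pairings)
        slots = next_slots
    return rounds, slots[0]  # slots[0] is all 16 seeds (possible region champs)

rounds, champ_seeds = region_rounds()

# Rounds 5-6 (Final Four and title game) pair champions of different regions,
# so any seed can meet any seed, including a same-seed match-up (e.g., 1 vs 1).
cross = {frozenset((x, y)) for x, y in product(champ_seeds, champ_seeds)}
rounds += [cross, cross]

for i, r in enumerate(rounds, start=1):
    # A pairing of two different seeds is counted here as an upset-potential
    # match-up; a same-seed pairing cannot produce an upset by seed alone.
    upms = [p for p in r if len(p) == 2]
    print(f"Round {i}: {len(r)} possible seed pairings, {len(upms)} UPMs")
```

Under these assumptions the count of possible pairings grows each round as the pool of seeds that can reach each slot widens, which is one way to see why later rounds admit so many more upset combinations than the fixed first-round slate.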

Dec 4, 2017

Upsets in the Making (Part 1) - Theoretical Analysis of Upsets

I hope the two teasers that I left helped you figure out the subject of this article: Upsets. It doesn't take a rocket scientist (and I certainly am not one of them) to know that upsets are the biggest part of the tournament experience. In fact, a well-rounded understanding of upsets can make a difference of 3-15 picks in your bracket compared to your competitor's bracket. To gain that top-level insight into upsets, this article will examine upsets from the first of three different perspectives: The Theoretical Perspective.

Nov 20, 2017

Unorthodox Bracket-Picking Methods

If you have followed Bracket Science in the past or PPB currently, you are already familiar with the mainstream bracket-picking tools, such as F4/Champ Contender/Pretender Rules, Upset/Victim Rules, QC/SC Analysis, and Aggregate Value Estimation. Tools like these are mainstream due to a number of reasons, including but not limited to reliability in accurate picks, time/cost efficiency in creation, and simplicity in application. Not all bracket-picking methods have these qualities, and usually this results in the method being passed over for one that does have them. Since we at PPB are always trying to break the norms and raise our head above the crowd to gain newer or better insights, I thought I would dig up one of my first bracket-picking systems and use it as the basis of this article on unorthodox methods.

Nov 6, 2017

Welcome to the 2017-18 College Basketball Season

Just as the title says, Welcome to a new season of college basketball. I'm excited to bring back Project: Perfect Bracket for a third season, and as always, the goal is to do what no one has done before: Pick a "PERFECT" 63-game NCAA tournament bracket.

Before I grade myself on last season's performance, I want to give you a firm outlook of what to expect from PPB for the whole season. In the first year (2015-16), I was so disorganized and erratic in scheduling that I over-worked myself to the point of not producing a bracket (other than my gut picks as soon as the bracket was revealed). I didn't make that mistake in the 2016-17 season, but I did make new ones (all the more from which to learn). I doubt I have it completely figured out for 2017-18 either, but if I avoid the planning and preparation mistakes of the last two years, then this should be my best year yet. So, what can you expect this season?

Mar 16, 2017

Return and Improve Model

Too Long/Didn't Read: The introduction is a very long story on how I came up with the idea of Returning Players and Improving Tournament Performance. It may waste your time! This will also be a rather quick article because I'm going to do significant back-testing on this model (and many others) over the summer and present the findings in the 2017-18 season.

Mar 14, 2017

2017 Quality Curve Analysis - Final Edition

The 2017 NCAA Tournament bracket has been revealed, and we know the 68 teams that will be playing. The best part of all: narrowing the basketball landscape to these 68 teams means we can finally take a look at the elusive Seed Curve (SC). This article is going to be pretty straightforward, so here is the run-down if you are interested:
  1. Catching up to speed from the March Edition to the Final Edition
  2. Comparison of the 2017 QC to the 2017 SC
  3. Breakdown of the 2017 SC and comparisons to previous years

Mar 2, 2017

2017 Quality Curve - March Edition

How's everyone doing? It's the 1st day of March, which means a lot of things: the weather gets crazy (lion vs lamb), bracket scientists begin working a little harder and sleeping a little less, and most of all, it is the day that we set aside for Quality Curve Analysis covering all games concluded through February. This is definitely going to be an interesting one, so without further ado, let's jump right into it.

The Evolution of the 2017 Quality Curve

The first place that we must begin is an analysis of how the QC has progressed over the three time points of analysis, from the January analysis to the February analysis to the current one. I say a picture is worth a thousand words, so let's see what we have.

[Figure: the 2017 Quality Curve at the January, February, and March analysis points]

Feb 22, 2017

Trends: The Least-Quantified Metric

Since I left you the entire season in suspense over the "most talked about...least quantified" metric, I thought I would end the suspense right away by putting it in the title. This way, we can focus entirely on the idea instead of the mystery. If you have ever watched a basketball game (or any sporting event), or even the unveiling of the brackets, I guarantee that you have heard a specific team's current trend "qualified" in no uncertain terms: "This team is on a 6-game winning streak," "This team seems to be in a shooting slump," or "This team's average points per game is X, but over their last 4 games, it is higher." Although these statements contain numerical evidence suggestive of a trend in place for a specific team, I would say these qualitative statements suggest nothing more than plain factual evidence. Yes, that team has won 6 straight games. Yes, that team is missing a lot of open shots. Yes, that team's points per game is higher now than it was 4 games ago. To see why these statements are deceptive when it comes to the concept of trends, let's jump right in.
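One way to move from "qualified" to quantified is to fit a simple least-squares line to a team's recent game results and report the slope. This is a minimal sketch of that idea, not the article's eventual method; the team scores below are invented for illustration.

```python
def trend_slope(values):
    """Least-squares slope of values against game index 0..n-1."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical points scored over a team's last 8 games, oldest first.
last_eight = [68, 71, 70, 75, 74, 79, 81, 83]
slope = trend_slope(last_eight)
print(f"Scoring trend: {slope:+.2f} points per game")
```

A positive slope says scoring is rising game over game; comparing that slope to zero (or to the team's season-long figure) turns a vague "they're heating up" into a number you can rank teams by.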

Feb 15, 2017

The New Standard - Changes to the Selection Process

It's good to be back here on another Wednesday, writing another article and preparing to pick a perfect bracket this year. If you have followed recent announcements in the college basketball landscape, you will know that the NCAA scheduled a meeting on Jan 20 concerning the use of advanced metrics in the Selection Process. Though the NCAA specifically stated that this meeting was exploratory and such changes wouldn't be implemented for the 2017 tournament, this development is definitely worth examining. For this article, I will record the known details of the meeting, examine the old rating system (the RPI rankings), and investigate the new meta of bracket prediction (when the selection committee knows/values what we have known/valued for the last decade).

"Mr. Secretary, I propose an official reading of the Minutes of the last meeting"

If you've ever been to a business meeting, my advice is: don't attend one sleepy. In my opinion, the reading of the minutes of the previous session is the absolute worst part. You are essentially going over the same material you covered in full detail at the last meeting, but this time, its purpose is purely record-keeping. Here are the minutes of the meeting, presented in a way that I hope won't put my readers to sleep.

Who attended the meeting?
  • Dan Gavitt - NCAA Senior Vice President for Basketball: He ran the meeting
  • David Worlock - NCAA Director of Media Coordination and Statistics: He was the statistics expert from the NCAA's side of the table.
  • Jim Schaus - Ohio University Athletic Director: Member of the 2016-17 NCAA Selection Committee - He represented the Selection Committee.
  • Ken Pomeroy - Advanced Metrics Statistician for College Basketball (Link): KenPom Ratings uses a predictive approach.
  • Jeff Sagarin - Advanced Metrics Statistician for College Basketball (Link): Sagarin Ratings uses a predictive approach.
  • Ben Alamar - Advanced Metrics Statistician for College Basketball (Link): BPI Ratings uses a predictive approach.
  • Kevin Pauga - Advanced Metrics Statistician for College Basketball (Link): KPI Ratings uses a results-based approach.
  • Others attended, but I believe these are the most relevant.

Feb 1, 2017

2017 Quality Curve - February Edition

It has been exactly one month since the last Quality Curve (QC) analysis, and that can only mean one thing: It's time to do another one. With half of the conference schedule completed and more than 97% of the non-conference schedule also completed, we are somewhere between the 2/3- and 3/4-mark of the 2016-2017 pre-tournament season. Most of all, our data is better now than it was in January, and it is a little bit closer to its eventual mark (which occurs on Selection Sunday). Without further ado, let's get started with some preliminary factors before we hit the main event.

Preliminary Factors

You are about to enter the February Edition of the Quality Curve Analysis. Please follow these simple instructions before proceeding.
  1. If you haven't read the January Edition, follow the link and read the section on "Reviewing The Changes." The efficiency data typically used for this analysis changed formats between this season and the previous season. Understanding these changes is important to understanding the data.
  2. In typical QC analyses, the current data is compared to pre-tourney data from previous years in order to maximize predictive value. Since the method of calculation changed (see Step 1), I do not have the pre-tourney data for any of the previous years. Instead, I am approximating those years using their post-tourney data, which happens to be readily available. However, I have noticed sizable movements from pre-tourney to post-tourney data, so if anyone has the pre-tourney data for those years under the old methodology (Pythag), I could possibly make a workable substitute.
  3. The QC uses the efficiency ratings for the Top 50 teams, which is a close approximation for the 1-12 seeds in the tournament. Every so often, a team in the Top 50 efficiency ratings misses the tournament or a team outside the Top 50 makes it, but the overlap is strong. The QC gives us a picture of the college basketball landscape, whereas the Seed Curve (not produced until the bracket is revealed) gives us a picture of NCAA tournament quality.
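Mechanically, building a Quality Curve along the lines described above is just sorting and truncating. This is a hypothetical sketch, assuming "efficiency rating" means a single adjusted efficiency margin per team (KenPom-style AdjEM); the ratings below are invented, not real season data.

```python
def quality_curve(ratings, top_n=50):
    """Sort team ratings best-first and keep the top_n: rank -> rating."""
    return sorted(ratings, reverse=True)[:top_n]

# Invented ratings for a handful of teams; a real run would feed in the
# full field of ~350 teams and keep the Top 50.
ratings = [32.1, 28.4, 27.9, 25.0, 24.2, 21.7, 19.8, 18.3, 15.0, 12.6]
qc = quality_curve(ratings, top_n=5)
print(qc)  # the five best ratings, in descending order
```

Plotting rating against rank gives the curve itself; the shape (steep at the top, flat in the middle, etc.) is what the analysis compares across months and years.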

Jan 4, 2017

2017 Quality Curve - January Edition

Custom has it that you always wish your readers a Happy New Year in your first article of a new year. I'm not going to do that since our New Year begins in November. Instead, I am going to dive into what could be a PPB tradition for the beginning of January: the first Quality Curve analysis of the current season. It seems fitting that we have enough information by this point in the season to examine its current state and what it may hold in store for March.

However, the 2016-17 season presents a brand new caveat: the methodology of the data set (KenPom Ratings) used to make the Quality Curve changed. Instead of blindly pasting the current curve to a historical curve and inferring tournament results from the year-of-best-fit, we first need to determine if the old way of doing things will work with the new data set.
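The "year-of-best-fit" step mentioned above can be sketched as a nearest-curve search. This is a hypothetical reconstruction, assuming the fit is judged by sum of squared differences between curves; the curves and years below are invented, and only a top-5 slice is shown for brevity.

```python
def best_fit_year(current, historical):
    """Return (year, distance) for the historical curve closest to current."""
    def sq_dist(a, b):
        # Compare the curves point by point at each rank.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(((yr, sq_dist(current, curve)) for yr, curve in historical.items()),
               key=lambda item: item[1])

current_qc = [30.0, 26.5, 24.0, 22.0, 20.5]  # invented current-season curve
historical_qcs = {                            # invented historical curves
    2014: [31.0, 27.0, 24.5, 22.5, 21.0],
    2015: [28.0, 25.0, 23.5, 21.0, 19.0],
    2016: [30.5, 26.0, 24.5, 21.5, 20.0],
}
year, dist = best_fit_year(current_qc, historical_qcs)
print(year, round(dist, 2))
```

The caveat in the text is exactly about this step: if the current curve comes from a changed methodology while the historical curves use the old one, the distances are no longer apples-to-apples, so the best-fit year may be an artifact of the format change rather than a real resemblance.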