Jan 1, 2018

2018 Quality Curve Analysis - January Edition

While everyone else in the world is ringing in the new year watching the ball drop, I'm pretty sure my loyal readers are ringing in the new year by reading this article. Okay, I'm not that interesting of a writer, but we have reached that point in the season where we can take our first real peek at the 2018 tournament. Today, I bring you the January Edition of the 2018 Quality Curve Analysis, an annual tradition here at PPB, and to make it even more special, it marks the birthday of PPB. Today, PPB turns 2 years old (technically Jan 8, but I go by the article, not by the date on the calendar). I do want to take this quick opportunity to thank all my readers on this special occasion: if you weren't reading and interacting, I know for sure I wouldn't be writing articles. I owe this birthday to you all! So, Thank You Very Much, and with that, we move on to the gift that keeps on giving: The Quality Curve.




What a Difference a Day Makes

I've been working on this article since Tuesday, putting some topics together, rummaging through data sets, and compiling it all into some nice charts that tell the whole story. Then Saturday, Dec 30 happened, and the games on that day threw a wrench into everything. So, let's start there, with a chart that shows what I'm talking about.


To the untrained eye, this chart doesn't look like much. However, I follow the day-to-day movements of advanced metrics, and typically, movements of these magnitudes take a couple of weeks (3-4 games) to happen. This movement happened over a single day of games (for comparison, look here in the section "The Evolution of the 2017 QC" at monthly movements in the QC). To start the analysis, let's look at what the lines have in common. The blue line shows a steep drop from the 1-spot to the 10-spot and levels out from there, whereas the pink line shows the same steep drop from the 1-spot to the 9-spot and levels out from there. I'm not going to quibble over a one-spot difference because it most likely means nothing, and both the 9th- and 10th-ranked teams would translate to 3-seeds. If you remember the evolution of the 2017 QC (monthly changes) from the link above, those three curves saw differences in the structure and/or the transition points. The 12-30 and 12-31 curves show the same structure (steepness then leveling) and a meaningless shift in the transition point (10th to 9th). Turning to the differences between the two curves, the big one shows up at the inflection point, which occurs at the 17th and 18th spots. From the 1-spot to the 17th-spot, the 12-30 (Dec 30) curve is higher than the 12-31 (Dec 31) curve, and from the 18th-spot to the 50th-spot, the 12-30 curve is mostly lower than the 12-31 curve. This is a movement towards parity. It happens when top teams lose (NOVA, TXAM, and TENN) or win close against inferior competition (UVA, UNC, and XAV). Since I promised that I would start incorporating other advanced-metrics data sets into my work, let's look at the same phenomenon through the Sagarin ratings* (produced by statistician Jeff Sagarin).

We see roughly the same pattern, but at different intervals, and this is most likely due to the methodological difference between the KenPom ratings and the Sagarin ratings (explained in the Reference section). First, they both show steepness then leveling (a similar QC structure). Second, the point of transition (where the steepness changes to flatness) is located at the 11-spot, which is still in the 3-seed range. Third, the inflection point is located at the 26th and 27th spots, which is the only real difference between the two curves. This difference is almost ten spots later than in the KenPom ratings, but I still believe it to be a difference of methodology rather than a difference in meaning or interpretation. Nonetheless, college basketball had a rough day on Saturday, Dec 30, and the advanced metrics show it all too clearly.
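For readers who want to see the mechanics behind both comparisons, here is a minimal sketch (in Python) of how the crossover point between two quality curves could be located. The ratings below are hypothetical placeholders, not the actual KenPom or Sagarin values behind the charts.

```python
# A minimal sketch, assuming each curve is simply the Top 50 ratings sorted by
# rank (spot 1 first). All numbers below are hypothetical placeholders.

def inflection_spot(curve_a, curve_b):
    """Return the first 1-indexed spot where curve_a drops below curve_b."""
    for spot, (a, b) in enumerate(zip(curve_a, curve_b), start=1):
        if a < b:
            return spot
    return None  # the curves never cross within the Top 50

# Toy curves mimicking the movement toward parity described above:
# the earlier curve is higher at the front and lower at the back.
qc_12_30 = [28.0, 25.5, 24.0, 22.0, 20.5, 19.0, 17.5, 16.0, 15.5, 15.0]
qc_12_31 = [27.0, 24.5, 23.0, 21.0, 19.5, 18.5, 17.0, 16.5, 16.0, 15.8]

print(inflection_spot(qc_12_30, qc_12_31))  # -> 8 in this toy example
```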

What a Difference a Year Makes

Yes, I can analyze that day of basketball from every angle, but the real question is simple: What does the January QC say about the tournament in March? Let's take a look at how 2017 fared at this same point in its season. Since we know how 2017 turned out, and it was our first full year under the AEM methodology, we should be able to make some predictions. I will compare both 12-30 and 12-31 to 2017's January QC, but I think the 12-31 curve will look more like 2018's Final QC than 12-30 will.

I believe that picture says a mouthful. All throughout 2017, I said there was strength at the top. It showed in the data, and it manifested itself in the tournament: all four 1-seeds, 2-seeds, 3-seeds, and 4-seeds won their R64 games, and there were only four upsets in that round, which is below average for the R64. In 2018, there is strength at the very front of the curve (and I don't expect it to last) and relative strength after the 26th-spot. At this point in the season, we see plenty of parity, and parity is the key ingredient for upset soup. Some teams will find their way, start meshing together, and improve their team quality. But for 2018 to reach the quality levels that 2017 did, every team ranked 3 through 14 would have to do this, and I don't see that happening. Keep in mind, to pull off this feat, they would have to do it at the expense of the teams ranked 15 to 50, because those teams are higher (on average) than their 2017 counterparts. Let's not forget, the 2017 QC got stronger as a whole as the year went along (viewable from the link above), so the 2018 QC would have to make up this ground plus the ground gained by the 2017 QC from January to March. It is a tall task indeed.
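To make the rank-band argument concrete, here is a hedged sketch of how that gap could be quantified by averaging ratings over the bands mentioned above (spots 3-14 and 15-50). The curves are invented, smoothly decaying placeholders, not the actual 2017 and 2018 January values.

```python
# A sketch, not the blog's actual calculation: compare two January QCs by
# averaging ratings over the rank bands discussed above.

def band_average(ratings, first_spot, last_spot):
    """Average rating over an inclusive, 1-indexed rank band."""
    return sum(ratings[first_spot - 1:last_spot]) / (last_spot - first_spot + 1)

def compare_bands(qc_old, qc_new):
    for label, lo, hi in [("spots 3-14", 3, 14), ("spots 15-50", 15, 50)]:
        gap = band_average(qc_old, lo, hi) - band_average(qc_new, lo, hi)
        print(f"{label}: 2017 minus 2018 = {gap:+.2f}")

# Hypothetical curves: 2017 decays slowly from a high start (strong top and
# middle); 2018 is flatter overall (more parity), so it trails at the front
# but leads at the back.
qc_2017_jan = [30.0 - 0.45 * i for i in range(50)]
qc_2018_jan = [27.0 - 0.25 * i for i in range(50)]

compare_bands(qc_2017_jan, qc_2018_jan)
# spots 3-14: 2017 minus 2018 = +1.50   (2017 stronger in the seeded middle)
# spots 15-50: 2017 minus 2018 = -3.30  (2018's back half is relatively stronger)
```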

I do want to bring to your attention some interesting facts about the 2017 QC in January:
  • The eventual National Champion was ranked 8th at this time.
  • The eventual National Runner-Up was ranked 10th at this time.
  • The other two Final Four participants were ranked 22nd (ORE) and 37th (SCAR) at this time.
  • The Overall #1-seed (KU) was ranked 5th.
  • Eight teams in the January Top 50 (which is the amount used to make the QC) did not make the tournament field on Selection Sunday, and their ranks at the time were 21, 24, 30, 39, 42, 45, 48, and 50.
  • The 2017 Cinderella (11-seed XAV) was ranked 15th on this list. In other words, they were playing well at the start, then they lost their way during conference play, and then they found their magic again in March.
Now, let's take a look at the 2018 Jan QC through the Sagarin ratings.

As you would expect, there is no real difference here. There is pronounced relative strength at the top (from the 1-spot to the 25-spot) and significant relative weakness at the bottom (from the 26-spot to the end). In fact, there is so much dominance at the top of the 2017 QC that the #1 team is literally off the chart with a rating over 96 (I intentionally left it this way so that this chart would match the same scale as the previous QC using Sagarin ratings). As for the fun facts of the Sagarin QC:
  • The eventual National Champion was ranked 7th at this time.
  • The eventual National Runner-Up was ranked 10th at this time, as above.
  • The other two Final Four participants were ranked 21st (ORE) and 36th (SCAR), similarly.
  • The Overall #1-seed (KU) was ranked 5th, also like above.
  • Nine teams in the January Top 50 (which is the amount used to make the QC) did not make the tournament field on Selection Sunday, and their ranks at the time were 22, 27, 32, 34, 37, 45, 46, 47, and 50.
  • The 2017 Cinderella (11-seed XAV) was ranked 16th on this list, which is also similar.
No matter how we look at it, 2018 appears much weaker than 2017, and we all know what that foretells.

Interpretations

You may have noticed that throughout the blog I have been heavily emphasizing upsets in my articles. This is the reason. Even before the data began to take shape in December, I had seen the potential for upsets in March just from watching the games in November. I've essentially tried to get ahead of the storm. When all of the basketball pundits and analysts are pulling their hair out in March and saying the same old redundant phrases -- "Who would have seen this coming?" or "Another top seed goes down, what is happening here?" -- I hope to be the one left standing (with my readers, of course) at the top. The real question I want to answer is this: Why is 2018 different from 2017?

The first explanation is the experience factor. Time and time again, we hear that talent is essential for winning. I don't deny that talent will win you a lot of games. However, talent alone very rarely wins consecutive meaningful games, which is exactly what the NCAA tournament demands. Experience, on the other hand, is more likely to accomplish this feat. The 2017 Final Four was blanketed with experience -- juniors and seniors -- at every position. In fact, this experience may have been so important that two of the 2017 F4 teams most likely won't make it to the 2018 dance, and the other two won't have seeds as high as they had in 2017. This is all due to the amount of experience these four teams lost to graduation (as well as a few early entrants to the NBA who would have returned with another year of experience under their belts). Yes, these four teams all had talent, but so did the other teams in the tournament. The difference was experience, and in my opinion, experience (or, better phrased, experienced talent) is essential for deep tournament runs. 2018 is full of youthful talent, and one of my goals for this season is to figure out how youthful talent will impact the 2018 tournament. I think we already see one impact of youthful talent in the 2018 QCs above. An upcoming article for February will go more in-depth into this hypothesis and develop a predictive model, so for now, I'll leave this idea in its hypothetical infancy.

Another factor responsible for deep runs in tournaments is offensive consistency. I remember one of Pete's final articles on Bracket Science about this subject, and it focused on teams that get significant percentages of their points from post players versus points from guards. His point was simple: scoring is the difference between early exits and deep runs, and the place to get consistent scoring is in the post. When you look at the season statistics for F4 teams over the last few years, the vast majority of these teams scored a significant percentage of their points from post players. For example, the 2012 Championship game saw Kentucky's Anthony Davis and Terrence Jones against Kansas's Jeff Withey and Thomas Robinson, all of whom scored a significant percentage of their team's season point totals even though they were all inefficient in that particular game. I could list tons of F4 teams like these two that fit this mold: 2017 UNC, 2017 GONZ, 2016 UNC, 2015 UK, 2015 WISC, 2015 DUKE, 2014 WISC, and 2012 OHST. What I found really interesting was looking at the years without significant post contributors: 2014, 2013, 2011, and 2010. Ironically enough, these were pretty crazy years in the tournament. Although two of these years had 1-seed National Champions, the other two had a 3-seed and a 7-seed National Champion. When we look at the F4 seed breakdowns of those years, they are indicative of upset-ridden tournaments: 2014 (1, 2, 7, 8), 2013 (1, 4, 4, 9), 2011 (3, 4, 8, 11), and 2010 (1, 2, 5, 5). 2018 seems to be lacking in this department. There are teams with very talented yet very young post players (DUKE, ARI, UK, and KU), teams with system-oriented big men (NOVA, XAV, UCLA, and DAME), and teams with big men that contribute in ways other than scoring (PUR, LOU, GONZ, and MINN). There are very few teams in 2018 that fit the F4 mold we want to see, but they do exist (MIST and USC). Consistent scoring inside has been a hallmark of deep F4 runs, and the lack of it has been a hallmark of lower-seeded Final Fours. Given that lack in 2018, we might see Final Fours like those of 2014, 2013, 2011, and 2010.
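Since the post-scoring idea is really just bookkeeping, here is a hedged sketch of how a team's post-scoring share could be computed. The roster, positions, and point totals are invented placeholders, not real season statistics, and treating PF/C as "post players" is my assumption, not Pete's definition.

```python
# A sketch of the "share of points from post players" idea described above.
# All names and numbers are hypothetical.

def post_scoring_share(roster, post_positions=("PF", "C")):
    """Fraction of a team's season points scored by post players."""
    total = sum(points for _name, _pos, points in roster)
    post = sum(points for _name, pos, points in roster if pos in post_positions)
    return post / total if total else 0.0

hypothetical_roster = [
    ("Guard A", "PG", 450),
    ("Guard B", "SG", 400),
    ("Wing C",  "SF", 350),
    ("Big D",   "PF", 380),
    ("Big E",   "C",  320),
]

print(f"Post scoring share: {post_scoring_share(hypothetical_roster):.1%}")  # ~36.8%
```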

I'm pretty sure the QC charts above were an eye-opener for the tournament, which is a little more than two months away. While it is still too early to confirm the real reasons for the weakness so far this season, the two above (and any extensions of those two, e.g., more 3s and fewer 2s) are really good starting points. Nonetheless, I hope this article illuminated your crystal ball for 2018. As always, thanks for reading my work, and thanks for a wonderful two years of bracket picking. I hope to see you again in two weeks with the final part of the series on upsets.

Reference

*For more insight and access to the Sagarin ratings, click here.

There is a difference in methodology between the KenPom ratings and the Sagarin ratings, and it has a noticeable impact on the differences pointed out in the article. The KenPom methodology, to my knowledge, still uses Dean Oliver's Four Factors approach to basketball analysis, calculating those values into an offensive component and a defensive component that, when combined into a rating value (AEM), can be used to produce an expected margin of victory for a game. The Sagarin methodology, to my knowledge, uses a points-based approach that takes margin of victory (with no accounting for pace) as its primary input, standardizes it across all teams, and produces a rating value that can likewise be used to produce an expected margin of victory for a game. In other words, they both aim to predict expected margin of victory; they just arrive at the same goal differently. This is why the KenPom rating system runs from -XX.XX to +XX.XX (the standardization is in the scale), yet the Sagarin rating system runs from a base value of YY.YY up to XX.XX (the standardization is in the calculation).
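To make that concrete, here is a toy illustration of how each style of rating could be turned into an expected margin of victory. The formulas are simplified to match the descriptions above (no home-court, schedule, or other adjustments), and the numbers are placeholders rather than actual ratings.

```python
# Simplified, assumption-laden illustrations of the two approaches described above.

def kenpom_style_margin(aem_a, aem_b, expected_pace):
    """Efficiency-based: AEM is expressed per 100 possessions, so the rating
    difference is scaled by the expected tempo of the game."""
    return (aem_a - aem_b) * expected_pace / 100.0

def sagarin_style_margin(rating_a, rating_b):
    """Points-based: ratings are already on a points scale, so the rating
    difference is itself the expected margin."""
    return rating_a - rating_b

print(kenpom_style_margin(25.0, 15.0, 68.0))  # 10-point AEM gap at 68 possessions -> 6.8
print(sagarin_style_margin(92.0, 85.0))       # 7-point rating gap -> 7.0
```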
