While most of the sporting world is moving on from American football (only the Super Bowl remains), the college basketball world is starting its push to the finish line. At this point in the college basketball season, most teams (referring to power conferences) have four weeks of regular-season conference play and one week of conference tournament play remaining. Last week concluded the first game of most rivalry pairings, and to my surprise, it didn't damage the quality curve as much as I thought it would. I had already anticipated that most of the high-seed teams would lose at least one game this week, and only IAST lost to a team outside the QC (yes, FLA, VT, and OKST are in the QC if you are thinking about the losses by TENN, UVA, and TCU). While this fact is good for maintaining quality, it is not good for creating separation in quality, which is something that usually manifests in saner tournaments. Without further monologuing, let's see what the data shows.
Current State of the QC
The first place to start our analysis is a comparison of the Feb QC with the Jan QC.
As you can see, not much has changed. The QC is essentially "maintaining" quality. Compared to January, the QC is slightly higher at the front of the curve (spots #1-#14), approximately similar in the front-middle of the curve (spots #15-#26), slightly weaker at the back-middle of the curve (spots #27-#37), and approximately similar at the end of the curve (spots #38-#50). If we look back at the Jan QC article, the range from spots #5-#36 had only five teams playing above the mid-point of the range from Nov-Jan (#8, #13, #24-#26), while all other teams in this range had quality at the lower-bound of the range. Coincidentally, two of those points -- #13 and #26 -- mark the boundary points for one of the groupings -- the front-middle grouping -- in this month's QC. I find coincidences like this intriguing: these five spots were playing at or above 50% of their Nov-Jan range, but now two of them (#13 and #26) demarcate the grouping that is playing approximately in line with its January counterparts. Before we make any other judgements, the next step is to look at the range of QC-values from Jan to Feb.
Since there are several things going on here, I'll describe the chart first, then address the questions raised from the previous graph, and finally propose new questions about the QC.
- Spots #1-#8: Most are at the minimum values for the month. This simply means that most of these eight teams were playing better at some point between the Jan and Feb QCs than they are playing now. Quality at these spots declined as we approached this time-point.
- Spots #9-#14: Most are playing at the middle of the range (neither at their best nor at their worst).
- Spots #15-#45: Most are playing at the maximum values for the time range. With the exception of the teams at spots #20 and #21, these teams are closer to the maximum value than to the mid-point of the range. I'll talk more about this later.
- Spots #46-#50: Most are playing at the mid-point of the range (same as the second group above). A small sketch of this range-position idea follows the list.
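To make the minimum / mid-point / maximum language above concrete, here is a minimal Python sketch of how a spot's position within its Jan-to-Feb range could be classified. The quality values, the data layout, and the 25%/75% cut-offs are my own illustrative assumptions, not the actual QC data or methodology behind these charts.

```python
# Hypothetical sketch: classify where each QC spot's current quality value sits
# within its min/max range for the window. All numbers below are placeholders.

def range_position(current: float, low: float, high: float) -> float:
    """0.0 = at the period minimum, 0.5 = mid-point, 1.0 = at the maximum."""
    if high == low:                      # no movement over the window
        return 0.5
    return (current - low) / (high - low)

def classify(pos: float) -> str:
    if pos <= 0.25:                      # cut-offs are an assumption, not the article's
        return "near minimum"
    if pos >= 0.75:
        return "near maximum"
    return "near mid-point"

# (spot, current quality, period low, period high) -- illustrative values only
spots = [(1, 88.1, 87.9, 91.0), (12, 84.5, 82.0, 87.0), (30, 83.4, 80.1, 83.6)]
for spot, cur, low, high in spots:
    pos = range_position(cur, low, high)
    print(f"Spot #{spot}: {classify(pos)} ({pos:.2f} of range)")
```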
Now let's answer the previous section's question and ask a few more.
- If the teams at spots #15-#26 are in line with their Jan counterparts, yet the graph above shows they are playing at their peak for the time period, it simply means they declined in quality and then recovered to that level over the course of the month. We'll need a little more information from the third chart before we can ascertain the significance of the move.
- The teams at the front of the QC are performing at the bottom of the range (spots #1-#8) or at the mid-point of the range (spots #9-#14), but they are still above their Jan counterparts. In simple terms, the teams at these spots improved significantly over the course of the month and then gave most or all of it back. In combination with the first point, it seems there was quality separation at the beginning of the time period followed by quality convergence (a.k.a. trending to parity) at the end of the time period.
- In the Jan QC, I identified an attribute called Inflection Points, or locations on the QC where quality separation accelerates. In that article, the Inflection Points on the Jan QC were at #1, #4, #8, #13, and #26. From the two charts, we can see the connection to the inflection points. Spot #5 in the Feb QC is below its Jan QC counterpart, but this should be no surprise since the quality decline accelerates after the #4 spot in Jan. Spot #8 marks the boundary of the first and second groupings in the second chart. Spots #13 and #26 are approximately the boundaries of the grouping in the first chart. These sharp breaks in quality are just too coincidentally connected to ignore, which is why I will examine them later in the article.
Let's look at the final piece of the QC puzzle by comparing the two ranges of both time periods (the Jan QC and the Feb QC) to see the big picture of quality changes.
The range of the Jan QC is in light green and the range of the Feb QC is in pink.
- From the #1-#5 spots, the two ranges are approximately in sync with one another. This concerns me a little bit because these teams haven't broken out above their season-long range. Even worse, we know these teams are currently performing at the lower-bound of this range. We need to see the opposite of this by the final QC article for any sign of quality separation at the top and its corresponding implication of tournament sanity.
- From the #6-#33 spots, the upper-bound of the Jan QC far exceeds the upper-bound of the Feb QC. This implies that the quality ceiling (the maximum potential quality) of the teams in these spots has fallen. They were playing better in Nov and Dec than in Jan. In a way, this is indicative of quality separation, but only because these teams are deteriorating in quality, not because the top teams are improving. At these same spots, the lower-bound of the Jan QC also exceeds the lower-bound of the Feb QC. This implies that the quality floor (the minimum potential quality) of the teams in these spots has fallen as well; in other words, these teams at their Jan lows took a step backwards from their Nov and Dec lows.
- From the #7-#15 spots, the Jan lower-bound traced the Feb upper-bound.
- From the #16-#28 spots, the Jan lower-bound traced the mid-point of the Feb range.
- From the #29-#33 spots, the Jan lower-bound was slightly above the Feb lower-bound.
- These three groupings show that the worst quality values in Nov and Dec were better than most quality values in the whole month of Jan.
- From the #34-#50 spots, the upper-bound of the Jan QC still significantly exceeds the upper-bound of the Feb QC, but the lower-bounds of both QCs roughly approximate each other. This simply implies the quality ceiling of the teams in these spots has fallen. In other words, their best is more limited now than earlier in the season. This is good, as these teams would be 9- thru 12-seeds. (A spot-by-spot sketch of this ceiling/floor comparison follows this list.)
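For readers who want the ceiling/floor comparison spelled out mechanically, here is a small sketch of how the Jan and Feb ranges could be compared spot by spot. The range values are made-up placeholders; only the "did the bound fall?" logic mirrors the reading above.

```python
# Hypothetical sketch: compare the Jan QC range with the Feb QC range at each
# spot and flag where the quality ceiling (upper-bound) and/or quality floor
# (lower-bound) fell. The ranges below are placeholders, not the article's data.

jan_range = {6: (81.0, 90.5), 20: (79.0, 88.0), 40: (74.0, 84.0)}  # spot: (low, high)
feb_range = {6: (79.5, 87.0), 20: (77.5, 85.5), 40: (74.2, 80.5)}

for spot in sorted(jan_range):
    jan_low, jan_high = jan_range[spot]
    feb_low, feb_high = feb_range[spot]
    ceiling = "fell" if feb_high < jan_high else "held or rose"
    floor = "fell" if feb_low < jan_low else "held or rose"
    print(f"Spot #{spot}: ceiling {ceiling} ({jan_high} -> {feb_high}), "
          f"floor {floor} ({jan_low} -> {feb_low})")
```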
From the three charts, we can finally answer our persistent question regarding the #13-#26 spots. To recap the findings: these teams are at the same quality now as they were at the Jan QC, they are playing at their best quality over the course of the month of Jan, and the lows of the Jan QC were better than the lows of the Feb QC. From these three facts, it seems more likely that the "true" quality range of these teams sits closer to its lower end than its upper end, which would imply a little more quality separation over the next month. Since the #13-#26 spots represent likely 4- to 6-seeds, our giant-killers may be weaker by tournament time (keep in mind, this assumes that teams are seeded according to advanced-metrics ratings, which is not the case). First, we need to verify that this happens (with the Final QC Analysis during BCW). Second, we need to verify that these teams receive true quality seeds (they actually get seeded appropriately: 4- to 6-seeds). Third, we need to see the teams in the front of the curve improve over the next five weeks, which would strengthen the probability of quality separation. If these three things happen, our predictive situation becomes more manageable.
Historical Comparisons of the Feb QC
Let's start with the Feb QC compared to its historical counterparts at the same time in the season.
We know from the Jan QC article that 2018 and 2022 were the best approximations for the 2023 QC. If you set aside the #1-#15 spots, whose quality sits below all of the other curves, you can say that the 2023 QC almost mirrors the 2018 QC. The quality of teams at the top is currently below all other years at the time of their Feb QCs. As an important reminder, 2018's QC improved from the Feb QC to the Final QC, and we can see from the previous sections that this also needs to happen for the 2023 QC. We also know that 2022's QC meandered for the remainder of the season, so any improvement in 2023's QC at the top will put it more in line with that year as well. I made these two points in the Jan QC article, and I see nothing from the four charts above to deviate from them.
- Both years produced an F4 of 1-1-3-11 (2018) and 1-2-2-8 (2022), an M-o-M rating of 20.30% (2018) and 21.65% (2022), and round-by-round upset counts of 5-5-3-0-0-0 (2018) and 6-5-3-0-1-0 (2022).
- I would expect more high seeds to lose in the 2023 R64 than in 2022. In 2022, there were six upsets in the R64: one 2-seed, two 5-seeds, and three 6-seeds. This could mean either more than six upsets in the 2023 R64, or six again but with better seeds losing (more 3- and 4-seeds taking the place of 5- and 6-seeds).
Inflection Points
I stated earlier that I would talk about these, so let's tidy up the Feb QC by keeping this promise.
As I stated above, I think the last few weeks up to this point of the season have been trending toward parity. This is bad for tournament sanity (and tournament predictability as a corollary). The inflection points seem to spell this out as well.
- First of all, there are fewer inflection points in the front 90 spots of the Feb QC (five total) than in the Jan QC (seven total).
- Second of all, the spikes are smaller in magnitude. In the Jan QC, there were two spikes greater than two quality points and three more in the +1 to +1.5 quality-point range. In the Feb QC, there are two in the +1.5 to +2 quality-point range and one in the +1 to +1.5 range.
- Inflection points are sharp spikes in quality, so if quality separation were occurring, we should see both more of them and larger ones. Both quantity and magnitude went in the opposite direction, which demonstrates quality convergence. (A small sketch of how such spikes could be flagged follows this list.)
- The spikes in quality near the back-end of the Top 90 also have me a little worried. In the Jan QC, these occurred at the #65, #88 and #90 spots. In the Feb QC, they occur at the #79, #90 and #94 spots. Because they now occur further down the curve, more lower-quality teams on the QC are closer in quality-value to the top teams than before. Teams located at spots up to and including the inflection point should be probabilistically more likely to pull an upset. It just so happens that the three teams at these spots are Charleston (CHRL), Santa Clara (SCLR), and UC Irvine (UCI). Of these three, I think CHRL has the highest odds to make the tourney, but I would get familiar with all teams up to these inflection points, especially if they make the tournament and get improperly seeded. 2022 SPTR was spot #118, and in my opinion, that was way too far down the curve to receive a 15-seed (almost as foolish as giving a 15-seed to a team from Conference USA, like in 2016).
- As for the clusters (orange arrows), I mentioned them in the Jan QC as areas that needed to be tracked. The first cluster has moved toward the front of the curve. In the Jan QC, it was located in the range of spots #119-#146. In the Feb QC, it is located in the range of spots #102-#129. If you combine this with the information in the previous bullet point, there are significant quality spikes at #79, #90, and #94, with smaller spikes at #102, #111, #123, and #129. This much quality deterioration over a range of fifty spots leads me to believe that no upsets will come from teams after spot #129.
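Since I keep leaning on inflection points, here is a minimal sketch of one way they could be flagged programmatically: walk down the curve and mark any spot where the drop from the previous spot exceeds a threshold. The toy curve, the one-point threshold, and the choice of which spot gets the label are all my own assumptions rather than the method actually used to build these charts.

```python
# Hypothetical sketch: flag inflection points as spots where quality drops by at
# least `threshold` from the previous spot. Curve values are illustrative only.

from typing import List, Tuple

def inflection_points(curve: List[float], threshold: float = 1.0) -> List[Tuple[int, float]]:
    """Return (spot, drop) pairs; spots are 1-indexed like the QC.
    Labelling the spot *after* the drop is an assumption, not the article's convention."""
    points = []
    for i in range(1, len(curve)):
        drop = curve[i - 1] - curve[i]
        if drop >= threshold:
            points.append((i + 1, round(drop, 2)))
    return points

# Toy curve: a gentle decline with two sharp breaks.
toy_curve = [90.0, 89.6, 89.3, 87.5, 87.3, 87.1, 85.4, 85.2, 85.1]
print(inflection_points(toy_curve))   # -> [(4, 1.8), (7, 1.7)]
```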
We've got a lot to follow (and hope for) over the next five weeks, so stay tuned. As always, thanks for reading my work, and I should have another article up in a week.