Nov 1, 2018

Welcome to the 2018-2019 College Basketball Season

Important Dates:
Selection Sunday is March 17, 2019

WELCOME BACK PPB READERS!!! It's nearing the start of the 2018-2019 college basketball season, so we're going to start this new season off with the usual assessment of my predictions made in the previous season. Sit back, relax, and I'll tell you how well or how badly I did in the 2017-2018 season (unless you already spent your off-season doing it yourself). The predictions will be graded in chronological order of when they were made, and each individual prediction will be identified by italics and red highlight. After the grades, the PPB schedule for the 2018-19 CBB season will follow.



Predictions in the January QC Analysis

I will actually compare both 12-30 and 12-31 to 2017's January QC, but I actually think the 12-31 curve will look more like 2018's Final QC than 12-30 will.

This prediction is somewhere between 33% and 50% correct. If the 2018 Final QC is divided into thirds (1-17, 18-33, and 34-50), the first group is far closer to the 12-30 curve, the final group is closer to the 12-31 curve, and the middle group is a dead-even split between the 12-30 and 12-31 curves. So it honestly depends on how much credit you want to bestow upon me: 33% for the one grouping I got correct with no bonus for the tied group, or 50% for the third I got right plus half credit for the third I split (33 + 17). I probably shouldn't waste any time on this prediction since I'm more concerned with predicting brackets than predicting curves, but it was a prediction and it must be evaluated on its merits. Prediction Grade: B- (1/3 to 1/2 correct, but a waste of time to predict in the first place).

Consistent scoring inside has been quintessential in deep F4 runs, and the lack thereof has been quintessential in lower-seeded Final Fours. With the lack thereof in 2018, we might see Final Fours like those of 2014, 2013, 2011, and 2010.

To be honest, I'm not sure if this quote was intended to be a prediction for 2018 or a restatement of patterns and trends. It has elements of both, which makes me uncertain of its nature, but it's also littered with flaws in logic that I want to discuss and try to avoid this season.
  • First, in the paragraph from which this quote was derived, those four years were grouped together because of their lack of big-men scoring balance. However, they really had nothing in common in terms of F4 results. (Proof: [1,7,2,8], [1,9,4,4], [4,3,11,8], and [5,5,2,1], in order as listed in the quote.) As you can see, the AVs for those four years are 18, 18, 26, and 13, and they are as spread-out as you can get (with 2010 looking like an outlier at 13); a short sketch of the AV arithmetic appears after this list. 2018 produced a 1,1,3,11 F4, which is only four seeds from 2017's F4 of 1,1,3,7, and the whole premise of the article was the difference between 2017 and 2018. Among the four-year grouping, the E8 and R32 AVs are also spread-out, with only the S16 AVs being the most similar (79, 81, 80, 80). However, 2018 produced a S16 AV of 85, which is well out of that range even though 2018 shared the "lack of big-men scoring balance" attribute with those four years. A better way to view these four years is by M-o-M ratings (21.35%, 20.75%, 19.85%, and 17.14%, in order as listed in the quote), with 2010, in my opinion, potentially being an outlier again. As a comparison, 2018 produced a M-o-M rating of 20.30%, which is definitely in line with the group, especially when excluding the potential outlier. In short, there was no logical reason to connect 2018's lack of big-men scoring balance with the F4 AVs of these four years. Since all four years produced wild tournament results on the whole (which was the initial claim of the paragraph), it would have been more logical to target the M-o-M rating (which measures tournament craziness) rather than AVs (which measure structural and match-up characteristics of the tournament).
  • Second, this statement was made in January with no knowledge of how the inside-scoring based teams would be seeded in March, if seeded at all. In the January article, 14 teams were listed and grouped into four different categories based on the characteristics and roles of their big men. Of the inexperienced-talent big men, KU, DUKE, ARI, and UK received seeds of 1, 2, 4, and 5. Of the system-oriented big men, NOVA, XAV, UCLA, and DAME received seeds of 1, 1, 11 (play-in), and NS. Of the non-scoring contributing big men, PUR, GONZ, LOU and MINN received seeds of 2, 4, NS, and NS (and for the historical record, both of the seeded teams from this group suffered injuries to their big men during the tourney). Of the two teams that fit our desired mold, MIST and USC received seeds of 3 and NS (with MIST being seeded in the same region as KU and DUKE from above). As I originally stated, these seeds were not available in January, but if they had been, I would have definitely rephrased the statement in question. It would have been hard for 2018 to replicate the F4 AVs of those four years when it didn't have low-seeded teams with inside-scoring big men. Out of these fourteen teams, four weren't seeded, and of the remaining ten, only one received lower than a 5-seed, and that team lost in the play-in game. 2018 couldn't produce deep tourney-running 8- to 11-seeds from this list. If this was a prediction, a more-informed prediction could have been made along the same lines by using the 2018 Stat Sheet, which gives a breakdown of all tournament teams by guard%-scoring and post%-scoring.
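
For anyone who wants to verify the AV numbers above, here is a minimal sketch (Python) of the arithmetic as I read it from the proof in the first bullet: the AV for a round is simply the sum of the seeds of the teams reaching that round. The seed lists are the ones quoted above; everything else is bookkeeping.

    # AV for a round = sum of the seeds of the teams reaching that round.
    # F4 seed lists below are the ones quoted in the "Proof" above,
    # in the order listed in the quote (2014, 2013, 2011, 2010), plus 2018.
    f4_seeds = {
        2014: [1, 7, 2, 8],
        2013: [1, 9, 4, 4],
        2011: [4, 3, 11, 8],
        2010: [5, 5, 2, 1],
        2018: [1, 1, 3, 11],
    }

    for year, seeds in f4_seeds.items():
        print(year, sum(seeds))   # 18, 18, 26, 13, and 16 for 2018
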
Prediction Grade: N/A (for many reasons)
Predictions in the March QC Analysis

If 2017 produced 4 upsets in the R64 with all of that strength, a minimum of 5 upsets seems doable in 2018 with all of this weakness. As always, a lot will be determined by seedings on Selection Sunday.

This prediction seems like a gimme. Considering the historical range of R64 upsets is from 1 to 8, predicting at least five R64 upsets in a year that showed all the signs of being crazy is not exactly going out on a limb. As it turned out, though, the final count for the R64 was exactly five, which was the lower bound set by the prediction, and one of those five upsets was a #16 over #1. If the historical record of the 1v16 match-up had held to form, this prediction would have been wrong. Prediction Grade: A

The Madness of Metrics and Match-ups: Conclusions Section (Total)

Keep in mind, the last two years, the process of weighting Conference affiliation resulted in upsets of members of favored conferences. If the Quad System shows this partiality, look for early-round upsets of teams from the favored conference (SEC). The next most favored conference seems to be the B12, followed by the ACC.

Looking at the end results, this prediction looks fairly accurate. Of the Power-5 conferences (for 2018, this means the Power-6 minus the PAthetiC-12), the SEC did the worst in terms of win-percentage (50% of all games played). Of its eight bids, two did not win a single game, and only two advanced to the S16 (with both losing that game). The second-worst win-percentage belonged to the ACC (third-most favored by the Quad System) at 57%. The ACC lost four of its nine teams in their first game (one of which was a 1-seed) and lost another team before the weekend finished (a total of 55% of its bids not making the S16). The B12 actually did respectably, even though it was the second-most favored conference by the Quad System. It had a win-percentage of 63%, losing three teams in their first game, but three of the remaining four made the E8 and one made the F4. The BEC probably did worst of all, but its win-percentage (64%) gets inflated by a title run. It received six total bids, two of which lost their first game, with three more losing before the S16. Excluding NOVA's six wins, the BEC went 3-5 for a paltry win-percentage of 37.5%. This also makes it the third year in a row in which the eventual National Champion was the only representative from its conference in the S16. Prediction Grade: B/B+ (can't be in the A-range because of how well the B12 did).
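
For transparency, here is a minimal sketch (Python) of the "exclude the champion" arithmetic in the paragraph above; the win-loss records are the ones implied by the paragraph itself, not re-derived from box scores.

    # Win-percentage with and without Villanova's title run, using the records
    # implied by the paragraph above (BEC overall 9-5; NOVA 6-0).
    def win_pct(wins, losses):
        return wins / (wins + losses)

    bec_total = (9, 5)
    nova_run = (6, 0)
    without_nova = (bec_total[0] - nova_run[0], bec_total[1] - nova_run[1])

    print(round(win_pct(*bec_total) * 100, 1))     # ~64.3
    print(round(win_pct(*without_nova) * 100, 1))  # 37.5
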

In the "To my readers" Section

Pay close attention to site location and participating teams.

I honestly couldn't believe how bad of a job the committee did in 2018. I referred to some peculiarities for 2017, such as #7 SCAR getting a R64/R32 game in its own backyard versus #10 MARQ and #2 DUKE, and I suggested these peculiarities seemed even more present in 2018. I should have gone into more detail.
  • For example, CIN and XAV should never have played together in Nashville, TN. I concede that Nashville is closer than Pittsburgh to either of their campuses (by 18 miles, for the record), but Ohio (across which the CIN and XAV fan bases are spread) as a whole is "geographically closer" or "demographically closer" to Pittsburgh than to Nashville. Even worse, Detroit, MI is closer to Cincinnati (by 3 miles straight-line and 11 miles driving distance) than either Pittsburgh or Nashville, so it really makes me doubt the Committee's site-selection criteria. If the criterion is ticket sales and you want MICH and MIST playing in Detroit, MI as 3-seeds even though #1 XAV, #2 CIN and #2 PUR can all claim seed-based priority over MICH and MIST, I'm fine with that criterion. I just want the criteria to make logical sense, and at this point, they don't (which, in my mind, is a good recipe for two rival teams at the same non-favorable site to be upset on the second day of games).
  • Along these same lines, how does #10 BUT get the same trip to Detroit, MI as #2 PUR (two teams from Indiana in the same pod), while #7 ARK (BUT's R64 opponent) has to mirror-image the Oregon Trail from Arkansas to get to Detroit?
  • How does #4 ARI get shifted all the way up to Boise, ID when San Diego, CA is right next door? #4 GONZ got a backyard game in Boise. #4 WICH and #4 CLEM are both closer to Boise than to San Diego (both by straight-line distance, and WICH by driving distance as well). Why make ARI travel all those extra miles when the site is just as accessible to the other competing teams? Perhaps this was a contributing factor in both WICH's and ARI's first-round exits (in addition to drawing bad match-ups). The site-selection criteria make no sense!!!
  • Another interesting detail is the first-round site for play-in winners. The STBO/UCLA winner had to travel from Dayton, OH to Dallas, TX, whereas the SYR/AZST winner only had to travel from Dayton, OH to Detroit, MI. For a team that plays a 2-3 zone exclusively on defense to get that short a trip between the play-in game and the R64 game, I'm not surprised that SYR pulled off two upsets in such a favorable travel situation. I'm also aware that favorable site locations didn't necessarily lead to two opening-weekend wins (#1 UVA, #2 UNC, #3 MIST), nor did they help smaller schools spring the upset (#12 SDST, #12 NMST, and #14 STFA).
Nonetheless, distance traveled and geographic advantage or disadvantage continue to play a role in match-ups, and it was a fair point to address (despite how little I actually addressed it last season). This brings me to one final rant: I'm sick and tired of Boise being a first-round site. What does that city and its location have to do with college basketball? NOTHING!!! Only if ARI, GONZ, ORE, UCLA, USC, WASH and NEV/UNLV are all vying for 1-4 seeds in a given year, and the Los Angeles, San Diego, Las Vegas, and Sacramento sites are all occupied for the first weekend of games, would I consider Boise an acceptable site. In general, it should never be an option. Prediction Grade: N/A due to insufficient predictions and explanations.
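
Since several of the complaints above lean on "straight-line" distance, here is a rough sketch (Python) of how those comparisons can be checked with the haversine formula. The coordinates are approximate city centers I looked up myself, not an official NCAA data source, so treat the outputs as ballpark figures.

    import math

    # Great-circle ("straight-line") distance in miles via the haversine formula.
    def haversine_miles(lat1, lon1, lat2, lon2):
        r = 3958.8  # mean Earth radius in miles
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Approximate city-center coordinates (my own lookups, for illustration only).
    cincinnati = (39.10, -84.51)
    sites = {
        "Nashville": (36.16, -86.78),
        "Pittsburgh": (40.44, -80.00),
        "Detroit": (42.33, -83.05),
    }

    for name, (lat, lon) in sites.items():
        print(name, round(haversine_miles(*cincinnati, lat, lon)))
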

Predictions in the Final QC Analysis

If any of my predictions count as for-sure "on-the-record" predictions, it would be the ones made in the Final QC Analysis. The last couple of sections were either generic statements that could be construed as predictions, predictions that could be called "gimmes," or advisory statements that lacked constructive detail. The predictions made and soon-to-be graded from this section are the make-it-or-break-it predictions for my blog. Let's start the grading.

This lack of over-seeds and under-seeds should mitigate some of the craziness. I would have instantly called for 13-15 upsets, but if true-seeds are facing-off against one another, we could instead be looking at 11-13 upsets this year.

I'm so glad that I spent the extra time looking into this anomaly. It took me an extra day to release the Final QC because of the tightness between the QC and the SC, but the extra day was worth it. As I said in the article, "I would have set the minimum at 11 and started counting higher, but the tightness of the curve scared me." I was dead-set on the 13-15 range, and the tightness made me revise that margin downward by 2 on both the minimum and the maximum. You will see in a later prediction why the downward revision matters, but the final count was 13 upsets, so either range would have contained the correct amount. Prediction Grade: A+

The strength lies in the 2-seeds, 3-seeds, and 5-seeds with the 4-seeds and the 9-seeds narrowly coming up short of the mark. In a nut-shell, I expect "as a whole seed-line" these three groups to meet seed-expectations. Thus, 2-seeds should make the E8 (twelve total wins among all 2-seeded teams), 3-seeds should make the S16 (eight total wins among all 3-seeded teams), and 5-seeds should make the R32 (four total wins among all 5-seeded teams). I don't think I am going out on a limb and saying this: I think the 5-seed group should actually surpass seed expectations (more than four total wins in the tournament).

Let's see the results.
  • 2-seeds: 7 total wins vs 12 predicted wins. FAIL. I was putting a lot of weight in my own bracket on a DUKE title run and a PUR F4 run. I didn't like UNC's or CIN's path.
  • 3-seeds: 10 total wins vs 8 predicted wins. PASS. I had a S16 run by all four 3-seeds in my own bracket, with MICH winning one more. In the article, shortly after this prediction, I said you could err on the side of caution and pick six wins. This was a mistake!
  • 5-seeds: 7 total wins vs 4 predicted wins. PASS & PASS. In my own bracket, I had a F4 run by UK, S16 appearances by CLEM and WVU, and a R32 appearance by OHST (a shocking 9 total wins by 5-seeds). In the article, shortly after this prediction, I said I was taking three 5-seeds and one 12-seed, but in my own bracket, I did not. This was a double mistake: it was wrong as a prediction, and it was wrong in misleading my readers.
Prediction Grade: B- (I was most confident in my 2-seed predictions and shouldn't have been, and I tried to play it safe with my 3-seed and 5-seed predictions and shouldn't have, because those predictions were better than my 2-seed predictions.)
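
As a quick recap of the seed-expectation arithmetic behind this grade, here is a minimal sketch (Python): a seed line "meets expectations" when all four of its teams reach the round their seed projects, so the expected win total is four times the wins needed to reach that round. The actual win totals are the ones tallied in the bullets above.

    # Seed-line expectations: 2-seeds -> E8 (3 wins each), 3-seeds -> S16 (2 wins each),
    # 5-seeds -> R32 (1 win each). Four teams per seed line.
    expected_wins_per_team = {2: 3, 3: 2, 5: 1}
    actual_totals_2018 = {2: 7, 3: 10, 5: 7}   # from the bullets above

    for seed, per_team in expected_wins_per_team.items():
        expected_total = 4 * per_team
        actual = actual_totals_2018[seed]
        verdict = "PASS" if actual >= expected_total else "FAIL"
        print(f"{seed}-seeds: expected {expected_total}, actual {actual} -> {verdict}")
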

For the record, I like 5 - 4 - 2 - 1 - 0 - 0 for 12 total upsets.

The actual result was 5-5-3-0-0-0 for 13 total upsets. The "gimme" prediction called for a minimum of 5 upsets in the R64, but the tightness of the curve pointed to appropriately seeded teams, and this is where I trimmed my upset count to account for that. Originally, I would have said 7 in the R64 with 4-2-1-0-0 accounting for the final five rounds, but the tightness of the SC-QC overlay scared me into 5-4-2-1-0-0. All in all, my prediction was one short of the correct total, one short in both the R32 and S16, and one extra in the E8. In bracket contests, this roadmap would have cost you 14 points (0+2+4+8+0+0), assuming you picked every team correctly except for the roadmap-based picks (curve-fitting). Prediction Grade: B+
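
For anyone wondering where the 14 points come from, here is a minimal sketch (Python). It assumes the standard 1-2-4-8-16-32 points-per-correct-pick scoring that most bracket contests use, and it treats each missed upset count as exactly one wrong pick at that round's point value.

    # Cost of the roadmap versus the actual upset counts under standard
    # 1-2-4-8-16-32 per-pick scoring (an assumption on my part, but typical).
    points_per_round = [1, 2, 4, 8, 16, 32]   # R64, R32, S16, E8, F4, NC
    roadmap = [5, 4, 2, 1, 0, 0]
    actual  = [5, 5, 3, 0, 0, 0]

    cost = sum(abs(p - a) * pts
               for p, a, pts in zip(roadmap, actual, points_per_round))
    print(cost)   # 0 + 2 + 4 + 8 + 0 + 0 = 14
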

Final QC Analysis: For the record, I like an AM of 188 - 74 - 24 - 16 - 7 - ???

Above, I admitted to picking a DUKE title run, so my final value was 2, and you should be able to figure out a few more from other admissions above. The actual result was 193-85-39-16-4-1. I nailed the F4 AV, and the R32 AV was only off by five. My preference for 2-seeds over 1-seeds adds one point to each of the NS (4) and NC (1) AVs. Not to mention, I even stated in the paragraph after the prediction that "I seriously doubt a 1-seed wins the title" (definitely wrong!). My personal bracket had 13 (2-2-4-5) in the F4 AV, so you can tell that I've moved on from curve-fitting in my personal bracket after the heartbreak of my 2017 curve-fitting experiences. The S16 and E8 AVs were short by 11 and 15 respectively (too far short for my own liking). This is most likely due to two things. The first is process. I develop these AV predictions by matching QCs and SCs from previous tournaments; 2018's was predicted based on 2003, 2006, and 2010. They were the three best available choices, but they are also from a different era of college basketball (teaser), so they need to be re-scaled to the current era. The second is statistical estimation. Not only did I try to fit 2018 AVs to the Aggregation Models of the three years, but I also tried to linearly best-fit them according to the number of upsets (a rough sketch of that step appears below). Since I was one short in the upset counts for both rounds, that most likely pulled down the 2018 AVs for those two rounds to account for one fewer predicted upset in each. It could also explain why the predicted F4 AV was on target even though my own personal bracket had 13 (it was pulled upward by the predicted upset that didn't actually happen). Prediction Grade: B+ (probably even A- since I'm a really tough grader).
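
Mechanically, the "linearly best-fit according to the number of upsets" step reads something like the sketch below. The (upset count, AV) pairs here are hypothetical placeholders purely to show the mechanics; they are not the actual values behind the 2018 prediction.

    import numpy as np

    # Illustrative only: fit a line through (upset count, round AV) pairs from
    # comparison years, then read off the AV implied by the predicted upset count.
    # These numbers are hypothetical placeholders, not my actual inputs.
    upsets  = np.array([10, 12, 14])   # hypothetical totals for three comparison years
    s16_avs = np.array([70, 78, 88])   # hypothetical S16 AVs for those same years

    slope, intercept = np.polyfit(upsets, s16_avs, 1)
    predicted_upsets = 13              # the roadmap's total from above
    print(round(slope * predicted_upsets + intercept))
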

PPB Grand Prediction for all of 2018

"Parity exists in 2018 and parity translates to an above average number of upsets."

Prediction Grade: A++. I probably said this statement in every article, and if I didn't, I probably should tick the grade down for each article in which I didn't. I knew this fact in November, and it is a big reason my predictions were on target in March. However, there is one thing about this statement that I do not like, and it (along with the other teaser in this article) will be the basis of an article on PPB appearing later this month. With that said, let's see the layout for 2019.

2018-2019 PPB Schedule

Since Selection Sunday falls on the third Sunday of March instead of the second, it shifts the schedule for the whole year by an entire week. Typically, I want to post bi-weekly articles that line up with end-of-month dates throughout the year and end on Selection Sunday. I can make it work with March and February, but the rest of the months don't line up as conveniently. Jan 20 will mark the start of the bi-weekly article schedule, and I will schedule-as-I-write until that date.

Nov 1 until Some Date in Jan: Schedule-as-I-Write. The date of the next article will be stated in the current article. If the schedule becomes more definitive, it will be brought to your attention in the "To My Readers" section of the blog.
Early Jan: January QC Analysis
Feb 3: February QC Analysis
Feb 17: Article - Finish any multi-part series or address any relevant topic before final stretch
Mar 3: March QC Analysis
Mar 10: Article - Probably some new metric or statistical toy I want to introduce/talk about
Mar 17: Selection Sunday
Mar 18/19: Final QC/SC Analysis
Mar 17-20: Bracket Crunch Week content

As we progress closer to the dates, I will have a better idea of the content of the unnamed Feb and Mar articles. You can expect the next article (which will be the promised update to the Upsets in the Making - Part 2 article) around the week of Nov 11, and I will post an exact date in the "To My Readers" section. Until then, thanks for reading my work, and by reading the PPB blog, you're already on the best path to optimal bracket-picking for the 2019 tourney.
