Earlier today, Alex Fanaroff of Duke's daily newspaper The Chronicle posted a wonderful article, "Duke Does Decline, Objectively Speaking." It's a refreshing look at basketball in a way that we love here at the Immaculate Inning: using tempo-free statistics. Fanaroff describes the general issue:
1) Duke fades down the stretch.
2) This is due to the starters playing way too many minutes, causing them to get tired.
Fanaroff went into the article expecting to at least debunk #1 as a myth, but instead found a strong correlation between Duke's efficiency margin (offensive efficiency minus defensive efficiency) and the number of ACC games played: over the last seven years, the data do indeed suggest a Duke Fade. He was unable, however, to show that minutes played by starters had any effect on efficiency margin. Naturally, I was intrigued and wanted to dig further into the data. Here are a number of issues:
1) Year effects do matter. In his accompanying blog post, Fanaroff says, "Predictably, most seasons failed to produce robust trends due to the limited sample size, though all seasons from 2004-2009 demonstrated a downward trend." This statement is misleading on two counts. First, here is each year plotted individually, with its linear regression line:
True, there is a downward trend in most years, and Fanaroff does mention that Duke's trend is technically upward this season. But in most seasons that line is not significant: there is not enough evidence to say the slope differs from zero.
However, the null hypothesis can be rejected in 2004 (r-squared = 0.2177, p = 0.04406) and 2008 (r-squared = 0.2993, p = 0.01879). Here's my question: if a declining efficiency margin during ACC play is a bad thing, why did the 2004 team make the Final Four despite having one of the few significant in-year declines?
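For the curious, here's a rough sketch of what those per-season fits look like in R. It assumes a data frame acc with one row per ACC game and columns Season, Game, and Delta.100 (the efficiency margin); treat the layout and names as illustrative rather than the exact spreadsheet I used.

# Fit a separate regression of efficiency margin on ACC game number for
# each season, pulling out the slope, r-squared, and p-value for Game.
fade_by_season <- function(acc) {
  do.call(rbind, lapply(split(acc, acc$Season), function(d) {
    fit <- lm(Delta.100 ~ Game, data = d)
    s   <- summary(fit)
    data.frame(Season    = d$Season[1],
               Slope     = coef(fit)["Game"],
               R.squared = s$r.squared,
               P.value   = s$coefficients["Game", "Pr(>|t|)"])
  }))
}

fade_by_season(acc)   # one row per year; 2004 and 2008 are the ones that clear p < 0.05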
2) Opponents matter. As many folks over at the fine institution of DBR have pointed out, Duke always plays Carolina as its last ACC regular season game. Then, while the early rounds of the ACC tourney may provide a brief dip in competition, Duke almost always advances to the end of the tournament, and they naturally find better teams there. We can look at the effect of Opponent Rank (determined by Pomeroy) on Efficiency:
The scatterplot doesn't look like much, but the trend line is there, and it is significant: Duke's opponents get tougher as the season goes on. And Duke does play much better against inferior competition:
Since Duke's opponents are harder later in the season, and Duke has lower efficiency margins against better opponents, how can we take this into account? One way is with an analysis of variance (ANOVA), which lets us partition the variance among several effects. Basically, we have two main effects on efficiency margin: ACC Game # and Opponent Rank. Once we account for the variance explained by the strength of the opponent, is the game-number effect still significant? For the stat geeks out there, here's the ANOVA table:
Anova Table (Type III tests)

Response: Delta.100
             Sum Sq  Df F value    Pr(>F)
(Intercept) 0.12776   1  6.0593  0.015267 *
Opp.Rank    0.48675   1 23.0858 4.563e-06 ***
Game        0.19808   1  9.3947  0.002693 **
Residuals   2.50903 119
To summarize, there is still an overall significant downward trend in efficiency margin as the season progresses, although the significance is reduced.
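That table is the output of the car package's Anova() function; something along these lines reproduces it, again assuming the same acc data frame from before with an Opp.Rank column (Pomeroy rank of the opponent) added.

# Model efficiency margin on opponent strength and game number, then ask
# for Type III sums of squares: does Game still matter once Opp.Rank is in?
library(car)

fit <- lm(Delta.100 ~ Opp.Rank + Game, data = acc)
Anova(fit, type = "III")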
3) Home Court Matters? When Ken Pomeroy adjusts for home court, he adds 1.4% to the home team's offensive efficiency and the visiting team's defensive efficiency, and subtracts the same from the home team's defensive efficiency and the visiting team's offensive efficiency. So the difference between Fanaroff's raw data and what Pomeroy would consider "adjusted" is about -2.8 points of efficiency margin for Duke's home games and +2.8 for Duke's away games (1.4 from each end of the floor). What happens to the correlation if we make the "Pomeroy Adjustment"?
The correlation coefficient decreases, as does the significance, but not to the point where the slope becomes non-significant. There is still a Duke Fade when we account for home and away games.
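The adjustment itself is trivial to apply. A sketch, assuming a Site column coded "H"/"A"/"N" in the same acc data frame and a margin kept in points per 100 possessions (scale the 2.8 if your margin is per possession):

# Shift the margin 2.8 points: down for home games, up for road games,
# untouched for neutral sites, then re-fit margin against game number.
acc$Adj.Margin <- acc$Delta.100 +
  ifelse(acc$Site == "H", -2.8,
         ifelse(acc$Site == "A", 2.8, 0))

summary(lm(Adj.Margin ~ Game, data = acc))   # slope shrinks but stays significant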
I must say I come away unimpressed with other explanations for the Duke Fade. Opponents do get tougher but not enough to overcome the efficiency drop. Adjusting for home and away games also doesn't have much of an effect. It is very important to note that nowhere have I suggested a causal agent for the Duke Fade. This is to avoid the common fallacy that correlation implies causation. Clearly, Duke can still have an historic season (2004) despite having one of the few significant in-year Duke Fades.
Instead, I'll take an "I Report, You Decide" kind of approach here. These are the statistical facts, and I'll be happy to attempt more rigorous investigations if they are suggested in the comments.
Wednesday, February 17, 2010
2 comments:
This is fantastic stuff. I linked to it over at the Chronicle Sports Blog. Hopefully I did it justice.
I would love to see the data from the '04 team to this year's team compared to the 3 NC teams of '90, '91 and '01. My personal opinion is that it has nothing to do with minutes, but rather with the personnel on the various teams: the talent level.