Learning from Ryan Howard’s Slump

As most of us know, the Phillies are having a forgettable 2015 season. Their first baseman, Ryan Howard, recently had a “0 for 35” slump. What does that say about Howard’s batting ability this season? We will first describe a traditional “frequentist” calculation, and then a Bayesian simulation approach that more directly addresses the question about Howard’s ability.

A standard way to make sense of an “0 for 35” is to assume a model for Howard’s batting ability and then compute the probability of this batting event if that model is true.
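This kind of tail-probability simulation can be sketched in Python as follows. It is a minimal sketch under the assumptions used below: independent at-bats with a constant hit probability ($p = .25$), 500-at-bat seasons, and 1000 simulated seasons; the function and variable names are my own.

```python
import random

def longest_ofer(n_ab, p, rng):
    """Length of the longest hitless streak in a season of n_ab at-bats,
    where each at-bat is an independent hit with probability p."""
    longest = current = 0
    for _ in range(n_ab):
        if rng.random() < p:          # a hit ends the current streak
            current = 0
        else:                         # an out extends it
            current += 1
            longest = max(longest, current)
    return longest

rng = random.Random(42)
ofers = [longest_ofer(500, 0.25, rng) for _ in range(1000)]

# How unusual is an ofer of 35 or more for a .250 hitter?
tail = sum(o >= 35 for o in ofers) / len(ofers)
```

Plotting a histogram of `ofers` gives a picture like the one described below, and `tail` comes out small, consistent with 35 sitting far in the right tail.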

Let $p$ denote the probability that Howard gets a hit in a single at-bat. If Howard’s ability is $p = 0.25$, say, one can explore the distribution of Howard’s longest “ofer” (hitless streak) in a single season (assuming 500 at-bats). One simulates 500 AB assuming $p = .25$ and finds the longest ofer. Repeating this 1000 times, one obtains the following histogram of ofers. Howard’s actual ofer of 35 is shown as a vertical line. This ofer value is clearly in the right tail of the distribution, which tells us that a “0 for 35” would be very unusual if Howard were indeed a consistent hitter with a constant probability of .250 of getting a hit.

Learning About Howard’s Batting Ability by Bayes’ Rule

Although this calculation is interesting, it really addresses the wrong question. We observe Howard’s 0 for 35: what does that tell us about his true batting average? We describe a simple Bayesian procedure that shows how we can learn about a player’s batting ability from this streak data.

Initial Beliefs about $p$

What do we initially believe about Howard’s true batting average $p$? I’ll pretend I know little about baseball, but someone tells me that batting averages fall between .100 and .400. So I initially assume that $p$ can be any value from .100 to .400 and that each possible value is equally likely.

Updating Beliefs with Data

Suppose one observes a player’s hitting slump. First, consider a “0 for 15” slump during a 500 AB season. What does that tell us about the player’s hitting ability?

We do a so-called “model-data” simulation to answer this question.

• First we simulate a possible value of $p$ from the prior: choose a value at random from the set {.10, .11, …, .40}.
• For this value of $p$ we simulate 500 at-bats and find the longest ofer (hitting slump).

We repeat this simulation a large number of times. To get the updated (posterior) probabilities for the batter’s ability, we keep only the simulated values of $p$ for which the ofer is 15 or longer. Here is a histogram of those values of $p$. This is the posterior density of the true batting ability given the data “0 for 15 or more”. What is interesting is that this density is pretty flat from $p = .10$ to $p = .25$, so we have not learned much about the batter’s true ability from the “0 for 15” slump.
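The two simulation steps above, plus the step that keeps only the matching draws, amount to rejection sampling. Here is a minimal sketch, assuming 500 AB seasons and the flat prior on {.10, .11, …, .40} described earlier; the function names and simulation counts are my choices.

```python
import random

def longest_ofer(n_ab, p, rng):
    """Length of the longest hitless streak in n_ab at-bats with hit probability p."""
    longest = current = 0
    for _ in range(n_ab):
        if rng.random() < p:   # a hit ends the current streak
            current = 0
        else:                  # an out extends it
            current += 1
            longest = max(longest, current)
    return longest

def posterior_sample(threshold, n_sims, rng):
    """Keep the prior draws of p whose simulated 500-AB season contains
    an ofer of at least `threshold` at-bats (rejection sampling)."""
    grid = [round(0.10 + 0.01 * k, 2) for k in range(31)]  # {.10, .11, ..., .40}
    return [p for p in (rng.choice(grid) for _ in range(n_sims))
            if longest_ofer(500, p, rng) >= threshold]

rng = random.Random(1)
post15 = posterior_sample(15, 10000, rng)   # condition: "0 for 15 or more"
post35 = posterior_sample(35, 10000, rng)   # condition: "0 for 35 or more"
```

A histogram of `post15` is close to flat over roughly (.10, .25), while `post35` piles up near the bottom of the prior range, matching the comparison discussed here.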

But Howard had a “0 for 35” slump. We repeat the simulation, this time keeping the values of $p$ whose ofer is 35 or greater. Here we see that we learn much more about a batter’s ability from this long slump: his true batting ability is most likely in the (.10, .15) range.