Introduction – Learning about Situational Effects
In baseball, one is generally interested in learning about player abilities and making predictions of future performance. For particular types of performance, such as getting on base or hitting home runs, this task is relatively easy. We have good measures of performance, like OBA and home run rates, and we know the players who excel in these areas. But other types of performance are harder to pin down, specifically performance in different situations: home versus away games, facing a pitcher who throws with the same arm versus the opposite arm, and clutch versus non-clutch situations. We know about general situational effects; for example, we know that batters tend to hit better against pitchers of the opposite arm. But despite some interesting data at the individual level, it is much harder to say that individual players have special talents to perform well in specific situations. For example, saying that a specific player tends to be streaky means that this player gets his hits in a pattern that differs from players who are not streaky. Do we really believe that players have a special talent for streakiness? Many baseball people would say yes, while statistical folks (like me) would tend to say no. Bayesian multilevel models are helpful for understanding whether the situational effects we observe at the individual level are real or simply a byproduct of normal sampling variation.
In this post, we’ll explore one important situational effect: a player’s tendency to hit better on balls in play when he is ahead in the count. We’ll see some interesting effects when we compare batters’ performance in different types of counts, and this will set the stage for a discussion of multilevel modeling in a follow-up post.
Count Effects for wOBA
Using Statcast data for the 2019 season, let’s focus on the estimated_woba_using_speedangle Statcast variable, which is the estimated value of wOBA given the launch conditions (exit velocity and launch angle) of a ball put into play. A player’s average estimated wOBA is actually a better estimate of hitting ability than his actual wOBA, in the sense that it is a better predictor of his wOBA in the following season.
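To make the computation concrete, here is a minimal sketch of pulling the 2019 Statcast data and computing each batter’s average estimated wOBA on balls in play. The pybaseball package and the exact date range are my assumptions; any source of Statcast data containing the estimated_woba_using_speedangle column would work.

```python
import pandas as pd
from pybaseball import statcast

# Pull pitch-by-pitch Statcast data for the 2019 regular season
# (the date range here is approximate)
sc = statcast(start_dt="2019-03-28", end_dt="2019-09-29")

# Keep only balls in play with a recorded estimated wOBA
bip = sc.dropna(subset=["estimated_woba_using_speedangle"]).copy()

# Each batter's average estimated wOBA and number of balls in play
player_xwoba = (
    bip.groupby("batter")["estimated_woba_using_speedangle"]
       .agg(xwoba="mean", n_bip="size")
       .reset_index()
)
```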
Using this measure, the graph below shows how this “expected wOBA” on balls in play varies by count. Note that the values cluster naturally into three groups: “ahead” counts (2-0, 3-0, 3-1) have wOBA values between 0.42 and 0.48, “behind” counts (0-1, 0-2, 1-2, 2-2) have wOBA values between 0.32 and 0.36, and “neutral” counts (0-0, 1-0, 1-1, 2-1, 3-2) have wOBA values between 0.36 and 0.40.
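For readers who want to reproduce the grouping of counts, here is a small sketch continuing with the hypothetical bip data frame from the sketch above. The Statcast balls and strikes columns give the count on each pitch, and the ahead/behind/neutral labels follow the clusters described in the text.

```python
# Counts grouped as in the text; every other count is treated as neutral
ahead_counts = {"2-0", "3-0", "3-1"}
behind_counts = {"0-1", "0-2", "1-2", "2-2"}

bip["count"] = (
    bip["balls"].astype(int).astype(str) + "-" + bip["strikes"].astype(int).astype(str)
)
bip["count_group"] = bip["count"].map(
    lambda c: "ahead" if c in ahead_counts
    else ("behind" if c in behind_counts else "neutral")
)

# Mean estimated wOBA for each individual count and for each count group
woba_by_count = bip.groupby("count")["estimated_woba_using_speedangle"].mean()
woba_by_group = bip.groupby("count_group")["estimated_woba_using_speedangle"].mean()
```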

Individual Count Effects?
Okay, collectively it is obvious that hitters’ performance on balls in play depends on the count. This is not surprising. But we want to see if this count advantage depends on the hitter. So I collected all of the players in the 2019 season who had at least 200 balls in play. For each player, I computed the mean estimated wOBA on balls in play in each of the behind, neutral, and ahead count situations. For each of the three paired comparisons (ahead vs behind, neutral vs behind, and ahead vs neutral), I plot the wOBA improvement against the player’s overall average wOBA.
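Here is a sketch of that individual-level calculation, continuing with the player_xwoba and bip objects from the earlier sketches: restrict to batters with at least 200 balls in play, compute each batter’s mean estimated wOBA in the behind, neutral, and ahead situations, and form the three paired differences that are plotted below.

```python
# Regulars: batters with at least 200 balls in play in 2019
regulars = player_xwoba.loc[player_xwoba["n_bip"] >= 200, "batter"]

# Mean estimated wOBA by batter and count group (columns: ahead, behind, neutral)
by_group = (
    bip[bip["batter"].isin(regulars)]
    .groupby(["batter", "count_group"])["estimated_woba_using_speedangle"]
    .mean()
    .unstack("count_group")
)

# Overall average estimated wOBA for each regular
by_group["overall"] = player_xwoba.set_index("batter").loc[by_group.index, "xwoba"]

# The three paired improvements that are plotted against the overall value
by_group["ahead_minus_behind"] = by_group["ahead"] - by_group["behind"]
by_group["neutral_minus_behind"] = by_group["neutral"] - by_group["behind"]
by_group["ahead_minus_neutral"] = by_group["ahead"] - by_group["neutral"]
```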

There are some interesting takeaways from these plots:
- The red line corresponds to no improvement in wOBA between the two count situations. In each plot, most of the points fall above the red line, indicating that players tend to do better in the more advantageous count situation.
- But there are still many points that fall below the red line in each plot; many players actually hit worse in the more advantageous situation during the 2019 season. That raises the question: do you believe that some hitters are not able to take advantage of the more favorable count?
- To me, the most interesting takeaway is that there is a positive association in the left two plots, corresponding to the “ahead-behind” and “ahead-neutral” comparisons. This indicates that stronger hitters (those with higher wOBA values) are more likely to take advantage of the favorable situation than weaker hitters, which provides some evidence that hitters actually have varying abilities or talents to take advantage of the count situation. Note that we don’t see this positive association in the “neutral-behind” comparison. (These associations can be quantified with the correlations computed in the sketch after this list.)
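One simple way to quantify the associations described in the last bullet is to correlate each improvement with the batter’s overall estimated wOBA, using the hypothetical by_group data frame constructed in the earlier sketch.

```python
# Correlation of each count-situation improvement with the overall xwOBA
for col in ["ahead_minus_behind", "ahead_minus_neutral", "neutral_minus_behind"]:
    r = by_group["overall"].corr(by_group[col])
    print(f"{col}: correlation with overall xwOBA = {r:.3f}")
```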
What to Do? (To Pool or Not to Pool?)
Okay, we are facing a dilemma. It is pretty obvious that if we pool all of the hitting data together, there are clear count effects. Collectively, batters hit better on balls in play when they are in favorable count situations, followed by neutral and behind situations. Pooling the data allows us to see these general effects, but it hides any differences between individual hitters.
Searching for individual effects, we collected all regular players (those with at least 200 balls in play) and computed individual-level count advantages for each of them. Although we see some general effects, we also see some puzzling results: some hitters actually hit worse in more advantageous count situations. Looking further, we notice that, on average, each of our regular players places 22, 136 and 177 balls in play in the ahead, behind and neutral situations, respectively. So although we see some interesting patterns at the individual level, some of these averages are based on small samples, especially in the ahead-in-the-count situation. So we don’t know whether we can really believe these individual-level results, in the sense that they may not correspond to real effects that reflect baseball talents.
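The sample-size issue is easy to check with the same objects from the earlier sketches; a short sketch:

```python
# Average number of balls in play per regular in each count situation
bip_per_group = (
    bip[bip["batter"].isin(regulars)]
    .groupby(["batter", "count_group"])
    .size()
    .unstack("count_group")
)
print(bip_per_group.mean().round(0))
# In the 2019 data described above, these averages are roughly
# 22 (ahead), 136 (behind) and 177 (neutral)
```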
Preview: Multilevel Modeling
What I have hopefully convinced you of in this brief study is that we can pool the data or instead focus on individual-level effects, and neither strategy is fully satisfactory. Pooling the data ignores effects at the individual player level. But individual-level analyses are also unsatisfactory since we have relatively little data at the individual level (this is most obvious in the ahead-in-the-count situation). It would seem better to consider a compromise that allows one to partially pool the data.
I’ve written several posts that illustrate the use of Bayesian modeling to achieve this partial pooling of data. For example, this post considers the use of a Bayesian multilevel model to estimate players’ hitting probabilities based on data from a few weeks of the season. (I am considering the famous dataset of Efron and Morris that was used to illustrate multilevel modeling.) One constructs a prior distribution for the hitting probabilities in two stages: at the first stage, one assumes that the hitting probabilities follow a common distribution with unknown parameters, and at the second stage, one assigns these unknown parameters their own prior distribution. (Since this prior has two levels, we call the model multilevel.) The Bayesian estimates from this multilevel model shrink or adjust the raw batting averages towards a common value, and these multilevel estimates provide better predictions of future batting performance than the individual batting averages.
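To make the two-stage structure concrete, here is a minimal sketch of such a multilevel model written in PyMC, fit to the 18-player, 45-at-bat batting dataset of Efron and Morris (the hit totals below are the values commonly quoted for that dataset). The beta-binomial form of the prior and the specific hyperpriors are my assumptions for illustration; the earlier post may use a different parameterization and different software.

```python
import numpy as np
import pymc as pm

# Hits in the first 45 at-bats for the 18 Efron-Morris players
hits = np.array([18, 17, 16, 15, 14, 14, 13, 12, 11,
                 11, 10, 10, 10, 10, 10, 9, 8, 7])
at_bats = np.full_like(hits, 45)

with pm.Model() as model:
    # Second stage: prior on the parameters of the common distribution
    mu = pm.Beta("mu", 1, 1)            # typical hitting probability
    kappa = pm.Gamma("kappa", 2, 0.1)   # how tightly the probabilities cluster
    # First stage: hitting probabilities share a common beta distribution
    p = pm.Beta("p", alpha=mu * kappa, beta=(1 - mu) * kappa, shape=len(hits))
    # Likelihood for the observed hit totals
    y = pm.Binomial("y", n=at_bats, p=p, observed=hits)
    trace = pm.sample(2000, tune=1000)

# Posterior means of p shrink the raw averages hits / 45 toward a common value
p_post = trace.posterior["p"].mean(dim=("chain", "draw"))
```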
Our situation is more complicated since we’re interested both in a player’s wOBA talent and in his ability to take advantage of a particular count situation. We’ll be putting this problem into a regression context, so the issue will be how to combine a collection of individual-level regression estimates. In Part 2 of this Player Count Effect post (Part 4 of my Introduction to Bayes post), I’ll outline the Bayesian multilevel model, describe how one simulates from the model, and show what we can learn about “true” situational count effects.