# Modeling Balls in Play Outcomes with an Ordinal Model

#### Introduction

One of my popular activities (besides tennis) has been to model in-play home run rates using a generalized additive model (GAM) where the logit of the home run probability is written as a smooth function of the exit velocity and the launch angle. (I recently used this model in the Home Runs – Beyond Launch Angle and Exit Velocity post.) This modeling has been helpful in my work, but the in-play result actually consists of five possible outcomes — out, single, double, triple, and home run. So it would be of interest to model this multinomial outcome by a smooth function of the launch conditions. There is a relatively straightforward generalization of the logistic model, called an ordered logistic regression, that can be applied for ordered outcomes like the ones we have here.

The purpose of this post is to briefly describe this ordered logistic GAM model, use it to model in-play outcomes from the 2019 season, display the fit of this model, and see if it appears to be a reasonable fit to the in-play data.

#### Ordered Logistic GAM Model

Let $Y$ denote the ordered in-play event, where $Y$ takes the values 1, 2, 3, 4, 5 corresponding respectively to the ordered outcomes out, single, double, triple, and home run. Let $P(Y \le j)$ denote the probability that the in-play event is at most outcome $j$. The ordered logistic model says that, for each index $j$, the logit of $P(Y \le j)$ is an outcome-specific cutpoint $\theta_j$ minus a single smooth function of the launch variables LA (launch angle) and LS (launch speed, that is, exit velocity).

$\log \frac{P(Y \le j)}{P(Y > j)} = \theta_j - s(LA, LS)$
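To make the link concrete, the individual outcome probabilities come from differencing the cumulative probabilities. Here is a toy numeric illustration with made-up cutpoints and a made-up smooth value (the parameterization follows mgcv's ocat convention of increasing cutpoints):

```r
# Toy illustration: made-up cutpoints theta_1 < ... < theta_4
# and a made-up value of the smooth s(LA, LS)
theta <- c(-1, 0.5, 2, 3)
s_val <- 0.8

# Cumulative probabilities P(Y <= j), j = 1, ..., 4, on the logistic scale
cum <- plogis(theta - s_val)

# Probabilities of the five outcomes by differencing (P(Y <= 5) = 1)
probs <- diff(c(0, cum, 1))
sum(probs)  # equals 1
```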

This model is remarkably easy to fit in R using the gam() function from the mgcv package. One first defines a variable Outcome that is equal to 1 through 5 corresponding to the five ordered outcomes. Then a single line of code performs the ordinal model fitting (note the use of the ocat family with R = 5 ordinal outcomes in the function).
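A minimal sketch of that fitting line, assuming the 2019 in-play events sit in a data frame `sc2019` with columns `launch_angle`, `launch_speed`, and the numeric `Outcome` just described (the data-frame and column names here are illustrative, not the post's actual names):

```r
library(mgcv)

# Outcome is coded 1 = out, 2 = single, 3 = double, 4 = triple, 5 = home run
fit <- gam(Outcome ~ s(launch_angle, launch_speed),
           family = ocat(R = 5),   # ordered categorical family, 5 levels
           data = sc2019)
```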

The predict() function then returns predicted probabilities of the five in-play outcomes for any specific values of the launch variables. For example, suppose Mike Trout hits a line drive with a launch angle of 20 degrees and an exit velocity of 100 mph. Applying predict(), we obtain the following predicted probabilities.

From this fitted model, for these launch conditions, we see Trout is most likely to single with probability 0.415, followed by a double (0.241), an out (0.215), a home run (0.105), and a triple (0.022).
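The prediction step might look like this, continuing with the hypothetical `fit` object and column names sketched earlier:

```r
# Predicted outcome probabilities for a ball hit at 20 degrees, 100 mph
newdata <- data.frame(launch_angle = 20, launch_speed = 100)
probs <- predict(fit, newdata, type = "response")

# probs is a 1 x 5 matrix of probabilities in the order
# out, single, double, triple, home run
```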

#### Displaying the Fit

It is challenging to display the fit of this model since the predicted probabilities have five components corresponding to out, single, double, triple and home run. One way of visualizing the fit is to focus on the most likely outcome — that is, the in-play event with the largest predicted probability.

Below I have displayed a sample of 7000 (launch angle, launch speed) values, coloring the points according to the most likely outcome from the fitted model. The red region where a home run is most likely is close to the “Barrels” region defined by Statcast. The thin blue curve corresponds to the small region where a double is most likely. A single is most likely in the elongated orange region, which suggests that hitting a single is more about the “right” launch angle than about the right launch speed.
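One way to build this kind of display (a sketch, assuming the hypothetical `fit` object above and the ggplot2 package; here I color a grid of launch conditions rather than sampled points):

```r
library(ggplot2)

# Grid of launch conditions covering typical batted balls
grid <- expand.grid(launch_angle = seq(-20, 50, length.out = 100),
                    launch_speed = seq(60, 115, length.out = 100))

# Matrix of predicted probabilities, one column per outcome
probs <- predict(fit, grid, type = "response")

# Label each grid point with its most likely outcome
outcomes <- c("Out", "Single", "Double", "Triple", "Home Run")
grid$Most_Likely <- outcomes[apply(probs, 1, which.max)]

ggplot(grid, aes(launch_angle, launch_speed, fill = Most_Likely)) +
  geom_tile()
```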

#### Is the Model a Good Fit to the Data? (Mike Trout and Brett Gardner)

There are several assumptions behind this ordinal model and so one wonders if these predicted outcome probabilities provide a reasonable match to the actual player data.

Let’s focus on Mike Trout for the 2019 season. As the following table indicates, among his in-play events, Trout had 187 outs, 57 singles, 25 doubles, 2 triples, and 45 home runs. Using the launch variable values for each in-play event, the model provides probabilities of the five outcomes, and by adding up these probabilities over all plays, we obtain expected counts as shown below. One can judge the closeness of an observed count to an expected count by use of the Pearson residual

$r = \frac{(obs - exp)^2}{exp}$

Looking at the table, the residuals are small, so it appears that the model has done a pretty good job of predicting Trout’s output. What this says is that one can accurately predict Trout’s in-play production from the values of the two launch variables. One can summarize the agreement of the observed and expected counts by summing these Pearson residuals. Here the summary Pearson measure is 3.03. If the model is reasonable, one would expect this summary measure to be about 4, and one wouldn’t be too surprised unless it exceeded 10. (By the way, the numbers 4 and 10 aren’t magic. The reference distribution for a multinomial goodness-of-fit measure with five bins is chi-square with four degrees of freedom. The mean of this chi-square distribution is 4 and the 95th percentile is about 9.5.)
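This goodness-of-fit computation is simple arithmetic. A self-contained sketch (the observed counts are Trout's; the expected counts come from the table and aren't reproduced here):

```r
# Summary Pearson measure: sum over outcomes of (obs - exp)^2 / exp
pearson_stat <- function(obs, exp) sum((obs - exp)^2 / exp)

# Trout's observed counts in outcome order: out, single, double, triple, HR
obs <- c(187, 57, 25, 2, 45)
# With the expected counts from the table, pearson_stat(obs, exp)
# reproduces the 3.03 summary value quoted above.

# Reference points under a chi-square distribution with 4 degrees of freedom
mean_chisq <- 4                      # expected value if the model is right
cutoff_95  <- qchisq(0.95, df = 4)   # about 9.49; larger values are surprising
```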

There are a number of players where the model doesn’t predict as well. Remember Brett Gardner from a previous post? Here are the observed and expected counts for Gardner for the 2019 season. Gardner hit 28 home runs, much larger than the expected count of 13.2. In addition, Gardner hit 7 triples, significantly more than the model’s prediction of 2.4. Remember that our model uses only exit velocity and launch angle as inputs. Brett Gardner’s season illustrates several other inputs that are relevant in hit production, namely

• the ability to hit a ball (including home runs) down the lines
• the running ability to take an extra base like a triple

For Gardner, the Pearson summary measuring agreement of the observed and expected counts is 28.7, which is surprising under the model (remember I said I’d be surprised by a value larger than 10).

Here are several other players whose observed counts are significantly different from the expected counts from the model (as measured by the Pearson statistic):

Adalberto Mondesi, Mallex Smith, Dee Strange-Gordon

All these players are considered pretty speedy. Looking at the observed and expected counts in each case, we see that each player had significantly more triples than the model predicted.