#### Introduction

Over many years, I’ve played with various Bayesian models to predict future performance of baseball players. In some previous posts (for example, see my “Shrinking Batting Averages” posts here and here and my Efron and Morris post), I have explained multilevel modeling and shown R code to implement the methods.

Here I will try something different. I have written a Shiny app to illustrate Bayesian shrinkage using some interesting data, specifically Retrosheet play-by-play data for the 2019 season. I’ll focus on explaining the app. Hopefully, playing with the app will provide some insight into the behavior of these predictions.

#### Scenario

Here’s the situation. I collect all of the individual at-bat outcomes for the 2019 season from Retrosheet. I choose a particular date in the middle of the season, breaking the data into “Train” (before the date) and “Test” (after the date) portions. I collect the batting averages of all players in the Train dataset who have at least a particular number of AB. The problem is to predict the batting averages of these players over the remainder of the 2019 season. Here are two plausible prediction rules:

- **Observed.** I just use the observed batting averages in the Train dataset to predict the batting averages in the Test dataset. So, for example, if a player has 30 hits in 100 AB in the first dataset, I will predict his batting average in the remainder of the season to be 0.300.
- **Multilevel.** The multilevel prediction method partially pools the observed rates. The prediction shrinks or adjusts the player’s observed BA towards the average. This prediction estimate can be written as

    Prediction = (AB / (AB + K)) × (H / AB) + (K / (AB + K)) × η

where η is an estimate of the average BA across all players and the estimate K tells me how much the player’s BA will be shrunk towards the average. I use a Bayesian multilevel model to estimate the values of η and K.
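As a concrete check on how strong this pooling is, here is a small Python sketch of the shrinkage formula (the app itself is written in R); the values η = 0.258 and K = 344.55 are the rounded estimates the app reports for the batting-average fit.

```python
def multilevel_estimate(h, ab, eta, k):
    """Shrink an observed rate h/ab towards the overall average eta.

    The weight on the player's own rate is ab / (ab + k), so players
    with few AB are pulled strongly towards eta.
    """
    return (ab / (ab + k)) * (h / ab) + (k / (ab + k)) * eta

# A player with 30 hits in 100 AB: the observed BA is 0.300, but the
# multilevel estimate is pulled most of the way back to eta = 0.258.
pred = multilevel_estimate(30, 100, eta=0.258, k=344.55)
print(round(pred, 3))  # 0.267
```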

I claim that the Multilevel method will do better than the Observed method in prediction. Specifically, if one measures the goodness of the prediction by the sum of squared prediction errors, one obtains a smaller value of this sum of squared errors using the Multilevel method.
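To see why this claim is plausible without the Retrosheet data, here is a self-contained Python simulation sketch: true talents are drawn from a Beta talent curve using the rounded estimates η = 0.258 and K = 344.55, and the choices of 300 players and 250 AB in each half are arbitrary.

```python
import random

random.seed(1)
ETA, K = 0.258, 344.55   # rounded estimates from the batting-average fit

def sim_binom(n, p):
    """Number of successes in n independent trials with success rate p."""
    return sum(random.random() < p for _ in range(n))

# Simulate 300 players: a true talent from the Beta(K*ETA, K*(1-ETA))
# talent curve, then 250 Train AB and 250 Test AB for each player.
players = []
for _ in range(300):
    talent = random.betavariate(K * ETA, K * (1 - ETA))
    players.append((sim_binom(250, talent), sim_binom(250, talent)))

sse_obs = sse_ml = 0.0
for h_train, h_test in players:
    test_rate = h_test / 250
    obs = h_train / 250                   # Observed rule
    ml = (h_train + K * ETA) / (250 + K)  # Multilevel rule: shrink to ETA
    sse_obs += (test_rate - obs) ** 2
    sse_ml += (test_rate - ml) ** 2

print(round(sse_obs, 3), round(sse_ml, 3))
```

For this simulated setup the multilevel sum of squared errors comes out smaller, mirroring the claim.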

#### The App – Inputs

This Shiny app illustrates this exercise with the 2019 Retrosheet data.

This app is currently live at https://bayesball.shinyapps.io/PredictingBattingRates/

Here’s a snapshot of the Intro page.

There are three inputs to this app:

- **Choose the Outcome.** By choosing H, one is predicting hit rates or batting averages. By choosing SO, you are predicting strikeout rates, and by choosing HR, you are predicting home run rates.
- **Choose Date Breakpoint.** One can decide where to break the Train and Test datasets. Here I have chosen July 1. If instead you want to work with a small amount of observed data, choose an earlier date, say, May 1.
- **Choose a Minimum At-Bats.** You can require that you only consider players with at least 100 AB in the Train dataset. If you choose 0, then you are using all players with at least 1 AB. (You might want to choose a larger value of minimum AB so that pitchers are excluded.)
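The date breakpoint and minimum-AB filter are simple bookkeeping on the play-by-play records. Here is a minimal Python sketch (the app itself is in R); the tuple layout and player ids below are invented for illustration and are not the actual Retrosheet schema.

```python
from datetime import date

# Toy stand-in for play-by-play: (player_id, game_date, is_hit, is_ab)
plays = [
    ("smitj001", date(2019, 5, 3), 1, 1),
    ("smitj001", date(2019, 5, 3), 0, 1),
    ("smitj001", date(2019, 8, 10), 1, 1),
    ("jonea002", date(2019, 4, 20), 0, 1),
    ("jonea002", date(2019, 7, 15), 1, 1),
]

def split_rates(plays, breakpoint, min_ab):
    """Return {player: (train_h, train_ab, test_h, test_ab)}, keeping
    only players with at least min_ab at-bats before the breakpoint."""
    tally = {}
    for pid, d, hit, ab in plays:
        rec = tally.setdefault(pid, [0, 0, 0, 0])
        if d < breakpoint:
            rec[0] += hit
            rec[1] += ab
        else:
            rec[2] += hit
            rec[3] += ab
    return {p: tuple(r) for p, r in tally.items() if r[1] >= min_ab}

rates = split_rates(plays, date(2019, 7, 1), min_ab=2)
print(rates)  # only smitj001 has 2+ Train AB before July 1
```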

#### The App – Outputs

After you choose these inputs, the model will be fit and you will see the estimated values of η and K in the left panel. In this snapshot, η = 0.258, which means the Bayesian estimate shrinks the observed BA towards 0.258. Here K = 344.550; since this is a relatively large value in comparison with the AB values, the observed BA is shrunk strongly towards the average.

**The Rates tab** shows the observed and predicted rates. Note the high variability of the observed rates: in the first part of the season (before July 1), we see a number of batting averages under the Mendoza line (under 0.200) and BA values over 0.350. In contrast, the predicted BAs all fall between 0.225 and 0.300.

At the bottom of the Shiny display, you see the sum of squares of prediction errors for both the Observed and Multilevel estimates. As one would anticipate, the Multilevel estimates do better than the Observed estimates, with a smaller prediction error: 0.856 versus 0.982.

**The Talents tab** shows the estimated talent curve, a density estimate of the batting probabilities for these players. It is relatively common for a player to have a sub-0.200 average in the first few months of the season, but we see from this curve that it is unusual to have a batting probability (talent) below the Mendoza line. By the way, the shape parameters of this Beta talent curve are just Kη and K(1 − η). If you plug in the estimates of η and K, you’ll get the values 88.7 and 255.8 that you see in the title of the figure.
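A short Python computation confirms these shape parameters from the rounded estimates shown in the left panel (the small discrepancy from the figure’s 88.7 and 255.8 is rounding, since the app uses the unrounded estimates).

```python
import math

eta, K = 0.258, 344.55           # rounded estimates from the app display
a, b = K * eta, K * (1 - eta)    # Beta shape parameters of the talent curve
print(round(a, 1), round(b, 1))  # 88.9 255.7, close to the figure's values

# The spread of a Beta(K*eta, K*(1-eta)) curve is
# sd = sqrt(eta * (1 - eta) / (K + 1)), so a large K means a narrow curve.
sd = math.sqrt(eta * (1 - eta) / (K + 1))
print(round(sd, 4))  # 0.0235: a 0.200 talent sits about 2.5 sd below the mean
```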

#### Playing with the App

What I like about this app is that it allows the user to play with the inputs and quickly see the outputs. The app gives insight on the performance of these multilevel shrinkage estimates in different situations. Here are some things for the interested reader to try.

- **Change the outcome from H to SO.** You’ll see that one gets a different estimate of the parameter that controls the shrinkage. A strikeout rate is much more ability-driven than a batting average. If you display the talent curve for the SO rate, you will see that there is a wider distribution of strikeout probabilities than we saw for hitting probabilities.
- **Change the outcome from H to HR.** Now you are predicting home run rates in the remainder of the season. How does the shrinkage size compare to what you saw for H and SO?
- **Change the date.** Try using a date breakpoint earlier in the season, say May 1, and see what impact this dataset with smaller AB values has on the predictions for the remainder of the season.
- **Change the minimum number of AB.** Does the shrinkage change if you do the prediction for all batters (change the minimum AB to 0)?
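In this multilevel model the prediction puts weight AB/(AB + K) on a player’s own observed rate, so the estimated K directly controls the pooling you will see across these experiments. A small Python sketch with 200 Train AB; the smaller K of 50 is a hypothetical value for a more ability-driven rate, not an estimate from the app.

```python
def weight_on_observed(ab, k):
    """Fraction of the multilevel prediction that comes from the
    player's own observed rate; the rest of the weight is on eta."""
    return ab / (ab + k)

# The large K estimated for batting averages pools heavily; a
# hypothetical smaller K leaves the prediction near the observed rate.
print(round(weight_on_observed(200, 344.55), 2))  # 0.37
print(round(weight_on_observed(200, 50), 2))      # 0.8
```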

#### See the Data and the Predictions

One might be interested in seeing the actual observed and predicted rates for all of the 2019 hitters. If you press the Download Rates button in the left panel, you can save all of the results (observed data and predictions for that particular choice of rate) as a CSV file that you can later import into R. Each of the rows in the dataset corresponds to a specific player identified by his Retrosheet id.
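If you want to recompute the error summaries from the download yourself, something like the following Python sketch works; note that the column names here are hypothetical stand-ins, so you would substitute whatever headers the actual file uses.

```python
import csv
import io

# Hypothetical layout for the downloaded file; the app's actual CSV
# headers may differ, so adjust the names as needed.
sample = io.StringIO(
    "retro_id,obs_rate,pred_rate,test_rate\n"
    "smitj001,0.300,0.267,0.280\n"
    "jonea002,0.180,0.240,0.250\n"
)

sse_obs = sse_ml = 0.0
for row in csv.DictReader(sample):
    test = float(row["test_rate"])
    sse_obs += (test - float(row["obs_rate"])) ** 2
    sse_ml += (test - float(row["pred_rate"])) ** 2

print(round(sse_obs, 4), round(sse_ml, 4))  # 0.0053 0.0003
```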

#### Closing Comments

- If you wish to see a detailed description of the Bayesian multilevel model, see Chapter 10 of our *Probability and Bayesian Modeling* text. We explain the whole model and illustrate the use of JAGS in applying Markov chain Monte Carlo to simulate from the posterior of all unknown parameters.
- In the Shiny app, I am using a “quick fit” of this model based on the Laplace approximation function `laplace.R` from the LearnBayes package. It allows one to quickly see the impact of changing the inputs on the posterior estimates.
- This Shiny app is currently the function `PredictingBattingRates()` in my ShinyBaseball package. The code for the app is contained in a single file `app.R` that can be found here. The Retrosheet data for the app is read from my Github site.