HOF Election — Trajectories of Adjusted Measures

Introduction

In my last post, I mentioned that it would be appropriate not to plot raw measures, such as OBP or SLG, but instead plot adjusted measures that show how a player performs relative to his contemporaries.  In this post, I’ll explain how one computes a Bayesian predictive “z-score” and then apply this adjusted measure to compare some of the hitters on the 2017 Hall of Fame election ballot.

Adjustment method

Jeff Bagwell had a slugging percentage of .750 in the 1994 season.  Certainly this seems high, but how would it compare to Gavvy Cravath’s SLG of .568 in the 1913 season?  It is a bit unclear, since the slugging environments of the 1913 season (during the deadball era) and the 1994 season were very different.  To make a reasonable comparison of Bagwell’s and Cravath’s SLGs, one needs to adjust each measure for its respective slugging environment.

Here’s my method for adjusting SLGs (I’ll describe it for Bagwell’s SLG in the 1994 season).

  1.  Fit a multilevel model to the slugging percentage data for all players in the 1994 season.  In words (without formulas): the observed slugging percentages are normally distributed with means M_1, …, M_N, which correspond to the “slugging talents” of the N players.  The talents M_1, …, M_N in turn come from a normal talent curve with mean mu and standard deviation tau.  We fit this model to the 1994 slugging data and obtain estimates of mu and tau.
  2. The predictive density of a slugging percentage SLG will be normal with mean mu and standard deviation sqrt(sigma^2 / n + tau^2), where sigma is the standard deviation of total bases for a single at-bat and n is the number of AB.  This says that the variability of an observed slugging percentage reflects both chance variability for a particular player and variability between players of different slugging talents.
  3. So we can compute the predictive residual (or z-score)

    z = (SLG – mu) / sqrt(sigma^2 / n + tau^2)

    This tells us how many standard deviations the player’s SLG sits above or below the average.  Since the players that we’ll compare are on the HOF election ballot, we’d expect them to have z-scores above 0.  In my experience, really good players have z-scores that are consistently 2 or above, and a z-score above 3 marks a remarkably good SLG season.

  4.  Let’s compute this adjusted SLG for Jeff Bagwell.  For the 1994 season, we fit the multilevel model and obtain mu = 0.402 and tau = 0.0766 — these represent the mean and standard deviation of the 1994 talent curve of slugging percentages.  For this season, we estimate the sampling standard deviation of TB for a single AB to be sigma = 0.863, and Bagwell had 400 AB that season.  So Bagwell’s z-score for his .750 SLG would be

    z = (.750 – .402) / sqrt(.863 ^ 2 / 400 + .0766 ^ 2) = 3.95

    Wow — this was a remarkably large SLG — almost 4 standard deviations above the average.
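The computation in step 4 is easy to check with a few lines of code.  Here is a minimal sketch (Python, just for illustration — mu, tau, and sigma are the 1994 estimates quoted above, not values I fit myself):

```python
import math

def predictive_z(slg, mu, tau, sigma, n):
    """Predictive z-score for a season SLG: distance from the talent-curve
    mean mu, in units that combine chance variability (sigma^2 / n) with
    the spread of talents across players (tau^2)."""
    return (slg - mu) / math.sqrt(sigma**2 / n + tau**2)

# Bagwell, 1994: SLG .750 in 400 AB, with the season estimates from the post
z = predictive_z(0.750, mu=0.402, tau=0.0766, sigma=0.863, n=400)
print(round(z, 2))  # 3.96 (the post rounds this to 3.95)
```

Note how the n in the denominator automatically shrinks the z-score for small-AB seasons: the same .750 SLG in 100 AB would be much less impressive.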

Looking at the 2017 HOF Nominees

Looking at the list of HOF nominees, I’ll focus on two interesting groups of hitters.  The first group consists of players who have already appeared on previous ballots and have reasonable chances of making the Hall of Fame:  Jeff Bagwell, Edgar Martinez, Larry Walker, Gary Sheffield, Tim Raines, and Fred McGriff.  The second group consists of hitters who are appearing on the HOF ballot for the first time:  Ivan Rodriguez, Manny Ramirez, Vladimir Guerrero, Jorge Posada, Magglio Ordonez, and Edgar Renteria.  We’ll look at adjusted trajectories of both on-base percentages and slugging percentages.  The model fit to the OBPs is a little different from the one described above, but it is similar in spirit and also provides a predictive z-score that can be interpreted in the same way.

I wrote R functions that compute these adjustments and construct the graphs shown below.  For example, the first graph is constructed using the following code.  (Note that I sometimes input the Lahman player id instead of the name — I need the id to distinguish between the two players named Tim Raines.)

hof_list1 <- c("Jeff Bagwell", "Edgar Martinez",
               "Larry Walker", "Gary Sheffield",
               "raineti01", "Fred McGriff")
compare_obp_trajectories(hof_list1)
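If you don’t work in R, the trajectory idea is easy to sketch in another language.  Below is a hypothetical Python version that turns per-season inputs into an adjusted trajectory; the career data and the season-level estimates (mu, tau, sigma) here are made-up placeholders, whereas my actual functions estimate them from the Lahman data:

```python
import math

def adjusted_trajectory(seasons):
    """Map (age, slg, ab, mu, tau, sigma) season records to (age, z) pairs,
    where z is the predictive z-score for that season's environment."""
    return [(age, (slg - mu) / math.sqrt(sigma**2 / ab + tau**2))
            for age, slg, ab, mu, tau, sigma in seasons]

# Hypothetical two-season career, just to show the shape of the output
career = [(26, 0.750, 400, 0.402, 0.0766, 0.863),
          (27, 0.550, 520, 0.410, 0.0750, 0.860)]
for age, z in adjusted_trajectory(career):
    print(age, round(z, 2))
```

Plotting these (age, z) pairs for several players on common axes gives graphs of the kind shown below.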

First, let’s look at the adjusted OBP and adjusted SLG trajectories for the first group.

[Figure hof2017_2a: adjusted OBP trajectories for the first group]

[Figure hof2017_2b: adjusted SLG trajectories for the first group]

Several things seem interesting:

  • The shapes of the trajectories differ — perhaps this is more obvious in the OBP trajectories.  Raines and McGriff tended to peak early in their careers (mid-to-late 20s), Bagwell and Sheffield had more traditional trajectories peaking around age 30, and Martinez and Walker were late bloomers, peaking in their 30s.
  • Walker seems to have the highest SLG values (say, the greatest number of seasons where the adjusted SLG is greater than 2), but this observation is perhaps tempered by the “Coors Field effect.”
  • It is obvious that Bagwell’s great SLG season was an outlier, but he had many strong SLG seasons where the adjusted value was around 2.

Let’s move on to the players who are on the HOF ballot for the first time.

[Figure hof2017_2c: adjusted OBP trajectories for the first-ballot group]

[Figure hof2017_2d: adjusted SLG trajectories for the first-ballot group]

  • The strongest hitters in this group are clearly Manny Ramirez and Vladimir Guerrero since their adjusted SLG values are consistently over 2.  Ivan Rodriguez had only a single adjusted SLG value over 2.  Edgar Renteria was just average with respect to SLG.
  • Some players had trajectories with strong curvature (for example, Rodriguez’s adjusted SLG).  Other players showed remarkable consistency across seasons.  For example, Posada, after he matured, had very consistent values of adjusted SLG and adjusted OBP over his career.
  • I think these graphs are a quick way of assessing the hitting careers of the players.  I like looking at SLG and OBP measures since they are batting measures that are familiar and are easily interpretable.

Try this out for your players

The code for the two functions that create these graphs is available on my gist site.  For example, the slugging percentage function compare_slg_trajectories.R can be found here.  The gist site also contains an R Markdown file that provides more details on the functions and the fitting process that leads to the predictive z-scores.


4 responses

  1. What is the benefit of using this model vs. calculating some measure like OPS+ that also accounts for the environment between eras?

    1. OPS+ also adjusts for the league average, but there is no adjustment for the standard deviation of the values. My method does adjust for the standard deviation and will properly discount seasons with relatively few AB.

      1. Thanks. I understand the part about estimating a reasonable SLG for a player with very few AB. But I’m not sure what ‘adjust for the standard deviation of the values’ means. What’s the benefit of that? (I appreciate your explanation in helping me understand some of these concepts. Thanks 🙂

      2. Ryan, suppose you are interested in comparing an AVG of .340 during two different seasons. To understand relative standing, you need to know both the typical AVG in each season and also the spread (standard deviation) of AVGs in each season to make a reasonable comparison. If it would help, I could find a historical example to make this point.
