Introducing 2017 Disc Golf Elo Ratings

Analytics like Elo ratings open up alternate ways of understanding and comparing pro player performance

Many of us who are interested in disc golf would love to see the sport enter the mainstream, and it seems to be moving in the right direction thanks to the ever-growing PDGA membership base.

However popular disc golf is becoming, it generally lags in one area where professional sports like the NBA, PGA Tour, and, particularly, MLB have excelled recently: analytics. There has been growth in this area recently with the statistics provided by UDisc Live, and PDGA ratings have been around for a while. But, generally speaking, disc golf has some catching up to do when it comes to using big data to push our understanding of the sport forward.

I hope to help by introducing Elo ratings for professional disc golfers. Elo ratings are a simple mathematical tool used for comparing players.1 They have become popular for comparing teams or individuals in sports. For example, the website FiveThirtyEight uses Elo ratings to rank and make predictions for NBA and NFL games.

Elo ratings are very popular because they are easy to calculate and easy to understand. Basically, each player’s rating starts with the same baseline value, such as 1500, which is then modified according to how the player scores as compared to all the other players for a given round. If a player scores well, their rating goes up, and vice versa. I provide more of the dirty details in the footnotes.2 But for now, let’s get right to the results.

I have calculated Elo ratings for the 2017 MPO and FPO seasons. These ratings include the 595 MPO players and 104 FPO players who competed in PDGA Majors and NTs (35 total rounds). The figures show how the ratings of all 595 and 104 players, respectively, changed over the 35 rounds. They are pretty, but you cannot really learn much from them.

Plot of 595 MPO player Elo ratings for the 2017 season. Figure: Aaron Howard

Plot of 104 FPO player Elo ratings for the 2017 season. Figure: Aaron Howard

For more clarity, I also generated tables of the top 25 rated players. I ranked them based on the harmonic mean of their average, maximum, and season-end ratings, but the tables are also sortable by all four measures. Each of these ratings has value and tells you something worthwhile. But why focus on the harmonic mean? Because it is more sensitive to lower values and, therefore, penalizes players for not being consistent (mean), good (maximum), and/or a strong finisher (season end).
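For those curious how the ranking statistic behaves, here is a minimal Python sketch of the harmonic mean calculation; the three input values are made up for illustration and are not any particular player's ratings.

```python
def harmonic_mean(values):
    # Harmonic mean: the reciprocal of the average of reciprocals.
    # It is pulled toward the smallest value, which is what penalizes a
    # player who is weak on any one of the three measures.
    return len(values) / sum(1.0 / v for v in values)

# Hypothetical player: mean 1509.0, maximum 1521.0, season-end 1515.0
print(harmonic_mean([1509.0, 1521.0, 1515.0]))  # ~1514.98, just below the arithmetic mean of 1515.0
```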

You probably recognize some of the players on these tables. At the top of the MPO ranking is, of course, Ricky Wysocki, who had the consensus “best” season. He had both the highest mean and end of season rating. Close behind is Paul McBeth, who had the highest maximum rating of the season after his transcendent comeback in the final round of the European Open (sorry Gregg Barsby!). Unfortunately, his struggles early on at the USDGC hurt his mean and end of season ratings.

Player | Mean | Maximum | End Of Season | Harmonic Mean
Richard Wysocki | 1515.6 | 1533.3 | 1524.8 | 1524.5
Paul McBeth | 1515.4 | 1534.0 | 1523.1 | 1524.1
Simon Lizotte | 1512.0 | 1529.3 | 1515.7 | 1518.9
Philo Brathwaite | 1509.8 | 1524.3 | 1518.0 | 1517.3
Devan Owens | 1509.1 | 1523.7 | 1518.6 | 1517.1
Nathan Sexton | 1508.2 | 1521.0 | 1521.0 | 1516.7
Nathan Doss | 1510.3 | 1526.7 | 1509.3 | 1515.3
Chris Dickerson | 1509.9 | 1520.2 | 1515.7 | 1515.3
Kyle Crabtree | 1509.4 | 1516.4 | 1516.0 | 1513.9
Gregg Barsby | 1507.1 | 1522.4 | 1512.2 | 1513.9
Jeremy Koling | 1509.0 | 1519.2 | 1512.3 | 1513.5
James Proctor | 1506.5 | 1515.9 | 1515.9 | 1512.7
Michael Johansen | 1507.3 | 1520.4 | 1510.2 | 1512.6
Austin Turner | 1507.2 | 1518.0 | 1512.4 | 1512.5
Eagle McMahon | 1508.8 | 1522.4 | 1505.5 | 1512.2
Paul Ulibarri | 1507.2 | 1519.4 | 1509.8 | 1512.1
Zach Melton | 1506.6 | 1514.3 | 1513.0 | 1511.3
Barry Schultz | 1506.8 | 1513.5 | 1513.5 | 1511.3
Cameron Todd | 1505.1 | 1513.4 | 1513.4 | 1510.7
Joshua Anthon | 1506.7 | 1516.9 | 1505.4 | 1509.7
James Conrad | 1504.5 | 1517.7 | 1505.9 | 1509.3
Robert Lockwood | 1505.6 | 1511.2 | 1511.2 | 1509.3
Andrew Fish | 1505.9 | 1510.8 | 1510.0 | 1508.9
Seppo Paju | 1505.0 | 1510.2 | 1510.2 | 1508.5
Håkon Kveseth | 1505.9 | 1510.8 | 1508.7 | 1508.4

At the top of the FPO ranking is Catrina Allen, which came as a bit of a surprise to me. Paige Pierce had the highest maximum and mean rating, but Catrina’s hot play at the Pittsburgh Flying Disc Open and the Hall of Fame Classic propelled her season end rating and harmonic mean above Pierce’s. For both MPO and FPO, ratings fall off a little after the top two players.

Player | Mean | Maximum | End Of Season | Harmonic Mean
Catrina Allen | 1514.2 | 1533.1 | 1533.1 | 1526.8
Paige Pierce | 1515.9 | 1533.6 | 1529.3 | 1526.2
Sarah Hokom | 1510.5 | 1522.0 | 1522.0 | 1518.1
Valarie Jenkins | 1508.0 | 1519.6 | 1515.6 | 1514.4
Jennifer Allen | 1506.2 | 1514.6 | 1514.6 | 1511.8
Lisa Fajkus | 1507.2 | 1515.6 | 1510.5 | 1511.1
Elaine King | 1508.0 | 1512.5 | 1511.3 | 1510.6
Jessica Weese | 1506.0 | 1513.6 | 1508.9 | 1509.5
Melody Waibel | 1502.2 | 1509.1 | 1509.1 | 1506.8
Hannah Leatherman | 1503.2 | 1507.8 | 1507.8 | 1506.3
Ellen Widboom | 1502.3 | 1509.9 | 1505.6 | 1505.9
Nicole Bradley | 1503.4 | 1507.5 | 1506.9 | 1505.9
Ragna Bygde Lewis | 1503.4 | 1506.7 | 1506.7 | 1505.6
Karina Nowels | 1502.2 | 1508.5 | 1502.9 | 1504.5
Madison Walker | 1501.5 | 1504.4 | 1504.1 | 1503.3
Rebecca Cox | 1499.6 | 1505.0 | 1505.0 | 1503.2
Kristin Tattar | 1500.4 | 1504.1 | 1504.1 | 1502.9
Eveliina Salonen | 1502.8 | 1508.3 | 1497.2 | 1502.7
Vanessa Van Dyken | 1500.2 | 1503.7 | 1502.8 | 1502.3
Henna Blomroos | 1501.5 | 1502.7 | 1502.3 | 1502.2
Zoe Andyke | 1500.9 | 1504.2 | 1501.2 | 1502.1
Stephanie Vincent | 1501.6 | 1502.1 | 1502.1 | 1501.9
Heather Zimmerman | 1501.2 | 1502.0 | 1502.0 | 1501.7
Melodie Bailey | 1500.5 | 1503.4 | 1500.8 | 1501.6
Michelle Frazer | 1500.9 | 1501.9 | 1501.9 | 1501.5

If you compare these ratings to those given by the PDGA, you will see a lot of consistency. This makes sense because there are some conceptual similarities between PDGA player ratings and Elo ratings. For example, both use standardized round scores in their calculations: PDGA ratings use Scratch Scoring Average (SSA), while Elo ratings use the scores and ratings of the other players in the round (see details below).

However, generally speaking, Elo ratings are much easier to calculate, and one can compute them for any tournament for which scores exist, whether or not SSAs are available. This means we can generate ratings for players all the way back to the 1984 Pro Worlds (the earliest tournament data available on the PDGA website). As I generate these 30+ years of ratings, I plan to improve upon my estimation of what is called regression to the mean,3 which controls for random fluctuations in performance that may be the result of many factors, the most common of which is small sample size (fewer rounds played).

Moving forward, I think these 2017 ratings provide a nice starting point for predicting future performance, and they generate some interesting questions. For example, can Nate Sexton carry his strong 2017 finish (his maximum rating was also his season-end rating) into 2018 and make a move on the dominant players? How quickly can up-and-comers like Kevin Jones, James Conrad, and Lisa Fajkus shoot up the ranks? We'll have to wait and see.

The development of Elo ratings is one means of analytics that can enhance our understanding of the sport and provide an ample foundation upon which to explore it further. As soon as the 2018 season starts later this month, I will continue calculating Elo ratings for all participating players and expand the ratings to include all Disc Golf Pro Tour tournaments.


  1. The method behind these ratings was first developed by Arpad Elo, a physics professor, who wanted a quantitative way to compare chess players. 

  2. Methods: The Elo rating equation is: New Rating = PR + K * (S - 2*ES/N), where PR = previous rating, K = K-factor, S = round score, ES = expected score (based on the other players competing in the same round), and N = number of players. K is a parameter that controls the volatility in ratings; bigger K values mean more volatility. The K-factor I used was 20, a value that works well in a variety of sports. The 2*ES/N portion is modified from the classic Elo rating equation to deal with the fact that disc golf is not a one-on-one sport like chess (see: Building a rating system and Building a modified Elo rating system). For the first round of competition, when there was no PR, I used a baseline value of 1500. The baseline value can be anything you want (chess uses 1000), and it doesn't really change your interpretation; I chose 1500 because it is commonly used for other sports. I extracted all data from the PDGA website.
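     For readers who want to tinker, here is a minimal Python sketch of that update. The pairwise expectation shown is the standard Elo logistic formula; exactly how S and ES are standardized against the field follows the linked articles and is not spelled out here, so treat this as a skeleton rather than my exact implementation.

```python
K = 20           # volatility parameter used for these ratings
BASELINE = 1500  # rating assigned before a player's first round

def pairwise_expectation(rating, opponent_rating):
    # Standard Elo expectation of out-scoring a single opponent.
    return 1.0 / (1.0 + 10 ** ((opponent_rating - rating) / 400.0))

def update_rating(prev_rating, s, es, n, k=K):
    # One round's update, exactly as stated above: PR + K * (S - 2*ES/N),
    # where s is the player's round score, es the expected score built from
    # the other players' ratings, and n the number of players in the round.
    return prev_rating + k * (s - 2.0 * es / n)
```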

  3. I did include an estimate of regression to the mean when calculating the 2017 ratings, but my estimate will be more accurate when data from more years are included. 
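     For illustration, one simple way to apply regression to the mean is to shrink a rating back toward the 1500 baseline in proportion to how few rounds a player has logged; the linear weighting below is just an example, not the exact adjustment I used.

```python
def regress_to_mean(rating, rounds_played, baseline=1500.0, full_weight_rounds=35):
    # Players with fewer rounds are pulled back harder toward the baseline,
    # since their ratings rest on smaller samples. The linear weight is illustrative.
    weight = min(rounds_played / full_weight_rounds, 1.0)
    return baseline + weight * (rating - baseline)

# A 1520-rated player with only 7 of 35 possible rounds regresses to 1504.
print(regress_to_mean(1520.0, 7))
```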

Aaron Howard

    Aaron Howard is a Visiting Assistant Professor at Franklin & Marshall College. He loves to play disc golf and to think about things he loves quantitatively. Contact him at [email protected] and follow him on Instagram.
