Final Thoughts on Metrics and Lineup Review

Over the past couple months, I’ve been writing about non-projection metrics, lineup review methodology, and how they all fit together. I want to conclude this series with some miscellaneous thoughts on what I’ve learned through the research process as well as some areas I’d love to explore down the road.

Lineup Review Is Criminally Underutilized Across DFS

I think that robust, algorithm-driven lineup review is the future of critical analysis in every DFS sport. It wouldn’t have been easy even three or four years ago, but with modern processing power and memory it’s now straightforward to backtest millions of lineups at a time. It’s also a necessary counterweight to the proliferation of subjective claims about what is and is not important in cash-game and tournament lineups. It’s one thing to assert which metrics matter for each game type; it’s another to plug those assertions into a test procedure and get a definitive answer.
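As a minimal sketch of what "backtesting lineups" can mean, here is one way to enumerate every salary-legal lineup on a historical slate and score it against actual results. The players, salaries, cap, and roster size below are all made up for illustration; a real backtest would sweep many slates and compare construction rules rather than just finding the hindsight-best lineup.

```python
from itertools import combinations

# Hypothetical historical slate: (player, salary, actual fantasy points).
# Every name and number here is invented for illustration.
slate = [
    ("A", 11000, 95), ("B", 9800, 72), ("C", 9200, 88),
    ("D", 8500, 40), ("E", 7600, 65), ("F", 7000, 30),
    ("G", 6600, 55), ("H", 6200, 20), ("I", 5800, 48),
]

SALARY_CAP = 50000
ROSTER_SIZE = 6  # six-golfer roster, as on major DFS sites

def backtest(slate, cap, size):
    """Score every cap-legal lineup against actual results."""
    results = []
    for lineup in combinations(slate, size):
        salary = sum(p[1] for p in lineup)
        if salary <= cap:
            points = sum(p[2] for p in lineup)
            results.append((points, salary, [p[0] for p in lineup]))
    return sorted(results, reverse=True)

best = backtest(slate, SALARY_CAP, ROSTER_SIZE)
print(best[0])
```

With a full-size slate the combinatorics explode, which is where the processing power mentioned above comes in; in practice you would prune with integer programming or sampling rather than brute-force enumeration.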

Situational Lineups Are the Next Big Shift for Golf

One question I don’t yet have enough data to answer definitively is how strategy should adjust to course conditions: wind and temperature, course difficulty, cut versus no-cut events, and so on. Almost no other sport (NASCAR being the exception) has playing-condition variation that affects an entire slate, and it stands to reason that a one-size-fits-all strategy probably isn’t as good as a set of optimal strategies keyed to course conditions. (Example: Is a stars-and-scrubs approach better on a difficult course? Or on an easier one?)

I don’t think anyone has cracked situational strategy yet, because the limiting factor is having enough slates across varied conditions to backtest against. I suspect conditions will interact with one another to produce complicated, non-linear decision paths for optimizing lineups, which will require a better class of lineup-review algorithms. Once we have them, analytics stand to provide an even bigger edge.

Different Player Characteristics Have Different Levels of Predictability

When we talk about concepts like Consistency and Upside, there’s an implicit assumption that the two metrics are on equal footing: namely, that past consistency and past upside (however we define them) are equally predictive of a player’s ability to repeat those traits in the future. Based on my findings from lineup review, I’m not sure that’s the case in golf at all, and it has major implications for roster construction. In golf, the rough analogues of those player properties are cut-making and top-5 potential, and initial findings suggest that cut-making is more predictable week to week than top-5 finishes.

While that’s not necessarily surprising, given how random the top of the leaderboard is in a given week, it does have implications for how you might create your lineups for guaranteed prize pools. If upside is essentially random, why give it any weight? Sure, you can see which players have been boom-or-bust in the past, but has anyone really verified that boom-or-bust-ness is repeatable or inherent? It’s an extra step of due diligence that a lot of other sports could also use.
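One way to run that verification is a split-half repeatability check: compute each player's cut rate and top-5 rate in even-numbered weeks and in odd-numbered weeks, then correlate the two halves across players. A repeatable trait shows a high correlation; noise shows roughly zero. The sketch below runs the check on simulated data in which cut-making is, by construction, a stable skill and top-5s are mostly luck; it is purely an illustration of the method, not real tour data.

```python
import random

random.seed(7)

# Toy simulation: each player has a stable cut-making skill,
# while top-5 finishes are nearly random. All parameters invented.
def simulate(n_players=200, n_weeks=40):
    players = []
    for _ in range(n_players):
        cut_skill = random.uniform(0.3, 0.8)  # stable trait
        weeks = []
        for _ in range(n_weeks):
            made_cut = random.random() < cut_skill
            top5 = made_cut and random.random() < 0.05  # near-random
            weeks.append((made_cut, top5))
        players.append(weeks)
    return players

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half(players, stat):
    """Correlate each player's rate in even weeks vs. odd weeks."""
    a = [sum(stat(w) for w in p[0::2]) / len(p[0::2]) for p in players]
    b = [sum(stat(w) for w in p[1::2]) / len(p[1::2]) for p in players]
    return pearson(a, b)

players = simulate()
print(f"cut-rate split-half r = {split_half(players, lambda w: w[0]):.2f}")
print(f"top-5 split-half r    = {split_half(players, lambda w: w[1]):.2f}")
```

Run on real results, the same procedure would tell you directly how much weight past boom-or-bust behavior deserves.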

Player Properties Need to Be Quantified More Often

Quantifying player properties is not only a necessary step for algorithm-driven lineup review; it also forces you to think critically about those properties and what exactly goes into them. My first pass at consistency/upside made me work out how those concepts behave with non-standard distributions, and it improved my understanding of the game.
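Golf scoring is a good example of such a distribution: a missed cut puts a large mass of the sample at (or near) zero, so means and standard deviations mislead. One possible quantification, sketched below with invented weekly scores and an arbitrary floor threshold, uses counting and quantiles instead: consistency as the share of weeks above a usable floor, upside as an empirical upper quantile.

```python
# Hypothetical weekly fantasy scores for one golfer; zeros represent
# missed cuts. The numbers and the 50-point floor are made up.
scores = [0, 72, 65, 0, 88, 54, 0, 0, 61, 95, 70, 0, 58, 83, 0, 67]

def consistency(scores, floor=50):
    """Share of weeks at or above a usable cash-game floor."""
    return sum(s >= floor for s in scores) / len(scores)

def upside(scores, q=0.9):
    """Empirical q-th quantile: a crude ceiling measure for GPPs."""
    ordered = sorted(scores)
    idx = min(int(q * len(ordered)), len(ordered) - 1)
    return ordered[idx]

print(consistency(scores))  # 0.625
print(upside(scores))       # 88
```

Because both metrics are order-based, the spike of zeros from missed cuts doesn’t distort them the way it would a mean or variance, which is exactly the non-standard-distribution problem described above.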

There’s essentially no limit to player properties when we’re defining them ourselves. I think that developing those additional metrics will pay off once lineup review becomes more of a standardized process.
