Final Thoughts on Specialists and Randomness

In previous articles, I explored the specialist method of finding course-fit statistics and discussed how I couldn’t find evidence that specialist-type metrics are predictive of future performance. I don’t consider my dive into the data the final word on the matter: I’m eager for someone else to provide an evidence-based rebuttal of what I might have missed.

But I also didn’t want to leave the subject without exploring why specialist-type metrics aren’t all that predictive. The lessons here apply to other areas of PGA, as well as to DFS in general.

“Get Together One More Time”

Let’s go back to our specialist angle with a purely hypothetical course condition. Let’s say that on one out of every five courses on Tour there’s a Randomness Fairy that shows up. On this type of course, the Randomness Fairy takes every fifth shot and moves the ball in a random direction. Sometimes it hurts a golfer’s shot, and other times it helps. Every golfer knows if a course will have the Randomness Fairy ahead of time. The Randomness Fairy doesn’t distinguish between golfers. Everyone is equally subject to the whims of the course conditions.

Could we build a specialist narrative around Randomness Fairy courses?

The arguments in favor aren’t all that out of line. If you’re going into a course where you know all your preparation, strategy, and amazing shots can be undone by the Randomness Fairy, you probably have to be mentally tough to withstand that kind of variance. Maybe you believe that certain golfers tilt more easily than others and that on a Randomness Fairy course they’re more likely to go off the rails and put up an egg.

On the flip side, there could also be golfers who are mentally tough and can withstand that variance better than the competition. They could conceivably have an edge on Randomness Fairy courses.

How would we find both of these types of golfers?

The Million-Dollar Question

The specialist approach says that, using your metric of choice (strokes gained, percentage from expectation, etc.), you should just look at who does better or worse on Randomness Fairy courses in comparison to regular courses. With this approach, you would certainly find golfers who did better and/or worse on these courses.

But here’s the million-dollar question: If a golfer does better on Randomness Fairy courses, is it because he has that mental edge, or was he just lucky?

From the data alone, there’s no way to tell the difference. Even if you believe that the mental edge exists in the abstract, you won’t be able to pick out who was good versus who was lucky based only on the results.
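To see why results alone can’t separate skill from luck, here’s a minimal simulation of our hypothetical: 100 golfers with identical true skill, where every number (the noise level, the round counts) is an assumption chosen just for illustration. Even though no golfer has any real edge, some will look like convincing “Randomness Fairy specialists” purely by chance.

```python
# Hypothetical sketch: 100 golfers with IDENTICAL true skill.
# Any "Randomness Fairy differential" we measure is pure luck.
import random

random.seed(42)

N_GOLFERS = 100
FAIRY_ROUNDS = 20    # rounds played on Randomness Fairy courses (assumed)
REGULAR_ROUNDS = 80  # rounds played on regular courses (assumed)
NOISE_SD = 3.0       # per-round strokes-gained noise, in strokes (assumed)

def avg_strokes_gained(n_rounds):
    """Mean strokes gained over n_rounds for a golfer with zero true edge."""
    return sum(random.gauss(0, NOISE_SD) for _ in range(n_rounds)) / n_rounds

# Differential = average on fairy courses minus average on regular courses.
diffs = [avg_strokes_gained(FAIRY_ROUNDS) - avg_strokes_gained(REGULAR_ROUNDS)
         for _ in range(N_GOLFERS)]

print(f"best 'fairy specialist':  {max(diffs):+.2f} strokes per round")
print(f"worst 'fairy sufferer':   {min(diffs):+.2f} strokes per round")
```

Run this and the extremes of the distribution look like real course-fit signals, yet by construction every golfer is interchangeable. A results-only specialist screen has no way to flag that.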

(Now, for the twist at the end: Go back to all of the previous sentences containing “Randomness Fairy” and replace that phrase with “windy.” Wind is inherently random. That’s why I don’t think that the wind specialist angle is worth pursuing.)

The Randomness Fairy is a Cruel Mistress

In theory, randomness should even out over the long run, and you won’t have to worry about it with a sufficiently large sample size. In practice, golf is such a noisy game and tournaments are so infrequent that you will never hit your required sample size. This applies not just to our hypothetical but also to any course-fit approach we take in PGA.
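Some rough arithmetic shows how slowly that noise shrinks. The standard error of a golfer’s average strokes gained falls with the square root of the number of rounds; the per-round noise figure below is an assumption, but the shape of the curve is the point.

```python
# Back-of-the-envelope: how fast does "randomness even out"?
# Standard error of the mean shrinks only as 1/sqrt(rounds).
import math

NOISE_SD = 3.0  # assumed per-round strokes-gained noise, in strokes

for rounds in (4, 20, 100, 400):
    se = NOISE_SD / math.sqrt(rounds)
    print(f"{rounds:4d} rounds -> standard error ~ {se:.2f} strokes/round")
```

With these assumed numbers, even 100 rounds (several seasons’ worth of a one-in-five course condition) leaves roughly 0.30 strokes per round of pure noise in the average, which is on the order of the edge we’d be trying to detect in the first place.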

Ultimately, this inherent randomness is very limiting, but not so limiting that we can’t reach conclusions by backtesting whatever angle intrigues us. We’ve put a lot of thought into what stats we include in our Player Models for that reason: There’s nothing in them that hasn’t been backtested and proved to be predictive out of sample.

Even if you’re not using Player Models, at least make sure that the analysis you’re doing or reading has some empirical validation. Otherwise, you run the risk of hitching your wagon to the whims of the Randomness Fairy.
