Bitcoin, Projections, and Rankings: Intrinsic vs. Market Value

This is the 170th installment of The Labyrinthian, a series dedicated to exploring random fields of knowledge to give you unordinary theoretical, philosophical, strategic, and/or often rambling guidance on daily fantasy sports. Consult the introductory piece to the series for further explanation.

A couple of weeks ago Peter Jennings (CSURAM88) and I booked a prop bet as to whether I would finish the season as a top-10 ranker at FantasyPros. At the time I was No. 10. Now I’m No. 2, trailing only Sean Koerner, a friend of the Daily Fantasy Flex pod and the Director of Predictive Analytics for STATS.

In this piece I want to explore the difference between the projections in our Models and my rankings at FantasyPros, and I also want to discuss two different methods of evaluating players and predicting performance.

Am I going to mention the word “Bitcoin” in the second half of this article? Probably.

The FantasyLabs Projections

Within our Models we have different types of projections. In our NBA Models Justin Phan and the team project usage, playing time, and ownership rates. In our NFL Models we project median, ceiling, and floor production as well as ownership.

Adam Levitan handles our NFL ownership projections, which is a good thing — because I suck at projecting ownership. It’s hard. Each slate has its own dynamics based on salaries, matchups, and positional scarcity — to say nothing of the not infrequent exposure inefficiencies that result from recency bias. Projecting ownership is much more of a knowledge-based art than a formula-driven science. Almost any good data analyst can create reasonable production projections, but only someone with vast knowledge of DFS and the underlying sport can produce reliable ownership projections. Phrased differently: A computer can do great production projections. Only a human with the ability to read the real-time movements of the DFS market can do ownership projections.

So Levitan is great at predicting ownership, and I suck at it. I have, though, gotten better by studying patterns with our Ownership Dashboard and Trends Tool. Also, I consult our Contest Dashboard, which allows me to see the exposure levels, stacking strategies, salary usage, and positional distributions for all DFS players in indicative guaranteed prize pools. This tool is incredibly useful for studying the best GPP practices of the industry’s elite players. Whenever I have the opportunity, I check out what the members of Team FantasyLabs have done in recent tournaments. With this trio of tools, I get a sense of not just general ownership trends but also advanced exposure and roster-building habits. These tools — available to those who subscribe to FantasyLabs — are perhaps the most valuable in the industry.

As for our NFL production projections, we strive to be accurate and precise with our median projections and realistic with our ceiling and floor projections. Because we provide three production numbers plus an ownership range, there’s no hedging with our projections. We don’t need to adjust the median projection up so as to provide a sense of a player’s upside. That’s what the ceiling projection is for. We don’t need to adjust the median projection down if we think a player will be exorbitantly rostered. That’s what the ownership projection is for. With the median projection we’re shooting to hit the bullseye. If a player projected for 10.0 fantasy points were to play his game 10,001 times, we’d hope to see exactly 10.0 fantasy points at No. 5,001 of the ordered data set, the statistical over/under separating the top half of outcomes from the bottom half.
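To make that bullseye idea concrete, here's a quick sketch: simulate a player's game many times and the median projection should land at the midpoint of the ordered outcomes. The distribution below (a normal with a made-up spread) is purely hypothetical; real fantasy scoring is skewed and event-driven.

```python
import random
import statistics

random.seed(42)

# Hypothetical: simulate 10,001 outcomes for a player whose true
# median production is 10.0 fantasy points.
outcomes = sorted(random.gauss(10.0, 6.0) for _ in range(10_001))

# With 10,001 ordered outcomes, the median is the 5,001st value
# (index 5,000), splitting the top half from the bottom half.
midpoint = outcomes[5_000]
assert midpoint == statistics.median(outcomes)
print(round(midpoint, 1))
```

With enough simulations, that midpoint converges on the true median, which is exactly the number a well-calibrated median projection is trying to hit.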

There’s no question that in any given week many players will have scores nowhere near their median projections, especially in an event-driven sport like football. That’s fine. That doesn’t mean we’re wrong. It’s not important if we’re off on any given player. What’s important is that we’re accurate in the aggregate. What’s important is that we see reasonable stratification: Over a number of weeks, we want to see 15 percent of the players above their ceiling projections and 15 percent below their floor projections. We want to see 15 percent just below their ceilings and 15 percent just above their floors. And we want to see 20 percent just above the median projections and 20 percent just below. We want to see balance across the scoring spectrum as it relates to the three types of production projections we make.
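Checking that kind of stratification after a few weeks is a simple bucketing exercise. Here's a minimal sketch; the tuple layout and sample numbers are mine, not the FantasyLabs data format, and I've collapsed the six bands in the text into four buckets (so the targets become roughly 15 percent above ceiling, 35 percent ceiling-to-median, 35 percent median-to-floor, and 15 percent below floor).

```python
# Hypothetical player-weeks: (actual, floor, median, ceiling) in fantasy points.
results = [
    (22.4, 8.0, 14.0, 21.0),  # smashed the ceiling
    (3.1, 5.0, 11.0, 18.0),   # fell through the floor
    (12.5, 7.0, 12.0, 19.0),  # just above the median
    (16.0, 6.0, 13.0, 20.0),  # between median and ceiling
]

buckets = {"above_ceiling": 0, "ceiling_to_median": 0,
           "median_to_floor": 0, "below_floor": 0}

for actual, floor, median, ceiling in results:
    if actual > ceiling:
        buckets["above_ceiling"] += 1
    elif actual >= median:
        buckets["ceiling_to_median"] += 1
    elif actual >= floor:
        buckets["median_to_floor"] += 1
    else:
        buckets["below_floor"] += 1

for name, count in buckets.items():
    print(f"{name}: {count / len(results):.0%}")
```

Run over a real season's worth of player-weeks, the printed shares should sit near the targets; big deviations would mean the ceilings, floors, or medians are systematically mis-set.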

There’s a lot of thought and work that goes into our projections.

The Freedman FantasyPros Rankings

Full disclosure: Not a lot of thought and work goes into my FantasyPros rankings — at least in comparison to our projections. Whereas with our production projections we have a triangulated process — we basically take three shots at the target from three different angles — with my personal rankings I’m shooting at a series of targets from the same spot. I’m not even really compressing all my analysis into one representative number per player. To me, whether a player is ranked first or 36th at his position is almost irrelevant. What matters to me is the relational positioning of the players.

When ranking wide receivers, I never start out by saying “Antonio Brown is the No. 1 receiver this week.” He almost always is, and that usually makes sense, but I don’t start by assigning Brown a number. I start by placing him and the other elite receivers in relation to each other, and then whoever is at the top of that cohort — and usually it’s Antonio — is No. 1 almost by default. I consult the FantasyLabs median projections throughout this process, but they don’t govern my decisions since in my rankings I’m shooting for positive expected value. Although it might be the wrong approach, I generally try to think about probability and payoff instead of just probability. That might seem weird given that rankings flatten everything out, but I think I’ve improved this year as a ranker because I’ve gotten better at balancing upside and downside with concerns of likelihood.

As I’ve mentioned previously, I also extend this “relational positioning” to the field of FantasyPros rankers. I seek to distinguish myself from other rankers in strategic ways, which means that I tend to give my rankings a contrarian shade. (Also, I tend to be a contrarian ranker in general. I have to work to suppress my contrarian urge, but that’s another matter.) If I think lots of rankers are too bearish or bullish on a player, I might adjust my initial ranking in the opposite direction to gain leverage. I definitely employ some game theory.

But here’s something I want to make clear: The decision to adjust rankings based on my perception of inefficiencies in the field is not just about leverage. It’s not just about game theory. Through The Action Network, we are partnered with Sports Insights, Bet Labs, and Sports Action. With the tools these sites offer, we can do research to discover spots in which the sports speculation markets tend to be inefficient. Within the Vegas data and line movements, we can find inefficiencies as well as indicators for those inefficiencies. For instance, in particular situations it’s not uncommon for 80 percent or more of the tickets on a bet to be on one side — the losing side. In part, that’s how Vegas makes money. At times, when the public is overweight on one side of a bet, it’s the sharp move to be on the other side. I think this ‘fade the public’ perspective can also apply to rankings.

Here’s my thesis: When a lot of rankers are bullish or bearish on certain players, or even certain types of players, I should adjust my rankings in the opposite direction not just to gain beneficial leverage in case the rankers happen to be wrong. Rather, I should adjust because those rankers probably are wrong. This is about more than game theory. This is about predictability and ultimately accuracy.
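That fade-the-public adjustment can be sketched as a nudge away from consensus. The function name, the 0.5 strength, and the linear nudge are all my own illustration, not a FantasyLabs or FantasyPros formula.

```python
def fade_the_public(my_rank: int, consensus_rank: int, strength: float = 0.5) -> int:
    """Push my rank further away from the consensus rank.

    If the field is much more bullish on a player than I am (lower
    consensus number), I lean further bearish, and vice versa.
    Purely illustrative; a real adjustment would be calibrated.
    """
    gap = my_rank - consensus_rank
    adjusted = my_rank + strength * gap
    return max(1, round(adjusted))  # ranks can't go above No. 1

# Field is far more bullish than I am: lean further bearish.
print(fade_the_public(my_rank=20, consensus_rank=10))  # 25
# Field is far more bearish than I am: lean further bullish.
print(fade_the_public(my_rank=10, consensus_rank=20))  # 5
```

The design choice worth noting: this amplifies disagreement with the field only where disagreement already exists, which matches the thesis that clustered consensus is itself a bearish signal about the consensus.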

About a month ago, I made a concerted effort to change part of my ranking process and to implement a little more aggressively my ‘fade the public’ strategy for particular players and types of players. The results speak for themselves. After Week 8, I was the No. 12 ranker overall. After these weekly performances . . .

  • Week 9: No. 5
  • Week 10: No. 12
  • Week 11: No. 3
  • Week 12: No. 2

. . . I’m now No. 2 overall. I have been running inordinately hot and could cool off any week — I’m due for regression — but as of now I’ve seen nothing to indicate that my field-based relational positions are theoretically faulty or likely to lead to less accuracy. To the contrary, I’ve never been more accurate.

Intrinsic vs. Market Value

I’ve said this before: Playing DFS is like investing. The same is true for projecting and ranking. Even if we’re not going through the literal process of buying and selling, we’re immersed in an active market in which participants assign values to various entities. Instead of evaluating companies or commodities or properties or vehicles, we’re evaluating players and specifically their likely production in a slate. There’s a difference, but not a big difference. This is coming from a guy who views life as a market and all objects or processes as assets to be evaluated, so my perspective might be skewed.

There are (in general) two main ways to evaluate an asset. One is to focus on the intrinsic value. What is it actually worth? What is its real value? What can it do? How many units of production can I expect to get out of it? This is a pragmatic approach to evaluation. It’s the bread-and-butter method for many value investors, such as Warren Buffett. If your No. 1 priority is not to lose money, before you make any sort of investment you should have a solid idea as to how much what you’re buying is worth.

The other way to evaluate an asset is to focus more on the market in which it circulates. What is it said to be worth? What is its perceived value? What do people think of it? How many units of profit might I hope to get out of it? This tends to be the approach for people who are more speculative and might view themselves less as investors and more as traders.

This might seem clear, but it’s worth saying: The best way to evaluate is almost certainly to use both methods in tandem. In the stock market, intrinsic evaluation can help investors determine what companies are worth, and market evaluation can help them know when to trade in and out of the stocks for those companies. In betting, intrinsic evaluation enables bettors to determine power rankings and what spreads and game totals ‘should’ be, and market evaluation lets bookmakers set lines that cater to the tendencies and biases of the people who actually bet on games.

In the cryptocurrency market, intrinsic evaluation involves analyzing the transactional technologies of different cryptos as well as the financial and political climates nationally and internationally that might cause cryptos to succeed or fail. Market evaluation . . .

. . . is basically looking at Bitcoin’s 2017 rise and saying, “People are buying it. Even more people are going to buy it in the future. I’m not going to be the only person on earth who doesn’t make money on this. Why shouldn’t this get to $100,000 in another year?” Check out The Three Donkeys podcast for more Labs ‘analysis’ on crypto.

Market-Based Evaluation in Projections and Rankings

Everyone in the fantasy and sports analytics industry projects player production via intrinsic evaluation. They look at team production, market share, matchup, etc. They look at the game itself. Almost no one tries to predict what individual players will do by evaluating the analytical market that projects and ranks those players. That might be the biggest inefficiency in the projection and ranking space today.

Even though I suck at predicting ownership in comparison to Levitan, CSURAM88, and Jonathan Bales, I still know that it’s way easier to predict how the market will roster a player than how a player will perform. I also know that for football it’s less worthwhile to look at points scored and more worthwhile to look at yards accumulated if you want to predict future scoring.

What might seem roundabout is sometimes rather direct.

I haven’t backtested this as it relates to NFL projections and rankings, so I’m operating purely in the realm of theory and anecdote, but it’s possible that in an event-driven arena we can be most accurate with our predictions if we focus a little less on our own calculated prognostications and a little more on the signals sent by the market, whether those are positive or negative indicators.

Sometimes what matters is not what we think about a game but who has skin in it and how much.

——

The Labyrinthian: 2017.75, 170

Matthew Freedman is the Editor-in-Chief of FantasyLabs. He has a dog and sometimes a British accent. In Cedar Rapids, Iowa, he’s known only as The Labyrinthian. Previous installments of the series can be accessed via the series archive.


About the Author

Matthew Freedman is the Editor-in-Chief of FantasyLabs. The only edge he has in anything is his knowledge of '90s music.