How Good Are My Analysts? Building A Better Hedge Fund Through Moneyball & Superforecasting
In this article, Cameron Hight explains how Alpha Theory's Accuracy Score brings radical transparency to the measurement of hedge fund analysts' forecasting skill.
Traditionally, measuring hedge fund analyst skill has been an opaque process mired in ambiguity and subjectivity. It is often misconstrued and tainted by portfolio manager influence in the form of sizing decisions, liquidity constraints and other non-analyst determinants. But in the same way Moneyball revolutionized the evaluation of baseball player value by prioritizing on-base percentage over batting average, Alpha Theory has distilled the key indicator of predictive aptitude, inventing the Accuracy Score to introduce radical transparency into the rating of forecasting skill for hedge fund analysts.
P&L is Yesterday’s Batting Average
To extend the Moneyball analogy: the quantitative disruption of baseball player evaluation changed the way players are paid by isolating the skill that contributes most to team wins. Using that data, managers now pay athletes in proportion to the amount of that winning skill they individually possess. As such, the key metric for baseball player value evolved from batting average to the more predictive on-base percentage, or OBP.
Specifically, OBP has a 92 percent correlation with runs scored compared to batting average's 81 percent, making it more predictive. OBP's 44 percent year-to-year correlation is also more persistent than batting average's 32 percent. That predictive reliability and performance consistency make OBP the superior metric for forecasting wins. OBP's disruption of batting average is an apt metaphor for the way Alpha Theory's Accuracy Score will transform analyst ranking and assessment today.
In 2016, analysts are still primarily rated by the profits and losses their investments generate for the fund, or P&L. But making money on an investment is a misleading measure of analyst skill. Beyond its tendency to be distorted by portfolio manager discretion, P&L performance, both good and bad, often masks the integrity and quality of investment processes. Thus, P&L often misleads portfolio managers into thinking lucky analysts are actually skilled and vice versa.
For example, take these two analysts:
Looking at the table above and using P&L to measure skill, Analyst #1 would be exceptional and Analyst #2 would be sub-par. But Analysts #1 and #2 made identical forecasts, so their forecasting skill is actually the same. P&L does not translate into forecasting skill because analysts do not have ultimate control over position sizing; the portfolio manager does!
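The sizing effect can be sketched in a few lines. The numbers below are hypothetical illustrations, not data from the table: two analysts back the same ideas with the same forecasts, but the portfolio manager sizes their positions differently, so their P&L diverges even though their forecasting skill is identical.

```python
# Hypothetical sketch: identical forecasts, different PM-assigned sizing.
realized_returns = [0.10, -0.05, 0.20]   # same ideas, same realized returns

# Position sizes (fraction of the book) are set by the PM, not the analyst
sizes_analyst_1 = [0.08, 0.01, 0.10]     # fortunate sizing: big in the winners
sizes_analyst_2 = [0.01, 0.10, 0.02]     # unfortunate sizing: big in the loser

def pnl(sizes, returns):
    """P&L contribution = sum of position size x realized return."""
    return sum(s * r for s, r in zip(sizes, returns))

print(pnl(sizes_analyst_1, realized_returns))  # Analyst #1 looks skilled
print(pnl(sizes_analyst_2, realized_returns))  # Analyst #2 looks sub-par
```

Judged by P&L, Analyst #1 wins decisively; judged by the forecasts themselves, the two are indistinguishable.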
More Science, Less Art
Inspired by the ideas presented in the groundbreaking book, Superforecasting: The Art and Science of Prediction, Alpha Theory’s Accuracy Score delivers quantitative insight into a qualitative blind spot for portfolio managers. Authored by Wharton Professor Philip Tetlock and Dan Gardner in 2015, Superforecasting applies a Brier Score-inspired approach to quantifying predictive skill. The Brier Score, created by meteorological statistician Glenn Brier in 1950, measures the accuracy of probabilistic forecasts. Superforecasting applies Brier’s methodology to binary, or yes/no, outcomes only.
The New Standard
Alpha Theory’s Accuracy Score is an algorithmic solution that measures analysts’ predictive skill over a 0 - 100 percent range, where 100 is the best. Scores are calculated on a per-forecast basis and then averaged per analyst. The Accuracy Score algorithm transforms point estimate price targets and probability forecasts into an implied probability distribution, enabling each forecast to be independently scored. By distributing multi-faceted outcomes across a range of probabilities, the Accuracy Score can measure forecasting skill for any price along the distribution.
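Alpha Theory's actual algorithm is proprietary, but the general idea of scoring a realized price against an implied distribution can be illustrated with a toy model. Everything below is a hypothetical sketch: the function name, the use of a normal distribution, and the scenario numbers are all assumptions, not Alpha Theory's method.

```python
# Toy sketch only: turns scenario forecasts (price targets with probabilities)
# into an implied normal distribution, then scores the realized price by how
# much probability mass the analyst placed near it.
from statistics import NormalDist

def accuracy_score(targets, probs, realized):
    """Score in [0, 100]: 100 when the realized price lands exactly at the
    probability-weighted mean of the implied distribution."""
    mean = sum(t * p for t, p in zip(targets, probs))
    var = sum(p * (t - mean) ** 2 for t, p in zip(targets, probs))
    sigma = max(var ** 0.5, 1e-9)
    dist = NormalDist(mean, sigma)
    # Density at the realized price relative to density at the mode
    return 100.0 * dist.pdf(realized) / dist.pdf(mean)

# Upside target $120 with 60% odds, downside $80 with 40% odds
score = accuracy_score([120, 80], [0.6, 0.4], realized=110)
```

A forecast whose implied distribution puts heavy weight near the realized price scores near 100; a badly missed forecast decays toward zero, so scores can be averaged per analyst and compared across names and time.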
The distribution of scores across our Alpha Theory clients is shown below. The results follow a normal distribution, which further validates the Accuracy Score’s efficacy in rating analysts’ ability to forecast future price movements.
Good forecasts are the most essential component of fund success and critical when portfolio managers are sizing positions. Using a data-driven approach to determine which analysts make the best forecasts allows managers to apply those forecasts with greater confidence, leading to better position sizing and superior performance.
The Good Judgment Project
In 2011, the Intelligence Advanced Research Projects Activity (IARPA), a U.S. government research organization, sponsored a geopolitical forecasting tournament that spanned four years. The tournament enlisted tens of thousands of forecasters and solicited more than 1 million forecasts across nearly 500 questions related to U.S. national security.
A group called the Good Judgment Project (GJP) entered the competition, engaged tens of thousands of ordinary people to make predictions, and won the tournament. The GJP’s forecast accuracy was so persistent that IARPA closed the tournament early to focus exclusively on it. In fact, GJP identified a select group of “Superforecasters” whose forecasts were “30 percent better than intelligence officers with access to actual classified information.”
Ways to Improve Forecasting Skill
The main findings of the GJP and the book that followed are especially relevant to investors. The research in Superforecasting indicates that predictive accuracy doesn’t require sophisticated algorithms or artificial intelligence. Instead, forecast reliability is the result of process-oriented discipline.
This process entails collecting evidence from a wide variety of sources, thinking probabilistically, working collaboratively, keeping score and being flexible in the face of error. According to the book, the 10 traits that most Superforecasters possess are:
1. Intelligence - above average, but genius isn’t required
2. Quantitative - not only understand math but apply it to everyday life
3. Foxes, not hedgehogs - speak in terms of possibilities, not absolutes
4. Humility - understand the limits of their knowledge
5. System 2 Driven - use the logic-driven instead of instinct-driven portion of their brain
6. Refute fatalism - life is not preordained
7. Make frequent and small updates to their forecast based on new information
8. Believe that history is one of many possible paths that could have occurred
9. Incorporate internal and external views
10. Constantly search for ways to improve their forecasting process
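Trait 7, making frequent small updates as new information arrives, has a natural formal analogue in Bayes' rule. A minimal sketch, with hypothetical likelihood ratios standing in for real evidence:

```python
# Hypothetical sketch of trait 7: nudge a probability estimate with each new
# piece of evidence via Bayes' rule, rather than anchoring on the first view.

def bayes_update(prior, likelihood_ratio):
    """Update P(event) given evidence; likelihood_ratio says how much more
    likely the evidence is if the event is true than if it is false."""
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

p = 0.30                       # initial estimate
for lr in [1.5, 1.2, 0.9]:     # three small, mixed pieces of evidence
    p = bayes_update(p, lr)    # each update moves the estimate modestly
```

Each update moves the probability only modestly, which mirrors the Superforecaster habit of many small revisions rather than rare dramatic ones.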
Accountability = Profitability
Organizations cannot improve without systematic and data-driven assessments of their personnel. Take Bridgewater Associates, for example. One of the primary factors driving the persistent outperformance of Ray Dalio’s storied fund has been the institutional commitment to radical transparency and accountability. Similarly, Alpha Theory’s Accuracy Score illuminates blind spots and holds analysts accountable through the precise measurement of predictive skill. For funds that lack the time, inclination or internal resources to create their own probabilistic forecast-grading models, Alpha Theory’s Accuracy Score fills the void.
To this end, Alpha Theory is exploring areas of collaboration with the leadership of Good Judgment Inc. (a spin-off from the Good Judgment Project described in Superforecasting). As the competitive landscape for investment capital tightens, discretionary managers must leverage probabilistic data to survive. Alpha Theory’s Accuracy Score is a mission-critical asset that can help funds compete in the current investment landscape, reducing operating inefficiencies and better aligning analyst pay with analysts' intrinsic value to the firm.