This article is sponsored by EDHECinfra
Today, 80 percent of institutional investors are exposed to unlisted infrastructure equity invested via managed private investment funds. As a result, fund manager selection and performance monitoring are key aspects of the investment process in infrastructure. Indeed, most individual infrastructure portfolios are concentrated in a limited number of investments reflecting active manager choices.
To select skilled managers, investors typically rely on rankings by quartiles of net IRR and multiples and aim to work with asset managers that are consistently in the top quartiles. Likewise, to monitor performance, investors need to compare the reported performance of the funds they are invested in with that of comparable funds and, again, hope to achieve top-quartile results.
However, this process is hindered by the limited availability of infrastructure fund performance data. There are at least six reasons why such data is scarce and biased, making both manager selection and monitoring very challenging:
- First, available sample sizes are small (often fewer than 30 data points), and estimating quartile boundaries reliably is impossible with so little data.
- Second, contributed data suffers from multiple biases (reporting, selection and survivorship biases), further making the estimation of quartiles of manager and fund performance unreliable.
- Third, in the case of some strategies and geographies, too few funds may exist in the first place to achieve any robust estimate of the quartiles of returns even if all available data can be collected.
- Fourth, because this data is contributed and processed by humans, it is sometimes plain wrong – either the exact investment year or the performance data itself can be inaccurate. Such human errors are compounded by the limited number of data points available. With sometimes fewer than 30 data points to rely on, there is no law of large numbers to cancel out human errors, and even one inaccurate data point can create a large deviation in reported quartiles.
- Fifth, the same is true of outliers: if reported data includes one or two very high or very low IRRs, with a small sample, estimated quartile boundaries are not robust. As far as we know, there is no outlier treatment in existing datasets used to rank funds and managers.
- Finally, contributed fund data is also typically stale – ie, available with a lag of one to three years, depending on the age of the fund. New funds usually do not report any performance data for the first two or three years, and more mature funds tend to report with a lag of up to four quarters. And since most funds also arbitrarily set a fixed hurdle rate at 7 or 8 percent, in the absence of robust performance quartile data, there typically is no relative benchmark against which infrastructure funds and managers can be assessed.
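The fourth and fifth points above can be illustrated with a short simulation. The sketch below is illustrative only: it assumes a vintage's reported net IRRs happen to be drawn from a normal distribution with a 9 percent mean and a four-point standard deviation. Adding a single erroneous contribution to a 15-fund sample visibly moves the estimated quartile boundaries.

```python
import random
import statistics

random.seed(7)

def quartiles(irrs):
    """(Q1, median, Q3) of a list of net IRRs, in percent."""
    return statistics.quantiles(irrs, n=4, method="inclusive")

# Illustrative assumption: 15 reported net IRRs drawn from Normal(9%, 4).
sample = [random.gauss(9.0, 4.0) for _ in range(15)]
print("estimated quartiles:", [round(q, 2) for q in quartiles(sample)])

# One mistyped contribution -- say, a 45% IRR -- has nothing to cancel it
# out in a sample this small, and drags the quartile boundaries with it.
corrupted = sample + [45.0]
print("with one bad point: ", [round(q, 2) for q in quartiles(corrupted)])
```

With thousands of observations the bad point would be diluted or flagged as an outlier; with 15 it simply becomes part of the "market" quartiles.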
Ranking and selecting
Ranking and selecting managers based on quartiles is not just a matter of sorting funds by IRR and picking the top of the list.
The notion of quartile implies an underlying statistical distribution of returns and a relative ranking – for example, ranking funds or managers by quartile is a basic form of performance benchmarking.
Using quartiles to rank observations requires either knowing the underlying distribution of returns or observing a sufficiently large number of realised performance metrics to estimate the quartiles of that distribution with reasonable accuracy.
However, the distribution of private infrastructure fund returns in a given year is unknown and unobservable, and using sparse contributed performance data to estimate quartile boundaries leads to unreliable results due to the paucity of available data.
For example, looking at the Preqin dataset of unlisted infrastructure fund performance metrics, recent vintage years typically exhibit between 10 and 20 contributors for net IRRs and between 15 and 35 contributors for net multiples.
As of Q3 2021, the full Preqin dataset includes 228 observations of infrastructure fund IRRs, and only reaches 10 or more observations per vintage from 2006 onwards. Thereafter, the number of available observations per vintage ranges from eight in 2009 to 24 in 2016, with an average of 15 observations per vintage year.
What are the consequences of using such small samples to describe the empirical quartiles of the underlying distribution of returns?
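One way to see the consequences is to repeatedly draw 15-fund samples from a known distribution and watch how far the estimated quartile boundaries move from draw to draw. The sketch below is illustrative only: it assumes peer net IRRs are normally distributed with a 9 percent mean and a four-point standard deviation, so the true top-quartile boundary is about 11.7 percent.

```python
import random
import statistics

random.seed(1)

# Illustrative assumption: peer net IRRs ~ Normal(mean 9%, sd 4 points),
# so the true top-quartile boundary is 9 + 4 * 0.6745 ~= 11.7%.
def estimated_q3(n_funds=15):
    """Top-quartile boundary estimated from one small sample of peers."""
    irrs = [random.gauss(9.0, 4.0) for _ in range(n_funds)]
    return statistics.quantiles(irrs, n=4, method="inclusive")[2]

q3_estimates = [estimated_q3() for _ in range(10_000)]
lo, hi = min(q3_estimates), max(q3_estimates)
sd = statistics.pstdev(q3_estimates)
print(f"estimated top-quartile boundary ranges from {lo:.1f}% to {hi:.1f}% "
      f"(sd {sd:.1f} points) around a true value of ~11.7%")
```

The boundary that decides whether a fund is "top quartile" is itself uncertain by more than a full percentage point either way – the same fund can land in different quartiles depending purely on which 15 peers happened to report.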
A quartile lottery?
Such paucity of performance data for infrastructure funds means that asset managers can struggle to demonstrate whether they are performing adequately or not, while investors are left none the wiser about the skills or performance persistence of their asset managers. Assessing infra fund managers based on contributed IRR quartiles is, in fact, a very unfair lottery.
EDHECinfra has developed a solution to this endemic data paucity problem in the private infrastructure fund space with a new Fund Strategy Analyser component of its infraMetrics platform. The infraMetrics Fund Strategy Analyser (iFSA) provides quartile estimates of the performance of unlisted infrastructure investment funds.
It uses the infraMetrics database to mimic the typical behaviour of private infrastructure investment funds and produce robust estimates of the IRR, multiples and PME quartiles that would be reported if thousands of funds existed in the market and faithfully reported their performance data in each segment and each vintage, every quarter.
This tool uses several assumptions about the investment period, size, number of investments, etc, of each fund, which have been validated in beta trials with the industry and documented using historical information on fundraising, dry powder and more. iFSA is updated on the 10th working day of each quarter, ensuring timely comparisons with other asset classes and fund performance reports.
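The mechanics can be sketched in a few lines. The code below is a toy version of such a bottom-up simulation, not the infraMetrics methodology: the fund structure, cash yields and exit values are invented purely for illustration. The point is that simulating thousands of hypothetical funds from asset-level cashflows yields stable quartile estimates where a handful of contributed data points cannot.

```python
import random
import statistics

random.seed(0)

def irr(cashflows, lo=-0.5, hi=1.0, tol=1e-6):
    """Annual IRR by bisection: the rate r at which NPV(cashflows, r) = 0."""
    def npv(r):
        return sum(cf / (1 + r) ** t for t, cf in enumerate(cashflows))
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def simulate_fund(n_assets=8, hold_years=10):
    """One hypothetical fund: equal stakes in n_assets assets, each paying
    a cash yield and a noisy exit value (all parameters are illustrative)."""
    flows = [0.0] * (hold_years + 1)
    for _ in range(n_assets):
        flows[0] -= 1.0                                      # capital call
        for t in range(1, hold_years):
            flows[t] += random.uniform(0.05, 0.10)           # cash yield
        flows[hold_years] += random.lognormvariate(0.1, 0.35)  # exit value
    return irr(flows)

# Thousands of simulated funds per segment and vintage give stable
# quartile boundaries, where 15 contributed observations cannot.
irrs = [100 * simulate_fund() for _ in range(5_000)]
q1, med, q3 = statistics.quantiles(irrs, n=4, method="inclusive")
print(f"simulated net IRR quartiles: Q1={q1:.1f}% median={med:.1f}% Q3={q3:.1f}%")
```

Because the inputs are asset-level cashflows and valuations rather than voluntary fund reports, a simulation of this kind suffers neither reporting lag nor selection bias.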
In back-tests, we compare the infraMetrics net IRR fund simulation results and the Preqin dataset on an aggregate basis for the period 2005-18. While this creates a backwards-looking bias that precludes using such results to benchmark funds today, the bias is common to both datasets and, with more than 200 data points, the Preqin quartile boundaries are more accurately estimated.
Simulated results also fall within the confidence interval of contributed data points. Thus, the largest available sample of contributed data agrees with the simulation results about the overall distribution of the data taken in aggregate over 13 vintage years. This is a first validation of the ability of simulation to generate market-like results.
Application 1: Manager selection
Manager selection due diligence invariably hinges on past performance: quartile ranks of historical funds and any signs of top-performance persistence. In a new paper describing this research, we compare the historical track record of four private infrastructure equity fund managers and illustrate the importance of data in quartile ranking. We look at the track record of nine funds managed by four managers as of June 2021. The funds are of vintages 2010 to 2018 and cover core, core-plus and opportunistic strategies.
At the granular level, we find significant differences between the contributed results and simulated results that span all possible outcomes. We note cases of type II error (false negatives) in manager selection, where a manager ranked a top performer by iFSA benchmarking is placed in lower quartiles using Preqin data.
Type I errors (false positives) abound as well with the contributed data, which could lead to a false conclusion of a superior quartile rank, resulting in investors mistakenly selecting a poorly performing manager.
Application 2: Fund monitoring
In the paper, we also show how two seemingly identical funds – in terms of vintage, size and strategy – have different reported net TVPIs, which could lead to the conclusion that one of the two made poorer investment decisions. While true, this conclusion is incomplete.
The figure below shows the annual performance benchmarking of these two funds against the quartiles of 2014 vintage funds in each year since 2014. Until 2017, fund 1 was, in fact, the outperformer and in the top quartile of all the funds before moving to the bottom quartile the following year.
Fund 2 follows the typical J-curve and gradually moves to the top quartile in 2021. Whatever issues fund 1 faced in 2018, it is only from that point onwards that it became the lower performer.
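Annual quartile benchmarking of this kind is mechanical once robust per-year boundaries exist. The sketch below uses entirely invented net TVPI figures – the article's real figures are in the chart – to show how two funds' quartile ranks can be tracked year by year against a peer group.

```python
import statistics

def quartile_rank(value, peers):
    """1 = top quartile, 4 = bottom quartile, against peer boundaries."""
    q1, med, q3 = statistics.quantiles(peers, n=4, method="inclusive")
    if value >= q3:
        return 1
    if value >= med:
        return 2
    if value >= q1:
        return 3
    return 4

# Entirely illustrative net TVPIs for a 2014-vintage peer group and two
# funds: fund 1 starts strong then slips; fund 2 follows a classic J-curve.
peers = {
    2015: [0.85, 0.90, 0.92, 0.95, 0.97, 1.00, 1.02, 1.05],
    2018: [1.05, 1.10, 1.15, 1.20, 1.25, 1.30, 1.35, 1.40],
    2021: [1.30, 1.40, 1.50, 1.55, 1.60, 1.70, 1.80, 1.90],
}
fund1 = {2015: 1.04, 2018: 1.08, 2021: 1.32}
fund2 = {2015: 0.88, 2018: 1.24, 2021: 1.85}

for year in peers:
    print(year,
          "fund 1 quartile:", quartile_rank(fund1[year], peers[year]),
          "| fund 2 quartile:", quartile_rank(fund2[year], peers[year]))
```

A single point-in-time TVPI comparison would miss exactly the reversal this trajectory makes visible.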
With regular monitoring against a robust benchmark, LPs can confidently discuss these performance issues with the managers and understand the return drivers better.
Advantages of simulated data
Simulated results are both congruent with contributed data in back-tests at the aggregate level over a long period, and more robust and precise at the vintage-year or sub-segment level.
The alignment of the results with market data stems from the use of market valuations and realised asset-level cashflows as the inputs of a bottom-up simulation. Meanwhile, the key advantages of generating a large number of observations for a large selection of possible funds are avoiding selection and survivorship biases, producing robust quartile boundary estimates, and giving access to granular fund strategies and up-to-date data.