Black & Litterman#
This tutorial shows how to use the BlackLitterman estimator in the MeanRisk optimization.
A Prior Estimator in skfolio fits a ReturnDistribution containing your pre-optimization inputs (\(\mu\), \(\Sigma\), returns, sample weight, Cholesky decomposition).
The term “prior” is used in a general optimization sense, not confined to Bayesian priors. It denotes any a priori assumption or estimation method for the return distribution before optimization, unifying Frequentist, Bayesian and Information-theoretic approaches into a single cohesive framework:
- Frequentist: for example, EmpiricalPrior
- Bayesian: for example, BlackLitterman
- Information-theoretic: for example, EntropyPooling
In skfolio’s API, all such methods share the same interface and adhere to scikit-learn’s estimator API: the fit method accepts X (the asset returns) and stores the resulting ReturnDistribution in its return_distribution_ attribute.
The ReturnDistribution is a dataclass containing:
- mu: Estimated expected returns of shape (n_assets,)
- covariance: Estimated covariance matrix of shape (n_assets, n_assets)
- returns: (Estimated) asset returns of shape (n_observations, n_assets)
- sample_weight: Sample weight for each observation of shape (n_observations,) (optional)
- cholesky: Lower-triangular Cholesky factor of the covariance (optional)
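As a minimal sketch of this interface (not part of the original example; it assumes the EmpiricalPrior estimator from skfolio.prior, the library’s default empirical prior), a prior estimator can be fitted and its ReturnDistribution inspected as follows:
from skfolio.datasets import load_sp500_dataset
from skfolio.preprocessing import prices_to_returns
from skfolio.prior import EmpiricalPrior

# Daily returns of the 20 assets, as in the Data section below
returns = prices_to_returns(load_sp500_dataset())

# Fit the prior and access the resulting ReturnDistribution
prior = EmpiricalPrior().fit(returns)
dist = prior.return_distribution_
dist.mu.shape          # (n_assets,)
dist.covariance.shape  # (n_assets, n_assets)
dist.returns.shape     # (n_observations, n_assets)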
The BlackLitterman estimator estimates the ReturnDistribution using the Black & Litterman model. It takes a Bayesian approach by starting from a prior estimate of the assets’ expected returns and covariance matrix, then updating them with the analyst’s views to obtain the posterior estimates.
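For reference, the textbook form of the Black & Litterman posterior expected returns (the exact parameterization and defaults used by skfolio may differ) combines the prior returns \(\Pi\) and covariance \(\Sigma\) with the views, encoded by a picking matrix \(P\), view values \(Q\), view uncertainty \(\Omega\) and a scaling parameter \(\tau\):
\[
\mu_{\text{BL}} = \left[(\tau\Sigma)^{-1} + P^\top \Omega^{-1} P\right]^{-1}
\left[(\tau\Sigma)^{-1}\Pi + P^\top \Omega^{-1} Q\right]
\]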
In this tutorial we will build a Maximum Sharpe Ratio portfolio using the BlackLitterman estimator.
Data#
We load the S&P 500 dataset composed of the daily prices of 20 assets from the SPX Index composition starting from 1990-01-02 up to 2022-12-28:
from plotly.io import show
from sklearn.model_selection import train_test_split
from skfolio import Population, RiskMeasure
from skfolio.datasets import load_sp500_dataset
from skfolio.optimization import MeanRisk, ObjectiveFunction
from skfolio.preprocessing import prices_to_returns
from skfolio.prior import BlackLitterman
prices = load_sp500_dataset()
X = prices_to_returns(prices)
X_train, X_test = train_test_split(X, test_size=0.33, shuffle=False)
Analyst views#
Let’s assume we are able to accurately estimate views about the future realization of the market. We estimate that Apple will have an expected return of 25% p.a. (absolute view) and will outperform General Electric by 22% p.a. (relative view). We also estimate that JPMorgan will outperform General Electric by 15% p.a. (relative view). By converting these annualized estimates into daily estimates to be homogeneous with the input X, we get:
analyst_views = [
"AAPL == 0.00098",
"AAPL - GE == 0.00086",
"JPM - GE == 0.00059",
]
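As a quick sanity check of the conversion (an illustrative sketch assuming roughly 255 trading days per year), the daily figures follow from dividing the annualized views by the number of trading days:
0.25 / 255  # ≈ 0.00098 (absolute view on AAPL)
0.22 / 255  # ≈ 0.00086 (AAPL minus GE)
0.15 / 255  # ≈ 0.00059 (JPM minus GE)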
Black & Litterman Model#
We create a Maximum Sharpe Ratio model using the Black & Litterman estimator that we fit on the training set:
model_bl = MeanRisk(
risk_measure=RiskMeasure.VARIANCE,
objective_function=ObjectiveFunction.MAXIMIZE_RATIO,
prior_estimator=BlackLitterman(views=analyst_views),
portfolio_params=dict(name="Black & Litterman"),
)
model_bl.fit(X_train)
model_bl.weights_
array([4.73688339e-01, 2.66641803e-02, 2.20094198e-07, 2.02229897e-02,
7.84166754e-03, 3.01723522e-08, 5.20800181e-07, 2.47479547e-03,
3.27262165e-01, 4.77426684e-03, 1.73055734e-02, 3.34208578e-02,
2.10139601e-03, 1.65511656e-02, 2.20704187e-06, 8.58902957e-07,
3.40428153e-02, 3.36417527e-02, 6.20216084e-07, 3.57834054e-06])
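To see how the views shifted the inputs, one can inspect the posterior moments of the fitted prior (a hedged sketch: it assumes the fitted prior is exposed on the optimization as prior_estimator_, following scikit-learn’s trailing-underscore convention):
# Posterior ReturnDistribution produced by the Black & Litterman estimator
posterior = model_bl.prior_estimator_.return_distribution_
posterior.mu          # posterior expected returns, shape (n_assets,)
posterior.covariance  # posterior covariance, shape (n_assets, n_assets)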
Empirical Model#
For comparison, we also create a Maximum Sharpe Ratio model using the default Empirical estimator:
model_empirical = MeanRisk(
risk_measure=RiskMeasure.VARIANCE,
objective_function=ObjectiveFunction.MAXIMIZE_RATIO,
portfolio_params=dict(name="Empirical"),
)
model_empirical.fit(X_train)
model_empirical.weights_
array([9.43631399e-02, 1.13184579e-06, 5.04970598e-07, 1.20834667e-01,
3.18126275e-02, 8.57806907e-07, 7.11596802e-04, 1.24104939e-01,
9.49223801e-07, 2.77547553e-02, 1.23409042e-06, 1.37593860e-06,
1.16299875e-01, 5.73516411e-02, 9.58498589e-06, 1.09493919e-01,
8.64761638e-02, 1.83992252e-01, 1.32350165e-02, 3.35537683e-02])
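For a quick side-by-side view of the two allocations (an optional sketch, assuming the optimized weights are ordered like the columns of X_train):
import pandas as pd

# Collect both weight vectors in a single table indexed by asset name
weights = pd.DataFrame(
    {
        "Black & Litterman": model_bl.weights_,
        "Empirical": model_empirical.weights_,
    },
    index=X_train.columns,
)
weights.round(3)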
Prediction#
We predict both models on the test set:
pred_bl = model_bl.predict(X_test)
pred_empirical = model_empirical.predict(X_test)
population = Population([pred_bl, pred_empirical])
population.plot_cumulative_returns()
Because our views were accurate, the Black & Litterman model outperformed the Empirical model on the test set. From the composition below, we can see that Apple and JPMorgan were allocated more weight:
fig = population.plot_composition()
show(fig)
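Beyond the plots, a fuller comparison of metrics is available (an optional sketch relying on the Population summary table):
population.summary()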