Distributionally Robust CVaR#
This tutorial introduces the DistributionallyRobustCVaR
model.
The Distributionally Robust CVaR model constructs a Wasserstein ball in the space of multivariate and non-discrete probability distributions, centered at the uniform distribution on the training samples, and finds the allocation that minimizes the CVaR of the worst-case distribution within this ball.
Mohajerin Esfahani and Kuhn (2018) proved that, for piecewise linear objective functions (which is the case for CVaR, as formulated by Rockafellar and Uryasev), the distributionally robust optimization problem over a Wasserstein ball can be reformulated as a finite convex program.
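As a reminder of why the objective is piecewise linear, the Rockafellar–Uryasev formulation writes the sample CVaR at level \(\alpha\) of a portfolio \(w\) as (a standard sketch, with \(r_t\) the asset-return vector of sample \(t\) out of \(T\)):

```latex
\mathrm{CVaR}_{\alpha}(w) \;=\; \min_{\tau \in \mathbb{R}} \;
\tau + \frac{1}{(1-\alpha)\,T} \sum_{t=1}^{T}
\max\!\left(-w^{\top} r_{t} - \tau,\; 0\right)
```

Each term \(\max(\cdot, 0)\) is piecewise linear in \((w, \tau)\), which is what makes the Wasserstein reformulation into a finite convex program possible.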
It is advised to use a solver that handles a high number of constraints, such as Mosek. For accessibility, this example uses the default open-source solver CLARABEL, so to speed up convergence we only use 3 years of data.
The radius of the Wasserstein ball is controlled with the wasserstein_ball_radius
parameter. Increasing the radius will increase the uncertainty about the
distribution, bringing the weights closer to the equal weighted portfolio.
Data#
We load the S&P 500 dataset composed of the daily prices of 20 assets from the S&P 500 Index composition starting from 2020-01-02 up to 2022-12-28:
from plotly.io import show
from sklearn.model_selection import train_test_split
from skfolio import Population
from skfolio.datasets import load_sp500_dataset
from skfolio.optimization import DistributionallyRobustCVaR, EqualWeighted
from skfolio.preprocessing import prices_to_returns
prices = load_sp500_dataset()
prices = prices["2020":]
X = prices_to_returns(prices)
X_train, X_test = train_test_split(X, test_size=0.5, shuffle=False)
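Because returns form a time series, `shuffle=False` keeps the split chronological: the test set lies strictly after the training set, avoiding look-ahead leakage. A minimal sketch of this, using a stand-in index rather than the actual dates:

```python
import numpy as np

# shuffle=False cuts the sample in order, so every test observation
# is later in time than every training observation (stand-in data)
dates = np.arange(10)            # pretend these are ordered trading days
split = len(dates) // 2
train, test = dates[:split], dates[split:]
print(train.max() < test.min())  # no look-ahead leakage
```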
Model#
We create four distributionally robust CVaR models with different radii, then fit them on the training set:
model1 = DistributionallyRobustCVaR(
wasserstein_ball_radius=0.1,
portfolio_params=dict(name="Distributionally Robust CVaR - 0.1"),
)
model1.fit(X_train)
model2 = DistributionallyRobustCVaR(
wasserstein_ball_radius=0.01,
portfolio_params=dict(name="Distributionally Robust CVaR - 0.01"),
)
model2.fit(X_train)
model3 = DistributionallyRobustCVaR(
wasserstein_ball_radius=0.001,
portfolio_params=dict(name="Distributionally Robust CVaR - 0.001"),
)
model3.fit(X_train)
model4 = DistributionallyRobustCVaR(
wasserstein_ball_radius=0.0001,
portfolio_params=dict(name="Distributionally Robust CVaR - 0.0001"),
)
model4.fit(X_train)
model4.weights_
array([4.84531907e-11, 1.01910610e-10, 1.81629430e-11, 7.56967308e-11,
6.67249461e-11, 5.07087663e-10, 3.97503410e-02, 2.42882219e-09,
3.84348572e-11, 8.53379955e-02, 5.85983398e-02, 3.38460704e-01,
9.62545194e-11, 7.42768380e-11, 6.90123331e-10, 6.06982611e-10,
4.16112642e-02, 1.18803064e-09, 4.36241349e-01, 1.34914228e-10])
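The optimal allocation concentrates on a handful of assets; the near-zero entries (around 1e-10) are solver tolerance noise. A quick sanity check on the printed weights (a sketch using numpy, with the array copied from the output above):

```python
import numpy as np

# Weights copied from the model4 output above
weights = np.array([
    4.84531907e-11, 1.01910610e-10, 1.81629430e-11, 7.56967308e-11,
    6.67249461e-11, 5.07087663e-10, 3.97503410e-02, 2.42882219e-09,
    3.84348572e-11, 8.53379955e-02, 5.85983398e-02, 3.38460704e-01,
    9.62545194e-11, 7.42768380e-11, 6.90123331e-10, 6.06982611e-10,
    4.16112642e-02, 1.18803064e-09, 4.36241349e-01, 1.34914228e-10,
])
# Fully invested: weights sum to 1 (up to solver tolerance)
print(round(weights.sum(), 6))
# Only a few of the 20 assets receive a meaningful allocation
print(int((weights > 1e-3).sum()))
```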
To compare the models, we use an equal-weighted benchmark built with the EqualWeighted estimator:
bench = EqualWeighted()
bench.fit(X_train)
bench.weights_
array([0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05,
0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05])
Prediction#
We predict each model and the benchmark on the test set:
ptf_model1_test = model1.predict(X_test)
ptf_model2_test = model2.predict(X_test)
ptf_model3_test = model3.predict(X_test)
ptf_model4_test = model4.predict(X_test)
ptf_bench_test = bench.predict(X_test)
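Conceptually, each predicted portfolio applies the fitted weights to the test-set returns. A simplified numpy sketch of that computation (using hypothetical stand-in data, and assuming compounded returns; skfolio's Portfolio object handles this internally):

```python
import numpy as np

# Stand-in daily returns and weights for illustration only
rng = np.random.default_rng(0)
X_test_demo = rng.normal(0.0005, 0.01, size=(250, 4))  # 250 days, 4 assets
w = np.array([0.25, 0.25, 0.25, 0.25])                 # hypothetical weights

# Portfolio return each day is the weighted sum of asset returns;
# compounding them gives the cumulative return path
ptf_returns = X_test_demo @ w
cumulative = np.cumprod(1.0 + ptf_returns) - 1.0
print(cumulative.shape)  # one cumulative return per test day
```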
Analysis#
We load all predicted portfolios into a Population and plot their compositions:
population = Population(
[ptf_model1_test, ptf_model2_test, ptf_model3_test, ptf_model4_test, ptf_bench_test]
)
population.plot_composition()
We can see that increasing the radius of the Wasserstein ball brings the weights closer to the equal-weighted portfolio.
Let’s plot the portfolios’ cumulative returns:
fig = population.plot_cumulative_returns()
show(fig)
Total running time of the script: (0 minutes 14.911 seconds)