skfolio.metrics.mahalanobis_calibration_loss#

skfolio.metrics.mahalanobis_calibration_loss(estimator, X_test, y=None)[source]#

Mahalanobis calibration loss.

Computes the absolute deviation of mahalanobis_calibration_ratio from its calibration target of 1.0.

Let \(r_t\) be the one-period realized return vector at time \(t\), and let \(R^{(h)} = \sum_{t=1}^{h} r_t\) be the aggregated return over an evaluation window of \(h\) observations.

\[\ell = \left\lvert \frac{{R^{(h)}}^\top (h\,\Sigma)^{-1} R^{(h)}}{n} - 1 \right\rvert\]

where \(n\) is the number of assets. Under i.i.d. Gaussian returns with covariance \(\Sigma\), the aggregated return \(R^{(h)}\) has covariance \(h\,\Sigma\), so the quadratic form \({R^{(h)}}^\top (h\,\Sigma)^{-1} R^{(h)}\) follows a chi-squared distribution with \(n\) degrees of freedom and expectation \(n\); the ratio therefore has a calibration target of 1.0.

As with mahalanobis_calibration_ratio, heavy tails and regime changes can weaken the Gaussian reference. This loss is therefore often most useful for relative comparisons across covariance estimators rather than as an absolute measure of fit.

Parameters:
estimator : BaseEstimator

Fitted estimator, must expose covariance_ or return_distribution_.covariance.

X_test : array-like of shape (n_observations, n_assets)

Realized returns for the test window.

y : Ignored

Present for scikit-learn API compatibility.

Returns:
float

Calibration loss. Lower values are better and the optimum is 0.0.

See also

mahalanobis_calibration_ratio

The underlying calibration ratio.

diagonal_calibration_loss

Loss using only marginal variances.
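The loss can be reproduced directly from the formula above. Below is a minimal sketch using only NumPy, assuming i.i.d. Gaussian one-period returns; the covariance `sigma`, the window length `h`, and the simulation setup are illustrative assumptions, not part of the skfolio API:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, h = 5, 250

# Illustrative "true" covariance used to simulate one-period returns
# (diagonal, i.e. uncorrelated assets, purely for the sketch).
sigma = np.diag(np.full(n_assets, 1e-4))
returns = rng.multivariate_normal(np.zeros(n_assets), sigma, size=h)

# Aggregated return over the evaluation window of h observations.
R = returns.sum(axis=0)

# Mahalanobis calibration ratio: R^T (h * Sigma)^{-1} R / n, target 1.0.
ratio = R @ np.linalg.solve(h * sigma, R) / n_assets

# Calibration loss: absolute deviation from the target of 1.0.
loss = abs(ratio - 1.0)
print(f"ratio={ratio:.3f}  loss={loss:.3f}")
```

In practice the estimated covariance would come from a fitted estimator's `covariance_` attribute rather than the true `sigma`, and a ratio far from 1.0 would indicate that the estimator over- or under-states the dispersion of aggregated returns.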