
Portfolio Regime Research with RLX

RLXBT
December 12, 2025

Overview

This article demonstrates an institutional-style research workflow for regime-aware trading systems using RLX Backtester tools.

We will move through three stages:

  1. A multi-strategy portfolio backtest
  2. Regime-enriched trade breakdowns
  3. Per-strategy allowlists and global regime candidates

All analysis is driven by realized trade outcomes — not subjective regime definitions.

The core primitive enabling this workflow is:

Backtester.breakdowns(result, data=data)

This method enriches each trade with entry regime labels such as:

uptrend:vol=high:atr=high
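
Each label is a colon-separated composite: a trend state followed by bucketed indicator readings (here volatility and ATR). When you want to regroup on a single dimension, a small helper is enough to split the label apart. The function below only illustrates the format; it is not part of the RLX API:

def split_regime_label(label: str) -> dict:
    """Split a composite label like 'uptrend:vol=high:atr=high' into parts."""
    trend, *buckets = label.split(":")
    parts = {"trend": trend}
    for bucket in buckets:  # e.g. "vol=high", "atr=high"
        key, _, value = bucket.partition("=")
        parts[key] = value
    return parts

# split_regime_label("uptrend:vol=high:atr=high")
# -> {'trend': 'uptrend', 'vol': 'high', 'atr': 'high'}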

Why Regime Research Matters

Most strategies fail not because the signal logic is wrong, but because they trade in the wrong environments.

Instead of asking:

“What regime should this strategy trade?”

We ask:

“In which regimes did this strategy actually make money?”

This allows us to:

  • Disable trading in hostile regimes
  • Build guardrails instead of curve-fits (a minimal sketch follows this list)
  • Share regime knowledge across strategies
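
As a concrete picture of such a guardrail, a regime allowlist can be applied as a post-filter on a strategy's signals. The sketch below is not RLX's built-in filter (see strategy_regime_filter_demo.py under Next Steps for that); it assumes you already have a per-bar regime label series aligned with your signal index, and it simply blocks entries outside the allowlist:

import pandas as pd


def apply_regime_allowlist(signals: pd.DataFrame,
                           bar_regimes: pd.Series,
                           allowed_entry_regimes: set) -> pd.DataFrame:
    """Zero out signals on bars whose regime label is not in the allowlist.

    `bar_regimes` is assumed to hold labels such as
    'uptrend:vol=high:atr=high', indexed like `signals`.
    """
    filtered = signals.copy()
    blocked = ~bar_regimes.reindex(signals.index).isin(allowed_entry_regimes)
    filtered.loc[blocked, "signal"] = 0  # no new entries in hostile regimes
    return filtered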

Prerequisites

Set your RLX license key:

export RLX_LICENSE_KEY="rlx_free_..."

(Optional) Persist generated allowlists:

export RLX_ALLOWLIST_OUT="/tmp/rlx_allowlists.json"
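
Both are plain environment variables read at runtime with os.getenv. The script below uses the license key this way; the allowlist path works the same if you add the optional JSON export step shown later:

import os

license_key = os.getenv("RLX_LICENSE_KEY")      # passed into the backtest calls
allowlist_out = os.getenv("RLX_ALLOWLIST_OUT")  # optional path for persisting allowlists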

Dataset

The demo uses:

data/BTCUSDT_1h_with_indicators.csv

This dataset includes:

  • high, low, close
  • volatility and ATR-derived indicators

These columns enable ATR-bucket regime enrichment during breakdown analysis.
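
For intuition, ATR bucketing means labelling each bar by whether its ATR is high or low relative to its own recent history. The exact bucket definitions used by Backtester.breakdowns are internal to RLX; the rough sketch below simply splits a 14-period ATR at a rolling median (the 500-bar window is an arbitrary choice) to show the idea:

import pandas as pd


def illustrative_atr_bucket(df: pd.DataFrame, period: int = 14) -> pd.Series:
    """Label bars 'atr=high' / 'atr=low' by comparing ATR to its rolling median.

    Illustration only; not the bucket definition RLX uses internally.
    """
    prev_close = df["close"].shift(1)
    true_range = pd.concat([df["high"] - df["low"],
                            (df["high"] - prev_close).abs(),
                            (df["low"] - prev_close).abs()], axis=1).max(axis=1)
    atr = true_range.rolling(period).mean()
    is_high = atr > atr.rolling(500).median()  # 500-bar window: arbitrary, tune as needed
    return is_high.map({True: "atr=high", False: "atr=low"})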


The Full Research Script

Below is the complete, self-contained Python file used in this article.
You can copy it directly into your project.


#!/usr/bin/env python3
"""Portfolio Regime Research Demo (Institutional + Research helpers)

Goal:
- Run a multi-strategy portfolio via PortfolioManager
- For each strategy result, compute research breakdowns with market-regime enrichment
  using Backtester.breakdowns(..., data=data)
- Compare which entry regimes are profitable per strategy
"""

import os
import sys
from typing import Optional

import pandas as pd

# Add project root to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../../"))

from rlxbt import Strategy, load_data, PortfolioManager, Backtester


class SmaCrossover(Strategy):
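    """Long when the fast SMA is above the slow SMA, short when below."""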
    def __init__(self, fast: int = 10, slow: int = 30):
        super().__init__()
        self.fast = fast
        self.slow = slow

    def generate_signals(self, data: pd.DataFrame) -> pd.DataFrame:
        close = data["close"]
        fast_ma = close.rolling(self.fast).mean()
        slow_ma = close.rolling(self.slow).mean()

        signal = pd.Series(0, index=data.index)
        signal[fast_ma > slow_ma] = 1
        signal[fast_ma < slow_ma] = -1
        return pd.DataFrame({"signal": signal}, index=data.index)


class RsiMeanReversion(Strategy):
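    """Contrarian RSI: long when oversold, short when overbought."""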
    def __init__(self, period: int = 14, oversold: float = 30.0, overbought: float = 70.0):
        super().__init__()
        self.period = period
        self.oversold = oversold
        self.overbought = overbought

    def _rsi(self, prices: pd.Series) -> pd.Series:
        delta = prices.diff()
        gain = delta.clip(lower=0).rolling(self.period).mean()
        loss = (-delta.clip(upper=0)).rolling(self.period).mean()
        rs = gain / loss.replace(0, pd.NA)
        return 100 - (100 / (1 + rs))

    def generate_signals(self, data: pd.DataFrame) -> pd.DataFrame:
        close = data["close"]
        rsi = self._rsi(close)

        signal = pd.Series(0, index=data.index)
        signal[rsi < self.oversold] = 1
        signal[rsi > self.overbought] = -1
        return pd.DataFrame({"signal": signal}, index=data.index)


class BollingerBreakout(Strategy):
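    """Long on closes above the upper band, short on closes below the lower band."""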
    def __init__(self, lookback: int = 20, num_std: float = 2.0):
        super().__init__()
        self.lookback = lookback
        self.num_std = num_std

    def generate_signals(self, data: pd.DataFrame) -> pd.DataFrame:
        close = data["close"]
        ma = close.rolling(self.lookback).mean()
        std = close.rolling(self.lookback).std()
        upper = ma + self.num_std * std
        lower = ma - self.num_std * std

        signal = pd.Series(0, index=data.index)
        signal[close > upper] = 1
        signal[close < lower] = -1
        return pd.DataFrame({"signal": signal}, index=data.index)


def main(license_key: Optional[str] = None) -> None:
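    # Load the hourly dataset and keep only the most recent 12,000 bars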
    data = load_data("data/BTCUSDT_1h_with_indicators.csv")
    data = data.iloc[-12_000:]

    strategies = [
        SmaCrossover(10, 30),
        RsiMeanReversion(),
        BollingerBreakout(),
    ]

    portfolio = PortfolioManager(
        initial_capital=1_000_000,
        strategies=strategies,
        allocation="equal_weight",
    )

    # Run the multi-strategy portfolio backtest; per-strategy results live under "strategy_results"
    results = portfolio.backtest(data=data, license_key=license_key)

    # Separate Backtester instance used here only to compute research breakdowns
    research = Backtester(initial_capital=1.0, license_key=license_key)

    # Enrich each strategy's trades with entry-regime labels and print the first regime rows
    for name, result in results["strategy_results"].items():
        breakdowns = research.breakdowns(result, data=data)
        print(name)
        print(breakdowns["by_entry_regime"].head())


if __name__ == "__main__":
    main(os.getenv("RLX_LICENSE_KEY"))

What the Script Produces

For each strategy, the workflow yields:

  • Performance summary
  • Trade breakdown grouped by entry regime
  • Automatic screening into an allowlist

Then globally:

  • Regimes that perform well across multiple strategies
  • Copy/paste-ready Python lists
  • Optional JSON export for reuse

Example Output (Real Run)

Global candidates (min_strategies=2, pf_mean>=1.15)
--------------------------------------------------
entry_regime                    profit_factor_mean
uptrend:vol=high:atr=high       1.43

allowed_entry_regimes_global = [
    "uptrend:vol=high:atr=high",
]
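
The script above prints the per-strategy regime breakdowns; turning them into a global candidate list like the one shown can be a few lines of pandas. The sketch below assumes each by_entry_regime frame is indexed by regime and exposes a profit_factor column (assumed names, adjust to the actual breakdown schema), and it optionally persists the result to RLX_ALLOWLIST_OUT:

import json
import os

import pandas as pd


def screen_global_regimes(per_strategy_breakdowns: dict,
                          min_strategies: int = 2,
                          min_pf_mean: float = 1.15) -> list:
    """Keep regimes whose mean profit factor clears a bar across enough strategies.

    `per_strategy_breakdowns` maps strategy name -> by_entry_regime frame;
    the `profit_factor` column name is an assumption about that frame's schema.
    """
    rows = []
    for name, frame in per_strategy_breakdowns.items():
        for regime, row in frame.iterrows():
            rows.append({"strategy": name,
                         "entry_regime": regime,
                         "profit_factor": row["profit_factor"]})
    combined = pd.DataFrame(rows)
    stats = combined.groupby("entry_regime")["profit_factor"].agg(["count", "mean"])
    keep = stats[(stats["count"] >= min_strategies) & (stats["mean"] >= min_pf_mean)]
    allowed = sorted(keep.index)

    out_path = os.getenv("RLX_ALLOWLIST_OUT")
    if out_path:  # optional JSON persistence for reuse in other scripts
        with open(out_path, "w") as fh:
            json.dump({"allowed_entry_regimes_global": allowed}, fh, indent=2)
    return allowed

Feeding it the by_entry_regime frames collected in the script's loop would reproduce a list in the shape of allowed_entry_regimes_global above, provided the column names match.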

Why This Works

  • Regimes are discovered, not assumed
  • Screens are simple, explainable, and adjustable
  • The same regimes can be reused across portfolios

This is how institutional research teams turn backtests into policy.


Next Steps

  • Apply an allowlist live: strategy_regime_filter_demo.py
  • Load allowlists from JSON: strategy_regime_filter_from_portfolio_json_demo.py
  • Tune regime definitions or screening thresholds

Happy researching. 🚀
