Complete documentation for the RLX Backtester Python API. High-performance trading logic at your fingertips.
RLX requires a valid license key for production use. Specialized for institutional-grade research.
| Plan | Max Bars | RL | Intrabar | Multi-Strategy | Price |
|---|---|---|---|---|---|
| Starter | 500K | ❌ | ❌ | ❌ | $29/mo |
| Pro | Unlimited | ✅ | ✅ | ✅ | $79/mo |
| Institutional | Unlimited | ✅ | ✅ | ✅ | $499/mo |
The core orchestration layer. Manages ultra-low latency event simulation, automated trade execution, and comprehensive performance calculation.
```python
engine = rlx.TradingEngine(
    initial_capital=100000.0,   # Starting balance
    commission=0.001,           # 0.1% taker fee
    slippage=0.0001,            # Dynamic slippage model
    contract_size=1.0,          # Base asset unit scaling
    enable_dynamic_tp_sl=True,  # Adaptive exit logic
    license_key="rlx_pro_xxx"   # Pro/Institutional key
)
```

The engine supports two execution modes:

- `run_with_signals` — batch processing of pre-calculated signals, optimized for high-throughput vectorized data.
- Intrabar simulation — high-fidelity simulation using sub-timeframe data for accurate TP/SL resolution.
Represents a single OHLCV candle in the market data.
```python
bar = rlx.Bar(
    timestamp=1609459200,  # Unix timestamp
    open=100.0,            # Open price
    high=105.0,            # High price
    low=98.0,              # Low price
    close=103.0,           # Close price
    volume=1000.0          # Volume
)
```

A trading signal with optional Take Profit and Stop Loss levels for precise position management.

```python
signal = rlx.EnhancedSignal(
    signal=1,           # 1 = Long, -1 = Short, 0 = Flat
    take_profit=105.0,  # Optional TP price level
    stop_loss=95.0      # Optional SL price level
)
```

| Value | Meaning |
|---|---|
| `signal = 1` | Long position — buy and hold |
| `signal = -1` | Short position — sell and hold |
| `signal = 0` | Flat — close any position |
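The signal convention above is a target-position scheme: each signal states the position the engine should hold after the bar, so `0` closes and a sign flip reverses. A minimal illustrative sketch (plain Python, not part of `rlx`):

```python
# Illustrative sketch (not rlx internals): signals as target positions.
def position_changes(signals):
    """Yield (previous_position, new_position) transitions for a signal stream."""
    pos = 0  # start flat
    for sig in signals:
        yield pos, sig
        pos = sig

transitions = list(position_changes([1, 1, -1, 0]))
# long entry, hold, flip to short, close to flat
```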
Contains comprehensive results from a backtest simulation.
| Attribute | Type | Description |
|---|---|---|
| initial_capital | float | Starting capital |
| final_capital | float | Ending portfolio value |
| total_return | float | Percentage return (e.g., 0.15 for 15%) |
| total_trades | int | Number of completed trades |
| winning_trades | int | Number of profitable trades |
| losing_trades | int | Number of losing trades |
| equity_curve | List[float] | Portfolio value at each bar |
| equity_curve_timestamps | List[int] | Unix timestamps for equity curve |
| trades | List[TradeResult] | Detailed history of all trades |
| metrics | Dict[str, float] | Performance metrics (Sharpe, Drawdown, etc.) |
| trade_analysis | Dict[str, float] | Trade statistics (avg_win, avg_loss) |
| total_commission | float | Total fees paid |
| drawdown_series | List[DrawdownPoint] | Drawdown at each timestamp |
| daily_returns | List[DailyReturn] | Daily return breakdown |
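As a point of reference, `drawdown_series` can be reproduced from `equity_curve` with the standard running-peak definition. This is a hedged sketch; rlx's exact implementation may differ in detail:

```python
# Fractional drawdown from the running equity peak at each bar.
def drawdown_series(equity):
    peak = float("-inf")
    out = []
    for value in equity:
        peak = max(peak, value)
        out.append((peak - value) / peak)
    return out

dd = drawdown_series([100.0, 110.0, 99.0, 121.0])
max_dd = max(dd)  # a 10% dip from the 110 peak
```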
Details of a single completed trade.
| Attribute | Type | Description |
|---|---|---|
| entry_time | int | Unix timestamp of entry |
| exit_time | int | Unix timestamp of exit |
| entry_price | float | Price at entry |
| exit_price | float | Price at exit |
| side | str | "long" or "short" |
| pnl | float | Profit/Loss in quote currency |
| returns | float | Percentage return of the trade |
| quantity | float | Number of contracts traded |
| contract_size | float | Size of one contract |
| commission_amount | float | Commission paid for this trade |
| exit_reason | ExitReason | Why the trade was closed |
| take_profit | Optional[float] | Take profit level if set |
| stop_loss | Optional[float] | Stop loss level if set |
| direction | int | 1 for long, -1 for short |
| profit | float | Alias for pnl |
| commission | float | Alias for commission_amount |
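The relationship between these fields follows the usual conventions. A hedged sketch (not rlx's actual code) of how `pnl` and `returns` relate to the price, size, and direction fields, assuming commission is tracked separately:

```python
# direction is +1 for long, -1 for short (matching the TradeResult field).
def trade_pnl(entry_price, exit_price, quantity, contract_size, direction):
    return (exit_price - entry_price) * direction * quantity * contract_size

def trade_returns(entry_price, exit_price, direction):
    return (exit_price - entry_price) / entry_price * direction

long_pnl = trade_pnl(100.0, 103.0, 2.0, 1.0, 1)  # 6.0 in quote currency
short_ret = trade_returns(100.0, 95.0, -1)       # 0.05: a short profits on a drop
```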
Enum representing the reason for closing a position.
| Value | Description |
|---|---|
| None | No exit |
| TakeProfit | TP level hit |
| StopLoss | SL level hit |
| Signal | Strategy signal |
| EndOfData | Backtest finished |
| MaxBarsReached | Hold limit exceeded |
| MaxTimeReached | Time limit exceeded |
| NightExit | Night session exit |
| MaxDrawdown | DD limit exceeded |
| MinProfitReached | Profit target hit |
Result object returned by run_multi_strategy.
| Attribute | Type | Description |
|---|---|---|
| strategies | List[StrategyResult] | Individual results for each strategy |
| portfolio_result | BacktestResult | Combined portfolio performance |
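Conceptually, `portfolio_result` blends the per-strategy results according to `allocation_weights`. A hedged sketch of that combination under a simplifying assumption (static weights, no rebalancing — rlx may implement this differently):

```python
# Combine per-strategy equity curves into one portfolio curve.
def combine_equity(curves, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "allocation_weights must sum to 1.0"
    n = len(curves[0])
    return [sum(w * c[i] for w, c in zip(weights, curves)) for i in range(n)]

portfolio = combine_equity(
    [[100.0, 110.0], [100.0, 90.0]],  # two strategy equity curves
    [0.5, 0.5],
)
# gains and losses offset at equal weights -> [100.0, 100.0]
```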
```python
# Run multiple strategies as a portfolio
result = engine.run_multi_strategy(
    strategies=[strategy_a, strategy_b, strategy_c],
    data=market_data,
    strategy_names=["Momentum", "Mean Reversion", "Breakout"],
    allocation_weights=[0.4, 0.3, 0.3]  # Must sum to 1.0
)

# Access individual strategy results
for strat in result.strategies:
    print(f"{strat.name}: {strat.total_return:.2%}")

# Access combined portfolio
print(f"Portfolio Return: {result.portfolio_result.total_return:.2%}")
```

RLX calculates 30+ institutional-grade metrics, accessible via `result.metrics`.
| Metric | Description |
|---|---|
| total_return | Total percentage return over the backtest period |
| annual_return | Annualized return (CAGR) |
| total_trades | Total number of completed trades |
| winning_trades | Number of profitable trades |
| losing_trades | Number of losing trades |
| win_rate | Percentage of winning trades |
| profit_factor | Gross profit / Gross loss |
| avg_win | Average profit on winning trades |
| avg_loss | Average loss on losing trades |
| largest_win | Largest single winning trade |
| largest_loss | Largest single losing trade |
| avg_trade | Average P&L per trade |
| expectancy | Expected value per trade |
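The trade-level metrics above follow standard definitions. A hedged sketch (not rlx's internal code) computing them from a list of per-trade P&L values:

```python
# win_rate, profit_factor, and expectancy from per-trade pnl values.
def trade_metrics(pnls):
    wins = [p for p in pnls if p > 0]
    losses = [p for p in pnls if p < 0]
    gross_profit = sum(wins)
    gross_loss = -sum(losses)
    return {
        "win_rate": len(wins) / len(pnls),
        "profit_factor": gross_profit / gross_loss,
        "expectancy": sum(pnls) / len(pnls),  # expected P&L per trade
    }

m = trade_metrics([50.0, -20.0, 30.0, -10.0])
# win_rate 0.5, profit_factor 80/30, expectancy 12.5
```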
```python
# From BacktestResult
result = engine.run_with_signals(data)

# Access basic metrics
print(f"Sharpe Ratio: {result.metrics['sharpe_ratio']:.2f}")
print(f"Max Drawdown: {result.metrics['max_drawdown']:.2%}")
print(f"Win Rate: {result.metrics['win_rate']:.2%}")

# From DashboardResult (more detailed)
dashboard = generator.generate_dashboard(result, data)
metrics = dashboard.performance_metrics
print(f"Sortino Ratio: {metrics.sortino_ratio:.2f}")
print(f"VaR 95%: {metrics.var_95:.2%}")
print(f"Omega Ratio: {metrics.omega_ratio:.2f}")
print(f"Kelly Criterion: {metrics.kelly_criterion:.2%}")
```

Train your own trading agents using popular RL libraries such as Stable-Baselines3, Ray RLlib, or custom implementations. The RLEnvironment wraps the TradingEngine and handles state observation, action execution, and reward calculation.
```python
env = rlx.RLEnvironment(
    initial_capital=100000.0,  # Starting capital
    commission=0.0,            # Commission rate per trade
    slippage=0.0,              # Slippage per trade
    window_size=20,            # Number of past bars in observation
    exit_controller=None       # Optional ExitController instance
)
```

| Parameter | Type | Default | Description |
|---|---|---|---|
| initial_capital | float | 100000.0 | Starting capital |
| commission | float | 0.0 | Commission rate per trade |
| slippage | float | 0.0 | Slippage per trade |
| window_size | int | 20 | Past bars in observation |
| exit_controller | Optional[ExitController] | None | Custom exit logic |
| reward_type | str | "SimpleReturn" | Reward function (Supports Sharpe, Sortino, MultiObjective) |
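The exact reward formulas are internal to rlx, but the options map onto standard constructions. A hedged sketch of what a SimpleReturn and a Sharpe-style reward typically compute:

```python
import statistics

# SimpleReturn: reward is the per-step fractional change in portfolio value.
def simple_return_reward(prev_value, value):
    return (value - prev_value) / prev_value

# Sharpe-style: mean step return scaled by its volatility over recent steps.
def sharpe_reward(step_returns):
    if len(step_returns) < 2:
        return 0.0
    sd = statistics.pstdev(step_returns)
    return statistics.mean(step_returns) / sd if sd > 0 else 0.0

r = simple_return_reward(100_000.0, 101_000.0)  # 0.01
```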
Specialized environment for portfolio-wide reinforcement learning. Accepts a dictionary of DataFrames and manages multiple positions in parallel.
```python
env = rlx.RlxMultiAssetEnv(
    data={"BTC": btc_df, "ETH": eth_df},
    initial_capital=100000.0,
    window_size=32
)
```

`load_data(data)` — Loads historical market data into the environment.

Parameters:
- `data` — A pandas DataFrame containing OHLCV data

`reset()` — Resets the environment to the beginning of the data or a random starting point.

Returns:
- `(observation, info)` — Tuple of initial state vector and info dict

`step(action)` — Executes an action in the environment and advances one time step.

Parameters:
- `action` — Discrete (int) or continuous (float/array) action

Returns:
- `(observation, reward, done, truncated, info)`

Graph observation — Returns the current state as a graph-based observation for GNNs.

Returns:
- `Dict[str, np.ndarray]` — Graph data (x, edge_index, edge_attr)

The discrete action space covers three moves:
- Close position if open
- Open/flip to long
- Open/flip to short
The observation vector is a flattened list containing market data and account state:
Total dimension: `(window_size × 5) + 3`

Example: `window_size=20` → 20 × 5 + 3 = 103 features
```python
import rlx
import pandas as pd

# Load market data
df = pd.read_csv("BTCUSDT_1h.csv")

# Create environment
env = rlx.RLEnvironment(
    initial_capital=100000.0,
    window_size=10
)
env.load_data(df)

# Training loop
obs, info = env.reset()
done = False
while not done:
    action = 1  # Example: always go long
    obs, reward, done, truncated, info = env.step(action)
    print(f"Reward: {reward:.2f}%, Portfolio: ${info['portfolio_value']:.2f}")
```

For proper risk management, configure exit rules through an ExitController:
```python
import rlx

# Define exit rules
rules = rlx.ExitRules(
    hold_bars=12,              # Max 12 bars in position
    max_drawdown_percent=3.0,  # Stop loss at 3% drawdown
    min_profit_percent=1.5,    # Take profit at 1.5%
)

# Create exit controller
exit_controller = rlx.ExitController(rules)

# Create environment with exit rules
env = rlx.RLEnvironment(
    initial_capital=100000.0,
    commission=0.001,  # 0.1% commission
    slippage=0.0,
    window_size=20,
    exit_controller=exit_controller
)
env.load_data(df)
```

Real benchmark (BTCUSDT 1h, PPO agent, 100K training steps):
| Configuration | Test Return | Sharpe | Max DD | Trades |
|---|---|---|---|---|
| No Rules (baseline) | -14.01% | -0.0480 | 50.37% | 106 |
| Conservative (2% SL) | -12.14% | -0.0012 | 42.90% | 5,914 |
| Aggressive (5% SL) ⭐ | +28.50% | 0.0407 | 17.84% | 767 |
Proper exit rules dramatically improve RL agent performance by: