API Reference

Complete documentation for the RLX Backtester Python API. High-performance trading logic at your fingertips.

🔑 Licensing

RLX requires a valid license key for production use and is built for institutional-grade research.

| Plan | Max Bars | Price |
|---|---|---|
| Starter | 500K | $29/mo |
| Pro | Unlimited | $79/mo |
| Institutional | Unlimited | $499/mo |

TradingEngine

The core orchestration layer. Manages ultra-low latency event simulation, automated trade execution, and comprehensive performance calculation.

Constructor

Python Initialization
engine = rlx.TradingEngine(
    initial_capital=100000.0,   # Starting balance
    commission=0.001,           # 0.1% taker fee
    slippage=0.0001,            # Slippage rate (0.01% per trade)
    contract_size=1.0,          # Base asset unit scaling
    enable_dynamic_tp_sl=True,  # Adaptive exit logic
    license_key="rlx_pro_xxx"   # Pro/Institutional key
)

Core Methods

run_with_signals

Batch processing of pre-calculated signals. Optimized for high-throughput vectorized data.

Returns: BacktestResult
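
The exact call signature is not documented on this page (the metrics example further down invokes it as engine.run_with_signals(data)); a hedged sketch, assuming data bundles the OHLCV bars with their pre-calculated signals:

Python
# Hedged sketch: `data` is assumed to carry both the bars and the
# pre-calculated EnhancedSignal series (signature not confirmed here).
result = engine.run_with_signals(data)
print(f"Return: {result.total_return:.2%} over {result.total_trades} trades")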

run_intrabar_backtest

High-fidelity simulation using sub-timeframe data for accurate TP/SL resolution.

Returns: PrecisionResult
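
A hedged sketch of the idea: supply a finer-grained series alongside the primary bars so TP/SL fills resolve inside each bar. Every name below (load_bars, the keyword arguments) is illustrative, not the confirmed API.

Python
# Illustrative only: `load_bars` is a hypothetical helper and the
# argument names are assumptions, not the confirmed signature.
hourly_bars = load_bars("BTCUSDT_1h.csv")   # primary timeframe
minute_bars = load_bars("BTCUSDT_1m.csv")   # sub-timeframe for exit resolution
result = engine.run_intrabar_backtest(
    data=hourly_bars,
    intrabar_data=minute_bars,
)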

Data Structures

Bar

Represents a single OHLCV candle in the market data.

Python
bar = rlx.Bar(
    timestamp=1609459200,    # Unix timestamp
    open=100.0,               # Open price
    high=105.0,               # High price
    low=98.0,                 # Low price
    close=103.0,              # Close price
    volume=1000.0             # Volume
)
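
If your data lives in a pandas DataFrame, a minimal conversion sketch (assuming lowercase OHLCV columns and Unix-second timestamps in the file) looks like this:

Python
import pandas as pd
import rlx

df = pd.read_csv("BTCUSDT_1h.csv")  # columns: timestamp, open, high, low, close, volume
bars = [
    rlx.Bar(
        timestamp=int(row["timestamp"]),  # assumes Unix seconds in the file
        open=row["open"],
        high=row["high"],
        low=row["low"],
        close=row["close"],
        volume=row["volume"],
    )
    for _, row in df.iterrows()
]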

EnhancedSignal

A trading signal with optional Take Profit and Stop Loss levels for precise position management.

Python
signal = rlx.EnhancedSignal(
    signal=1,              # 1 = Long, -1 = Short, 0 = Flat
    take_profit=105.0,     # Optional TP price level
    stop_loss=95.0         # Optional SL price level
)
  • signal = 1: Long position (buy and hold)
  • signal = -1: Short position (sell and hold)
  • signal = 0: Flat (close any position)
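
For illustration, a hedged sketch that derives EnhancedSignal objects from a simple moving-average crossover (the crossover rule and the 5% TP/SL distances are arbitrary choices for the example, not library defaults):

Python
import pandas as pd
import rlx

df = pd.read_csv("BTCUSDT_1h.csv")
fast = df["close"].rolling(10).mean()
slow = df["close"].rolling(30).mean()

signals = []
for i in range(len(df)):
    if pd.isna(slow.iloc[i]):
        signals.append(rlx.EnhancedSignal(signal=0))  # flat until both SMAs exist
        continue
    direction = 1 if fast.iloc[i] > slow.iloc[i] else -1
    close = df["close"].iloc[i]
    signals.append(rlx.EnhancedSignal(
        signal=direction,
        take_profit=close * (1.05 if direction == 1 else 0.95),  # 5% from entry
        stop_loss=close * (0.95 if direction == 1 else 1.05),
    ))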

BacktestResult

Contains comprehensive results from a backtest simulation.

| Attribute | Type | Description |
|---|---|---|
| initial_capital | float | Starting capital |
| final_capital | float | Ending portfolio value |
| total_return | float | Percentage return (e.g., 0.15 for 15%) |
| total_trades | int | Number of completed trades |
| winning_trades | int | Number of profitable trades |
| losing_trades | int | Number of losing trades |
| equity_curve | List[float] | Portfolio value at each bar |
| equity_curve_timestamps | List[int] | Unix timestamps for the equity curve |
| trades | List[TradeResult] | Detailed history of all trades |
| metrics | Dict[str, float] | Performance metrics (Sharpe, drawdown, etc.) |
| trade_analysis | Dict[str, float] | Trade statistics (avg_win, avg_loss) |
| total_commission | float | Total fees paid |
| drawdown_series | List[DrawdownPoint] | Drawdown at each timestamp |
| daily_returns | List[DailyReturn] | Daily return breakdown |
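
The equity curve and its timestamps line up one-to-one, so plotting is straightforward (a sketch assuming matplotlib is installed):

Python
import matplotlib.pyplot as plt
import pandas as pd

# `result` is a BacktestResult returned by one of the run_* methods
timestamps = pd.to_datetime(result.equity_curve_timestamps, unit="s")
plt.plot(timestamps, result.equity_curve)
plt.title(f"Equity curve ({result.total_return:.2%} total return)")
plt.ylabel("Portfolio value")
plt.show()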

TradeResult

Details of a single completed trade.

Entry/Exit Info

  • entry_time (int): Unix timestamp of entry
  • exit_time (int): Unix timestamp of exit
  • entry_price (float): Price at entry
  • exit_price (float): Price at exit
  • side (str): "long" or "short"

P&L & Position

  • pnl (float): Profit/loss in quote currency
  • returns (float): Percentage return of the trade
  • quantity (float): Number of contracts traded
  • contract_size (float): Size of one contract
  • commission_amount (float): Commission paid for this trade

Exit Info

  • exit_reason (ExitReason): Why the trade was closed
  • take_profit (Optional[float]): Take profit level, if set
  • stop_loss (Optional[float]): Stop loss level, if set

Computed Properties

  • direction (int): 1 for long, -1 for short
  • profit (float): Alias for pnl
  • commission (float): Alias for commission_amount
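
These fields combine naturally into a trade log; a small sketch:

Python
# Print a one-line summary for each completed trade in a BacktestResult
for trade in result.trades:
    held_hours = (trade.exit_time - trade.entry_time) / 3600
    print(
        f"{trade.side:<5} entry={trade.entry_price:.2f} exit={trade.exit_price:.2f} "
        f"pnl={trade.pnl:+.2f} ({trade.returns:+.2%}) "
        f"held={held_hours:.1f}h reason={trade.exit_reason}"
    )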

ExitReason

Enum representing the reason for closing a position.

  • None: No exit
  • TakeProfit: TP level hit
  • StopLoss: SL level hit
  • Signal: Strategy signal
  • EndOfData: Backtest finished
  • MaxBarsReached: Hold limit exceeded
  • MaxTimeReached: Time limit exceeded
  • NightExit: Night session exit
  • MaxDrawdown: DD limit exceeded
  • MinProfitReached: Profit target hit
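
A quick way to see how trades were closed, without assuming how the enum members are exposed:

Python
from collections import Counter

# Tally exit reasons across all trades in a BacktestResult
reasons = Counter(str(t.exit_reason) for t in result.trades)
for reason, count in reasons.most_common():
    print(f"{reason}: {count}")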

MultiStrategyResult

Result object returned by run_multi_strategy.

| Attribute | Type | Description |
|---|---|---|
| strategies | List[StrategyResult] | Individual results for each strategy |
| portfolio_result | BacktestResult | Combined portfolio performance |

Python
# Run multiple strategies as a portfolio
result = engine.run_multi_strategy(
    strategies=[strategy_a, strategy_b, strategy_c],
    data=market_data,
    strategy_names=["Momentum", "Mean Reversion", "Breakout"],
    allocation_weights=[0.4, 0.3, 0.3]  # Must sum to 1.0
)

# Access individual strategy results
for strat in result.strategies:
    print(f"{strat.name}: {strat.total_return:.2%}")

# Access combined portfolio
print(f"Portfolio Return: {result.portfolio_result.total_return:.2%}")

Performance Metrics

RLX calculates 30+ institutional-grade metrics. Access via result.metrics.

Basic Metrics

| Metric | Description |
|---|---|
| total_return | Total percentage return over the backtest period |
| annual_return | Annualized return (CAGR) |
| total_trades | Total number of completed trades |
| winning_trades | Number of profitable trades |
| losing_trades | Number of losing trades |
| win_rate | Percentage of winning trades |
| profit_factor | Gross profit / gross loss |
| avg_win | Average profit on winning trades |
| avg_loss | Average loss on losing trades |
| largest_win | Largest single winning trade |
| largest_loss | Largest single losing trade |
| avg_trade | Average P&L per trade |
| expectancy | Expected value per trade |

Accessing Metrics

Python
# From BacktestResult
result = engine.run_with_signals(data)

# Access basic metrics
print(f"Sharpe Ratio: {result.metrics['sharpe_ratio']:.2f}")
print(f"Max Drawdown: {result.metrics['max_drawdown']:.2%}")
print(f"Win Rate: {result.metrics['win_rate']:.2%}")

# From DashboardResult (more detailed)
dashboard = generator.generate_dashboard(result, data)
metrics = dashboard.performance_metrics

print(f"Sortino Ratio: {metrics.sortino_ratio:.2f}")
print(f"VaR 95%: {metrics.var_95:.2%}")
print(f"Omega Ratio: {metrics.omega_ratio:.2f}")
print(f"Kelly Criterion: {metrics.kelly_criterion:.2%}")

Reinforcement Learning

🤖 Gym-Compatible RL Environment

Train your own trading agents using popular RL libraries like Stable-Baselines3, Ray RLlib, or custom implementations. The RLEnvironment wraps the TradingEngine and handles state observation, action execution, and reward calculation.

RLEnvironment Constructor

Python
env = rlx.RLEnvironment(
    initial_capital=100000.0,     # Starting capital
    commission=0.0,               # Commission rate per trade
    slippage=0.0,                 # Slippage per trade
    window_size=20,               # Number of past bars in observation
    exit_controller=None,         # Optional ExitController instance
    reward_type="SimpleReturn"    # Reward function (see table below)
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| initial_capital | float | 100000.0 | Starting capital |
| commission | float | 0.0 | Commission rate per trade |
| slippage | float | 0.0 | Slippage per trade |
| window_size | int | 20 | Past bars in observation |
| exit_controller | Optional[ExitController] | None | Custom exit logic |
| reward_type | str | "SimpleReturn" | Reward function (supports Sharpe, Sortino, MultiObjective) |

RlxMultiAssetEnv

Specialized environment for portfolio-wide reinforcement learning. Accepts a dictionary of DataFrames and manages multiple positions in parallel.

Python
env = rlx.RlxMultiAssetEnv(
    data={"BTC": btc_df, "ETH": eth_df},
    initial_capital=100000.0,
    window_size=32
)
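
A hedged stepping sketch: the per-asset action format is not specified here, so the list-of-discrete-actions shape below (one entry per asset) is an assumption.

Python
# Assumed Gym-style loop; the multi-asset action encoding is a guess
obs, info = env.reset()
done, truncated = False, False

while not (done or truncated):
    action = [1, 0]  # assumption: one discrete action per asset (long BTC, flat ETH)
    obs, reward, done, truncated, info = env.step(action)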

Methods

load_data(data)

Loads historical market data into the environment.

Parameters:

  • data — A pandas DataFrame containing OHLCV data

reset()

Resets the environment to the beginning of the data or a random starting point.

Returns:

  • (observation, info) — Tuple of initial state vector and info dict

step(action)

Executes an action in the environment and advances one time step.

Parameters:

  • action — Discrete (Int) or Continuous (Float/Array) action

Returns:

  • (observation, reward, done, truncated, info)

get_graph_observation()

Returns current state as a graph-based observation for GNNs.

Returns:

  • Dict[str, np.ndarray] — Graph data (x, edge_index, edge_attr)
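
These arrays map directly onto a PyTorch Geometric Data object; a minimal sketch, assuming PyTorch Geometric is installed and the dict keys are exactly x, edge_index, and edge_attr as listed above:

Python
import torch
from torch_geometric.data import Data

graph = env.get_graph_observation()
data = Data(
    x=torch.from_numpy(graph["x"]).float(),                   # node features
    edge_index=torch.from_numpy(graph["edge_index"]).long(),  # connectivity
    edge_attr=torch.from_numpy(graph["edge_attr"]).float(),   # edge features
)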

Action Space

| Action | Meaning | Effect |
|---|---|---|
| 0 | Hold / Neutral | Close position if open |
| 1 | Long | Open/flip to long |
| 2 | Short | Open/flip to short |

Observation Space

The observation vector is a flattened list containing market data and account state:

Market Data Window (window_size × 5)

  • Normalized Open price (relative to previous close)
  • Normalized High price (relative to previous close)
  • Normalized Low price (relative to previous close)
  • Normalized Close price (relative to previous close)
  • Log-normalized Volume

Account State (3 features)

  • Normalized Portfolio Value
  • Position Direction (1.0, 0.0, or -1.0)
  • Position Status (1.0 if open, 0.0 if closed)

Total dimension: (window_size × 5) + 3

Example: window_size=20 → 20×5 + 3 = 103 features
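
A quick sanity check of that formula (assuming reset() returns an array-like observation, with df holding OHLCV data as in the example below):

Python
import numpy as np
import rlx

env = rlx.RLEnvironment(window_size=20)
env.load_data(df)  # df: OHLCV DataFrame, as in the Basic Example below
obs, info = env.reset()
assert np.asarray(obs).shape == (20 * 5 + 3,)  # 103 features, as stated above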

Basic Example

Python
import rlx
import pandas as pd

# Load market data
df = pd.read_csv("BTCUSDT_1h.csv")

# Create environment
env = rlx.RLEnvironment(
    initial_capital=100000.0,
    window_size=10
)
env.load_data(df)

# Training loop
obs, info = env.reset()
done = False
truncated = False

while not (done or truncated):
    action = 1  # Example: always go long
    obs, reward, done, truncated, info = env.step(action)
    print(f"Reward: {reward:.2f}%, Portfolio: ${info['portfolio_value']:.2f}")

Example with Exit Rules

For proper risk management, configure exit rules through an ExitController:

Python
import rlx

# Define exit rules
rules = rlx.ExitRules(
    hold_bars=12,                # Max 12 bars in position
    max_drawdown_percent=3.0,    # Stop loss at 3% drawdown
    min_profit_percent=1.5,      # Take profit at 1.5%
)

# Create exit controller
exit_controller = rlx.ExitController(rules)

# Create environment with exit rules
env = rlx.RLEnvironment(
    initial_capital=100000.0,
    commission=0.001,            # 0.1% commission
    slippage=0.0,
    window_size=20,
    exit_controller=exit_controller
)
env.load_data(df)

Training Results with Different Exit Rules

Real benchmark (BTCUSDT 1h, PPO agent, 100K training steps):

| Configuration | Test Return | Sharpe | Max DD | Trades |
|---|---|---|---|---|
| No Rules (baseline) | -14.01% | -0.0480 | 50.37% | 106 |
| Conservative (2% SL) | -12.14% | -0.0012 | 42.90% | 5,914 |
| Aggressive (5% SL) ⭐ | +28.50% | 0.0407 | 17.84% | 767 |

💡 Key Insight

Proper exit rules dramatically improve RL agent performance by:

  • 1. Limiting downside risk with max_drawdown_percent
  • 2. Securing profits early with min_profit_percent
  • 3. Preventing overexposure with hold_bars limits