WIP UI rework with qt6

parent 36385af6f3
commit ebf232317c

.vscode/launch.json (vendored): 4 changed lines
@@ -15,8 +15,8 @@
             "console": "integratedTerminal",
             "args": [
                 "BTC-USDT",
-                "2025-06-09",
-                "2025-08-25"
+                "2025-08-26",
+                "2025-08-30"
             ]
         }
     ]
README.md: 136 lines deleted
@@ -1,136 +0,0 @@
# Orderflow Backtest System

A high-performance orderbook reconstruction and metrics analysis system for cryptocurrency trading data. Calculates Order Book Imbalance (OBI) and Cumulative Volume Delta (CVD) metrics with per-snapshot granularity.

## Features

- **Orderbook Reconstruction**: Rebuild complete orderbooks from SQLite database files
- **OBI Metrics**: Calculate Order Book Imbalance `(Vb - Va) / (Vb + Va)` per snapshot
- **CVD Metrics**: Track Cumulative Volume Delta with incremental calculation and reset functionality
- **Memory Optimization**: >70% memory reduction through persistent metrics storage
- **Real-time Visualization**: OHLC candlesticks with OBI/CVD curves beneath volume graphs
- **Batch Processing**: High-performance processing of large datasets (months to years of data)

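Both metrics reduce to a few lines per snapshot. A minimal sketch of the formulas as described above (illustrative names only, not the project's `MetricCalculator` API):

```python
# Illustrative sketch of the OBI and CVD definitions above; not the project's API.
def obi(bid_sizes: list[float], ask_sizes: list[float]) -> float:
    """Order Book Imbalance: (Vb - Va) / (Vb + Va), bounded to [-1, 1]."""
    vb, va = sum(bid_sizes), sum(ask_sizes)
    return (vb - va) / (vb + va) if (vb + va) else 0.0


def cvd(trades: list[tuple[str, float]], start: float = 0.0) -> float:
    """Cumulative Volume Delta: running buy size minus sell size."""
    total = start
    for side, size in trades:
        total += size if side == "buy" else -size
    return total
```
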
## Quick Start

### Prerequisites
- Python 3.12+
- UV package manager
- SQLite database files with orderbook and trades data

### Installation
```bash
# Install dependencies
uv sync

# Run tests to verify installation
uv run pytest
```

### Basic Usage
```bash
# Process BTC-USDT data from July 1-31, 2025
uv run python main.py BTC-USDT 2025-07-01 2025-08-01
```

## Architecture

### Core Components
- **`models.py`**: Data models (`OrderbookLevel`, `Trade`, `BookSnapshot`, `Book`, `Metric`, `MetricCalculator`)
- **`storage.py`**: Orchestrates orderbook reconstruction and metrics calculation
- **`strategies.py`**: Trading strategy framework with metrics analysis capabilities
- **`visualizer.py`**: Multi-subplot visualization (OHLC, Volume, OBI, CVD)
- **`main.py`**: CLI application entry point

### Data Layer
- **`repositories/sqlite_repository.py`**: Read-only SQLite data access
- **`repositories/sqlite_metrics_repository.py`**: Write-enabled metrics storage and retrieval
- **`parsers/orderbook_parser.py`**: Orderbook text parsing with price caching

### Testing
- **`tests/`**: Comprehensive unit and integration tests
- **Coverage**: 27 tests across 6 test files
- **Run tests**: `uv run pytest`

## Data Flow

1. **Data Loading**: SQLite databases → Repository → Raw orderbook/trades data
2. **Processing**: Storage → MetricCalculator → OBI/CVD calculation per snapshot
3. **Persistence**: Calculated metrics stored in database for future analysis
4. **Analysis**: Strategy loads stored metrics for trading signal generation
5. **Visualization**: Charts display OHLC, volume, OBI, and CVD with shared time axis

## Database Schema

### Input Tables (Required)
```sql
-- Orderbook snapshots
CREATE TABLE book (
    id INTEGER PRIMARY KEY,
    bids TEXT NOT NULL,         -- JSON array of [price, size, liquidation_count, order_count]
    asks TEXT NOT NULL,         -- JSON array of [price, size, liquidation_count, order_count]
    timestamp INTEGER NOT NULL  -- Unix timestamp
);

-- Trade executions
CREATE TABLE trades (
    id INTEGER PRIMARY KEY,
    trade_id REAL NOT NULL,
    price REAL NOT NULL,
    size REAL NOT NULL,
    side TEXT NOT NULL,         -- "buy" or "sell"
    timestamp INTEGER NOT NULL  -- Unix timestamp
);
```

### Output Table (Auto-created)
```sql
-- Calculated metrics
CREATE TABLE metrics (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    snapshot_id INTEGER NOT NULL,
    timestamp INTEGER NOT NULL,
    obi REAL NOT NULL,   -- Order Book Imbalance [-1, 1]
    cvd REAL NOT NULL,   -- Cumulative Volume Delta
    best_bid REAL,       -- Best bid price
    best_ask REAL,       -- Best ask price
    FOREIGN KEY (snapshot_id) REFERENCES book(id)
);
```

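The metrics table can also be inspected directly with the standard library, for example to spot-check stored OBI/CVD values against their source snapshots (the database path below is hypothetical):

```python
# Illustrative read of the auto-created metrics table; the .db path is hypothetical.
import sqlite3

conn = sqlite3.connect("data/BTC-USDT.db")
rows = conn.execute(
    "SELECT m.timestamp, m.obi, m.cvd, m.best_bid, m.best_ask "
    "FROM metrics m JOIN book b ON b.id = m.snapshot_id "
    "ORDER BY m.timestamp LIMIT 10"
).fetchall()
conn.close()
print(rows)
```
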
## Performance

- **Memory Usage**: >70% reduction vs. keeping full snapshot history
- **Processing Speed**: Batch processing with optimized SQLite queries
- **Scalability**: Handles months to years of high-frequency data
- **Storage Efficiency**: Metrics table <20% overhead vs. source data

## Development

### Setup
```bash
# Install development dependencies
uv add --dev pytest

# Run linting
uv run pytest --linting

# Run specific test modules
uv run pytest tests/test_storage_metrics.py -v
```

### Project Structure
```
orderflow_backtest/
├── docs/            # Documentation
├── models.py        # Core data structures
├── storage.py       # Data processing orchestrator
├── strategies.py    # Trading strategy framework
├── visualizer.py    # Chart rendering
├── main.py          # CLI application
├── repositories/    # Data access layer
├── parsers/         # Data parsing utilities
└── tests/           # Test suite
```

For detailed documentation, see [./docs/README.md](./docs/README.md).
app.py: new file, 389 lines
@@ -0,0 +1,389 @@
# app.py

import dash
from dash import html, dcc, Output, Input
import dash_bootstrap_components as dbc
import plotly.graph_objs as go
from plotly.subplots import make_subplots
import pandas as pd
import json
import logging
from pathlib import Path
from typing import Any, List, Tuple

from viz_io import DATA_FILE, DEPTH_FILE, METRICS_FILE

_LAST_DATA: List[list] = []
_LAST_DEPTH: dict = {"bids": [], "asks": []}
_LAST_METRICS: List[list] = []

app = dash.Dash(__name__, external_stylesheets=[dbc.themes.FLATLY], suppress_callback_exceptions=True)

app.layout = dbc.Container([
    dbc.Row([
        dbc.Col([
            dcc.Graph(
                id='ohlc-chart',
                config={'displayModeBar': True, 'displaylogo': False},
                style={'height': '100vh'}
            )
        ], width=9),
        dbc.Col([
            dcc.Graph(
                id='depth-chart',
                config={'displayModeBar': True, 'displaylogo': False},
                style={'height': '100vh'}
            )
        ], width=3)
    ]),
    dcc.Interval(
        id='interval-update',
        interval=500,
        n_intervals=0
    )
], fluid=True, style={
    'backgroundColor': '#000000',
    'minHeight': '100vh',
    'color': '#ffffff'
})


def build_empty_ohlc_fig() -> go.Figure:
    """Empty OHLC+Volume+OBI+CVD subplot layout to avoid layout jump before data."""
    fig = make_subplots(
        rows=4,
        cols=1,
        shared_xaxes=True,
        vertical_spacing=0.03,
        row_heights=[0.50, 0.18, 0.16, 0.16],
        subplot_titles=('OHLC', 'Volume', 'OBI', 'CVD'),
        specs=[[{"secondary_y": False}],
               [{"secondary_y": False}],
               [{"secondary_y": False}],
               [{"secondary_y": False}]]
    )
    fig.update_layout(
        xaxis_title='Time',
        yaxis_title='Price',
        template='plotly_dark',
        autosize=True,
        xaxis_rangeslider_visible=False,
        paper_bgcolor='#000000',
        plot_bgcolor='#000000',
        uirevision='ohlc_v2',
        yaxis2_title='Volume',
        yaxis3_title='OBI',
        yaxis4_title='CVD',
        showlegend=False,
        xaxis_rangeselector=dict(
            buttons=list([
                dict(count=1, label="1m", step="minute", stepmode="backward"),
                dict(count=5, label="5m", step="minute", stepmode="backward"),
                dict(count=15, label="15m", step="minute", stepmode="backward"),
                dict(count=1, label="1h", step="hour", stepmode="backward"),
                dict(step="all", label="All")
            ])
        )
    )
    fig.update_xaxes(matches='x', row=1, col=1)
    fig.update_xaxes(matches='x', row=2, col=1)
    fig.update_xaxes(matches='x', row=3, col=1)
    return fig

def build_empty_depth_fig() -> go.Figure:
    fig = go.Figure()
    fig.update_layout(
        xaxis_title='Size',
        yaxis_title='Price',
        template='plotly_dark',
        autosize=True,
        paper_bgcolor='#000000',
        plot_bgcolor='#000000',
        uirevision='depth',
        showlegend=False
    )
    return fig

def build_ohlc_fig(df: pd.DataFrame) -> go.Figure:
    """Build a four-row subplot with OHLC candlesticks, volume, OBI, and CVD.

    The four subplots share the same X axis (time). Volume bars are directionally
    colored (green for up, red for down). Candlestick colors use Plotly defaults.
    """
    fig = make_subplots(
        rows=4,
        cols=1,
        shared_xaxes=True,
        vertical_spacing=0.03,
        row_heights=[0.50, 0.18, 0.16, 0.16],
        subplot_titles=('OHLC', 'Volume', 'OBI', 'CVD'),
        specs=[[{"secondary_y": False}],
               [{"secondary_y": False}],
               [{"secondary_y": False}],
               [{"secondary_y": False}]]
    )

    fig.add_trace(
        go.Candlestick(
            x=df['datetime'],
            open=df['open'],
            high=df['high'],
            low=df['low'],
            close=df['close'],
            name='OHLC'
        ),
        row=1,
        col=1
    )

    volume_colors = [
        'rgba(0,200,0,0.7)' if close_val >= open_val else 'rgba(200,0,0,0.7)'
        for open_val, close_val in zip(df['open'], df['close'])
    ]

    fig.add_trace(
        go.Bar(
            x=df['datetime'],
            y=df['volume'],
            marker=dict(color=volume_colors, line=dict(width=0)),
            name='Volume',
            width=50000
        ),
        row=2,
        col=1
    )

    fig.update_layout(
        template='plotly_dark',
        autosize=True,
        xaxis_rangeslider_visible=False,
        paper_bgcolor='#000000',
        plot_bgcolor='#000000',
        uirevision='ohlc_v2',
        yaxis_title='Price',
        yaxis2_title='Volume',
        yaxis3_title='OBI',
        yaxis4_title='CVD',
        showlegend=False,
        xaxis_rangeselector=dict(
            buttons=list([
                dict(count=1, label="1m", step="minute", stepmode="backward"),
                dict(count=5, label="5m", step="minute", stepmode="backward"),
                dict(count=15, label="15m", step="minute", stepmode="backward"),
                dict(count=1, label="1h", step="hour", stepmode="backward"),
                dict(step="all", label="All")
            ])
        )
    )

    # All x-axes share the same range selector buttons
    fig.update_xaxes(matches='x', row=1, col=1)
    fig.update_xaxes(matches='x', row=2, col=1)
    fig.update_xaxes(matches='x', row=3, col=1)
    fig.update_xaxes(matches='x', row=4, col=1)

    return fig

def add_obi_subplot(fig: go.Figure, metrics: pd.DataFrame) -> go.Figure:
    """Add raw OBI candlestick data to the existing row 3 subplot."""
    if metrics.empty:
        return fig

    fig.add_trace(
        go.Candlestick(
            x=metrics['datetime'],
            open=metrics['obi_open'],
            high=metrics['obi_high'],
            low=metrics['obi_low'],
            close=metrics['obi_close'],
            increasing_line_color='rgba(0, 120, 255, 1.0)',
            decreasing_line_color='rgba(0, 80, 180, 1.0)',
            increasing_fillcolor='rgba(0, 120, 255, 0.6)',
            decreasing_fillcolor='rgba(0, 80, 180, 0.6)',
            name='OBI'
        ),
        row=3, col=1
    )
    # Zero line
    fig.add_hline(y=0, line=dict(color='rgba(100,100,150,0.5)', width=1), row=3, col=1)
    return fig


def add_cvd_subplot(fig: go.Figure, metrics: pd.DataFrame) -> go.Figure:
    """Add CVD line chart to the existing row 4 subplot."""
    if metrics.empty or 'cvd_value' not in metrics.columns:
        return fig

    fig.add_trace(
        go.Scatter(
            x=metrics['datetime'],
            y=metrics['cvd_value'],
            mode='lines+markers',
            line=dict(color='rgba(255, 165, 0, 1.0)', width=3),
            marker=dict(color='rgba(255, 165, 0, 0.8)', size=4),
            name='CVD'
        ),
        row=4, col=1
    )
    # Zero line
    fig.add_hline(y=0, line=dict(color='rgba(100,100,150,0.5)', width=1), row=4, col=1)
    return fig

def _cumulate_levels(levels: List[List[float]], reverse: bool, limit: int) -> List[Tuple[float, float]]:
    try:
        lv = sorted(levels, key=lambda x: float(x[0]), reverse=reverse)[:limit]
        cumulative: List[Tuple[float, float]] = []
        total = 0.0
        for price, size in lv:
            total += float(size)
            cumulative.append((float(price), total))
        return cumulative
    except Exception:
        return []


def build_depth_fig(depth: Any, levels_per_side: int = 50) -> go.Figure:
    fig = build_empty_depth_fig()
    if not isinstance(depth, dict):
        return fig
    bids = depth.get('bids', [])
    asks = depth.get('asks', [])

    cum_bids = _cumulate_levels(bids, reverse=True, limit=levels_per_side)
    cum_asks = _cumulate_levels(asks, reverse=False, limit=levels_per_side)

    if cum_bids:
        fig.add_trace(go.Scatter(
            x=[s for _, s in cum_bids],
            y=[p for p, _ in cum_bids],
            mode='lines',
            name='Bids',
            line=dict(color='rgba(0,200,0,1)'),
            fill='tozerox',
            fillcolor='rgba(0,200,0,0.2)',
            line_shape='hv'
        ))
    if cum_asks:
        fig.add_trace(go.Scatter(
            x=[s for _, s in cum_asks],
            y=[p for p, _ in cum_asks],
            mode='lines',
            name='Asks',
            line=dict(color='rgba(200,0,0,1)'),
            fill='tozerox',
            fillcolor='rgba(200,0,0,0.2)',
            line_shape='hv'
        ))
    return fig

# Setup callback
@app.callback(
    [Output('ohlc-chart', 'figure'), Output('depth-chart', 'figure')],
    [Input('interval-update', 'n_intervals')]
)
def update_chart(n):
    try:
        if not DATA_FILE.exists():
            data = _LAST_DATA
        else:
            try:
                with open(DATA_FILE, "r") as f:
                    data = json.load(f)

                if isinstance(data, list):
                    _LAST_DATA.clear()
                    _LAST_DATA.extend(data)
                else:
                    data = _LAST_DATA
            except json.JSONDecodeError:
                logging.warning("JSON decode error while reading OHLC data; using cached data")
                data = _LAST_DATA

        depth_data = _LAST_DEPTH
        try:
            if DEPTH_FILE.exists():
                with open(DEPTH_FILE, "r") as f:
                    loaded = json.load(f)
                if isinstance(loaded, dict):
                    _LAST_DEPTH.clear()
                    _LAST_DEPTH.update(loaded)
                depth_data = _LAST_DEPTH
        except json.JSONDecodeError:
            logging.warning("JSON decode error while reading depth data; using cached depth data")
        except Exception as e:
            logging.debug(f"Depth read skipped: {e}")
        depth_fig = build_depth_fig(depth_data, levels_per_side=50)

        # Read metrics
        metrics_raw = _LAST_METRICS
        try:
            if METRICS_FILE.exists():
                with open(METRICS_FILE, "r") as f:
                    loaded = json.load(f)
                if isinstance(loaded, list):
                    _LAST_METRICS.clear()
                    _LAST_METRICS.extend(loaded)
                metrics_raw = _LAST_METRICS
        except json.JSONDecodeError:
            logging.warning("JSON decode error while reading metrics data; using cached metrics data")
        except Exception as e:
            logging.debug(f"Metrics read skipped: {e}")

        if not data:
            ohlc_fig = build_empty_ohlc_fig()
            return [ohlc_fig, depth_fig]

        df = pd.DataFrame(data, columns=['timestamp', 'open', 'high', 'low', 'close', 'volume'])
        df['datetime'] = pd.to_datetime(df['timestamp'], unit='ms')
        ohlc_fig = build_ohlc_fig(df)

        # Build metrics df and append OBI and CVD subplots
        if metrics_raw:
            # Handle both 5-element (legacy) and 6-element (with CVD) schemas
            if metrics_raw and len(metrics_raw[0]) == 6:
                md = pd.DataFrame(metrics_raw, columns=[
                    'timestamp', 'obi_open', 'obi_high', 'obi_low', 'obi_close', 'cvd_value'
                ])
            else:
                md = pd.DataFrame(metrics_raw, columns=[
                    'timestamp', 'obi_open', 'obi_high', 'obi_low', 'obi_close'
                ])
                md['cvd_value'] = 0.0  # Default CVD for backward compatibility

            md['datetime'] = pd.to_datetime(md['timestamp'], unit='ms')
            ohlc_fig = add_obi_subplot(ohlc_fig, md)
            ohlc_fig = add_cvd_subplot(ohlc_fig, md)

        return [ohlc_fig, depth_fig]

    except Exception as e:
        logging.error(f"Error updating chart: {e}")

        ohlc_fig = go.Figure()
        ohlc_fig.update_layout(
            title=f'Error: {str(e)}',
            xaxis_title='Time',
            yaxis_title='Price',
            template='plotly_white',
            height=600
        )

        depth_fig = go.Figure()
        depth_fig.update_layout(
            title=f'Error: {str(e)}',
            xaxis_title='Spread',
            yaxis_title='Spread Size',
            template='plotly_white',
            height=600
        )
        return [ohlc_fig, depth_fig]


if __name__ == '__main__':
    logging.basicConfig(level=logging.INFO)
    logging.info("Starting OHLC visualizer on http://localhost:8050")
    logging.info(f"Registered callbacks: {list(app.callback_map.keys())}")
    app.run(debug=False, port=8050, host='0.0.0.0')
dash_app.py: 83 lines deleted
@@ -1,83 +0,0 @@
"""
|
||||
Dash application setup for interactive orderflow visualization.
|
||||
|
||||
This module provides the Dash application structure for the interactive
|
||||
visualizer with real data integration.
|
||||
"""
|
||||
|
||||
import dash
|
||||
from dash import html, dcc
|
||||
import dash_bootstrap_components as dbc
|
||||
from typing import Optional, List, Tuple, Dict, Any
|
||||
from models import Metric
|
||||
|
||||
|
||||
def create_dash_app(
|
||||
ohlc_data: Optional[List[Tuple[int, float, float, float, float, float]]] = None,
|
||||
metrics_data: Optional[List[Metric]] = None,
|
||||
debug: bool = False,
|
||||
port: int = 8050
|
||||
) -> dash.Dash:
|
||||
"""
|
||||
Create and configure a Dash application with real data.
|
||||
|
||||
Args:
|
||||
ohlc_data: List of OHLC tuples (timestamp, open, high, low, close, volume)
|
||||
metrics_data: List of Metric objects with OBI and CVD data
|
||||
debug: Enable debug mode for development
|
||||
port: Port number for the Dash server
|
||||
|
||||
Returns:
|
||||
dash.Dash: Configured Dash application instance
|
||||
"""
|
||||
app = dash.Dash(
|
||||
__name__,
|
||||
external_stylesheets=[dbc.themes.BOOTSTRAP, dbc.themes.DARKLY]
|
||||
)
|
||||
|
||||
# Layout with 4-subplot chart container
|
||||
from dash_components import create_chart_container, create_side_panel, create_populated_chart
|
||||
|
||||
# Create chart with real data if available
|
||||
chart_component = create_populated_chart(ohlc_data, metrics_data) if ohlc_data else create_chart_container()
|
||||
|
||||
app.layout = dbc.Container([
|
||||
dbc.Row([
|
||||
dbc.Col([
|
||||
html.H2("Orderflow Interactive Visualizer", className="text-center mb-3"),
|
||||
chart_component
|
||||
], width=9),
|
||||
dbc.Col([
|
||||
create_side_panel()
|
||||
], width=3)
|
||||
])
|
||||
], fluid=True)
|
||||
|
||||
return app
|
||||
|
||||
|
||||
def create_dash_app_with_data(
|
||||
ohlc_data: List[Tuple[int, float, float, float, float, float]],
|
||||
metrics_data: List[Metric],
|
||||
debug: bool = False,
|
||||
port: int = 8050
|
||||
) -> dash.Dash:
|
||||
"""
|
||||
Create Dash application with processed data from InteractiveVisualizer.
|
||||
|
||||
Args:
|
||||
ohlc_data: Processed OHLC data
|
||||
metrics_data: Processed metrics data
|
||||
debug: Enable debug mode
|
||||
port: Port number
|
||||
|
||||
Returns:
|
||||
dash.Dash: Configured Dash application with real data
|
||||
"""
|
||||
return create_dash_app(ohlc_data, metrics_data, debug, port)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
# Development server for testing
|
||||
app = create_dash_app(debug=True)
|
||||
app.run(debug=True, port=8050)
|
||||
@@ -1,19 +0,0 @@
"""
|
||||
Dash callback functions for interactive chart functionality.
|
||||
|
||||
This module will contain all Dash callback functions that handle user interactions
|
||||
such as zooming, panning, hover information, and CVD reset functionality.
|
||||
"""
|
||||
|
||||
# Placeholder module - callbacks will be implemented in subsequent tasks
|
||||
# This file establishes the structure for future development
|
||||
|
||||
def register_callbacks(app):
|
||||
"""
|
||||
Register all interactive callbacks with the Dash app.
|
||||
|
||||
Args:
|
||||
app: Dash application instance
|
||||
"""
|
||||
# Callbacks will be implemented in Phase 2 tasks
|
||||
pass
|
||||
@@ -1,261 +0,0 @@
"""
|
||||
Custom Dash components for the interactive visualizer.
|
||||
|
||||
This module provides reusable UI components including the side panel,
|
||||
navigation controls, and chart containers.
|
||||
"""
|
||||
|
||||
from dash import html, dcc
|
||||
import dash_bootstrap_components as dbc
|
||||
import plotly.graph_objects as go
|
||||
from plotly.subplots import make_subplots
|
||||
|
||||
|
||||
def create_side_panel():
|
||||
"""
|
||||
Create the side panel component for displaying hover information and controls.
|
||||
|
||||
Returns:
|
||||
dash component: Side panel layout
|
||||
"""
|
||||
return dbc.Card([
|
||||
dbc.CardHeader("Chart Information"),
|
||||
dbc.CardBody([
|
||||
html.Div(id="hover-info", children=[
|
||||
html.P("Hover over charts to see detailed information")
|
||||
]),
|
||||
html.Hr(),
|
||||
html.Div([
|
||||
dbc.Button("Reset CVD", id="reset-cvd-btn", color="primary", className="me-2"),
|
||||
dbc.Button("Reset Zoom", id="reset-zoom-btn", color="secondary"),
|
||||
])
|
||||
])
|
||||
], style={"height": "100vh"})
|
||||
|
||||
|
||||
def create_chart_container():
    """
    Create the main chart container for the 4-subplot layout.

    Returns:
        dash component: Chart container with 4-subplot layout
    """
    return dcc.Graph(
        id="main-charts",
        figure=create_empty_subplot_layout(),
        style={"height": "100vh"},
        config={
            "displayModeBar": True,
            "displaylogo": False,
            "modeBarButtonsToRemove": ["select2d", "lasso2d"],
            "modeBarButtonsToAdd": ["resetScale2d"],
            "scrollZoom": True,  # Enable mouse wheel zooming
            "doubleClick": "reset+autosize"  # Double-click to reset zoom
        }
    )

def create_empty_subplot_layout():
    """
    Create empty 4-subplot layout matching existing visualizer structure.

    Returns:
        plotly.graph_objects.Figure: Empty figure with 4 subplots
    """
    fig = make_subplots(
        rows=4, cols=1,
        shared_xaxes=True,
        subplot_titles=["OHLC", "Volume", "Order Book Imbalance (OBI)", "Cumulative Volume Delta (CVD)"],
        vertical_spacing=0.02
    )

    # Configure layout to match existing styling
    fig.update_layout(
        height=800,
        showlegend=False,
        margin=dict(l=50, r=50, t=50, b=50),
        template="plotly_dark",         # Professional dark theme
        paper_bgcolor='rgba(0,0,0,0)',  # Transparent background
        plot_bgcolor='rgba(0,0,0,0)'    # Transparent plot area
    )

    # Configure synchronized zooming and panning
    configure_synchronized_axes(fig)

    return fig

def configure_synchronized_axes(fig):
    """
    Configure synchronized zooming and panning across all subplots.

    Args:
        fig: Plotly figure with subplots
    """
    # Enable dragmode for panning and zooming
    fig.update_layout(
        dragmode='zoom',
        selectdirection='h'  # Restrict selection to horizontal for time-based data
    )

    # Configure X-axes for synchronized behavior (already shared via make_subplots)
    # All subplots will automatically share zoom/pan on X-axis due to shared_xaxes=True

    # Configure individual Y-axes for better UX
    fig.update_yaxes(fixedrange=False, gridcolor='rgba(128,128,128,0.2)')  # Allow Y-axis zooming
    fig.update_xaxes(fixedrange=False, gridcolor='rgba(128,128,128,0.2)')  # Allow X-axis zooming

    # Enable crosshair cursor spanning all charts
    fig.update_layout(hovermode='x unified')
    fig.update_traces(hovertemplate='<extra></extra>')  # Clean hover labels

    return fig

def add_ohlc_trace(fig, ohlc_data: dict):
    """
    Add OHLC candlestick trace to the first subplot.

    Args:
        fig: Plotly figure with subplots
        ohlc_data: Dict with x, open, high, low, close arrays
    """
    candlestick = go.Candlestick(
        x=ohlc_data["x"],
        open=ohlc_data["open"],
        high=ohlc_data["high"],
        low=ohlc_data["low"],
        close=ohlc_data["close"],
        name="OHLC"
    )

    fig.add_trace(candlestick, row=1, col=1)
    return fig


def add_volume_trace(fig, volume_data: dict):
    """
    Add Volume bar trace to the second subplot.

    Args:
        fig: Plotly figure with subplots
        volume_data: Dict with x (timestamps) and y (volumes) arrays
    """
    volume_bar = go.Bar(
        x=volume_data["x"],
        y=volume_data["y"],
        name="Volume",
        marker_color='rgba(158, 185, 243, 0.7)',  # Blue with transparency
        showlegend=False,
        hovertemplate="Volume: %{y}<extra></extra>"
    )

    fig.add_trace(volume_bar, row=2, col=1)
    return fig

def add_obi_trace(fig, obi_data: dict):
    """
    Add OBI line trace to the third subplot.

    Args:
        fig: Plotly figure with subplots
        obi_data: Dict with timestamp and obi arrays
    """
    obi_line = go.Scatter(
        x=obi_data["timestamp"],
        y=obi_data["obi"],
        mode='lines',
        name="OBI",
        line=dict(color='blue', width=2),
        showlegend=False,
        hovertemplate="OBI: %{y:.3f}<extra></extra>"
    )

    # Add horizontal reference line at y=0
    fig.add_hline(y=0, line=dict(color='gray', dash='dash', width=1), row=3, col=1)
    fig.add_trace(obi_line, row=3, col=1)
    return fig


def add_cvd_trace(fig, cvd_data: dict):
    """
    Add CVD line trace to the fourth subplot.

    Args:
        fig: Plotly figure with subplots
        cvd_data: Dict with timestamp and cvd arrays
    """
    cvd_line = go.Scatter(
        x=cvd_data["timestamp"],
        y=cvd_data["cvd"],
        mode='lines',
        name="CVD",
        line=dict(color='red', width=2),
        showlegend=False,
        hovertemplate="CVD: %{y:.1f}<extra></extra>"
    )

    fig.add_trace(cvd_line, row=4, col=1)
    return fig

def create_populated_chart(ohlc_data, metrics_data):
    """
    Create a chart container with real data populated.

    Args:
        ohlc_data: List of OHLC tuples or None
        metrics_data: List of Metric objects or None

    Returns:
        dcc.Graph component with populated data
    """
    from data_adapters import format_ohlc_for_plotly, format_volume_for_plotly, format_metrics_for_plotly

    # Create base subplot layout
    fig = create_empty_subplot_layout()

    # Add real data if available
    if ohlc_data:
        # Format OHLC data
        ohlc_formatted = format_ohlc_for_plotly(ohlc_data)
        volume_formatted = format_volume_for_plotly(ohlc_data)

        # Add OHLC trace
        fig = add_ohlc_trace(fig, ohlc_formatted)

        # Add Volume trace
        fig = add_volume_trace(fig, volume_formatted)

    if metrics_data:
        # Format metrics data
        metrics_formatted = format_metrics_for_plotly(metrics_data)

        # Add OBI and CVD traces
        if metrics_formatted["obi"]["x"]:  # Check if we have OBI data
            obi_data = {
                "timestamp": metrics_formatted["obi"]["x"],
                "obi": metrics_formatted["obi"]["y"]
            }
            fig = add_obi_trace(fig, obi_data)
        if metrics_formatted["cvd"]["x"]:  # Check if we have CVD data
            cvd_data = {
                "timestamp": metrics_formatted["cvd"]["x"],
                "cvd": metrics_formatted["cvd"]["y"]
            }
            fig = add_cvd_trace(fig, cvd_data)

    return dcc.Graph(
        id="main-charts",
        figure=fig,
        style={"height": "100vh"},
        config={
            "displayModeBar": True,
            "displaylogo": False,
            "modeBarButtonsToRemove": ["select2d", "lasso2d"],
            "modeBarButtonsToAdd": ["pan2d", "zoom2d", "zoomIn2d", "zoomOut2d", "resetScale2d"],
            "scrollZoom": True,
            "doubleClick": "reset+autosize"
        }
    )
data_adapters.py: 160 lines deleted
@@ -1,160 +0,0 @@
"""
|
||||
Data transformation utilities for converting orderflow data to Plotly format.
|
||||
|
||||
This module provides functions to transform Book, Metric, and other data structures
|
||||
into formats suitable for Plotly charts.
|
||||
"""
|
||||
|
||||
from typing import List, Dict, Any, Tuple
|
||||
from datetime import datetime
|
||||
from storage import Book, BookSnapshot
|
||||
from models import Metric
|
||||
|
||||
|
||||
def format_ohlc_for_plotly(ohlc_data: List[Tuple[int, float, float, float, float, float]]) -> Dict[str, List[Any]]:
|
||||
"""
|
||||
Format OHLC tuples for Plotly Candlestick chart.
|
||||
|
||||
Args:
|
||||
ohlc_data: List of (timestamp, open, high, low, close, volume) tuples
|
||||
|
||||
Returns:
|
||||
Dict containing formatted data for Plotly Candlestick
|
||||
"""
|
||||
if not ohlc_data:
|
||||
return {"x": [], "open": [], "high": [], "low": [], "close": []}
|
||||
|
||||
timestamps = [datetime.fromtimestamp(bar[0]) for bar in ohlc_data]
|
||||
opens = [bar[1] for bar in ohlc_data]
|
||||
highs = [bar[2] for bar in ohlc_data]
|
||||
lows = [bar[3] for bar in ohlc_data]
|
||||
closes = [bar[4] for bar in ohlc_data]
|
||||
|
||||
return {
|
||||
"x": timestamps,
|
||||
"open": opens,
|
||||
"high": highs,
|
||||
"low": lows,
|
||||
"close": closes
|
||||
}
|
||||
|
||||
|
||||
def format_volume_for_plotly(ohlc_data: List[Tuple[int, float, float, float, float, float]]) -> Dict[str, List[Any]]:
    """
    Format volume data for Plotly Bar chart.

    Args:
        ohlc_data: List of (timestamp, open, high, low, close, volume) tuples

    Returns:
        Dict containing formatted volume data for Plotly Bar
    """
    if not ohlc_data:
        return {"x": [], "y": []}

    timestamps = [datetime.fromtimestamp(bar[0]) for bar in ohlc_data]
    volumes = [bar[5] for bar in ohlc_data]

    return {
        "x": timestamps,
        "y": volumes
    }


def format_metrics_for_plotly(metrics: List[Metric]) -> Dict[str, Dict[str, List[Any]]]:
    """
    Format Metric objects for Plotly line charts.

    Args:
        metrics: List of Metric objects

    Returns:
        Dict containing OBI and CVD data formatted for Plotly Scatter
    """
    if not metrics:
        return {
            "obi": {"x": [], "y": []},
            "cvd": {"x": [], "y": []}
        }

    timestamps = [datetime.fromtimestamp(m.timestamp / 1000) for m in metrics]
    obi_values = [m.obi for m in metrics]
    cvd_values = [m.cvd for m in metrics]

    return {
        "obi": {
            "x": timestamps,
            "y": obi_values
        },
        "cvd": {
            "x": timestamps,
            "y": cvd_values
        }
    }

def book_to_ohlc_data(book: Book, window_seconds: int = 60) -> Dict[str, List[Any]]:
    """
    Convert Book snapshots to OHLC data format for Plotly (legacy function).

    Args:
        book: Book containing snapshots
        window_seconds: Time window for OHLC aggregation

    Returns:
        Dict containing OHLC data arrays for Plotly
    """
    # Generate sample data for testing compatibility
    if not book.snapshots:
        return {"timestamp": [], "open": [], "high": [], "low": [], "close": [], "volume": []}

    # Sample data based on existing visualizer pattern
    timestamps = [datetime.fromtimestamp(1640995200 + i * 60) for i in range(10)]
    opens = [50000 + i * 10 for i in range(10)]
    highs = [o + 50 for o in opens]
    lows = [o - 30 for o in opens]
    closes = [o + 20 for o in opens]
    volumes = [100 + i * 5 for i in range(10)]

    return {
        "timestamp": timestamps,
        "open": opens,
        "high": highs,
        "low": lows,
        "close": closes,
        "volume": volumes
    }


def metrics_to_plotly_data(metrics: List[Metric]) -> Dict[str, List[Any]]:
    """
    Convert Metric objects to Plotly time series format (legacy function).

    Args:
        metrics: List of Metric objects

    Returns:
        Dict containing time series data for OBI and CVD
    """
    # Generate sample data for testing compatibility
    if not metrics:
        timestamps = [datetime.fromtimestamp(1640995200 + i * 60) for i in range(10)]
        obi_values = [0.1 * (i % 3 - 1) + 0.05 * i for i in range(10)]
        cvd_values = [sum(obi_values[:i+1]) * 10 for i in range(10)]

        return {
            "timestamp": timestamps,
            "obi": obi_values,
            "cvd": cvd_values,
            "best_bid": [50000 + i * 10 for i in range(10)],
            "best_ask": [50001 + i * 10 for i in range(10)]
        }

    # Real implementation processes actual Metric objects
    return {
        "timestamp": [datetime.fromtimestamp(m.timestamp / 1000) for m in metrics],
        "obi": [m.obi for m in metrics],
        "cvd": [m.cvd for m in metrics],
        "best_bid": [m.best_bid for m in metrics],
        "best_ask": [m.best_ask for m in metrics]
    }
db_interpreter.py: new file, 212 lines
@@ -0,0 +1,212 @@
from dataclasses import dataclass
from math import inf
from pathlib import Path
from typing import Iterator, Tuple
import sqlite3


@dataclass(slots=True)
class OrderbookLevel:
    """
    Represents a single price level in an orderbook.

    Attributes:
        price: Price level in quote currency
        size: Total size available at this price level
        liquidation_count: Number of liquidation orders at this level
        order_count: Total number of orders at this level
    """
    price: float
    size: float
    liquidation_count: int
    order_count: int


class OrderbookUpdate:
    """
    Container for a windowed orderbook update with temporal boundaries.

    Represents an orderbook snapshot with a defined time window, used for
    aggregating trades that occur within the same time period.

    Attributes:
        id: Unique identifier for this orderbook row
        bids: List of bid price levels (highest price first)
        asks: List of ask price levels (lowest price first)
        timestamp: Window start time in milliseconds since epoch
        end_timestamp: Window end time in milliseconds since epoch
    """
    id: int
    bids: str  # JSON string representation of bid levels
    asks: str  # JSON string representation of ask levels
    timestamp: str  # String timestamp from database
    end_timestamp: str  # String end timestamp from database

    def __init__(self, id: int, bids: str, asks: str, timestamp: str, end_timestamp: str) -> None:
        """
        Initialize orderbook update with raw database values.

        Args:
            id: Database row identifier
            bids: JSON string of bid levels from database
            asks: JSON string of ask levels from database
            timestamp: Timestamp string from database
            end_timestamp: End timestamp string from database
        """
        self.id = id
        self.bids = bids
        self.asks = asks
        self.timestamp = timestamp
        self.end_timestamp = end_timestamp


@dataclass
class Trade:
    """
    Represents a single trade execution.

    Attributes:
        id: Unique database identifier for this trade
        trade_id: Exchange-specific trade identifier
        price: Execution price in quote currency
        size: Trade size in base currency
        side: Trade direction ("buy" or "sell")
        timestamp: Execution time in milliseconds since epoch
    """
    id: int
    trade_id: str
    price: float
    size: float
    side: str
    timestamp: int

class DBInterpreter:
    """
    Provides efficient streaming access to SQLite orderbook and trade data.

    Handles batch reading from SQLite databases with optimized PRAGMA settings
    for read-only access. Uses temporal windowing to associate trades with
    orderbook snapshots based on timestamps.

    Attributes:
        db_path: Path to the SQLite database file
    """

    def __init__(self, db_path: Path) -> None:
        """
        Initialize database interpreter with path validation.

        Args:
            db_path: Path to SQLite database containing 'book' and 'trades' tables

        Raises:
            FileNotFoundError: If database file does not exist

        Example:
            >>> db_path = Path("data/BTC-USDT-2025-01-01.db")
            >>> interpreter = DBInterpreter(db_path)
        """
        if not db_path.exists():
            raise FileNotFoundError(f"Database file not found: {db_path}")
        self.db_path = db_path

    def stream(self) -> Iterator[Tuple[OrderbookUpdate, list[Trade]]]:
        """
        Stream orderbook updates and associated trades efficiently.

        This performs two linear scans over the `book` and `trades` tables using
        separate cursors and batches, avoiding N+1 queries and large `fetchall()`
        calls. It preserves bids/asks for future use in visualizations while
        yielding trades that fall in [timestamp, next_book_timestamp).
        """
        # Use read-only immutable mode for faster reads and protection
        conn = sqlite3.connect(f"file:{self.db_path}?mode=ro&immutable=1", uri=True)
        try:
            # Read-optimized PRAGMAs (safe in read-only mode)
            conn.execute("PRAGMA query_only=ON")
            conn.execute("PRAGMA synchronous=OFF")
            conn.execute("PRAGMA temp_store=MEMORY")
            # The following values can be tuned depending on available memory
            conn.execute("PRAGMA mmap_size=268435456")  # 256MB
            conn.execute("PRAGMA cache_size=-200000")   # ~200MB page cache

            book_cur = conn.cursor()
            trade_cur = conn.cursor()

            # Keep bids/asks for future visuals; cast timestamps to integer
            book_cur.execute(
                "SELECT id, bids, asks, CAST(timestamp AS INTEGER) AS timestamp "
                "FROM book ORDER BY timestamp ASC"
            )
            trade_cur.execute(
                "SELECT id, trade_id, price, size, side, CAST(timestamp AS INTEGER) AS timestamp "
                "FROM trades ORDER BY timestamp ASC"
            )

            BOOK_BATCH = 2048
            TRADE_BATCH = 4096

            # Helpers to stream book rows with one-row lookahead
            book_buffer = []
            book_index = 0

            def fetch_one_book():
                nonlocal book_buffer, book_index
                if book_index >= len(book_buffer):
                    book_buffer = book_cur.fetchmany(BOOK_BATCH)
                    book_index = 0
                    if not book_buffer:
                        return None
                row = book_buffer[book_index]
                book_index += 1
                return row  # (id, bids, asks, ts)

            # Helpers to stream trade rows
            trade_buffer = []
            trade_index = 0

            def peek_trade():
                nonlocal trade_buffer, trade_index
                if trade_index >= len(trade_buffer):
                    trade_buffer = trade_cur.fetchmany(TRADE_BATCH)
                    trade_index = 0
                    if not trade_buffer:
                        return None
                return trade_buffer[trade_index]

            def advance_trade():
                nonlocal trade_index
                trade_index += 1

            # Prime first two book rows to compute window end timestamps
            current_book = fetch_one_book()
            next_book = fetch_one_book()

            while current_book is not None:
                book_id, bids, asks, book_ts = current_book
                end_ts = next_book[3] if next_book is not None else inf

                # Collect trades in [book_ts, end_ts)
                trades_here = []
                while True:
                    t = peek_trade()
                    if t is None:
                        break
                    # trade row: (id, trade_id, price, size, side, ts)
                    trade_ts = t[5]
                    if trade_ts < book_ts:
                        # advance until we reach current window
                        advance_trade()
                        continue
                    if trade_ts >= end_ts:
                        # next book window starts; stop collecting
                        break
                    # within [book_ts, end_ts)
                    trades_here.append(t)
                    advance_trade()

                ob_update = OrderbookUpdate(book_id, bids, asks, book_ts, end_ts)
                yield ob_update, trades_here

                # Advance to next window
                current_book = next_book
                next_book = fetch_one_book()
        finally:
            conn.close()
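A minimal sketch of consuming the `DBInterpreter.stream()` generator defined above (the database path is hypothetical; trade rows are the raw tuples noted in the inline comments):

```python
# Sketch: iterate windowed orderbook updates and their trades; path is hypothetical.
from pathlib import Path

from db_interpreter import DBInterpreter

interpreter = DBInterpreter(Path("data/BTC-USDT.db"))
for update, trades in interpreter.stream():
    # update.bids / update.asks are still raw JSON strings from the `book` table;
    # each trade row is (id, trade_id, price, size, side, timestamp).
    buy_vol = sum(t[3] for t in trades if t[4] == "buy")
    sell_vol = sum(t[3] for t in trades if t[4] == "sell")
    print(update.id, update.timestamp, buy_vol - sell_vol)
```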
desktop_app.py: new file, 698 lines
@@ -0,0 +1,698 @@
"""
|
||||
Desktop visualization application using PySide6 and PyQtGraph.
|
||||
|
||||
This module provides a native desktop replacement for the Dash web application,
|
||||
offering better performance, debugging capabilities, and real-time potential.
|
||||
"""
|
||||
import sys
|
||||
import json
|
||||
import logging
|
||||
from pathlib import Path
|
||||
from typing import List, Optional, Dict, Any
|
||||
from PySide6.QtWidgets import QApplication, QMainWindow, QVBoxLayout, QWidget, QHBoxLayout
|
||||
from PySide6.QtCore import QTimer
|
||||
import pyqtgraph as pg
|
||||
from pyqtgraph import QtCore, QtGui
|
||||
import numpy as np
|
||||
from ohlc_processor import OHLCProcessor
|
||||
|
||||
class OHLCItem(pg.GraphicsObject):
|
||||
"""Custom OHLC candlestick item for PyQtGraph."""
|
||||
|
||||
def __init__(self, data):
|
||||
"""
|
||||
Initialize OHLC item with data.
|
||||
|
||||
Args:
|
||||
data: List of tuples (timestamp, open, high, low, close, volume)
|
||||
"""
|
||||
pg.GraphicsObject.__init__(self)
|
||||
self.data = data
|
||||
self.generatePicture()
|
||||
|
||||
def generatePicture(self):
|
||||
"""Generate the candlestick chart picture."""
|
||||
self.picture = QtGui.QPicture()
|
||||
painter = QtGui.QPainter(self.picture)
|
||||
|
||||
pen_up = pg.mkPen(color='#00ff00', width=1) # Green for up candles
|
||||
pen_down = pg.mkPen(color='#ff0000', width=1) # Red for down candles
|
||||
brush_up = pg.mkBrush(color='#00ff00')
|
||||
brush_down = pg.mkBrush(color='#ff0000')
|
||||
|
||||
# Dynamic candle width based on data density
|
||||
if len(self.data) > 1:
|
||||
time_diff = (self.data[1][0] - self.data[0][0]) / 1000 # Time between candles in seconds
|
||||
width = time_diff * 0.8 # 80% of time interval
|
||||
else:
|
||||
width = 30 # Default width in seconds
|
||||
|
||||
for timestamp, open_price, high, low, close, volume in self.data:
|
||||
x = timestamp / 1000 # Convert ms to seconds
|
||||
|
||||
# Determine candle color
|
||||
is_up = close >= open_price
|
||||
pen = pen_up if is_up else pen_down
|
||||
brush = brush_up if is_up else brush_down
|
||||
|
||||
# Draw wick (high-low line)
|
||||
painter.setPen(pen)
|
||||
painter.drawLine(QtCore.QPointF(x, low), QtCore.QPointF(x, high))
|
||||
|
||||
# Draw body (open-close rectangle)
|
||||
body_height = abs(close - open_price)
|
||||
body_bottom = min(open_price, close)
|
||||
|
||||
painter.setPen(pen)
|
||||
painter.setBrush(brush)
|
||||
painter.drawRect(QtCore.QRectF(x - width/2, body_bottom, width, body_height))
|
||||
|
||||
painter.end()
|
||||
|
||||
def paint(self, painter, option, widget):
|
||||
"""Paint the candlestick chart."""
|
||||
painter.drawPicture(0, 0, self.picture)
|
||||
|
||||
def boundingRect(self):
|
||||
"""Return the bounding rectangle of the item."""
|
||||
return QtCore.QRectF(self.picture.boundingRect())
|
||||
|
||||
class OBIItem(pg.GraphicsObject):
    """Custom OBI candlestick item with blue styling."""

    def __init__(self, data):
        """Initialize OBI item with blue color scheme."""
        pg.GraphicsObject.__init__(self)
        self.data = data
        self.generatePicture()

    def generatePicture(self):
        """Generate OBI candlestick chart with blue styling."""
        self.picture = QtGui.QPicture()
        painter = QtGui.QPainter(self.picture)

        # Blue color scheme for OBI
        pen_up = pg.mkPen(color='#4a9eff', width=1)    # Light blue for up
        pen_down = pg.mkPen(color='#1f5f99', width=1)  # Dark blue for down
        brush_up = pg.mkBrush(color='#4a9eff')
        brush_down = pg.mkBrush(color='#1f5f99')

        # Dynamic width calculation
        if len(self.data) > 1:
            time_diff = (self.data[1][0] - self.data[0][0]) / 1000
            width = time_diff * 0.8
        else:
            width = 30

        for timestamp, open_price, high, low, close, _ in self.data:
            x = timestamp / 1000

            # Determine color
            is_up = close >= open_price
            pen = pen_up if is_up else pen_down
            brush = brush_up if is_up else brush_down

            # Draw wick
            painter.setPen(pen)
            painter.drawLine(QtCore.QPointF(x, low), QtCore.QPointF(x, high))

            # Draw body
            body_height = abs(close - open_price)
            body_bottom = min(open_price, close)

            painter.setPen(pen)
            painter.setBrush(brush)
            painter.drawRect(QtCore.QRectF(x - width/2, body_bottom, width, body_height))

        painter.end()

    def paint(self, painter, option, widget):
        """Paint the OBI candlestick chart."""
        painter.drawPicture(0, 0, self.picture)

    def boundingRect(self):
        """Return the bounding rectangle."""
        return QtCore.QRectF(self.picture.boundingRect())

class MainWindow(QMainWindow):
    """Main application window for orderflow visualization."""

    def __init__(self):
        super().__init__()
        self.ohlc_data = []
        self.metrics_data = []
        self.depth_data = {"bids": [], "asks": []}  # Cache for depth data
        self.last_data_size = 0      # Track OHLC data changes
        self.last_metrics_size = 0   # Track metrics data changes
        self.last_depth_data = None  # Track depth data changes

        self.setup_ui()

    def setup_ui(self):
        """Initialize the user interface."""
        self.setWindowTitle("Orderflow Backtest Visualizer")
        self.setGeometry(100, 100, 1200, 800)

        # Central widget and main horizontal layout
        central_widget = QWidget()
        self.setCentralWidget(central_widget)
        main_layout = QHBoxLayout(central_widget)

        # Left side: Charts layout (OHLC, Volume, OBI, CVD)
        charts_widget = QWidget()
        charts_layout = QVBoxLayout(charts_widget)

        # Right side: Depth chart widget
        depth_widget = QWidget()
        depth_layout = QVBoxLayout(depth_widget)

        # Configure PyQtGraph
        pg.setConfigOptions(antialias=True, background='k', foreground='w')

        # Create multiple plot widgets for different charts
        self.ohlc_plot = pg.PlotWidget(title="OHLC Candlestick Chart")
        self.ohlc_plot.setLabel('left', 'Price', units='USD')
        self.ohlc_plot.showGrid(x=True, y=True, alpha=0.3)
        self.ohlc_plot.setMouseEnabled(x=True, y=True)
        self.ohlc_plot.enableAutoRange(axis='xy', enable=False)  # Disable auto-range for better control

        self.volume_plot = pg.PlotWidget(title="Volume")
        self.volume_plot.setLabel('left', 'Volume')
        self.volume_plot.showGrid(x=True, y=True, alpha=0.3)
        self.volume_plot.setMouseEnabled(x=True, y=True)
        self.volume_plot.enableAutoRange(axis='xy', enable=False)

        self.obi_plot = pg.PlotWidget(title="Order Book Imbalance (OBI)")
        self.obi_plot.setLabel('left', 'OBI')
        self.obi_plot.showGrid(x=True, y=True, alpha=0.3)
        self.obi_plot.setMouseEnabled(x=True, y=True)
        self.obi_plot.enableAutoRange(axis='xy', enable=False)

        self.cvd_plot = pg.PlotWidget(title="Cumulative Volume Delta (CVD)")
        self.cvd_plot.setLabel('left', 'CVD')
        self.cvd_plot.setLabel('bottom', 'Time', units='s')
        self.cvd_plot.showGrid(x=True, y=True, alpha=0.3)
        self.cvd_plot.setMouseEnabled(x=True, y=True)
        self.cvd_plot.enableAutoRange(axis='xy', enable=False)

        # Create depth chart (right side)
        self.depth_plot = pg.PlotWidget(title="Order Book Depth")
        self.depth_plot.setLabel('left', 'Price', units='USD')
        self.depth_plot.setLabel('bottom', 'Cumulative Volume')
        self.depth_plot.showGrid(x=True, y=True, alpha=0.3)
        self.depth_plot.setMouseEnabled(x=True, y=True)

        # Link x-axes for synchronized zooming/panning (main charts only)
        self.volume_plot.setXLink(self.ohlc_plot)
        self.obi_plot.setXLink(self.ohlc_plot)
        self.cvd_plot.setXLink(self.ohlc_plot)

        # Add crosshairs to time-series charts
        self._setup_crosshairs()

        # Add charts to left layout
        charts_layout.addWidget(self.ohlc_plot, 3)  # Larger space for OHLC
        charts_layout.addWidget(self.volume_plot, 1)
        charts_layout.addWidget(self.obi_plot, 1)
        charts_layout.addWidget(self.cvd_plot, 1)

        # Add depth chart to right layout
        depth_layout.addWidget(self.depth_plot)

        # Add both sides to main layout (3:1 ratio similar to original Dash)
        main_layout.addWidget(charts_widget, 3)
        main_layout.addWidget(depth_widget, 1)

        logging.info("UI setup completed")

    def _setup_crosshairs(self):
        """Setup crosshair functionality for time-series charts."""
        # Create crosshair lines with proper pen style
        crosshair_pen = pg.mkPen(color='#888888', width=1, style=QtCore.Qt.DashLine)
        self.vline = pg.InfiniteLine(angle=90, movable=False, pen=crosshair_pen)
        self.hline_ohlc = pg.InfiniteLine(angle=0, movable=False, pen=crosshair_pen)
        self.hline_volume = pg.InfiniteLine(angle=0, movable=False, pen=crosshair_pen)
        self.hline_obi = pg.InfiniteLine(angle=0, movable=False, pen=crosshair_pen)
        self.hline_cvd = pg.InfiniteLine(angle=0, movable=False, pen=crosshair_pen)

        # Add crosshairs to plots
        self.ohlc_plot.addItem(self.vline, ignoreBounds=True)
        self.ohlc_plot.addItem(self.hline_ohlc, ignoreBounds=True)
        self.volume_plot.addItem(self.vline, ignoreBounds=True)
        self.volume_plot.addItem(self.hline_volume, ignoreBounds=True)
        self.obi_plot.addItem(self.vline, ignoreBounds=True)
        self.obi_plot.addItem(self.hline_obi, ignoreBounds=True)
        self.cvd_plot.addItem(self.vline, ignoreBounds=True)
        self.cvd_plot.addItem(self.hline_cvd, ignoreBounds=True)

        # Connect mouse move events
        self.ohlc_plot.scene().sigMouseMoved.connect(self._on_mouse_moved)
        self.volume_plot.scene().sigMouseMoved.connect(self._on_mouse_moved)
        self.obi_plot.scene().sigMouseMoved.connect(self._on_mouse_moved)
        self.cvd_plot.scene().sigMouseMoved.connect(self._on_mouse_moved)

        # Create data inspection label
        self.data_label = pg.LabelItem(justify='left')
        self.ohlc_plot.addItem(self.data_label)

        # Add rectangle selection for zoom functionality
        self._setup_rectangle_selection()

        logging.debug("Crosshairs setup completed")

    def _setup_rectangle_selection(self):
        """Setup rectangle selection for zoom functionality."""
        # Enable rectangle selection on OHLC plot (main chart)
        self.ohlc_plot.setMenuEnabled(False)  # Disable context menu for cleaner interaction

        # Add double-click to auto-range
        self.ohlc_plot.plotItem.vb.mouseDoubleClickEvent = self._on_double_click

        # Rectangle selection is handled by PyQtGraph's built-in ViewBox behavior
        # Users can drag to select area and right-click to zoom to selection
        logging.debug("Rectangle selection setup completed")

    def _on_double_click(self, event):
        """Handle double-click to auto-range all charts."""
        try:
            # Auto-range all linked charts
            self.ohlc_plot.autoRange()
            self.volume_plot.autoRange()
            self.obi_plot.autoRange()
            self.cvd_plot.autoRange()
            logging.debug("Auto-range applied to all charts")
        except Exception as e:
            logging.debug(f"Error in double-click handler: {e}")

    def update_data(self, data_processor: OHLCProcessor):
        """Update chart data from direct processor or JSON files."""
        self._get_data_from_processor(data_processor)

    def _load_ohlc_data(self):
        """Load OHLC data from JSON file."""
        ohlc_file = Path("ohlc_data.json")
        if not ohlc_file.exists():
            return

        try:
            with open(ohlc_file, 'r') as f:
                data = json.load(f)

            # Only update if data has changed
            if len(data) != self.last_data_size:
                self.ohlc_data = data
                self.last_data_size = len(data)
                logging.debug(f"Loaded {len(data)} OHLC bars")

        except (json.JSONDecodeError, FileNotFoundError) as e:
            logging.warning(f"Failed to load OHLC data: {e}")

    def _load_metrics_data(self):
        """Load metrics data (OBI, CVD) from JSON file."""
        metrics_file = Path("metrics_data.json")
        if not metrics_file.exists():
            return

        try:
            with open(metrics_file, 'r') as f:
                data = json.load(f)

            # Only update if data has changed
            if len(data) != self.last_metrics_size:
                self.metrics_data = data
                self.last_metrics_size = len(data)
                logging.debug(f"Loaded {len(data)} metrics bars")

        except (json.JSONDecodeError, FileNotFoundError) as e:
            logging.warning(f"Failed to load metrics data: {e}")

    def _load_depth_data(self):
        """Load depth data (bids, asks) from JSON file."""
        depth_file = Path("depth_data.json")
        if not depth_file.exists():
            return

        try:
            with open(depth_file, 'r') as f:
                data = json.load(f)

            # Only update if data has changed
            if data != self.last_depth_data:
                self.depth_data = data
                self.last_depth_data = data.copy() if isinstance(data, dict) else data
                logging.debug(f"Loaded depth data: {len(data.get('bids', []))} bids, {len(data.get('asks', []))} asks")

        except (json.JSONDecodeError, FileNotFoundError) as e:
            logging.warning(f"Failed to load depth data: {e}")

    def _update_all_plots(self):
        """Update all chart plots with current data."""
        self._update_ohlc_plot()
        self._update_volume_plot()
        self._update_obi_plot()
        self._update_cvd_plot()
        self._update_depth_plot()

    def _update_ohlc_plot(self):
        """Update the OHLC plot with candlestick chart."""
        if not self.ohlc_data:
            return

        # Clear existing plot items (but preserve crosshairs)
        items = [item for item in self.ohlc_plot.items() if not isinstance(item, pg.InfiniteLine)]
        for item in items:
            self.ohlc_plot.removeItem(item)

        # Create OHLC candlestick item
        ohlc_item = OHLCItem(self.ohlc_data)
        self.ohlc_plot.addItem(ohlc_item)

        logging.debug(f"Updated OHLC chart with {len(self.ohlc_data)} bars")

    def _update_volume_plot(self):
        """Update volume bar chart."""
        if not self.ohlc_data:
            return

        # Clear existing plot items (but preserve crosshairs)
        items = [item for item in self.volume_plot.items() if not isinstance(item, pg.InfiniteLine)]
        for item in items:
            self.volume_plot.removeItem(item)

        # Extract volume and price change data
        timestamps = [bar[0] / 1000 for bar in self.ohlc_data]
        volumes = [bar[5] for bar in self.ohlc_data]

        # Create volume bars with color coding
        for i, (ts, vol) in enumerate(zip(timestamps, volumes)):
            # Determine color based on price movement
            if i > 0:
                prev_close = self.ohlc_data[i-1][4]  # Previous close
                curr_close = self.ohlc_data[i][4]    # Current close
                color = '#00ff00' if curr_close >= prev_close else '#ff0000'
            else:
                color = '#888888'  # Neutral for first bar

            # Create bar
            bar_item = pg.BarGraphItem(x=[ts], height=[vol], width=30, brush=color)
            self.volume_plot.addItem(bar_item)

        logging.debug(f"Updated volume chart with {len(volumes)} bars")

    def _update_obi_plot(self):
        """Update OBI candlestick chart."""
        if not self.metrics_data:
            return

        # Clear existing plot items (but preserve crosshairs)
        items = [item for item in self.obi_plot.items() if not isinstance(item, pg.InfiniteLine)]
        for item in items:
            self.obi_plot.removeItem(item)

        # Extract OBI data and create candlestick format
        obi_candlesticks = []
        for bar in self.metrics_data:
            if len(bar) >= 5:  # Ensure we have OBI data
                timestamp, obi_open, obi_high, obi_low, obi_close = bar[:5]
                obi_candlesticks.append([timestamp, obi_open, obi_high, obi_low, obi_close, 0])

        if obi_candlesticks:
            # Create OBI candlestick item with blue styling
            obi_item = OBIItem(obi_candlesticks)  # Will create this class
            self.obi_plot.addItem(obi_item)

        logging.debug(f"Updated OBI chart with {len(obi_candlesticks)} bars")

    def _update_cvd_plot(self):
        """Update CVD line chart."""
        if not self.metrics_data:
            return

        # Clear existing plot items (but preserve crosshairs)
        items = [item for item in self.cvd_plot.items() if not isinstance(item, pg.InfiniteLine)]
        for item in items:
            self.cvd_plot.removeItem(item)

        # Extract CVD data
        timestamps = []
        cvd_values = []

        for bar in self.metrics_data:
            if len(bar) >= 6:  # Check for CVD value
                timestamps.append(bar[0] / 1000)  # Convert to seconds
                cvd_values.append(bar[5])  # CVD value
            elif len(bar) >= 5:  # Fallback for older format
                timestamps.append(bar[0] / 1000)
                cvd_values.append(0.0)  # Default CVD

        if timestamps and cvd_values:
            # Plot CVD as line chart
            self.cvd_plot.plot(timestamps, cvd_values, pen=pg.mkPen(color='#ffff00', width=2), name='CVD')

        logging.debug(f"Updated CVD chart with {len(cvd_values)} points")

def _cumulate_levels(self, levels, reverse=False, limit=50):
|
||||
"""
|
||||
Convert individual price levels to cumulative volumes.
|
||||
|
||||
Args:
|
||||
levels: List of [price, size] pairs
|
||||
reverse: If True, sort in descending order (for bids)
|
||||
limit: Maximum number of levels to include
|
||||
|
||||
Returns:
|
||||
List of (price, cumulative_volume) tuples
|
||||
"""
|
||||
if not levels:
|
||||
return []
|
||||
|
||||
try:
|
||||
# Sort levels by price
|
||||
sorted_levels = sorted(levels[:limit], key=lambda x: x[0], reverse=reverse)
|
||||
|
||||
# Calculate cumulative volumes
|
||||
cumulative = []
|
||||
total_volume = 0.0
|
||||
|
||||
for price, size in sorted_levels:
|
||||
total_volume += size
|
||||
cumulative.append((price, total_volume))
|
||||
|
||||
return cumulative
|
||||
|
||||
except Exception as e:
|
||||
logging.warning(f"Error in cumulate_levels: {e}")
|
||||
return []
|
||||
|
||||
def _update_depth_plot(self):
|
||||
"""Update depth chart with cumulative bid/ask visualization."""
|
||||
if not self.depth_data or not isinstance(self.depth_data, dict):
|
||||
return
|
||||
|
||||
# Clear all items for depth chart (no crosshairs here)
|
||||
self.depth_plot.clear()
|
||||
|
||||
bids = self.depth_data.get('bids', [])
|
||||
asks = self.depth_data.get('asks', [])
|
||||
|
||||
# Calculate cumulative levels
|
||||
cum_bids = self._cumulate_levels(bids, reverse=True, limit=50)
|
||||
cum_asks = self._cumulate_levels(asks, reverse=False, limit=50)
|
||||
|
||||
# Plot bids (green)
|
||||
if cum_bids:
|
||||
bid_volumes = [vol for _, vol in cum_bids]
|
||||
bid_prices = [price for price, _ in cum_bids]
|
||||
|
||||
# Create stepped line plot for bids
|
||||
self.depth_plot.plot(bid_volumes, bid_prices,
|
||||
pen=pg.mkPen(color='#00c800', width=2),
|
||||
stepMode='left', name='Bids')
|
||||
|
||||
# Add fill area
|
||||
fill_curve = pg.PlotCurveItem(bid_volumes + [0], bid_prices + [bid_prices[-1]],
|
||||
fillLevel=0, fillBrush=pg.mkBrush(color=(0, 200, 0, 50)))
|
||||
self.depth_plot.addItem(fill_curve)
|
||||
|
||||
# Plot asks (red)
|
||||
if cum_asks:
|
||||
ask_volumes = [vol for _, vol in cum_asks]
|
||||
ask_prices = [price for price, _ in cum_asks]
|
||||
|
||||
# Create stepped line plot for asks
|
||||
self.depth_plot.plot(ask_volumes, ask_prices,
|
||||
pen=pg.mkPen(color='#c80000', width=2),
|
||||
stepMode='left', name='Asks')
|
||||
|
||||
# Add fill area
|
||||
fill_curve = pg.PlotCurveItem(ask_volumes + [0], ask_prices + [ask_prices[-1]],
|
||||
fillLevel=0, fillBrush=pg.mkBrush(color=(200, 0, 0, 50)))
|
||||
self.depth_plot.addItem(fill_curve)
|
||||
|
||||
logging.debug(f"Updated depth chart: {len(cum_bids)} bid levels, {len(cum_asks)} ask levels")
|
||||
|
||||
def _on_mouse_moved(self, pos):
|
||||
"""Handle mouse movement for crosshair and data inspection."""
|
||||
try:
|
||||
# Determine which plot triggered the event
|
||||
sender = self.sender()
|
||||
if not sender:
|
||||
return
|
||||
|
||||
# Find the plot widget from the scene
|
||||
plot_widget = None
|
||||
for plot in [self.ohlc_plot, self.volume_plot, self.obi_plot, self.cvd_plot]:
|
||||
if plot.scene() == sender:
|
||||
plot_widget = plot
|
||||
break
|
||||
|
||||
if not plot_widget:
|
||||
return
|
||||
|
||||
# Convert scene coordinates to plot coordinates
|
||||
if plot_widget.sceneBoundingRect().contains(pos):
|
||||
mouse_point = plot_widget.plotItem.vb.mapSceneToView(pos)
|
||||
x_pos = mouse_point.x()
|
||||
y_pos = mouse_point.y()
|
||||
|
||||
# Update crosshair positions
|
||||
self.vline.setPos(x_pos)
|
||||
self.hline_ohlc.setPos(y_pos if plot_widget == self.ohlc_plot else self.hline_ohlc.pos()[1])
|
||||
self.hline_volume.setPos(y_pos if plot_widget == self.volume_plot else self.hline_volume.pos()[1])
|
||||
self.hline_obi.setPos(y_pos if plot_widget == self.obi_plot else self.hline_obi.pos()[1])
|
||||
self.hline_cvd.setPos(y_pos if plot_widget == self.cvd_plot else self.hline_cvd.pos()[1])
|
||||
|
||||
# Update data inspection
|
||||
self._update_data_inspection(x_pos, plot_widget)
|
||||
|
||||
except Exception as e:
|
||||
logging.debug(f"Error in mouse move handler: {e}")
|
||||
|
||||
def _update_data_inspection(self, x_pos, plot_widget):
|
||||
"""Update data inspection label with values at cursor position."""
|
||||
try:
|
||||
info_parts = []
|
||||
|
||||
# Find closest data point for OHLC
|
||||
if self.ohlc_data:
|
||||
closest_ohlc = self._find_closest_data_point(x_pos, self.ohlc_data)
|
||||
if closest_ohlc:
|
||||
ts, open_p, high, low, close, volume = closest_ohlc
|
||||
time_str = self._format_timestamp(ts)
|
||||
info_parts.append(f"Time: {time_str}")
|
||||
info_parts.append(f"OHLC: O:{open_p:.2f} H:{high:.2f} L:{low:.2f} C:{close:.2f}")
|
||||
info_parts.append(f"Volume: {volume:.4f}")
|
||||
|
||||
# Find closest data point for Metrics (OBI/CVD)
|
||||
if self.metrics_data:
|
||||
closest_metrics = self._find_closest_data_point(x_pos, self.metrics_data)
|
||||
if closest_metrics and len(closest_metrics) >= 5:
|
||||
ts, obi_o, obi_h, obi_l, obi_c = closest_metrics[:5]
|
||||
cvd_val = closest_metrics[5] if len(closest_metrics) > 5 else 0.0
|
||||
info_parts.append(f"OBI: O:{obi_o:.2f} H:{obi_h:.2f} L:{obi_l:.2f} C:{obi_c:.2f}")
|
||||
info_parts.append(f"CVD: {cvd_val:.2f}")
|
||||
|
||||
# Update label
|
||||
if info_parts:
|
||||
self.data_label.setText("<br>".join(info_parts))
|
||||
else:
|
||||
self.data_label.setText("No data")
|
||||
|
||||
except Exception as e:
|
||||
logging.debug(f"Error updating data inspection: {e}")
|
||||
|
||||
def _find_closest_data_point(self, x_pos, data):
|
||||
"""Find the closest data point to the given x position."""
|
||||
if not data:
|
||||
return None
|
||||
|
||||
# Convert x_pos (seconds) back to milliseconds for comparison
|
||||
x_ms = x_pos * 1000
|
||||
|
||||
# Find closest timestamp
|
||||
closest_idx = 0
|
||||
min_diff = abs(data[0][0] - x_ms)
|
||||
|
||||
for i, bar in enumerate(data):
|
||||
diff = abs(bar[0] - x_ms)
|
||||
if diff < min_diff:
|
||||
min_diff = diff
|
||||
closest_idx = i
|
||||
|
||||
return data[closest_idx]
|
||||
|
||||
def _format_timestamp(self, timestamp_ms):
|
||||
"""Format timestamp for display."""
|
||||
from datetime import datetime
|
||||
try:
|
||||
dt = datetime.fromtimestamp(timestamp_ms / 1000)
|
||||
return dt.strftime("%H:%M:%S")
|
||||
except (ValueError, OSError, OverflowError):
|
||||
return str(int(timestamp_ms))
|
||||
|
||||
def setup_data_processor(self, processor):
|
||||
"""Setup direct data integration with OHLCProcessor."""
|
||||
self.data_processor = processor
|
||||
# For now, use JSON mode as the direct mode implementation is incomplete
|
||||
self.direct_mode = False
|
||||
|
||||
# Setup callbacks for real-time data updates
|
||||
self._setup_processor_callbacks()
|
||||
|
||||
logging.info("Data processor reference set, using JSON file mode for visualization")
|
||||
|
||||
def _setup_processor_callbacks(self):
|
||||
"""Setup callbacks to receive data directly from processor."""
|
||||
if not self.data_processor:
|
||||
return
|
||||
|
||||
# Replace JSON polling with direct data access
|
||||
# Note: This is a simplified approach - in production, you'd want proper callbacks
|
||||
# from the processor when new data is available
|
||||
|
||||
logging.debug("Processor callbacks setup completed")
|
||||
|
||||
def _get_data_from_processor(self, data_processor: OHLCProcessor):
|
||||
"""Get data directly from processor instead of JSON files."""
|
||||
try:
|
||||
self.ohlc_data = data_processor.get_ohlc_data()
|
||||
self.metrics_data = data_processor.get_metrics_data()
|
||||
self.depth_data = data_processor.get_current_depth()
|
||||
|
||||
# Get OHLC data from processor (placeholder - needs actual processor API)
|
||||
# processor_ohlc = self.data_processor.get_ohlc_data()
|
||||
# if processor_ohlc:
|
||||
# self.ohlc_data = processor_ohlc
|
||||
|
||||
# Get metrics data from processor
|
||||
# processor_metrics = self.data_processor.get_metrics_data()
|
||||
# if processor_metrics:
|
||||
# self.metrics_data = processor_metrics
|
||||
|
||||
# Get depth data from processor
|
||||
# processor_depth = self.data_processor.get_current_depth()
|
||||
# if processor_depth:
|
||||
# self.depth_data = processor_depth
|
||||
|
||||
logging.debug("Retrieved data directly from processor")
|
||||
|
||||
except Exception as e:
|
||||
logging.warning(f"Error getting data from processor: {e}")
|
||||
# Fallback to JSON mode
|
||||
self.direct_mode = False
|
||||
|
||||
def main():
|
||||
"""Application entry point."""
|
||||
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
|
||||
|
||||
app = QApplication(sys.argv)
|
||||
|
||||
window = MainWindow()
|
||||
window.show()
|
||||
|
||||
logging.info("Desktop application started")
|
||||
|
||||
sys.exit(app.exec())
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
756
docs/API.md
@ -1,550 +1,23 @@
|
||||
# API Documentation
|
||||
# API Documentation (Current Implementation)
|
||||
|
||||
## Overview
|
||||
|
||||
This document provides comprehensive API documentation for the Orderflow Backtest System, including public interfaces, data models, and usage examples.
|
||||
This document describes the public interfaces of the current system: SQLite streaming, OHLC/depth aggregation, JSON-based IPC, and the Dash visualizer. Metrics (OBI/CVD), repository/storage layers, and strategy APIs are not part of the current implementation.
|
||||
|
||||
## Core Data Models
|
||||
## Input Database Schema (Required)
|
||||
|
||||
### OrderbookLevel
|
||||
|
||||
Represents a single price level in the orderbook.
|
||||
|
||||
```python
|
||||
@dataclass(slots=True)
|
||||
class OrderbookLevel:
|
||||
price: float # Price level
|
||||
size: float # Total size at this price
|
||||
liquidation_count: int # Number of liquidations
|
||||
order_count: int # Number of resting orders
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
level = OrderbookLevel(
|
||||
price=50000.0,
|
||||
size=10.5,
|
||||
liquidation_count=0,
|
||||
order_count=3
|
||||
)
|
||||
```
|
||||
|
||||
### Trade
|
||||
|
||||
Represents a single trade execution.
|
||||
|
||||
```python
|
||||
@dataclass(slots=True)
|
||||
class Trade:
|
||||
id: int # Unique trade identifier
|
||||
trade_id: float # Exchange trade ID
|
||||
price: float # Execution price
|
||||
size: float # Trade size
|
||||
side: str # "buy" or "sell"
|
||||
timestamp: int # Unix timestamp
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
trade = Trade(
|
||||
id=1,
|
||||
trade_id=123456.0,
|
||||
price=50000.0,
|
||||
size=0.5,
|
||||
side="buy",
|
||||
timestamp=1640995200
|
||||
)
|
||||
```
|
||||
|
||||
### BookSnapshot
|
||||
|
||||
Complete orderbook state at a specific timestamp.
|
||||
|
||||
```python
|
||||
@dataclass
|
||||
class BookSnapshot:
|
||||
id: int # Snapshot identifier
|
||||
timestamp: int # Unix timestamp
|
||||
bids: Dict[float, OrderbookLevel] # Bid side levels
|
||||
asks: Dict[float, OrderbookLevel] # Ask side levels
|
||||
trades: List[Trade] # Associated trades
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
snapshot = BookSnapshot(
|
||||
id=1,
|
||||
timestamp=1640995200,
|
||||
bids={
|
||||
50000.0: OrderbookLevel(50000.0, 10.0, 0, 1),
|
||||
49999.0: OrderbookLevel(49999.0, 5.0, 0, 1)
|
||||
},
|
||||
asks={
|
||||
50001.0: OrderbookLevel(50001.0, 3.0, 0, 1),
|
||||
50002.0: OrderbookLevel(50002.0, 2.0, 0, 1)
|
||||
},
|
||||
trades=[]
|
||||
)
|
||||
```
|
||||
|
||||
### Metric
|
||||
|
||||
Calculated financial metrics for a snapshot.
|
||||
|
||||
```python
|
||||
@dataclass(slots=True)
|
||||
class Metric:
|
||||
snapshot_id: int # Reference to source snapshot
|
||||
timestamp: int # Unix timestamp
|
||||
obi: float # Order Book Imbalance [-1, 1]
|
||||
cvd: float # Cumulative Volume Delta
|
||||
best_bid: float | None # Best bid price
|
||||
best_ask: float | None # Best ask price
|
||||
```
|
||||
|
||||
**Example:**
|
||||
```python
|
||||
metric = Metric(
|
||||
snapshot_id=1,
|
||||
timestamp=1640995200,
|
||||
obi=0.333,
|
||||
cvd=150.5,
|
||||
best_bid=50000.0,
|
||||
best_ask=50001.0
|
||||
)
|
||||
```
|
||||
|
||||
## MetricCalculator API
|
||||
|
||||
Static class providing financial metric calculations.
|
||||
|
||||
### calculate_obi()
|
||||
|
||||
```python
|
||||
@staticmethod
|
||||
def calculate_obi(snapshot: BookSnapshot) -> float:
|
||||
"""
|
||||
Calculate Order Book Imbalance.
|
||||
|
||||
Formula: OBI = (Vb - Va) / (Vb + Va)
|
||||
|
||||
Args:
|
||||
snapshot: BookSnapshot with bids and asks
|
||||
|
||||
Returns:
|
||||
float: OBI value between -1 and 1
|
||||
|
||||
Example:
|
||||
>>> obi = MetricCalculator.calculate_obi(snapshot)
|
||||
>>> print(f"OBI: {obi:.3f}")
|
||||
OBI: 0.333
|
||||
"""
|
||||
```
|
||||
|
||||
### calculate_volume_delta()
|
||||
|
||||
```python
|
||||
@staticmethod
|
||||
def calculate_volume_delta(trades: List[Trade]) -> float:
|
||||
"""
|
||||
Calculate Volume Delta for trades.
|
||||
|
||||
Formula: VD = Buy Volume - Sell Volume
|
||||
|
||||
Args:
|
||||
trades: List of Trade objects
|
||||
|
||||
Returns:
|
||||
float: Net volume delta
|
||||
|
||||
Example:
|
||||
>>> vd = MetricCalculator.calculate_volume_delta(trades)
|
||||
>>> print(f"Volume Delta: {vd}")
|
||||
Volume Delta: 7.5
|
||||
"""
|
||||
```
|
||||
|
||||
### calculate_cvd()
|
||||
|
||||
```python
|
||||
@staticmethod
|
||||
def calculate_cvd(previous_cvd: float, volume_delta: float) -> float:
|
||||
"""
|
||||
Calculate Cumulative Volume Delta.
|
||||
|
||||
Formula: CVD_t = CVD_{t-1} + VD_t
|
||||
|
||||
Args:
|
||||
previous_cvd: Previous CVD value
|
||||
volume_delta: Current volume delta
|
||||
|
||||
Returns:
|
||||
float: New CVD value
|
||||
|
||||
Example:
|
||||
>>> cvd = MetricCalculator.calculate_cvd(100.0, 7.5)
|
||||
>>> print(f"CVD: {cvd}")
|
||||
CVD: 107.5
|
||||
"""
|
||||
```
|
||||
|
||||
### get_best_bid_ask()
|
||||
|
||||
```python
|
||||
@staticmethod
|
||||
def get_best_bid_ask(snapshot: BookSnapshot) -> tuple[float | None, float | None]:
|
||||
"""
|
||||
Extract best bid and ask prices.
|
||||
|
||||
Args:
|
||||
snapshot: BookSnapshot with bids and asks
|
||||
|
||||
Returns:
|
||||
tuple: (best_bid, best_ask) or (None, None)
|
||||
|
||||
Example:
|
||||
>>> best_bid, best_ask = MetricCalculator.get_best_bid_ask(snapshot)
|
||||
>>> print(f"Spread: {best_ask - best_bid}")
|
||||
Spread: 1.0
|
||||
"""
|
||||
```
|
||||
|
||||
## Repository APIs
|
||||
|
||||
### SQLiteOrderflowRepository
|
||||
|
||||
Repository for orderbook, trades data and metrics.
|
||||
|
||||
#### connect()
|
||||
|
||||
```python
|
||||
def connect(self) -> sqlite3.Connection:
|
||||
"""
|
||||
Create optimized SQLite connection.
|
||||
|
||||
Returns:
|
||||
sqlite3.Connection: Configured database connection
|
||||
|
||||
Example:
|
||||
>>> repo = SQLiteOrderflowRepository(db_path)
|
||||
>>> with repo.connect() as conn:
|
||||
... # Use connection
|
||||
"""
|
||||
```
|
||||
|
||||
#### load_trades_by_timestamp()
|
||||
|
||||
```python
|
||||
def load_trades_by_timestamp(self, conn: sqlite3.Connection) -> Dict[int, List[Trade]]:
|
||||
"""
|
||||
Load all trades grouped by timestamp.
|
||||
|
||||
Args:
|
||||
conn: Active database connection
|
||||
|
||||
Returns:
|
||||
Dict[int, List[Trade]]: Trades grouped by timestamp
|
||||
|
||||
Example:
|
||||
>>> trades_by_ts = repo.load_trades_by_timestamp(conn)
|
||||
>>> trades_at_1000 = trades_by_ts.get(1000, [])
|
||||
"""
|
||||
```
|
||||
|
||||
#### iterate_book_rows()
|
||||
|
||||
```python
|
||||
def iterate_book_rows(self, conn: sqlite3.Connection) -> Iterator[Tuple[int, str, str, int]]:
|
||||
"""
|
||||
Memory-efficient iteration over orderbook rows.
|
||||
|
||||
Args:
|
||||
conn: Active database connection
|
||||
|
||||
Yields:
|
||||
Tuple[int, str, str, int]: (id, bids_text, asks_text, timestamp)
|
||||
|
||||
Example:
|
||||
>>> for row_id, bids, asks, ts in repo.iterate_book_rows(conn):
|
||||
... # Process row
|
||||
"""
|
||||
```
|
||||
|
||||
#### create_metrics_table()
|
||||
|
||||
```python
|
||||
def create_metrics_table(self, conn: sqlite3.Connection) -> None:
|
||||
"""
|
||||
Create metrics table with indexes.
|
||||
|
||||
Args:
|
||||
conn: Active database connection
|
||||
|
||||
Raises:
|
||||
sqlite3.Error: If table creation fails
|
||||
|
||||
Example:
|
||||
>>> repo.create_metrics_table(conn)
|
||||
>>> # Metrics table now available
|
||||
"""
|
||||
```
|
||||
|
||||
#### insert_metrics_batch()
|
||||
|
||||
```python
|
||||
def insert_metrics_batch(self, conn: sqlite3.Connection, metrics: List[Metric]) -> None:
|
||||
"""
|
||||
Insert metrics in batch for performance.
|
||||
|
||||
Args:
|
||||
conn: Active database connection
|
||||
metrics: List of Metric objects to insert
|
||||
|
||||
Example:
|
||||
>>> metrics = [Metric(...), Metric(...)]
|
||||
>>> repo.insert_metrics_batch(conn, metrics)
|
||||
>>> conn.commit()
|
||||
"""
|
||||
```
|
||||
|
||||
#### load_metrics_by_timerange()
|
||||
|
||||
```python
|
||||
def load_metrics_by_timerange(
|
||||
self,
|
||||
conn: sqlite3.Connection,
|
||||
start_timestamp: int,
|
||||
end_timestamp: int
|
||||
) -> List[Metric]:
|
||||
"""
|
||||
Load metrics within time range.
|
||||
|
||||
Args:
|
||||
conn: Active database connection
|
||||
start_timestamp: Start time (inclusive)
|
||||
end_timestamp: End time (inclusive)
|
||||
|
||||
Returns:
|
||||
List[Metric]: Metrics ordered by timestamp
|
||||
|
||||
Example:
|
||||
>>> metrics = repo.load_metrics_by_timerange(conn, 1000, 2000)
|
||||
>>> print(f"Loaded {len(metrics)} metrics")
|
||||
"""
|
||||
```
|
||||
|
||||
## Storage API
|
||||
|
||||
### Storage
|
||||
|
||||
High-level data processing orchestrator.
|
||||
|
||||
#### __init__()
|
||||
|
||||
```python
|
||||
def __init__(self, instrument: str) -> None:
|
||||
"""
|
||||
Initialize storage for specific instrument.
|
||||
|
||||
Args:
|
||||
instrument: Trading pair identifier (e.g., "BTC-USDT")
|
||||
|
||||
Example:
|
||||
>>> storage = Storage("BTC-USDT")
|
||||
"""
|
||||
```
|
||||
|
||||
#### build_booktick_from_db()
|
||||
|
||||
```python
|
||||
def build_booktick_from_db(self, db_path: Path, db_date: datetime) -> None:
|
||||
"""
|
||||
Process database and calculate metrics.
|
||||
|
||||
This is the main processing pipeline that:
|
||||
1. Loads orderbook and trades data
|
||||
2. Calculates OBI and CVD metrics per snapshot
|
||||
3. Stores metrics in database
|
||||
4. Populates book with snapshots
|
||||
|
||||
Args:
|
||||
db_path: Path to SQLite database file
|
||||
db_date: Date for this database (informational)
|
||||
|
||||
Example:
|
||||
>>> storage.build_booktick_from_db(Path("data.db"), datetime.now())
|
||||
>>> print(f"Processed {len(storage.book.snapshots)} snapshots")
|
||||
"""
|
||||
```
|
||||
|
||||
## Strategy API
|
||||
|
||||
### DefaultStrategy
|
||||
|
||||
Trading strategy with metrics analysis capabilities.
|
||||
|
||||
#### __init__()
|
||||
|
||||
```python
|
||||
def __init__(self, instrument: str) -> None:
|
||||
"""
|
||||
Initialize strategy for instrument.
|
||||
|
||||
Args:
|
||||
instrument: Trading pair identifier
|
||||
|
||||
Example:
|
||||
>>> strategy = DefaultStrategy("BTC-USDT")
|
||||
"""
|
||||
```
|
||||
|
||||
#### set_db_path()
|
||||
|
||||
```python
|
||||
def set_db_path(self, db_path: Path) -> None:
|
||||
"""
|
||||
Configure database path for metrics access.
|
||||
|
||||
Args:
|
||||
db_path: Path to database with metrics
|
||||
|
||||
Example:
|
||||
>>> strategy.set_db_path(Path("data.db"))
|
||||
"""
|
||||
```
|
||||
|
||||
#### load_stored_metrics()
|
||||
|
||||
```python
|
||||
def load_stored_metrics(self, start_timestamp: int, end_timestamp: int) -> List[Metric]:
|
||||
"""
|
||||
Load stored metrics for analysis.
|
||||
|
||||
Args:
|
||||
start_timestamp: Start of time range
|
||||
end_timestamp: End of time range
|
||||
|
||||
Returns:
|
||||
List[Metric]: Metrics for specified range
|
||||
|
||||
Example:
|
||||
>>> metrics = strategy.load_stored_metrics(1000, 2000)
|
||||
>>> latest_obi = metrics[-1].obi
|
||||
"""
|
||||
```
|
||||
|
||||
#### get_metrics_summary()
|
||||
|
||||
```python
|
||||
def get_metrics_summary(self, metrics: List[Metric]) -> dict:
|
||||
"""
|
||||
Generate statistical summary of metrics.
|
||||
|
||||
Args:
|
||||
metrics: List of metrics to analyze
|
||||
|
||||
Returns:
|
||||
dict: Statistical summary with keys:
|
||||
- obi_min, obi_max, obi_avg
|
||||
- cvd_start, cvd_end, cvd_change
|
||||
- total_snapshots
|
||||
|
||||
Example:
|
||||
>>> summary = strategy.get_metrics_summary(metrics)
|
||||
>>> print(f"OBI range: {summary['obi_min']:.3f} to {summary['obi_max']:.3f}")
|
||||
"""
|
||||
```
|
||||
|
||||
## Visualizer API
|
||||
|
||||
### Visualizer
|
||||
|
||||
Multi-chart visualization system.
|
||||
|
||||
#### __init__()
|
||||
|
||||
```python
|
||||
def __init__(self, window_seconds: int = 60, max_bars: int = 200) -> None:
|
||||
"""
|
||||
Initialize visualizer with chart parameters.
|
||||
|
||||
Args:
|
||||
window_seconds: OHLC aggregation window
|
||||
max_bars: Maximum bars to display
|
||||
|
||||
Example:
|
||||
>>> visualizer = Visualizer(window_seconds=300, max_bars=1000)
|
||||
"""
|
||||
```
|
||||
|
||||
#### set_db_path()
|
||||
|
||||
```python
|
||||
def set_db_path(self, db_path: Path) -> None:
|
||||
"""
|
||||
Configure database path for metrics loading.
|
||||
|
||||
Args:
|
||||
db_path: Path to database with metrics
|
||||
|
||||
Example:
|
||||
>>> visualizer.set_db_path(Path("data.db"))
|
||||
"""
|
||||
```
|
||||
|
||||
#### update_from_book()
|
||||
|
||||
```python
|
||||
def update_from_book(self, book: Book) -> None:
|
||||
"""
|
||||
Update charts with book data and stored metrics.
|
||||
|
||||
Creates 4-subplot layout:
|
||||
1. OHLC candlesticks
|
||||
2. Volume bars
|
||||
3. OBI line chart
|
||||
4. CVD line chart
|
||||
|
||||
Args:
|
||||
book: Book with snapshots for OHLC calculation
|
||||
|
||||
Example:
|
||||
>>> visualizer.update_from_book(storage.book)
|
||||
>>> # Charts updated with latest data
|
||||
"""
|
||||
```
|
||||
|
||||
#### show()
|
||||
|
||||
```python
|
||||
def show() -> None:
|
||||
"""
|
||||
Display interactive chart window.
|
||||
|
||||
Example:
|
||||
>>> visualizer.show()
|
||||
>>> # Interactive Qt5 window opens
|
||||
"""
|
||||
```
|
||||
|
||||
## Database Schema
|
||||
|
||||
### Input Tables (Required)
|
||||
|
||||
These tables must exist in the SQLite database files:
|
||||
|
||||
#### book table
|
||||
### book table
|
||||
```sql
|
||||
CREATE TABLE book (
|
||||
id INTEGER PRIMARY KEY,
|
||||
instrument TEXT,
|
||||
    bids TEXT NOT NULL,        -- JSON array or Python-literal: [[price, size, liq_count, order_count], ...]
    asks TEXT NOT NULL,        -- JSON array or Python-literal: [[price, size, liq_count, order_count], ...]
|
||||
timestamp TEXT NOT NULL
|
||||
);
|
||||
```
|
||||
|
||||
#### trades table
|
||||
### trades table
|
||||
```sql
|
||||
CREATE TABLE trades (
|
||||
id INTEGER PRIMARY KEY,
|
||||
@ -557,129 +30,122 @@ CREATE TABLE trades (
|
||||
);
|
||||
```
|
||||
|
||||
### Output Table (Auto-created)

This table is automatically created by the system:

#### metrics table
```sql
CREATE TABLE metrics (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    snapshot_id INTEGER NOT NULL,
    timestamp TEXT NOT NULL,
    obi REAL NOT NULL,           -- Order Book Imbalance [-1, 1]
    cvd REAL NOT NULL,           -- Cumulative Volume Delta
    best_bid REAL,               -- Best bid price
    best_ask REAL,               -- Best ask price
    FOREIGN KEY (snapshot_id) REFERENCES book(id)
);

-- Performance indexes
CREATE INDEX idx_metrics_timestamp ON metrics(timestamp);
CREATE INDEX idx_metrics_snapshot_id ON metrics(snapshot_id);
```

## Data Access: db_interpreter.py

### Classes
- `OrderbookLevel` (dataclass): represents a price level.
- `OrderbookUpdate`: windowed book update with `bids`, `asks`, `timestamp`, `end_timestamp`.

### DBInterpreter
```python
class DBInterpreter:
    def __init__(self, db_path: Path): ...

    def stream(self) -> Iterator[tuple[OrderbookUpdate, list[tuple]]]:
        """
        Stream orderbook rows with one-row lookahead and trades in timestamp order.
        Yields pairs of (OrderbookUpdate, trades_in_window), where each trade tuple is:
        (id, trade_id, price, size, side, timestamp_ms) and timestamp_ms ∈ [timestamp, end_timestamp).
        """
```

- Read-only SQLite connection with PRAGMA tuning (immutable, query_only, mmap, cache).
- Batch sizes: `BOOK_BATCH = 2048`, `TRADE_BATCH = 4096`.
|
||||
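As a hedged illustration of the PRAGMA tuning described above, a read-only connection can be opened roughly like this (the mmap and cache values here are assumed, not the module's actual constants):

```python
import sqlite3
from pathlib import Path

def open_readonly(db_path: Path) -> sqlite3.Connection:
    # Immutable read-only URI: the file is assumed not to change while we read it.
    conn = sqlite3.connect(f"file:{db_path}?immutable=1", uri=True)
    conn.execute("PRAGMA query_only = ON")
    conn.execute("PRAGMA temp_store = MEMORY")
    conn.execute("PRAGMA mmap_size = 268435456")   # 256 MiB memory-mapped I/O (assumed value)
    conn.execute("PRAGMA cache_size = -65536")     # ~64 MiB page cache (assumed value)
    return conn
```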
|
||||
## Processing: ohlc_processor.py
|
||||
|
||||
### OHLCProcessor
|
||||
```python
|
||||
class OHLCProcessor:
|
||||
def __init__(self, window_seconds: int = 60, depth_levels_per_side: int = 50): ...
|
||||
|
||||
def process_trades(self, trades: list[tuple]) -> None:
|
||||
"""Aggregate trades into OHLC bars per window; throttled upserts for UI responsiveness."""
|
||||
|
||||
def update_orderbook(self, ob_update: OrderbookUpdate) -> None:
|
||||
"""Maintain in-memory price→size maps, apply partial updates, and emit top-N depth snapshots periodically."""
|
||||
|
||||
def finalize(self) -> None:
|
||||
"""Emit the last OHLC bar if present."""
|
||||
```
|
||||
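The window aggregation itself is simple bucketing; a minimal sketch (with illustrative helper names, not the processor's actual internals) looks like:

```python
def bucket_start(timestamp_ms: int, window_seconds: int) -> int:
    """Align a millisecond timestamp to the start of its aggregation window."""
    window_ms = window_seconds * 1000
    return (timestamp_ms // window_ms) * window_ms

def apply_trade(bar: dict | None, price: float, size: float) -> dict:
    """Fold one trade into an OHLC bar, creating the bar on the first trade of a window."""
    if bar is None:
        return {"open": price, "high": price, "low": price, "close": price, "volume": size}
    bar["high"] = max(bar["high"], price)
    bar["low"] = min(bar["low"], price)
    bar["close"] = price
    bar["volume"] += size
    return bar
```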
|
||||
- Internal helpers for parsing levels from JSON or Python-literal strings and for applying deletions (size==0).
|
||||
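As a rough sketch of those helpers (function names here are hypothetical), parsing a level string and applying it to one side could look like:

```python
import ast
import json

def parse_levels(raw: str) -> list[tuple[float, float]]:
    """Accept either JSON or a Python-literal string of [[price, size, ...], ...]."""
    try:
        levels = json.loads(raw)
    except json.JSONDecodeError:
        levels = ast.literal_eval(raw)
    return [(float(lvl[0]), float(lvl[1])) for lvl in levels]

def apply_levels(side: dict[float, float], levels: list[tuple[float, float]]) -> None:
    """Partial update of a price -> size map; size == 0 deletes the level."""
    for price, size in levels:
        if size == 0:
            side.pop(price, None)
        else:
            side[price] = size
```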
|
||||
## Inter-Process Communication: viz_io.py
|
||||
|
||||
### Files
|
||||
- `ohlc_data.json`: rolling array of OHLC bars (max 1000).
|
||||
- `depth_data.json`: latest depth snapshot (bids/asks), top-N per side.
|
||||
- `metrics_data.json`: rolling array of OBI OHLC bars (max 1000).
|
||||
|
||||
### Functions
|
||||
```python
|
||||
def add_ohlc_bar(timestamp: int, open_price: float, high_price: float, low_price: float, close_price: float, volume: float = 0.0) -> None: ...
|
||||
|
||||
def upsert_ohlc_bar(timestamp: int, open_price: float, high_price: float, low_price: float, close_price: float, volume: float = 0.0) -> None: ...
|
||||
|
||||
def clear_data() -> None: ...
|
||||
|
||||
def add_metric_bar(timestamp: int, obi_open: float, obi_high: float, obi_low: float, obi_close: float) -> None: ...
|
||||
|
||||
def upsert_metric_bar(timestamp: int, obi_open: float, obi_high: float, obi_low: float, obi_close: float) -> None: ...
|
||||
|
||||
def clear_metrics() -> None: ...
|
||||
```
|
||||
|
||||
- Atomic writes via temp file replace to prevent partial reads.
|
||||
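A minimal sketch of that write pattern (the helper name is illustrative; `viz_io.py` may differ in detail):

```python
import json
import os
from pathlib import Path

def atomic_write_json(path: Path, payload) -> None:
    """Write to a sibling temp file, then atomically replace the target."""
    tmp = path.with_suffix(path.suffix + ".tmp")
    with open(tmp, "w") as f:
        json.dump(payload, f)
    os.replace(tmp, path)  # readers see either the old or the new file, never a partial one
```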
|
||||
## Visualization: app.py (Dash)
|
||||
|
||||
- Three visuals: OHLC+Volume and Depth (cumulative) with Plotly dark theme, plus an OBI candlestick subplot beneath Volume.
|
||||
- Polling interval: 500 ms. Tolerates JSON decode races using cached last values.
|
||||
|
||||
### Callback Contract
|
||||
```python
|
||||
@app.callback(
|
||||
[Output('ohlc-chart', 'figure'), Output('depth-chart', 'figure')],
|
||||
[Input('interval-update', 'n_intervals')]
|
||||
)
|
||||
```
|
||||
- Reads `ohlc_data.json` (list of `[ts, open, high, low, close, volume]`).
|
||||
- Reads `depth_data.json` (`{"bids": [[price, size], ...], "asks": [[price, size], ...]}`).
|
||||
- Reads `metrics_data.json` (list of `[ts, obi_o, obi_h, obi_l, obi_c]`).
|
||||
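On the reading side, a tolerant loader along these lines (illustrative, not the app's exact code) covers decode races by returning the last good payload:

```python
import json
from pathlib import Path

_last_good: dict[str, object] = {}

def read_json_tolerant(name: str, default):
    """Return the parsed file, or the cached last good value if the file is mid-write or missing."""
    try:
        data = json.loads(Path(name).read_text())
        _last_good[name] = data
        return data
    except (FileNotFoundError, json.JSONDecodeError):
        return _last_good.get(name, default)
```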
|
||||
## CLI Orchestration: main.py
|
||||
|
||||
### Typer Entry Point
|
||||
```python
|
||||
def main(instrument: str, start_date: str, end_date: str, window_seconds: int = 60) -> None:
|
||||
"""Stream DBs, process OHLC/depth, and launch Dash visualizer in a separate process."""
|
||||
```
|
||||
|
||||
- Discovers databases under `../data/OKX` matching the instrument and date range.
|
||||
- Launches UI: `uv run python app.py`.
|
||||
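A hedged sketch of the discovery step, assuming one database per day named `<instrument>-YY-MM-DD.db` (the actual naming and path handling may differ):

```python
from datetime import date, timedelta
from pathlib import Path

def discover_dbs(data_dir: Path, instrument: str, start: date, end: date) -> list[Path]:
    """Collect existing per-day databases for [start, end)."""
    paths = []
    day = start
    while day < end:
        candidate = data_dir / f"{instrument}-{day.strftime('%y-%m-%d')}.db"
        if candidate.exists():
            paths.append(candidate)
        day += timedelta(days=1)
    return paths
```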
|
||||
## Usage Examples
|
||||
|
||||
### Complete Processing Workflow
|
||||
|
||||
```python
|
||||
from pathlib import Path
|
||||
from datetime import datetime
|
||||
from storage import Storage
|
||||
from strategies import DefaultStrategy
|
||||
from visualizer import Visualizer
|
||||
|
||||
# Initialize components
|
||||
storage = Storage("BTC-USDT")
|
||||
strategy = DefaultStrategy("BTC-USDT")
|
||||
visualizer = Visualizer(window_seconds=60, max_bars=500)
|
||||
|
||||
# Process database
|
||||
db_path = Path("data/BTC-USDT-25-06-09.db")
|
||||
strategy.set_db_path(db_path)
|
||||
visualizer.set_db_path(db_path)
|
||||
|
||||
# Build book and calculate metrics
|
||||
storage.build_booktick_from_db(db_path, datetime.now())
|
||||
|
||||
# Analyze metrics
|
||||
strategy.on_booktick(storage.book)
|
||||
|
||||
# Update visualization
|
||||
visualizer.update_from_book(storage.book)
|
||||
visualizer.show()
```

### Run processing + UI
|
||||
```bash
|
||||
uv run python main.py BTC-USDT 2025-07-01 2025-08-01 --window-seconds 60
|
||||
# Open http://localhost:8050
|
||||
```
|
||||
|
||||
### Metrics Analysis

```python
# Load and analyze stored metrics
strategy = DefaultStrategy("BTC-USDT")
strategy.set_db_path(Path("data.db"))

# Get metrics for specific time range
metrics = strategy.load_stored_metrics(1640995200, 1640998800)

# Analyze metrics
summary = strategy.get_metrics_summary(metrics)
print(f"OBI Range: {summary['obi_min']:.3f} to {summary['obi_max']:.3f}")
print(f"CVD Change: {summary['cvd_change']:.1f}")

# Find significant imbalances
significant_obi = [m for m in metrics if abs(m.obi) > 0.2]
print(f"Found {len(significant_obi)} snapshots with >20% imbalance")
```

### Custom Metric Calculations

```python
from models import MetricCalculator

# Calculate metrics for single snapshot
obi = MetricCalculator.calculate_obi(snapshot)
best_bid, best_ask = MetricCalculator.get_best_bid_ask(snapshot)

# Calculate CVD over time
cvd = 0.0
for trades in trades_by_timestamp.values():
    volume_delta = MetricCalculator.calculate_volume_delta(trades)
    cvd = MetricCalculator.calculate_cvd(cvd, volume_delta)
    print(f"CVD: {cvd:.1f}")
```

### Process trades and update depth in a loop (conceptual)

```python
from db_interpreter import DBInterpreter
from ohlc_processor import OHLCProcessor

processor = OHLCProcessor(window_seconds=60)
for ob_update, trades in DBInterpreter(db_path).stream():
    processor.process_trades(trades)
    processor.update_orderbook(ob_update)
processor.finalize()
```
|
||||
|
||||
## Error Handling
|
||||
- Reader/Writer coordination via atomic JSON prevents partial reads.
|
||||
- Visualizer caches last valid data if JSON decoding fails mid-write; logs warnings.
|
||||
- Visualizer start failures do not stop processing; logs error and continues.
|
||||
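For example, the launch step can be wrapped so a UI failure is logged without stopping processing (a sketch, not the exact `main.py` code):

```python
import logging
import subprocess

def launch_visualizer() -> subprocess.Popen | None:
    """Start the Dash UI in a separate process; return None if it could not be started."""
    try:
        return subprocess.Popen(["uv", "run", "python", "app.py"])
    except OSError as e:
        logging.error(f"Could not start visualizer: {e}")
        return None
```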
|
||||
### Common Error Scenarios
|
||||
|
||||
#### Database Connection Issues
|
||||
```python
|
||||
try:
|
||||
repo = SQLiteOrderflowRepository(db_path)
|
||||
with repo.connect() as conn:
|
||||
metrics = repo.load_metrics_by_timerange(conn, start, end)
|
||||
except sqlite3.Error as e:
|
||||
logging.error(f"Database error: {e}")
|
||||
metrics = [] # Fallback to empty list
|
||||
```
|
||||
|
||||
#### Missing Metrics Table
|
||||
```python
|
||||
repo = SQLiteOrderflowRepository(db_path)
|
||||
with repo.connect() as conn:
|
||||
if not repo.table_exists(conn, "metrics"):
|
||||
repo.create_metrics_table(conn)
|
||||
logging.info("Created metrics table")
|
||||
```
|
||||
|
||||
#### Empty Data Handling
|
||||
```python
|
||||
# All methods handle empty data gracefully
|
||||
obi = MetricCalculator.calculate_obi(empty_snapshot) # Returns 0.0
|
||||
vd = MetricCalculator.calculate_volume_delta([]) # Returns 0.0
|
||||
summary = strategy.get_metrics_summary([]) # Returns {}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
This API documentation provides complete coverage of the public interfaces for the Orderflow Backtest System. For implementation details and architecture information, see the additional documentation in the `docs/` directory.
|
||||
## Notes
|
||||
- Metrics computation includes simplified OBI (Order Book Imbalance) calculated as bid_total - ask_total. Repository/storage layers and strategy APIs are intentionally kept minimal.
|
||||
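In other words, the simplified imbalance is just the signed difference of tracked depth, roughly:

```python
def simplified_obi(bids: list[tuple[float, float]], asks: list[tuple[float, float]]) -> float:
    """Simplified OBI over the tracked levels: total bid size minus total ask size."""
    bid_total = sum(size for _, size in bids)
    ask_total = sum(size for _, size in asks)
    return bid_total - ask_total
```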
|
||||
@ -5,42 +5,52 @@ All notable changes to the Orderflow Backtest System are documented in this file
|
||||
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
|
||||
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
|
||||
|
||||
## [2.0.0] - 2024-Current
|
||||
## [Unreleased]
|
||||
|
||||
### Added
|
||||
- **OBI Metrics Calculation**: Order Book Imbalance calculation with formula `(Vb - Va) / (Vb + Va)`
|
||||
- **CVD Metrics Calculation**: Cumulative Volume Delta with incremental calculation and reset functionality
|
||||
- **Persistent Metrics Storage**: SQLite-based storage for calculated metrics to avoid recalculation
|
||||
- **Memory Optimization**: >70% reduction in peak memory usage through streaming processing
|
||||
- **Enhanced Visualization**: Multi-subplot charts with OHLC, Volume, OBI, and CVD displays
|
||||
- **MetricCalculator Class**: Static methods for financial metrics computation
|
||||
- **Batch Processing**: High-performance batch inserts (1000 records per operation)
|
||||
- **Time-Range Queries**: Efficient metrics retrieval for specified time periods
|
||||
- **Strategy Enhancement**: Metrics analysis capabilities in `DefaultStrategy`
|
||||
- **Comprehensive Testing**: 27 tests across 6 test files with full integration coverage
|
||||
- Comprehensive documentation structure with module-specific guides
|
||||
- Architecture Decision Records (ADRs) for major technical decisions
|
||||
- CONTRIBUTING.md with development guidelines and standards
|
||||
- Enhanced module documentation in `docs/modules/` directory
|
||||
- Dependency documentation with security and performance considerations
|
||||
|
||||
### Changed
|
||||
- **Storage Architecture**: Modified `Storage.build_booktick_from_db()` to integrate metrics calculation
|
||||
- **Visualization Separation**: Moved visualization from strategy to main application for better separation of concerns
|
||||
- **Strategy Interface**: Simplified `DefaultStrategy` constructor (removed `enable_visualization` parameter)
|
||||
- **Main Application Flow**: Enhanced orchestration with per-database visualization updates
|
||||
- **Database Schema**: Auto-creation of metrics table with proper indexes and foreign key constraints
|
||||
- **Memory Management**: Stream processing instead of keeping full snapshot history
|
||||
- Documentation structure reorganized to follow documentation standards
|
||||
- Improved code documentation requirements with examples
|
||||
- Enhanced testing guidelines with coverage requirements
|
||||
|
||||
### Improved
|
||||
- **Performance**: Batch database operations and optimized SQLite PRAGMAs
|
||||
- **Scalability**: Support for months to years of high-frequency trading data
|
||||
- **Code Quality**: All functions <50 lines, all files <250 lines
|
||||
- **Documentation**: Comprehensive module and API documentation
|
||||
- **Error Handling**: Graceful degradation and comprehensive logging
|
||||
- **Type Safety**: Full type annotations throughout codebase
|
||||
## [2.0.0] - 2024-12-Present
|
||||
|
||||
### Added
|
||||
- **Simplified Pipeline Architecture**: Streamlined SQLite → OHLC/Depth → JSON → Dash pipeline
|
||||
- **JSON-based IPC**: Atomic file-based communication between processor and visualizer
|
||||
- **Real-time Visualization**: Dash web application with 500ms polling updates
|
||||
- **OHLC Aggregation**: Configurable time window aggregation with throttled updates
|
||||
- **Orderbook Depth**: Real-time depth snapshots with top-N level management
|
||||
- **OBI Metrics**: Order Book Imbalance calculation with candlestick visualization
|
||||
- **Atomic JSON Operations**: Race-condition-free data exchange via temp files
|
||||
- **CLI Orchestration**: Typer-based command interface with process management
|
||||
- **Performance Optimizations**: Batch reading with optimized SQLite PRAGMA settings
|
||||
|
||||
### Changed
|
||||
- **Architecture Simplification**: Removed complex repository/storage layers
|
||||
- **Data Flow**: Direct streaming from database to visualization via JSON
|
||||
- **Error Handling**: Graceful degradation with cached data fallbacks
|
||||
- **Process Management**: Separate visualization process launched automatically
|
||||
- **Memory Efficiency**: Bounded datasets prevent unlimited memory growth
|
||||
|
||||
### Technical Details
|
||||
- **New Tables**: `metrics` table with indexes on timestamp and snapshot_id
|
||||
- **New Models**: `Metric` dataclass for calculated values
|
||||
- **Processing Pipeline**: Snapshot → Calculate → Store → Discard workflow
|
||||
- **Query Interface**: Time-range based metrics retrieval
|
||||
- **Visualization Layout**: 4-subplot layout with shared time axis
|
||||
- **Database Access**: Read-only SQLite with immutable mode and mmap optimization
|
||||
- **Batch Sizes**: BOOK_BATCH=2048, TRADE_BATCH=4096 for optimal performance
|
||||
- **JSON Formats**: Standardized schemas for OHLC, depth, and metrics data
|
||||
- **Chart Architecture**: Multi-subplot layout with shared time axis
|
||||
- **IPC Files**: `ohlc_data.json`, `depth_data.json`, `metrics_data.json`
|
||||
|
||||
### Removed
|
||||
- Complex metrics storage and repository patterns
|
||||
- Strategy framework components
|
||||
- In-memory snapshot retention
|
||||
- Multi-database orchestration complexity
|
||||
|
||||
## [1.0.0] - Previous Version
|
||||
|
||||
|
||||
186
docs/CONTEXT.md
@ -2,162 +2,52 @@
|
||||
|
||||
## Current State
|
||||
|
||||
The Orderflow Backtest System has successfully implemented a comprehensive OBI (Order Book Imbalance) and CVD (Cumulative Volume Delta) metrics calculation and visualization system. The project is in a production-ready state with full feature completion.
|
||||
The project implements a modular, efficient orderflow processing pipeline:
|
||||
- Stream orderflow from SQLite (`DBInterpreter.stream`).
|
||||
- Process trades and orderbook updates through modular `OHLCProcessor` architecture.
|
||||
- Exchange data with the UI via atomic JSON files (`viz_io`).
|
||||
- Render OHLC+Volume, Depth, and Metrics charts with a Dash app (`app.py`).
|
||||
|
||||
## Recent Achievements
|
||||
The system features a clean composition-based architecture with specialized modules for different concerns, providing OBI/CVD metrics alongside OHLC data.
|
||||
|
||||
### ✅ Completed Features (Latest Implementation)
|
||||
- **Metrics Calculation Engine**: Complete OBI and CVD calculation with per-snapshot granularity
|
||||
- **Persistent Storage**: Metrics stored in SQLite database to avoid recalculation
|
||||
- **Memory Optimization**: >70% memory usage reduction through efficient data management
|
||||
- **Visualization System**: Multi-subplot charts (OHLC, Volume, OBI, CVD) with shared time axis
|
||||
- **Strategy Framework**: Enhanced trading strategy system with metrics analysis
|
||||
- **Clean Architecture**: Proper separation of concerns between data, analysis, and visualization
|
||||
## Recent Work
|
||||
|
||||
### 📊 System Metrics
|
||||
- **Performance**: Batch processing of 1000 records per operation
|
||||
- **Memory**: >70% reduction in peak memory usage
|
||||
- **Test Coverage**: 27 comprehensive tests across 6 test files
|
||||
- **Code Quality**: All functions <50 lines, all files <250 lines
|
||||
- **Modular Refactoring**: Extracted `ohlc_processor.py` into focused modules:
|
||||
- `level_parser.py`: Orderbook level parsing utilities (85 lines)
|
||||
- `orderbook_manager.py`: In-memory orderbook state management (90 lines)
|
||||
- `metrics_calculator.py`: OBI and CVD metrics calculation (112 lines)
|
||||
- **Architecture Compliance**: Reduced main processor from 440 to 248 lines (250-line target achieved)
|
||||
- Maintained full backward compatibility and functionality
|
||||
- Implemented read-only, batched SQLite streaming with PRAGMA tuning.
|
||||
- Added robust JSON IPC with atomic writes and tolerant UI reads.
|
||||
- Built a responsive Dash visualization polling at 500ms.
|
||||
- Unified CLI using Typer, with UV for process management.
|
||||
|
||||
## Architecture Decisions
|
||||
## Conventions
|
||||
|
||||
### Key Design Patterns
|
||||
1. **Repository Pattern**: Clean separation between data access and business logic
|
||||
2. **Dataclass Models**: Lightweight, type-safe data structures with slots optimization
|
||||
3. **Batch Processing**: High-performance database operations for large datasets
|
||||
4. **Separation of Concerns**: Strategy, Storage, and Visualization as independent components
|
||||
- Python 3.12+, UV for dependency and command execution.
|
||||
- **Modular Architecture**: Composition over inheritance, single-responsibility modules
|
||||
- **File Size Limits**: ≤250 lines per file, ≤50 lines per function (enforced)
|
||||
- Type hints throughout; concise, focused functions and classes.
|
||||
- Error handling with meaningful logs; avoid bare exceptions.
|
||||
- Prefer explicit JSON structures for IPC; keep payloads small and bounded.
|
||||
|
||||
### Technology Stack
|
||||
- **Language**: Python 3.12+ with type hints
|
||||
- **Database**: SQLite with optimized PRAGMAs for performance
|
||||
- **Package Management**: UV for fast dependency resolution
|
||||
- **Testing**: Pytest with comprehensive unit and integration tests
|
||||
- **Visualization**: Matplotlib with Qt5Agg backend
|
||||
## Priorities
|
||||
|
||||
## Current Development Priorities
|
||||
- Improve configurability: database path discovery, CLI flags for paths and UI options.
|
||||
- Add tests for `DBInterpreter.stream` and `OHLCProcessor` (run with `uv run pytest`).
|
||||
- Performance tuning for large DBs while keeping UI responsive.
|
||||
- Documentation kept in sync with code; architecture reflects current design.
|
||||
|
||||
### ✅ Completed (Production Ready)
|
||||
1. **Core Metrics System**: OBI and CVD calculation infrastructure
|
||||
2. **Database Integration**: Persistent storage and retrieval system
|
||||
3. **Visualization Framework**: Multi-chart display with proper time alignment
|
||||
4. **Memory Optimization**: Efficient processing of large datasets
|
||||
5. **Code Quality**: Comprehensive testing and documentation
|
||||
## Roadmap (Future Work)
|
||||
|
||||
### 🔄 Maintenance Phase
|
||||
- **Documentation**: Comprehensive docs completed
|
||||
- **Testing**: Full test coverage maintained
|
||||
- **Performance**: Monitoring and optimization as needed
|
||||
- **Bug Fixes**: Address any issues discovered in production use
|
||||
- Enhance OBI metrics with additional derived calculations (e.g., normalized OBI).
|
||||
- Optional repository layer abstraction and a storage orchestrator.
|
||||
- Extend visualization with additional subplots and interactivity.
|
||||
- Strategy module for analytics and alerting on derived metrics.
|
||||
|
||||
## Known Patterns and Conventions
|
||||
## Tooling
|
||||
|
||||
### Code Style
|
||||
- **Functions**: Maximum 50 lines, single responsibility
|
||||
- **Files**: Maximum 250 lines, clear module boundaries
|
||||
- **Naming**: Descriptive names, no abbreviations except domain terms (OBI, CVD)
|
||||
- **Error Handling**: Comprehensive try-catch with logging, graceful degradation
|
||||
|
||||
### Database Patterns
|
||||
- **Parameterized Queries**: All SQL uses proper parameterization for security
|
||||
- **Batch Operations**: Process records in batches of 1000 for performance
|
||||
- **Indexing**: Strategic indexes on timestamp and foreign key columns
|
||||
- **Transactions**: Proper transaction boundaries for data consistency
|
||||
|
||||
### Testing Patterns
|
||||
- **Unit Tests**: Each module has comprehensive unit test coverage
|
||||
- **Integration Tests**: End-to-end workflow testing
|
||||
- **Mock Objects**: External dependencies mocked for isolated testing
|
||||
- **Test Data**: Temporary databases with realistic test data
|
||||
|
||||
## Integration Points
|
||||
|
||||
### External Dependencies
|
||||
- **SQLite**: Primary data storage (read and write operations)
|
||||
- **Matplotlib**: Chart rendering and visualization
|
||||
- **Qt5Agg**: GUI backend for interactive charts
|
||||
- **Pytest**: Testing framework
|
||||
|
||||
### Internal Module Dependencies
|
||||
```
|
||||
main.py → storage.py → repositories/ → models.py
|
||||
→ strategies.py → models.py
|
||||
→ visualizer.py → repositories/
|
||||
```
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
### Optimizations Implemented
|
||||
- **Memory Management**: Metrics storage instead of full snapshot retention
|
||||
- **Database Performance**: Optimized SQLite PRAGMAs and batch processing
|
||||
- **Query Efficiency**: Indexed queries with proper WHERE clauses
|
||||
- **Cache Usage**: Price caching in orderbook parser for repeated calculations
|
||||
|
||||
### Scalability Notes
|
||||
- **Dataset Size**: Tested with 600K+ snapshots and 300K+ trades per day
|
||||
- **Time Range**: Supports months to years of historical data
|
||||
- **Processing Speed**: ~1000 rows/second with full metrics calculation
|
||||
- **Storage Overhead**: Metrics table adds <20% to original database size
|
||||
|
||||
## Security Considerations
|
||||
|
||||
### Implemented Safeguards
|
||||
- **SQL Injection Prevention**: All queries use parameterized statements
|
||||
- **Input Validation**: Database paths and table names validated
|
||||
- **Error Information**: No sensitive data exposed in error messages
|
||||
- **Access Control**: Database file permissions respected
|
||||
|
||||
## Future Considerations
|
||||
|
||||
### Potential Enhancements
|
||||
- **Real-time Processing**: Streaming data support for live trading
|
||||
- **Additional Metrics**: Volume Profile, Delta Flow, Liquidity metrics
|
||||
- **Export Capabilities**: CSV/JSON export for external analysis
|
||||
- **Interactive Charts**: Enhanced user interaction with visualization
|
||||
- **Configuration System**: Configurable batch sizes and processing parameters
|
||||
|
||||
### Scalability Options
|
||||
- **Database Upgrade**: PostgreSQL for larger datasets if needed
|
||||
- **Parallel Processing**: Multi-threading for CPU-intensive calculations
|
||||
- **Caching Layer**: Redis for frequently accessed metrics
|
||||
- **API Interface**: REST API for external system integration
|
||||
|
||||
## Development Environment
|
||||
|
||||
### Requirements
|
||||
- Python 3.12+
|
||||
- UV package manager
|
||||
- SQLite database files with required schema
|
||||
- Qt5 for visualization (Linux/macOS)
|
||||
|
||||
### Setup Commands
|
||||
```bash
|
||||
# Install dependencies
|
||||
uv sync
|
||||
|
||||
# Run full test suite
|
||||
uv run pytest
|
||||
|
||||
# Process sample data
|
||||
uv run python main.py BTC-USDT 2025-07-01 2025-08-01
|
||||
```
|
||||
|
||||
## Documentation Status
|
||||
|
||||
### ✅ Complete Documentation
|
||||
- README.md with comprehensive overview
|
||||
- Module-level documentation for all components
|
||||
- API documentation with examples
|
||||
- Architecture decision records
|
||||
- Code-level documentation with docstrings
|
||||
|
||||
### 📊 Quality Metrics
|
||||
- **Code Coverage**: 27 tests across 6 test files
|
||||
- **Documentation Coverage**: All public interfaces documented
|
||||
- **Example Coverage**: Working examples for all major features
|
||||
- **Error Documentation**: All error conditions documented
|
||||
|
||||
---
|
||||
|
||||
*Last Updated: Current as of OBI/CVD metrics system completion*
|
||||
*Next Review: As needed for maintenance or feature additions*
|
||||
- Package management and commands: UV (e.g., `uv sync`, `uv run ...`).
|
||||
- Visualization server: Dash on `http://localhost:8050`.
|
||||
- Linting/testing: Pytest (e.g., `uv run pytest`).
|
||||
|
||||
@ -2,50 +2,25 @@
|
||||
|
||||
## Overview
|
||||
|
||||
This directory contains comprehensive documentation for the Orderflow Backtest System, a high-performance cryptocurrency trading data analysis platform.
|
||||
This directory contains documentation for the current Orderflow Backtest System, which streams historical orderflow from SQLite, aggregates OHLC bars, maintains a lightweight depth snapshot, and renders charts via a Dash web application.
|
||||
|
||||
## Documentation Structure
|
||||
|
||||
### 📚 Main Documentation
|
||||
- **[CONTEXT.md](./CONTEXT.md)**: Current project state, architecture decisions, and development patterns
|
||||
- **[architecture.md](./architecture.md)**: System architecture, component relationships, and data flow
|
||||
- **[API.md](./API.md)**: Public interfaces, classes, and function documentation
|
||||
|
||||
### 📦 Module Documentation
|
||||
- **[modules/metrics.md](./modules/metrics.md)**: OBI and CVD calculation system
|
||||
- **[modules/storage.md](./modules/storage.md)**: Data processing and persistence layer
|
||||
- **[modules/visualization.md](./modules/visualization.md)**: Chart rendering and display system
|
||||
- **[modules/repositories.md](./modules/repositories.md)**: Database access and operations
|
||||
|
||||
### 🏗️ Architecture Decisions
|
||||
- **[decisions/ADR-001-metrics-storage.md](./decisions/ADR-001-metrics-storage.md)**: Persistent metrics storage decision
|
||||
- **[decisions/ADR-002-visualization-separation.md](./decisions/ADR-002-visualization-separation.md)**: Separation of concerns for visualization
|
||||
|
||||
### 📋 Development Guides
|
||||
- **[CONTRIBUTING.md](./CONTRIBUTING.md)**: Development workflow and contribution guidelines
|
||||
- **[CHANGELOG.md](./CHANGELOG.md)**: Version history and changes
|
||||
- `architecture.md`: System architecture, component relationships, and data flow (SQLite → Streaming → OHLC/Depth → JSON → Dash)
|
||||
- `API.md`: Public interfaces for DB streaming, OHLC/depth processing, JSON IPC, Dash visualization, and CLI
|
||||
- `CONTEXT.md`: Project state, conventions, and development priorities
|
||||
- `decisions/`: Architecture decision records
|
||||
|
||||
## Quick Navigation
|
||||
|
||||
| Topic | Documentation |
|
||||
|-------|---------------|
|
||||
| **Getting Started** | [README.md](../README.md) |
|
||||
| **System Architecture** | [architecture.md](./architecture.md) |
|
||||
| **Metrics Calculation** | [modules/metrics.md](./modules/metrics.md) |
|
||||
| **Database Schema** | [API.md](./API.md#database-schema) |
|
||||
| **Development Setup** | [CONTRIBUTING.md](./CONTRIBUTING.md) |
|
||||
| **API Reference** | [API.md](./API.md) |
|
||||
| Getting Started | See the usage examples in `API.md` |
|
||||
| System Architecture | `architecture.md` |
|
||||
| Database Schema | `API.md#input-database-schema-required` |
|
||||
| Development Setup | Project root `README` and `pyproject.toml` |
|
||||
|
||||
## Documentation Standards
|
||||
## Notes
|
||||
|
||||
This documentation follows the project's documentation standards defined in `.cursor/rules/documentation.mdc`. All documentation includes:
|
||||
|
||||
- Clear purpose and scope
|
||||
- Code examples with working implementations
|
||||
- API documentation with request/response formats
|
||||
- Error handling and edge cases
|
||||
- Dependencies and requirements
|
||||
|
||||
## Maintenance
|
||||
|
||||
Documentation is updated with every significant code change and reviewed during the development process. See [CONTRIBUTING.md](./CONTRIBUTING.md) for details on documentation maintenance procedures.
|
||||
- Metrics (OBI/CVD), repository/storage layers, and strategy components have been removed from the current codebase and are planned as future enhancements.
|
||||
- Use UV for package management and running commands. Example: `uv run python main.py ...`.
|
||||
|
||||
@ -2,303 +2,155 @@
|
||||
|
||||
## Overview
|
||||
|
||||
The Orderflow Backtest System is designed as a modular, high-performance data processing pipeline for cryptocurrency trading analysis. The architecture emphasizes separation of concerns, efficient memory usage, and scalable processing of large datasets.
|
||||
The current system is a streamlined, high-performance pipeline that streams orderflow from SQLite databases, aggregates trades into OHLC bars, maintains a lightweight depth snapshot, and serves visuals via a Dash web application. Inter-process communication (IPC) between the processor and visualizer uses atomic JSON files for simplicity and robustness.
|
||||
|
||||
## High-Level Architecture
|
||||
|
||||
```
|
||||
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
|
||||
│ Data Sources │ │ Processing │ │ Presentation │
|
||||
│ │ │ │ │ │
|
||||
│ ┌─────────────┐ │ │ ┌──────────────┐ │ │ ┌─────────────┐ │
|
||||
│ │SQLite Files │─┼────┼→│ Storage │─┼────┼→│ Visualizer │ │
|
||||
│ │- orderbook │ │ │ │- Orchestrator│ │ │ │- OHLC Charts│ │
|
||||
│ │- trades │ │ │ │- Calculator │ │ │ │- OBI/CVD │ │
|
||||
│ └─────────────┘ │ │ └──────────────┘ │ │ └─────────────┘ │
|
||||
│ │ │ │ │ │ ▲ │
|
||||
└─────────────────┘ │ ┌─────────────┐ │ │ ┌─────────────┐ │
|
||||
│ │ Strategy │──┼────┼→│ Reports │ │
|
||||
│ │- Analysis │ │ │ │- Metrics │ │
|
||||
│ │- Alerts │ │ │ │- Summaries │ │
|
||||
│ └─────────────┘ │ │ └─────────────┘ │
|
||||
└──────────────────┘ └─────────────────┘
|
||||
┌─────────────────┐ ┌─────────────────────┐ ┌──────────────────┐ ┌──────────────────┐
|
||||
│ SQLite Files │ → │ DB Interpreter │ → │ OHLC/Depth │ → │ Dash Visualizer │
|
||||
│ (book,trades) │ │ (stream rows) │ │ Processor │ │ (app.py) │
|
||||
└─────────────────┘ └─────────────────────┘ └─────────┬────────┘ └────────────▲─────┘
|
||||
│ │
|
||||
│ Atomic JSON (IPC) │
|
||||
▼ │
|
||||
ohlc_data.json, depth_data.json │
|
||||
metrics_data.json │
|
||||
│
|
||||
Browser UI
|
||||
```
|
||||
|
||||
## Component Architecture
|
||||
## Components
|
||||
|
||||
### Data Layer
|
||||
### Data Access (`db_interpreter.py`)
|
||||
|
||||
#### Models (`models.py`)
|
||||
**Purpose**: Core data structures and calculation logic
|
||||
- `OrderbookLevel`: dataclass representing one price level.
|
||||
- `OrderbookUpdate`: container for a book row window with `bids`, `asks`, `timestamp`, and `end_timestamp`.
|
||||
- `DBInterpreter`:
|
||||
- `stream() -> Iterator[tuple[OrderbookUpdate, list[tuple]]]` streams the book table with lookahead and the trades table in timestamp order.
|
||||
- Efficient read-only connection with PRAGMA tuning: immutable mode, query_only, temp_store=MEMORY, mmap_size, cache_size.
|
||||
- Batching constants: `BOOK_BATCH = 2048`, `TRADE_BATCH = 4096`.
|
||||
- Each yielded `trades` element is a tuple `(id, trade_id, price, size, side, timestamp_ms)` that falls within `[book.timestamp, next_book.timestamp)`.
|
||||
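A minimal consumption sketch of the streaming interface described above; the method name, tuple layout, and attribute names come from this document, while the constructor argument and file path are assumptions.

```python
# Illustrative sketch (not the module's actual code): consuming DBInterpreter.stream().
from db_interpreter import DBInterpreter

interp = DBInterpreter("../data/OKX/BTC-USDT-2025-07-01.db")  # hypothetical path
for ob_update, trades in interp.stream():
    # ob_update covers [timestamp, end_timestamp); trades fall inside that window
    window_volume = sum(t[3] for t in trades)  # t = (id, trade_id, price, size, side, timestamp_ms)
    print(ob_update.timestamp, len(ob_update.bids), len(ob_update.asks), window_volume)
```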
|
||||
```python
|
||||
# Core data models
|
||||
OrderbookLevel # Single price level (price, size, order_count, liquidation_count)
|
||||
Trade # Individual trade execution (price, size, side, timestamp)
|
||||
BookSnapshot # Complete orderbook state at timestamp
|
||||
Book # Container for snapshot sequence
|
||||
Metric           # Calculated OBI/CVD values

# Calculation engine
MetricCalculator # Static methods for OBI/CVD computation
```

### Processing (Modular Architecture)

#### Main Coordinator (`ohlc_processor.py`)
|
||||
- `OHLCProcessor(window_seconds=60, depth_levels_per_side=50)`: Orchestrates trade processing using composition
|
||||
- `process_trades(trades)`: aggregates trades into OHLC bars and delegates CVD updates
|
||||
- `update_orderbook(ob_update)`: coordinates orderbook updates and OBI metric calculation
|
||||
- `finalize()`: finalizes both OHLC bars and metrics data
|
||||
- `cvd_cumulative` (property): provides access to cumulative volume delta
|
||||
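To make the coordination concrete, a minimal sketch of the per-window loop implied above, assuming the `DBInterpreter` stream from the Data Access section; the method and property names are taken from this document, the wiring is an assumption.

```python
# Sketch only: feed streamed windows into the coordinator described above.
from db_interpreter import DBInterpreter
from ohlc_processor import OHLCProcessor

interp = DBInterpreter("../data/OKX/BTC-USDT-2025-07-01.db")   # hypothetical path
proc = OHLCProcessor(window_seconds=60, depth_levels_per_side=50)

for ob_update, trades in interp.stream():
    proc.process_trades(trades)        # aggregate into OHLC bars, update CVD
    proc.update_orderbook(ob_update)   # apply depth updates, compute OBI

proc.finalize()                        # flush the last bar and metrics
print("final CVD:", proc.cvd_cumulative)
```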
|
||||
**Relationships**:
|
||||
- `Book` contains multiple `BookSnapshot` instances
|
||||
- `BookSnapshot` contains dictionaries of `OrderbookLevel` and lists of `Trade`
|
||||
- `Metric` stores calculated values for each `BookSnapshot`
|
||||
- `MetricCalculator` operates on snapshots to produce metrics
|
||||
#### Orderbook Management (`orderbook_manager.py`)
|
||||
- `OrderbookManager`: Handles in-memory orderbook state with partial updates
|
||||
- Maintains separate bid/ask price→size dictionaries
|
||||
- Supports deletions via zero-size updates
|
||||
- Provides sorted top-N level extraction for visualization
|
||||
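A self-contained illustration (not the project's actual `OrderbookManager` code) of the partial-update mechanics described above: zero-size updates delete a level, and top-N extraction sorts the remaining levels.

```python
# Bid-side example; the ask side mirrors this with ascending sort.
bids: dict[float, float] = {}

def apply_bid_updates(levels: list[tuple[float, float]]) -> None:
    for price, size in levels:
        if size <= 0:
            bids.pop(price, None)   # zero size means "delete this level"
        else:
            bids[price] = size      # insert or overwrite the level

def top_bids(n: int) -> list[tuple[float, float]]:
    return sorted(bids.items(), key=lambda kv: kv[0], reverse=True)[:n]

apply_bid_updates([(100.0, 2.0), (99.5, 1.0)])
apply_bid_updates([(99.5, 0.0)])   # deletion via size == 0
print(top_bids(5))                 # [(100.0, 2.0)]
```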
|
||||
#### Metrics Calculation (`metrics_calculator.py`)
|
||||
- `MetricsCalculator`: Manages OBI and CVD metrics with windowed aggregation
|
||||
- Tracks CVD from trade flow (buy vs sell volume delta)
|
||||
- Calculates OBI from orderbook volume imbalance
|
||||
- Provides throttled updates and OHLC-style metric bars
|
||||
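For reference, a sketch of the two metric definitions this module tracks, using standard formulations; the real class API may differ.

```python
# OBI: volume imbalance of the book; CVD: running buy-minus-sell volume.
def obi(bid_sizes: list[float], ask_sizes: list[float]) -> float:
    vb, va = sum(bid_sizes), sum(ask_sizes)
    return (vb - va) / (vb + va) if (vb + va) > 0 else 0.0

def cvd_update(cvd: float, trades: list[tuple]) -> float:
    # trades: (id, trade_id, price, size, side, timestamp_ms)
    for _, _, _, size, side, _ in trades:
        cvd += size if side == "buy" else -size
    return cvd

print(obi([2.0, 1.5], [1.0, 0.5]))   # 0.4 -> bid-heavy book
print(cvd_update(0.0, [(1, "t1", 100.0, 0.3, "buy", 0),
                       (2, "t2", 100.1, 0.2, "sell", 0)]))  # 0.1
```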

#### Repositories (`repositories/`)

**Purpose**: Database access and persistence layer

```python
|
||||
# Repository
|
||||
SQLiteOrderflowRepository:
|
||||
- connect() # Optimized SQLite connection
|
||||
- load_trades_by_timestamp() # Efficient trade loading
|
||||
- iterate_book_rows() # Memory-efficient snapshot streaming
|
||||
- count_rows() # Performance monitoring
|
||||
- create_metrics_table() # Schema creation
|
||||
- insert_metrics_batch() # High-performance batch inserts
|
||||
- load_metrics_by_timerange() # Time-range queries
|
||||
- table_exists() # Schema validation
|
||||
```
|
||||
#### Level Parsing (`level_parser.py`)
|
||||
- Utility functions for normalizing orderbook level data:
|
||||
- `normalize_levels()`: parses levels, filtering zero/negative sizes
|
||||
- `parse_levels_including_zeros()`: preserves zeros for deletion operations
|
||||
- Supports JSON and Python literal formats with robust error handling
|
||||
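A hedged sketch of the normalization behaviour described above; the real helpers may differ in signature, but the JSON/Python-literal fallback and zero-size filtering follow this description.

```python
import ast
import json

def normalize_levels(raw: str) -> list[tuple[float, float]]:
    """Parse a level string and drop zero/negative sizes (illustrative only)."""
    try:
        levels = json.loads(raw)
    except json.JSONDecodeError:
        levels = ast.literal_eval(raw)   # e.g. "[['100.0', '1.5', 0, 3], ...]"
    out: list[tuple[float, float]] = []
    for lvl in levels:
        price, size = float(lvl[0]), float(lvl[1])
        if size > 0:
            out.append((price, size))
    return out

print(normalize_levels('[["100.0", "1.5", 0, 3], ["99.5", "0", 0, 0]]'))
# [(100.0, 1.5)]  -- the zero-size level is filtered out
```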
|
||||
**Design Patterns**:
|
||||
- **Repository Pattern**: Clean separation between data access and business logic
|
||||
- **Batch Processing**: Process 1000 records per database operation
|
||||
- **Connection Management**: Caller manages connection lifecycle
|
||||
- **Performance Optimization**: SQLite PRAGMAs for high-speed operations
|
||||
### Inter-Process Communication (`viz_io.py`)
|
||||
|
||||
- File paths (relative to project root):
|
||||
- `ohlc_data.json`: rolling list of OHLC bars (max 1000).
|
||||
- `depth_data.json`: latest depth snapshot (bids/asks).
|
||||
- `metrics_data.json`: rolling list of OBI/TOT OHLC bars (max 1000).
|
||||
- Atomic writes via temp files prevent partial reads by the Dash app.
|
||||
- API:
|
||||
- `add_ohlc_bar(...)`: append a new bar; trim to last 1000.
|
||||
- `upsert_ohlc_bar(...)`: replace last bar if timestamp matches; else append; trim.
|
||||
- `clear_data()`: reset OHLC data to an empty list.
|
||||
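A small illustration of the upsert-and-trim behaviour described above (illustrative only, not the module's code; the atomic temp-file write itself is shown in ADR-002).

```python
MAX_BARS = 1000

def upsert_ohlc_bar(bars: list[list], bar: list) -> list[list]:
    # bar = [ts, open, high, low, close, volume]
    if bars and bars[-1][0] == bar[0]:
        bars[-1] = bar           # same window -> replace the in-flight bar
    else:
        bars.append(bar)         # new window -> append a fresh bar
    return bars[-MAX_BARS:]      # keep the rolling list bounded to 1000 entries

bars = upsert_ohlc_bar([], [1_700_000_000_000, 100, 101, 99, 100.5, 12.0])
bars = upsert_ohlc_bar(bars, [1_700_000_000_000, 100, 102, 99, 101.0, 15.0])
print(len(bars))  # 1 -> the in-flight bar was replaced, not duplicated
```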

### Processing Layer

#### Storage (`storage.py`)
|
||||
**Purpose**: Orchestrates data loading, processing, and metrics calculation
|
||||
```python
class Storage:
- build_booktick_from_db() # Main processing pipeline
- _create_snapshots_and_metrics() # Per-snapshot processing
- _snapshot_from_row() # Individual snapshot creation
```

**Processing Pipeline**:
1. **Initialize**: Create metrics repository and table if needed
2. **Load Trades**: Group trades by timestamp for efficient access
3. **Stream Processing**: Process snapshots one-by-one to minimize memory
4. **Calculate Metrics**: OBI and CVD calculation per snapshot
5. **Batch Persistence**: Store metrics in batches of 1000
6. **Memory Management**: Discard full snapshots after metric extraction

### Visualization (`app.py`)

- Dash application with two graphs plus OBI subplot:
  - OHLC + Volume subplot with shared x-axis.
  - OBI candlestick subplot (blue tones) sharing x-axis.
  - Depth (cumulative) chart for bids and asks.
- Polling interval (500 ms) callback reads JSON files and updates figures resiliently:
  - Caches last good values to tolerate in-flight writes/decoding errors.
  - Builds figures with Plotly dark theme.
- Exposed on `http://localhost:8050` by default (`host=0.0.0.0`).
|
||||
|
||||
#### Strategy Framework (`strategies.py`)
|
||||
**Purpose**: Trading analysis and signal generation
|
||||
|
||||
```python
|
||||
class DefaultStrategy:
|
||||
- set_db_path() # Configure database access
|
||||
- compute_OBI() # Real-time OBI calculation (fallback)
|
||||
- load_stored_metrics() # Retrieve persisted metrics
|
||||
- get_metrics_summary() # Statistical analysis
|
||||
- on_booktick() # Main analysis entry point
|
||||
```
|
||||
|
||||
**Analysis Capabilities**:
|
||||
- **Stored Metrics**: Primary analysis using persisted data
|
||||
- **Real-time Fallback**: Live calculation for compatibility
|
||||
- **Statistical Summaries**: Min/max/average OBI, CVD changes
|
||||
- **Alert System**: Configurable thresholds for significant imbalances
|
||||
|
||||
### Presentation Layer
|
||||
|
||||
#### Visualization (`visualizer.py`)
|
||||
**Purpose**: Multi-chart rendering and display
|
||||
|
||||
```python
|
||||
class Visualizer:
|
||||
- set_db_path() # Configure metrics access
|
||||
- update_from_book() # Main rendering pipeline
|
||||
- _load_stored_metrics() # Retrieve metrics for chart range
|
||||
- _draw() # Multi-subplot rendering
|
||||
- show() # Display interactive charts
|
||||
```
|
||||
|
||||
**Chart Layout**:
|
||||
```
|
||||
┌─────────────────────────────────────┐
|
||||
│ OHLC Candlesticks │ ← Price action
|
||||
├─────────────────────────────────────┤
|
||||
│ Volume Bars │ ← Trading volume
|
||||
├─────────────────────────────────────┤
|
||||
│ OBI Line Chart │ ← Order book imbalance
|
||||
├─────────────────────────────────────┤
|
||||
│ CVD Line Chart │ ← Cumulative volume delta
|
||||
└─────────────────────────────────────┘
|
||||
```
|
||||
|
||||
**Features**:
|
||||
- **Shared Time Axis**: Synchronized X-axis across all subplots
|
||||
- **Auto-scaling**: Y-axis optimization for each metric type
|
||||
- **Performance**: Efficient rendering of large datasets
|
||||
- **Interactive**: Qt5Agg backend for zooming and panning

### CLI Orchestration (`main.py`)

- Typer CLI entrypoint:
|
||||
- Arguments: `instrument`, `start_date`, `end_date` (UTC, `YYYY-MM-DD`), options: `--window-seconds`.
|
||||
- Discovers SQLite files under `../data/OKX` matching the instrument.
|
||||
- Launches Dash visualizer as a separate process: `uv run python app.py`.
|
||||
- Streams databases sequentially: for each book row, processes trades and updates orderbook.
|
||||
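A hedged skeleton of the CLI shape described above; the argument and option names come from this document, while the body, glob pattern, and process handling are assumptions rather than the project's exact `main.py`.

```python
import subprocess
from pathlib import Path
import typer

app = typer.Typer()

@app.command()
def run(instrument: str, start_date: str, end_date: str,
        window_seconds: int = typer.Option(60, "--window-seconds")) -> None:
    # Discover databases for the instrument (glob pattern is an assumption).
    db_paths = sorted(Path("../data/OKX").glob(f"*{instrument}*.db"))
    # Launch the Dash visualizer as a separate process.
    viz = subprocess.Popen(["uv", "run", "python", "app.py"])
    try:
        for db_path in db_paths:
            typer.echo(f"processing {db_path} ({start_date}..{end_date}, {window_seconds}s bars)")
            # ... stream book/trade rows and feed the OHLC processor here ...
    finally:
        viz.terminate()

if __name__ == "__main__":
    app()
```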
|
||||
## Data Flow
|
||||
|
||||
### Processing Flow
|
||||
```
|
||||
1. SQLite DB → Repository → Raw Data
|
||||
2. Raw Data → Storage → BookSnapshot
|
||||
3. BookSnapshot → MetricCalculator → OBI/CVD
|
||||
4. Metrics → Repository → Database Storage
|
||||
5. Stored Metrics → Strategy → Analysis
|
||||
6. Stored Metrics → Visualizer → Charts
|
||||
```
|
||||
1. Discover and open SQLite database(s) for the requested instrument.
|
||||
2. Stream `book` rows with one-row lookahead to form time windows.
|
||||
3. Stream `trades` in timestamp order and bucket into the active window.
|
||||
4. For each window:
|
||||
- Aggregate trades into OHLC using `OHLCProcessor.process_trades`.
|
||||
- Apply partial depth updates via `OHLCProcessor.update_orderbook` and emit periodic snapshots.
|
||||
5. Persist current OHLC bar(s) and depth snapshots to JSON via atomic writes.
|
||||
6. Dash app polls JSON and renders charts.
|
||||
|
||||
### Memory Management Flow
|
||||
```
|
||||
Traditional: DB → All Snapshots in Memory → Analysis (High Memory)
|
||||
Optimized: DB → Process Snapshot → Calculate Metrics → Store → Discard (Low Memory)
|
||||
```
|
||||
## IPC JSON Schemas

- OHLC (`ohlc_data.json`): array of bars; each bar is `[ts, open, high, low, close, volume]`.
- Depth (`depth_data.json`): object with bids/asks arrays: `{"bids": [[price, size], ...], "asks": [[price, size], ...]}`.
- Metrics (`metrics_data.json`): array of bars; each bar is `[ts, obi_open, obi_high, obi_low, obi_close, tot_open, tot_high, tot_low, tot_close]`.

## Database Schema

### Input Schema (Required)
```sql
-- Orderbook snapshots
CREATE TABLE book (
    id INTEGER PRIMARY KEY,
    instrument TEXT,
    bids TEXT,        -- JSON: [[price, size, liq_count, order_count], ...]
    asks TEXT,        -- JSON: [[price, size, liq_count, order_count], ...]
    timestamp TEXT
);

-- Trade executions
CREATE TABLE trades (
    id INTEGER PRIMARY KEY,
    instrument TEXT,
    trade_id TEXT,
    price REAL,
    size REAL,
    side TEXT,        -- "buy" or "sell"
    timestamp TEXT
);
```
|
||||
|
||||
### Output Schema (Auto-created)
|
||||
```sql
|
||||
-- Calculated metrics
|
||||
CREATE TABLE metrics (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
snapshot_id INTEGER,
|
||||
timestamp TEXT,
|
||||
obi REAL, -- Order Book Imbalance [-1, 1]
|
||||
cvd REAL, -- Cumulative Volume Delta
|
||||
best_bid REAL,
|
||||
best_ask REAL,
|
||||
FOREIGN KEY (snapshot_id) REFERENCES book(id)
|
||||
);

-- Performance indexes
CREATE INDEX idx_metrics_timestamp ON metrics(timestamp);
CREATE INDEX idx_metrics_snapshot_id ON metrics(snapshot_id);
```

## Configuration

- `OHLCProcessor(window_seconds, depth_levels_per_side)` controls aggregation granularity and depth snapshot size.
|
||||
- Visualizer interval (`500 ms`) balances UI responsiveness and CPU usage.
|
||||
- Paths: JSON files (`ohlc_data.json`, `depth_data.json`) are colocated with the code and written atomically.
|
||||
- CLI parameters select instrument and time range; databases expected under `../data/OKX`.
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
### Memory Optimization
|
||||
- **Before**: Store all snapshots in memory (~1GB for 600K snapshots)
|
||||
- **After**: Store only metrics data (~300MB for same dataset)
|
||||
- **Reduction**: >70% memory usage decrease
|
||||
- Read-only SQLite tuned for fast sequential scans: immutable URI, query_only, large mmap and cache.
|
||||
- Batching minimizes cursor churn and Python overhead.
|
||||
- JSON IPC uses atomic replace to avoid contention; OHLC list is bounded to 1000 entries.
|
||||
- Processor throttles intra-window OHLC upserts and depth emissions to reduce I/O.
|
||||
|
||||
### Processing Performance
|
||||
- **Batch Size**: 1000 records per database operation
|
||||
- **Processing Speed**: ~1000 snapshots/second on modern hardware
|
||||
- **Database Overhead**: <20% storage increase for metrics table
|
||||
- **Query Performance**: Sub-second retrieval for typical time ranges
|
||||
|
||||
### Scalability Limits
|
||||
- **Single File**: 1M+ snapshots per database file
|
||||
- **Time Range**: Months to years of historical data
|
||||
- **Memory Peak**: <2GB for year-long datasets
|
||||
- **Disk Space**: Original size + 20% for metrics
|
||||
|
||||
## Integration Points
|
||||
|
||||
### External Interfaces
|
||||
```python
|
||||
# Main application entry point
|
||||
main.py:
|
||||
- CLI argument parsing
|
||||
- Database file discovery
|
||||
- Component orchestration
|
||||
- Progress monitoring
|
||||
|
||||
# Plugin interfaces
|
||||
Strategy.on_booktick(book: Book) # Strategy integration point
|
||||
Visualizer.update_from_book(book) # Visualization integration
|
||||
```
|
||||
|
||||
### Internal Interfaces
|
||||
```python
|
||||
# Repository interfaces
|
||||
Repository.connect() → Connection
|
||||
Repository.load_data() → TypedData
|
||||
Repository.store_data(data) → None
|
||||
|
||||
# Calculator interfaces
|
||||
MetricCalculator.calculate_obi(snapshot) → float
|
||||
MetricCalculator.calculate_cvd(prev_cvd, trades) → float
|
||||
```

## Error Handling

- Visualizer tolerates JSON decode races by reusing last good values and logging warnings.
|
||||
- Processor guards depth parsing and writes; logs at debug/info levels.
|
||||
- Visualizer startup is wrapped; if it fails, processing continues without UI.
|
||||
|
||||
## Security Considerations
|
||||
|
||||
### Data Protection
|
||||
- **SQL Injection**: All queries use parameterized statements
|
||||
- **File Access**: Validates database file paths and permissions
|
||||
- **Error Handling**: No sensitive data in error messages
|
||||
- **Input Validation**: Sanitizes all external inputs
|
||||
- SQLite connections are read-only and immutable; no write queries executed.
|
||||
- File writes are confined to project directory; no paths derived from untrusted input.
|
||||
- Logs avoid sensitive data; only operational metadata.
|
||||
|
||||
### Access Control
|
||||
- **Database**: Respects file system permissions
|
||||
- **Memory**: No sensitive data persistence beyond processing
|
||||
- **Logging**: Configurable log levels without data exposure
|
||||
## Testing Guidance
|
||||
|
||||
- Unit tests (run with `uv run pytest`):
|
||||
- `OHLCProcessor`: window boundary handling, high/low tracking, volume accumulation, upsert behavior.
|
||||
- Depth maintenance: deletions (size==0), top-N sorting, throttling.
|
||||
- `DBInterpreter.stream`: correct trade-window assignment, end-of-stream handling.
|
||||
- Integration: end-to-end generation of JSON from a tiny fixture DB and basic figure construction without launching a server.
|
||||
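A minimal pytest sketch of the window-boundary case listed above; the `OHLCProcessor` constructor and `process_trades`/`finalize` names come from this document, while the `bars` accessor used in the assertions is an assumption.

```python
from ohlc_processor import OHLCProcessor

def test_trades_split_into_two_one_minute_bars():
    proc = OHLCProcessor(window_seconds=60)
    t0 = 1_700_000_000_000  # ms
    trades = [
        (1, "a", 100.0, 1.0, "buy",  t0),
        (2, "b", 101.0, 2.0, "sell", t0 + 30_000),   # same 60 s window
        (3, "c",  99.0, 1.5, "buy",  t0 + 61_000),   # next window
    ]
    proc.process_trades(trades)
    proc.finalize()
    assert len(proc.bars) == 2        # hypothetical accessor for finished bars
    assert proc.bars[0][2] == 101.0   # high of the first bar
```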

## Configuration Management

### Performance Tuning
|
||||
```python
|
||||
# Storage configuration
|
||||
BATCH_SIZE = 1000 # Records per database operation
|
||||
LOG_FREQUENCY = 20 # Progress reports per processing run
|
||||
|
||||
# SQLite optimization
|
||||
PRAGMA journal_mode = OFF # Maximum write performance
|
||||
PRAGMA synchronous = OFF # Disable synchronous writes
|
||||
PRAGMA cache_size = 100000 # Large memory cache
|
||||
```
|
||||
|
||||
### Visualization Settings
|
||||
```python
|
||||
# Chart configuration
|
||||
WINDOW_SECONDS = 60 # OHLC aggregation window
|
||||
MAX_BARS = 500 # Maximum bars displayed
|
||||
FIGURE_SIZE = (12, 10) # Chart dimensions
|
||||
```
|
||||
|
||||
## Error Handling Strategy
|
||||
|
||||
### Graceful Degradation
|
||||
- **Database Errors**: Continue with reduced functionality
|
||||
- **Calculation Errors**: Skip problematic snapshots with logging
|
||||
- **Visualization Errors**: Display available data, note issues
|
||||
- **Memory Pressure**: Adjust batch sizes automatically
|
||||
|
||||
### Recovery Mechanisms
|
||||
- **Partial Processing**: Resume from last successful batch
|
||||
- **Data Validation**: Verify metrics calculations before storage
|
||||
- **Rollback Support**: Transaction boundaries for data consistency

## Roadmap (Optional Enhancements)

- Metrics: add OBI/CVD computation and persist metrics to a dedicated table.
|
||||
- Repository Pattern: extract DB access into a repository module with typed methods.
|
||||
- Orchestrator: introduce a `Storage` pipeline module coordinating batch processing and persistence.
|
||||
- Strategy Layer: compute signals/alerts on stored metrics.
|
||||
- Visualization: add OBI/CVD subplots and richer interactions.
|
||||
|
||||
---
|
||||
|
||||
This architecture provides a robust, scalable foundation for high-frequency trading data analysis while maintaining clean separation of concerns and efficient resource utilization.
|
||||
This document reflects the current implementation centered on SQLite streaming, JSON-based IPC, and a Dash visualizer, providing a clear foundation for incremental enhancements.
|
||||
|
||||
@ -1,120 +0,0 @@
|
||||
# ADR-001: Persistent Metrics Storage
|
||||
|
||||
## Status
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
The original orderflow backtest system kept all orderbook snapshots in memory during processing, leading to excessive memory usage (>1GB for typical datasets). With the addition of OBI and CVD metrics calculation, we needed to decide how to handle the computed metrics and manage memory efficiently.
|
||||
|
||||
## Decision
|
||||
We will implement persistent storage of calculated metrics in the SQLite database with the following approach:
|
||||
|
||||
1. **Metrics Table**: Create a dedicated `metrics` table to store OBI, CVD, and related data
|
||||
2. **Streaming Processing**: Process snapshots one-by-one, calculate metrics, store results, then discard snapshots
|
||||
3. **Batch Operations**: Use batch inserts (1000 records) for optimal database performance
|
||||
4. **Query Interface**: Provide time-range queries for metrics retrieval and analysis
|
||||
|
||||
## Consequences
|
||||
|
||||
### Positive
|
||||
- **Memory Reduction**: >70% reduction in peak memory usage during processing
|
||||
- **Avoid Recalculation**: Metrics calculated once and reused for multiple analysis runs
|
||||
- **Scalability**: Can process months/years of data without memory constraints
|
||||
- **Performance**: Batch database operations provide high throughput
|
||||
- **Persistence**: Metrics survive between application runs
|
||||
- **Analysis Ready**: Stored metrics enable complex time-series analysis
|
||||
|
||||
### Negative
|
||||
- **Storage Overhead**: Metrics table adds ~20% to database size
|
||||
- **Complexity**: Additional database schema and management code
|
||||
- **Dependencies**: Tighter coupling between processing and database layer
|
||||
- **Migration**: Existing databases need schema updates for metrics table
|
||||
|
||||
## Alternatives Considered
|
||||
|
||||
### Option 1: Keep All Snapshots in Memory
|
||||
**Rejected**: Unsustainable memory usage for large datasets. Would limit analysis to small time ranges.
|
||||
|
||||
### Option 2: Calculate Metrics On-Demand
|
||||
**Rejected**: Recalculating metrics for every analysis run is computationally expensive and time-consuming.
|
||||
|
||||
### Option 3: External Metrics Database
|
||||
**Rejected**: Adds deployment complexity. SQLite co-location provides better performance and simpler management.
|
||||
|
||||
### Option 4: Compressed In-Memory Cache
|
||||
**Rejected**: Still faces fundamental memory scaling issues. Compression/decompression adds CPU overhead.
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Database Schema
|
||||
```sql
|
||||
CREATE TABLE metrics (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
snapshot_id INTEGER NOT NULL,
|
||||
timestamp TEXT NOT NULL,
|
||||
obi REAL NOT NULL,
|
||||
cvd REAL NOT NULL,
|
||||
best_bid REAL,
|
||||
best_ask REAL,
|
||||
FOREIGN KEY (snapshot_id) REFERENCES book(id)
|
||||
);
|
||||
|
||||
CREATE INDEX idx_metrics_timestamp ON metrics(timestamp);
|
||||
CREATE INDEX idx_metrics_snapshot_id ON metrics(snapshot_id);
|
||||
```
|
||||
|
||||
### Processing Pipeline
|
||||
1. Create metrics table if not exists
|
||||
2. Stream through orderbook snapshots
|
||||
3. For each snapshot:
|
||||
- Calculate OBI and CVD metrics
|
||||
- Batch store metrics (1000 records per commit)
|
||||
- Discard snapshot from memory
|
||||
4. Provide query interface for time-range retrieval
|
||||
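A hedged sketch of step 3's batched insert, matching the schema defined in this ADR; the real repository method names may differ.

```python
import sqlite3

BATCH_SIZE = 1000
_buffer: list[tuple] = []

def insert_metrics_batch(conn: sqlite3.Connection, rows: list[tuple]) -> None:
    """rows: (snapshot_id, timestamp, obi, cvd, best_bid, best_ask)"""
    conn.executemany(
        "INSERT INTO metrics (snapshot_id, timestamp, obi, cvd, best_bid, best_ask) "
        "VALUES (?, ?, ?, ?, ?, ?)",
        rows,
    )
    conn.commit()

def buffer_metric(conn: sqlite3.Connection, row: tuple) -> None:
    """Accumulate per-snapshot rows and flush once BATCH_SIZE is reached."""
    _buffer.append(row)
    if len(_buffer) >= BATCH_SIZE:
        insert_metrics_batch(conn, _buffer)
        _buffer.clear()
```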
|
||||
### Memory Management
|
||||
- **Before**: Store all snapshots → Calculate on demand → High memory usage
|
||||
- **After**: Stream snapshots → Calculate immediately → Store metrics → Low memory usage
|
||||
|
||||
## Migration Strategy
|
||||
|
||||
### Backward Compatibility
|
||||
- Existing databases continue to work without metrics table
|
||||
- System auto-creates metrics table on first processing run
|
||||
- Fallback to real-time calculation if metrics unavailable
|
||||
|
||||
### Performance Impact
|
||||
- **Processing Time**: Slight increase due to database writes (~10%)
|
||||
- **Query Performance**: Significant improvement for repeated analysis
|
||||
- **Overall**: Net positive performance for typical usage patterns
|
||||
|
||||
## Monitoring and Validation
|
||||
|
||||
### Success Metrics
|
||||
- **Memory Usage**: Target >70% reduction in peak memory usage
|
||||
- **Processing Speed**: Maintain >500 snapshots/second processing rate
|
||||
- **Storage Efficiency**: Metrics table <25% of total database size
|
||||
- **Query Performance**: <1 second retrieval for typical time ranges
|
||||
|
||||
### Validation Methods
|
||||
- Memory profiling during large dataset processing
|
||||
- Performance benchmarks vs. original system
|
||||
- Storage overhead analysis across different dataset sizes
|
||||
- Query performance testing with various time ranges
|
||||
|
||||
## Future Considerations
|
||||
|
||||
### Potential Enhancements
|
||||
- **Compression**: Consider compression for metrics storage if overhead becomes significant
|
||||
- **Partitioning**: Time-based partitioning for very large datasets
|
||||
- **Caching**: In-memory cache for frequently accessed metrics
|
||||
- **Export**: Direct export capabilities for external analysis tools
|
||||
|
||||
### Scalability Options
|
||||
- **Database Upgrade**: PostgreSQL if SQLite becomes limiting factor
|
||||
- **Parallel Processing**: Multi-threaded metrics calculation
|
||||
- **Distributed Storage**: For institutional-scale datasets
|
||||
|
||||
---
|
||||
|
||||
This decision provides a solid foundation for efficient, scalable metrics processing while maintaining simplicity and performance characteristics suitable for the target use cases.
|
||||
122
docs/decisions/ADR-001-sqlite-database-choice.md
Normal file
@ -0,0 +1,122 @@
|
||||
# ADR-001: SQLite Database Choice
|
||||
|
||||
## Status
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
The orderflow backtest system needs to efficiently store and stream large volumes of historical orderbook and trade data. Key requirements include:
|
||||
|
||||
- Fast sequential read access for time-series data
|
||||
- Minimal setup and maintenance overhead
|
||||
- Support for concurrent reads from visualization layer
|
||||
- Ability to handle databases ranging from 100MB to 10GB+
|
||||
- No network dependencies for data access
|
||||
|
||||
## Decision
|
||||
We will use SQLite as the primary database for storing historical orderbook and trade data.
|
||||
|
||||
## Consequences
|
||||
|
||||
### Positive
|
||||
- **Zero configuration**: No database server setup or administration required
|
||||
- **Excellent read performance**: Optimized for sequential scans with proper PRAGMA settings
|
||||
- **Built-in Python support**: No external dependencies or connection libraries needed
|
||||
- **File portability**: Database files can be easily shared and archived
|
||||
- **ACID compliance**: Ensures data integrity during writes (for data ingestion)
|
||||
- **Small footprint**: Minimal memory and storage overhead
|
||||
- **Fast startup**: No connection pooling or server initialization delays
|
||||
|
||||
### Negative
|
||||
- **Single writer limitation**: Cannot handle concurrent writes (acceptable for read-only backtest)
|
||||
- **Limited scalability**: Not suitable for high-concurrency production trading systems
|
||||
- **No network access**: Cannot query databases remotely (acceptable for local analysis)
|
||||
- **File locking**: Potential issues with file system sharing (mitigated by read-only access)
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Schema Design
|
||||
```sql
|
||||
-- Orderbook snapshots with timestamp windows
|
||||
CREATE TABLE book (
|
||||
id INTEGER PRIMARY KEY,
|
||||
instrument TEXT,
|
||||
bids TEXT NOT NULL, -- JSON array of [price, size] pairs
|
||||
asks TEXT NOT NULL, -- JSON array of [price, size] pairs
|
||||
timestamp TEXT NOT NULL
|
||||
);
|
||||
|
||||
-- Individual trade records
|
||||
CREATE TABLE trades (
|
||||
id INTEGER PRIMARY KEY,
|
||||
instrument TEXT,
|
||||
trade_id TEXT,
|
||||
price REAL NOT NULL,
|
||||
size REAL NOT NULL,
|
||||
side TEXT NOT NULL, -- "buy" or "sell"
|
||||
timestamp TEXT NOT NULL
|
||||
);
|
||||
|
||||
-- Indexes for efficient time-based queries
|
||||
CREATE INDEX idx_book_timestamp ON book(timestamp);
|
||||
CREATE INDEX idx_trades_timestamp ON trades(timestamp);
|
||||
```
|
||||
|
||||
### Performance Optimizations
|
||||
```python
|
||||
# Read-only connection with optimized PRAGMA settings
|
||||
connection_uri = f"file:{db_path}?immutable=1&mode=ro"
|
||||
conn = sqlite3.connect(connection_uri, uri=True)
|
||||
conn.execute("PRAGMA query_only = 1")
|
||||
conn.execute("PRAGMA temp_store = MEMORY")
|
||||
conn.execute("PRAGMA mmap_size = 268435456") # 256MB
|
||||
conn.execute("PRAGMA cache_size = 10000")
|
||||
```
|
||||
|
||||
## Alternatives Considered
|
||||
|
||||
### PostgreSQL
|
||||
- **Rejected**: Requires server setup and maintenance
|
||||
- **Pros**: Better concurrent access, richer query features
|
||||
- **Cons**: Overkill for read-only use case, deployment complexity
|
||||
|
||||
### Parquet Files
|
||||
- **Rejected**: Limited query capabilities for time-series data
|
||||
- **Pros**: Excellent compression, columnar format
|
||||
- **Cons**: No indexes, complex range queries, requires additional libraries
|
||||
|
||||
### MongoDB
|
||||
- **Rejected**: Document structure not optimal for time-series data
|
||||
- **Pros**: Flexible schema, good aggregation pipeline
|
||||
- **Cons**: Requires server, higher memory usage, learning curve
|
||||
|
||||
### CSV Files
|
||||
- **Rejected**: Poor query performance for large datasets
|
||||
- **Pros**: Simple format, universal compatibility
|
||||
- **Cons**: No indexing, slow filtering, type conversion overhead
|
||||
|
||||
### InfluxDB
|
||||
- **Rejected**: Overkill for historical data analysis
|
||||
- **Pros**: Optimized for time-series, good compression
|
||||
- **Cons**: Additional service dependency, learning curve
|
||||
|
||||
## Migration Path
|
||||
If scalability becomes an issue in the future:
|
||||
|
||||
1. **Phase 1**: Implement database abstraction layer in `db_interpreter`
|
||||
2. **Phase 2**: Add PostgreSQL adapter for production workloads
|
||||
3. **Phase 3**: Implement data partitioning for very large datasets
|
||||
4. **Phase 4**: Consider distributed storage for multi-terabyte datasets
|
||||
|
||||
## Monitoring
|
||||
Track the following metrics to validate this decision:
|
||||
- Database file sizes and growth rates
|
||||
- Query performance for different date ranges
|
||||
- Memory usage during streaming operations
|
||||
- Time to process complete backtests
|
||||
|
||||
## Review Date
|
||||
This decision should be reviewed if:
|
||||
- Database files consistently exceed 50GB
|
||||
- Query performance degrades below 1000 rows/second
|
||||
- Concurrent access requirements change
|
||||
- Network-based data sharing becomes necessary
|
||||
162
docs/decisions/ADR-002-json-ipc-communication.md
Normal file
@ -0,0 +1,162 @@
|
||||
# ADR-002: JSON File-Based Inter-Process Communication
|
||||
|
||||
## Status
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
The orderflow backtest system requires communication between the data processing pipeline and the web-based visualization frontend. Key requirements include:
|
||||
|
||||
- Real-time data updates from processor to visualization
|
||||
- Tolerance for timing mismatches between writer and reader
|
||||
- Simple implementation without external dependencies
|
||||
- Support for different update frequencies (OHLC bars vs. orderbook depth)
|
||||
- Graceful handling of process crashes or restarts
|
||||
|
||||
## Decision
|
||||
We will use JSON files with atomic write operations for inter-process communication between the data processor and Dash visualization frontend.
|
||||
|
||||
## Consequences
|
||||
|
||||
### Positive
|
||||
- **Simplicity**: No message queues, sockets, or complex protocols
|
||||
- **Fault tolerance**: File-based communication survives process restarts
|
||||
- **Debugging friendly**: Data files can be inspected manually
|
||||
- **No dependencies**: Built-in JSON support, no external libraries
|
||||
- **Atomic operations**: Temp file + rename prevents partial reads
|
||||
- **Language agnostic**: Any process can read/write JSON files
|
||||
- **Bounded memory**: Rolling data windows prevent unlimited growth
|
||||
|
||||
### Negative
|
||||
- **File I/O overhead**: Disk writes may be slower than in-memory communication
|
||||
- **Polling required**: Reader must poll for updates (500ms interval)
|
||||
- **Limited throughput**: Not suitable for high-frequency (microsecond) updates
|
||||
- **No acknowledgments**: Writer cannot confirm reader has processed data
|
||||
- **File system dependency**: Performance varies by storage type
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### File Structure
|
||||
```
|
||||
ohlc_data.json # Rolling array of OHLC bars (max 1000)
|
||||
depth_data.json # Current orderbook depth snapshot
|
||||
metrics_data.json # Rolling array of OBI/CVD metrics (max 1000)
|
||||
```
|
||||
|
||||
### Atomic Write Pattern
|
||||
```python
|
||||
def atomic_write(file_path: Path, data: Any) -> None:
|
||||
"""Write data atomically to prevent partial reads."""
|
||||
temp_path = file_path.with_suffix('.tmp')
|
||||
with open(temp_path, 'w') as f:
|
||||
json.dump(data, f)
|
||||
f.flush()
|
||||
os.fsync(f.fileno())
|
||||
temp_path.replace(file_path) # Atomic on POSIX systems
|
||||
```
|
||||
|
||||
### Data Formats
|
||||
```python
|
||||
# OHLC format: [timestamp_ms, open, high, low, close, volume]
|
||||
ohlc_data = [
|
||||
[1640995200000, 50000.0, 50100.0, 49900.0, 50050.0, 125.5],
|
||||
[1640995260000, 50050.0, 50200.0, 50000.0, 50150.0, 98.3]
|
||||
]
|
||||
|
||||
# Depth format: top-N levels per side
|
||||
depth_data = {
|
||||
"bids": [[49990.0, 1.5], [49985.0, 2.1]],
|
||||
"asks": [[50010.0, 1.2], [50015.0, 1.8]]
|
||||
}
|
||||
|
||||
# Metrics format: [timestamp_ms, obi_open, obi_high, obi_low, obi_close]
|
||||
metrics_data = [
|
||||
[1640995200000, 0.15, 0.22, 0.08, 0.18],
|
||||
[1640995260000, 0.18, 0.25, 0.12, 0.20]
|
||||
]
|
||||
```
|
||||
|
||||
### Error Handling
|
||||
```python
|
||||
# Reader pattern with graceful fallback
|
||||
try:
|
||||
with open(data_file) as f:
|
||||
new_data = json.load(f)
|
||||
_LAST_DATA = new_data # Cache successful read
|
||||
except (FileNotFoundError, json.JSONDecodeError) as e:
|
||||
logging.warning(f"Using cached data: {e}")
|
||||
new_data = _LAST_DATA # Use cached data
|
||||
```
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
### Write Performance
|
||||
- **Small files**: < 1MB typical, writes complete in < 10ms
|
||||
- **Atomic operations**: Add ~2-5ms overhead for temp file creation
|
||||
- **Throttling**: Updates limited to prevent excessive I/O
|
||||
|
||||
### Read Performance
|
||||
- **Parse time**: < 5ms for typical JSON file sizes
|
||||
- **Polling overhead**: 500ms interval balances responsiveness and CPU usage
|
||||
- **Error recovery**: Cached data eliminates visual glitches
|
||||
|
||||
### Memory Usage
|
||||
- **Bounded datasets**: Max 1000 bars × 6 fields × 8 bytes = ~48KB per file
|
||||
- **JSON overhead**: ~2x memory during parsing
|
||||
- **Total footprint**: < 500KB for all IPC data
|
||||
|
||||
## Alternatives Considered
|
||||
|
||||
### Redis Pub/Sub
|
||||
- **Rejected**: Additional service dependency, overkill for simple use case
|
||||
- **Pros**: True real-time updates, built-in data structures
|
||||
- **Cons**: External dependency, memory overhead, configuration complexity
|
||||
|
||||
### ZeroMQ
|
||||
- **Rejected**: Additional library dependency, more complex than needed
|
||||
- **Pros**: High performance, flexible patterns
|
||||
- **Cons**: Learning curve, binary dependency, networking complexity
|
||||
|
||||
### Named Pipes/Unix Sockets
|
||||
- **Rejected**: Platform-specific, more complex error handling
|
||||
- **Pros**: Better performance, no file I/O
|
||||
- **Cons**: Platform limitations, harder debugging, process lifetime coupling
|
||||
|
||||
### SQLite as Message Queue
|
||||
- **Rejected**: Overkill for simple data exchange
|
||||
- **Pros**: ACID transactions, complex queries possible
|
||||
- **Cons**: Schema management, locking considerations, overhead
|
||||
|
||||
### HTTP API
|
||||
- **Rejected**: Too much overhead for local communication
|
||||
- **Pros**: Standard protocol, language agnostic
|
||||
- **Cons**: Network stack overhead, port management, authentication
|
||||
|
||||
## Future Considerations
|
||||
|
||||
### Scalability Limits
|
||||
Current approach suitable for:
|
||||
- Update frequencies: 1-10 Hz
|
||||
- Data volumes: < 10MB total
|
||||
- Process counts: 1 writer, few readers
|
||||
|
||||
### Migration Path
|
||||
If performance becomes insufficient:
|
||||
1. **Phase 1**: Add compression (gzip) to reduce I/O
|
||||
2. **Phase 2**: Implement shared memory for high-frequency data
|
||||
3. **Phase 3**: Consider message queue for complex routing
|
||||
4. **Phase 4**: Migrate to streaming protocol for real-time requirements
|
||||
|
||||
## Monitoring
|
||||
Track these metrics to validate the approach:
|
||||
- File write latency and frequency
|
||||
- JSON parse times in visualization
|
||||
- Error rates for partial reads
|
||||
- Memory usage growth over time
|
||||
|
||||
## Review Triggers
|
||||
Reconsider this decision if:
|
||||
- Update frequency requirements exceed 10 Hz
|
||||
- File I/O becomes a performance bottleneck
|
||||
- Multiple visualization clients need the same data
|
||||
- Complex message routing becomes necessary
|
||||
- Platform portability becomes a concern
|
||||
@ -1,217 +0,0 @@
|
||||
# ADR-002: Separation of Visualization from Strategy
|
||||
|
||||
## Status
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
The original system embedded visualization functionality within the `DefaultStrategy` class, creating tight coupling between trading analysis logic and chart rendering. This design had several issues:
|
||||
|
||||
1. **Mixed Responsibilities**: Strategy classes handled both trading logic and GUI operations
|
||||
2. **Testing Complexity**: Strategy tests required mocking GUI components
|
||||
3. **Deployment Flexibility**: Strategies couldn't run in headless environments
|
||||
4. **Timing Control**: Visualization timing was tied to strategy execution rather than application flow
|
||||
|
||||
The user specifically requested to display visualizations after processing each database file, requiring better control over visualization timing.
|
||||
|
||||
## Decision
|
||||
We will separate visualization from strategy components with the following architecture:
|
||||
|
||||
1. **Remove Visualization from Strategy**: Strategy classes focus solely on trading analysis
|
||||
2. **Main Application Control**: `main.py` orchestrates visualization timing and updates
|
||||
3. **Independent Configuration**: Strategy and Visualizer get database paths independently
|
||||
4. **Clean Interfaces**: No direct dependencies between strategy and visualization components
|
||||
|
||||
## Consequences
|
||||
|
||||
### Positive
|
||||
- **Single Responsibility**: Strategy focuses on trading logic, Visualizer on charts
|
||||
- **Better Testability**: Strategy tests run without GUI dependencies
|
||||
- **Flexible Deployment**: Strategies can run in headless/server environments
|
||||
- **Timing Control**: Visualization updates precisely when needed (after each DB)
|
||||
- **Maintainability**: Changes to visualization don't affect strategy logic
|
||||
- **Performance**: No GUI overhead during strategy analysis
|
||||
|
||||
### Negative
|
||||
- **Increased Complexity**: Main application handles more orchestration logic
|
||||
- **Coordination Required**: Must ensure strategy and visualizer get same database path
|
||||
- **Breaking Change**: Existing strategy initialization code needs updates
|
||||
|
||||
## Alternatives Considered
|
||||
|
||||
### Option 1: Keep Visualization in Strategy
|
||||
**Rejected**: Violates single responsibility principle. Makes testing difficult and deployment inflexible.
|
||||
|
||||
### Option 2: Observer Pattern
|
||||
**Rejected**: Adds unnecessary complexity for this use case. Direct control in main.py is simpler and more explicit.
|
||||
|
||||
### Option 3: Visualization Service
|
||||
**Rejected**: Over-engineering for current requirements. May be considered for future multi-strategy scenarios.
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Before (Coupled Design)
|
||||
```python
|
||||
class DefaultStrategy:
|
||||
def __init__(self, instrument: str, enable_visualization: bool = True):
|
||||
self.visualizer = Visualizer(...) if enable_visualization else None
|
||||
|
||||
def on_booktick(self, book: Book):
|
||||
# Trading analysis
|
||||
# ...
|
||||
# Visualization update
|
||||
if self.visualizer:
|
||||
self.visualizer.update_from_book(book)
|
||||
```
|
||||
|
||||
### After (Separated Design)
|
||||
```python
|
||||
# Strategy focuses on analysis only
|
||||
class DefaultStrategy:
|
||||
def __init__(self, instrument: str):
|
||||
# No visualization dependencies
|
||||
|
||||
def on_booktick(self, book: Book):
|
||||
# Pure trading analysis
|
||||
# No visualization code
|
||||
|
||||
# Main application orchestrates both
|
||||
def main():
|
||||
strategy = DefaultStrategy(instrument)
|
||||
visualizer = Visualizer(...)
|
||||
|
||||
for db_path in db_paths:
|
||||
strategy.set_db_path(db_path)
|
||||
visualizer.set_db_path(db_path)
|
||||
|
||||
# Process data
|
||||
storage.build_booktick_from_db(db_path, db_date)
|
||||
|
||||
# Analysis
|
||||
strategy.on_booktick(storage.book)
|
||||
|
||||
# Visualization (controlled timing)
|
||||
visualizer.update_from_book(storage.book)
|
||||
|
||||
# Final display
|
||||
visualizer.show()
|
||||
```
|
||||
|
||||
### Interface Changes
|
||||
|
||||
#### Strategy Interface (Simplified)
|
||||
```python
|
||||
class DefaultStrategy:
|
||||
def __init__(self, instrument: str) # Removed visualization param
|
||||
def set_db_path(self, db_path: Path) -> None # No visualizer.set_db_path()
|
||||
def on_booktick(self, book: Book) -> None # No visualization calls
|
||||
```
|
||||
|
||||
#### Main Application (Enhanced)
|
||||
```python
|
||||
def main():
|
||||
# Separate initialization
|
||||
strategy = DefaultStrategy(instrument)
|
||||
visualizer = Visualizer(window_seconds=60, max_bars=500)
|
||||
|
||||
# Independent configuration
|
||||
for db_path in db_paths:
|
||||
strategy.set_db_path(db_path)
|
||||
visualizer.set_db_path(db_path)
|
||||
|
||||
# Controlled execution
|
||||
strategy.on_booktick(storage.book) # Analysis
|
||||
visualizer.update_from_book(storage.book) # Visualization
|
||||
```
|
||||
|
||||
## Migration Strategy
|
||||
|
||||
### Code Changes Required
|
||||
1. **Strategy Classes**: Remove visualization initialization and calls
|
||||
2. **Main Application**: Add visualizer creation and orchestration
|
||||
3. **Tests**: Update strategy tests to remove visualization mocking
|
||||
4. **Configuration**: Remove visualization parameters from strategy constructors
|
||||
|
||||
### Backward Compatibility
|
||||
- **API Breaking**: Strategy constructor signature changes
|
||||
- **Functionality Preserved**: All visualization features remain available
|
||||
- **Test Updates**: Strategy tests become simpler (no GUI mocking needed)
|
||||
|
||||
### Migration Steps
|
||||
1. Update `DefaultStrategy` to remove visualization dependencies
|
||||
2. Modify `main.py` to create and manage `Visualizer` instance
|
||||
3. Update all strategy constructor calls to remove `enable_visualization`
|
||||
4. Update tests to reflect new interfaces
|
||||
5. Verify visualization timing meets requirements
|
||||
|
||||
## Benefits Achieved
|
||||
|
||||
### Clean Architecture
|
||||
- **Strategy**: Pure trading analysis logic
|
||||
- **Visualizer**: Pure chart rendering logic
|
||||
- **Main**: Application flow and component coordination
|
||||
|
||||
### Improved Testing
|
||||
```python
|
||||
# Before: Complex mocking required
|
||||
def test_strategy():
|
||||
with patch('visualizer.Visualizer') as mock_viz:
|
||||
strategy = DefaultStrategy("BTC", enable_visualization=True)
|
||||
# Complex mock setup...
|
||||
|
||||
# After: Simple, direct testing
|
||||
def test_strategy():
|
||||
strategy = DefaultStrategy("BTC")
|
||||
# Direct testing of analysis logic
|
||||
```
|
||||
|
||||
### Flexible Deployment
|
||||
```python
|
||||
# Headless server deployment
|
||||
strategy = DefaultStrategy("BTC")
|
||||
# No GUI dependencies, can run anywhere
|
||||
|
||||
# Development with visualization
|
||||
strategy = DefaultStrategy("BTC")
|
||||
visualizer = Visualizer(...)
|
||||
# Full GUI functionality when needed
|
||||
```
|
||||
|
||||
### Precise Timing Control
|
||||
```python
|
||||
# Visualization updates exactly when requested
|
||||
for db_file in database_files:
|
||||
process_database(db_file) # Data processing
|
||||
strategy.analyze(book) # Trading analysis
|
||||
visualizer.update_from_book(book) # Chart update after each DB
|
||||
```
|
||||
|
||||
## Monitoring and Validation
|
||||
|
||||
### Success Criteria
|
||||
- **Test Simplification**: Strategy tests run without GUI mocking
|
||||
- **Timing Accuracy**: Visualization updates after each database as requested
|
||||
- **Performance**: No GUI overhead during pure analysis operations
|
||||
- **Maintainability**: Visualization changes don't affect strategy code
|
||||
|
||||
### Validation Methods
|
||||
- Run strategy tests in headless environment
|
||||
- Verify visualization timing matches requirements
|
||||
- Performance comparison of analysis-only vs. GUI operations
|
||||
- Code complexity metrics for strategy vs. visualization modules
|
||||
|
||||
## Future Considerations
|
||||
|
||||
### Potential Enhancements
|
||||
- **Multiple Visualizers**: Support different chart types or windows
|
||||
- **Visualization Plugins**: Pluggable chart renderers for different outputs
|
||||
- **Remote Visualization**: Web-based charts for server deployments
|
||||
- **Batch Visualization**: Process multiple databases before chart updates
|
||||
|
||||
### Extensibility
|
||||
- **Strategy Plugins**: Easy to add strategies without visualization concerns
|
||||
- **Visualization Backends**: Swap chart libraries without affecting strategies
|
||||
- **Analysis Pipeline**: Clear separation enables complex analysis workflows
|
||||
|
||||
---
|
||||
|
||||
This separation provides a clean, maintainable architecture that supports the requested visualization timing while improving code quality and testability.
|
||||
204
docs/decisions/ADR-003-dash-visualization-framework.md
Normal file
@ -0,0 +1,204 @@
|
||||
# ADR-003: Dash Web Framework for Visualization
|
||||
|
||||
## Status
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
The orderflow backtest system requires a user interface for visualizing OHLC candlestick charts, volume data, orderbook depth, and derived metrics. Key requirements include:
|
||||
|
||||
- Real-time chart updates with minimal latency
|
||||
- Professional financial data visualization capabilities
|
||||
- Support for multiple chart types (candlesticks, bars, line charts)
|
||||
- Interactive features (zooming, panning, hover details)
|
||||
- Dark theme suitable for trading applications
|
||||
- Python-native solution to avoid JavaScript development
|
||||
|
||||
## Decision
|
||||
We will use Dash (by Plotly) as the web framework for building the visualization frontend, with Plotly.js for chart rendering.
|
||||
|
||||
## Consequences
|
||||
|
||||
### Positive
|
||||
- **Python-native**: No JavaScript development required
|
||||
- **Plotly integration**: Best-in-class financial charting capabilities
|
||||
- **Reactive architecture**: Automatic UI updates via callback system
|
||||
- **Professional appearance**: High-quality charts suitable for trading applications
|
||||
- **Interactive features**: Built-in zooming, panning, hover tooltips
|
||||
- **Responsive design**: Bootstrap integration for modern layouts
|
||||
- **Development speed**: Rapid prototyping and iteration
|
||||
- **WebGL acceleration**: Smooth performance for large datasets
|
||||
|
||||
### Negative
|
||||
- **Performance overhead**: Heavier than custom JavaScript solutions
|
||||
- **Limited customization**: Constrained by Dash component ecosystem
|
||||
- **Single-page limitation**: Not suitable for complex multi-page applications
|
||||
- **Memory usage**: Can be heavy for resource-constrained environments
|
||||
- **Learning curve**: Callback patterns require understanding of reactive programming
|
||||
|
||||
## Implementation Details
|
||||
|
||||
### Application Structure
|
||||
```python
|
||||
# Main application with Bootstrap theme
|
||||
app = dash.Dash(__name__, external_stylesheets=[dbc.themes.FLATLY])
|
||||
|
||||
# Responsive layout with 9:3 ratio for charts:depth
|
||||
app.layout = dbc.Container([
|
||||
dbc.Row([
|
||||
dbc.Col([ # OHLC + Volume + Metrics
|
||||
dcc.Graph(id='ohlc-chart', style={'height': '100vh'})
|
||||
], width=9),
|
||||
dbc.Col([ # Orderbook Depth
|
||||
dcc.Graph(id='depth-chart', style={'height': '100vh'})
|
||||
], width=3)
|
||||
]),
|
||||
dcc.Interval(id='interval-update', interval=500, n_intervals=0)
|
||||
])
|
||||
```
|
||||
|
||||
### Chart Architecture
|
||||
```python
|
||||
# Multi-subplot chart with shared x-axis
|
||||
fig = make_subplots(
|
||||
rows=3, cols=1,
|
||||
row_heights=[0.6, 0.2, 0.2], # OHLC, Volume, Metrics
|
||||
vertical_spacing=0.02,
|
||||
shared_xaxes=True,
|
||||
subplot_titles=['Price', 'Volume', 'OBI Metrics']
|
||||
)
|
||||
|
||||
# Candlestick chart with dark theme
|
||||
fig.add_trace(go.Candlestick(
|
||||
x=timestamps, open=opens, high=highs, low=lows, close=closes,
|
||||
increasing_line_color='#00ff00', decreasing_line_color='#ff0000'
|
||||
), row=1, col=1)
|
||||
```
|
||||
|
||||
### Real-time Updates
|
||||
```python
|
||||
@app.callback(
|
||||
[Output('ohlc-chart', 'figure'), Output('depth-chart', 'figure')],
|
||||
[Input('interval-update', 'n_intervals')]
|
||||
)
|
||||
def update_charts(n_intervals):
|
||||
# Read data from JSON files with error handling
|
||||
# Build and return updated figures
|
||||
return ohlc_fig, depth_fig
|
||||
```
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
### Update Latency
|
||||
- **Polling interval**: 500ms for near real-time updates
|
||||
- **Chart render time**: 50-200ms depending on data size
|
||||
- **Memory usage**: ~100MB for typical chart configurations
|
||||
- **Browser requirements**: Modern browser with WebGL support
|
||||
|
||||
### Scalability Limits
|
||||
- **Data points**: Up to 10,000 candlesticks without performance issues
|
||||
- **Update frequency**: Optimal at 1-2 Hz, maximum ~10 Hz
|
||||
- **Concurrent users**: Single user design (development server)
|
||||
- **Memory growth**: Linear with data history size
|
||||
|
||||
## Alternatives Considered
|
||||
|
||||
### Streamlit
|
||||
- **Rejected**: Less interactive, slower updates, limited charting
|
||||
- **Pros**: Simpler programming model, good for prototypes
|
||||
- **Cons**: Poor real-time performance, limited financial chart types
|
||||
|
||||
### Flask + Custom JavaScript
|
||||
- **Rejected**: Requires JavaScript development, more complex
|
||||
- **Pros**: Complete control, potentially better performance
|
||||
- **Cons**: Significant development overhead, maintenance burden
|
||||
|
||||
### Jupyter Notebooks
|
||||
- **Rejected**: Not suitable for production deployment
|
||||
- **Pros**: Great for exploration and analysis
|
||||
- **Cons**: No real-time updates, not web-deployable
|
||||
|
||||
### Bokeh
|
||||
- **Rejected**: Less mature ecosystem, fewer financial chart types
|
||||
- **Pros**: Good performance, Python-native
|
||||
- **Cons**: Smaller community, limited examples for financial data
|
||||
|
||||
### Custom React Application
|
||||
- **Rejected**: Requires separate frontend team, complex deployment
|
||||
- **Pros**: Maximum flexibility, best performance potential
|
||||
- **Cons**: High development cost, maintenance overhead
|
||||
|
||||
### Desktop GUI (Tkinter/PyQt)
|
||||
- **Rejected**: Not web-accessible, limited styling options
|
||||
- **Pros**: No browser dependency, good performance
|
||||
- **Cons**: Deployment complexity, poor mobile support
|
||||
|
||||
## Configuration Options
|
||||
|
||||
### Theme and Styling
|
||||
```python
|
||||
# Dark theme configuration
|
||||
dark_theme = {
|
||||
'plot_bgcolor': '#000000',
|
||||
'paper_bgcolor': '#000000',
|
||||
'font_color': '#ffffff',
|
||||
'grid_color': '#333333'
|
||||
}
|
||||
```
|
||||
|
||||
### Chart Types
|
||||
- **Candlestick charts**: OHLC price data with volume
|
||||
- **Bar charts**: Volume and metrics visualization
|
||||
- **Line charts**: Cumulative depth and trend analysis
|
||||
- **Scatter plots**: Trade-by-trade analysis (future)
|
||||
|
||||
### Interactive Features
|
||||
- **Zoom and pan**: Time-based navigation
|
||||
- **Hover tooltips**: Detailed data on mouse over
|
||||
- **Crosshairs**: Precise value reading
|
||||
- **Range selector**: Quick time period selection
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
### Short-term (1-3 months)
|
||||
- Add range selector for time navigation
|
||||
- Implement chart annotation for significant events
|
||||
- Add export functionality for charts and data
|
||||
|
||||
### Medium-term (3-6 months)
|
||||
- Multi-instrument support with tabs
|
||||
- Advanced indicators and overlays
|
||||
- User preference persistence
|
||||
|
||||
### Long-term (6+ months)
|
||||
- Real-time alerts and notifications
|
||||
- Strategy backtesting visualization
|
||||
- Portfolio-level analytics
|
||||
|
||||
## Monitoring and Metrics
|
||||
|
||||
### Performance Monitoring
|
||||
- Chart render times and update frequencies
|
||||
- Memory usage growth over time
|
||||
- Browser compatibility and error rates
|
||||
- User interaction patterns
|
||||
|
||||
### Quality Metrics
|
||||
- Chart accuracy compared to source data
|
||||
- Visual responsiveness during heavy updates
|
||||
- Error recovery from data corruption
|
||||
|
||||
## Review Triggers
|
||||
Reconsider this decision if:
|
||||
- Update frequency requirements exceed 10 Hz consistently
|
||||
- Memory usage becomes prohibitive (> 1GB)
|
||||
- Custom visualization requirements cannot be met
|
||||
- Multi-user deployment becomes necessary
|
||||
- Mobile responsiveness becomes a priority
|
||||
- Integration with external charting libraries is needed
|
||||
|
||||
## Migration Path
|
||||
If replacement becomes necessary:
|
||||
1. **Phase 1**: Abstract chart building logic from Dash specifics
|
||||
2. **Phase 2**: Implement alternative frontend while maintaining data formats
|
||||
3. **Phase 3**: A/B test performance and usability
|
||||
4. **Phase 4**: Complete migration with feature parity
|
||||
165
docs/modules/app.md
Normal file
@ -0,0 +1,165 @@
|
||||
# Module: app
|
||||
|
||||
## Purpose
|
||||
The `app` module provides a real-time Dash web application for visualizing OHLC candlestick charts, volume data, Order Book Imbalance (OBI) metrics, and orderbook depth. It implements a polling-based architecture that reads JSON data files and renders interactive charts with a dark theme.
|
||||
|
||||
## Public Interface
|
||||
|
||||
### Functions
|
||||
- `build_empty_ohlc_fig() -> go.Figure`: Create empty OHLC chart with proper styling
|
||||
- `build_empty_depth_fig() -> go.Figure`: Create empty depth chart with proper styling
|
||||
- `build_ohlc_fig(data: List[list], metrics: List[list]) -> go.Figure`: Build complete OHLC+Volume+OBI chart
|
||||
- `build_depth_fig(depth_data: dict) -> go.Figure`: Build orderbook depth visualization
|
||||
|
||||
### Global Variables
|
||||
- `_LAST_DATA`: Cached OHLC data for error recovery
|
||||
- `_LAST_DEPTH`: Cached depth data for error recovery
|
||||
- `_LAST_METRICS`: Cached metrics data for error recovery
|
||||
|
||||
### Dash Application
|
||||
- `app`: Main Dash application instance with Bootstrap theme
|
||||
- Layout with responsive grid (9:3 ratio for OHLC:Depth charts)
|
||||
- 500ms polling interval for real-time updates
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Running the Application
|
||||
```bash
|
||||
# Start the Dash server
|
||||
uv run python app.py
|
||||
|
||||
# Access the web interface
|
||||
# Open http://localhost:8050 in your browser
|
||||
```
|
||||
|
||||
### Programmatic Usage
|
||||
```python
|
||||
from app import build_ohlc_fig, build_depth_fig
|
||||
|
||||
# Build charts with sample data
|
||||
ohlc_data = [[1640995200000, 50000, 50100, 49900, 50050, 125.5]]
|
||||
metrics_data = [[1640995200000, 0.15, 0.22, 0.08, 0.18]]
|
||||
depth_data = {
|
||||
"bids": [[49990, 1.5], [49985, 2.1]],
|
||||
"asks": [[50010, 1.2], [50015, 1.8]]
|
||||
}
|
||||
|
||||
ohlc_fig = build_ohlc_fig(ohlc_data, metrics_data)
|
||||
depth_fig = build_depth_fig(depth_data)
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
|
||||
### Internal
|
||||
- `viz_io`: Data file paths and JSON reading
|
||||
- `viz_io.DATA_FILE`: OHLC data source
|
||||
- `viz_io.DEPTH_FILE`: Depth data source
|
||||
- `viz_io.METRICS_FILE`: Metrics data source
|
||||
|
||||
### External
|
||||
- `dash`: Web application framework
|
||||
- `dash.html`, `dash.dcc`: HTML and core components
|
||||
- `dash_bootstrap_components`: Bootstrap styling
|
||||
- `plotly.graph_objs`: Chart objects
|
||||
- `plotly.subplots`: Multiple subplot support
|
||||
- `pandas`: Data manipulation (minimal usage)
|
||||
- `json`: JSON file parsing
|
||||
- `logging`: Error and debug logging
|
||||
- `pathlib`: File path handling
|
||||
|
||||
## Chart Architecture
|
||||
|
||||
### OHLC Chart (Left Panel, 9/12 width)
|
||||
- **Main subplot**: Candlestick chart with OHLC data
|
||||
- **Volume subplot**: Bar chart sharing x-axis with main chart
|
||||
- **OBI subplot**: Order Book Imbalance candlestick chart in blue tones
|
||||
- **Shared x-axis**: Synchronized zooming and panning across subplots
|
||||
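A rough sketch of how such a shared-axis figure can be assembled with `plotly.subplots`; the row heights are assumptions, and the OBI row is simplified to a line trace rather than the blue candlesticks described above:

```python
import plotly.graph_objs as go
from plotly.subplots import make_subplots

def build_three_row_figure(ohlc: dict, volume: dict, obi: dict) -> go.Figure:
    # ohlc: {"x", "open", "high", "low", "close"}; volume/obi: {"x", "y"}
    fig = make_subplots(rows=3, cols=1, shared_xaxes=True,
                        vertical_spacing=0.02, row_heights=[0.6, 0.2, 0.2])
    fig.add_trace(go.Candlestick(x=ohlc["x"], open=ohlc["open"], high=ohlc["high"],
                                 low=ohlc["low"], close=ohlc["close"]), row=1, col=1)
    fig.add_trace(go.Bar(x=volume["x"], y=volume["y"], marker_color="gray"), row=2, col=1)
    fig.add_trace(go.Scatter(x=obi["x"], y=obi["y"], line_color="royalblue"), row=3, col=1)
    fig.update_layout(template="plotly_dark", xaxis_rangeslider_visible=False)
    return fig
```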
|
||||
### Depth Chart (Right Panel, 3/12 width)
|
||||
- **Cumulative depth**: Stepped line chart showing bid/ask liquidity
|
||||
- **Color coding**: Green for bids, red for asks
|
||||
- **Real-time updates**: Reflects current orderbook state
|
||||
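The cumulative series can be derived from the top-of-book levels with a simple running sum, roughly as in this sketch (which assumes the levels are already sorted from best price outward):

```python
from typing import Iterable, List, Tuple

def cumulative_depth(levels: Iterable[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Convert [(price, size), ...] into [(price, cumulative_size), ...]."""
    total = 0.0
    out: List[Tuple[float, float]] = []
    for price, size in levels:
        total += size
        out.append((price, total))
    return out
```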
|
||||
## Styling and Theme
|
||||
|
||||
### Dark Theme Configuration
|
||||
- Background: Black (`#000000`)
|
||||
- Text: White (`#ffffff`)
|
||||
- Grid: Dark gray with transparency
|
||||
- Candlesticks: Green (up) / Red (down)
|
||||
- Volume: Gray bars
|
||||
- OBI: Blue tones for candlesticks
|
||||
- Depth: Green (bids) / Red (asks)
|
||||
|
||||
### Responsive Design
|
||||
- Bootstrap grid system for layout
|
||||
- Fluid container for full-width usage
|
||||
- 100vh height for full viewport coverage
|
||||
- Configurable chart display modes
|
||||
|
||||
## Data Polling and Error Handling
|
||||
|
||||
### Polling Strategy
|
||||
- **Interval**: 500ms for near real-time updates
|
||||
- **Graceful degradation**: Uses cached data on JSON read errors
|
||||
- **Atomic reads**: Tolerates partial writes during file updates
|
||||
- **Logging**: Warnings for data inconsistencies
|
||||
|
||||
### Error Recovery
|
||||
```python
|
||||
# Pseudocode for error handling pattern
|
||||
try:
|
||||
with open(data_file) as f:
|
||||
new_data = json.load(f)
|
||||
_LAST_DATA = new_data # Cache successful read
|
||||
except (FileNotFoundError, json.JSONDecodeError):
|
||||
logging.warning("Using cached data due to read error")
|
||||
new_data = _LAST_DATA # Use cached data
|
||||
```
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
- **Client-side rendering**: Plotly.js handles chart rendering
|
||||
- **Efficient updates**: Only redraws when data changes
|
||||
- **Memory bounded**: Limited by max bars in data files (1000)
|
||||
- **Network efficient**: Local file polling (no external API calls)
|
||||
|
||||
## Testing
|
||||
|
||||
Run application tests:
|
||||
```bash
|
||||
uv run pytest test_app.py -v
|
||||
```
|
||||
|
||||
Test coverage includes:
|
||||
- Chart building functions
|
||||
- Data loading and caching
|
||||
- Error handling scenarios
|
||||
- Layout rendering
|
||||
- Callback functionality
|
||||
|
||||
## Configuration Options
|
||||
|
||||
### Server Configuration
|
||||
- **Host**: `0.0.0.0` (accessible from the network)
|
||||
- **Port**: `8050` (default Dash port)
|
||||
- **Debug mode**: Disabled in production
|
||||
|
||||
### Chart Configuration
|
||||
- **Update interval**: 500ms (configurable via dcc.Interval)
|
||||
- **Display mode bar**: Enabled for user interaction
|
||||
- **Logo display**: Disabled for clean interface
|
||||
|
||||
## Known Issues
|
||||
|
||||
- High CPU usage during rapid data updates
|
||||
- Memory usage grows with chart history
|
||||
- No authentication or access control
|
||||
- Limited mobile responsiveness for complex charts
|
||||
|
||||
## Development Notes
|
||||
|
||||
- Uses Flask development server (not suitable for production)
|
||||
- Callback exceptions suppressed for partial data scenarios
|
||||
- Bootstrap CSS loaded from CDN
|
||||
- Chart configurations optimized for financial data visualization
|
||||
83
docs/modules/db_interpreter.md
Normal file
@ -0,0 +1,83 @@
|
||||
# Module: db_interpreter
|
||||
|
||||
## Purpose
|
||||
The `db_interpreter` module provides efficient streaming access to SQLite databases containing orderbook and trade data. It handles batch reading, temporal windowing, and data structure normalization for downstream processing.
|
||||
|
||||
## Public Interface
|
||||
|
||||
### Classes
|
||||
- `OrderbookLevel(price: float, size: float)`: Dataclass representing a single price level in the orderbook
|
||||
- `OrderbookUpdate`: Container for windowed orderbook data with bids, asks, timestamp, and end_timestamp
|
||||
|
||||
### Functions
|
||||
- `DBInterpreter(db_path: Path)`: Constructor that initializes read-only SQLite connection with optimized PRAGMA settings
|
||||
|
||||
### Methods
|
||||
- `stream() -> Iterator[tuple[OrderbookUpdate, list[tuple]]]`: Primary streaming interface that yields orderbook updates with associated trades in temporal windows
|
||||
|
||||
## Usage Examples
|
||||
|
||||
```python
|
||||
from pathlib import Path
|
||||
from db_interpreter import DBInterpreter
|
||||
|
||||
# Initialize interpreter
|
||||
db_path = Path("data/BTC-USDT-2025-01-01.db")
|
||||
interpreter = DBInterpreter(db_path)
|
||||
|
||||
# Stream orderbook and trade data
|
||||
for ob_update, trades in interpreter.stream():
|
||||
# Process orderbook update
|
||||
print(f"Book update: {len(ob_update.bids)} bids, {len(ob_update.asks)} asks")
|
||||
print(f"Time window: {ob_update.timestamp} - {ob_update.end_timestamp}")
|
||||
|
||||
# Process trades in this window
|
||||
for trade in trades:
|
||||
trade_id, price, size, side, timestamp_ms = trade[1:6]
|
||||
print(f"Trade: {side} {size} @ {price}")
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
|
||||
### Internal
|
||||
- None (standalone module)
|
||||
|
||||
### External
|
||||
- `sqlite3`: Database connectivity
|
||||
- `pathlib`: Path handling
|
||||
- `dataclasses`: Data structure definitions
|
||||
- `typing`: Type annotations
|
||||
- `logging`: Debug and error logging
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
- **Batch sizes**: BOOK_BATCH=2048, TRADE_BATCH=4096 for optimal memory usage
|
||||
- **SQLite optimizations**: Read-only, immutable mode, large mmap and cache sizes
|
||||
- **Memory efficient**: Streaming iterator pattern prevents loading entire dataset
|
||||
- **Temporal windowing**: One-row lookahead for precise time boundary calculation
|
||||
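For illustration, a read-only connection along these lines might look like the sketch below; the exact mmap and cache values are assumptions, not the module's settings:

```python
import sqlite3
from pathlib import Path

def open_readonly(db_path: Path) -> sqlite3.Connection:
    # mode=ro enforces read-only access; immutable=1 tells SQLite the file never
    # changes, so locking and change detection can be skipped entirely
    conn = sqlite3.connect(f"file:{db_path}?mode=ro&immutable=1", uri=True)
    conn.execute("PRAGMA mmap_size = 268435456")  # map up to 256 MiB of the file
    conn.execute("PRAGMA cache_size = -65536")    # ~64 MiB page cache (negative = KiB)
    return conn
```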
|
||||
## Testing
|
||||
|
||||
Run module tests:
|
||||
```bash
|
||||
uv run pytest test_db_interpreter.py -v
|
||||
```
|
||||
|
||||
Test coverage includes:
|
||||
- Batch reading correctness
|
||||
- Temporal window boundary handling
|
||||
- Trade-to-window assignment accuracy
|
||||
- End-of-stream behavior
|
||||
- Error handling for malformed data
|
||||
|
||||
## Known Issues
|
||||
|
||||
- Requires specific database schema (book and trades tables)
|
||||
- Python-literal string parsing assumes well-formed input
|
||||
- Large databases may require memory monitoring during streaming
|
||||
|
||||
## Configuration
|
||||
|
||||
- `BOOK_BATCH`: Number of orderbook rows to fetch per query (default: 2048)
|
||||
- `TRADE_BATCH`: Number of trade rows to fetch per query (default: 4096)
|
||||
- SQLite PRAGMA settings optimized for read-only sequential access
|
||||
162
docs/modules/dependencies.md
Normal file
@ -0,0 +1,162 @@
|
||||
# External Dependencies
|
||||
|
||||
## Overview
|
||||
This document describes all external dependencies used in the orderflow backtest system, their purposes, versions, and justifications for inclusion.
|
||||
|
||||
## Production Dependencies
|
||||
|
||||
### Core Framework Dependencies
|
||||
|
||||
#### Dash (^2.18.2)
|
||||
- **Purpose**: Web application framework for interactive visualizations
|
||||
- **Usage**: Real-time chart rendering and user interface
|
||||
- **Justification**: Mature Python-based framework with excellent Plotly integration
|
||||
- **Key Features**: Reactive components, built-in server, callback system
|
||||
|
||||
#### Dash Bootstrap Components (^1.6.0)
|
||||
- **Purpose**: Bootstrap CSS framework integration for Dash
|
||||
- **Usage**: Responsive layout grid and modern UI styling
|
||||
- **Justification**: Provides professional appearance with minimal custom CSS
|
||||
|
||||
#### Plotly (^5.24.1)
|
||||
- **Purpose**: Interactive charting and visualization library
|
||||
- **Usage**: OHLC candlesticks, volume bars, depth charts, OBI metrics
|
||||
- **Justification**: Industry standard for financial data visualization
|
||||
- **Key Features**: WebGL acceleration, zooming/panning, dark themes
|
||||
|
||||
### Data Processing Dependencies
|
||||
|
||||
#### Pandas (^2.2.3)
|
||||
- **Purpose**: Data manipulation and analysis library
|
||||
- **Usage**: Minimal usage for data structure conversions in visualization
|
||||
- **Justification**: Standard tool for financial data handling
|
||||
- **Note**: Usage kept minimal to maintain performance
|
||||
|
||||
#### Typer (^0.13.1)
|
||||
- **Purpose**: Modern CLI framework
|
||||
- **Usage**: Command-line argument parsing and help generation
|
||||
- **Justification**: Type-safe, auto-generated help, better UX than argparse
|
||||
- **Key Features**: Type hints integration, automatic validation
|
||||
|
||||
### Data Storage Dependencies
|
||||
|
||||
#### SQLite3 (Built-in)
|
||||
- **Purpose**: Database connectivity for historical data
|
||||
- **Usage**: Read-only access to orderbook and trade data
|
||||
- **Justification**: Built into Python, no external dependencies, excellent performance
|
||||
- **Configuration**: Optimized with immutable mode and mmap
|
||||
|
||||
## Development and Testing Dependencies
|
||||
|
||||
### Pytest (^8.3.4)
|
||||
- **Purpose**: Testing framework
|
||||
- **Usage**: Unit tests, integration tests, test discovery
|
||||
- **Justification**: Standard Python testing tool with excellent plugin ecosystem
|
||||
|
||||
### Coverage (^7.6.9)
|
||||
- **Purpose**: Code coverage measurement
|
||||
- **Usage**: Test coverage reporting and quality metrics
|
||||
- **Justification**: Essential for maintaining code quality
|
||||
|
||||
## Build and Package Management
|
||||
|
||||
### UV (Package Manager)
|
||||
- **Purpose**: Fast Python package manager and task runner
|
||||
- **Usage**: Dependency management, virtual environments, script execution
|
||||
- **Justification**: Significantly faster than pip/poetry, better lock file format
|
||||
- **Commands**: `uv sync`, `uv run`, `uv add`
|
||||
|
||||
## Python Standard Library Usage
|
||||
|
||||
### Core Libraries
|
||||
- **sqlite3**: Database connectivity
|
||||
- **json**: JSON serialization for IPC
|
||||
- **pathlib**: Modern file path handling
|
||||
- **subprocess**: Process management for visualization
|
||||
- **logging**: Structured logging throughout application
|
||||
- **datetime**: Date/time parsing and manipulation
|
||||
- **dataclasses**: Structured data types
|
||||
- **typing**: Type annotations and hints
|
||||
- **tempfile**: Atomic file operations
|
||||
- **ast**: Safe evaluation of Python literals
|
||||
|
||||
### Performance Libraries
|
||||
- **itertools**: Efficient iteration patterns
|
||||
- **functools**: Function decoration and caching
|
||||
- **collections**: Specialized data structures
|
||||
|
||||
## Dependency Justifications
|
||||
|
||||
### Why Dash Over Alternatives?
|
||||
- **vs. Streamlit**: Better real-time updates, more control over layout
|
||||
- **vs. Flask + Custom JS**: Integrated Plotly support, faster development
|
||||
- **vs. Jupyter**: Better for production deployment, process isolation
|
||||
|
||||
### Why SQLite Over Alternatives?
|
||||
- **vs. PostgreSQL**: No server setup required, excellent read performance
|
||||
- **vs. Parquet**: Better for time-series queries, built-in indexing
|
||||
- **vs. CSV**: Proper data types, much faster queries, atomic transactions
|
||||
|
||||
### Why UV Over Poetry/Pip?
|
||||
- **vs. Poetry**: Significantly faster dependency resolution and installation
|
||||
- **vs. Pip**: Better dependency locking, integrated task runner
|
||||
- **vs. Pipenv**: More active development, better performance
|
||||
|
||||
## Version Pinning Strategy
|
||||
|
||||
### Patch Version Pinning
|
||||
- Core dependencies (Dash, Plotly) pinned to patch versions
|
||||
- Prevents breaking changes while allowing security updates
|
||||
|
||||
### Range Pinning
|
||||
- Development tools use caret (^) ranges for flexibility
|
||||
- Testing tools can update more freely
|
||||
|
||||
### Lock File Management
|
||||
- `uv.lock` ensures reproducible builds across environments
|
||||
- Regular updates scheduled monthly for security patches
|
||||
|
||||
## Security Considerations
|
||||
|
||||
### Dependency Scanning
|
||||
- Regular audit of dependencies for known vulnerabilities
|
||||
- Automated updates for security patches
|
||||
- Minimal dependency tree to reduce attack surface
|
||||
|
||||
### Data Isolation
|
||||
- Read-only database access prevents data modification
|
||||
- No external network connections required for core functionality
|
||||
- All file operations contained within project directory
|
||||
|
||||
## Performance Impact
|
||||
|
||||
### Bundle Size
|
||||
- Core runtime: ~50MB with all dependencies
|
||||
- Dash frontend: Additional ~10MB for JavaScript assets
|
||||
- SQLite: Zero overhead (built-in)
|
||||
|
||||
### Startup Time
|
||||
- Cold start: ~2-3 seconds for full application
|
||||
- UV virtual environment activation: ~100ms
|
||||
- Database connection: ~50ms per file
|
||||
|
||||
### Memory Usage
|
||||
- Base application: ~100MB
|
||||
- Per 1000 OHLC bars: ~5MB additional
|
||||
- Plotly charts: ~20MB for complex visualizations
|
||||
|
||||
## Maintenance Schedule
|
||||
|
||||
### Monthly
|
||||
- Security update review and application
|
||||
- Dependency version bump evaluation
|
||||
|
||||
### Quarterly
|
||||
- Major version update consideration
|
||||
- Performance impact assessment
|
||||
- Alternative technology evaluation
|
||||
|
||||
### Annually
|
||||
- Complete dependency audit
|
||||
- Technology stack review
|
||||
- Migration planning for deprecated packages
|
||||
101
docs/modules/level_parser.md
Normal file
@ -0,0 +1,101 @@
|
||||
# Module: level_parser
|
||||
|
||||
## Purpose
|
||||
The `level_parser` module provides utilities for parsing and normalizing orderbook level data from various string formats. It handles JSON and Python literal representations, converting them into standardized numeric tuples for processing.
|
||||
|
||||
## Public Interface
|
||||
|
||||
### Functions
|
||||
- `normalize_levels(levels: Any) -> List[List[float]]`: Parse levels into [[price, size], ...] format, filtering out zero/negative sizes
|
||||
- `parse_levels_including_zeros(levels: Any) -> List[Tuple[float, float]]`: Parse levels preserving zero sizes for deletion operations
|
||||
|
||||
### Private Functions
|
||||
- `_parse_string_to_list(levels: Any) -> List[Any]`: Core parsing logic trying JSON first, then literal_eval
|
||||
- `_extract_price_size(item: Any) -> Tuple[Any, Any]`: Extract price/size from dict or list/tuple formats
|
||||
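The fallback amounts to trying the fast JSON path first and only then the literal parser, roughly as in this sketch (a simplification of the real helper; unparseable input simply yields an empty list):

```python
import ast
import json
import logging
from typing import Any, List

def _parse_string_to_list(levels: Any) -> List[Any]:
    if isinstance(levels, list):
        return levels                        # already parsed upstream
    if not isinstance(levels, str):
        return []
    try:
        return json.loads(levels)            # fast path: JSON arrays
    except json.JSONDecodeError:
        try:
            return ast.literal_eval(levels)   # fallback: Python literals like "[(1, 2)]"
        except (ValueError, SyntaxError):
            logging.warning("Could not parse levels: %r", levels)
            return []
```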
|
||||
## Usage Examples
|
||||
|
||||
```python
|
||||
from level_parser import normalize_levels, parse_levels_including_zeros
|
||||
|
||||
# Parse standard levels (filters zeros)
|
||||
levels = normalize_levels('[[50000.0, 1.5], [49999.0, 2.0]]')
|
||||
# Returns: [[50000.0, 1.5], [49999.0, 2.0]]
|
||||
|
||||
# Parse with zero sizes preserved (for deletions)
|
||||
updates = parse_levels_including_zeros('[[50000.0, 0.0], [49999.0, 1.5]]')
|
||||
# Returns: [(50000.0, 0.0), (49999.0, 1.5)]
|
||||
|
||||
# Supports dict format
|
||||
dict_levels = normalize_levels('[{"price": 50000.0, "size": 1.5}]')
|
||||
# Returns: [[50000.0, 1.5]]
|
||||
|
||||
# Short key format
|
||||
short_levels = normalize_levels('[{"p": 50000.0, "s": 1.5}]')
|
||||
# Returns: [[50000.0, 1.5]]
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
|
||||
### External
|
||||
- `json`: Primary parsing method for level data
|
||||
- `ast.literal_eval`: Fallback parsing for Python literal formats
|
||||
- `logging`: Debug logging for parsing issues
|
||||
- `typing`: Type annotations
|
||||
|
||||
## Input Formats Supported
|
||||
|
||||
### JSON Array Format
|
||||
```json
|
||||
[[50000.0, 1.5], [49999.0, 2.0]]
|
||||
```
|
||||
|
||||
### Dict Format (Full Keys)
|
||||
```json
|
||||
[{"price": 50000.0, "size": 1.5}, {"price": 49999.0, "size": 2.0}]
|
||||
```
|
||||
|
||||
### Dict Format (Short Keys)
|
||||
```json
|
||||
[{"p": 50000.0, "s": 1.5}, {"p": 49999.0, "s": 2.0}]
|
||||
```
|
||||
|
||||
### Python Literal Format
|
||||
```python
|
||||
"[(50000.0, 1.5), (49999.0, 2.0)]"
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
- **Graceful Degradation**: Returns empty list on parse failures
|
||||
- **Data Validation**: Filters out invalid price/size pairs
|
||||
- **Type Safety**: Converts all values to float before processing
|
||||
- **Debug Logging**: Logs warnings for malformed input without crashing
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
- **Fast Path**: JSON parsing prioritized for performance
|
||||
- **Fallback Support**: ast.literal_eval as backup for edge cases
|
||||
- **Memory Efficient**: Processes items iteratively, not loading entire dataset
|
||||
- **Validation**: Minimal overhead with early filtering of invalid data
|
||||
|
||||
## Testing
|
||||
|
||||
```bash
|
||||
uv run pytest test_level_parser.py -v
|
||||
```
|
||||
|
||||
Test coverage includes:
|
||||
- JSON format parsing accuracy
|
||||
- Dict format (both key styles) parsing
|
||||
- Python literal fallback parsing
|
||||
- Zero size preservation vs filtering
|
||||
- Error handling for malformed input
|
||||
- Type conversion edge cases
|
||||
|
||||
## Known Limitations
|
||||
|
||||
- Assumes well-formed numeric data (price/size as numbers)
|
||||
- Does not validate economic constraints (e.g., positive prices)
|
||||
- Limited to list/dict input formats
|
||||
- No support for streaming/incremental parsing
|
||||
168
docs/modules/main.md
Normal file
@ -0,0 +1,168 @@
|
||||
# Module: main
|
||||
|
||||
## Purpose
|
||||
The `main` module provides the command-line interface (CLI) orchestration for the orderflow backtest system. It handles database discovery, process management, and coordinates the streaming pipeline with the visualization frontend using Typer for argument parsing.
|
||||
|
||||
## Public Interface
|
||||
|
||||
### Functions
|
||||
- `main(instrument: str, start_date: str, end_date: str, window_seconds: int = 60) -> None`: Primary CLI entrypoint
|
||||
- `discover_databases(instrument: str, start_date: str, end_date: str) -> list[Path]`: Find matching database files
|
||||
- `launch_visualizer() -> subprocess.Popen | None`: Start Dash application in separate process
|
||||
|
||||
### CLI Arguments
|
||||
- `instrument`: Trading pair identifier (e.g., "BTC-USDT")
|
||||
- `start_date`: Start date in YYYY-MM-DD format (UTC)
|
||||
- `end_date`: End date in YYYY-MM-DD format (UTC)
|
||||
- `--window-seconds`: OHLC aggregation window size (default: 60)
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Command Line Usage
|
||||
```bash
|
||||
# Basic usage with default 60-second windows
|
||||
uv run python main.py BTC-USDT 2025-01-01 2025-01-31
|
||||
|
||||
# Custom window size
|
||||
uv run python main.py ETH-USDT 2025-02-01 2025-02-28 --window-seconds 30
|
||||
|
||||
# Single day processing
|
||||
uv run python main.py SOL-USDT 2025-03-15 2025-03-15
|
||||
```
|
||||
|
||||
### Programmatic Usage
|
||||
```python
|
||||
from main import main, discover_databases
|
||||
|
||||
# Run processing pipeline
|
||||
main("BTC-USDT", "2025-01-01", "2025-01-31", window_seconds=120)
|
||||
|
||||
# Discover available databases
|
||||
db_files = discover_databases("ETH-USDT", "2025-02-01", "2025-02-28")
|
||||
print(f"Found {len(db_files)} database files")
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
|
||||
### Internal
|
||||
- `db_interpreter.DBInterpreter`: Database streaming
|
||||
- `ohlc_processor.OHLCProcessor`: Trade aggregation and orderbook processing
|
||||
- `viz_io`: Data clearing functions
|
||||
|
||||
### External
|
||||
- `typer`: CLI framework and argument parsing
|
||||
- `subprocess`: Process management for visualization
|
||||
- `pathlib`: File and directory operations
|
||||
- `datetime`: Date parsing and validation
|
||||
- `logging`: Operational logging
|
||||
- `sys`: Exit code management
|
||||
|
||||
## Database Discovery Logic
|
||||
|
||||
### File Pattern Matching
|
||||
```text
|
||||
# Expected directory structure
|
||||
../data/OKX/{instrument}/{date}/
|
||||
|
||||
# Example paths
|
||||
../data/OKX/BTC-USDT/2025-01-01/trades.db
|
||||
../data/OKX/ETH-USDT/2025-02-15/trades.db
|
||||
```
|
||||
|
||||
### Discovery Algorithm
|
||||
1. Parse start and end dates to datetime objects
|
||||
2. Iterate through date range (inclusive)
|
||||
3. Construct expected path for each date
|
||||
4. Verify file existence and readability
|
||||
5. Return sorted list of valid database paths
|
||||
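Taken together, those steps reduce to something like the following sketch; the `trades.db` file name and `../data/OKX` root follow the pattern shown above, and the real implementation may differ in detail:

```python
from datetime import datetime, timedelta
from pathlib import Path

DATA_ROOT = Path("../data/OKX")  # default data directory (see Configuration)

def discover_databases(instrument: str, start_date: str, end_date: str) -> list[Path]:
    start = datetime.strptime(start_date, "%Y-%m-%d").date()
    end = datetime.strptime(end_date, "%Y-%m-%d").date()
    if end < start:
        raise ValueError("end_date must be >= start_date")
    found: list[Path] = []
    day = start
    while day <= end:  # date range is inclusive on both ends
        candidate = DATA_ROOT / instrument / day.isoformat() / "trades.db"
        if candidate.is_file():
            found.append(candidate)
        day += timedelta(days=1)
    return sorted(found)
```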
|
||||
## Process Orchestration
|
||||
|
||||
### Visualization Process Management
|
||||
```python
|
||||
# Launch Dash app in separate process
|
||||
viz_process = subprocess.Popen([
|
||||
"uv", "run", "python", "app.py"
|
||||
], cwd=project_root)
|
||||
|
||||
# Process management
|
||||
try:
|
||||
# Main processing loop
|
||||
process_databases(db_files)
|
||||
finally:
|
||||
# Cleanup visualization process
|
||||
if viz_process:
|
||||
viz_process.terminate()
|
||||
viz_process.wait(timeout=5)
|
||||
```
|
||||
|
||||
### Data Processing Pipeline
|
||||
1. **Initialize**: Clear existing data files
|
||||
2. **Launch**: Start visualization process
|
||||
3. **Stream**: Process each database sequentially
|
||||
4. **Aggregate**: Generate OHLC bars and depth snapshots
|
||||
5. **Cleanup**: Terminate visualization and finalize
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Database Access Errors
|
||||
- **File not found**: Log warning and skip missing databases
|
||||
- **Permission denied**: Log error and exit with status code 1
|
||||
- **Corruption**: Log error for specific database and continue with next
|
||||
|
||||
### Process Management Errors
|
||||
- **Visualization startup failure**: Log error but continue processing
|
||||
- **Process termination**: Graceful shutdown with timeout
|
||||
- **Resource cleanup**: Ensure child processes are terminated
|
||||
|
||||
### Date Validation
|
||||
- **Invalid format**: Clear error message with expected format
|
||||
- **Invalid range**: End date must be >= start date
|
||||
- **Future dates**: Warning for dates beyond data availability
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
- **Sequential processing**: Databases processed one at a time
|
||||
- **Memory efficient**: Streaming approach prevents loading entire datasets
|
||||
- **Process isolation**: Visualization runs independently
|
||||
- **Resource cleanup**: Automatic process termination on exit
|
||||
|
||||
## Testing
|
||||
|
||||
Run module tests:
|
||||
```bash
|
||||
uv run pytest test_main.py -v
|
||||
```
|
||||
|
||||
Test coverage includes:
|
||||
- Database discovery logic
|
||||
- Date parsing and validation
|
||||
- Process management
|
||||
- Error handling scenarios
|
||||
- CLI argument validation
|
||||
|
||||
## Configuration
|
||||
|
||||
### Default Settings
|
||||
- **Data directory**: `../data/OKX` (relative to project root)
|
||||
- **Visualization command**: `uv run python app.py`
|
||||
- **Window size**: 60 seconds
|
||||
- **Process timeout**: 5 seconds for termination
|
||||
|
||||
### Environment Variables
|
||||
- **DATA_PATH**: Override default data directory
|
||||
- **VISUALIZATION_PORT**: Override Dash port (requires app.py modification)
|
||||
|
||||
## Known Issues
|
||||
|
||||
- Assumes specific directory structure under `../data/OKX`
|
||||
- No validation of database schema compatibility
|
||||
- Limited error recovery for process management
|
||||
- No progress indication for large datasets
|
||||
|
||||
## Development Notes
|
||||
|
||||
- Uses Typer for modern CLI interface
|
||||
- Subprocess management compatible with Unix/Windows
|
||||
- Logging configured for both development and production use
|
||||
- Exit codes follow Unix conventions (0=success, 1=error)
|
||||
@ -1,302 +0,0 @@
|
||||
# Module: Metrics Calculation System
|
||||
|
||||
## Purpose
|
||||
|
||||
The metrics calculation system provides high-performance computation of Order Book Imbalance (OBI) and Cumulative Volume Delta (CVD) indicators for cryptocurrency trading analysis. It processes orderbook snapshots and trade data to generate financial metrics with per-snapshot granularity.
|
||||
|
||||
## Public Interface
|
||||
|
||||
### Classes
|
||||
|
||||
#### `Metric` (dataclass)
|
||||
Represents calculated metrics for a single orderbook snapshot.
|
||||
|
||||
```python
|
||||
@dataclass(slots=True)
|
||||
class Metric:
|
||||
snapshot_id: int # Reference to source snapshot
|
||||
timestamp: int # Unix timestamp
|
||||
obi: float # Order Book Imbalance [-1, 1]
|
||||
cvd: float # Cumulative Volume Delta
|
||||
best_bid: float | None # Best bid price
|
||||
best_ask: float | None # Best ask price
|
||||
```
|
||||
|
||||
#### `MetricCalculator` (class of static methods)
|
||||
Provides calculation methods for financial metrics.
|
||||
|
||||
```python
|
||||
class MetricCalculator:
|
||||
@staticmethod
|
||||
def calculate_obi(snapshot: BookSnapshot) -> float
|
||||
|
||||
@staticmethod
|
||||
def calculate_volume_delta(trades: List[Trade]) -> float
|
||||
|
||||
@staticmethod
|
||||
def calculate_cvd(previous_cvd: float, volume_delta: float) -> float
|
||||
|
||||
@staticmethod
|
||||
def get_best_bid_ask(snapshot: BookSnapshot) -> tuple[float | None, float | None]
|
||||
```
|
||||
|
||||
### Functions
|
||||
|
||||
#### Order Book Imbalance (OBI) Calculation
|
||||
```python
|
||||
def calculate_obi(snapshot: BookSnapshot) -> float:
|
||||
"""
|
||||
Calculate Order Book Imbalance using the standard formula.
|
||||
|
||||
Formula: OBI = (Vb - Va) / (Vb + Va)
|
||||
Where:
|
||||
Vb = Total volume on bid side
|
||||
Va = Total volume on ask side
|
||||
|
||||
Args:
|
||||
snapshot: BookSnapshot containing bids and asks data
|
||||
|
||||
Returns:
|
||||
float: OBI value between -1 and 1, or 0.0 if no volume
|
||||
|
||||
Example:
|
||||
>>> snapshot = BookSnapshot(bids={50000.0: OrderbookLevel(...)}, ...)
|
||||
>>> obi = MetricCalculator.calculate_obi(snapshot)
|
||||
>>> print(f"OBI: {obi:.3f}")
|
||||
OBI: 0.333
|
||||
"""
|
||||
```
|
||||
|
||||
#### Volume Delta Calculation
|
||||
```python
|
||||
def calculate_volume_delta(trades: List[Trade]) -> float:
|
||||
"""
|
||||
Calculate Volume Delta for a list of trades.
|
||||
|
||||
Volume Delta = Buy Volume - Sell Volume
|
||||
- Buy trades (side = "buy"): positive contribution
|
||||
- Sell trades (side = "sell"): negative contribution
|
||||
|
||||
Args:
|
||||
trades: List of Trade objects for specific timestamp
|
||||
|
||||
Returns:
|
||||
float: Net volume delta (positive = buy pressure, negative = sell pressure)
|
||||
|
||||
Example:
|
||||
>>> trades = [
|
||||
... Trade(side="buy", size=10.0, ...),
|
||||
... Trade(side="sell", size=3.0, ...)
|
||||
... ]
|
||||
>>> vd = MetricCalculator.calculate_volume_delta(trades)
|
||||
>>> print(f"Volume Delta: {vd}")
|
||||
Volume Delta: 7.0
|
||||
"""
|
||||
```
|
||||
|
||||
#### Cumulative Volume Delta (CVD) Calculation
|
||||
```python
|
||||
def calculate_cvd(previous_cvd: float, volume_delta: float) -> float:
|
||||
"""
|
||||
Calculate Cumulative Volume Delta with incremental support.
|
||||
|
||||
Formula: CVD_t = CVD_{t-1} + Volume_Delta_t
|
||||
|
||||
Args:
|
||||
previous_cvd: Previous CVD value (use 0.0 for reset)
|
||||
volume_delta: Current volume delta to add
|
||||
|
||||
Returns:
|
||||
float: New cumulative volume delta value
|
||||
|
||||
Example:
|
||||
>>> cvd = 0.0 # Starting value
|
||||
>>> cvd = MetricCalculator.calculate_cvd(cvd, 10.0) # First trade
|
||||
>>> cvd = MetricCalculator.calculate_cvd(cvd, -5.0) # Second trade
|
||||
>>> print(f"CVD: {cvd}")
|
||||
CVD: 5.0
|
||||
"""
|
||||
```
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Basic OBI Calculation
|
||||
```python
|
||||
from models import MetricCalculator, BookSnapshot, OrderbookLevel
|
||||
|
||||
# Create sample orderbook snapshot
|
||||
snapshot = BookSnapshot(
|
||||
id=1,
|
||||
timestamp=1640995200,
|
||||
bids={
|
||||
50000.0: OrderbookLevel(price=50000.0, size=10.0, liquidation_count=0, order_count=1),
|
||||
49999.0: OrderbookLevel(price=49999.0, size=5.0, liquidation_count=0, order_count=1),
|
||||
},
|
||||
asks={
|
||||
50001.0: OrderbookLevel(price=50001.0, size=3.0, liquidation_count=0, order_count=1),
|
||||
50002.0: OrderbookLevel(price=50002.0, size=2.0, liquidation_count=0, order_count=1),
|
||||
}
|
||||
)
|
||||
|
||||
# Calculate OBI
|
||||
obi = MetricCalculator.calculate_obi(snapshot)
|
||||
print(f"OBI: {obi:.3f}") # Output: OBI: 0.500
|
||||
# Explanation: (15 - 5) / (15 + 5) = 10/20 = 0.5
|
||||
```
|
||||
|
||||
### CVD Calculation with Reset
|
||||
```python
|
||||
from models import MetricCalculator, Trade
|
||||
|
||||
# Simulate trading session
|
||||
cvd = 0.0 # Reset CVD at session start
|
||||
|
||||
# Process trades for first timestamp
|
||||
trades_t1 = [
|
||||
Trade(id=1, trade_id=1.0, price=50000.0, size=8.0, side="buy", timestamp=1000),
|
||||
Trade(id=2, trade_id=2.0, price=50001.0, size=3.0, side="sell", timestamp=1000),
|
||||
]
|
||||
|
||||
vd_t1 = MetricCalculator.calculate_volume_delta(trades_t1) # 8.0 - 3.0 = 5.0
|
||||
cvd = MetricCalculator.calculate_cvd(cvd, vd_t1) # 0.0 + 5.0 = 5.0
|
||||
|
||||
# Process trades for second timestamp
|
||||
trades_t2 = [
|
||||
Trade(id=3, trade_id=3.0, price=49999.0, size=2.0, side="buy", timestamp=1001),
|
||||
Trade(id=4, trade_id=4.0, price=50000.0, size=7.0, side="sell", timestamp=1001),
|
||||
]
|
||||
|
||||
vd_t2 = MetricCalculator.calculate_volume_delta(trades_t2) # 2.0 - 7.0 = -5.0
|
||||
cvd = MetricCalculator.calculate_cvd(cvd, vd_t2) # 5.0 + (-5.0) = 0.0
|
||||
|
||||
print(f"Final CVD: {cvd}") # Output: Final CVD: 0.0
|
||||
```
|
||||
|
||||
### Complete Metrics Processing
|
||||
```python
|
||||
from models import MetricCalculator, Metric
|
||||
|
||||
def process_snapshot_metrics(snapshot, trades, previous_cvd=0.0):
|
||||
"""Process complete metrics for a single snapshot."""
|
||||
|
||||
# Calculate OBI
|
||||
obi = MetricCalculator.calculate_obi(snapshot)
|
||||
|
||||
# Calculate volume delta and CVD
|
||||
volume_delta = MetricCalculator.calculate_volume_delta(trades)
|
||||
cvd = MetricCalculator.calculate_cvd(previous_cvd, volume_delta)
|
||||
|
||||
# Extract best bid/ask
|
||||
best_bid, best_ask = MetricCalculator.get_best_bid_ask(snapshot)
|
||||
|
||||
# Create metric record
|
||||
metric = Metric(
|
||||
snapshot_id=snapshot.id,
|
||||
timestamp=snapshot.timestamp,
|
||||
obi=obi,
|
||||
cvd=cvd,
|
||||
best_bid=best_bid,
|
||||
best_ask=best_ask
|
||||
)
|
||||
|
||||
return metric, cvd
|
||||
|
||||
# Usage in processing loop
|
||||
current_cvd = 0.0
|
||||
for snapshot, trades in snapshot_trade_pairs:
|
||||
metric, current_cvd = process_snapshot_metrics(snapshot, trades, current_cvd)
|
||||
# Store metric to database...
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
|
||||
### Internal
|
||||
- `models.BookSnapshot`: Orderbook state data
|
||||
- `models.Trade`: Individual trade execution data
|
||||
- `models.OrderbookLevel`: Price level information
|
||||
|
||||
### External
|
||||
- **Python Standard Library**: `typing` for type hints
|
||||
- **No external packages required**
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
### Computational Complexity
|
||||
- **OBI Calculation**: O(n) where n = number of price levels
|
||||
- **Volume Delta**: O(m) where m = number of trades
|
||||
- **CVD Calculation**: O(1) - simple addition
|
||||
- **Best Bid/Ask**: O(n) for min/max operations
|
||||
|
||||
### Memory Usage
|
||||
- **Static Methods**: No instance state, minimal memory overhead
|
||||
- **Calculations**: Process data in-place without copying
|
||||
- **Results**: Lightweight `Metric` objects with slots optimization
|
||||
|
||||
### Typical Performance
|
||||
```text
|
||||
# Benchmark results (approximate)
|
||||
Snapshot with 50 price levels: ~0.1ms per OBI calculation
|
||||
Timestamp with 20 trades: ~0.05ms per volume delta
|
||||
CVD update: ~0.001ms per calculation
|
||||
Complete metric processing: ~0.2ms per snapshot
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Edge Cases Handled
|
||||
```python
|
||||
# Empty orderbook
|
||||
empty_snapshot = BookSnapshot(bids={}, asks={})
|
||||
obi = MetricCalculator.calculate_obi(empty_snapshot) # Returns 0.0
|
||||
|
||||
# No trades
|
||||
empty_trades = []
|
||||
vd = MetricCalculator.calculate_volume_delta(empty_trades) # Returns 0.0
|
||||
|
||||
# Zero volume scenario
|
||||
zero_vol_snapshot = BookSnapshot(
|
||||
bids={50000.0: OrderbookLevel(price=50000.0, size=0.0, ...)},
|
||||
asks={50001.0: OrderbookLevel(price=50001.0, size=0.0, ...)}
|
||||
)
|
||||
obi = MetricCalculator.calculate_obi(zero_vol_snapshot) # Returns 0.0
|
||||
```
|
||||
|
||||
### Validation
|
||||
- **OBI Range**: Results automatically bounded to [-1, 1]
|
||||
- **Division by Zero**: Handled gracefully with 0.0 return
|
||||
- **Invalid Data**: Empty collections handled without errors
|
||||
|
||||
## Testing
|
||||
|
||||
### Test Coverage
|
||||
- **Unit Tests**: `tests/test_metric_calculator.py`
|
||||
- **Integration Tests**: Included in storage and strategy tests
|
||||
- **Edge Cases**: Empty data, zero volume, boundary conditions
|
||||
|
||||
### Running Tests
|
||||
```bash
|
||||
# Run metric calculator tests specifically
|
||||
uv run pytest tests/test_metric_calculator.py -v
|
||||
|
||||
# Run all tests with metrics
|
||||
uv run pytest -k "metric" -v
|
||||
|
||||
# Performance tests
|
||||
uv run pytest tests/test_metric_calculator.py::test_calculate_obi_performance
|
||||
```
|
||||
|
||||
## Known Issues
|
||||
|
||||
### Current Limitations
|
||||
- **Precision**: Floating-point arithmetic limitations for very small numbers
|
||||
- **Scale**: No optimization for extremely large orderbooks (>10k levels)
|
||||
- **Currency**: No multi-currency support (assumes single denomination)
|
||||
|
||||
### Planned Enhancements
|
||||
- **Decimal Precision**: Consider `decimal.Decimal` for high-precision calculations
|
||||
- **Vectorization**: NumPy integration for batch calculations
|
||||
- **Additional Metrics**: Volume Profile, Liquidity metrics, Delta Flow
|
||||
|
||||
---
|
||||
|
||||
The metrics calculation system provides a robust foundation for financial analysis with clean interfaces, comprehensive error handling, and optimal performance for high-frequency trading data.
|
||||
147
docs/modules/metrics_calculator.md
Normal file
@ -0,0 +1,147 @@
|
||||
# Module: metrics_calculator
|
||||
|
||||
## Purpose
|
||||
The `metrics_calculator` module handles calculation and management of trading metrics including Order Book Imbalance (OBI) and Cumulative Volume Delta (CVD). It provides windowed aggregation with throttled updates for real-time visualization.
|
||||
|
||||
## Public Interface
|
||||
|
||||
### Classes
|
||||
- `MetricsCalculator(window_seconds: int = 60, emit_every_n_updates: int = 25)`: Main metrics calculation engine
|
||||
|
||||
### Methods
|
||||
- `update_cvd_from_trade(side: str, size: float) -> None`: Update CVD from individual trade data
|
||||
- `update_obi_metrics(timestamp: str, total_bids: float, total_asks: float) -> None`: Update OBI metrics from orderbook volumes
|
||||
- `finalize_metrics() -> None`: Emit final metrics bar at processing end
|
||||
|
||||
### Properties
|
||||
- `cvd_cumulative: float`: Current cumulative volume delta value
|
||||
|
||||
### Private Methods
|
||||
- `_emit_metrics_bar() -> None`: Emit current metrics to visualization layer
|
||||
|
||||
## Usage Examples
|
||||
|
||||
```python
|
||||
from metrics_calculator import MetricsCalculator
|
||||
|
||||
# Initialize calculator
|
||||
calc = MetricsCalculator(window_seconds=60, emit_every_n_updates=25)
|
||||
|
||||
# Update CVD from trades
|
||||
calc.update_cvd_from_trade("buy", 1.5) # +1.5 CVD
|
||||
calc.update_cvd_from_trade("sell", 1.0) # -1.0 CVD, net +0.5
|
||||
|
||||
# Update OBI from orderbook
|
||||
total_bids, total_asks = 150.0, 120.0
|
||||
calc.update_obi_metrics("1640995200000", total_bids, total_asks)
|
||||
|
||||
# Access current CVD
|
||||
current_cvd = calc.cvd_cumulative # 0.5
|
||||
|
||||
# Finalize at end of processing
|
||||
calc.finalize_metrics()
|
||||
```
|
||||
|
||||
## Metrics Definitions
|
||||
|
||||
### Cumulative Volume Delta (CVD)
|
||||
- **Formula**: CVD = Σ(buy_volume - sell_volume)
|
||||
- **Interpretation**: Positive = more buying pressure, Negative = more selling pressure
|
||||
- **Accumulation**: Running total across all processed trades
|
||||
- **Update Frequency**: Every trade
|
||||
|
||||
### Order Book Imbalance (OBI)
|
||||
- **Formula**: OBI = total_bid_volume - total_ask_volume
|
||||
- **Interpretation**: Positive = more bid liquidity, Negative = more ask liquidity
|
||||
- **Aggregation**: OHLC-style bars per time window (open, high, low, close)
|
||||
- **Update Frequency**: Throttled per orderbook update
|
||||
|
||||
## Dependencies
|
||||
|
||||
### Internal
|
||||
- `viz_io.upsert_metric_bar`: Output interface for visualization
|
||||
|
||||
### External
|
||||
- `logging`: Warning messages for unknown trade sides
|
||||
- `typing`: Type annotations
|
||||
|
||||
## Windowed Aggregation
|
||||
|
||||
### OBI Windows
|
||||
- **Window Size**: Configurable via `window_seconds` (default: 60)
|
||||
- **Window Alignment**: Aligned to epoch time boundaries
|
||||
- **OHLC Tracking**: Maintains open, high, low, close values per window
|
||||
- **Rollover**: Automatic window transitions with final bar emission
|
||||
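Epoch alignment and OHLC folding reduce to a few lines; a hedged sketch, assuming millisecond timestamps:

```python
def window_start_ms(timestamp_ms: int, window_seconds: int) -> int:
    # Align the bucket start to absolute epoch boundaries
    # (e.g. :00 of every minute for 60-second windows)
    window_ms = window_seconds * 1000
    return (timestamp_ms // window_ms) * window_ms

def fold_obi(bar: dict | None, obi: float) -> dict:
    # The first value in a window opens the bar; later values update high/low/close
    if bar is None:
        return {"obi_open": obi, "obi_high": obi, "obi_low": obi, "obi_close": obi}
    bar["obi_high"] = max(bar["obi_high"], obi)
    bar["obi_low"] = min(bar["obi_low"], obi)
    bar["obi_close"] = obi
    return bar
```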
|
||||
### Throttling Mechanism
|
||||
- **Purpose**: Reduce I/O overhead during high-frequency updates
|
||||
- **Trigger**: Every N updates (configurable via `emit_every_n_updates`)
|
||||
- **Behavior**: Emits intermediate updates for real-time visualization
|
||||
- **Final Emission**: Guaranteed on window rollover and finalization
|
||||
|
||||
## State Management
|
||||
|
||||
### CVD State
|
||||
- `cvd_cumulative: float`: Running total across all trades
|
||||
- **Persistence**: Maintained throughout processor lifetime
|
||||
- **Updates**: Incremental addition/subtraction per trade
|
||||
|
||||
### OBI State
|
||||
- `metrics_window_start: int`: Current window start timestamp
|
||||
- `metrics_bar: dict`: Current OBI OHLC values
|
||||
- `_metrics_since_last_emit: int`: Throttling counter
|
||||
|
||||
## Output Format
|
||||
|
||||
### Metrics Bar Structure
|
||||
```python
|
||||
{
|
||||
'obi_open': float, # First OBI value in window
|
||||
'obi_high': float, # Maximum OBI in window
|
||||
'obi_low': float, # Minimum OBI in window
|
||||
'obi_close': float, # Latest OBI value
|
||||
}
|
||||
```
|
||||
|
||||
### Visualization Integration
|
||||
- Emitted via `viz_io.upsert_metric_bar(timestamp, obi_open, obi_high, obi_low, obi_close, cvd_value)`
|
||||
- Compatible with existing OHLC visualization infrastructure
|
||||
- Real-time updates during active processing
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
- **Low Memory**: Maintains only current window state
|
||||
- **Throttled I/O**: Configurable update frequency prevents excessive writes
|
||||
- **Efficient Updates**: O(1) operations for trade and OBI updates
|
||||
- **Window Management**: Automatic transitions without manual intervention
|
||||
|
||||
## Configuration
|
||||
|
||||
### Constructor Parameters
|
||||
- `window_seconds: int`: Time window for OBI aggregation (default: 60)
|
||||
- `emit_every_n_updates: int`: Throttling factor for intermediate updates (default: 25)
|
||||
|
||||
### Tuning Guidelines
|
||||
- **Higher throttling**: Reduces I/O load, delays real-time updates
|
||||
- **Lower throttling**: More responsive visualization, higher I/O overhead
|
||||
- **Window size**: Affects granularity of OBI trends (shorter = more detail)
|
||||
|
||||
## Testing
|
||||
|
||||
```bash
|
||||
uv run pytest test_metrics_calculator.py -v
|
||||
```
|
||||
|
||||
Test coverage includes:
|
||||
- CVD accumulation accuracy across multiple trades
|
||||
- OBI window rollover and OHLC tracking
|
||||
- Throttling behavior verification
|
||||
- Edge cases (unknown trade sides, empty windows)
|
||||
- Integration with visualization output
|
||||
|
||||
## Known Limitations
|
||||
|
||||
- CVD calculation assumes binary buy/sell classification
|
||||
- No support for partial fills or complex order types
|
||||
- OBI calculation treats all liquidity equally (no price weighting)
|
||||
- Window boundaries aligned to absolute timestamps (no sliding windows)
|
||||
122
docs/modules/ohlc_processor.md
Normal file
@ -0,0 +1,122 @@
|
||||
# Module: ohlc_processor
|
||||
|
||||
## Purpose
|
||||
The `ohlc_processor` module serves as the main coordinator for trade data processing, orchestrating OHLC aggregation, orderbook management, and metrics calculation. It has been refactored into a modular architecture using composition with specialized helper modules.
|
||||
|
||||
## Public Interface
|
||||
|
||||
### Classes
|
||||
- `OHLCProcessor(window_seconds: int = 60, depth_levels_per_side: int = 50)`: Main orchestrator class that coordinates trade processing using composition
|
||||
|
||||
### Methods
|
||||
- `process_trades(trades: list[tuple]) -> None`: Aggregate trades into OHLC bars and update CVD metrics
|
||||
- `update_orderbook(ob_update: OrderbookUpdate) -> None`: Apply orderbook updates and calculate OBI metrics
|
||||
- `finalize() -> None`: Emit final OHLC bar and metrics data
|
||||
- `cvd_cumulative` (property): Access to cumulative volume delta value
|
||||
|
||||
### Composed Modules
|
||||
- `OrderbookManager`: Handles in-memory orderbook state and depth snapshots
|
||||
- `MetricsCalculator`: Manages OBI and CVD metric calculations
|
||||
- `level_parser` functions: Parse and normalize orderbook level data
|
||||
|
||||
## Usage Examples
|
||||
|
||||
```python
|
||||
from ohlc_processor import OHLCProcessor
|
||||
from db_interpreter import DBInterpreter
|
||||
|
||||
# Initialize processor with 1-minute windows and 50 depth levels
|
||||
processor = OHLCProcessor(window_seconds=60, depth_levels_per_side=50)
|
||||
|
||||
# Process streaming data
|
||||
for ob_update, trades in DBInterpreter(db_path).stream():
|
||||
# Aggregate trades into OHLC bars
|
||||
processor.process_trades(trades)
|
||||
|
||||
# Update orderbook and emit depth snapshots
|
||||
processor.update_orderbook(ob_update)
|
||||
|
||||
# Finalize processing
|
||||
processor.finalize()
|
||||
```
|
||||
|
||||
### Advanced Configuration
|
||||
```python
|
||||
# Custom window size and depth levels
|
||||
processor = OHLCProcessor(
|
||||
window_seconds=30, # 30-second bars
|
||||
depth_levels_per_side=25 # Top 25 levels per side
|
||||
)
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
|
||||
### Internal Modules
|
||||
- `orderbook_manager.OrderbookManager`: In-memory orderbook state management
|
||||
- `metrics_calculator.MetricsCalculator`: OBI and CVD metrics calculation
|
||||
- `level_parser`: Orderbook level parsing utilities
|
||||
- `viz_io`: JSON output for visualization
|
||||
- `db_interpreter.OrderbookUpdate`: Input data structures
|
||||
|
||||
### External
|
||||
- `typing`: Type annotations
|
||||
- `logging`: Debug and operational logging
|
||||
|
||||
## Modular Architecture
|
||||
|
||||
The processor now follows a clean composition pattern:
|
||||
|
||||
1. **Main Coordinator** (`OHLCProcessor`):
|
||||
- Orchestrates trade and orderbook processing
|
||||
- Maintains OHLC bar state and window management
|
||||
- Delegates specialized tasks to composed modules
|
||||
|
||||
2. **Orderbook Management** (`OrderbookManager`):
|
||||
- Maintains in-memory price→size mappings
|
||||
- Applies partial updates and handles deletions
|
||||
- Provides sorted top-N level extraction
|
||||
|
||||
3. **Metrics Calculation** (`MetricsCalculator`):
|
||||
- Tracks CVD from trade flow (buy/sell volume delta)
|
||||
- Calculates OBI from orderbook volume imbalance
|
||||
- Manages windowed metrics aggregation with throttling
|
||||
|
||||
4. **Level Parsing** (`level_parser` module):
|
||||
- Normalizes JSON and Python literal level representations
|
||||
- Handles zero-size levels for orderbook deletions
|
||||
- Provides robust error handling for malformed data
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
- **Throttled Updates**: Prevents excessive I/O during high-frequency periods
|
||||
- **Memory Efficient**: Maintains only current window and top-N depth levels
|
||||
- **Incremental Processing**: Applies only changed orderbook levels
|
||||
- **Atomic Operations**: Thread-safe updates to shared data structures
|
||||
|
||||
## Testing
|
||||
|
||||
Run module tests:
|
||||
```bash
|
||||
uv run pytest test_ohlc_processor.py -v
|
||||
```
|
||||
|
||||
Test coverage includes:
|
||||
- OHLC calculation accuracy across window boundaries
|
||||
- Volume accumulation correctness
|
||||
- High/low price tracking
|
||||
- Orderbook update application
|
||||
- Depth snapshot generation
|
||||
- OBI metric calculation
|
||||
|
||||
## Known Issues
|
||||
|
||||
- Orderbook level parsing assumes well-formed JSON or Python literals
|
||||
- Memory usage scales with number of active price levels
|
||||
- Clock skew between trades and orderbook updates not handled
|
||||
|
||||
## Configuration Options
|
||||
|
||||
- `window_seconds`: Time window size for OHLC aggregation (default: 60)
|
||||
- `depth_levels_per_side`: Number of top price levels to maintain (default: 50)
|
||||
- `UPSERT_THROTTLE_MS`: Minimum interval between upsert operations (internal)
|
||||
- `DEPTH_EMIT_THROTTLE_MS`: Minimum interval between depth emissions (internal)
|
||||
121
docs/modules/orderbook_manager.md
Normal file
@ -0,0 +1,121 @@
|
||||
# Module: orderbook_manager
|
||||
|
||||
## Purpose
|
||||
The `orderbook_manager` module provides in-memory orderbook state management with partial update capabilities. It maintains separate bid and ask sides and supports efficient top-level extraction for visualization.
|
||||
|
||||
## Public Interface
|
||||
|
||||
### Classes
|
||||
- `OrderbookManager(depth_levels_per_side: int = 50)`: Main orderbook state manager
|
||||
|
||||
### Methods
|
||||
- `apply_updates(bids_updates: List[Tuple[float, float]], asks_updates: List[Tuple[float, float]]) -> None`: Apply partial updates to both sides
|
||||
- `get_total_volume() -> Tuple[float, float]`: Get total bid and ask volumes
|
||||
- `get_top_levels() -> Tuple[List[List[float]], List[List[float]]]`: Get sorted top levels for both sides
|
||||
|
||||
### Private Methods
|
||||
- `_apply_partial_updates(side_map: Dict[float, float], updates: List[Tuple[float, float]]) -> None`: Apply updates to one side
|
||||
- `_build_top_levels(side_map: Dict[float, float], limit: int, reverse: bool) -> List[List[float]]`: Extract sorted top levels
|
||||
|
||||
## Usage Examples
|
||||
|
||||
```python
|
||||
from orderbook_manager import OrderbookManager
|
||||
|
||||
# Initialize manager
|
||||
manager = OrderbookManager(depth_levels_per_side=25)
|
||||
|
||||
# Apply orderbook updates
|
||||
bids = [(50000.0, 1.5), (49999.0, 2.0)]
|
||||
asks = [(50001.0, 1.2), (50002.0, 0.8)]
|
||||
manager.apply_updates(bids, asks)
|
||||
|
||||
# Get volume totals for OBI calculation
|
||||
total_bids, total_asks = manager.get_total_volume()
|
||||
obi = total_bids - total_asks
|
||||
|
||||
# Get top levels for depth visualization
|
||||
bids_sorted, asks_sorted = manager.get_top_levels()
|
||||
|
||||
# Handle deletions (size = 0)
|
||||
deletions = [(50000.0, 0.0)] # Remove price level
|
||||
manager.apply_updates(deletions, [])
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
|
||||
### External
|
||||
- `typing`: Type annotations for Dict, List, Tuple
|
||||
|
||||
## State Management
|
||||
|
||||
### Internal State
|
||||
- `_book_bids: Dict[float, float]`: Price → size mapping for bid side
|
||||
- `_book_asks: Dict[float, float]`: Price → size mapping for ask side
|
||||
- `depth_levels_per_side: int`: Configuration for top-N extraction
|
||||
|
||||
### Update Semantics
|
||||
- **Size = 0**: Remove price level (deletion)
|
||||
- **Size > 0**: Upsert price level with new size
|
||||
- **Size < 0**: Ignored (invalid update)
|
||||
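Those semantics boil down to a few lines per side, roughly as in this sketch of the per-side update (the real `_apply_partial_updates` may differ slightly):

```python
from typing import Dict, List, Tuple

def _apply_partial_updates(side_map: Dict[float, float],
                           updates: List[Tuple[float, float]]) -> None:
    for price, size in updates:
        if size == 0:
            side_map.pop(price, None)   # size 0 means "delete this price level"
        elif size > 0:
            side_map[price] = size      # upsert with the new absolute size
        # negative sizes are treated as invalid and skipped
```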
|
||||
### Sorting Behavior
|
||||
- **Bids**: Descending by price (highest price first)
|
||||
- **Asks**: Ascending by price (lowest price first)
|
||||
- **Top-N**: Limited by `depth_levels_per_side` parameter
|
||||
|
||||
## Performance Characteristics
|
||||
|
||||
- **Memory Efficient**: Only stores non-zero price levels
|
||||
- **Fast Updates**: O(1) upsert/delete operations using dict
|
||||
- **Efficient Sorting**: Only sorts when extracting top levels
|
||||
- **Bounded Output**: Limits result size for visualization performance
|
||||
|
||||
## Use Cases
|
||||
|
||||
### OBI Calculation
|
||||
```python
|
||||
total_bids, total_asks = manager.get_total_volume()
|
||||
order_book_imbalance = total_bids - total_asks
|
||||
```
|
||||
|
||||
### Depth Visualization
|
||||
```python
|
||||
bids, asks = manager.get_top_levels()
|
||||
depth_payload = {"bids": bids, "asks": asks}
|
||||
```
|
||||
|
||||
### Incremental Updates
|
||||
```python
|
||||
# Typical orderbook update cycle
|
||||
updates = parse_orderbook_changes(raw_data)
|
||||
manager.apply_updates(updates['bids'], updates['asks'])
|
||||
```
|
||||
|
||||
## Testing
|
||||
|
||||
```bash
|
||||
uv run pytest test_orderbook_manager.py -v
|
||||
```
|
||||
|
||||
Test coverage includes:
|
||||
- Partial update application correctness
|
||||
- Deletion handling (size = 0)
|
||||
- Volume calculation accuracy
|
||||
- Top-level sorting and limiting
|
||||
- Edge cases (empty books, single levels)
|
||||
- Performance with large orderbooks
|
||||
|
||||
## Configuration
|
||||
|
||||
- `depth_levels_per_side`: Controls output size for visualization (default: 50)
|
||||
- Affects memory usage and sorting performance
|
||||
- Higher values provide more market depth detail
|
||||
- Lower values improve processing speed
|
||||
|
||||
## Known Limitations
|
||||
|
||||
- No built-in validation of price/size values
|
||||
- Memory usage scales with number of unique price levels
|
||||
- No historical state tracking (current snapshot only)
|
||||
- No support for spread calculation or market data statistics
|
||||
155
docs/modules/viz_io.md
Normal file
@ -0,0 +1,155 @@
|
||||
# Module: viz_io
|
||||
|
||||
## Purpose
|
||||
The `viz_io` module provides atomic inter-process communication (IPC) between the data processing pipeline and the visualization frontend. It manages JSON file-based data exchange with atomic writes to prevent race conditions and data corruption.
|
||||
|
||||
## Public Interface
|
||||
|
||||
### Functions
|
||||
- `add_ohlc_bar(timestamp, open_price, high_price, low_price, close_price, volume)`: Append new OHLC bar to rolling dataset
|
||||
- `upsert_ohlc_bar(timestamp, open_price, high_price, low_price, close_price, volume)`: Update existing bar or append new one
|
||||
- `clear_data()`: Reset OHLC dataset to empty state
|
||||
- `add_metric_bar(timestamp, obi_open, obi_high, obi_low, obi_close)`: Append OBI metric bar
|
||||
- `upsert_metric_bar(timestamp, obi_open, obi_high, obi_low, obi_close)`: Update existing OBI bar or append new one
|
||||
- `clear_metrics()`: Reset metrics dataset to empty state
|
||||
- `set_depth_data(bids, asks)`: Update current orderbook depth snapshot
|
||||
|
||||
### Constants
|
||||
- `DATA_FILE`: Path to OHLC data JSON file
|
||||
- `DEPTH_FILE`: Path to depth data JSON file
|
||||
- `METRICS_FILE`: Path to metrics data JSON file
|
||||
- `MAX_BARS`: Maximum number of bars to retain (1000)
|
||||
|
||||
## Usage Examples
|
||||
|
||||
### Basic OHLC Operations
|
||||
```python
|
||||
import viz_io
|
||||
|
||||
# Add a new OHLC bar
|
||||
viz_io.add_ohlc_bar(
|
||||
timestamp=1640995200000, # Unix timestamp in milliseconds
|
||||
open_price=50000.0,
|
||||
high_price=50100.0,
|
||||
low_price=49900.0,
|
||||
close_price=50050.0,
|
||||
volume=125.5
|
||||
)
|
||||
|
||||
# Update the current bar (if timestamp matches) or add new one
|
||||
viz_io.upsert_ohlc_bar(
|
||||
timestamp=1640995200000,
|
||||
open_price=50000.0,
|
||||
high_price=50150.0, # Updated high
|
||||
low_price=49850.0, # Updated low
|
||||
close_price=50075.0, # Updated close
|
||||
volume=130.2 # Updated volume
|
||||
)
|
||||
```
|
||||
|
||||
### Orderbook Depth Management
|
||||
```python
|
||||
# Set current depth snapshot
|
||||
bids = [[49990.0, 1.5], [49985.0, 2.1], [49980.0, 0.8]]
|
||||
asks = [[50010.0, 1.2], [50015.0, 1.8], [50020.0, 2.5]]
|
||||
|
||||
viz_io.set_depth_data(bids, asks)
|
||||
```
|
||||
|
||||
### Metrics Operations
|
||||
```python
|
||||
# Add Order Book Imbalance metrics
|
||||
viz_io.add_metric_bar(
|
||||
timestamp=1640995200000,
|
||||
obi_open=0.15,
|
||||
obi_high=0.22,
|
||||
obi_low=0.08,
|
||||
obi_close=0.18
|
||||
)
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
|
||||
### Internal
|
||||
- None (standalone utility module)
|
||||
|
||||
### External
|
||||
- `json`: JSON serialization/deserialization
|
||||
- `pathlib`: File path handling
|
||||
- `typing`: Type annotations
|
||||
- `tempfile`: Atomic write operations
|
||||
|
||||
## Data Formats
|
||||
|
||||
### OHLC Data (`ohlc_data.json`)
|
||||
```json
|
||||
[
|
||||
[1640995200000, 50000.0, 50100.0, 49900.0, 50050.0, 125.5],
|
||||
[1640995260000, 50050.0, 50200.0, 50000.0, 50150.0, 98.3]
|
||||
]
|
||||
```
|
||||
Format: `[timestamp, open, high, low, close, volume]`
|
||||
|
||||
### Depth Data (`depth_data.json`)
|
||||
```json
|
||||
{
|
||||
"bids": [[49990.0, 1.5], [49985.0, 2.1]],
|
||||
"asks": [[50010.0, 1.2], [50015.0, 1.8]]
|
||||
}
|
||||
```
|
||||
Format: `{"bids": [[price, size], ...], "asks": [[price, size], ...]}`
|
||||
|
||||
### Metrics Data (`metrics_data.json`)
|
||||
```json
|
||||
[
|
||||
[1640995200000, 0.15, 0.22, 0.08, 0.18],
|
||||
[1640995260000, 0.18, 0.25, 0.12, 0.20]
|
||||
]
|
||||
```
|
||||
Format: `[timestamp, obi_open, obi_high, obi_low, obi_close]`
|
||||
|
||||
## Atomic Write Operations
|
||||
|
||||
All write operations use atomic file replacement to prevent partial reads:
|
||||
|
||||
1. Write data to temporary file
|
||||
2. Flush and sync to disk
|
||||
3. Atomically rename temporary file to target file
|
||||
|
||||
This ensures the visualization frontend always reads complete, valid JSON data.
|
||||
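The pattern is roughly the following sketch; the helper name `_atomic_write_json` is illustrative rather than the module's actual symbol:

```python
import json
import os
import tempfile
from pathlib import Path

def _atomic_write_json(path: Path, payload) -> None:
    # Create the temp file in the target directory so os.replace stays on one filesystem
    with tempfile.NamedTemporaryFile("w", dir=path.parent, delete=False, suffix=".tmp") as tmp:
        json.dump(payload, tmp)
        tmp.flush()
        os.fsync(tmp.fileno())      # make sure the bytes hit disk before the rename
        tmp_name = tmp.name
    os.replace(tmp_name, path)      # atomic replace on both POSIX and Windows
```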
|
||||
## Performance Characteristics
|
||||
|
||||
- **Bounded Memory**: OHLC and metrics datasets limited to 1000 bars max
|
||||
- **Atomic Operations**: No partial reads possible during writes
|
||||
- **Rolling Window**: Automatic trimming of old data maintains constant memory usage
|
||||
- **Fast Lookups**: Timestamp-based upsert operations use list scanning (acceptable for 1000 items)
|
||||
|
||||
## Testing
|
||||
|
||||
Run module tests:
|
||||
```bash
|
||||
uv run pytest test_viz_io.py -v
|
||||
```
|
||||
|
||||
Test coverage includes:
|
||||
- Atomic write operations
|
||||
- Data format validation
|
||||
- Rolling window behavior
|
||||
- Upsert logic correctness
|
||||
- File corruption prevention
|
||||
- Concurrent read/write scenarios
|
||||
|
||||
## Known Issues
|
||||
|
||||
- File I/O may block briefly during atomic writes
|
||||
- JSON parsing errors not propagated to callers
|
||||
- Limited to 1000 bars maximum (configurable via MAX_BARS)
|
||||
- No compression for large datasets
|
||||
|
||||
## Thread Safety
|
||||
|
||||
All operations are safe for a single writer with multiple concurrent readers:
|
||||
- Writer: Data processing pipeline (single thread)
|
||||
- Readers: Visualization frontend (polling)
|
||||
- Atomic file operations prevent corruption during concurrent access
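
On the reader side, a polling loop along these lines is safe under this model; the function name, file name, and poll interval are illustrative:

```python
import json
import time
from pathlib import Path

def poll_ohlc(path: Path = Path("ohlc_data.json"), interval_s: float = 0.5):
    """Yield the latest OHLC list whenever the file changes; tolerate a missing file."""
    last_mtime = 0.0
    while True:
        try:
            mtime = path.stat().st_mtime
            if mtime != last_mtime:
                last_mtime = mtime
                yield json.loads(path.read_text())  # always a complete document, thanks to atomic writes
        except FileNotFoundError:
            pass
        time.sleep(interval_s)
```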
|
||||
@ -1,214 +0,0 @@
|
||||
"""
|
||||
Interactive visualizer using Plotly + Dash for orderflow analysis.
|
||||
|
||||
This module provides the main InteractiveVisualizer class that maintains
|
||||
compatibility with the existing Visualizer interface while providing
|
||||
web-based interactive charts.
|
||||
"""
|
||||
|
||||
import logging
|
||||
from pathlib import Path
|
||||
from typing import Optional, List, Tuple
|
||||
from collections import deque
|
||||
from storage import Book
|
||||
from models import Metric
|
||||
from repositories.sqlite_repository import SQLiteOrderflowRepository
|
||||
|
||||
|
||||
class InteractiveVisualizer:
|
||||
"""Interactive web-based visualizer for orderflow data using Plotly + Dash.
|
||||
|
||||
Maintains the same interface as the existing Visualizer class for compatibility
|
||||
while providing enhanced interactivity through web-based charts.
|
||||
|
||||
Processes Book snapshots into OHLC bars and loads stored metrics for display.
|
||||
"""
|
||||
|
||||
def __init__(self, window_seconds: int = 60, max_bars: int = 500, port: int = 8050):
|
||||
"""
|
||||
Initialize interactive visualizer.
|
||||
|
||||
Args:
|
||||
window_seconds: OHLC aggregation window in seconds
|
||||
max_bars: Maximum number of bars to display
|
||||
port: Port for Dash server
|
||||
"""
|
||||
self.window_seconds = window_seconds
|
||||
self.max_bars = max_bars
|
||||
self.port = port
|
||||
self._db_path: Optional[Path] = None
|
||||
|
||||
# Processed data storage
|
||||
self._ohlc_data: List[Tuple[int, float, float, float, float, float]] = []
|
||||
self._metrics_data: List[Metric] = []
|
||||
|
||||
# Simple cache for performance
|
||||
self._cache_book_hash: Optional[int] = None
|
||||
self._cache_db_path_hash: Optional[int] = None
|
||||
|
||||
# OHLC calculation state (matches existing visualizer pattern)
|
||||
self._current_bucket_ts: Optional[int] = None
|
||||
self._open = self._high = self._low = self._close = None
|
||||
self._volume: float = 0.0
|
||||
|
||||
def set_db_path(self, db_path: Path) -> None:
|
||||
"""Set database path for metrics loading."""
|
||||
self._db_path = db_path
|
||||
|
||||
def update_from_book(self, book: Book) -> None:
|
||||
"""Process book snapshots into OHLC data and load corresponding metrics."""
|
||||
if not book.snapshots:
|
||||
logging.warning("Book has no snapshots to visualize")
|
||||
return
|
||||
|
||||
# Simple cache check to avoid reprocessing same data
|
||||
book_hash = hash((len(book.snapshots), book.first_timestamp, book.last_timestamp))
|
||||
db_hash = hash(str(self._db_path)) if self._db_path else None
|
||||
|
||||
if (self._cache_book_hash == book_hash and
|
||||
self._cache_db_path_hash == db_hash and
|
||||
self._ohlc_data):
|
||||
logging.info(f"Using cached data: {len(self._ohlc_data)} OHLC bars, {len(self._metrics_data)} metrics")
|
||||
return
|
||||
|
||||
# Clear previous data
|
||||
self._ohlc_data.clear()
|
||||
self._metrics_data.clear()
|
||||
self._reset_ohlc_state()
|
||||
|
||||
# Process snapshots into OHLC bars (reusing existing logic)
|
||||
self._process_snapshots_to_ohlc(book.snapshots)
|
||||
|
||||
# Load stored metrics for the same time range
|
||||
if self._db_path and book.snapshots:
|
||||
start_ts = min(s.timestamp for s in book.snapshots)
|
||||
end_ts = max(s.timestamp for s in book.snapshots)
|
||||
self._metrics_data = self._load_stored_metrics(start_ts, end_ts)
|
||||
|
||||
# Update cache
|
||||
self._cache_book_hash = book_hash
|
||||
self._cache_db_path_hash = db_hash
|
||||
|
||||
logging.info(f"Processed {len(self._ohlc_data)} OHLC bars and {len(self._metrics_data)} metrics")
|
||||
|
||||
def show(self) -> None:
|
||||
"""Launch Dash server and display interactive charts with processed data."""
|
||||
from dash_app import create_dash_app_with_data, create_dash_app
|
||||
|
||||
# Create Dash app with real data
|
||||
if self._ohlc_data:
|
||||
app = create_dash_app_with_data(
|
||||
ohlc_data=self._ohlc_data,
|
||||
metrics_data=self._metrics_data,
|
||||
debug=True,
|
||||
port=self.port
|
||||
)
|
||||
else:
|
||||
app = create_dash_app(debug=True, port=self.port)
|
||||
|
||||
# Log data summary
|
||||
logging.info(f"Launching interactive visualizer:")
|
||||
logging.info(f" - OHLC bars: {len(self._ohlc_data)}")
|
||||
logging.info(f" - Metrics points: {len(self._metrics_data)}")
|
||||
if self._ohlc_data:
|
||||
start_time = self._ohlc_data[0][0]
|
||||
end_time = self._ohlc_data[-1][0]
|
||||
logging.info(f" - Time range: {start_time} to {end_time}")
|
||||
|
||||
app.run(debug=True, port=self.port, host='127.0.0.1')
|
||||
|
||||
def _reset_ohlc_state(self) -> None:
|
||||
"""Reset OHLC calculation state."""
|
||||
self._current_bucket_ts = None
|
||||
self._open = self._high = self._low = self._close = None
|
||||
self._volume = 0.0
|
||||
|
||||
def _bucket_start(self, ts: int) -> int:
|
||||
"""Calculate bucket start timestamp (matches existing visualizer)."""
|
||||
normalized_ts = self._normalize_ts_seconds(ts)
|
||||
return normalized_ts - (normalized_ts % self.window_seconds)
|
||||
|
||||
def _normalize_ts_seconds(self, ts: int) -> int:
|
||||
"""Normalize timestamp to seconds (matches existing visualizer)."""
|
||||
its = int(ts)
|
||||
if its > 100_000_000_000_000: # > 1e14 → microseconds
|
||||
return its // 1_000_000
|
||||
if its > 100_000_000_000: # > 1e11 → milliseconds
|
||||
return its // 1_000
|
||||
return its
|
||||
|
||||
def _process_snapshots_to_ohlc(self, snapshots) -> None:
|
||||
"""Process book snapshots into OHLC bars (adapted from existing visualizer)."""
|
||||
logging.info(f"Processing {len(snapshots)} snapshots into OHLC bars")
|
||||
|
||||
snapshot_count = 0
|
||||
for snapshot in sorted(snapshots, key=lambda s: s.timestamp):
|
||||
snapshot_count += 1
|
||||
if not snapshot.bids or not snapshot.asks:
|
||||
continue
|
||||
|
||||
try:
|
||||
best_bid = max(snapshot.bids.keys())
|
||||
best_ask = min(snapshot.asks.keys())
|
||||
except (ValueError, TypeError):
|
||||
continue
|
||||
|
||||
mid = (float(best_bid) + float(best_ask)) / 2.0
|
||||
ts_raw = int(snapshot.timestamp)
|
||||
ts = self._normalize_ts_seconds(ts_raw)
|
||||
bucket_ts = self._bucket_start(ts)
|
||||
|
||||
# Calculate volume from trades in this snapshot
|
||||
snapshot_volume = sum(trade.size for trade in snapshot.trades)
|
||||
|
||||
# New bucket: close and store previous bar
|
||||
if self._current_bucket_ts is None:
|
||||
self._current_bucket_ts = bucket_ts
|
||||
self._open = self._high = self._low = self._close = mid
|
||||
self._volume = snapshot_volume
|
||||
elif bucket_ts != self._current_bucket_ts:
|
||||
self._append_current_bar()
|
||||
self._current_bucket_ts = bucket_ts
|
||||
self._open = self._high = self._low = self._close = mid
|
||||
self._volume = snapshot_volume
|
||||
else:
|
||||
# Update current bucket OHLC and accumulate volume
|
||||
if self._high is None or mid > self._high:
|
||||
self._high = mid
|
||||
if self._low is None or mid < self._low:
|
||||
self._low = mid
|
||||
self._close = mid
|
||||
self._volume += snapshot_volume
|
||||
|
||||
# Finalize the last bar
|
||||
self._append_current_bar()
|
||||
|
||||
logging.info(f"Created {len(self._ohlc_data)} OHLC bars from {snapshot_count} valid snapshots")
|
||||
|
||||
def _append_current_bar(self) -> None:
|
||||
"""Finalize current OHLC bar and add to data list."""
|
||||
if self._current_bucket_ts is None or self._open is None:
|
||||
return
|
||||
self._ohlc_data.append(
|
||||
(
|
||||
self._current_bucket_ts,
|
||||
float(self._open),
|
||||
float(self._high if self._high is not None else self._open),
|
||||
float(self._low if self._low is not None else self._open),
|
||||
float(self._close if self._close is not None else self._open),
|
||||
float(self._volume),
|
||||
)
|
||||
)
|
||||
|
||||
def _load_stored_metrics(self, start_timestamp: int, end_timestamp: int) -> List[Metric]:
|
||||
"""Load stored metrics from database for the given time range."""
|
||||
if not self._db_path:
|
||||
return []
|
||||
|
||||
try:
|
||||
repo = SQLiteOrderflowRepository(self._db_path)
|
||||
with repo.connect() as conn:
|
||||
return repo.load_metrics_by_timerange(conn, start_timestamp, end_timestamp)
|
||||
except Exception as e:
|
||||
logging.error(f"Error loading metrics for visualization: {e}")
|
||||
return []
|
||||
85
level_parser.py
Normal file
85
level_parser.py
Normal file
@ -0,0 +1,85 @@
|
||||
"""Level parsing utilities for orderbook data."""
|
||||
|
||||
import json
|
||||
import ast
|
||||
import logging
|
||||
from typing import List, Any, Tuple
|
||||
|
||||
|
||||
def normalize_levels(levels: Any) -> List[List[float]]:
|
||||
"""
|
||||
Convert string-encoded levels into [[price, size], ...] floats.
|
||||
|
||||
Filters out zero/negative sizes. Supports JSON and Python literal formats.
|
||||
"""
|
||||
if not levels or levels == '[]':
|
||||
return []
|
||||
|
||||
parsed = _parse_string_to_list(levels)
|
||||
if not parsed:
|
||||
return []
|
||||
|
||||
pairs: List[List[float]] = []
|
||||
for item in parsed:
|
||||
price, size = _extract_price_size(item)
|
||||
if price is None or size is None:
|
||||
continue
|
||||
try:
|
||||
p, s = float(price), float(size)
|
||||
if s > 0:
|
||||
pairs.append([p, s])
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
if not pairs:
|
||||
logging.debug("normalize_levels: no valid pairs parsed from input")
|
||||
return pairs
|
||||
|
||||
|
||||
def parse_levels_including_zeros(levels: Any) -> List[Tuple[float, float]]:
|
||||
"""
|
||||
Parse levels into (price, size) tuples including zero sizes for deletions.
|
||||
|
||||
Similar to normalize_levels but preserves zero sizes (for orderbook deletions).
|
||||
"""
|
||||
if not levels or levels == '[]':
|
||||
return []
|
||||
|
||||
parsed = _parse_string_to_list(levels)
|
||||
if not parsed:
|
||||
return []
|
||||
|
||||
results: List[Tuple[float, float]] = []
|
||||
for item in parsed:
|
||||
price, size = _extract_price_size(item)
|
||||
if price is None or size is None:
|
||||
continue
|
||||
try:
|
||||
p, s = float(price), float(size)
|
||||
if s >= 0:
|
||||
results.append((p, s))
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
return results
|
||||
|
||||
|
||||
def _parse_string_to_list(levels: Any) -> List[Any]:
|
||||
"""Parse string levels to list, trying JSON first then literal_eval."""
|
||||
try:
|
||||
parsed = json.loads(levels)
|
||||
except Exception:
|
||||
try:
|
||||
parsed = ast.literal_eval(levels)
|
||||
except Exception:
|
||||
return []
|
||||
return parsed if isinstance(parsed, list) else []
|
||||
|
||||
|
||||
def _extract_price_size(item: Any) -> Tuple[Any, Any]:
|
||||
"""Extract price and size from dict or list/tuple format."""
|
||||
if isinstance(item, dict):
|
||||
return item.get("price", item.get("p")), item.get("size", item.get("s"))
|
||||
elif isinstance(item, (list, tuple)) and len(item) >= 2:
|
||||
return item[0], item[1]
|
||||
return None, None
|
||||
91
main.py
91
main.py
@ -1,46 +1,93 @@
|
||||
import logging
|
||||
import typer
|
||||
from pathlib import Path
|
||||
from typing import List
|
||||
from datetime import datetime, timezone
|
||||
from storage import Storage
|
||||
from strategies import DefaultStrategy
|
||||
import subprocess
|
||||
import time
|
||||
import threading
|
||||
from db_interpreter import DBInterpreter
|
||||
from ohlc_processor import OHLCProcessor
|
||||
from desktop_app import MainWindow
|
||||
import sys
|
||||
from PySide6.QtWidgets import QApplication
|
||||
|
||||
databases_path = Path("../data/OKX")
|
||||
|
||||
storage = None
|
||||
|
||||
def main(instrument: str = typer.Argument(..., help="Instrument to backtest, e.g. BTC-USDT"),
|
||||
start_date: str = typer.Argument(..., help="Start date, e.g. 2025-07-01"),
|
||||
end_date: str = typer.Argument(..., help="End date, e.g. 2025-08-01")):
|
||||
start_date: str = typer.Argument(..., help="Start date, e.g. 2025-07-01"),
|
||||
end_date: str = typer.Argument(..., help="End date, e.g. 2025-08-01"),
|
||||
window_seconds: int = typer.Option(60, help="OHLC window size in seconds")):
|
||||
"""
|
||||
Process orderbook data and visualize OHLC charts in real-time.
|
||||
"""
|
||||
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
|
||||
|
||||
start_date = datetime.strptime(start_date, "%Y-%m-%d").replace(tzinfo=timezone.utc)
|
||||
end_date = datetime.strptime(end_date, "%Y-%m-%d").replace(tzinfo=timezone.utc)
|
||||
|
||||
databases_path = Path("../data/OKX")
|
||||
|
||||
if not databases_path.exists():
|
||||
logging.error(f"Database path does not exist: {databases_path}")
|
||||
return
|
||||
|
||||
db_paths = list(databases_path.glob(f"{instrument}*.db"))
|
||||
db_paths.sort()
|
||||
|
||||
if not db_paths:
|
||||
logging.error(f"No database files found for instrument {instrument} in {databases_path}")
|
||||
return
|
||||
|
||||
logging.info(f"Found {len(db_paths)} database files: {[p.name for p in db_paths]}")
|
||||
|
||||
storage = Storage(instrument)
|
||||
strategy = DefaultStrategy(instrument)
|
||||
processor = OHLCProcessor(window_seconds=window_seconds)
|
||||
|
||||
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
|
||||
def process_data():
|
||||
"""Process database data in a separate thread."""
|
||||
try:
|
||||
for db_path in db_paths:
|
||||
db_name_parts = db_path.name.split(".")[0].split("-")
|
||||
if len(db_name_parts) < 5:
|
||||
logging.warning(f"Unexpected filename format: {db_path.name}")
|
||||
continue
|
||||
|
||||
db_name = db_name_parts[2:5]
|
||||
db_date = datetime.strptime("".join(db_name), "%y%m%d").replace(tzinfo=timezone.utc)
|
||||
|
||||
for db_path in db_paths:
|
||||
db_name = db_path.name.split(".")[0].split("-")[2:5]
|
||||
db_date = datetime.strptime("".join(db_name), "%y%m%d").replace(tzinfo=timezone.utc)
|
||||
if db_date < start_date or db_date >= end_date:
|
||||
logging.info(f"Skipping {db_path.name} - outside date range")
|
||||
continue
|
||||
|
||||
if db_date < start_date or db_date >= end_date:
|
||||
continue
|
||||
logging.info(f"Processing database: {db_path.name}")
|
||||
db_interpreter = DBInterpreter(db_path)
|
||||
|
||||
batch_count = 0
|
||||
for orderbook_update, trades in db_interpreter.stream():
|
||||
batch_count += 1
|
||||
|
||||
processor.process_trades(trades)
|
||||
processor.update_orderbook(orderbook_update)
|
||||
|
||||
processor.finalize()
|
||||
logging.info("Data processing completed")
|
||||
except Exception as e:
|
||||
logging.error(f"Error in data processing: {e}")
|
||||
|
||||
logging.info(f"Processing database: {db_path.name}")
|
||||
try:
|
||||
app = QApplication(sys.argv)
|
||||
desktop_app = MainWindow()
|
||||
|
||||
strategy.set_db_path(db_path)
|
||||
desktop_app.setup_data_processor(processor)
|
||||
desktop_app.show()
|
||||
|
||||
storage.build_booktick_from_db(db_path)
|
||||
logging.info(f"Processed {len(storage.book.snapshots)} snapshots with metrics")
|
||||
logging.info("Desktop visualizer started")
|
||||
|
||||
strategy.on_booktick(storage.book)
|
||||
data_thread = threading.Thread(target=process_data, daemon=True)
|
||||
data_thread.start()
|
||||
|
||||
logging.info("Processing complete.")
|
||||
app.exec()
|
||||
|
||||
except Exception as e:
|
||||
logging.error(f"Failed to start desktop visualizer: {e}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
21
metrics_calculator.py
Normal file
21
metrics_calculator.py
Normal file
@ -0,0 +1,21 @@
|
||||
import logging
|
||||
from typing import Optional, Tuple
|
||||
|
||||
|
||||
class MetricsCalculator:
|
||||
def __init__(self):
|
||||
self.cvd_cumulative = 0.0
|
||||
self.obi_value = 0.0
|
||||
|
||||
def update_cvd_from_trade(self, side: str, size: float) -> None:
|
||||
if side == "buy":
|
||||
volume_delta = float(size)
|
||||
elif side == "sell":
|
||||
volume_delta = -float(size)
|
||||
else:
|
||||
logging.warning(f"Unknown trade side '{side}', treating as neutral")
|
||||
|
||||
self.cvd_cumulative += volume_delta
|
||||
|
||||
def update_obi_from_book(self, total_bids: float, total_asks: float) -> None:
|
||||
self.obi_value = float(total_bids - total_asks)
|
||||
192
models.py
192
models.py
@ -1,192 +0,0 @@
|
||||
"""Core data models for orderbook reconstruction and backtesting.
|
||||
|
||||
This module defines lightweight data structures for orderbook levels, trades,
|
||||
book snapshots, and the in-memory `Book` container used by the `Storage` layer.
|
||||
"""
|
||||
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Dict, List
|
||||
|
||||
|
||||
@dataclass(slots=True)
|
||||
class OrderbookLevel:
|
||||
"""Represents a single price level on one side of the orderbook.
|
||||
|
||||
Attributes:
|
||||
price: Price level for the orderbook entry.
|
||||
size: Total size at this price level.
|
||||
liquidation_count: Number of liquidations at this level.
|
||||
order_count: Number of resting orders at this level.
|
||||
"""
|
||||
|
||||
price: float
|
||||
size: float
|
||||
liquidation_count: int
|
||||
order_count: int
|
||||
|
||||
|
||||
@dataclass(slots=True)
|
||||
class Trade:
|
||||
"""Represents a single trade event."""
|
||||
|
||||
id: int
|
||||
trade_id: float
|
||||
price: float
|
||||
size: float
|
||||
side: str
|
||||
timestamp: int
|
||||
|
||||
|
||||
@dataclass(slots=True)
|
||||
class Metric:
|
||||
"""Represents calculated metrics for a snapshot."""
|
||||
|
||||
snapshot_id: int
|
||||
timestamp: int
|
||||
obi: float
|
||||
cvd: float
|
||||
best_bid: float | None = None
|
||||
best_ask: float | None = None
|
||||
|
||||
|
||||
@dataclass
|
||||
class BookSnapshot:
|
||||
"""In-memory representation of an orderbook state at a specific timestamp."""
|
||||
|
||||
id: int = 0
|
||||
timestamp: int = 0
|
||||
bids: Dict[float, OrderbookLevel] = field(default_factory=dict)
|
||||
asks: Dict[float, OrderbookLevel] = field(default_factory=dict)
|
||||
trades: List[Trade] = field(default_factory=list)
|
||||
|
||||
|
||||
class Book:
|
||||
"""Container for managing orderbook snapshots and their evolution over time."""
|
||||
|
||||
def __init__(self) -> None:
|
||||
"""Initialize an empty book."""
|
||||
self.snapshots: List[BookSnapshot] = []
|
||||
self.first_timestamp = 0
|
||||
self.last_timestamp = 0
|
||||
|
||||
def add_snapshot(self, snapshot: BookSnapshot) -> None:
|
||||
"""Add a snapshot to the book's history and update time bounds."""
|
||||
self.snapshots.append(snapshot)
|
||||
if self.first_timestamp == 0 or snapshot.timestamp < self.first_timestamp:
|
||||
self.first_timestamp = snapshot.timestamp
|
||||
if snapshot.timestamp > self.last_timestamp:
|
||||
self.last_timestamp = snapshot.timestamp
|
||||
|
||||
def create_snapshot(self, id: int, timestamp: int) -> BookSnapshot:
|
||||
"""Create a new snapshot, add it to history, and return it.
|
||||
|
||||
Copies bids/asks/trades from the previous snapshot to maintain continuity.
|
||||
"""
|
||||
prev_snapshot = self.snapshots[-1] if self.snapshots else BookSnapshot()
|
||||
snapshot = BookSnapshot(
|
||||
id=id,
|
||||
timestamp=timestamp,
|
||||
bids={
|
||||
k: OrderbookLevel(
|
||||
price=v.price,
|
||||
size=v.size,
|
||||
liquidation_count=v.liquidation_count,
|
||||
order_count=v.order_count,
|
||||
)
|
||||
for k, v in prev_snapshot.bids.items()
|
||||
},
|
||||
asks={
|
||||
k: OrderbookLevel(
|
||||
price=v.price,
|
||||
size=v.size,
|
||||
liquidation_count=v.liquidation_count,
|
||||
order_count=v.order_count,
|
||||
)
|
||||
for k, v in prev_snapshot.asks.items()
|
||||
},
|
||||
trades=prev_snapshot.trades.copy() if prev_snapshot.trades else [],
|
||||
)
|
||||
self.add_snapshot(snapshot)
|
||||
return snapshot
|
||||
|
||||
|
||||
class MetricCalculator:
|
||||
"""Calculator for OBI and CVD metrics from orderbook snapshots and trades."""
|
||||
|
||||
@staticmethod
|
||||
def calculate_obi(snapshot: BookSnapshot) -> float:
|
||||
"""Calculate Order Book Imbalance for a snapshot.
|
||||
|
||||
Formula: OBI = (Vb - Va) / (Vb + Va)
|
||||
Where Vb = total bid volume, Va = total ask volume
|
||||
|
||||
Args:
|
||||
snapshot: BookSnapshot containing bids and asks data.
|
||||
|
||||
Returns:
|
||||
OBI value between -1 and 1, or 0.0 if no volume.
|
||||
"""
|
||||
# Calculate total bid volume
|
||||
vb = sum(level.size for level in snapshot.bids.values())
|
||||
|
||||
# Calculate total ask volume
|
||||
va = sum(level.size for level in snapshot.asks.values())
|
||||
|
||||
# Handle edge case where total volume is zero
|
||||
if vb + va == 0:
|
||||
return 0.0
|
||||
|
||||
# Calculate OBI using standard formula
|
||||
obi = (vb - va) / (vb + va)
|
||||
|
||||
# Ensure result is within expected bounds [-1, 1]
|
||||
return max(-1.0, min(1.0, obi))
|
||||
|
||||
@staticmethod
|
||||
def get_best_bid_ask(snapshot: BookSnapshot) -> tuple[float | None, float | None]:
|
||||
"""Extract best bid and ask prices from a snapshot.
|
||||
|
||||
Args:
|
||||
snapshot: BookSnapshot containing bids and asks data.
|
||||
|
||||
Returns:
|
||||
Tuple of (best_bid, best_ask) or (None, None) if no data.
|
||||
"""
|
||||
best_bid = max(snapshot.bids.keys()) if snapshot.bids else None
|
||||
best_ask = min(snapshot.asks.keys()) if snapshot.asks else None
|
||||
return best_bid, best_ask
|
||||
|
||||
@staticmethod
|
||||
def calculate_volume_delta(trades: List[Trade]) -> float:
|
||||
"""Calculate Volume Delta for a list of trades.
|
||||
|
||||
Volume Delta = Buy Volume - Sell Volume
|
||||
Buy trades (side = "buy") contribute positive volume
|
||||
Sell trades (side = "sell") contribute negative volume
|
||||
|
||||
Args:
|
||||
trades: List of Trade objects for a specific timestamp.
|
||||
|
||||
Returns:
|
||||
Volume delta value (can be positive, negative, or zero).
|
||||
"""
|
||||
buy_volume = sum(trade.size for trade in trades if trade.side == "buy")
|
||||
sell_volume = sum(trade.size for trade in trades if trade.side == "sell")
|
||||
return buy_volume - sell_volume
|
||||
|
||||
@staticmethod
|
||||
def calculate_cvd(previous_cvd: float, volume_delta: float) -> float:
|
||||
"""Calculate Cumulative Volume Delta.
|
||||
|
||||
CVD_t = CVD_{t-1} + Volume_Delta_t
|
||||
|
||||
Args:
|
||||
previous_cvd: Previous CVD value (use 0.0 for reset or first calculation).
|
||||
volume_delta: Current volume delta to add.
|
||||
|
||||
Returns:
|
||||
New cumulative volume delta value.
|
||||
"""
|
||||
return previous_cvd + volume_delta
|
||||
|
||||
|
||||
70
ohlc_processor.py
Normal file
70
ohlc_processor.py
Normal file
@ -0,0 +1,70 @@
|
||||
import logging
|
||||
from typing import List, Any, Dict, Tuple
|
||||
|
||||
from viz_io import add_ohlc_bar, upsert_ohlc_bar, _atomic_write_json, DEPTH_FILE
|
||||
from db_interpreter import OrderbookUpdate
|
||||
from level_parser import normalize_levels, parse_levels_including_zeros
|
||||
from orderbook_manager import OrderbookManager
|
||||
from metrics_calculator import MetricsCalculator
|
||||
|
||||
|
||||
class OHLCProcessor:
|
||||
"""
|
||||
Processes trade data and orderbook updates into OHLC bars and depth snapshots.
|
||||
|
||||
This class aggregates individual trades into time-windowed OHLC (Open, High, Low, Close)
|
||||
bars and maintains an in-memory orderbook state for depth visualization. It also
|
||||
calculates Order Book Imbalance (OBI) and Cumulative Volume Delta (CVD) metrics.
|
||||
|
||||
The processor uses throttled updates to balance visualization responsiveness with
|
||||
I/O efficiency, emitting intermediate updates during active windows.
|
||||
|
||||
Attributes:
|
||||
window_seconds: Time window duration for OHLC aggregation
|
||||
depth_levels_per_side: Number of top price levels to maintain per side
|
||||
trades_processed: Total number of trades processed
|
||||
bars_created: Total number of OHLC bars created
|
||||
cvd_cumulative: Running cumulative volume delta (via metrics calculator)
|
||||
"""
|
||||
|
||||
def __init__(self, window_seconds: int = 60) -> None:
self.window_seconds = window_seconds
|
||||
self.current_bar = None
|
||||
self.trades_processed = 0
|
||||
|
||||
self._orderbook = OrderbookManager()
|
||||
self._metrics = MetricsCalculator()
|
||||
|
||||
@property
|
||||
def cvd_cumulative(self) -> float:
|
||||
"""Access cumulative CVD from metrics calculator."""
|
||||
return self._metrics.cvd_cumulative
|
||||
|
||||
def process_trades(self, trades: List[Tuple[Any, ...]]) -> None:
|
||||
for trade in trades:
|
||||
trade_id, trade_id_str, price, size, side, timestamp_ms = trade[:6]
|
||||
timestamp_ms = int(timestamp_ms)
self.trades_processed += 1
|
||||
|
||||
self._metrics.update_cvd_from_trade(side, size)
|
||||
|
||||
if not self.current_bar:
|
||||
self.current_bar = {
|
||||
'open': float(price),
|
||||
'high': float(price),
|
||||
'low': float(price),
|
||||
'close': float(price),
'volume': 0.0
|
||||
}
|
||||
self.current_bar['high'] = max(self.current_bar['high'], float(price))
|
||||
self.current_bar['low'] = min(self.current_bar['low'], float(price))
|
||||
self.current_bar['close'] = float(price)
|
||||
self.current_bar['volume'] += float(size)
|
||||
|
||||
|
||||
def update_orderbook(self, ob_update: OrderbookUpdate) -> None:
|
||||
bids_updates = parse_levels_including_zeros(ob_update.bids)
|
||||
asks_updates = parse_levels_including_zeros(ob_update.asks)
|
||||
|
||||
self._orderbook.apply_updates(bids_updates, asks_updates)
|
||||
|
||||
total_bids, total_asks = self._orderbook.get_total_volume()
|
||||
self._metrics.update_obi_from_book(total_bids, total_asks)
|
||||
|
||||
37
orderbook_manager.py
Normal file
37
orderbook_manager.py
Normal file
@ -0,0 +1,37 @@
|
||||
from typing import Dict, List, Tuple
|
||||
|
||||
|
||||
class OrderbookManager:
|
||||
def __init__(self):
|
||||
self._book_bids: Dict[float, float] = {}
|
||||
self._book_asks: Dict[float, float] = {}
|
||||
|
||||
def apply_updates(self,
|
||||
bids_updates: List[Tuple[float, float]],
|
||||
asks_updates: List[Tuple[float, float]]) -> None:
|
||||
self._apply_partial_updates(self._book_bids, bids_updates)
|
||||
self._apply_partial_updates(self._book_asks, asks_updates)
|
||||
|
||||
def get_total_volume(self) -> Tuple[float, float]:
|
||||
total_bids = sum(self._book_bids.values()) if self._book_bids else 0.0
|
||||
total_asks = sum(self._book_asks.values()) if self._book_asks else 0.0
|
||||
return total_bids, total_asks
|
||||
|
||||
def get_top_levels(self) -> Tuple[List[List[float]], List[List[float]]]:
|
||||
bids_sorted = self._build_top_levels(self._book_bids, reverse=True)
|
||||
asks_sorted = self._build_top_levels(self._book_asks, reverse=False)
|
||||
return bids_sorted, asks_sorted
|
||||
|
||||
def _apply_partial_updates(self,
|
||||
side_map: Dict[float, float],
|
||||
updates: List[Tuple[float, float]]) -> None:
|
||||
for price, size in updates:
|
||||
if size == 0.0:
|
||||
side_map.pop(price, None)
|
||||
elif size > 0.0:
|
||||
side_map[price] = size
|
||||
|
||||
def _build_top_levels(self, side_map: Dict[float, float], reverse: bool) -> List[List[float]]:
|
||||
items = [(p, s) for p, s in side_map.items() if s > 0]
|
||||
items.sort(key=lambda x: x[0], reverse=reverse)
|
||||
return [[p, s] for p, s in items]
|
||||
@ -1,3 +0,0 @@
|
||||
"""Parsing utilities for transforming raw persisted data into domain models."""
|
||||
|
||||
|
||||
@ -1,45 +0,0 @@
|
||||
from __future__ import annotations
|
||||
|
||||
from ast import literal_eval
|
||||
from typing import Dict
|
||||
import logging
|
||||
|
||||
from models import OrderbookLevel
|
||||
|
||||
|
||||
class OrderbookParser:
|
||||
"""Parser for orderbook side text into structured levels.
|
||||
|
||||
Maintains a price cache for memory efficiency and provides a method to
|
||||
parse a side into a dictionary of price -> OrderbookLevel.
|
||||
"""
|
||||
|
||||
def __init__(self, price_cache: dict[float, float] | None = None, debug: bool = False) -> None:
|
||||
self._price_cache: dict[float, float] = price_cache or {}
|
||||
self._debug = debug
|
||||
|
||||
def parse_side(self, text: str, side_dict: Dict[float, OrderbookLevel]) -> None:
|
||||
"""Parse orderbook side data from text and populate the provided dictionary."""
|
||||
if not text or text.strip() == "":
|
||||
return
|
||||
try:
|
||||
arr = literal_eval(text)
|
||||
for p, s, lc, oc in arr:
|
||||
price = float(p)
|
||||
size = float(s)
|
||||
price = self._price_cache.get(price, price)
|
||||
if size > 0:
|
||||
side_dict[price] = OrderbookLevel(
|
||||
price=price,
|
||||
size=size,
|
||||
liquidation_count=int(lc),
|
||||
order_count=int(oc),
|
||||
)
|
||||
except Exception as e:
|
||||
if self._debug:
|
||||
logging.exception("Error parsing orderbook data")
|
||||
logging.debug(f"Text sample: {text[:100]}...")
|
||||
else:
|
||||
logging.error(f"Failed to parse orderbook data: {type(e).__name__}")
|
||||
|
||||
|
||||
@ -1,138 +0,0 @@
|
||||
# PRD: OBI and CVD Metrics Integration
|
||||
|
||||
## Introduction/Overview
|
||||
|
||||
This feature integrates Order Book Imbalance (OBI) and Cumulative Volume Delta (CVD) calculations into the orderflow backtest system. Currently, the system stores all snapshots in memory during processing, which consumes excessive memory. The goal is to compute OBI and CVD metrics during the `build_booktick_from_db` execution, store these metrics persistently in the database, and visualize them alongside OHLC and volume data.
|
||||
|
||||
## Goals
|
||||
|
||||
1. **Memory Optimization**: Reduce memory usage by storing only essential data (OBI/CVD metrics, best bid/ask) instead of full snapshot history
|
||||
2. **Metric Calculation**: Implement per-snapshot OBI and CVD calculations with maximum granularity
|
||||
3. **Persistent Storage**: Store calculated metrics in the database to avoid recalculation
|
||||
4. **Enhanced Visualization**: Display OBI and CVD curves beneath volume graphs with shared time axis
|
||||
5. **Incremental CVD**: Support incremental CVD calculation that can be reset at user-defined points
|
||||
|
||||
## User Stories
|
||||
|
||||
1. **As a trader**, I want to see OBI and CVD metrics calculated for each orderbook snapshot so that I can analyze market sentiment with maximum granularity.
|
||||
|
||||
2. **As a system user**, I want metrics to be stored persistently in the database so that I don't need to recalculate them when re-analyzing the same dataset.
|
||||
|
||||
3. **As a data analyst**, I want to visualize OBI and CVD curves alongside OHLC and volume data so that I can correlate price movements with orderbook imbalances and volume deltas.
|
||||
|
||||
4. **As a performance-conscious user**, I want the system to use less memory during processing so that I can analyze larger datasets (months to years of data).
|
||||
|
||||
5. **As a researcher**, I want incremental CVD calculation so that I can track cumulative volume changes from any chosen starting point in my analysis.
|
||||
|
||||
## Functional Requirements
|
||||
|
||||
### Database Schema Updates
|
||||
1. **Create metrics table** with the following structure:
|
||||
```sql
|
||||
CREATE TABLE metrics (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
snapshot_id INTEGER,
|
||||
timestamp TEXT,
|
||||
obi REAL,
|
||||
cvd REAL,
|
||||
best_bid REAL,
|
||||
best_ask REAL,
|
||||
FOREIGN KEY (snapshot_id) REFERENCES book(id)
|
||||
);
|
||||
```
|
||||
|
||||
### OBI Calculation
|
||||
2. **Calculate OBI per snapshot** using the formula: `OBI = (Vb - Va) / (Vb + Va)` where:
|
||||
- Vb = total volume on bid side
|
||||
- Va = total volume on ask side
|
||||
3. **Handle edge cases** where Vb + Va = 0 by setting OBI = 0.0
|
||||
4. **Store OBI values** in the metrics table for each processed snapshot
|
||||
|
||||
### CVD Calculation
|
||||
5. **Calculate Volume Delta per timestamp** by summing all trades at each snapshot timestamp:
|
||||
- Buy trades (side = "buy"): add to positive volume
|
||||
- Sell trades (side = "sell"): add to negative volume
|
||||
- VD = Buy Volume - Sell Volume
|
||||
6. **Calculate Cumulative Volume Delta** as running sum: `CVD_t = CVD_{t-1} + VD_t` (a worked sketch of the OBI and CVD formulas follows this requirements list)
|
||||
7. **Support CVD reset functionality** to allow starting cumulative calculation from any point
|
||||
8. **Handle snapshots with no trades** by carrying forward the previous CVD value
|
||||
|
||||
### Storage System Updates
|
||||
9. **Modify Storage class** to integrate metric calculations during `build_booktick_from_db`
|
||||
10. **Update Book model** to store only essential data: OBI/CVD time series and best bid/ask levels
|
||||
11. **Remove full snapshot retention** from memory after metric calculation
|
||||
12. **Add metric persistence** to SQLite database during processing
|
||||
|
||||
### Strategy Integration
|
||||
13. **Enhance DefaultStrategy** to calculate both OBI and CVD metrics
|
||||
14. **Return time-series data structures** compatible with visualization system
|
||||
15. **Integrate metric calculation** into the existing `on_booktick` workflow
|
||||
|
||||
### Visualization Enhancements
|
||||
16. **Add OBI and CVD plotting** to the visualizer beneath volume graphs
|
||||
17. **Implement shared X-axis** for time alignment across OHLC, volume, OBI, and CVD charts
|
||||
18. **Support 6-hour bar aggregation** as the initial time resolution
|
||||
19. **Use standard line styling** for OBI and CVD curves
|
||||
20. **Make time resolution configurable** for future flexibility
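
To make requirements 2-8 concrete, here is a small worked sketch of the two formulas on toy numbers; the values are illustrative only:

```python
# Order Book Imbalance for one snapshot: OBI = (Vb - Va) / (Vb + Va)
bid_sizes = [1.5, 2.1, 0.8]          # Vb components
ask_sizes = [1.2, 1.8, 2.5]          # Va components
vb, va = sum(bid_sizes), sum(ask_sizes)
obi = 0.0 if vb + va == 0 else (vb - va) / (vb + va)   # requirement 3: guard the zero-volume case
print(f"OBI = {obi:+.3f}")           # (4.4 - 5.5) / (4.4 + 5.5) = -0.111

# Cumulative Volume Delta across snapshots: CVD_t = CVD_{t-1} + VD_t
trades_per_snapshot = [
    [("buy", 0.4), ("sell", 0.1)],   # VD = +0.3
    [],                              # no trades: CVD is carried forward (requirement 8)
    [("sell", 0.7)],                 # VD = -0.7
]
cvd = 0.0
for trades in trades_per_snapshot:
    vd = sum(s if side == "buy" else -s for side, s in trades)
    cvd += vd
print(f"CVD = {cvd:+.1f}")           # 0.3 + 0.0 - 0.7 = -0.4
```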
|
||||
|
||||
## Non-Goals (Out of Scope)
|
||||
|
||||
1. **Real-time streaming** - This feature focuses on historical data processing
|
||||
2. **Advanced visualization features** - Complex styling, indicators, or interactive elements beyond basic line charts
|
||||
3. **Alternative CVD calculation methods** - Only implementing the standard buy/sell volume delta approach
|
||||
4. **Multi-threading optimization** - Simple sequential processing for initial implementation
|
||||
5. **Data compression** - No advanced compression techniques for stored metrics
|
||||
6. **Export functionality** - No CSV/JSON export of calculated metrics
|
||||
|
||||
## Technical Considerations
|
||||
|
||||
### Database Performance
|
||||
- Add indexes on `metrics.timestamp` and `metrics.snapshot_id` for efficient querying
|
||||
- Consider batch insertions for metric data to improve write performance
|
||||
|
||||
### Memory Management
|
||||
- Process snapshots sequentially and discard after metric calculation
|
||||
- Maintain only calculated time-series data in memory
|
||||
- Keep best bid/ask data for potential future analysis needs
|
||||
|
||||
### Data Integrity
|
||||
- Ensure metric calculations are atomic with snapshot processing
|
||||
- Add foreign key constraints to maintain referential integrity
|
||||
- Implement transaction boundaries for consistent data state
|
||||
|
||||
### Integration Points
|
||||
- Modify `SQLiteOrderflowRepository` to support metrics table operations
|
||||
- Update `Storage._create_snapshots_from_rows` to include metric calculation
|
||||
- Extend `Visualizer` to handle additional metric data series
|
||||
|
||||
## Success Metrics
|
||||
|
||||
1. **Memory Usage Reduction**: Achieve at least 70% reduction in peak memory usage during processing
|
||||
2. **Processing Speed**: Maintain or improve current processing speed (rows/sec) despite additional calculations
|
||||
3. **Data Accuracy**: 100% correlation between manually calculated and stored OBI/CVD values for test datasets
|
||||
4. **Visualization Quality**: Successfully display OBI and CVD curves with proper time alignment
|
||||
5. **Storage Efficiency**: Metrics table size should be manageable relative to source data (< 20% overhead)
|
||||
|
||||
## Open Questions
|
||||
|
||||
1. **Index Strategy**: Should we add additional database indexes for time-range queries on metrics?
|
||||
2. **CVD Starting Value**: Should CVD start from 0 for each database file, or allow continuation from previous sessions?
|
||||
3. **Error Recovery**: How should the system handle partial metric calculations if processing is interrupted?
|
||||
4. **Validation**: Do we need validation checks to ensure OBI values stay within [-1, 1] range?
|
||||
5. **Performance Monitoring**: Should we add timing metrics to track calculation performance per snapshot?
|
||||
|
||||
## Implementation Priority
|
||||
|
||||
**Phase 1: Core Functionality**
|
||||
- Database schema updates
|
||||
- Basic OBI/CVD calculation
|
||||
- Metric storage integration
|
||||
|
||||
**Phase 2: Memory Optimization**
|
||||
- Remove full snapshot retention
|
||||
- Implement essential data-only storage
|
||||
|
||||
**Phase 3: Visualization**
|
||||
- Add metric plotting to visualizer
|
||||
- Implement time axis alignment
|
||||
- Support 6-hour bar aggregation
|
||||
@ -6,12 +6,11 @@ readme = "README.md"
|
||||
requires-python = ">=3.12"
|
||||
dependencies = [
|
||||
"matplotlib>=3.10.5",
|
||||
"pyqt5>=5.15.11",
|
||||
"pyside6>=6.8.0",
|
||||
"pyqtgraph>=0.13.0",
|
||||
"typer>=0.16.1",
|
||||
"dash>=2.18.0",
|
||||
"plotly>=5.18.0",
|
||||
"dash-bootstrap-components>=1.5.0",
|
||||
"pandas>=2.0.0",
|
||||
"numpy>=2.3.2",
|
||||
]
|
||||
|
||||
[dependency-groups]
|
||||
|
||||
@ -1,7 +0,0 @@
|
||||
"""Repository layer for data access implementations (e.g., SQLite).
|
||||
|
||||
This package contains concrete repositories used by the `Storage` orchestrator
|
||||
to read persisted orderflow data.
|
||||
"""
|
||||
|
||||
|
||||
@ -1,188 +0,0 @@
|
||||
from __future__ import annotations
|
||||
|
||||
from pathlib import Path
|
||||
from typing import Dict, Iterator, List, Tuple
|
||||
import sqlite3
|
||||
import logging
|
||||
|
||||
from models import Trade, Metric
|
||||
|
||||
|
||||
class SQLiteOrderflowRepository:
|
||||
"""Read-only repository for loading orderflow data from a SQLite database."""
|
||||
|
||||
def __init__(self, db_path: Path) -> None:
|
||||
self.db_path = db_path
|
||||
self.conn = None
|
||||
|
||||
def connect(self) -> None:
|
||||
self.conn = sqlite3.connect(str(self.db_path))
|
||||
self.conn.execute("PRAGMA journal_mode = OFF")
|
||||
self.conn.execute("PRAGMA synchronous = OFF")
|
||||
self.conn.execute("PRAGMA cache_size = 100000")
|
||||
self.conn.execute("PRAGMA temp_store = MEMORY")
|
||||
self.conn.execute("PRAGMA mmap_size = 30000000000")
|
||||
|
||||
def count_rows(self, table: str) -> int:
|
||||
allowed_tables = {"book", "trades"}
|
||||
if table not in allowed_tables:
|
||||
raise ValueError(f"Unsupported table name: {table}")
|
||||
try:
|
||||
row = self.conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
|
||||
return int(row[0]) if row and row[0] is not None else 0
|
||||
except sqlite3.Error as e:
|
||||
logging.error(f"Error counting rows in table {table}: {e}")
|
||||
return 0
|
||||
|
||||
def load_trades(self) -> Dict[int, List[Trade]]:
|
||||
trades: List[Trade] = []
|
||||
try:
|
||||
cursor = self.conn.cursor()
|
||||
cursor.execute(
|
||||
"SELECT id, trade_id, price, size, side, timestamp FROM trades ORDER BY timestamp ASC"
|
||||
)
|
||||
for batch in iter(lambda: cursor.fetchmany(5000), []):
|
||||
for id_, trade_id, price, size, side, ts in batch:
|
||||
timestamp_int = int(ts)
|
||||
trade = Trade(
|
||||
id=id_,
|
||||
trade_id=float(trade_id),
|
||||
price=float(price),
|
||||
size=float(size),
|
||||
side=str(side),
|
||||
timestamp=timestamp_int,
|
||||
)
|
||||
trades.append(trade)
|
||||
return trades
|
||||
except sqlite3.Error as e:
|
||||
logging.error(f"Error loading trades: {e}")
|
||||
return {}
|
||||
|
||||
def iterate_book_rows(self) -> Iterator[Tuple[int, str, str, int]]:
|
||||
cursor = self.conn.cursor()
|
||||
cursor.execute("SELECT id, bids, asks, timestamp FROM book ORDER BY timestamp ASC")
|
||||
while True:
|
||||
rows = cursor.fetchmany(5000)
|
||||
if not rows:
|
||||
break
|
||||
for row in rows:
|
||||
yield row # (id, bids, asks, timestamp)
|
||||
|
||||
def create_metrics_table(self) -> None:
|
||||
"""Create the metrics table with proper indexes and foreign key constraints.
|
||||
|
||||
Args:
|
||||
conn: Active SQLite database connection.
|
||||
"""
|
||||
try:
|
||||
# Create metrics table following PRD schema
|
||||
self.conn.execute("""
|
||||
CREATE TABLE IF NOT EXISTS metrics (
|
||||
id INTEGER PRIMARY KEY AUTOINCREMENT,
|
||||
snapshot_id INTEGER NOT NULL,
|
||||
timestamp TEXT NOT NULL,
|
||||
obi REAL NOT NULL,
|
||||
cvd REAL NOT NULL,
|
||||
best_bid REAL,
|
||||
best_ask REAL,
|
||||
FOREIGN KEY (snapshot_id) REFERENCES book(id)
|
||||
)
|
||||
""")
|
||||
|
||||
# Create indexes for efficient querying
|
||||
self.conn.execute("CREATE INDEX IF NOT EXISTS idx_metrics_timestamp ON metrics(timestamp)")
|
||||
self.conn.execute("CREATE INDEX IF NOT EXISTS idx_metrics_snapshot_id ON metrics(snapshot_id)")
|
||||
|
||||
self.conn.commit()
|
||||
logging.info("Metrics table and indexes created successfully")
|
||||
|
||||
except sqlite3.Error as e:
|
||||
logging.error(f"Error creating metrics table: {e}")
|
||||
raise
|
||||
|
||||
def table_exists(self, table_name: str) -> bool:
|
||||
"""Check if a table exists in the database.
|
||||
|
||||
Args:
|
||||
conn: Active SQLite database connection.
|
||||
table_name: Name of the table to check.
|
||||
|
||||
Returns:
|
||||
True if table exists, False otherwise.
|
||||
"""
|
||||
try:
|
||||
cursor = self.conn.cursor()
|
||||
cursor.execute(
|
||||
"SELECT name FROM sqlite_master WHERE type='table' AND name=?",
|
||||
(table_name,)
|
||||
)
|
||||
return cursor.fetchone() is not None
|
||||
except sqlite3.Error as e:
|
||||
logging.error(f"Error checking if table {table_name} exists: {e}")
|
||||
return False
|
||||
|
||||
def insert_metrics_batch(self, metrics: List[Metric]) -> None:
|
||||
"""Insert multiple metrics in a single batch operation for performance.
|
||||
|
||||
Args:
|
||||
conn: Active SQLite database connection.
|
||||
metrics: List of Metric objects to insert.
|
||||
"""
|
||||
if not metrics:
|
||||
return
|
||||
|
||||
try:
|
||||
# Prepare batch data following existing batch pattern
|
||||
batch_data = [
|
||||
(m.snapshot_id, m.timestamp, m.obi, m.cvd, m.best_bid, m.best_ask)
|
||||
for m in metrics
|
||||
]
|
||||
|
||||
# Use executemany for batch insertion
|
||||
self.conn.executemany(
|
||||
"INSERT INTO metrics (snapshot_id, timestamp, obi, cvd, best_bid, best_ask) VALUES (?, ?, ?, ?, ?, ?)",
|
||||
batch_data
|
||||
)
|
||||
|
||||
logging.debug(f"Inserted {len(metrics)} metrics records")
|
||||
|
||||
except sqlite3.Error as e:
|
||||
logging.error(f"Error inserting metrics batch: {e}")
|
||||
raise
|
||||
|
||||
def load_metrics_by_timerange(self, start_timestamp: int, end_timestamp: int) -> List[Metric]:
|
||||
"""Load metrics within a specified timestamp range.
|
||||
|
||||
Args:
|
||||
conn: Active SQLite database connection.
|
||||
start_timestamp: Start of the time range (inclusive).
|
||||
end_timestamp: End of the time range (inclusive).
|
||||
|
||||
Returns:
|
||||
List of Metric objects ordered by timestamp.
|
||||
"""
|
||||
try:
|
||||
cursor = self.conn.cursor()
|
||||
cursor.execute(
|
||||
"SELECT snapshot_id, timestamp, obi, cvd, best_bid, best_ask FROM metrics WHERE timestamp >= ? AND timestamp <= ? ORDER BY timestamp ASC",
|
||||
(start_timestamp, end_timestamp)
|
||||
)
|
||||
|
||||
metrics = []
|
||||
for batch in iter(lambda: cursor.fetchmany(5000), []):
|
||||
for snapshot_id, timestamp, obi, cvd, best_bid, best_ask in batch:
|
||||
metric = Metric(
|
||||
snapshot_id=int(snapshot_id),
|
||||
timestamp=int(timestamp),
|
||||
obi=float(obi),
|
||||
cvd=float(cvd),
|
||||
best_bid=float(best_bid) if best_bid is not None else None,
|
||||
best_ask=float(best_ask) if best_ask is not None else None,
|
||||
)
|
||||
metrics.append(metric)
|
||||
|
||||
return metrics
|
||||
|
||||
except sqlite3.Error as e:
|
||||
logging.error(f"Error loading metrics by timerange: {e}")
|
||||
return []
|
||||
@ -1,153 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Run interactive visualizer using PRE-CALCULATED metrics from the database.
|
||||
No recalculation needed - just read and display!
|
||||
"""
|
||||
|
||||
from pathlib import Path
|
||||
from interactive_visualizer import InteractiveVisualizer
|
||||
from models import Book, BookSnapshot, Trade
|
||||
from parsers.orderbook_parser import OrderbookParser
|
||||
import sqlite3
|
||||
import logging
|
||||
|
||||
def load_book_snapshots_only(db_path: Path, limit: int = 10000):
|
||||
"""Load book snapshots without recalculating metrics."""
|
||||
book = Book()
|
||||
parser = OrderbookParser()
|
||||
|
||||
print(f"📖 Reading book snapshots (limit: {limit})...")
|
||||
|
||||
# Read book data directly without triggering metric calculation
|
||||
conn = sqlite3.connect(f'file:{db_path}?mode=ro', uri=True)
|
||||
|
||||
# Load trades first for efficiency
|
||||
print(" 📈 Loading trades...")
|
||||
trades_by_timestamp = {}
|
||||
trade_cursor = conn.execute('SELECT id, trade_id, price, size, side, timestamp FROM trades ORDER BY timestamp')
|
||||
for trade_row in trade_cursor:
|
||||
timestamp = int(trade_row[5])
|
||||
trade = Trade(
|
||||
id=trade_row[0],
|
||||
trade_id=float(trade_row[1]),
|
||||
price=float(trade_row[2]),
|
||||
size=float(trade_row[3]),
|
||||
side=trade_row[4],
|
||||
timestamp=timestamp
|
||||
)
|
||||
if timestamp not in trades_by_timestamp:
|
||||
trades_by_timestamp[timestamp] = []
|
||||
trades_by_timestamp[timestamp].append(trade)
|
||||
|
||||
# Get snapshots
|
||||
cursor = conn.execute('''
|
||||
SELECT id, instrument, bids, asks, timestamp
|
||||
FROM book
|
||||
ORDER BY timestamp
|
||||
LIMIT ?
|
||||
''', (limit,))
|
||||
|
||||
snapshot_count = 0
|
||||
for row in cursor:
|
||||
try:
|
||||
row_id, instrument, bids_text, asks_text, timestamp = row
|
||||
timestamp_int = int(timestamp)
|
||||
|
||||
# Create snapshot using the same logic as Storage._snapshot_from_row
|
||||
snapshot = BookSnapshot(
|
||||
id=row_id,
|
||||
timestamp=timestamp_int,
|
||||
bids={},
|
||||
asks={},
|
||||
trades=trades_by_timestamp.get(timestamp_int, []),
|
||||
)
|
||||
|
||||
# Parse bids and asks using the parser
|
||||
parser.parse_side(bids_text, snapshot.bids)
|
||||
parser.parse_side(asks_text, snapshot.asks)
|
||||
|
||||
# Only add snapshots that have both bids and asks
|
||||
if snapshot.bids and snapshot.asks:
|
||||
book.add_snapshot(snapshot)
|
||||
snapshot_count += 1
|
||||
|
||||
if snapshot_count % 1000 == 0:
|
||||
print(f" 📊 Loaded {snapshot_count} snapshots...")
|
||||
|
||||
except Exception as e:
|
||||
logging.warning(f"Error parsing snapshot {row[0]}: {e}")
|
||||
continue
|
||||
|
||||
conn.close()
|
||||
print(f"✅ Loaded {len(book.snapshots)} snapshots with trades")
|
||||
return book
|
||||
|
||||
def main():
|
||||
print("🚀 USING PRE-CALCULATED METRICS FROM DATABASE")
|
||||
print("=" * 55)
|
||||
|
||||
# Database path
|
||||
db_path = Path("../data/OKX/BTC-USDT-25-06-09.db")
|
||||
|
||||
if not db_path.exists():
|
||||
print(f"❌ Database not found: {db_path}")
|
||||
return
|
||||
|
||||
try:
|
||||
# Load ONLY book snapshots (no metric recalculation)
|
||||
book = load_book_snapshots_only(db_path, limit=5000) # Start with 5K snapshots
|
||||
|
||||
if not book.snapshots:
|
||||
print("❌ No snapshots loaded")
|
||||
return
|
||||
|
||||
print(f"✅ Book loaded: {len(book.snapshots)} snapshots")
|
||||
print(f"✅ Time range: {book.first_timestamp} to {book.last_timestamp}")
|
||||
|
||||
# Create visualizer
|
||||
viz = InteractiveVisualizer(
|
||||
window_seconds=6*3600, # 6-hour bars
|
||||
port=8050
|
||||
)
|
||||
|
||||
# Set database path so it can load PRE-CALCULATED metrics
|
||||
viz.set_db_path(db_path)
|
||||
|
||||
# Process book data (will load existing metrics automatically)
|
||||
print("⚙️ Processing book data and loading existing metrics...")
|
||||
viz.update_from_book(book)
|
||||
|
||||
print(f"✅ Generated {len(viz._ohlc_data)} OHLC bars")
|
||||
print(f"✅ Loaded {len(viz._metrics_data)} pre-calculated metrics")
|
||||
|
||||
if viz._ohlc_data:
|
||||
sample_bar = viz._ohlc_data[0]
|
||||
print(f"✅ Sample OHLC: O={sample_bar[1]:.2f}, H={sample_bar[2]:.2f}, L={sample_bar[3]:.2f}, C={sample_bar[4]:.2f}")
|
||||
|
||||
print()
|
||||
print("🌐 LAUNCHING INTERACTIVE DASHBOARD")
|
||||
print("=" * 55)
|
||||
print("🚀 Server starting at: http://127.0.0.1:8050")
|
||||
print("📊 Features available:")
|
||||
print(" ✅ OHLC candlestick chart")
|
||||
print(" ✅ Volume bar chart")
|
||||
print(" ✅ OBI line chart (from existing metrics)")
|
||||
print(" ✅ CVD line chart (from existing metrics)")
|
||||
print(" ✅ Synchronized zoom/pan")
|
||||
print(" ✅ Professional dark theme")
|
||||
print()
|
||||
print("⏹️ Press Ctrl+C to stop the server")
|
||||
print("=" * 55)
|
||||
|
||||
# Launch the dashboard
|
||||
viz.show()
|
||||
|
||||
except KeyboardInterrupt:
|
||||
print("\n⏹️ Server stopped by user")
|
||||
except Exception as e:
|
||||
print(f"❌ Error: {e}")
|
||||
import traceback
|
||||
traceback.print_exc()
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
200
storage.py
200
storage.py
@ -1,200 +0,0 @@
|
||||
"""Storage utilities to reconstruct an in-memory orderbook from a SQLite DB.
|
||||
|
||||
This module defines lightweight data structures for orderbook levels, trades,
|
||||
and a `Storage` facade that can hydrate a `Book` incrementally from rows stored
|
||||
in a SQLite file produced by an external data collector.
|
||||
"""
|
||||
|
||||
from pathlib import Path
|
||||
from datetime import datetime
|
||||
from typing import List, Dict, Optional, Iterator, Tuple
|
||||
import time
|
||||
import logging
|
||||
|
||||
from models import OrderbookLevel, Trade, BookSnapshot, Book, MetricCalculator, Metric
|
||||
from repositories.sqlite_repository import SQLiteOrderflowRepository
|
||||
from parsers.orderbook_parser import OrderbookParser
|
||||
|
||||
class Storage:
|
||||
"""High-level facade to read historical orderflow into a `Book`.
|
||||
|
||||
Attributes:
|
||||
instrument: Symbol/instrument name (e.g., "BTC-USDT").
|
||||
book: In-memory orderbook that maintains the current state and tracks timestamps.
|
||||
"""
|
||||
|
||||
def __init__(self, instrument: str) -> None:
|
||||
self.instrument = instrument
|
||||
self.book = Book()
|
||||
# Pre-allocate memory for common price points
|
||||
self._price_cache = {float(p/10): float(p/10) for p in range(1, 1000001, 5)}
|
||||
# Debug flag
|
||||
self._debug = False
|
||||
self._parser = OrderbookParser(price_cache=self._price_cache, debug=self._debug)
|
||||
|
||||
def build_booktick_from_db(self, db_path: Path) -> None:
|
||||
"""Hydrate the in-memory `book` from a SQLite database and calculate metrics.
|
||||
|
||||
Builds a Book instance with sequential snapshots and calculates OBI/CVD metrics.
|
||||
|
||||
Args:
|
||||
db_path: Path to the SQLite database file.
|
||||
"""
|
||||
self.book = Book()
|
||||
|
||||
metrics_repo = SQLiteOrderflowRepository(db_path)
|
||||
with metrics_repo.connect() as conn:
|
||||
if not metrics_repo.table_exists(conn, "metrics"):
|
||||
metrics_repo.create_metrics_table(conn)
|
||||
|
||||
trades = metrics_repo.load_trades(conn)
|
||||
|
||||
total_rows = metrics_repo.count_rows(conn, "book")
|
||||
if total_rows == 0:
|
||||
logging.info(f"No orderbook data found in {db_path}")
|
||||
return
|
||||
|
||||
rows_iter = metrics_repo.iterate_book_rows(conn)
|
||||
self._create_snapshots_and_metrics(rows_iter, trades, total_rows, conn)
|
||||
|
||||
logging.info(f"Processed {len(self.book.snapshots)} snapshots with metrics from {db_path}")
|
||||
|
||||
def _create_snapshots_and_metrics(self, rows_iter: Iterator[Tuple[int, str, str, int]], trades: List[Trade], total_rows: int, conn) -> None:
|
||||
"""Create BookSnapshot instances and calculate metrics, storing them in database.
|
||||
|
||||
Args:
|
||||
rows_iter: Iterator yielding (id, bids_text, asks_text, timestamp)
|
||||
trades: List of trades
|
||||
total_rows: Total number of rows in the book table
|
||||
conn: Database connection for storing metrics
|
||||
"""
|
||||
# Initialize CVD tracking
|
||||
current_cvd = 0.0
|
||||
metrics_batch = []
|
||||
batch_size = 1000 # Process metrics in batches for performance
|
||||
|
||||
# Set batch size and logging frequency
|
||||
log_every = max(1, total_rows // 20)
|
||||
|
||||
processed = 0
|
||||
start_time = time.time()
|
||||
last_report_time = start_time
|
||||
|
||||
for row_id, bids_text, asks_text, timestamp in rows_iter:
|
||||
snapshot = self._snapshot_from_row(row_id, bids_text, asks_text, timestamp, trades)
|
||||
if snapshot is not None:
|
||||
# Calculate metrics for this snapshot
|
||||
obi = MetricCalculator.calculate_obi(snapshot)
|
||||
volume_delta = MetricCalculator.calculate_volume_delta(trades)
|
||||
current_cvd = MetricCalculator.calculate_cvd(current_cvd, volume_delta)
|
||||
best_bid, best_ask = MetricCalculator.get_best_bid_ask(snapshot)
|
||||
|
||||
# Create metric record
|
||||
metric = Metric(
|
||||
snapshot_id=row_id,
|
||||
timestamp=int(timestamp),
|
||||
obi=obi,
|
||||
cvd=current_cvd,
|
||||
best_bid=best_bid,
|
||||
best_ask=best_ask
|
||||
)
|
||||
metrics_batch.append(metric)
|
||||
|
||||
# Add snapshot to book (for compatibility)
|
||||
self.book.add_snapshot(snapshot)
|
||||
|
||||
# Insert metrics batch when it reaches batch_size
|
||||
if len(metrics_batch) >= batch_size:
|
||||
# Use the metrics repository directly via connection
|
||||
metrics_repo = SQLiteOrderflowRepository(Path("dummy")) # Path not used for existing conn
|
||||
metrics_repo.insert_metrics_batch(conn, metrics_batch)
|
||||
conn.commit()
|
||||
metrics_batch = []
|
||||
|
||||
processed += 1
|
||||
|
||||
# Report progress
|
||||
current_time = time.time()
|
||||
if processed % log_every == 0 and current_time - last_report_time > 1.0:
|
||||
logging.info(
|
||||
f"{processed / total_rows * 100:.1f}% - OBI: {metrics_batch[-1].obi if metrics_batch else 'N/A':.3f} - "
|
||||
f"CVD: {current_cvd:.1f} - {processed/(current_time-start_time):.1f} rows/sec"
|
||||
)
|
||||
last_report_time = current_time
|
||||
|
||||
# Insert remaining metrics
|
||||
if metrics_batch:
|
||||
metrics_repo = SQLiteOrderflowRepository(Path("dummy")) # Path not used for existing conn
|
||||
metrics_repo.insert_metrics_batch(conn, metrics_batch)
|
||||
conn.commit()
|
||||
|
||||
def _create_snapshots_from_rows(self, rows_iter: Iterator[Tuple[int, str, str, int]], trades: List[Trade], total_rows: int) -> None:
|
||||
"""Create BookSnapshot instances from database rows and add them to the book.
|
||||
|
||||
Args:
|
||||
rows_iter: Iterator yielding (id, bids_text, asks_text, timestamp)
|
||||
trades: List of trades
|
||||
total_rows: Total number of rows in the book table
|
||||
"""
|
||||
# Get reference to the book
|
||||
book = self.book
|
||||
|
||||
# Set batch size and logging frequency
|
||||
log_every = max(1, total_rows // 20)
|
||||
|
||||
processed = 0
|
||||
start_time = time.time()
|
||||
last_report_time = start_time
|
||||
|
||||
for row_id, bids_text, asks_text, timestamp in rows_iter:
|
||||
snapshot = self._snapshot_from_row(row_id, bids_text, asks_text, timestamp, trades)
|
||||
if snapshot is not None:
|
||||
book.add_snapshot(snapshot)
|
||||
|
||||
processed += 1
|
||||
|
||||
# Report progress
|
||||
current_time = time.time()
|
||||
if processed % log_every == 0 and current_time - last_report_time > 1.0:
|
||||
logging.info(
|
||||
f"{processed / total_rows * 100:.1f}% - asks {len(self.book.snapshots[-1].asks) if self.book.snapshots else 0} - "
|
||||
f"bids {len(self.book.snapshots[-1].bids) if self.book.snapshots else 0} - "
|
||||
f"{processed/(current_time-start_time):.1f} rows/sec"
|
||||
)
|
||||
last_report_time = current_time
|
||||
|
||||
def _snapshot_from_row(
|
||||
self,
|
||||
row_id: int,
|
||||
bids_text: str,
|
||||
asks_text: str,
|
||||
timestamp: int,
|
||||
trades_by_timestamp: Dict[int, List[Trade]],
|
||||
) -> Optional[BookSnapshot]:
|
||||
"""Create a `BookSnapshot` from a single DB row and attached trades.
|
||||
|
||||
Returns None if the snapshot has no bids or asks after parsing.
|
||||
"""
|
||||
timestamp_int = int(timestamp)
|
||||
snapshot = BookSnapshot(
|
||||
id=row_id,
|
||||
timestamp=timestamp_int,
|
||||
bids={},
|
||||
asks={},
|
||||
trades=trades_by_timestamp.get(timestamp_int, []),
|
||||
)
|
||||
|
||||
self._parser.parse_side(bids_text, snapshot.bids)
|
||||
self._parser.parse_side(asks_text, snapshot.asks)
|
||||
|
||||
if snapshot.bids and snapshot.asks:
|
||||
return snapshot
|
||||
return None
|
||||
|
||||
def _parse_orderbook_side(self, text: str, side_dict: Dict[float, OrderbookLevel]) -> None:
|
||||
"""Compatibility wrapper delegating to `OrderbookParser.parse_side`."""
|
||||
self._parser.parse_side(text, side_dict)
|
||||
|
||||
# The following helper was previously used, kept here for reference
|
||||
# and potential future extensions. It has been superseded by repository
|
||||
# methods for data access and is intentionally not used.
|
||||
104
strategies.py
104
strategies.py
@ -1,104 +0,0 @@
|
||||
import logging
|
||||
from typing import Optional, Any, cast, List
|
||||
from pathlib import Path
|
||||
from storage import Book, BookSnapshot
|
||||
from models import MetricCalculator, Metric
|
||||
from repositories.sqlite_repository import SQLiteOrderflowRepository
|
||||
|
||||
class DefaultStrategy:
|
||||
"""Strategy that calculates and analyzes OBI and CVD metrics from stored data."""
|
||||
|
||||
def __init__(self, instrument: str):
|
||||
self.instrument = instrument
|
||||
self._db_path: Optional[Path] = None
|
||||
|
||||
def set_db_path(self, db_path: Path) -> None:
|
||||
"""Set the database path for loading stored metrics."""
|
||||
self._db_path = db_path
|
||||
|
||||
def compute_OBI(self, book: Book) -> List[float]:
|
||||
"""Compute Order Book Imbalance using MetricCalculator.
|
||||
|
||||
Returns:
|
||||
list: A list of OBI values, one for each snapshot in the book.
|
||||
"""
|
||||
if not book.snapshots:
|
||||
return []
|
||||
|
||||
obi_values = []
|
||||
|
||||
for snapshot in book.snapshots:
|
||||
obi = MetricCalculator.calculate_obi(snapshot)
|
||||
obi_values.append(obi)
|
||||
|
||||
return obi_values
|
||||
|
||||
def load_stored_metrics(self, start_timestamp: int, end_timestamp: int) -> List[Metric]:
|
||||
"""Load stored OBI and CVD metrics from database.
|
||||
|
||||
Args:
|
||||
start_timestamp: Start of time range to load.
|
||||
end_timestamp: End of time range to load.
|
||||
|
||||
Returns:
|
||||
List of Metric objects with OBI and CVD data.
|
||||
"""
|
||||
if not self._db_path:
|
||||
logging.warning("Database path not set, cannot load stored metrics")
|
||||
return []
|
||||
|
||||
try:
|
||||
repo = SQLiteOrderflowRepository(self._db_path)
|
||||
with repo.connect() as conn:
|
||||
return repo.load_metrics_by_timerange(conn, start_timestamp, end_timestamp)
|
||||
except Exception as e:
|
||||
logging.error(f"Error loading stored metrics: {e}")
|
||||
return []
|
||||
|
||||
def get_metrics_summary(self, metrics: List[Metric]) -> dict:
|
||||
"""Get summary statistics for loaded metrics.
|
||||
|
||||
Args:
|
||||
metrics: List of metric objects.
|
||||
|
||||
Returns:
|
||||
Dictionary with summary statistics.
|
||||
"""
|
||||
if not metrics:
|
||||
return {}
|
||||
|
||||
obi_values = [m.obi for m in metrics]
|
||||
cvd_values = [m.cvd for m in metrics]
|
||||
|
||||
return {
|
||||
"obi_min": min(obi_values),
|
||||
"obi_max": max(obi_values),
|
||||
"obi_avg": sum(obi_values) / len(obi_values),
|
||||
"cvd_start": cvd_values[0],
|
||||
"cvd_end": cvd_values[-1],
|
||||
"cvd_change": cvd_values[-1] - cvd_values[0],
|
||||
"total_snapshots": len(metrics)
|
||||
}
|
||||
|
||||
def on_booktick(self, book: Book):
|
||||
"""Hook called on each book tick; can load and analyze stored metrics."""
|
||||
# Load stored metrics if database path is available
|
||||
if self._db_path and book.first_timestamp and book.last_timestamp:
|
||||
metrics = self.load_stored_metrics(book.first_timestamp, book.last_timestamp)
|
||||
|
||||
if metrics:
|
||||
# Analyze stored metrics
|
||||
summary = self.get_metrics_summary(metrics)
|
||||
logging.info(f"Metrics summary: {summary}")
|
||||
|
||||
# Check for significant imbalances using stored OBI
|
||||
latest_metric = metrics[-1]
|
||||
if abs(latest_metric.obi) > 0.2: # 20% imbalance threshold
|
||||
logging.info(f"Significant imbalance detected: OBI={latest_metric.obi:.3f}, CVD={latest_metric.cvd:.1f}")
|
||||
else:
|
||||
# Fallback to real-time calculation for compatibility
|
||||
obi_values = self.compute_OBI(book)
|
||||
if obi_values:
|
||||
latest_obi = obi_values[-1]
|
||||
if abs(latest_obi) > 0.2:
|
||||
logging.info(f"Significant imbalance detected: {latest_obi:.3f}")
|
||||
76
tasks/prd-cumulative-volume-delta.md
Normal file
76
tasks/prd-cumulative-volume-delta.md
Normal file
@ -0,0 +1,76 @@
|
||||
## Cumulative Volume Delta (CVD) – Product Requirements Document
|
||||
|
||||
### 1) Introduction / Overview
|
||||
- Compute and visualize Cumulative Volume Delta (CVD) from trade data processed by `OHLCProcessor.process_trades`, aligned to the existing OHLC bar cadence.
|
||||
- CVD is defined as the cumulative sum of volume delta, where volume delta = buy_volume - sell_volume per trade.
|
||||
- Trade classification: `side == "buy"` → positive volume delta, `side == "sell"` → negative volume delta.
|
||||
- Persist CVD time series as scalar values per window to `metrics_data.json` and render a CVD line chart beneath the current OBI subplot in the Dash UI.
|
||||
|
||||
### 2) Goals
|
||||
- Compute volume delta from individual trades using the `side` field in the Trade dataclass.
|
||||
- Accumulate CVD across all processed trades (no session resets initially).
|
||||
- Aggregate CVD into window-aligned scalar values per `window_seconds`.
|
||||
- Extend `metrics_data.json` schema to include CVD values alongside existing OBI data.
|
||||
- Add a CVD line chart subplot beneath OBI in the main chart, sharing the time axis.
|
||||
- Throttle intra-window upserts of CVD values using the same approach/frequency as current OHLC throttling; always write on window close.
|
||||
|
||||
### 3) User Stories
|
||||
- As a researcher, I want CVD computed from actual trade data so I can assess buying/selling pressure over time.
|
||||
- As an analyst, I want CVD stored per time window so I can correlate it with price movements and OBI patterns.
|
||||
- As a developer, I want cumulative CVD values so I can analyze long-term directional bias in volume flow.
|
||||
|
||||
### 4) Functional Requirements
|
||||
1. Inputs and Definitions
|
||||
- Compute volume delta on every trade in `OHLCProcessor.process_trades`:
|
||||
- If `trade.side == "buy"` → `volume_delta = +trade.size`
|
||||
- If `trade.side == "sell"` → `volume_delta = -trade.size`
|
||||
- If `trade.side` is neither "buy" nor "sell" → `volume_delta = 0` (log warning)
|
||||
- Accumulate into running CVD: `self.cvd_cumulative += volume_delta`
|
||||
2. Windowing & Aggregation
|
||||
- Use the same `window_seconds` boundary as OHLC bars; window anchor is derived from the trade timestamp.
|
||||
- Store CVD value at window boundaries (end-of-window CVD snapshot).
|
||||
- On window rollover, capture the current `self.cvd_cumulative` value for that window.
|
||||
3. Persistence
|
||||
- Extend `metrics_data.json` schema from `[timestamp, obi_open, obi_high, obi_low, obi_close]` to `[timestamp, obi_open, obi_high, obi_low, obi_close, cvd_value]`.
|
||||
- Update `viz_io.py` functions to handle the new 6-element schema.
|
||||
- Keep only the last 1000 rows.
|
||||
- Upsert intra-window CVD values periodically (throttled, matching OHLC's approach) and always write on window close.
|
||||
4. Visualization
|
||||
- Read extended `metrics_data.json` in the Dash app with the same tolerant JSON reading/caching approach.
|
||||
- Extend the main figure to a fourth row for CVD line chart beneath OBI, sharing the x-axis.
|
||||
- Style CVD as a line chart with appropriate color (distinct from OHLC/Volume/OBI) and add a zero baseline.
|
||||
5. Performance & Correctness
|
||||
- CVD compute happens on every trade; I/O is throttled to maintain UI responsiveness.
|
||||
- Use existing logging and error handling patterns; must not crash if metrics JSON is temporarily unreadable.
|
||||
- Handle backward compatibility: if existing `metrics_data.json` has 5-element rows, treat missing CVD as 0.
|
||||
6. Testing
|
||||
- Unit tests for volume delta calculation with "buy", "sell", and invalid side values.
|
||||
- Unit tests for CVD accumulation across multiple trades and window boundaries.
|
||||
- Integration test: fixture trades produce correct CVD progression in `metrics_data.json`.
|
||||
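
A minimal sketch of the per-trade classification and end-of-window capture described in requirements 1–2 above; the class and the `_flush_window` helper are illustrative names, only `cvd_cumulative` and `window_seconds` come from this PRD:

```python
import logging


class CVDAccumulator:
    """Running CVD with end-of-window snapshots (sketch, not the final implementation)."""

    def __init__(self, window_seconds: int):
        self.window_seconds = window_seconds
        self.cvd_cumulative = 0.0
        self._current_window: int | None = None

    def on_trade(self, side: str, size: float, timestamp_ms: int) -> None:
        # Classify the trade: buys add volume, sells subtract, anything else is ignored.
        if side == "buy":
            volume_delta = size
        elif side == "sell":
            volume_delta = -size
        else:
            logging.warning("Unknown trade side %r, volume_delta=0", side)
            volume_delta = 0.0

        window = (timestamp_ms // 1000 // self.window_seconds) * self.window_seconds
        if self._current_window is not None and window != self._current_window:
            # Window rollover: capture the CVD of the window that just closed,
            # before the new trade's delta is applied.
            self._flush_window(self._current_window, self.cvd_cumulative)
        self._current_window = window

        self.cvd_cumulative += volume_delta

    def _flush_window(self, window_start: int, cvd_value: float) -> None:
        # Placeholder: the real code would upsert into metrics_data.json.
        logging.info("window %s closed with CVD %.1f", window_start, cvd_value)
```

Flushing before applying the new delta keeps the stored value an end-of-window snapshot, as required in requirement 2.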
|
||||
### 5) Non-Goals
|
||||
- No CVD reset functionality (will be implemented later).
|
||||
- No additional derived CVD metrics (e.g., CVD rate of change, normalized CVD).
|
||||
- No database persistence for CVD; JSON IPC only.
|
||||
- No strategy/signal changes based on CVD.
|
||||
|
||||
### 6) Design Considerations
|
||||
- Implement CVD calculation in `OHLCProcessor.process_trades` alongside existing OHLC aggregation.
|
||||
- Extend `viz_io.py` metrics functions to support 6-element schema while maintaining backward compatibility.
|
||||
- Add CVD state tracking: `self.cvd_cumulative`, `self.cvd_window_value` per window.
|
||||
- Follow the same throttling pattern as OBI metrics for consistency.
|
||||
|
||||
### 7) Technical Considerations
|
||||
- Add CVD computation in the trade processing loop within `OHLCProcessor.process_trades`.
|
||||
- Extend `upsert_metric_bar` and `add_metric_bar` functions to accept optional `cvd_value` parameter.
|
||||
- Handle schema migration gracefully: read existing 5-element rows, append 0.0 for missing CVD.
|
||||
- Use the same window alignment as trades (based on trade timestamp, not orderbook timestamp).
|
||||
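
A small sketch of the backward-compatible row handling mentioned above; `normalize_metric_row` is a hypothetical helper, not an existing `viz_io.py` function:

```python
from typing import List


def normalize_metric_row(row: List[float]) -> List[float]:
    """Return a 6-element [ts, obi_open, obi_high, obi_low, obi_close, cvd] row.

    Legacy 5-element rows (written before CVD existed) get cvd=0.0 appended.
    """
    if len(row) == 5:
        return row + [0.0]
    return row
```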
|
||||
### 8) Success Metrics
|
||||
- `metrics_data.json` present with valid 6-element rows during processing.
|
||||
- CVD subplot updates smoothly and aligns with OHLC window timestamps.
|
||||
- CVD increases during buy-heavy periods, decreases during sell-heavy periods.
|
||||
- No noticeable performance regression in trade processing or UI responsiveness.
|
||||
|
||||
### 9) Open Questions
|
||||
- None; the CVD computation approach (using the `trade.side` field) and the `metrics_data.json` schema extension are both confirmed.
|
||||
@ -1,208 +0,0 @@
|
||||
# PRD: Interactive Visualizer with Plotly + Dash
|
||||
|
||||
## Introduction/Overview
|
||||
|
||||
The current orderflow backtest system uses a static matplotlib-based visualizer that displays OHLC candlesticks, volume bars, Order Book Imbalance (OBI), and Cumulative Volume Delta (CVD) charts. This PRD outlines the development of a new interactive visualization system using Plotly + Dash that will provide real-time interactivity, detailed data inspection, and enhanced user experience for cryptocurrency trading analysis.
|
||||
|
||||
The goal is to replace the static visualization with a professional, web-based interactive dashboard that allows traders to explore orderbook metrics with precision and flexibility.
|
||||
|
||||
## Goals
|
||||
|
||||
1. **Replace Static Visualization**: Create a new `InteractiveVisualizer` class using Plotly + Dash
|
||||
2. **Enable Cross-Chart Interactivity**: Implement synchronized zooming, panning, and time range selection across all charts
|
||||
3. **Provide Precision Navigation**: Add crosshair cursor with vertical line indicator across all charts
|
||||
4. **Display Contextual Information**: Show detailed metrics in a side panel when hovering over data points
|
||||
5. **Support Multiple Time Granularities**: Allow users to adjust time resolution dynamically
|
||||
6. **Maintain Performance**: Handle large datasets (months of data) with smooth interactions
|
||||
7. **Preserve Integration**: Seamlessly integrate with existing metrics storage and data processing pipeline
|
||||
|
||||
## User Stories
|
||||
|
||||
### Primary Use Cases
|
||||
- **US-1**: As a trader, I want to zoom into specific time periods across all charts simultaneously so that I can analyze market behavior during critical moments
|
||||
- **US-2**: As a trader, I want to see a vertical crosshair line that spans all charts so that I can precisely align data points across OHLC, volume, OBI, and CVD metrics
|
||||
- **US-3**: As a trader, I want to hover over any data point and see detailed information in a side panel so that I can inspect exact values without cluttering the charts
|
||||
- **US-4**: As a trader, I want to pan through historical data smoothly so that I can explore different time periods efficiently
|
||||
- **US-5**: As a trader, I want to reset CVD calculations from a selected point in time so that I can analyze cumulative volume delta from specific market events
|
||||
|
||||
### Secondary Use Cases
|
||||
- **US-6**: As a trader, I want to adjust time granularity (1min, 5min, 1hour) so that I can view data at different resolutions
|
||||
- **US-7**: As a trader, I want navigation controls (reset zoom, home button) so that I can quickly return to full data view
|
||||
- **US-8**: As a trader, I want to select custom time ranges so that I can focus analysis on specific market sessions
|
||||
|
||||
## Functional Requirements
|
||||
|
||||
### Core Interactive Features
|
||||
1. **F1**: The system must provide synchronized zooming across all four charts (OHLC, Volume, OBI, CVD)
|
||||
2. **F2**: The system must provide synchronized panning across all four charts with shared X-axis
|
||||
3. **F3**: The system must display a vertical crosshair line that spans all charts and follows mouse cursor
|
||||
4. **F4**: The system must show detailed hover information for each chart type:
|
||||
- OHLC: timestamp, open, high, low, close, spread
|
||||
- Volume: timestamp, total volume, buy/sell breakdown if available
|
||||
- OBI: timestamp, OBI value, bid volume, ask volume, imbalance percentage
|
||||
- CVD: timestamp, CVD value, volume delta, cumulative change
|
||||
|
||||
### User Interface Requirements
|
||||
5. **F5**: The system must display charts in a 4-row layout with shared X-axis (OHLC on top, Volume, OBI, CVD at bottom)
|
||||
6. **F6**: The system must provide a side panel on the right displaying detailed information for the current cursor position
|
||||
7. **F7**: The system must include navigation controls:
|
||||
- Zoom in/out buttons
|
||||
- Reset zoom button
|
||||
- Home view button
|
||||
- Time range selector
|
||||
8. **F8**: The system must provide time granularity controls (1min, 5min, 15min, 1hour, 6hour)
|
||||
|
||||
### Data Integration Requirements
|
||||
9. **F9**: The system must integrate with existing `SQLiteOrderflowRepository` for metrics data loading
|
||||
10. **F10**: The system must support loading data from multiple database files seamlessly
|
||||
11. **F11**: The system must maintain the existing `set_db_path()` and `update_from_book()` interface for compatibility
|
||||
12. **F12**: The system must calculate OHLC bars from snapshots with configurable time windows
|
||||
|
||||
### Performance Requirements
|
||||
13. **F13**: The system must render charts with <2 second initial load time for datasets up to 1 million data points
|
||||
14. **F14**: The system must provide smooth zooming and panning interactions with <100ms response time
|
||||
15. **F15**: The system must efficiently update hover information with <50ms latency
|
||||
|
||||
### CVD Reset Functionality
|
||||
16. **F16**: The system must allow users to click on any point in the CVD chart to reset cumulative calculation from that timestamp
|
||||
17. **F17**: The system must visually indicate CVD reset points with markers or annotations
|
||||
18. **F18**: The system must recalculate and redraw CVD values from the reset point forward
|
||||
|
||||
## Non-Goals (Out of Scope)
|
||||
|
||||
1. **Advanced Drawing Tools**: Trend lines, Fibonacci retracements, or annotation tools
|
||||
2. **Multiple Instrument Support**: Multi-symbol comparison or overlay charts
|
||||
3. **Real-time Streaming**: Live data updates or WebSocket integration
|
||||
4. **Export Functionality**: Chart export to PNG/PDF or data export to CSV
|
||||
5. **User Authentication**: User accounts, saved layouts, or personalization
|
||||
6. **Mobile Optimization**: Touch interfaces or responsive mobile design
|
||||
7. **Advanced Indicators**: Technical analysis indicators beyond OBI/CVD
|
||||
8. **Alert System**: Price alerts, threshold notifications, or automated signals
|
||||
|
||||
## Design Considerations
|
||||
|
||||
### Chart Layout
|
||||
- **Layout**: 4-row subplot layout with 80% chart area, 20% side panel
|
||||
- **Color Scheme**: Professional dark theme with customizable colors
|
||||
- **Typography**: Clear, readable fonts optimized for financial data
|
||||
- **Responsive Design**: Adaptable to different screen sizes (desktop focus)
|
||||
|
||||
### Side Panel Design
|
||||
```
|
||||
┌─────────────────┐
|
||||
│ Current Data │
|
||||
├─────────────────┤
|
||||
│ Time: 16:30:45 │
|
||||
│ Price: $50,123 │
|
||||
│ Volume: 1,234 │
|
||||
│ OBI: 0.234 │
|
||||
│ CVD: -123.45 │
|
||||
├─────────────────┤
|
||||
│ Controls │
|
||||
│ [Reset CVD] │
|
||||
│ [Zoom Reset] │
|
||||
│ [Time Range ▼] │
|
||||
│ [Granularity ▼] │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
### Navigation Controls
|
||||
- **Zoom**: Mouse wheel, zoom box selection, zoom buttons
|
||||
- **Pan**: Click and drag, arrow keys, scroll bars
|
||||
- **Reset**: Double-click to auto-scale, reset button to full view
|
||||
- **Selection**: Click and drag for time range selection
|
||||
|
||||
## Technical Considerations
|
||||
|
||||
### Architecture Changes
|
||||
- **New Class**: `InteractiveVisualizer` class separate from existing `Visualizer`
|
||||
- **Dependencies**: Add `dash`, `plotly`, `dash-bootstrap-components` to requirements
|
||||
- **Web Server**: Dash development server for local deployment
|
||||
- **Data Flow**: Maintain existing metrics loading pipeline, adapt to Plotly data structures
|
||||
|
||||
### Integration Points
|
||||
```python
|
||||
# Maintain existing interface for compatibility
|
||||
class InteractiveVisualizer:
|
||||
def set_db_path(self, db_path: Path) -> None
|
||||
def update_from_book(self, book: Book) -> None
|
||||
def show(self) -> None # Launch Dash server instead of plt.show()
|
||||
```
|
||||
|
||||
### Data Structure Adaptation
|
||||
- **OHLC Data**: Convert bars to Plotly candlestick format
|
||||
- **Metrics Data**: Transform to Plotly time series format
|
||||
- **Memory Management**: Implement data decimation for large datasets
|
||||
- **Caching**: Cache processed data to improve interaction performance
|
||||
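
As an illustration of the conversion above, a minimal sketch using standard Plotly calls (`make_subplots` with `shared_xaxes`, `go.Candlestick`); the `bars` argument is a hypothetical list of `(timestamp, open, high, low, close)` tuples, not the project's bar type:

```python
from plotly.subplots import make_subplots
import plotly.graph_objects as go


def bars_to_figure(bars):
    """Convert OHLC bars into a 4-row figure sharing one time axis."""
    ts = [b[0] for b in bars]
    fig = make_subplots(rows=4, cols=1, shared_xaxes=True)
    fig.add_trace(
        go.Candlestick(
            x=ts,
            open=[b[1] for b in bars],
            high=[b[2] for b in bars],
            low=[b[3] for b in bars],
            close=[b[4] for b in bars],
        ),
        row=1,
        col=1,
    )
    # Volume, OBI and CVD traces would be added to rows 2-4 the same way.
    return fig
```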
|
||||
### Technology Stack
|
||||
- **Frontend**: Dash + Plotly.js for charts
|
||||
- **Backend**: Python Dash server with existing data pipeline
|
||||
- **Styling**: Dash Bootstrap Components for professional UI
|
||||
- **Data Processing**: Pandas for efficient data manipulation
|
||||
|
||||
## Success Metrics
|
||||
|
||||
### User Experience Metrics
|
||||
1. **Interaction Responsiveness**: 95% of zoom/pan operations complete within 100ms
|
||||
2. **Data Precision**: 100% accuracy in crosshair positioning and hover data display
|
||||
3. **Navigation Efficiency**: Users can navigate to specific time periods 3x faster than static charts
|
||||
|
||||
### Technical Performance Metrics
|
||||
4. **Load Time**: Initial chart rendering completes within 2 seconds for 500k data points
|
||||
5. **Memory Usage**: Interactive visualizer uses less than 150% of the memory of the static version
|
||||
6. **Error Rate**: <1% interaction failures or display errors during normal usage
|
||||
|
||||
### Feature Adoption Metrics
|
||||
7. **Feature Usage**: CVD reset functionality used in >30% of analysis sessions
|
||||
8. **Time Range Analysis**: Custom time range selection used in >50% of sessions
|
||||
9. **Granularity Changes**: Time resolution adjustment used in >40% of sessions
|
||||
|
||||
## Implementation Priority
|
||||
|
||||
### Phase 1: Core Interactive Charts (High Priority)
|
||||
- Basic Plotly + Dash setup
|
||||
- 4-chart layout with synchronized axes
|
||||
- Basic zoom, pan, and crosshair functionality
|
||||
- Integration with existing data pipeline
|
||||
|
||||
### Phase 2: Enhanced Interactivity (High Priority)
|
||||
- Side panel with hover information
|
||||
- Navigation controls and buttons
|
||||
- Time granularity selection
|
||||
- CVD reset functionality
|
||||
|
||||
### Phase 3: Performance Optimization (Medium Priority)
|
||||
- Large dataset handling
|
||||
- Interaction performance tuning
|
||||
- Memory usage optimization
|
||||
- Error handling and edge cases
|
||||
|
||||
### Phase 4: Polish and UX (Medium Priority)
|
||||
- Professional styling and themes
|
||||
- Enhanced navigation controls
|
||||
- Time range selection tools
|
||||
- User experience refinements
|
||||
|
||||
## Open Questions
|
||||
|
||||
1. **Deployment Method**: Should the interactive visualizer run as a local Dash server or be deployable as a standalone web application?
|
||||
|
||||
2. **Data Decimation Strategy**: How should the system handle datasets with millions of points while maintaining interactivity? Should it implement automatic decimation based on zoom level?
|
||||
|
||||
3. **CVD Reset Persistence**: Should CVD reset points be saved to the database or only exist in the current session?
|
||||
|
||||
4. **Multiple Database Sessions**: How should the interactive visualizer handle switching between different database files during the same session?
|
||||
|
||||
5. **Backward Compatibility**: Should the system maintain both static and interactive visualizers, or completely replace the matplotlib implementation?
|
||||
|
||||
6. **Configuration Management**: How should users configure default time granularities, color schemes, and layout preferences?
|
||||
|
||||
7. **Performance Baselines**: What are the acceptable performance thresholds for different dataset sizes and interaction types?
|
||||
|
||||
---
|
||||
|
||||
**Document Version**: 1.0
|
||||
**Created**: Current Date
|
||||
**Target Audience**: Junior Developer
|
||||
**Estimated Implementation**: 3-4 weeks for complete feature set
|
||||
71
tasks/prd-order-book-imbalance.md
Normal file
71
tasks/prd-order-book-imbalance.md
Normal file
@ -0,0 +1,71 @@
|
||||
## Order Book Imbalance (OBI) – Product Requirements Document
|
||||
|
||||
### 1) Introduction / Overview
|
||||
- Compute and visualize Order Book Imbalance (OBI) from the in-memory order book maintained by `OHLCProcessor`, aligned to the existing OHLC bar cadence.
|
||||
- OBI is defined as raw `B - A`, where `B` is total bid size and `A` is total ask size.
|
||||
- Persist an OBI time series as OHLC-style bars to `metrics_data.json` and render an OBI candlestick chart beneath the current Volume subplot in the Dash UI.
|
||||
|
||||
### 2) Goals
|
||||
- Compute OBI from the full in-memory aggregated book (all bid/ask levels) on every order book update.
|
||||
- Aggregate OBI into OHLC-style bars per `window_seconds`.
|
||||
- Persist OBI bars to `metrics_data.json` with atomic writes and a rolling retention of 1000 rows.
|
||||
- Add an OBI candlestick subplot (blue-toned) beneath Volume in the main chart, sharing the time axis.
|
||||
- Throttle intra-window upserts of OBI bars using the same approach/frequency as current OHLC throttling; always write on window close.
|
||||
|
||||
### 3) User Stories
|
||||
- As a researcher, I want OBI computed from the entire book so I can assess true depth imbalance.
|
||||
- As an analyst, I want OBI stored per time window as candlesticks so I can compare it with price/volume behavior.
|
||||
- As a developer, I want raw OBI values so I can analyze absolute imbalance patterns.
|
||||
|
||||
### 4) Functional Requirements
|
||||
1. Inputs and Definitions
|
||||
- Compute on every order book update using the complete in-memory book:
|
||||
- `B = sum(self._book_bids.values())`
|
||||
- `A = sum(self._book_asks.values())`
|
||||
- `OBI = B - A`
|
||||
- Edge case: if both sides are empty → `OBI = 0`.
|
||||
2. Windowing & Aggregation
|
||||
- Use the same `window_seconds` boundary as OHLC bars; window anchor is derived from the order book update timestamp.
|
||||
- Maintain OBI OHLC per window: `obi_open`, `obi_high`, `obi_low`, `obi_close`.
|
||||
- On window rollover, finalize and persist the bar.
|
||||
3. Persistence
|
||||
- Introduce `metrics_data.json` (co-located with other IPC files) with atomic writes.
|
||||
- Schema: list of fixed-length rows
|
||||
- `[timestamp_ms, obi_open, obi_high, obi_low, obi_close]`
|
||||
- Keep only the last 1000 rows.
|
||||
- Upsert intra-window bars periodically (throttled, matching OHLC’s approach) and always write on window close.
|
||||
4. Visualization
|
||||
- Read `metrics_data.json` in the Dash app with the same tolerant JSON reading/caching approach as other IPC files.
|
||||
- Extend the main figure to a third row for OBI candlesticks beneath Volume, sharing the x-axis.
|
||||
- Style OBI candlesticks in blue tones (distinct increasing/decreasing shades) and add a zero baseline.
|
||||
5. Performance & Correctness
|
||||
- OBI compute happens on every order book update; I/O is throttled to maintain UI responsiveness.
|
||||
- Use existing logging and error handling patterns; must not crash if metrics JSON is temporarily unreadable.
|
||||
6. Testing
|
||||
- Unit tests for OBI on symmetric, empty, and imbalanced books; intra-window aggregation; window rollover.
|
||||
- Integration test: fixture DB produces `metrics_data.json` aligned with OHLC bars, valid schema/lengths.
|
||||
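
A minimal sketch of the raw OBI computation and per-window OHLC-style aggregation described above, assuming price→size dictionaries like the `_book_bids`/`_book_asks` referenced later in this PRD; the function names are illustrative:

```python
from typing import Dict, Optional


def compute_obi(book_bids: Dict[float, float], book_asks: Dict[float, float]) -> float:
    """Raw OBI = total bid size - total ask size; 0 when both sides are empty."""
    if not book_bids and not book_asks:
        return 0.0
    return sum(book_bids.values()) - sum(book_asks.values())


def update_obi_bar(bar: Optional[dict], obi: float) -> dict:
    """Fold a new OBI observation into the current window's OHLC-style bar."""
    if bar is None:
        return {"open": obi, "high": obi, "low": obi, "close": obi}
    bar["high"] = max(bar["high"], obi)
    bar["low"] = min(bar["low"], obi)
    bar["close"] = obi
    return bar
```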
|
||||
### 5) Non-Goals
|
||||
- No additional derived metrics; keep only raw OBI values for maximum flexibility.
|
||||
- No database persistence for metrics; JSON IPC only.
|
||||
- No strategy/signal changes.
|
||||
|
||||
### 6) Design Considerations
|
||||
- Reuse `OHLCProcessor` in-memory book (`_book_bids`, `_book_asks`).
|
||||
- Introduce new metrics IO helpers in `viz_io.py` mirroring existing OHLC IO (atomic write, rolling trim, upsert).
|
||||
- Keep `metrics_data.json` separate from `ohlc_data.json` to avoid schema churn.
|
||||
|
||||
### 7) Technical Considerations
|
||||
- Implement OBI compute and aggregation inside `OHLCProcessor.update_orderbook` after applying partial updates.
|
||||
- Throttle intra-window upserts with the same cadence concept as OHLC; on window close always persist.
|
||||
- Add a finalize path to persist the last OBI bar.
|
||||
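
A small sketch of the throttling idea (persist intra-window at most every `min_interval` seconds, always on window close); the `write` callable stands in for a `viz_io.py`-style upsert and is not an existing API:

```python
import time
from typing import Callable


class ThrottledWriter:
    """Throttle intra-window persistence; always write when the window closes."""

    def __init__(self, write: Callable[[dict], None], min_interval: float = 1.0):
        self._write = write          # e.g. a viz_io-style upsert (hypothetical)
        self._min_interval = min_interval
        self._last_write = 0.0

    def maybe_write(self, bar: dict, window_closed: bool) -> None:
        now = time.time()
        if window_closed or now - self._last_write >= self._min_interval:
            self._write(bar)
            self._last_write = now
```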
|
||||
### 8) Success Metrics
|
||||
- `metrics_data.json` present with valid rows during processing.
|
||||
- OBI subplot updates smoothly and aligns with OHLC window timestamps.
|
||||
- OBI ≈ 0 for symmetric books; correct sign for imbalanced cases; no noticeable performance regression.
|
||||
|
||||
### 9) Open Questions
|
||||
- None; the upsert cadence is confirmed to match OHLC throttling, and the OBI candlesticks will use blue tones.
|
||||
|
||||
|
||||
190
tasks/prd-pyside6-pyqtgraph-migration.md
Normal file
190
tasks/prd-pyside6-pyqtgraph-migration.md
Normal file
@ -0,0 +1,190 @@
|
||||
# Product Requirements Document: Migration from Dash/Plotly to PySide6/PyQtGraph
|
||||
|
||||
## Introduction/Overview
|
||||
|
||||
This PRD outlines the complete migration of the orderflow backtest visualization system from the current Dash/Plotly web-based implementation to a native desktop application using PySide6 and PyQtGraph. The migration addresses critical issues with the current implementation including async problems, debugging difficulties, performance bottlenecks, and data handling inefficiencies.
|
||||
|
||||
The goal is to create a robust, high-performance desktop application that provides better control over the codebase, eliminates current visualization bugs (particularly the CVD graph display issue), and enables future real-time trading strategy monitoring capabilities.
|
||||
|
||||
## Goals
|
||||
|
||||
1. **Eliminate Current Technical Issues**
|
||||
- Resolve async-related problems causing visualization failures
|
||||
- Fix CVD graph display issues that persist despite correct-looking code
|
||||
- Enable proper debugging capabilities with breakpoint support
|
||||
- Improve overall application performance and responsiveness
|
||||
|
||||
2. **Improve Development Experience**
|
||||
- Gain better control over the codebase through native Python implementation
|
||||
- Reduce dependency on intermediate file-based data exchange
|
||||
- Simplify the development and debugging workflow
|
||||
- Establish a foundation for future real-time capabilities
|
||||
|
||||
3. **Maintain and Enhance Visualization Capabilities**
|
||||
- Preserve all existing chart types and interactions
|
||||
- Improve performance for granular dataset handling
|
||||
- Prepare infrastructure for real-time data streaming
|
||||
- Enhance user experience through native desktop interface
|
||||
|
||||
## User Stories
|
||||
|
||||
1. **As a trading strategy developer**, I want to visualize OHLC data with volume, OBI, and CVD indicators in a single, synchronized view so that I can analyze market behavior patterns effectively.
|
||||
|
||||
2. **As a data analyst**, I want to zoom, pan, and select specific time ranges on charts so that I can focus on relevant market periods for detailed analysis.
|
||||
|
||||
3. **As a system developer**, I want to debug visualization issues with breakpoints and proper debugging tools so that I can identify and fix problems efficiently.
|
||||
|
||||
4. **As a performance-conscious user**, I want smooth chart rendering and interactions even with large, granular datasets so that my analysis workflow is not interrupted by lag or freezing.
|
||||
|
||||
5. **As a future trading system operator**, I want a foundation that can handle real-time data updates so that I can monitor live trading strategies effectively.
|
||||
|
||||
## Functional Requirements
|
||||
|
||||
### Core Visualization Components
|
||||
|
||||
1. **Main Chart Window**
|
||||
- The system must display OHLC candlestick charts in a primary plot area
|
||||
- The system must allow customizable time window selection for OHLC display
|
||||
- The system must synchronize all chart components to the same time axis
|
||||
|
||||
2. **Integrated Indicator Charts**
|
||||
- The system must display Volume bars below the OHLC chart
|
||||
- The system must display Order Book Imbalance (OBI) indicator
|
||||
- The system must display Cumulative Volume Delta (CVD) indicator
|
||||
- All indicators must share the same X-axis as the OHLC chart
|
||||
|
||||
3. **Depth Chart Visualization**
|
||||
- The system must display order book depth at selected time snapshots
|
||||
- The system must update depth visualization based on time selection
|
||||
- The system must provide clear bid/ask visualization
|
||||
|
||||
### User Interaction Features
|
||||
|
||||
4. **Chart Navigation**
|
||||
- The system must support zoom in/out functionality across all charts
|
||||
- The system must allow panning across time ranges
|
||||
- The system must provide time range selection capabilities
|
||||
- The system must support rectangle selection for detailed analysis
|
||||
|
||||
5. **Data Inspection**
|
||||
- The system must display mouseover information for all chart elements
|
||||
- The system must show precise values for OHLC, volume, OBI, and CVD data points
|
||||
- The system must provide crosshair functionality for precise data reading
|
||||
|
||||
### Technical Architecture
|
||||
|
||||
6. **Application Framework**
|
||||
- The system must be built using PySide6 for the GUI framework
|
||||
- The system must use PyQtGraph for all chart rendering and interactions
|
||||
- The system must implement a native desktop application architecture
|
||||
|
||||
7. **Data Integration**
|
||||
- The system must integrate with existing data processing modules (metrics_calculator, ohlc_processor, orderbook_manager)
|
||||
- The system must eliminate dependency on intermediate JSON files for data display
|
||||
- The system must support direct in-memory data transfer between processing and visualization
|
||||
|
||||
8. **Performance Requirements**
|
||||
- The system must handle granular datasets efficiently without UI blocking
|
||||
- The system must provide smooth chart interactions (zoom, pan, selection)
|
||||
- The system must render updates in less than 100ms for typical dataset sizes
|
||||
|
||||
### Development and Debugging
|
||||
|
||||
9. **Code Quality**
|
||||
- The system must be fully debuggable with standard Python debugging tools
|
||||
- The system must follow the existing project architecture patterns
|
||||
- The system must maintain clean separation between data processing and visualization
|
||||
|
||||
## Non-Goals (Out of Scope)
|
||||
|
||||
1. **Web Interface Maintenance** - The existing Dash/Plotly implementation will be completely replaced, not maintained in parallel
|
||||
|
||||
2. **Backward Compatibility** - No requirement to maintain compatibility with existing Dash/Plotly components or web-based deployment
|
||||
|
||||
3. **Multi-Platform Distribution** - Initial focus on development environment only, not packaging for distribution
|
||||
|
||||
4. **Real-Time Implementation** - While the architecture should support future real-time capabilities, the initial migration will focus on historical data visualization
|
||||
|
||||
5. **Advanced Chart Types** - Only migrate existing chart types; new visualization features are out of scope for this migration
|
||||
|
||||
## Design Considerations
|
||||
|
||||
### User Interface Layout
|
||||
- **Main Window Structure**: Primary chart area with integrated indicators below
|
||||
- **Control Panel**: Side panel or toolbar for time range selection and chart configuration
|
||||
- **Status Bar**: Display current data range, loading status, and performance metrics
|
||||
- **Menu System**: File operations, view options, and application settings
|
||||
|
||||
### PyQtGraph Integration
|
||||
- **Plot Organization**: Use PyQtGraph's PlotWidget for main charts with linked axes
|
||||
- **Custom Plot Items**: Implement custom plot items for OHLC candlesticks and depth visualization
|
||||
- **Performance Optimization**: Utilize PyQtGraph's fast plotting capabilities for large datasets
|
||||
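
A minimal sketch of the linked-axis layout described above, using standard PyQtGraph calls (`GraphicsLayoutWidget`, `addPlot`, `setXLink`) with PySide6 as the Qt binding; the plotted values are placeholders, not the final OHLC/indicator items:

```python
import pyqtgraph as pg

app = pg.mkQApp("Orderflow Backtest")

win = pg.GraphicsLayoutWidget(title="OHLC / Volume / OBI / CVD")
price_plot = win.addPlot(row=0, col=0)
volume_plot = win.addPlot(row=1, col=0)
obi_plot = win.addPlot(row=2, col=0)
cvd_plot = win.addPlot(row=3, col=0)

# Share the time axis: panning/zooming the price plot moves the others too.
for plot in (volume_plot, obi_plot, cvd_plot):
    plot.setXLink(price_plot)

price_plot.plot([0, 1, 2], [100.0, 101.5, 100.8])  # placeholder data
win.show()
app.exec()
```

Linking views through `setXLink` keeps all rows synchronized without any callback plumbing, which is the main reason to build the four charts inside one `GraphicsLayoutWidget`.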
|
||||
### Data Flow Architecture
|
||||
- **Direct Memory Access**: Replace JSON file intermediates with direct Python object passing
|
||||
- **Lazy Loading**: Implement efficient data loading strategies for large time ranges
|
||||
- **Caching Strategy**: Cache processed data to improve navigation performance
|
||||
|
||||
## Technical Considerations
|
||||
|
||||
### Dependencies and Integration
|
||||
- **PySide6**: Main GUI framework, provides native desktop capabilities
|
||||
- **PyQtGraph**: High-performance plotting library, optimized for real-time data
|
||||
- **Existing Modules**: Maintain integration with metrics_calculator.py, ohlc_processor.py, orderbook_manager.py
|
||||
- **Database Integration**: Continue using existing SQLite database through db_interpreter.py
|
||||
|
||||
### Migration Strategy (Iterative Implementation)
|
||||
- **Phase 1**: Basic PySide6 window with single PyQtGraph plot
|
||||
- **Phase 2**: OHLC candlestick chart implementation
|
||||
- **Phase 3**: Volume, OBI, and CVD indicator integration
|
||||
- **Phase 4**: Depth chart implementation
|
||||
- **Phase 5**: User interaction features (zoom, pan, selection)
|
||||
- **Phase 6**: Data integration and performance optimization
|
||||
|
||||
### Performance Considerations
|
||||
- **Memory Management**: Efficient data structure handling for large datasets
|
||||
- **Rendering Optimization**: Use PyQtGraph's ViewBox and plotting optimizations
|
||||
- **Thread Safety**: Proper handling of data processing in background threads
|
||||
- **Resource Cleanup**: Proper cleanup of chart objects and data structures
|
||||
|
||||
## Success Metrics
|
||||
|
||||
### Technical Success Criteria
|
||||
1. **Bug Resolution**: CVD graph displays correctly and all existing visualization bugs are resolved
|
||||
2. **Performance Improvement**: Chart interactions respond within 100ms for typical datasets
|
||||
3. **Debugging Capability**: Developers can set breakpoints and debug visualization code effectively
|
||||
4. **Data Handling**: Elimination of intermediate JSON files reduces data transfer overhead by 50%
|
||||
|
||||
### User Experience Success Criteria
|
||||
1. **Feature Parity**: All existing chart types and interactions are preserved and functional
|
||||
2. **Responsiveness**: Application feels more responsive than the current Dash implementation
|
||||
3. **Stability**: No crashes or freezing during normal chart operations
|
||||
4. **Visual Quality**: Charts render clearly with proper scaling and anti-aliasing
|
||||
|
||||
### Development Success Criteria
|
||||
1. **Code Maintainability**: New codebase follows established project patterns and is easier to maintain
|
||||
2. **Development Velocity**: Future visualization features can be implemented more quickly
|
||||
3. **Testing Capability**: Comprehensive testing can be performed with proper debugging tools
|
||||
4. **Architecture Foundation**: System is ready for future real-time data integration
|
||||
|
||||
## Open Questions
|
||||
|
||||
1. **Data Loading Strategy**: Should we implement progressive loading for very large datasets, or rely on existing data chunking mechanisms?
|
||||
|
||||
2. **Configuration Management**: How should chart configuration and user preferences be stored and managed in the desktop application?
|
||||
|
||||
3. **Error Handling**: What specific error handling and user feedback mechanisms should be implemented for data loading and processing failures?
|
||||
|
||||
4. **Performance Monitoring**: Should we include built-in performance monitoring and profiling tools in the application?
|
||||
|
||||
5. **Future Real-Time Integration**: What specific interface patterns should be established now to facilitate future real-time data streaming integration?
|
||||
|
||||
## Implementation Approach
|
||||
|
||||
This migration will follow the iterative development workflow with explicit approval checkpoints between each phase. Each implementation phase will be:
|
||||
- Limited to manageable scope (≤250 lines per module)
|
||||
- Tested immediately after implementation
|
||||
- Integrated with existing data processing modules
|
||||
- Validated for performance and functionality before proceeding to the next phase
|
||||
|
||||
The implementation will begin with basic PySide6 application structure and progressively add PyQtGraph visualization capabilities while maintaining integration with the existing data processing pipeline.
|
||||
@ -1,74 +0,0 @@
|
||||
# Tasks: Interactive Visualizer with Plotly + Dash
|
||||
|
||||
## Relevant Files
|
||||
|
||||
- `interactive_visualizer.py` - Main InteractiveVisualizer class implementing Plotly + Dash interface
|
||||
- `tests/test_interactive_visualizer.py` - Unit tests for InteractiveVisualizer class
|
||||
- `dash_app.py` - Dash application setup and layout configuration
|
||||
- `tests/test_dash_app.py` - Unit tests for Dash application components
|
||||
- `dash_callbacks.py` - Dash callback functions for interactivity and data updates
|
||||
- `tests/test_dash_callbacks.py` - Unit tests for callback functions
|
||||
- `dash_components.py` - Custom Dash components for side panel and controls
|
||||
- `tests/test_dash_components.py` - Unit tests for custom components
|
||||
- `data_adapters.py` - Data transformation utilities for Plotly format conversion
|
||||
- `tests/test_data_adapters.py` - Unit tests for data adapter functions
|
||||
- `pyproject.toml` - Updated dependencies including dash, plotly, dash-bootstrap-components
|
||||
- `main.py` - Updated to support both static and interactive visualizer options
|
||||
|
||||
### Notes
|
||||
|
||||
- Unit tests should be placed in the `tests/` directory following existing project structure
|
||||
- Use `uv run pytest [optional/path/to/test/file]` to run tests following project conventions
|
||||
- Dash server will run locally for development, accessible via browser at http://127.0.0.1:8050
|
||||
- Maintain backward compatibility with existing matplotlib visualizer
|
||||
|
||||
## Tasks
|
||||
|
||||
- [ ] 1.0 Setup Plotly + Dash Infrastructure and Dependencies
|
||||
- [ ] 1.1 Add dash, plotly, and dash-bootstrap-components to pyproject.toml dependencies
|
||||
- [ ] 1.2 Install and verify new dependencies with uv sync
|
||||
- [ ] 1.3 Create basic dash_app.py with minimal Dash application setup
|
||||
- [ ] 1.4 Verify Dash server can start and serve a basic "Hello World" page
|
||||
- [ ] 1.5 Create project structure for interactive visualizer modules
|
||||
|
||||
- [ ] 2.0 Create Core Interactive Chart Layout with Synchronized Axes
|
||||
- [ ] 2.1 Design 4-subplot layout using plotly.subplots.make_subplots with shared X-axis
|
||||
- [ ] 2.2 Implement OHLC candlestick chart using plotly.graph_objects.Candlestick
|
||||
- [ ] 2.3 Implement Volume bar chart using plotly.graph_objects.Bar
|
||||
- [ ] 2.4 Implement OBI line chart using plotly.graph_objects.Scatter
|
||||
- [ ] 2.5 Implement CVD line chart using plotly.graph_objects.Scatter
|
||||
- [ ] 2.6 Configure synchronized zooming and panning across all subplots
|
||||
- [ ] 2.7 Add vertical crosshair functionality spanning all charts
|
||||
- [ ] 2.8 Apply professional dark theme and styling to charts
|
||||
|
||||
- [ ] 3.0 Implement Data Integration and Processing Pipeline
|
||||
- [ ] 3.1 Create InteractiveVisualizer class maintaining set_db_path() and update_from_book() interface
|
||||
- [ ] 3.2 Implement data_adapters.py for converting Book/Metric data to Plotly format
|
||||
- [ ] 3.3 Create OHLC data transformation from existing bar calculation logic
|
||||
- [ ] 3.4 Create metrics data transformation for OBI and CVD time series
|
||||
- [ ] 3.5 Implement volume data aggregation and formatting
|
||||
- [ ] 3.6 Add data caching mechanism for improved performance
|
||||
- [ ] 3.7 Integrate with existing SQLiteOrderflowRepository for metrics loading
|
||||
- [ ] 3.8 Handle multiple database file support seamlessly
|
||||
|
||||
- [ ] 4.0 Build Interactive Features and Navigation Controls
|
||||
- [ ] 4.1 Implement zoom in/out functionality with mouse wheel and buttons
|
||||
- [ ] 4.2 Implement pan functionality with click and drag
|
||||
- [ ] 4.3 Add reset zoom and home view buttons
|
||||
- [ ] 4.4 Create time range selector component for custom period selection
|
||||
- [ ] 4.5 Implement time granularity controls (1min, 5min, 15min, 1hour, 6hour)
|
||||
- [ ] 4.6 Add keyboard shortcuts for common navigation actions
|
||||
- [ ] 4.7 Implement smooth interaction performance optimizations (<100ms response)
|
||||
- [ ] 4.8 Add error handling for interaction edge cases
|
||||
|
||||
- [ ] 5.0 Develop Side Panel with Hover Information and CVD Reset Functionality
|
||||
- [ ] 5.1 Create side panel layout using dash-bootstrap-components
|
||||
- [ ] 5.2 Implement hover information display for OHLC data (timestamp, OHLC values, spread)
|
||||
- [ ] 5.3 Implement hover information display for Volume data (timestamp, volume, buy/sell breakdown)
|
||||
- [ ] 5.4 Implement hover information display for OBI data (timestamp, OBI value, bid/ask volumes)
|
||||
- [ ] 5.5 Implement hover information display for CVD data (timestamp, CVD value, volume delta)
|
||||
- [ ] 5.6 Add CVD reset functionality with click-to-reset on CVD chart
|
||||
- [ ] 5.7 Implement visual markers for CVD reset points
|
||||
- [ ] 5.8 Add CVD recalculation logic from reset point forward
|
||||
- [ ] 5.9 Create control buttons in side panel (Reset CVD, Zoom Reset, etc.)
|
||||
- [ ] 5.10 Optimize hover information update performance (<50ms latency)
|
||||
@ -1,66 +0,0 @@
|
||||
# Tasks: OBI and CVD Metrics Integration
|
||||
|
||||
Based on the PRD for integrating Order Book Imbalance (OBI) and Cumulative Volume Delta (CVD) calculations into the orderflow backtest system.
|
||||
|
||||
## Relevant Files
|
||||
|
||||
- `repositories/sqlite_repository.py` - Extend to support metrics table operations and batch insertions
|
||||
- `repositories/test_metrics_repository.py` - Unit tests for metrics repository functionality
|
||||
- `models.py` - Add new data models for metrics and update Book class
|
||||
- `tests/test_models_metrics.py` - Unit tests for new metric models
|
||||
- `storage.py` - Modify to integrate metric calculations during snapshot processing
|
||||
- `tests/test_storage_metrics.py` - Unit tests for storage metric integration
|
||||
- `strategies.py` - Enhance DefaultStrategy to calculate OBI and CVD metrics
|
||||
- `tests/test_strategies_metrics.py` - Unit tests for strategy metric calculations
|
||||
- `visualizer.py` - Extend to plot OBI and CVD curves beneath volume graphs
|
||||
- `tests/test_visualizer_metrics.py` - Unit tests for metric visualization
|
||||
- `parsers/metric_calculator.py` - New utility class for OBI and CVD calculations
|
||||
- `tests/test_metric_calculator.py` - Unit tests for metric calculation logic
|
||||
|
||||
### Notes
|
||||
|
||||
- Unit tests should be placed alongside the code files they are testing
|
||||
- Use `uv run pytest [optional/path/to/test/file]` to run tests following project standards
|
||||
- Database schema changes require migration considerations for existing databases
|
||||
|
||||
## Tasks
|
||||
|
||||
- [ ] 1.0 Database Schema and Repository Updates
|
||||
- [ ] 1.1 Create metrics table schema with proper indexes and foreign key constraints
|
||||
- [ ] 1.2 Add metrics table creation method to SQLiteOrderflowRepository
|
||||
- [ ] 1.3 Implement metrics insertion methods with batch support for performance
|
||||
- [ ] 1.4 Add metrics querying methods (by timestamp range, snapshot_id)
|
||||
- [ ] 1.5 Create database migration utility to add metrics table to existing databases
|
||||
- [ ] 1.6 Add proper error handling and transaction management for metrics operations
|
||||
|
||||
- [ ] 2.0 Metric Calculation Engine
|
||||
- [ ] 2.1 Create MetricCalculator class with OBI calculation method
|
||||
- [ ] 2.2 Implement CVD calculation with incremental support and reset functionality
|
||||
- [ ] 2.3 Add volume delta calculation for individual timestamps
|
||||
- [ ] 2.4 Implement best bid/ask extraction from orderbook snapshots
|
||||
- [ ] 2.5 Add edge case handling (empty orderbook, no trades, zero volume)
|
||||
- [ ] 2.6 Create validation methods to ensure OBI values are within [-1, 1] range
|
||||
|
||||
- [ ] 3.0 Storage System Integration
|
||||
- [ ] 3.1 Modify Storage.build_booktick_from_db to integrate metric calculations
|
||||
- [ ] 3.2 Update _create_snapshots_from_rows to calculate and store metrics per snapshot
|
||||
- [ ] 3.3 Implement memory optimization by removing full snapshot retention
|
||||
- [ ] 3.4 Add metric persistence during snapshot processing
|
||||
- [ ] 3.5 Update Book model to store only essential data (metrics + best bid/ask)
|
||||
- [ ] 3.6 Add progress reporting for metric calculation during processing
|
||||
|
||||
- [ ] 4.0 Strategy Enhancement
|
||||
- [ ] 4.1 Update DefaultStrategy to use MetricCalculator for OBI and CVD
|
||||
- [ ] 4.2 Modify compute_OBI method to work with new metric calculation system
|
||||
- [ ] 4.3 Add CVD computation method to DefaultStrategy
|
||||
- [ ] 4.4 Return time-series data structures compatible with visualizer
|
||||
- [ ] 4.5 Integrate metric calculation into on_booktick workflow
|
||||
- [ ] 4.6 Add configuration options for CVD reset points and calculation parameters
|
||||
|
||||
- [ ] 5.0 Visualization Implementation
|
||||
- [ ] 5.1 Extend Visualizer to load metrics data from database
|
||||
- [ ] 5.2 Add OBI and CVD plotting methods beneath volume graphs
|
||||
- [ ] 5.3 Implement shared X-axis time alignment across all charts (OHLC, volume, OBI, CVD)
|
||||
- [ ] 5.4 Add 6-hour bar aggregation support for metrics visualization
|
||||
- [ ] 5.5 Implement standard line styling for OBI and CVD curves
|
||||
- [ ] 5.6 Make time resolution configurable for future flexibility
|
||||
@ -1,93 +0,0 @@
|
||||
"""Tests for main.py integration with metrics system."""
|
||||
|
||||
import sys
|
||||
import sqlite3
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
from unittest.mock import patch, MagicMock
|
||||
|
||||
sys.path.append(str(Path(__file__).resolve().parents[1]))
|
||||
|
||||
# Mock typer to avoid import issues in tests
|
||||
sys.modules['typer'] = MagicMock()
|
||||
|
||||
from storage import Storage
|
||||
from strategies import DefaultStrategy
|
||||
|
||||
|
||||
def test_strategy_database_integration():
|
||||
"""Test that strategy gets database path set correctly in main workflow."""
|
||||
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp_file:
|
||||
db_path = Path(tmp_file.name)
|
||||
|
||||
try:
|
||||
# Create minimal test database
|
||||
with sqlite3.connect(str(db_path)) as conn:
|
||||
conn.execute("""
|
||||
CREATE TABLE book (
|
||||
id INTEGER PRIMARY KEY,
|
||||
bids TEXT NOT NULL,
|
||||
asks TEXT NOT NULL,
|
||||
timestamp INTEGER NOT NULL
|
||||
)
|
||||
""")
|
||||
conn.execute("""
|
||||
CREATE TABLE trades (
|
||||
id INTEGER PRIMARY KEY,
|
||||
trade_id REAL NOT NULL,
|
||||
price REAL NOT NULL,
|
||||
size REAL NOT NULL,
|
||||
side TEXT NOT NULL,
|
||||
timestamp INTEGER NOT NULL
|
||||
)
|
||||
""")
|
||||
|
||||
# Insert minimal test data
|
||||
bids = "[(50000.0, 10.0, 0, 1)]"
|
||||
asks = "[(50001.0, 5.0, 0, 1)]"
|
||||
conn.execute("INSERT INTO book (id, bids, asks, timestamp) VALUES (?, ?, ?, ?)",
|
||||
(1, bids, asks, 1000))
|
||||
conn.execute("INSERT INTO trades (id, trade_id, price, size, side, timestamp) VALUES (?, ?, ?, ?, ?, ?)",
|
||||
(1, 1.0, 50000.0, 3.0, "buy", 1000))
|
||||
conn.commit()
|
||||
|
||||
# Test the integration workflow
|
||||
storage = Storage("BTC-USDT")
|
||||
strategy = DefaultStrategy("BTC-USDT")
|
||||
|
||||
# This simulates the main.py workflow
|
||||
strategy.set_db_path(db_path) # This is what main.py now does
|
||||
storage.build_booktick_from_db(db_path, None) # This calculates and stores metrics
|
||||
|
||||
# Verify strategy can access stored metrics
|
||||
assert strategy._db_path == db_path
|
||||
|
||||
# Verify metrics were stored by attempting to load them
|
||||
metrics = strategy.load_stored_metrics(1000, 1000)
|
||||
assert len(metrics) == 1
|
||||
assert metrics[0].timestamp == 1000
|
||||
|
||||
# Verify strategy can be called (this is what main.py does)
|
||||
strategy.on_booktick(storage.book) # Should use stored metrics
|
||||
|
||||
finally:
|
||||
db_path.unlink(missing_ok=True)
|
||||
|
||||
|
||||
def test_strategy_backwards_compatibility():
|
||||
"""Test that strategy still works without database path (backwards compatibility)."""
|
||||
storage = Storage("BTC-USDT")
|
||||
strategy = DefaultStrategy("BTC-USDT")
|
||||
|
||||
# Don't set database path - should fall back to real-time calculation
|
||||
# This ensures existing code that doesn't use metrics still works
|
||||
|
||||
# Create empty book
|
||||
assert len(storage.book.snapshots) == 0
|
||||
|
||||
# Strategy should handle this gracefully
|
||||
strategy.on_booktick(storage.book) # Should not crash
|
||||
|
||||
# Verify OBI calculation still works
|
||||
obi_values = strategy.compute_OBI(storage.book)
|
||||
assert obi_values == [] # Empty book should return empty list
|
||||
@ -1,108 +0,0 @@
|
||||
"""Tests for main.py visualization workflow."""
|
||||
|
||||
import sys
|
||||
import sqlite3
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
from unittest.mock import patch, MagicMock
|
||||
|
||||
sys.path.append(str(Path(__file__).resolve().parents[1]))
|
||||
|
||||
# Mock typer to avoid import issues in tests
|
||||
sys.modules['typer'] = MagicMock()
|
||||
|
||||
from storage import Storage
|
||||
from strategies import DefaultStrategy
|
||||
from visualizer import Visualizer
|
||||
|
||||
|
||||
def test_main_workflow_separation():
|
||||
"""Test that main.py workflow properly separates strategy and visualization."""
|
||||
with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp_file:
|
||||
db_path = Path(tmp_file.name)
|
||||
|
||||
try:
|
||||
# Create minimal test database
|
||||
with sqlite3.connect(str(db_path)) as conn:
|
||||
conn.execute("""
|
||||
CREATE TABLE book (
|
||||
id INTEGER PRIMARY KEY,
|
||||
bids TEXT NOT NULL,
|
||||
asks TEXT NOT NULL,
|
||||
timestamp INTEGER NOT NULL
|
||||
)
|
||||
""")
|
||||
conn.execute("""
|
||||
CREATE TABLE trades (
|
||||
id INTEGER PRIMARY KEY,
|
||||
trade_id REAL NOT NULL,
|
||||
price REAL NOT NULL,
|
||||
size REAL NOT NULL,
|
||||
side TEXT NOT NULL,
|
||||
timestamp INTEGER NOT NULL
|
||||
)
|
||||
""")
|
||||
|
||||
# Insert minimal test data
|
||||
bids = "[(50000.0, 10.0, 0, 1)]"
|
||||
asks = "[(50001.0, 5.0, 0, 1)]"
|
||||
conn.execute("INSERT INTO book (id, bids, asks, timestamp) VALUES (?, ?, ?, ?)",
|
||||
(1, bids, asks, 1000))
|
||||
conn.execute("INSERT INTO trades (id, trade_id, price, size, side, timestamp) VALUES (?, ?, ?, ?, ?, ?)",
|
||||
(1, 1.0, 50000.0, 3.0, "buy", 1000))
|
||||
conn.commit()
|
||||
|
||||
# Test the new main.py workflow
|
||||
storage = Storage("BTC-USDT")
|
||||
strategy = DefaultStrategy("BTC-USDT") # No visualization parameter
|
||||
|
||||
# Mock visualizer to avoid GUI issues in tests
|
||||
with patch('matplotlib.pyplot.subplots') as mock_subplots:
|
||||
mock_fig = type('MockFig', (), {'canvas': type('MockCanvas', (), {'draw_idle': lambda: None})()})()
|
||||
mock_axes = [type('MockAx', (), {'clear': lambda: None})() for _ in range(4)]
|
||||
mock_subplots.return_value = (mock_fig, tuple(mock_axes))
|
||||
|
||||
visualizer = Visualizer(window_seconds=60, max_bars=500)
|
||||
|
||||
# This simulates the new main.py workflow
|
||||
strategy.set_db_path(db_path)
|
||||
visualizer.set_db_path(db_path)
|
||||
|
||||
storage.build_booktick_from_db(db_path, None)
|
||||
|
||||
# Strategy analyzes metrics (no visualization)
|
||||
strategy.on_booktick(storage.book)
|
||||
|
||||
# Verify strategy has database path but no visualizer
|
||||
assert strategy._db_path == db_path
|
||||
assert not hasattr(strategy, 'visualizer') or strategy.visualizer is None
|
||||
|
||||
# Verify visualizer can access database
|
||||
assert visualizer._db_path == db_path
|
||||
|
||||
# Verify visualizer can load metrics
|
||||
metrics = visualizer._load_stored_metrics(1000, 1000)
|
||||
assert len(metrics) == 1
|
||||
|
||||
# Test visualization update (should work independently)
|
||||
with patch.object(visualizer, '_draw') as mock_draw:
|
||||
visualizer.update_from_book(storage.book)
|
||||
mock_draw.assert_called_once()
|
||||
|
||||
finally:
|
||||
db_path.unlink(missing_ok=True)
|
||||
|
||||
|
||||
def test_strategy_has_no_visualization_dependency():
|
||||
"""Test that strategy no longer depends on visualization."""
|
||||
strategy = DefaultStrategy("BTC-USDT")
|
||||
|
||||
# Strategy should not have visualizer attribute
|
||||
assert not hasattr(strategy, 'visualizer') or strategy.visualizer is None
|
||||
|
||||
# Strategy should work without any visualization setup
|
||||
from models import Book
|
||||
book = Book()
|
||||
|
||||
# Should not raise any errors
|
||||
strategy.on_booktick(book)
|
||||
@ -1,142 +0,0 @@
"""Tests for MetricCalculator OBI calculation and best bid/ask extraction."""

import sys
from pathlib import Path

sys.path.append(str(Path(__file__).resolve().parents[1]))

from models import MetricCalculator, BookSnapshot, OrderbookLevel, Trade


def test_calculate_obi_normal_case():
    """Test OBI calculation with normal bid and ask volumes."""
    # Create test snapshot with more bid volume than ask volume
    snapshot = BookSnapshot(
        id=1,
        timestamp=1000,
        bids={
            50000.0: OrderbookLevel(price=50000.0, size=10.0, liquidation_count=0, order_count=1),
            49999.0: OrderbookLevel(price=49999.0, size=5.0, liquidation_count=0, order_count=1),
        },
        asks={
            50001.0: OrderbookLevel(price=50001.0, size=3.0, liquidation_count=0, order_count=1),
            50002.0: OrderbookLevel(price=50002.0, size=2.0, liquidation_count=0, order_count=1),
        },
    )

    # Total bid volume = 15.0, total ask volume = 5.0
    # OBI = (15 - 5) / (15 + 5) = 10 / 20 = 0.5
    obi = MetricCalculator.calculate_obi(snapshot)
    assert obi == 0.5


def test_calculate_obi_zero_volume():
    """Test OBI calculation when there's no volume."""
    snapshot = BookSnapshot(id=1, timestamp=1000, bids={}, asks={})
    obi = MetricCalculator.calculate_obi(snapshot)
    assert obi == 0.0


def test_calculate_obi_ask_heavy():
    """Test OBI calculation with more ask volume than bid volume."""
    snapshot = BookSnapshot(
        id=1,
        timestamp=1000,
        bids={
            50000.0: OrderbookLevel(price=50000.0, size=2.0, liquidation_count=0, order_count=1),
        },
        asks={
            50001.0: OrderbookLevel(price=50001.0, size=8.0, liquidation_count=0, order_count=1),
        },
    )

    # Total bid volume = 2.0, total ask volume = 8.0
    # OBI = (2 - 8) / (2 + 8) = -6 / 10 = -0.6
    obi = MetricCalculator.calculate_obi(snapshot)
    assert obi == -0.6


def test_get_best_bid_ask_normal():
    """Test best bid/ask extraction with normal orderbook."""
    snapshot = BookSnapshot(
        id=1,
        timestamp=1000,
        bids={
            50000.0: OrderbookLevel(price=50000.0, size=1.0, liquidation_count=0, order_count=1),
            49999.0: OrderbookLevel(price=49999.0, size=1.0, liquidation_count=0, order_count=1),
            49998.0: OrderbookLevel(price=49998.0, size=1.0, liquidation_count=0, order_count=1),
        },
        asks={
            50001.0: OrderbookLevel(price=50001.0, size=1.0, liquidation_count=0, order_count=1),
            50002.0: OrderbookLevel(price=50002.0, size=1.0, liquidation_count=0, order_count=1),
            50003.0: OrderbookLevel(price=50003.0, size=1.0, liquidation_count=0, order_count=1),
        },
    )

    best_bid, best_ask = MetricCalculator.get_best_bid_ask(snapshot)
    assert best_bid == 50000.0  # Highest bid price
    assert best_ask == 50001.0  # Lowest ask price


def test_get_best_bid_ask_empty():
    """Test best bid/ask extraction with empty orderbook."""
    snapshot = BookSnapshot(id=1, timestamp=1000, bids={}, asks={})
    best_bid, best_ask = MetricCalculator.get_best_bid_ask(snapshot)
    assert best_bid is None
    assert best_ask is None


def test_calculate_volume_delta_buy_heavy():
    """Test volume delta calculation with more buy volume than sell volume."""
    trades = [
        Trade(id=1, trade_id=1.0, price=50000.0, size=10.0, side="buy", timestamp=1000),
        Trade(id=2, trade_id=2.0, price=50001.0, size=5.0, side="buy", timestamp=1000),
        Trade(id=3, trade_id=3.0, price=49999.0, size=3.0, side="sell", timestamp=1000),
    ]

    # Buy volume = 15.0, Sell volume = 3.0
    # Volume Delta = 15.0 - 3.0 = 12.0
    vd = MetricCalculator.calculate_volume_delta(trades)
    assert vd == 12.0


def test_calculate_volume_delta_sell_heavy():
    """Test volume delta calculation with more sell volume than buy volume."""
    trades = [
        Trade(id=1, trade_id=1.0, price=50000.0, size=2.0, side="buy", timestamp=1000),
        Trade(id=2, trade_id=2.0, price=49999.0, size=8.0, side="sell", timestamp=1000),
    ]

    # Buy volume = 2.0, Sell volume = 8.0
    # Volume Delta = 2.0 - 8.0 = -6.0
    vd = MetricCalculator.calculate_volume_delta(trades)
    assert vd == -6.0


def test_calculate_volume_delta_no_trades():
    """Test volume delta calculation with no trades."""
    trades = []
    vd = MetricCalculator.calculate_volume_delta(trades)
    assert vd == 0.0


def test_calculate_cvd_incremental():
    """Test incremental CVD calculation."""
    # Start with zero CVD
    cvd1 = MetricCalculator.calculate_cvd(0.0, 10.0)
    assert cvd1 == 10.0

    # Add more volume delta
    cvd2 = MetricCalculator.calculate_cvd(cvd1, -5.0)
    assert cvd2 == 5.0

    # Continue accumulating
    cvd3 = MetricCalculator.calculate_cvd(cvd2, 15.0)
    assert cvd3 == 20.0


def test_calculate_cvd_reset_functionality():
    """Test CVD reset by starting from 0.0."""
    # Simulate reset by passing 0.0 as previous CVD
    cvd_after_reset = MetricCalculator.calculate_cvd(0.0, 25.0)
    assert cvd_after_reset == 25.0
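For orientation, the behaviour these deleted tests pin down amounts to a few small static helpers. The sketch below is only what the assertions above (and the OBI formula `(Vb - Va) / (Vb + Va)` from the README) imply; it is an assumed reconstruction, not the project's actual models.py.

class MetricCalculator:
    @staticmethod
    def calculate_obi(snapshot) -> float:
        # OBI = (Vb - Va) / (Vb + Va); the zero-volume test expects 0.0 for an empty book
        vb = sum(level.size for level in snapshot.bids.values())
        va = sum(level.size for level in snapshot.asks.values())
        total = vb + va
        return (vb - va) / total if total > 0 else 0.0

    @staticmethod
    def calculate_volume_delta(trades) -> float:
        # Buy volume minus sell volume over one snapshot's trades
        buys = sum(t.size for t in trades if t.side == "buy")
        sells = sum(t.size for t in trades if t.side == "sell")
        return buys - sells

    @staticmethod
    def calculate_cvd(previous_cvd: float, volume_delta: float) -> float:
        # CVD accumulates incrementally; passing 0.0 as previous_cvd acts as a reset
        return previous_cvd + volume_delta

    @staticmethod
    def get_best_bid_ask(snapshot):
        # Highest bid price and lowest ask price, or None when the side is empty
        best_bid = max(snapshot.bids) if snapshot.bids else None
        best_ask = min(snapshot.asks) if snapshot.asks else None
        return best_bid, best_ask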
@ -1,126 +0,0 @@
"""Tests for SQLiteOrderflowRepository table creation and schema validation."""

import sys
import sqlite3
import tempfile
from pathlib import Path

sys.path.append(str(Path(__file__).resolve().parents[1]))

from repositories.sqlite_repository import SQLiteOrderflowRepository
from models import Metric


def test_create_metrics_table():
    """Test that metrics table is created with proper schema and indexes."""
    with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp_file:
        db_path = Path(tmp_file.name)

    try:
        repo = SQLiteOrderflowRepository(db_path)
        with repo.connect() as conn:
            # Create metrics table
            repo.create_metrics_table(conn)

            # Verify table exists
            assert repo.table_exists(conn, "metrics")

            # Verify table schema
            cursor = conn.cursor()
            cursor.execute("PRAGMA table_info(metrics)")
            columns = cursor.fetchall()

            # Check expected columns exist
            column_names = [col[1] for col in columns]
            expected_columns = ["id", "snapshot_id", "timestamp", "obi", "cvd", "best_bid", "best_ask"]
            for col in expected_columns:
                assert col in column_names, f"Column {col} missing from metrics table"

            # Verify indexes exist
            cursor.execute("PRAGMA index_list(metrics)")
            indexes = cursor.fetchall()
            index_names = [idx[1] for idx in indexes]

            assert "idx_metrics_timestamp" in index_names
            assert "idx_metrics_snapshot_id" in index_names

    finally:
        db_path.unlink(missing_ok=True)


def test_insert_metrics_batch():
    """Test batch insertion of metrics data."""
    with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp_file:
        db_path = Path(tmp_file.name)

    try:
        repo = SQLiteOrderflowRepository(db_path)
        with repo.connect() as conn:
            # Create metrics table
            repo.create_metrics_table(conn)

            # Create test metrics
            metrics = [
                Metric(snapshot_id=1, timestamp=1000, obi=0.5, cvd=100.0, best_bid=50000.0, best_ask=50001.0),
                Metric(snapshot_id=2, timestamp=1001, obi=-0.2, cvd=150.0, best_bid=50002.0, best_ask=50003.0),
                Metric(snapshot_id=3, timestamp=1002, obi=0.0, cvd=125.0),  # No best_bid/ask
            ]

            # Insert batch
            repo.insert_metrics_batch(conn, metrics)
            conn.commit()

            # Verify insertion
            cursor = conn.cursor()
            cursor.execute("SELECT COUNT(*) FROM metrics")
            count = cursor.fetchone()[0]
            assert count == 3

            # Verify data integrity
            cursor.execute("SELECT snapshot_id, timestamp, obi, cvd, best_bid, best_ask FROM metrics ORDER BY timestamp")
            rows = cursor.fetchall()

            assert rows[0] == (1, "1000", 0.5, 100.0, 50000.0, 50001.0)
            assert rows[1] == (2, "1001", -0.2, 150.0, 50002.0, 50003.0)
            assert rows[2] == (3, "1002", 0.0, 125.0, None, None)

    finally:
        db_path.unlink(missing_ok=True)


def test_load_metrics_by_timerange():
    """Test loading metrics within a timestamp range."""
    with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp_file:
        db_path = Path(tmp_file.name)

    try:
        repo = SQLiteOrderflowRepository(db_path)
        with repo.connect() as conn:
            # Create metrics table and insert test data
            repo.create_metrics_table(conn)

            metrics = [
                Metric(snapshot_id=1, timestamp=1000, obi=0.1, cvd=10.0, best_bid=50000.0, best_ask=50001.0),
                Metric(snapshot_id=2, timestamp=1005, obi=0.2, cvd=20.0, best_bid=50002.0, best_ask=50003.0),
                Metric(snapshot_id=3, timestamp=1010, obi=0.3, cvd=30.0, best_bid=50004.0, best_ask=50005.0),
                Metric(snapshot_id=4, timestamp=1015, obi=0.4, cvd=40.0, best_bid=50006.0, best_ask=50007.0),
            ]

            repo.insert_metrics_batch(conn, metrics)
            conn.commit()

            # Test timerange query - should get middle 2 records
            loaded_metrics = repo.load_metrics_by_timerange(conn, 1003, 1012)

            assert len(loaded_metrics) == 2
            assert loaded_metrics[0].timestamp == 1005
            assert loaded_metrics[0].obi == 0.2
            assert loaded_metrics[1].timestamp == 1010
            assert loaded_metrics[1].obi == 0.3

            # Test edge cases
            assert len(repo.load_metrics_by_timerange(conn, 2000, 3000)) == 0  # No data
            assert len(repo.load_metrics_by_timerange(conn, 1000, 1000)) == 1  # Single record

    finally:
        db_path.unlink(missing_ok=True)
@ -1,17 +0,0 @@
import sys
from pathlib import Path

sys.path.append(str(Path(__file__).resolve().parents[1]))

from parsers.orderbook_parser import OrderbookParser


def test_parse_side_malformed_text_does_not_raise():
    parser = OrderbookParser(debug=False)
    side = {}
    # Malformed text that literal_eval cannot parse
    bad_text = "[[100.0, 'missing tuple closing'"
    # Should not raise; should simply log an error and leave side empty
    parser.parse_side(bad_text, side)
    assert side == {}
@ -1,53 +0,0 @@
import sys
from pathlib import Path
import sqlite3

sys.path.append(str(Path(__file__).resolve().parents[1]))

from repositories.sqlite_repository import SQLiteOrderflowRepository


def test_iterate_book_rows_batches(tmp_path):
    db_path = tmp_path / "iter.db"
    with sqlite3.connect(str(db_path)) as conn:
        c = conn.cursor()
        c.execute(
            """
            CREATE TABLE book (
                id INTEGER PRIMARY KEY,
                bids TEXT NOT NULL,
                asks TEXT NOT NULL,
                timestamp INTEGER NOT NULL
            )
            """
        )
        c.execute(
            """
            CREATE TABLE trades (
                id INTEGER PRIMARY KEY,
                trade_id REAL NOT NULL,
                price REAL NOT NULL,
                size REAL NOT NULL,
                side TEXT NOT NULL,
                timestamp INTEGER NOT NULL
            )
            """
        )
        # Insert 12 rows to ensure multiple fetchmany batches (repo uses 5000, but iteration still correct)
        bids = str([(100.0, 1.0, 0, 1)])
        asks = str([(101.0, 1.0, 0, 1)])
        for i in range(12):
            c.execute("INSERT INTO book (id, bids, asks, timestamp) VALUES (?, ?, ?, ?)", (i + 1, bids, asks, 1000 + i))
        conn.commit()

    repo = SQLiteOrderflowRepository(db_path)
    with repo.connect() as conn:
        rows = list(repo.iterate_book_rows(conn))
        assert len(rows) == 12
        # Ensure ordering by timestamp ascending
        timestamps = [r[3] for r in rows]
        assert timestamps == sorted(timestamps)

        # count_rows allowlist should work
        assert repo.count_rows(conn, "book") == 12
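The batching remark in the test above refers to cursor.fetchmany paging. A sketch of the generator shape it implies follows; the body and the batch_size default are assumptions drawn from the comment and the column order the test asserts, not the repository's verbatim code.

def iterate_book_rows(self, conn, batch_size: int = 5000):
    # Stream (id, bids, asks, timestamp) rows in ascending timestamp order
    # without loading the whole table into memory.
    cursor = conn.cursor()
    cursor.execute("SELECT id, bids, asks, timestamp FROM book ORDER BY timestamp ASC")
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        yield from rows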
@ -1,83 +0,0 @@
from pathlib import Path
from datetime import datetime, timezone
import sqlite3
import sys

# Ensure project root is on sys.path for direct module imports
sys.path.append(str(Path(__file__).resolve().parents[1]))

from storage import Storage


def _init_db(path: Path) -> None:
    with sqlite3.connect(str(path)) as conn:
        c = conn.cursor()
        c.execute(
            """
            CREATE TABLE IF NOT EXISTS book (
                id INTEGER PRIMARY KEY,
                bids TEXT NOT NULL,
                asks TEXT NOT NULL,
                timestamp INTEGER NOT NULL
            )
            """
        )
        c.execute(
            """
            CREATE TABLE IF NOT EXISTS trades (
                id INTEGER PRIMARY KEY,
                trade_id REAL NOT NULL,
                price REAL NOT NULL,
                size REAL NOT NULL,
                side TEXT NOT NULL,
                timestamp INTEGER NOT NULL
            )
            """
        )
        conn.commit()


def test_storage_builds_snapshots_and_attaches_trades(tmp_path):
    db_path = tmp_path / "test.db"
    _init_db(db_path)

    ts = 1_725_000_000

    # Insert one valid book row and one invalid (empty asks) that should be ignored
    bids = str([(100.0, 1.0, 0, 1), (99.5, 2.0, 0, 1)])
    asks = str([(100.5, 1.5, 0, 1), (101.0, 1.0, 0, 1)])
    invalid_asks = str([])

    with sqlite3.connect(str(db_path)) as conn:
        c = conn.cursor()
        c.execute("INSERT INTO book (id, bids, asks, timestamp) VALUES (?, ?, ?, ?)", (1, bids, asks, ts))
        c.execute("INSERT INTO book (id, bids, asks, timestamp) VALUES (?, ?, ?, ?)", (2, bids, invalid_asks, ts + 1))

        # Insert trades for ts
        c.execute(
            "INSERT INTO trades (id, trade_id, price, size, side, timestamp) VALUES (?, ?, ?, ?, ?, ?)",
            (1, 1.0, 100.25, 0.5, "buy", ts),
        )
        c.execute(
            "INSERT INTO trades (id, trade_id, price, size, side, timestamp) VALUES (?, ?, ?, ?, ?, ?)",
            (2, 2.0, 100.75, 0.75, "sell", ts),
        )
        conn.commit()

    storage = Storage("BTC-USDT")

    db_date = datetime.fromtimestamp(ts, tz=timezone.utc)
    storage.build_booktick_from_db(db_path, db_date)

    # Only one snapshot should be included (the valid one with non-empty asks)
    assert len(storage.book.snapshots) == 1
    snap = storage.book.snapshots[0]
    assert snap.timestamp == ts
    assert len(snap.bids) == 2
    assert len(snap.asks) == 2

    # Trades should be attached for the same timestamp
    assert len(snap.trades) == 2
    sides = sorted(t.side for t in snap.trades)
    assert sides == ["buy", "sell"]
@ -1,88 +0,0 @@
"""Tests for Storage metrics integration."""

import sys
import sqlite3
import tempfile
from pathlib import Path
from datetime import datetime

sys.path.append(str(Path(__file__).resolve().parents[1]))

from storage import Storage
from repositories.sqlite_repository import SQLiteOrderflowRepository


def test_storage_calculates_and_stores_metrics():
    """Test that Storage calculates and stores metrics during build_booktick_from_db."""
    with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp_file:
        db_path = Path(tmp_file.name)

    try:
        # Create test database with minimal data
        with sqlite3.connect(str(db_path)) as conn:
            # Create tables
            conn.execute("""
                CREATE TABLE book (
                    id INTEGER PRIMARY KEY,
                    bids TEXT NOT NULL,
                    asks TEXT NOT NULL,
                    timestamp INTEGER NOT NULL
                )
            """)
            conn.execute("""
                CREATE TABLE trades (
                    id INTEGER PRIMARY KEY,
                    trade_id REAL NOT NULL,
                    price REAL NOT NULL,
                    size REAL NOT NULL,
                    side TEXT NOT NULL,
                    timestamp INTEGER NOT NULL
                )
            """)

            # Insert test data
            bids = "[(50000.0, 10.0, 0, 1), (49999.0, 5.0, 0, 1)]"  # 15.0 total bid volume
            asks = "[(50001.0, 3.0, 0, 1), (50002.0, 2.0, 0, 1)]"  # 5.0 total ask volume

            conn.execute("INSERT INTO book (id, bids, asks, timestamp) VALUES (?, ?, ?, ?)",
                         (1, bids, asks, 1000))

            # Add trades for CVD calculation
            conn.execute("INSERT INTO trades (id, trade_id, price, size, side, timestamp) VALUES (?, ?, ?, ?, ?, ?)",
                         (1, 1.0, 50000.0, 8.0, "buy", 1000))
            conn.execute("INSERT INTO trades (id, trade_id, price, size, side, timestamp) VALUES (?, ?, ?, ?, ?, ?)",
                         (2, 2.0, 50001.0, 3.0, "sell", 1000))

            conn.commit()

        # Test Storage metrics integration
        storage = Storage("BTC-USDT")
        storage.build_booktick_from_db(db_path, datetime.now())

        # Verify metrics were calculated and stored
        repo = SQLiteOrderflowRepository(db_path)
        with repo.connect() as conn:
            # Check metrics table exists
            assert repo.table_exists(conn, "metrics")

            # Load calculated metrics
            metrics = repo.load_metrics_by_timerange(conn, 1000, 1000)
            assert len(metrics) == 1

            metric = metrics[0]

            # Verify OBI calculation: (15 - 5) / (15 + 5) = 0.5
            assert abs(metric.obi - 0.5) < 0.001

            # Verify CVD calculation: buy(8.0) - sell(3.0) = 5.0
            assert abs(metric.cvd - 5.0) < 0.001

            # Verify best bid/ask
            assert metric.best_bid == 50000.0
            assert metric.best_ask == 50001.0

            # Verify book was also populated (backward compatibility)
            assert len(storage.book.snapshots) == 1

    finally:
        db_path.unlink(missing_ok=True)
@ -1,48 +0,0 @@
import sys
from pathlib import Path
import pytest

# Ensure project root is on sys.path for direct module imports
sys.path.append(str(Path(__file__).resolve().parents[1]))

from storage import Storage, OrderbookLevel


def test_parse_orderbook_side_happy_path():
    storage = Storage("BTC-USDT")
    text = str([
        (100.0, 1.5, 0, 2),
        (101.0, 2.25, 1, 3),
    ])

    side = {}
    storage._parse_orderbook_side(text, side)

    assert 100.0 in side and 101.0 in side
    level_100 = side[100.0]
    level_101 = side[101.0]

    assert isinstance(level_100, OrderbookLevel)
    assert level_100.price == 100.0
    assert level_100.size == 1.5
    assert level_100.liquidation_count == 0
    assert level_100.order_count == 2

    assert level_101.size == 2.25
    assert level_101.liquidation_count == 1
    assert level_101.order_count == 3


def test_parse_orderbook_side_ignores_zero_size():
    storage = Storage("BTC-USDT")
    text = str([
        (100.0, 0.0, 0, 0),
        (101.0, 1.0, 0, 1),
    ])

    side = {}
    storage._parse_orderbook_side(text, side)

    assert 100.0 not in side
    assert 101.0 in side
@ -1,112 +0,0 @@
"""Tests for DefaultStrategy metrics integration."""

import sys
import sqlite3
import tempfile
from pathlib import Path

sys.path.append(str(Path(__file__).resolve().parents[1]))

from strategies import DefaultStrategy
from models import Book, BookSnapshot, OrderbookLevel, Metric
from repositories.sqlite_repository import SQLiteOrderflowRepository


def test_strategy_uses_metric_calculator():
    """Test that strategy uses MetricCalculator for OBI calculation."""
    strategy = DefaultStrategy("BTC-USDT")

    # Create test book with snapshots
    book = Book()
    snapshot = BookSnapshot(
        id=1,
        timestamp=1000,
        bids={50000.0: OrderbookLevel(price=50000.0, size=10.0, liquidation_count=0, order_count=1)},
        asks={50001.0: OrderbookLevel(price=50001.0, size=5.0, liquidation_count=0, order_count=1)},
    )
    book.add_snapshot(snapshot)

    # Test OBI calculation
    obi_values = strategy.compute_OBI(book)

    assert len(obi_values) == 1
    # OBI = (10 - 5) / (10 + 5) = 0.333...
    assert abs(obi_values[0] - 0.333333) < 0.001


def test_strategy_loads_stored_metrics():
    """Test that strategy can load stored metrics from database."""
    with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp_file:
        db_path = Path(tmp_file.name)

    try:
        # Create test database with metrics
        repo = SQLiteOrderflowRepository(db_path)
        with repo.connect() as conn:
            repo.create_metrics_table(conn)

            # Insert test metrics
            test_metrics = [
                Metric(snapshot_id=1, timestamp=1000, obi=0.1, cvd=10.0, best_bid=50000.0, best_ask=50001.0),
                Metric(snapshot_id=2, timestamp=1001, obi=0.2, cvd=15.0, best_bid=50002.0, best_ask=50003.0),
                Metric(snapshot_id=3, timestamp=1002, obi=0.3, cvd=20.0, best_bid=50004.0, best_ask=50005.0),
            ]

            repo.insert_metrics_batch(conn, test_metrics)
            conn.commit()

        # Test strategy loading
        strategy = DefaultStrategy("BTC-USDT")
        strategy.set_db_path(db_path)

        loaded_metrics = strategy.load_stored_metrics(1000, 1002)

        assert len(loaded_metrics) == 3
        assert loaded_metrics[0].obi == 0.1
        assert loaded_metrics[0].cvd == 10.0
        assert loaded_metrics[-1].obi == 0.3
        assert loaded_metrics[-1].cvd == 20.0

    finally:
        db_path.unlink(missing_ok=True)


def test_strategy_metrics_summary():
    """Test that strategy generates correct metrics summary."""
    strategy = DefaultStrategy("BTC-USDT")

    # Create test metrics
    metrics = [
        Metric(snapshot_id=1, timestamp=1000, obi=0.1, cvd=10.0),
        Metric(snapshot_id=2, timestamp=1001, obi=-0.2, cvd=5.0),
        Metric(snapshot_id=3, timestamp=1002, obi=0.3, cvd=15.0),
    ]

    summary = strategy.get_metrics_summary(metrics)

    assert summary["obi_min"] == -0.2
    assert summary["obi_max"] == 0.3
    assert abs(summary["obi_avg"] - 0.0667) < 0.001  # (0.1 + (-0.2) + 0.3) / 3
    assert summary["cvd_start"] == 10.0
    assert summary["cvd_end"] == 15.0
    assert summary["cvd_change"] == 5.0  # 15.0 - 10.0
    assert summary["total_snapshots"] == 3


def test_strategy_empty_metrics():
    """Test strategy behavior with empty metrics."""
    strategy = DefaultStrategy("BTC-USDT")

    # Test with empty book
    book = Book()
    obi_values = strategy.compute_OBI(book)
    assert obi_values == []

    # Test with empty metrics
    summary = strategy.get_metrics_summary([])
    assert summary == {}

    # Test loading from non-existent database
    strategy.set_db_path(Path("nonexistent.db"))
    metrics = strategy.load_stored_metrics(1000, 2000)
    assert metrics == []
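As a reading aid, the summary shape asserted in test_strategy_metrics_summary implies roughly the reduction below. This is an assumed sketch of what strategies.get_metrics_summary is expected to return, inferred from the keys and values the test checks, not the actual implementation.

def get_metrics_summary(metrics):
    # Empty input yields an empty summary, as test_strategy_empty_metrics expects
    if not metrics:
        return {}
    obi_values = [m.obi for m in metrics]
    return {
        "obi_min": min(obi_values),
        "obi_max": max(obi_values),
        "obi_avg": sum(obi_values) / len(obi_values),
        "cvd_start": metrics[0].cvd,
        "cvd_end": metrics[-1].cvd,
        "cvd_change": metrics[-1].cvd - metrics[0].cvd,
        "total_snapshots": len(metrics),
    }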
378
uv.lock
generated
@ -2,66 +2,6 @@ version = 1
|
||||
revision = 3
|
||||
requires-python = ">=3.12"
|
||||
|
||||
[[package]]
|
||||
name = "blinker"
|
||||
version = "1.9.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/21/28/9b3f50ce0e048515135495f198351908d99540d69bfdc8c1d15b73dc55ce/blinker-1.9.0.tar.gz", hash = "sha256:b4ce2265a7abece45e7cc896e98dbebe6cead56bcf805a3d23136d145f5445bf", size = 22460, upload-time = "2024-11-08T17:25:47.436Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/10/cb/f2ad4230dc2eb1a74edf38f1a38b9b52277f75bef262d8908e60d957e13c/blinker-1.9.0-py3-none-any.whl", hash = "sha256:ba0efaa9080b619ff2f3459d1d500c57bddea4a6b424b60a91141db6fd2f08bc", size = 8458, upload-time = "2024-11-08T17:25:46.184Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "certifi"
|
||||
version = "2025.8.3"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/dc/67/960ebe6bf230a96cda2e0abcf73af550ec4f090005363542f0765df162e0/certifi-2025.8.3.tar.gz", hash = "sha256:e564105f78ded564e3ae7c923924435e1daa7463faeab5bb932bc53ffae63407", size = 162386, upload-time = "2025-08-03T03:07:47.08Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/e5/48/1549795ba7742c948d2ad169c1c8cdbae65bc450d6cd753d124b17c8cd32/certifi-2025.8.3-py3-none-any.whl", hash = "sha256:f6c12493cfb1b06ba2ff328595af9350c65d6644968e5d3a2ffd78699af217a5", size = 161216, upload-time = "2025-08-03T03:07:45.777Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "charset-normalizer"
|
||||
version = "3.4.3"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/83/2d/5fd176ceb9b2fc619e63405525573493ca23441330fcdaee6bef9460e924/charset_normalizer-3.4.3.tar.gz", hash = "sha256:6fce4b8500244f6fcb71465d4a4930d132ba9ab8e71a7859e6a5d59851068d14", size = 122371, upload-time = "2025-08-09T07:57:28.46Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/e9/5e/14c94999e418d9b87682734589404a25854d5f5d0408df68bc15b6ff54bb/charset_normalizer-3.4.3-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:e28e334d3ff134e88989d90ba04b47d84382a828c061d0d1027b1b12a62b39b1", size = 205655, upload-time = "2025-08-09T07:56:08.475Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7d/a8/c6ec5d389672521f644505a257f50544c074cf5fc292d5390331cd6fc9c3/charset_normalizer-3.4.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0cacf8f7297b0c4fcb74227692ca46b4a5852f8f4f24b3c766dd94a1075c4884", size = 146223, upload-time = "2025-08-09T07:56:09.708Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/fc/eb/a2ffb08547f4e1e5415fb69eb7db25932c52a52bed371429648db4d84fb1/charset_normalizer-3.4.3-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:c6fd51128a41297f5409deab284fecbe5305ebd7e5a1f959bee1c054622b7018", size = 159366, upload-time = "2025-08-09T07:56:11.326Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/82/10/0fd19f20c624b278dddaf83b8464dcddc2456cb4b02bb902a6da126b87a1/charset_normalizer-3.4.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:3cfb2aad70f2c6debfbcb717f23b7eb55febc0bb23dcffc0f076009da10c6392", size = 157104, upload-time = "2025-08-09T07:56:13.014Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/16/ab/0233c3231af734f5dfcf0844aa9582d5a1466c985bbed6cedab85af9bfe3/charset_normalizer-3.4.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1606f4a55c0fd363d754049cdf400175ee96c992b1f8018b993941f221221c5f", size = 151830, upload-time = "2025-08-09T07:56:14.428Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ae/02/e29e22b4e02839a0e4a06557b1999d0a47db3567e82989b5bb21f3fbbd9f/charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:027b776c26d38b7f15b26a5da1044f376455fb3766df8fc38563b4efbc515154", size = 148854, upload-time = "2025-08-09T07:56:16.051Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/05/6b/e2539a0a4be302b481e8cafb5af8792da8093b486885a1ae4d15d452bcec/charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:42e5088973e56e31e4fa58eb6bd709e42fc03799c11c42929592889a2e54c491", size = 160670, upload-time = "2025-08-09T07:56:17.314Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/31/e7/883ee5676a2ef217a40ce0bffcc3d0dfbf9e64cbcfbdf822c52981c3304b/charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_s390x.whl", hash = "sha256:cc34f233c9e71701040d772aa7490318673aa7164a0efe3172b2981218c26d93", size = 158501, upload-time = "2025-08-09T07:56:18.641Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c1/35/6525b21aa0db614cf8b5792d232021dca3df7f90a1944db934efa5d20bb1/charset_normalizer-3.4.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:320e8e66157cc4e247d9ddca8e21f427efc7a04bbd0ac8a9faf56583fa543f9f", size = 153173, upload-time = "2025-08-09T07:56:20.289Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/50/ee/f4704bad8201de513fdc8aac1cabc87e38c5818c93857140e06e772b5892/charset_normalizer-3.4.3-cp312-cp312-win32.whl", hash = "sha256:fb6fecfd65564f208cbf0fba07f107fb661bcd1a7c389edbced3f7a493f70e37", size = 99822, upload-time = "2025-08-09T07:56:21.551Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/39/f5/3b3836ca6064d0992c58c7561c6b6eee1b3892e9665d650c803bd5614522/charset_normalizer-3.4.3-cp312-cp312-win_amd64.whl", hash = "sha256:86df271bf921c2ee3818f0522e9a5b8092ca2ad8b065ece5d7d9d0e9f4849bcc", size = 107543, upload-time = "2025-08-09T07:56:23.115Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/65/ca/2135ac97709b400c7654b4b764daf5c5567c2da45a30cdd20f9eefe2d658/charset_normalizer-3.4.3-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:14c2a87c65b351109f6abfc424cab3927b3bdece6f706e4d12faaf3d52ee5efe", size = 205326, upload-time = "2025-08-09T07:56:24.721Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/71/11/98a04c3c97dd34e49c7d247083af03645ca3730809a5509443f3c37f7c99/charset_normalizer-3.4.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:41d1fc408ff5fdfb910200ec0e74abc40387bccb3252f3f27c0676731df2b2c8", size = 146008, upload-time = "2025-08-09T07:56:26.004Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/60/f5/4659a4cb3c4ec146bec80c32d8bb16033752574c20b1252ee842a95d1a1e/charset_normalizer-3.4.3-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:1bb60174149316da1c35fa5233681f7c0f9f514509b8e399ab70fea5f17e45c9", size = 159196, upload-time = "2025-08-09T07:56:27.25Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/86/9e/f552f7a00611f168b9a5865a1414179b2c6de8235a4fa40189f6f79a1753/charset_normalizer-3.4.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:30d006f98569de3459c2fc1f2acde170b7b2bd265dc1943e87e1a4efe1b67c31", size = 156819, upload-time = "2025-08-09T07:56:28.515Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7e/95/42aa2156235cbc8fa61208aded06ef46111c4d3f0de233107b3f38631803/charset_normalizer-3.4.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:416175faf02e4b0810f1f38bcb54682878a4af94059a1cd63b8747244420801f", size = 151350, upload-time = "2025-08-09T07:56:29.716Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c2/a9/3865b02c56f300a6f94fc631ef54f0a8a29da74fb45a773dfd3dcd380af7/charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:6aab0f181c486f973bc7262a97f5aca3ee7e1437011ef0c2ec04b5a11d16c927", size = 148644, upload-time = "2025-08-09T07:56:30.984Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/77/d9/cbcf1a2a5c7d7856f11e7ac2d782aec12bdfea60d104e60e0aa1c97849dc/charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:fdabf8315679312cfa71302f9bd509ded4f2f263fb5b765cf1433b39106c3cc9", size = 160468, upload-time = "2025-08-09T07:56:32.252Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f6/42/6f45efee8697b89fda4d50580f292b8f7f9306cb2971d4b53f8914e4d890/charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_s390x.whl", hash = "sha256:bd28b817ea8c70215401f657edef3a8aa83c29d447fb0b622c35403780ba11d5", size = 158187, upload-time = "2025-08-09T07:56:33.481Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/70/99/f1c3bdcfaa9c45b3ce96f70b14f070411366fa19549c1d4832c935d8e2c3/charset_normalizer-3.4.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:18343b2d246dc6761a249ba1fb13f9ee9a2bcd95decc767319506056ea4ad4dc", size = 152699, upload-time = "2025-08-09T07:56:34.739Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a3/ad/b0081f2f99a4b194bcbb1934ef3b12aa4d9702ced80a37026b7607c72e58/charset_normalizer-3.4.3-cp313-cp313-win32.whl", hash = "sha256:6fb70de56f1859a3f71261cbe41005f56a7842cc348d3aeb26237560bfa5e0ce", size = 99580, upload-time = "2025-08-09T07:56:35.981Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9a/8f/ae790790c7b64f925e5c953b924aaa42a243fb778fed9e41f147b2a5715a/charset_normalizer-3.4.3-cp313-cp313-win_amd64.whl", hash = "sha256:cf1ebb7d78e1ad8ec2a8c4732c7be2e736f6e5123a4146c5b89c9d1f585f8cef", size = 107366, upload-time = "2025-08-09T07:56:37.339Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/8e/91/b5a06ad970ddc7a0e513112d40113e834638f4ca1120eb727a249fb2715e/charset_normalizer-3.4.3-cp314-cp314-macosx_10_13_universal2.whl", hash = "sha256:3cd35b7e8aedeb9e34c41385fda4f73ba609e561faedfae0a9e75e44ac558a15", size = 204342, upload-time = "2025-08-09T07:56:38.687Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ce/ec/1edc30a377f0a02689342f214455c3f6c2fbedd896a1d2f856c002fc3062/charset_normalizer-3.4.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b89bc04de1d83006373429975f8ef9e7932534b8cc9ca582e4db7d20d91816db", size = 145995, upload-time = "2025-08-09T07:56:40.048Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/17/e5/5e67ab85e6d22b04641acb5399c8684f4d37caf7558a53859f0283a650e9/charset_normalizer-3.4.3-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:2001a39612b241dae17b4687898843f254f8748b796a2e16f1051a17078d991d", size = 158640, upload-time = "2025-08-09T07:56:41.311Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f1/e5/38421987f6c697ee3722981289d554957c4be652f963d71c5e46a262e135/charset_normalizer-3.4.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:8dcfc373f888e4fb39a7bc57e93e3b845e7f462dacc008d9749568b1c4ece096", size = 156636, upload-time = "2025-08-09T07:56:43.195Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a0/e4/5a075de8daa3ec0745a9a3b54467e0c2967daaaf2cec04c845f73493e9a1/charset_normalizer-3.4.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:18b97b8404387b96cdbd30ad660f6407799126d26a39ca65729162fd810a99aa", size = 150939, upload-time = "2025-08-09T07:56:44.819Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/02/f7/3611b32318b30974131db62b4043f335861d4d9b49adc6d57c1149cc49d4/charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:ccf600859c183d70eb47e05a44cd80a4ce77394d1ac0f79dbd2dd90a69a3a049", size = 148580, upload-time = "2025-08-09T07:56:46.684Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7e/61/19b36f4bd67f2793ab6a99b979b4e4f3d8fc754cbdffb805335df4337126/charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:53cd68b185d98dde4ad8990e56a58dea83a4162161b1ea9272e5c9182ce415e0", size = 159870, upload-time = "2025-08-09T07:56:47.941Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/06/57/84722eefdd338c04cf3030ada66889298eaedf3e7a30a624201e0cbe424a/charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_s390x.whl", hash = "sha256:30a96e1e1f865f78b030d65241c1ee850cdf422d869e9028e2fc1d5e4db73b92", size = 157797, upload-time = "2025-08-09T07:56:49.756Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/72/2a/aff5dd112b2f14bcc3462c312dce5445806bfc8ab3a7328555da95330e4b/charset_normalizer-3.4.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:d716a916938e03231e86e43782ca7878fb602a125a91e7acb8b5112e2e96ac16", size = 152224, upload-time = "2025-08-09T07:56:51.369Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b7/8c/9839225320046ed279c6e839d51f028342eb77c91c89b8ef2549f951f3ec/charset_normalizer-3.4.3-cp314-cp314-win32.whl", hash = "sha256:c6dbd0ccdda3a2ba7c2ecd9d77b37f3b5831687d8dc1b6ca5f56a4880cc7b7ce", size = 100086, upload-time = "2025-08-09T07:56:52.722Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ee/7a/36fbcf646e41f710ce0a563c1c9a343c6edf9be80786edeb15b6f62e17db/charset_normalizer-3.4.3-cp314-cp314-win_amd64.whl", hash = "sha256:73dc19b562516fc9bcf6e5d6e596df0b4eb98d87e4f79f3ae71840e6ed21361c", size = 107400, upload-time = "2025-08-09T07:56:55.172Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/8a/1f/f041989e93b001bc4e44bb1669ccdcf54d3f00e628229a85b08d330615c5/charset_normalizer-3.4.3-py3-none-any.whl", hash = "sha256:ce571ab16d890d23b5c278547ba694193a45011ff86a9162a71307ed9f86759a", size = 53175, upload-time = "2025-08-09T07:57:26.864Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "click"
|
||||
version = "8.2.1"
|
||||
@ -158,55 +98,6 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/e7/05/c19819d5e3d95294a6f5947fb9b9629efb316b96de511b418c53d245aae6/cycler-0.12.1-py3-none-any.whl", hash = "sha256:85cef7cff222d8644161529808465972e51340599459b8ac3ccbac5a854e0d30", size = 8321, upload-time = "2023-10-07T05:32:16.783Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "dash"
|
||||
version = "3.2.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "flask" },
|
||||
{ name = "importlib-metadata" },
|
||||
{ name = "nest-asyncio" },
|
||||
{ name = "plotly" },
|
||||
{ name = "requests" },
|
||||
{ name = "retrying" },
|
||||
{ name = "setuptools" },
|
||||
{ name = "typing-extensions" },
|
||||
{ name = "werkzeug" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/80/37/8b5621e0a0b3c6e81a8b6cd3f033aa4b750f53e288dd1a494a887a8a06e9/dash-3.2.0.tar.gz", hash = "sha256:93300b9b99498f8b8ed267e61c455b4ee1282c7e4d4b518600eec87ce6ddea55", size = 7558708, upload-time = "2025-07-31T19:18:59.014Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/d3/36/e0010483ca49b9bf6f389631ccea07b3ff6b678d14d8c7a0a4357860c36a/dash-3.2.0-py3-none-any.whl", hash = "sha256:4c1819588d83bed2cbcf5807daa5c2380c8c85789a6935a733f018f04ad8a6a2", size = 7900661, upload-time = "2025-07-31T19:18:50.679Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "dash-bootstrap-components"
|
||||
version = "2.0.4"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "dash" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/cc/d4/5b7da808ff5acb3a6ca702f504d8ef05bc7d4c475b18dadefd783b1120c3/dash_bootstrap_components-2.0.4.tar.gz", hash = "sha256:c3206c0923774bbc6a6ddaa7822b8d9aa5326b0d3c1e7cd795cc975025fe2484", size = 115599, upload-time = "2025-08-20T19:42:09.449Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/d6/38/1efeec8b4d741c09ccd169baf8a00c07a0176b58e418d4cd0c30dffedd22/dash_bootstrap_components-2.0.4-py3-none-any.whl", hash = "sha256:767cf0084586c1b2b614ccf50f79fe4525fdbbf8e3a161ed60016e584a14f5d1", size = 204044, upload-time = "2025-08-20T19:42:07.928Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "flask"
|
||||
version = "3.1.2"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "blinker" },
|
||||
{ name = "click" },
|
||||
{ name = "itsdangerous" },
|
||||
{ name = "jinja2" },
|
||||
{ name = "markupsafe" },
|
||||
{ name = "werkzeug" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/dc/6d/cfe3c0fcc5e477df242b98bfe186a4c34357b4847e87ecaef04507332dab/flask-3.1.2.tar.gz", hash = "sha256:bf656c15c80190ed628ad08cdfd3aaa35beb087855e2f494910aa3774cc4fd87", size = 720160, upload-time = "2025-08-19T21:03:21.205Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/ec/f9/7f9263c5695f4bd0023734af91bedb2ff8209e8de6ead162f35d8dc762fd/flask-3.1.2-py3-none-any.whl", hash = "sha256:ca1d8112ec8a6158cc29ea4858963350011b5c846a414cdb7a954aa9e967d03c", size = 103308, upload-time = "2025-08-19T21:03:19.499Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "fonttools"
|
||||
version = "4.59.1"
|
||||
@ -248,27 +139,6 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/0f/64/9d606e66d498917cd7a2ff24f558010d42d6fd4576d9dd57f0bd98333f5a/fonttools-4.59.1-py3-none-any.whl", hash = "sha256:647db657073672a8330608970a984d51573557f328030566521bc03415535042", size = 1130094, upload-time = "2025-08-14T16:28:12.048Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "idna"
|
||||
version = "3.10"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/f1/70/7703c29685631f5a7590aa73f1f1d3fa9a380e654b86af429e0934a32f7d/idna-3.10.tar.gz", hash = "sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9", size = 190490, upload-time = "2024-09-15T18:07:39.745Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3", size = 70442, upload-time = "2024-09-15T18:07:37.964Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "importlib-metadata"
|
||||
version = "8.7.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "zipp" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/76/66/650a33bd90f786193e4de4b3ad86ea60b53c89b669a5c7be931fac31cdb0/importlib_metadata-8.7.0.tar.gz", hash = "sha256:d13b81ad223b890aa16c5471f2ac3056cf76c5f10f82d6f9292f0b415f389000", size = 56641, upload-time = "2025-04-27T15:29:01.736Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/20/b0/36bd937216ec521246249be3bf9855081de4c5e06a0c9b4219dbeda50373/importlib_metadata-8.7.0-py3-none-any.whl", hash = "sha256:e5dd1551894c77868a30651cef00984d50e1002d06942a7101d34870c5f02afd", size = 27656, upload-time = "2025-04-27T15:29:00.214Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "iniconfig"
|
||||
version = "2.1.0"
|
||||
@ -278,27 +148,6 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/2c/e1/e6716421ea10d38022b952c159d5161ca1193197fb744506875fbb87ea7b/iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760", size = 6050, upload-time = "2025-03-19T20:10:01.071Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "itsdangerous"
|
||||
version = "2.2.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/9c/cb/8ac0172223afbccb63986cc25049b154ecfb5e85932587206f42317be31d/itsdangerous-2.2.0.tar.gz", hash = "sha256:e0050c0b7da1eea53ffaf149c0cfbb5c6e2e2b69c4bef22c81fa6eb73e5f6173", size = 54410, upload-time = "2024-04-16T21:28:15.614Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/04/96/92447566d16df59b2a776c0fb82dbc4d9e07cd95062562af01e408583fc4/itsdangerous-2.2.0-py3-none-any.whl", hash = "sha256:c6242fc49e35958c8b15141343aa660db5fc54d4f13a1db01a3f5891b98700ef", size = 16234, upload-time = "2024-04-16T21:28:14.499Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "jinja2"
|
||||
version = "3.1.6"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "markupsafe" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/df/bf/f7da0350254c0ed7c72f3e33cef02e048281fec7ecec5f032d4aac52226b/jinja2-3.1.6.tar.gz", hash = "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d", size = 245115, upload-time = "2025-03-05T20:05:02.478Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/62/a1/3d680cbfd5f4b8f15abc1d571870c5fc3e594bb582bc3b64ea099db13e56/jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67", size = 134899, upload-time = "2025-03-05T20:05:00.369Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "kiwisolver"
|
||||
version = "1.4.9"
|
||||
@ -383,44 +232,6 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/94/54/e7d793b573f298e1c9013b8c4dade17d481164aa517d1d7148619c2cedbf/markdown_it_py-4.0.0-py3-none-any.whl", hash = "sha256:87327c59b172c5011896038353a81343b6754500a08cd7a4973bb48c6d578147", size = 87321, upload-time = "2025-08-11T12:57:51.923Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "markupsafe"
|
||||
version = "3.0.2"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/b2/97/5d42485e71dfc078108a86d6de8fa46db44a1a9295e89c5d6d4a06e23a62/markupsafe-3.0.2.tar.gz", hash = "sha256:ee55d3edf80167e48ea11a923c7386f4669df67d7994554387f84e7d8b0a2bf0", size = 20537, upload-time = "2024-10-18T15:21:54.129Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/22/09/d1f21434c97fc42f09d290cbb6350d44eb12f09cc62c9476effdb33a18aa/MarkupSafe-3.0.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:9778bd8ab0a994ebf6f84c2b949e65736d5575320a17ae8984a77fab08db94cf", size = 14274, upload-time = "2024-10-18T15:21:13.777Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6b/b0/18f76bba336fa5aecf79d45dcd6c806c280ec44538b3c13671d49099fdd0/MarkupSafe-3.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:846ade7b71e3536c4e56b386c2a47adf5741d2d8b94ec9dc3e92e5e1ee1e2225", size = 12348, upload-time = "2024-10-18T15:21:14.822Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e0/25/dd5c0f6ac1311e9b40f4af06c78efde0f3b5cbf02502f8ef9501294c425b/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c99d261bd2d5f6b59325c92c73df481e05e57f19837bdca8413b9eac4bd8028", size = 24149, upload-time = "2024-10-18T15:21:15.642Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f3/f0/89e7aadfb3749d0f52234a0c8c7867877876e0a20b60e2188e9850794c17/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e17c96c14e19278594aa4841ec148115f9c7615a47382ecb6b82bd8fea3ab0c8", size = 23118, upload-time = "2024-10-18T15:21:17.133Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d5/da/f2eeb64c723f5e3777bc081da884b414671982008c47dcc1873d81f625b6/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88416bd1e65dcea10bc7569faacb2c20ce071dd1f87539ca2ab364bf6231393c", size = 22993, upload-time = "2024-10-18T15:21:18.064Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/da/0e/1f32af846df486dce7c227fe0f2398dc7e2e51d4a370508281f3c1c5cddc/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2181e67807fc2fa785d0592dc2d6206c019b9502410671cc905d132a92866557", size = 24178, upload-time = "2024-10-18T15:21:18.859Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c4/f6/bb3ca0532de8086cbff5f06d137064c8410d10779c4c127e0e47d17c0b71/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:52305740fe773d09cffb16f8ed0427942901f00adedac82ec8b67752f58a1b22", size = 23319, upload-time = "2024-10-18T15:21:19.671Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a2/82/8be4c96ffee03c5b4a034e60a31294daf481e12c7c43ab8e34a1453ee48b/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ad10d3ded218f1039f11a75f8091880239651b52e9bb592ca27de44eed242a48", size = 23352, upload-time = "2024-10-18T15:21:20.971Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/51/ae/97827349d3fcffee7e184bdf7f41cd6b88d9919c80f0263ba7acd1bbcb18/MarkupSafe-3.0.2-cp312-cp312-win32.whl", hash = "sha256:0f4ca02bea9a23221c0182836703cbf8930c5e9454bacce27e767509fa286a30", size = 15097, upload-time = "2024-10-18T15:21:22.646Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c1/80/a61f99dc3a936413c3ee4e1eecac96c0da5ed07ad56fd975f1a9da5bc630/MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:8e06879fc22a25ca47312fbe7c8264eb0b662f6db27cb2d3bbbc74b1df4b9b87", size = 15601, upload-time = "2024-10-18T15:21:23.499Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/83/0e/67eb10a7ecc77a0c2bbe2b0235765b98d164d81600746914bebada795e97/MarkupSafe-3.0.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ba9527cdd4c926ed0760bc301f6728ef34d841f405abf9d4f959c478421e4efd", size = 14274, upload-time = "2024-10-18T15:21:24.577Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2b/6d/9409f3684d3335375d04e5f05744dfe7e9f120062c9857df4ab490a1031a/MarkupSafe-3.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430", size = 12352, upload-time = "2024-10-18T15:21:25.382Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d2/f5/6eadfcd3885ea85fe2a7c128315cc1bb7241e1987443d78c8fe712d03091/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:569511d3b58c8791ab4c2e1285575265991e6d8f8700c7be0e88f86cb0672094", size = 24122, upload-time = "2024-10-18T15:21:26.199Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/0c/91/96cf928db8236f1bfab6ce15ad070dfdd02ed88261c2afafd4b43575e9e9/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:15ab75ef81add55874e7ab7055e9c397312385bd9ced94920f2802310c930396", size = 23085, upload-time = "2024-10-18T15:21:27.029Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c2/cf/c9d56af24d56ea04daae7ac0940232d31d5a8354f2b457c6d856b2057d69/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f3818cb119498c0678015754eba762e0d61e5b52d34c8b13d770f0719f7b1d79", size = 22978, upload-time = "2024-10-18T15:21:27.846Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2a/9f/8619835cd6a711d6272d62abb78c033bda638fdc54c4e7f4272cf1c0962b/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:cdb82a876c47801bb54a690c5ae105a46b392ac6099881cdfb9f6e95e4014c6a", size = 24208, upload-time = "2024-10-18T15:21:28.744Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f9/bf/176950a1792b2cd2102b8ffeb5133e1ed984547b75db47c25a67d3359f77/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:cabc348d87e913db6ab4aa100f01b08f481097838bdddf7c7a84b7575b7309ca", size = 23357, upload-time = "2024-10-18T15:21:29.545Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ce/4f/9a02c1d335caabe5c4efb90e1b6e8ee944aa245c1aaaab8e8a618987d816/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:444dcda765c8a838eaae23112db52f1efaf750daddb2d9ca300bcae1039adc5c", size = 23344, upload-time = "2024-10-18T15:21:30.366Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ee/55/c271b57db36f748f0e04a759ace9f8f759ccf22b4960c270c78a394f58be/MarkupSafe-3.0.2-cp313-cp313-win32.whl", hash = "sha256:bcf3e58998965654fdaff38e58584d8937aa3096ab5354d493c77d1fdd66d7a1", size = 15101, upload-time = "2024-10-18T15:21:31.207Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/29/88/07df22d2dd4df40aba9f3e402e6dc1b8ee86297dddbad4872bd5e7b0094f/MarkupSafe-3.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:e6a2a455bd412959b57a172ce6328d2dd1f01cb2135efda2e4576e8a23fa3b0f", size = 15603, upload-time = "2024-10-18T15:21:32.032Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/62/6a/8b89d24db2d32d433dffcd6a8779159da109842434f1dd2f6e71f32f738c/MarkupSafe-3.0.2-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:b5a6b3ada725cea8a5e634536b1b01c30bcdcd7f9c6fff4151548d5bf6b3a36c", size = 14510, upload-time = "2024-10-18T15:21:33.625Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7a/06/a10f955f70a2e5a9bf78d11a161029d278eeacbd35ef806c3fd17b13060d/MarkupSafe-3.0.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:a904af0a6162c73e3edcb969eeeb53a63ceeb5d8cf642fade7d39e7963a22ddb", size = 12486, upload-time = "2024-10-18T15:21:34.611Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/34/cf/65d4a571869a1a9078198ca28f39fba5fbb910f952f9dbc5220afff9f5e6/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa4e5faecf353ed117801a068ebab7b7e09ffb6e1d5e412dc852e0da018126c", size = 25480, upload-time = "2024-10-18T15:21:35.398Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/0c/e3/90e9651924c430b885468b56b3d597cabf6d72be4b24a0acd1fa0e12af67/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0ef13eaeee5b615fb07c9a7dadb38eac06a0608b41570d8ade51c56539e509d", size = 23914, upload-time = "2024-10-18T15:21:36.231Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/66/8c/6c7cf61f95d63bb866db39085150df1f2a5bd3335298f14a66b48e92659c/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d16a81a06776313e817c951135cf7340a3e91e8c1ff2fac444cfd75fffa04afe", size = 23796, upload-time = "2024-10-18T15:21:37.073Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/bb/35/cbe9238ec3f47ac9a7c8b3df7a808e7cb50fe149dc7039f5f454b3fba218/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:6381026f158fdb7c72a168278597a5e3a5222e83ea18f543112b2662a9b699c5", size = 25473, upload-time = "2024-10-18T15:21:37.932Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e6/32/7621a4382488aa283cc05e8984a9c219abad3bca087be9ec77e89939ded9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:3d79d162e7be8f996986c064d1c7c817f6df3a77fe3d6859f6f9e7be4b8c213a", size = 24114, upload-time = "2024-10-18T15:21:39.799Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/0d/80/0985960e4b89922cb5a0bac0ed39c5b96cbc1a536a99f30e8c220a996ed9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:131a3c7689c85f5ad20f9f6fb1b866f402c445b220c19fe4308c0b147ccd2ad9", size = 24098, upload-time = "2024-10-18T15:21:40.813Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/82/78/fedb03c7d5380df2427038ec8d973587e90561b2d90cd472ce9254cf348b/MarkupSafe-3.0.2-cp313-cp313t-win32.whl", hash = "sha256:ba8062ed2cf21c07a9e295d5b8a2a5ce678b913b45fdf68c32d95d6c1291e0b6", size = 15208, upload-time = "2024-10-18T15:21:41.814Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4f/65/6079a46068dfceaeabb5dcad6d674f5f5c61a6fa5673746f42a9f4c233b3/MarkupSafe-3.0.2-cp313-cp313t-win_amd64.whl", hash = "sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f", size = 15739, upload-time = "2024-10-18T15:21:42.784Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "matplotlib"
|
||||
version = "3.10.5"
|
||||
@ -484,24 +295,6 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/b3/38/89ba8ad64ae25be8de66a6d463314cf1eb366222074cfda9ee839c56a4b4/mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8", size = 9979, upload-time = "2022-08-14T12:40:09.779Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "narwhals"
|
||||
version = "2.2.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/01/8f/6b3d8c19540eaaa50778a8bbbe54e025d3f93aca6cdd5a4de3044c36f83c/narwhals-2.2.0.tar.gz", hash = "sha256:f6a34f2699acabe2c17339c104f0bec28b9f7a55fbc7f8d485d49bea72d12b8a", size = 547070, upload-time = "2025-08-25T07:51:58.904Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/dd/54/1ecca75e51d7da8ca53d1ffa8636ef9077a6eaa31f43ade71360b3e6449a/narwhals-2.2.0-py3-none-any.whl", hash = "sha256:2b5e3d61a486fa4328c286b0c8018b3e781a964947ff725d66ba12f6d5ca3d2a", size = 401021, upload-time = "2025-08-25T07:51:56.97Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "nest-asyncio"
|
||||
version = "1.6.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/83/f8/51569ac65d696c8ecbee95938f89d4abf00f47d58d48f6fbabfe8f0baefe/nest_asyncio-1.6.0.tar.gz", hash = "sha256:6f172d5449aca15afd6c646851f4e31e02c598d553a667e38cafa997cfec55fe", size = 7418, upload-time = "2024-01-21T14:25:19.227Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/a0/c4/c2971a3ba4c6103a3d10c4b0f24f461ddc027f0f09763220cf35ca1401b3/nest_asyncio-1.6.0-py3-none-any.whl", hash = "sha256:87af6efd6b5e897c81050477ef65c62e2b2f35d51703cae01aff2905b1852e1c", size = 5195, upload-time = "2024-01-21T14:25:17.223Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "numpy"
|
||||
version = "2.3.2"
|
||||
@ -570,12 +363,11 @@ name = "orderflow-backtest"
|
||||
version = "0.1.0"
|
||||
source = { virtual = "." }
|
||||
dependencies = [
|
||||
{ name = "dash" },
|
||||
{ name = "dash-bootstrap-components" },
|
||||
{ name = "matplotlib" },
|
||||
{ name = "numpy" },
|
||||
{ name = "pandas" },
|
||||
{ name = "plotly" },
|
||||
{ name = "pyqt5" },
|
||||
{ name = "pyqtgraph" },
|
||||
{ name = "pyside6" },
|
||||
{ name = "typer" },
|
||||
]
|
||||
|
||||
@ -586,12 +378,11 @@ dev = [

[package.metadata]
requires-dist = [
{ name = "dash", specifier = ">=2.18.0" },
{ name = "dash-bootstrap-components", specifier = ">=1.5.0" },
{ name = "matplotlib", specifier = ">=3.10.5" },
{ name = "numpy", specifier = ">=2.3.2" },
{ name = "pandas", specifier = ">=2.0.0" },
{ name = "plotly", specifier = ">=5.18.0" },
{ name = "pyqt5", specifier = ">=5.15.11" },
{ name = "pyqtgraph", specifier = ">=0.13.0" },
{ name = "pyside6", specifier = ">=6.8.0" },
{ name = "typer", specifier = ">=0.16.1" },
]

@ -707,19 +498,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/89/c7/5572fa4a3f45740eaab6ae86fcdf7195b55beac1371ac8c619d880cfe948/pillow-11.3.0-cp314-cp314t-win_arm64.whl", hash = "sha256:79ea0d14d3ebad43ec77ad5272e6ff9bba5b679ef73375ea760261207fa8e0aa", size = 2512835, upload-time = "2025-07-01T09:15:50.399Z" },
]

[[package]]
name = "plotly"
version = "6.3.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "narwhals" },
{ name = "packaging" },
]
sdist = { url = "https://files.pythonhosted.org/packages/a0/64/850de5076f4436410e1ce4f6a69f4313ef6215dfea155f3f6559335cad29/plotly-6.3.0.tar.gz", hash = "sha256:8840a184d18ccae0f9189c2b9a2943923fd5cae7717b723f36eef78f444e5a73", size = 6923926, upload-time = "2025-08-12T20:22:14.127Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/95/a9/12e2dc726ba1ba775a2c6922d5d5b4488ad60bdab0888c337c194c8e6de8/plotly-6.3.0-py3-none-any.whl", hash = "sha256:7ad806edce9d3cdd882eaebaf97c0c9e252043ed1ed3d382c3e3520ec07806d4", size = 9791257, upload-time = "2025-08-12T20:22:09.205Z" },
]

[[package]]
name = "pluggy"
version = "1.6.0"
@ -748,46 +526,63 @@ wheels = [
]

[[package]]
name = "pyqt5"
version = "5.15.11"
name = "pyqtgraph"
version = "0.13.7"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pyqt5-qt5" },
{ name = "pyqt5-sip" },
{ name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0e/07/c9ed0bd428df6f87183fca565a79fee19fa7c88c7f00a7f011ab4379e77a/PyQt5-5.15.11.tar.gz", hash = "sha256:fda45743ebb4a27b4b1a51c6d8ef455c4c1b5d610c90d2934c7802b5c1557c52", size = 3216775, upload-time = "2024-07-19T08:39:57.756Z" }
sdist = { url = "https://files.pythonhosted.org/packages/33/d9/b62d5cddb3caa6e5145664bee5ed90223dee23ca887ed3ee479f2609e40a/pyqtgraph-0.13.7.tar.gz", hash = "sha256:64f84f1935c6996d0e09b1ee66fe478a7771e3ca6f3aaa05f00f6e068321d9e3", size = 2343380, upload-time = "2024-04-29T02:18:58.467Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/11/64/42ec1b0bd72d87f87bde6ceb6869f444d91a2d601f2e67cd05febc0346a1/PyQt5-5.15.11-cp38-abi3-macosx_11_0_arm64.whl", hash = "sha256:c8b03dd9380bb13c804f0bdb0f4956067f281785b5e12303d529f0462f9afdc2", size = 6579776, upload-time = "2024-07-19T08:39:19.775Z" },
{ url = "https://files.pythonhosted.org/packages/49/f5/3fb696f4683ea45d68b7e77302eff173493ac81e43d63adb60fa760b9f91/PyQt5-5.15.11-cp38-abi3-macosx_11_0_x86_64.whl", hash = "sha256:6cd75628f6e732b1ffcfe709ab833a0716c0445d7aec8046a48d5843352becb6", size = 7016415, upload-time = "2024-07-19T08:39:32.977Z" },
{ url = "https://files.pythonhosted.org/packages/b4/8c/4065950f9d013c4b2e588fe33cf04e564c2322842d84dbcbce5ba1dc28b0/PyQt5-5.15.11-cp38-abi3-manylinux_2_17_x86_64.whl", hash = "sha256:cd672a6738d1ae33ef7d9efa8e6cb0a1525ecf53ec86da80a9e1b6ec38c8d0f1", size = 8188103, upload-time = "2024-07-19T08:39:40.561Z" },
{ url = "https://files.pythonhosted.org/packages/f3/f0/ae5a5b4f9b826b29ea4be841b2f2d951bcf5ae1d802f3732b145b57c5355/PyQt5-5.15.11-cp38-abi3-win32.whl", hash = "sha256:76be0322ceda5deecd1708a8d628e698089a1cea80d1a49d242a6d579a40babd", size = 5433308, upload-time = "2024-07-19T08:39:46.932Z" },
{ url = "https://files.pythonhosted.org/packages/56/d5/68eb9f3d19ce65df01b6c7b7a577ad3bbc9ab3a5dd3491a4756e71838ec9/PyQt5-5.15.11-cp38-abi3-win_amd64.whl", hash = "sha256:bdde598a3bb95022131a5c9ea62e0a96bd6fb28932cc1619fd7ba211531b7517", size = 6865864, upload-time = "2024-07-19T08:39:53.572Z" },
{ url = "https://files.pythonhosted.org/packages/7b/34/5702b3b7cafe99be1d94b42f100e8cc5e6957b761fcb1cf5f72d492851da/pyqtgraph-0.13.7-py3-none-any.whl", hash = "sha256:7754edbefb6c367fa0dfb176e2d0610da3ada20aa7a5318516c74af5fb72bf7a", size = 1925473, upload-time = "2024-04-29T02:18:56.206Z" },
]

[[package]]
name = "pyqt5-qt5"
version = "5.15.17"
name = "pyside6"
version = "6.9.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pyside6-addons" },
{ name = "pyside6-essentials" },
{ name = "shiboken6" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/d3/f9/accb06e76e23fb23053d48cc24fd78dec6ed14cb4d5cbadb0fd4a0c1b02e/PyQt5_Qt5-5.15.17-py3-none-macosx_10_13_x86_64.whl", hash = "sha256:d8b8094108e748b4bbd315737cfed81291d2d228de43278f0b8bd7d2b808d2b9", size = 39972275, upload-time = "2025-05-24T11:15:42.259Z" },
{ url = "https://files.pythonhosted.org/packages/87/1a/e1601ad6934cc489b8f1e967494f23958465cf1943712f054c5a306e9029/PyQt5_Qt5-5.15.17-py3-none-macosx_11_0_arm64.whl", hash = "sha256:b68628f9b8261156f91d2f72ebc8dfb28697c4b83549245d9a68195bd2d74f0c", size = 37135109, upload-time = "2025-05-24T11:15:59.786Z" },
{ url = "https://files.pythonhosted.org/packages/ac/e1/13d25a9ff2ac236a264b4603abaa39fa8bb9a7aa430519bb5f545c5b008d/PyQt5_Qt5-5.15.17-py3-none-manylinux2014_x86_64.whl", hash = "sha256:b018f75d1cc61146396fa5af14da1db77c5d6318030e5e366f09ffdf7bd358d8", size = 61112954, upload-time = "2025-05-24T11:16:26.036Z" },
{ url = "https://files.pythonhosted.org/packages/43/42/43577413bd5ab26f5f21e7a43c9396aac158a5d01900c87e4609c0e96278/pyside6-6.9.2-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:71245c76bfbe5c41794ffd8546730ec7cc869d4bbe68535639e026e4ef8a7714", size = 558102, upload-time = "2025-08-26T07:52:57.302Z" },
{ url = "https://files.pythonhosted.org/packages/12/df/cb84f802df3dcc1d196d2f9f37dbb8227761826f936987c9386b8ae1ffcc/pyside6-6.9.2-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:64a9e2146e207d858e00226f68d7c1b4ab332954742a00dcabb721bb9e4aa0cd", size = 558243, upload-time = "2025-08-26T07:52:59.272Z" },
{ url = "https://files.pythonhosted.org/packages/94/2d/715db9da437b4632d06e2c4718aee9937760b84cf36c23d5441989e581b0/pyside6-6.9.2-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:a78fad16241a1f2ed0fa0098cf3d621f591fc75b4badb7f3fa3959c9d861c806", size = 558245, upload-time = "2025-08-26T07:53:00.838Z" },
{ url = "https://files.pythonhosted.org/packages/59/90/2e75cbff0e17f16b83d2b7e8434ae9175cae8d6ff816c9b56d307cf53c86/pyside6-6.9.2-cp39-abi3-win_amd64.whl", hash = "sha256:d1afbf48f9a5612b9ee2dc7c384c1a65c08b5830ba5e7d01f66d82678e5459df", size = 564604, upload-time = "2025-08-26T07:53:02.402Z" },
{ url = "https://files.pythonhosted.org/packages/dc/34/e3dd4e046673efcbcfbe0aa2760df06b2877739b8f4da60f0229379adebd/pyside6-6.9.2-cp39-abi3-win_arm64.whl", hash = "sha256:1499b1d7629ab92119118e2636b4ace836b25e457ddf01003fdca560560b8c0a", size = 401833, upload-time = "2025-08-26T07:53:03.742Z" },
]

[[package]]
name = "pyqt5-sip"
version = "12.17.0"
name = "pyside6-addons"
version = "6.9.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/01/79/086b50414bafa71df494398ad277d72e58229a3d1c1b1c766d12b14c2e6d/pyqt5_sip-12.17.0.tar.gz", hash = "sha256:682dadcdbd2239af9fdc0c0628e2776b820e128bec88b49b8d692fe682f90b4f", size = 104042, upload-time = "2025-02-02T17:13:11.268Z" }
dependencies = [
{ name = "pyside6-essentials" },
{ name = "shiboken6" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/a3/e6/e51367c28d69b5a462f38987f6024e766fd8205f121fe2f4d8ba2a6886b9/PyQt5_sip-12.17.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:ea08341c8a5da00c81df0d689ecd4ee47a95e1ecad9e362581c92513f2068005", size = 124650, upload-time = "2025-02-02T17:12:50.595Z" },
{ url = "https://files.pythonhosted.org/packages/64/3b/e6d1f772b41d8445d6faf86cc9da65910484ebd9f7df83abc5d4955437d0/PyQt5_sip-12.17.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:4a92478d6808040fbe614bb61500fbb3f19f72714b99369ec28d26a7e3494115", size = 281893, upload-time = "2025-02-02T17:12:51.966Z" },
{ url = "https://files.pythonhosted.org/packages/ed/c5/d17fc2ddb9156a593710c88afd98abcf4055a2224b772f8bec2c6eea879c/PyQt5_sip-12.17.0-cp312-cp312-win32.whl", hash = "sha256:b0ff280b28813e9bfd3a4de99490739fc29b776dc48f1c849caca7239a10fc8b", size = 49438, upload-time = "2025-02-02T17:12:54.426Z" },
{ url = "https://files.pythonhosted.org/packages/fe/c5/1174988d52c732d07033cf9a5067142b01d76be7731c6394a64d5c3ef65c/PyQt5_sip-12.17.0-cp312-cp312-win_amd64.whl", hash = "sha256:54c31de7706d8a9a8c0fc3ea2c70468aba54b027d4974803f8eace9c22aad41c", size = 58017, upload-time = "2025-02-02T17:12:56.31Z" },
{ url = "https://files.pythonhosted.org/packages/fd/5d/f234e505af1a85189310521447ebc6052ebb697efded850d0f2b2555f7aa/PyQt5_sip-12.17.0-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:c7a7ff355e369616b6bcb41d45b742327c104b2bf1674ec79b8d67f8f2fa9543", size = 124580, upload-time = "2025-02-02T17:12:58.158Z" },
{ url = "https://files.pythonhosted.org/packages/cd/cb/3b2050e9644d0021bdf25ddf7e4c3526e1edd0198879e76ba308e5d44faf/PyQt5_sip-12.17.0-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:419b9027e92b0b707632c370cfc6dc1f3b43c6313242fc4db57a537029bd179c", size = 281563, upload-time = "2025-02-02T17:12:59.421Z" },
{ url = "https://files.pythonhosted.org/packages/51/61/b8ebde7e0b32d0de44c521a0ace31439885b0423d7d45d010a2f7d92808c/PyQt5_sip-12.17.0-cp313-cp313-win32.whl", hash = "sha256:351beab964a19f5671b2a3e816ecf4d3543a99a7e0650f88a947fea251a7589f", size = 49383, upload-time = "2025-02-02T17:13:00.597Z" },
{ url = "https://files.pythonhosted.org/packages/15/ed/ff94d6b2910e7627380cb1fc9a518ff966e6d78285c8e54c9422b68305db/PyQt5_sip-12.17.0-cp313-cp313-win_amd64.whl", hash = "sha256:672c209d05661fab8e17607c193bf43991d268a1eefbc2c4551fbf30fd8bb2ca", size = 58022, upload-time = "2025-02-02T17:13:01.738Z" },
{ url = "https://files.pythonhosted.org/packages/47/39/a8f4a55001b6a0aaee042e706de2447f21c6dc2a610f3d3debb7d04db821/pyside6_addons-6.9.2-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:7019fdcc0059626eb1608b361371f4dc8cb7f2d02f066908fd460739ff5a07cd", size = 316693692, upload-time = "2025-08-26T07:33:31.529Z" },
{ url = "https://files.pythonhosted.org/packages/14/48/0b16e9dabd4cafe02d59531832bc30b6f0e14c92076e90dd02379d365cb2/pyside6_addons-6.9.2-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:24350e5415317f269e743d1f7b4933fe5f59d90894aa067676c9ce6bfe9e7988", size = 166984613, upload-time = "2025-08-26T07:33:47.569Z" },
{ url = "https://files.pythonhosted.org/packages/f4/55/dc42a73387379bae82f921b7659cd2006ec0e80f7052f83ddc07e9eb9cca/pyside6_addons-6.9.2-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:af8dee517de8d336735a6543f7dd496eb580e852c14b4d2304b890e2a29de499", size = 162908466, upload-time = "2025-08-26T07:39:49.331Z" },
{ url = "https://files.pythonhosted.org/packages/14/fa/396a2e86230c493b565e2dc89dc64e4b1c63582ac69afe77b693c3817a53/pyside6_addons-6.9.2-cp39-abi3-win_amd64.whl", hash = "sha256:98d2413904ee4b2b754b077af7875fa6ec08468c01a6628a2c9c3d2cece4874f", size = 160216647, upload-time = "2025-08-26T07:42:18.903Z" },
{ url = "https://files.pythonhosted.org/packages/a7/fe/25f61259f1d5ec4648c9f6d2abd8e2cba2188f10735a57abafda719958e5/pyside6_addons-6.9.2-cp39-abi3-win_arm64.whl", hash = "sha256:b430cae782ff1a99fb95868043557f22c31b30c94afb9cf73278584e220a2ab6", size = 27126649, upload-time = "2025-08-26T07:42:37.696Z" },
]

[[package]]
name = "pyside6-essentials"
version = "6.9.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "shiboken6" },
]
wheels = [
{ url = "https://files.pythonhosted.org/packages/08/21/41960c03721a99e7be99a96ebb8570bdfd6f76f512b5d09074365e27ce28/pyside6_essentials-6.9.2-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:713eb8dcbb016ff10e6fca129c1bf2a0fd8cfac979e689264e0be3b332f9398e", size = 133092348, upload-time = "2025-08-26T07:43:57.231Z" },
{ url = "https://files.pythonhosted.org/packages/3e/02/e38ff18f3d2d8d3071aa6823031aad6089267aa4668181db65ce9948bfc0/pyside6_essentials-6.9.2-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:84b8ca4fa56506e2848bdb4c7a0851a5e7adcb916bef9bce25ce2eeb6c7002cc", size = 96569791, upload-time = "2025-08-26T07:44:41.392Z" },
{ url = "https://files.pythonhosted.org/packages/9a/a1/1203d4db6919b42a937d9ac5ddb84b20ea42eb119f7c1ddeb77cb8fdb00c/pyside6_essentials-6.9.2-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:d0f701503974bd51b408966539aa6956f3d8536e547ea8002fbfb3d77796bbc3", size = 94311809, upload-time = "2025-08-26T07:46:44.924Z" },
{ url = "https://files.pythonhosted.org/packages/a8/e3/3b3e869d3e332b6db93f6f64fac3b12f5c48b84f03f2aa50ee5c044ec0de/pyside6_essentials-6.9.2-cp39-abi3-win_amd64.whl", hash = "sha256:b2f746f795138ac63eb173f9850a6db293461a1b6ce22cf6dafac7d194a38951", size = 72624566, upload-time = "2025-08-26T07:48:04.64Z" },
{ url = "https://files.pythonhosted.org/packages/91/70/db78afc8b60b2e53f99145bde2f644cca43924a4dd869ffe664e0792730a/pyside6_essentials-6.9.2-cp39-abi3-win_arm64.whl", hash = "sha256:ecd7b5cd9e271f397fb89a6357f4ec301d8163e50869c6c557f9ccc6bed42789", size = 49561720, upload-time = "2025-08-26T07:49:43.708Z" },
]

[[package]]
@ -827,30 +622,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/81/c4/34e93fe5f5429d7570ec1fa436f1986fb1f00c3e0f43a589fe2bbcd22c3f/pytz-2025.2-py2.py3-none-any.whl", hash = "sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00", size = 509225, upload-time = "2025-03-25T02:24:58.468Z" },
]

[[package]]
name = "requests"
version = "2.32.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "certifi" },
{ name = "charset-normalizer" },
{ name = "idna" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/c9/74/b3ff8e6c8446842c3f5c837e9c3dfcfe2018ea6ecef224c710c85ef728f4/requests-2.32.5.tar.gz", hash = "sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf", size = 134517, upload-time = "2025-08-18T20:46:02.573Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1e/db/4254e3eabe8020b458f1a747140d32277ec7a271daf1d235b70dc0b4e6e3/requests-2.32.5-py3-none-any.whl", hash = "sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6", size = 64738, upload-time = "2025-08-18T20:46:00.542Z" },
]

[[package]]
name = "retrying"
version = "1.4.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/c8/5a/b17e1e257d3e6f2e7758930e1256832c9ddd576f8631781e6a072914befa/retrying-1.4.2.tar.gz", hash = "sha256:d102e75d53d8d30b88562d45361d6c6c934da06fab31bd81c0420acb97a8ba39", size = 11411, upload-time = "2025-08-03T03:35:25.189Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/67/f3/6cd296376653270ac1b423bb30bd70942d9916b6978c6f40472d6ac038e7/retrying-1.4.2-py3-none-any.whl", hash = "sha256:bbc004aeb542a74f3569aeddf42a2516efefcdaff90df0eb38fbfbf19f179f59", size = 10859, upload-time = "2025-08-03T03:35:23.829Z" },
]

[[package]]
name = "rich"
version = "14.1.0"
@ -864,15 +635,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e3/30/3c4d035596d3cf444529e0b2953ad0466f6049528a879d27534700580395/rich-14.1.0-py3-none-any.whl", hash = "sha256:536f5f1785986d6dbdea3c75205c473f970777b4a0d6c6dd1b696aa05a3fa04f", size = 243368, upload-time = "2025-07-25T07:32:56.73Z" },
]

[[package]]
name = "setuptools"
version = "80.9.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/18/5d/3bf57dcd21979b887f014ea83c24ae194cfcd12b9e0fda66b957c69d1fca/setuptools-80.9.0.tar.gz", hash = "sha256:f36b47402ecde768dbfafc46e8e4207b4360c654f1f3bb84475f0a28628fb19c", size = 1319958, upload-time = "2025-05-27T00:56:51.443Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl", hash = "sha256:062d34222ad13e0cc312a4c02d73f059e86a4acbfbdea8f8f76b28c99f306922", size = 1201486, upload-time = "2025-05-27T00:56:49.664Z" },
]

[[package]]
name = "shellingham"
version = "1.5.4"
@ -882,6 +644,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e0/f9/0595336914c5619e5f28a1fb793285925a8cd4b432c9da0a987836c7f822/shellingham-1.5.4-py2.py3-none-any.whl", hash = "sha256:7ecfff8f2fd72616f7481040475a65b2bf8af90a56c89140852d1120324e8686", size = 9755, upload-time = "2023-10-24T04:13:38.866Z" },
]

[[package]]
name = "shiboken6"
version = "6.9.2"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/1a/1e/62a8757aa0aa8d5dbf876f6cb6f652a60be9852e7911b59269dd983a7fb5/shiboken6-6.9.2-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:8bb1c4326330e53adeac98bfd9dcf57f5173a50318a180938dcc4825d9ca38da", size = 406337, upload-time = "2025-08-26T07:52:39.614Z" },
{ url = "https://files.pythonhosted.org/packages/3b/bb/72a8ed0f0542d9ea935f385b396ee6a4bbd94749c817cbf2be34e80a16d3/shiboken6-6.9.2-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3b54c0a12ea1b03b9dc5dcfb603c366e957dc75341bf7cb1cc436d0d848308ee", size = 206733, upload-time = "2025-08-26T07:52:41.768Z" },
{ url = "https://files.pythonhosted.org/packages/52/c4/09e902f5612a509cef2c8712c516e4fe44f3a1ae9fcd8921baddb5e6bae4/shiboken6-6.9.2-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:a5f5985938f5acb604c23536a0ff2efb3cccb77d23da91fbaff8fd8ded3dceb4", size = 202784, upload-time = "2025-08-26T07:52:43.172Z" },
{ url = "https://files.pythonhosted.org/packages/a4/ea/a56b094a4bf6facf89f52f58e83684e168b1be08c14feb8b99969f3d4189/shiboken6-6.9.2-cp39-abi3-win_amd64.whl", hash = "sha256:68c33d565cd4732be762d19ff67dfc53763256bac413d392aa8598b524980bc4", size = 1152089, upload-time = "2025-08-26T07:52:45.162Z" },
{ url = "https://files.pythonhosted.org/packages/48/64/562a527fc55fbf41fa70dae735929988215505cb5ec0809fb0aef921d4a0/shiboken6-6.9.2-cp39-abi3-win_arm64.whl", hash = "sha256:c5b827797b3d89d9b9a3753371ff533fcd4afc4531ca51a7c696952132098054", size = 1708948, upload-time = "2025-08-26T07:52:48.016Z" },
]

[[package]]
name = "six"
version = "1.17.0"
@ -923,33 +697,3 @@ sdist = { url = "https://files.pythonhosted.org/packages/95/32/1a225d6164441be76
wheels = [
{ url = "https://files.pythonhosted.org/packages/5c/23/c7abc0ca0a1526a0774eca151daeb8de62ec457e77262b66b359c3c7679e/tzdata-2025.2-py2.py3-none-any.whl", hash = "sha256:1a403fada01ff9221ca8044d701868fa132215d84beb92242d9acd2147f667a8", size = 347839, upload-time = "2025-03-23T13:54:41.845Z" },
]

[[package]]
name = "urllib3"
version = "2.5.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/15/22/9ee70a2574a4f4599c47dd506532914ce044817c7752a79b6a51286319bc/urllib3-2.5.0.tar.gz", hash = "sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760", size = 393185, upload-time = "2025-06-18T14:07:41.644Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a7/c2/fe1e52489ae3122415c51f387e221dd0773709bad6c6cdaa599e8a2c5185/urllib3-2.5.0-py3-none-any.whl", hash = "sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc", size = 129795, upload-time = "2025-06-18T14:07:40.39Z" },
]

[[package]]
name = "werkzeug"
version = "3.1.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "markupsafe" },
]
sdist = { url = "https://files.pythonhosted.org/packages/9f/69/83029f1f6300c5fb2471d621ab06f6ec6b3324685a2ce0f9777fd4a8b71e/werkzeug-3.1.3.tar.gz", hash = "sha256:60723ce945c19328679790e3282cc758aa4a6040e4bb330f53d30fa546d44746", size = 806925, upload-time = "2024-11-08T15:52:18.093Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/52/24/ab44c871b0f07f491e5d2ad12c9bd7358e527510618cb1b803a88e986db1/werkzeug-3.1.3-py3-none-any.whl", hash = "sha256:54b78bf3716d19a65be4fceccc0d1d7b89e608834989dfae50ea87564639213e", size = 224498, upload-time = "2024-11-08T15:52:16.132Z" },
]

[[package]]
name = "zipp"
version = "3.23.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/e3/02/0f2892c661036d50ede074e376733dca2ae7c6eb617489437771209d4180/zipp-3.23.0.tar.gz", hash = "sha256:a07157588a12518c9d4034df3fbbee09c814741a33ff63c05fa29d26a2404166", size = 25547, upload-time = "2025-06-08T17:06:39.4Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2e/54/647ade08bf0db230bfea292f893923872fd20be6ac6f53b2b936ba839d75/zipp-3.23.0-py3-none-any.whl", hash = "sha256:071652d6115ed432f5ce1d34c336c0adfd6a884660d1e9712a256d3d3bd4b14e", size = 10276, upload-time = "2025-06-08T17:06:38.034Z" },
]