Add initial implementation of the Orderflow Backtest System with OBI and CVD metrics integration, including core modules for storage, strategies, and visualization. Introduce persistent metrics storage in SQLite, optimize memory usage, and expand the documentation.

Simon Moisy
2025-08-26 17:22:07 +08:00
parent 63f723820a
commit fa6df78c1e
52 changed files with 7039 additions and 1 deletion


@@ -0,0 +1,120 @@
# ADR-001: Persistent Metrics Storage
## Status
Accepted
## Context
The original orderflow backtest system kept all orderbook snapshots in memory during processing, leading to excessive memory usage (>1GB for typical datasets). With the addition of OBI and CVD metrics calculation, we needed to decide how to handle the computed metrics and manage memory efficiently.
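For reference, both metrics are standard orderflow quantities: OBI (order book imbalance) compares resting bid and ask volume near the top of the book, and CVD (cumulative volume delta) is a running sum of signed trade volume. The exact depth and trade-signing conventions are implementation details of the metrics module, so the following is an illustrative sketch rather than the shipped code:

```python
def order_book_imbalance(bids, asks, depth=5):
    """OBI over the top `depth` levels: (bid_vol - ask_vol) / (bid_vol + ask_vol).

    `bids` and `asks` are sequences of (price, size) pairs, best level first.
    Returns a value in [-1, 1]; positive means the book is bid-heavy.
    """
    bid_vol = sum(size for _, size in bids[:depth])
    ask_vol = sum(size for _, size in asks[:depth])
    total = bid_vol + ask_vol
    return (bid_vol - ask_vol) / total if total else 0.0


def cumulative_volume_delta(prev_cvd, trade_size, is_buy):
    """CVD accumulates signed trade volume: buys add, sells subtract."""
    return prev_cvd + (trade_size if is_buy else -trade_size)
```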
## Decision
We will implement persistent storage of calculated metrics in the SQLite database with the following approach:
1. **Metrics Table**: Create a dedicated `metrics` table to store OBI, CVD, and related data
2. **Streaming Processing**: Process snapshots one-by-one, calculate metrics, store results, then discard snapshots
3. **Batch Operations**: Use batch inserts (1000 records) for optimal database performance
4. **Query Interface**: Provide time-range queries for metrics retrieval and analysis
## Consequences
### Positive
- **Memory Reduction**: >70% reduction in peak memory usage during processing
- **Avoid Recalculation**: Metrics calculated once and reused for multiple analysis runs
- **Scalability**: Can process months/years of data without memory constraints
- **Performance**: Batch database operations provide high throughput
- **Persistence**: Metrics survive between application runs
- **Analysis Ready**: Stored metrics enable complex time-series analysis
### Negative
- **Storage Overhead**: Metrics table adds ~20% to database size
- **Complexity**: Additional database schema and management code
- **Dependencies**: Tighter coupling between processing and database layer
- **Migration**: Existing databases need schema updates for metrics table
## Alternatives Considered
### Option 1: Keep All Snapshots in Memory
**Rejected**: Unsustainable memory usage for large datasets. Would limit analysis to small time ranges.
### Option 2: Calculate Metrics On-Demand
**Rejected**: Recalculating metrics for every analysis run is computationally expensive and time-consuming.
### Option 3: External Metrics Database
**Rejected**: Adds deployment complexity. SQLite co-location provides better performance and simpler management.
### Option 4: Compressed In-Memory Cache
**Rejected**: Still faces fundamental memory scaling issues. Compression/decompression adds CPU overhead.
## Implementation Details
### Database Schema
```sql
CREATE TABLE metrics (
id INTEGER PRIMARY KEY AUTOINCREMENT,
snapshot_id INTEGER NOT NULL,
timestamp TEXT NOT NULL,
obi REAL NOT NULL,
cvd REAL NOT NULL,
best_bid REAL,
best_ask REAL,
FOREIGN KEY (snapshot_id) REFERENCES book(id)
);
CREATE INDEX idx_metrics_timestamp ON metrics(timestamp);
CREATE INDEX idx_metrics_snapshot_id ON metrics(snapshot_id);
```
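The `timestamp` index serves the time-range query interface. A minimal retrieval sketch using Python's built-in `sqlite3` module (the function name and column selection here are illustrative, not the shipped API):

```python
import sqlite3

def fetch_metrics(db_path, start_ts, end_ts):
    """Return metrics rows in [start_ts, end_ts], served by idx_metrics_timestamp."""
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "SELECT timestamp, obi, cvd, best_bid, best_ask"
            " FROM metrics"
            " WHERE timestamp BETWEEN ? AND ?"
            " ORDER BY timestamp",
            (start_ts, end_ts),
        )
        return cur.fetchall()
    finally:
        conn.close()
```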
### Processing Pipeline
1. Create the `metrics` table if it does not already exist
2. Stream through orderbook snapshots
3. For each snapshot:
- Calculate OBI and CVD metrics
- Batch store metrics (1000 records per commit; see the sketch below)
- Discard snapshot from memory
4. Provide query interface for time-range retrieval
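A minimal sketch of this loop, assuming hypothetical `iter_snapshots` and `compute_metrics` helpers (the real implementations live in the storage and metrics modules):

```python
import sqlite3

BATCH_SIZE = 1000  # records per commit, per the decision above

def process_snapshots(db_path, iter_snapshots, compute_metrics):
    """Stream snapshots, compute OBI/CVD per snapshot, and batch-insert results.

    Assumes the metrics table already exists (created in step 1 of the pipeline).
    """
    conn = sqlite3.connect(db_path)
    insert_sql = (
        "INSERT INTO metrics (snapshot_id, timestamp, obi, cvd, best_bid, best_ask)"
        " VALUES (?, ?, ?, ?, ?, ?)"
    )
    batch = []
    for snapshot in iter_snapshots(db_path):      # one snapshot in memory at a time
        batch.append(compute_metrics(snapshot))   # -> (id, ts, obi, cvd, bid, ask)
        if len(batch) >= BATCH_SIZE:
            conn.executemany(insert_sql, batch)   # one multi-row transaction
            conn.commit()
            batch.clear()                         # snapshot data is now discarded
    if batch:                                     # flush the final partial batch
        conn.executemany(insert_sql, batch)
        conn.commit()
    conn.close()
```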
### Memory Management
- **Before**: Store all snapshots → Calculate on demand → High memory usage
- **After**: Stream snapshots → Calculate immediately → Store metrics → Low memory usage
## Migration Strategy
### Backward Compatibility
- Existing databases continue to work without metrics table
- System auto-creates metrics table on first processing run
- Fallback to real-time calculation if stored metrics are unavailable (see the availability check sketched below)
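A minimal sketch of that availability check, querying SQLite's `sqlite_master` catalog:

```python
import sqlite3

def metrics_available(conn: sqlite3.Connection) -> bool:
    """True if the metrics table exists; otherwise fall back to real-time calculation."""
    row = conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' AND name = 'metrics'"
    ).fetchone()
    return row is not None
```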
### Performance Impact
- **Processing Time**: Slight increase due to database writes (~10%)
- **Query Performance**: Significant improvement for repeated analysis
- **Overall**: Net positive performance for typical usage patterns
## Monitoring and Validation
### Success Metrics
- **Memory Usage**: Target >70% reduction in peak memory usage
- **Processing Speed**: Maintain >500 snapshots/second processing rate
- **Storage Efficiency**: Metrics table <25% of total database size
- **Query Performance**: <1 second retrieval for typical time ranges
### Validation Methods
- Memory profiling during large dataset processing
- Performance benchmarks vs. original system
- Storage overhead analysis across different dataset sizes
- Query performance testing with various time ranges
## Future Considerations
### Potential Enhancements
- **Compression**: Consider compression for metrics storage if overhead becomes significant
- **Partitioning**: Time-based partitioning for very large datasets
- **Caching**: In-memory cache for frequently accessed metrics
- **Export**: Direct export capabilities for external analysis tools
### Scalability Options
- **Database Upgrade**: PostgreSQL if SQLite becomes limiting factor
- **Parallel Processing**: Multi-threaded metrics calculation
- **Distributed Storage**: For institutional-scale datasets
---
This decision provides a solid foundation for efficient, scalable metrics processing while maintaining simplicity and performance characteristics suitable for the target use cases.


@@ -0,0 +1,217 @@
# ADR-002: Separation of Visualization from Strategy
## Status
Accepted
## Context
The original system embedded visualization functionality within the `DefaultStrategy` class, creating tight coupling between trading analysis logic and chart rendering. This design had several issues:
1. **Mixed Responsibilities**: Strategy classes handled both trading logic and GUI operations
2. **Testing Complexity**: Strategy tests required mocking GUI components
3. **Deployment Flexibility**: Strategies couldn't run in headless environments
4. **Timing Control**: Visualization timing was tied to strategy execution rather than application flow
The user specifically requested that visualizations be displayed after each database file is processed, which requires finer control over visualization timing.
## Decision
We will separate visualization from strategy components with the following architecture:
1. **Remove Visualization from Strategy**: Strategy classes focus solely on trading analysis
2. **Main Application Control**: `main.py` orchestrates visualization timing and updates
3. **Independent Configuration**: Strategy and Visualizer get database paths independently
4. **Clean Interfaces**: No direct dependencies between strategy and visualization components
## Consequences
### Positive
- **Single Responsibility**: Strategy focuses on trading logic, Visualizer on charts
- **Better Testability**: Strategy tests run without GUI dependencies
- **Flexible Deployment**: Strategies can run in headless/server environments
- **Timing Control**: Visualization updates precisely when needed (after each DB)
- **Maintainability**: Changes to visualization don't affect strategy logic
- **Performance**: No GUI overhead during strategy analysis
### Negative
- **Increased Complexity**: Main application handles more orchestration logic
- **Coordination Required**: Must ensure the strategy and visualizer receive the same database path
- **Breaking Change**: Existing strategy initialization code needs updates
## Alternatives Considered
### Option 1: Keep Visualization in Strategy
**Rejected**: Violates single responsibility principle. Makes testing difficult and deployment inflexible.
### Option 2: Observer Pattern
**Rejected**: Adds unnecessary complexity for this use case. Direct control in main.py is simpler and more explicit.
### Option 3: Visualization Service
**Rejected**: Over-engineering for current requirements. May be considered for future multi-strategy scenarios.
## Implementation Details
### Before (Coupled Design)
```python
class DefaultStrategy:
    def __init__(self, instrument: str, enable_visualization: bool = True):
        self.visualizer = Visualizer(...) if enable_visualization else None

    def on_booktick(self, book: Book):
        # Trading analysis
        # ...
        # Visualization update
        if self.visualizer:
            self.visualizer.update_from_book(book)
```
### After (Separated Design)
```python
# Strategy focuses on analysis only
class DefaultStrategy:
    def __init__(self, instrument: str):
        # No visualization dependencies
        ...

    def on_booktick(self, book: Book):
        # Pure trading analysis
        # No visualization code
        ...


# Main application orchestrates both
def main():
    strategy = DefaultStrategy(instrument)
    visualizer = Visualizer(...)
    for db_path in db_paths:
        strategy.set_db_path(db_path)
        visualizer.set_db_path(db_path)

        # Process data
        storage.build_booktick_from_db(db_path, db_date)

        # Analysis
        strategy.on_booktick(storage.book)

        # Visualization (controlled timing)
        visualizer.update_from_book(storage.book)

    # Final display
    visualizer.show()
```
### Interface Changes
#### Strategy Interface (Simplified)
```python
class DefaultStrategy:
    def __init__(self, instrument: str): ...           # Removed visualization param
    def set_db_path(self, db_path: Path) -> None: ...  # No visualizer.set_db_path()
    def on_booktick(self, book: Book) -> None: ...     # No visualization calls
```
#### Main Application (Enhanced)
```python
def main():
    # Separate initialization
    strategy = DefaultStrategy(instrument)
    visualizer = Visualizer(window_seconds=60, max_bars=500)

    # Independent configuration
    for db_path in db_paths:
        strategy.set_db_path(db_path)
        visualizer.set_db_path(db_path)

        # Controlled execution
        strategy.on_booktick(storage.book)          # Analysis
        visualizer.update_from_book(storage.book)   # Visualization
```
## Migration Strategy
### Code Changes Required
1. **Strategy Classes**: Remove visualization initialization and calls
2. **Main Application**: Add visualizer creation and orchestration
3. **Tests**: Update strategy tests to remove visualization mocking
4. **Configuration**: Remove visualization parameters from strategy constructors
### Backward Compatibility
- **API Breaking**: Strategy constructor signature changes
- **Functionality Preserved**: All visualization features remain available
- **Test Updates**: Strategy tests become simpler (no GUI mocking needed)
### Migration Steps
1. Update `DefaultStrategy` to remove visualization dependencies
2. Modify `main.py` to create and manage `Visualizer` instance
3. Update all strategy constructor calls to remove `enable_visualization`
4. Update tests to reflect new interfaces
5. Verify visualization timing meets requirements
## Benefits Achieved
### Clean Architecture
- **Strategy**: Pure trading analysis logic
- **Visualizer**: Pure chart rendering logic
- **Main**: Application flow and component coordination
### Improved Testing
```python
# Before: Complex mocking required
def test_strategy():
    with patch('visualizer.Visualizer') as mock_viz:
        strategy = DefaultStrategy("BTC", enable_visualization=True)
        # Complex mock setup...
        ...

# After: Simple, direct testing
def test_strategy():
    strategy = DefaultStrategy("BTC")
    # Direct testing of analysis logic
    ...
```
### Flexible Deployment
```python
# Headless server deployment
strategy = DefaultStrategy("BTC")
# No GUI dependencies, can run anywhere
# Development with visualization
strategy = DefaultStrategy("BTC")
visualizer = Visualizer(...)
# Full GUI functionality when needed
```
### Precise Timing Control
```python
# Visualization updates exactly when requested
for db_file in database_files:
    process_database(db_file)            # Data processing
    strategy.analyze(book)               # Trading analysis
    visualizer.update_from_book(book)    # Chart update after each DB
```
## Monitoring and Validation
### Success Criteria
- **Test Simplification**: Strategy tests run without GUI mocking
- **Timing Accuracy**: Visualization updates after each database as requested
- **Performance**: No GUI overhead during pure analysis operations
- **Maintainability**: Visualization changes don't affect strategy code
### Validation Methods
- Run strategy tests in headless environment
- Verify visualization timing matches requirements
- Performance comparison of analysis-only vs. GUI operations
- Code complexity metrics for strategy vs. visualization modules
## Future Considerations
### Potential Enhancements
- **Multiple Visualizers**: Support different chart types or windows
- **Visualization Plugins**: Pluggable chart renderers for different outputs
- **Remote Visualization**: Web-based charts for server deployments
- **Batch Visualization**: Process multiple databases before chart updates
### Extensibility
- **Strategy Plugins**: Easy to add strategies without visualization concerns
- **Visualization Backends**: Swap chart libraries without affecting strategies
- **Analysis Pipeline**: Clear separation enables complex analysis workflows
---
This separation provides a clean, maintainable architecture that supports the requested visualization timing while improving code quality and testability.