documentation update

Vasily.onl
2025-06-06 20:33:29 +08:00
parent 5158d8a7d3
commit 666a58e799
31 changed files with 1107 additions and 2837 deletions

docs/modules/README.md (new file, 210 lines)

@@ -0,0 +1,210 @@
# Modules Documentation
This section contains detailed technical documentation for all system modules in the TCP Dashboard platform.
## 📋 Contents
### User Interface & Visualization
- **[Chart System (`charts/`)](./charts/)** - *Comprehensive modular chart system*
  - **Strategy-driven Configuration**: 5 professional trading strategies with JSON persistence
  - **26+ Indicator Presets**: SMA, EMA, RSI, MACD, Bollinger Bands with customization
  - **User Indicator Management**: Interactive CRUD system with real-time updates
  - **Modular Dashboard Integration**: Separated layouts, callbacks, and components
  - **Validation System**: 10+ validation rules with detailed error reporting
  - **Extensible Architecture**: Foundation for bot signal integration
  - Real-time chart updates with indicator toggling
  - Strategy dropdown with auto-loading configurations
### Data Collection System
- **[Data Collectors (`data_collectors.md`)](./data_collectors.md)** - *Comprehensive guide to the enhanced data collector system*
  - **BaseDataCollector** abstract class with health monitoring
  - **CollectorManager** for centralized management
  - **Exchange Factory Pattern** for standardized collector creation
  - **Modular Exchange Architecture** for scalable implementation
  - Auto-restart and failure recovery mechanisms
  - Health monitoring and alerting systems
  - Performance optimization techniques
  - Integration examples and patterns
  - Comprehensive troubleshooting guide
### Database Operations
- **[Database Operations (`database_operations.md`)](./database_operations.md)** - *Repository pattern for clean database interactions*
  - **Repository Pattern** implementation for data access abstraction
  - **MarketDataRepository** for candle/OHLCV operations
  - **RawTradeRepository** for WebSocket data storage
  - Automatic transaction management and session cleanup
  - Configurable duplicate handling with force update options
  - Custom error handling with DatabaseOperationError
  - Database health monitoring and performance statistics
  - Migration guide from direct SQL to repository pattern
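The configurable duplicate handling and force-update behaviour described above can be illustrated with a minimal in-memory sketch of the repository pattern (a toy class for illustration only, not the real `MarketDataRepository` API):

```python
class InMemoryCandleRepository:
    """Repository-pattern sketch: callers never touch the storage layer directly."""

    def __init__(self):
        self._rows = {}

    def upsert(self, symbol, timestamp, candle, force_update=False):
        """Insert a candle; skip duplicates unless force_update is set."""
        key = (symbol, timestamp)
        if key in self._rows and not force_update:
            return False  # duplicate skipped
        self._rows[key] = candle
        return True

    def get(self, symbol, timestamp):
        return self._rows.get((symbol, timestamp))
```

The real repositories add transaction management and session cleanup on top of this shape, but the calling code sees the same narrow interface.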
### Technical Analysis
- **[Technical Indicators (`technical-indicators.md`)](./technical-indicators.md)** - *Comprehensive technical analysis module*
  - **Five Core Indicators**: SMA, EMA, RSI, MACD, and Bollinger Bands
  - **Sparse Data Handling**: Optimized for the platform's aggregation strategy
  - **Vectorized Calculations**: High-performance pandas and numpy implementation
  - **Flexible Configuration**: JSON-based parameter configuration with validation
  - **Integration Ready**: Seamless integration with OHLCV data and real-time processing
  - Batch processing for multiple indicators
  - Support for different price columns (open, high, low, close)
  - Comprehensive unit testing and documentation
### Logging & Monitoring
- **[Enhanced Logging System (`logging.md`)](./logging.md)** - *Unified logging framework*
  - Multi-level logging with automatic cleanup
  - Console and file output with formatting
  - Performance monitoring integration
  - Cross-component logging standards
  - Log aggregation and analysis
## 🔧 Component Architecture
### Core Components
```
┌─────────────────────────────────────────────────────────────┐
│ TCP Dashboard Platform │
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Modular Dashboard System │ │
│ │ • Separated layouts, callbacks, components │ │
│ │ • Chart layers with strategy management │ │
│ │ • Real-time indicator updates │ │
│ │ • User indicator CRUD operations │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ CollectorManager │ │
│ │ ┌─────────────────────────────────────────────────┐│ │
│ │ │ Global Health Monitor ││ │
│ │ │ • System-wide health checks ││ │
│ │ │ • Auto-restart coordination ││ │
│ │ │ • Performance analytics ││ │
│ │ └─────────────────────────────────────────────────┘│ │
│ │ │ │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌────────────────┐ │ │
│ │ │OKX Collector│ │Binance Coll.│ │Custom Collector│ │ │
│ │ │• Health Mon │ │• Health Mon │ │• Health Monitor│ │ │
│ │ │• Auto-restart│ │• Auto-restart│ │• Auto-restart │ │ │
│ │ │• Data Valid │ │• Data Valid │ │• Data Validate │ │ │
│ │ └─────────────┘ └─────────────┘ └────────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
### Design Patterns
- **Factory Pattern**: Standardized component creation across exchanges and charts
- **Observer Pattern**: Event-driven data processing and real-time updates
- **Strategy Pattern**: Pluggable data processing and chart configuration strategies
- **Singleton Pattern**: Centralized logging and configuration management
- **Modular Architecture**: Separated concerns with reusable components
- **Repository Pattern**: Clean database access abstraction
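As a concrete illustration of the factory pattern listed above, a string-keyed registry plus a single creation entry point is enough to standardize construction across exchanges and chart layers. The decorator, registry, and class names below are generic examples, not the platform's actual API:

```python
from typing import Callable, Dict

_REGISTRY: Dict[str, Callable[..., object]] = {}

def register(kind: str):
    """Decorator that registers a constructor under a string key."""
    def wrap(cls):
        _REGISTRY[kind] = cls
        return cls
    return wrap

def create(kind: str, **kwargs):
    """Factory entry point: look up the class by key and instantiate it."""
    if kind not in _REGISTRY:
        raise ValueError(f"unknown kind: {kind}")
    return _REGISTRY[kind](**kwargs)

@register("okx")
class OKXCollector:
    """Hypothetical collector used only to demonstrate registration."""
    def __init__(self, symbol: str):
        self.symbol = symbol
```

New implementations then plug in by decorating a class, with no changes to calling code.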
## 🚀 Quick Start
### Using Chart Components
```python
# Chart system usage
from components.charts.config import get_available_strategy_names
from components.charts.indicator_manager import get_indicator_manager
# Get available strategies
strategy_names = get_available_strategy_names()
# Create custom indicator
manager = get_indicator_manager()
indicator = manager.create_indicator(
    name="Custom SMA 50",
    indicator_type="sma",
    parameters={"period": 50}
)
```
### Using Data Components
```python
# Data Collector usage
from data.exchanges import create_okx_collector
from data.base_collector import DataType
collector = create_okx_collector(
    symbol='BTC-USDT',
    data_types=[DataType.TRADE, DataType.ORDERBOOK]
)
# Logging usage
from utils.logger import get_logger
logger = get_logger("my_component")
logger.info("Component initialized")
```
### Component Integration
```python
# Integrating multiple components
import asyncio

from data.collector_manager import CollectorManager
from dashboard.app import create_app
from utils.logger import get_logger

async def main():
    # Start data collection
    manager = CollectorManager("production_system")
    await manager.start()

    # Create dashboard app; components work together seamlessly
    app = create_app()
    app.run_server(host='0.0.0.0', port=8050)

asyncio.run(main())
```
## 📊 Performance & Monitoring
### Health Monitoring
All components include built-in health monitoring:
- **Real-time Status**: Component state and performance metrics
- **Auto-Recovery**: Automatic restart on failures
- **Performance Tracking**: Message rates, uptime, error rates
- **Alerting**: Configurable alerts for component health
- **Dashboard Integration**: Visual system health monitoring
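The auto-recovery behaviour above reduces to a supervision loop: check health, restart on failure, count restarts. A self-contained toy sketch of that pattern (the class and function names are illustrative, not the platform's `CollectorManager` API):

```python
class Collector:
    """Toy stand-in for a data collector with a health flag."""

    def __init__(self):
        self.running = False
        self.restarts = 0

    def start(self):
        self.running = True

    def is_healthy(self):
        return self.running

def supervise(collector, checks=3):
    """Restart the collector whenever a periodic health check fails."""
    for _ in range(checks):
        if not collector.is_healthy():
            collector.start()
            collector.restarts += 1
    return collector.restarts
```

The real manager layers performance analytics and alerting on top, but the check-then-restart core is the same.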
### Logging Integration
Unified logging across all components:
- **Structured Logging**: JSON-formatted logs for analysis
- **Multiple Levels**: Debug, Info, Warning, Error levels
- **Automatic Cleanup**: Log rotation and old file cleanup
- **Performance Metrics**: Built-in performance tracking
- **Component Isolation**: Separate loggers for different modules
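JSON-formatted structured logging of the kind described above can be sketched with the standard library alone. The field names below are an assumption for illustration; the platform's actual formatter may emit different keys:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as a single JSON object per line."""

    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })
```

Attaching this formatter to any handler makes the output machine-parseable for downstream aggregation and analysis.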
## 🔗 Related Documentation
- **[Dashboard Modular Structure (dashboard-modular-structure.md)](./dashboard-modular-structure.md)** - Complete dashboard architecture
- **[Exchange Documentation (exchanges/)](./exchanges/)** - Exchange-specific implementations
- **[Architecture Overview (`../../architecture.md`)](../../architecture.md)** - System design and patterns
- **[Setup Guide (`../../guides/setup.md`)](../../guides/setup.md)** - Component configuration and deployment
- **[API Reference (`../../reference/`)](../../reference/)** - Technical specifications
## 📈 Future Components
Planned component additions:
- **Signal Layer**: Bot trading signal visualization and integration
- **Strategy Engine**: Trading strategy execution framework
- **Portfolio Manager**: Position and risk management
- **Alert Manager**: Advanced alerting and notification system
- **Data Analytics**: Historical data analysis and reporting
---
*For the complete documentation index, see the [main documentation README (`../README.md`)](../README.md)*


@@ -0,0 +1,702 @@
# Modular Chart Layers System
The Modular Chart Layers System is a flexible, strategy-driven chart system that supports technical indicator overlays, subplot management, and future bot signal integration. This system replaces basic chart functionality with a modular architecture that adapts to different trading strategies and their specific indicator requirements.
## Table of Contents
- [Overview](#overview)
- [Architecture](#architecture)
- [Quick Start](#quick-start)
- [Components](#components)
- [User Indicator Management](#user-indicator-management)
- [Configuration System](#configuration-system)
- [Example Strategies](#example-strategies)
- [Validation System](#validation-system)
- [API Reference](#api-reference)
- [Examples](#examples)
- [Best Practices](#best-practices)
## Overview
### Key Features
- **Modular Architecture**: Chart layers can be independently tested and composed
- **User Indicator Management**: Create, edit, and manage custom indicators with JSON persistence
- **Strategy-Driven Configuration**: JSON-based configurations for different trading strategies
- **Comprehensive Validation**: 10+ validation rules with detailed error reporting
- **Example Strategies**: 5 real-world trading strategy templates
- **Indicator Support**: 26+ professionally configured indicator presets
- **Extensible Design**: Easy to add new indicators, strategies, and chart types
### Supported Indicators
**Trend Indicators:**
- Simple Moving Average (SMA) - Multiple periods
- Exponential Moving Average (EMA) - Multiple periods
- Bollinger Bands - Various configurations
**Momentum Indicators:**
- Relative Strength Index (RSI) - Multiple periods
- MACD - Various speed configurations
**Volume Indicators:**
- Volume analysis and confirmation
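The calculations behind these indicators are short, vectorized pandas operations. A minimal sketch of three of them, for orientation only (the RSI here uses the simple-average variant; this is illustrative, not the platform's implementation):

```python
import pandas as pd

def sma(close: pd.Series, period: int) -> pd.Series:
    """Simple moving average over a rolling window."""
    return close.rolling(window=period).mean()

def ema(close: pd.Series, period: int) -> pd.Series:
    """Exponential moving average via pandas' exponentially weighted mean."""
    return close.ewm(span=period, adjust=False).mean()

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """RSI from average gains and losses (simple averages; Wilder smoothing differs)."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(period).mean()
    loss = (-delta.clip(upper=0)).rolling(period).mean()
    rs = gain / loss
    return 100 - 100 / (1 + rs)
```

Because everything stays inside pandas, the same functions batch cleanly across symbols and price columns.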
## Architecture
```
components/charts/
├── indicator_manager.py # User indicator CRUD operations
├── indicator_defaults.py # Default indicator templates
├── config/ # Configuration management
│ ├── indicator_defs.py # Indicator schemas and validation
│ ├── defaults.py # Default configurations and presets
│ ├── strategy_charts.py # Strategy-specific configurations
│ ├── validation.py # Validation system
│ ├── example_strategies.py # Real-world strategy examples
│ └── __init__.py # Package exports
├── layers/ # Chart layer implementation
│ ├── base.py # Base layer system
│ ├── indicators.py # Indicator overlays
│ ├── subplots.py # Subplot management
│ └── signals.py # Signal overlays (future)
├── builder.py # Main chart builder
└── utils.py # Chart utilities
dashboard/ # Modular dashboard integration
├── layouts/market_data.py # Chart layout with controls
├── callbacks/charts.py # Chart update callbacks
├── components/
│ ├── chart_controls.py # Reusable chart configuration panel
│ └── indicator_modal.py # Indicator management UI
config/indicators/
└── user_indicators/ # User-created indicators (JSON files)
├── sma_abc123.json
├── ema_def456.json
└── ...
```
## Dashboard Integration
The chart system is fully integrated with the modular dashboard structure:
### Modular Components
- **`dashboard/layouts/market_data.py`** - Chart layout with strategy selection and indicator controls
- **`dashboard/callbacks/charts.py`** - Chart update callbacks with strategy handling
- **`dashboard/components/chart_controls.py`** - Reusable chart configuration panel
- **`dashboard/components/indicator_modal.py`** - Complete indicator management interface
### Key Features
- **Strategy Dropdown**: Auto-loads predefined indicator combinations
- **Real-time Updates**: Charts update immediately with indicator changes
- **Modular Architecture**: Each component under 300 lines for maintainability
- **Separated Concerns**: Layouts, callbacks, and components in dedicated modules
### Usage in Dashboard
```python
# From dashboard/layouts/market_data.py
from components.charts.config import get_available_strategy_names
from components.charts.indicator_manager import get_indicator_manager
# Get available strategies for dropdown
strategy_names = get_available_strategy_names()
strategy_options = [{'label': name.replace('_', ' ').title(), 'value': name}
                    for name in strategy_names]
# Get user indicators for checklists
indicator_manager = get_indicator_manager()
overlay_indicators = indicator_manager.get_indicators_by_type('overlay')
subplot_indicators = indicator_manager.get_indicators_by_type('subplot')
```
For complete dashboard documentation, see [Dashboard Modular Structure (`../dashboard-modular-structure.md`)](../dashboard-modular-structure.md).
## User Indicator Management
The system includes a comprehensive user indicator management system that allows creating, editing, and managing custom technical indicators.
### Features
- **Interactive UI**: Modal dialog for creating and editing indicators
- **Real-time Updates**: Charts update immediately when indicators are toggled
- **JSON Persistence**: Each indicator saved as individual JSON file
- **Full CRUD Operations**: Create, Read, Update, Delete functionality
- **Type Validation**: Parameter validation based on indicator type
- **Custom Styling**: Color, line width, and appearance customization
### Quick Access
- **📊 [Complete Indicator Documentation (`indicators.md`)](./indicators.md)** - Comprehensive guide to the indicator system
- **⚡ [Quick Guide: Adding New Indicators (`adding-new-indicators.md`)](./adding-new-indicators.md)** - Step-by-step checklist for developers
### Current User Indicators
| Indicator | Type | Parameters | Display |
|-----------|------|------------|---------|
| Simple Moving Average (SMA) | `sma` | period (1-200) | Overlay |
| Exponential Moving Average (EMA) | `ema` | period (1-200) | Overlay |
| Bollinger Bands | `bollinger_bands` | period (5-100), std_dev (0.5-5.0) | Overlay |
| Relative Strength Index (RSI) | `rsi` | period (2-50) | Subplot |
| MACD | `macd` | fast_period, slow_period, signal_period | Subplot |
### Usage Example
```python
# Get indicator manager
from components.charts.indicator_manager import get_indicator_manager
manager = get_indicator_manager()
# Create new indicator
indicator = manager.create_indicator(
    name="My SMA 50",
    indicator_type="sma",
    parameters={"period": 50},
    description="50-period Simple Moving Average",
    color="#ff0000"
)
# Load and update
loaded = manager.load_indicator("sma_abc123")
success = manager.update_indicator("sma_abc123", name="Updated SMA")
# Get indicators by type
overlay_indicators = manager.get_indicators_by_type("overlay")
subplot_indicators = manager.get_indicators_by_type("subplot")
```
## Quick Start
### Basic Usage
```python
from components.charts.config import (
    create_ema_crossover_strategy,
    get_strategy_config,
    validate_configuration
)
# Get a pre-built strategy
strategy = create_ema_crossover_strategy()
config = strategy.config
# Validate the configuration
report = validate_configuration(config)
if report.is_valid:
    print("Configuration is valid!")
else:
    print(f"Errors: {[str(e) for e in report.errors]}")
# Use with dashboard
# chart = create_chart(config, market_data)
```
### Custom Strategy Creation
```python
from components.charts.config import (
    StrategyChartConfig,
    SubplotConfig,
    ChartStyle,
    TradingStrategy,
    SubplotType
)

# Create custom strategy
config = StrategyChartConfig(
    strategy_name="My Custom Strategy",
    strategy_type=TradingStrategy.DAY_TRADING,
    description="Custom day trading strategy",
    timeframes=["15m", "1h"],
    overlay_indicators=["ema_12", "ema_26", "bb_20_20"],
    subplot_configs=[
        SubplotConfig(
            subplot_type=SubplotType.RSI,
            height_ratio=0.2,
            indicators=["rsi_14"]
        )
    ]
)
# Validate and use
is_valid, errors = config.validate()
```
## Components
### 1. Configuration System
The configuration system provides schema validation, default presets, and strategy management.
**Key Files:**
- `indicator_defs.py` - Core schemas and validation
- `defaults.py` - 26+ indicator presets organized by category
- `strategy_charts.py` - Complete strategy configurations
**Features:**
- Type-safe indicator definitions
- Parameter validation with ranges
- Category-based organization (trend, momentum, volatility)
- Strategy-specific recommendations
### 2. Validation System
Comprehensive validation with 10 validation rules:
1. **Required Fields** - Essential configuration validation
2. **Height Ratios** - Chart layout validation
3. **Indicator Existence** - Indicator availability check
4. **Timeframe Format** - Valid timeframe patterns
5. **Chart Style** - Color and styling validation
6. **Subplot Config** - Subplot compatibility check
7. **Strategy Consistency** - Strategy-timeframe alignment
8. **Performance Impact** - Resource usage warnings
9. **Indicator Conflicts** - Redundancy detection
10. **Resource Usage** - Memory and rendering estimates
**Usage:**
```python
from components.charts.config import validate_configuration
report = validate_configuration(config)
print(f"Valid: {report.is_valid}")
print(f"Errors: {len(report.errors)}")
print(f"Warnings: {len(report.warnings)}")
```
### 3. Example Strategies
Five professionally configured trading strategies:
1. **EMA Crossover** (Intermediate, Medium Risk)
   - Classic trend-following with EMA crossovers
   - Best for trending markets, 15m-4h timeframes
2. **Momentum Breakout** (Advanced, High Risk)
   - Fast indicators for momentum capture
   - Volume confirmation, best for volatile markets
3. **Mean Reversion** (Intermediate, Medium Risk)
   - Oversold/overbought conditions
   - Multiple RSI periods, best for ranging markets
4. **Scalping** (Advanced, High Risk)
   - Ultra-fast indicators for 1m-5m trading
   - Tight risk management, high frequency
5. **Swing Trading** (Beginner, Medium Risk)
   - Medium-term trend following
   - 4h-1d timeframes, suitable for part-time traders
## Configuration System
### Indicator Definitions
Each indicator has a complete schema definition:
```python
@dataclass
class ChartIndicatorConfig:
    indicator_type: IndicatorType
    parameters: Dict[str, Any]
    display_name: str
    color: str
    line_style: LineStyle
    line_width: int
    display_type: DisplayType
### Strategy Configuration
Complete strategy definitions include:
```python
@dataclass
class StrategyChartConfig:
    strategy_name: str
    strategy_type: TradingStrategy
    description: str
    timeframes: List[str]
    layout: ChartLayout
    main_chart_height: float
    overlay_indicators: List[str]
    subplot_configs: List[SubplotConfig]
    chart_style: ChartStyle
### Default Configurations
26+ indicator presets organized by category:
- **Trend Indicators**: 13 SMA/EMA presets
- **Momentum Indicators**: 9 RSI/MACD presets
- **Volatility Indicators**: 4 Bollinger Bands configurations
Access via:
```python
from components.charts.config import (
    get_all_default_indicators,
    get_indicators_by_category,
    IndicatorCategory
)

indicators = get_all_default_indicators()
trend_indicators = get_indicators_by_category(IndicatorCategory.TREND)
```
## Example Strategies
### EMA Crossover Strategy
```python
from components.charts.config import create_ema_crossover_strategy
strategy = create_ema_crossover_strategy()
config = strategy.config
# Strategy includes:
# - EMA 12, 26, 50 for trend analysis
# - RSI 14 for momentum confirmation
# - MACD for signal confirmation
# - Bollinger Bands for volatility context
```
### Custom Strategy Creation
```python
from components.charts.config import create_custom_strategy_config, TradingStrategy

config, errors = create_custom_strategy_config(
    strategy_name="My Strategy",
    strategy_type=TradingStrategy.MOMENTUM,
    description="Custom momentum strategy",
    timeframes=["5m", "15m"],
    overlay_indicators=["ema_8", "ema_21"],
    subplot_configs=[{
        "subplot_type": "rsi",
        "height_ratio": 0.2,
        "indicators": ["rsi_7"]
    }]
)
```
## Validation System
### Comprehensive Validation
```python
from components.charts.config import validate_configuration
# Full validation with detailed reporting
report = validate_configuration(config)
# Check results
if report.is_valid:
    print("✅ Configuration is valid")
else:
    print("❌ Configuration has errors:")
    for error in report.errors:
        print(error)

# Check warnings
if report.warnings:
    print("⚠️ Warnings:")
    for warning in report.warnings:
        print(warning)
```
### Validation Rules Information
```python
from components.charts.config import get_validation_rules_info
rules = get_validation_rules_info()
for rule, info in rules.items():
print(f"{info['name']}: {info['description']}")
```
## API Reference
### Core Classes
#### `StrategyChartConfig`
Main configuration class for chart strategies.
**Methods:**
- `validate()` → `tuple[bool, List[str]]` - Basic validation
- `validate_comprehensive()` → `ValidationReport` - Detailed validation
- `get_all_indicators()` → `List[str]` - Get all indicator names
- `get_indicator_configs()` → `Dict[str, ChartIndicatorConfig]` - Get configurations
#### `StrategyExample`
Container for example strategies with metadata.
**Properties:**
- `config: StrategyChartConfig` - The strategy configuration
- `description: str` - Detailed strategy description
- `difficulty: str` - Beginner/Intermediate/Advanced
- `risk_level: str` - Low/Medium/High
- `market_conditions: List[str]` - Suitable market conditions
### Utility Functions
#### Configuration Access
```python
# Get all example strategies
get_all_example_strategies() -> Dict[str, StrategyExample]

# Filter by criteria
get_strategies_by_difficulty("Intermediate") -> List[StrategyExample]
get_strategies_by_risk_level("Medium") -> List[StrategyExample]
get_strategies_by_market_condition("Trending") -> List[StrategyExample]

# Get strategy summary
get_strategy_summary() -> Dict[str, Dict[str, str]]
```
#### JSON Export/Import
```python
# Export to JSON
export_strategy_config_to_json(config) -> str
export_example_strategies_to_json() -> str

# Import from JSON
load_strategy_config_from_json(json_data) -> tuple[StrategyChartConfig, List[str]]
```
#### Validation
```python
# Comprehensive validation
validate_configuration(config, rules=None, strict=False) -> ValidationReport

# Get validation rules info
get_validation_rules_info() -> Dict[ValidationRule, Dict[str, str]]
```
## Examples
### Example 1: Using Pre-built Strategy
```python
from components.charts.config import get_example_strategy
# Get a specific strategy
strategy = get_example_strategy("ema_crossover")
print(f"Strategy: {strategy.config.strategy_name}")
print(f"Difficulty: {strategy.difficulty}")
print(f"Risk Level: {strategy.risk_level}")
print(f"Timeframes: {strategy.config.timeframes}")
print(f"Indicators: {strategy.config.overlay_indicators}")
# Validate before use
is_valid, errors = strategy.config.validate()
if is_valid:
    # Use in dashboard
    pass
```
### Example 2: Creating Custom Configuration
```python
from components.charts.config import (
    StrategyChartConfig, SubplotConfig, ChartStyle,
    TradingStrategy, SubplotType, ChartLayout
)

# Create custom configuration
config = StrategyChartConfig(
    strategy_name="Custom Momentum Strategy",
    strategy_type=TradingStrategy.MOMENTUM,
    description="Fast momentum strategy with volume confirmation",
    timeframes=["5m", "15m"],
    layout=ChartLayout.MAIN_WITH_SUBPLOTS,
    main_chart_height=0.65,
    overlay_indicators=["ema_8", "ema_21", "bb_20_25"],
    subplot_configs=[
        SubplotConfig(
            subplot_type=SubplotType.RSI,
            height_ratio=0.15,
            indicators=["rsi_7"],
            title="Fast RSI"
        ),
        SubplotConfig(
            subplot_type=SubplotType.VOLUME,
            height_ratio=0.2,
            indicators=[],
            title="Volume Confirmation"
        )
    ],
    chart_style=ChartStyle(
        theme="plotly_white",
        candlestick_up_color="#00d4aa",
        candlestick_down_color="#fe6a85"
    )
)
# Comprehensive validation
report = config.validate_comprehensive()
print(f"Validation: {report.summary()}")
```
### Example 3: Filtering Strategies
```python
from components.charts.config import (
    get_strategies_by_difficulty,
    get_strategies_by_market_condition
)

# Get beginner-friendly strategies
beginner_strategies = get_strategies_by_difficulty("Beginner")
print("Beginner Strategies:")
for strategy in beginner_strategies:
    print(strategy.config.strategy_name)

# Get strategies for trending markets
trending_strategies = get_strategies_by_market_condition("Trending")
print("\nTrending Market Strategies:")
for strategy in trending_strategies:
    print(strategy.config.strategy_name)
```
### Example 4: Validation with Error Handling
```python
from components.charts.config import validate_configuration, ValidationLevel
# Validate with comprehensive reporting
report = validate_configuration(config)
# Handle different severity levels
if report.errors:
    print("🚨 ERRORS (must fix):")
    for error in report.errors:
        print(error)

if report.warnings:
    print("\n⚠️ WARNINGS (recommended fixes):")
    for warning in report.warnings:
        print(warning)

if report.info:
    print("\nINFO (optimization suggestions):")
    for info in report.info:
        print(info)

# Check specific validation rules
height_issues = report.get_issues_by_rule(ValidationRule.HEIGHT_RATIOS)
if height_issues:
    print(f"\nHeight ratio issues: {len(height_issues)}")
```
## Best Practices
### 1. Configuration Design
- **Use meaningful names**: Strategy names should be descriptive
- **Validate early**: Always validate configurations before use
- **Consider timeframes**: Match timeframes to strategy type
- **Height ratios**: Ensure total height ≤ 1.0
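The height-ratio rule can be enforced with a couple of lines. These helpers are hypothetical, not part of the validation API, but they show exactly what the rule checks:

```python
def total_height(main_chart_height, subplot_ratios):
    """Sum of the main chart height plus all subplot height ratios."""
    return main_chart_height + sum(subplot_ratios)

def heights_valid(main_chart_height, subplot_ratios, tol=1e-9):
    """The combined height must not exceed 1.0 (with a float tolerance)."""
    return total_height(main_chart_height, subplot_ratios) <= 1.0 + tol
```

For example, a 0.65 main chart with 0.15 and 0.2 subplots sums to exactly 1.0 and passes, while adding another subplot would fail.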
### 2. Indicator Selection
- **Avoid redundancy**: Don't use multiple similar indicators
- **Performance impact**: Limit complex indicators (>3 Bollinger Bands)
- **Category balance**: Mix trend, momentum, and volume indicators
- **Timeframe alignment**: Use appropriate indicator periods
### 3. Strategy Development
- **Start simple**: Begin with proven strategies like EMA crossover
- **Test thoroughly**: Validate both technically and with market data
- **Document well**: Include entry/exit rules and market conditions
- **Consider risk**: Match complexity to experience level
### 4. Validation Usage
- **Use comprehensive validation**: Get detailed reports with suggestions
- **Handle warnings**: Address performance and usability warnings
- **Test edge cases**: Validate with extreme configurations
- **Monitor updates**: Re-validate when changing configurations
### 5. Performance Optimization
- **Limit indicators**: Keep total indicators <10 for performance
- **Monitor memory**: Check resource usage warnings
- **Optimize rendering**: Consider visual complexity
- **Cache configurations**: Reuse validated configurations
## Error Handling
### Common Issues and Solutions
1. **"Indicator not found in defaults"**

   ```python
   # Check available indicators
   from components.charts.config import get_all_default_indicators
   available = get_all_default_indicators()
   print(list(available.keys()))
   ```

2. **"Total height exceeds 1.0"**

   ```python
   # Adjust height ratios
   config.main_chart_height = 0.7
   for subplot in config.subplot_configs:
       subplot.height_ratio = 0.1
   ```

3. **"Invalid timeframe format"**

   ```python
   # Use standard formats
   config.timeframes = ["1m", "5m", "15m", "1h", "4h", "1d", "1w"]
   ```
## Testing
The system includes comprehensive tests:
- **112+ test cases** across all components
- **Unit tests** for individual components
- **Integration tests** for system interactions
- **Validation tests** for error handling
Run tests:
```bash
uv run pytest tests/test_*_strategies.py -v
uv run pytest tests/test_validation.py -v
uv run pytest tests/test_defaults.py -v
```
## Future Enhancements
- **✅ Signal Layer Integration**: Bot trade signals and alerts - **IMPLEMENTED** - See [Bot Integration Guide (`bot-integration.md`)](./bot-integration.md)
- **Custom Indicators**: User-defined technical indicators
- **Advanced Layouts**: Multi-chart and grid layouts
- **Real-time Updates**: Live chart updates with indicator toggling
- **Performance Monitoring**: Advanced resource usage tracking
## Bot Integration
The chart system now includes comprehensive bot integration capabilities:
- **Real-time Signal Visualization**: Live bot signals on charts
- **Trade Execution Tracking**: P&L and trade entry/exit points
- **Multi-Bot Support**: Compare strategies across multiple bots
- **Performance Analytics**: Built-in bot performance metrics
📊 **[Complete Bot Integration Guide (`bot-integration.md`)](./bot-integration.md)** - Comprehensive documentation for integrating bot signals with charts
## Support
For issues, questions, or contributions:
1. Check existing configurations in `example_strategies.py`
2. Review validation rules in `validation.py`
3. Test with comprehensive validation
4. Refer to this documentation
The modular chart system is designed to be extensible and maintainable, providing a solid foundation for advanced trading chart functionality.
---
*Back to [Modules Documentation](../README.md)*


@@ -0,0 +1,249 @@
# Quick Guide: Adding New Indicators
## Overview
This guide provides a step-by-step checklist for adding new technical indicators to the Crypto Trading Bot Dashboard, updated for the new modular dashboard structure.
## Prerequisites
- Understanding of Python and technical analysis
- Familiarity with the project structure and Dash callbacks
- Knowledge of the indicator type (overlay vs subplot)
## Step-by-Step Checklist
### ✅ Step 1: Plan Your Indicator
- [ ] Determine indicator type (overlay or subplot)
- [ ] Define required parameters
- [ ] Choose default styling
- [ ] Research calculation formula
### ✅ Step 2: Create Indicator Class
**File**: `components/charts/layers/indicators.py` (overlay) or `components/charts/layers/subplots.py` (subplot)
Create a class for your indicator that inherits from `IndicatorLayer`.
```python
import pandas as pd
import plotly.graph_objects as go
from typing import Any, Dict, List

class StochasticLayer(IndicatorLayer):
    def __init__(self, config: Dict[str, Any]):
        super().__init__(config)
        self.name = "stochastic"
        self.display_type = "subplot"

    def calculate_values(self, df: pd.DataFrame) -> Dict[str, pd.Series]:
        k_period = self.config.get('k_period', 14)
        d_period = self.config.get('d_period', 3)
        # %K: position of the close within the recent high-low range
        lowest_low = df['low'].rolling(window=k_period).min()
        highest_high = df['high'].rolling(window=k_period).max()
        k_percent = 100 * ((df['close'] - lowest_low) / (highest_high - lowest_low))
        d_percent = k_percent.rolling(window=d_period).mean()  # %D smooths %K
        return {'k_percent': k_percent, 'd_percent': d_percent}

    def create_traces(self, df: pd.DataFrame, values: Dict[str, pd.Series]) -> List[go.Scatter]:
        line_width = self.config.get('line_width', 2)
        return [
            go.Scatter(x=df.index, y=values['k_percent'], mode='lines',
                       name=f"%K ({self.config.get('k_period', 14)})",
                       line=dict(color=self.config.get('color', '#007bff'), width=line_width)),
            go.Scatter(x=df.index, y=values['d_percent'], mode='lines',
                       name=f"%D ({self.config.get('d_period', 3)})",
                       line=dict(color=self.config.get('secondary_color', '#ff6b35'), width=line_width)),
        ]
```
### ✅ Step 3: Register Indicator
**File**: `components/charts/layers/__init__.py`
Register your new indicator class in the appropriate registry.
```python
from .subplots import StochasticLayer

SUBPLOT_REGISTRY = {
    'rsi': RSILayer,
    'macd': MACDLayer,
    'stochastic': StochasticLayer,
}

INDICATOR_REGISTRY = {
    'sma': SMALayer,
    'ema': EMALayer,
    'bollinger_bands': BollingerBandsLayer,
}
```
### ✅ Step 4: Add UI Dropdown Option
**File**: `dashboard/components/indicator_modal.py`
Add your new indicator to the `indicator-type-dropdown` options.
```python
dcc.Dropdown(
    id='indicator-type-dropdown',
    options=[
        {'label': 'Simple Moving Average (SMA)', 'value': 'sma'},
        {'label': 'Exponential Moving Average (EMA)', 'value': 'ema'},
        {'label': 'Relative Strength Index (RSI)', 'value': 'rsi'},
        {'label': 'MACD', 'value': 'macd'},
        {'label': 'Bollinger Bands', 'value': 'bollinger_bands'},
        {'label': 'Stochastic Oscillator', 'value': 'stochastic'},
    ],
    placeholder='Select indicator type',
)
```
### ✅ Step 5: Add Parameter Fields to Modal
**File**: `dashboard/components/indicator_modal.py`
In `create_parameter_fields`, add the `dcc.Input` components for your indicator's parameters.
```python
def create_parameter_fields():
    return html.Div([
        # ... existing parameter fields ...
        html.Div([
            dbc.Row([
                dbc.Col([dbc.Label("%K Period:"),
                         dcc.Input(id='stochastic-k-period-input', type='number', value=14)], width=6),
                dbc.Col([dbc.Label("%D Period:"),
                         dcc.Input(id='stochastic-d-period-input', type='number', value=3)], width=6),
            ]),
            dbc.FormText("Stochastic oscillator periods for %K and %D lines")
        ], id='stochastic-parameters', style={'display': 'none'}, className="mb-3")
    ])
```
### ✅ Step 6: Update Parameter Visibility Callback
**File**: `dashboard/callbacks/indicators.py`
In `update_parameter_fields`, add an `Output` and logic to show/hide your new parameter fields.
```python
@app.callback(
[Output('indicator-parameters-message', 'style'),
Output('sma-parameters', 'style'),
Output('ema-parameters', 'style'),
Output('rsi-parameters', 'style'),
Output('macd-parameters', 'style'),
Output('bb-parameters', 'style'),
Output('stochastic-parameters', 'style')],
Input('indicator-type-dropdown', 'value'),
)
def update_parameter_fields(indicator_type):
    # Keys must match the dropdown values exactly; the dict order must match
    # the Output list above ('bollinger_bands' maps to 'bb-parameters').
    styles = {
        'sma': {'display': 'none'},
        'ema': {'display': 'none'},
        'rsi': {'display': 'none'},
        'macd': {'display': 'none'},
        'bollinger_bands': {'display': 'none'},
        'stochastic': {'display': 'none'},
    }
    message_style = {'display': 'block'} if not indicator_type else {'display': 'none'}
    if indicator_type:
        styles[indicator_type] = {'display': 'block'}
    return [message_style] + list(styles.values())
```
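Because the show/hide logic is a pure function of the dropdown value, it can be unit-tested without running Dash at all. The helper below is an illustrative sketch (the function name and key set are assumptions, not the dashboard's actual code):

```python
def parameter_styles(indicator_type,
                     known_types=('sma', 'ema', 'rsi', 'macd',
                                  'bollinger_bands', 'stochastic')):
    """Return [message_style, *field_styles] mirroring the callback's output order."""
    styles = {t: {'display': 'none'} for t in known_types}
    message_style = {'display': 'block'} if not indicator_type else {'display': 'none'}
    if indicator_type in styles:
        styles[indicator_type] = {'display': 'block'}
    return [message_style] + list(styles.values())
```

Selecting `'stochastic'` should hide the hint message and reveal exactly one parameter block; passing `None` should do the opposite.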
### ✅ Step 7: Update Save Indicator Callback
**File**: `dashboard/callbacks/indicators.py`
In `save_new_indicator`, add `State` inputs for your parameters and logic to collect them.
```python
@app.callback(
# ... Outputs ...
Input('save-indicator-btn', 'n_clicks'),
[# ... States ...
State('stochastic-k-period-input', 'value'),
State('stochastic-d-period-input', 'value'),
State('edit-indicator-store', 'data')],
)
def save_new_indicator(n_clicks, name, indicator_type, ..., stochastic_k, stochastic_d, edit_data):
# ...
elif indicator_type == 'stochastic':
parameters = {'k_period': stochastic_k or 14, 'd_period': stochastic_d or 3}
# ...
```
### ✅ Step 8: Update Edit Callback Parameters
**File**: `dashboard/callbacks/indicators.py`
In `edit_indicator`, add `Output`s for your parameter fields and logic to load values.
```python
@app.callback(
[# ... Outputs ...
Output('stochastic-k-period-input', 'value'),
Output('stochastic-d-period-input', 'value')],
Input({'type': 'edit-indicator-btn', 'index': dash.ALL}, 'n_clicks'),
)
def edit_indicator(edit_clicks, button_ids):
# ...
stochastic_k, stochastic_d = 14, 3
if indicator:
# ...
elif indicator.type == 'stochastic':
stochastic_k = params.get('k_period', 14)
stochastic_d = params.get('d_period', 3)
return (..., stochastic_k, stochastic_d)
```
### ✅ Step 9: Update Reset Callback
**File**: `dashboard/callbacks/indicators.py`
In `reset_modal_form`, add `Output`s for your parameter fields and their default values.
```python
@app.callback(
[# ... Outputs ...
Output('stochastic-k-period-input', 'value', allow_duplicate=True),
Output('stochastic-d-period-input', 'value', allow_duplicate=True)],
Input('cancel-indicator-btn', 'n_clicks'),
)
def reset_modal_form(cancel_clicks):
# ...
return ..., 14, 3
```
### ✅ Step 10: Create Default Template
**File**: `components/charts/indicator_defaults.py`
Create a default template for your indicator.
```python
def create_stochastic_template() -> UserIndicator:
return UserIndicator(
id=f"stochastic_{generate_short_id()}",
name="Stochastic 14,3",
type="stochastic",
display_type="subplot",
parameters={"k_period": 14, "d_period": 3},
styling=IndicatorStyling(color="#9c27b0", line_width=2)
)
DEFAULT_TEMPLATES = {
# ...
"stochastic": create_stochastic_template,
}
```
### ✅ Step 11: Add Calculation Function (Optional)
**File**: `data/common/indicators.py`
Add a standalone calculation function.
```python
def calculate_stochastic(df: pd.DataFrame, k_period: int = 14, d_period: int = 3) -> tuple:
    """Return (%K, %D) series in the 0-100 range.

    The first k_period - 1 values are NaN while the rolling window warms up.
    """
    lowest_low = df['low'].rolling(window=k_period).min()
    highest_high = df['high'].rolling(window=k_period).max()
    k_percent = 100 * ((df['close'] - lowest_low) / (highest_high - lowest_low))
    d_percent = k_percent.rolling(window=d_period).mean()
    return k_percent, d_percent
```
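A quick sanity check of the formula on a tiny hand-built OHLC frame (values chosen for easy arithmetic, not real market data; the function is restated so the snippet runs standalone):

```python
import pandas as pd

def calculate_stochastic(df, k_period=14, d_period=3):
    lowest_low = df['low'].rolling(window=k_period).min()
    highest_high = df['high'].rolling(window=k_period).max()
    k_percent = 100 * ((df['close'] - lowest_low) / (highest_high - lowest_low))
    d_percent = k_percent.rolling(window=d_period).mean()
    return k_percent, d_percent

df = pd.DataFrame({
    'high':  [10, 12, 14, 13, 15],
    'low':   [ 8,  9, 11, 10, 12],
    'close': [ 9, 11, 13, 12, 14],
})

k, d = calculate_stochastic(df, k_period=3, d_period=2)
# Last bar: %K = 100 * (14 - 10) / (15 - 10) = 80.0
# Last %D = mean of the last two %K values = (60.0 + 80.0) / 2 = 70.0
```

Note that the first `k_period - 1` rows of `%K` are NaN while the rolling window fills.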
## File Change Summary
When adding a new indicator, you'll typically modify these files:
1. **`components/charts/layers/indicators.py`** or **`subplots.py`**
2. **`components/charts/layers/__init__.py`**
3. **`dashboard/components/indicator_modal.py`**
4. **`dashboard/callbacks/indicators.py`**
5. **`components/charts/indicator_defaults.py`**
6. **`data/common/indicators.py`** (optional)

# Bot Integration with Chart Signal Layers
> **⚠️ Feature Not Yet Implemented**
>
> The functionality described in this document for bot integration with chart layers is **planned for a future release**. It depends on the **Strategy Engine** and **Bot Manager**, which are not yet implemented. This document outlines the intended architecture and usage once these components are available.
The Chart Layers System provides seamless integration with the bot management system, allowing real-time visualization of bot signals, trades, and performance data directly on charts.
## Table of Contents
- [Overview](#overview)
- [Architecture](#architecture)
- [Quick Start](#quick-start)
- [Bot Data Service](#bot-data-service)
- [Signal Layer Integration](#signal-layer-integration)
- [Enhanced Bot Layers](#enhanced-bot-layers)
- [Multi-Bot Visualization](#multi-bot-visualization)
- [Configuration Options](#configuration-options)
- [Examples](#examples)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)
## Overview
The bot integration system provides automatic data fetching and visualization of:
- **Trading Signals**: Buy/sell/hold signals from active bots
- **Trade Executions**: Entry/exit points with P&L information
- **Bot Performance**: Real-time performance metrics and analytics
- **Strategy Comparison**: Side-by-side strategy analysis
- **Multi-Bot Views**: Aggregate views across multiple bots
### Key Features
- **Automatic Data Fetching**: No manual data queries required
- **Real-time Updates**: Charts update with live bot data
- **Database Integration**: Direct connection to bot management system
- **Advanced Filtering**: Filter by bot, strategy, symbol, timeframe
- **Performance Analytics**: Built-in performance calculation
- **Error Handling**: Graceful handling of database errors
## Architecture
```
components/charts/layers/
├── bot_integration.py # Core bot data services
├── bot_enhanced_layers.py # Enhanced chart layers with bot integration
└── signals.py # Base signal layers
Bot Integration Components:
├── BotFilterConfig # Configuration for bot filtering
├── BotDataService # Database operations for bot data
├── BotSignalLayerIntegration # Chart-specific integration utilities
├── BotIntegratedSignalLayer # Auto-fetching signal layer
├── BotIntegratedTradeLayer # Auto-fetching trade layer
└── BotMultiLayerIntegration # Multi-bot layer management
```
## Quick Start
### Basic Bot Signal Visualization
```python
from components.charts.layers import create_bot_signal_layer
# Create a bot-integrated signal layer for BTCUSDT
signal_layer = create_bot_signal_layer(
symbol='BTCUSDT',
active_only=True,
confidence_threshold=0.5,
time_window_days=7
)
# Add to chart
fig = go.Figure()
fig = signal_layer.render(fig, market_data, symbol='BTCUSDT')
```
### Complete Bot Visualization Setup
```python
from components.charts.layers import create_complete_bot_layers
# Create complete bot layer set for a symbol
result = create_complete_bot_layers(
symbol='BTCUSDT',
timeframe='1h',
active_only=True,
time_window_days=7
)
if result['success']:
signal_layer = result['layers']['signals']
trade_layer = result['layers']['trades']
# Add to chart
fig = signal_layer.render(fig, market_data, symbol='BTCUSDT')
fig = trade_layer.render(fig, market_data, symbol='BTCUSDT')
```
## Bot Data Service
The `BotDataService` provides the core interface for fetching bot-related data from the database.
### Basic Usage
```python
from components.charts.layers.bot_integration import BotDataService, BotFilterConfig
# Initialize service
service = BotDataService()
# Create filter configuration
bot_filter = BotFilterConfig(
symbols=['BTCUSDT'],
strategies=['momentum', 'ema_crossover'],
active_only=True
)
# Fetch bot data
bots_df = service.get_bots(bot_filter)
signals_df = service.get_signals_for_bots(
bot_ids=bots_df['id'].tolist(),
start_time=datetime.now() - timedelta(days=7),
end_time=datetime.now(),
min_confidence=0.3
)
```
### Available Methods
| Method | Description | Parameters |
|--------|-------------|------------|
| `get_bots()` | Fetch bot information | `filter_config: BotFilterConfig` |
| `get_signals_for_bots()` | Fetch signals from bots | `bot_ids, start_time, end_time, signal_types, min_confidence` |
| `get_trades_for_bots()` | Fetch trades from bots | `bot_ids, start_time, end_time, sides` |
| `get_bot_performance()` | Fetch performance data | `bot_ids, start_time, end_time` |
### BotFilterConfig Options
```python
@dataclass
class BotFilterConfig:
bot_ids: Optional[List[int]] = None # Specific bot IDs
bot_names: Optional[List[str]] = None # Specific bot names
strategies: Optional[List[str]] = None # Strategy filter
symbols: Optional[List[str]] = None # Symbol filter
statuses: Optional[List[str]] = None # Bot status filter
date_range: Optional[Tuple[datetime, datetime]] = None
active_only: bool = False # Only active bots
```
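To make the filter semantics concrete: all populated fields combine with AND, and unset fields are ignored. The sketch below illustrates that behavior with a trimmed-down config and plain dicts; it mirrors, but is not, the service's actual query logic:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SimpleBotFilter:
    """Trimmed-down stand-in for BotFilterConfig (illustration only)."""
    bot_ids: Optional[List[int]] = None
    symbols: Optional[List[str]] = None
    active_only: bool = False

def matches(bot: dict, f: SimpleBotFilter) -> bool:
    """Every populated filter field must match; unset fields are ignored."""
    if f.bot_ids is not None and bot['id'] not in f.bot_ids:
        return False
    if f.symbols is not None and bot['symbol'] not in f.symbols:
        return False
    if f.active_only and bot['status'] != 'active':
        return False
    return True

bots = [
    {'id': 1, 'symbol': 'BTCUSDT', 'status': 'active'},
    {'id': 2, 'symbol': 'ETHUSDT', 'status': 'stopped'},
]
selected = [b for b in bots
            if matches(b, SimpleBotFilter(symbols=['BTCUSDT'], active_only=True))]
```

An empty filter selects everything, which is why `BotFilterConfig()` with no arguments returns all bots.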
## Signal Layer Integration
The `BotSignalLayerIntegration` provides chart-specific utilities for integrating bot data with chart layers.
### Chart-Specific Signal Fetching
```python
from components.charts.layers.bot_integration import BotSignalLayerIntegration
integration = BotSignalLayerIntegration()
# Get signals for specific chart context
signals_df = integration.get_signals_for_chart(
symbol='BTCUSDT',
timeframe='1h',
bot_filter=BotFilterConfig(active_only=True),
time_range=(start_time, end_time),
signal_types=['buy', 'sell'],
min_confidence=0.5
)
# Get trades for chart context
trades_df = integration.get_trades_for_chart(
symbol='BTCUSDT',
timeframe='1h',
bot_filter=BotFilterConfig(strategies=['momentum']),
time_range=(start_time, end_time)
)
# Get bot summary statistics
stats = integration.get_bot_summary_stats(bot_ids=[1, 2, 3])
```
### Performance Analytics
```python
# Get comprehensive performance summary
performance = get_bot_performance_summary(
bot_id=1, # Specific bot or None for all
days_back=30
)
print(f"Total trades: {performance['trade_count']}")
print(f"Win rate: {performance['win_rate']:.1f}%")
print(f"Total P&L: ${performance['bot_stats']['total_pnl']:.2f}")
```
## Enhanced Bot Layers
Enhanced layers provide automatic data fetching and bot-specific visualization features.
### BotIntegratedSignalLayer
```python
from components.charts.layers import BotIntegratedSignalLayer, BotSignalLayerConfig
# Configure bot-integrated signal layer
config = BotSignalLayerConfig(
name="BTCUSDT Bot Signals",
auto_fetch_data=True, # Automatically fetch from database
time_window_days=7, # Look back 7 days
active_bots_only=True, # Only active bots
include_bot_info=True, # Include bot info in hover
group_by_strategy=True, # Group signals by strategy
confidence_threshold=0.3, # Minimum confidence
signal_types=['buy', 'sell'] # Signal types to show
)
layer = BotIntegratedSignalLayer(config)
# Render automatically fetches data
fig = layer.render(fig, market_data, symbol='BTCUSDT')
```
### BotIntegratedTradeLayer
```python
from components.charts.layers import BotIntegratedTradeLayer, BotTradeLayerConfig
config = BotTradeLayerConfig(
name="BTCUSDT Bot Trades",
auto_fetch_data=True,
time_window_days=7,
show_pnl=True, # Show profit/loss
show_trade_lines=True, # Connect entry/exit
include_bot_info=True, # Bot info in hover
group_by_strategy=False
)
layer = BotIntegratedTradeLayer(config)
fig = layer.render(fig, market_data, symbol='BTCUSDT')
```
## Multi-Bot Visualization
### Strategy Comparison
```python
from components.charts.layers import bot_multi_layer
# Compare multiple strategies on the same symbol
result = bot_multi_layer.create_strategy_comparison_layers(
symbol='BTCUSDT',
strategies=['momentum', 'ema_crossover', 'mean_reversion'],
timeframe='1h',
time_window_days=14
)
if result['success']:
for strategy in result['strategies']:
signal_layer = result['layers'][f"{strategy}_signals"]
trade_layer = result['layers'][f"{strategy}_trades"]
fig = signal_layer.render(fig, market_data, symbol='BTCUSDT')
fig = trade_layer.render(fig, market_data, symbol='BTCUSDT')
```
### Multi-Symbol Bot View
```python
# Create bot layers for multiple symbols
symbols = ['BTCUSDT', 'ETHUSDT', 'ADAUSDT']
for symbol in symbols:
bot_layers = create_complete_bot_layers(
symbol=symbol,
active_only=True,
time_window_days=7
)
if bot_layers['success']:
# Add layers to respective charts
signal_layer = bot_layers['layers']['signals']
# ... render on symbol-specific chart
```
## Configuration Options
### Auto-Fetch Configuration
```python
# Disable auto-fetch for manual data control
config = BotSignalLayerConfig(
name="Manual Bot Signals",
auto_fetch_data=False, # Disable auto-fetch
active_bots_only=True
)
layer = BotIntegratedSignalLayer(config)
# Manually provide signal data
manual_signals = get_signals_from_api()
fig = layer.render(fig, market_data, signals=manual_signals)
```
### Time Window Management
```python
# Custom time window
config = BotSignalLayerConfig(
name="Short Term Signals",
time_window_days=1, # Last 24 hours only
active_bots_only=True,
confidence_threshold=0.7 # High confidence only
)
```
### Bot-Specific Filtering
```python
# Filter for specific bots
bot_filter = BotFilterConfig(
bot_ids=[1, 2, 5], # Specific bot IDs
symbols=['BTCUSDT'],
active_only=True
)
config = BotSignalLayerConfig(
name="Selected Bots",
bot_filter=bot_filter,
include_bot_info=True
)
```
## Examples
### Dashboard Integration Example
```python
# dashboard/callbacks/charts.py
from components.charts.layers import (
create_bot_signal_layer,
create_bot_trade_layer,
get_active_bot_signals
)
@app.callback(
Output('chart', 'figure'),
[Input('symbol-dropdown', 'value'),
Input('show-bot-signals', 'value')]
)
def update_chart_with_bots(symbol, show_bot_signals):
fig = create_base_chart(symbol)
if 'bot-signals' in show_bot_signals:
# Add bot signals
signal_layer = create_bot_signal_layer(
symbol=symbol,
active_only=True,
confidence_threshold=0.3
)
fig = signal_layer.render(fig, market_data, symbol=symbol)
# Add bot trades
trade_layer = create_bot_trade_layer(
symbol=symbol,
active_only=True,
show_pnl=True
)
fig = trade_layer.render(fig, market_data, symbol=symbol)
return fig
```
### Custom Bot Analysis
```python
# Custom analysis for specific strategy
def analyze_momentum_strategy(symbol: str, days_back: int = 30):
"""Analyze momentum strategy performance for a symbol."""
# Get momentum bot signals
signals = get_bot_signals_by_strategy(
strategy_name='momentum',
symbol=symbol,
days_back=days_back
)
# Get performance summary
performance = get_bot_performance_summary(days_back=days_back)
# Create visualizations
signal_layer = create_bot_signal_layer(
symbol=symbol,
active_only=False, # Include all momentum bots
time_window_days=days_back
)
return {
'signals': signals,
'performance': performance,
'layer': signal_layer
}
# Usage
analysis = analyze_momentum_strategy('BTCUSDT', days_back=14)
```
### Real-time Monitoring Setup
```python
# Real-time bot monitoring dashboard component
def create_realtime_bot_monitor(symbols: List[str]):
"""Create real-time bot monitoring charts."""
charts = {}
for symbol in symbols:
# Get latest bot data
active_signals = get_active_bot_signals(
symbol=symbol,
days_back=1, # Last 24 hours
min_confidence=0.5
)
# Create monitoring layers
signal_layer = create_bot_signal_layer(
symbol=symbol,
active_only=True,
time_window_days=1
)
trade_layer = create_bot_trade_layer(
symbol=symbol,
active_only=True,
show_pnl=True,
time_window_days=1
)
charts[symbol] = {
'signal_layer': signal_layer,
'trade_layer': trade_layer,
'active_signals': len(active_signals)
}
return charts
```
## Best Practices
### Performance Optimization
```python
# 1. Use appropriate time windows
config = BotSignalLayerConfig(
time_window_days=7, # Don't fetch more data than needed
confidence_threshold=0.3 # Filter low-confidence signals
)
# 2. Filter by active bots only when possible
bot_filter = BotFilterConfig(
active_only=True, # Reduces database queries
symbols=['BTCUSDT'] # Specific symbols only
)
# 3. Reuse integration instances
integration = BotSignalLayerIntegration() # Create once
# Use multiple times for different symbols
```
### Error Handling
```python
try:
bot_layers = create_complete_bot_layers('BTCUSDT')
if not bot_layers['success']:
logger.warning(f"Bot layer creation failed: {bot_layers.get('error')}")
# Fallback to manual signal layer
signal_layer = TradingSignalLayer()
else:
signal_layer = bot_layers['layers']['signals']
except Exception as e:
logger.error(f"Bot integration error: {e}")
# Graceful degradation
signal_layer = TradingSignalLayer()
```
### Database Connection Management
```python
# The bot integration handles database connections automatically
# But for custom queries, follow these patterns:
from database.connection import get_session
def custom_bot_query():
try:
with get_session() as session:
# Your database operations
result = session.query(Bot).filter(...).all()
return result
except Exception as e:
logger.error(f"Database query failed: {e}")
return []
```
## Troubleshooting
### Common Issues
1. **No signals showing on chart**
```python
# Check if bots exist for symbol
service = BotDataService()
bots = service.get_bots(BotFilterConfig(symbols=['BTCUSDT']))
print(f"Found {len(bots)} bots for BTCUSDT")
# Check signal count
signals = get_active_bot_signals('BTCUSDT', days_back=7)
print(f"Found {len(signals)} signals in last 7 days")
```
2. **Database connection errors**
```python
# Test database connection
try:
from database.connection import get_session
with get_session() as session:
print("Database connection successful")
except Exception as e:
print(f"Database connection failed: {e}")
```
3. **Performance issues with large datasets**
```python
# Reduce time window
config = BotSignalLayerConfig(
time_window_days=3, # Reduced from 7
confidence_threshold=0.5 # Higher threshold
)
# Filter by specific strategies
bot_filter = BotFilterConfig(
strategies=['momentum'], # Specific strategy only
active_only=True
)
```
### Debug Mode
```python
import logging
# Enable debug logging for bot integration
logging.getLogger('bot_integration').setLevel(logging.DEBUG)
logging.getLogger('bot_enhanced_layers').setLevel(logging.DEBUG)
# This will show detailed information about:
# - Database queries
# - Data fetching operations
# - Filter applications
# - Performance metrics
```
### Testing Bot Integration
```python
# Test bot integration components
from tests.test_signal_layers import TestBotIntegration
# Run specific bot integration tests
pytest.main(['-v', 'tests/test_signal_layers.py::TestBotIntegration'])
# Test with mock data
def test_bot_integration():
config = BotSignalLayerConfig(
name="Test Bot Signals",
auto_fetch_data=False # Use manual data for testing
)
layer = BotIntegratedSignalLayer(config)
# Provide test data
test_signals = pd.DataFrame({
'timestamp': [datetime.now()],
'signal_type': ['buy'],
'price': [50000],
'confidence': [0.8],
'bot_name': ['Test Bot']
})
fig = go.Figure()
result = layer.render(fig, market_data, signals=test_signals)
assert len(result.data) > 0
```
## API Reference
### Core Classes
- **`BotDataService`** - Main service for database operations
- **`BotSignalLayerIntegration`** - Chart-specific integration utilities
- **`BotIntegratedSignalLayer`** - Auto-fetching signal layer
- **`BotIntegratedTradeLayer`** - Auto-fetching trade layer
- **`BotMultiLayerIntegration`** - Multi-bot layer management
### Configuration Classes
- **`BotFilterConfig`** - Bot filtering configuration
- **`BotSignalLayerConfig`** - Signal layer configuration with bot options
- **`BotTradeLayerConfig`** - Trade layer configuration with bot options
### Convenience Functions
- **`create_bot_signal_layer()`** - Quick bot signal layer creation
- **`create_bot_trade_layer()`** - Quick bot trade layer creation
- **`create_complete_bot_layers()`** - Complete bot layer set
- **`get_active_bot_signals()`** - Get signals from active bots
- **`get_active_bot_trades()`** - Get trades from active bots
- **`get_bot_signals_by_strategy()`** - Get signals by strategy
- **`get_bot_performance_summary()`** - Get performance analytics
For complete API documentation, see the module docstrings in:
- `components/charts/layers/bot_integration.py`
- `components/charts/layers/bot_enhanced_layers.py`

# Chart Configuration System
The Chart Configuration System provides comprehensive management of chart settings, indicator definitions, and trading strategy configurations. It includes schema validation, default presets, and extensible configuration patterns.
## Table of Contents
- [Overview](#overview)
- [Indicator Definitions](#indicator-definitions)
- [Default Configurations](#default-configurations)
- [Strategy Configurations](#strategy-configurations)
- [Validation System](#validation-system)
- [Configuration Files](#configuration-files)
- [Usage Examples](#usage-examples)
- [Extension Guide](#extension-guide)
## Overview
The configuration system is built around three core concepts:
1. **Indicator Definitions** - Schema and validation for technical indicators
2. **Default Configurations** - Pre-built indicator presets organized by category
3. **Strategy Configurations** - Complete chart setups for trading strategies
### Architecture
```
components/charts/config/
├── indicator_defs.py # Core schemas and validation
├── defaults.py # Default indicator presets
├── strategy_charts.py # Strategy configurations
├── validation.py # Validation system
├── example_strategies.py # Real-world examples
└── __init__.py # Package exports
```
## Indicator Definitions
### Core Classes
#### `ChartIndicatorConfig`
The main configuration class for individual indicators:
```python
@dataclass
class ChartIndicatorConfig:
name: str
indicator_type: str
parameters: Dict[str, Any]
display_type: str # 'overlay', 'subplot'
color: str
line_style: str = 'solid' # 'solid', 'dash', 'dot'
line_width: int = 2
opacity: float = 1.0
visible: bool = True
subplot_height_ratio: float = 0.3 # For subplot indicators
```
#### Enums
**IndicatorType**
```python
class IndicatorType(str, Enum):
SMA = "sma"
EMA = "ema"
RSI = "rsi"
MACD = "macd"
BOLLINGER_BANDS = "bollinger_bands"
VOLUME = "volume"
```
**DisplayType**
```python
class DisplayType(str, Enum):
OVERLAY = "overlay" # Overlaid on price chart
SUBPLOT = "subplot" # Separate subplot
HISTOGRAM = "histogram" # Histogram display
```
**LineStyle**
```python
class LineStyle(str, Enum):
SOLID = "solid"
DASHED = "dash"
DOTTED = "dot"
DASH_DOT = "dashdot"
```
### Schema Validation
#### `IndicatorParameterSchema`
Defines validation rules for indicator parameters:
```python
@dataclass
class IndicatorParameterSchema:
name: str
type: type
required: bool = True
default: Any = None
min_value: Optional[Union[int, float]] = None
max_value: Optional[Union[int, float]] = None
description: str = ""
```
#### `IndicatorSchema`
Complete schema for an indicator type:
```python
@dataclass
class IndicatorSchema:
indicator_type: IndicatorType
display_type: DisplayType
required_parameters: List[IndicatorParameterSchema]
optional_parameters: List[IndicatorParameterSchema] = field(default_factory=list)
min_data_points: int = 1
description: str = ""
```
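Putting the two schema classes together, a single-parameter check boils down to type, requiredness, and bounds. The helper below is a sketch of that idea under the field definitions above (not the module's actual `validate_parameters_for_type` implementation):

```python
from dataclasses import dataclass
from typing import Any, Optional, Union

@dataclass
class IndicatorParameterSchema:
    name: str
    type: type
    required: bool = True
    default: Any = None
    min_value: Optional[Union[int, float]] = None
    max_value: Optional[Union[int, float]] = None
    description: str = ""

def check_parameter(schema: IndicatorParameterSchema, value: Any) -> list:
    """Return a list of human-readable errors (empty list means valid)."""
    errors = []
    if value is None:
        # A missing value is fine when a default exists or the field is optional.
        if schema.required and schema.default is None:
            errors.append(f"{schema.name}: required parameter missing")
        return errors
    if not isinstance(value, schema.type):
        errors.append(f"{schema.name}: expected {schema.type.__name__}")
        return errors
    if schema.min_value is not None and value < schema.min_value:
        errors.append(f"{schema.name}: below minimum {schema.min_value}")
    if schema.max_value is not None and value > schema.max_value:
        errors.append(f"{schema.name}: above maximum {schema.max_value}")
    return errors

period = IndicatorParameterSchema(name="period", type=int,
                                  min_value=1, max_value=200, default=20)
```

A full-config validator would simply run this check over `required_parameters` and `optional_parameters` and collect the errors.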
### Schema Definitions
The system includes complete schemas for all supported indicators:
```python
INDICATOR_SCHEMAS = {
    IndicatorType.SMA: IndicatorSchema(
        indicator_type=IndicatorType.SMA,
        display_type=DisplayType.OVERLAY,
        required_parameters=[
            IndicatorParameterSchema(
                name="period",
                type=int,
                min_value=1,
                max_value=200,
                default=20,
                description="Number of periods for the moving average"
            )
        ],
        optional_parameters=[
            IndicatorParameterSchema(
                name="price_column",
                type=str,
                required=False,
                default="close",
                description="Price column to use for calculation (open, high, low, close)"
            )
        ],
        min_data_points=20,
        description="Simple Moving Average - arithmetic mean of prices over the period"
    ),
    # ... more schemas
}
```
### Utility Functions
#### Validation Functions
```python
# Validate individual indicator configuration
def validate_indicator_configuration(config: ChartIndicatorConfig) -> tuple[bool, List[str]]
# Create indicator configuration with validation
def create_indicator_config(
name: str,
indicator_type: str,
parameters: Dict[str, Any],
display_type: Optional[str] = None,
color: str = "#007bff",
**display_options
) -> tuple[Optional[ChartIndicatorConfig], List[str]]
# Get schema for indicator type
def get_indicator_schema(indicator_type: str) -> Optional[IndicatorSchema]
# Get available indicator types
def get_available_indicator_types() -> List[str]
# Validate parameters for specific type
def validate_parameters_for_type(
indicator_type: str,
parameters: Dict[str, Any]
) -> tuple[bool, List[str]]
```
## Default Configurations
### Organization
Default configurations are organized by category and trading strategy:
#### Categories
```python
class IndicatorCategory(str, Enum):
TREND = "trend"
MOMENTUM = "momentum"
VOLATILITY = "volatility"
VOLUME = "volume"
SUPPORT_RESISTANCE = "support_resistance"
```
#### Trading Strategies
```python
class TradingStrategy(str, Enum):
SCALPING = "scalping"
DAY_TRADING = "day_trading"
SWING_TRADING = "swing_trading"
POSITION_TRADING = "position_trading"
MOMENTUM = "momentum"
MEAN_REVERSION = "mean_reversion"
```
### Indicator Presets
#### `IndicatorPreset`
Container for pre-configured indicators:
```python
@dataclass
class IndicatorPreset:
name: str
config: ChartIndicatorConfig
category: IndicatorCategory
description: str
recommended_timeframes: List[str]
suitable_strategies: List[TradingStrategy]
notes: List[str] = field(default_factory=list)
```
### Available Presets
**Trend Indicators (13 presets)**
- `sma_5`, `sma_10`, `sma_20`, `sma_50`, `sma_100`, `sma_200`
- `ema_5`, `ema_12`, `ema_21`, `ema_26`, `ema_50`, `ema_100`, `ema_200`
**Momentum Indicators (9 presets)**
- `rsi_7`, `rsi_14`, `rsi_21`
- `macd_5_13_4`, `macd_8_17_6`, `macd_12_26_9`, `macd_19_39_13`
**Volatility Indicators (4 presets)**
- `bb_10_15`, `bb_20_15`, `bb_20_20`, `bb_50_20`
### Color Schemes
Organized color palettes by category:
```python
CATEGORY_COLORS = {
IndicatorCategory.TREND: {
"primary": "#2E86C1", # Blue
"secondary": "#5DADE2", # Light Blue
"accent": "#1F618D" # Dark Blue
},
IndicatorCategory.MOMENTUM: {
"primary": "#E74C3C", # Red
"secondary": "#F1948A", # Light Red
"accent": "#C0392B" # Dark Red
},
# ... more colors
}
```
### Access Functions
```python
# Get all default indicators
def get_all_default_indicators() -> Dict[str, IndicatorPreset]
# Filter by category
def get_indicators_by_category(category: IndicatorCategory) -> Dict[str, IndicatorPreset]
# Filter by timeframe
def get_indicators_for_timeframe(timeframe: str) -> Dict[str, IndicatorPreset]
# Get strategy-specific indicators
def get_strategy_indicators(strategy: TradingStrategy) -> Dict[str, IndicatorPreset]
# Create custom preset
def create_custom_preset(
name: str,
indicator_type: IndicatorType,
parameters: Dict[str, Any],
category: IndicatorCategory,
**kwargs
) -> tuple[Optional[IndicatorPreset], List[str]]
```
## Strategy Configurations
### Core Classes
#### `StrategyChartConfig`
Complete chart configuration for a trading strategy:
```python
@dataclass
class StrategyChartConfig:
strategy_name: str
strategy_type: TradingStrategy
description: str
timeframes: List[str]
# Chart layout
layout: ChartLayout = ChartLayout.MAIN_WITH_SUBPLOTS
main_chart_height: float = 0.7
# Indicators
overlay_indicators: List[str] = field(default_factory=list)
subplot_configs: List[SubplotConfig] = field(default_factory=list)
# Style
chart_style: ChartStyle = field(default_factory=ChartStyle)
# Metadata
created_at: Optional[datetime] = None
updated_at: Optional[datetime] = None
version: str = "1.0"
tags: List[str] = field(default_factory=list)
```
#### `SubplotConfig`
Configuration for chart subplots:
```python
@dataclass
class SubplotConfig:
subplot_type: SubplotType
height_ratio: float = 0.3
indicators: List[str] = field(default_factory=list)
title: Optional[str] = None
y_axis_label: Optional[str] = None
show_grid: bool = True
show_legend: bool = True
background_color: Optional[str] = None
```
#### `ChartStyle`
Comprehensive chart styling:
```python
@dataclass
class ChartStyle:
theme: str = "plotly_white"
background_color: str = "#ffffff"
grid_color: str = "#e6e6e6"
text_color: str = "#2c3e50"
font_family: str = "Arial, sans-serif"
font_size: int = 12
candlestick_up_color: str = "#26a69a"
candlestick_down_color: str = "#ef5350"
volume_color: str = "#78909c"
show_volume: bool = True
show_grid: bool = True
show_legend: bool = True
show_toolbar: bool = True
```
### Default Strategy Configurations
Pre-built strategy configurations for common trading approaches:
1. **Scalping Strategy**
- Ultra-fast indicators (EMA 5, 12, 21)
- Fast RSI (7) and MACD (5,13,4)
- 1m-5m timeframes
2. **Day Trading Strategy**
- Balanced indicators (SMA 20, EMA 12/26, BB 20,2.0)
- Standard RSI (14) and MACD (12,26,9)
- 5m-1h timeframes
3. **Swing Trading Strategy**
- Longer-term indicators (SMA 50, EMA 21/50, BB 20,2.0)
- Standard momentum indicators
- 1h-1d timeframes
### Configuration Functions
```python
# Create default strategy configurations
def create_default_strategy_configurations() -> Dict[str, StrategyChartConfig]
# Create custom strategy
def create_custom_strategy_config(
strategy_name: str,
strategy_type: TradingStrategy,
description: str,
timeframes: List[str],
overlay_indicators: List[str],
subplot_configs: List[Dict[str, Any]],
**kwargs
) -> tuple[Optional[StrategyChartConfig], List[str]]
# JSON import/export
def load_strategy_config_from_json(json_data: Union[str, Dict[str, Any]]) -> tuple[Optional[StrategyChartConfig], List[str]]
def export_strategy_config_to_json(config: StrategyChartConfig) -> str
# Access functions
def get_strategy_config(strategy_name: str) -> Optional[StrategyChartConfig]
def get_all_strategy_configs() -> Dict[str, StrategyChartConfig]
def get_available_strategy_names() -> List[str]
```
## Validation System
### Validation Rules
The system includes 10 comprehensive validation rules:
1. **REQUIRED_FIELDS** - Validates essential configuration fields
2. **HEIGHT_RATIOS** - Ensures chart height ratios sum correctly
3. **INDICATOR_EXISTENCE** - Checks indicator availability
4. **TIMEFRAME_FORMAT** - Validates timeframe patterns
5. **CHART_STYLE** - Validates styling options
6. **SUBPLOT_CONFIG** - Validates subplot configurations
7. **STRATEGY_CONSISTENCY** - Checks strategy-timeframe alignment
8. **PERFORMANCE_IMPACT** - Warns about performance issues
9. **INDICATOR_CONFLICTS** - Detects redundant indicators
10. **RESOURCE_USAGE** - Estimates resource consumption
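As a concrete illustration of the **HEIGHT_RATIOS** rule: the check boils down to the main chart plus all subplot ratios filling the figure. A simplified sketch under that assumption (not the validator's actual code):

```python
def height_ratios_valid(main_chart_height: float,
                        subplot_ratios: list,
                        tol: float = 1e-6) -> bool:
    """True when the main chart and subplot ratios sum to 1.0 within tolerance."""
    return abs(main_chart_height + sum(subplot_ratios) - 1.0) <= tol

# A 0.7 main chart leaves 0.3 to split between subplots
ok = height_ratios_valid(0.7, [0.15, 0.15])
```

The tolerance absorbs floating-point rounding so that, e.g., `0.7 + 0.15 + 0.15` still validates.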
### Validation Classes
#### `ValidationReport`
Comprehensive validation results:
```python
@dataclass
class ValidationReport:
is_valid: bool
errors: List[ValidationIssue] = field(default_factory=list)
warnings: List[ValidationIssue] = field(default_factory=list)
info: List[ValidationIssue] = field(default_factory=list)
debug: List[ValidationIssue] = field(default_factory=list)
validation_time: Optional[datetime] = None
rules_applied: Set[ValidationRule] = field(default_factory=set)
```
#### `ValidationIssue`
Individual validation issue:
```python
@dataclass
class ValidationIssue:
level: ValidationLevel
rule: ValidationRule
message: str
field_path: str = ""
suggestion: Optional[str] = None
auto_fix: Optional[str] = None
context: Dict[str, Any] = field(default_factory=dict)
```
### Validation Usage
```python
from components.charts.config import validate_configuration
# Comprehensive validation
report = validate_configuration(config)
# Check results
if report.is_valid:
print("✅ Configuration is valid")
else:
print("❌ Configuration has errors:")
for error in report.errors:
print(f"{error}")
# Handle warnings
if report.warnings:
print("⚠️ Warnings:")
for warning in report.warnings:
print(f"{warning}")
```
## Configuration Files
### File Structure
```
components/charts/config/
├── __init__.py # Package exports and public API
├── indicator_defs.py # Core indicator schemas and validation
├── defaults.py # Default indicator presets and categories
├── strategy_charts.py # Strategy configuration classes and defaults
├── validation.py # Validation system and rules
└── example_strategies.py # Real-world trading strategy examples
```
### Key Exports
From `__init__.py`:
```python
# Core classes
from .indicator_defs import (
IndicatorType, DisplayType, LineStyle, PriceColumn,
IndicatorParameterSchema, IndicatorSchema, ChartIndicatorConfig
)
# Default configurations
from .defaults import (
IndicatorCategory, TradingStrategy, IndicatorPreset,
get_all_default_indicators, get_indicators_by_category
)
# Strategy configurations
from .strategy_charts import (
ChartLayout, SubplotType, SubplotConfig, ChartStyle, StrategyChartConfig,
create_default_strategy_configurations
)
# Validation system
from .validation import (
ValidationLevel, ValidationRule, ValidationIssue, ValidationReport,
validate_configuration
)
# Utility functions from indicator_defs
from .indicator_defs import (
create_indicator_config, get_indicator_schema, get_available_indicator_types
)
```
## Usage Examples
### Example 1: Creating Custom Indicator
```python
from components.charts.config import (
create_indicator_config, IndicatorType
)
# Create custom EMA configuration
config, errors = create_indicator_config(
name="EMA 21",
indicator_type=IndicatorType.EMA,
parameters={"period": 21, "price_column": "close"},
color="#2E86C1",
line_width=2
)
if config:
print(f"Created: {config.name}")
else:
print(f"Errors: {errors}")
```
### Example 2: Using Default Presets
```python
from components.charts.config import (
get_all_default_indicators,
get_indicators_by_category,
IndicatorCategory
)
# Get all available indicators
all_indicators = get_all_default_indicators()
print(f"Available indicators: {len(all_indicators)}")
# Get trend indicators only
trend_indicators = get_indicators_by_category(IndicatorCategory.TREND)
for name, preset in trend_indicators.items():
print(f"{name}: {preset.description}")
```
### Example 3: Strategy Configuration
```python
from components.charts.config import (
create_custom_strategy_config,
TradingStrategy
)
# Create custom momentum strategy
config, errors = create_custom_strategy_config(
strategy_name="Custom Momentum",
strategy_type=TradingStrategy.MOMENTUM,
description="Fast momentum trading strategy",
timeframes=["5m", "15m"],
overlay_indicators=["ema_8", "ema_21"],
subplot_configs=[{
"subplot_type": "rsi",
"height_ratio": 0.2,
"indicators": ["rsi_7"]
}]
)
if config:
print(f"Created strategy: {config.strategy_name}")
is_valid, validation_errors = config.validate()
if is_valid:
print("Strategy is valid!")
else:
print(f"Validation errors: {validation_errors}")
```
### Example 4: Comprehensive Validation
```python
from components.charts.config import (
validate_configuration,
ValidationRule
)
# Validate with specific rules
rules = {ValidationRule.REQUIRED_FIELDS, ValidationRule.HEIGHT_RATIOS}
report = validate_configuration(config, rules=rules)
# Detailed error handling
for error in report.errors:
print(f"ERROR: {error.message}")
if error.suggestion:
print(f" Suggestion: {error.suggestion}")
if error.auto_fix:
print(f" Auto-fix: {error.auto_fix}")
# Performance warnings
performance_issues = report.get_issues_by_rule(ValidationRule.PERFORMANCE_IMPACT)
if performance_issues:
print(f"Performance concerns: {len(performance_issues)}")
```
## Extension Guide
### Adding New Indicators
1. **Define Indicator Type**
```python
# Add to IndicatorType enum
class IndicatorType(str, Enum):
# ... existing types
STOCHASTIC = "stochastic"
```
2. **Create Schema**
```python
# Add to INDICATOR_SCHEMAS
INDICATOR_SCHEMAS[IndicatorType.STOCHASTIC] = IndicatorSchema(
indicator_type=IndicatorType.STOCHASTIC,
display_type=DisplayType.SUBPLOT,
parameters=[
IndicatorParameterSchema(
name="k_period",
type=int,
min_value=1,
max_value=100,
default_value=14
),
# ... more parameters
],
description="Stochastic Oscillator",
calculation_description="Momentum indicator comparing closing price to price range"
)
```
3. **Create Default Presets**
```python
# Add to defaults.py
def create_momentum_indicators():
# ... existing indicators
indicators["stoch_14"] = IndicatorPreset(
name="stoch_14",
config=create_indicator_config(
IndicatorType.STOCHASTIC,
{"k_period": 14, "d_period": 3},
        display_name="Stochastic %K(14), %D(3)",
color=CATEGORY_COLORS[IndicatorCategory.MOMENTUM]["primary"]
)[0],
category=IndicatorCategory.MOMENTUM,
description="Standard Stochastic oscillator",
recommended_timeframes=["15m", "1h", "4h"],
suitable_strategies=[TradingStrategy.SWING_TRADING]
)
```
### Adding New Validation Rules
1. **Define Rule**
```python
# Add to ValidationRule enum
class ValidationRule(str, Enum):
# ... existing rules
CUSTOM_RULE = "custom_rule"
```
2. **Implement Validation**
```python
# Add to ConfigurationValidator
def _validate_custom_rule(self, config: StrategyChartConfig, report: ValidationReport) -> None:
# Custom validation logic
if some_condition:
report.add_issue(ValidationIssue(
level=ValidationLevel.WARNING,
rule=ValidationRule.CUSTOM_RULE,
message="Custom validation message",
suggestion="Suggested fix"
))
```
3. **Add to Validator**
```python
# Add to validate_strategy_config method
if ValidationRule.CUSTOM_RULE in self.enabled_rules:
self._validate_custom_rule(config, report)
```
### Adding New Strategy Types
1. **Define Strategy Type**
```python
# Add to TradingStrategy enum
class TradingStrategy(str, Enum):
# ... existing strategies
GRID_TRADING = "grid_trading"
```
2. **Create Strategy Configuration**
```python
# Add to create_default_strategy_configurations()
strategy_configs["grid_trading"] = StrategyChartConfig(
strategy_name="Grid Trading Strategy",
strategy_type=TradingStrategy.GRID_TRADING,
description="Grid trading with support/resistance levels",
timeframes=["1h", "4h"],
overlay_indicators=["sma_20", "sma_50"],
# ... complete configuration
)
```
3. **Add Example Strategy**
```python
# Create in example_strategies.py
def create_grid_trading_strategy() -> StrategyExample:
config = StrategyChartConfig(...)
return StrategyExample(
config=config,
description="Grid trading strategy description...",
difficulty="Intermediate",
risk_level="Medium"
)
```
The configuration system is designed to be highly extensible while maintaining type safety and comprehensive validation. All additions should follow the established patterns and include appropriate tests.
# Indicator System Documentation
## Overview
The Crypto Trading Bot Dashboard features a comprehensive modular indicator system that allows users to create, customize, and manage technical indicators for chart analysis. The system supports both overlay indicators (displayed on the main price chart) and subplot indicators (displayed in separate panels below the main chart).
## Table of Contents
1. [System Architecture](#system-architecture)
2. [Current Indicators](#current-indicators)
3. [User Interface](#user-interface)
4. [File Structure](#file-structure)
5. [Adding New Indicators](#adding-new-indicators)
6. [Configuration Format](#configuration-format)
7. [API Reference](#api-reference)
8. [Troubleshooting](#troubleshooting)
## System Architecture
### Core Components
```
components/charts/
├── indicator_manager.py # Core indicator CRUD operations
├── indicator_defaults.py # Default indicator templates
├── layers/
│ ├── indicators.py # Overlay indicator rendering
│ └── subplots.py # Subplot indicator rendering
└── config/
└── indicator_defs.py # Indicator definitions and schemas
config/indicators/
└── user_indicators/ # User-created indicators (JSON files)
├── sma_abc123.json
├── ema_def456.json
└── ...
```
### Key Classes
- **`IndicatorManager`**: Handles CRUD operations for user indicators
- **`UserIndicator`**: Data structure for indicator configuration
- **`IndicatorStyling`**: Appearance and styling configuration
- **Indicator Layers**: Rendering classes for different indicator types
## Current Indicators
### Overlay Indicators
These indicators are displayed directly on the price chart:
| Indicator | Type | Parameters | Description |
|-----------|------|------------|-------------|
| **Simple Moving Average (SMA)** | `sma` | `period` (1-200) | Average price over N periods |
| **Exponential Moving Average (EMA)** | `ema` | `period` (1-200) | Weighted average giving more weight to recent prices |
| **Bollinger Bands** | `bollinger_bands` | `period` (5-100), `std_dev` (0.5-5.0) | Price channels based on standard deviation |
### Subplot Indicators
These indicators are displayed in separate panels:
| Indicator | Type | Parameters | Description |
|-----------|------|------------|-------------|
| **Relative Strength Index (RSI)** | `rsi` | `period` (2-50) | Momentum oscillator (0-100 scale) |
| **MACD** | `macd` | `fast_period` (2-50), `slow_period` (5-100), `signal_period` (2-30) | Moving average convergence divergence |
## User Interface
### Adding Indicators
1. **Click the "Add New Indicator" button**
2. **Configure Basic Settings**:
- Name: Custom name for the indicator
- Type: Select from available indicator types
- Description: Optional description
3. **Set Parameters**: Type-specific parameters appear dynamically
4. **Customize Styling**:
- Color: Hex color code
- Line Width: 1-5 pixels
5. **Save**: Creates a new JSON file and updates the UI
### Managing Indicators
- **✅ Checkboxes**: Toggle indicator visibility on chart
- **✏️ Edit Button**: Modify existing indicator settings
- **🗑️ Delete Button**: Remove indicator permanently
### Real-time Updates
- Chart updates automatically when indicators are toggled
- Changes are saved immediately to JSON files
- No page refresh required
## File Structure
### Indicator JSON Format
```json
{
"id": "ema_ca5fd53d",
"name": "EMA 10",
"description": "10-period Exponential Moving Average for fast signals",
"type": "ema",
"display_type": "overlay",
"parameters": {
"period": 10
},
"styling": {
"color": "#ff6b35",
"line_width": 2,
"opacity": 1.0,
"line_style": "solid"
},
"visible": true,
"created_date": "2025-06-04T04:16:35.455729+00:00",
"modified_date": "2025-06-04T04:54:49.608549+00:00"
}
```
### Directory Structure
```
config/indicators/
└── user_indicators/
├── sma_abc123.json # Individual indicator files
├── ema_def456.json
├── rsi_ghi789.json
└── macd_jkl012.json
```
## Adding New Indicators
For developers who want to add new indicator types to the system, please refer to the comprehensive step-by-step guide:
**📋 [Quick Guide: Adding New Indicators (`adding-new-indicators.md`)](./adding-new-indicators.md)**
This guide covers:
- ✅ Complete 11-step implementation checklist
- ✅ Full code examples (Stochastic Oscillator implementation)
- ✅ File modification requirements
- ✅ Testing checklist and common patterns
- ✅ Tips and best practices
## Configuration Format
### User Indicator Structure
```python
@dataclass
class UserIndicator:
id: str # Unique identifier
name: str # Display name
description: str # User description
type: str # Indicator type (sma, ema, etc.)
display_type: str # "overlay" or "subplot"
parameters: Dict[str, Any] # Type-specific parameters
styling: IndicatorStyling # Appearance settings
visible: bool = True # Default visibility
created_date: datetime # Creation timestamp
modified_date: datetime # Last modification timestamp
```
### Styling Options
```python
@dataclass
class IndicatorStyling:
color: str = "#007bff" # Hex color code
line_width: int = 2 # Line thickness (1-5)
opacity: float = 1.0 # Transparency (0.0-1.0)
line_style: str = "solid" # Line style
```
### Parameter Examples
```python
# SMA/EMA Parameters
{"period": 20}
# RSI Parameters
{"period": 14}
# MACD Parameters
{
"fast_period": 12,
"slow_period": 26,
"signal_period": 9
}
# Bollinger Bands Parameters
{
"period": 20,
"std_dev": 2.0
}
```
## API Reference
### IndicatorManager Class
```python
class IndicatorManager:
def create_indicator(self, name: str, indicator_type: str,
parameters: Dict[str, Any], **kwargs) -> Optional[UserIndicator]
def load_indicator(self, indicator_id: str) -> Optional[UserIndicator]
def update_indicator(self, indicator_id: str, **kwargs) -> bool
def delete_indicator(self, indicator_id: str) -> bool
def list_indicators(self) -> List[UserIndicator]
def get_indicators_by_type(self, display_type: str) -> List[UserIndicator]
```
### Usage Examples
```python
# Get indicator manager
manager = get_indicator_manager()
# Create new indicator
indicator = manager.create_indicator(
name="My SMA 50",
indicator_type="sma",
parameters={"period": 50},
description="50-period Simple Moving Average",
color="#ff0000"
)
# Load indicator
loaded = manager.load_indicator("sma_abc123")
# Update indicator
success = manager.update_indicator(
"sma_abc123",
name="Updated SMA",
parameters={"period": 30}
)
# Delete indicator
deleted = manager.delete_indicator("sma_abc123")
# List all indicators
all_indicators = manager.list_indicators()
# Get by type
overlay_indicators = manager.get_indicators_by_type("overlay")
subplot_indicators = manager.get_indicators_by_type("subplot")
```
## Troubleshooting
### Common Issues
1. **Indicator not appearing in dropdown**
- Check if registered in `INDICATOR_REGISTRY`
- Verify the indicator type matches the class name
2. **Parameters not saving**
- Ensure parameter fields are added to save callback
- Check parameter collection logic in `save_new_indicator`
3. **Chart not updating**
- Verify the indicator layer implements `calculate_values` and `create_traces`
- Check if indicator is registered in the correct registry
4. **File permission errors**
- Ensure `config/indicators/user_indicators/` directory is writable
- Check file permissions on existing JSON files
### Debug Information
- Check browser console for JavaScript errors
- Look at application logs for Python exceptions
- Verify JSON file structure with a validator
- Test indicator calculations with sample data
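When a chart silently ignores an indicator, a quick structural check of the JSON file often finds the cause faster than reading logs. The sketch below validates the fields documented in the "Indicator JSON Format" section above; it is an illustration, not the system's actual loader, and the exact rules the `IndicatorManager` enforces may differ.

```python
import json

# Fields the "Indicator JSON Format" section documents as present in
# every user indicator file.
REQUIRED_FIELDS = {"id", "name", "type", "display_type", "parameters", "styling"}


def check_indicator_json(raw: str) -> list:
    """Return a list of problems found in a user indicator JSON string."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if not isinstance(data, dict):
        return ["top-level value must be an object"]

    problems = []
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if data.get("display_type") not in ("overlay", "subplot"):
        problems.append("display_type must be 'overlay' or 'subplot'")
    if not isinstance(data.get("parameters"), dict):
        problems.append("parameters must be an object")
    return problems
```

Run it against any file in `config/indicators/user_indicators/` to see which documented field is malformed.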
### Performance Considerations
- Indicators with large periods may take longer to calculate
- Consider data availability when setting parameter limits
- Subplot indicators require additional chart space
- Real-time updates may impact performance with many indicators
## Best Practices
1. **Naming Conventions**
- Use descriptive names for indicators
- Include parameter values in names (e.g., "SMA 20")
- Use consistent naming patterns
2. **Parameter Validation**
- Set appropriate min/max values for parameters
- Provide helpful descriptions for parameters
- Use sensible default values
3. **Error Handling**
- Handle insufficient data gracefully
- Provide meaningful error messages
- Log errors for debugging
4. **Performance**
- Cache calculated values when possible
- Optimize calculation algorithms
- Limit the number of active indicators
5. **User Experience**
- Provide immediate visual feedback
- Use intuitive color schemes
- Group related indicators logically
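The "cache calculated values" advice above can be sketched with `functools.lru_cache`: because candle data arrives append-only, keying the cache on the (hashable) price series means unchanged data reuses the cached result and new candles force a recompute. This is an illustration of the idea, not the chart system's actual caching layer.

```python
from functools import lru_cache


@lru_cache(maxsize=128)
def cached_sma(prices: tuple, period: int) -> tuple:
    """Simple moving average over a tuple of closes.

    Tuples are hashable, so identical inputs hit the cache; when a new
    candle arrives the tuple changes and the series is recalculated.
    """
    if len(prices) < period:
        return ()
    return tuple(
        sum(prices[i - period + 1 : i + 1]) / period
        for i in range(period - 1, len(prices))
    )
```

For long histories a rolling/incremental update is cheaper than recomputing the whole series, but the cache-on-input pattern is a simple first step.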
---
*Back to [Chart System Documentation (`README.md`)]*
# Chart System Quick Reference
## Quick Start
### Import Everything You Need
```python
from components.charts.config import (
# Example strategies
create_ema_crossover_strategy,
get_all_example_strategies,
# Configuration
StrategyChartConfig,
create_custom_strategy_config,
validate_configuration,
# Indicators
get_all_default_indicators,
get_indicators_by_category,
IndicatorCategory,
TradingStrategy
)
```
### Use Pre-built Strategy
```python
# Get EMA crossover strategy
strategy = create_ema_crossover_strategy()
config = strategy.config
# Validate before use
report = validate_configuration(config)
if report.is_valid:
print("✅ Ready to use!")
else:
print(f"❌ Errors: {[str(e) for e in report.errors]}")
```
### Create Custom Strategy
```python
config, errors = create_custom_strategy_config(
strategy_name="My Strategy",
strategy_type=TradingStrategy.DAY_TRADING,
description="Custom day trading strategy",
timeframes=["15m", "1h"],
overlay_indicators=["ema_12", "ema_26"],
subplot_configs=[{
"subplot_type": "rsi",
"height_ratio": 0.2,
"indicators": ["rsi_14"]
}]
)
```
## Available Indicators
### Trend Indicators
- `sma_5`, `sma_10`, `sma_20`, `sma_50`, `sma_100`, `sma_200`
- `ema_5`, `ema_12`, `ema_21`, `ema_26`, `ema_50`, `ema_100`, `ema_200`
### Momentum Indicators
- `rsi_7`, `rsi_14`, `rsi_21`
- `macd_5_13_4`, `macd_8_17_6`, `macd_12_26_9`, `macd_19_39_13`
### Volatility Indicators
- `bb_10_15`, `bb_20_15`, `bb_20_20`, `bb_50_20`
## Example Strategies
### 1. EMA Crossover (Intermediate, Medium Risk)
```python
strategy = create_ema_crossover_strategy()
# Uses: EMA 12/26/50, RSI 14, MACD, Bollinger Bands
# Best for: Trending markets, 15m-4h timeframes
```
### 2. Momentum Breakout (Advanced, High Risk)
```python
strategy = create_momentum_breakout_strategy()
# Uses: EMA 8/21, Fast RSI/MACD, Volume
# Best for: Volatile markets, 5m-1h timeframes
```
### 3. Mean Reversion (Intermediate, Medium Risk)
```python
strategy = create_mean_reversion_strategy()
# Uses: SMA 20/50, Multiple RSI, Tight BB
# Best for: Ranging markets, 15m-4h timeframes
```
### 4. Scalping (Advanced, High Risk)
```python
strategy = create_scalping_strategy()
# Uses: Ultra-fast EMAs, RSI 7, Fast MACD
# Best for: High liquidity, 1m-5m timeframes
```
### 5. Swing Trading (Beginner, Medium Risk)
```python
strategy = create_swing_trading_strategy()
# Uses: SMA 20/50, Standard indicators
# Best for: Trending markets, 4h-1d timeframes
```
## Strategy Filtering
### By Difficulty
```python
beginner = get_strategies_by_difficulty("Beginner")
intermediate = get_strategies_by_difficulty("Intermediate")
advanced = get_strategies_by_difficulty("Advanced")
```
### By Risk Level
```python
low_risk = get_strategies_by_risk_level("Low")
medium_risk = get_strategies_by_risk_level("Medium")
high_risk = get_strategies_by_risk_level("High")
```
### By Market Condition
```python
trending = get_strategies_by_market_condition("Trending")
sideways = get_strategies_by_market_condition("Sideways")
volatile = get_strategies_by_market_condition("Volatile")
```
## Validation Quick Checks
### Basic Validation
```python
is_valid, errors = config.validate()
if not is_valid:
for error in errors:
print(f"{error}")
```
### Comprehensive Validation
```python
report = validate_configuration(config)
# Errors (must fix)
for error in report.errors:
print(f"🚨 {error}")
# Warnings (recommended)
for warning in report.warnings:
print(f"⚠️ {warning}")
# Info (optional)
for info in report.info:
print(f" {info}")
```
## JSON Export/Import
### Export Strategy
```python
json_data = export_strategy_config_to_json(config)
```
### Import Strategy
```python
config, errors = load_strategy_config_from_json(json_data)
```
### Export All Examples
```python
all_strategies_json = export_example_strategies_to_json()
```
## Common Patterns
### Get Strategy Summary
```python
summary = get_strategy_summary()
for name, info in summary.items():
print(f"{name}: {info['difficulty']} - {info['risk_level']}")
```
### List Available Indicators
```python
indicators = get_all_default_indicators()
for name, preset in indicators.items():
print(f"{name}: {preset.description}")
```
### Filter by Category
```python
trend_indicators = get_indicators_by_category(IndicatorCategory.TREND)
momentum_indicators = get_indicators_by_category(IndicatorCategory.MOMENTUM)
```
## Configuration Structure
### Strategy Config
```python
StrategyChartConfig(
strategy_name="Strategy Name",
strategy_type=TradingStrategy.DAY_TRADING,
description="Strategy description",
timeframes=["15m", "1h"],
overlay_indicators=["ema_12", "ema_26"],
subplot_configs=[
{
"subplot_type": "rsi",
"height_ratio": 0.2,
"indicators": ["rsi_14"]
}
]
)
```
### Subplot Types
- `"rsi"` - RSI oscillator
- `"macd"` - MACD with histogram
- `"volume"` - Volume bars
### Timeframe Formats
- `"1m"`, `"5m"`, `"15m"`, `"30m"`
- `"1h"`, `"2h"`, `"4h"`, `"6h"`, `"12h"`
- `"1d"`, `"1w"`, `"1M"`
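A quick way to pre-check user input against the timeframes listed above is an explicit allow-list; note this is an illustration, and the real `TIMEFRAME_FORMAT` validation rule may be pattern-based instead.

```python
# The timeframe strings documented above. Note "1M" (one month) is
# case-sensitive and distinct from "1m" (one minute).
VALID_TIMEFRAMES = {
    "1m", "5m", "15m", "30m",
    "1h", "2h", "4h", "6h", "12h",
    "1d", "1w", "1M",
}


def is_valid_timeframe(tf: str) -> bool:
    """Return True if tf is one of the documented timeframe strings."""
    return tf in VALID_TIMEFRAMES
```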
## Error Handling
### Common Errors
1. **"Indicator not found"** - Check available indicators list
2. **"Height ratios exceed 1.0"** - Adjust main_chart_height and subplot ratios
3. **"Invalid timeframe"** - Use standard timeframe formats
### Validation Rules
1. Required fields present
2. Height ratios sum ≤ 1.0
3. Indicators exist in defaults
4. Valid timeframe formats
5. Chart style validation
6. Subplot configuration
7. Strategy consistency
8. Performance impact
9. Indicator conflicts
10. Resource usage
## Best Practices
### Strategy Design
- Start with proven strategies (EMA crossover)
- Match timeframes to strategy type
- Balance indicator categories (trend + momentum + volume)
- Consider performance impact (<10 indicators)
### Validation
- Always validate before use
- Address all errors
- Consider warnings for optimization
- Test with edge cases
### Performance
- Limit complex indicators (Bollinger Bands)
- Monitor resource usage warnings
- Cache validated configurations
- Use appropriate timeframes for strategy type
## Testing Commands
```bash
# Test all chart components
pytest tests/test_*_strategies.py -v
pytest tests/test_validation.py -v
pytest tests/test_defaults.py -v
# Test specific component
pytest tests/test_example_strategies.py::TestEMACrossoverStrategy -v
```
## File Locations
- **Main config**: `components/charts/config/`
- **Documentation**: `docs/modules/charts/`
- **Tests**: `tests/test_*_strategies.py`
- **Examples**: `components/charts/config/example_strategies.py`
# Dashboard Modular Structure Documentation
## Overview
The Crypto Trading Bot Dashboard has been successfully refactored into a modular architecture for better maintainability, scalability, and development efficiency. This document outlines the new structure and how to work with it.
## Architecture
### Directory Structure
```
dashboard/
├── __init__.py # Package initialization
├── app.py # Main app creation and configuration
├── layouts/ # UI layout modules
│ ├── __init__.py
│ ├── market_data.py # Market data visualization layout
│ ├── bot_management.py # Bot management interface layout
│ ├── performance.py # Performance analytics layout
│ └── system_health.py # System health monitoring layout
├── callbacks/ # Dash callback modules
│ ├── __init__.py
│ ├── navigation.py # Tab navigation callbacks
│ ├── charts.py # Chart-related callbacks
│ ├── indicators.py # Indicator management callbacks
│ └── system_health.py # System health callbacks
└── components/ # Reusable UI components
├── __init__.py
├── indicator_modal.py # Indicator creation/editing modal
└── chart_controls.py # Chart configuration controls
```
## Key Components
### 1. Main Application (`dashboard/app.py`)
**Purpose**: Creates and configures the main Dash application.
**Key Functions**:
- `create_app()`: Initializes Dash app with main layout
- `register_callbacks()`: Registers all callback modules
**Features**:
- Centralized app configuration
- Main navigation structure
- Global components (modals, intervals)
### 2. Layout Modules (`dashboard/layouts/`)
**Purpose**: Define UI layouts for different dashboard sections.
#### Market Data Layout (`market_data.py`)
- Symbol and timeframe selection
- Chart configuration panel with indicator management
- Parameter controls for indicator customization
- Real-time chart display
- Market statistics
#### Bot Management Layout (`bot_management.py`)
- Bot status overview
- Bot control interface (placeholder for Phase 4.0)
#### Performance Layout (`performance.py`)
- Portfolio performance metrics (placeholder for Phase 6.0)
#### System Health Layout (`system_health.py`)
- Database status monitoring
- Data collection status
- Redis status monitoring
### 3. Callback Modules (`dashboard/callbacks/`)
**Purpose**: Handle user interactions and data updates.
#### Navigation Callbacks (`navigation.py`)
- Tab switching logic
- Content rendering based on active tab
#### Chart Callbacks (`charts.py`)
- Chart data updates
- Strategy selection handling
- Market statistics updates
#### Indicator Callbacks (`indicators.py`)
- Complete indicator modal management
- CRUD operations for custom indicators
- Parameter field dynamics
- Checkbox synchronization
- Edit/delete functionality
#### System Health Callbacks (`system_health.py`)
- Database status monitoring
- Data collection status updates
- Redis status checks
### 4. UI Components (`dashboard/components/`)
**Purpose**: Reusable UI components for consistent design.
#### Indicator Modal (`indicator_modal.py`)
- Complete indicator creation/editing interface
- Dynamic parameter fields
- Styling controls
- Form validation
#### Chart Controls (`chart_controls.py`)
- Chart configuration panel
- Parameter control sliders
- Auto-update controls
## Benefits of Modular Structure
### 1. **Maintainability**
- **Separation of Concerns**: Each module has a specific responsibility
- **Smaller Files**: Easier to navigate and understand (most modules under 300 lines)
- **Clear Dependencies**: Explicit imports show component relationships
### 2. **Scalability**
- **Easy Extension**: Add new layouts/callbacks without touching existing code
- **Parallel Development**: Multiple developers can work on different modules
- **Component Reusability**: UI components can be shared across layouts
### 3. **Testing**
- **Unit Testing**: Each module can be tested independently
- **Mock Dependencies**: Easier to mock specific components for testing
- **Isolated Debugging**: Issues can be traced to specific modules
### 4. **Code Organization**
- **Logical Grouping**: Related functionality is grouped together
- **Consistent Structure**: Predictable file organization
- **Documentation**: Each module can have focused documentation
## Migration from Monolithic Structure
### Before (app.py - 1523 lines)
```python
# Single large file with:
# - All layouts mixed together
# - All callbacks in one place
# - UI components embedded in layouts
# - Difficult to navigate and maintain
```
### After (Modular Structure)
```python
# dashboard/app.py (73 lines)
# dashboard/layouts/market_data.py (124 lines)
# dashboard/components/indicator_modal.py (290 lines)
# dashboard/callbacks/navigation.py (32 lines)
# dashboard/callbacks/charts.py (122 lines)
# dashboard/callbacks/indicators.py (590 lines)
# dashboard/callbacks/system_health.py (88 lines)
# ... and so on
```
## Development Workflow
### Adding a New Layout
1. **Create Layout Module**:
```python
# dashboard/layouts/new_feature.py
def get_new_feature_layout():
return html.Div([...])
```
2. **Update Layout Package**:
```python
# dashboard/layouts/__init__.py
from .new_feature import get_new_feature_layout
```
3. **Add Navigation**:
```python
# dashboard/callbacks/navigation.py
elif active_tab == 'new-feature':
return get_new_feature_layout()
```
### Adding New Callbacks
1. **Create Callback Module**:
```python
# dashboard/callbacks/new_feature.py
def register_new_feature_callbacks(app):
@app.callback(...)
def callback_function(...):
pass
```
2. **Register Callbacks**:
```python
# dashboard/app.py or main app file
from dashboard.callbacks import register_new_feature_callbacks
register_new_feature_callbacks(app)
```
### Creating Reusable Components
1. **Create Component Module**:
```python
# dashboard/components/new_component.py
def create_new_component(params):
return html.Div([...])
```
2. **Export Component**:
```python
# dashboard/components/__init__.py
from .new_component import create_new_component
```
3. **Use in Layouts**:
```python
# dashboard/layouts/some_layout.py
from dashboard.components import create_new_component
```
## Best Practices
### 1. **File Organization**
- Keep files under 300-400 lines
- Use descriptive module names
- Group related functionality together
### 2. **Import Management**
- Use explicit imports
- Avoid circular dependencies
- Import only what you need
### 3. **Component Design**
- Make components reusable
- Use parameters for customization
- Include proper documentation
### 4. **Callback Organization**
- Group related callbacks in same module
- Use descriptive function names
- Include error handling
### 5. **Testing Strategy**
- Test each module independently
- Mock external dependencies
- Use consistent testing patterns
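One way to make the testing strategy above practical is to keep callback bodies as plain functions, so routing and data logic can be unit-tested without starting the Dash server. The sketch below is illustrative; `render_tab` and the tab names are assumptions, not the actual functions in `dashboard/callbacks/navigation.py`.

```python
def render_tab(active_tab: str) -> str:
    """Pure routing logic mirroring the navigation callback pattern.

    The real callback would return layout components; returning strings
    here keeps the sketch framework-free and trivially testable.
    """
    layouts = {
        "market-data": "market data layout",
        "bot-management": "bot management layout",
        "system-health": "system health layout",
    }
    return layouts.get(active_tab, "404: unknown tab")


def test_render_tab():
    assert render_tab("market-data") == "market data layout"
    assert render_tab("nonexistent") == "404: unknown tab"
```

The Dash callback itself then becomes a thin wrapper around the tested function, so a bug in tab routing shows up in a fast unit test rather than a browser session.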
## Current Status
### ✅ **Completed**
- ✅ Modular directory structure
- ✅ Layout modules extracted
- ✅ UI components modularized
- ✅ Navigation callbacks implemented
- ✅ Chart callbacks extracted and working
- ✅ Indicator callbacks extracted and working
- ✅ System health callbacks extracted and working
- ✅ All imports fixed and dependencies resolved
- ✅ Modular dashboard fully functional
### 📋 **Next Steps**
1. Implement comprehensive testing for each module
2. Add error handling and validation improvements
3. Create development guidelines
4. Update deployment scripts
5. Performance optimization for large datasets
## Usage
### Running the Modular Dashboard
```bash
# Use the new modular version
uv run python app_new.py
# Original monolithic version (for comparison)
uv run python app.py
```
### Development Mode
```bash
# The modular structure supports hot reloading
# Changes to individual modules are reflected immediately
```
## Conclusion
The modular dashboard structure migration has been **successfully completed**! All functionality from the original 1523-line monolithic application has been extracted into clean, maintainable modules while preserving all existing features including:
- Complete indicator management system (CRUD operations)
- Chart visualization with dynamic indicators
- Strategy selection and auto-loading
- System health monitoring
- Real-time data updates
- Professional UI with modals and controls
> **Note on UI Components:** While the modular structure is in place, many UI sections, such as the **Bot Management** and **Performance** layouts, are currently placeholders. The controls and visualizations for these features will be implemented once the corresponding backend components (Bot Manager, Strategy Engine) are developed.
This architecture provides a solid foundation for future development while maintaining all existing functionality. The separation of concerns makes the codebase more maintainable and allows for easier collaboration and testing.
**The modular dashboard is now production-ready and fully functional!** 🚀
---
*Back to [Modules Documentation (`../README.md`)]*
# Enhanced Data Collector System
This documentation describes the enhanced data collector system, featuring a modular architecture, centralized management, and robust health monitoring.
## Table of Contents
- [Overview](#overview)
- [System Architecture](#system-architecture)
- [Core Components](#core-components)
- [Exchange Factory](#exchange-factory)
- [Health Monitoring](#health-monitoring)
- [API Reference](#api-reference)
- [Troubleshooting](#troubleshooting)
## Overview
### Key Features
- **Modular Exchange Integration**: Easily add new exchanges without impacting core logic
- **Centralized Management**: `CollectorManager` for system-wide control
- **Robust Health Monitoring**: Automatic restarts and failure detection
- **Factory Pattern**: Standardized creation of collector instances
- **Asynchronous Operations**: High-performance data collection
- **Comprehensive Logging**: Detailed component-level logging
### Supported Exchanges
- **OKX**: Full implementation with WebSocket support
- **Binance (Future)**: Planned support
- **Coinbase (Future)**: Planned support
For exchange-specific documentation, see [Exchange Implementations (`./exchanges/`)](./exchanges/).
## System Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ TCP Dashboard Platform │
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ CollectorManager │ │
│ │ • Centralized start/stop/status control │ │
│ │ │ │
│ │ ┌─────────────────────────────────────────────────┐│ │
│ │ │ Global Health Monitor ││ │
│ │ │ • System-wide health checks ││ │
│ │ │ • Auto-restart coordination ││ │
│ │ │ • Performance analytics ││ │
│ │ └─────────────────────────────────────────────────┘│ │
│ │ │ │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌────────────────┐ │ │
│ │ │OKX Collector│ │Binance Coll.│ │Custom Collector│ │ │
│ │ │• Health Mon │ │• Health Mon │ │• Health Monitor│ │ │
│ │ │• Auto-restart│ │• Auto-restart│ │• Auto-restart │ │ │
│ │ │• Data Valid │ │• Data Valid │ │• Data Validate │ │ │
│ │ └─────────────┘ └─────────────┘ └────────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
## Core Components
### 1. `BaseDataCollector`
An abstract base class that defines the common interface for all exchange collectors.
**Key Responsibilities:**
- Standardized `start`, `stop`, `restart` methods
- Built-in health monitoring with heartbeat and data silence detection
- Automatic reconnect and restart logic
- Asynchronous message handling
### 2. `CollectorManager`
A singleton class that manages all active data collectors in the system.
**Key Responsibilities:**
- Centralized `start` and `stop` for all collectors
- System-wide status aggregation
- Global health monitoring
- Coordination of restart policies
### 3. Exchange-Specific Collectors
Concrete implementations of `BaseDataCollector` for each exchange (e.g., `OKXCollector`).
**Key Responsibilities:**
- Handle exchange-specific WebSocket protocols
- Parse and standardize incoming data
- Implement exchange-specific authentication
- Define subscription messages for different data types
For more details, see [OKX Collector Documentation (`./exchanges/okx.md`)](./exchanges/okx.md).
## Exchange Factory
The `ExchangeFactory` provides a standardized way to create data collectors, decoupling the client code from specific implementations.
### Features
- **Simplified Creation**: Single function to create any supported collector
- **Configuration Driven**: Uses `ExchangeCollectorConfig` for flexible setup
- **Validation**: Validates configuration before creating a collector
- **Extensible**: Easily register new exchange collectors
### Usage
```python
from data.exchanges import ExchangeFactory, ExchangeCollectorConfig
from data.common import DataType
# Create config for OKX collector
config = ExchangeCollectorConfig(
exchange="okx",
symbol="BTC-USDT",
data_types=[DataType.TRADE, DataType.ORDERBOOK],
auto_restart=True
)
# Create collector using the factory
try:
collector = ExchangeFactory.create_collector(config)
# Use the collector
await collector.start()
except ValueError as e:
print(f"Error creating collector: {e}")
# Create multiple collectors
configs = [...]
collectors = ExchangeFactory.create_multiple_collectors(configs)
```
## Health Monitoring
The system includes a robust, two-level health monitoring system.
### 1. Collector-Level Monitoring
Each `BaseDataCollector` instance has its own health monitoring.
**Key Metrics:**
- **Heartbeat**: Regular internal signal to confirm the collector is responsive
- **Data Silence**: Tracks time since last message to detect frozen connections
- **Restart Count**: Number of automatic restarts
- **Connection Status**: Tracks WebSocket connection state
### 2. Manager-Level Monitoring
The `CollectorManager` provides a global view of system health.
**Key Metrics:**
- **Aggregate Status**: Overview of all collectors (running, stopped, failed)
- **System Uptime**: Total uptime for the collector system
- **Failed Collectors**: List of collectors that failed to restart
- **Resource Usage**: (Future) System-level CPU and memory monitoring
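The aggregate status can be pictured as a simple fold over the per-collector status dictionaries. A minimal sketch (the `aggregate` helper and exact key names are illustrative assumptions, not the actual implementation):

```python
from collections import Counter

def aggregate(statuses):
    """Fold per-collector status dicts into a system-wide summary."""
    counts = Counter(s.get("status") for s in statuses)
    return {
        "running_collectors": counts.get("running", 0),
        "failed_collectors": counts.get("failed", 0),
        "total_messages": sum(s.get("messages_processed", 0) for s in statuses),
    }

statuses = [
    {"status": "running", "messages_processed": 120},
    {"status": "running", "messages_processed": 95},
    {"status": "failed", "messages_processed": 3},
]
print(aggregate(statuses))
# → {'running_collectors': 2, 'failed_collectors': 1, 'total_messages': 218}
```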
### Health Status API
```python
# Get status of a single collector
status = collector.get_status()
health = collector.get_health_status()
# Get status of the entire system
system_status = manager.get_status()
```
For detailed status schemas, refer to the [Reference Documentation (`../../reference/README.md`)](../../reference/README.md).
## API Reference
### `BaseDataCollector`
- `async start()`
- `async stop()`
- `async restart()`
- `get_status() -> dict`
- `get_health_status() -> dict`
### `CollectorManager`
- `add_collector(collector)`
- `async start_all()`
- `async stop_all()`
- `get_status() -> dict`
- `list_collectors() -> list`
### `ExchangeFactory`
- `create_collector(config) -> BaseDataCollector`
- `create_multiple_collectors(configs) -> list`
- `get_supported_exchanges() -> list`
## Troubleshooting
### Common Issues
1. **Collector fails to start**
- **Cause**: Invalid symbol, incorrect API keys, or network issues.
- **Solution**: Check logs for error messages. Verify configuration and network connectivity.
2. **Collector stops receiving data**
- **Cause**: WebSocket connection dropped, exchange issues.
- **Solution**: Health monitor should automatically restart. If not, check logs for reconnect errors.
3. **"Exchange not supported" error**
- **Cause**: Trying to create a collector for an exchange not registered in the factory.
- **Solution**: Implement the collector and register it in `data/exchanges/__init__.py`.
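The factory's supported-exchange check boils down to a registry lookup. A self-contained sketch of that behavior (the `SUPPORTED_EXCHANGES` set and `check_supported` helper are illustrative; the real registry lives in `data/exchanges/__init__.py`, and `ExchangeFactory.get_supported_exchanges()` reports its contents):

```python
# Illustrative registry check mirroring the factory's behavior
SUPPORTED_EXCHANGES = {"okx"}

def check_supported(exchange: str) -> bool:
    if exchange.lower() not in SUPPORTED_EXCHANGES:
        raise ValueError(
            f"Exchange not supported: {exchange}. "
            f"Supported: {sorted(SUPPORTED_EXCHANGES)}"
        )
    return True

check_supported("OKX")          # passes
try:
    check_supported("binance")  # raises until a collector is registered
except ValueError as exc:
    print(exc)
```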
### Best Practices
- Use the `CollectorManager` for lifecycle management.
- Always validate configurations before creating collectors.
- Monitor system status regularly using `manager.get_status()`.
- Refer to logs for detailed error analysis.
---
*Back to [Modules Documentation](../README.md)*
# Database Operations Documentation
## Overview
The Database Operations module (`database/operations.py`) provides a clean, centralized interface for all database interactions using the **Repository Pattern**. This approach abstracts SQL complexity from business logic, ensuring maintainable, testable, and consistent database operations across the entire application.
## Key Benefits
### 🏗️ **Clean Architecture**
- **Repository Pattern**: Separates data access logic from business logic
- **Centralized Operations**: All database interactions go through well-defined APIs
- **No Raw SQL**: Business logic never contains direct SQL queries
- **Consistent Interface**: Standardized methods across all database operations
### 🛡️ **Reliability & Safety**
- **Automatic Transaction Management**: Sessions and commits handled automatically
- **Error Handling**: Custom exceptions with proper context
- **Connection Pooling**: Efficient database connection management
- **Session Cleanup**: Automatic session management and cleanup
### 🔧 **Maintainability**
- **Easy Testing**: Repository methods can be easily mocked for testing
- **Database Agnostic**: Can change database implementations without affecting business logic
- **Type Safety**: Full type hints for better IDE support and error detection
- **Logging Integration**: Built-in logging for monitoring and debugging
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ DatabaseOperations │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Health Check & Stats │ │
│ │ • Connection health monitoring │ │
│ │ • Database statistics │ │
│ │ • Performance metrics │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ┌─────────────────┐ ┌─────────────────┐ ┌──────────────┐ │
│ │MarketDataRepo │ │RawTradeRepo │ │ Future │ │
│ │ │ │ │ │ Repositories │ │
│ │ • upsert_candle │ │ • insert_data │ │ • OrderBook │ │
│ │ • get_candles │ │ • get_trades │ │ • UserTrades │ │
│ │ • get_latest │ │ • raw_websocket │ │ • Positions │ │
│ └─────────────────┘ └─────────────────┘ └──────────────┘ │
└─────────────────────────────────────────────────────────────┘
┌─────────────────┐
│ BaseRepository │
│ │
│ • Session Mgmt │
│ • Error Logging │
│ • DB Connection │
└─────────────────┘
```
## Quick Start
### Basic Usage
```python
from database.operations import get_database_operations
from data.common.data_types import OHLCVCandle
from datetime import datetime, timezone
# Get the database operations instance (singleton)
db = get_database_operations()
# Check database health
if not db.health_check():
    raise SystemExit("Database connection issue!")
# Store a candle
candle = OHLCVCandle(
exchange="okx",
symbol="BTC-USDT",
timeframe="5s",
open=50000.0,
high=50100.0,
low=49900.0,
close=50050.0,
volume=1.5,
trade_count=25,
start_time=datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc),
end_time=datetime(2024, 1, 1, 12, 0, 5, tzinfo=timezone.utc)
)
# Store candle (with duplicate handling)
success = db.market_data.upsert_candle(candle, force_update=False)
if success:
print("Candle stored successfully!")
```
### With Data Collectors
```python
import asyncio
from data.exchanges.okx import OKXCollector
from data.base_collector import DataType
from database.operations import get_database_operations
async def main():
# Initialize database operations
db = get_database_operations()
# The collector automatically uses the database operations module
collector = OKXCollector(
        symbol='BTC-USDT',
data_types=[DataType.TRADE],
store_raw_data=True, # Stores raw WebSocket data
force_update_candles=False # Ignore duplicate candles
)
await collector.start()
await asyncio.sleep(60) # Collect for 1 minute
await collector.stop()
# Check statistics
stats = db.get_stats()
print(f"Total candles: {stats['candle_count']}")
print(f"Total raw trades: {stats['trade_count']}")
asyncio.run(main())
```
## API Reference
### DatabaseOperations
Main entry point for all database operations.
#### Methods
##### `health_check() -> bool`
Test database connection health.
```python
db = get_database_operations()
if db.health_check():
print("✅ Database is healthy")
else:
print("❌ Database connection issues")
```
##### `get_stats() -> Dict[str, Any]`
Get comprehensive database statistics.
```python
stats = db.get_stats()
print(f"Candles: {stats['candle_count']:,}")
print(f"Raw trades: {stats['trade_count']:,}")
print(f"Health: {stats['healthy']}")
```
### MarketDataRepository
Repository for `market_data` table operations (candles/OHLCV data).
#### Methods
##### `upsert_candle(candle: OHLCVCandle, force_update: bool = False) -> bool`
Store or update candle data with configurable duplicate handling.
**Parameters:**
- `candle`: OHLCVCandle object to store
- `force_update`: If True, overwrites existing data; if False, ignores duplicates
**Returns:** True if successful, False otherwise
**Duplicate Handling:**
- `force_update=False`: Uses `ON CONFLICT DO NOTHING` (preserves existing candles)
- `force_update=True`: Uses `ON CONFLICT DO UPDATE SET` (overwrites existing candles)
```python
# Store new candle, ignore if duplicate exists
db.market_data.upsert_candle(candle, force_update=False)
# Store candle, overwrite if duplicate exists
db.market_data.upsert_candle(candle, force_update=True)
```
##### `get_candles(symbol: str, timeframe: str, start_time: datetime, end_time: datetime, exchange: str = "okx") -> List[Dict[str, Any]]`
Retrieve historical candle data.
```python
from datetime import datetime, timezone
candles = db.market_data.get_candles(
symbol="BTC-USDT",
timeframe="5s",
start_time=datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc),
end_time=datetime(2024, 1, 1, 13, 0, 0, tzinfo=timezone.utc),
exchange="okx"
)
for candle in candles:
print(f"{candle['timestamp']}: O={candle['open']} H={candle['high']} L={candle['low']} C={candle['close']}")
```
##### `get_latest_candle(symbol: str, timeframe: str, exchange: str = "okx") -> Optional[Dict[str, Any]]`
Get the most recent candle for a symbol/timeframe combination.
```python
latest = db.market_data.get_latest_candle("BTC-USDT", "5s")
if latest:
print(f"Latest 5s candle: {latest['close']} at {latest['timestamp']}")
else:
print("No candles found")
```
### RawTradeRepository
Repository for `raw_trades` table operations (raw WebSocket data).
#### Methods
##### `insert_market_data_point(data_point: MarketDataPoint) -> bool`
Store raw market data from WebSocket streams.
```python
from data.base_collector import MarketDataPoint, DataType
from datetime import datetime, timezone
data_point = MarketDataPoint(
exchange="okx",
symbol="BTC-USDT",
timestamp=datetime.now(timezone.utc),
data_type=DataType.TRADE,
data={"price": 50000, "size": 0.1, "side": "buy"}
)
success = db.raw_trades.insert_market_data_point(data_point)
```
##### `insert_raw_websocket_data(exchange: str, symbol: str, data_type: str, raw_data: Dict[str, Any], timestamp: Optional[datetime] = None) -> bool`
Store raw WebSocket data for debugging purposes.
```python
db.raw_trades.insert_raw_websocket_data(
exchange="okx",
symbol="BTC-USDT",
data_type="raw_trade",
raw_data={"instId": "BTC-USDT", "px": "50000", "sz": "0.1"},
timestamp=datetime.now(timezone.utc)
)
```
##### `get_raw_trades(symbol: str, data_type: str, start_time: datetime, end_time: datetime, exchange: str = "okx", limit: Optional[int] = None) -> List[Dict[str, Any]]`
Retrieve raw trade data for analysis.
```python
trades = db.raw_trades.get_raw_trades(
symbol="BTC-USDT",
data_type="trade",
start_time=datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc),
end_time=datetime(2024, 1, 1, 13, 0, 0, tzinfo=timezone.utc),
limit=1000
)
```
## Error Handling
The database operations module includes comprehensive error handling with custom exceptions.
### DatabaseOperationError
Custom exception for database operation failures.
```python
from database.operations import DatabaseOperationError
try:
db.market_data.upsert_candle(candle)
except DatabaseOperationError as e:
logger.error(f"Database operation failed: {e}")
# Handle the error appropriately
```
### Best Practices
1. **Always Handle Exceptions**: Wrap database operations in `try`/`except` blocks
2. **Check Health First**: Use `health_check()` before critical operations
3. **Monitor Performance**: Use `get_stats()` to monitor database growth
4. **Use Appropriate Repositories**: Use `market_data` for candles, `raw_trades` for raw data
5. **Handle Duplicates Appropriately**: Choose the right `force_update` setting
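Practices 1 and 2 can be combined into a small guard helper. This is an illustrative pattern, not part of the module's API; it takes callables so it works with any repository method:

```python
def guarded_write(health_check, write, logger=print):
    """Run a repository write only if the database is healthy; log and absorb failures."""
    if not health_check():
        logger("skipped: database unhealthy")
        return False
    try:
        return bool(write())
    except Exception as exc:  # in real code, catch DatabaseOperationError
        logger(f"write failed: {exc}")
        return False

# With the documented API this would be used as:
# db = get_database_operations()
# guarded_write(db.health_check, lambda: db.market_data.upsert_candle(candle))
```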
## Configuration
### Force Update Behavior
The `force_update_candles` parameter in collectors controls duplicate handling:
```python
# In OKX collector configuration
collector = OKXCollector(
    symbol='BTC-USDT',
force_update_candles=False # Default: ignore duplicates
)
# Or enable force updates
collector = OKXCollector(
    symbol='BTC-USDT',
force_update_candles=True # Overwrite existing candles
)
```
### Logging Integration
Database operations automatically integrate with the application's logging system:
```python
import logging
from database.operations import get_database_operations
logger = logging.getLogger(__name__)
db = get_database_operations(logger)
# All database operations will now log through your logger
db.market_data.upsert_candle(candle) # Logs: "Stored candle: BTC-USDT 5s at ..."
```
## Migration from Direct SQL
If you have existing code using direct SQL, here's how to migrate:
### Before (Direct SQL - ❌ Don't do this)
```python
# OLD WAY - direct SQL queries
from database.connection import get_db_manager
from sqlalchemy import text
db_manager = get_db_manager()
with db_manager.get_session() as session:
session.execute(text("""
INSERT INTO market_data (exchange, symbol, timeframe, ...)
VALUES (:exchange, :symbol, :timeframe, ...)
"""), {'exchange': 'okx', 'symbol': 'BTC-USDT', ...})
session.commit()
```
### After (Repository Pattern - ✅ Correct way)
```python
# NEW WAY - using repository pattern
from database.operations import get_database_operations
from data.common.data_types import OHLCVCandle
db = get_database_operations()
candle = OHLCVCandle(...) # Create candle object
success = db.market_data.upsert_candle(candle)
```
## Performance Considerations
### Connection Pooling
The database operations module automatically manages connection pooling through the underlying `DatabaseManager`.
### Batch Operations
For high-throughput scenarios, write candles in a tight loop. Each `upsert_candle` call runs in its own managed transaction, and the pooled connection is reused across calls, so per-call overhead stays low:
```python
# Store multiple candles; each upsert is committed independently
candles = [candle1, candle2, candle3, ...]
for candle in candles:
    db.market_data.upsert_candle(candle)
```
### Monitoring
Monitor database performance using the built-in statistics:
```python
import time
# Monitor database load
while True:
stats = db.get_stats()
print(f"Candles: {stats['candle_count']:,}, Health: {stats['healthy']}")
time.sleep(30)
```
## Troubleshooting
### Common Issues
#### 1. Connection Errors
```python
if not db.health_check():
logger.error("Database connection failed - check connection settings")
```
#### 2. Duplicate Key Errors
```python
# Use force_update=False to ignore duplicates
db.market_data.upsert_candle(candle, force_update=False)
```
#### 3. Transaction Errors
The repository automatically handles session management, but if you encounter issues:
```python
try:
db.market_data.upsert_candle(candle)
except DatabaseOperationError as e:
logger.error(f"Transaction failed: {e}")
```
### Debug Mode
Enable database query logging for debugging:
```python
# Set environment variable
import os
os.environ['DEBUG'] = 'true'
# This will log all SQL queries
db = get_database_operations()
```
## Related Documentation
- **[Database Connection](../architecture/database.md)** - Connection pooling and configuration
- **[Data Collectors](data_collectors.md)** - How collectors use database operations
- **[Architecture Overview](../architecture/architecture.md)** - System design patterns
---
*This documentation covers the repository pattern implementation in `database/operations.py`. For database schema details, see the [Architecture Documentation](../architecture/).*
# Exchange Integrations
This section provides documentation for integrating with different cryptocurrency exchanges.
## Architecture
The platform uses a modular architecture for exchange integration, allowing for easy addition of new exchanges without modifying core application logic.
### Core Components
- **`BaseDataCollector`**: An abstract base class defining the standard interface for all exchange collectors.
- **`ExchangeFactory`**: A factory for creating exchange-specific collector instances.
- **Exchange-Specific Modules**: Each exchange has its own module containing the collector implementation and any specific data processing logic.
For a high-level overview of the data collection system, see the [Data Collectors Documentation (`../data_collectors.md`)](../data_collectors.md).
## Supported Exchanges
### OKX
- **Status**: Production Ready
- **Features**: Real-time trades, order book, and ticker data.
- **Documentation**: [OKX Collector Guide](okx.md)
### Binance
- **Status**: Planned
- **Features**: To be determined.
### Coinbase
- **Status**: Planned
- **Features**: To be determined.
## Adding a New Exchange
To add support for a new exchange, you need to:
1. Create a new module in the `data/exchanges/` directory.
2. Implement a new collector class that inherits from `BaseDataCollector`.
3. Implement the exchange-specific WebSocket connection and data parsing logic.
4. Register the new collector in the `ExchangeFactory`.
5. Add a new documentation file in this directory explaining the implementation details.
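A minimal skeleton of step 2 is sketched below. The `KrakenCollector` name and the stand-in base class are hypothetical; a real collector would inherit the actual `BaseDataCollector` from `data/base_collector.py` and implement its full interface:

```python
import asyncio
from abc import ABC, abstractmethod

# Stand-in for data.base_collector.BaseDataCollector (interface per the API reference)
class BaseDataCollector(ABC):
    @abstractmethod
    async def start(self): ...

    @abstractmethod
    async def stop(self): ...

class KrakenCollector(BaseDataCollector):  # hypothetical new exchange
    def __init__(self, symbol: str):
        self.symbol = symbol
        self.running = False

    async def start(self):
        # Real implementation: open the WebSocket and send subscribe messages
        self.running = True

    async def stop(self):
        # Real implementation: unsubscribe and close the connection
        self.running = False

async def demo() -> bool:
    collector = KrakenCollector("BTC-USD")
    await collector.start()
    assert collector.running
    await collector.stop()
    return collector.running

print(asyncio.run(demo()))  # → False
```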
---
*Back to [Modules Documentation](../README.md)*
# OKX Data Collector Documentation
## Overview
The OKX Data Collector provides real-time market data collection from OKX exchange using WebSocket API. It's built on the modular exchange architecture and provides robust connection management, automatic reconnection, health monitoring, and comprehensive data processing.
## Features
### 🎯 **OKX-Specific Features**
- **Real-time Data**: Live trades, orderbook, and ticker data
- **Single Pair Focus**: Each collector handles one trading pair for better isolation
- **Ping/Pong Management**: OKX-specific keepalive mechanism with proper format
- **Raw Data Storage**: Optional storage of raw OKX messages for debugging
- **Connection Resilience**: Robust reconnection logic for OKX WebSocket
### 📊 **Supported Data Types**
- **Trades**: Real-time trade executions (`trades` channel)
- **Orderbook**: 5-level order book depth (`books5` channel)
- **Ticker**: 24h ticker statistics (`tickers` channel)
- **Candles**: Real-time OHLCV aggregation (1s, 5s, 10s, 15s, 30s, 1m, 5m, 15m, 1h, 4h, 1d)
### 🔧 **Configuration Options**
- Auto-restart on failures
- Health check intervals
- Raw data storage toggle
- Custom ping/pong timing
- Reconnection attempts configuration
- Multi-timeframe candle aggregation
## Quick Start
### 1. Using Factory Pattern (Recommended)
```python
import asyncio
from data.exchanges import create_okx_collector
from data.base_collector import DataType
async def main():
# Create OKX collector using convenience function
collector = create_okx_collector(
symbol='BTC-USDT',
data_types=[DataType.TRADE, DataType.ORDERBOOK],
auto_restart=True,
health_check_interval=30.0,
store_raw_data=True
)
# Add data callbacks
def on_trade(data_point):
trade = data_point.data
print(f"Trade: {trade['side']} {trade['sz']} @ {trade['px']} (ID: {trade['tradeId']})")
def on_orderbook(data_point):
book = data_point.data
if book.get('bids') and book.get('asks'):
best_bid = book['bids'][0]
best_ask = book['asks'][0]
print(f"Orderbook: Bid {best_bid[0]}@{best_bid[1]} Ask {best_ask[0]}@{best_ask[1]}")
collector.add_data_callback(DataType.TRADE, on_trade)
collector.add_data_callback(DataType.ORDERBOOK, on_orderbook)
# Start collector
await collector.start()
# Run for 60 seconds
await asyncio.sleep(60)
# Stop gracefully
await collector.stop()
asyncio.run(main())
```
### 2. Direct OKX Collector Usage
```python
import asyncio
from data.exchanges.okx import OKXCollector
from data.base_collector import DataType
async def main():
# Create collector directly
collector = OKXCollector(
symbol='ETH-USDT',
data_types=[DataType.TRADE, DataType.ORDERBOOK],
component_name='eth_collector',
auto_restart=True,
health_check_interval=30.0,
store_raw_data=True
)
# Add callbacks
def on_data(data_point):
print(f"{data_point.data_type.value}: {data_point.symbol} - {data_point.timestamp}")
collector.add_data_callback(DataType.TRADE, on_data)
collector.add_data_callback(DataType.ORDERBOOK, on_data)
# Start and monitor
await collector.start()
# Monitor status
for i in range(12): # 60 seconds total
await asyncio.sleep(5)
status = collector.get_status()
print(f"Status: {status['status']} - Messages: {status.get('messages_processed', 0)}")
await collector.stop()
asyncio.run(main())
```
### 3. Multiple OKX Collectors with Manager
```python
import asyncio
from data.collector_manager import CollectorManager
from data.exchanges import create_okx_collector
from data.base_collector import DataType
async def main():
# Create manager
manager = CollectorManager(
manager_name="okx_trading_system",
global_health_check_interval=30.0
)
# Create multiple OKX collectors
symbols = ['BTC-USDT', 'ETH-USDT', 'SOL-USDT']
for symbol in symbols:
collector = create_okx_collector(
symbol=symbol,
data_types=[DataType.TRADE, DataType.ORDERBOOK],
auto_restart=True
)
manager.add_collector(collector)
# Start manager
await manager.start()
# Monitor all collectors
while True:
status = manager.get_status()
stats = status.get('statistics', {})
print(f"=== OKX Collectors Status ===")
print(f"Running: {stats.get('running_collectors', 0)}")
print(f"Failed: {stats.get('failed_collectors', 0)}")
print(f"Total messages: {stats.get('total_messages', 0)}")
# Individual collector status
for collector_name in manager.list_collectors():
collector_status = manager.get_collector_status(collector_name)
if collector_status:
info = collector_status.get('status', {})
print(f" {collector_name}: {info.get('status')} - "
f"Messages: {info.get('messages_processed', 0)}")
await asyncio.sleep(15)
asyncio.run(main())
```
### 4. Multi-Timeframe Candle Processing
```python
import asyncio
from data.exchanges.okx import OKXCollector
from data.base_collector import DataType
from data.common import CandleProcessingConfig
async def main():
# Configure multi-timeframe candle processing
candle_config = CandleProcessingConfig(
timeframes=['1s', '5s', '10s', '15s', '30s', '1m', '5m', '15m', '1h'],
auto_save_candles=True,
emit_incomplete_candles=False
)
# Create collector with candle processing
collector = OKXCollector(
symbol='BTC-USDT',
data_types=[DataType.TRADE], # Trades needed for candle aggregation
candle_config=candle_config,
auto_restart=True,
store_raw_data=False # Disable raw storage for production
)
# Add candle callback
def on_candle_completed(candle):
print(f"Completed {candle.timeframe} candle: "
f"OHLCV=({candle.open},{candle.high},{candle.low},{candle.close},{candle.volume}) "
f"at {candle.end_time}")
collector.add_candle_callback(on_candle_completed)
# Start collector
await collector.start()
# Monitor real-time candle generation
await asyncio.sleep(300) # 5 minutes
await collector.stop()
asyncio.run(main())
```
## Configuration
### 1. JSON Configuration File
The system uses `config/okx_config.json` for configuration:
```json
{
"exchange": "okx",
"connection": {
"public_ws_url": "wss://ws.okx.com:8443/ws/v5/public",
"private_ws_url": "wss://ws.okx.com:8443/ws/v5/private",
"ping_interval": 25.0,
"pong_timeout": 10.0,
"max_reconnect_attempts": 5,
"reconnect_delay": 5.0
},
"data_collection": {
"store_raw_data": true,
"health_check_interval": 30.0,
"auto_restart": true,
"buffer_size": 1000
},
"factory": {
"use_factory_pattern": true,
"default_data_types": ["trade", "orderbook"],
"batch_create": true
},
"trading_pairs": [
{
"symbol": "BTC-USDT",
"enabled": true,
"data_types": ["trade", "orderbook"],
"channels": {
"trades": "trades",
"orderbook": "books5",
"ticker": "tickers"
}
},
{
"symbol": "ETH-USDT",
"enabled": true,
"data_types": ["trade", "orderbook"],
"channels": {
"trades": "trades",
"orderbook": "books5",
"ticker": "tickers"
}
}
],
"logging": {
"component_name_template": "okx_collector_{symbol}",
"log_level": "INFO",
"verbose": false
},
"database": {
"store_processed_data": true,
"store_raw_data": true,
"batch_size": 100,
"flush_interval": 5.0
},
"monitoring": {
"enable_health_checks": true,
"health_check_interval": 30.0,
"alert_on_connection_loss": true,
"max_consecutive_errors": 5
}
}
```
### 2. Programmatic Configuration
```python
from data.exchanges.okx import OKXCollector
from data.base_collector import DataType
# Custom configuration
collector = OKXCollector(
symbol='BTC-USDT',
data_types=[DataType.TRADE, DataType.ORDERBOOK],
component_name='custom_btc_collector',
auto_restart=True,
health_check_interval=15.0, # Check every 15 seconds
store_raw_data=True # Store raw OKX messages
)
```
### 3. Factory Configuration
```python
from data.exchanges import ExchangeFactory, ExchangeCollectorConfig
from data.base_collector import DataType
config = ExchangeCollectorConfig(
exchange='okx',
symbol='ETH-USDT',
data_types=[DataType.TRADE, DataType.ORDERBOOK],
auto_restart=True,
health_check_interval=30.0,
store_raw_data=True,
custom_params={
'ping_interval': 20.0, # Custom ping interval
'max_reconnect_attempts': 10, # More reconnection attempts
'pong_timeout': 15.0 # Longer pong timeout
}
)
collector = ExchangeFactory.create_collector(config)
```
## Data Processing
### OKX Message Formats
#### Trade Data
```python
# Raw OKX trade message
{
"arg": {
"channel": "trades",
"instId": "BTC-USDT"
},
"data": [
{
"instId": "BTC-USDT",
"tradeId": "12345678",
"px": "50000.5", # Price
"sz": "0.001", # Size
"side": "buy", # Side (buy/sell)
"ts": "1697123456789" # Timestamp (ms)
}
]
}
# Processed MarketDataPoint
MarketDataPoint(
exchange="okx",
symbol="BTC-USDT",
    timestamp=datetime(2023, 10, 12, 15, 10, 56, 789000, tzinfo=timezone.utc),
data_type=DataType.TRADE,
data={
"instId": "BTC-USDT",
"tradeId": "12345678",
"px": "50000.5",
"sz": "0.001",
"side": "buy",
"ts": "1697123456789"
}
)
```
#### Orderbook Data
```python
# Raw OKX orderbook message (books5)
{
"arg": {
"channel": "books5",
"instId": "BTC-USDT"
},
"data": [
{
"asks": [
["50001.0", "0.5", "0", "3"], # [price, size, liquidated, orders]
["50002.0", "1.0", "0", "5"]
],
"bids": [
["50000.0", "0.8", "0", "2"],
["49999.0", "1.2", "0", "4"]
],
"ts": "1697123456789",
"checksum": "123456789"
}
]
}
# Usage in callback
def on_orderbook(data_point):
book = data_point.data
if book.get('bids') and book.get('asks'):
best_bid = book['bids'][0]
best_ask = book['asks'][0]
spread = float(best_ask[0]) - float(best_bid[0])
print(f"Spread: ${spread:.2f}")
```
#### Ticker Data
```python
# Raw OKX ticker message
{
"arg": {
"channel": "tickers",
"instId": "BTC-USDT"
},
"data": [
{
"instType": "SPOT",
"instId": "BTC-USDT",
"last": "50000.5", # Last price
"lastSz": "0.001", # Last size
"askPx": "50001.0", # Best ask price
"askSz": "0.5", # Best ask size
"bidPx": "50000.0", # Best bid price
"bidSz": "0.8", # Best bid size
"open24h": "49500.0", # 24h open
"high24h": "50500.0", # 24h high
"low24h": "49000.0", # 24h low
"vol24h": "1234.567", # 24h volume
"ts": "1697123456789"
}
]
}
```
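A ticker callback typically derives display values from the string-typed fields above. A small self-contained helper (the `summarize_ticker` name is illustrative) computing the 24h change from `last` and `open24h`:

```python
def summarize_ticker(t: dict) -> str:
    """Format last price and 24h change from OKX ticker fields (string-typed)."""
    last = float(t["last"])
    open24h = float(t["open24h"])
    change_pct = (last - open24h) / open24h * 100
    return f"{t['instId']}: last={last} 24h_change={change_pct:+.2f}%"

ticker = {"instId": "BTC-USDT", "last": "50000.5", "open24h": "49500.0"}
print(summarize_ticker(ticker))  # → BTC-USDT: last=50000.5 24h_change=+1.01%
```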
### Data Validation
The OKX collector includes comprehensive data validation:
```python
# Automatic validation in collector
class OKXCollector(BaseDataCollector):
async def _process_data_item(self, channel: str, data_item: Dict[str, Any]):
# Validate message structure
if not isinstance(data_item, dict):
self.logger.warning("Invalid data item type")
return None
# Validate required fields based on channel
if channel == "trades":
required_fields = ['tradeId', 'px', 'sz', 'side', 'ts']
elif channel == "books5":
required_fields = ['bids', 'asks', 'ts']
elif channel == "tickers":
required_fields = ['last', 'ts']
else:
self.logger.warning(f"Unknown channel: {channel}")
return None
# Check required fields
for field in required_fields:
if field not in data_item:
self.logger.warning(f"Missing required field '{field}' in {channel} data")
return None
# Process and return validated data
return await self._create_market_data_point(channel, data_item)
```
## Monitoring and Status
### Status Information
```python
# Get comprehensive status
status = collector.get_status()
print(f"Exchange: {status['exchange']}") # 'okx'
print(f"Symbol: {status['symbol']}") # 'BTC-USDT'
print(f"Status: {status['status']}") # 'running'
print(f"WebSocket Connected: {status['websocket_connected']}") # True/False
print(f"WebSocket State: {status['websocket_state']}") # 'connected'
print(f"Messages Processed: {status['messages_processed']}") # Integer
print(f"Errors: {status['errors']}") # Integer
print(f"Last Trade ID: {status['last_trade_id']}") # String or None
# WebSocket statistics
if 'websocket_stats' in status:
ws_stats = status['websocket_stats']
print(f"Messages Received: {ws_stats['messages_received']}")
print(f"Messages Sent: {ws_stats['messages_sent']}")
print(f"Pings Sent: {ws_stats['pings_sent']}")
print(f"Pongs Received: {ws_stats['pongs_received']}")
print(f"Reconnections: {ws_stats['reconnections']}")
```
### Health Monitoring
```python
# Get health status
health = collector.get_health_status()
print(f"Is Healthy: {health['is_healthy']}") # True/False
print(f"Issues: {health['issues']}") # List of issues
print(f"Last Heartbeat: {health['last_heartbeat']}") # ISO timestamp
print(f"Last Data: {health['last_data_received']}") # ISO timestamp
print(f"Should Be Running: {health['should_be_running']}") # True/False
print(f"Is Running: {health['is_running']}") # True/False
# Auto-restart status
if not health['is_healthy']:
print("Collector is unhealthy - auto-restart will trigger")
for issue in health['issues']:
print(f" Issue: {issue}")
```
### Performance Monitoring
```python
import time
async def monitor_performance():
collector = create_okx_collector('BTC-USDT', [DataType.TRADE])
await collector.start()
start_time = time.time()
last_message_count = 0
while True:
await asyncio.sleep(10) # Check every 10 seconds
status = collector.get_status()
current_messages = status.get('messages_processed', 0)
# Calculate message rate
elapsed = time.time() - start_time
messages_per_second = current_messages / elapsed if elapsed > 0 else 0
# Calculate recent rate
recent_messages = current_messages - last_message_count
recent_rate = recent_messages / 10 # per second over last 10 seconds
print(f"=== Performance Stats ===")
print(f"Total Messages: {current_messages}")
print(f"Average Rate: {messages_per_second:.2f} msg/sec")
print(f"Recent Rate: {recent_rate:.2f} msg/sec")
print(f"Errors: {status.get('errors', 0)}")
print(f"WebSocket State: {status.get('websocket_state', 'unknown')}")
last_message_count = current_messages
# Run performance monitoring
asyncio.run(monitor_performance())
```
## WebSocket Connection Details
### OKX WebSocket Client
The OKX implementation includes a specialized WebSocket client:
```python
from data.exchanges.okx import OKXWebSocketClient, OKXSubscription, OKXChannelType
# Create WebSocket client directly (usually handled by collector)
ws_client = OKXWebSocketClient(
component_name='okx_ws_btc',
ping_interval=25.0, # Must be < 30 seconds for OKX
pong_timeout=10.0,
max_reconnect_attempts=5,
reconnect_delay=5.0
)
# Connect to OKX
await ws_client.connect(use_public=True)
# Create subscriptions
subscriptions = [
OKXSubscription(
channel=OKXChannelType.TRADES.value,
inst_id='BTC-USDT',
enabled=True
),
OKXSubscription(
channel=OKXChannelType.BOOKS5.value,
inst_id='BTC-USDT',
enabled=True
)
]
# Subscribe to channels
await ws_client.subscribe(subscriptions)
# Add message callback
def on_message(message):
print(f"Received: {message}")
ws_client.add_message_callback(on_message)
# WebSocket will handle messages automatically
await asyncio.sleep(60)
# Disconnect
await ws_client.disconnect()
```
### Connection States
The WebSocket client tracks connection states:
```python
from data.exchanges.okx.websocket import ConnectionState
# Check connection state
state = ws_client.connection_state
if state == ConnectionState.CONNECTED:
print("WebSocket is connected and ready")
elif state == ConnectionState.CONNECTING:
print("WebSocket is connecting...")
elif state == ConnectionState.RECONNECTING:
print("WebSocket is reconnecting...")
elif state == ConnectionState.DISCONNECTED:
print("WebSocket is disconnected")
elif state == ConnectionState.ERROR:
    print("WebSocket encountered an error")
```
### Ping/Pong Mechanism
OKX requires a specific ping/pong format:
```python
# OKX expects simple "ping" string (not JSON)
# The WebSocket client handles this automatically:
# Send: "ping"
# Receive: "pong"
# This is handled automatically by OKXWebSocketClient
# Ping interval must be < 30 seconds to avoid disconnection
```
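The timing constraint can be illustrated with a plain `asyncio` keepalive loop. `FakeSocket` below is a stand-in of our own for demonstration; the real `OKXWebSocketClient` handles all of this internally:

```python
import asyncio

class FakeSocket:
    """Stand-in for a WebSocket connection that answers every ping with a pong."""
    def __init__(self) -> None:
        self.pings_sent = 0
        self.pongs_received = 0

    async def send(self, message: str) -> None:
        if message == "ping":  # OKX expects the literal string, not JSON
            self.pings_sent += 1

    async def recv(self) -> str:
        return "pong"

async def keepalive(sock: FakeSocket, ping_interval: float,
                    pong_timeout: float, cycles: int) -> None:
    assert ping_interval < 30.0, "OKX drops connections idle for 30 seconds"
    for _ in range(cycles):
        await sock.send("ping")
        if await asyncio.wait_for(sock.recv(), timeout=pong_timeout) == "pong":
            sock.pongs_received += 1
        await asyncio.sleep(ping_interval)

sock = FakeSocket()
# Tiny interval for demonstration only; the real client uses 25 seconds
asyncio.run(keepalive(sock, ping_interval=0.01, pong_timeout=1.0, cycles=3))
print(sock.pings_sent, sock.pongs_received)  # 3 3
```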
## Error Handling & Resilience
The OKX collector includes comprehensive error handling and automatic recovery mechanisms:
### Connection Management
- **Automatic Reconnection**: Handles network disconnections with exponential backoff
- **Task Synchronization**: Prevents race conditions during reconnection using asyncio locks
- **Graceful Shutdown**: Properly cancels background tasks and closes connections
- **Connection State Tracking**: Monitors connection health and validity
### Enhanced WebSocket Handling (v2.1+)
- **Race Condition Prevention**: Uses synchronization locks to prevent multiple recv() calls
- **Task Lifecycle Management**: Properly manages background task startup and shutdown
- **Reconnection Locking**: Prevents concurrent reconnection attempts
- **Subscription Persistence**: Automatically re-subscribes to channels after reconnection
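The exact reconnection schedule is internal to the WebSocket client; a typical capped exponential backoff, sketched here with the client's default 5-second base delay (the function name and `jitter` parameter are ours), looks like:

```python
import random

def reconnect_delay(attempt: int, base: float = 5.0, cap: float = 60.0,
                    jitter: float = 0.0) -> float:
    """Delay before reconnect attempt N: base * 2**N, capped, plus optional jitter."""
    return min(cap, base * (2 ** attempt)) + random.uniform(0.0, jitter)

print([reconnect_delay(a) for a in range(5)])  # [5.0, 10.0, 20.0, 40.0, 60.0]
```

Jitter spreads simultaneous reconnects from many collectors so they do not hammer the exchange at the same instant.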
```python
# The collector handles these scenarios automatically:
# - Network interruptions
# - WebSocket connection drops
# - OKX server maintenance
# - Rate limiting responses
# - Malformed data packets
# Enhanced error logging for diagnostics
from data.base_collector import DataType
from data.exchanges.okx.collector import OKXCollector

collector = OKXCollector('BTC-USDT', [DataType.TRADE])
stats = collector.get_status()
print(f"Connection state: {stats['connection_state']}")
print(f"Reconnection attempts: {stats['reconnect_attempts']}")
print(f"Error count: {stats['error_count']}")
```
### Common Error Patterns
#### WebSocket Concurrency Errors (Fixed in v2.1)
```
ERROR: cannot call recv while another coroutine is already running recv or recv_streaming
```
**Solution**: Updated WebSocket client with proper task synchronization and reconnection locking.
#### Connection Recovery
```python
# Monitor connection health
async def monitor_connection():
while True:
if collector.is_connected():
print("✅ Connected and receiving data")
else:
print("❌ Connection issue - auto-recovery in progress")
await asyncio.sleep(30)
```
## Testing
### Unit Tests
Run the existing test scripts:
```bash
# Test single collector
python scripts/test_okx_collector.py single
# Test collector manager
python scripts/test_okx_collector.py manager
# Test factory pattern
python scripts/test_exchange_factory.py
```
### Custom Testing
```python
import asyncio
from data.exchanges import create_okx_collector
from data.base_collector import DataType
async def test_okx_collector():
"""Test OKX collector functionality."""
# Test data collection
message_count = 0
error_count = 0
def on_trade(data_point):
nonlocal message_count
message_count += 1
print(f"Trade #{message_count}: {data_point.data.get('tradeId')}")
def on_error(error):
nonlocal error_count
error_count += 1
print(f"Error #{error_count}: {error}")
# Create and configure collector
collector = create_okx_collector(
symbol='BTC-USDT',
data_types=[DataType.TRADE],
auto_restart=True
)
collector.add_data_callback(DataType.TRADE, on_trade)
# Test lifecycle
print("Starting collector...")
await collector.start()
print("Collecting data for 30 seconds...")
await asyncio.sleep(30)
print("Stopping collector...")
await collector.stop()
# Check results
status = collector.get_status()
print(f"Final status: {status['status']}")
print(f"Messages processed: {status.get('messages_processed', 0)}")
print(f"Errors: {status.get('errors', 0)}")
assert message_count > 0, "No messages received"
assert error_count == 0, f"Unexpected errors: {error_count}"
print("Test passed!")
# Run test
asyncio.run(test_okx_collector())
```
## Production Deployment
### Recommended Configuration
```python
# Production-ready OKX collector setup
import asyncio
from data.collector_manager import CollectorManager
from data.exchanges import create_okx_collector
from data.base_collector import DataType
async def deploy_okx_production():
"""Production deployment configuration."""
# Create manager with appropriate settings
manager = CollectorManager(
manager_name="okx_production",
global_health_check_interval=30.0, # Check every 30 seconds
restart_delay=10.0 # Wait 10 seconds between restarts
)
# Production trading pairs
trading_pairs = [
'BTC-USDT', 'ETH-USDT', 'SOL-USDT',
'DOGE-USDT', 'TON-USDT', 'UNI-USDT'
]
# Create collectors with production settings
for symbol in trading_pairs:
collector = create_okx_collector(
symbol=symbol,
data_types=[DataType.TRADE, DataType.ORDERBOOK],
auto_restart=True,
health_check_interval=15.0, # More frequent health checks
store_raw_data=False # Disable raw data storage in production
)
manager.add_collector(collector)
# Start system
await manager.start()
# Production monitoring loop
try:
while True:
await asyncio.sleep(60) # Check every minute
status = manager.get_status()
stats = status.get('statistics', {})
# Log production metrics
print(f"=== Production Status ===")
print(f"Running: {stats.get('running_collectors', 0)}/{len(trading_pairs)}")
print(f"Failed: {stats.get('failed_collectors', 0)}")
print(f"Total restarts: {stats.get('restarts_performed', 0)}")
# Alert on failures
failed_count = stats.get('failed_collectors', 0)
if failed_count > 0:
print(f"ALERT: {failed_count} collectors failed!")
# Implement alerting system here
except KeyboardInterrupt:
print("Shutting down production system...")
await manager.stop()
print("Production system stopped")
# Deploy to production
asyncio.run(deploy_okx_production())
```
### Docker Deployment
```dockerfile
# Dockerfile for OKX collector
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# Production command
CMD ["python", "-m", "scripts.deploy_okx_production"]
```
### Environment Variables
```bash
# Production environment variables
export LOG_LEVEL=INFO
export OKX_ENV=production
export HEALTH_CHECK_INTERVAL=30
export AUTO_RESTART=true
export STORE_RAW_DATA=false
export DATABASE_URL=postgresql://user:pass@host:5432/db
```
## API Reference
### OKXCollector Class
```python
class OKXCollector(BaseDataCollector):
def __init__(self,
symbol: str,
data_types: Optional[List[DataType]] = None,
component_name: Optional[str] = None,
auto_restart: bool = True,
health_check_interval: float = 30.0,
store_raw_data: bool = True):
"""
Initialize OKX collector.
Args:
symbol: Trading symbol (e.g., 'BTC-USDT')
data_types: Data types to collect (default: [TRADE, ORDERBOOK])
component_name: Name for logging (default: auto-generated)
auto_restart: Enable automatic restart on failures
health_check_interval: Seconds between health checks
store_raw_data: Whether to store raw OKX data
"""
```
## Key Components
The OKX collector consists of three main components working together:
### `OKXCollector`
- **Main class**: `OKXCollector(BaseDataCollector)`
- **Responsibilities**:
- Manages WebSocket connection state
- Subscribes to required data channels
- Dispatches raw messages to the data processor
- Stores standardized data in the database
- Provides health and status monitoring
### `OKXWebSocketClient`
- **Handles WebSocket communication**: `OKXWebSocketClient`
- **Responsibilities**:
- Manages connection, reconnection, and ping/pong
- Decodes incoming messages
- Handles authentication for private channels
### `OKXDataProcessor`
- **New in v2.0**: `OKXDataProcessor`
- **Responsibilities**:
- Validates incoming raw data from WebSocket
- Transforms data into standardized `StandardizedTrade` and `OHLCVCandle` formats
- Aggregates trades into OHLCV candles
- Invokes callbacks for processed trades and completed candles
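The trade-to-candle aggregation step can be sketched in isolation as a fold over `(price, size)` pairs; this is a minimal illustration, while the real processor also tracks timeframe boundaries and timestamps:

```python
from dataclasses import dataclass

@dataclass
class Candle:
    open: float
    high: float
    low: float
    close: float
    volume: float

def aggregate_trades(trades: list[tuple[float, float]]) -> Candle:
    """Fold (price, size) trades, in time order, into a single OHLCV candle."""
    prices = [price for price, _ in trades]
    return Candle(
        open=prices[0],       # first trade price
        high=max(prices),
        low=min(prices),
        close=prices[-1],     # last trade price
        volume=sum(size for _, size in trades),
    )

candle = aggregate_trades([(100.0, 0.5), (101.5, 0.25), (99.8, 1.0), (100.7, 0.25)])
print(candle)  # Candle(open=100.0, high=101.5, low=99.8, close=100.7, volume=2.0)
```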
## Configuration
### `OKXCollector` Configuration
Configuration options for the `OKXCollector` class:
| Parameter | Type | Default | Description |
|-------------------------|---------------------|---------------------------------------|-----------------------------------------------------------------------------|
| `symbol` | `str` | - | Trading symbol (e.g., `BTC-USDT`) |
| `data_types` | `List[DataType]` | `[TRADE, ORDERBOOK]` | List of data types to collect |
| `auto_restart` | `bool` | `True` | Automatically restart on failures |
| `health_check_interval` | `float` | `30.0` | Seconds between health checks |
| `store_raw_data` | `bool` | `True` | Store raw WebSocket data for debugging |
| `force_update_candles` | `bool` | `False` | If `True`, update existing candles; if `False`, keep existing ones unchanged |
| `logger` | `Logger` | `None` | Logger instance for conditional logging |
| `log_errors_only` | `bool` | `False` | If `True` and logger provided, only log error-level messages |
### Health & Status Monitoring
```python
import json

status = collector.get_status()
print(json.dumps(status, indent=2))
```
Example output:
```json
{
"component_name": "okx_collector_btc_usdt",
"status": "running",
"uptime": "0:10:15.123456",
"symbol": "BTC-USDT",
"data_types": ["trade", "orderbook"],
"connection_state": "connected",
"last_health_check": "2023-11-15T10:30:00Z",
"message_count": 1052,
"processed_trades": 512,
"processed_candles": 10,
"error_count": 2
}
```
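A monitoring script might derive alerts from such a snapshot. The helper below is illustrative only, keyed to the fields shown above:

```python
def summarize_status(status: dict) -> list[str]:
    """Turn a collector status snapshot into human-readable alert strings."""
    alerts = []
    if status.get("status") != "running":
        alerts.append(f"collector status is {status.get('status')!r}")
    if status.get("connection_state") != "connected":
        alerts.append(f"websocket state is {status.get('connection_state')!r}")
    if status.get("error_count", 0) > 0:
        alerts.append(f"{status['error_count']} errors recorded")
    return alerts

snapshot = {
    "component_name": "okx_collector_btc_usdt",
    "status": "running",
    "connection_state": "connected",
    "error_count": 2,
}
print(summarize_status(snapshot))  # ['2 errors recorded']
```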
## Database Integration

File: `docs/modules/logging.md`
# Unified Logging System
The TCP Dashboard project uses a unified logging system that provides consistent, centralized logging across all components with advanced conditional logging capabilities.
## Key Features
- **Component-based logging**: Each component (e.g., `bot_manager`, `data_collector`) gets its own dedicated logger and log directory under `logs/`.
- **Centralized control**: `UnifiedLogger` class manages all logger instances, ensuring consistent configuration.
- **Date-based rotation**: Log files are automatically rotated daily (e.g., `2023-11-15.txt`).
- **Unified format**: All log messages follow `[YYYY-MM-DD HH:MM:SS - LEVEL - message]`.
- **Verbose console logging**: Optional console output for real-time monitoring, controlled by environment variables.
- **Automatic cleanup**: Old log files are automatically removed to save disk space.
- **Thread-safe**: Safe for use in multi-threaded applications.
- **Error handling**: Graceful fallback to console logging if file logging fails.
- **Conditional logging**: Components can operate with or without loggers.
- **Error-only logging**: Option to log only error-level messages.
- **Hierarchical logging**: Parent components can pass loggers to children, giving consistent logging across component hierarchies.
## Conditional Logging System
The TCP Dashboard implements a sophisticated conditional logging system that allows components to work with or without loggers, providing maximum flexibility for different deployment scenarios.
### Key Concepts
1. **Optional Logging**: Components accept `logger=None` and function normally without logging
2. **Error-Only Mode**: Components can log only error-level messages with `log_errors_only=True`
3. **Logger Inheritance**: Parent components pass their logger to child components
4. **Hierarchical Structure**: Log files are organized by component hierarchy
### Component Hierarchy
```
Top-level Application (individual logger)
├── ProductionManager (individual logger)
│ ├── DataSaver (receives logger from ProductionManager)
│ ├── DataValidator (receives logger from ProductionManager)
│ ├── DatabaseConnection (receives logger from ProductionManager)
│ └── CollectorManager (individual logger)
│       ├── OKX collector BTC-USDT (individual logger)
│ │ ├── DataAggregator (receives logger from OKX collector)
│ │ ├── DataTransformer (receives logger from OKX collector)
│ │ └── DataProcessor (receives logger from OKX collector)
│ └── Another collector...
```
### Usage Patterns
#### 1. No Logging
```python
from data.collector_manager import CollectorManager
from data.exchanges.okx.collector import OKXCollector
# Components work without any logging
manager = CollectorManager(logger=None)
collector = OKXCollector("BTC-USDT", logger=None)
# No log files created, no console output
# Components function normally without exceptions
```
#### 2. Normal Logging
```python
from utils.logger import get_logger
from data.collector_manager import CollectorManager
# Create logger for the manager
logger = get_logger('production_manager')
# Manager logs all activities
manager = CollectorManager(logger=logger)
# Child components inherit the logger
collector = manager.add_okx_collector("BTC-USDT") # Uses manager's logger
```
#### 3. Error-Only Logging
```python
from utils.logger import get_logger
from data.exchanges.okx.collector import OKXCollector
# Create logger but only log errors
logger = get_logger('critical_only')
# Only error and critical messages are logged
collector = OKXCollector(
"BTC-USDT",
logger=logger,
log_errors_only=True
)
# Debug, info, warning messages are suppressed
# Error and critical messages are always logged
```
#### 4. Hierarchical Logging
```python
from utils.logger import get_logger
from data.collector_manager import CollectorManager
# Top-level application logger
app_logger = get_logger('tcp_dashboard')
# Production manager with its own logger
prod_logger = get_logger('production_manager')
manager = CollectorManager(logger=prod_logger)
# Individual collectors with specific loggers
btc_logger = get_logger('btc_collector')
btc_collector = OKXCollector("BTC-USDT", logger=btc_logger)
eth_collector = OKXCollector("ETH-USDT", logger=None) # No logging
# Results in organized log structure:
# logs/tcp_dashboard/
# logs/production_manager/
# logs/btc_collector/
# (no logs for ETH collector)
```
#### 5. Mixed Configuration
```python
from utils.logger import get_logger
from data.collector_manager import CollectorManager
# System logger for normal operations
system_logger = get_logger('system')
# Critical logger for error-only components
critical_logger = get_logger('critical_only')
manager = CollectorManager(logger=system_logger)
# Different logging strategies for different collectors
btc_collector = OKXCollector("BTC-USDT", logger=system_logger) # Full logging
eth_collector = OKXCollector("ETH-USDT", logger=critical_logger, log_errors_only=True) # Errors only
ada_collector = OKXCollector("ADA-USDT", logger=None) # No logging
manager.add_collector(btc_collector)
manager.add_collector(eth_collector)
manager.add_collector(ada_collector)
```
### Implementation Details
#### Component Constructor Pattern
All major components follow this pattern:
```python
class ComponentExample:
def __init__(self, logger=None, log_errors_only=False):
self.logger = logger
self.log_errors_only = log_errors_only
def _log_debug(self, message: str) -> None:
"""Log debug message if logger is available and not in errors-only mode."""
if self.logger and not self.log_errors_only:
self.logger.debug(message)
def _log_info(self, message: str) -> None:
"""Log info message if logger is available and not in errors-only mode."""
if self.logger and not self.log_errors_only:
self.logger.info(message)
def _log_warning(self, message: str) -> None:
"""Log warning message if logger is available and not in errors-only mode."""
if self.logger and not self.log_errors_only:
self.logger.warning(message)
def _log_error(self, message: str, exc_info: bool = False) -> None:
"""Log error message if logger is available (always logs errors)."""
if self.logger:
self.logger.error(message, exc_info=exc_info)
def _log_critical(self, message: str, exc_info: bool = False) -> None:
"""Log critical message if logger is available (always logs critical)."""
if self.logger:
self.logger.critical(message, exc_info=exc_info)
```
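The pattern can be exercised end-to-end with the standard `logging` module. The capture handler and `DemoComponent` below are test scaffolding of ours, not part of the system:

```python
import logging

class DemoComponent:
    """Minimal component following the constructor pattern above."""
    def __init__(self, logger=None, log_errors_only=False):
        self.logger = logger
        self.log_errors_only = log_errors_only

    def _log_info(self, message: str) -> None:
        if self.logger and not self.log_errors_only:
            self.logger.info(message)

    def _log_error(self, message: str) -> None:
        if self.logger:  # errors are always logged when a logger exists
            self.logger.error(message)

# Capture emitted log levels instead of writing to a file
records: list[str] = []
handler = logging.Handler()
handler.emit = lambda record: records.append(record.levelname)

logger = logging.getLogger("demo_errors_only")
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(handler)

component = DemoComponent(logger=logger, log_errors_only=True)
component._log_info("suppressed in errors-only mode")
component._log_error("always recorded")

DemoComponent(logger=None)._log_info("no logger: silently ignored")
print(records)  # ['ERROR']
```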
#### Child Component Pattern
Child components receive logger from parent:
```python
class OKXCollector(BaseDataCollector):
def __init__(self, symbol: str, logger=None, log_errors_only=False):
super().__init__(..., logger=logger, log_errors_only=log_errors_only)
# Pass logger to child components
self._data_processor = OKXDataProcessor(
symbol,
logger=self.logger # Pass parent's logger
)
self._data_validator = DataValidator(logger=self.logger)
self._data_transformer = DataTransformer(logger=self.logger)
```
#### Supported Components
The following components support conditional logging:
1. **BaseDataCollector** (`data/base_collector.py`)
- Parameters: `logger=None, log_errors_only=False`
- Conditional logging for all collector operations
2. **CollectorManager** (`data/collector_manager.py`)
- Parameters: `logger=None, log_errors_only=False`
- Manages multiple collectors with consistent logging
3. **OKXCollector** (`data/exchanges/okx/collector.py`)
- Parameters: `logger=None, log_errors_only=False`
- Exchange-specific data collection with conditional logging
4. **BaseDataValidator** (`data/common/validation.py`)
- Parameters: `logger=None`
- Data validation with optional logging
5. **OKXDataTransformer** (`data/exchanges/okx/data_processor.py`)
- Parameters: `logger=None`
- Data processing with conditional logging
## Usage
### Getting a Logger
```python
from utils.logger import get_logger
# Get logger for bot manager
logger = get_logger('bot_manager', verbose=True)
logger.info("Bot started successfully")
logger.debug("Connecting to database...")
logger.warning("API response time is high")
logger.error("Failed to execute trade", extra={'trade_id': 12345})
```
### Configuration
The `get_logger` function accepts the following parameters:
| Parameter | Type | Default | Description |
|-------------------|---------------------|---------|-----------------------------------------------------------------------------|
| `component_name` | `str` | - | Name of the component (e.g., `bot_manager`, `data_collector`) |
| `log_level` | `str` | `INFO` | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
| `verbose` | `Optional[bool]` | `None` | Enable console logging. If `None`, uses `VERBOSE_LOGGING` from `.env` |
| `clean_old_logs` | `bool` | `True` | Automatically clean old log files when creating new ones |
| `max_log_files` | `int` | `30` | Maximum number of log files to keep per component |
## Log Cleanup
Log cleanup is based on the number of retained files rather than file age.
- **Enabled by default**: `clean_old_logs=True`
- **Default retention**: Keeps the most recent 30 log files (`max_log_files=30`)
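Count-based retention can be sketched as follows; `cleanup_by_count` is an illustrative helper, not the library's actual function, and relies on the date-named files sorting chronologically by name:

```python
import tempfile
from pathlib import Path

def cleanup_by_count(log_dir: Path, max_log_files: int) -> int:
    """Remove the oldest .txt logs, keeping the newest max_log_files.

    Date-named files (YYYY-MM-DD.txt) sort chronologically by name.
    """
    files = sorted(log_dir.glob("*.txt"))
    to_remove = files[:-max_log_files] if max_log_files > 0 else files
    for path in to_remove:
        path.unlink()
    return len(to_remove)

with tempfile.TemporaryDirectory() as tmp:
    log_dir = Path(tmp)
    for day in range(10, 16):  # 2023-11-10.txt ... 2023-11-15.txt
        (log_dir / f"2023-11-{day}.txt").touch()
    removed = cleanup_by_count(log_dir, max_log_files=3)
    kept = sorted(p.name for p in log_dir.glob("*.txt"))
print(removed, kept)  # 3 ['2023-11-13.txt', '2023-11-14.txt', '2023-11-15.txt']
```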
## Centralized Control
For consistent logging behavior across the application, it is recommended to use environment variables in an `.env` file instead of passing parameters to `get_logger`.
- `LOG_LEVEL`: "INFO", "DEBUG", etc.
- `VERBOSE_LOGGING`: "true" or "false"
- `CLEAN_OLD_LOGS`: "true" or "false"
- `MAX_LOG_FILES`: e.g., "15"
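These variables can be resolved with plain `os.getenv` lookups; `env_bool` below is a helper of ours showing one reasonable parsing convention, not the project's exact implementation:

```python
import os

def env_bool(name: str, default: bool) -> bool:
    """Parse a truthy environment variable ('true', '1', 'yes')."""
    value = os.getenv(name)
    return default if value is None else value.strip().lower() in ("1", "true", "yes")

# Simulate an .env configuration
os.environ["VERBOSE_LOGGING"] = "true"
os.environ["MAX_LOG_FILES"] = "15"

settings = {
    "verbose": env_bool("VERBOSE_LOGGING", default=False),
    "clean_old_logs": env_bool("CLEAN_OLD_LOGS", default=True),  # unset: default
    "max_log_files": int(os.getenv("MAX_LOG_FILES", "30")),
}
print(settings)  # {'verbose': True, 'clean_old_logs': True, 'max_log_files': 15}
```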
## File Structure
```
logs/
├── bot_manager/
│ ├── 2023-11-14.txt
│ └── 2023-11-15.txt
├── data_collector/
│ ├── 2023-11-14.txt
│ └── 2023-11-15.txt
└── default_logger/
└── 2023-11-15.txt
```
## Best Practices
### 1. Component Naming
Use descriptive, consistent component names:
- `bot_manager` - for bot lifecycle management
- `data_collector` - for market data collection
- `strategies` - for trading strategies
- `backtesting` - for backtesting engine
- `dashboard` - for web dashboard
### 2. Log Level Guidelines
- **DEBUG**: Detailed diagnostic information, typically only of interest when diagnosing problems
- **INFO**: General information about program execution
- **WARNING**: Something unexpected happened, but the program is still working
- **ERROR**: A serious problem occurred, the program couldn't perform a function
- **CRITICAL**: A serious error occurred, the program may not be able to continue
### 3. Verbose Logging Guidelines
```python
# Development: Use verbose logging with DEBUG level
dev_logger = get_logger('component', 'DEBUG', verbose=True, max_log_files=3)
# Production: Use INFO level with no console output
prod_logger = get_logger('component', 'INFO', verbose=False, max_log_files=30)
# Testing: Disable cleanup to preserve test logs
test_logger = get_logger('test_component', 'DEBUG', verbose=True, clean_old_logs=False)
```
### 4. Log Retention Guidelines
```python
# High-frequency components (data collectors): shorter retention
data_logger = get_logger('data_collector', max_log_files=7)
# Important components (bot managers): longer retention
bot_logger = get_logger('bot_manager', max_log_files=30)
# Development: very short retention
dev_logger = get_logger('dev_component', max_log_files=3)
```
### 5. Message Content
```python
# Good: Descriptive and actionable
logger.error("Failed to connect to OKX API: timeout after 30s")
# Bad: Vague and unhelpful
logger.error("Error occurred")
# Good: Include relevant context
logger.info(f"Bot {bot_id} executed trade: {symbol} {side} {quantity}@{price}")
# Good: Include duration for performance monitoring
start_time = time.time()
# ... do work ...
duration = time.time() - start_time
logger.info(f"Data aggregation completed in {duration:.2f}s")
```
### 6. Exception Handling
```python
try:
execute_trade(symbol, quantity, price)
logger.info(f"Trade executed successfully: {symbol}")
except APIError as e:
logger.error(f"API error during trade execution: {e}", exc_info=True)
raise
except ValidationError as e:
logger.warning(f"Trade validation failed: {e}")
return False
except Exception as e:
logger.critical(f"Unexpected error during trade execution: {e}", exc_info=True)
raise
```
### 7. Performance Considerations
```python
# Good: Efficient string formatting
logger.debug(f"Processing {len(data)} records")
# Avoid: Expensive operations in log messages unless necessary
# logger.debug(f"Data: {expensive_serialization(data)}") # Only if needed
# Better: Check log level first for expensive operations
if logger.isEnabledFor(logging.DEBUG):
logger.debug(f"Data: {expensive_serialization(data)}")
```
## Migration Guide
### Updating Existing Components
1. **Add logger parameter to constructor**:
```python
def __init__(self, ..., logger=None, log_errors_only=False):
```
2. **Add conditional logging helpers**:
```python
def _log_debug(self, message: str) -> None:
if self.logger and not self.log_errors_only:
self.logger.debug(message)
```
3. **Update all logging calls**:
```python
# Before
self.logger.info("Message")
# After
self._log_info("Message")
```
4. **Pass logger to child components**:
```python
child = ChildComponent(logger=self.logger)
```
### From Standard Logging
```python
# Old logging (if any existed)
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# New unified logging
from utils.logger import get_logger
logger = get_logger('component_name', verbose=True)
```
### Gradual Adoption
1. **Phase 1**: Add optional logger parameters to new components
2. **Phase 2**: Update existing components to support conditional logging
3. **Phase 3**: Implement hierarchical logging structure
4. **Phase 4**: Add error-only logging mode
## Testing
### Testing Conditional Logging
#### Test Script Example
```python
# test_conditional_logging.py
from utils.logger import get_logger
from data.collector_manager import CollectorManager
from data.exchanges.okx.collector import OKXCollector
def test_no_logging():
"""Test components work without loggers."""
manager = CollectorManager(logger=None)
collector = OKXCollector("BTC-USDT", logger=None)
print("✓ No logging test passed")
def test_with_logging():
"""Test components work with loggers."""
logger = get_logger('test_system')
manager = CollectorManager(logger=logger)
collector = OKXCollector("BTC-USDT", logger=logger)
print("✓ With logging test passed")
def test_error_only():
"""Test error-only logging mode."""
logger = get_logger('test_errors')
collector = OKXCollector("BTC-USDT", logger=logger, log_errors_only=True)
print("✓ Error-only logging test passed")
if __name__ == "__main__":
test_no_logging()
test_with_logging()
test_error_only()
print("✅ All conditional logging tests passed!")
```
### Testing Changes
```python
# Test without logger
component = MyComponent(logger=None)
# Should work without errors, no logging
# Test with logger
logger = get_logger('test_component')
component = MyComponent(logger=logger)
# Should log normally
# Test error-only mode
component = MyComponent(logger=logger, log_errors_only=True)
# Should only log errors
```
### Basic System Test
Run a simple test to verify the logging system:
```bash
python -c "from utils.logger import get_logger; logger = get_logger('test', verbose=True); logger.info('Test message'); print('Check logs/test/ directory')"
```
## Troubleshooting
### Common Issues
1. **Permission errors**: Ensure the application has write permissions to the project directory
2. **Disk space**: Monitor disk usage and adjust log retention with `max_log_files`
3. **Threading issues**: The logger is thread-safe, but check for application-level concurrency issues
4. **Too many console messages**: Adjust `verbose` parameter or log levels
### Debug Mode
Enable debug logging to troubleshoot issues:
```python
logger = get_logger('component_name', 'DEBUG', verbose=True)
```
### Console Output Issues
```python
# Force console output regardless of environment
logger = get_logger('component_name', verbose=True)
# Check environment variables
import os
print(f"VERBOSE_LOGGING: {os.getenv('VERBOSE_LOGGING')}")
print(f"LOG_TO_CONSOLE: {os.getenv('LOG_TO_CONSOLE')}")
```
### Fallback Logging
If file logging fails, the system automatically falls back to console logging with a warning message.
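The fallback can be sketched as a try/except around handler creation; `make_handler` is illustrative, not the actual implementation:

```python
import logging
import os
import tempfile

def make_handler(log_path: str) -> logging.Handler:
    """Prefer file logging; fall back to the console with a warning on failure."""
    try:
        return logging.FileHandler(log_path)
    except OSError as exc:
        fallback = logging.StreamHandler()
        fallback.setFormatter(
            logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
        )
        print(f"WARNING: file logging unavailable ({exc}), using console")
        return fallback

# A path whose "parent directory" is a regular file can never be opened
with tempfile.NamedTemporaryFile() as blocker:
    handler = make_handler(os.path.join(blocker.name, "2023-11-15.txt"))
print(type(handler).__name__)  # StreamHandler
```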
## Integration with Existing Code
The logging system is designed to be gradually adopted:
1. **Start with new modules**: Use the unified logger in new code
2. **Replace existing logging**: Gradually migrate existing logging to the unified system
3. **No breaking changes**: Existing code continues to work
## Maintenance
### Automatic Cleanup Benefits
The automatic cleanup feature provides several benefits:
- **Disk space management**: Prevents log directories from growing indefinitely
- **Performance**: Fewer files to scan in log directories
- **Maintenance-free**: No need for external cron jobs or scripts
- **Component-specific**: Each component can have different retention policies
### Manual Cleanup for Special Cases
For cases requiring age-based cleanup instead of count-based:
```python
# cleanup_logs.py
from utils.logger import cleanup_old_logs
components = ['bot_manager', 'data_collector', 'strategies', 'dashboard']
for component in components:
cleanup_old_logs(component, days_to_keep=30)
```
### Monitoring Disk Usage
Monitor the `logs/` directory size and adjust retention policies as needed:
```bash
# Check log directory size
du -sh logs/
# Find large log files
find logs/ -name "*.txt" -size +10M
# Count log files per component
find logs/ -name "*.txt" | cut -d'/' -f2 | sort | uniq -c
```
This conditional logging system provides maximum flexibility while maintaining clean, maintainable code that works in all scenarios.

# Data Collection Service
**Service for collecting and storing real-time market data from multiple exchanges.**
## Architecture Overview
The data collection service uses a **manager-worker architecture** to collect data for multiple trading pairs concurrently.
- **`CollectorManager`**: The central manager responsible for creating, starting, stopping, and monitoring individual data collectors.
- **`OKXCollector`**: A dedicated worker responsible for collecting data for a single trading pair from the OKX exchange.
This architecture allows for high scalability and fault tolerance.
## Key Components
### `CollectorManager`
- **Location**: `tasks/collector_manager.py`
- **Responsibilities**:
- Manages the lifecycle of multiple collectors
- Provides a unified API for controlling all collectors
- Monitors the health of each collector
- Distributes tasks and aggregates results
### `OKXCollector`
- **Location**: `data/exchanges/okx/collector.py`
- **Responsibilities**:
- Connects to the OKX WebSocket API
- Subscribes to real-time data channels
- Processes and standardizes incoming data
- Stores data in the database
## Configuration
The service is configured through `config/bot_configs/data_collector_config.json`:
```json
{
"service_name": "data_collection_service",
"enabled": true,
"manager_config": {
"component_name": "collector_manager",
"health_check_interval": 60,
"log_level": "INFO",
"verbose": true
},
"collectors": [
{
"exchange": "okx",
"symbol": "BTC-USDT",
"data_types": ["trade", "orderbook"],
"enabled": true
},
{
"exchange": "okx",
"symbol": "ETH-USDT",
"data_types": ["trade"],
"enabled": true
}
]
}
```
## Usage
Start the service from the main application entry point:
```python
# main.py
import asyncio

from tasks.collector_manager import CollectorManager
async def main():
manager = CollectorManager()
await manager.start_all_collectors()
if __name__ == "__main__":
asyncio.run(main())
```
## Health & Monitoring
The `CollectorManager` provides a `get_status()` method to monitor the health of all collectors.
## Features
- **Service Lifecycle Management**: Start, stop, and monitor data collection operations
- **JSON Configuration**: File-based configuration with automatic defaults
- **Clean Production Logging**: Only essential operational information
- **Health Monitoring**: Service-level health checks and auto-recovery
- **Graceful Shutdown**: Proper signal handling and cleanup
- **Multi-Exchange Orchestration**: Coordinate collectors across multiple exchanges
- **Production Ready**: Designed for 24/7 operation with monitoring
## Quick Start
### Basic Usage
```bash
# Start with default configuration (indefinite run)
python scripts/start_data_collection.py
# Run for 8 hours
python scripts/start_data_collection.py --hours 8
# Use custom configuration
python scripts/start_data_collection.py --config config/my_config.json
```
### Monitoring
```bash
# Check status once
python scripts/monitor_clean.py
# Monitor continuously every 60 seconds
python scripts/monitor_clean.py --interval 60
```
## Configuration
The service uses JSON configuration files with automatic default creation if none exists.
### Default Configuration Location
`config/data_collection.json`
### Configuration Structure
```json
{
"exchanges": {
"okx": {
"enabled": true,
"trading_pairs": [
{
"symbol": "BTC-USDT",
"enabled": true,
"data_types": ["trade"],
"timeframes": ["1m", "5m", "15m", "1h"]
},
{
"symbol": "ETH-USDT",
"enabled": true,
"data_types": ["trade"],
"timeframes": ["1m", "5m", "15m", "1h"]
}
]
}
},
"collection_settings": {
"health_check_interval": 120,
"store_raw_data": true,
"auto_restart": true,
"max_restart_attempts": 3
},
"logging": {
"level": "INFO",
"log_errors_only": true,
"verbose_data_logging": false
}
}
```
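Because the service validates configuration on load, it can reject a bad file before any collectors are created. A minimal validation pass over this structure might look like the following sketch; the function name and the exact checks are illustrative, not the service's actual validation logic:

```python
from typing import Any, Dict, List

def validate_collection_config(config: Dict[str, Any]) -> List[str]:
    """Return a list of human-readable problems; an empty list means valid."""
    problems: List[str] = []
    exchanges = config.get("exchanges")
    if not isinstance(exchanges, dict) or not exchanges:
        problems.append("'exchanges' must be a non-empty object")
        return problems
    for name, exchange in exchanges.items():
        pairs = exchange.get("trading_pairs", [])
        if exchange.get("enabled") and not pairs:
            problems.append(f"exchange '{name}' is enabled but has no trading_pairs")
        for pair in pairs:
            if "symbol" not in pair:
                problems.append(f"exchange '{name}': trading pair missing 'symbol'")
            if not pair.get("data_types"):
                problems.append(f"{pair.get('symbol', '?')}: 'data_types' must be non-empty")
    interval = config.get("collection_settings", {}).get("health_check_interval", 120)
    if not isinstance(interval, (int, float)) or interval <= 0:
        problems.append("'health_check_interval' must be a positive number")
    return problems
```

Running this against the example configuration above would return an empty list; an enabled exchange with no trading pairs would produce a descriptive error instead.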
### Configuration Options
#### Exchange Settings
- **enabled**: Whether to enable this exchange
- **trading_pairs**: Array of trading pair configurations
#### Trading Pair Settings
- **symbol**: Trading pair symbol (e.g., "BTC-USDT")
- **enabled**: Whether to collect data for this pair
- **data_types**: Types of data to collect (["trade"], ["ticker"], etc.)
- **timeframes**: Candle timeframes to generate (["1m", "5m", "15m", "1h", "4h", "1d"])
#### Collection Settings
- **health_check_interval**: Health check frequency in seconds
- **store_raw_data**: Whether to store raw trade data
- **auto_restart**: Enable automatic restart on failures
- **max_restart_attempts**: Maximum restart attempts before giving up
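Together, `auto_restart` and `max_restart_attempts` imply a bounded retry policy. The self-contained sketch below shows one way such a policy can work with exponential backoff; it is illustrative only, since the actual restart coordination lives inside `CollectorManager`:

```python
import asyncio
from typing import Awaitable, Callable

async def restart_with_backoff(
    start: Callable[[], Awaitable[bool]],
    max_restart_attempts: int = 3,
    base_delay: float = 0.1,
) -> bool:
    """Attempt to (re)start a collector with capped, exponentially backed-off retries."""
    for attempt in range(1, max_restart_attempts + 1):
        if await start():
            return True  # collector came up
        if attempt < max_restart_attempts:
            # Back off before the next attempt: base, 2*base, 4*base, ...
            await asyncio.sleep(base_delay * 2 ** (attempt - 1))
    return False  # give up after max_restart_attempts
```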
#### Logging Settings
- **level**: Log level ("DEBUG", "INFO", "WARNING", "ERROR")
- **log_errors_only**: Only log errors and essential events
- **verbose_data_logging**: Enable verbose logging of individual trades/candles
## Service Architecture
### Service Layer Components
```
┌─────────────────────────────────────────────────┐
│ DataCollectionService │
│ ┌─────────────────────────────────────────┐ │
│ │ Configuration Manager │ │
│ │ • JSON config loading/validation │ │
│ │ • Default config generation │ │
│ │ • Runtime config updates │ │
│ └─────────────────────────────────────────┘ │
│ ┌─────────────────────────────────────────┐ │
│ │ Service Monitor │ │
│ │ • Service-level health checks │ │
│ │ • Uptime tracking │ │
│ │ • Error aggregation │ │
│ └─────────────────────────────────────────┘ │
│ │ │
│ ┌─────────────────────────────────────────┐ │
│ │ CollectorManager │ │
│ │ • Individual collector management │ │
│ │ • Health monitoring │ │
│ │ • Auto-restart coordination │ │
│ └─────────────────────────────────────────┘ │
└─────────────────────────────────────────────────┘
┌─────────────────────────────┐
│ Core Data Collectors │
│ (See data_collectors.md) │
└─────────────────────────────┘
```
### Data Flow
```
Configuration → Service → CollectorManager → Data Collectors → Database
↓ ↓
Service Monitor Health Monitor
```
### Storage Integration
- **Raw Data**: PostgreSQL `raw_trades` table via repository pattern
- **Candles**: PostgreSQL `market_data` table with multiple timeframes
- **Real-time**: Redis pub/sub for live data distribution
- **Service Metrics**: Service uptime, error counts, collector statistics
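The raw-trade path follows the repository pattern described in the database documentation. The sketch below illustrates that shape using the standard-library `sqlite3` as a stand-in for PostgreSQL; the class, table, and column names are simplified illustrations, not the production `RawTradeRepository` schema:

```python
import sqlite3
from typing import Any, Dict

class SqliteRawTradeRepository:
    """Minimal repository: storage details stay behind a small, explicit API."""

    def __init__(self, path: str = ":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS raw_trades ("
            "symbol TEXT, price REAL, size REAL, ts TEXT)"
        )

    def save_trade(self, trade: Dict[str, Any]) -> None:
        # `with self.conn` gives an implicit transaction:
        # commit on success, rollback on error
        with self.conn:
            self.conn.execute(
                "INSERT INTO raw_trades (symbol, price, size, ts) VALUES (?, ?, ?, ?)",
                (trade["symbol"], trade["price"], trade["size"], trade["ts"]),
            )

    def count(self, symbol: str) -> int:
        row = self.conn.execute(
            "SELECT COUNT(*) FROM raw_trades WHERE symbol = ?", (symbol,)
        ).fetchone()
        return row[0]
```

The calling code never constructs SQL itself, which is what lets the production service swap storage backends without touching collector logic.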
## Logging Philosophy
The service implements **clean production logging** focused on operational needs:
### What Gets Logged
**Service Lifecycle**
- Service start/stop events
- Configuration loading
- Service initialization
**Collector Orchestration**
- Collector creation and destruction
- Service-level health summaries
- Recovery operations
**Configuration Events**
- Config file changes
- Runtime configuration updates
- Validation errors
**Service Statistics**
- Periodic uptime reports
- Collection summary statistics
- Performance metrics
### What Doesn't Get Logged
**Individual Data Points**
- Every trade received
- Every candle generated
- Raw market data
**Internal Operations**
- Individual collector heartbeats
- Routine database operations
- Internal processing steps
## API Reference
### DataCollectionService
The main service class for managing data collection operations.
#### Constructor
```python
DataCollectionService(config_path: str = "config/data_collection.json")
```
**Parameters:**
- `config_path`: Path to JSON configuration file
#### Methods
##### `async run(duration_hours: Optional[float] = None) -> bool`
Run the service for a specified duration or indefinitely.
**Parameters:**
- `duration_hours`: Optional duration in hours (None = indefinite)
**Returns:**
- `bool`: True if successful, False if error occurred
**Example:**
```python
service = DataCollectionService()
await service.run(duration_hours=24) # Run for 24 hours
```
##### `async start() -> bool`
Start the data collection service and all configured collectors.
**Returns:**
- `bool`: True if started successfully
##### `async stop() -> None`
Stop the service gracefully, including all collectors and cleanup.
##### `get_status() -> Dict[str, Any]`
Get current service status including uptime, collector counts, and errors.
**Returns:**
```python
{
'service_running': True,
'uptime_hours': 12.5,
'collectors_total': 6,
'collectors_running': 5,
'collectors_failed': 1,
'errors_count': 2,
'last_error': 'Connection timeout for ETH-USDT',
'configuration': {
'config_file': 'config/data_collection.json',
'exchanges_enabled': ['okx'],
'total_trading_pairs': 6
}
}
```
##### `async initialize_collectors() -> bool`
Initialize all collectors based on configuration.
**Returns:**
- `bool`: True if all collectors initialized successfully
##### `load_configuration() -> Dict[str, Any]`
Load and validate configuration from file.
**Returns:**
- `dict`: Loaded configuration
### Standalone Function
#### `run_data_collection_service(config_path, duration_hours)`
```python
async def run_data_collection_service(
config_path: str = "config/data_collection.json",
duration_hours: Optional[float] = None
) -> bool
```
Convenience function to run the service with minimal setup.
**Parameters:**
- `config_path`: Path to configuration file
- `duration_hours`: Optional duration in hours
**Returns:**
- `bool`: True if successful
## Integration Examples
### Basic Service Integration
```python
import asyncio
from data.collection_service import DataCollectionService
async def main():
service = DataCollectionService("config/my_config.json")
# Run for 24 hours
success = await service.run(duration_hours=24)
if not success:
print("Service encountered errors")
if __name__ == "__main__":
asyncio.run(main())
```
### Custom Status Monitoring
```python
import asyncio
from data.collection_service import DataCollectionService
async def monitor_service():
service = DataCollectionService()
# Start service in background
start_task = asyncio.create_task(service.run())
# Monitor status every 5 minutes
while service.running:
status = service.get_status()
print(f"Service Uptime: {status['uptime_hours']:.1f}h")
print(f"Collectors: {status['collectors_running']}/{status['collectors_total']}")
print(f"Errors: {status['errors_count']}")
await asyncio.sleep(300) # 5 minutes
await start_task
asyncio.run(monitor_service())
```
### Programmatic Control
```python
import asyncio
from data.collection_service import DataCollectionService
async def controlled_collection():
service = DataCollectionService()
try:
# Initialize and start
await service.initialize_collectors()
await service.start()
# Monitor and control
while True:
status = service.get_status()
# Check if any collectors failed
if status['collectors_failed'] > 0:
print("Some collectors failed, checking health...")
# Service auto-restart will handle this
await asyncio.sleep(60) # Check every minute
except KeyboardInterrupt:
print("Shutting down service...")
finally:
await service.stop()
asyncio.run(controlled_collection())
```
### Configuration Management
```python
import asyncio
import json
from data.collection_service import DataCollectionService
async def dynamic_configuration():
service = DataCollectionService()
# Load and modify configuration
config = service.load_configuration()
# Add new trading pair
config['exchanges']['okx']['trading_pairs'].append({
'symbol': 'SOL-USDT',
'enabled': True,
'data_types': ['trade'],
'timeframes': ['1m', '5m']
})
# Save updated configuration
with open('config/data_collection.json', 'w') as f:
json.dump(config, f, indent=2)
# Restart service with new config
await service.stop()
await service.start()
asyncio.run(dynamic_configuration())
```
## Error Handling
The service implements robust error handling at the service orchestration level:
### Service Level Errors
- **Configuration Errors**: Invalid JSON, missing required fields
- **Initialization Errors**: Failed collector creation, database connectivity
- **Runtime Errors**: Service-level exceptions, resource exhaustion
### Error Recovery Strategies
1. **Graceful Degradation**: Continue with healthy collectors
2. **Configuration Validation**: Validate before applying changes
3. **Service Restart**: Full service restart on critical errors
4. **Error Aggregation**: Collect and report errors across all collectors
### Error Reporting
```python
# Service status includes error information
status = service.get_status()
if status['errors_count'] > 0:
print(f"Service has {status['errors_count']} errors")
print(f"Last error: {status['last_error']}")
# Get detailed error information from collectors
for collector_name in service.manager.list_collectors():
collector_status = service.manager.get_collector_status(collector_name)
if collector_status['status'] == 'error':
print(f"Collector {collector_name}: {collector_status['statistics']['last_error']}")
```
## Testing
### Running Service Tests
```bash
# Run all data collection service tests
uv run pytest tests/test_data_collection_service.py -v
# Run specific test categories
uv run pytest tests/test_data_collection_service.py::TestDataCollectionService -v
# Run with coverage
uv run pytest tests/test_data_collection_service.py --cov=data.collection_service
```
### Test Coverage
The service test suite covers:
- Service initialization and configuration loading
- Collector orchestration and management
- Service lifecycle (start/stop/restart)
- Configuration validation and error handling
- Signal handling and graceful shutdown
- Status reporting and monitoring
- Error aggregation and recovery
### Mock Testing
```python
import pytest
from unittest.mock import AsyncMock, patch
from data.collection_service import DataCollectionService
@pytest.mark.asyncio
async def test_service_with_mock_collectors():
with patch('data.collection_service.CollectorManager') as mock_manager:
# Mock successful initialization
mock_manager.return_value.start.return_value = True
service = DataCollectionService()
result = await service.start()
assert result is True
mock_manager.return_value.start.assert_called_once()
```
## Production Deployment
### Docker Deployment
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY . .
# Install dependencies
RUN pip install uv
RUN uv pip install -r requirements.txt
# Create logs and config directories
RUN mkdir -p logs config
# Copy production configuration
COPY config/production.json config/data_collection.json
# Health check
HEALTHCHECK --interval=60s --timeout=10s --start-period=30s --retries=3 \
CMD python scripts/health_check.py || exit 1
# Run service
CMD ["python", "scripts/start_data_collection.py", "--config", "config/data_collection.json"]
```
### Kubernetes Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: data-collection-service
spec:
replicas: 1
selector:
matchLabels:
app: data-collection-service
template:
metadata:
labels:
app: data-collection-service
spec:
containers:
- name: data-collector
image: crypto-dashboard/data-collector:latest
ports:
- containerPort: 8080
env:
- name: POSTGRES_HOST
value: "postgres-service"
- name: REDIS_HOST
value: "redis-service"
volumeMounts:
- name: config-volume
mountPath: /app/config
- name: logs-volume
mountPath: /app/logs
livenessProbe:
exec:
command:
- python
- scripts/health_check.py
initialDelaySeconds: 30
periodSeconds: 60
volumes:
- name: config-volume
configMap:
name: data-collection-config
- name: logs-volume
emptyDir: {}
```
### Systemd Service
```ini
[Unit]
Description=Cryptocurrency Data Collection Service
After=network.target postgres.service redis.service
Requires=postgres.service redis.service
[Service]
Type=simple
User=crypto-collector
Group=crypto-collector
WorkingDirectory=/opt/crypto-dashboard
ExecStart=/usr/bin/python scripts/start_data_collection.py --config config/production.json
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=10
KillMode=mixed
TimeoutStopSec=30
# Environment
Environment=PYTHONPATH=/opt/crypto-dashboard
Environment=LOG_LEVEL=INFO
# Security
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/opt/crypto-dashboard/logs
[Install]
WantedBy=multi-user.target
```
### Environment Configuration
```bash
# Production environment variables
export ENVIRONMENT=production
export POSTGRES_HOST=postgres.internal
export POSTGRES_PORT=5432
export POSTGRES_DB=crypto_dashboard
export POSTGRES_USER=dashboard_user
export POSTGRES_PASSWORD=secure_password
export REDIS_HOST=redis.internal
export REDIS_PORT=6379
# Service configuration
export DATA_COLLECTION_CONFIG=/etc/crypto-dashboard/data_collection.json
export LOG_LEVEL=INFO
export HEALTH_CHECK_INTERVAL=120
```
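A small sketch of consuming these variables with sensible fallbacks (the variable names match the list above; the defaults shown are illustrative assumptions):

```python
import os

def load_runtime_settings(env=os.environ) -> dict:
    """Read service settings from the environment, falling back to defaults."""
    return {
        "config_path": env.get("DATA_COLLECTION_CONFIG", "config/data_collection.json"),
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "health_check_interval": int(env.get("HEALTH_CHECK_INTERVAL", "120")),
        "postgres_host": env.get("POSTGRES_HOST", "localhost"),
        "redis_host": env.get("REDIS_HOST", "localhost"),
    }
```

Passing the environment mapping as a parameter keeps the function trivially testable without mutating `os.environ`.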
## Monitoring and Alerting
### Metrics Collection
The service exposes metrics for monitoring systems:
```python
# Service metrics
service_uptime_hours = 24.5
collectors_running = 5
collectors_total = 6
errors_per_hour = 0.2
data_points_processed = 15000
```
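If these values are scraped by Prometheus, they need to be rendered in the text exposition format. A minimal formatter for flat gauge metrics (the metric names here are assumptions, not an existing endpoint of the service):

```python
def to_prometheus_text(metrics: dict) -> str:
    """Render a flat dict of numeric gauges in Prometheus text exposition format."""
    lines = []
    for name, value in sorted(metrics.items()):
        lines.append(f"# TYPE {name} gauge")  # type hint line for each metric
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"
```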
### Health Checks
```python
# External health check endpoint
async def health_check():
service = DataCollectionService()
status = service.get_status()
if not status['service_running']:
return {'status': 'unhealthy', 'reason': 'service_stopped'}
if status['collectors_failed'] > status['collectors_total'] * 0.5:
return {'status': 'degraded', 'reason': 'too_many_failed_collectors'}
return {'status': 'healthy'}
```
### Alerting Rules
```yaml
# Prometheus alerting rules
groups:
- name: data_collection_service
rules:
- alert: DataCollectionServiceDown
expr: up{job="data-collection-service"} == 0
for: 5m
annotations:
summary: "Data collection service is down"
- alert: TooManyFailedCollectors
expr: collectors_failed / collectors_total > 0.5
for: 10m
annotations:
summary: "More than 50% of collectors have failed"
- alert: HighErrorRate
expr: rate(errors_total[5m]) > 0.1
for: 15m
annotations:
summary: "High error rate in data collection service"
```
## Performance Considerations
### Resource Usage
- **Memory**: ~150MB base + ~15MB per trading pair (including service overhead)
- **CPU**: Low (async I/O bound, service orchestration)
- **Network**: ~1KB/s per trading pair
- **Storage**: Service logs ~10MB/day
### Scaling Strategies
1. **Horizontal Scaling**: Multiple service instances with different configurations
2. **Configuration Partitioning**: Separate services by exchange or asset class
3. **Load Balancing**: Distribute trading pairs across service instances
4. **Regional Deployment**: Deploy closer to exchange data centers
### Optimization Tips
1. **Configuration Tuning**: Optimize health check intervals and timeframes
2. **Resource Limits**: Set appropriate memory and CPU limits
3. **Batch Operations**: Use efficient database operations
4. **Monitoring Overhead**: Balance monitoring frequency with performance
## Troubleshooting
### Common Service Issues
#### Service Won't Start
```
❌ Failed to start data collection service
```
**Solutions:**
1. Check configuration file validity
2. Verify database connectivity
3. Ensure no port conflicts
4. Check file permissions
#### Configuration Loading Failed
```
❌ Failed to load config from config/data_collection.json: Invalid JSON
```
**Solutions:**
1. Validate JSON syntax
2. Check required fields
3. Verify file encoding (UTF-8)
4. Recreate default configuration
#### No Collectors Created
```
❌ No collectors were successfully initialized
```
**Solutions:**
1. Check exchange configuration
2. Verify trading pair symbols
3. Check network connectivity
4. Review collector creation logs
### Debug Mode
Enable verbose service debugging:
```json
{
"logging": {
"level": "DEBUG",
"log_errors_only": false,
"verbose_data_logging": true
}
}
```
### Service Diagnostics
```python
# Run diagnostic check
from data.collection_service import DataCollectionService
service = DataCollectionService()
status = service.get_status()
print(f"Service Running: {status['service_running']}")
print(f"Configuration File: {status['configuration']['config_file']}")
print(f"Collectors: {status['collectors_running']}/{status['collectors_total']}")
# Check individual collector health
for collector_name in service.manager.list_collectors():
collector_status = service.manager.get_collector_status(collector_name)
print(f"{collector_name}: {collector_status['status']}")
```
## Related Documentation
- [Data Collectors System](../components/data_collectors.md) - Core collector components
- [Logging System](../components/logging.md) - Logging configuration
- [Database Operations](../database/operations.md) - Database integration
- [Monitoring Guide](../monitoring/README.md) - System monitoring setup

# Technical Indicators Module
The Technical Indicators module provides a suite of common technical analysis tools. It is designed to work efficiently with pandas DataFrames, which is the standard data structure for time-series analysis in the TCP Trading Platform.
## Overview
The module has been refactored to be **DataFrame-centric**. All calculation methods now expect a pandas DataFrame with a `DatetimeIndex` and the required OHLCV columns (`open`, `high`, `low`, `close`, `volume`). This change simplifies the data pipeline, improves performance through vectorization, and ensures consistency across the platform.
The module implements five core technical indicators:
- **Simple Moving Average (SMA)**
- **Exponential Moving Average (EMA)**
- **Relative Strength Index (RSI)**
- **Moving Average Convergence Divergence (MACD)**
- **Bollinger Bands**
## Key Features
- **DataFrame-Centric Design**: Operates directly on pandas DataFrames for performance and simplicity.
- **Vectorized Calculations**: Leverages pandas and numpy for high-speed computation.
- **Flexible `calculate` Method**: A single entry point for calculating any supported indicator by name.
- **Standardized Output**: All methods return a DataFrame containing the calculated indicator values, indexed by timestamp.
## Usage Examples
### Preparing the DataFrame
Before you can calculate indicators, you need a properly formatted pandas DataFrame. The `prepare_chart_data` utility is the recommended way to create one from a list of candle dictionaries.
```python
from components.charts.utils import prepare_chart_data
from data.common.indicators import TechnicalIndicators
# Assume 'candles' is a list of OHLCV dictionaries from the database
# candles = fetch_market_data(...)
# Prepare the DataFrame
df = prepare_chart_data(candles)
# df is now ready for indicator calculations
# It has a DatetimeIndex and the necessary OHLCV columns.
```
### Basic Indicator Calculation
Once you have a prepared DataFrame, you can calculate indicators directly.
```python
# Initialize the calculator
indicators = TechnicalIndicators()
# Calculate a Simple Moving Average
sma_df = indicators.sma(df, period=20)
# Calculate an Exponential Moving Average
ema_df = indicators.ema(df, period=12)
# sma_df and ema_df are pandas DataFrames containing the results.
```
### Using the `calculate` Method
The most flexible way to compute an indicator is with the `calculate` method, which accepts the indicator type as a string.
```python
# Calculate RSI using the generic method
rsi_pkg = indicators.calculate('rsi', df, period=14)
if rsi_pkg:
rsi_df = rsi_pkg['data']
# Calculate MACD with custom parameters
macd_pkg = indicators.calculate('macd', df, fast_period=10, slow_period=30, signal_period=8)
if macd_pkg:
macd_df = macd_pkg['data']
```
### Using Different Price Columns
You can specify which price column (`open`, `high`, `low`, or `close`) to use for the calculation.
```python
# Calculate SMA on the 'high' price
sma_high_df = indicators.sma(df, period=20, price_column='high')
# Calculate RSI on the 'open' price
rsi_open_pkg = indicators.calculate('rsi', df, period=14, price_column='open')
```
## Indicator Details
The following details the parameters and the columns returned in the result DataFrame for each indicator.
### Simple Moving Average (SMA)
- **Parameters**: `period` (int), `price_column` (str, default: 'close')
- **Returned Columns**: `sma`
### Exponential Moving Average (EMA)
- **Parameters**: `period` (int), `price_column` (str, default: 'close')
- **Returned Columns**: `ema`
### Relative Strength Index (RSI)
- **Parameters**: `period` (int), `price_column` (str, default: 'close')
- **Returned Columns**: `rsi`
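For reference, a Wilder-smoothed RSI can be computed with pandas alone. The standalone sketch below mirrors the documented signature and `rsi` output column, but it is an illustration, not the platform's implementation (which may use a different smoothing variant):

```python
import pandas as pd

def rsi(df: pd.DataFrame, period: int = 14, price_column: str = "close") -> pd.DataFrame:
    """Wilder-smoothed RSI; returns a DataFrame with a single 'rsi' column."""
    delta = df[price_column].diff()
    gains = delta.clip(lower=0.0)
    losses = -delta.clip(upper=0.0)
    # Wilder smoothing is an exponential average with alpha = 1/period
    avg_gain = gains.ewm(alpha=1.0 / period, min_periods=period).mean()
    avg_loss = losses.ewm(alpha=1.0 / period, min_periods=period).mean()
    rs = avg_gain / avg_loss
    return pd.DataFrame({"rsi": 100.0 - 100.0 / (1.0 + rs)}, index=df.index)
```

On a steadily rising series the value approaches 100; the first `period - 1` rows stay `NaN` because the window is not yet full.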
### MACD (Moving Average Convergence Divergence)
- **Parameters**: `fast_period` (int), `slow_period` (int), `signal_period` (int), `price_column` (str, default: 'close')
- **Returned Columns**: `macd`, `signal`, `histogram`
### Bollinger Bands
- **Parameters**: `period` (int), `std_dev` (float), `price_column` (str, default: 'close')
- **Returned Columns**: `upper_band`, `middle_band`, `lower_band`
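The band construction is simply a rolling mean plus/minus a standard-deviation envelope. A standalone pandas sketch using the documented column names (illustrative, not the platform implementation):

```python
import pandas as pd

def bollinger_bands(df: pd.DataFrame, period: int = 20, std_dev: float = 2.0,
                    price_column: str = "close") -> pd.DataFrame:
    price = df[price_column]
    middle = price.rolling(window=period).mean()           # middle band = SMA
    band_width = price.rolling(window=period).std() * std_dev
    return pd.DataFrame(
        {"upper_band": middle + band_width,
         "middle_band": middle,
         "lower_band": middle - band_width},
        index=df.index,
    )
```

A quick sanity check: on a constant price series the rolling standard deviation is zero, so all three bands collapse onto the price.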
## Integration with the TCP Platform
The refactored `TechnicalIndicators` module is now tightly integrated with the `ChartBuilder`, which handles all data preparation and calculation automatically when indicators are added to a chart. For custom analysis or strategy development, you can use the class directly as shown in the examples above. The key is to always start with a properly prepared DataFrame using `prepare_chart_data`.
## Data Structures
### IndicatorResult
Container for technical indicator calculation results.
```python
from dataclasses import dataclass
from datetime import datetime
from typing import Any, Dict, Optional

@dataclass
class IndicatorResult:
    timestamp: datetime                         # Right-aligned candle timestamp
    symbol: str                                 # Trading symbol (e.g., 'BTC-USDT')
    timeframe: str                              # Candle timeframe (e.g., '1m', '5m')
    values: Dict[str, float]                    # Indicator values
    metadata: Optional[Dict[str, Any]] = None   # Calculation metadata
```
### Configuration Format
Indicator configurations use a standardized JSON format. Beyond the common `type`, `period`, and `price_column` keys, additional parameters vary by indicator type (for example, `std_dev` for Bollinger Bands or the three `*_period` values for MACD):
```json
{
  "indicator_name": {
    "type": "sma|ema|rsi|macd|bollinger_bands",
    "period": 20,
    "price_column": "close"
  }
}
```
## Integration with TCP Platform
### Aggregation Strategy Compatibility
The indicators module is designed to work seamlessly with the TCP platform's aggregation strategy:
- **Right-Aligned Timestamps**: Uses `end_time` from OHLCV candles
- **Sparse Data Support**: Handles missing candles without interpolation
- **No Future Leakage**: Only processes completed candles
- **Time Boundary Respect**: Maintains proper temporal ordering
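Right alignment and "no interpolation" can both be expressed directly in pandas. The sketch below aggregates a few illustrative trades into 1-minute candles stamped with each bar's `end_time`; an empty minute simply produces no candle:

```python
import pandas as pd

trades = pd.DataFrame(
    {"price": [100.0, 101.0, 99.5]},
    index=pd.to_datetime([
        "2025-01-01 00:00:10",  # falls in the (00:00, 00:01] bar
        "2025-01-01 00:00:40",
        "2025-01-01 00:02:05",  # note: the 00:01-00:02 minute has no trades
    ]),
)

# label='right', closed='right' stamps each candle with its end_time
candles = trades["price"].resample("1min", label="right", closed="right").ohlc()

# The empty minute stays NaN rather than being interpolated; drop it
candles = candles.dropna(how="all")
```

The resulting index contains only `00:01` and `00:03` candle end-times, exactly the sparse, right-aligned shape the indicators consume.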
### Real-Time Processing
```python
from components.charts.utils import prepare_chart_data
from data.common.aggregation import RealTimeCandleProcessor
from data.common.indicators import TechnicalIndicators

# Set up real-time processing
candle_processor = RealTimeCandleProcessor(symbol='BTC-USDT', exchange='okx')
indicators = TechnicalIndicators()

# Process incoming trades and calculate indicators
def on_new_candle(candle):
    # Get recent candles and prepare the DataFrame
    recent_candles = get_recent_candles(symbol='BTC-USDT', count=50)
    df = prepare_chart_data(recent_candles)

    # Calculate indicators (each returns a DataFrame)
    sma_df = indicators.sma(df, period=20)
    rsi_df = indicators.rsi(df, period=14)

    # Use the latest indicator values for trading decisions
    if not sma_df.empty and not rsi_df.empty:
        latest_sma = sma_df['sma'].iloc[-1]
        latest_rsi = rsi_df['rsi'].iloc[-1]
        # Trading logic here...
```
### Database Integration
```python
from database.models import IndicatorData

# Store indicator results in the database
# (assumes an active SQLAlchemy `session` is in scope)
def store_indicators(indicator_results, indicator_type):
    for result in indicator_results:
        indicator_data = IndicatorData(
            symbol=result.symbol,
            timeframe=result.timeframe,
            timestamp=result.timestamp,
            indicator_type=indicator_type,
            values=result.values,
            metadata=result.metadata
        )
        session.add(indicator_data)
    # Commit once per batch rather than per row
    session.commit()
```
## Performance Considerations
### Memory Usage
- Process indicators in batches for large datasets
- Use appropriate period lengths to balance accuracy and performance
- Consider data retention policies for historical indicator values
### Calculation Frequency
- Calculate indicators only when new complete candles are available
- Cache recent indicator values to avoid recalculation
- Use incremental updates for real-time scenarios
### Optimization Tips
- Use `calculate_multiple_indicators()` for efficiency when computing multiple indicators
- Limit the number of historical candles to what's actually needed
- Consider using different timeframes for different indicators
## Error Handling
The module includes comprehensive error handling:
- **Insufficient Data**: Returns empty results when not enough data is available
- **Invalid Configuration**: Validates configuration parameters before calculation
- **Data Quality Issues**: Handles NaN values and missing data gracefully
- **Type Errors**: Converts data types safely with fallback values
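The "graceful NaN handling" and "safe type conversion with fallback values" behaviours can be illustrated with plain pandas primitives; this is a sketch of the pattern, not the module's actual code, and the function name is hypothetical:

```python
import pandas as pd

def clean_price_column(raw: pd.Series, fallback: float = 0.0) -> pd.Series:
    """Coerce mixed/dirty inputs to floats; unparseable values become NaN,
    then are forward-filled from the last good price (or a fallback)."""
    numeric = pd.to_numeric(raw, errors="coerce")  # "abc" -> NaN instead of raising
    return numeric.ffill().fillna(fallback)
```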
## Testing
The module includes comprehensive unit tests covering:
- All indicator calculations with known expected values
- Sparse data handling scenarios
- Edge cases (insufficient data, invalid parameters)
- Configuration validation
- Multiple indicator batch processing
Run tests with:
```bash
uv run pytest tests/test_indicators.py -v
```
## Future Enhancements
Potential future additions to the indicators module:
- **Additional Indicators**: Stochastic, Williams %R, Commodity Channel Index
- **Custom Indicators**: Framework for user-defined indicators
- **Performance Metrics**: Calculation timing and memory usage statistics
- **Streaming Updates**: Incremental indicator updates for real-time scenarios
- **Parallel Processing**: Multi-threaded calculation for large datasets
## See Also
- [Aggregation Strategy Documentation](aggregation-strategy.md)
- [Data Types Documentation](data-types.md)
- [Database Schema Documentation](database-schema.md)
- [API Reference](api-reference.md)
## `TechnicalIndicators` Class
The main class for calculating technical indicators.
- **SMA**: `sma(df, period, price_column='close')`
- **EMA**: `ema(df, period, price_column='close')`
- **RSI**: `rsi(df, period=14, price_column='close')`
- **MACD**: `macd(df, fast_period=12, slow_period=26, signal_period=9, price_column='close')`
- **Bollinger Bands**: `bollinger_bands(df, period=20, std_dev=2.0, price_column='close')`
### `calculate_multiple_indicators`
Calculates multiple indicators in a single pass for efficiency.
```python
indicators = TechnicalIndicators()

# Configuration for multiple indicators
indicators_config = {
    'sma_20': {'type': 'sma', 'period': 20},
    'ema_50': {'type': 'ema', 'period': 50},
    'rsi_14': {'type': 'rsi', 'period': 14}
}

# Calculate all indicators from a prepared DataFrame
all_results = indicators.calculate_multiple_indicators(df, indicators_config)
print(f"SMA results: {len(all_results['sma_20'])}")
print(f"RSI results: {len(all_results['rsi_14'])}")
```
## Sparse Data Handling
The `TechnicalIndicators` class is designed to handle sparse OHLCV data, which is a common scenario in real-time data collection.
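Because the DataFrame is indexed by timestamp, a gap simply appears as an absent row, and a count-based rolling window then operates over the candles that actually exist. An illustrative sketch:

```python
import pandas as pd

# Sparse 1m candles: the 00:02 minute is missing entirely (no trades occurred)
idx = pd.to_datetime(["2025-01-01 00:00", "2025-01-01 00:01",
                      "2025-01-01 00:03", "2025-01-01 00:04"])
df = pd.DataFrame({"close": [100.0, 102.0, 104.0, 106.0]}, index=idx)

# A count-based window spans the available candles only;
# the missing minute is neither filled nor interpolated.
sma = df["close"].rolling(window=3).mean()
```

The third value averages the candles at 00:00, 00:01, and 00:03, skipping straight over the missing minute without injecting synthetic data.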