Merge branch 'main' of https://dep.sokaris.link/vasily/TCPDashboard
This commit is contained in:
11
docs/API.md
Normal file
@@ -0,0 +1,11 @@
# API Documentation

This document will contain the documentation for the platform's REST API once it is implemented.

The API will provide endpoints for:

- Managing bots (creating, starting, stopping)
- Configuring strategies
- Retrieving market data
- Viewing performance metrics

*This documentation is currently a placeholder.*
28
docs/CHANGELOG.md
Normal file
@@ -0,0 +1,28 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Added
- Initial project setup with data collection for OKX.
- Basic dashboard for system health monitoring and data visualization.
- Modularized data collector and processing framework.
- Comprehensive documentation structure.

### Changed
- Refactored data processing to be more modular and extensible.
- Refactored dashboard into a modular structure with separated layouts, callbacks, and components.
- Refactored common package for better organization:
  - Split aggregation.py into dedicated sub-package
  - Split indicators.py into dedicated sub-package
  - Improved validation.py modularity
  - Added safety limits to transformation package
  - Verified and documented data_types.py structure

### Removed
- Monolithic `app.py` in favor of a modular dashboard structure.
- Original monolithic common package files in favor of a modular structure.
31
docs/CONTRIBUTING.md
Normal file
@@ -0,0 +1,31 @@
# Contributing

We welcome contributions to the TCP Trading Platform! Please follow these guidelines to ensure a smooth development process.

## Development Process

1. **Check for Existing Issues**: Before starting work on a new feature or bugfix, check the issue tracker to see if it has already been reported.
2. **Fork the Repository**: Create your own fork of the repository to work on your changes.
3. **Create a Branch**: Create a new branch for your feature or bugfix. Use a descriptive name (e.g., `feature/add-binance-support`, `fix/chart-rendering-bug`).
4. **Write Code**:
   * Adhere to the coding standards outlined in `CONTEXT.md`.
   * Maintain a modular structure and keep components decoupled.
   * Ensure all new code is well-documented with docstrings and comments.
5. **Update Documentation**: If you add or change a feature, update the relevant documentation in the `docs/` directory.
6. **Write Tests**: Add unit and integration tests for any new functionality.
7. **Submit a Pull Request**: Once your changes are complete, submit a pull request to the `main` branch. Provide a clear description of your changes and reference any related issues.

## Coding Standards

* **Style**: Follow PEP 8 for Python code.
* **Naming**: Use `PascalCase` for classes and `snake_case` for functions and variables.
* **Type Hinting**: All function signatures must include type hints.
* **Modularity**: Keep files small and focused on a single responsibility.

## Commit Messages

* Use clear and descriptive commit messages.
* Start with a verb in the imperative mood (e.g., `Add`, `Fix`, `Update`).
* Reference the issue number if applicable (e.g., `Fix: Resolve issue #42`).

Thank you for contributing!
283
docs/README.md
Normal file
@@ -0,0 +1,283 @@
# TCP Dashboard Documentation

Welcome to the documentation for the TCP Trading Platform. This resource provides comprehensive information for developers, contributors, and anyone interested in the platform's architecture and functionality.

## Table of Contents

### 1. Project Overview
- **[Project Context (`../CONTEXT.md`)](../CONTEXT.md)** - The single source of truth for the project's current state, architecture, and conventions. **Start here.**
- **[Product Requirements (`./crypto-bot-prd.md`)](./crypto-bot-prd.md)** - The Product Requirements Document (PRD) outlining the project's goals and scope.

### 2. Getting Started
- **[Setup Guide (`guides/setup.md`)](./guides/setup.md)** - Instructions for setting up the development environment.
- **[Contributing (`CONTRIBUTING.md`)](./CONTRIBUTING.md)** - Guidelines for contributing to the project.

### 3. Architecture & Design
- **[Architecture Overview (`../architecture.md`)](../architecture.md)** - High-level system architecture, components, and data flow.
- **[Architecture Decision Records (`decisions/`)](./decisions/)** - Key architectural decisions and their justifications.

### 4. Modules Documentation
This section contains detailed technical documentation for each system module.

- **[Chart System (`modules/charts/`)](./modules/charts/)** - Comprehensive documentation for the modular chart system.
- **[Data Collectors (`modules/data_collectors.md`)](./modules/data_collectors.md)** - Guide to the data collector framework.
- **[Database Operations (`modules/database_operations.md`)](./modules/database_operations.md)** - Details on the repository pattern for database interactions.
- **[Technical Indicators (`modules/technical-indicators.md`)](./modules/technical-indicators.md)** - Information on the technical analysis module.
- **[Exchange Integrations (`modules/exchanges/`)](./modules/exchanges/)** - Exchange-specific implementation details.
- **[Logging System (`modules/logging.md`)](./modules/logging.md)** - The unified logging framework.
- **[Data Collection Service (`modules/services/data_collection_service.md`)](./modules/services/data_collection_service.md)** - The high-level service that orchestrates data collectors.

### 5. API & Reference
- **[API Documentation (`API.md`)](./API.md)** - Placeholder for future REST API documentation.
- **[Technical Reference (`reference/`)](./reference/)** - Detailed specifications, data formats, and standards.
- **[Changelog (`CHANGELOG.md`)](./CHANGELOG.md)** - A log of all notable changes to the project.

## How to Use This Documentation

- **For a high-level understanding**, start with the `CONTEXT.md` and `architecture.md` files.
- **For development tasks**, refer to the specific module documentation in the `modules/` directory.
- **For setup and contribution guidelines**, see the `guides/` and `CONTRIBUTING.md` files.

This documentation is intended to be a living document that evolves with the project. Please keep it up to date as you make changes.
### 📖 **[Setup & Guides](guides/)**

- **[Setup Guide](./guides/setup.md)** - *Comprehensive setup instructions*
  - Environment configuration and prerequisites
  - Database setup with Docker and PostgreSQL
  - Development workflow and best practices
  - Production deployment guidelines

### 📋 **[Technical Reference](reference/)**

- **[Project Specification](./reference/specification.md)** - *Technical specifications and requirements*
  - System requirements and constraints
  - Database schema specifications
  - API endpoint definitions
  - Data format specifications

- **[Aggregation Strategy](reference/aggregation-strategy.md)** - *Comprehensive data aggregation documentation*
  - Right-aligned timestamp strategy (industry standard)
  - Future leakage prevention safeguards
  - Real-time vs historical processing
  - Database storage patterns
  - Testing methodology and examples

## 🎯 **Quick Start**

1. **New to the platform?** Start with the [Setup Guide](guides/setup.md)
2. **Working with charts and indicators?** See [Chart Layers Documentation](components/charts/)
3. **Implementing data collectors?** See [Data Collectors Documentation](components/data_collectors.md)
4. **Understanding the architecture?** Read [Architecture Overview](architecture/architecture.md)
5. **Modular dashboard development?** Check [Dashboard Structure](dashboard-modular-structure.md)
6. **Exchange integration?** Check [Exchange Documentation](exchanges/)
7. **Troubleshooting?** Check component-specific documentation
## 🏛️ **System Components**

### Core Infrastructure
- **Database Layer**: PostgreSQL with SQLAlchemy models
- **Real-time Messaging**: Redis pub/sub for data distribution
- **Configuration Management**: Pydantic-based settings
- **Containerization**: Docker and docker-compose setup

### Data Collection & Processing
- **Abstract Base Collectors**: Standardized interface for all exchange connectors
- **Exchange Factory Pattern**: Unified collector creation across exchanges
- **Modular Exchange Architecture**: Organized exchange implementations in dedicated folders
- **Health Monitoring**: Automatic failure detection and recovery
- **Data Validation**: Comprehensive validation for market data
- **Multi-Exchange Support**: OKX (production-ready), Binance and other exchanges (planned)

### Trading & Strategy Engine
- **Strategy Framework**: Base strategy classes and implementations
- **Bot Management**: Lifecycle management with JSON configuration
- **Backtesting Engine**: Historical strategy testing with performance metrics
- **Portfolio Management**: Virtual trading with P&L tracking

### User Interface & Visualization
- **Modular Dashboard**: Dash-based web interface with separated layouts and callbacks
- **Chart Layers System**: Interactive price charts with 26+ technical indicators
- **Strategy Templates**: 5 pre-configured trading strategies (EMA crossover, momentum, etc.)
- **User Indicator Management**: Custom indicator creation with JSON persistence
- **Real-time Updates**: Chart and system health monitoring with auto-refresh
- **Bot Controls**: Start/stop/configure trading bots (planned)
- **Performance Analytics**: Portfolio visualization and trade analytics (planned)
## 📋 **Task Progress**

The platform follows a structured development approach with clearly defined tasks:

- ✅ **Database Foundation** - Complete
- ✅ **Enhanced Data Collectors** - Complete with health monitoring
- ✅ **OKX Data Collector** - Complete with factory pattern and production testing
- ✅ **Modular Chart Layers System** - Complete with strategy support
- ✅ **Dashboard Modular Structure** - Complete with separated concerns
- ✅ **Custom Indicator Management** - Complete with CRUD operations
- ⏳ **Multi-Exchange Support** - In progress (Binance connector next)
- ⏳ **Bot Signal Layer** - Planned for integration
- ⏳ **Strategy Engine** - Planned
- ⏳ **Advanced Features** - Planned

For detailed task tracking, see [tasks/tasks-crypto-bot-prd.md](../tasks/tasks-crypto-bot-prd.md) and [tasks/3.4. Chart layers.md](../tasks/3.4. Chart layers.md).
## 🛠️ **Development Workflow**

### Setting Up Development Environment

```bash
# Clone and setup
git clone <repository>
cd TCPDashboard

# Install dependencies with UV
uv sync

# Setup environment
cp .env.example .env
# Edit .env with your configuration

# Start services
docker-compose up -d

# Initialize database
uv run python scripts/init_database.py

# Run tests
uv run pytest
```

### Key Development Tools

- **UV**: Modern Python package management
- **pytest**: Testing framework with async support
- **SQLAlchemy**: Database ORM with migration support
- **Dash + Mantine**: Modern web UI framework
- **Docker**: Containerized development environment
## 🔍 **Testing**

The platform includes comprehensive test coverage:

- **Unit Tests**: Individual component testing
- **Integration Tests**: Cross-component functionality
- **Performance Tests**: Load and stress testing
- **End-to-End Tests**: Full system workflows

```bash
# Run all tests
uv run pytest

# Run specific test files
uv run pytest tests/test_base_collector.py
uv run pytest tests/test_collector_manager.py

# Run with coverage
uv run pytest --cov=data --cov-report=html
```
## 📊 **Monitoring & Observability**

### Logging
- **Structured Logging**: JSON-formatted logs with automatic cleanup
- **Multiple Levels**: Debug, Info, Warning, Error with configurable output
- **Component Isolation**: Separate loggers for different system components
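The three bullets above can be sketched with the standard library. This is an illustrative assumption, not the platform's actual `modules/logging.md` implementation — the `get_component_logger` helper name is hypothetical:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as a single JSON line (structured logging)."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "component": record.name,  # component isolation via the logger name
            "message": record.getMessage(),
        })

def get_component_logger(name: str) -> logging.Logger:
    """Hypothetical helper: one isolated JSON logger per system component."""
    logger = logging.getLogger(name)
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(JsonFormatter())
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
        logger.propagate = False  # keep component output separate
    return logger

collector_log = get_component_logger("data.collector.okx")
collector_log.info("WebSocket connected")
```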
### Health Monitoring
- **Collector Health**: Real-time status and performance metrics
- **Auto-Recovery**: Automatic restart on failures
- **Performance Tracking**: Message rates, uptime, error rates

### Metrics Integration
- **Prometheus Support**: Built-in metrics collection
- **Custom Dashboards**: System performance visualization
- **Alerting**: Configurable alerts for system health

## 🔐 **Security & Best Practices**

### Configuration Management
- **Environment Variables**: All sensitive data via `.env` files
- **No Hardcoded Secrets**: Clean separation of configuration and code
- **Validation**: Pydantic-based configuration validation

### Data Handling
- **Input Validation**: Comprehensive validation for all external data
- **Error Handling**: Robust error handling with proper logging
- **Resource Management**: Proper cleanup and resource management

### Code Quality
- **Type Hints**: Full type annotation coverage
- **Documentation**: Comprehensive docstrings and comments
- **Testing**: High test coverage with multiple test types
- **Code Standards**: Consistent formatting and patterns
## 🤝 **Contributing**

### Development Guidelines
1. Follow existing code patterns and architecture
2. Add comprehensive tests for new functionality
3. Update documentation for API changes
4. Use type hints and proper error handling
5. Follow the existing logging patterns

### Code Review Process
1. Create feature branches from main
2. Write tests before implementing features
3. Ensure all tests pass and maintain coverage
4. Update relevant documentation
5. Submit pull requests with clear descriptions

## 📞 **Support**

### Getting Help
1. **Documentation**: Check relevant component documentation
2. **Logs**: Review system logs in `./logs/` directory
3. **Status**: Use built-in status and health check methods
4. **Tests**: Run test suite to verify system integrity

### Common Issues
- **Database Connection**: Check Docker services and environment variables
- **Collector Failures**: Review collector health status and logs
- **Performance Issues**: Monitor system resources and optimize accordingly

---
## 📁 **Documentation File Structure**

```
docs/
├── README.md              # This file - main documentation index
├── architecture/          # System architecture and design
│   ├── README.md          # Architecture overview
│   ├── architecture.md    # Technical architecture
│   └── crypto-bot-prd.md  # Product requirements
├── components/            # Core system components
│   ├── README.md          # Component overview
│   ├── data_collectors.md # Data collection system
│   └── logging.md         # Logging framework
├── exchanges/             # Exchange integrations
│   ├── README.md          # Exchange overview
│   └── okx_collector.md   # OKX implementation
├── guides/                # User guides and tutorials
│   ├── README.md          # Guide overview
│   └── setup.md           # Setup instructions
└── reference/             # Technical reference
    ├── README.md          # Reference overview
    └── specification.md   # Technical specifications
```

## 🔗 **Navigation**

- **🏗️ [Architecture & Design](architecture/)** - System design and requirements
- **🔧 [Core Components](components/)** - Technical implementation details
- **🌐 [Exchange Integrations](exchanges/)** - Exchange-specific documentation
- **📖 [Setup & Guides](guides/)** - User guides and tutorials
- **📋 [Technical Reference](reference/)** - API specifications and schemas

---

For the most current information, refer to the individual component documentation linked above.
116
docs/architecture.md
Normal file
@@ -0,0 +1,116 @@
# System Architecture

This document provides a high-level overview of the system architecture for the Crypto Trading Bot Platform.

## 1. Core Components

The platform consists of six core components designed to work together in a monolithic application structure. This design prioritizes rapid development and clear separation of concerns.

```
┌─────────────────────────────────────────────────────────────────────────┐
│                         TCP Dashboard Platform                          │
│                                                                         │
│ ┌───────────────────┐    ┌───────────────────┐    ┌───────────────────┐ │
│ │  Data Collector   │───>│  Strategy Engine  │───>│    Bot Manager    │ │
│ │ (OKX, Binance...) │    │(Signal Generation)│    │(State & Execution)│ │
│ └───────────────────┘    └───────────────────┘    └───────────────────┘ │
│           │                        │                        │           │
│ ┌───────────────────┐    ┌───────────────────┐    ┌───────────────────┐ │
│ │     Dashboard     │<───│    Backtesting    │<───│     Database      │ │
│ │  (Visualization)  │    │      Engine       │    │   (PostgreSQL)    │ │
│ └───────────────────┘    └───────────────────┘    └───────────────────┘ │
└─────────────────────────────────────────────────────────────────────────┘
```
### 1. Data Collection Module
**Responsibility**: Collect real-time market data from exchanges
**Implementation**: `data/`
**Key Features**:
- Connects to exchange WebSocket APIs (OKX implemented)
- Aggregates raw trades into OHLCV candles
- Publishes data to Redis for real-time distribution
- Stores data in PostgreSQL for historical analysis
- See [Data Collectors Documentation (`modules/data_collectors.md`)](./modules/data_collectors.md) for details.
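To illustrate the trade-to-candle step, here is a minimal sketch of OHLCV aggregation over one candle period. The `Trade` shape and the function name are assumptions for illustration, not the actual `data/` implementation:

```python
from dataclasses import dataclass

@dataclass
class Trade:
    ts: float     # epoch seconds
    price: float
    size: float

def aggregate_ohlcv(trades: list[Trade]) -> dict:
    """Collapse the trades of one candle period into an OHLCV dict."""
    ordered = sorted(trades, key=lambda t: t.ts)  # open/close depend on time order
    prices = [t.price for t in ordered]
    return {
        "open": prices[0],
        "high": max(prices),
        "low": min(prices),
        "close": prices[-1],
        "volume": sum(t.size for t in ordered),
    }

candle = aggregate_ohlcv([
    Trade(0.0, 100.0, 1.0),
    Trade(1.0, 105.0, 0.5),
    Trade(2.0, 99.0, 2.0),
])
# candle == {"open": 100.0, "high": 105.0, "low": 99.0, "close": 99.0, "volume": 3.5}
```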
### 2. Strategy Engine
**Responsibility**: Unified interface for all trading strategies
**Status**: Not yet implemented. This section describes the planned architecture.

```python
import pandas as pd  # planned dependency; Signal will be a small BUY/SELL/HOLD type

class BaseStrategy:
    def __init__(self, parameters: dict):
        self.params = parameters

    def calculate(self, ohlcv_data: pd.DataFrame) -> "Signal":
        raise NotImplementedError

    def get_indicators(self) -> dict:
        raise NotImplementedError
```
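As an illustration of the planned interface, a hypothetical EMA-crossover subclass might look like the sketch below. It is self-contained, and `Signal` is reduced to a plain string here — an assumption, since the real engine is not yet implemented:

```python
import pandas as pd

class BaseStrategy:
    """Mirror of the planned base class, with Signal reduced to a str."""
    def __init__(self, parameters: dict):
        self.params = parameters

    def calculate(self, ohlcv_data: pd.DataFrame) -> str:
        raise NotImplementedError

class EmaCrossoverStrategy(BaseStrategy):
    """BUY when the fast EMA of close is above the slow EMA, SELL when below."""
    def calculate(self, ohlcv_data: pd.DataFrame) -> str:
        fast = ohlcv_data["close"].ewm(span=self.params["fast_period"]).mean()
        slow = ohlcv_data["close"].ewm(span=self.params["slow_period"]).mean()
        if fast.iloc[-1] > slow.iloc[-1]:
            return "BUY"
        if fast.iloc[-1] < slow.iloc[-1]:
            return "SELL"
        return "HOLD"

df = pd.DataFrame({"close": [float(p) for p in range(100, 130)]})  # rising market
signal = EmaCrossoverStrategy({"fast_period": 12, "slow_period": 26}).calculate(df)
# signal == "BUY" for this monotonically rising series
```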
### 3. Bot Manager
**Responsibility**: Orchestrate bot execution and state management
**Status**: Not yet implemented. This section describes the planned architecture.

```python
class BotManager:
    def __init__(self):
        self.bots = {}

    def create_bot(self, config: dict) -> "Bot":
        ...  # planned: instantiate a Bot from its JSON config and register it

    def run_all_bots(self):
        ...  # planned: drive the run loop of every enabled bot
```
### 4. Database
**Responsibility**: Data persistence and storage
**Implementation**: `database/`
**Key Features**:
- PostgreSQL with TimescaleDB extension for time-series data
- SQLAlchemy for ORM and schema management
- Alembic for database migrations
- See [Database Operations Documentation (`modules/database_operations.md`)](./modules/database_operations.md) for details.
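The time-series shape of the market-data storage can be sketched as follows. This uses stdlib `sqlite3` purely for illustration — the platform itself uses PostgreSQL/TimescaleDB via SQLAlchemy — and the table and column names are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE market_data (
        symbol    TEXT NOT NULL,
        timeframe TEXT NOT NULL,     -- '1m', '5m', ...
        ts        INTEGER NOT NULL,  -- right-aligned candle close time (epoch s)
        open REAL, high REAL, low REAL, close REAL, volume REAL,
        PRIMARY KEY (symbol, timeframe, ts)  -- one candle per symbol/timeframe/time
    )
""")
conn.execute(
    "INSERT INTO market_data VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
    ("BTC-USDT", "1m", 1717027260, 100.0, 105.0, 99.0, 101.0, 3.5),
)
# Typical time-series access pattern: latest candle for a symbol
row = conn.execute(
    "SELECT close FROM market_data WHERE symbol = ? ORDER BY ts DESC LIMIT 1",
    ("BTC-USDT",),
).fetchone()
# row == (101.0,)
```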
### 5. Backtesting Engine
**Responsibility**: Test strategies against historical data
**Status**: Not yet implemented. This section describes the planned architecture.

### 6. Dashboard
**Responsibility**: Visualization and user interaction
**Implementation**: `dashboard/`
**Key Features**:
- Dash-based web interface
- Real-time chart visualization with Plotly
- System health monitoring
- Bot management UI (planned)
- See the [Chart System Documentation (`modules/charts/`)](./modules/charts/) for details.
## 2. Data Flow

### Real-time Data Flow
1. **Data Collector** connects to exchange WebSocket (e.g., OKX).
2. Raw trades are aggregated into OHLCV candles (1m, 5m, etc.).
3. OHLCV data is published to a **Redis** channel.
4. **Strategy Engine** subscribes to Redis and receives OHLCV data.
5. Strategy generates a **Signal** (BUY/SELL/HOLD).
6. **Bot Manager** receives the signal and executes a virtual trade.
7. Trade details are stored in the **Database**.
8. **Dashboard** visualizes real-time data and bot activity.

### Backtesting Data Flow
1. **Backtesting Engine** queries historical OHLCV data from the **Database**.
2. Data is fed into the **Strategy Engine**.
3. Strategy generates signals, which are logged.
4. Performance metrics are calculated and stored.

## 3. Design Principles

- **Monolithic Architecture**: All components are part of a single application for simplicity.
- **Modular Design**: Components are loosely coupled to allow for future migration to microservices.
- **API-First**: Internal components communicate through well-defined interfaces.
- **Configuration-driven**: Bot and strategy parameters are managed via JSON files.

---
*Back to [Main Documentation (`README.md`)](./README.md)*
613
docs/crypto-bot-prd.md
Normal file
@@ -0,0 +1,613 @@
# Simplified Crypto Trading Bot Platform: Product Requirements Document (PRD)

**Version:** 1.0
**Date:** May 30, 2025
**Author:** Vasily
**Status:** Draft

> **Note on Implementation Status:** This document describes the complete vision for the platform. As of the current development phase, many components like the **Strategy Engine**, **Bot Manager**, and **Backtesting Engine** are planned but not yet implemented. For a detailed view of the current status, please refer to the main `CONTEXT.md` file.

## Executive Summary

This PRD outlines the development of a simplified crypto trading bot platform that enables strategy testing, development, and execution without the complexity of microservices and advanced monitoring. The goal is to create a functional system within 1-2 weeks that allows for strategy testing while establishing a foundation that can scale in the future. The platform addresses key requirements including data collection, strategy execution, visualization, and backtesting capabilities in a monolithic architecture optimized for internal use.

---
*Back to [Main Documentation (`../README.md`)](../README.md)*
## Current Requirements & Constraints

- **Speed to Deployment**: System must be functional within 1-2 weeks
- **Scale**: Support for 5-10 concurrent trading bots
- **Architecture**: Monolithic application instead of microservices
- **User Access**: Internal use only initially (no multi-user authentication)
- **Infrastructure**: Simplified deployment without Kubernetes/Docker Swarm
- **Monitoring**: Basic logging for modules

## System Architecture

### High-Level Architecture

The platform will follow a monolithic architecture pattern to enable rapid development while providing clear separation between components:

### Data Flow Architecture

```
OKX Exchange API (WebSocket)
        ↓
Data Collector → OHLCV Aggregator → PostgreSQL (market_data)
        ↓                  ↓
[Optional] Raw Trade Storage    Redis Pub/Sub → Strategy Engine (JSON configs)
        ↓                               ↓
Files/Database (raw_trades)      Signal Generation → Bot Manager
                                        ↓
                   PostgreSQL (signals, trades, bot_performance)
                                        ↓
                   Dashboard (REST API) ← PostgreSQL (historical data)
                                        ↑
                   Real-time Updates ← Redis Channels
```
**Data Processing Priority**:
1. **Real-time**: Raw data → OHLCV candles → Redis → Bots (primary flow)
2. **Historical**: OHLCV data from PostgreSQL for backtesting and charts
3. **Advanced Analysis**: Raw trade data (if stored) for detailed backtesting

### Redis Channel Design

```python
# Real-time market data distribution
MARKET_DATA_CHANNEL = "market:{symbol}"   # OHLCV updates
BOT_SIGNALS_CHANNEL = "signals:{bot_id}"  # Trading decisions
BOT_STATUS_CHANNEL = "status:{bot_id}"    # Bot lifecycle events
SYSTEM_EVENTS_CHANNEL = "system:events"   # Global notifications
```
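A hedged sketch of how a collector might publish a candle on these channels. The `publish_candle` helper and the injected client are illustrative assumptions — a real deployment would pass a `redis.Redis` instance, which exposes the same `publish(channel, message)` method:

```python
import json

MARKET_DATA_CHANNEL = "market:{symbol}"  # channel template from the design above

def publish_candle(client, symbol: str, candle: dict) -> str:
    """Serialize a candle and publish it on the symbol's market channel.

    `client` only needs a redis-like publish(channel, message) method.
    Returns the channel name for convenience.
    """
    channel = MARKET_DATA_CHANNEL.format(symbol=symbol)
    client.publish(channel, json.dumps(candle))
    return channel

class FakeRedis:
    """Stand-in for redis.Redis in this sketch; records published messages."""
    def __init__(self):
        self.messages = []
    def publish(self, channel, message):
        self.messages.append((channel, message))

client = FakeRedis()
channel = publish_candle(client, "BTC-USDT", {"open": 100.0, "close": 101.0})
# channel == "market:BTC-USDT"
```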
### Configuration Strategy

**PostgreSQL for**: Market data, bot instances, trades, signals, performance metrics
**JSON files for**: Strategy parameters, bot configurations (rapid testing and parameter tuning)

```json
// config/strategies/ema_crossover.json
{
  "strategy_name": "EMA_Crossover",
  "parameters": {
    "fast_period": 12,
    "slow_period": 26,
    "risk_percentage": 0.02
  }
}

// config/bots/bot_001.json
{
  "bot_id": "bot_001",
  "strategy_file": "ema_crossover.json",
  "symbol": "BTC-USDT",
  "virtual_balance": 10000,
  "enabled": true
}
```
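Loading and sanity-checking such a file can be sketched with the standard library. The validation rules below are illustrative assumptions; the platform's actual settings layer is Pydantic-based:

```python
import json

def load_strategy_config(raw: str) -> dict:
    """Parse a strategy JSON document and check the fields used above."""
    cfg = json.loads(raw)
    if "strategy_name" not in cfg or "parameters" not in cfg:
        raise ValueError("strategy config needs 'strategy_name' and 'parameters'")
    params = cfg["parameters"]
    # Illustrative rule: an EMA crossover only makes sense with fast < slow.
    if params.get("fast_period", 0) >= params.get("slow_period", float("inf")):
        raise ValueError("fast_period must be smaller than slow_period")
    return cfg

cfg = load_strategy_config("""
{
  "strategy_name": "EMA_Crossover",
  "parameters": {"fast_period": 12, "slow_period": 26, "risk_percentage": 0.02}
}
""")
# cfg["parameters"]["fast_period"] == 12
```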
### Error Handling Strategy

**Bot Crash Recovery**:
- Monitor bot processes every 30 seconds
- Auto-restart crashed bots if status = 'active'
- Log all crashes with stack traces
- Maximum 3 restart attempts per hour

**Exchange Connection Issues**:
- Retry with exponential backoff (1s, 2s, 4s, 8s, max 60s)
- Switch to backup WebSocket connection if available
- Log connection quality metrics

**Database Errors**:
- Continue operation with in-memory cache for up to 5 minutes
- Queue operations for retry when connection restored
- Alert on prolonged database disconnection

**Application Restart Recovery**:
- Read bot states from database on startup
- Restore active bots to 'active' status
- Resume data collection for all monitored symbols
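The exponential backoff schedule above (1s, 2s, 4s, 8s, capped at 60s) can be sketched as a small generator; the helper name is an assumption for illustration:

```python
def backoff_delays(base: float = 1.0, cap: float = 60.0, attempts: int = 10):
    """Yield reconnect delays: base, 2*base, 4*base, ... capped at `cap` seconds."""
    delay = base
    for _ in range(attempts):
        yield min(delay, cap)  # never wait longer than the cap
        delay *= 2

delays = list(backoff_delays(attempts=8))
# delays == [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0, 60.0]
```

A reconnect loop would `time.sleep()` on each yielded value between attempts and reset the generator once a connection succeeds.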
### Component Details and Functional Requirements

1. **Data Collection Module**
   - Connect to exchange APIs (OKX initially) via WebSocket
   - Aggregate real-time trades into OHLCV candles (1m, 5m, 15m, 1h, 4h, 1d)
   - Store OHLCV data in PostgreSQL for bot operations and backtesting
   - Send real-time candle updates through Redis
   - Optional: Store raw trade data for advanced backtesting

   **FR-001: Unified Data Provider Interface**
   - Support multiple exchanges through standardized adapters
   - Real-time OHLCV aggregation with WebSocket connections
   - Primary focus on candle data, raw data storage optional
   - Data validation and error handling mechanisms

   **FR-002: Market Data Processing**
   - OHLCV aggregation with configurable timeframes (1m base, higher timeframes derived)
   - Technical indicator calculation (SMA, EMA, RSI, MACD, Bollinger Bands) on OHLCV data
   - Data normalization across different exchanges
   - Time alignment following exchange standards (right-aligned candles)
2. **Strategy Engine**
   - Provide unified interface for all trading strategies
   - Support multiple strategy types with common parameter structure
   - Generate trading signals based on market data
   - Log strategy performance and signals
   - Strategy implementation as a class

   **FR-003: Strategy Framework**
   - Base strategy class with standardized interface
   - Support for multiple strategy types
   - Parameter configuration and optimization tools (JSON for the parameters)
   - Signal generation with confidence scoring

   **FR-004: Signal Processing**
   - Real-time signal calculation and validation
   - Signal persistence for analysis and debugging
   - Multi-timeframe analysis capabilities
   - Custom indicator development support
3. **Bot Manager**
   - Create and manage up to 10 concurrent trading bots
   - Configure bot parameters and associated strategies
   - Start/stop individual bots
   - Track bot status and performance

   **FR-005: Bot Lifecycle Management**
   - Bot creation with strategy and parameter selection
   - Start/stop/pause functionality with state persistence
   - Configuration management
   - Resource allocation and monitoring (in future)

   **FR-006: Portfolio Management**
   - Position tracking and balance management
   - Risk management controls (stop-loss, take-profit, position sizing)
   - Multi-bot coordination and conflict resolution (in future)
   - Real-time portfolio valuation (in future)
4. **Trading Execution**
   - Simulate or execute trades based on configuration
   - Store trade information in the database

   **FR-007: Order Management**
   - Order placement with multiple order types (market, limit, stop)
   - Order tracking and status monitoring (in future)
   - Execution confirmation and reconciliation (in future)
   - Fee calculation and tracking (in future)

   **FR-008: Risk Controls**
   - Pre-trade risk validation
   - Position limits and exposure controls (in future)
   - Emergency stop mechanisms (in future)
   - Compliance monitoring and reporting (in future)

5. **Database (PostgreSQL)**
   - Store market data, bot configurations, and trading history
   - Optimized schema for time-series data without complexity
   - Support for data querying and aggregation

   **Database (JSON)**
   - Store strategy parameters and bot configuration in JSON in the beginning for simplicity of editing and testing
5. **Backtesting Engine**
|
||||
- Run simulations on historical data using vectorized operations for speed
|
||||
- Calculate performance metrics
|
||||
- Support multiple timeframes and strategy parameter testing
|
||||
- Generate comparison reports between strategies
|
||||
|
||||
**FR-009: Historical Simulation**
|
||||
- Strategy backtesting on historical market data
|
||||
- Performance metric calculation (Sharpe ratio, drawdown, win rate, total return)
|
||||
- Parameter optimization through grid search (limited combinations for speed) (in future)
|
||||
- Side-by-side strategy comparison with statistical significance
|
||||
|
||||
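To make the metrics in FR-009 concrete, here is a minimal calculation sketch over an equity curve. The function and parameter names are illustrative, not the platform's actual API:

```python
import numpy as np

def performance_metrics(equity_curve, periods_per_year=365):
    """Compute total return, Sharpe ratio, and max drawdown from an equity curve."""
    equity = np.asarray(equity_curve, dtype=float)
    returns = np.diff(equity) / equity[:-1]

    total_return = equity[-1] / equity[0] - 1.0
    sharpe = 0.0
    if returns.std() > 0:
        sharpe = returns.mean() / returns.std() * np.sqrt(periods_per_year)

    # Drawdown: distance of the equity curve below its running maximum
    running_max = np.maximum.accumulate(equity)
    max_drawdown = ((equity - running_max) / running_max).min()

    return {"total_return": total_return, "sharpe": sharpe, "max_drawdown": max_drawdown}

def win_rate(trade_pnls):
    """Fraction of trades with positive P&L."""
    pnls = np.asarray(trade_pnls, dtype=float)
    return float((pnls > 0).mean()) if pnls.size else 0.0
```

The same functions can serve both the backtesting engine and the live `bot_performance` snapshots, since both ultimately produce an equity series.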
**FR-010: Simulation Engine**
- Vectorized signal calculation using pandas operations
- Realistic fee modeling (0.1% per trade for OKX)
- Look-ahead bias prevention with proper timestamp handling
- Configurable test periods (1 day to 24 months)

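A sketch of how these three requirements (vectorized pandas signals, 0.1% fee modeling, look-ahead prevention via shifting) can fit together; the strategy itself, an SMA crossover, is only an illustrative placeholder:

```python
import pandas as pd

def backtest_sma_crossover(close: pd.Series, fast=10, slow=30, fee_rate=0.001):
    """
    Vectorized long/flat SMA-crossover backtest with per-trade fees.

    fee_rate=0.001 corresponds to the 0.1% per-trade fee assumed for OKX.
    """
    fast_ma = close.rolling(fast).mean()
    slow_ma = close.rolling(slow).mean()

    # Desired position: long (1.0) when fast MA is above slow MA, flat (0.0) otherwise
    position = (fast_ma > slow_ma).astype(float)

    # Shift by one bar so a signal computed on bar t is traded on bar t+1
    # (look-ahead bias prevention)
    position = position.shift(1).fillna(0.0)

    returns = close.pct_change().fillna(0.0)
    # Fees are charged whenever the position changes (entry or exit)
    fees = position.diff().abs().fillna(0.0) * fee_rate

    strategy_returns = position * returns - fees
    equity = (1.0 + strategy_returns).cumprod()
    return equity
```

The entire computation is column-wise pandas arithmetic, so a multi-month test over 1-minute candles stays fast without an explicit event loop.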
6. **Dashboard & Visualization**
   - Display real-time market data and bot status
   - Show portfolio value progression over time
   - Visualize trade history with buy/sell markers on price charts
   - Provide a simple bot control interface (start/stop/configure)

**FR-011: Dashboard Interface**
- Real-time bot monitoring with status indicators
- Portfolio performance charts (total value, cash vs. crypto allocation)
- Trade history table with P&L per trade
- Simple bot configuration forms for JSON parameter editing

**FR-012: Data Visualization**
- Interactive price charts with strategy signal overlays
- Portfolio value progression charts
- Performance comparison tables (multiple bots side-by-side)
- Fee tracking and total cost analysis

### Non-Functional Requirements

1. Performance Requirements
**NFR-001: Latency**
- Market data processing: <100ms from exchange to database
- Signal generation: <500ms for standard strategies
- API response time: <200ms for 95% of requests
- Dashboard updates: <2 seconds for real-time data

**NFR-002: Scalability**
- Database queries scalable to 1M+ records per table
- Horizontal scaling capability for all services (planned)

2. Reliability Requirements
**NFR-003: Availability**
- System uptime: 99.5% excluding planned maintenance
- Data collection: 99.9% uptime during market hours
- Automatic failover for critical services
- Graceful degradation during partial outages

**NFR-004: Data Integrity**
- Zero data loss for executed trades
- Transactional consistency for all financial operations
- Regular database backups with point-in-time recovery
- Data validation and error correction mechanisms

3. Security Requirements
**NFR-005: Authentication & Authorization** (planned)

**NFR-006: Data Protection**
- End-to-end encryption for sensitive data (planned)
- Secure storage of API keys and credentials
- Regular security audits and penetration testing (planned)
- Compliance with financial data protection regulations (planned)

## Technical Implementation

### Database Schema

The database schema separates frequently accessed OHLCV data from raw tick data to optimize performance and storage.

```sql
-- OHLCV Market Data (primary table for bot operations)
CREATE TABLE market_data (
    id SERIAL PRIMARY KEY,
    exchange VARCHAR(50) NOT NULL DEFAULT 'okx',
    symbol VARCHAR(20) NOT NULL,
    timeframe VARCHAR(5) NOT NULL, -- 1m, 5m, 15m, 1h, 4h, 1d
    timestamp TIMESTAMPTZ NOT NULL,
    open DECIMAL(18,8) NOT NULL,
    high DECIMAL(18,8) NOT NULL,
    low DECIMAL(18,8) NOT NULL,
    close DECIMAL(18,8) NOT NULL,
    volume DECIMAL(18,8) NOT NULL,
    trades_count INTEGER, -- number of trades in this candle
    created_at TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(exchange, symbol, timeframe, timestamp)
);
CREATE INDEX idx_market_data_lookup ON market_data(symbol, timeframe, timestamp);
-- Note: a partial index predicate cannot call NOW() (it is not IMMUTABLE),
-- so queries for recent data rely on a plain descending index instead.
CREATE INDEX idx_market_data_recent ON market_data(timestamp DESC);

-- Raw Trade Data (optional, for detailed backtesting only)
CREATE TABLE raw_trades (
    id BIGSERIAL,
    exchange VARCHAR(50) NOT NULL DEFAULT 'okx',
    symbol VARCHAR(20) NOT NULL,
    timestamp TIMESTAMPTZ NOT NULL,
    type VARCHAR(10) NOT NULL, -- trade, order, balance, tick, books
    data JSONB NOT NULL, -- raw response from the exchange
    created_at TIMESTAMPTZ DEFAULT NOW(),
    -- On a partitioned table the primary key must include the partition key
    PRIMARY KEY (id, timestamp)
) PARTITION BY RANGE (timestamp);
CREATE INDEX idx_raw_trades_symbol_time ON raw_trades(symbol, timestamp);

-- Monthly partitions for raw data (if using raw data)
-- CREATE TABLE raw_trades_y2024m01 PARTITION OF raw_trades
--     FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

-- Bot Management (simplified)
CREATE TABLE bots (
    id SERIAL PRIMARY KEY,
    name VARCHAR(100) NOT NULL,
    strategy_name VARCHAR(50) NOT NULL,
    symbol VARCHAR(20) NOT NULL,
    timeframe VARCHAR(5) NOT NULL,
    status VARCHAR(20) NOT NULL DEFAULT 'inactive', -- active, inactive, error
    config_file VARCHAR(200), -- path to JSON config
    virtual_balance DECIMAL(18,8) DEFAULT 10000,
    current_balance DECIMAL(18,8) DEFAULT 10000,
    last_heartbeat TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Trading Signals (for analysis and debugging)
CREATE TABLE signals (
    id SERIAL PRIMARY KEY,
    bot_id INTEGER REFERENCES bots(id),
    timestamp TIMESTAMPTZ NOT NULL,
    signal_type VARCHAR(10) NOT NULL, -- buy, sell, hold
    price DECIMAL(18,8),
    confidence DECIMAL(5,4),
    indicators JSONB, -- technical indicator values
    created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX idx_signals_bot_time ON signals(bot_id, timestamp);

-- Trade Execution Records
CREATE TABLE trades (
    id SERIAL PRIMARY KEY,
    bot_id INTEGER REFERENCES bots(id),
    signal_id INTEGER REFERENCES signals(id),
    timestamp TIMESTAMPTZ NOT NULL,
    side VARCHAR(5) NOT NULL, -- buy, sell
    price DECIMAL(18,8) NOT NULL,
    quantity DECIMAL(18,8) NOT NULL,
    fees DECIMAL(18,8) DEFAULT 0,
    pnl DECIMAL(18,8), -- profit/loss for this trade
    balance_after DECIMAL(18,8), -- portfolio balance after trade
    created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX idx_trades_bot_time ON trades(bot_id, timestamp);

-- Performance Snapshots (for plotting portfolio over time)
CREATE TABLE bot_performance (
    id SERIAL PRIMARY KEY,
    bot_id INTEGER REFERENCES bots(id),
    timestamp TIMESTAMPTZ NOT NULL,
    total_value DECIMAL(18,8) NOT NULL, -- current portfolio value
    cash_balance DECIMAL(18,8) NOT NULL,
    crypto_balance DECIMAL(18,8) NOT NULL,
    total_trades INTEGER DEFAULT 0,
    winning_trades INTEGER DEFAULT 0,
    total_fees DECIMAL(18,8) DEFAULT 0,
    created_at TIMESTAMPTZ DEFAULT NOW()
);
CREATE INDEX idx_bot_performance_bot_time ON bot_performance(bot_id, timestamp);
```

**Data Storage Strategy**:
- **OHLCV Data**: Primary source for bot operations, kept indefinitely with optimized indexes
- **Raw Trade Data**: Optional table, only if detailed backtesting is needed; can be partitioned monthly
- **Alternative for Raw Data**: Store raw data in compressed files (Parquet/CSV) instead of the database for cost efficiency

**MVP Approach**: Start with OHLCV data only; add raw data storage later if advanced backtesting requires it.

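Because collectors may re-deliver candles, writes to `market_data` should be idempotent. One way to do that (a sketch; the actual repository layer may differ) is an upsert that leans on the table's UNIQUE constraint:

```python
# INSERT that updates in place on re-delivery of the same candle
UPSERT_MARKET_DATA = """
INSERT INTO market_data
    (exchange, symbol, timeframe, timestamp, open, high, low, close, volume, trades_count)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
ON CONFLICT (exchange, symbol, timeframe, timestamp) DO UPDATE SET
    open = EXCLUDED.open,
    high = EXCLUDED.high,
    low = EXCLUDED.low,
    close = EXCLUDED.close,
    volume = EXCLUDED.volume,
    trades_count = EXCLUDED.trades_count;
"""

def candle_to_params(candle: dict) -> tuple:
    """Order candle fields to match the INSERT column list above."""
    return (
        candle.get("exchange", "okx"),
        candle["symbol"],
        candle["timeframe"],
        candle["timestamp"],
        candle["open"],
        candle["high"],
        candle["low"],
        candle["close"],
        candle["volume"],
        candle.get("trades_count"),
    )
```

The statement would be executed through whatever database driver the platform settles on (e.g. a psycopg connection pool), passing `candle_to_params(candle)` as the parameter tuple.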
### Technology Stack

The platform will be built using the following technologies:

- **Backend Framework**: Python 3.10+ with Dash (includes a built-in Flask server for REST API endpoints)
- **Database**: PostgreSQL 14+ (with the TimescaleDB extension for time-series optimization)
- **Real-time Messaging**: Redis (for pub/sub messaging between components)
- **Frontend**: Dash with Plotly (for visualization and the control interface) and Mantine UI components
- **Configuration**: JSON files for strategy parameters and bot configurations
- **Deployment**: Docker container setup for development and production

### API Design

**Dash Callbacks**: Real-time updates and user interactions
**REST Endpoints**: Historical data queries for backtesting and analysis
```python
# Built-in Flask routes for historical data
@app.server.route('/api/bot/<bot_id>/trades')
@app.server.route('/api/market/<symbol>/history')
@app.server.route('/api/backtest/results/<test_id>')
```

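A sketch of how one of these routes might be fleshed out. In the real app the server object would be Dash's built-in `app.server`; here a bare Flask app stands in for it, and the handler body, query helper, and JSON shape are illustrative assumptions:

```python
from flask import Flask, jsonify

# Stand-in for Dash's built-in server (app.server in the real application)
server = Flask(__name__)

def fetch_trades_for_bot(bot_id: int) -> list:
    # Placeholder for a repository/database call against the trades table
    return [{"bot_id": bot_id, "side": "buy", "price": "42000.0", "quantity": "0.01"}]

@server.route('/api/bot/<int:bot_id>/trades')
def bot_trades(bot_id: int):
    """Return the trade history for one bot as JSON."""
    return jsonify(fetch_trades_for_bot(bot_id))
```

Keeping handlers this thin, with all data access behind a helper, matches the repository pattern adopted in ADR-001.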
### Data Flow

The data flow follows a simple pattern to ensure efficient processing:

1. **Market Data Collection**:
   - Collector fetches data from exchange APIs
   - Raw data is stored in PostgreSQL
   - Processed data (e.g., OHLCV candles) is calculated and stored
   - Real-time updates are published to Redis channels

2. **Signal Generation**:
   - Bots subscribe to relevant data channels and generate signals based on the strategy
   - Signals are stored in the database and published to Redis

3. **Trade Execution**:
   - Bot manager receives signals from strategies
   - Validates signals against bot parameters and portfolio
   - Simulates or executes trades based on configuration
   - Stores trade information in the database

4. **Visualization**:
   - Dashboard subscribes to real-time data and trading updates
   - Queries historical data for charts and performance metrics
   - Provides an interface for bot management and configuration

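The pub/sub hand-off between these steps can be illustrated without a running Redis server. The stand-in below mirrors the publish/subscribe pattern in memory; the channel naming scheme (`market:<symbol>:<timeframe>`) is an assumption, not a fixed convention of the platform:

```python
from collections import defaultdict
from typing import Callable

class InMemoryBus:
    """Minimal pub/sub stand-in mirroring the Redis publish/subscribe pattern."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, channel: str, handler: Callable) -> None:
        self._subscribers[channel].append(handler)

    def publish(self, channel: str, message: dict) -> int:
        """Deliver a message to all handlers; return the subscriber count."""
        for handler in self._subscribers[channel]:
            handler(message)
        return len(self._subscribers[channel])

def candle_channel(symbol: str, timeframe: str) -> str:
    # Assumed naming convention, e.g. "market:BTC-USDT:1m"
    return f"market:{symbol}:{timeframe}"

# A bot subscribing to completed 1m candles for BTC-USDT
bus = InMemoryBus()
received = []
bus.subscribe(candle_channel("BTC-USDT", "1m"), received.append)
bus.publish(candle_channel("BTC-USDT", "1m"), {"close": "42000.1"})
```

Swapping the bus for a real `redis` client later only changes the transport; the channel discipline stays the same.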
## Development Roadmap

### Phase 1: Foundation (Days 1-5)

**Objective**: Establish core system components and data flow

1. **Days 1-2**: Database Setup and Data Collection
   - Set up PostgreSQL with the initial schema
   - Implement the OKX API connector
   - Create data storage and processing logic

2. **Days 3-4**: Strategy Engine and Bot Manager
   - Develop the strategy interface and 1-2 example strategies
   - Create a bot manager with basic controls
   - Implement Redis for real-time messaging

3. **Day 5**: Basic Visualization
   - Set up Dash/Plotly for simple charts
   - Create the basic dashboard layout
   - Connect to real-time data sources
   - Create mockup strategies and bots

### Phase 2: Core Functionality (Days 6-10)

**Objective**: Complete essential features for strategy testing

1. **Days 6-7**: Backtesting Engine
   - Load historical data from the database or files (BTC/USDT data is already available in CSV format)
   - Create performance calculation metrics
   - Develop strategy comparison tools

2. **Days 8-9**: Trading Logic
   - Implement virtual trading capability
   - Create trade execution logic
   - Develop portfolio tracking

3. **Day 10**: Dashboard Enhancement
   - Improve visualization components
   - Add a bot control interface
   - Implement real-time performance monitoring

### Phase 3: Refinement (Days 11-14)

**Objective**: Polish the system and prepare for ongoing use

1. **Days 11-12**: Testing and Debugging
   - Comprehensive system testing
   - Fix identified issues
   - Performance optimization

2. **Days 13-14**: Documentation and Deployment
   - Create user documentation
   - Prepare the deployment process
   - Set up basic monitoring

## Technical Considerations

### Scalability Path

While the initial system is designed as a monolithic application for rapid development, several considerations ensure future scalability:

1. **Module Separation**: Clear boundaries between components enable future extraction into microservices
2. **Database Design**: The schema supports partitioning and sharding for larger data volumes
3. **Message Queue**: The Redis implementation paves the way for more robust messaging (Kafka/RabbitMQ)
4. **API-First Design**: Internal components communicate through well-defined interfaces

### Time Aggregation

Special attention is given to time aggregation to ensure consistency with exchanges:

```python
import pandas as pd

def aggregate_candles(trades, timeframe, alignment='right'):
    """
    Aggregate trade data into OHLCV candles with consistent timestamp alignment.

    Parameters:
    - trades: list of trade dictionaries with 'timestamp' (ms), 'price', and 'amount'
    - timeframe: string representing the timeframe (e.g., '1m', '5m', '1h')
    - alignment: 'right' labels candles with the bucket close time (industry
      standard), 'left' labels them with the bucket open time

    Returns:
    - DataFrame with OHLCV columns
    """
    # Convert timeframe to a pandas offset
    if timeframe.endswith('m'):
        offset = pd.Timedelta(minutes=int(timeframe[:-1]))
    elif timeframe.endswith('h'):
        offset = pd.Timedelta(hours=int(timeframe[:-1]))
    elif timeframe.endswith('d'):
        offset = pd.Timedelta(days=int(timeframe[:-1]))
    else:
        raise ValueError(f"Unsupported timeframe: {timeframe}")

    # Create DataFrame from trades and convert timestamps to pandas datetime
    df = pd.DataFrame(trades)
    df['timestamp'] = pd.to_datetime(df['timestamp'], unit='ms')

    # Floor timestamps to the bucket start, then label according to alignment
    bucket_start = df['timestamp'].dt.floor(offset)
    if alignment == 'right':
        df['candle_time'] = bucket_start + offset  # close time of the interval
    else:
        df['candle_time'] = bucket_start           # open time of the interval

    # Aggregate to OHLCV
    candles = df.groupby('candle_time').agg({
        'price': ['first', 'max', 'min', 'last'],
        'amount': 'sum'
    }).reset_index()

    # Flatten the MultiIndex columns
    candles.columns = ['timestamp', 'open', 'high', 'low', 'close', 'volume']

    return candles
```

### Performance Optimization

For the initial release, several performance optimizations are implemented:

1. **Database Indexing**: Proper indexes on timestamp and symbol fields
2. **Query Optimization**: Prepared statements and efficient query patterns
3. **Connection Pooling**: Database connection management to prevent leaks
4. **Data Aggregation**: Pre-calculation of common time intervals
5. **Memory Management**: Proper cleanup of data objects after processing

## User Interface

The initial user interface focuses on functionality over aesthetics, providing essential controls and visualizations in a minimalistic design.

1. **Market Data View**
   - Real-time price charts for monitored symbols
   - Order book visualization
   - Recent trades list

2. **Bot Management**
   - Create/configure bot interface
   - Start/stop controls
   - Status indicators

3. **Strategy Dashboard**
   - Strategy selection and configuration
   - Signal visualization
   - Performance metrics

4. **Backtesting Interface**
   - Historical data selection
   - Strategy parameter configuration
   - Results visualization

## Risk Management & Mitigation

### Technical Risks
**Risk:** Exchange API rate limiting affecting data collection
**Mitigation:** Implement intelligent rate limiting, multiple API keys, and fallback data sources

**Risk:** Database performance degradation with large datasets
**Mitigation:** Implement data partitioning, archival strategies, and query optimization (planned)

**Risk:** System downtime during market volatility
**Mitigation:** Design redundant systems, implement circuit breakers and emergency procedures (planned)

### Business Risks
**Risk:** Regulatory changes affecting crypto trading
**Mitigation:** Implement compliance monitoring, maintain regulatory awareness, design for adaptability

**Risk:** Competition from established trading platforms
**Mitigation:** Focus on unique value propositions, rapid feature development, and strong user experience

### User Risks
**Risk:** User losses due to platform errors
**Mitigation:** Comprehensive testing, simulation modes, risk warnings, and liability disclaimers

## Future Expansion

While keeping the initial implementation simple, the design accommodates future enhancements:

1. **Authentication System**: Add multi-user support with role-based access
2. **Advanced Strategies**: Support for machine learning and AI-based strategies
3. **Multi-Exchange Support**: Expand beyond OKX to other exchanges
4. **Microservices Migration**: Extract components into separate services
5. **Advanced Monitoring**: Integration with Prometheus/Grafana
6. **Cloud Deployment**: Support for AWS/GCP/Azure deployment

## Success Metrics

The platform's success will be measured by these key metrics:

1. **Development Timeline**: Complete core functionality within 14 days
2. **System Stability**: Maintain 99% uptime during internal testing; the system should monitor itself and restart failed modules (or the whole system) as needed
3. **Strategy Testing**: Successfully backtest at least 3 different strategies
4. **Bot Performance**: Run at least 2 bots concurrently for 72+ hours

493
docs/decisions/ADR-001-data-processing-refactor.md
Normal file

# ADR-001: Data Processing and Aggregation Refactor

## Status
**Accepted**

## Context
The initial data collection and processing system was tightly coupled with the OKX exchange implementation. This made it difficult to add new exchanges, maintain the code, and ensure consistent data aggregation across different sources. Key issues included:
- Business logic mixed with data fetching.
- Inconsistent timestamp handling.
- No clear strategy for handling sparse data, leading to potential future data leakage.

A refactor was necessary to create a modular, extensible, and robust data processing pipeline that aligns with industry standards.

## Decision
We will refactor the data processing system to adhere to the following principles:

1. **Modular & Extensible Design**: Separate exchange-specific logic from the core aggregation and storage logic using a factory pattern and base classes.
2. **Right-Aligned Timestamps**: Adopt the industry standard for OHLCV candles where the timestamp represents the closing time of the interval. This ensures compatibility with major exchanges and historical data providers.
3. **Sparse Candle Aggregation**: Emit candles only when trading activity occurs within a time bucket. This accurately reflects market activity and reduces storage.
4. **No Future Leakage**: Implement a robust aggregation mechanism that only finalizes candles when their time period has definitively passed, preventing lookahead bias.
5. **Centralized Repository for Database Operations**: Abstract all database interactions into a `Repository` pattern to decouple business logic from data persistence.

## Consequences

### Positive
- **Improved Maintainability**: Code is cleaner, more organized, and easier to understand.
- **Enhanced Extensibility**: Adding new exchanges is significantly easier.
- **Data Integrity**: Standardized timestamping and aggregation prevent data inconsistencies and lookahead bias.
- **Efficiency**: The sparse candle approach reduces storage and processing overhead.
- **Testability**: Decoupled components are easier to unit test.

### Negative
- **Initial Development Overhead**: The refactor required an initial time investment to design and implement the new architecture.
- **Increased Complexity**: The new system has more moving parts (factories, repositories), which may present a slightly steeper learning curve for new developers.

## Alternatives Considered

1. **Keep the Monolithic Design**: Continue with the tightly coupled approach.
   - **Reason for Rejection**: This was not scalable and would have led to significant technical debt as new exchanges were added.
2. **Use a Third-Party Data Library**: Integrate a library like `ccxt` for data collection.
   - **Reason for Rejection**: While powerful, these libraries did not offer the fine-grained control over real-time aggregation and WebSocket handling that was required. Building a custom solution provides more flexibility.

## Related Documentation
- **Aggregation Strategy**: [docs/reference/aggregation-strategy.md](../reference/aggregation-strategy.md)
- **Data Collectors**: [docs/modules/data_collectors.md](../modules/data_collectors.md)
- **Database Operations**: [docs/modules/database_operations.md](../modules/database_operations.md)

---
*Back to [All Decisions](./)*

# Refactored Data Processing Architecture

## Overview

The data processing system has been significantly refactored to improve reusability, maintainability, and scalability across different exchanges. The key improvement is the extraction of common utilities into a shared framework while keeping exchange-specific components focused and minimal.

## Architecture Changes

### Before (Monolithic)
```
data/exchanges/okx/
├── data_processor.py    # 1343 lines - everything in one file
├── collector.py
└── websocket.py
```

### After (Modular)
```
data/
├── common/                  # Shared utilities for all exchanges
│   ├── __init__.py
│   ├── data_types.py        # StandardizedTrade, OHLCVCandle, etc.
│   ├── aggregation.py       # TimeframeBucket, RealTimeCandleProcessor
│   ├── transformation.py    # BaseDataTransformer, UnifiedDataTransformer
│   └── validation.py        # BaseDataValidator, common validation
└── exchanges/
    └── okx/
        ├── data_processor.py  # ~600 lines - OKX-specific only
        ├── collector.py       # Updated to use common utilities
        └── websocket.py
```

## Key Benefits

### 1. **Reusability Across Exchanges**
- Candle aggregation logic works for any exchange
- Standardized data formats enable uniform processing
- Base classes provide common patterns for new exchanges

### 2. **Maintainability**
- Smaller, focused files are easier to understand and modify
- Common utilities are tested once and reused everywhere
- Clear separation of concerns

### 3. **Extensibility**
- Adding new exchanges requires minimal code
- New data types and timeframes are automatically supported
- Validation and transformation patterns are consistent

### 4. **Performance**
- Optimized aggregation algorithms and memory usage
- Efficient candle bucketing algorithms
- Lazy evaluation where possible

### 5. **Testing**
- Modular components are easier to test independently

## Time Aggregation Strategy

### Right-Aligned Timestamps (Industry Standard)

The system uses **RIGHT-ALIGNED timestamps**, following the convention of major exchanges (Binance, OKX, Coinbase):

- **Candle timestamp = end time of the interval (close time)**
- A 5-minute candle with timestamp `09:05:00` covers trades from `09:00:00` up to, but not including, `09:05:00`
- A 1-minute candle with timestamp `14:32:00` covers trades from `14:31:00` up to, but not including, `14:32:00`
- This aligns with how exchanges report historical data

### Aggregation Process (No Future Leakage)

```python
def process_trade_realtime(trade: StandardizedTrade, timeframe: str):
    """
    Real-time aggregation with strict future leakage prevention.

    CRITICAL: Only emit completed candles, never incomplete ones.
    """
    completed_candles = []

    # 1. Calculate which time bucket this trade belongs to
    trade_bucket_start = get_bucket_start_time(trade.timestamp, timeframe)

    # 2. Check whether a current bucket exists for this timeframe
    current_bucket = current_buckets.get(timeframe)

    # 3. Handle time boundary crossing
    if current_bucket is None:
        # First bucket for this timeframe
        current_bucket = create_bucket(trade_bucket_start, timeframe)
    elif current_bucket.start_time != trade_bucket_start:
        # Time boundary crossed - complete the previous bucket FIRST
        if current_bucket.has_trades():
            completed_candle = current_bucket.to_candle(is_complete=True)
            completed_candles.append(completed_candle)
            emit_candle(completed_candle)  # store in market_data table

        # Create a new bucket for the current time period
        current_bucket = create_bucket(trade_bucket_start, timeframe)

    # 4. Add the trade to the current bucket and remember the bucket
    current_bucket.add_trade(trade)
    current_buckets[timeframe] = current_bucket

    # 5. Return only completed candles (never incomplete/future data)
    return completed_candles  # empty list unless a boundary was crossed
```

### Time Bucket Calculation Examples

```python
# 5-minute timeframes (buckets start at 00:00, 00:05, 00:10, 00:15, ...)
trade_time = "09:03:45"  # -> bucket_start = "09:00:00", bucket_end = "09:05:00"
trade_time = "09:07:23"  # -> bucket_start = "09:05:00", bucket_end = "09:10:00"
trade_time = "09:05:00"  # -> bucket_start = "09:05:00", bucket_end = "09:10:00"

# 1-hour timeframes (aligned to hour boundaries)
trade_time = "14:35:22"  # -> bucket_start = "14:00:00", bucket_end = "15:00:00"
trade_time = "15:00:00"  # -> bucket_start = "15:00:00", bucket_end = "16:00:00"

# 4-hour timeframes (00:00, 04:00, 08:00, 12:00, 16:00, 20:00)
trade_time = "13:45:12"  # -> bucket_start = "12:00:00", bucket_end = "16:00:00"
trade_time = "16:00:01"  # -> bucket_start = "16:00:00", bucket_end = "20:00:00"
```

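The bucket arithmetic above reduces to flooring the trade's epoch timestamp to a multiple of the timeframe length. A minimal sketch of `get_bucket_start_time` (the helper referenced by the aggregation pseudocode; the actual implementation in `data/common/aggregation.py` may differ) plus its right-aligned counterpart:

```python
from datetime import datetime, timedelta, timezone

# Seconds per supported timeframe unit
_UNIT_SECONDS = {"m": 60, "h": 3600, "d": 86400}

def get_bucket_start_time(ts: datetime, timeframe: str) -> datetime:
    """Floor a trade timestamp to the start (left boundary) of its bucket."""
    seconds = int(timeframe[:-1]) * _UNIT_SECONDS[timeframe[-1]]
    epoch = int(ts.timestamp())
    return datetime.fromtimestamp(epoch - (epoch % seconds), tz=timezone.utc)

def get_bucket_end_time(ts: datetime, timeframe: str) -> datetime:
    """Right-aligned candle timestamp: the close time of the bucket."""
    seconds = int(timeframe[:-1]) * _UNIT_SECONDS[timeframe[-1]]
    return get_bucket_start_time(ts, timeframe) + timedelta(seconds=seconds)
```

Because buckets are derived from the Unix epoch, hourly and 4-hour buckets naturally align to UTC hour boundaries, matching the examples above.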
### Future Leakage Prevention

**CRITICAL SAFEGUARDS:**

1. **Boundary Crossing Detection**: Only complete candles when a trade timestamp definitively crosses the time boundary
2. **No Premature Completion**: Never emit incomplete candles during real-time processing
3. **Strict Time Validation**: Trades are only added to buckets if `start_time <= trade.timestamp < end_time`
4. **Historical Consistency**: The same logic is used for real-time and historical processing

```python
# CORRECT: only complete a candle when the boundary is crossed
if current_bucket.start_time != trade_bucket_start:
    # Time boundary definitely crossed - safe to complete
    completed_candle = current_bucket.to_candle(is_complete=True)
    emit_to_storage(completed_candle)

# INCORRECT: would cause future leakage
if some_timer_expires():
    # Never complete based on timers or external events
    completed_candle = current_bucket.to_candle(is_complete=True)  # WRONG!
```

### Data Storage Flow

```
WebSocket Trade Data → Validation → Transformation → Aggregation → Storage
          |                              |                   |
          ↓                              ↓                   ↓
 Raw individual trades          Completed OHLCV      Incomplete OHLCV
          |                     candles (storage)    (monitoring only)
          ↓                              ↓
   raw_trades table             market_data table
(debugging/compliance)         (trading decisions)
```

**Storage Rules:**
- **Raw trades** → `raw_trades` table (every individual trade/orderbook/ticker message)
- **Completed candles** → `market_data` table (only when a timeframe boundary is crossed)
- **Incomplete candles** → memory only (never stored; used for monitoring)

### Aggregation Logic Implementation

```python
def aggregate_to_timeframe(trades: List[StandardizedTrade], timeframe: str) -> List[OHLCVCandle]:
    """
    Aggregate trades to the specified timeframe with right-aligned timestamps.
    """
    # Group trades by time interval
    buckets = {}
    completed_candles = []

    for trade in sorted(trades, key=lambda t: t.timestamp):
        # Calculate the bucket start time (left boundary)
        bucket_start = get_bucket_start_time(trade.timestamp, timeframe)

        # Get or create the bucket
        if bucket_start not in buckets:
            buckets[bucket_start] = TimeframeBucket(timeframe, bucket_start)

        # Add the trade to its bucket
        buckets[bucket_start].add_trade(trade)

    # Convert all buckets to candles with right-aligned timestamps
    for bucket in buckets.values():
        candle = bucket.to_candle(is_complete=True)
        # candle.timestamp = bucket.end_time (right-aligned)
        completed_candles.append(candle)

    return completed_candles
```

## Common Components

### Data Types (`data/common/data_types.py`)

**StandardizedTrade**: Universal trade format
```python
@dataclass
class StandardizedTrade:
    symbol: str
    trade_id: str
    price: Decimal
    size: Decimal
    side: str  # 'buy' or 'sell'
    timestamp: datetime
    exchange: str = "okx"
    raw_data: Optional[Dict[str, Any]] = None
```

**OHLCVCandle**: Universal candle format
```python
@dataclass
class OHLCVCandle:
    symbol: str
    timeframe: str
    start_time: datetime
    end_time: datetime
    open: Decimal
    high: Decimal
    low: Decimal
    close: Decimal
    volume: Decimal
    trade_count: int
    is_complete: bool = False
```

### Aggregation (`data/common/aggregation.py`)

**RealTimeCandleProcessor**: Handles real-time candle building for any exchange
- Processes trades immediately as they arrive
- Supports multiple timeframes simultaneously
- Emits completed candles when time boundaries cross
- Thread-safe and memory efficient

**BatchCandleProcessor**: Handles historical data processing
- Processes large batches of trades efficiently
- Memory-optimized for backfill scenarios
- Same candle output format as the real-time processor

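Both processors build on `TimeframeBucket`. A simplified sketch of what such a bucket might look like (the real class accepts a `StandardizedTrade`; here `add_trade` takes bare price/size values, and the timeframe table is truncated for brevity):

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from decimal import Decimal
from typing import List

@dataclass
class TimeframeBucket:
    """Accumulates trades for one (timeframe, start_time) interval."""
    timeframe: str
    start_time: datetime
    prices: List[Decimal] = field(default_factory=list)
    volume: Decimal = Decimal("0")

    @property
    def end_time(self) -> datetime:
        minutes = {"1m": 1, "5m": 5, "15m": 15, "1h": 60}[self.timeframe]
        return self.start_time + timedelta(minutes=minutes)

    def has_trades(self) -> bool:
        return bool(self.prices)

    def add_trade(self, price: Decimal, size: Decimal) -> None:
        self.prices.append(price)
        self.volume += size

    def to_candle(self, is_complete: bool = False) -> dict:
        # Right-aligned: the candle is keyed by the bucket's close time
        return {
            "timestamp": self.end_time,
            "open": self.prices[0],
            "high": max(self.prices),
            "low": min(self.prices),
            "close": self.prices[-1],
            "volume": self.volume,
            "trade_count": len(self.prices),
            "is_complete": is_complete,
        }
```

Because open/close fall out of insertion order, trades must be fed to the bucket in timestamp order, which is exactly what both processors guarantee.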
### Transformation (`data/common/transformation.py`)

**BaseDataTransformer**: Abstract base class for exchange transformers
- Common transformation utilities (timestamp conversion, decimal handling)
- Abstract methods for exchange-specific transformations
- Consistent error handling patterns

**UnifiedDataTransformer**: Unified interface for all transformation scenarios
- Works with real-time, historical, and backfill data
- Handles batch processing efficiently
- Integrates with aggregation components

### Validation (`data/common/validation.py`)

**BaseDataValidator**: Common validation patterns
- Price, size, and volume validation
- Timestamp validation
- Orderbook validation
- Generic symbol validation

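A minimal sketch of the kind of price/size checks this base class centralizes (simplified for illustration; the real methods are assumed to return structured `ValidationResult` objects rather than booleans):

```python
from decimal import Decimal, InvalidOperation

def validate_price(value) -> bool:
    """A valid price parses as a positive, finite decimal."""
    try:
        d = Decimal(str(value))
    except (InvalidOperation, ValueError):
        return False
    return d.is_finite() and d > 0

def validate_size(value) -> bool:
    """A valid trade size is likewise a positive, finite decimal."""
    return validate_price(value)
```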
## Exchange-Specific Components

### OKX Data Processor (`data/exchanges/okx/data_processor.py`)

Now focused only on OKX-specific functionality:

**OKXDataValidator**: Extends BaseDataValidator
- OKX-specific symbol patterns (BTC-USDT format)
- OKX message structure validation
- OKX field mappings and requirements

**OKXDataTransformer**: Extends BaseDataTransformer
- OKX WebSocket format transformation
- OKX-specific field extraction
- Integration with common utilities

**OKXDataProcessor**: Main processor built on the common framework
- Uses common validation and transformation utilities
- Significantly simplified (~600 lines vs. 1343 lines)
- Better separation of concerns

### Updated OKX Collector (`data/exchanges/okx/collector.py`)

**Key improvements:**
- Uses OKXDataProcessor with common utilities
- Automatic candle generation for trades
- Simplified message processing
- Better error handling and statistics
- Callback system for real-time data

## Usage Examples

### Creating a New Exchange

To add support for a new exchange (e.g., Binance):

1. **Create an exchange-specific validator:**
```python
class BinanceDataValidator(BaseDataValidator):
    def __init__(self, component_name="binance_validator"):
        super().__init__("binance", component_name)
        self._symbol_pattern = re.compile(r'^[A-Z]{2,}$')  # BTCUSDT format

    def validate_symbol_format(self, symbol: str) -> ValidationResult:
        # Binance-specific symbol validation
        pass
```

2. **Create an exchange-specific transformer:**
```python
class BinanceDataTransformer(BaseDataTransformer):
    def transform_trade_data(self, raw_data: Dict[str, Any], symbol: str) -> Optional[StandardizedTrade]:
        return create_standardized_trade(
            symbol=raw_data['s'],  # Binance field mapping
            trade_id=raw_data['t'],
            price=raw_data['p'],
            size=raw_data['q'],
            # 'm' is "buyer is the market maker": True means the taker sold
            side='sell' if raw_data['m'] else 'buy',
            timestamp=raw_data['T'],
            exchange="binance",
            raw_data=raw_data
        )
```

3. **Automatic candle support:**
```python
# Real-time candles work automatically
processor = RealTimeCandleProcessor(symbol, "binance", config)
for trade in trades:
    completed_candles = processor.process_trade(trade)
```

### Using Common Utilities

**Data transformation:**
```python
# Works with any exchange
transformer = UnifiedDataTransformer(exchange_transformer)
standardized_trade = transformer.transform_trade_data(raw_trade, symbol)

# Batch processing
candles = transformer.process_trades_to_candles(
    trades_iterator,
    ['1m', '5m', '1h'],
    symbol
)
```

**Real-time candle processing:**
```python
from data.common.aggregation.realtime import RealTimeCandleProcessor

candle_processor = RealTimeCandleProcessor(symbol, exchange, config)
candle_processor.add_candle_callback(on_candle_completed)
candle_processor.process_trade(trade)
```

## Testing

The refactored architecture includes comprehensive testing:

**Test script:** `scripts/test_refactored_okx.py`
- Tests common utilities
- Tests OKX-specific components
- Tests integration between components
- Performance and memory testing

**Run the tests:**
```bash
python scripts/test_refactored_okx.py
```

## Migration Guide

### For Existing OKX Code

1. **Update imports:**
```python
# Old
from data.exchanges.okx.data_processor import StandardizedTrade, OHLCVCandle

# New
from data.common import StandardizedTrade, OHLCVCandle
```

2. **Use the new processor:**
```python
# Old
from data.exchanges.okx.data_processor import OKXDataProcessor, UnifiedDataTransformer

# New
from data.exchanges.okx.data_processor import OKXDataProcessor  # uses common utilities internally
```

3. **Existing functionality is preserved:**
- All existing APIs remain the same
- Performance improved due to optimizations
- More features available (better candle processing, validation)

### For New Exchange Development

1. **Start with the common base classes**
2. **Implement only exchange-specific validation and transformation**
3. **Get candle processing, batch processing, and validation for free**
4. **Focus on exchange API integration rather than data processing logic**

## Performance Improvements

**Memory Usage:**
- Streaming processing reduces memory footprint
- Efficient candle bucketing algorithms
- Lazy evaluation where possible

**Processing Speed:**
- Optimized validation with early returns
- Batch processing capabilities
- Parallel processing support

**Maintainability:**
- Smaller, focused components
- Better test coverage
- Clear error handling and logging

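The "streaming processing" point can be illustrated with a generator-based pipeline that never materializes the full trade list in memory (a sketch under assumed input shapes, not the actual implementation):

```python
def read_trades(lines):
    """Lazily parse (price, size) trades from an iterable of CSV-ish lines."""
    for line in lines:
        price_s, size_s = line.strip().split(",")
        yield (float(price_s), float(size_s))

def running_vwap(trades):
    """Stream a running volume-weighted average price, one value per trade."""
    notional = volume = 0.0
    for price, size in trades:
        notional += price * size
        volume += size
        yield notional / volume
```

Because both stages are generators, a file of millions of trades is processed with constant memory.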

## Future Enhancements

**Planned Features:**
1. **Exchange Factory Pattern** - Automatically create collectors for any exchange
2. **Plugin System** - Load exchange implementations dynamically
3. **Configuration-Driven Development** - Define new exchanges via config files
4. **Enhanced Analytics** - Built-in technical indicators and statistics
5. **Multi-Exchange Arbitrage** - Cross-exchange data synchronization

This refactored architecture provides a solid foundation for scalable, maintainable cryptocurrency data processing across any number of exchanges while keeping exchange-specific code minimal and focused.

124
docs/decisions/ADR-002-common-package-refactor.md
Normal file
@@ -0,0 +1,124 @@
# ADR-002: Common Package Refactoring

## Status
Accepted and Implemented

## Context
The common package contained several large, monolithic files that were becoming difficult to maintain and extend:
- aggregation.py
- indicators.py
- validation.py
- transformation.py
- data_types.py

These files handled critical functionality but were growing in complexity and responsibility, making it harder to:
- Understand and maintain individual components
- Test specific functionality
- Add new features without affecting existing code
- Ensure proper separation of concerns

## Decision
We decided to refactor the common package into a more modular structure by:

1. **Splitting Large Files into Sub-packages:**
   - Created `aggregation/` package with specialized modules
   - Created `indicators/` package with focused components
   - Maintained core data types in a single, well-structured file

2. **Improving Validation:**
   - Enhanced modularity of the validation system
   - Added clearer validation rules and messages
   - Maintained backward compatibility

3. **Enhancing Transformation:**
   - Added a safety-limits system
   - Improved error handling
   - Better separation of transformation concerns

4. **Preserving Data Types:**
   - Reviewed and verified the data_types.py structure
   - Maintained as a single file due to its good organization
   - Documented existing patterns

## Consequences

### Positive
- Better code organization and maintainability
- Clearer separation of concerns
- Easier to test individual components
- More focused and cohesive modules
- Better safety with the new limits system
- Improved documentation and examples
- Easier to extend with new features

### Negative
- Slightly more complex import paths
- Need to update existing documentation
- Initial learning curve for the new structure

### Neutral
- Need to maintain more files
- More granular version control
- Different organization pattern from the original

## Alternatives Considered

### Keep Monolithic Structure
Rejected because of:
- Growing complexity
- Difficult maintenance
- Hard-to-test code
- Poor separation of concerns

### Complete Microservices Split
Rejected because:
- Too complex for current needs
- It would introduce unnecessary overhead
- Not justified by the current scale

### Hybrid Approach (Selected)
Selected because it:
- Balances modularity and simplicity
- Maintains good performance
- Is easy to understand and maintain
- Allows for future growth

## Implementation Notes

### Phase 1: Aggregation and Indicators
- Split into focused sub-packages
- Added proper interfaces
- Maintained backward compatibility
- Added comprehensive tests

### Phase 2: Validation and Transformation
- Enhanced the validation system
- Added safety limits
- Improved error handling
- Updated documentation

### Phase 3: Verification
- Reviewed data types
- Ran comprehensive tests
- Updated documentation
- Verified no regressions

## Migration Guide

### For Developers
1. Update imports to use the new package structure
2. Review the new safety limits in transformation
3. Check validation error handling
4. Update any custom extensions

### For Maintainers
1. Familiarize yourself with the new package structure
2. Review the new testing patterns
3. Understand the safety-limit system
4. Follow the modular development pattern

## References
- Original task list: tasks/refactor-common-package.md
- Documentation standards: docs/documentation.mdc
- Test coverage reports
- Code review feedback
309
docs/guides/README.md
Normal file
@@ -0,0 +1,309 @@
# Guides Documentation

This section contains user guides, tutorials, and setup instructions for the TCP Dashboard platform.

## 📋 Contents

### Setup & Installation

- **[Setup Guide](setup.md)** - *Comprehensive setup instructions for new machines and environments*
  - Environment configuration and prerequisites
  - Database setup with Docker and PostgreSQL
  - Development workflow and best practices
  - Production deployment guidelines
  - Troubleshooting common setup issues

### Quick Start Guides

#### For Developers

```bash
# Quick setup for development
git clone <repository>
cd TCPDashboard
uv sync
cp .env.example .env
docker-compose up -d
uv run python scripts/init_database.py
```

#### For Users

```python
# Quick data collection setup (run inside an async function / event loop)
from data.exchanges import create_okx_collector
from data.base_collector import DataType

collector = create_okx_collector(
    symbol='BTC-USDT',
    data_types=[DataType.TRADE]
)
await collector.start()
```

## 🚀 Tutorial Series

### Getting Started

1. **[Environment Setup](setup.md#environment-setup)** - Setting up your development environment
2. **[First Data Collector](setup.md#first-collector)** - Creating your first data collector
3. **[Database Integration](setup.md#database-setup)** - Connecting to the database
4. **[Adding Monitoring](setup.md#monitoring)** - Setting up logging and monitoring

### Advanced Topics

1. **[Multi-Exchange Setup](setup.md#multi-exchange)** - Collecting from multiple exchanges
2. **[Production Deployment](setup.md#production)** - Deploying to production
3. **[Performance Optimization](setup.md#optimization)** - Optimizing for high throughput
4. **[Custom Integrations](setup.md#custom)** - Building custom data sources

## 🛠️ Development Workflow

### Daily Development

```bash
# Start the development environment
docker-compose up -d

# Install new dependencies
uv add package-name

# Run tests
uv run pytest

# Check code quality
uv run black .
uv run isort .
```

### Code Organization

- **`data/`**: Data collection and processing
- **`database/`**: Database models and utilities
- **`utils/`**: Shared utilities and logging
- **`tests/`**: Test suite
- **`docs/`**: Documentation
- **`config/`**: Configuration files

### Best Practices

1. **Follow existing patterns**: Use established code patterns
2. **Write tests first**: TDD approach for new features
3. **Document changes**: Update docs along with code changes
4. **Use type hints**: Full type annotation coverage
5. **Handle errors**: Robust error handling throughout

## 🔧 Configuration Management

### Environment Variables

Key environment variables to configure:

```bash
# Database
DATABASE_URL=postgresql://user:pass@localhost:5432/tcp_dashboard

# Logging
LOG_LEVEL=INFO
LOG_CLEANUP=true

# Data Collection
DEFAULT_HEALTH_CHECK_INTERVAL=30
AUTO_RESTART=true
```

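Reading these settings in application code can be as simple as the following sketch (standard library only; the variable names match the examples above, with defaults assumed for illustration):

```python
import os

def load_config():
    """Read settings from environment variables with safe defaults."""
    return {
        "database_url": os.environ.get("DATABASE_URL", ""),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
        "health_check_interval": int(
            os.environ.get("DEFAULT_HEALTH_CHECK_INTERVAL", "30")),
        "auto_restart": os.environ.get("AUTO_RESTART", "true").lower() == "true",
    }
```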
### Configuration Files

The platform uses JSON configuration files:

- **`config/okx_config.json`**: OKX exchange settings
- **`config/database_config.json`**: Database configuration
- **`config/logging_config.json`**: Logging settings

### Security Best Practices

- **Never commit secrets**: Use `.env` files for sensitive data
- **Validate inputs**: Comprehensive input validation
- **Use HTTPS**: Secure connections in production
- **Regular updates**: Keep dependencies updated

## 📊 Monitoring & Observability

### Health Monitoring

The platform includes comprehensive health monitoring:

```python
# Check system health
from data.collector_manager import CollectorManager

manager = CollectorManager()
status = manager.get_status()

print(f"Running collectors: {status['statistics']['running_collectors']}")
print(f"Failed collectors: {status['statistics']['failed_collectors']}")
```

### Logging

Structured logging across all components:

```python
from utils.logger import get_logger

logger = get_logger("my_component")
logger.info("Component started", extra={"component": "my_component"})
```

### Performance Metrics

Built-in performance tracking:

- **Message rates**: Real-time data processing rates
- **Error rates**: System health and stability
- **Resource usage**: Memory and CPU utilization
- **Uptime**: Component availability metrics

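A message-rate metric of the kind listed above can be computed with a small sliding-window counter. This is an illustrative sketch, not the collectors' actual implementation:

```python
import time
from typing import List, Optional

class RateTracker:
    """Sliding-window events-per-second counter (illustrative sketch)."""

    def __init__(self, window_seconds: float = 60.0):
        self.window = window_seconds
        self.events: List[float] = []  # timestamps of recent events

    def record(self, now: Optional[float] = None) -> None:
        """Record one event and drop timestamps older than the window."""
        now = time.monotonic() if now is None else now
        self.events.append(now)
        self.events = [t for t in self.events if t >= now - self.window]

    def rate(self, now: Optional[float] = None) -> float:
        """Events per second over the window ending at `now`."""
        now = time.monotonic() if now is None else now
        return len([t for t in self.events if t >= now - self.window]) / self.window
```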
## 🧪 Testing

### Running Tests

```bash
# Run all tests
uv run pytest

# Run specific test files
uv run pytest tests/test_base_collector.py

# Run with coverage
uv run pytest --cov=data --cov-report=html

# Run integration tests
uv run pytest tests/integration/
```

### Test Organization

- **Unit tests**: Individual component testing
- **Integration tests**: Cross-component functionality
- **Performance tests**: Load and stress testing
- **End-to-end tests**: Full system workflows

### Writing Tests

Follow these patterns when writing tests:

```python
import pytest

from data.exchanges import create_okx_collector


@pytest.mark.asyncio
async def test_okx_collector():
    collector = create_okx_collector('BTC-USDT')
    assert collector is not None

    # Test lifecycle
    await collector.start()
    status = collector.get_status()
    assert status['status'] == 'running'

    await collector.stop()
```

## 🚀 Deployment

### Development Deployment

For local development:

```bash
# Start services
docker-compose up -d

# Initialize the database
uv run python scripts/init_database.py

# Start data collection
uv run python scripts/start_collectors.py
```

### Production Deployment

For production environments:

```bash
# Set the production environment first
export ENV=production
export LOG_LEVEL=INFO

# Use the production docker-compose file
docker-compose -f docker-compose.prod.yml up -d

# Start with monitoring
uv run python scripts/production_start.py
```

### Docker Deployment

Using Docker containers:

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .
CMD ["python", "-m", "scripts.production_start"]
```

## 🔗 Related Documentation

- **[Modules Documentation](../modules/)** - Technical component details
- **[Architecture Overview](../architecture.md)** - System design
- **[Exchange Documentation](../modules/exchanges/)** - Exchange integrations
- **[Reference](../reference/)** - Technical specifications

## 📞 Support & Troubleshooting

### Common Issues

1. **Database Connection Errors**
   - Check Docker services: `docker-compose ps`
   - Verify environment variables in `.env`
   - Test the connection: `uv run python scripts/test_db_connection.py`

2. **Collector Failures**
   - Check logs: `tail -f logs/collector_error.log`
   - Verify configuration: review the `config/*.json` files
   - Test manually: `uv run python scripts/test_okx_collector.py`

3. **Performance Issues**
   - Monitor resource usage: `docker stats`
   - Check message rates: collector status endpoints
   - Optimize configuration: adjust health check intervals

### Getting Help

1. **Check Documentation**: Review the relevant section documentation
2. **Review Logs**: System logs live in the `./logs/` directory
3. **Test Components**: Use the built-in test scripts
4. **Check Status**: Use the status and health check methods

### Debug Mode

Enable detailed debugging:

```bash
export LOG_LEVEL=DEBUG
uv run python your_script.py

# Check detailed logs
tail -f logs/*_debug.log
```

---

*For the complete documentation index, see the [main documentation README](../README.md).*
381
docs/guides/adding-new-indicators.md
Normal file
@@ -0,0 +1,381 @@
# Adding New Indicators Guide

## Overview

This guide provides comprehensive instructions for adding new technical indicators to the Crypto Trading Bot Dashboard. The system uses a modular approach in which each indicator is implemented as a separate class inheriting from `BaseIndicator`.

## Table of Contents

1. [Prerequisites](#prerequisites)
2. [Implementation Steps](#implementation-steps)
3. [Integration with Charts](#integration-with-charts)
4. [Best Practices](#best-practices)
5. [Testing Guidelines](#testing-guidelines)
6. [Common Pitfalls](#common-pitfalls)
7. [Example Implementation](#example-implementation)

## Prerequisites

- Python knowledge with pandas/numpy
- Understanding of technical analysis concepts
- Familiarity with the project structure
- Knowledge of the indicator's mathematical formula
- Understanding of the dashboard's chart system

## Implementation Steps

### 1. Create the Indicator Class

Create a new file in `data/common/indicators/implementations/` named after your indicator (e.g., `stochastic.py`):

```python
from typing import Dict, Any, List

import pandas as pd

from ..base import BaseIndicator
from ..result import IndicatorResult


class StochasticIndicator(BaseIndicator):
    """
    Stochastic Oscillator implementation.

    The Stochastic Oscillator is a momentum indicator comparing a particular
    closing price of a security to the range of its prices over a certain
    period of time.
    """

    def __init__(self, logger=None):
        super().__init__(logger)
        self.name = "stochastic"

    def calculate(self, df: pd.DataFrame, k_period: int = 14,
                  d_period: int = 3, price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate the Stochastic Oscillator.

        Args:
            df: DataFrame with OHLCV data
            k_period: The %K period (default: 14)
            d_period: The %D period (default: 3)
            price_column: Column to use for calculations (default: 'close')

        Returns:
            List of IndicatorResult objects containing %K and %D values
        """
        try:
            # Validate inputs
            self._validate_dataframe(df)
            self._validate_period(k_period, min_value=2)
            self._validate_period(d_period, min_value=2)

            # Calculate %K
            lowest_low = df['low'].rolling(window=k_period).min()
            highest_high = df['high'].rolling(window=k_period).max()
            k_percent = 100 * ((df[price_column] - lowest_low) /
                               (highest_high - lowest_low))

            # Calculate %D (signal line)
            d_percent = k_percent.rolling(window=d_period).mean()

            # Create results, skipping rows where either value is NaN
            results = []
            for idx, row in df.iterrows():
                if pd.notna(k_percent[idx]) and pd.notna(d_percent[idx]):
                    results.append(IndicatorResult(
                        timestamp=idx,
                        symbol=self._get_symbol(df),
                        timeframe=self._get_timeframe(df),
                        values={
                            'k_percent': float(k_percent[idx]),
                            'd_percent': float(d_percent[idx])
                        },
                        metadata={
                            'k_period': k_period,
                            'd_period': d_period
                        }
                    ))

            return results

        except Exception as e:
            self._handle_error(f"Error calculating Stochastic: {str(e)}")
            return []
```

### 2. Register the Indicator

Add your indicator to `data/common/indicators/implementations/__init__.py`:

```python
from .stochastic import StochasticIndicator

__all__ = [
    'SMAIndicator',
    'EMAIndicator',
    'RSIIndicator',
    'MACDIndicator',
    'BollingerBandsIndicator',
    'StochasticIndicator'
]
```

### 3. Add to the TechnicalIndicators Class

Update `data/common/indicators/technical.py`:

```python
class TechnicalIndicators:
    def __init__(self, logger=None):
        self.logger = logger
        # ... existing indicators ...
        self._stochastic = StochasticIndicator(logger)

    def stochastic(self, df: pd.DataFrame, k_period: int = 14,
                   d_period: int = 3, price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate the Stochastic Oscillator.

        Args:
            df: DataFrame with OHLCV data
            k_period: The %K period (default: 14)
            d_period: The %D period (default: 3)
            price_column: Column to use (default: 'close')

        Returns:
            List of indicator results with %K and %D values
        """
        return self._stochastic.calculate(
            df,
            k_period=k_period,
            d_period=d_period,
            price_column=price_column
        )
```

## Integration with Charts

### 1. Create a Chart Layer

Create a new layer class in `components/charts/layers/indicators.py` (overlay indicators) or `components/charts/layers/subplots.py` (subplot indicators):

```python
class StochasticLayer(IndicatorLayer):
    def __init__(self, config: Dict[str, Any]):
        super().__init__(config)
        self.name = "stochastic"
        self.display_type = "subplot"

    def create_traces(self, df: pd.DataFrame, values: Dict[str, pd.Series]) -> List[go.Scatter]:
        traces = []
        traces.append(go.Scatter(
            x=df.index,
            y=values['k_percent'],
            mode='lines',
            name=f"%K ({self.config.get('k_period', 14)})",
            line=dict(
                color=self.config.get('color', '#007bff'),
                width=self.config.get('line_width', 2)
            )
        ))
        traces.append(go.Scatter(
            x=df.index,
            y=values['d_percent'],
            mode='lines',
            name=f"%D ({self.config.get('d_period', 3)})",
            line=dict(
                color=self.config.get('secondary_color', '#ff6b35'),
                width=self.config.get('line_width', 2)
            )
        ))
        return traces
```

### 2. Register in the Layer Registry

Update `components/charts/layers/__init__.py`:

```python
SUBPLOT_REGISTRY = {
    'rsi': RSILayer,
    'macd': MACDLayer,
    'stochastic': StochasticLayer,
}
```

### 3. Add UI Components

Update `dashboard/components/indicator_modal.py`:

```python
def create_parameter_fields():
    return html.Div([
        # ... existing fields ...
        html.Div([
            dbc.Row([
                dbc.Col([
                    dbc.Label("%K Period:"),
                    dcc.Input(
                        id='stochastic-k-period-input',
                        type='number',
                        value=14
                    )
                ], width=6),
                dbc.Col([
                    dbc.Label("%D Period:"),
                    dcc.Input(
                        id='stochastic-d-period-input',
                        type='number',
                        value=3
                    )
                ], width=6),
            ]),
            dbc.FormText("Stochastic oscillator periods")
        ], id='stochastic-parameters', style={'display': 'none'})
    ])
```

## Best Practices

### Code Quality
- Follow the project's coding style
- Add comprehensive docstrings
- Include type hints
- Handle edge cases gracefully
- Use vectorized operations where possible

### Error Handling
- Validate all input parameters
- Check for sufficient data
- Handle NaN values appropriately
- Log errors with meaningful messages
- Return empty results for invalid inputs

### Performance
- Use vectorized operations
- Avoid unnecessary loops
- Clean up temporary calculations
- Consider memory usage
- Cache results when appropriate

### Documentation
- Document all public methods
- Include usage examples
- Explain parameter ranges
- Document any assumptions
- Keep documentation up to date

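The parameter-validation point above can be made concrete with a small helper of the kind `BaseIndicator._validate_period` is assumed to implement (a sketch; the real method's signature may differ):

```python
def validate_period(period, min_value: int = 2, name: str = "period") -> int:
    """Reject non-integer or too-small indicator periods (illustrative helper)."""
    # bool is a subclass of int, so exclude it explicitly
    if isinstance(period, bool) or not isinstance(period, int):
        raise TypeError(f"{name} must be an int, got {type(period).__name__}")
    if period < min_value:
        raise ValueError(f"{name} must be >= {min_value}, got {period}")
    return period
```

Raising early with a named parameter in the message makes indicator misconfiguration much easier to diagnose in logs.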
## Testing Guidelines

### Test File Structure

Create `tests/indicators/test_stochastic.py`:

```python
import pytest
import pandas as pd
import numpy as np

from data.common.indicators import TechnicalIndicators


@pytest.fixture
def sample_data():
    return pd.DataFrame({
        'open': [10, 11, 12, 13, 14],
        'high': [12, 13, 14, 15, 16],
        'low': [8, 9, 10, 11, 12],
        'close': [11, 12, 13, 14, 15],
        'volume': [100, 110, 120, 130, 140]
    }, index=pd.date_range('2023-01-01', periods=5))


def test_stochastic_calculation(sample_data):
    indicators = TechnicalIndicators()
    results = indicators.stochastic(sample_data, k_period=3, d_period=2)

    assert len(results) > 0
    for result in results:
        assert 0 <= result.values['k_percent'] <= 100
        assert 0 <= result.values['d_percent'] <= 100
```

### Testing Checklist

- [ ] Basic functionality with ideal data
- [ ] Edge cases (insufficient data, NaN values)
- [ ] Performance with large datasets
- [ ] Error handling
- [ ] Parameter validation
- [ ] Integration with the TechnicalIndicators class
- [ ] Chart layer rendering
- [ ] UI interaction

### Running Tests

```bash
# Run all indicator tests
uv run pytest tests/indicators/

# Run specific indicator tests
uv run pytest tests/indicators/test_stochastic.py

# Run with coverage
uv run pytest tests/indicators/ --cov=data.common.indicators
```

## Common Pitfalls

1. **Insufficient Data Handling**
   - Always check that enough data points are available
   - Return empty results rather than partial calculations
   - Consider the impact of NaN values

2. **NaN Handling**
   - Use appropriate pandas NaN-handling methods
   - Don't propagate NaN values unnecessarily
   - Document NaN-handling behavior

3. **Memory Leaks**
   - Clean up temporary DataFrames
   - Avoid storing large datasets
   - Use efficient data structures

4. **Performance Issues**
   - Use vectorized operations instead of loops
   - Profile code with large datasets
   - Consider caching strategies

5. **UI Integration**
   - Handle all parameter combinations
   - Provide meaningful validation
   - Give clear user feedback

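Pitfall 2 in practice: when assembling results from rolling computations, skip rows where any component is NaN rather than emitting partial values. A pure-Python sketch of that filter (the real indicator code uses `pd.notna` on pandas Series):

```python
import math

def drop_incomplete(rows):
    """Filter out result rows containing NaN values (pitfall #2 sketch)."""
    return [
        r for r in rows
        if all(not (isinstance(v, float) and math.isnan(v)) for v in r.values())
    ]
```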
## Example Implementation

See the complete Stochastic Oscillator implementation above as a reference. Key points:

1. **Modular Structure**
   - Separate indicator class
   - Clear inheritance hierarchy
   - Focused responsibility

2. **Error Handling**
   - Input validation
   - Exception handling
   - Meaningful error messages

3. **Performance**
   - Vectorized calculations
   - Efficient data structures
   - Memory management

4. **Testing**
   - Comprehensive test cases
   - Edge case handling
   - Performance verification

## Support

For questions or issues:
1. Check the existing documentation
2. Review the test cases
3. Consult with team members
4. Create detailed bug reports if needed

## Related Documentation

- [Technical Indicators Overview](../modules/technical-indicators.md)
- [Chart System Documentation](../modules/charts/README.md)
- [Data Types Documentation](../modules/data-types.md)
537
docs/guides/setup.md
Normal file
@@ -0,0 +1,537 @@
# Crypto Trading Bot Dashboard - Setup Guide

This guide will help you set up the Crypto Trading Bot Dashboard on a new machine from scratch.

## Prerequisites

### Required Software

1. **Python 3.12+**
   - Download from [python.org](https://python.org)
   - Ensure Python is added to PATH

2. **UV Package Manager**

   ```powershell
   # Windows (PowerShell)
   powershell -c "irm https://astral.sh/uv/install.ps1 | iex"

   # macOS/Linux
   curl -LsSf https://astral.sh/uv/install.sh | sh
   ```

3. **Docker Desktop**
   - Download from [docker.com](https://docker.com)
   - Ensure Docker is running before proceeding

4. **Git**
   - Download from [git-scm.com](https://git-scm.com)

### System Requirements

- **RAM**: Minimum 4GB, Recommended 8GB+
- **Storage**: At least 2GB free space
- **OS**: Windows 10/11, macOS 10.15+, or Linux

## Project Setup

### 1. Clone the Repository

```bash
git clone <repository-url>
cd TCPDashboard
```

### 2. Environment Configuration

Create the environment file from the template:

```powershell
# Windows
Copy-Item env.template .env

# macOS/Linux
cp env.template .env
```

**Important**:
- The `.env` file is **REQUIRED** - the application will not work without it
- The `.env` file contains secure passwords for the database and Redis
- **Never commit the `.env` file to version control**
- All credentials must be loaded from environment variables - no hardcoded passwords exist in the codebase

Current configuration in `.env`:

```env
POSTGRES_PORT=5434
POSTGRES_PASSWORD=your_secure_password_here
REDIS_PASSWORD=your_redis_password_here
```

### 3. Configure Custom Ports (Optional)

The default configuration uses port `5434` to avoid conflicts with other PostgreSQL instances you may already be running. You can change these ports in your `.env` file.

## Database Setup

### 1. Start Database Services

Start PostgreSQL with TimescaleDB and Redis using Docker Compose:

```powershell
docker-compose up -d
```

This will:
- Create a PostgreSQL database with the TimescaleDB extension on port `5434`
- Create a Redis instance on port `6379`
- Set up persistent volumes for data storage
- Configure password authentication
- **Automatically initialize the database schema** using the clean schema (without TimescaleDB hypertables, for simpler setup)

### 2. Verify Services Are Running

Check container status:

```powershell
docker-compose ps
```

Expected output:

```
NAME                 IMAGE                               COMMAND                  SERVICE    CREATED         STATUS                   PORTS
dashboard_postgres   timescale/timescaledb:latest-pg15   "docker-entrypoint.s…"   postgres   X minutes ago   Up X minutes (healthy)   0.0.0.0:5434->5432/tcp
dashboard_redis      redis:7-alpine                      "docker-entrypoint.s…"   redis      X minutes ago   Up X minutes (healthy)   0.0.0.0:6379->6379/tcp
```

### 3. Database Migration System

The project uses **Alembic** for database schema versioning and migrations. This allows for safe, trackable database schema changes.

#### Understanding Migration vs Direct Schema

The project supports two approaches for database setup:

1. **Direct Schema (Default)**: Uses `database/init/schema_clean.sql` for automatic Docker initialization
2. **Migration System**: Uses Alembic for versioned schema changes and updates

#### Migration Commands

**Check migration status:**
```powershell
uv run alembic current
```

**View migration history:**
```powershell
uv run alembic history --verbose
```

**Upgrade to latest migration:**
```powershell
uv run alembic upgrade head
```

**Downgrade to previous migration:**
```powershell
uv run alembic downgrade -1
```

**Create new migration (for development):**
```powershell
# Auto-generate a migration from model changes
uv run alembic revision --autogenerate -m "Description of changes"

# Create an empty migration for custom changes
uv run alembic revision -m "Description of changes"
```

#### Migration Files Location

- **Configuration**: `alembic.ini`
- **Environment**: `database/migrations/env.py`
- **Versions**: `database/migrations/versions/`

#### When to Use Migrations

**Use the direct schema (recommended for new setups) for:**
- Fresh installations
- Development environments
- Automatic schema setup with Docker

**Use migrations (recommended for updates) for:**
- Updating existing databases
- Production schema changes
- Tracking schema history
- Rolling back database changes

#### Migration Best Practices

1. **Always back up before running migrations in production**
2. **Test migrations on a copy of production data first**
3. **Review auto-generated migrations before applying them**
4. **Use descriptive migration messages**
5. **Never edit migration files after they have been applied**
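For reference, the versioned files in `database/migrations/versions/` follow Alembic's standard shape. The sketch below is a generic template (the revision IDs, table, and column names are placeholders, not taken from this project):

```python
"""Add example column to bots table (illustrative only)."""
from alembic import op
import sqlalchemy as sa

# Revision identifiers used by Alembic; the values here are placeholders.
revision = "abc123def456"
down_revision = "000000000000"
branch_labels = None
depends_on = None

def upgrade() -> None:
    # Applied by `alembic upgrade head`.
    op.add_column("bots", sa.Column("example_flag", sa.Boolean(), nullable=True))

def downgrade() -> None:
    # Applied by `alembic downgrade -1`; must exactly reverse upgrade().
    op.drop_column("bots", "example_flag")
```

Because applied migration files must never be edited, a mistake is corrected by creating a new revision on top rather than changing this file.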
### 4. Verify Database Schema

The database schema is automatically initialized when the containers start. You can verify that it worked:

```powershell
docker exec dashboard_postgres psql -U dashboard -d dashboard -c "\dt"
```

The output should show these tables: `bots`, `bot_performance`, `market_data`, `raw_trades`, `signals`, `supported_exchanges`, `supported_timeframes`, `trades`.

### 5. Test Database Initialization Script (Optional)

You can also test the database initialization using the Python script:

```powershell
uv run .\scripts\init_database.py
```

This script will:
- Load environment variables from the `.env` file
- Test the database connection
- Create all tables using SQLAlchemy models
- Verify all expected tables exist
- Show connection pool status

## Application Setup

### 1. Install Python Dependencies

```powershell
uv sync
```

This will:
- Create a virtual environment in `.venv/`
- Install all required dependencies
- Set up the project for development

### 2. Activate Virtual Environment

```powershell
# Windows: run commands through uv
uv run <command>

# Or activate manually
.venv\Scripts\Activate.ps1

# macOS/Linux
source .venv/bin/activate
```

### 3. Verify Database Schema (Optional)

The database schema is automatically initialized when the Docker containers start. You can verify it's working:

```powershell
# Check if all tables exist
docker exec dashboard_postgres psql -U dashboard -d dashboard -c "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public' ORDER BY table_name;"

# Verify sample data was inserted
docker exec dashboard_postgres psql -U dashboard -d dashboard -c "SELECT * FROM supported_timeframes;"
```

## Running the Application

### 1. Start the Dashboard

```powershell
uv run python main.py
```

### 2. Access the Application

Open your browser and navigate to:
- **Local**: http://localhost:8050
- **Network**: http://<your-machine-ip>:8050 (the server binds to `0.0.0.0`, so other machines on the network can connect)

## Configuration

### Environment Variables

Key configuration options in `.env`:

```env
# Database Configuration
POSTGRES_HOST=localhost
POSTGRES_PORT=5434
POSTGRES_DB=dashboard
POSTGRES_USER=dashboard
POSTGRES_PASSWORD=your_secure_password_here

# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=your_redis_password_here

# Application Configuration
DASH_HOST=0.0.0.0
DASH_PORT=8050
DEBUG=true

# OKX API Configuration (for real trading)
OKX_API_KEY=your_okx_api_key_here
OKX_SECRET_KEY=your_okx_secret_key_here
OKX_PASSPHRASE=your_okx_passphrase_here
OKX_SANDBOX=true
```

### Port Configuration

If you need to change ports due to conflicts:

1. **PostgreSQL Port**: Update `POSTGRES_PORT` in `.env` and the port mapping in `docker-compose.yml`
2. **Redis Port**: Update `REDIS_PORT` in `.env` and `docker-compose.yml`
3. **Dashboard Port**: Update `DASH_PORT` in `.env`
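The port mappings being referenced live under each service in `docker-compose.yml`. A hedged sketch of the relevant fragment (the structure is inferred from the expected `docker-compose ps` output above, not copied from the repository):

```yaml
services:
  postgres:
    image: timescale/timescaledb:latest-pg15
    ports:
      - "5434:5432"   # host port (change this side on conflict) -> container port
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
```

Only the host-side port (left of the colon) needs to change when resolving a conflict; the container-side port stays fixed.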
## Development Workflow

### 1. Daily Development Setup

```powershell
# Start databases
docker-compose up -d

# Start development server
uv run python main.py
```

### 2. Stop Services

```powershell
# Stop the application: Ctrl+C in the terminal

# Stop databases
docker-compose down
```

### 3. Reset Database (if needed)

```powershell
# WARNING: This will delete all data
docker-compose down -v
docker-compose up -d
```

## Testing

### Run Unit Tests

```powershell
# Run all tests
uv run pytest

# Run specific test file
uv run pytest tests/test_database.py

# Run with coverage
uv run pytest --cov=. --cov-report=html
```

### Test Database Connection

Create a quick test script:

```python
# test_connection.py
from database.connection import DatabaseManager

# Load environment variables
from dotenv import load_dotenv
load_dotenv()

# Test the database
db = DatabaseManager()
db.initialize()
if db.test_connection():
    print("✅ Database connection successful!")
db.close()

# Test Redis
from database.redis_manager import get_sync_redis_manager

try:
    redis_manager = get_sync_redis_manager()
    redis_manager.initialize()
    print("✅ Redis connection successful!")
except Exception as e:
    print(f"❌ Redis connection failed: {e}")
```

Run the test:

```powershell
uv run python test_connection.py
```
## Troubleshooting

### Common Issues

#### 1. Port Already in Use

**Error**: `Port 5434 is already allocated`

**Solution**:
- Change `POSTGRES_PORT` in `.env` to a different port (e.g., 5435)
- Update the `docker-compose.yml` port mapping accordingly
- Restart containers: `docker-compose down && docker-compose up -d`

#### 2. Docker Permission Issues

**Error**: `permission denied while trying to connect to the Docker daemon`

**Solution**:
- Ensure Docker Desktop is running
- On Linux: add your user to the docker group: `sudo usermod -aG docker $USER`
- Restart your terminal/session

#### 3. Database Connection Failed

**Error**: `password authentication failed`

**Solution**:
- Ensure the `.env` password matches `docker-compose.yml`
- Reset the database: `docker-compose down -v && docker-compose up -d`
- Wait for database initialization (30-60 seconds)

#### 4. Database Schema Not Created

**Error**: Tables don't exist or `\dt` shows no tables

**Solution**:
```powershell
# Check initialization logs
docker-compose logs postgres

# Use the Python initialization script to create/verify the schema
uv run .\scripts\init_database.py

# Verify tables were created
docker exec dashboard_postgres psql -U dashboard -d dashboard -c "\dt"
```

#### 5. Application Dependency Issues

**Error**: Package installation failures

**Solution**:
```powershell
# Clear the UV cache
uv cache clean

# Reinstall dependencies
rm -rf .venv
uv sync
```

#### 6. Migration Issues

**Error**: `alembic.util.exc.CommandError: Target database is not up to date`

**Solution**:
```powershell
# Check current migration status
uv run alembic current

# Upgrade to the latest migration
uv run alembic upgrade head

# If migrations are out of sync, stamp the current version
uv run alembic stamp head
```

**Error**: `ModuleNotFoundError: No module named 'database'`

**Solution**:
- Ensure you're running commands from the project root directory
- Run commands through the project environment: `uv run <command>`

**Error**: Migration revision conflicts

**Solution**:
```powershell
# Check migration history
uv run alembic history --verbose

# Merge conflicting migrations
uv run alembic merge -m "Merge conflicting revisions" <revision1> <revision2>
```

**Error**: Database already has tables but no migration history

**Solution**:
```powershell
# Mark the current schema as the initial migration
uv run alembic stamp head

# Or start fresh with migrations
docker-compose down -v
docker-compose up -d
uv run alembic upgrade head
```
### Log Files

View service logs:

```powershell
# All services
docker-compose logs

# Specific service
docker-compose logs postgres
docker-compose logs redis

# Follow logs in real time
docker-compose logs -f
```

### Database Management

#### Backup Database

```powershell
docker exec dashboard_postgres pg_dump -U dashboard dashboard > backup.sql
```

#### Restore Database

```powershell
docker exec -i dashboard_postgres psql -U dashboard dashboard < backup.sql
```

#### Access Database CLI

```powershell
docker exec -it dashboard_postgres psql -U dashboard -d dashboard
```

#### Access Redis CLI

```powershell
docker exec -it dashboard_redis redis-cli -a $env:REDIS_PASSWORD
```

## Security Notes

1. **Never commit the `.env` file** to version control
2. **Change default passwords** in production environments
3. **Use strong passwords** for production deployments
4. **Enable SSL/TLS** for production database connections
5. **Restrict network access** in production environments

## Support

If you encounter issues not covered in this guide:

1. Check the [project documentation](../README.md)
2. Review [GitHub issues](link-to-issues)
3. Contact the development team

---

**Last Updated**: 2025-05-30
**Version**: 1.0
**Tested On**: Windows 11, Docker Desktop 4.x
220
docs/modules/README.md
Normal file
@@ -0,0 +1,220 @@
# Modules Documentation

This section contains detailed technical documentation for all system modules in the TCP Dashboard platform.

## 📋 Contents

### User Interface & Visualization

- **[Chart System (`charts/`)](./charts/)** - *Comprehensive modular chart system*
  - **Strategy-Driven Configuration**: 5 professional trading strategies with JSON persistence
  - **26+ Indicator Presets**: SMA, EMA, RSI, MACD, Bollinger Bands with customization
  - **User Indicator Management**: Interactive CRUD system with real-time updates
  - **Modular Dashboard Integration**: Separated layouts, callbacks, and components
  - **Validation System**: 10+ validation rules with detailed error reporting
  - **Extensible Architecture**: Foundation for bot signal integration
  - Real-time chart updates with indicator toggling
  - Strategy dropdown with auto-loading configurations

### Data Collection System

- **[Data Collectors (`data_collectors.md`)]** - *Comprehensive guide to the enhanced data collector system*
  - **BaseDataCollector** abstract class with health monitoring
  - **CollectorManager** for centralized management
  - **Exchange Factory Pattern** for standardized collector creation
  - **Modular Exchange Architecture** for scalable implementation
  - Auto-restart and failure recovery mechanisms
  - Health monitoring and alerting systems
  - Performance optimization techniques
  - Integration examples and patterns
  - Comprehensive troubleshooting guide

- **[Data Validation (`validation.md`)]** - *Robust data validation framework*
  - **BaseDataValidator** abstract class for exchange-specific validation
  - **Field Validators** for common market data fields
  - **Validation Results** with error and warning handling
  - **Exchange-Specific Validators** with custom rules
  - Comprehensive test coverage
  - Error handling and sanitization
  - Performance optimization for high-frequency validation
  - Integration examples and patterns

### Database Operations

- **[Database Operations (`database_operations.md`)]** - *Repository pattern for clean database interactions*
  - **Repository Pattern** implementation for data access abstraction
  - **MarketDataRepository** for candle/OHLCV operations
  - **RawTradeRepository** for WebSocket data storage
  - Automatic transaction management and session cleanup
  - Configurable duplicate handling with force-update options
  - Custom error handling with DatabaseOperationError
  - Database health monitoring and performance statistics
  - Migration guide from direct SQL to the repository pattern
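As a hedged sketch of the repository idea, the in-memory stand-in below mimics the duplicate handling described above; the real `MarketDataRepository` wraps SQLAlchemy sessions, and every name here beyond those mentioned above is illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candle:
    # Minimal illustrative candle; the real model has full OHLCV fields.
    symbol: str
    timestamp: int
    close: float

class InMemoryMarketDataRepository:
    """Stores candles keyed by (symbol, timestamp); duplicates are
    ignored unless force_update is set."""

    def __init__(self) -> None:
        self._rows: dict[tuple[str, int], Candle] = {}

    def save(self, candle: Candle, force_update: bool = False) -> bool:
        key = (candle.symbol, candle.timestamp)
        if key in self._rows and not force_update:
            return False  # configurable duplicate handling
        self._rows[key] = candle
        return True

    def get(self, symbol: str, timestamp: int) -> Optional[Candle]:
        return self._rows.get((symbol, timestamp))

repo = InMemoryMarketDataRepository()
repo.save(Candle("BTC-USDT", 1, 100.0))
duplicate_saved = repo.save(Candle("BTC-USDT", 1, 101.0))  # ignored: returns False
```

The point of the pattern is that callers depend only on `save`/`get`-style methods, so the storage backend (in-memory, PostgreSQL, TimescaleDB) can change without touching calling code.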
### Technical Analysis

- **[Technical Indicators (`technical-indicators.md`)]** - *Comprehensive technical analysis module*
  - **Five Core Indicators**: SMA, EMA, RSI, MACD, and Bollinger Bands
  - **Sparse Data Handling**: Optimized for the platform's aggregation strategy
  - **Vectorized Calculations**: High-performance pandas and numpy implementation
  - **Flexible Configuration**: JSON-based parameter configuration with validation
  - **Integration Ready**: Seamless integration with OHLCV data and real-time processing
  - Batch processing for multiple indicators
  - Support for different price columns (open, high, low, close)
  - Comprehensive unit testing and documentation

### Logging & Monitoring

- **[Enhanced Logging System (`logging.md`)]** - *Unified logging framework*
  - Multi-level logging with automatic cleanup
  - Console and file output with formatting
  - Performance monitoring integration
  - Cross-component logging standards
  - Log aggregation and analysis

## 🔧 Component Architecture

### Core Components

```
┌─────────────────────────────────────────────────────────────┐
│ TCP Dashboard Platform │
│ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Modular Dashboard System │ │
│ │ • Separated layouts, callbacks, components │ │
│ │ • Chart layers with strategy management │ │
│ │ • Real-time indicator updates │ │
│ │ • User indicator CRUD operations │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ CollectorManager │ │
│ │ ┌─────────────────────────────────────────────────┐│ │
│ │ │ Global Health Monitor ││ │
│ │ │ • System-wide health checks ││ │
│ │ │ • Auto-restart coordination ││ │
│ │ │ • Performance analytics ││ │
│ │ └─────────────────────────────────────────────────┘│ │
│ │ │ │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌────────────────┐ │ │
│ │ │OKX Collector│ │Binance Coll.│ │Custom Collector│ │ │
│ │ │• Health Mon │ │• Health Mon │ │• Health Monitor│ │ │
│ │ │• Auto-restart│ │• Auto-restart│ │• Auto-restart │ │ │
│ │ │• Data Valid │ │• Data Valid │ │• Data Validate │ │ │
│ │ └─────────────┘ └─────────────┘ └────────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```

### Design Patterns

- **Factory Pattern**: Standardized component creation across exchanges and charts
- **Observer Pattern**: Event-driven data processing and real-time updates
- **Strategy Pattern**: Pluggable data processing and chart configuration strategies
- **Singleton Pattern**: Centralized logging and configuration management
- **Modular Architecture**: Separated concerns with reusable components
- **Repository Pattern**: Clean database access abstraction

## 🚀 Quick Start

### Using Chart Components

```python
# Chart system usage
from components.charts.config import get_available_strategy_names
from components.charts.indicator_manager import get_indicator_manager

# Get available strategies
strategy_names = get_available_strategy_names()

# Create a custom indicator
manager = get_indicator_manager()
indicator = manager.create_indicator(
    name="Custom SMA 50",
    indicator_type="sma",
    parameters={"period": 50}
)
```

### Using Data Components

```python
# Data collector usage
from data.exchanges import create_okx_collector
from data.base_collector import DataType

collector = create_okx_collector(
    symbol='BTC-USDT',
    data_types=[DataType.TRADE, DataType.ORDERBOOK]
)

# Logging usage
from utils.logger import get_logger

logger = get_logger("my_component")
logger.info("Component initialized")
```

### Component Integration

```python
# Integrating multiple components
from data.collector_manager import CollectorManager
from dashboard.app import create_app

# Start data collection
manager = CollectorManager("production_system")

# Create the dashboard app
app = create_app()

# Components work together: collection runs alongside the dashboard
await manager.start()
app.run_server(host='0.0.0.0', port=8050)
```

## 📊 Performance & Monitoring

### Health Monitoring

All components include built-in health monitoring:

- **Real-time Status**: Component state and performance metrics
- **Auto-Recovery**: Automatic restart on failures
- **Performance Tracking**: Message rates, uptime, error rates
- **Alerting**: Configurable alerts for component health
- **Dashboard Integration**: Visual system health monitoring
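As a rough sketch of what such per-collector health tracking can look like (all field and method names below are hypothetical, not the project's actual API):

```python
import time
from dataclasses import dataclass, field

@dataclass
class HealthStatus:
    """Illustrative per-collector health counters."""
    started_at: float = field(default_factory=time.monotonic)
    messages: int = 0
    errors: int = 0

    def record_message(self) -> None:
        self.messages += 1

    def record_error(self) -> None:
        self.errors += 1

    @property
    def error_rate(self) -> float:
        # Errors relative to processed messages; 0.0 before any traffic.
        return self.errors / self.messages if self.messages else 0.0

    def needs_restart(self, max_error_rate: float = 0.1) -> bool:
        # Auto-recovery trigger: too many errors relative to traffic.
        return self.messages > 0 and self.error_rate > max_error_rate
```

A manager would poll `needs_restart()` periodically and recreate the collector when it returns `True`, which is the shape of the auto-recovery behavior described above.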
### Logging Integration

Unified logging across all components:

- **Structured Logging**: JSON-formatted logs for analysis
- **Multiple Levels**: Debug, Info, Warning, Error levels
- **Automatic Cleanup**: Log rotation and old file cleanup
- **Performance Metrics**: Built-in performance tracking
- **Component Isolation**: Separate loggers for different modules
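A minimal sketch of JSON-structured logging using only the standard library (the formatter below is illustrative; the project's `get_logger()` helper may be configured differently):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line for easy analysis."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "component": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("my_component")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Component initialized")
```

One-object-per-line output is what makes the log aggregation and analysis mentioned above practical: each line can be parsed independently.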
## 🔗 Related Documentation

- **[Dashboard Modular Structure (dashboard-modular-structure.md)](./dashboard-modular-structure.md)** - Complete dashboard architecture
- **[Exchange Documentation (exchanges/)](./exchanges/)** - Exchange-specific implementations
- **[Architecture Overview (`../../architecture.md`)]** - System design and patterns
- **[Setup Guide (`../../guides/setup.md`)]** - Component configuration and deployment
- **[API Reference (`../../reference/`)]** - Technical specifications

## 📈 Future Components

Planned component additions:

- **Signal Layer**: Bot trading signal visualization and integration
- **Strategy Engine**: Trading strategy execution framework
- **Portfolio Manager**: Position and risk management
- **Alert Manager**: Advanced alerting and notification system
- **Data Analytics**: Historical data analysis and reporting

---

*For the complete documentation index, see the [main documentation README (`../README.md`)].*
702
docs/modules/charts/README.md
Normal file
@@ -0,0 +1,702 @@
# Modular Chart Layers System

The Modular Chart Layers System is a flexible, strategy-driven chart system that supports technical indicator overlays, subplot management, and future bot signal integration. This system replaces basic chart functionality with a modular architecture that adapts to different trading strategies and their specific indicator requirements.

## Table of Contents

- [Overview](#overview)
- [Architecture](#architecture)
- [Quick Start](#quick-start)
- [Components](#components)
- [User Indicator Management](#user-indicator-management)
- [Configuration System](#configuration-system)
- [Example Strategies](#example-strategies)
- [Validation System](#validation-system)
- [API Reference](#api-reference)
- [Examples](#examples)
- [Best Practices](#best-practices)

## Overview

### Key Features

- **Modular Architecture**: Chart layers can be independently tested and composed
- **User Indicator Management**: Create, edit, and manage custom indicators with JSON persistence
- **Strategy-Driven Configuration**: JSON-based configurations for different trading strategies
- **Comprehensive Validation**: 10+ validation rules with detailed error reporting
- **Example Strategies**: 5 real-world trading strategy templates
- **Indicator Support**: 26+ professionally configured indicator presets
- **Extensible Design**: Easy to add new indicators, strategies, and chart types

### Supported Indicators

**Trend Indicators:**
- Simple Moving Average (SMA) - multiple periods
- Exponential Moving Average (EMA) - multiple periods
- Bollinger Bands - various configurations

**Momentum Indicators:**
- Relative Strength Index (RSI) - multiple periods
- MACD - various speed configurations

**Volume Indicators:**
- Volume analysis and confirmation

## Architecture

```
components/charts/
├── indicator_manager.py      # User indicator CRUD operations
├── indicator_defaults.py     # Default indicator templates
├── config/                   # Configuration management
│   ├── indicator_defs.py     # Indicator schemas and validation
│   ├── defaults.py           # Default configurations and presets
│   ├── strategy_charts.py    # Strategy-specific configurations
│   ├── validation.py         # Validation system
│   ├── example_strategies.py # Real-world strategy examples
│   └── __init__.py           # Package exports
├── layers/                   # Chart layer implementation
│   ├── base.py               # Base layer system
│   ├── indicators.py         # Indicator overlays
│   ├── subplots.py           # Subplot management
│   └── signals.py            # Signal overlays (future)
├── builder.py                # Main chart builder
└── utils.py                  # Chart utilities

dashboard/                    # Modular dashboard integration
├── layouts/market_data.py    # Chart layout with controls
├── callbacks/charts.py       # Chart update callbacks
├── components/
│   ├── chart_controls.py     # Reusable chart configuration panel
│   └── indicator_modal.py    # Indicator management UI

config/indicators/
└── user_indicators/          # User-created indicators (JSON files)
    ├── sma_abc123.json
    ├── ema_def456.json
    └── ...
```
## Dashboard Integration
|
||||
|
The chart system is fully integrated with the modular dashboard structure:

### Modular Components

- **`dashboard/layouts/market_data.py`** - Chart layout with strategy selection and indicator controls
- **`dashboard/callbacks/charts.py`** - Chart update callbacks with strategy handling
- **`dashboard/components/chart_controls.py`** - Reusable chart configuration panel
- **`dashboard/components/indicator_modal.py`** - Complete indicator management interface

### Key Features

- **Strategy Dropdown**: Auto-loads predefined indicator combinations
- **Real-time Updates**: Charts update immediately with indicator changes
- **Modular Architecture**: Each component under 300 lines for maintainability
- **Separated Concerns**: Layouts, callbacks, and components in dedicated modules

### Usage in Dashboard

```python
# From dashboard/layouts/market_data.py
from components.charts.config import get_available_strategy_names
from components.charts.indicator_manager import get_indicator_manager

# Get available strategies for dropdown
strategy_names = get_available_strategy_names()
strategy_options = [{'label': name.replace('_', ' ').title(), 'value': name}
                    for name in strategy_names]

# Get user indicators for checklists
indicator_manager = get_indicator_manager()
overlay_indicators = indicator_manager.get_indicators_by_type('overlay')
subplot_indicators = indicator_manager.get_indicators_by_type('subplot')
```

For complete dashboard documentation, see [Dashboard Modular Structure (`../dashboard-modular-structure.md`)](../dashboard-modular-structure.md).

## User Indicator Management

The platform includes a comprehensive user indicator management system for creating, editing, and managing custom technical indicators.

### Features

- **Interactive UI**: Modal dialog for creating and editing indicators
- **Real-time Updates**: Charts update immediately when indicators are toggled
- **JSON Persistence**: Each indicator saved as an individual JSON file
- **Full CRUD Operations**: Create, Read, Update, Delete functionality
- **Type Validation**: Parameter validation based on indicator type
- **Custom Styling**: Color, line width, and appearance customization

### Quick Access

- **📊 [Complete Indicator Documentation (`indicators.md`)](./indicators.md)** - Comprehensive guide to the indicator system
- **⚡ [Quick Guide: Adding New Indicators (`adding-new-indicators.md`)](./adding-new-indicators.md)** - Step-by-step checklist for developers

### Current User Indicators

| Indicator | Type | Parameters | Display |
|-----------|------|------------|---------|
| Simple Moving Average (SMA) | `sma` | period (1-200) | Overlay |
| Exponential Moving Average (EMA) | `ema` | period (1-200) | Overlay |
| Bollinger Bands | `bollinger_bands` | period (5-100), std_dev (0.5-5.0) | Overlay |
| Relative Strength Index (RSI) | `rsi` | period (2-50) | Subplot |
| MACD | `macd` | fast_period, slow_period, signal_period | Subplot |
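
The `period` parameters in the table map directly onto the underlying computations. As a self-contained illustration of what `period` means for the two moving averages (a minimal sketch, not the project's implementation):

```python
def sma(values: list[float], period: int) -> list[float]:
    """Simple moving average: mean of the last `period` values."""
    return [sum(values[i - period + 1:i + 1]) / period
            for i in range(period - 1, len(values))]

def ema(values: list[float], period: int) -> list[float]:
    """Exponential moving average with smoothing factor 2 / (period + 1)."""
    alpha = 2 / (period + 1)
    out = [values[0]]  # seed with the first price
    for v in values[1:]:
        out.append(alpha * v + (1 - alpha) * out[-1])
    return out

prices = [10.0, 11.0, 12.0, 13.0, 14.0]
sma_values = sma(prices, 3)  # [11.0, 12.0, 13.0]
ema_values = ema(prices, 3)  # seeded with the first price, then smoothed
```

The EMA reacts faster to recent prices than the SMA for the same period, which is why shorter periods appear in the faster strategies below.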

### Usage Example

```python
# Get indicator manager
from components.charts.indicator_manager import get_indicator_manager
manager = get_indicator_manager()

# Create new indicator
indicator = manager.create_indicator(
    name="My SMA 50",
    indicator_type="sma",
    parameters={"period": 50},
    description="50-period Simple Moving Average",
    color="#ff0000"
)

# Load and update
loaded = manager.load_indicator("sma_abc123")
success = manager.update_indicator("sma_abc123", name="Updated SMA")

# Get indicators by type
overlay_indicators = manager.get_indicators_by_type("overlay")
subplot_indicators = manager.get_indicators_by_type("subplot")
```

## Quick Start

### Basic Usage

```python
from components.charts.config import (
    create_ema_crossover_strategy,
    get_strategy_config,
    validate_configuration
)

# Get a pre-built strategy
strategy = create_ema_crossover_strategy()
config = strategy.config

# Validate the configuration
report = validate_configuration(config)
if report.is_valid:
    print("Configuration is valid!")
else:
    print(f"Errors: {[str(e) for e in report.errors]}")

# Use with dashboard
# chart = create_chart(config, market_data)
```

### Custom Strategy Creation

```python
from components.charts.config import (
    StrategyChartConfig,
    SubplotConfig,
    ChartStyle,
    TradingStrategy,
    SubplotType
)

# Create custom strategy
config = StrategyChartConfig(
    strategy_name="My Custom Strategy",
    strategy_type=TradingStrategy.DAY_TRADING,
    description="Custom day trading strategy",
    timeframes=["15m", "1h"],
    overlay_indicators=["ema_12", "ema_26", "bb_20_20"],
    subplot_configs=[
        SubplotConfig(
            subplot_type=SubplotType.RSI,
            height_ratio=0.2,
            indicators=["rsi_14"]
        )
    ]
)

# Validate and use
is_valid, errors = config.validate()
```

## Components

### 1. Configuration System

The configuration system provides schema validation, default presets, and strategy management.

**Key Files:**
- `indicator_defs.py` - Core schemas and validation
- `defaults.py` - 26+ indicator presets organized by category
- `strategy_charts.py` - Complete strategy configurations

**Features:**
- Type-safe indicator definitions
- Parameter validation with ranges
- Category-based organization (trend, momentum, volatility)
- Strategy-specific recommendations

### 2. Validation System

Comprehensive validation with 10 validation rules:

1. **Required Fields** - Essential configuration validation
2. **Height Ratios** - Chart layout validation
3. **Indicator Existence** - Indicator availability check
4. **Timeframe Format** - Valid timeframe patterns
5. **Chart Style** - Color and styling validation
6. **Subplot Config** - Subplot compatibility check
7. **Strategy Consistency** - Strategy-timeframe alignment
8. **Performance Impact** - Resource usage warnings
9. **Indicator Conflicts** - Redundancy detection
10. **Resource Usage** - Memory and rendering estimates

**Usage:**
```python
from components.charts.config import validate_configuration

report = validate_configuration(config)
print(f"Valid: {report.is_valid}")
print(f"Errors: {len(report.errors)}")
print(f"Warnings: {len(report.warnings)}")
```

### 3. Example Strategies

Five professionally configured trading strategies:

1. **EMA Crossover** (Intermediate, Medium Risk)
   - Classic trend-following with EMA crossovers
   - Best for trending markets, 15m-4h timeframes

2. **Momentum Breakout** (Advanced, High Risk)
   - Fast indicators for momentum capture
   - Volume confirmation, best for volatile markets

3. **Mean Reversion** (Intermediate, Medium Risk)
   - Oversold/overbought conditions
   - Multiple RSI periods, best for ranging markets

4. **Scalping** (Advanced, High Risk)
   - Ultra-fast indicators for 1m-5m trading
   - Tight risk management, high frequency

5. **Swing Trading** (Beginner, Medium Risk)
   - Medium-term trend following
   - 4h-1d timeframes, suitable for part-time traders

## Configuration System

### Indicator Definitions

Each indicator has a complete schema definition:

```python
@dataclass
class ChartIndicatorConfig:
    indicator_type: IndicatorType
    parameters: Dict[str, Any]
    display_name: str
    color: str
    line_style: LineStyle
    line_width: int
    display_type: DisplayType
```

### Strategy Configuration

Complete strategy definitions include:

```python
@dataclass
class StrategyChartConfig:
    strategy_name: str
    strategy_type: TradingStrategy
    description: str
    timeframes: List[str]
    layout: ChartLayout
    main_chart_height: float
    overlay_indicators: List[str]
    subplot_configs: List[SubplotConfig]
    chart_style: ChartStyle
```

### Default Configurations

26+ indicator presets organized by category:

- **Trend Indicators**: 13 SMA/EMA presets
- **Momentum Indicators**: 9 RSI/MACD presets
- **Volatility Indicators**: 4 Bollinger Bands configurations

Access via:
```python
from components.charts.config import (
    get_all_default_indicators,
    get_indicators_by_category,
    IndicatorCategory
)

indicators = get_all_default_indicators()
trend_indicators = get_indicators_by_category(IndicatorCategory.TREND)
```

## Example Strategies

### EMA Crossover Strategy

```python
from components.charts.config import create_ema_crossover_strategy

strategy = create_ema_crossover_strategy()
config = strategy.config

# Strategy includes:
# - EMA 12, 26, 50 for trend analysis
# - RSI 14 for momentum confirmation
# - MACD for signal confirmation
# - Bollinger Bands for volatility context
```

### Custom Strategy Creation

```python
from components.charts.config import create_custom_strategy_config

config, errors = create_custom_strategy_config(
    strategy_name="My Strategy",
    strategy_type=TradingStrategy.MOMENTUM,
    description="Custom momentum strategy",
    timeframes=["5m", "15m"],
    overlay_indicators=["ema_8", "ema_21"],
    subplot_configs=[{
        "subplot_type": "rsi",
        "height_ratio": 0.2,
        "indicators": ["rsi_7"]
    }]
)
```

## Validation System

### Comprehensive Validation

```python
from components.charts.config import validate_configuration

# Full validation with detailed reporting
report = validate_configuration(config)

# Check results
if report.is_valid:
    print("✅ Configuration is valid")
else:
    print("❌ Configuration has errors:")
    for error in report.errors:
        print(f"  • {error}")

# Check warnings
if report.warnings:
    print("⚠️ Warnings:")
    for warning in report.warnings:
        print(f"  • {warning}")
```

### Validation Rules Information

```python
from components.charts.config import get_validation_rules_info

rules = get_validation_rules_info()
for rule, info in rules.items():
    print(f"{info['name']}: {info['description']}")
```

## API Reference

### Core Classes

#### `StrategyChartConfig`
Main configuration class for chart strategies.

**Methods:**
- `validate()` → `tuple[bool, List[str]]` - Basic validation
- `validate_comprehensive()` → `ValidationReport` - Detailed validation
- `get_all_indicators()` → `List[str]` - Get all indicator names
- `get_indicator_configs()` → `Dict[str, ChartIndicatorConfig]` - Get configurations

#### `StrategyExample`
Container for example strategies with metadata.

**Properties:**
- `config: StrategyChartConfig` - The strategy configuration
- `description: str` - Detailed strategy description
- `difficulty: str` - Beginner/Intermediate/Advanced
- `risk_level: str` - Low/Medium/High
- `market_conditions: List[str]` - Suitable market conditions

### Utility Functions

#### Configuration Access
```python
# Get all example strategies
get_all_example_strategies() → Dict[str, StrategyExample]

# Filter by criteria
get_strategies_by_difficulty("Intermediate") → List[StrategyExample]
get_strategies_by_risk_level("Medium") → List[StrategyExample]
get_strategies_by_market_condition("Trending") → List[StrategyExample]

# Get strategy summary
get_strategy_summary() → Dict[str, Dict[str, str]]
```

#### JSON Export/Import
```python
# Export to JSON
export_strategy_config_to_json(config) → str
export_example_strategies_to_json() → str

# Import from JSON
load_strategy_config_from_json(json_data) → tuple[StrategyChartConfig, List[str]]
```

#### Validation
```python
# Comprehensive validation
validate_configuration(config, rules=None, strict=False) → ValidationReport

# Get validation rules info
get_validation_rules_info() → Dict[ValidationRule, Dict[str, str]]
```

## Examples

### Example 1: Using Pre-built Strategy

```python
from components.charts.config import get_example_strategy

# Get a specific strategy
strategy = get_example_strategy("ema_crossover")

print(f"Strategy: {strategy.config.strategy_name}")
print(f"Difficulty: {strategy.difficulty}")
print(f"Risk Level: {strategy.risk_level}")
print(f"Timeframes: {strategy.config.timeframes}")
print(f"Indicators: {strategy.config.overlay_indicators}")

# Validate before use
is_valid, errors = strategy.config.validate()
if is_valid:
    # Use in dashboard
    pass
```

### Example 2: Creating Custom Configuration

```python
from components.charts.config import (
    StrategyChartConfig, SubplotConfig, ChartStyle,
    TradingStrategy, SubplotType, ChartLayout
)

# Create custom configuration
config = StrategyChartConfig(
    strategy_name="Custom Momentum Strategy",
    strategy_type=TradingStrategy.MOMENTUM,
    description="Fast momentum strategy with volume confirmation",
    timeframes=["5m", "15m"],
    layout=ChartLayout.MAIN_WITH_SUBPLOTS,
    main_chart_height=0.65,
    overlay_indicators=["ema_8", "ema_21", "bb_20_25"],
    subplot_configs=[
        SubplotConfig(
            subplot_type=SubplotType.RSI,
            height_ratio=0.15,
            indicators=["rsi_7"],
            title="Fast RSI"
        ),
        SubplotConfig(
            subplot_type=SubplotType.VOLUME,
            height_ratio=0.2,
            indicators=[],
            title="Volume Confirmation"
        )
    ],
    chart_style=ChartStyle(
        theme="plotly_white",
        candlestick_up_color="#00d4aa",
        candlestick_down_color="#fe6a85"
    )
)

# Comprehensive validation
report = config.validate_comprehensive()
print(f"Validation: {report.summary()}")
```

### Example 3: Filtering Strategies

```python
from components.charts.config import (
    get_strategies_by_difficulty,
    get_strategies_by_market_condition
)

# Get beginner-friendly strategies
beginner_strategies = get_strategies_by_difficulty("Beginner")
print("Beginner Strategies:")
for strategy in beginner_strategies:
    print(f"  • {strategy.config.strategy_name}")

# Get strategies for trending markets
trending_strategies = get_strategies_by_market_condition("Trending")
print("\nTrending Market Strategies:")
for strategy in trending_strategies:
    print(f"  • {strategy.config.strategy_name}")
```

### Example 4: Validation with Error Handling

```python
from components.charts.config import validate_configuration, ValidationRule

# Validate with comprehensive reporting
report = validate_configuration(config)

# Handle different severity levels
if report.errors:
    print("🚨 ERRORS (must fix):")
    for error in report.errors:
        print(f"  • {error}")

if report.warnings:
    print("\n⚠️ WARNINGS (recommended fixes):")
    for warning in report.warnings:
        print(f"  • {warning}")

if report.info:
    print("\nℹ️ INFO (optimization suggestions):")
    for info in report.info:
        print(f"  • {info}")

# Check specific validation rules
height_issues = report.get_issues_by_rule(ValidationRule.HEIGHT_RATIOS)
if height_issues:
    print(f"\nHeight ratio issues: {len(height_issues)}")
```

## Best Practices

### 1. Configuration Design

- **Use meaningful names**: Strategy names should be descriptive
- **Validate early**: Always validate configurations before use
- **Consider timeframes**: Match timeframes to strategy type
- **Height ratios**: Ensure total height ≤ 1.0

### 2. Indicator Selection

- **Avoid redundancy**: Don't use multiple similar indicators
- **Performance impact**: Limit complex indicators (e.g., more than 3 Bollinger Bands)
- **Category balance**: Mix trend, momentum, and volume indicators
- **Timeframe alignment**: Use appropriate indicator periods

### 3. Strategy Development

- **Start simple**: Begin with proven strategies like EMA crossover
- **Test thoroughly**: Validate both technically and with market data
- **Document well**: Include entry/exit rules and market conditions
- **Consider risk**: Match complexity to experience level

### 4. Validation Usage

- **Use comprehensive validation**: Get detailed reports with suggestions
- **Handle warnings**: Address performance and usability warnings
- **Test edge cases**: Validate with extreme configurations
- **Monitor updates**: Re-validate when changing configurations

### 5. Performance Optimization

- **Limit indicators**: Keep total indicators under 10 for performance
- **Monitor memory**: Check resource usage warnings
- **Optimize rendering**: Consider visual complexity
- **Cache configurations**: Reuse validated configurations
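
Configuration caching can be as simple as memoizing validated configs by name. A hedged, self-contained sketch (the `build` and `validate` callables are stand-ins for the project's real factory and validator):

```python
from typing import Callable

_config_cache: dict[str, dict] = {}

def get_validated_config(name: str, build: Callable[[], dict],
                         validate: Callable[[dict], bool]) -> dict:
    """Build and validate a config once, then serve it from the cache."""
    if name not in _config_cache:
        config = build()
        if not validate(config):
            raise ValueError(f"Invalid configuration: {name}")
        _config_cache[name] = config
    return _config_cache[name]

# Usage with stand-in build/validate callables
config = get_validated_config(
    "ema_crossover",
    build=lambda: {"strategy_name": "EMA Crossover", "main_chart_height": 0.7},
    validate=lambda c: c["main_chart_height"] <= 1.0,
)
```

Re-running validation only when a configuration actually changes keeps the validator's 10 rules out of the hot rendering path.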

## Error Handling

### Common Issues and Solutions

1. **"Indicator not found in defaults"**
   ```python
   # Check available indicators
   from components.charts.config import get_all_default_indicators
   available = get_all_default_indicators()
   print(list(available.keys()))
   ```

2. **"Total height exceeds 1.0"**
   ```python
   # Adjust height ratios
   config.main_chart_height = 0.7
   for subplot in config.subplot_configs:
       subplot.height_ratio = 0.1
   ```

3. **"Invalid timeframe format"**
   ```python
   # Use standard formats
   config.timeframes = ["1m", "5m", "15m", "1h", "4h", "1d", "1w"]
   ```
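
A timeframe string can also be pre-checked with a small pattern before assignment. A self-contained sketch (the accepted units mirror the standard formats listed above; this is an illustration, not the project's validator):

```python
import re

# a number followed by a unit: minutes, hours, days, or weeks
TIMEFRAME_PATTERN = re.compile(r"^\d+[mhdw]$")

def is_valid_timeframe(tf: str) -> bool:
    """Return True for strings like '1m', '15m', '4h', '1d', '1w'."""
    return bool(TIMEFRAME_PATTERN.fullmatch(tf))

candidates = ["1m", "15m", "1h", "4h", "1d", "1w", "h1", "15"]
valid = [tf for tf in candidates if is_valid_timeframe(tf)]
# 'h1' and '15' are rejected: unit must follow the number
```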

## Testing

The system includes comprehensive tests:

- **112+ test cases** across all components
- **Unit tests** for individual components
- **Integration tests** for system interactions
- **Validation tests** for error handling

Run tests:
```bash
uv run pytest tests/test_*_strategies.py -v
uv run pytest tests/test_validation.py -v
uv run pytest tests/test_defaults.py -v
```

## Future Enhancements

- **✅ Signal Layer Integration**: Bot trade signals and alerts - **IMPLEMENTED** - See [Bot Integration Guide (`bot-integration.md`)](./bot-integration.md)
- **Custom Indicators**: User-defined technical indicators
- **Advanced Layouts**: Multi-chart and grid layouts
- **Real-time Updates**: Live chart updates with indicator toggling
- **Performance Monitoring**: Advanced resource usage tracking

## Bot Integration

The chart system now includes comprehensive bot integration capabilities:

- **Real-time Signal Visualization**: Live bot signals on charts
- **Trade Execution Tracking**: P&L and trade entry/exit points
- **Multi-Bot Support**: Compare strategies across multiple bots
- **Performance Analytics**: Built-in bot performance metrics

📊 **[Complete Bot Integration Guide (`bot-integration.md`)](./bot-integration.md)** - Comprehensive documentation for integrating bot signals with charts

## Support

For issues, questions, or contributions:

1. Check existing configurations in `example_strategies.py`
2. Review validation rules in `validation.py`
3. Test with comprehensive validation
4. Refer to this documentation

The modular chart system is designed to be extensible and maintainable, providing a solid foundation for advanced trading chart functionality.

---

*Back to [Modules Documentation](../README.md)*
630
docs/modules/charts/bot-integration.md
Normal file
@@ -0,0 +1,630 @@
# Bot Integration with Chart Signal Layers

> **⚠️ Feature Not Yet Implemented**
>
> The functionality described in this document for bot integration with chart layers is **planned for a future release**. It depends on the **Strategy Engine** and **Bot Manager**, which are not yet implemented. This document outlines the intended architecture and usage once these components are available.

The Chart Layers System provides seamless integration with the bot management system, allowing real-time visualization of bot signals, trades, and performance data directly on charts.

## Table of Contents

- [Overview](#overview)
- [Architecture](#architecture)
- [Quick Start](#quick-start)
- [Bot Data Service](#bot-data-service)
- [Signal Layer Integration](#signal-layer-integration)
- [Enhanced Bot Layers](#enhanced-bot-layers)
- [Multi-Bot Visualization](#multi-bot-visualization)
- [Configuration Options](#configuration-options)
- [Examples](#examples)
- [Best Practices](#best-practices)
- [Troubleshooting](#troubleshooting)

## Overview

The bot integration system provides automatic data fetching and visualization of:

- **Trading Signals**: Buy/sell/hold signals from active bots
- **Trade Executions**: Entry/exit points with P&L information
- **Bot Performance**: Real-time performance metrics and analytics
- **Strategy Comparison**: Side-by-side strategy analysis
- **Multi-Bot Views**: Aggregate views across multiple bots

### Key Features

- **Automatic Data Fetching**: No manual data queries required
- **Real-time Updates**: Charts update with live bot data
- **Database Integration**: Direct connection to bot management system
- **Advanced Filtering**: Filter by bot, strategy, symbol, timeframe
- **Performance Analytics**: Built-in performance calculation
- **Error Handling**: Graceful handling of database errors

## Architecture

```
components/charts/layers/
├── bot_integration.py        # Core bot data services
├── bot_enhanced_layers.py    # Enhanced chart layers with bot integration
└── signals.py                # Base signal layers

Bot Integration Components:
├── BotFilterConfig             # Configuration for bot filtering
├── BotDataService              # Database operations for bot data
├── BotSignalLayerIntegration   # Chart-specific integration utilities
├── BotIntegratedSignalLayer    # Auto-fetching signal layer
├── BotIntegratedTradeLayer     # Auto-fetching trade layer
└── BotMultiLayerIntegration    # Multi-bot layer management
```

## Quick Start

### Basic Bot Signal Visualization

```python
import plotly.graph_objects as go

from components.charts.layers import create_bot_signal_layer

# Create a bot-integrated signal layer for BTCUSDT
signal_layer = create_bot_signal_layer(
    symbol='BTCUSDT',
    active_only=True,
    confidence_threshold=0.5,
    time_window_days=7
)

# Add to chart
fig = go.Figure()
fig = signal_layer.render(fig, market_data, symbol='BTCUSDT')
```

### Complete Bot Visualization Setup

```python
from components.charts.layers import create_complete_bot_layers

# Create complete bot layer set for a symbol
result = create_complete_bot_layers(
    symbol='BTCUSDT',
    timeframe='1h',
    active_only=True,
    time_window_days=7
)

if result['success']:
    signal_layer = result['layers']['signals']
    trade_layer = result['layers']['trades']

    # Add to chart
    fig = signal_layer.render(fig, market_data, symbol='BTCUSDT')
    fig = trade_layer.render(fig, market_data, symbol='BTCUSDT')
```

## Bot Data Service

The `BotDataService` provides the core interface for fetching bot-related data from the database.

### Basic Usage

```python
from datetime import datetime, timedelta

from components.charts.layers.bot_integration import BotDataService, BotFilterConfig

# Initialize service
service = BotDataService()

# Create filter configuration
bot_filter = BotFilterConfig(
    symbols=['BTCUSDT'],
    strategies=['momentum', 'ema_crossover'],
    active_only=True
)

# Fetch bot data
bots_df = service.get_bots(bot_filter)
signals_df = service.get_signals_for_bots(
    bot_ids=bots_df['id'].tolist(),
    start_time=datetime.now() - timedelta(days=7),
    end_time=datetime.now(),
    min_confidence=0.3
)
```

### Available Methods

| Method | Description | Parameters |
|--------|-------------|------------|
| `get_bots()` | Fetch bot information | `filter_config: BotFilterConfig` |
| `get_signals_for_bots()` | Fetch signals from bots | `bot_ids, start_time, end_time, signal_types, min_confidence` |
| `get_trades_for_bots()` | Fetch trades from bots | `bot_ids, start_time, end_time, sides` |
| `get_bot_performance()` | Fetch performance data | `bot_ids, start_time, end_time` |

### BotFilterConfig Options

```python
@dataclass
class BotFilterConfig:
    bot_ids: Optional[List[int]] = None        # Specific bot IDs
    bot_names: Optional[List[str]] = None      # Specific bot names
    strategies: Optional[List[str]] = None     # Strategy filter
    symbols: Optional[List[str]] = None        # Symbol filter
    statuses: Optional[List[str]] = None       # Bot status filter
    date_range: Optional[Tuple[datetime, datetime]] = None
    active_only: bool = False                  # Only active bots
```
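
The intended semantics is that populated filter fields combine conjunctively: a bot must satisfy every field that is set. A self-contained sketch of that behavior over plain dicts (an illustration of the intended semantics, not the planned implementation):

```python
from typing import Optional

def bot_matches(bot: dict, symbols: Optional[list] = None,
                strategies: Optional[list] = None,
                active_only: bool = False) -> bool:
    """Return True when the bot satisfies every populated filter field."""
    if symbols is not None and bot["symbol"] not in symbols:
        return False
    if strategies is not None and bot["strategy"] not in strategies:
        return False
    if active_only and bot["status"] != "active":
        return False
    return True

bots = [
    {"id": 1, "symbol": "BTCUSDT", "strategy": "momentum", "status": "active"},
    {"id": 2, "symbol": "ETHUSDT", "strategy": "momentum", "status": "stopped"},
]
selected = [b["id"] for b in bots
            if bot_matches(b, symbols=["BTCUSDT", "ETHUSDT"], active_only=True)]
# only bot 1 passes the active_only filter
```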
|
||||
|
||||
## Signal Layer Integration
|
||||
|
||||
The `BotSignalLayerIntegration` provides chart-specific utilities for integrating bot data with chart layers.
|
||||
|
||||
### Chart-Specific Signal Fetching
|
||||
|
||||
```python
|
||||
from components.charts.layers.bot_integration import BotSignalLayerIntegration
|
||||
|
||||
integration = BotSignalLayerIntegration()
|
||||
|
||||
# Get signals for specific chart context
|
||||
signals_df = integration.get_signals_for_chart(
|
||||
symbol='BTCUSDT',
|
||||
timeframe='1h',
|
||||
bot_filter=BotFilterConfig(active_only=True),
|
||||
time_range=(start_time, end_time),
|
||||
signal_types=['buy', 'sell'],
|
||||
min_confidence=0.5
|
||||
)
|
||||
|
||||
# Get trades for chart context
|
||||
trades_df = integration.get_trades_for_chart(
|
||||
symbol='BTCUSDT',
|
||||
timeframe='1h',
|
||||
bot_filter=BotFilterConfig(strategies=['momentum']),
|
||||
time_range=(start_time, end_time)
|
||||
)
|
||||
|
||||
# Get bot summary statistics
|
||||
stats = integration.get_bot_summary_stats(bot_ids=[1, 2, 3])
|
||||
```
|
||||
|
||||
### Performance Analytics
|
||||
|
||||
```python
|
||||
# Get comprehensive performance summary
|
||||
performance = get_bot_performance_summary(
|
||||
bot_id=1, # Specific bot or None for all
|
||||
days_back=30
|
||||
)
|
||||
|
||||
print(f"Total trades: {performance['trade_count']}")
|
||||
print(f"Win rate: {performance['win_rate']:.1f}%")
|
||||
print(f"Total P&L: ${performance['bot_stats']['total_pnl']:.2f}")
|
||||
```
|
||||
|
||||
## Enhanced Bot Layers
|
||||
|
||||
Enhanced layers provide automatic data fetching and bot-specific visualization features.
|
||||
|
||||
### BotIntegratedSignalLayer
|
||||
|
||||
```python
|
||||
from components.charts.layers import BotIntegratedSignalLayer, BotSignalLayerConfig
|
||||
|
||||
# Configure bot-integrated signal layer
|
||||
config = BotSignalLayerConfig(
|
||||
name="BTCUSDT Bot Signals",
|
||||
auto_fetch_data=True, # Automatically fetch from database
|
||||
time_window_days=7, # Look back 7 days
|
||||
active_bots_only=True, # Only active bots
|
||||
include_bot_info=True, # Include bot info in hover
|
||||
group_by_strategy=True, # Group signals by strategy
|
||||
confidence_threshold=0.3, # Minimum confidence
|
||||
signal_types=['buy', 'sell'] # Signal types to show
|
||||
)
|
||||
|
||||
layer = BotIntegratedSignalLayer(config)
|
||||
|
||||
# Render automatically fetches data
|
||||
fig = layer.render(fig, market_data, symbol='BTCUSDT')
|
||||
```
|
||||
|
||||
### BotIntegratedTradeLayer
|
||||
|
||||
```python
|
||||
from components.charts.layers import BotIntegratedTradeLayer, BotTradeLayerConfig
|
||||
|
||||
config = BotTradeLayerConfig(
|
||||
name="BTCUSDT Bot Trades",
|
||||
auto_fetch_data=True,
|
||||
time_window_days=7,
|
||||
show_pnl=True, # Show profit/loss
|
||||
show_trade_lines=True, # Connect entry/exit
|
||||
include_bot_info=True, # Bot info in hover
|
||||
group_by_strategy=False
|
||||
)
|
||||
|
||||
layer = BotIntegratedTradeLayer(config)
|
||||
fig = layer.render(fig, market_data, symbol='BTCUSDT')
|
||||
```
|
||||
|
||||
## Multi-Bot Visualization
|
||||
|
||||
### Strategy Comparison
|
||||
|
||||
```python
|
||||
from components.charts.layers import bot_multi_layer
|
||||
|
||||
# Compare multiple strategies on the same symbol
|
||||
result = bot_multi_layer.create_strategy_comparison_layers(
|
||||
symbol='BTCUSDT',
|
||||
strategies=['momentum', 'ema_crossover', 'mean_reversion'],
|
||||
timeframe='1h',
|
||||
time_window_days=14
|
||||
)
|
||||
|
||||
if result['success']:
|
||||
for strategy in result['strategies']:
|
||||
signal_layer = result['layers'][f"{strategy}_signals"]
|
||||
trade_layer = result['layers'][f"{strategy}_trades"]
|
||||
|
||||
fig = signal_layer.render(fig, market_data, symbol='BTCUSDT')
|
||||
fig = trade_layer.render(fig, market_data, symbol='BTCUSDT')
|
||||
```
|
||||
|
||||
### Multi-Symbol Bot View
|
||||
|
||||
```python
|
||||
# Create bot layers for multiple symbols
|
||||
symbols = ['BTCUSDT', 'ETHUSDT', 'ADAUSDT']
|
||||
|
||||
for symbol in symbols:
|
||||
bot_layers = create_complete_bot_layers(
|
||||
symbol=symbol,
|
||||
active_only=True,
|
||||
time_window_days=7
|
||||
)
|
||||
|
||||
if bot_layers['success']:
|
||||
# Add layers to respective charts
|
||||
signal_layer = bot_layers['layers']['signals']
|
||||
# ... render on symbol-specific chart
|
||||
```
|
||||
|
||||
## Configuration Options
|
||||
|
||||
### Auto-Fetch Configuration
|
||||
|
||||
```python
|
||||
# Disable auto-fetch for manual data control
|
||||
config = BotSignalLayerConfig(
|
||||
name="Manual Bot Signals",
|
||||
auto_fetch_data=False, # Disable auto-fetch
|
||||
active_bots_only=True
|
||||
)
|
||||
|
||||
layer = BotIntegratedSignalLayer(config)
|
||||
|
||||
# Manually provide signal data
|
||||
manual_signals = get_signals_from_api()
|
||||
fig = layer.render(fig, market_data, signals=manual_signals)
|
||||
```
|
||||
|
||||
### Time Window Management
|
||||
|
||||
```python
|
||||
# Custom time window
|
||||
config = BotSignalLayerConfig(
|
||||
name="Short Term Signals",
|
||||
time_window_days=1, # Last 24 hours only
|
||||
active_bots_only=True,
|
||||
confidence_threshold=0.7 # High confidence only
|
||||
)
|
||||
```
|
||||
|
||||
### Bot-Specific Filtering
|
||||
|
||||
```python
|
||||
# Filter for specific bots
|
||||
bot_filter = BotFilterConfig(
|
||||
bot_ids=[1, 2, 5], # Specific bot IDs
|
||||
symbols=['BTCUSDT'],
|
||||
active_only=True
|
||||
)
|
||||
|
||||
config = BotSignalLayerConfig(
|
||||
name="Selected Bots",
|
||||
bot_filter=bot_filter,
|
||||
include_bot_info=True
|
||||
)
|
||||
```
|
||||
|
||||
## Examples

### Dashboard Integration Example

```python
# dashboard/callbacks/charts.py
from components.charts.layers import (
    create_bot_signal_layer,
    create_bot_trade_layer,
    get_active_bot_signals
)

@app.callback(
    Output('chart', 'figure'),
    [Input('symbol-dropdown', 'value'),
     Input('show-bot-signals', 'value')]
)
def update_chart_with_bots(symbol, show_bot_signals):
    fig = create_base_chart(symbol)

    if 'bot-signals' in show_bot_signals:
        # Add bot signals
        signal_layer = create_bot_signal_layer(
            symbol=symbol,
            active_only=True,
            confidence_threshold=0.3
        )
        fig = signal_layer.render(fig, market_data, symbol=symbol)

        # Add bot trades
        trade_layer = create_bot_trade_layer(
            symbol=symbol,
            active_only=True,
            show_pnl=True
        )
        fig = trade_layer.render(fig, market_data, symbol=symbol)

    return fig
```

### Custom Bot Analysis

```python
# Custom analysis for specific strategy
def analyze_momentum_strategy(symbol: str, days_back: int = 30):
    """Analyze momentum strategy performance for a symbol."""

    # Get momentum bot signals
    signals = get_bot_signals_by_strategy(
        strategy_name='momentum',
        symbol=symbol,
        days_back=days_back
    )

    # Get performance summary
    performance = get_bot_performance_summary(days_back=days_back)

    # Create visualizations
    signal_layer = create_bot_signal_layer(
        symbol=symbol,
        active_only=False,  # Include all momentum bots
        time_window_days=days_back
    )

    return {
        'signals': signals,
        'performance': performance,
        'layer': signal_layer
    }

# Usage
analysis = analyze_momentum_strategy('BTCUSDT', days_back=14)
```

### Real-time Monitoring Setup

```python
# Real-time bot monitoring dashboard component
def create_realtime_bot_monitor(symbols: List[str]):
    """Create real-time bot monitoring charts."""

    charts = {}

    for symbol in symbols:
        # Get latest bot data
        active_signals = get_active_bot_signals(
            symbol=symbol,
            days_back=1,  # Last 24 hours
            min_confidence=0.5
        )

        # Create monitoring layers
        signal_layer = create_bot_signal_layer(
            symbol=symbol,
            active_only=True,
            time_window_days=1
        )

        trade_layer = create_bot_trade_layer(
            symbol=symbol,
            active_only=True,
            show_pnl=True,
            time_window_days=1
        )

        charts[symbol] = {
            'signal_layer': signal_layer,
            'trade_layer': trade_layer,
            'active_signals': len(active_signals)
        }

    return charts
```

## Best Practices

### Performance Optimization

```python
# 1. Use appropriate time windows
config = BotSignalLayerConfig(
    time_window_days=7,       # Don't fetch more data than needed
    confidence_threshold=0.3  # Filter low-confidence signals
)

# 2. Filter by active bots only when possible
bot_filter = BotFilterConfig(
    active_only=True,    # Reduces database queries
    symbols=['BTCUSDT']  # Specific symbols only
)

# 3. Reuse integration instances
integration = BotSignalLayerIntegration()  # Create once
# Use multiple times for different symbols
```

### Error Handling

```python
try:
    bot_layers = create_complete_bot_layers('BTCUSDT')

    if not bot_layers['success']:
        logger.warning(f"Bot layer creation failed: {bot_layers.get('error')}")
        # Fallback to manual signal layer
        signal_layer = TradingSignalLayer()
    else:
        signal_layer = bot_layers['layers']['signals']

except Exception as e:
    logger.error(f"Bot integration error: {e}")
    # Graceful degradation
    signal_layer = TradingSignalLayer()
```

### Database Connection Management

```python
# The bot integration handles database connections automatically,
# but for custom queries, follow these patterns:

from database.connection import get_session

def custom_bot_query():
    try:
        with get_session() as session:
            # Your database operations
            result = session.query(Bot).filter(...).all()
            return result
    except Exception as e:
        logger.error(f"Database query failed: {e}")
        return []
```

## Troubleshooting

### Common Issues

1. **No signals showing on chart**
   ```python
   # Check if bots exist for symbol
   service = BotDataService()
   bots = service.get_bots(BotFilterConfig(symbols=['BTCUSDT']))
   print(f"Found {len(bots)} bots for BTCUSDT")

   # Check signal count
   signals = get_active_bot_signals('BTCUSDT', days_back=7)
   print(f"Found {len(signals)} signals in last 7 days")
   ```

2. **Database connection errors**
   ```python
   # Test database connection
   try:
       from database.connection import get_session
       with get_session() as session:
           print("Database connection successful")
   except Exception as e:
       print(f"Database connection failed: {e}")
   ```

3. **Performance issues with large datasets**
   ```python
   # Reduce time window
   config = BotSignalLayerConfig(
       time_window_days=3,       # Reduced from 7
       confidence_threshold=0.5  # Higher threshold
   )

   # Filter by specific strategies
   bot_filter = BotFilterConfig(
       strategies=['momentum'],  # Specific strategy only
       active_only=True
   )
   ```

### Debug Mode

```python
import logging

# Enable debug logging for bot integration
logging.getLogger('bot_integration').setLevel(logging.DEBUG)
logging.getLogger('bot_enhanced_layers').setLevel(logging.DEBUG)

# This will show detailed information about:
# - Database queries
# - Data fetching operations
# - Filter applications
# - Performance metrics
```

### Testing Bot Integration

```python
# Test bot integration components
from tests.test_signal_layers import TestBotIntegration

# Run specific bot integration tests
pytest.main(['-v', 'tests/test_signal_layers.py::TestBotIntegration'])

# Test with mock data
def test_bot_integration():
    config = BotSignalLayerConfig(
        name="Test Bot Signals",
        auto_fetch_data=False  # Use manual data for testing
    )

    layer = BotIntegratedSignalLayer(config)

    # Provide test data
    test_signals = pd.DataFrame({
        'timestamp': [datetime.now()],
        'signal_type': ['buy'],
        'price': [50000],
        'confidence': [0.8],
        'bot_name': ['Test Bot']
    })

    fig = go.Figure()
    result = layer.render(fig, market_data, signals=test_signals)

    assert len(result.data) > 0
```

## API Reference

### Core Classes

- **`BotDataService`** - Main service for database operations
- **`BotSignalLayerIntegration`** - Chart-specific integration utilities
- **`BotIntegratedSignalLayer`** - Auto-fetching signal layer
- **`BotIntegratedTradeLayer`** - Auto-fetching trade layer
- **`BotMultiLayerIntegration`** - Multi-bot layer management

### Configuration Classes

- **`BotFilterConfig`** - Bot filtering configuration
- **`BotSignalLayerConfig`** - Signal layer configuration with bot options
- **`BotTradeLayerConfig`** - Trade layer configuration with bot options

### Convenience Functions

- **`create_bot_signal_layer()`** - Quick bot signal layer creation
- **`create_bot_trade_layer()`** - Quick bot trade layer creation
- **`create_complete_bot_layers()`** - Complete bot layer set
- **`get_active_bot_signals()`** - Get signals from active bots
- **`get_active_bot_trades()`** - Get trades from active bots
- **`get_bot_signals_by_strategy()`** - Get signals by strategy
- **`get_bot_performance_summary()`** - Get performance analytics

For complete API documentation, see the module docstrings in:
- `components/charts/layers/bot_integration.py`
- `components/charts/layers/bot_enhanced_layers.py`

754
docs/modules/charts/configuration.md
Normal file
@@ -0,0 +1,754 @@

# Chart Configuration System

The Chart Configuration System provides comprehensive management of chart settings, indicator definitions, and trading strategy configurations. It includes schema validation, default presets, and extensible configuration patterns.

## Table of Contents

- [Overview](#overview)
- [Indicator Definitions](#indicator-definitions)
- [Default Configurations](#default-configurations)
- [Strategy Configurations](#strategy-configurations)
- [Validation System](#validation-system)
- [Configuration Files](#configuration-files)
- [Usage Examples](#usage-examples)
- [Extension Guide](#extension-guide)

## Overview

The configuration system is built around three core concepts:

1. **Indicator Definitions** - Schema and validation for technical indicators
2. **Default Configurations** - Pre-built indicator presets organized by category
3. **Strategy Configurations** - Complete chart setups for trading strategies

### Architecture

```
components/charts/config/
├── indicator_defs.py      # Core schemas and validation
├── defaults.py            # Default indicator presets
├── strategy_charts.py     # Strategy configurations
├── validation.py          # Validation system
├── example_strategies.py  # Real-world examples
└── __init__.py            # Package exports
```

## Indicator Definitions

### Core Classes

#### `ChartIndicatorConfig`

The main configuration class for individual indicators:

```python
@dataclass
class ChartIndicatorConfig:
    name: str
    indicator_type: str
    parameters: Dict[str, Any]
    display_type: str          # 'overlay', 'subplot'
    color: str
    line_style: str = 'solid'  # 'solid', 'dash', 'dot'
    line_width: int = 2
    opacity: float = 1.0
    visible: bool = True
    subplot_height_ratio: float = 0.3  # For subplot indicators
```

#### Enums

**IndicatorType**
```python
class IndicatorType(str, Enum):
    SMA = "sma"
    EMA = "ema"
    RSI = "rsi"
    MACD = "macd"
    BOLLINGER_BANDS = "bollinger_bands"
    VOLUME = "volume"
```

**DisplayType**
```python
class DisplayType(str, Enum):
    OVERLAY = "overlay"      # Overlaid on price chart
    SUBPLOT = "subplot"      # Separate subplot
    HISTOGRAM = "histogram"  # Histogram display
```

**LineStyle**
```python
class LineStyle(str, Enum):
    SOLID = "solid"
    DASHED = "dash"
    DOTTED = "dot"
    DASH_DOT = "dashdot"
```

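Because these enums subclass `str`, their members interchange freely with plain strings, which is why fields such as `ChartIndicatorConfig.indicator_type` can be typed as `str`. A minimal self-contained sketch (a trimmed-down redefinition of the enum above):

```python
from enum import Enum

class IndicatorType(str, Enum):
    SMA = "sma"
    EMA = "ema"

# str-valued enum members compare equal to their plain-string values,
# so fields typed as `str` accept either form.
assert IndicatorType.SMA == "sma"
# The enum can also be constructed back from the raw string.
assert IndicatorType("ema") is IndicatorType.EMA
print(IndicatorType.SMA.value)
```

This is the standard str-mixin enum pattern, which keeps serialized configurations (JSON, database rows) readable while retaining enum safety in code.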
### Schema Validation

#### `IndicatorParameterSchema`

Defines validation rules for indicator parameters:

```python
@dataclass
class IndicatorParameterSchema:
    name: str
    type: type
    required: bool = True
    default: Any = None
    min_value: Optional[Union[int, float]] = None
    max_value: Optional[Union[int, float]] = None
    description: str = ""
```

#### `IndicatorSchema`

Complete schema for an indicator type:

```python
@dataclass
class IndicatorSchema:
    indicator_type: IndicatorType
    display_type: DisplayType
    required_parameters: List[IndicatorParameterSchema]
    optional_parameters: List[IndicatorParameterSchema] = field(default_factory=list)
    min_data_points: int = 1
    description: str = ""
```

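These fields are enough to drive generic parameter checking. The helper below is a hypothetical sketch, not the library's actual validator, showing how `required`, `type`, and the min/max bounds can combine (the dataclass is redefined here so the block is self-contained):

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional, Union

@dataclass
class IndicatorParameterSchema:
    name: str
    type: type
    required: bool = True
    default: Any = None
    min_value: Optional[Union[int, float]] = None
    max_value: Optional[Union[int, float]] = None
    description: str = ""

def check_parameter(schema: IndicatorParameterSchema,
                    params: Dict[str, Any]) -> List[str]:
    """Return a list of validation errors for one parameter (hypothetical helper)."""
    errors = []
    if schema.name not in params:
        # Missing is only an error when required and no default exists
        if schema.required and schema.default is None:
            errors.append(f"missing required parameter '{schema.name}'")
        return errors
    value = params[schema.name]
    if not isinstance(value, schema.type):
        errors.append(f"'{schema.name}' must be {schema.type.__name__}")
        return errors
    if schema.min_value is not None and value < schema.min_value:
        errors.append(f"'{schema.name}' below minimum {schema.min_value}")
    if schema.max_value is not None and value > schema.max_value:
        errors.append(f"'{schema.name}' above maximum {schema.max_value}")
    return errors

period = IndicatorParameterSchema(name="period", type=int,
                                  min_value=1, max_value=200, default=20)
print(check_parameter(period, {"period": 500}))
```

The errors-as-list shape matches the `tuple[bool, List[str]]` return convention used by the validation functions later in this document.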
### Schema Definitions

The system includes complete schemas for all supported indicators:

```python
INDICATOR_SCHEMAS = {
    IndicatorType.SMA: IndicatorSchema(
        indicator_type=IndicatorType.SMA,
        display_type=DisplayType.OVERLAY,
        required_parameters=[
            IndicatorParameterSchema(
                name="period",
                type=int,
                min_value=1,
                max_value=200,
                default=20,
                description="Number of periods for the moving average"
            )
        ],
        optional_parameters=[
            IndicatorParameterSchema(
                name="price_column",
                type=str,
                required=False,
                default="close",
                description="Price column to use: 'open', 'high', 'low', or 'close'"
            )
        ],
        description="Simple Moving Average - arithmetic mean of prices over the period"
    ),
    # ... more schemas
}
```

### Utility Functions

#### Validation Functions

```python
# Validate individual indicator configuration
def validate_indicator_configuration(config: ChartIndicatorConfig) -> tuple[bool, List[str]]

# Create indicator configuration with validation
def create_indicator_config(
    name: str,
    indicator_type: str,
    parameters: Dict[str, Any],
    display_type: Optional[str] = None,
    color: str = "#007bff",
    **display_options
) -> tuple[Optional[ChartIndicatorConfig], List[str]]

# Get schema for indicator type
def get_indicator_schema(indicator_type: str) -> Optional[IndicatorSchema]

# Get available indicator types
def get_available_indicator_types() -> List[str]

# Validate parameters for specific type
def validate_parameters_for_type(
    indicator_type: str,
    parameters: Dict[str, Any]
) -> tuple[bool, List[str]]
```

## Default Configurations

### Organization

Default configurations are organized by category and trading strategy:

#### Categories

```python
class IndicatorCategory(str, Enum):
    TREND = "trend"
    MOMENTUM = "momentum"
    VOLATILITY = "volatility"
    VOLUME = "volume"
    SUPPORT_RESISTANCE = "support_resistance"
```

#### Trading Strategies

```python
class TradingStrategy(str, Enum):
    SCALPING = "scalping"
    DAY_TRADING = "day_trading"
    SWING_TRADING = "swing_trading"
    POSITION_TRADING = "position_trading"
    MOMENTUM = "momentum"
    MEAN_REVERSION = "mean_reversion"
```

### Indicator Presets

#### `IndicatorPreset`

Container for pre-configured indicators:

```python
@dataclass
class IndicatorPreset:
    name: str
    config: ChartIndicatorConfig
    category: IndicatorCategory
    description: str
    recommended_timeframes: List[str]
    suitable_strategies: List[TradingStrategy]
    notes: List[str] = field(default_factory=list)
```

### Available Presets

**Trend Indicators (13 presets)**
- `sma_5`, `sma_10`, `sma_20`, `sma_50`, `sma_100`, `sma_200`
- `ema_5`, `ema_12`, `ema_21`, `ema_26`, `ema_50`, `ema_100`, `ema_200`

**Momentum Indicators (7 presets)**
- `rsi_7`, `rsi_14`, `rsi_21`
- `macd_5_13_4`, `macd_8_17_6`, `macd_12_26_9`, `macd_19_39_13`

**Volatility Indicators (4 presets)**
- `bb_10_15`, `bb_20_15`, `bb_20_20`, `bb_50_20`

### Color Schemes

Organized color palettes by category:

```python
CATEGORY_COLORS = {
    IndicatorCategory.TREND: {
        "primary": "#2E86C1",    # Blue
        "secondary": "#5DADE2",  # Light Blue
        "accent": "#1F618D"      # Dark Blue
    },
    IndicatorCategory.MOMENTUM: {
        "primary": "#E74C3C",    # Red
        "secondary": "#F1948A",  # Light Red
        "accent": "#C0392B"      # Dark Red
    },
    # ... more colors
}
```

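When several indicators share a category, one simple way to consume such a palette is to cycle through its entries. A self-contained sketch with a trimmed-down palette (the cycling itself is an illustrative pattern, not a documented library behavior):

```python
from itertools import cycle

# Trimmed-down stand-in with the same shape as one CATEGORY_COLORS entry
TREND_COLORS = {"primary": "#2E86C1", "secondary": "#5DADE2", "accent": "#1F618D"}

# Cycle through the category palette; a fourth indicator wraps back
# to the primary color.
palette = cycle(TREND_COLORS.values())
colors = [next(palette) for _ in range(4)]
print(colors)
```

Since Python 3.7, `dict.values()` preserves insertion order, so the cycle visits primary, secondary, then accent.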
### Access Functions

```python
# Get all default indicators
def get_all_default_indicators() -> Dict[str, IndicatorPreset]

# Filter by category
def get_indicators_by_category(category: IndicatorCategory) -> Dict[str, IndicatorPreset]

# Filter by timeframe
def get_indicators_for_timeframe(timeframe: str) -> Dict[str, IndicatorPreset]

# Get strategy-specific indicators
def get_strategy_indicators(strategy: TradingStrategy) -> Dict[str, IndicatorPreset]

# Create custom preset
def create_custom_preset(
    name: str,
    indicator_type: IndicatorType,
    parameters: Dict[str, Any],
    category: IndicatorCategory,
    **kwargs
) -> tuple[Optional[IndicatorPreset], List[str]]
```

## Strategy Configurations

### Core Classes

#### `StrategyChartConfig`

Complete chart configuration for a trading strategy:

```python
@dataclass
class StrategyChartConfig:
    strategy_name: str
    strategy_type: TradingStrategy
    description: str
    timeframes: List[str]

    # Chart layout
    layout: ChartLayout = ChartLayout.MAIN_WITH_SUBPLOTS
    main_chart_height: float = 0.7

    # Indicators
    overlay_indicators: List[str] = field(default_factory=list)
    subplot_configs: List[SubplotConfig] = field(default_factory=list)

    # Style
    chart_style: ChartStyle = field(default_factory=ChartStyle)

    # Metadata
    created_at: Optional[datetime] = None
    updated_at: Optional[datetime] = None
    version: str = "1.0"
    tags: List[str] = field(default_factory=list)
```

#### `SubplotConfig`

Configuration for chart subplots:

```python
@dataclass
class SubplotConfig:
    subplot_type: SubplotType
    height_ratio: float = 0.3
    indicators: List[str] = field(default_factory=list)
    title: Optional[str] = None
    y_axis_label: Optional[str] = None
    show_grid: bool = True
    show_legend: bool = True
    background_color: Optional[str] = None
```

#### `ChartStyle`

Comprehensive chart styling:

```python
@dataclass
class ChartStyle:
    theme: str = "plotly_white"
    background_color: str = "#ffffff"
    grid_color: str = "#e6e6e6"
    text_color: str = "#2c3e50"
    font_family: str = "Arial, sans-serif"
    font_size: int = 12
    candlestick_up_color: str = "#26a69a"
    candlestick_down_color: str = "#ef5350"
    volume_color: str = "#78909c"
    show_volume: bool = True
    show_grid: bool = True
    show_legend: bool = True
    show_toolbar: bool = True
```

### Default Strategy Configurations

Pre-built strategy configurations for common trading approaches:

1. **Scalping Strategy**
   - Ultra-fast indicators (EMA 5, 12, 21)
   - Fast RSI (7) and MACD (5,13,4)
   - 1m-5m timeframes

2. **Day Trading Strategy**
   - Balanced indicators (SMA 20, EMA 12/26, BB 20,2.0)
   - Standard RSI (14) and MACD (12,26,9)
   - 5m-1h timeframes

3. **Swing Trading Strategy**
   - Longer-term indicators (SMA 50, EMA 21/50, BB 20,2.0)
   - Standard momentum indicators
   - 1h-1d timeframes

### Configuration Functions

```python
# Create default strategy configurations
def create_default_strategy_configurations() -> Dict[str, StrategyChartConfig]

# Create custom strategy
def create_custom_strategy_config(
    strategy_name: str,
    strategy_type: TradingStrategy,
    description: str,
    timeframes: List[str],
    overlay_indicators: List[str],
    subplot_configs: List[Dict[str, Any]],
    **kwargs
) -> tuple[Optional[StrategyChartConfig], List[str]]

# JSON import/export
def load_strategy_config_from_json(json_data: Union[str, Dict[str, Any]]) -> tuple[Optional[StrategyChartConfig], List[str]]
def export_strategy_config_to_json(config: StrategyChartConfig) -> str

# Access functions
def get_strategy_config(strategy_name: str) -> Optional[StrategyChartConfig]
def get_all_strategy_configs() -> Dict[str, StrategyChartConfig]
def get_available_strategy_names() -> List[str]
```

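The JSON import/export pair amounts to a dataclass round-trip. A simplified, self-contained sketch using a stand-in config class with a few representative fields (the real functions additionally validate the payload and report errors):

```python
import json
from dataclasses import asdict, dataclass, field
from typing import List

# Stand-in for StrategyChartConfig, reduced to JSON-friendly fields
@dataclass
class MiniStrategyConfig:
    strategy_name: str
    timeframes: List[str] = field(default_factory=list)
    overlay_indicators: List[str] = field(default_factory=list)

def export_to_json(cfg: MiniStrategyConfig) -> str:
    # asdict() recursively converts the dataclass to plain dicts/lists
    return json.dumps(asdict(cfg), indent=2)

def load_from_json(text: str) -> MiniStrategyConfig:
    return MiniStrategyConfig(**json.loads(text))

cfg = MiniStrategyConfig("Demo", ["5m", "15m"], ["ema_12", "ema_26"])
# The round-trip preserves every field
assert load_from_json(export_to_json(cfg)) == cfg
```

Nested members such as `SubplotConfig` and `ChartStyle` follow the same pattern, with an extra step to rebuild each nested dataclass from its dict on load.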
## Validation System

### Validation Rules

The system includes 10 comprehensive validation rules:

1. **REQUIRED_FIELDS** - Validates essential configuration fields
2. **HEIGHT_RATIOS** - Ensures chart height ratios sum correctly
3. **INDICATOR_EXISTENCE** - Checks indicator availability
4. **TIMEFRAME_FORMAT** - Validates timeframe patterns
5. **CHART_STYLE** - Validates styling options
6. **SUBPLOT_CONFIG** - Validates subplot configurations
7. **STRATEGY_CONSISTENCY** - Checks strategy-timeframe alignment
8. **PERFORMANCE_IMPACT** - Warns about performance issues
9. **INDICATOR_CONFLICTS** - Detects redundant indicators
10. **RESOURCE_USAGE** - Estimates resource consumption

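The HEIGHT_RATIOS rule, for example, reduces to checking that the main chart height plus all subplot ratios account for the whole figure. A self-contained sketch of that check (the exact tolerance and failure behavior of the real rule are assumptions):

```python
from typing import List

def height_ratios_valid(main_height: float, subplot_ratios: List[float],
                        tol: float = 1e-6) -> bool:
    """The main chart plus all subplots should fill the figure exactly."""
    return abs(main_height + sum(subplot_ratios) - 1.0) <= tol

# Main chart at 70% with one 30% subplot fills the figure
print(height_ratios_valid(0.7, [0.3]))
# 70% + 20% + 20% over-allocates by 10%
print(height_ratios_valid(0.7, [0.2, 0.2]))
```

The tolerance absorbs floating-point error, so defaults like `main_chart_height=0.7` with a single `height_ratio=0.3` subplot pass the check.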
### Validation Classes

#### `ValidationReport`

Comprehensive validation results:

```python
@dataclass
class ValidationReport:
    is_valid: bool
    errors: List[ValidationIssue] = field(default_factory=list)
    warnings: List[ValidationIssue] = field(default_factory=list)
    info: List[ValidationIssue] = field(default_factory=list)
    debug: List[ValidationIssue] = field(default_factory=list)
    validation_time: Optional[datetime] = None
    rules_applied: Set[ValidationRule] = field(default_factory=set)
```

#### `ValidationIssue`

Individual validation issue:

```python
@dataclass
class ValidationIssue:
    level: ValidationLevel
    rule: ValidationRule
    message: str
    field_path: str = ""
    suggestion: Optional[str] = None
    auto_fix: Optional[str] = None
    context: Dict[str, Any] = field(default_factory=dict)
```

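A report like this typically routes each issue into the list for its severity level and flips `is_valid` on the first error. The sketch below is a simplified, hypothetical version of that aggregation (trimmed classes, assumed `add_issue` behavior), not the library's actual implementation:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class ValidationLevel(str, Enum):
    ERROR = "error"
    WARNING = "warning"

@dataclass
class ValidationIssue:
    level: ValidationLevel
    message: str

@dataclass
class ValidationReport:
    is_valid: bool = True
    errors: List[ValidationIssue] = field(default_factory=list)
    warnings: List[ValidationIssue] = field(default_factory=list)

    def add_issue(self, issue: ValidationIssue) -> None:
        # Errors invalidate the report; warnings stay informational
        if issue.level == ValidationLevel.ERROR:
            self.errors.append(issue)
            self.is_valid = False
        else:
            self.warnings.append(issue)

report = ValidationReport()
report.add_issue(ValidationIssue(ValidationLevel.WARNING, "many indicators configured"))
report.add_issue(ValidationIssue(ValidationLevel.ERROR, "missing strategy_name"))
print(report.is_valid, len(report.errors), len(report.warnings))
```

An `add_issue` method with this shape is what the Extension Guide's custom validation rules call into.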
### Validation Usage

```python
from components.charts.config import validate_configuration

# Comprehensive validation
report = validate_configuration(config)

# Check results
if report.is_valid:
    print("✅ Configuration is valid")
else:
    print("❌ Configuration has errors:")
    for error in report.errors:
        print(f"  • {error}")

# Handle warnings
if report.warnings:
    print("⚠️ Warnings:")
    for warning in report.warnings:
        print(f"  • {warning}")
```

## Configuration Files

### File Structure

```
components/charts/config/
├── __init__.py            # Package exports and public API
├── indicator_defs.py      # Core indicator schemas and validation
├── defaults.py            # Default indicator presets and categories
├── strategy_charts.py     # Strategy configuration classes and defaults
├── validation.py          # Validation system and rules
└── example_strategies.py  # Real-world trading strategy examples
```

### Key Exports

From `__init__.py`:

```python
# Core classes
from .indicator_defs import (
    IndicatorType, DisplayType, LineStyle, PriceColumn,
    IndicatorParameterSchema, IndicatorSchema, ChartIndicatorConfig
)

# Default configurations
from .defaults import (
    IndicatorCategory, TradingStrategy, IndicatorPreset,
    get_all_default_indicators, get_indicators_by_category
)

# Strategy configurations
from .strategy_charts import (
    ChartLayout, SubplotType, SubplotConfig, ChartStyle, StrategyChartConfig,
    create_default_strategy_configurations
)

# Validation system
from .validation import (
    ValidationLevel, ValidationRule, ValidationIssue, ValidationReport,
    validate_configuration
)

# Utility functions from indicator_defs
from .indicator_defs import (
    create_indicator_config, get_indicator_schema, get_available_indicator_types
)
```

## Usage Examples

### Example 1: Creating a Custom Indicator

```python
from components.charts.config import (
    create_indicator_config, IndicatorType
)

# Create a custom EMA configuration
config, errors = create_indicator_config(
    name="EMA 21",
    indicator_type=IndicatorType.EMA,
    parameters={"period": 21, "price_column": "close"},
    color="#2E86C1",
    line_width=2
)

if config:
    print(f"Created: {config.name}")
else:
    print(f"Errors: {errors}")
```

### Example 2: Using Default Presets

```python
from components.charts.config import (
    get_all_default_indicators,
    get_indicators_by_category,
    IndicatorCategory
)

# Get all available indicators
all_indicators = get_all_default_indicators()
print(f"Available indicators: {len(all_indicators)}")

# Get trend indicators only
trend_indicators = get_indicators_by_category(IndicatorCategory.TREND)
for name, preset in trend_indicators.items():
    print(f"{name}: {preset.description}")
```

### Example 3: Strategy Configuration

```python
from components.charts.config import (
    create_custom_strategy_config,
    TradingStrategy
)

# Create a custom momentum strategy
config, errors = create_custom_strategy_config(
    strategy_name="Custom Momentum",
    strategy_type=TradingStrategy.MOMENTUM,
    description="Fast momentum trading strategy",
    timeframes=["5m", "15m"],
    overlay_indicators=["ema_8", "ema_21"],
    subplot_configs=[{
        "subplot_type": "rsi",
        "height_ratio": 0.2,
        "indicators": ["rsi_7"]
    }]
)

if config:
    print(f"Created strategy: {config.strategy_name}")
    is_valid, validation_errors = config.validate()
    if is_valid:
        print("Strategy is valid!")
    else:
        print(f"Validation errors: {validation_errors}")
```

### Example 4: Comprehensive Validation

```python
from components.charts.config import (
    validate_configuration,
    ValidationRule
)

# Validate with specific rules
rules = {ValidationRule.REQUIRED_FIELDS, ValidationRule.HEIGHT_RATIOS}
report = validate_configuration(config, rules=rules)

# Detailed error handling
for error in report.errors:
    print(f"ERROR: {error.message}")
    if error.suggestion:
        print(f"  Suggestion: {error.suggestion}")
    if error.auto_fix:
        print(f"  Auto-fix: {error.auto_fix}")

# Performance warnings
performance_issues = report.get_issues_by_rule(ValidationRule.PERFORMANCE_IMPACT)
if performance_issues:
    print(f"Performance concerns: {len(performance_issues)}")
```

## Extension Guide

### Adding New Indicators

1. **Define Indicator Type**
   ```python
   # Add to IndicatorType enum
   class IndicatorType(str, Enum):
       # ... existing types
       STOCHASTIC = "stochastic"
   ```

2. **Create Schema**
   ```python
   # Add to INDICATOR_SCHEMAS
   INDICATOR_SCHEMAS[IndicatorType.STOCHASTIC] = IndicatorSchema(
       indicator_type=IndicatorType.STOCHASTIC,
       display_type=DisplayType.SUBPLOT,
       required_parameters=[
           IndicatorParameterSchema(
               name="k_period",
               type=int,
               min_value=1,
               max_value=100,
               default=14
           ),
           # ... more parameters
       ],
       description="Stochastic Oscillator - momentum indicator comparing closing price to the recent price range"
   )
   ```

3. **Create Default Presets**
   ```python
   # Add to defaults.py
   def create_momentum_indicators():
       # ... existing indicators
       indicators["stoch_14"] = IndicatorPreset(
           name="stoch_14",
           config=create_indicator_config(
               name="Stochastic (14, 3)",
               indicator_type=IndicatorType.STOCHASTIC,
               parameters={"k_period": 14, "d_period": 3},
               color=CATEGORY_COLORS[IndicatorCategory.MOMENTUM]["primary"]
           )[0],
           category=IndicatorCategory.MOMENTUM,
           description="Standard Stochastic oscillator",
           recommended_timeframes=["15m", "1h", "4h"],
           suitable_strategies=[TradingStrategy.SWING_TRADING]
       )
   ```

### Adding New Validation Rules

1. **Define Rule**
   ```python
   # Add to ValidationRule enum
   class ValidationRule(str, Enum):
       # ... existing rules
       CUSTOM_RULE = "custom_rule"
   ```

2. **Implement Validation**
   ```python
   # Add to ConfigurationValidator
   def _validate_custom_rule(self, config: StrategyChartConfig, report: ValidationReport) -> None:
       # Custom validation logic
       if some_condition:
           report.add_issue(ValidationIssue(
               level=ValidationLevel.WARNING,
               rule=ValidationRule.CUSTOM_RULE,
               message="Custom validation message",
               suggestion="Suggested fix"
           ))
   ```

3. **Add to Validator**
   ```python
   # Add to the validate_strategy_config method
   if ValidationRule.CUSTOM_RULE in self.enabled_rules:
       self._validate_custom_rule(config, report)
   ```

### Adding New Strategy Types
|
||||
|
||||
1. **Define Strategy Type**
|
||||
```python
|
||||
# Add to TradingStrategy enum
|
||||
class TradingStrategy(str, Enum):
|
||||
# ... existing strategies
|
||||
GRID_TRADING = "grid_trading"
|
||||
```
|
||||
|
||||
2. **Create Strategy Configuration**
|
||||
```python
|
||||
# Add to create_default_strategy_configurations()
|
||||
strategy_configs["grid_trading"] = StrategyChartConfig(
|
||||
strategy_name="Grid Trading Strategy",
|
||||
strategy_type=TradingStrategy.GRID_TRADING,
|
||||
description="Grid trading with support/resistance levels",
|
||||
timeframes=["1h", "4h"],
|
||||
overlay_indicators=["sma_20", "sma_50"],
|
||||
# ... complete configuration
|
||||
)
|
||||
```
|
||||
|
||||
3. **Add Example Strategy**
|
||||
```python
|
||||
# Create in example_strategies.py
|
||||
def create_grid_trading_strategy() -> StrategyExample:
|
||||
config = StrategyChartConfig(...)
|
||||
return StrategyExample(
|
||||
config=config,
|
||||
description="Grid trading strategy description...",
|
||||
difficulty="Intermediate",
|
||||
risk_level="Medium"
|
||||
)
|
||||
```
|
||||
|
||||
The configuration system is designed to be highly extensible while maintaining type safety and comprehensive validation. All additions should follow the established patterns and include appropriate tests.
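The closing note asks that additions include appropriate tests. As an illustration only, a unit test for a hypothetical timeframe-consistency rule might look like the sketch below; the `ValidationReport`/`ValidationIssue` classes here are reduced stand-ins for the project's real validation classes, and the rule itself is invented for the example:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

# Stand-ins for the project's validation classes, reduced to what the test needs.
class ValidationLevel(str, Enum):
    WARNING = "warning"
    ERROR = "error"

@dataclass
class ValidationIssue:
    level: ValidationLevel
    rule: str
    message: str
    suggestion: str = ""

@dataclass
class ValidationReport:
    issues: List[ValidationIssue] = field(default_factory=list)

    def add_issue(self, issue: ValidationIssue) -> None:
        self.issues.append(issue)

    @property
    def warnings(self) -> List[ValidationIssue]:
        return [i for i in self.issues if i.level is ValidationLevel.WARNING]

def validate_custom_rule(timeframes: List[str], report: ValidationReport) -> None:
    """Example rule: warn when a config mixes very short and very long timeframes."""
    if "1m" in timeframes and "1d" in timeframes:
        report.add_issue(ValidationIssue(
            level=ValidationLevel.WARNING,
            rule="custom_rule",
            message="1m and 1d timeframes rarely suit the same strategy",
            suggestion="Split into separate scalping and swing configs",
        ))

def test_custom_rule_flags_mixed_timeframes():
    report = ValidationReport()
    validate_custom_rule(["1m", "1d"], report)
    assert len(report.warnings) == 1

def test_custom_rule_passes_consistent_timeframes():
    report = ValidationReport()
    validate_custom_rule(["15m", "1h"], report)
    assert report.warnings == []
```

Each rule gets one test for the case it should flag and one for the case it should pass.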
313
docs/modules/charts/indicators.md
Normal file
# Indicator System Documentation

## Overview

The Crypto Trading Bot Dashboard features a comprehensive modular indicator system that allows users to create, customize, and manage technical indicators for chart analysis. The system supports both overlay indicators (displayed on the main price chart) and subplot indicators (displayed in separate panels below the main chart).

## Table of Contents

1. [System Architecture](#system-architecture)
2. [Current Indicators](#current-indicators)
3. [User Interface](#user-interface)
4. [File Structure](#file-structure)
5. [Adding New Indicators](#adding-new-indicators)
6. [Configuration Format](#configuration-format)
7. [API Reference](#api-reference)
8. [Troubleshooting](#troubleshooting)

## System Architecture

### Core Components

```
components/charts/
├── indicator_manager.py     # Core indicator CRUD operations
├── indicator_defaults.py    # Default indicator templates
├── layers/
│   ├── indicators.py        # Overlay indicator rendering
│   └── subplots.py          # Subplot indicator rendering
└── config/
    └── indicator_defs.py    # Indicator definitions and schemas

config/indicators/
└── user_indicators/         # User-created indicators (JSON files)
    ├── sma_abc123.json
    ├── ema_def456.json
    └── ...
```

### Key Classes

- **`IndicatorManager`**: Handles CRUD operations for user indicators
- **`UserIndicator`**: Data structure for indicator configuration
- **`IndicatorStyling`**: Appearance and styling configuration
- **Indicator Layers**: Rendering classes for different indicator types

## Current Indicators

### Overlay Indicators
These indicators are displayed directly on the price chart:

| Indicator | Type | Parameters | Description |
|-----------|------|------------|-------------|
| **Simple Moving Average (SMA)** | `sma` | `period` (1-200) | Average price over N periods |
| **Exponential Moving Average (EMA)** | `ema` | `period` (1-200) | Weighted average giving more weight to recent prices |
| **Bollinger Bands** | `bollinger_bands` | `period` (5-100), `std_dev` (0.5-5.0) | Price channels based on standard deviation |

### Subplot Indicators
These indicators are displayed in separate panels:

| Indicator | Type | Parameters | Description |
|-----------|------|------------|-------------|
| **Relative Strength Index (RSI)** | `rsi` | `period` (2-50) | Momentum oscillator (0-100 scale) |
| **MACD** | `macd` | `fast_period` (2-50), `slow_period` (5-100), `signal_period` (2-30) | Moving average convergence divergence |
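For reference, the arithmetic behind the SMA and EMA rows above can be sketched in plain Python. This illustrates the formulas only; the dashboard's own calculations live in the indicator layers and may differ in detail (seeding, NaN handling):

```python
def sma(prices, period):
    """Simple moving average: None until a full window is available."""
    out = []
    for i in range(len(prices)):
        if i + 1 < period:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - period:i + 1]) / period)
    return out

def ema(prices, period):
    """Exponential moving average, seeded with the first price."""
    alpha = 2 / (period + 1)
    out = [prices[0]]
    for price in prices[1:]:
        out.append(alpha * price + (1 - alpha) * out[-1])
    return out
```

The EMA uses the standard smoothing factor `2 / (period + 1)`, which is what "more weight to recent prices" means in practice.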
## User Interface

### Adding Indicators

1. **Click "➕ Add New Indicator"** button
2. **Configure Basic Settings**:
   - Name: Custom name for the indicator
   - Type: Select from available indicator types
   - Description: Optional description
3. **Set Parameters**: Type-specific parameters appear dynamically
4. **Customize Styling**:
   - Color: Hex color code
   - Line Width: 1-5 pixels
5. **Save**: Creates a new JSON file and updates the UI

### Managing Indicators

- **✅ Checkboxes**: Toggle indicator visibility on chart
- **✏️ Edit Button**: Modify existing indicator settings
- **🗑️ Delete Button**: Remove indicator permanently

### Real-time Updates

- Chart updates automatically when indicators are toggled
- Changes are saved immediately to JSON files
- No page refresh required

## File Structure

### Indicator JSON Format

```json
{
  "id": "ema_ca5fd53d",
  "name": "EMA 10",
  "description": "10-period Exponential Moving Average for fast signals",
  "type": "ema",
  "display_type": "overlay",
  "parameters": {
    "period": 10
  },
  "styling": {
    "color": "#ff6b35",
    "line_width": 2,
    "opacity": 1.0,
    "line_style": "solid"
  },
  "visible": true,
  "created_date": "2025-06-04T04:16:35.455729+00:00",
  "modified_date": "2025-06-04T04:54:49.608549+00:00"
}
```

### Directory Structure

```
config/indicators/
└── user_indicators/
    ├── sma_abc123.json      # Individual indicator files
    ├── ema_def456.json
    ├── rsi_ghi789.json
    └── macd_jkl012.json
```

## Adding New Indicators

Developers who want to add new indicator types to the system should refer to the comprehensive step-by-step guide:

**📋 [Quick Guide: Adding New Indicators (`adding-new-indicators.md`)](./adding-new-indicators.md)**

This guide covers:
- ✅ Complete 11-step implementation checklist
- ✅ Full code examples (Stochastic Oscillator implementation)
- ✅ File modification requirements
- ✅ Testing checklist and common patterns
- ✅ Tips and best practices

## Configuration Format

### User Indicator Structure

```python
@dataclass
class UserIndicator:
    id: str                      # Unique identifier
    name: str                    # Display name
    description: str             # User description
    type: str                    # Indicator type (sma, ema, etc.)
    display_type: str            # "overlay" or "subplot"
    parameters: Dict[str, Any]   # Type-specific parameters
    styling: IndicatorStyling    # Appearance settings
    created_date: datetime       # Creation timestamp
    modified_date: datetime      # Last modification timestamp
    visible: bool = True         # Default visibility
```

### Styling Options

```python
@dataclass
class IndicatorStyling:
    color: str = "#007bff"       # Hex color code
    line_width: int = 2          # Line thickness (1-5)
    opacity: float = 1.0         # Transparency (0.0-1.0)
    line_style: str = "solid"    # Line style
```

### Parameter Examples

```python
# SMA/EMA Parameters
{"period": 20}

# RSI Parameters
{"period": 14}

# MACD Parameters
{
    "fast_period": 12,
    "slow_period": 26,
    "signal_period": 9
}

# Bollinger Bands Parameters
{
    "period": 20,
    "std_dev": 2.0
}
```
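Putting the two dataclasses together, an indicator can be serialized into the JSON file format shown earlier. The dataclasses below are local stand-ins mirroring the definitions above (with timestamps written as ISO strings, as in the example file); `to_json` is an illustrative helper, not a project API:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Any, Dict

@dataclass
class IndicatorStyling:
    color: str = "#007bff"
    line_width: int = 2
    opacity: float = 1.0
    line_style: str = "solid"

@dataclass
class UserIndicator:
    id: str
    name: str
    description: str
    type: str
    display_type: str                # "overlay" or "subplot"
    parameters: Dict[str, Any]
    styling: IndicatorStyling = field(default_factory=IndicatorStyling)
    visible: bool = True

def to_json(indicator: UserIndicator) -> str:
    """Serialize an indicator into the user_indicators/ JSON file shape."""
    payload = asdict(indicator)  # also converts the nested styling dataclass
    payload["created_date"] = datetime.now(timezone.utc).isoformat()
    return json.dumps(payload, indent=2)

ema10 = UserIndicator(
    id="ema_ca5fd53d", name="EMA 10",
    description="10-period Exponential Moving Average for fast signals",
    type="ema", display_type="overlay", parameters={"period": 10},
)
```

Writing the result to `config/indicators/user_indicators/<id>.json` reproduces the per-file layout shown above.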
## API Reference

### IndicatorManager Class

```python
class IndicatorManager:
    def create_indicator(self, name: str, indicator_type: str,
                         parameters: Dict[str, Any], **kwargs) -> Optional[UserIndicator]

    def load_indicator(self, indicator_id: str) -> Optional[UserIndicator]

    def update_indicator(self, indicator_id: str, **kwargs) -> bool

    def delete_indicator(self, indicator_id: str) -> bool

    def list_indicators(self) -> List[UserIndicator]

    def get_indicators_by_type(self, display_type: str) -> List[UserIndicator]
```

### Usage Examples

```python
# Get indicator manager
manager = get_indicator_manager()

# Create new indicator
indicator = manager.create_indicator(
    name="My SMA 50",
    indicator_type="sma",
    parameters={"period": 50},
    description="50-period Simple Moving Average",
    color="#ff0000"
)

# Load indicator
loaded = manager.load_indicator("sma_abc123")

# Update indicator
success = manager.update_indicator(
    "sma_abc123",
    name="Updated SMA",
    parameters={"period": 30}
)

# Delete indicator
deleted = manager.delete_indicator("sma_abc123")

# List all indicators
all_indicators = manager.list_indicators()

# Get by type
overlay_indicators = manager.get_indicators_by_type("overlay")
subplot_indicators = manager.get_indicators_by_type("subplot")
```

## Troubleshooting

### Common Issues

1. **Indicator not appearing in dropdown**
   - Check if registered in `INDICATOR_REGISTRY`
   - Verify the indicator type matches the class name

2. **Parameters not saving**
   - Ensure parameter fields are added to save callback
   - Check parameter collection logic in `save_new_indicator`

3. **Chart not updating**
   - Verify the indicator layer implements `calculate_values` and `create_traces`
   - Check if indicator is registered in the correct registry

4. **File permission errors**
   - Ensure `config/indicators/user_indicators/` directory is writable
   - Check file permissions on existing JSON files

### Debug Information

- Check browser console for JavaScript errors
- Look at application logs for Python exceptions
- Verify JSON file structure with a validator
- Test indicator calculations with sample data
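A small standalone checker along these lines can be used to verify JSON file structure; the required-key set mirrors the format shown earlier, and the function names are illustrative rather than project APIs:

```python
import json

REQUIRED_KEYS = {"id", "name", "type", "display_type", "parameters", "styling"}

def check_indicator_dict(data):
    """Return a list of problems with one parsed indicator definition."""
    problems = []
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        problems.append("missing keys: " + ", ".join(sorted(missing)))
    if data.get("display_type") not in ("overlay", "subplot"):
        problems.append("display_type must be 'overlay' or 'subplot'")
    if not isinstance(data.get("parameters"), dict):
        problems.append("parameters must be an object")
    return problems

def check_indicator_file(path):
    """Load a JSON file and run the structural checks on it."""
    try:
        with open(path) as fh:
            return check_indicator_dict(json.load(fh))
    except (OSError, json.JSONDecodeError) as exc:
        return [f"unreadable: {exc}"]
```

Running this over every file in `config/indicators/user_indicators/` quickly surfaces hand-edited files that the dashboard would reject.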
### Performance Considerations

- Indicators with large periods may take longer to calculate
- Consider data availability when setting parameter limits
- Subplot indicators require additional chart space
- Real-time updates may impact performance with many indicators

## Best Practices

1. **Naming Conventions**
   - Use descriptive names for indicators
   - Include parameter values in names (e.g., "SMA 20")
   - Use consistent naming patterns

2. **Parameter Validation**
   - Set appropriate min/max values for parameters
   - Provide helpful descriptions for parameters
   - Use sensible default values

3. **Error Handling**
   - Handle insufficient data gracefully
   - Provide meaningful error messages
   - Log errors for debugging

4. **Performance**
   - Cache calculated values when possible
   - Optimize calculation algorithms
   - Limit the number of active indicators

5. **User Experience**
   - Provide immediate visual feedback
   - Use intuitive color schemes
   - Group related indicators logically
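"Cache calculated values when possible" can be as simple as memoising the calculation on immutable inputs. A minimal sketch using `functools.lru_cache`, with prices passed as a tuple so the arguments are hashable; the project's actual caching strategy may differ:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def cached_sma(prices: tuple, period: int) -> tuple:
    """Memoised SMA; identical (prices, period) calls hit the cache."""
    out = []
    for i in range(len(prices)):
        if i + 1 < period:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - period:i + 1]) / period)
    return tuple(out)
```

`cached_sma.cache_info()` reports hits and misses, which is useful when checking whether toggling an indicator actually reuses earlier results.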
---

*Back to [Chart System Documentation](./README.md)*
280
docs/modules/charts/quick-reference.md
Normal file
# Chart System Quick Reference

## Quick Start

### Import Everything You Need
```python
from components.charts.config import (
    # Example strategies
    create_ema_crossover_strategy,
    get_all_example_strategies,

    # Configuration
    StrategyChartConfig,
    create_custom_strategy_config,
    validate_configuration,

    # Indicators
    get_all_default_indicators,
    get_indicators_by_category,
    IndicatorCategory,
    TradingStrategy
)
```

### Use Pre-built Strategy
```python
# Get EMA crossover strategy
strategy = create_ema_crossover_strategy()
config = strategy.config

# Validate before use
report = validate_configuration(config)
if report.is_valid:
    print("✅ Ready to use!")
else:
    print(f"❌ Errors: {[str(e) for e in report.errors]}")
```

### Create Custom Strategy
```python
config, errors = create_custom_strategy_config(
    strategy_name="My Strategy",
    strategy_type=TradingStrategy.DAY_TRADING,
    description="Custom day trading strategy",
    timeframes=["15m", "1h"],
    overlay_indicators=["ema_12", "ema_26"],
    subplot_configs=[{
        "subplot_type": "rsi",
        "height_ratio": 0.2,
        "indicators": ["rsi_14"]
    }]
)
```

## Available Indicators

### Trend Indicators
- `sma_5`, `sma_10`, `sma_20`, `sma_50`, `sma_100`, `sma_200`
- `ema_5`, `ema_12`, `ema_21`, `ema_26`, `ema_50`, `ema_100`, `ema_200`

### Momentum Indicators
- `rsi_7`, `rsi_14`, `rsi_21`
- `macd_5_13_4`, `macd_8_17_6`, `macd_12_26_9`, `macd_19_39_13`

### Volatility Indicators
- `bb_10_15`, `bb_20_15`, `bb_20_20`, `bb_50_20`

## Example Strategies

### 1. EMA Crossover (Intermediate, Medium Risk)
```python
strategy = create_ema_crossover_strategy()
# Uses: EMA 12/26/50, RSI 14, MACD, Bollinger Bands
# Best for: Trending markets, 15m-4h timeframes
```

### 2. Momentum Breakout (Advanced, High Risk)
```python
strategy = create_momentum_breakout_strategy()
# Uses: EMA 8/21, Fast RSI/MACD, Volume
# Best for: Volatile markets, 5m-1h timeframes
```

### 3. Mean Reversion (Intermediate, Medium Risk)
```python
strategy = create_mean_reversion_strategy()
# Uses: SMA 20/50, Multiple RSI, Tight BB
# Best for: Ranging markets, 15m-4h timeframes
```

### 4. Scalping (Advanced, High Risk)
```python
strategy = create_scalping_strategy()
# Uses: Ultra-fast EMAs, RSI 7, Fast MACD
# Best for: High liquidity, 1m-5m timeframes
```

### 5. Swing Trading (Beginner, Medium Risk)
```python
strategy = create_swing_trading_strategy()
# Uses: SMA 20/50, Standard indicators
# Best for: Trending markets, 4h-1d timeframes
```

## Strategy Filtering

### By Difficulty
```python
beginner = get_strategies_by_difficulty("Beginner")
intermediate = get_strategies_by_difficulty("Intermediate")
advanced = get_strategies_by_difficulty("Advanced")
```

### By Risk Level
```python
low_risk = get_strategies_by_risk_level("Low")
medium_risk = get_strategies_by_risk_level("Medium")
high_risk = get_strategies_by_risk_level("High")
```

### By Market Condition
```python
trending = get_strategies_by_market_condition("Trending")
sideways = get_strategies_by_market_condition("Sideways")
volatile = get_strategies_by_market_condition("Volatile")
```
## Validation Quick Checks

### Basic Validation
```python
is_valid, errors = config.validate()
if not is_valid:
    for error in errors:
        print(f"❌ {error}")
```

### Comprehensive Validation
```python
report = validate_configuration(config)

# Errors (must fix)
for error in report.errors:
    print(f"🚨 {error}")

# Warnings (recommended)
for warning in report.warnings:
    print(f"⚠️ {warning}")

# Info (optional)
for info in report.info:
    print(f"ℹ️ {info}")
```

## JSON Export/Import

### Export Strategy
```python
json_data = export_strategy_config_to_json(config)
```

### Import Strategy
```python
config, errors = load_strategy_config_from_json(json_data)
```

### Export All Examples
```python
all_strategies_json = export_example_strategies_to_json()
```

## Common Patterns

### Get Strategy Summary
```python
summary = get_strategy_summary()
for name, info in summary.items():
    print(f"{name}: {info['difficulty']} - {info['risk_level']}")
```

### List Available Indicators
```python
indicators = get_all_default_indicators()
for name, preset in indicators.items():
    print(f"{name}: {preset.description}")
```

### Filter by Category
```python
trend_indicators = get_indicators_by_category(IndicatorCategory.TREND)
momentum_indicators = get_indicators_by_category(IndicatorCategory.MOMENTUM)
```

## Configuration Structure

### Strategy Config
```python
StrategyChartConfig(
    strategy_name="Strategy Name",
    strategy_type=TradingStrategy.DAY_TRADING,
    description="Strategy description",
    timeframes=["15m", "1h"],
    overlay_indicators=["ema_12", "ema_26"],
    subplot_configs=[
        {
            "subplot_type": "rsi",
            "height_ratio": 0.2,
            "indicators": ["rsi_14"]
        }
    ]
)
```

### Subplot Types
- `"rsi"` - RSI oscillator
- `"macd"` - MACD with histogram
- `"volume"` - Volume bars

### Timeframe Formats
- `"1m"`, `"5m"`, `"15m"`, `"30m"`
- `"1h"`, `"2h"`, `"4h"`, `"6h"`, `"12h"`
- `"1d"`, `"1w"`, `"1M"`
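A validator for these formats might convert each timeframe to minutes, which also gives a natural sort order. This is a sketch, not the project's validator; the set below simply mirrors the list above, and a month is approximated as 30 days:

```python
import re

VALID_TIMEFRAMES = {
    "1m", "5m", "15m", "30m",
    "1h", "2h", "4h", "6h", "12h",
    "1d", "1w", "1M",
}

_TIMEFRAME_RE = re.compile(r"^(\d+)([mhdwM])$")

def timeframe_to_minutes(tf: str) -> int:
    """Convert a timeframe string to minutes, rejecting non-standard formats."""
    if tf not in VALID_TIMEFRAMES:
        raise ValueError(f"invalid timeframe: {tf!r}")
    value, unit = _TIMEFRAME_RE.match(tf).groups()
    scale = {"m": 1, "h": 60, "d": 1440, "w": 10080, "M": 43200}
    return int(value) * scale[unit]
```

Sorting a config's timeframes with `key=timeframe_to_minutes` keeps them in ascending order regardless of unit.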
## Error Handling

### Common Errors
1. **"Indicator not found"** - Check available indicators list
2. **"Height ratios exceed 1.0"** - Adjust main_chart_height and subplot ratios
3. **"Invalid timeframe"** - Use standard timeframe formats

### Validation Rules
1. Required fields present
2. Height ratios sum ≤ 1.0
3. Indicators exist in defaults
4. Valid timeframe formats
5. Chart style validation
6. Subplot configuration
7. Strategy consistency
8. Performance impact
9. Indicator conflicts
10. Resource usage
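Rule 2 can be checked with a one-liner; `main_chart_height` is taken from the error message above, and a small tolerance absorbs floating-point rounding:

```python
def check_height_ratios(main_chart_height, subplot_configs, tolerance=1e-9):
    """True when the main chart plus all subplot height ratios fit into 1.0."""
    total = main_chart_height + sum(c["height_ratio"] for c in subplot_configs)
    return total <= 1.0 + tolerance
```

Without the tolerance, a layout like 0.6 + 0.2 + 0.2 can spuriously fail because the float sum lands a hair above 1.0.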
## Best Practices

### Strategy Design
- Start with proven strategies (EMA crossover)
- Match timeframes to strategy type
- Balance indicator categories (trend + momentum + volume)
- Consider performance impact (<10 indicators)

### Validation
- Always validate before use
- Address all errors
- Consider warnings for optimization
- Test with edge cases

### Performance
- Limit complex indicators (Bollinger Bands)
- Monitor resource usage warnings
- Cache validated configurations
- Use appropriate timeframes for strategy type

## Testing Commands

```bash
# Test all chart components
pytest tests/test_*_strategies.py -v
pytest tests/test_validation.py -v
pytest tests/test_defaults.py -v

# Test specific component
pytest tests/test_example_strategies.py::TestEMACrossoverStrategy -v
```

## File Locations

- **Main config**: `components/charts/config/`
- **Documentation**: `docs/modules/charts/`
- **Tests**: `tests/test_*_strategies.py`
- **Examples**: `components/charts/config/example_strategies.py`
302
docs/modules/dashboard-modular-structure.md
Normal file
# Dashboard Modular Structure Documentation

## Overview

The Crypto Trading Bot Dashboard has been refactored into a modular architecture for better maintainability, scalability, and development efficiency. This document outlines the new structure and how to work with it.

## Architecture

### Directory Structure

```
dashboard/
├── __init__.py              # Package initialization
├── app.py                   # Main app creation and configuration
├── layouts/                 # UI layout modules
│   ├── __init__.py
│   ├── market_data.py       # Market data visualization layout
│   ├── bot_management.py    # Bot management interface layout
│   ├── performance.py       # Performance analytics layout
│   └── system_health.py     # System health monitoring layout
├── callbacks/               # Dash callback modules
│   ├── __init__.py
│   ├── navigation.py        # Tab navigation callbacks
│   ├── charts.py            # Chart-related callbacks
│   ├── indicators.py        # Indicator management callbacks
│   └── system_health.py     # System health callbacks
└── components/              # Reusable UI components
    ├── __init__.py
    ├── indicator_modal.py   # Indicator creation/editing modal
    └── chart_controls.py    # Chart configuration controls
```

## Key Components

### 1. Main Application (`dashboard/app.py`)

**Purpose**: Creates and configures the main Dash application.

**Key Functions**:
- `create_app()`: Initializes Dash app with main layout
- `register_callbacks()`: Registers all callback modules

**Features**:
- Centralized app configuration
- Main navigation structure
- Global components (modals, intervals)
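The registration pattern behind `register_callbacks()` can be sketched without Dash itself; `RecordingApp` below is a stand-in for `dash.Dash` that merely records what was registered, and the registrar function is illustrative:

```python
class RecordingApp:
    """Minimal stand-in for dash.Dash: records registered callback names."""
    def __init__(self):
        self.callbacks = []

    def callback(self, *outputs):
        def decorator(func):
            self.callbacks.append(func.__name__)
            return func
        return decorator

def register_callbacks(app, registrars):
    """Give every callback module's register function the shared app instance."""
    for register in registrars:
        register(app)
    return app

def register_navigation_callbacks(app):
    # In the real module this wires Input('tabs', 'value') to the tab content.
    @app.callback("tab-content")
    def render_tab(active_tab):
        return active_tab

app = register_callbacks(RecordingApp(), [register_navigation_callbacks])
```

The point of the pattern is that each callback module exposes one `register_*` function taking the app, so `app.py` stays a short list of registrations.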
### 2. Layout Modules (`dashboard/layouts/`)

**Purpose**: Define UI layouts for different dashboard sections.

#### Market Data Layout (`market_data.py`)
- Symbol and timeframe selection
- Chart configuration panel with indicator management
- Parameter controls for indicator customization
- Real-time chart display
- Market statistics

#### Bot Management Layout (`bot_management.py`)
- Bot status overview
- Bot control interface (placeholder for Phase 4.0)

#### Performance Layout (`performance.py`)
- Portfolio performance metrics (placeholder for Phase 6.0)

#### System Health Layout (`system_health.py`)
- Database status monitoring
- Data collection status
- Redis status monitoring

### 3. Callback Modules (`dashboard/callbacks/`)

**Purpose**: Handle user interactions and data updates.

#### Navigation Callbacks (`navigation.py`)
- Tab switching logic
- Content rendering based on active tab

#### Chart Callbacks (`charts.py`)
- Chart data updates
- Strategy selection handling
- Market statistics updates

#### Indicator Callbacks (`indicators.py`)
- Complete indicator modal management
- CRUD operations for custom indicators
- Parameter field dynamics
- Checkbox synchronization
- Edit/delete functionality

#### System Health Callbacks (`system_health.py`)
- Database status monitoring
- Data collection status updates
- Redis status checks

### 4. UI Components (`dashboard/components/`)

**Purpose**: Reusable UI components for consistent design.

#### Indicator Modal (`indicator_modal.py`)
- Complete indicator creation/editing interface
- Dynamic parameter fields
- Styling controls
- Form validation

#### Chart Controls (`chart_controls.py`)
- Chart configuration panel
- Parameter control sliders
- Auto-update controls
## Benefits of Modular Structure

### 1. **Maintainability**
- **Separation of Concerns**: Each module has a specific responsibility
- **Smaller Files**: Easier to navigate and understand (most under 300 lines)
- **Clear Dependencies**: Explicit imports show component relationships

### 2. **Scalability**
- **Easy Extension**: Add new layouts/callbacks without touching existing code
- **Parallel Development**: Multiple developers can work on different modules
- **Component Reusability**: UI components can be shared across layouts

### 3. **Testing**
- **Unit Testing**: Each module can be tested independently
- **Mock Dependencies**: Easier to mock specific components for testing
- **Isolated Debugging**: Issues can be traced to specific modules

### 4. **Code Organization**
- **Logical Grouping**: Related functionality is grouped together
- **Consistent Structure**: Predictable file organization
- **Documentation**: Each module can have focused documentation

## Migration from Monolithic Structure

### Before (app.py - 1523 lines)
```python
# Single large file with:
# - All layouts mixed together
# - All callbacks in one place
# - UI components embedded in layouts
# - Difficult to navigate and maintain
```

### After (Modular Structure)
```python
# dashboard/app.py (73 lines)
# dashboard/layouts/market_data.py (124 lines)
# dashboard/components/indicator_modal.py (290 lines)
# dashboard/callbacks/navigation.py (32 lines)
# dashboard/callbacks/charts.py (122 lines)
# dashboard/callbacks/indicators.py (590 lines)
# dashboard/callbacks/system_health.py (88 lines)
# ... and so on
```
## Development Workflow

### Adding a New Layout

1. **Create Layout Module**:
   ```python
   # dashboard/layouts/new_feature.py
   def get_new_feature_layout():
       return html.Div([...])
   ```

2. **Update Layout Package**:
   ```python
   # dashboard/layouts/__init__.py
   from .new_feature import get_new_feature_layout
   ```

3. **Add Navigation**:
   ```python
   # dashboard/callbacks/navigation.py
   elif active_tab == 'new-feature':
       return get_new_feature_layout()
   ```

### Adding New Callbacks

1. **Create Callback Module**:
   ```python
   # dashboard/callbacks/new_feature.py
   def register_new_feature_callbacks(app):
       @app.callback(...)
       def callback_function(...):
           pass
   ```

2. **Register Callbacks**:
   ```python
   # dashboard/app.py or main app file
   from dashboard.callbacks import register_new_feature_callbacks
   register_new_feature_callbacks(app)
   ```

### Creating Reusable Components

1. **Create Component Module**:
   ```python
   # dashboard/components/new_component.py
   def create_new_component(params):
       return html.Div([...])
   ```

2. **Export Component**:
   ```python
   # dashboard/components/__init__.py
   from .new_component import create_new_component
   ```

3. **Use in Layouts**:
   ```python
   # dashboard/layouts/some_layout.py
   from dashboard.components import create_new_component
   ```
## Best Practices

### 1. **File Organization**
- Keep files under 300-400 lines
- Use descriptive module names
- Group related functionality together

### 2. **Import Management**
- Use explicit imports
- Avoid circular dependencies
- Import only what you need

### 3. **Component Design**
- Make components reusable
- Use parameters for customization
- Include proper documentation

### 4. **Callback Organization**
- Group related callbacks in same module
- Use descriptive function names
- Include error handling

### 5. **Testing Strategy**
- Test each module independently
- Mock external dependencies
- Use consistent testing patterns

## Current Status

### ✅ **Completed**
- ✅ Modular directory structure
- ✅ Layout modules extracted
- ✅ UI components modularized
- ✅ Navigation callbacks implemented
- ✅ Chart callbacks extracted and working
- ✅ Indicator callbacks extracted and working
- ✅ System health callbacks extracted and working
- ✅ All imports fixed and dependencies resolved
- ✅ Modular dashboard fully functional

### 📋 **Next Steps**
1. Implement comprehensive testing for each module
2. Add error handling and validation improvements
3. Create development guidelines
4. Update deployment scripts
5. Performance optimization for large datasets

## Usage

### Running the Modular Dashboard

```bash
# Use the new modular version
uv run python app_new.py

# Original monolithic version (for comparison)
uv run python app.py
```

### Development Mode

```bash
# The modular structure supports hot reloading:
# changes to individual modules are reflected immediately
```

## Conclusion

The modular dashboard structure migration has been **successfully completed**! All functionality from the original 1523-line monolithic application has been extracted into clean, maintainable modules while preserving all existing features, including:

- Complete indicator management system (CRUD operations)
- Chart visualization with dynamic indicators
- Strategy selection and auto-loading
- System health monitoring
- Real-time data updates
- Professional UI with modals and controls

> **Note on UI Components:** While the modular structure is in place, many UI sections, such as the **Bot Management** and **Performance** layouts, are currently placeholders. The controls and visualizations for these features will be implemented once the corresponding backend components (Bot Manager, Strategy Engine) are developed.

This architecture provides a solid foundation for future development while maintaining all existing functionality. The separation of concerns makes the codebase more maintainable and allows for easier collaboration and testing.

**The modular dashboard is now production-ready and fully functional!** 🚀

---

*Back to [Modules Documentation](../README.md)*
215
docs/modules/data_collectors.md
Normal file
|
||||
# Enhanced Data Collector System

This documentation describes the enhanced data collector system, featuring a modular architecture, centralized management, and robust health monitoring.

## Table of Contents

- [Overview](#overview)
- [System Architecture](#system-architecture)
- [Core Components](#core-components)
- [Exchange Factory](#exchange-factory)
- [Health Monitoring](#health-monitoring)
- [API Reference](#api-reference)
- [Troubleshooting](#troubleshooting)

## Overview

### Key Features

- **Modular Exchange Integration**: Easily add new exchanges without impacting core logic
- **Centralized Management**: `CollectorManager` for system-wide control
- **Robust Health Monitoring**: Automatic restarts and failure detection
- **Factory Pattern**: Standardized creation of collector instances
- **Asynchronous Operations**: High-performance data collection
- **Comprehensive Logging**: Detailed component-level logging

### Supported Exchanges

- **OKX**: Full implementation with WebSocket support
- **Binance**: Planned
- **Coinbase**: Planned

For exchange-specific documentation, see [Exchange Implementations (`./exchanges/`)](./exchanges/).

## System Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                    TCP Dashboard Platform                   │
│                                                             │
│  ┌─────────────────────────────────────────────────────┐   │
│  │                 CollectorManager                    │   │
│  │  • Centralized start/stop/status control            │   │
│  │                                                      │   │
│  │  ┌─────────────────────────────────────────────────┐│   │
│  │  │            Global Health Monitor                ││   │
│  │  │  • System-wide health checks                    ││   │
│  │  │  • Auto-restart coordination                    ││   │
│  │  │  • Performance analytics                        ││   │
│  │  └─────────────────────────────────────────────────┘│   │
│  │                        │                             │   │
│  │  ┌─────────────┐ ┌─────────────┐ ┌────────────────┐ │   │
│  │  │OKX Collector│ │Binance Coll.│ │Custom Collector│ │   │
│  │  │• Health Mon │ │• Health Mon │ │• Health Monitor│ │   │
│  │  │• Auto-restrt│ │• Auto-restrt│ │• Auto-restart  │ │   │
│  │  │• Data Valid │ │• Data Valid │ │• Data Validate │ │   │
│  │  └─────────────┘ └─────────────┘ └────────────────┘ │   │
│  └─────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
```

## Core Components

### 1. `BaseDataCollector`

An abstract base class that defines the common interface for all exchange collectors.

**Key Responsibilities:**
- Standardized `start`, `stop`, `restart` methods
- Built-in health monitoring with heartbeat and data silence detection
- Automatic reconnect and restart logic
- Asynchronous message handling

### 2. `CollectorManager`

A singleton class that manages all active data collectors in the system.

**Key Responsibilities:**
- Centralized `start` and `stop` for all collectors
- System-wide status aggregation
- Global health monitoring
- Coordination of restart policies

### 3. Exchange-Specific Collectors

Concrete implementations of `BaseDataCollector` for each exchange (e.g., `OKXCollector`).

**Key Responsibilities:**
- Handle exchange-specific WebSocket protocols
- Parse and standardize incoming data
- Implement exchange-specific authentication
- Define subscription messages for different data types

For more details, see [OKX Collector Documentation (`./exchanges/okx.md`)](./exchanges/okx.md).

## Exchange Factory

The `ExchangeFactory` provides a standardized way to create data collectors, decoupling the client code from specific implementations.

### Features

- **Simplified Creation**: Single function to create any supported collector
- **Configuration Driven**: Uses `ExchangeCollectorConfig` for flexible setup
- **Validation**: Validates configuration before creating a collector
- **Extensible**: Easily register new exchange collectors

### Usage

```python
from data.exchanges import ExchangeFactory, ExchangeCollectorConfig
from data.common import DataType

# Create config for OKX collector
config = ExchangeCollectorConfig(
    exchange="okx",
    symbol="BTC-USDT",
    data_types=[DataType.TRADE, DataType.ORDERBOOK],
    auto_restart=True
)

# Create collector using the factory
try:
    collector = ExchangeFactory.create_collector(config)
    # Use the collector (from within an async function)
    await collector.start()
except ValueError as e:
    print(f"Error creating collector: {e}")

# Create multiple collectors
configs = [...]
collectors = ExchangeFactory.create_multiple_collectors(configs)
```

## Health Monitoring

The system includes a robust, two-level health monitoring system.

### 1. Collector-Level Monitoring

Each `BaseDataCollector` instance has its own health monitoring.

**Key Metrics:**
- **Heartbeat**: Regular internal signal to confirm the collector is responsive
- **Data Silence**: Tracks time since last message to detect frozen connections
- **Restart Count**: Number of automatic restarts
- **Connection Status**: Tracks WebSocket connection state

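The data-silence metric above can be sketched in a few lines. This is a simplified, self-contained illustration of the idea only; the real check lives inside `BaseDataCollector`, and the class and threshold below are illustrative assumptions rather than the project's actual API:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

class SilenceDetector:
    """Simplified stand-in for a collector's data-silence check."""

    def __init__(self, max_silence_seconds: float = 30.0):
        self.max_silence = timedelta(seconds=max_silence_seconds)
        self.last_message_at = datetime.now(timezone.utc)

    def record_message(self) -> None:
        # Called by the message handler on every incoming WebSocket message.
        self.last_message_at = datetime.now(timezone.utc)

    def is_silent(self, now: Optional[datetime] = None) -> bool:
        # True when no message arrived within the allowed window, which
        # signals a possibly frozen connection and triggers a restart.
        now = now or datetime.now(timezone.utc)
        return now - self.last_message_at > self.max_silence

detector = SilenceDetector(max_silence_seconds=30.0)
detector.record_message()
later = detector.last_message_at + timedelta(seconds=45)
print(detector.is_silent(now=later))  # True: 45s of silence exceeds the 30s window
```
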
### 2. Manager-Level Monitoring

The `CollectorManager` provides a global view of system health.

**Key Metrics:**
- **Aggregate Status**: Overview of all collectors (running, stopped, failed)
- **System Uptime**: Total uptime for the collector system
- **Failed Collectors**: List of collectors that failed to restart
- **Resource Usage**: (Future) System-level CPU and memory monitoring

### Health Status API

```python
# Get status of a single collector
status = collector.get_status()
health = collector.get_health_status()

# Get status of the entire system
system_status = manager.get_status()
```

For detailed status schemas, refer to the [Reference Documentation (`../../reference/README.md`)](../../reference/README.md).

## API Reference

### `BaseDataCollector`
- `async start()`
- `async stop()`
- `async restart()`
- `get_status() -> dict`
- `get_health_status() -> dict`

### `CollectorManager`
- `add_collector(collector)`
- `async start_all()`
- `async stop_all()`
- `get_status() -> dict`
- `list_collectors() -> list`

### `ExchangeFactory`
- `create_collector(config) -> BaseDataCollector`
- `create_multiple_collectors(configs) -> list`
- `get_supported_exchanges() -> list`

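Putting the `CollectorManager` methods above together, the lifecycle looks roughly like this. The manager and collector below are minimal local stand-ins so the sketch is self-contained and runnable; the real classes expose the same documented method names:

```python
import asyncio

class DummyCollector:
    """Stand-in for a BaseDataCollector subclass (illustration only)."""

    def __init__(self, name: str):
        self.name = name
        self.running = False

    async def start(self) -> None:
        self.running = True

    async def stop(self) -> None:
        self.running = False

    def get_status(self) -> dict:
        return {"name": self.name, "running": self.running}

class MiniManager:
    """Simplified CollectorManager-like registry."""

    def __init__(self):
        self._collectors = []

    def add_collector(self, collector) -> None:
        self._collectors.append(collector)

    async def start_all(self) -> None:
        await asyncio.gather(*(c.start() for c in self._collectors))

    async def stop_all(self) -> None:
        await asyncio.gather(*(c.stop() for c in self._collectors))

    def get_status(self) -> dict:
        return {c.name: c.get_status() for c in self._collectors}

async def main() -> dict:
    manager = MiniManager()
    manager.add_collector(DummyCollector("okx:BTC-USDT"))
    await manager.start_all()
    snapshot = manager.get_status()  # aggregate status while running
    await manager.stop_all()
    return snapshot

status = asyncio.run(main())
print(status)  # {'okx:BTC-USDT': {'name': 'okx:BTC-USDT', 'running': True}}
```
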
## Troubleshooting

### Common Issues

1. **Collector fails to start**
   - **Cause**: Invalid symbol, incorrect API keys, or network issues.
   - **Solution**: Check logs for error messages. Verify configuration and network connectivity.

2. **Collector stops receiving data**
   - **Cause**: WebSocket connection dropped, exchange issues.
   - **Solution**: The health monitor should restart the collector automatically. If not, check logs for reconnect errors.

3. **"Exchange not supported" error**
   - **Cause**: Trying to create a collector for an exchange not registered in the factory.
   - **Solution**: Implement the collector and register it in `data/exchanges/__init__.py`.

### Best Practices

- Use the `CollectorManager` for lifecycle management.
- Always validate configurations before creating collectors.
- Monitor system status regularly using `manager.get_status()`.
- Refer to logs for detailed error analysis.

---

*Back to [Modules Documentation](../README.md)*

545
docs/modules/database_operations.md
Normal file
@@ -0,0 +1,545 @@
# Database Operations Documentation

## Overview

The Database Operations module (`database/operations.py`) provides a clean, centralized interface for all database interactions using the **Repository Pattern**. This approach abstracts SQL complexity from business logic, ensuring maintainable, testable, and consistent database operations across the entire application.

## Key Benefits

### 🏗️ **Clean Architecture**
- **Repository Pattern**: Separates data access logic from business logic
- **Centralized Operations**: All database interactions go through well-defined APIs
- **No Raw SQL**: Business logic never contains direct SQL queries
- **Consistent Interface**: Standardized methods across all database operations

### 🛡️ **Reliability & Safety**
- **Automatic Transaction Management**: Sessions and commits handled automatically
- **Error Handling**: Custom exceptions with proper context
- **Connection Pooling**: Efficient database connection management
- **Session Cleanup**: Automatic session management and cleanup

### 🔧 **Maintainability**
- **Easy Testing**: Repository methods can be easily mocked for testing
- **Database Agnostic**: Can change database implementations without affecting business logic
- **Type Safety**: Full type hints for better IDE support and error detection
- **Logging Integration**: Built-in logging for monitoring and debugging

## Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                    DatabaseOperations                       │
│  ┌─────────────────────────────────────────────────────┐    │
│  │              Health Check & Stats                   │    │
│  │  • Connection health monitoring                     │    │
│  │  • Database statistics                              │    │
│  │  • Performance metrics                              │    │
│  └─────────────────────────────────────────────────────┘    │
│                           │                                  │
│  ┌─────────────────┐ ┌─────────────────┐ ┌──────────────┐   │
│  │MarketDataRepo   │ │RawTradeRepo     │ │ BotRepo      │   │
│  │                 │ │                 │ │              │   │
│  │ • upsert_candle │ │ • insert_data   │ │ • add        │   │
│  │ • get_candles   │ │ • get_trades    │ │ • get_by_id  │   │
│  │ • get_latest    │ │ • raw_websocket │ │ • update/del │   │
│  └─────────────────┘ └─────────────────┘ └──────────────┘   │
└─────────────────────────────────────────────────────────────┘
                           │
                  ┌─────────────────┐
                  │ BaseRepository  │
                  │                 │
                  │ • Session Mgmt  │
                  │ • Error Logging │
                  │ • DB Connection │
                  └─────────────────┘
```

## Quick Start

### Basic Usage

```python
from database.operations import get_database_operations
from data.common.data_types import OHLCVCandle
from datetime import datetime, timezone

# Get the database operations instance (singleton)
db = get_database_operations()

# Check database health
if not db.health_check():
    raise SystemExit("Database connection issue!")

# Store a candle
candle = OHLCVCandle(
    exchange="okx",
    symbol="BTC-USDT",
    timeframe="5s",
    open=50000.0,
    high=50100.0,
    low=49900.0,
    close=50050.0,
    volume=1.5,
    trade_count=25,
    start_time=datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc),
    end_time=datetime(2024, 1, 1, 12, 0, 5, tzinfo=timezone.utc)
)

# Store candle (with duplicate handling)
success = db.market_data.upsert_candle(candle, force_update=False)
if success:
    print("Candle stored successfully!")
```

### With Data Collectors

```python
import asyncio
from data.exchanges.okx import OKXCollector
from data.base_collector import DataType
from database.operations import get_database_operations

async def main():
    # Initialize database operations
    db = get_database_operations()

    # The collector automatically uses the database operations module
    collector = OKXCollector(
        symbols=['BTC-USDT'],
        data_types=[DataType.TRADE],
        store_raw_data=True,         # Stores raw WebSocket data
        force_update_candles=False   # Ignore duplicate candles
    )

    await collector.start()
    await asyncio.sleep(60)  # Collect for 1 minute
    await collector.stop()

    # Check statistics
    stats = db.get_stats()
    print(f"Total bots: {stats['bot_count']}")
    print(f"Total candles: {stats['candle_count']}")
    print(f"Total raw trades: {stats['raw_trade_count']}")

asyncio.run(main())
```

## API Reference

### DatabaseOperations

Main entry point for all database operations.

#### Methods

##### `health_check() -> bool`
Test database connection health.

```python
db = get_database_operations()
if db.health_check():
    print("✅ Database is healthy")
else:
    print("❌ Database connection issues")
```

##### `get_stats() -> Dict[str, Any]`
Get comprehensive database statistics.

```python
stats = db.get_stats()
print(f"Bots: {stats['bot_count']:,}")
print(f"Candles: {stats['candle_count']:,}")
print(f"Raw trades: {stats['raw_trade_count']:,}")
print(f"Health: {stats['healthy']}")
```

### MarketDataRepository

Repository for `market_data` table operations (candles/OHLCV data).

#### Methods

##### `upsert_candle(candle: OHLCVCandle, force_update: bool = False) -> bool`

Store or update candle data with configurable duplicate handling.

**Parameters:**
- `candle`: OHLCVCandle object to store
- `force_update`: If True, overwrites existing data; if False, ignores duplicates

**Returns:** True if successful, False otherwise

**Duplicate Handling:**
- `force_update=False`: Uses `ON CONFLICT DO NOTHING` (preserves existing candles)
- `force_update=True`: Uses `ON CONFLICT DO UPDATE SET` (overwrites existing candles)

```python
# Store new candle, ignore if duplicate exists
db.market_data.upsert_candle(candle, force_update=False)

# Store candle, overwrite if duplicate exists
db.market_data.upsert_candle(candle, force_update=True)
```

##### `get_candles(symbol: str, timeframe: str, start_time: datetime, end_time: datetime, exchange: str = "okx") -> List[Dict[str, Any]]`

Retrieve historical candle data.

```python
from datetime import datetime, timezone

candles = db.market_data.get_candles(
    symbol="BTC-USDT",
    timeframe="5s",
    start_time=datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc),
    end_time=datetime(2024, 1, 1, 13, 0, 0, tzinfo=timezone.utc),
    exchange="okx"
)

for candle in candles:
    print(f"{candle['timestamp']}: O={candle['open']} H={candle['high']} L={candle['low']} C={candle['close']}")
```

##### `get_latest_candle(symbol: str, timeframe: str, exchange: str = "okx") -> Optional[Dict[str, Any]]`

Get the most recent candle for a symbol/timeframe combination.

```python
latest = db.market_data.get_latest_candle("BTC-USDT", "5s")
if latest:
    print(f"Latest 5s candle: {latest['close']} at {latest['timestamp']}")
else:
    print("No candles found")
```

### BotRepository

Repository for `bots` table operations.

#### Methods

##### `add(bot_data: Dict[str, Any]) -> Bot`

Adds a new bot to the database.

**Parameters:**
- `bot_data`: Dictionary containing the bot's attributes (`name`, `strategy_name`, etc.)

**Returns:** The newly created `Bot` object.

```python
from decimal import Decimal

bot_data = {
    "name": "MyTestBot",
    "strategy_name": "SimpleMACD",
    "symbol": "BTC-USDT",
    "timeframe": "1h",
    "status": "inactive",
    "virtual_balance": Decimal("10000"),
}
new_bot = db.bots.add(bot_data)
print(f"Added bot with ID: {new_bot.id}")
```

##### `get_by_id(bot_id: int) -> Optional[Bot]`

Retrieves a bot by its unique ID.

```python
bot = db.bots.get_by_id(1)
if bot:
    print(f"Found bot: {bot.name}")
```

##### `get_by_name(name: str) -> Optional[Bot]`

Retrieves a bot by its unique name.

```python
bot = db.bots.get_by_name("MyTestBot")
if bot:
    print(f"Found bot with ID: {bot.id}")
```

##### `update(bot_id: int, update_data: Dict[str, Any]) -> Optional[Bot]`

Updates an existing bot's attributes.

```python
from datetime import datetime, timezone

update_payload = {"status": "active", "last_heartbeat": datetime.now(timezone.utc)}
updated_bot = db.bots.update(1, update_payload)
if updated_bot:
    print(f"Bot status updated to: {updated_bot.status}")
```

##### `delete(bot_id: int) -> bool`

Deletes a bot from the database.

**Returns:** `True` if deletion was successful, `False` otherwise.

```python
success = db.bots.delete(1)
if success:
    print("Bot deleted successfully.")
```

### RawTradeRepository

Repository for `raw_trades` table operations (raw WebSocket data).

#### Methods

##### `insert_market_data_point(data_point: MarketDataPoint) -> bool`

Store raw market data from WebSocket streams.

```python
from data.base_collector import MarketDataPoint, DataType
from datetime import datetime, timezone

data_point = MarketDataPoint(
    exchange="okx",
    symbol="BTC-USDT",
    timestamp=datetime.now(timezone.utc),
    data_type=DataType.TRADE,
    data={"price": 50000, "size": 0.1, "side": "buy"}
)

success = db.raw_trades.insert_market_data_point(data_point)
```

##### `insert_raw_websocket_data(exchange: str, symbol: str, data_type: str, raw_data: Dict[str, Any], timestamp: Optional[datetime] = None) -> bool`

Store raw WebSocket data for debugging purposes.

```python
from datetime import datetime, timezone

db.raw_trades.insert_raw_websocket_data(
    exchange="okx",
    symbol="BTC-USDT",
    data_type="raw_trade",
    raw_data={"instId": "BTC-USDT", "px": "50000", "sz": "0.1"},
    timestamp=datetime.now(timezone.utc)
)
```

##### `get_raw_trades(symbol: str, data_type: str, start_time: datetime, end_time: datetime, exchange: str = "okx", limit: Optional[int] = None) -> List[Dict[str, Any]]`

Retrieve raw trade data for analysis.

```python
from datetime import datetime, timezone

trades = db.raw_trades.get_raw_trades(
    symbol="BTC-USDT",
    data_type="trade",
    start_time=datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc),
    end_time=datetime(2024, 1, 1, 13, 0, 0, tzinfo=timezone.utc),
    limit=1000
)
```

##### `cleanup_old_raw_data(days_to_keep: int = 7) -> int`

Clean up old raw data to prevent table bloat.

**Parameters:**
- `days_to_keep`: Number of days to retain raw data records.

**Returns:** The number of records deleted.

```python
# Clean up raw data older than 14 days
deleted_count = db.raw_trades.cleanup_old_raw_data(days_to_keep=14)
print(f"Deleted {deleted_count} old raw data records.")
```

##### `get_raw_data_stats() -> Dict[str, Any]`

Get statistics about raw data storage.

**Returns:** A dictionary with statistics like total records, table size, etc.

```python
raw_stats = db.raw_trades.get_raw_data_stats()
print(f"Raw Trades Table Size: {raw_stats.get('table_size')}")
print(f"Total Raw Records: {raw_stats.get('total_records')}")
```

## Error Handling

The database operations module includes comprehensive error handling with custom exceptions.

### DatabaseOperationError

Custom exception for database operation failures.

```python
from database.operations import DatabaseOperationError

try:
    db.market_data.upsert_candle(candle)
except DatabaseOperationError as e:
    logger.error(f"Database operation failed: {e}")
    # Handle the error appropriately
```

### Best Practices

1. **Always Handle Exceptions**: Wrap database operations in try/except blocks
2. **Check Health First**: Use `health_check()` before critical operations
3. **Monitor Performance**: Use `get_stats()` to monitor database growth
4. **Use Appropriate Repositories**: Use `market_data` for candles, `raw_trades` for raw data
5. **Handle Duplicates Appropriately**: Choose the right `force_update` setting

## Configuration

### Force Update Behavior

The `force_update_candles` parameter in collectors controls duplicate handling:

```python
# In OKX collector configuration
collector = OKXCollector(
    symbols=['BTC-USDT'],
    force_update_candles=False  # Default: ignore duplicates
)

# Or enable force updates
collector = OKXCollector(
    symbols=['BTC-USDT'],
    force_update_candles=True  # Overwrite existing candles
)
```

### Logging Integration

Database operations automatically integrate with the application's logging system:

```python
import logging
from database.operations import get_database_operations

logger = logging.getLogger(__name__)
db = get_database_operations(logger)

# All database operations will now log through your logger
db.market_data.upsert_candle(candle)  # Logs: "Stored candle: BTC-USDT 5s at ..."
```

## Migration from Direct SQL

If you have existing code using direct SQL, here's how to migrate:

### Before (Direct SQL - ❌ Don't do this)

```python
# OLD WAY - direct SQL queries
from database.connection import get_db_manager
from sqlalchemy import text

db_manager = get_db_manager()
with db_manager.get_session() as session:
    session.execute(text("""
        INSERT INTO market_data (exchange, symbol, timeframe, ...)
        VALUES (:exchange, :symbol, :timeframe, ...)
    """), {'exchange': 'okx', 'symbol': 'BTC-USDT', ...})
    session.commit()
```

### After (Repository Pattern - ✅ Correct way)

```python
# NEW WAY - using repository pattern
from database.operations import get_database_operations
from data.common.data_types import OHLCVCandle

db = get_database_operations()
candle = OHLCVCandle(...)  # Create candle object
success = db.market_data.upsert_candle(candle)
```

The entire repository layer has been standardized to use the SQLAlchemy ORM internally, ensuring a consistent, maintainable, and database-agnostic approach. Raw SQL is avoided in favor of type-safe ORM queries.

## Performance Considerations

### Connection Pooling

The database operations module automatically manages connection pooling through the underlying `DatabaseManager`.

### Batch Operations

In high-throughput scenarios, candles are currently stored in a loop (one upsert per candle); dedicated bulk-insert methods are a possible future optimization:

```python
# Store multiple candles
candles = [candle1, candle2, candle3, ...]

for candle in candles:
    db.market_data.upsert_candle(candle)
```

### Monitoring

Monitor database performance using the built-in statistics:

```python
import time

# Monitor database load
while True:
    stats = db.get_stats()
    print(f"Candles: {stats['candle_count']:,}, Health: {stats['healthy']}")
    time.sleep(30)
```

## Troubleshooting

### Common Issues

#### 1. Connection Errors
```python
if not db.health_check():
    logger.error("Database connection failed - check connection settings")
```

#### 2. Duplicate Key Errors
```python
# Use force_update=False to ignore duplicates
db.market_data.upsert_candle(candle, force_update=False)
```

#### 3. Transaction Errors
The repository automatically handles session management, but if you encounter issues:
```python
try:
    db.market_data.upsert_candle(candle)
except DatabaseOperationError as e:
    logger.error(f"Transaction failed: {e}")
```

### Debug Mode

Enable database query logging for debugging:

```python
# Set environment variable
import os
os.environ['DEBUG'] = 'true'

# This will log all SQL queries
db = get_database_operations()
```

## Related Documentation

- **[Database Connection](../architecture/database.md)** - Connection pooling and configuration
- **[Data Collectors](data_collectors.md)** - How collectors use database operations
- **[Architecture Overview](../architecture/architecture.md)** - System design patterns

---

*This documentation covers the repository pattern implementation in `database/operations.py`. For database schema details, see the [Architecture Documentation](../architecture/).*

60
docs/modules/exchanges/README.md
Normal file
@@ -0,0 +1,60 @@
# Exchange Integrations

## Overview

This module provides a standardized interface for collecting real-time data from various cryptocurrency exchanges. It uses a modular architecture that allows easy addition of new exchanges while maintaining consistent behavior and error handling.

## Documentation Structure

- **[Technical Documentation](exchanges.md)**: Detailed technical documentation of the exchange module architecture, including factory pattern, configuration, and error handling.
- **Exchange-Specific Implementations**:
  - **[OKX](okx_collector.md)**: Complete guide for OKX exchange integration

## Quick Links

- [Data Collection Architecture](../data_collectors.md)
- [Error Handling Guide](../error_handling.md)
- [Logging Configuration](../logging.md)

## Exchange Status

| Exchange | Status | Features | Documentation |
|----------|--------|----------|---------------|
| OKX | ✅ Production | Trades, Order Book, Ticker, Configurable Timeframes (1s+) | [Guide](okx_collector.md) |
| Binance | 🔄 Planned | TBD | - |
| Coinbase | 🔄 Planned | TBD | - |

## Features

### Core Features
- Real-time data collection
- Robust error handling
- Automatic reconnection
- Health monitoring
- Configurable, flexible timeframes (including 1-second intervals)
- Custom timeframe aggregation

### Exchange-Specific Features
- OKX: Full WebSocket support with configurable timeframes (1s+)
- More exchanges coming soon

## Adding New Exchanges

See the [Technical Documentation](exchanges.md) for a detailed implementation guide.

Key Steps:
1. Create exchange module in `data/exchanges/`
2. Implement collector class extending `BaseDataCollector`
3. Add WebSocket/REST implementations
4. Register in `ExchangeFactory`
5. Add documentation

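Step 4 amounts to adding a registry entry. The shape below mirrors the OKX entry documented in the [Technical Documentation](exchanges.md); the Binance module paths are hypothetical placeholders, not an existing implementation:

```python
# Hypothetical registry entry for a new exchange (illustration only)
NEW_EXCHANGE_ENTRY = {
    "binance": {
        "collector": "data.exchanges.binance.collector.BinanceCollector",
        "websocket": "data.exchanges.binance.websocket.BinanceWebSocketClient",
        "name": "Binance",
        "supported_pairs": ["BTC-USDT", "ETH-USDT"],
        "supported_data_types": ["trade", "orderbook", "ticker"],
        "supported_timeframes": ["1m", "5m", "1h"],
    }
}
```
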
## Support

- Report issues in the project issue tracker
- See [Contributing Guide](../../CONTRIBUTING.md) for development guidelines
- Check [Known Issues](exchanges.md#known-issues) for current limitations

---

*Back to [Main Documentation](../../README.md)*

228
docs/modules/exchanges/exchanges.md
Normal file
@@ -0,0 +1,228 @@
# Exchange Module Technical Documentation

## Implementation Guide

### Core Components

1. **Base Collector**
   - Inherit from `BaseDataCollector`
   - Implement required abstract methods
   - Handle connection lifecycle

2. **WebSocket Client**
   - Implement exchange-specific WebSocket handling
   - Manage subscriptions and message parsing
   - Handle reconnection logic

3. **Configuration**
   - Define exchange-specific parameters
   - Implement validation rules
   - Set up default values

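The three components above fit together as in the following skeleton. `BaseDataCollector` is redefined locally so the sketch runs standalone; the real base class lives in `data/base_collector.py` and defines the actual abstract methods, so the hook names used here (`_connect`, `_handle_message`) are illustrative assumptions:

```python
import asyncio
from abc import ABC, abstractmethod

class BaseDataCollector(ABC):
    """Local stand-in for the project's base collector (illustration only)."""

    def __init__(self, symbol: str):
        self.symbol = symbol
        self._running = False

    @abstractmethod
    async def _connect(self) -> None:
        """Open the exchange connection (hook name assumed)."""

    @abstractmethod
    async def _handle_message(self, raw: dict) -> None:
        """Parse one exchange-specific message (hook name assumed)."""

    async def start(self) -> None:
        await self._connect()
        self._running = True

    def is_running(self) -> bool:
        return self._running

class MyExchangeCollector(BaseDataCollector):
    """Skeleton for a new exchange integration."""

    async def _connect(self) -> None:
        # Open the WebSocket and send subscription messages here.
        pass

    async def _handle_message(self, raw: dict) -> None:
        # Standardize the exchange payload into common data points here.
        pass

collector = MyExchangeCollector("BTC-USDT")
asyncio.run(collector.start())
print(collector.is_running())  # True
```
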
### Factory Implementation

The `ExchangeFactory` uses a registry pattern for dynamic collector creation:

```python
from dataclasses import dataclass
from typing import Any, Dict, List, Optional

@dataclass
class ExchangeCollectorConfig:
    """Configuration for creating an exchange collector."""
    exchange: str
    symbol: str
    data_types: List[DataType]
    timeframes: Optional[List[str]] = None  # Timeframes for candle collection
    auto_restart: bool = True
    health_check_interval: float = 30.0
    store_raw_data: bool = True
    custom_params: Optional[Dict[str, Any]] = None

    def __post_init__(self):
        """Validate configuration after initialization."""
        if not self.exchange:
            raise InvalidConfigurationError("Exchange name cannot be empty")
        if not self.symbol:
            raise InvalidConfigurationError("Symbol cannot be empty")
        if not self.data_types:
            raise InvalidConfigurationError("At least one data type must be specified")
        if self.timeframes is not None:
            if not self.timeframes:
                raise InvalidConfigurationError("Timeframes list cannot be empty if provided")
            if not all(isinstance(tf, str) for tf in self.timeframes):
                raise InvalidConfigurationError("All timeframes must be strings")
```

(`DataType` comes from the project's collector module; `InvalidConfigurationError` is defined in this module, see [Error Handling](#error-handling) below.)

### Registry Configuration

Exchange capabilities are defined in the registry:

```python
EXCHANGE_REGISTRY = {
    'okx': {
        'collector': 'data.exchanges.okx.collector.OKXCollector',
        'websocket': 'data.exchanges.okx.websocket.OKXWebSocketClient',
        'name': 'OKX',
        'supported_pairs': ['BTC-USDT', 'ETH-USDT', 'SOL-USDT', 'DOGE-USDT', 'TON-USDT'],
        'supported_data_types': ['trade', 'orderbook', 'ticker', 'candles'],
        'supported_timeframes': ['1s', '5s', '1m', '5m', '15m', '1h', '4h', '1d']  # Available timeframes
    }
}
```

### Example Usage with Timeframes

```python
# Create collector with specific timeframes
config = ExchangeCollectorConfig(
    exchange="okx",
    symbol="BTC-USDT",
    data_types=[DataType.TRADE, DataType.CANDLE],
    timeframes=['1s', '5s', '1m', '5m']  # Specify desired timeframes
)

collector = ExchangeFactory.create_collector(config)
```

### Error Handling

A custom exception hierarchy allows precise error handling:

```python
class ExchangeError(Exception):
    """Base exception for all exchange-related errors."""
    pass

class ExchangeNotSupportedError(ExchangeError):
    """Exchange not supported/found in registry."""
    pass

class InvalidConfigurationError(ExchangeError):
    """Invalid exchange configuration."""
    pass

# Usage example:
try:
    collector = ExchangeFactory.create_collector(config)
except ExchangeNotSupportedError as e:
    logger.error(f"Exchange not supported: {e}")
except InvalidConfigurationError as e:
    logger.error(f"Invalid configuration: {e}")
```

### Logging Integration

The module uses the project's unified logging system:

```python
from utils.logger import get_logger

logger = get_logger('exchanges')

class ExchangeFactory:
    @staticmethod
    def create_collector(config: ExchangeCollectorConfig) -> BaseDataCollector:
        logger.info(f"Creating collector for {config.exchange} {config.symbol}")
        try:
            # Implementation
            logger.debug("Collector created successfully")
        except Exception as e:
            logger.error(f"Failed to create collector: {e}")
            raise
```

## Testing Guidelines

### Unit Tests

```python
import pytest

def test_exchange_factory_validation():
    """Test configuration validation."""
    config = ExchangeCollectorConfig(
        exchange="okx",
        symbol="BTC-USDT",
        data_types=[DataType.TRADE]
    )
    is_valid, errors = ExchangeFactory.validate_config(config)
    assert is_valid
    assert not errors

def test_invalid_exchange():
    """Test handling of invalid exchange."""
    with pytest.raises(ExchangeNotSupportedError):
        ExchangeFactory.create_collector(
            ExchangeCollectorConfig(
                exchange="invalid",
                symbol="BTC-USDT",
                data_types=[DataType.TRADE]
            )
        )
```

### Integration Tests

```python
import asyncio

async def test_collector_lifecycle():
    """Test collector startup and shutdown."""
    collector = create_okx_collector("BTC-USDT")

    await collector.start()
    assert collector.is_running()

    await asyncio.sleep(5)  # Allow time for connection
    status = collector.get_status()
    assert status['status'] == 'running'

    await collector.stop()
    assert not collector.is_running()
```

## Performance Considerations

1. **Memory Management**
   - Implement proper cleanup in collector shutdown
   - Monitor message queue sizes
   - Clear unused subscriptions

2. **Connection Management**
   - Implement exponential backoff for reconnections
   - Monitor connection health
   - Handle rate limits properly

3. **Data Processing**
   - Process messages asynchronously
   - Batch updates when possible
   - Use efficient data structures

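The "batch updates when possible" point can be sketched with an `asyncio.Queue` drain loop: wait briefly for the first message, then grab whatever else is already queued, up to a cap. This is an illustrative pattern, not the project's actual consumer code (`drain_batch` and its parameters are hypothetical names):

```python
import asyncio

async def drain_batch(queue: asyncio.Queue, max_items: int = 100, timeout: float = 0.5) -> list:
    """Collect up to max_items from the queue, waiting at most `timeout`
    for the first item, then grabbing whatever else is already queued."""
    batch = []
    try:
        batch.append(await asyncio.wait_for(queue.get(), timeout))
    except asyncio.TimeoutError:
        return batch  # nothing arrived in time; caller retries
    while len(batch) < max_items and not queue.empty():
        batch.append(queue.get_nowait())
    return batch

async def demo():
    q = asyncio.Queue()
    for i in range(5):
        q.put_nowait(i)
    return await drain_batch(q, max_items=3)

print(asyncio.run(demo()))  # → [0, 1, 2]
```

Batching like this amortizes per-write overhead when flushing market data to the database.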
## Future Improvements

1. **Rate Limiting**
   ```python
   class ExchangeRateLimit:
       def __init__(self, requests_per_second: int):
           self.rate = requests_per_second
           self.tokens = requests_per_second
           self.last_update = time.time()
   ```

2. **Automatic Retries**
   ```python
   async def with_retry(func, max_retries=3, backoff_factor=1.5):
       for attempt in range(max_retries):
           try:
               return await func()
           except ExchangeError as e:
               if attempt == max_retries - 1:
                   raise
               wait_time = backoff_factor ** attempt
               await asyncio.sleep(wait_time)
   ```

3. **Exchange-Specific Validation**
   ```python
   import re

   class ExchangeValidator:
       def __init__(self, exchange_info: dict):
           self.rules = exchange_info.get('validation_rules', {})

       def validate_symbol(self, symbol: str) -> bool:
           pattern = self.rules.get('symbol_pattern')
           if not pattern:
               return True  # no rule configured for this exchange
           return bool(re.match(pattern, symbol))
   ```
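The rate-limiting sketch in item 1 only shows the constructor. One way to complete it as a token bucket — an illustrative sketch, not a committed design; it uses `time.monotonic()` rather than `time.time()` so clock adjustments can't corrupt the refill math:

```python
import asyncio
import time

class ExchangeRateLimit:
    """Token-bucket limiter completing the constructor sketch above (illustrative)."""
    def __init__(self, requests_per_second: int):
        self.rate = requests_per_second
        self.tokens = float(requests_per_second)
        self.last_update = time.monotonic()

    async def acquire(self) -> None:
        while True:
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the bucket size.
            self.tokens = min(self.rate, self.tokens + (now - self.last_update) * self.rate)
            self.last_update = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            # Sleep just long enough for one token to accumulate.
            await asyncio.sleep((1 - self.tokens) / self.rate)

async def demo():
    limiter = ExchangeRateLimit(requests_per_second=50)
    start = time.monotonic()
    for _ in range(10):
        await limiter.acquire()   # bucket starts full, so these pass immediately
    return time.monotonic() - start
```

Callers would `await limiter.acquire()` before each REST request; WebSocket subscriptions have their own limits and would need a separate bucket.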
965
docs/modules/exchanges/okx_collector.md
Normal file
@@ -0,0 +1,965 @@
# OKX Data Collector Documentation

## Overview

The OKX Data Collector provides real-time market data collection from the OKX exchange over its WebSocket API. It is built on the modular exchange architecture and provides robust connection management, automatic reconnection, health monitoring, and comprehensive data processing.

## Features

### 🎯 **OKX-Specific Features**
- **Real-time Data**: Live trades, orderbook, and ticker data
- **Single Pair Focus**: Each collector handles one trading pair for better isolation
- **Ping/Pong Management**: OKX-specific keepalive mechanism with proper format
- **Raw Data Storage**: Optional storage of raw OKX messages for debugging
- **Connection Resilience**: Robust reconnection logic for the OKX WebSocket

### 📊 **Supported Data Types**
- **Trades**: Real-time trade executions (`trades` channel)
- **Orderbook**: 5-level order book depth (`books5` channel)
- **Ticker**: 24h ticker statistics (`tickers` channel)
- **Candles**: Real-time OHLCV aggregation with configurable timeframes
  - Supports any timeframe from 1s upwards
  - Common timeframes: 1s, 5s, 1m, 5m, 15m, 1h, 4h, 1d
  - Custom timeframes can be configured in data_collection.json

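Since timeframes arrive as strings like `'5s'` or `'15m'`, aggregation code needs their length in seconds. A minimal parser sketch (the codebase may already provide an equivalent helper; `timeframe_to_seconds` is an illustrative name):

```python
# Hypothetical helper: maps a timeframe suffix to its length in seconds.
_UNITS = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}

def timeframe_to_seconds(timeframe: str) -> int:
    """Parse a timeframe string like '1s', '5m', '4h', or '1d' into seconds."""
    value, unit = timeframe[:-1], timeframe[-1]
    if unit not in _UNITS or not value.isdigit():
        raise ValueError(f"Unsupported timeframe: {timeframe!r}")
    return int(value) * _UNITS[unit]

print(timeframe_to_seconds('5m'))  # → 300
```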
### 🔧 **Configuration Options**
- Auto-restart on failures
- Health check intervals
- Raw data storage toggle
- Custom ping/pong timing
- Reconnection attempts configuration
- Flexible timeframe configuration (1s, 5s, 1m, 5m, 15m, 1h, etc.)
- Configurable candle aggregation settings

## Quick Start

### 1. Using Factory Pattern (Recommended)

```python
import asyncio
from data.exchanges import create_okx_collector
from data.base_collector import DataType

async def main():
    # Create OKX collector using convenience function
    collector = create_okx_collector(
        symbol='BTC-USDT',
        data_types=[DataType.TRADE, DataType.ORDERBOOK],
        auto_restart=True,
        health_check_interval=30.0,
        store_raw_data=True
    )

    # Add data callbacks
    def on_trade(data_point):
        trade = data_point.data
        print(f"Trade: {trade['side']} {trade['sz']} @ {trade['px']} (ID: {trade['tradeId']})")

    def on_orderbook(data_point):
        book = data_point.data
        if book.get('bids') and book.get('asks'):
            best_bid = book['bids'][0]
            best_ask = book['asks'][0]
            print(f"Orderbook: Bid {best_bid[0]}@{best_bid[1]} Ask {best_ask[0]}@{best_ask[1]}")

    collector.add_data_callback(DataType.TRADE, on_trade)
    collector.add_data_callback(DataType.ORDERBOOK, on_orderbook)

    # Start collector
    await collector.start()

    # Run for 60 seconds
    await asyncio.sleep(60)

    # Stop gracefully
    await collector.stop()

asyncio.run(main())
```

### 2. Direct OKX Collector Usage

```python
import asyncio
from data.exchanges.okx import OKXCollector
from data.base_collector import DataType

async def main():
    # Create collector directly
    collector = OKXCollector(
        symbol='ETH-USDT',
        data_types=[DataType.TRADE, DataType.ORDERBOOK],
        component_name='eth_collector',
        auto_restart=True,
        health_check_interval=30.0,
        store_raw_data=True
    )

    # Add callbacks
    def on_data(data_point):
        print(f"{data_point.data_type.value}: {data_point.symbol} - {data_point.timestamp}")

    collector.add_data_callback(DataType.TRADE, on_data)
    collector.add_data_callback(DataType.ORDERBOOK, on_data)

    # Start and monitor
    await collector.start()

    # Monitor status
    for i in range(12):  # 60 seconds total
        await asyncio.sleep(5)
        status = collector.get_status()
        print(f"Status: {status['status']} - Messages: {status.get('messages_processed', 0)}")

    await collector.stop()

asyncio.run(main())
```

### 3. Multiple OKX Collectors with Manager

```python
import asyncio
from data.collector_manager import CollectorManager
from data.exchanges import create_okx_collector
from data.base_collector import DataType

async def main():
    # Create manager
    manager = CollectorManager(
        manager_name="okx_trading_system",
        global_health_check_interval=30.0
    )

    # Create multiple OKX collectors
    symbols = ['BTC-USDT', 'ETH-USDT', 'SOL-USDT']

    for symbol in symbols:
        collector = create_okx_collector(
            symbol=symbol,
            data_types=[DataType.TRADE, DataType.ORDERBOOK],
            auto_restart=True
        )
        manager.add_collector(collector)

    # Start manager
    await manager.start()

    # Monitor all collectors
    while True:
        status = manager.get_status()
        stats = status.get('statistics', {})

        print(f"=== OKX Collectors Status ===")
        print(f"Running: {stats.get('running_collectors', 0)}")
        print(f"Failed: {stats.get('failed_collectors', 0)}")
        print(f"Total messages: {stats.get('total_messages', 0)}")

        # Individual collector status
        for collector_name in manager.list_collectors():
            collector_status = manager.get_collector_status(collector_name)
            if collector_status:
                info = collector_status.get('status', {})
                print(f"  {collector_name}: {info.get('status')} - "
                      f"Messages: {info.get('messages_processed', 0)}")

        await asyncio.sleep(15)

asyncio.run(main())
```

### 4. Multi-Timeframe Candle Processing

```python
import asyncio
from data.exchanges.okx import OKXCollector
from data.base_collector import DataType
from data.common import CandleProcessingConfig

async def main():
    # Configure multi-timeframe candle processing with 1s support
    candle_config = CandleProcessingConfig(
        timeframes=['1s', '5s', '1m', '5m', '15m', '1h'],  # Including 1s timeframe
        auto_save_candles=True,
        emit_incomplete_candles=False
    )

    # Create collector with candle processing
    collector = OKXCollector(
        symbol='BTC-USDT',
        data_types=[DataType.TRADE],  # Trades needed for candle aggregation
        timeframes=['1s', '5s', '1m', '5m', '15m', '1h'],  # Specify desired timeframes
        candle_config=candle_config,
        auto_restart=True,
        store_raw_data=False  # Disable raw storage for production
    )

    # Add candle callback
    def on_candle_completed(candle):
        print(f"Completed {candle.timeframe} candle: "
              f"OHLCV=({candle.open},{candle.high},{candle.low},{candle.close},{candle.volume}) "
              f"at {candle.end_time}")

    collector.add_candle_callback(on_candle_completed)

    # Start collector
    await collector.start()

    # Monitor real-time candle generation
    await asyncio.sleep(300)  # 5 minutes

    await collector.stop()

asyncio.run(main())
```

## Configuration

### 1. JSON Configuration File

The system uses `config/okx_config.json` for configuration:

```json
{
    "exchange": "okx",
    "connection": {
        "public_ws_url": "wss://ws.okx.com:8443/ws/v5/public",
        "private_ws_url": "wss://ws.okx.com:8443/ws/v5/private",
        "ping_interval": 25.0,
        "pong_timeout": 10.0,
        "max_reconnect_attempts": 5,
        "reconnect_delay": 5.0
    },
    "data_collection": {
        "store_raw_data": true,
        "health_check_interval": 30.0,
        "auto_restart": true,
        "buffer_size": 1000
    },
    "factory": {
        "use_factory_pattern": true,
        "default_data_types": ["trade", "orderbook"],
        "batch_create": true
    },
    "trading_pairs": [
        {
            "symbol": "BTC-USDT",
            "enabled": true,
            "data_types": ["trade", "orderbook"],
            "channels": {
                "trades": "trades",
                "orderbook": "books5",
                "ticker": "tickers"
            }
        },
        {
            "symbol": "ETH-USDT",
            "enabled": true,
            "data_types": ["trade", "orderbook"],
            "channels": {
                "trades": "trades",
                "orderbook": "books5",
                "ticker": "tickers"
            }
        }
    ],
    "logging": {
        "component_name_template": "okx_collector_{symbol}",
        "log_level": "INFO",
        "verbose": false
    },
    "database": {
        "store_processed_data": true,
        "store_raw_data": true,
        "batch_size": 100,
        "flush_interval": 5.0
    },
    "monitoring": {
        "enable_health_checks": true,
        "health_check_interval": 30.0,
        "alert_on_connection_loss": true,
        "max_consecutive_errors": 5
    }
}
```

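To turn this file into collector settings, the startup code only has to pick the enabled entries from `trading_pairs` and merge in the `data_collection` defaults. A sketch of that merge step (the helper name is illustrative; the keys follow the JSON structure above, and the result would feed `create_okx_collector`):

```python
import json

def enabled_pair_settings(cfg: dict) -> list:
    """Return per-pair settings for each enabled trading pair, merged with
    the data_collection defaults (sketch; keys follow okx_config.json)."""
    defaults = cfg.get('data_collection', {})
    return [
        {
            'symbol': pair['symbol'],
            'data_types': pair.get('data_types', ['trade']),
            'store_raw_data': defaults.get('store_raw_data', True),
            'auto_restart': defaults.get('auto_restart', True),
        }
        for pair in cfg.get('trading_pairs', [])
        if pair.get('enabled', False)
    ]

# Inline sample mirroring the file above (a real caller would json.load the file).
sample = json.loads(
    '{"data_collection": {"store_raw_data": true}, '
    '"trading_pairs": [{"symbol": "BTC-USDT", "enabled": true, '
    '"data_types": ["trade", "orderbook"]}]}'
)
print(enabled_pair_settings(sample))
```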
### 2. Programmatic Configuration

```python
from data.exchanges.okx import OKXCollector
from data.base_collector import DataType

# Custom configuration
collector = OKXCollector(
    symbol='BTC-USDT',
    data_types=[DataType.TRADE, DataType.ORDERBOOK],
    component_name='custom_btc_collector',
    auto_restart=True,
    health_check_interval=15.0,  # Check every 15 seconds
    store_raw_data=True          # Store raw OKX messages
)
```

### 3. Factory Configuration

```python
from data.exchanges import ExchangeFactory, ExchangeCollectorConfig
from data.base_collector import DataType

config = ExchangeCollectorConfig(
    exchange='okx',
    symbol='ETH-USDT',
    data_types=[DataType.TRADE, DataType.ORDERBOOK],
    auto_restart=True,
    health_check_interval=30.0,
    store_raw_data=True,
    custom_params={
        'ping_interval': 20.0,         # Custom ping interval
        'max_reconnect_attempts': 10,  # More reconnection attempts
        'pong_timeout': 15.0           # Longer pong timeout
    }
)

collector = ExchangeFactory.create_collector(config)
```

## Data Processing

### OKX Message Formats

#### Trade Data

```python
# Raw OKX trade message
{
    "arg": {
        "channel": "trades",
        "instId": "BTC-USDT"
    },
    "data": [
        {
            "instId": "BTC-USDT",
            "tradeId": "12345678",
            "px": "50000.5",        # Price
            "sz": "0.001",          # Size
            "side": "buy",          # Side (buy/sell)
            "ts": "1697123456789"   # Timestamp (ms)
        }
    ]
}

# Processed MarketDataPoint
MarketDataPoint(
    exchange="okx",
    symbol="BTC-USDT",
    timestamp=datetime(2023, 10, 12, 15, 10, 56, 789000, tzinfo=timezone.utc),
    data_type=DataType.TRADE,
    data={
        "instId": "BTC-USDT",
        "tradeId": "12345678",
        "px": "50000.5",
        "sz": "0.001",
        "side": "buy",
        "ts": "1697123456789"
    }
)
```

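The `ts` field is a millisecond Unix timestamp encoded as a string, and the `MarketDataPoint` carries an aware UTC datetime. The conversion is a one-liner (the helper name is illustrative):

```python
from datetime import datetime, timezone

def okx_ts_to_datetime(ts: str) -> datetime:
    """Convert OKX's millisecond timestamp string (the 'ts' field above)
    into an aware UTC datetime."""
    return datetime.fromtimestamp(int(ts) / 1000, tz=timezone.utc)

print(okx_ts_to_datetime("1697123456789"))  # 2023-10-12 15:10:56 UTC
```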
#### Orderbook Data

```python
# Raw OKX orderbook message (books5)
{
    "arg": {
        "channel": "books5",
        "instId": "BTC-USDT"
    },
    "data": [
        {
            "asks": [
                ["50001.0", "0.5", "0", "3"],  # [price, size, liquidated, orders]
                ["50002.0", "1.0", "0", "5"]
            ],
            "bids": [
                ["50000.0", "0.8", "0", "2"],
                ["49999.0", "1.2", "0", "4"]
            ],
            "ts": "1697123456789",
            "checksum": "123456789"
        }
    ]
}

# Usage in callback
def on_orderbook(data_point):
    book = data_point.data

    if book.get('bids') and book.get('asks'):
        best_bid = book['bids'][0]
        best_ask = book['asks'][0]

        spread = float(best_ask[0]) - float(best_bid[0])
        print(f"Spread: ${spread:.2f}")
```

#### Ticker Data

```python
# Raw OKX ticker message
{
    "arg": {
        "channel": "tickers",
        "instId": "BTC-USDT"
    },
    "data": [
        {
            "instType": "SPOT",
            "instId": "BTC-USDT",
            "last": "50000.5",       # Last price
            "lastSz": "0.001",       # Last size
            "askPx": "50001.0",      # Best ask price
            "askSz": "0.5",          # Best ask size
            "bidPx": "50000.0",      # Best bid price
            "bidSz": "0.8",          # Best bid size
            "open24h": "49500.0",    # 24h open
            "high24h": "50500.0",    # 24h high
            "low24h": "49000.0",     # 24h low
            "vol24h": "1234.567",    # 24h volume
            "ts": "1697123456789"
        }
    ]
}
```

### Data Validation

The OKX collector includes comprehensive data validation:

```python
# Automatic validation in collector
class OKXCollector(BaseDataCollector):
    async def _process_data_item(self, channel: str, data_item: Dict[str, Any]):
        # Validate message structure
        if not isinstance(data_item, dict):
            self.logger.warning("Invalid data item type")
            return None

        # Validate required fields based on channel
        if channel == "trades":
            required_fields = ['tradeId', 'px', 'sz', 'side', 'ts']
        elif channel == "books5":
            required_fields = ['bids', 'asks', 'ts']
        elif channel == "tickers":
            required_fields = ['last', 'ts']
        else:
            self.logger.warning(f"Unknown channel: {channel}")
            return None

        # Check required fields
        for field in required_fields:
            if field not in data_item:
                self.logger.warning(f"Missing required field '{field}' in {channel} data")
                return None

        # Process and return validated data
        return await self._create_market_data_point(channel, data_item)
```

## Monitoring and Status

### Status Information

```python
# Get comprehensive status
status = collector.get_status()

print(f"Exchange: {status['exchange']}")                           # 'okx'
print(f"Symbol: {status['symbol']}")                               # 'BTC-USDT'
print(f"Status: {status['status']}")                               # 'running'
print(f"WebSocket Connected: {status['websocket_connected']}")     # True/False
print(f"WebSocket State: {status['websocket_state']}")             # 'connected'
print(f"Messages Processed: {status['messages_processed']}")       # Integer
print(f"Errors: {status['errors']}")                               # Integer
print(f"Last Trade ID: {status['last_trade_id']}")                 # String or None

# WebSocket statistics
if 'websocket_stats' in status:
    ws_stats = status['websocket_stats']
    print(f"Messages Received: {ws_stats['messages_received']}")
    print(f"Messages Sent: {ws_stats['messages_sent']}")
    print(f"Pings Sent: {ws_stats['pings_sent']}")
    print(f"Pongs Received: {ws_stats['pongs_received']}")
    print(f"Reconnections: {ws_stats['reconnections']}")
```

### Health Monitoring

```python
# Get health status
health = collector.get_health_status()

print(f"Is Healthy: {health['is_healthy']}")                # True/False
print(f"Issues: {health['issues']}")                        # List of issues
print(f"Last Heartbeat: {health['last_heartbeat']}")        # ISO timestamp
print(f"Last Data: {health['last_data_received']}")         # ISO timestamp
print(f"Should Be Running: {health['should_be_running']}")  # True/False
print(f"Is Running: {health['is_running']}")                # True/False

# Auto-restart status
if not health['is_healthy']:
    print("Collector is unhealthy - auto-restart will trigger")
    for issue in health['issues']:
        print(f"  Issue: {issue}")
```

### Performance Monitoring

```python
import asyncio
import time

async def monitor_performance():
    collector = create_okx_collector('BTC-USDT', [DataType.TRADE])
    await collector.start()

    start_time = time.time()
    last_message_count = 0

    while True:
        await asyncio.sleep(10)  # Check every 10 seconds

        status = collector.get_status()
        current_messages = status.get('messages_processed', 0)

        # Calculate message rate
        elapsed = time.time() - start_time
        messages_per_second = current_messages / elapsed if elapsed > 0 else 0

        # Calculate recent rate
        recent_messages = current_messages - last_message_count
        recent_rate = recent_messages / 10  # per second over last 10 seconds

        print(f"=== Performance Stats ===")
        print(f"Total Messages: {current_messages}")
        print(f"Average Rate: {messages_per_second:.2f} msg/sec")
        print(f"Recent Rate: {recent_rate:.2f} msg/sec")
        print(f"Errors: {status.get('errors', 0)}")
        print(f"WebSocket State: {status.get('websocket_state', 'unknown')}")

        last_message_count = current_messages

# Run performance monitoring
asyncio.run(monitor_performance())
```

## WebSocket Connection Details

### OKX WebSocket Client

The OKX implementation includes a specialized WebSocket client:

```python
from data.exchanges.okx import OKXWebSocketClient, OKXSubscription, OKXChannelType

# Create WebSocket client directly (usually handled by collector)
ws_client = OKXWebSocketClient(
    component_name='okx_ws_btc',
    ping_interval=25.0,  # Must be < 30 seconds for OKX
    pong_timeout=10.0,
    max_reconnect_attempts=5,
    reconnect_delay=5.0
)

# Connect to OKX
await ws_client.connect(use_public=True)

# Create subscriptions
subscriptions = [
    OKXSubscription(
        channel=OKXChannelType.TRADES.value,
        inst_id='BTC-USDT',
        enabled=True
    ),
    OKXSubscription(
        channel=OKXChannelType.BOOKS5.value,
        inst_id='BTC-USDT',
        enabled=True
    )
]

# Subscribe to channels
await ws_client.subscribe(subscriptions)

# Add message callback
def on_message(message):
    print(f"Received: {message}")

ws_client.add_message_callback(on_message)

# WebSocket will handle messages automatically
await asyncio.sleep(60)

# Disconnect
await ws_client.disconnect()
```

### Connection States

The WebSocket client tracks connection states:

```python
from data.exchanges.okx.websocket import ConnectionState

# Check connection state
state = ws_client.connection_state

if state == ConnectionState.CONNECTED:
    print("WebSocket is connected and ready")
elif state == ConnectionState.CONNECTING:
    print("WebSocket is connecting...")
elif state == ConnectionState.RECONNECTING:
    print("WebSocket is reconnecting...")
elif state == ConnectionState.DISCONNECTED:
    print("WebSocket is disconnected")
elif state == ConnectionState.ERROR:
    print("WebSocket has an error")
```

### Ping/Pong Mechanism

OKX requires a specific ping/pong format:

```python
# OKX expects a simple "ping" string (not JSON)
# The WebSocket client handles this automatically:

# Send: "ping"
# Receive: "pong"

# This is handled automatically by OKXWebSocketClient
# Ping interval must be < 30 seconds to avoid disconnection
```

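A minimal sketch of such a keepalive loop, for illustration only (OKXWebSocketClient implements this internally; `okx_keepalive` is a hypothetical name, and `ws` is assumed to be any object with an async `send()`, e.g. a `websockets` connection):

```python
import asyncio

async def okx_keepalive(ws, interval: float = 25.0) -> None:
    """Send the literal string "ping" every `interval` seconds,
    staying under OKX's 30-second idle cutoff."""
    while True:
        await ws.send("ping")        # OKX expects a bare string, not JSON
        await asyncio.sleep(interval)

# Demo with a fake connection that records what was sent.
class _FakeWS:
    def __init__(self):
        self.sent = []
    async def send(self, msg):
        self.sent.append(msg)

async def demo():
    ws = _FakeWS()
    task = asyncio.create_task(okx_keepalive(ws, interval=0.01))
    await asyncio.sleep(0.05)
    task.cancel()
    return ws.sent

sent = asyncio.run(demo())
```

In the real client this task runs alongside the receive loop, and a matching pong watchdog triggers reconnection when no `"pong"` arrives within `pong_timeout`.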
## Error Handling & Resilience

The OKX collector includes comprehensive error handling and automatic recovery mechanisms:

### Connection Management
- **Automatic Reconnection**: Handles network disconnections with exponential backoff
- **Task Synchronization**: Prevents race conditions during reconnection using asyncio locks
- **Graceful Shutdown**: Properly cancels background tasks and closes connections
- **Connection State Tracking**: Monitors connection health and validity

### Enhanced WebSocket Handling (v2.1+)
- **Race Condition Prevention**: Uses synchronization locks to prevent multiple recv() calls
- **Task Lifecycle Management**: Properly manages background task startup and shutdown
- **Reconnection Locking**: Prevents concurrent reconnection attempts
- **Subscription Persistence**: Automatically re-subscribes to channels after reconnection

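The reconnection-locking and backoff points above can be sketched together: an `asyncio.Lock` ensures only one coroutine reconnects at a time, while the delay doubles per attempt with jitter. This is an illustrative pattern, not the actual OKXWebSocketClient code (`ReconnectGuard` is a hypothetical name):

```python
import asyncio
import random

class ReconnectGuard:
    """Sketch: serialize reconnection attempts and apply exponential backoff."""
    def __init__(self, base_delay: float = 1.0, max_delay: float = 60.0):
        self._lock = asyncio.Lock()
        self.base_delay = base_delay
        self.max_delay = max_delay
        self.attempts = 0

    def next_delay(self) -> float:
        delay = min(self.max_delay, self.base_delay * (2 ** self.attempts))
        self.attempts += 1
        return delay * random.uniform(0.5, 1.0)  # jitter avoids thundering herd

    async def reconnect(self, connect_coro) -> None:
        if self._lock.locked():
            async with self._lock:   # another coroutine is reconnecting;
                return               # just wait for it to finish
        async with self._lock:
            await asyncio.sleep(self.next_delay())
            await connect_coro()
            self.attempts = 0        # reset backoff after success
```

Re-subscribing to stored subscriptions would follow `connect_coro()` inside the same locked section, which is what keeps channel state consistent after a drop.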
```python
# The collector handles these scenarios automatically:
# - Network interruptions
# - WebSocket connection drops
# - OKX server maintenance
# - Rate limiting responses
# - Malformed data packets

# Enhanced error logging for diagnostics
collector = OKXCollector('BTC-USDT', [DataType.TRADE])
stats = collector.get_status()
print(f"Connection state: {stats['connection_state']}")
print(f"Reconnection attempts: {stats['reconnect_attempts']}")
print(f"Error count: {stats['error_count']}")
```

### Common Error Patterns

#### WebSocket Concurrency Errors (Fixed in v2.1)
```
ERROR: cannot call recv while another coroutine is already running recv or recv_streaming
```
**Solution**: Updated WebSocket client with proper task synchronization and reconnection locking.

#### Connection Recovery
```python
# Monitor connection health
async def monitor_connection():
    while True:
        if collector.is_connected():
            print("✅ Connected and receiving data")
        else:
            print("❌ Connection issue - auto-recovery in progress")
        await asyncio.sleep(30)
```

## Testing

### Unit Tests

Run the existing test scripts:

```bash
# Test single collector
python scripts/test_okx_collector.py single

# Test collector manager
python scripts/test_okx_collector.py manager

# Test factory pattern
python scripts/test_exchange_factory.py
```

### Custom Testing

```python
import asyncio
from data.exchanges import create_okx_collector
from data.base_collector import DataType

async def test_okx_collector():
    """Test OKX collector functionality."""

    # Test data collection
    message_count = 0
    error_count = 0

    def on_trade(data_point):
        nonlocal message_count
        message_count += 1
        print(f"Trade #{message_count}: {data_point.data.get('tradeId')}")

    def on_error(error):
        nonlocal error_count
        error_count += 1
        print(f"Error #{error_count}: {error}")

    # Create and configure collector
    collector = create_okx_collector(
        symbol='BTC-USDT',
        data_types=[DataType.TRADE],
        auto_restart=True
    )

    collector.add_data_callback(DataType.TRADE, on_trade)

    # Test lifecycle
    print("Starting collector...")
    await collector.start()

    print("Collecting data for 30 seconds...")
    await asyncio.sleep(30)

    print("Stopping collector...")
    await collector.stop()

    # Check results
    status = collector.get_status()
    print(f"Final status: {status['status']}")
    print(f"Messages processed: {status.get('messages_processed', 0)}")
    print(f"Errors: {status.get('errors', 0)}")

    assert message_count > 0, "No messages received"
    assert error_count == 0, f"Unexpected errors: {error_count}"

    print("Test passed!")

# Run test
asyncio.run(test_okx_collector())
```

## Production Deployment

### Recommended Configuration

```python
# Production-ready OKX collector setup
import asyncio
from data.collector_manager import CollectorManager
from data.exchanges import create_okx_collector
from data.base_collector import DataType

async def deploy_okx_production():
    """Production deployment configuration."""

    # Create manager with appropriate settings
    manager = CollectorManager(
        manager_name="okx_production",
        global_health_check_interval=30.0,  # Check every 30 seconds
        restart_delay=10.0                  # Wait 10 seconds between restarts
    )

    # Production trading pairs
    trading_pairs = [
        'BTC-USDT', 'ETH-USDT', 'SOL-USDT',
        'DOGE-USDT', 'TON-USDT', 'UNI-USDT'
    ]

    # Create collectors with production settings
    for symbol in trading_pairs:
        collector = create_okx_collector(
            symbol=symbol,
            data_types=[DataType.TRADE, DataType.ORDERBOOK],
            auto_restart=True,
            health_check_interval=15.0,  # More frequent health checks
            store_raw_data=False         # Disable raw data storage in production
        )

        manager.add_collector(collector)

    # Start system
    await manager.start()

    # Production monitoring loop
    try:
        while True:
            await asyncio.sleep(60)  # Check every minute

            status = manager.get_status()
            stats = status.get('statistics', {})

            # Log production metrics
            print(f"=== Production Status ===")
            print(f"Running: {stats.get('running_collectors', 0)}/{len(trading_pairs)}")
            print(f"Failed: {stats.get('failed_collectors', 0)}")
            print(f"Total restarts: {stats.get('restarts_performed', 0)}")

            # Alert on failures
            failed_count = stats.get('failed_collectors', 0)
            if failed_count > 0:
                print(f"ALERT: {failed_count} collectors failed!")
                # Implement alerting system here

    except KeyboardInterrupt:
        print("Shutting down production system...")
        await manager.stop()
        print("Production system stopped")

# Deploy to production
asyncio.run(deploy_okx_production())
```

### Docker Deployment

```dockerfile
# Dockerfile for OKX collector
FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

# Production command
CMD ["python", "-m", "scripts.deploy_okx_production"]
```

### Environment Variables

```bash
# Production environment variables
export LOG_LEVEL=INFO
export OKX_ENV=production
export HEALTH_CHECK_INTERVAL=30
export AUTO_RESTART=true
export STORE_RAW_DATA=false
export DATABASE_URL=postgresql://user:pass@host:5432/db
```

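Reading these variables back into collector settings is a small mapping exercise; a sketch using `os.getenv` (the helper name and the treatment of booleans as lowercase `"true"` strings are assumptions, matching the values shown above):

```python
import os

def settings_from_env() -> dict:
    """Map the environment variables above onto collector settings.
    Defaults mirror the JSON configuration's values."""
    return {
        'log_level': os.getenv('LOG_LEVEL', 'INFO'),
        'health_check_interval': float(os.getenv('HEALTH_CHECK_INTERVAL', '30')),
        'auto_restart': os.getenv('AUTO_RESTART', 'true').lower() == 'true',
        'store_raw_data': os.getenv('STORE_RAW_DATA', 'false').lower() == 'true',
    }

print(settings_from_env())
```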
## API Reference

### OKXCollector Class

```python
|
||||
class OKXCollector(BaseDataCollector):
|
||||
def __init__(self,
|
||||
symbol: str,
|
||||
data_types: Optional[List[DataType]] = None,
|
||||
component_name: Optional[str] = None,
|
||||
auto_restart: bool = True,
|
||||
health_check_interval: float = 30.0,
|
||||
store_raw_data: bool = True):
|
||||
"""
|
||||
Initialize OKX collector.
|
||||
|
||||
Args:
|
||||
symbol: Trading symbol (e.g., 'BTC-USDT')
|
||||
data_types: Data types to collect (default: [TRADE, ORDERBOOK])
|
||||
component_name: Name for logging (default: auto-generated)
|
||||
auto_restart: Enable automatic restart on failures
|
||||
health_check_interval: Seconds between health checks
|
||||
store_raw_data: Whether to store raw OKX data
|
||||
"""
|
||||
```

## Key Components

The OKX collector consists of three main components working together:

### `OKXCollector`

- **Main class**: `OKXCollector(BaseDataCollector)`
- **Responsibilities**:
  - Manages WebSocket connection state
  - Subscribes to required data channels
  - Dispatches raw messages to the data processor
  - Stores standardized data in the database
  - Provides health and status monitoring

### `OKXWebSocketClient`

- **Role**: Handles all WebSocket communication with OKX
- **Responsibilities**:
  - Manages connection, reconnection, and ping/pong
  - Decodes incoming messages
  - Handles authentication for private channels

### `OKXDataProcessor`

- **Role**: Data validation and transformation pipeline (new in v2.0)
- **Responsibilities**:
  - Validates incoming raw data from WebSocket
  - Transforms data into standardized `StandardizedTrade` and `OHLCVCandle` formats
  - Aggregates trades into OHLCV candles
  - Invokes callbacks for processed trades and completed candles

## Configuration

### `OKXCollector` Configuration

Configuration options for the `OKXCollector` class:

| Parameter | Type | Default | Description |
|-------------------------|---------------------|---------------------------------------|-----------------------------------------------------------------------------|
| `symbol` | `str` | - | Trading symbol (e.g., `BTC-USDT`) |
| `data_types` | `List[DataType]` | `[TRADE, ORDERBOOK]` | List of data types to collect |
| `auto_restart` | `bool` | `True` | Automatically restart on failures |
| `health_check_interval` | `float` | `30.0` | Seconds between health checks |
| `store_raw_data` | `bool` | `True` | Store raw WebSocket data for debugging |
| `force_update_candles` | `bool` | `False` | If `True`, update existing candles; if `False`, keep existing ones unchanged |
| `logger` | `Logger` | `None` | Logger instance for conditional logging |
| `log_errors_only` | `bool` | `False` | If `True` and logger provided, only log error-level messages |

### Health & Status Monitoring

```python
status = collector.get_status()
print(json.dumps(status, indent=2))
```

Example output:

```json
{
  "component_name": "okx_collector_btc_usdt",
  "status": "running",
  "uptime": "0:10:15.123456",
  "symbol": "BTC-USDT",
  "data_types": ["trade", "orderbook"],
  "connection_state": "connected",
  "last_health_check": "2023-11-15T10:30:00Z",
  "message_count": 1052,
  "processed_trades": 512,
  "processed_candles": 10,
  "error_count": 2
}
```
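
A monitoring script could act on such a payload directly; a small self-contained sketch (the health criteria and error threshold are assumptions, not project logic):

```python
import json

# Status payload shaped like the example above.
status_json = '''{
  "component_name": "okx_collector_btc_usdt",
  "status": "running",
  "connection_state": "connected",
  "message_count": 1052,
  "error_count": 2
}'''

def is_healthy(status: dict, max_errors: int = 10) -> bool:
    """A collector counts as healthy when it is running, connected,
    and below an error threshold (the threshold is an assumption)."""
    return (
        status.get("status") == "running"
        and status.get("connection_state") == "connected"
        and status.get("error_count", 0) <= max_errors
    )

status = json.loads(status_json)
print(is_healthy(status))  # True
```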

## Database Integration

412
docs/modules/logging.md
Normal file
@@ -0,0 +1,412 @@
# Unified Logging System

The TCP Dashboard project uses a unified logging system built on Python's standard `logging` library. It provides consistent, centralized logging across all components.

## Key Features

- **Component-based Logging**: Each component (e.g., `bot_manager`, `data_collector`) gets its own dedicated logger, with logs organized into separate directories under `logs/`.
- **Standardized & Simple**: Relies on standard Python `logging` handlers, making it robust and easy to maintain.
- **Date-based Rotation**: Log files are automatically rotated daily at midnight by `TimedRotatingFileHandler`.
- **Automatic Cleanup**: Log file retention is managed automatically based on the number of backup files to keep (`backupCount`), preventing excessive disk usage.
- **Unified Format**: All log messages follow a detailed, consistent format: `[YYYY-MM-DD HH:MM:SS - LEVEL - pathname:lineno - funcName] - message`.
- **Configurable Console Output**: Optional console output for real-time monitoring, configurable via function arguments or environment variables.
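
The unified format maps directly onto a standard `logging.Formatter`; a self-contained sketch of what an equivalent format string might look like (the exact string used by `utils.logger` may differ):

```python
import logging

# The unified format described above, expressed as a logging.Formatter
# format string (field names follow the stdlib LogRecord attributes).
FMT = "[%(asctime)s - %(levelname)s - %(pathname)s:%(lineno)d - %(funcName)s] - %(message)s"
DATEFMT = "%Y-%m-%d %H:%M:%S"

formatter = logging.Formatter(FMT, datefmt=DATEFMT)
record = logging.LogRecord(
    name="bot_manager", level=logging.INFO, pathname="bot_manager.py",
    lineno=42, msg="Bot started successfully", args=(), exc_info=None,
    func="start",
)
line = formatter.format(record)
print(line)
```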

## Usage

### Getting a Logger

The primary way to get a logger is via the `get_logger` function. It is thread-safe and ensures that loggers are configured only once.

```python
from utils.logger import get_logger

# Get a logger for the bot manager component
# This will create a file logger and, if verbose=True, a console logger.
logger = get_logger('bot_manager', verbose=True)

logger.info("Bot started successfully")
logger.debug("Connecting to database...")
logger.warning("API response time is high")
logger.error("Failed to execute trade", exc_info=True)
```

### Configuration

The `get_logger` function accepts the following parameters:

| Parameter | Type | Default | Description |
|-------------------|---------------------|---------|-----------------------------------------------------------------------------|
| `component_name` | `str` | `default_logger` | Name of the component (e.g., `bot_manager`). Used for the logger name and directory. |
| `log_level` | `str` | `INFO` | The minimum logging level to be processed (DEBUG, INFO, WARNING, ERROR, CRITICAL). |
| `verbose` | `Optional[bool]` | `None` | If `True`, enables console logging. If `None`, uses `VERBOSE_LOGGING` or `LOG_TO_CONSOLE` from environment variables. |
| `max_log_files` | `int` | `30` | The maximum number of backup log files to keep. The core of the log cleanup mechanism. |
| `clean_old_logs` | `bool` | `True` | **Deprecated**. Kept for backward compatibility but has no effect. Cleanup is controlled by `max_log_files`. |

For centralized control, you can use environment variables:
- `VERBOSE_LOGGING`: Set to `true` to enable console logging for all loggers.
- `LOG_TO_CONSOLE`: An alias for `VERBOSE_LOGGING`.

### Log File Structure

The logger creates a directory for each component inside `logs/`. The main log file is named `component_name.log`. When rotated, old logs are renamed with a date suffix.

```
logs/
├── bot_manager/
│   ├── bot_manager.log (current log file)
│   └── bot_manager.log.2023-11-15
├── data_collector/
│   ├── data_collector.log
│   └── data_collector.log.2023-11-15
├── default_logger/
│   └── default_logger.log
└── test_component/
    └── test_component.log
```

## Advanced Usage

### Age-Based Log Cleanup

While the primary cleanup mechanism is count-based (via `max_log_files`), a separate utility function, `cleanup_old_logs`, is available for age-based cleanup if you have specific retention policies.

```python
from utils.logger import cleanup_old_logs

# Deletes all log files in the 'bot_manager' directory older than 15 days
cleanup_old_logs('bot_manager', days_to_keep=15)
```
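
For illustration, age-based cleanup can be sketched with the standard library alone (this is a standalone re-implementation for demonstration, not the project's `cleanup_old_logs`):

```python
import os
import tempfile
import time
from pathlib import Path

def cleanup_by_age(log_dir: Path, days_to_keep: int) -> int:
    """Delete *.log* files older than the cutoff; return how many were removed."""
    cutoff = time.time() - days_to_keep * 86400
    removed = 0
    for path in log_dir.glob("*.log*"):
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed += 1
    return removed

# Demo: one fresh file and one artificially aged rotated file.
tmp = Path(tempfile.mkdtemp())
fresh, old = tmp / "bot_manager.log", tmp / "bot_manager.log.2023-11-15"
fresh.write_text("new")
old.write_text("old")
aged = time.time() - 20 * 86400
os.utime(old, (aged, aged))

print(cleanup_by_age(tmp, days_to_keep=15))  # → 1, only the aged file is removed
```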

### Shutting Down Logging

In some cases, especially in tests or when an application is shutting down gracefully, you may need to explicitly close all log file handlers.

```python
from utils.logger import shutdown_logging

# Closes all open file handlers managed by the logging system
shutdown_logging()
```

## Component Integration Pattern (Conditional Logging)

While the logger utility is simple, it is designed to support a powerful conditional logging pattern at the application level. This allows components to be developed to run with or without logging, making them more flexible and easier to test.

### Key Concepts

1. **Optional Logging**: Components are designed to accept `logger=None` in their constructor and function normally without producing any logs.
2. **Error-Only Mode**: A component can be designed to only log messages of level `ERROR` or higher. This is a component-level implementation pattern, not a feature of `get_logger`.
3. **Logger Inheritance**: Parent components can pass their logger instance to child components, ensuring a consistent logging context.

### Example: Component Constructor

All major components should follow this pattern to support conditional logging.

```python
class ComponentExample:
    def __init__(self, logger=None, log_errors_only=False):
        self.logger = logger
        self.log_errors_only = log_errors_only

    def _log_info(self, message: str) -> None:
        """Log info message if logger is available and not in errors-only mode."""
        if self.logger and not self.log_errors_only:
            self.logger.info(message)

    def _log_error(self, message: str, exc_info: bool = False) -> None:
        """Log error message if logger is available."""
        if self.logger:
            self.logger.error(message, exc_info=exc_info)

    # ... other helper methods for debug, warning, critical ...
```

This pattern decouples the component's logic from the global logging configuration and makes its logging behavior explicit and easy to manage.
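
The pattern's behavior can be exercised end to end with a purely in-memory handler; a self-contained demo (the `ListHandler` class is illustrative test scaffolding, not project code):

```python
import logging

class ListHandler(logging.Handler):
    """Collects formatted messages in memory so the behavior can be inspected."""
    def __init__(self):
        super().__init__()
        self.messages = []
    def emit(self, record):
        self.messages.append(f"{record.levelname}:{record.getMessage()}")

class ComponentExample:
    # Same conditional-logging pattern, repeated so the demo is self-contained.
    def __init__(self, logger=None, log_errors_only=False):
        self.logger = logger
        self.log_errors_only = log_errors_only
    def _log_info(self, message):
        if self.logger and not self.log_errors_only:
            self.logger.info(message)
    def _log_error(self, message):
        if self.logger:
            self.logger.error(message)

handler = ListHandler()
logger = logging.getLogger("demo_component")
logger.setLevel(logging.DEBUG)
logger.propagate = False
logger.addHandler(handler)

quiet = ComponentExample(logger=None)                        # no logging at all
errors_only = ComponentExample(logger=logger, log_errors_only=True)

quiet._log_info("dropped silently")
errors_only._log_info("suppressed in errors-only mode")
errors_only._log_error("connection lost")

print(handler.messages)  # ['ERROR:connection lost']
```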

## Troubleshooting

- **Permissions**: Ensure the application has write permissions to the `logs/` directory.
- **No Logs**: If file logging fails (e.g., due to permissions), a warning is printed to the console. If `verbose` is not enabled, no further logs will be produced. Ensure the `logs/` directory is writable.
- **Console Spam**: If the console is too noisy, set `verbose=False` when calling `get_logger` and ensure `VERBOSE_LOGGING` is not set to `true` in your environment.

## Best Practices

### 1. Component Naming

Use descriptive, consistent component names:
- `bot_manager` - for bot lifecycle management
- `data_collector` - for market data collection
- `strategies` - for trading strategies
- `backtesting` - for backtesting engine
- `dashboard` - for web dashboard

### 2. Log Level Guidelines

- **DEBUG**: Detailed diagnostic information, typically only of interest when diagnosing problems
- **INFO**: General information about program execution
- **WARNING**: Something unexpected happened, but the program is still working
- **ERROR**: A serious problem occurred, the program couldn't perform a function
- **CRITICAL**: A serious error occurred, the program may not be able to continue
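
These five levels correspond to the standard library's numeric values, which is what makes threshold filtering work; a quick sketch:

```python
import logging

# The five guideline levels and their stdlib numeric values; a logger set to
# a threshold processes records at or above that number.
levels = ["DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"]
numeric = {name: getattr(logging, name) for name in levels}
print(numeric)  # {'DEBUG': 10, 'INFO': 20, 'WARNING': 30, 'ERROR': 40, 'CRITICAL': 50}

# Which levels survive an INFO threshold (the production default above):
passed = [name for name in levels if numeric[name] >= logging.INFO]
print(passed)  # ['INFO', 'WARNING', 'ERROR', 'CRITICAL']
```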

### 3. Verbose Logging Guidelines

```python
# Development: Use verbose logging with DEBUG level
dev_logger = get_logger('component', 'DEBUG', verbose=True, max_log_files=3)

# Production: Use INFO level with no console output
prod_logger = get_logger('component', 'INFO', verbose=False, max_log_files=30)

# Testing: raise the retention count to preserve test logs
# (clean_old_logs is deprecated; retention is controlled by max_log_files)
test_logger = get_logger('test_component', 'DEBUG', verbose=True, max_log_files=100)
```

### 4. Log Retention Guidelines

```python
# High-frequency components (data collectors): shorter retention
data_logger = get_logger('data_collector', max_log_files=7)

# Important components (bot managers): longer retention
bot_logger = get_logger('bot_manager', max_log_files=30)

# Development: very short retention
dev_logger = get_logger('dev_component', max_log_files=3)
```

### 5. Message Content

```python
# Good: Descriptive and actionable
logger.error("Failed to connect to OKX API: timeout after 30s")

# Bad: Vague and unhelpful
logger.error("Error occurred")

# Good: Include relevant context
logger.info(f"Bot {bot_id} executed trade: {symbol} {side} {quantity}@{price}")

# Good: Include duration for performance monitoring
start_time = time.time()
# ... do work ...
duration = time.time() - start_time
logger.info(f"Data aggregation completed in {duration:.2f}s")
```

### 6. Exception Handling

```python
try:
    execute_trade(symbol, quantity, price)
    logger.info(f"Trade executed successfully: {symbol}")
except APIError as e:
    logger.error(f"API error during trade execution: {e}", exc_info=True)
    raise
except ValidationError as e:
    logger.warning(f"Trade validation failed: {e}")
    return False
except Exception as e:
    logger.critical(f"Unexpected error during trade execution: {e}", exc_info=True)
    raise
```

### 7. Performance Considerations

```python
# Good: Efficient string formatting
logger.debug(f"Processing {len(data)} records")

# Avoid: Expensive operations in log messages unless necessary
# logger.debug(f"Data: {expensive_serialization(data)}")  # Only if needed

# Better: Check log level first for expensive operations
if logger.isEnabledFor(logging.DEBUG):
    logger.debug(f"Data: {expensive_serialization(data)}")
```
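
An alternative to the explicit level check is `%`-style lazy formatting, where the stdlib defers string interpolation until a handler actually emits the record; a self-contained demonstration (the `Expensive` class is illustrative):

```python
import io
import logging

calls = {"count": 0}

class Expensive:
    """__repr__ stands in for a costly serialization; counts invocations."""
    def __repr__(self):
        calls["count"] += 1
        return "<big payload>"

stream = io.StringIO()
logger = logging.getLogger("perf_demo")
logger.propagate = False
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.INFO)  # DEBUG is disabled

# %-style lazy formatting: the repr only runs if the record is actually emitted.
logger.debug("Data: %r", Expensive())  # suppressed: __repr__ never called
logger.info("Data: %r", Expensive())   # emitted: __repr__ called once

print(calls["count"])  # 1
```

Note that lazy formatting only defers the interpolation (here, the `repr` call); an expensive function call passed as an argument would still run, which is why the `isEnabledFor` guard above remains useful.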

## Migration Guide

### Updating Existing Components

1. **Add logger parameter to constructor**:
   ```python
   def __init__(self, ..., logger=None, log_errors_only=False):
   ```

2. **Add conditional logging helpers**:
   ```python
   def _log_debug(self, message: str) -> None:
       if self.logger and not self.log_errors_only:
           self.logger.debug(message)
   ```

3. **Update all logging calls**:
   ```python
   # Before
   self.logger.info("Message")

   # After
   self._log_info("Message")
   ```

4. **Pass logger to child components**:
   ```python
   child = ChildComponent(logger=self.logger)
   ```

### From Standard Logging

```python
# Old logging (if any existed)
import logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# New unified logging
from utils.logger import get_logger
logger = get_logger('component_name', verbose=True)
```

### Gradual Adoption

1. **Phase 1**: Add optional logger parameters to new components
2. **Phase 2**: Update existing components to support conditional logging
3. **Phase 3**: Implement hierarchical logging structure
4. **Phase 4**: Add error-only logging mode

## Testing

### Testing Conditional Logging

#### Test Script Example

```python
# test_conditional_logging.py
from utils.logger import get_logger
from data.collector_manager import CollectorManager
from data.exchanges.okx.collector import OKXCollector

def test_no_logging():
    """Test components work without loggers."""
    manager = CollectorManager(logger=None)
    collector = OKXCollector("BTC-USDT", logger=None)
    print("✓ No logging test passed")

def test_with_logging():
    """Test components work with loggers."""
    logger = get_logger('test_system')
    manager = CollectorManager(logger=logger)
    collector = OKXCollector("BTC-USDT", logger=logger)
    print("✓ With logging test passed")

def test_error_only():
    """Test error-only logging mode."""
    logger = get_logger('test_errors')
    collector = OKXCollector("BTC-USDT", logger=logger, log_errors_only=True)
    print("✓ Error-only logging test passed")

if __name__ == "__main__":
    test_no_logging()
    test_with_logging()
    test_error_only()
    print("✅ All conditional logging tests passed!")
```

### Testing Changes

```python
# Test without logger
component = MyComponent(logger=None)
# Should work without errors, no logging

# Test with logger
logger = get_logger('test_component')
component = MyComponent(logger=logger)
# Should log normally

# Test error-only mode
component = MyComponent(logger=logger, log_errors_only=True)
# Should only log errors
```

### Basic System Test

Run a simple test to verify the logging system:

```bash
python -c "from utils.logger import get_logger; logger = get_logger('test', verbose=True); logger.info('Test message'); print('Check logs/test/ directory')"
```

## Troubleshooting

### Common Issues

1. **Permission errors**: Ensure the application has write permissions to the project directory
2. **Disk space**: Monitor disk usage and adjust log retention with `max_log_files`
3. **Threading issues**: The logger is thread-safe, but check for application-level concurrency issues
4. **Too many console messages**: Adjust `verbose` parameter or log levels

### Debug Mode

Enable debug logging to troubleshoot issues:

```python
logger = get_logger('component_name', 'DEBUG', verbose=True)
```

### Console Output Issues

```python
# Force console output regardless of environment
logger = get_logger('component_name', verbose=True)

# Check environment variables
import os
print(f"VERBOSE_LOGGING: {os.getenv('VERBOSE_LOGGING')}")
print(f"LOG_TO_CONSOLE: {os.getenv('LOG_TO_CONSOLE')}")
```

### Fallback Logging

If file logging fails, the system automatically falls back to console logging with a warning message.

## Integration with Existing Code

The logging system is designed to be gradually adopted:

1. **Start with new modules**: Use the unified logger in new code
2. **Replace existing logging**: Gradually migrate existing logging to the unified system
3. **No breaking changes**: Existing code continues to work

## Maintenance

### Automatic Cleanup Benefits

The automatic cleanup feature provides several benefits:
- **Disk space management**: Prevents log directories from growing indefinitely
- **Performance**: Fewer files to scan in log directories
- **Maintenance-free**: No need for external cron jobs or scripts
- **Component-specific**: Each component can have different retention policies

### Manual Cleanup for Special Cases

For cases requiring age-based cleanup instead of count-based:

```python
# cleanup_logs.py
from utils.logger import cleanup_old_logs

components = ['bot_manager', 'data_collector', 'strategies', 'dashboard']
for component in components:
    cleanup_old_logs(component, days_to_keep=30)
```

### Monitoring Disk Usage

Monitor the `logs/` directory size and adjust retention policies as needed:

```bash
# Check log directory size
du -sh logs/

# Find large log files
find logs/ -name "*.log*" -size +10M

# Count log files per component
find logs/ -name "*.log*" | cut -d'/' -f2 | sort | uniq -c
```
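
The same accounting can be done portably from Python; a minimal sketch that sums `*.log*` file sizes per component directory (demonstrated against a throwaway directory, not the real `logs/`):

```python
import tempfile
from pathlib import Path

def logs_usage(root: Path) -> dict:
    """Total bytes of *.log* files per component directory under logs/."""
    usage = {}
    for component_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        size = sum(f.stat().st_size for f in component_dir.glob("*.log*"))
        usage[component_dir.name] = size
    return usage

# Demo against a throwaway directory shaped like the layout above.
root = Path(tempfile.mkdtemp())
(root / "bot_manager").mkdir()
(root / "bot_manager" / "bot_manager.log").write_text("x" * 100)
(root / "data_collector").mkdir()
(root / "data_collector" / "data_collector.log").write_text("y" * 40)

print(logs_usage(root))  # {'bot_manager': 100, 'data_collector': 40}
```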

This conditional logging system provides maximum flexibility while maintaining clean, maintainable code that works in all scenarios.

858
docs/modules/services/data_collection_service.md
Normal file
@@ -0,0 +1,858 @@

# Data Collection Service

**Service for collecting and storing real-time market data from multiple exchanges.**

## Architecture Overview

The data collection service uses a **manager-worker architecture** to collect data for multiple trading pairs concurrently.

- **`CollectorManager`**: The central manager responsible for creating, starting, stopping, and monitoring individual data collectors.
- **`OKXCollector`**: A dedicated worker responsible for collecting data for a single trading pair from the OKX exchange.

This architecture allows for high scalability and fault tolerance.

## Key Components

### `CollectorManager`

- **Location**: `tasks/collector_manager.py`
- **Responsibilities**:
  - Manages the lifecycle of multiple collectors
  - Provides a unified API for controlling all collectors
  - Monitors the health of each collector
  - Distributes tasks and aggregates results

### `OKXCollector`

- **Location**: `data/exchanges/okx/collector.py`
- **Responsibilities**:
  - Connects to the OKX WebSocket API
  - Subscribes to real-time data channels
  - Processes and standardizes incoming data
  - Stores data in the database

## Configuration

The service is configured through `config/bot_configs/data_collector_config.json`:

```json
{
  "service_name": "data_collection_service",
  "enabled": true,
  "manager_config": {
    "component_name": "collector_manager",
    "health_check_interval": 60,
    "log_level": "INFO",
    "verbose": true
  },
  "collectors": [
    {
      "exchange": "okx",
      "symbol": "BTC-USDT",
      "data_types": ["trade", "orderbook"],
      "enabled": true
    },
    {
      "exchange": "okx",
      "symbol": "ETH-USDT",
      "data_types": ["trade"],
      "enabled": true
    }
  ]
}
```
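
Selecting which collectors to start from such a file is a simple filter over the `collectors` array; a self-contained sketch using an inline copy of the config (the helper name is illustrative):

```python
import json

config_text = '''{
  "service_name": "data_collection_service",
  "enabled": true,
  "collectors": [
    {"exchange": "okx", "symbol": "BTC-USDT", "data_types": ["trade", "orderbook"], "enabled": true},
    {"exchange": "okx", "symbol": "ETH-USDT", "data_types": ["trade"], "enabled": false}
  ]
}'''

def enabled_collectors(config: dict) -> list:
    """Return (exchange, symbol) pairs for collectors that should be started."""
    return [(c["exchange"], c["symbol"])
            for c in config.get("collectors", [])
            if c.get("enabled")]

config = json.loads(config_text)
print(enabled_collectors(config))  # [('okx', 'BTC-USDT')]
```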

## Usage

Start the service from the main application entry point:

```python
# main.py
import asyncio

from tasks.collector_manager import CollectorManager

async def main():
    manager = CollectorManager()
    await manager.start_all_collectors()

if __name__ == "__main__":
    asyncio.run(main())
```

## Health & Monitoring

The `CollectorManager` provides a `get_status()` method to monitor the health of all collectors.

## Features

- **Service Lifecycle Management**: Start, stop, and monitor data collection operations
- **JSON Configuration**: File-based configuration with automatic defaults
- **Clean Production Logging**: Only essential operational information
- **Health Monitoring**: Service-level health checks and auto-recovery
- **Graceful Shutdown**: Proper signal handling and cleanup
- **Multi-Exchange Orchestration**: Coordinate collectors across multiple exchanges
- **Production Ready**: Designed for 24/7 operation with monitoring

## Quick Start

### Basic Usage

```bash
# Start with default configuration (indefinite run)
python scripts/start_data_collection.py

# Run for 8 hours
python scripts/start_data_collection.py --hours 8

# Use custom configuration
python scripts/start_data_collection.py --config config/my_config.json
```

### Monitoring

```bash
# Check status once
python scripts/monitor_clean.py

# Monitor continuously every 60 seconds
python scripts/monitor_clean.py --interval 60
```

## Configuration

The service uses JSON configuration files with automatic default creation if none exists.

### Default Configuration Location

`config/data_collection.json`

### Configuration Structure

```json
{
  "exchanges": {
    "okx": {
      "enabled": true,
      "trading_pairs": [
        {
          "symbol": "BTC-USDT",
          "enabled": true,
          "data_types": ["trade"],
          "timeframes": ["1m", "5m", "15m", "1h"]
        },
        {
          "symbol": "ETH-USDT",
          "enabled": true,
          "data_types": ["trade"],
          "timeframes": ["1m", "5m", "15m", "1h"]
        }
      ]
    }
  },
  "collection_settings": {
    "health_check_interval": 120,
    "store_raw_data": true,
    "auto_restart": true,
    "max_restart_attempts": 3
  },
  "logging": {
    "level": "INFO",
    "log_errors_only": true,
    "verbose_data_logging": false
  }
}
```

### Configuration Options

#### Exchange Settings

- **enabled**: Whether to enable this exchange
- **trading_pairs**: Array of trading pair configurations

#### Trading Pair Settings

- **symbol**: Trading pair symbol (e.g., "BTC-USDT")
- **enabled**: Whether to collect data for this pair
- **data_types**: Types of data to collect (["trade"], ["ticker"], etc.)
- **timeframes**: Candle timeframes to generate (["1m", "5m", "15m", "1h", "4h", "1d"])

#### Collection Settings

- **health_check_interval**: Health check frequency in seconds
- **store_raw_data**: Whether to store raw trade data
- **auto_restart**: Enable automatic restart on failures
- **max_restart_attempts**: Maximum restart attempts before giving up

#### Logging Settings

- **level**: Log level ("DEBUG", "INFO", "WARNING", "ERROR")
- **log_errors_only**: Only log errors and essential events
- **verbose_data_logging**: Enable verbose logging of individual trades/candles

## Service Architecture

### Service Layer Components

```
┌─────────────────────────────────────────────────┐
│              DataCollectionService              │
│   ┌─────────────────────────────────────────┐   │
│   │        Configuration Manager            │   │
│   │  • JSON config loading/validation       │   │
│   │  • Default config generation            │   │
│   │  • Runtime config updates               │   │
│   └─────────────────────────────────────────┘   │
│   ┌─────────────────────────────────────────┐   │
│   │           Service Monitor               │   │
│   │  • Service-level health checks          │   │
│   │  • Uptime tracking                      │   │
│   │  • Error aggregation                    │   │
│   └─────────────────────────────────────────┘   │
│                       │                         │
│   ┌─────────────────────────────────────────┐   │
│   │           CollectorManager              │   │
│   │  • Individual collector management      │   │
│   │  • Health monitoring                    │   │
│   │  • Auto-restart coordination            │   │
│   └─────────────────────────────────────────┘   │
└─────────────────────────────────────────────────┘
                        │
          ┌─────────────────────────────┐
          │    Core Data Collectors     │
          │   (See data_collectors.md)  │
          └─────────────────────────────┘
```

### Data Flow

```
Configuration → Service → CollectorManager → Data Collectors → Database
                   ↓              ↓
           Service Monitor   Health Monitor
```

### Storage Integration

- **Raw Data**: PostgreSQL `raw_trades` table via repository pattern
- **Candles**: PostgreSQL `market_data` table with multiple timeframes
- **Real-time**: Redis pub/sub for live data distribution
- **Service Metrics**: Service uptime, error counts, collector statistics

## Logging Philosophy

The service implements **clean production logging** focused on operational needs:

### What Gets Logged

✅ **Service Lifecycle**
- Service start/stop events
- Configuration loading
- Service initialization

✅ **Collector Orchestration**
- Collector creation and destruction
- Service-level health summaries
- Recovery operations

✅ **Configuration Events**
- Config file changes
- Runtime configuration updates
- Validation errors

✅ **Service Statistics**
- Periodic uptime reports
- Collection summary statistics
- Performance metrics

### What Doesn't Get Logged

❌ **Individual Data Points**
- Every trade received
- Every candle generated
- Raw market data

❌ **Internal Operations**
- Individual collector heartbeats
- Routine database operations
- Internal processing steps
## API Reference

### DataCollectionService

The main service class for managing data collection operations.

#### Constructor

```python
DataCollectionService(config_path: str = "config/data_collection.json")
```

**Parameters:**
- `config_path`: Path to JSON configuration file

#### Methods

##### `async run(duration_hours: Optional[float] = None) -> bool`

Run the service for a specified duration or indefinitely.

**Parameters:**
- `duration_hours`: Optional duration in hours (None = indefinite)

**Returns:**
- `bool`: True if successful, False if error occurred

**Example:**
```python
service = DataCollectionService()
await service.run(duration_hours=24)  # Run for 24 hours
```

##### `async start() -> bool`

Start the data collection service and all configured collectors.

**Returns:**
- `bool`: True if started successfully

##### `async stop() -> None`

Stop the service gracefully, including all collectors and cleanup.

##### `get_status() -> Dict[str, Any]`

Get current service status including uptime, collector counts, and errors.

**Returns:**
```python
{
    'service_running': True,
    'uptime_hours': 12.5,
    'collectors_total': 6,
    'collectors_running': 5,
    'collectors_failed': 1,
    'errors_count': 2,
    'last_error': 'Connection timeout for ETH-USDT',
    'configuration': {
        'config_file': 'config/data_collection.json',
        'exchanges_enabled': ['okx'],
        'total_trading_pairs': 6
    }
}
```
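
A dashboard or alerting hook can reduce this payload to a one-line summary; a minimal sketch over the example fields above (the summary format is illustrative):

```python
def summarize(status: dict) -> str:
    """Condense a get_status() payload into a one-line operational summary
    (field names follow the example payload above)."""
    running = status["collectors_running"]
    total = status["collectors_total"]
    state = "OK" if status["collectors_failed"] == 0 else "DEGRADED"
    return (f"{state}: {running}/{total} collectors up, "
            f"{status['errors_count']} errors, "
            f"uptime {status['uptime_hours']:.1f}h")

status = {
    "service_running": True,
    "uptime_hours": 12.5,
    "collectors_total": 6,
    "collectors_running": 5,
    "collectors_failed": 1,
    "errors_count": 2,
}
print(summarize(status))  # DEGRADED: 5/6 collectors up, 2 errors, uptime 12.5h
```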

##### `async initialize_collectors() -> bool`

Initialize all collectors based on configuration.

**Returns:**
- `bool`: True if all collectors initialized successfully

##### `load_configuration() -> Dict[str, Any]`

Load and validate configuration from file.

**Returns:**
- `dict`: Loaded configuration

### Standalone Function

#### `run_data_collection_service(config_path, duration_hours)`

```python
async def run_data_collection_service(
    config_path: str = "config/data_collection.json",
    duration_hours: Optional[float] = None
) -> bool
```

Convenience function to run the service with minimal setup.

**Parameters:**
- `config_path`: Path to configuration file
- `duration_hours`: Optional duration in hours

**Returns:**
- `bool`: True if successful
## Integration Examples

### Basic Service Integration

```python
import asyncio
from data.collection_service import DataCollectionService

async def main():
    service = DataCollectionService("config/my_config.json")

    # Run for 24 hours
    success = await service.run(duration_hours=24)

    if not success:
        print("Service encountered errors")

if __name__ == "__main__":
    asyncio.run(main())
```

### Custom Status Monitoring

```python
import asyncio
from data.collection_service import DataCollectionService

async def monitor_service():
    service = DataCollectionService()

    # Start service in background
    start_task = asyncio.create_task(service.run())

    # Monitor status every 5 minutes
    while service.running:
        status = service.get_status()
        print(f"Service Uptime: {status['uptime_hours']:.1f}h")
        print(f"Collectors: {status['collectors_running']}/{status['collectors_total']}")
        print(f"Errors: {status['errors_count']}")

        await asyncio.sleep(300)  # 5 minutes

    await start_task

asyncio.run(monitor_service())
```
|
||||
|
||||
### Programmatic Control
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
from data.collection_service import DataCollectionService
|
||||
|
||||
async def controlled_collection():
|
||||
service = DataCollectionService()
|
||||
|
||||
try:
|
||||
# Initialize and start
|
||||
await service.initialize_collectors()
|
||||
await service.start()
|
||||
|
||||
# Monitor and control
|
||||
while True:
|
||||
status = service.get_status()
|
||||
|
||||
# Check if any collectors failed
|
||||
if status['collectors_failed'] > 0:
|
||||
print("Some collectors failed, checking health...")
|
||||
# Service auto-restart will handle this
|
||||
|
||||
await asyncio.sleep(60) # Check every minute
|
||||
|
||||
except KeyboardInterrupt:
|
||||
print("Shutting down service...")
|
||||
finally:
|
||||
await service.stop()
|
||||
|
||||
asyncio.run(controlled_collection())
|
||||
```
|
||||
|
||||
### Configuration Management
|
||||
|
||||
```python
|
||||
import asyncio
|
||||
import json
|
||||
from data.collection_service import DataCollectionService
|
||||
|
||||
async def dynamic_configuration():
|
||||
service = DataCollectionService()
|
||||
|
||||
# Load and modify configuration
|
||||
config = service.load_configuration()
|
||||
|
||||
# Add new trading pair
|
||||
config['exchanges']['okx']['trading_pairs'].append({
|
||||
'symbol': 'SOL-USDT',
|
||||
'enabled': True,
|
||||
'data_types': ['trade'],
|
||||
'timeframes': ['1m', '5m']
|
||||
})
|
||||
|
||||
# Save updated configuration
|
||||
with open('config/data_collection.json', 'w') as f:
|
||||
json.dump(config, f, indent=2)
|
||||
|
||||
# Restart service with new config
|
||||
await service.stop()
|
||||
await service.start()
|
||||
|
||||
asyncio.run(dynamic_configuration())
|
||||
```

## Error Handling

The service implements robust error handling at the service orchestration level:

### Service Level Errors

- **Configuration Errors**: Invalid JSON, missing required fields
- **Initialization Errors**: Failed collector creation, database connectivity
- **Runtime Errors**: Service-level exceptions, resource exhaustion

### Error Recovery Strategies

1. **Graceful Degradation**: Continue with healthy collectors
2. **Configuration Validation**: Validate before applying changes
3. **Service Restart**: Full service restart on critical errors
4. **Error Aggregation**: Collect and report errors across all collectors
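
The restart strategy can be sketched as a small supervisor loop. Everything here is illustrative: `get_status` and `restart_service` are stand-ins for the real service hooks, and the 50% failure threshold is an assumed policy, not the service's documented behavior.

```python
import asyncio

# Assumed policy: restart the whole service when more than half the
# collectors have failed; otherwise degrade gracefully and keep running.
FAILED_FRACTION_RESTART = 0.5

async def supervise(get_status, restart_service, interval_s=60.0, max_checks=None):
    """Poll a status dict (shaped like get_status() above) and restart on critical failure."""
    checks = 0
    while max_checks is None or checks < max_checks:
        status = get_status()
        total = status.get('collectors_total', 0)
        failed = status.get('collectors_failed', 0)
        if total and failed / total > FAILED_FRACTION_RESTART:
            # Critical: too many collectors down -> full service restart
            await restart_service()
        # Otherwise: graceful degradation, continue with healthy collectors
        checks += 1
        if max_checks is None or checks < max_checks:
            await asyncio.sleep(interval_s)
```

`max_checks` exists only so the sketch can terminate in tests; a production loop would run until shutdown.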

### Error Reporting

```python
# Service status includes error information
status = service.get_status()

if status['errors_count'] > 0:
    print(f"Service has {status['errors_count']} errors")
    print(f"Last error: {status['last_error']}")

# Get detailed error information from collectors
for collector_name in service.manager.list_collectors():
    collector_status = service.manager.get_collector_status(collector_name)
    if collector_status['status'] == 'error':
        print(f"Collector {collector_name}: {collector_status['statistics']['last_error']}")
```

## Testing

### Running Service Tests

```bash
# Run all data collection service tests
uv run pytest tests/test_data_collection_service.py -v

# Run specific test categories
uv run pytest tests/test_data_collection_service.py::TestDataCollectionService -v

# Run with coverage
uv run pytest tests/test_data_collection_service.py --cov=data.collection_service
```

### Test Coverage

The service test suite covers:
- Service initialization and configuration loading
- Collector orchestration and management
- Service lifecycle (start/stop/restart)
- Configuration validation and error handling
- Signal handling and graceful shutdown
- Status reporting and monitoring
- Error aggregation and recovery

### Mock Testing

```python
import pytest
from unittest.mock import AsyncMock, patch
from data.collection_service import DataCollectionService

@pytest.mark.asyncio
async def test_service_with_mock_collectors():
    with patch('data.collection_service.CollectorManager') as mock_manager:
        # Mock successful initialization; start() is awaited, so use AsyncMock
        mock_manager.return_value.start = AsyncMock(return_value=True)

        service = DataCollectionService()
        result = await service.start()

        assert result is True
        mock_manager.return_value.start.assert_awaited_once()
```

## Production Deployment

### Docker Deployment

```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY . .

# Install dependencies
RUN pip install uv
RUN uv pip install -r requirements.txt

# Create logs and config directories
RUN mkdir -p logs config

# Copy production configuration
COPY config/production.json config/data_collection.json

# Health check
HEALTHCHECK --interval=60s --timeout=10s --start-period=30s --retries=3 \
    CMD python scripts/health_check.py || exit 1

# Run service
CMD ["python", "scripts/start_data_collection.py", "--config", "config/data_collection.json"]
```

### Kubernetes Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: data-collection-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: data-collection-service
  template:
    metadata:
      labels:
        app: data-collection-service
    spec:
      containers:
        - name: data-collector
          image: crypto-dashboard/data-collector:latest
          ports:
            - containerPort: 8080
          env:
            - name: POSTGRES_HOST
              value: "postgres-service"
            - name: REDIS_HOST
              value: "redis-service"
          volumeMounts:
            - name: config-volume
              mountPath: /app/config
            - name: logs-volume
              mountPath: /app/logs
          livenessProbe:
            exec:
              command:
                - python
                - scripts/health_check.py
            initialDelaySeconds: 30
            periodSeconds: 60
      volumes:
        - name: config-volume
          configMap:
            name: data-collection-config
        - name: logs-volume
          emptyDir: {}
```

### Systemd Service

```ini
[Unit]
Description=Cryptocurrency Data Collection Service
After=network.target postgres.service redis.service
Requires=postgres.service redis.service

[Service]
Type=simple
User=crypto-collector
Group=crypto-collector
WorkingDirectory=/opt/crypto-dashboard
ExecStart=/usr/bin/python scripts/start_data_collection.py --config config/production.json
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=10
KillMode=mixed
TimeoutStopSec=30

# Environment
Environment=PYTHONPATH=/opt/crypto-dashboard
Environment=LOG_LEVEL=INFO

# Security
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ReadWritePaths=/opt/crypto-dashboard/logs

[Install]
WantedBy=multi-user.target
```

### Environment Configuration

```bash
# Production environment variables
export ENVIRONMENT=production
export POSTGRES_HOST=postgres.internal
export POSTGRES_PORT=5432
export POSTGRES_DB=crypto_dashboard
export POSTGRES_USER=dashboard_user
export POSTGRES_PASSWORD=secure_password
export REDIS_HOST=redis.internal
export REDIS_PORT=6379

# Service configuration
export DATA_COLLECTION_CONFIG=/etc/crypto-dashboard/data_collection.json
export LOG_LEVEL=INFO
export HEALTH_CHECK_INTERVAL=120
```
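
On the Python side, these variables can be consumed with a small helper. This is a sketch: the variable names mirror the export list above, but the defaults shown are illustrative, not the service's actual fallbacks.

```python
import os

def load_db_settings(env=os.environ):
    """Read the PostgreSQL settings exported above into a plain dict."""
    return {
        'host': env.get('POSTGRES_HOST', 'localhost'),
        'port': int(env.get('POSTGRES_PORT', '5432')),  # ports arrive as strings
        'database': env.get('POSTGRES_DB', 'crypto_dashboard'),
        'user': env.get('POSTGRES_USER', 'dashboard_user'),
        'password': env.get('POSTGRES_PASSWORD', ''),
    }
```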

## Monitoring and Alerting

### Metrics Collection

The service exposes metrics for monitoring systems:

```python
# Service metrics
service_uptime_hours = 24.5
collectors_running = 5
collectors_total = 6
errors_per_hour = 0.2
data_points_processed = 15000
```

### Health Checks

```python
# External health check endpoint
from data.collection_service import DataCollectionService

async def health_check():
    service = DataCollectionService()
    status = service.get_status()

    if not status['service_running']:
        return {'status': 'unhealthy', 'reason': 'service_stopped'}

    if status['collectors_failed'] > status['collectors_total'] * 0.5:
        return {'status': 'degraded', 'reason': 'too_many_failed_collectors'}

    return {'status': 'healthy'}
```

### Alerting Rules

```yaml
# Prometheus alerting rules
groups:
  - name: data_collection_service
    rules:
      - alert: DataCollectionServiceDown
        expr: up{job="data-collection-service"} == 0
        for: 5m
        annotations:
          summary: "Data collection service is down"

      - alert: TooManyFailedCollectors
        expr: collectors_failed / collectors_total > 0.5
        for: 10m
        annotations:
          summary: "More than 50% of collectors have failed"

      - alert: HighErrorRate
        expr: rate(errors_total[5m]) > 0.1
        for: 15m
        annotations:
          summary: "High error rate in data collection service"
```

## Performance Considerations

### Resource Usage

- **Memory**: ~150MB base + ~15MB per trading pair (including service overhead)
- **CPU**: Low (async I/O bound, service orchestration)
- **Network**: ~1KB/s per trading pair
- **Storage**: Service logs ~10MB/day
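
The memory figures above give a quick sizing rule of thumb; for example, the default six trading pairs land around 240MB. A trivial helper (the coefficients come straight from the approximate figures above):

```python
def estimated_memory_mb(trading_pairs: int,
                        base_mb: float = 150.0,
                        per_pair_mb: float = 15.0) -> float:
    """Back-of-envelope memory estimate: base footprint + per-pair overhead."""
    return base_mb + per_pair_mb * trading_pairs
```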

### Scaling Strategies

1. **Horizontal Scaling**: Multiple service instances with different configurations
2. **Configuration Partitioning**: Separate services by exchange or asset class
3. **Load Balancing**: Distribute trading pairs across service instances
4. **Regional Deployment**: Deploy closer to exchange data centers
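
Strategies 2 and 3 can be sketched with a deterministic hash partition. `assign_instance` is a hypothetical helper, not part of the service; any stable hash works, as long as every instance agrees on the mapping.

```python
from zlib import crc32

def assign_instance(symbol: str, num_instances: int) -> int:
    """Deterministically map a trading pair to a service instance index."""
    return crc32(symbol.encode()) % num_instances

# Split a pair list into per-instance shards
pairs = ['BTC-USDT', 'ETH-USDT', 'SOL-USDT', 'XRP-USDT']
shards = {}
for p in pairs:
    shards.setdefault(assign_instance(p, 2), []).append(p)
```

Each instance then runs with a configuration containing only its shard of `trading_pairs`.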

### Optimization Tips

1. **Configuration Tuning**: Optimize health check intervals and timeframes
2. **Resource Limits**: Set appropriate memory and CPU limits
3. **Batch Operations**: Use efficient database operations
4. **Monitoring Overhead**: Balance monitoring frequency with performance

## Troubleshooting

### Common Service Issues

#### Service Won't Start

```
❌ Failed to start data collection service
```

**Solutions:**
1. Check configuration file validity
2. Verify database connectivity
3. Ensure no port conflicts
4. Check file permissions

#### Configuration Loading Failed

```
❌ Failed to load config from config/data_collection.json: Invalid JSON
```

**Solutions:**
1. Validate JSON syntax
2. Check required fields
3. Verify file encoding (UTF-8)
4. Recreate default configuration
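
Step 1 can be done from the command line with Python's built-in `json.tool`, which reports the line and column of the first syntax error. The throwaway file below stands in for your real config path.

```shell
# Write a sample config, then check its JSON syntax
printf '{"exchanges": {"okx": {"enabled": true}}}' > /tmp/data_collection.json
python3 -m json.tool /tmp/data_collection.json > /dev/null && echo "valid JSON"
```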

#### No Collectors Created

```
❌ No collectors were successfully initialized
```

**Solutions:**
1. Check exchange configuration
2. Verify trading pair symbols
3. Check network connectivity
4. Review collector creation logs

### Debug Mode

Enable verbose service debugging:

```json
{
  "logging": {
    "level": "DEBUG",
    "log_errors_only": false,
    "verbose_data_logging": true
  }
}
```

### Service Diagnostics

```python
# Run diagnostic check
from data.collection_service import DataCollectionService

service = DataCollectionService()
status = service.get_status()

print(f"Service Running: {status['service_running']}")
print(f"Configuration File: {status['configuration']['config_file']}")
print(f"Collectors: {status['collectors_running']}/{status['collectors_total']}")

# Check individual collector health
for collector_name in service.manager.list_collectors():
    collector_status = service.manager.get_collector_status(collector_name)
    print(f"{collector_name}: {collector_status['status']}")
```

## Related Documentation

- [Data Collectors System](../components/data_collectors.md) - Core collector components
- [Logging System](../components/logging.md) - Logging configuration
- [Database Operations](../database/operations.md) - Database integration
- [Monitoring Guide](../monitoring/README.md) - System monitoring setup

339
docs/modules/technical-indicators.md
Normal file
@@ -0,0 +1,339 @@
# Technical Indicators Module

## Overview

The Technical Indicators module provides a modular, extensible system for calculating technical analysis indicators. It is designed to handle sparse OHLCV data efficiently, making it ideal for real-time trading applications.

## Architecture

### Package Structure
```
data/common/indicators/
├── __init__.py              # Package exports
├── technical.py             # Main facade class
├── base.py                  # Base indicator class
├── result.py                # Result container class
├── utils.py                 # Utility functions
└── implementations/         # Individual indicator implementations
    ├── __init__.py
    ├── sma.py               # Simple Moving Average
    ├── ema.py               # Exponential Moving Average
    ├── rsi.py               # Relative Strength Index
    ├── macd.py              # MACD
    └── bollinger.py         # Bollinger Bands
```

### Key Components

#### 1. Base Classes
- **BaseIndicator**: Abstract base class providing common functionality
  - Data preparation
  - Validation
  - Error handling
  - Logging

#### 2. Individual Indicators
Each indicator is implemented as a separate class inheriting from `BaseIndicator`:
- Focused responsibility
- Independent testing
- Easy maintenance
- Clear documentation

#### 3. TechnicalIndicators Facade
Main entry point providing:
- Unified interface
- Batch calculations
- Consistent error handling
- Data preparation

## Supported Indicators

### Simple Moving Average (SMA)
```python
from data.common.indicators import TechnicalIndicators

indicators = TechnicalIndicators()
results = indicators.sma(df, period=20, price_column='close')
```
- **Parameters**:
  - `period`: Number of periods (default: 20)
  - `price_column`: Column to average (default: 'close')

### Exponential Moving Average (EMA)
```python
results = indicators.ema(df, period=12, price_column='close')
```
- **Parameters**:
  - `period`: Number of periods (default: 20)
  - `price_column`: Column to average (default: 'close')

### Relative Strength Index (RSI)
```python
results = indicators.rsi(df, period=14, price_column='close')
```
- **Parameters**:
  - `period`: Number of periods (default: 14)
  - `price_column`: Column to analyze (default: 'close')

### Moving Average Convergence Divergence (MACD)
```python
results = indicators.macd(
    df,
    fast_period=12,
    slow_period=26,
    signal_period=9,
    price_column='close'
)
```
- **Parameters**:
  - `fast_period`: Fast EMA period (default: 12)
  - `slow_period`: Slow EMA period (default: 26)
  - `signal_period`: Signal line period (default: 9)
  - `price_column`: Column to analyze (default: 'close')

### Bollinger Bands
```python
results = indicators.bollinger_bands(
    df,
    period=20,
    std_dev=2.0,
    price_column='close'
)
```
- **Parameters**:
  - `period`: SMA period (default: 20)
  - `std_dev`: Standard deviation multiplier (default: 2.0)
  - `price_column`: Column to analyze (default: 'close')
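
The band formulas behind these parameters are standard: a rolling SMA for the middle band, plus and minus `std_dev` rolling standard deviations for the upper and lower bands. A minimal pandas sketch (column names here are illustrative; check the module's actual output columns against the Indicator Details section):

```python
import pandas as pd

def bollinger(close: pd.Series, period: int = 20, std_dev: float = 2.0) -> pd.DataFrame:
    """Standard Bollinger Bands: SMA middle band +/- std_dev rolling std."""
    mid = close.rolling(period).mean()
    std = close.rolling(period).std()
    return pd.DataFrame({
        'upper_band': mid + std_dev * std,
        'middle_band': mid,
        'lower_band': mid - std_dev * std,
    })
```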

## Usage Examples

### Basic Usage
```python
from data.common.indicators import TechnicalIndicators

# Initialize calculator
indicators = TechnicalIndicators(logger=my_logger)

# Calculate single indicator
sma_results = indicators.sma(df, period=20)

# Access results
for result in sma_results:
    print(f"Time: {result.timestamp}, SMA: {result.values['sma']}")
```

### Batch Calculations
```python
# Configure multiple indicators
config = {
    'sma_20': {'type': 'sma', 'period': 20},
    'ema_12': {'type': 'ema', 'period': 12},
    'rsi_14': {'type': 'rsi', 'period': 14},
    'macd': {
        'type': 'macd',
        'fast_period': 12,
        'slow_period': 26,
        'signal_period': 9
    }
}

# Calculate all at once
results = indicators.calculate_multiple_indicators(df, config)
```

### Dynamic Indicator Selection
```python
# Calculate any indicator by name
result = indicators.calculate(
    'macd',
    df,
    fast_period=12,
    slow_period=26,
    signal_period=9
)
```

## Data Structures

### IndicatorResult
```python
@dataclass
class IndicatorResult:
    timestamp: datetime                        # Right-aligned timestamp
    symbol: str                                # Trading symbol
    timeframe: str                             # Candle timeframe
    values: Dict[str, float]                   # Indicator values
    metadata: Optional[Dict[str, Any]] = None  # Calculation metadata
```
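
A quick illustration of constructing and reading the container. The dataclass is re-declared here so the snippet is self-contained; the real class lives in `data.common.indicators.result`.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any, Dict, Optional

# Standalone copy for illustration only
@dataclass
class IndicatorResult:
    timestamp: datetime
    symbol: str
    timeframe: str
    values: Dict[str, float]
    metadata: Optional[Dict[str, Any]] = None

r = IndicatorResult(
    timestamp=datetime(2024, 1, 1, tzinfo=timezone.utc),
    symbol='BTC-USDT',
    timeframe='1m',
    values={'sma': 42000.5},
)
```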

## Error Handling

The module provides comprehensive error handling:
- Input validation
- Data sufficiency checks
- Calculation error handling
- Detailed error logging

Example:
```python
try:
    results = indicators.rsi(df, period=14)
except Exception as e:
    logger.error(f"RSI calculation failed: {e}")
    results = []
```

## Performance Considerations

1. **Data Preparation**
   - Uses pandas for vectorized calculations
   - Handles sparse data efficiently
   - Maintains timestamp alignment

2. **Memory Usage**
   - Avoids unnecessary data copies
   - Cleans up temporary calculations
   - Uses efficient data structures

3. **Calculation Optimization**
   - Vectorized operations where possible
   - Minimal data transformations
   - Efficient algorithm implementations

## Testing

The module includes comprehensive tests:
- Unit tests for each indicator
- Integration tests for the facade
- Edge case handling
- Performance benchmarks

Run tests with:
```bash
uv run pytest tests/test_indicators.py
```

## Contributing

When adding new indicators:
1. Create a new class in `implementations/`
2. Inherit from `BaseIndicator`
3. Implement the `calculate` method
4. Add tests
5. Update documentation
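
As a sketch of steps 1-3, here is a hypothetical Weighted Moving Average indicator. It is written standalone rather than inheriting from `BaseIndicator`, whose exact interface should be checked in `base.py` before adapting this.

```python
import pandas as pd

class WeightedMovingAverage:
    """Hypothetical new indicator: linearly weighted moving average."""

    name = 'wma'

    def __init__(self, period: int = 20, price_column: str = 'close'):
        self.period = period
        self.price_column = price_column

    def calculate(self, df: pd.DataFrame) -> pd.DataFrame:
        # Linear weights 1..period; most recent price weighted highest
        weights = pd.Series(range(1, self.period + 1), dtype=float).values
        wma = (
            df[self.price_column]
            .rolling(self.period)
            .apply(lambda x: (x * weights).sum() / weights.sum(), raw=True)
        )
        return pd.DataFrame({'wma': wma}, index=df.index)
```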

See [Adding New Indicators](./adding-new-indicators.md) for detailed instructions.

## Key Features

- **DataFrame-Centric Design**: Operates directly on pandas DataFrames for performance and simplicity.
- **Vectorized Calculations**: Leverages pandas and numpy for high-speed computation.
- **Flexible `calculate` Method**: A single entry point for calculating any supported indicator by name.
- **Standardized Output**: All methods return a DataFrame containing the calculated indicator values, indexed by timestamp.
- **Modular Architecture**: Clear separation between calculation logic, result types, and utilities.

## Usage Examples

### Importing the Required Components

```python
from data.common.indicators import (
    TechnicalIndicators,
    IndicatorResult,
    create_default_indicators_config,
    validate_indicator_config
)
from data.common.data_types import OHLCVCandle
```

### Preparing the DataFrame

Before you can calculate indicators, you need a properly formatted pandas DataFrame. The `prepare_chart_data` utility is the recommended way to create one from a list of candle dictionaries.

```python
from components.charts.utils import prepare_chart_data
from data.common.indicators import TechnicalIndicators

# Assume 'candles' is a list of OHLCV dictionaries from the database
# candles = fetch_market_data(...)

# Prepare the DataFrame
df = prepare_chart_data(candles)

# df is now ready for indicator calculations.
# It has a DatetimeIndex and the necessary OHLCV columns.
```

### Basic Indicator Calculation

Once you have a prepared DataFrame, you can calculate indicators directly.

```python
# Initialize the calculator
indicators = TechnicalIndicators()

# Calculate a Simple Moving Average
sma_df = indicators.sma(df, period=20)

# Calculate an Exponential Moving Average
ema_df = indicators.ema(df, period=12)

# sma_df and ema_df are pandas DataFrames containing the results.
```

### Using the `calculate` Method

The most flexible way to compute an indicator is with the `calculate` method, which accepts the indicator type as a string.

```python
# Calculate RSI using the generic method
rsi_pkg = indicators.calculate('rsi', df, period=14)
if rsi_pkg:
    rsi_df = rsi_pkg['data']

# Calculate MACD with custom parameters
macd_pkg = indicators.calculate('macd', df, fast_period=10, slow_period=30, signal_period=8)
if macd_pkg:
    macd_df = macd_pkg['data']
```

### Using Different Price Columns

You can specify which price column (`open`, `high`, `low`, or `close`) to use for the calculation.

```python
# Calculate SMA on the 'high' price
sma_high_df = indicators.sma(df, period=20, price_column='high')

# Calculate RSI on the 'open' price
rsi_open_pkg = indicators.calculate('rsi', df, period=14, price_column='open')
```

## Indicator Details

The following details the parameters and the columns returned in the result DataFrame for each indicator.

### Simple Moving Average (SMA)

- **Parameters**: `period` (int), `price_column` (str, default: 'close')
- **Returned Columns**: `sma`

### Exponential Moving Average (EMA)

- **Parameters**: `period` (int), `price_column` (str, default: 'close')
- **Returned Columns**: `ema`

### Relative Strength Index (RSI)

- **Parameters**: `period` (int), `price_column` (str, default: 'close')
- **Returned Columns**: `rsi`

### MACD (Moving Average Convergence Divergence)

- **Parameters**: `fast_period` (int), `slow_period` (int), `signal_period` (int), `price_column` (str, default: 'close')
- **Returned Columns**: `macd`, `signal`, `histogram`

### Bollinger Bands

- **Parameters**: `period` (int), `std_dev` (float), `price_column` (str, default: 'close')
- **Returned Columns**: `upper_band`, `

165
docs/modules/transformation.md
Normal file
@@ -0,0 +1,165 @@
# Transformation Module

## Purpose
The transformation module provides safe and standardized data transformation utilities for crypto trading operations, with built-in safety limits and validations to prevent errors and protect against edge cases.

## Architecture
The module is organized into several submodules:

### safety.py
Provides safety limits and validations for trading operations:
- Trade size limits (min/max)
- Price deviation checks
- Symbol format validation
- Stablecoin-specific rules

### trade.py
Handles trade data transformation with comprehensive safety checks:
- Trade side normalization
- Size and price validation
- Symbol validation
- Market price deviation checks

## Safety Limits

### Default Limits
```python
DEFAULT_LIMITS = TradeLimits(
    min_size=Decimal('0.00000001'),      # 1 satoshi
    max_size=Decimal('10000.0'),         # 10K units
    min_notional=Decimal('1.0'),         # Min $1
    max_notional=Decimal('10000000.0'),  # Max $10M
    price_precision=8,
    size_precision=8,
    max_price_deviation=Decimal('30.0')  # 30%
)
```

### Stablecoin Pairs
```python
STABLECOIN_LIMITS = DEFAULT_LIMITS._replace(
    max_size=Decimal('1000000.0'),       # 1M units
    max_notional=Decimal('50000000.0'),  # $50M
    max_price_deviation=Decimal('5.0')   # 5%
)
```

### Volatile Pairs
```python
VOLATILE_LIMITS = DEFAULT_LIMITS._replace(
    max_size=Decimal('1000.0'),          # 1K units
    max_notional=Decimal('1000000.0'),   # $1M
    max_price_deviation=Decimal('50.0')  # 50%
)
```
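
How such limits are consumed can be sketched as a size/notional check. `TradeLimits` is re-declared minimally here so the snippet runs standalone (the real type lives in the safety submodule), and the error wording is an assumption modeled on the phrases matched in the Error Handling section.

```python
from decimal import Decimal
from typing import NamedTuple

# Minimal stand-in for the real TradeLimits namedtuple
class TradeLimits(NamedTuple):
    min_size: Decimal
    max_size: Decimal
    min_notional: Decimal
    max_notional: Decimal

LIMITS = TradeLimits(Decimal('0.00000001'), Decimal('10000.0'),
                     Decimal('1.0'), Decimal('10000000.0'))

def check_trade(size: Decimal, price: Decimal, limits: TradeLimits = LIMITS) -> None:
    """Raise ValueError if size or notional value falls outside the limits."""
    if size < limits.min_size:
        raise ValueError(f"size {size} below minimum {limits.min_size}")
    if size > limits.max_size:
        raise ValueError(f"size {size} exceeds maximum {limits.max_size}")
    notional = size * price
    if not (limits.min_notional <= notional <= limits.max_notional):
        raise ValueError(f"notional {notional} outside allowed range")
```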

## Usage Examples

### Basic Trade Transformation
```python
from data.common.transformation.trade import TradeTransformer

# Initialize transformer
transformer = TradeTransformer()

# Transform trade data
trade_data = {
    'symbol': 'BTC-USDT',
    'side': 'buy',
    'size': '1.5',
    'price': '50000'
}

try:
    transformed = transformer.transform_trade(trade_data)
    print(f"Transformed trade: {transformed}")
except ValueError as e:
    print(f"Validation error: {e}")
```

### With Market Price Validation
```python
from data.common.transformation.trade import TradeTransformer
from your_market_data_provider import MarketDataProvider

# Initialize with market data for price deviation checks
transformer = TradeTransformer(
    market_data_provider=MarketDataProvider()
)

# Transform with price validation
try:
    transformed = transformer.transform_trade({
        'symbol': 'ETH-USDT',
        'side': 'sell',
        'size': '10',
        'price': '2000'
    })
    print(f"Transformed trade: {transformed}")
except ValueError as e:
    print(f"Validation error: {e}")
```

## Error Handling

The module uses explicit error handling with descriptive messages:

```python
try:
    transformed = transformer.transform_trade(trade_data)
except ValueError as e:
    if "below minimum" in str(e):
        # Handle size too small
        pass
    elif "exceeds maximum" in str(e):
        # Handle size too large
        pass
    elif "deviation" in str(e):
        # Handle price deviation too large
        pass
    else:
        # Handle other validation errors
        pass
```

## Logging

The module integrates with Python's logging system for monitoring and debugging:

```python
import logging

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Transformer will log warnings when approaching limits
transformer = TradeTransformer()
```

## Testing

Run the test suite:
```bash
uv run pytest tests/common/transformation/test_safety.py -v
```

Key test areas:
- Trade size validation
- Price deviation checks
- Symbol format validation
- Stablecoin detection
- Edge case handling

## Dependencies
- Internal:
  - `data.common.types`
  - `data.common.validation`
- External:
  - Python's decimal module
  - Python's logging module

## Known Limitations
- Market price validation requires a market data provider
- Stablecoin detection is based on a predefined list
- Price deviation checks are percentage-based only

194
docs/modules/validation.md
Normal file
@@ -0,0 +1,194 @@
# Data Validation Module

## Purpose
The data validation module provides a robust, extensible framework for validating market data across different exchanges. It ensures data consistency, type safety, and business rule compliance through a modular validation system.

## Architecture

### Package Structure
```
data/common/validation/
├── __init__.py          # Public interface
├── result.py            # Validation result classes
├── field_validators.py  # Individual field validators
└── base.py              # BaseDataValidator class
```

### Core Components

#### ValidationResult
Represents the outcome of validating a single field or component:
```python
ValidationResult(
    is_valid: bool,             # Whether validation passed
    errors: List[str] = [],     # Error messages
    warnings: List[str] = [],   # Warning messages
    sanitized_data: Any = None  # Cleaned/normalized data
)
```

#### DataValidationResult
Represents the outcome of validating a complete data structure:
```python
DataValidationResult(
    is_valid: bool,
    errors: List[str],
    warnings: List[str],
    sanitized_data: Optional[Dict[str, Any]] = None
)
```

#### BaseDataValidator
Abstract base class providing common validation patterns for exchange-specific implementations:
```python
class BaseDataValidator(ABC):
    def __init__(self, exchange_name: str, component_name: str, logger: Optional[Logger])

    @abstractmethod
    def validate_symbol_format(self, symbol: str) -> ValidationResult

    @abstractmethod
    def validate_websocket_message(self, message: Dict[str, Any]) -> DataValidationResult
```

### Field Validators
Common validation functions for market data fields:
- `validate_price()`: Price value validation
- `validate_size()`: Size/quantity validation
- `validate_volume()`: Volume validation
- `validate_trade_side()`: Trade side validation
- `validate_timestamp()`: Timestamp validation
- `validate_trade_id()`: Trade ID validation
- `validate_symbol_match()`: Symbol matching validation
- `validate_required_fields()`: Required field presence validation
|
||||

## Usage Examples

### Creating an Exchange-Specific Validator

```python
import re

from data.common.validation import BaseDataValidator, ValidationResult


class OKXDataValidator(BaseDataValidator):
    def __init__(self, component_name: str = "okx_data_validator", logger=None):
        super().__init__("okx", component_name, logger)
        self._symbol_pattern = re.compile(r'^[A-Z0-9]+-[A-Z0-9]+$')

    def validate_symbol_format(self, symbol: str) -> ValidationResult:
        errors = []
        warnings = []

        if not isinstance(symbol, str):
            errors.append(f"Symbol must be a string, got {type(symbol)}")
            return ValidationResult(False, errors, warnings)

        if not self._symbol_pattern.match(symbol):
            errors.append(f"Invalid symbol format: {symbol}")

        return ValidationResult(len(errors) == 0, errors, warnings)
```

### Validating Trade Data

```python
def validate_trade(validator: BaseDataValidator, trade_data: Dict[str, Any]) -> Dict[str, Any]:
    result = validator.validate_trade_data(trade_data)

    if not result.is_valid:
        raise ValidationError(f"Trade validation failed: {result.errors}")

    if result.warnings:
        logger.warning(f"Trade validation warnings: {result.warnings}")

    return result.sanitized_data
```
## Configuration

### Validation Constants

The module defines several constants for validation rules:

```python
MIN_PRICE = Decimal('0.00000001')
MAX_PRICE = Decimal('1000000000')
MIN_SIZE = Decimal('0.00000001')
MAX_SIZE = Decimal('1000000000')
MIN_TIMESTAMP = 946684800000    # 2000-01-01 (ms)
MAX_TIMESTAMP = 32503680000000  # 3000-01-01 (ms)
VALID_TRADE_SIDES = {'buy', 'sell'}
```

### Regular Expression Patterns

```python
NUMERIC_PATTERN = re.compile(r'^-?\d*\.?\d+$')
TRADE_ID_PATTERN = re.compile(r'^[\w-]+$')
```
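To show how these constants and patterns combine, here is a simplified sketch of a price check; it is illustrative only, and the module's actual `validate_price()` signature and return type may differ:

```python
import re
from decimal import Decimal

# Constants mirrored from the module's documented values
MIN_PRICE = Decimal('0.00000001')
MAX_PRICE = Decimal('1000000000')
NUMERIC_PATTERN = re.compile(r'^-?\d*\.?\d+$')

def validate_price(value: str):
    """Validate a raw price string; return (is_valid, errors, sanitized_price)."""
    if not NUMERIC_PATTERN.match(value):
        return False, [f"Price is not numeric: {value!r}"], None
    price = Decimal(value)
    errors = []
    if price < MIN_PRICE:
        errors.append(f"Price below minimum: {price}")
    elif price > MAX_PRICE:
        errors.append(f"Price above maximum: {price}")
    return len(errors) == 0, errors, price if not errors else None
```

The pattern check runs first so that `Decimal()` is only called on input that is guaranteed to parse.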

## Testing

### Running Tests

```bash
pytest tests/test_data_validation.py -v
```

### Test Coverage

The validation module has comprehensive test coverage, including:

- Basic validation result functionality
- Field validator functions
- Base validator class
- Exchange-specific validator implementations
- Error handling and edge cases

## Dependencies

- Internal:
  - `data.common.data_types`
  - `data.base_collector`
- External:
  - `typing`
  - `decimal`
  - `logging`
  - `abc`

## Error Handling

### Common Validation Errors

- Invalid data type
- Value out of bounds
- Missing required fields
- Invalid format
- Symbol mismatch

### Error Response Format

```python
{
    'is_valid': False,
    'errors': ['Price must be positive', 'Size exceeds maximum'],
    'warnings': ['Price below recommended minimum'],
    'sanitized_data': None
}
```

## Best Practices

### Implementing New Validators

1. Extend `BaseDataValidator`
2. Implement the required abstract methods
3. Add exchange-specific validation rules
4. Reuse common field validators
5. Add comprehensive tests

### Validation Guidelines

- Always sanitize input data
- Include helpful error messages
- Use warnings for non-critical issues
- Maintain type safety
- Log validation failures appropriately

## Known Issues and Limitations

- Timestamp validation assumes millisecond precision
- Trade ID format is loosely validated
- Some exchanges may require custom numeric precision

## Future Improvements

- Add support for custom validation rules
- Implement async validation methods
- Add a validation rule configuration system
- Improve performance for high-frequency validation
- Add more exchange-specific validators
397
docs/reference/README.md
Normal file
@@ -0,0 +1,397 @@

# Reference Documentation

This section contains technical specifications, API references, and detailed documentation for the TCP Dashboard platform.

## 📋 Contents

### Technical Specifications

- **[Project Specification](specification.md)** - *Technical specifications and requirements*
  - System requirements and constraints
  - Database schema specifications
  - API endpoint definitions
  - Data format specifications
  - Integration requirements

- **[Aggregation Strategy](aggregation-strategy.md)** - *Comprehensive data aggregation documentation*
  - Right-aligned timestamp strategy (industry standard)
  - Future-leakage prevention safeguards
  - Real-time vs. historical processing
  - Database storage patterns
  - Testing methodology and examples

### API References

#### Data Collection APIs

```python
# BaseDataCollector API
class BaseDataCollector:
    async def start() -> bool
    async def stop(force: bool = False) -> None
    async def restart() -> bool
    def get_status() -> Dict[str, Any]
    def get_health_status() -> Dict[str, Any]
    def add_data_callback(data_type: DataType, callback: Callable) -> None

# CollectorManager API
class CollectorManager:
    def add_collector(collector: BaseDataCollector) -> None
    async def start() -> bool
    async def stop() -> None
    def get_status() -> Dict[str, Any]
    def list_collectors() -> List[str]
```

#### Exchange Factory APIs

```python
# Factory Pattern API
class ExchangeFactory:
    @staticmethod
    def create_collector(config: ExchangeCollectorConfig) -> BaseDataCollector

    @staticmethod
    def create_multiple_collectors(configs: List[ExchangeCollectorConfig]) -> List[BaseDataCollector]

    @staticmethod
    def get_supported_exchanges() -> List[str]

    @staticmethod
    def validate_config(config: ExchangeCollectorConfig) -> bool

# Configuration API
@dataclass
class ExchangeCollectorConfig:
    exchange: str
    symbol: str
    data_types: List[DataType]
    auto_restart: bool = True
    health_check_interval: float = 30.0
    store_raw_data: bool = True
    custom_params: Optional[Dict[str, Any]] = None
```
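A typical construction of the configuration above might look like the following sketch. The dataclass and enum are re-declared locally so the snippet is self-contained; in the real codebase they would be imported from the platform's modules, and the resulting config would be passed to `ExchangeFactory.create_collector`:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any, Dict, List, Optional

class DataType(Enum):  # mirrors the platform's DataType enum
    TRADE = "trade"
    ORDERBOOK = "orderbook"

@dataclass
class ExchangeCollectorConfig:  # mirrors the documented dataclass
    exchange: str
    symbol: str
    data_types: List[DataType]
    auto_restart: bool = True
    health_check_interval: float = 30.0
    store_raw_data: bool = True
    custom_params: Optional[Dict[str, Any]] = None

config = ExchangeCollectorConfig(
    exchange="okx",
    symbol="BTC-USDT",
    data_types=[DataType.TRADE, DataType.ORDERBOOK],
)
# In the real platform:
# collector = ExchangeFactory.create_collector(config)
```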

## 📊 Data Schemas

### Market Data Point

The standardized data structure for all market data:

```python
@dataclass
class MarketDataPoint:
    exchange: str         # Exchange name (e.g., 'okx', 'binance')
    symbol: str           # Trading symbol (e.g., 'BTC-USDT')
    timestamp: datetime   # Data timestamp (UTC)
    data_type: DataType   # Type of data (TRADE, ORDERBOOK, etc.)
    data: Dict[str, Any]  # Raw data payload
```

### Data Types

```python
class DataType(Enum):
    TICKER = "ticker"        # Price and volume updates
    TRADE = "trade"          # Individual trade executions
    ORDERBOOK = "orderbook"  # Order book snapshots
    CANDLE = "candle"        # OHLCV candle data
    BALANCE = "balance"      # Account balance updates
```

### Status Schemas

#### Collector Status

```python
{
    'exchange': str,                   # Exchange name
    'status': str,                     # Current status (running, stopped, error)
    'should_be_running': bool,         # Desired state
    'symbols': List[str],              # Configured symbols
    'data_types': List[str],           # Data types being collected
    'auto_restart': bool,              # Auto-restart enabled
    'health': {
        'time_since_heartbeat': float, # Seconds since last heartbeat
        'time_since_data': float,      # Seconds since last data
        'max_silence_duration': float  # Max allowed silence
    },
    'statistics': {
        'messages_received': int,      # Total messages received
        'messages_processed': int,     # Successfully processed
        'errors': int,                 # Error count
        'restarts': int,               # Restart count
        'uptime_seconds': float,       # Current uptime
        'reconnect_attempts': int,     # Current reconnect attempts
        'last_message_time': str,      # ISO timestamp
        'connection_uptime': str,      # Connection start time
        'last_error': str,             # Last error message
        'last_restart_time': str       # Last restart time
    }
}
```

#### Health Status

```python
{
    'is_healthy': bool,         # Overall health status
    'issues': List[str],        # List of current issues
    'status': str,              # Current collector status
    'last_heartbeat': str,      # Last heartbeat timestamp
    'last_data_received': str,  # Last data timestamp
    'should_be_running': bool,  # Expected state
    'is_running': bool          # Actual running state
}
```
## 🔧 Configuration Schemas

### Database Configuration

```json
{
  "database": {
    "url": "postgresql://user:pass@host:port/db",
    "pool_size": 10,
    "max_overflow": 20,
    "pool_timeout": 30,
    "pool_recycle": 3600
  },
  "tables": {
    "market_data": "market_data",
    "raw_trades": "raw_trades",
    "collector_status": "collector_status"
  }
}
```

### Exchange Configuration

```json
{
  "exchange": "okx",
  "connection": {
    "public_ws_url": "wss://ws.okx.com:8443/ws/v5/public",
    "ping_interval": 25.0,
    "pong_timeout": 10.0,
    "max_reconnect_attempts": 5,
    "reconnect_delay": 5.0
  },
  "data_collection": {
    "store_raw_data": true,
    "health_check_interval": 30.0,
    "auto_restart": true,
    "buffer_size": 1000
  },
  "trading_pairs": [
    {
      "symbol": "BTC-USDT",
      "enabled": true,
      "data_types": ["trade", "orderbook"],
      "channels": {
        "trades": "trades",
        "orderbook": "books5",
        "ticker": "tickers"
      }
    }
  ]
}
```

### Logging Configuration

```json
{
  "logging": {
    "level": "INFO",
    "format": "detailed",
    "console_output": true,
    "file_output": true,
    "cleanup": true,
    "max_files": 30,
    "log_directory": "./logs"
  },
  "components": {
    "data_collectors": {
      "level": "INFO",
      "verbose": false
    },
    "websocket_clients": {
      "level": "DEBUG",
      "verbose": true
    }
  }
}
```
## 🌐 Protocol Specifications

### WebSocket Message Formats

#### OKX Message Format

```json
{
  "arg": {
    "channel": "trades",
    "instId": "BTC-USDT"
  },
  "data": [
    {
      "instId": "BTC-USDT",
      "tradeId": "12345678",
      "px": "50000.5",
      "sz": "0.001",
      "side": "buy",
      "ts": "1697123456789"
    }
  ]
}
```

#### Subscription Message Format

```json
{
  "op": "subscribe",
  "args": [
    {
      "channel": "trades",
      "instId": "BTC-USDT"
    },
    {
      "channel": "books5",
      "instId": "BTC-USDT"
    }
  ]
}
```

### Database Schemas

#### Market Data Table

```sql
CREATE TABLE market_data (
    id SERIAL PRIMARY KEY,
    exchange VARCHAR(50) NOT NULL,
    symbol VARCHAR(50) NOT NULL,
    data_type VARCHAR(20) NOT NULL,
    timestamp TIMESTAMP WITH TIME ZONE NOT NULL,
    data JSONB NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- PostgreSQL defines indexes separately from the table
CREATE INDEX idx_market_data_exchange_symbol_ts
    ON market_data (exchange, symbol, timestamp);
CREATE INDEX idx_market_data_type_ts
    ON market_data (data_type, timestamp);
```

#### Raw Trades Table

```sql
CREATE TABLE raw_trades (
    id SERIAL PRIMARY KEY,
    exchange VARCHAR(50) NOT NULL,
    symbol VARCHAR(50) NOT NULL,
    trade_id VARCHAR(100),
    price DECIMAL(20, 8) NOT NULL,
    size DECIMAL(20, 8) NOT NULL,
    side VARCHAR(10) NOT NULL,
    timestamp TIMESTAMP WITH TIME ZONE NOT NULL,
    raw_data JSONB,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),

    UNIQUE (exchange, symbol, trade_id)
);

CREATE INDEX idx_raw_trades_exchange_symbol_ts
    ON raw_trades (exchange, symbol, timestamp);
CREATE INDEX idx_raw_trades_ts
    ON raw_trades (timestamp);
```
## 📈 Performance Specifications

### System Requirements

#### Minimum Requirements

- **CPU**: 2 cores, 2.0 GHz
- **Memory**: 4 GB RAM
- **Storage**: 20 GB available space
- **Network**: Stable internet connection (100 Mbps+)

#### Recommended Requirements

- **CPU**: 4+ cores, 3.0+ GHz
- **Memory**: 8+ GB RAM
- **Storage**: 100+ GB SSD
- **Network**: High-speed internet (1 Gbps+)

### Performance Targets

#### Data Collection

- **Latency**: < 100 ms from exchange to processing
- **Throughput**: 1,000+ messages/second per collector
- **Uptime**: 99.9% availability
- **Memory Usage**: < 50 MB per collector

#### Database Operations

- **Insert Rate**: 10,000+ inserts/second
- **Query Response**: < 100 ms for typical queries
- **Storage Growth**: ~1 GB/month per active trading pair
- **Retention**: 2+ years of historical data

## 🔒 Security Specifications

### Authentication & Authorization

- **API Keys**: Secure storage in environment variables
- **Database Access**: Connection pooling with authentication
- **WebSocket Connections**: TLS encryption for all connections
- **Logging**: No sensitive data in logs

### Data Protection

- **Encryption**: TLS 1.3 for all external communications
- **Data Validation**: Comprehensive input validation
- **Error Handling**: Secure error messages without data leakage
- **Backup**: Regular automated backups with encryption

## 🔗 Related Documentation

- **[Modules Documentation](../modules/)** - Implementation details
- **[Architecture Overview](../architecture.md)** - System design
- **[Exchange Documentation](../modules/exchanges/)** - Exchange integrations
- **[Setup Guide](../guides/)** - Configuration and deployment

## 📞 Support

### API Support

For API-related questions:

1. **Check Examples**: Review the code examples in each API section
2. **Test Endpoints**: Use the provided test scripts
3. **Validate Schemas**: Ensure data matches the specified formats
4. **Review Logs**: Check the detailed logs for API interactions

### Schema Validation

For data schema issues:

```python
# Validate data point structure
def validate_market_data_point(data_point):
    required_fields = ['exchange', 'symbol', 'timestamp', 'data_type', 'data']
    for field in required_fields:
        if not hasattr(data_point, field):
            raise ValueError(f"Missing required field: {field}")

    if not isinstance(data_point.data_type, DataType):
        raise ValueError("Invalid data_type")
```

---

*For the complete documentation index, see the [main documentation README](../README.md).*
291
docs/reference/aggregation-strategy.md
Normal file
@@ -0,0 +1,291 @@

# Data Aggregation Strategy

## Overview

This document describes the comprehensive data aggregation strategy used in the TCP Trading Platform for converting real-time trade data into OHLCV (Open, High, Low, Close, Volume) candles across multiple timeframes, including sub-minute precision.

## Core Principles

### 1. Right-Aligned Timestamps (Industry Standard)

The system follows the **right-aligned timestamp** convention used by major exchanges:

- **Candle timestamp = end time of the interval (close time)**
- This represents when the candle period **closes**, not when it opens
- Aligns with Binance, OKX, Coinbase, and other major exchanges
- Ensures consistency with historical data APIs

**Examples:**

- 1-second candle covering 09:00:15.000-09:00:16.000 → timestamp = 09:00:16.000
- 5-second candle covering 09:00:15.000-09:00:20.000 → timestamp = 09:00:20.000
- 30-second candle covering 09:00:00.000-09:00:30.000 → timestamp = 09:00:30.000
- 1-minute candle covering 09:00:00-09:01:00 → timestamp = 09:01:00
- 5-minute candle covering 09:00:00-09:05:00 → timestamp = 09:05:00
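The right-aligned timestamp for any trade can be computed by flooring the trade time to the start of its bucket and adding one interval. A minimal sketch (the function name is illustrative, not the platform's API):

```python
from datetime import datetime, timedelta, timezone

def right_aligned_timestamp(trade_time: datetime, interval_seconds: int) -> datetime:
    """Return the close time of the bucket that contains trade_time."""
    epoch_seconds = trade_time.timestamp()
    bucket_start = epoch_seconds - (epoch_seconds % interval_seconds)
    return datetime.fromtimestamp(bucket_start, tz=timezone.utc) + timedelta(seconds=interval_seconds)

trade = datetime(2024, 1, 1, 9, 0, 15, 500000, tzinfo=timezone.utc)
right_aligned_timestamp(trade, 1)  # 1s candle → 09:00:16.000
right_aligned_timestamp(trade, 5)  # 5s candle → 09:00:20.000
```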

### 2. Sparse Candles (Trade-Driven Aggregation)

**CRITICAL**: The system uses a **sparse candle approach** - candles are only emitted when trades actually occur during the time period.

#### What This Means:

- **No trades during a period = no candle emitted**
- **Time gaps in the data** are normal and expected
- **Storage-efficient** - only meaningful periods are stored
- **Industry-standard** behavior matching major exchanges

#### Examples of Sparse Behavior:

**1-Second Timeframe:**
```
09:00:15 → Trade occurs → 1s candle emitted at 09:00:16
09:00:16 → No trades    → NO candle emitted
09:00:17 → No trades    → NO candle emitted
09:00:18 → Trade occurs → 1s candle emitted at 09:00:19
```

**5-Second Timeframe:**
```
09:00:15-20 → Trades occur → 5s candle emitted at 09:00:20
09:00:20-25 → No trades    → NO candle emitted
09:00:25-30 → Trade occurs → 5s candle emitted at 09:00:30
```

#### Real-World Coverage Examples:

From live testing with BTC-USDT (3-minute test):

- **Expected 1s candles**: 180
- **Actual 1s candles**: 53 (29% coverage)
- **Missing periods**: 127 seconds with no trading activity

From live testing with ETH-USDT (1-minute test):

- **Expected 1s candles**: 60
- **Actual 1s candles**: 22 (37% coverage)
- **Missing periods**: 38 seconds with no trading activity
### 3. No Future Leakage Prevention

The aggregation system prevents future leakage by:

- **Only completing candles when time boundaries are definitively crossed**
- **Never emitting incomplete candles during real-time processing**
- **Waiting for actual trades to trigger bucket completion**
- **Using trade timestamps, not system clock times, for bucket assignment**

## Supported Timeframes

The system supports the following timeframes with precise bucket calculations:

### Second-Based Timeframes:

- **1s**: 1-second buckets (00:00, 00:01, 00:02, ...)
- **5s**: 5-second buckets (00:00, 00:05, 00:10, 00:15, ...)
- **10s**: 10-second buckets (00:00, 00:10, 00:20, 00:30, ...)
- **15s**: 15-second buckets (00:00, 00:15, 00:30, 00:45, ...)
- **30s**: 30-second buckets (00:00, 00:30, ...)

### Minute-Based Timeframes:

- **1m**: 1-minute buckets aligned to minute boundaries
- **5m**: 5-minute buckets (00:00, 00:05, 00:10, ...)
- **15m**: 15-minute buckets (00:00, 00:15, 00:30, 00:45)
- **30m**: 30-minute buckets (00:00, 00:30)

### Hour-Based Timeframes:

- **1h**: 1-hour buckets aligned to hour boundaries
- **4h**: 4-hour buckets (00:00, 04:00, 08:00, 12:00, 16:00, 20:00)
- **1d**: 1-day buckets aligned to midnight UTC

## Processing Flow

### Real-Time Aggregation Process

1. **A trade arrives** from the WebSocket with timestamp T
2. **For each configured timeframe**:
   - Calculate which time bucket this trade belongs to
   - Get the current bucket for this timeframe
   - **Check whether the trade timestamp crosses a time boundary**
   - **If a boundary is crossed**: complete and emit the previous bucket (only if it has trades), then create a new bucket
   - Add the trade to the current bucket (updating OHLCV)
3. **Only emit completed candles** when time boundaries are definitively crossed
4. **Never emit incomplete/future candles** during real-time processing
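The per-timeframe loop above can be sketched as a simplified single-timeframe aggregator (class and field names are illustrative, not the platform's actual implementation):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Bucket:
    start: int    # bucket start, epoch seconds
    open: float
    high: float
    low: float
    close: float
    volume: float = 0.0
    trade_count: int = 0

class SparseAggregator:
    """Aggregates trades into sparse, trade-driven candles for one timeframe."""

    def __init__(self, interval_seconds: int):
        self.interval = interval_seconds
        self.current: Optional[Bucket] = None
        self.completed: List[Bucket] = []

    def on_trade(self, ts: int, price: float, size: float) -> None:
        bucket_start = ts - (ts % self.interval)
        if self.current is None or bucket_start != self.current.start:
            # Boundary crossed: emit the previous bucket (it always holds trades,
            # since buckets are only created when a trade arrives)
            if self.current is not None:
                self.completed.append(self.current)
            self.current = Bucket(bucket_start, price, price, price, price)
        b = self.current
        b.high = max(b.high, price)
        b.low = min(b.low, price)
        b.close = price
        b.volume += size
        b.trade_count += 1
```

Note that completion is driven purely by trade timestamps, never the system clock, which is what prevents future leakage.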

### Bucket Management

**Time Bucket Creation:**

- Buckets are created **only when the first trade arrives** for that time period
- Empty time periods do not create buckets

**Bucket Completion:**

- Buckets are completed **only when a trade arrives that belongs to a different time bucket**
- Completed buckets are emitted **only if they contain at least one trade**
- Empty buckets are discarded silently

**Example Timeline:**
```
Time      Trade     1s Bucket Action          5s Bucket Action
--------  --------  ------------------------  ------------------------
09:15:23  BUY 0.1   Create bucket 09:15:23    Create bucket 09:15:20
09:15:24  SELL 0.2  Complete 09:15:23 → emit  Add to 09:15:20
09:15:25  -         (no trade = no action)    (no action)
09:15:26  BUY 0.5   Create bucket 09:15:26    Complete 09:15:20 → emit
```

## Handling Sparse Data in Applications

### For Trading Algorithms

```python
def handle_sparse_candles(candles: List[OHLCVCandle], timeframe: str) -> List[OHLCVCandle]:
    """
    Handle sparse candle data in trading algorithms.
    """
    if not candles:
        return candles

    # Option 1: Use only the available data (recommended).
    # Just work with what you have - gaps indicate no trading activity.
    return candles

    # Option 2: Fill gaps with the last known price (if needed).
    filled_candles = []
    last_candle = None

    for candle in candles:
        if last_candle:
            # Check for a gap
            expected_next = last_candle.end_time + get_timeframe_delta(timeframe)
            if candle.start_time > expected_next:
                # Gap detected - could fill here if needed for your strategy
                pass

        filled_candles.append(candle)
        last_candle = candle

    return filled_candles
```
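The `get_timeframe_delta` helper used above is not defined in this document; one possible implementation for the timeframe strings listed earlier:

```python
from datetime import timedelta

def get_timeframe_delta(timeframe: str) -> timedelta:
    """Parse a timeframe string such as '1s', '5m', '4h', or '1d' into a timedelta."""
    units = {'s': 'seconds', 'm': 'minutes', 'h': 'hours', 'd': 'days'}
    value, unit = int(timeframe[:-1]), timeframe[-1]
    if unit not in units:
        raise ValueError(f"Unknown timeframe unit: {timeframe}")
    return timedelta(**{units[unit]: value})
```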

### For Charting and Visualization

```python
def prepare_chart_data(candles: List[OHLCVCandle], timeframe: str, fill_gaps: bool = True) -> List[OHLCVCandle]:
    """
    Prepare sparse candle data for charting applications.
    """
    if not fill_gaps or not candles:
        return candles

    # Fill gaps with the previous close price for continuous charts
    filled_candles = []

    for i, candle in enumerate(candles):
        if i > 0:
            prev_candle = filled_candles[-1]
            gap_periods = calculate_gap_periods(prev_candle.end_time, candle.start_time, timeframe)

            # Fill gap periods with flat candles
            for gap_time in gap_periods:
                flat_candle = create_flat_candle(
                    start_time=gap_time,
                    price=prev_candle.close,
                    timeframe=timeframe
                )
                filled_candles.append(flat_candle)

        filled_candles.append(candle)

    return filled_candles
```

### Database Queries

When querying candle data, be aware of potential gaps:

```sql
-- Query that handles sparse data appropriately
SELECT
    timestamp,
    open, high, low, close, volume,
    trade_count,
    -- Flag periods with actual trading activity
    CASE WHEN trade_count > 0 THEN 'ACTIVE' ELSE 'EMPTY' END AS period_type
FROM market_data
WHERE symbol = 'BTC-USDT'
  AND timeframe = '1s'
  AND timestamp BETWEEN '2024-01-01 09:00:00' AND '2024-01-01 09:05:00'
ORDER BY timestamp;

-- Query to detect gaps in the data
WITH candle_gaps AS (
    SELECT
        timestamp,
        LAG(timestamp) OVER (ORDER BY timestamp) AS prev_timestamp,
        timestamp - LAG(timestamp) OVER (ORDER BY timestamp) AS gap_duration
    FROM market_data
    WHERE symbol = 'BTC-USDT' AND timeframe = '1s'
)
SELECT * FROM candle_gaps
WHERE gap_duration > INTERVAL '1 second';
```
## Performance Characteristics

### Storage Efficiency

- **The sparse approach reduces storage** by 50-80% compared to a complete time series
- **Only meaningful periods** are stored in the database
- **Faster queries** due to the smaller dataset size

### Processing Efficiency

- **Lower memory usage** during real-time processing
- **Faster aggregation** - no need to maintain empty buckets
- **Efficient WebSocket processing** - only actual market events are processed

### Coverage Statistics

Based on real-world testing:

| Timeframe | Major Pairs Coverage | Minor Pairs Coverage |
|-----------|----------------------|----------------------|
| 1s        | 20-40%               | 5-15%                |
| 5s        | 60-80%               | 30-50%               |
| 10s       | 75-90%               | 50-70%               |
| 15s       | 80-95%               | 60-80%               |
| 30s       | 90-98%               | 80-95%               |
| 1m        | 95-99%               | 90-98%               |

*Coverage = percentage of time periods that actually have candles.*

## Best Practices

### For Real-Time Systems

1. **Design algorithms to handle gaps** - missing candles are normal
2. **Use the last known price** for periods without trades
3. **Don't interpolate** unless specifically required
4. **Monitor coverage ratios** to detect market conditions

### For Historical Analysis

1. **Be aware of sparse data** when calculating statistics
2. **Consider volume-weighted metrics** over time-weighted ones
3. **Use trade_count = 0** to identify empty periods when filling gaps
4. **Validate data completeness** before running backtests

### For Database Storage

1. **Index on (symbol, timeframe, timestamp)** for efficient queries
2. **Partition by time periods** for large datasets
3. **Consider trade_count > 0 filters** for active-only queries
4. **Monitor storage growth** - sparse data grows much more slowly

## Configuration

The sparse aggregation behavior is controlled by:

```json
{
  "timeframes": ["1s", "5s", "10s", "15s", "30s", "1m", "5m", "15m", "1h"],
  "auto_save_candles": true,
  "emit_incomplete_candles": false,
  "max_trades_per_candle": 100000
}
```

**Key setting**: `emit_incomplete_candles: false` ensures that only complete, trade-containing candles are emitted.

---

**Note**: This sparse approach is the **industry standard** used by major exchanges and trading platforms. It provides the most accurate representation of actual market activity while maintaining efficiency and preventing data artifacts.
89
docs/reference/specification.md
Normal file
@@ -0,0 +1,89 @@
|
||||
# Simplified Crypto Trading Bot Platform: Product Requirements Document
|
||||
|
||||
## Executive Summary
|
||||
|
||||
This simplified PRD addresses the need for a rapid-deployment crypto trading bot platform designed for internal testing and strategy development. The platform eliminates microservices complexity in favor of a monolithic architecture that can be functional within 1-2 weeks while supporting 5-10 concurrent bots. The system focuses on core functionality including data collection, strategy execution, backtesting, and visualization without requiring advanced monitoring or orchestration tools.
|
||||
|
||||
## System Architecture Overview
|
||||
|
||||
The platform follows a streamlined monolithic design that consolidates all components within a single application boundary. This approach enables rapid development while maintaining clear separation between functional modules for future scalability.The architecture consists of six core components working together: Data Collection Module for exchange connectivity, Strategy Engine for unified signal generation, Bot Manager for concurrent bot orchestration, PostgreSQL database for data persistence, Backtesting Engine for historical simulation, and Dashboard for visualization and control.
|
||||
|
||||
## Simplified Technical Stack
|
||||
|
||||
### Core Technologies
|
||||
|
||||
The platform utilizes a Python-based technology stack optimized for rapid development. The backend employs Python 3.10+ with Dash framework (including built-in Flask server for REST APIs), PostgreSQL 14+ with TimescaleDB extension for time-series optimization, and Redis for real-time pub/sub messaging. The frontend leverages Dash with Plotly for interactive visualization and bot control interfaces, providing a unified full-stack solution.
|
||||
|
||||
### Database Design
|
||||
|
||||
The database schema emphasizes simplicity while supporting essential trading operations. The core approach separates frequently-accessed OHLCV market data from optional raw tick data for optimal performance. Core tables include market_data for OHLCV candles used by bots, bots for instance management with JSON configuration references, signals for trading decisions, trades for execution records, and bot_performance for portfolio tracking. Raw trade data storage is optional and can be implemented later for advanced backtesting scenarios.
## Development Methodology

### Two-Week Implementation Timeline

The development follows a structured three-phase approach designed for rapid deployment. Phase 1 (Days 1-5) establishes foundational components including database setup, data collection implementation, and basic visualization. Phase 2 (Days 6-10) completes core functionality with backtesting engine development, trading logic implementation, and dashboard enhancement. Phase 3 (Days 11-14) focuses on system refinement, comprehensive testing, and deployment preparation.

### Strategy Implementation Example
The platform supports multiple trading strategies through a unified interface design. Strategy parameters are stored in JSON files, making it easy to test different configurations without rebuilding code. A simple moving average crossover strategy demonstrates the system's capability to generate buy and sell signals based on technical indicators. This example strategy shows how the system processes market data, calculates moving averages, generates trading signals, and tracks portfolio performance over time. The visualization includes price movements, moving average lines, signal markers, and portfolio value progression.
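A minimal sketch of such a moving-average crossover, with its parameters supplied as a JSON-style dict (illustrative names only, not the platform's actual strategy interface):

```python
def sma(values, window):
    """Simple moving average; returns None until enough data has arrived."""
    if len(values) < window:
        return None
    return sum(values[-window:]) / window

def crossover_signal(closes, fast=2, slow=3):
    """'buy' when the fast SMA crosses above the slow SMA, 'sell' on the reverse."""
    prev_fast, prev_slow = sma(closes[:-1], fast), sma(closes[:-1], slow)
    cur_fast, cur_slow = sma(closes, fast), sma(closes, slow)
    if None in (prev_fast, prev_slow, cur_fast, cur_slow):
        return "hold"                      # not enough history yet
    if prev_fast <= prev_slow and cur_fast > cur_slow:
        return "buy"
    if prev_fast >= prev_slow and cur_fast < cur_slow:
        return "sell"
    return "hold"

# Parameters as they might be loaded from a JSON config file
params = {"fast": 2, "slow": 3}
closes = [10, 9, 8, 9, 11]                 # downtrend, then recovery
signal = crossover_signal(closes, **params)  # fast crosses above slow -> "buy"
```

Because the parameters arrive as plain data, testing a different configuration is a matter of editing the JSON file, not changing code.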
## Backtesting and Performance Analysis

### Strategy Validation Framework
The backtesting engine enables comprehensive strategy testing using historical market data. The system calculates key performance metrics including total returns, Sharpe ratios, maximum drawdown, and win/loss ratios to evaluate strategy effectiveness.
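As an illustration, the metrics named above can be computed from an equity curve like so (a simplified per-period Sharpe ratio with no annualization; not the engine's actual implementation):

```python
import statistics

def total_return(equity):
    """Fractional return over the whole equity curve."""
    return equity[-1] / equity[0] - 1

def max_drawdown(equity):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak, worst = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

def sharpe_ratio(returns, risk_free=0.0):
    """Mean excess return over its standard deviation (per-period)."""
    excess = [r - risk_free for r in returns]
    sd = statistics.stdev(excess)
    return statistics.mean(excess) / sd if sd else 0.0

equity = [100, 110, 99, 121]   # toy portfolio values over four periods
```

Here `total_return(equity)` is 0.21 and `max_drawdown(equity)` is 0.1 (the 110 → 99 dip).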
### Portfolio Management

The platform tracks portfolio allocation and performance throughout strategy execution. Real-time monitoring capabilities show the distribution between cryptocurrency holdings and cash reserves.

## Simplified Data Flow

### Real-Time Processing
The data collection module connects to exchange APIs (starting with OKX) to retrieve market information via WebSocket connections. Instead of storing all raw tick data, the system focuses on aggregating trades into OHLCV candles (1-minute, 5-minute, hourly, etc.) which are stored in PostgreSQL. Processed OHLCV data is published through Redis channels for real-time distribution to active trading bots. Raw trade data can optionally be stored for advanced backtesting scenarios.
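A minimal sketch of the trade-to-candle aggregation step (field names are illustrative; the real collector works on streaming WebSocket messages rather than a completed list):

```python
def aggregate_candle(trades, bucket_seconds=60):
    """Aggregate (timestamp, price, size) trades into one OHLCV candle.

    The candle is stamped with its right-aligned close time, matching the
    timestamp convention used throughout this document.
    """
    prices = [price for _, price, _ in trades]
    first_ts = trades[0][0]
    close_time = (first_ts // bucket_seconds + 1) * bucket_seconds
    return {
        "timestamp": close_time,
        "open": prices[0],
        "high": max(prices),
        "low": min(prices),
        "close": prices[-1],
        "volume": sum(size for _, _, size in trades),
    }

# Three trades inside the 540-600s bucket collapse into one 1-minute candle
trades = [(540, 100.0, 1.0), (559, 101.5, 0.5), (599, 100.5, 2.0)]
candle = aggregate_candle(trades)
```

Only `candle` would be written to PostgreSQL and published over Redis; the raw `trades` list is the optional part of the pipeline.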
### Signal Generation and Execution
Trading strategies subscribe to relevant OHLCV data streams and generate trading signals based on configured algorithms stored in JSON files for easy parameter testing. The bot manager validates signals against portfolio constraints and executes simulated trades with realistic fee modeling. The system includes automatic crash recovery - bots are monitored and restarted if they fail, and the application can restore active bot states after system restarts.
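A toy version of the constraint check and fee modeling might look like this (the 0.1% fee rate and function names are placeholder assumptions, not OKX's actual fee schedule or the platform's API):

```python
def execute_signal(portfolio, signal, price, size, fee_rate=0.001):
    """Apply a simulated fill with a proportional taker fee.

    Returns False when the signal fails the cash/position constraint check,
    mirroring the bot manager's validation step described above.
    """
    cost = price * size
    fee = cost * fee_rate
    if signal == "buy":
        if portfolio["cash"] < cost + fee:
            return False                  # constraint violated: reject signal
        portfolio["cash"] -= cost + fee
        portfolio["position"] += size
    elif signal == "sell":
        if portfolio["position"] < size:
            return False                  # cannot sell more than is held
        portfolio["position"] -= size
        portfolio["cash"] += cost - fee
    return True

portfolio = {"cash": 1000.0, "position": 0.0}
ok = execute_signal(portfolio, "buy", price=100.0, size=5.0)
# buy of 5 @ 100 costs 500 + 0.5 fee -> cash 499.5, position 5.0
```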
## Future Scalability Considerations

### Microservices Migration Path

While implementing a monolithic architecture for rapid deployment, the system design maintains clear component boundaries that facilitate future extraction into microservices. API-first design principles ensure internal components communicate through well-defined interfaces that can be externalized as needed.

### Authentication and Multi-User Support

The current single-user design can be extended to support multiple users through role-based access control implementation. The database schema accommodates user management tables and permission structures without requiring significant architectural changes.

### Advanced Monitoring Integration

The simplified monitoring approach can be enhanced with Prometheus and Grafana integration when scaling requirements justify the additional complexity. Current basic monitoring provides foundation metrics that can be extended to comprehensive observability systems.

## Technical Implementation Details

### Time Series Data Management
The platform implements proper time aggregation aligned with exchange standards to ensure accurate candle formation. Timestamp alignment follows right-aligned methodology where 5-minute candles from 09:00:00-09:05:00 receive the 09:05:00 timestamp.
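Assuming Unix-second timestamps, the right-alignment rule can be expressed as:

```python
from datetime import datetime, timezone

def align_right(ts: datetime, minutes: int) -> datetime:
    """Return the right-aligned close timestamp of the candle containing ts."""
    bucket = minutes * 60
    epoch = int(ts.timestamp())
    return datetime.fromtimestamp((epoch // bucket + 1) * bucket,
                                  tz=timezone.utc)

# A trade at 09:02:30 falls in the 09:00:00-09:05:00 candle,
# which is stamped 09:05:00 under the right-aligned convention.
t = datetime(2024, 1, 1, 9, 2, 30, tzinfo=timezone.utc)
closes_at = align_right(t, 5)
```

One edge case worth noting: under this formula a trade at exactly 09:05:00 lands in the *next* candle (closing 09:10:00), i.e. buckets are half-open intervals.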
### Performance Optimization
Database indexing on timestamp and symbol fields ensures efficient time-series queries. Connection pooling prevents database connection leaks while prepared statements optimize query execution. Memory management includes proper cleanup of data objects after processing to maintain system stability.
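The indexing and prepared-statement points can be sketched with SQLite standing in for PostgreSQL (the table and index names here are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE market_data (
    symbol TEXT, timestamp INTEGER, close REAL)""")

# Composite index covering the common (symbol, time-range) query shape
conn.execute("CREATE INDEX idx_symbol_ts ON market_data (symbol, timestamp)")

# Parameterized query: the driver prepares the statement form once,
# avoiding both re-parsing and SQL injection
rows = conn.execute(
    "SELECT close FROM market_data WHERE symbol = ? AND timestamp >= ?",
    ("BTC-USDT", 0)).fetchall()

# The planner confirms the composite index serves this predicate
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT close FROM market_data "
    "WHERE symbol = ? AND timestamp >= ?", ("BTC-USDT", 0)).fetchall()
```

In production the same idea applies via PostgreSQL's `CREATE INDEX` and a pooled SQLAlchemy engine rather than a raw connection.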
## Success Metrics and Validation

### Development Milestones

Platform success is measured through specific deliverables including core functionality completion within 14 days, system stability maintenance at 99% uptime during internal testing, successful backtesting of at least 3 different strategies, and concurrent operation of 5+ bots for 72+ hours to demonstrate the platform's scalability within its target range.

### Strategy Testing Capabilities

The system enables comprehensive strategy validation through historical simulation, real-time testing with virtual portfolios, and performance comparison across multiple algorithms. Backtesting results provide insights into strategy effectiveness before live deployment.

## Conclusion

This simplified crypto trading bot platform balances rapid development requirements with future scalability needs. The monolithic architecture enables deployment within 1-2 weeks while maintaining architectural flexibility for future enhancements. The OHLCV-focused data approach optimizes performance by avoiding unnecessary raw data storage, while JSON-based configuration files enable rapid strategy parameter testing without code changes.

Clear component separation, streamlined database design, and strategic technology choices create a foundation that supports both immediate testing objectives and long-term platform evolution. The platform's focus on essential functionality without unnecessary complexity ensures teams can begin strategy testing quickly while building toward more sophisticated implementations as requirements expand. This approach maximizes development velocity while preserving options for future architectural evolution and feature enhancement.
283
docs/setup.md
@@ -1,283 +0,0 @@
# Development Environment Setup
This guide will help you set up the Crypto Trading Bot Dashboard development environment.

## Prerequisites

- Python 3.10+
- Docker Desktop (for Windows/Mac) or Docker Engine (for Linux)
- UV package manager
- Git

## Quick Start

### 1. Initial Setup

```bash
# Install dependencies (including dev tools)
uv sync --dev

# Set up environment and start services
python scripts/dev.py setup
```

### 2. Start Services

```bash
# Start PostgreSQL and Redis services
python scripts/dev.py start
```

### 3. Configure API Keys

Copy `env.template` to `.env` and update the OKX API credentials:

```bash
# Copy template (Windows)
copy env.template .env

# Copy template (Unix)
cp env.template .env

# Edit .env file with your actual OKX API credentials
# OKX_API_KEY=your_actual_api_key
# OKX_SECRET_KEY=your_actual_secret_key
# OKX_PASSPHRASE=your_actual_passphrase
```

### 4. Verify Setup

```bash
# Run setup verification tests
uv run python tests/test_setup.py
```

### 5. Start Dashboard with Hot Reload

```bash
# Start with hot reload (recommended for development)
python scripts/dev.py dev-server

# Or start without hot reload
python scripts/dev.py run
```
## Development Commands
### Using the dev.py script:

```bash
# Show available commands
python scripts/dev.py

# Set up environment and install dependencies
python scripts/dev.py setup

# Start all services (Docker)
python scripts/dev.py start

# Stop all services
python scripts/dev.py stop

# Restart services
python scripts/dev.py restart

# Check service status
python scripts/dev.py status

# Install/update dependencies
python scripts/dev.py install

# Run development server with hot reload
python scripts/dev.py dev-server

# Run application without hot reload
python scripts/dev.py run
```

### Direct Docker commands:

```bash
# Start services in background
docker-compose up -d

# View service logs
docker-compose logs -f

# Stop services
docker-compose down

# Rebuild and restart
docker-compose up -d --build
```
## Hot Reload Development
The development server includes hot reload functionality that automatically restarts the application when Python files change.

### Features:

- 🔥 **Auto-restart** on file changes
- 👀 **Watches multiple directories** (config, database, components, etc.)
- 🚀 **Fast restart** with debouncing (1-second delay)
- 🛑 **Graceful shutdown** with Ctrl+C

### Usage:

```bash
# Start hot reload server
python scripts/dev.py dev-server

# The server will watch these directories:
# - . (root)
# - config/
# - database/
# - components/
# - data/
# - strategies/
# - trader/
```
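The 1-second debounce mentioned above can be sketched as a small helper. This is one simple variant (fire immediately, then suppress events inside the quiet window), not necessarily how the dev server's watchdog handler implements it:

```python
import time

class Debouncer:
    """Collapse bursts of file-change events into one restart trigger."""

    def __init__(self, delay=1.0, clock=time.monotonic):
        self.delay = delay
        self.clock = clock
        self._last = None

    def should_fire(self):
        now = self.clock()
        if self._last is not None and now - self._last < self.delay:
            return False            # still inside the quiet window
        self._last = now
        return True

# Simulated clock so the example is deterministic
ticks = iter([0.0, 0.3, 0.9, 2.0])
d = Debouncer(delay=1.0, clock=lambda: next(ticks))
fired = [d.should_fire() for _ in range(4)]   # [True, False, False, True]
```

In the real server, `should_fire()` would gate the restart triggered by a watchdog `on_modified` event.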
## Data Persistence
### Database Persistence

✅ **PostgreSQL data persists** across container restarts

- Volume: `postgres_data` mounted to `/var/lib/postgresql/data`
- Data survives `docker-compose down` and `docker-compose up`

### Redis Persistence

✅ **Redis data persists** with AOF (Append-Only File)

- Volume: `redis_data` mounted to `/data`
- AOF sync every second for durability
- Data survives container restarts

### Removing Persistent Data

```bash
# Stop services and remove volumes (CAUTION: This deletes all data)
docker-compose down -v

# Or remove specific volumes
docker volume rm dashboard_postgres_data
docker volume rm dashboard_redis_data
```
## Dependency Management
### Adding New Dependencies

```bash
# Add runtime dependency
uv add "package-name>=1.0.0"

# Add development dependency
uv add --dev "dev-package>=1.0.0"

# Install all dependencies
uv sync --dev
```

### Key Dependencies Included:

- **Web Framework**: Dash, Plotly
- **Database**: SQLAlchemy, psycopg2-binary, Alembic
- **Data Processing**: pandas, numpy
- **Configuration**: pydantic, python-dotenv
- **Development**: watchdog (hot reload), pytest, black, mypy

See `docs/dependency-management.md` for a detailed dependency management guide.
## Directory Structure
```
Dashboard/
├── config/                       # Configuration files
│   ├── settings.py               # Application settings
│   └── bot_configs/              # Bot configuration files
├── database/                     # Database related files
│   └── init/                     # Database initialization scripts
├── scripts/                      # Development scripts
│   ├── dev.py                    # Main development script
│   ├── setup.sh                  # Setup script (Unix)
│   ├── start.sh                  # Start script (Unix)
│   └── stop.sh                   # Stop script (Unix)
├── tests/                        # Test files
│   └── test_setup.py             # Setup verification tests
├── docs/                         # Documentation
│   ├── setup.md                  # This file
│   └── dependency-management.md  # Dependency guide
├── docker-compose.yml            # Docker services configuration
├── env.template                  # Environment variables template
├── pyproject.toml                # Dependencies and project config
└── main.py                       # Main application entry point
```
## Services
### PostgreSQL Database

- **Host**: localhost:5432
- **Database**: dashboard
- **User**: dashboard
- **Password**: dashboard123 (development only)
- **Persistence**: ✅ Data persists across restarts

### Redis Cache

- **Host**: localhost:6379
- **No password** (development only)
- **Persistence**: ✅ AOF enabled, data persists across restarts
## Environment Variables
Key environment variables (see `env.template` for the full list):

- `DATABASE_URL` - PostgreSQL connection string
- `OKX_API_KEY` - OKX API key
- `OKX_SECRET_KEY` - OKX secret key
- `OKX_PASSPHRASE` - OKX passphrase
- `OKX_SANDBOX` - Use OKX sandbox (true/false)
- `DEBUG` - Enable debug mode
- `LOG_LEVEL` - Logging level (DEBUG, INFO, WARNING, ERROR)
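As a hypothetical sketch of how these variables might be read with development-friendly defaults (the real application loads them via python-dotenv/pydantic; the helper and defaults below are illustrative, built from the development credentials listed under Services):

```python
import os

def load_settings(env=os.environ):
    """Read key settings from environment variables, with dev defaults."""
    return {
        "database_url": env.get(
            "DATABASE_URL",
            "postgresql://dashboard:dashboard123@localhost:5432/dashboard"),
        "okx_sandbox": env.get("OKX_SANDBOX", "false").lower() == "true",
        "log_level": env.get("LOG_LEVEL", "INFO"),
    }

# Passing a dict instead of os.environ makes the helper easy to test
settings = load_settings({"OKX_SANDBOX": "true"})
```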
## Troubleshooting
### Docker Issues

1. **Docker not running**: Start Docker Desktop/Engine
2. **Port conflicts**: Check if ports 5432 or 6379 are already in use
3. **Permission issues**: On Linux, add your user to the docker group
4. **Data persistence issues**: Check if volumes are properly mounted

### Database Connection Issues

1. **Connection refused**: Ensure the PostgreSQL container is running
2. **Authentication failed**: Check credentials in the `.env` file
3. **Database doesn't exist**: Run the setup script again
4. **Data loss**: Check if the volume is mounted correctly

### Dependency Issues

1. **Import errors**: Run `uv sync --dev` to install dependencies
2. **Version conflicts**: Check `pyproject.toml` for compatibility
3. **Hot reload not working**: Ensure `watchdog` is installed

### Hot Reload Issues

1. **Changes not detected**: Check if files are in watched directories
2. **Rapid restarts**: The built-in 1-second debouncing should prevent this
3. **Process not stopping**: Use Ctrl+C to shut down gracefully
## Performance Tips
1. **Use SSD**: Store Docker volumes on SSD for better database performance
2. **Increase Docker memory**: Allocate more RAM to Docker Desktop
3. **Hot reload**: Use `dev-server` for faster development cycles
4. **Dependency caching**: UV caches dependencies for faster installs

## Next Steps

After successful setup:

1. **Phase 1.0**: Database Infrastructure Setup
2. **Phase 2.0**: Bot Management System Development
3. **Phase 3.0**: OKX Integration and Data Pipeline
4. **Phase 4.0**: Dashboard UI and Visualization
5. **Phase 5.0**: Backtesting System Implementation

See `tasks/tasks-prd-crypto-bot-dashboard.md` for the detailed task list.