From 666a58e7993cad6d77fc83e71e1902acc97341a2 Mon Sep 17 00:00:00 2001 From: "Vasily.onl" Date: Fri, 6 Jun 2025 20:33:29 +0800 Subject: [PATCH] documentation update --- README.md | 106 +- docs/API.md | 11 + docs/CHANGELOG.md | 21 + docs/CONTRIBUTING.md | 31 + docs/README.md | 74 +- docs/architecture.md | 116 ++ docs/architecture/README.md | 44 - docs/architecture/architecture.md | 276 ---- .../charts/adding-new-indicators.md | 393 ----- docs/components/data_collectors.md | 1322 ----------------- docs/{architecture => }/crypto-bot-prd.md | 5 + .../ADR-001-data-processing-refactor.md} | 50 + docs/exchanges/README.md | 297 ---- docs/guides/README.md | 10 +- docs/{components => modules}/README.md | 26 +- docs/{components => modules}/charts/README.md | 17 +- docs/modules/charts/adding-new-indicators.md | 249 ++++ .../charts/bot-integration.md | 4 + .../charts/configuration.md | 46 +- .../charts/indicators.md | 7 +- .../charts/quick-reference.md | 10 +- .../dashboard-modular-structure.md | 6 +- docs/modules/data_collectors.md | 215 +++ .../database_operations.md | 8 +- docs/modules/exchanges/README.md | 43 + docs/{ => modules}/exchanges/okx_collector.md | 75 +- docs/{components => modules}/logging.md | 357 +---- .../services/data_collection_service.md | 82 +- .../technical-indicators.md | 33 +- docs/reference/README.md | 10 +- tasks/{PRD-tasks.md => MAIN-task-list.md} | 0 31 files changed, 1107 insertions(+), 2837 deletions(-) create mode 100644 docs/API.md create mode 100644 docs/CHANGELOG.md create mode 100644 docs/CONTRIBUTING.md create mode 100644 docs/architecture.md delete mode 100644 docs/architecture/README.md delete mode 100644 docs/architecture/architecture.md delete mode 100644 docs/components/charts/adding-new-indicators.md delete mode 100644 docs/components/data_collectors.md rename docs/{architecture => }/crypto-bot-prd.md (98%) rename docs/{architecture/data-processing-refactor.md => decisions/ADR-001-data-processing-refactor.md} (81%) delete mode 100644 
docs/exchanges/README.md rename docs/{components => modules}/README.md (89%) rename docs/{components => modules}/charts/README.md (96%) create mode 100644 docs/modules/charts/adding-new-indicators.md rename docs/{components => modules}/charts/bot-integration.md (98%) rename docs/{components => modules}/charts/configuration.md (95%) rename docs/{components => modules}/charts/indicators.md (97%) rename docs/{components => modules}/charts/quick-reference.md (96%) rename docs/{components => modules}/dashboard-modular-structure.md (95%) create mode 100644 docs/modules/data_collectors.md rename docs/{components => modules}/database_operations.md (98%) create mode 100644 docs/modules/exchanges/README.md rename docs/{ => modules}/exchanges/okx_collector.md (88%) rename docs/{components => modules}/logging.md (63%) rename docs/{ => modules}/services/data_collection_service.md (90%) rename docs/{components => modules}/technical-indicators.md (89%) rename tasks/{PRD-tasks.md => MAIN-task-list.md} (100%) diff --git a/README.md b/README.md index 420136f..5384224 100644 --- a/README.md +++ b/README.md @@ -1,105 +1,53 @@ # Crypto Trading Bot Platform -A simplified crypto trading bot platform for strategy testing and development. Test multiple trading strategies in parallel using real OKX market data with virtual trading simulation. +A simplified crypto trading bot platform for strategy testing and development using real OKX market data and virtual trading simulation. ## Overview -This platform enables rapid strategy testing within 1-2 weeks of development. Built with a monolithic architecture for simplicity, it supports 5-10 concurrent trading bots with real-time monitoring and performance tracking. +This platform enables rapid strategy development with a monolithic architecture that supports multiple concurrent trading bots, real-time monitoring, and performance tracking. 
## Key Features -- **Multi-Bot Management**: Run 5-10 trading bots simultaneously with different strategies -- **Real-time Monitoring**: Live OHLCV charts with bot trading signals overlay -- **πŸ“Š Modular Chart Layers**: Advanced technical analysis with 26+ indicators and strategy presets -- **πŸ€– Bot Signal Integration**: Real-time bot signal visualization with performance analytics -- **Virtual Trading**: Simulation-first approach with realistic fee modeling -- **JSON Configuration**: Easy strategy parameter testing without code changes -- **Backtesting Engine**: Test strategies on historical market data -- **Crash Recovery**: Automatic bot restart and state restoration - -## Chart System Features - -The platform includes a sophisticated modular chart system with: - -- **Technical Indicators**: 26+ professionally configured indicators (SMA, EMA, Bollinger Bands, RSI, MACD) -- **Strategy Presets**: 5 real-world trading strategy templates (EMA crossover, momentum, mean reversion) -- **Bot Integration**: Real-time visualization of bot signals, trades, and performance -- **Custom Indicators**: User-defined indicators with JSON persistence -- **Validation System**: 10+ validation rules with detailed error reporting -- **Modular Architecture**: Independently testable chart layers and components - -πŸ“Š **[Complete Chart Documentation](docs/components/charts/README.md)** +- **Multi-Bot Management**: Run multiple trading bots simultaneously with different strategies. +- **Real-time Monitoring**: Live OHLCV charts with bot trading signals overlay. +- **Modular Chart System**: Advanced technical analysis with 26+ indicators and strategy presets. +- **Virtual Trading**: Simulation-first approach with realistic fee modeling. +- **JSON Configuration**: Easy strategy parameter testing without code changes. +- **Backtesting Engine**: Test strategies on historical market data (planned). +- **Crash Recovery**: Automatic bot restart and state restoration. 
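The JSON-driven configuration workflow described in the feature list can be sketched in a few lines. The field names below follow the sample bot config that appears elsewhere in this patch (`bot_id`, `strategy_file`, `symbol`, `virtual_balance`, `enabled`); the loader itself is an illustrative sketch, not the platform's actual implementation.

```python
import json

# Sample bot configuration, mirroring the JSON files kept under
# config/bot_configs/. Field names follow the example config shown
# elsewhere in this patch; treat the schema as illustrative.
EXAMPLE_CONFIG = """
{
    "bot_id": "ema_crossover_01",
    "strategy_file": "ema_crossover.json",
    "symbol": "BTC-USDT",
    "virtual_balance": 10000,
    "enabled": true
}
"""

REQUIRED_FIELDS = {"bot_id", "strategy_file", "symbol", "virtual_balance"}


def load_bot_config(raw: str) -> dict:
    """Parse a bot config string and verify required fields are present."""
    config = json.loads(raw)
    missing = REQUIRED_FIELDS - config.keys()
    if missing:
        raise ValueError(f"config missing required fields: {sorted(missing)}")
    # A config without an explicit flag is treated as enabled.
    config.setdefault("enabled", True)
    return config


config = load_bot_config(EXAMPLE_CONFIG)
print(config["bot_id"], config["virtual_balance"])  # ema_crossover_01 10000
```

Because strategies are parameterized this way, testing a new variant is a matter of copying a JSON file and editing a few values, with no code changes.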
## Tech Stack -- **Framework**: Python 3.10+ with Dash (unified frontend/backend) -- **Database**: PostgreSQL with optimized OHLCV data storage -- **Real-time**: Redis pub/sub for live updates +- **Framework**: Python 3.10+ with Dash +- **Database**: PostgreSQL +- **Real-time Messaging**: Redis - **Package Management**: UV -- **Development**: Docker for consistent environment +- **Containerization**: Docker ## Quick Start -### Prerequisites -- Python 3.10+, Docker, UV package manager +For detailed instructions on setting up and running the project, please refer to the main documentation. -### Setup +**➑️ [Go to the Full Documentation](docs/README.md)** -**πŸ“– For detailed setup instructions, see [docs/setup.md](docs/setup.md)** - -Quick setup: ```bash -python scripts/dev.py setup # Setup environment -python scripts/dev.py start # Start services -python scripts/dev.py dev-server # Start with hot reload -``` - -## Project Structure - -``` -β”œβ”€β”€ app.py # Main Dash application -β”œβ”€β”€ bot_manager.py # Bot lifecycle management -β”œβ”€β”€ database/ # PostgreSQL models and connection -β”œβ”€β”€ data/ # OKX API integration -β”œβ”€β”€ components/ # Dashboard UI components -β”œβ”€β”€ strategies/ # Trading strategy modules -β”œβ”€β”€ config/bot_configs/ # JSON bot configurations -└── docs/ # Project documentation +# Quick setup for development +git clone +cd TCPDashboard +uv sync +cp env.template .env +docker-compose up -d +uv run python main.py ``` ## Documentation -- **[Setup Guide](docs/setup.md)** - Complete setup instructions for new machines -- **[Product Requirements](docs/crypto-bot-prd.md)** - Complete system specifications and requirements -- **[Technical Architecture](docs/architecture.md)** - Implementation details and component design -- **[Platform Overview](docs/specification.md)** - Human-readable system overview -- **πŸ“Š [Chart Layers System](docs/components/charts/README.md)** - Modular chart system with technical indicators -- **πŸ€– [Bot 
Integration Guide](docs/components/charts/bot-integration.md)** - Real-time bot signal visualization +All project documentation is located in the `docs/` directory. The best place to start is the main documentation index. -## Configuration Example - -Bot configurations use simple JSON files for rapid testing: - -```json -{ - "bot_id": "ema_crossover_01", - "strategy_file": "ema_crossover.json", - "symbol": "BTC-USDT", - "virtual_balance": 10000, - "enabled": true -} -``` - -## Development Timeline - -**Target**: Functional system within 1-2 weeks -- **Phase 1** (Days 1-5): Database, data collection, basic visualization -- **Phase 2** (Days 6-10): Bot management, backtesting, trading logic -- **Phase 3** (Days 11-14): Testing, optimization, deployment +- **[Main Documentation](docs/README.md)** - The central hub for all project documentation, including setup guides, architecture, and module details. +- **[Setup Guide](docs/guides/setup.md)** - Complete setup instructions for new machines. +- **[Project Context](CONTEXT.md)** - The single source of truth for the project's current state. ## Contributing -1. Review [architecture documentation](docs/architecture.md) for technical approach -2. Check [task list](tasks/tasks-prd-crypto-bot-dashboard.md) for available work -3. Follow project coding standards and use UV for dependencies -4. Update documentation when adding features +We welcome contributions! Please review the **[Contributing Guidelines](docs/CONTRIBUTING.md)** and the **[Project Context](CONTEXT.md)** before getting started. diff --git a/docs/API.md b/docs/API.md new file mode 100644 index 0000000..81e6347 --- /dev/null +++ b/docs/API.md @@ -0,0 +1,11 @@ +# API Documentation + +This document will contain the documentation for the platform's REST API once it is implemented.
+ +The API will provide endpoints for: +- Managing bots (creating, starting, stopping) +- Configuring strategies +- Retrieving market data +- Viewing performance metrics + +*This documentation is currently a placeholder.* \ No newline at end of file diff --git a/docs/CHANGELOG.md b/docs/CHANGELOG.md new file mode 100644 index 0000000..da1c19b --- /dev/null +++ b/docs/CHANGELOG.md @@ -0,0 +1,21 @@ +# Changelog + +All notable changes to this project will be documented in this file. + +The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), +and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). + +## [Unreleased] + +### Added +- Initial project setup with data collection for OKX. +- Basic dashboard for system health monitoring and data visualization. +- Modularized data collector and processing framework. +- Comprehensive documentation structure. + +### Changed +- Refactored data processing to be more modular and extensible. +- Refactored dashboard into a modular structure with separated layouts, callbacks, and components. + +### Removed +- Monolithic `app.py` in favor of a modular dashboard structure. \ No newline at end of file diff --git a/docs/CONTRIBUTING.md b/docs/CONTRIBUTING.md new file mode 100644 index 0000000..46e5bcc --- /dev/null +++ b/docs/CONTRIBUTING.md @@ -0,0 +1,31 @@ +# Contributing + +We welcome contributions to the TCP Trading Platform! Please follow these guidelines to ensure a smooth development process. + +## Development Process + +1. **Check for Existing Issues**: Before starting work on a new feature or bugfix, check the issue tracker to see if it has already been reported. +2. **Fork the Repository**: Create your own fork of the repository to work on your changes. +3. **Create a Branch**: Create a new branch for your feature or bugfix. Use a descriptive name (e.g., `feature/add-binance-support`, `fix/chart-rendering-bug`). +4. 
**Write Code**: + * Adhere to the coding standards outlined in `CONTEXT.md`. + * Maintain a modular structure and keep components decoupled. + * Ensure all new code is well-documented with docstrings and comments. +5. **Update Documentation**: If you add or change a feature, update the relevant documentation in the `docs/` directory. +6. **Write Tests**: Add unit and integration tests for any new functionality. +7. **Submit a Pull Request**: Once your changes are complete, submit a pull request to the `main` branch. Provide a clear description of your changes and reference any related issues. + +## Coding Standards + +* **Style**: Follow PEP 8 for Python code. +* **Naming**: Use `PascalCase` for classes and `snake_case` for functions and variables. +* **Type Hinting**: All function signatures must include type hints. +* **Modularity**: Keep files small and focused on a single responsibility. + +## Commit Messages + +* Use clear and descriptive commit messages. +* Start with a verb in the imperative mood (e.g., `Add`, `Fix`, `Update`). +* Reference the issue number if applicable (e.g., `Fix: Resolve issue #42`). + +Thank you for contributing! \ No newline at end of file diff --git a/docs/README.md b/docs/README.md index 0044fa6..79b8888 100644 --- a/docs/README.md +++ b/docs/README.md @@ -1,62 +1,46 @@ # TCP Dashboard Documentation -Welcome to the **TCP Dashboard** (Trading Crypto Platform) documentation. This platform provides a comprehensive solution for cryptocurrency trading bot development, backtesting, and portfolio management. +Welcome to the documentation for the TCP Trading Platform. This resource provides comprehensive information for developers, contributors, and anyone interested in the platform's architecture and functionality. -## πŸ“š Documentation Structure +## Table of Contents -The documentation is organized into specialized sections for better navigation and maintenance: +### 1. 
Project Overview +- **[Project Context](CONTEXT.md)** - The single source of truth for the project's current state, architecture, and conventions. **Start here.** +- **[Product Requirements](crypto-bot-prd.md)** - The Product Requirements Document (PRD) outlining the project's goals and scope. -### πŸ—οΈ **[Architecture & Design](architecture/)** +### 2. Getting Started +- **[Setup Guide](guides/setup.md)** - Instructions for setting up the development environment. +- **[Contributing](CONTRIBUTING.md)** - Guidelines for contributing to the project. -- **[Architecture Overview](architecture/architecture.md)** - High-level system architecture and component design -- **[Dashboard Modular Structure](dashboard-modular-structure.md)** - *New modular dashboard architecture* - - Separation of layouts, callbacks, and components - - Maintainable file structure under 300-400 lines each - - Parallel development support with clear responsibilities -- **[Data Processing Refactor](architecture/data-processing-refactor.md)** - *New modular data processing architecture* - - Common utilities shared across all exchanges - - Right-aligned timestamp aggregation strategy - - Future leakage prevention mechanisms - - Exchange-specific component design -- **[Crypto Bot PRD](architecture/crypto-bot-prd.md)** - Product Requirements Document for the crypto trading bot platform +### 3. Architecture & Design +- **[Architecture Overview](architecture.md)** - High-level system architecture, components, and data flow. +- **[Architecture Decision Records (`decisions/`)](./decisions/)** - Key architectural decisions and their justifications. -### πŸ”§ **[Core Components](components/)** +### 4. Modules Documentation +This section contains detailed technical documentation for each system module. 
-- **[Chart Layers System](components/charts/)** - *Comprehensive modular chart system* - - Strategy-driven chart configurations with JSON persistence - - 26+ professional indicator presets with user customization - - Real-time chart updates with indicator toggling - - 5 example trading strategies with validation system - - Extensible architecture for future bot signal integration +- **[Chart System (`modules/charts/`)](./modules/charts/)** - Comprehensive documentation for the modular chart system. +- **[Data Collectors (`modules/data_collectors.md`)]** - Guide to the data collector framework. +- **[Database Operations (`modules/database_operations.md`)]** - Details on the repository pattern for database interactions. +- **[Technical Indicators (`modules/technical-indicators.md`)]** - Information on the technical analysis module. +- **[Exchange Integrations (`modules/exchanges/`)](./modules/exchanges/)** - Exchange-specific implementation details. +- **[Logging System (`modules/logging.md`)]** - The unified logging framework. +- **[Data Collection Service (`modules/services/data_collection_service.md`)]** - The high-level service that orchestrates data collectors. -- **[Data Collectors](components/data_collectors.md)** - *Comprehensive guide to the enhanced data collector system* - - BaseDataCollector abstract class with health monitoring - - CollectorManager for centralized management - - Exchange Factory Pattern for standardized collector creation - - Modular Exchange Architecture for scalable implementation - - Auto-restart and failure recovery mechanisms +### 5. API & Reference +- **[API Documentation (`API.md`)]** - Placeholder for future REST API documentation. +- **[Technical Reference (`reference/`)](./reference/)** - Detailed specifications, data formats, and standards. +- **[Changelog (`CHANGELOG.md`)]** - A log of all notable changes to the project. 
-- **[Technical Indicators](components/technical-indicators.md)** - *Technical analysis module for trading strategies* - - SMA, EMA, RSI, MACD, and Bollinger Bands calculations - - Optimized for sparse OHLCV data handling - - Vectorized calculations using pandas and numpy - - JSON configuration support with validation - - Integration with aggregation strategy +## How to Use This Documentation -- **[Logging System](components/logging.md)** - *Unified logging framework* - - Multi-level logging with automatic cleanup - - Console and file output with formatting - - Performance monitoring integration +- **For a high-level understanding**, start with the `CONTEXT.md` and `architecture.md` files. +- **For development tasks**, refer to the specific module documentation in the `modules/` directory. +- **For setup and contribution guidelines**, see the `guides/` and `CONTRIBUTING.md` files. -### 🌐 **[Exchange Integrations](exchanges/)** +This documentation is intended to be a living document that evolves with the project. Please keep it up-to-date as you make changes. -- **[OKX Collector](exchanges/okx_collector.md)** - *Complete guide to OKX exchange integration* - - Real-time trades, orderbook, and ticker data collection - - WebSocket connection management with OKX-specific ping/pong - - Factory pattern usage and configuration - - Production deployment guide -- **[Exchange Overview](exchanges/)** - Multi-exchange architecture and comparison ### πŸ“– **[Setup & Guides](guides/)** diff --git a/docs/architecture.md b/docs/architecture.md new file mode 100644 index 0000000..26d4356 --- /dev/null +++ b/docs/architecture.md @@ -0,0 +1,116 @@ +# System Architecture + +This document provides a high-level overview of the system architecture for the Crypto Trading Bot Platform. + +## 1. Core Components + +The platform consists of six core components designed to work together in a monolithic application structure. 
This design prioritizes rapid development and clear separation of concerns. + +``` +β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” +β”‚ TCP Dashboard Platform β”‚ +β”‚ β”‚ +β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”‚ +β”‚ β”‚ Data Collector │────> β”‚ Strategy Engine │────>β”‚ Bot Manager β”‚β”‚ +β”‚ β”‚ (OKX, Binance...) β”‚ β”‚ (Signal Generation)β”‚ β”‚(State & Execution)β”‚β”‚ +β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β”‚ +β”‚ β”‚ β”‚ β”‚ β”‚ +β”‚β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”‚ +β”‚β”‚ Dashboard β”‚<──── β”‚ Backtesting β”‚<────│ Database β”‚β”‚ +β”‚β”‚ (Visualization) β”‚ β”‚ Engine β”‚ β”‚ (PostgreSQL) β”‚β”‚ +β”‚β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β”‚ +β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ +``` + +### 1. 
Data Collection Module +**Responsibility**: Collect real-time market data from exchanges +**Implementation**: `data/` +**Key Features**: +- Connects to exchange WebSocket APIs (OKX implemented) +- Aggregates raw trades into OHLCV candles +- Publishes data to Redis for real-time distribution +- Stores data in PostgreSQL for historical analysis +- See [Data Collectors Documentation (`modules/data_collectors.md`)](./modules/data_collectors.md) for details. + +### 2. Strategy Engine +**Responsibility**: Unified interface for all trading strategies +**Status**: Not yet implemented. This section describes the planned architecture. + +```python +class BaseStrategy: + def __init__(self, parameters: dict): + self.params = parameters + + def calculate(self, ohlcv_data: pd.DataFrame) -> Signal: + raise NotImplementedError + + def get_indicators(self) -> dict: + raise NotImplementedError +``` + +### 3. Bot Manager +**Responsibility**: Orchestrate bot execution and state management +**Status**: Not yet implemented. This section describes the planned architecture. + +```python +class BotManager: + def __init__(self): + self.bots = {} + + def create_bot(self, config: dict) -> Bot: + # ... + + def run_all_bots(self): + # ... +``` + +### 4. Database +**Responsibility**: Data persistence and storage +**Implementation**: `database/` +**Key Features**: +- PostgreSQL with TimescaleDB extension for time-series data +- SQLAlchemy for ORM and schema management +- Alembic for database migrations +- See [Database Operations Documentation (`modules/database_operations.md`)](./modules/database_operations.md) for details. + +### 5. Backtesting Engine +**Responsibility**: Test strategies against historical data +**Status**: Not yet implemented. This section describes the planned architecture. + +### 6. 
Dashboard +**Responsibility**: Visualization and user interaction +**Implementation**: `dashboard/` +**Key Features**: +- Dash-based web interface +- Real-time chart visualization with Plotly +- System health monitoring +- Bot management UI (planned) +- See the [Chart System Documentation (`modules/charts/`)](./modules/charts/) for details. + +## 2. Data Flow + +### Real-time Data Flow +1. **Data Collector** connects to exchange WebSocket (e.g., OKX). +2. Raw trades are aggregated into OHLCV candles (1m, 5m, etc.). +3. OHLCV data is published to a **Redis** channel. +4. **Strategy Engine** subscribes to Redis and receives OHLCV data. +5. Strategy generates a **Signal** (BUY/SELL/HOLD). +6. **Bot Manager** receives the signal and executes a virtual trade. +7. Trade details are stored in the **Database**. +8. **Dashboard** visualizes real-time data and bot activity. + +### Backtesting Data Flow +1. **Backtesting Engine** queries historical OHLCV data from the **Database**. +2. Data is fed into the **Strategy Engine**. +3. Strategy generates signals, which are logged. +4. Performance metrics are calculated and stored. + +## 3. Design Principles + +- **Monolithic Architecture**: All components are part of a single application for simplicity. +- **Modular Design**: Components are loosely coupled to allow for future migration to microservices. +- **API-First**: Internal components communicate through well-defined interfaces. +- **Configuration-driven**: Bot and strategy parameters are managed via JSON files. + +--- +*Back to [Main Documentation (`README.md`)]* \ No newline at end of file diff --git a/docs/architecture/README.md b/docs/architecture/README.md deleted file mode 100644 index d06017e..0000000 --- a/docs/architecture/README.md +++ /dev/null @@ -1,44 +0,0 @@ -# Architecture & Design Documentation - -This section contains high-level system architecture documentation and design decisions for the TCP Trading Platform. 
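The publish step in the real-time data flow (collector β†’ Redis β†’ strategy engine) can be made concrete with a small sketch. The `market:{symbol}` channel pattern matches the pub/sub naming used elsewhere in this patch; the exact JSON message shape is an assumption for illustration only.

```python
import json

# Channel naming follows the MARKET_DATA_CHANNEL = "market:{symbol}"
# pattern from the pub/sub notes; the payload layout is illustrative.
MARKET_DATA_CHANNEL = "market:{symbol}"


def market_channel(symbol: str) -> str:
    """Return the Redis channel carrying OHLCV updates for a symbol."""
    return MARKET_DATA_CHANNEL.format(symbol=symbol)


def encode_candle_update(symbol: str, timeframe: str, candle: dict) -> tuple[str, str]:
    """Build the (channel, payload) pair a collector would publish."""
    payload = json.dumps({"symbol": symbol, "timeframe": timeframe, **candle})
    return market_channel(symbol), payload


channel, payload = encode_candle_update(
    "BTC-USDT",
    "1m",
    {
        "timestamp": "2025-06-06T12:01:00Z",  # right-aligned close time
        "open": 1.0, "high": 2.0, "low": 0.5, "close": 1.5, "volume": 10.0,
    },
)
# A live collector would then call: redis_client.publish(channel, payload)
print(channel)  # market:BTC-USDT
```

A subscriber (the strategy engine) would listen on the same channel name and `json.loads` each message before generating a signal.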
- -## Documents - -### [Architecture Overview](architecture.md) -Comprehensive overview of the system architecture, including: -- Component relationships and data flow -- Technology stack and infrastructure decisions -- Scalability and performance considerations -- Security architecture and best practices - -### [Data Processing Refactor](data-processing-refactor.md) -Documentation of the major refactoring of the data processing system: -- Migration from monolithic to modular architecture -- Common utilities framework for all exchanges -- Right-aligned timestamp aggregation strategy -- Future leakage prevention mechanisms -- Exchange-specific component design patterns - -### [Crypto Bot PRD](crypto-bot-prd.md) -Product Requirements Document defining: -- Platform objectives and scope -- Functional and non-functional requirements -- User stories and acceptance criteria -- Technical constraints and assumptions - -## Quick Navigation - -- **New to the platform?** Start with [Architecture Overview](architecture.md) -- **Understanding data processing?** See [Data Processing Refactor](data-processing-refactor.md) -- **Product requirements?** Check [Crypto Bot PRD](crypto-bot-prd.md) -- **Implementation details?** See [Technical Reference](../reference/) - -## Related Documentation - -- [Technical Reference](../reference/) - Detailed specifications and API documentation -- [Core Components](../components/) - Implementation details for system components -- [Exchange Integrations](../exchanges/) - Exchange-specific documentation - ---- - -*For the complete documentation index, see the [main documentation README](../README.md).* \ No newline at end of file diff --git a/docs/architecture/architecture.md b/docs/architecture/architecture.md deleted file mode 100644 index 962c9ca..0000000 --- a/docs/architecture/architecture.md +++ /dev/null @@ -1,276 +0,0 @@ -## Architecture Components - -### 1. 
Data Collector -**Responsibility**: OHLCV data collection and aggregation from exchanges -```python -class DataCollector: - def __init__(self): - self.providers = {} # Registry of data providers - self.store_raw_data = False # Optional raw data storage - - def register_provider(self, name: str, provider: DataProvider): - """Register a new data provider""" - - def start_collection(self, symbols: List[str], timeframes: List[str]): - """Start collecting OHLCV data for specified symbols and timeframes""" - - def process_raw_trades(self, raw_trades: List[dict]) -> dict: - """Aggregate raw trades into OHLCV candles""" - - def store_ohlcv_data(self, ohlcv_data: dict): - """Store OHLCV data in PostgreSQL market_data table""" - - def send_market_update(self, symbol: str, ohlcv_data: dict): - """Send Redis signal with OHLCV update to active bots""" - - def store_raw_data_optional(self, raw_data: dict): - """Optionally store raw data for detailed backtesting""" -``` - -### 2. Strategy Engine -**Responsibility**: Unified interface for all trading strategies -```python -class BaseStrategy: - def __init__(self, parameters: dict): - self.parameters = parameters - - def process_data(self, data: pd.DataFrame) -> Signal: - """Process market data and generate signals""" - raise NotImplementedError - - def get_indicators(self) -> dict: - """Return calculated indicators for plotting""" - return {} -``` - -### 3. 
Bot Manager -**Responsibility**: Orchestrate bot execution and state management -```python -class BotManager: - def __init__(self): - self.active_bots = {} - self.config_path = "config/bots/" - - def load_bot_config(self, bot_id: int) -> dict: - """Load bot configuration from JSON file""" - - def start_bot(self, bot_id: int): - """Start a bot instance with crash recovery monitoring""" - - def stop_bot(self, bot_id: int): - """Stop a bot instance and update database status""" - - def process_signal(self, bot_id: int, signal: Signal): - """Process signal and make virtual trading decision""" - - def update_bot_heartbeat(self, bot_id: int): - """Update bot heartbeat in database for monitoring""" - - def restart_crashed_bots(self): - """Monitor and restart crashed bots (max 3 attempts/hour)""" - - def restore_active_bots_on_startup(self): - """Restore active bot states after application restart""" -``` - -## Communication Architecture - -### Redis Pub/Sub Patterns -```python -# Real-time market data distribution -MARKET_DATA_CHANNEL = "market:{symbol}" # OHLCV updates -BOT_SIGNALS_CHANNEL = "signals:{bot_id}" # Trading decisions -BOT_STATUS_CHANNEL = "status:{bot_id}" # Bot lifecycle events -SYSTEM_EVENTS_CHANNEL = "system:events" # Global notifications -``` - -## Time Aggregation Strategy - -### Candlestick Alignment -- **Use RIGHT-ALIGNED timestamps** (industry standard) -- 5-minute candle with timestamp 09:05:00 represents data from 09:00:01 to 09:05:00 -- Timestamp = close time of the candle -- Aligns with major exchanges (Binance, OKX, Coinbase) - -### Aggregation Logic -```python -def aggregate_to_timeframe(ticks: List[dict], timeframe: str) -> dict: - """ - Aggregate tick data to specified timeframe - timeframe: '1m', '5m', '15m', '1h', '4h', '1d' - """ - # Convert timeframe to seconds - interval_seconds = parse_timeframe(timeframe) - - # Group ticks by time intervals (right-aligned) - for group in group_by_interval(ticks, interval_seconds): - candle = { - 
'timestamp': group.end_time, # Right-aligned - 'open': group.first_price, - 'high': group.max_price, - 'low': group.min_price, - 'close': group.last_price, - 'volume': group.total_volume - } - yield candle -``` - -## Backtesting Strategy - -### Vectorized Processing Approach -```python -import pandas as pd -import numpy as np - -def backtest_strategy_simple(strategy, market_data: pd.DataFrame, initial_balance: float = 10000): - """ - Simple vectorized backtesting using pandas operations - - Parameters: - - strategy: Strategy instance with process_data method - - market_data: DataFrame with OHLCV data - - initial_balance: Starting portfolio value - - Returns: - - Portfolio performance metrics and trade history - """ - - # Calculate all signals at once using vectorized operations - signals = [] - portfolio_value = [] - current_balance = initial_balance - position = 0 - - for idx, row in market_data.iterrows(): - # Get signal from strategy - signal = strategy.process_data(market_data.iloc[:idx+1]) - - # Simulate trade execution - if signal.action == 'buy' and position == 0: - position = current_balance / row['close'] - current_balance = 0 - - elif signal.action == 'sell' and position > 0: - current_balance = position * row['close'] * 0.999 # 0.1% fee - position = 0 - - # Track portfolio value - total_value = current_balance + (position * row['close']) - portfolio_value.append(total_value) - signals.append(signal) - - return { - 'final_value': portfolio_value[-1], - 'total_return': (portfolio_value[-1] / initial_balance - 1) * 100, - 'signals': signals, - 'portfolio_progression': portfolio_value - } - -def calculate_performance_metrics(portfolio_values: List[float]) -> dict: - """Calculate standard performance metrics""" - returns = pd.Series(portfolio_values).pct_change().dropna() - - return { - 'sharpe_ratio': returns.mean() / returns.std() if returns.std() > 0 else 0, - 'max_drawdown': (pd.Series(portfolio_values).cummax() - pd.Series(portfolio_values)).max(), - 
'win_rate': (returns > 0).mean(), - 'total_trades': len(returns) - } -``` - -### Optimization Techniques -1. **Vectorized Operations**: Use pandas for bulk data processing -2. **Efficient Indexing**: Pre-calculate indicators where possible -3. **Memory Management**: Process data in chunks for large datasets -4. **Simple Parallelization**: Run multiple strategy tests independently - -## Key Design Principles - -1. **OHLCV-First Data Strategy**: Primary focus on aggregated candle data, optional raw data storage -2. **Signal Tracking**: All trading signals recorded in database for analysis and debugging -3. **JSON Configuration**: Strategy parameters and bot configs in JSON for rapid testing -4. **Real-time State Management**: Bot states updated via Redis and PostgreSQL for monitoring -5. **Crash Recovery**: Automatic bot restart and application state recovery -6. **Virtual Trading**: Simulation-first approach with fee modeling -7. **Simplified Architecture**: Monolithic design with clear component boundaries for future scaling - -## Repository Pattern for Database Operations - -### Database Abstraction Layer -The system uses the **Repository Pattern** to abstract database operations from business logic, providing a clean, maintainable, and testable interface for all data access. 
-
-```python
-# Centralized database operations
-from database.operations import get_database_operations
-
-class DataCollector:
-    def __init__(self):
-        # Use repository pattern instead of direct SQL
-        self.db = get_database_operations()
-
-    def store_candle(self, candle: OHLCVCandle):
-        """Store candle using repository pattern"""
-        success = self.db.market_data.upsert_candle(candle, force_update=False)
-
-    def store_raw_trade(self, data_point: MarketDataPoint):
-        """Store raw trade data using repository pattern"""
-        success = self.db.raw_trades.insert_market_data_point(data_point)
-```
-
-### Repository Structure
-```python
-# Clean API for database operations
-class DatabaseOperations:
-    def __init__(self):
-        self.market_data = MarketDataRepository()  # Candle operations
-        self.raw_trades = RawTradeRepository()     # Raw data operations
-
-    def health_check(self) -> bool:
-        """Check database connection health"""
-
-    def get_stats(self) -> dict:
-        """Get database statistics and metrics"""
-
-class MarketDataRepository:
-    def upsert_candle(self, candle: OHLCVCandle, force_update: bool = False) -> bool:
-        """Store or update candle with duplicate handling"""
-
-    def get_candles(self, symbol: str, timeframe: str, start: datetime, end: datetime) -> List[dict]:
-        """Retrieve historical candle data"""
-
-    def get_latest_candle(self, symbol: str, timeframe: str) -> Optional[dict]:
-        """Get most recent candle for symbol/timeframe"""
-
-class RawTradeRepository:
-    def insert_market_data_point(self, data_point: MarketDataPoint) -> bool:
-        """Store raw WebSocket data"""
-
-    def get_raw_trades(self, symbol: str, data_type: str, start: datetime, end: datetime) -> List[dict]:
-        """Retrieve raw trade data for analysis"""
-```
-
-### Benefits of Repository Pattern
-- **No Raw SQL**: Business logic never contains direct SQL queries
-- **Centralized Operations**: All database interactions go through well-defined APIs
-- **Easy Testing**: Repository methods can be easily mocked for unit tests
-- **Database Agnostic**: Can change database implementations without affecting business logic
-- **Automatic Transaction Management**: Sessions, commits, and rollbacks handled automatically
-- **Consistent Error Handling**: Custom exceptions with proper context
-- **Type Safety**: Full type hints for better IDE support and error detection
-
-## Database Architecture
-
-### Core Tables
-- **market_data**: OHLCV candles for bot operations and backtesting (primary table)
-- **bots**: Bot instances with JSON config references and status tracking
-- **signals**: Trading decisions with confidence scores and indicator values
-- **trades**: Virtual trade execution records with P&L tracking
-- **bot_performance**: Portfolio snapshots for performance visualization
-
-### Optional Tables
-- **raw_trades**: Raw tick data for advanced backtesting (partitioned by month)
-
-### Data Access Patterns
-- **Real-time**: Bots read recent OHLCV data via indexes on (symbol, timeframe, timestamp)
-- **Historical**: Dashboard queries aggregated performance data for charts
-- **Backtesting**: Sequential access to historical OHLCV data by date range
\ No newline at end of file
diff --git a/docs/components/charts/adding-new-indicators.md b/docs/components/charts/adding-new-indicators.md
deleted file mode 100644
index 65cc511..0000000
--- a/docs/components/charts/adding-new-indicators.md
+++ /dev/null
@@ -1,393 +0,0 @@
-# Quick Guide: Adding New Indicators
-
-## Overview
-
-This guide provides a step-by-step checklist for adding new technical indicators to the Crypto Trading Bot Dashboard.
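Before working through the checklist, it helps to see the core shape of an indicator layer in isolation. The sketch below is a self-contained miniature using simplified stand-in names (`IndicatorSketch`, `SMASketch`), not the project's real `IndicatorLayer` base class or Plotly wiring — those appear in the steps that follow. It shows the essential contract: a config dict in, a dict of named value series out.

```python
from typing import Dict, List, Optional

class IndicatorSketch:
    """Stand-in for the dashboard's indicator base class (illustrative only)."""

    def __init__(self, config: Dict):
        self.config = config

def sma(values: List[float], period: int) -> List[Optional[float]]:
    """Simple moving average; None until enough data points exist."""
    out: List[Optional[float]] = []
    running = 0.0
    for i, v in enumerate(values):
        running += v
        if i >= period:
            running -= values[i - period]  # drop the value leaving the window
        out.append(running / period if i >= period - 1 else None)
    return out

class SMASketch(IndicatorSketch):
    """Overlay indicator: its values share the price axis with the candles."""

    display_type = "overlay"

    def calculate_values(self, closes: List[float]) -> Dict[str, List[Optional[float]]]:
        period = self.config.get("period", 20)
        return {"sma": sma(closes, period)}

layer = SMASketch({"period": 3})
print(layer.calculate_values([1, 2, 3, 4, 5])["sma"])  # [None, None, 2.0, 3.0, 4.0]
```

A subplot indicator (RSI, MACD) follows the same contract; only `display_type` and the chart placement differ.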
- -## Prerequisites - -- Understanding of Python and technical analysis -- Familiarity with the project structure -- Knowledge of the indicator type (overlay vs subplot) - -## Step-by-Step Checklist - -### βœ… Step 1: Plan Your Indicator - -- [ ] Determine indicator type (overlay or subplot) -- [ ] Define required parameters -- [ ] Choose default styling -- [ ] Research calculation formula - -### βœ… Step 2: Create Indicator Class - -**File**: `components/charts/layers/indicators.py` (overlay) or `components/charts/layers/subplots.py` (subplot) - -```python -class StochasticLayer(IndicatorLayer): - """Stochastic Oscillator indicator implementation.""" - - def __init__(self, config: Dict[str, Any]): - super().__init__(config) - self.name = "stochastic" - self.display_type = "subplot" # or "overlay" - - def calculate_values(self, df: pd.DataFrame) -> Dict[str, pd.Series]: - """Calculate stochastic oscillator values.""" - k_period = self.config.get('k_period', 14) - d_period = self.config.get('d_period', 3) - - # Calculate %K and %D lines - lowest_low = df['low'].rolling(window=k_period).min() - highest_high = df['high'].rolling(window=k_period).max() - - k_percent = 100 * ((df['close'] - lowest_low) / (highest_high - lowest_low)) - d_percent = k_percent.rolling(window=d_period).mean() - - return { - 'k_percent': k_percent, - 'd_percent': d_percent - } - - def create_traces(self, df: pd.DataFrame, values: Dict[str, pd.Series]) -> List[go.Scatter]: - """Create plotly traces for stochastic oscillator.""" - traces = [] - - # %K line - traces.append(go.Scatter( - x=df.index, - y=values['k_percent'], - mode='lines', - name=f"%K ({self.config.get('k_period', 14)})", - line=dict( - color=self.config.get('color', '#007bff'), - width=self.config.get('line_width', 2) - ) - )) - - # %D line - traces.append(go.Scatter( - x=df.index, - y=values['d_percent'], - mode='lines', - name=f"%D ({self.config.get('d_period', 3)})", - line=dict( - color=self.config.get('secondary_color', 
'#ff6b35'), - width=self.config.get('line_width', 2) - ) - )) - - return traces -``` - -### βœ… Step 3: Register Indicator - -**File**: `components/charts/layers/__init__.py` - -```python -# Import the new class -from .subplots import StochasticLayer - -# Add to appropriate registry -SUBPLOT_REGISTRY = { - 'rsi': RSILayer, - 'macd': MACDLayer, - 'stochastic': StochasticLayer, # Add this line -} - -# For overlay indicators, add to INDICATOR_REGISTRY instead -INDICATOR_REGISTRY = { - 'sma': SMALayer, - 'ema': EMALayer, - 'bollinger_bands': BollingerBandsLayer, - 'stochastic': StochasticLayer, # Only if overlay -} -``` - -### βœ… Step 4: Add UI Dropdown Option - -**File**: `app.py` (in the indicator type dropdown) - -```python -dcc.Dropdown( - id='indicator-type-dropdown', - options=[ - {'label': 'Simple Moving Average (SMA)', 'value': 'sma'}, - {'label': 'Exponential Moving Average (EMA)', 'value': 'ema'}, - {'label': 'Relative Strength Index (RSI)', 'value': 'rsi'}, - {'label': 'MACD', 'value': 'macd'}, - {'label': 'Bollinger Bands', 'value': 'bollinger_bands'}, - {'label': 'Stochastic Oscillator', 'value': 'stochastic'}, # Add this - ] -) -``` - -### βœ… Step 5: Add Parameter Fields to Modal - -**File**: `app.py` (in the modal parameters section) - -```python -# Add parameter section for stochastic -html.Div([ - html.Div([ - html.Label("%K Period:", style={'font-weight': 'bold', 'margin-bottom': '5px'}), - dcc.Input( - id='stochastic-k-period-input', - type='number', - value=14, - min=5, max=50, - style={'width': '80px', 'padding': '8px', 'border': '1px solid #ddd', 'border-radius': '4px'} - ) - ], style={'margin-bottom': '10px'}), - html.Div([ - html.Label("%D Period:", style={'font-weight': 'bold', 'margin-bottom': '5px'}), - dcc.Input( - id='stochastic-d-period-input', - type='number', - value=3, - min=2, max=10, - style={'width': '80px', 'padding': '8px', 'border': '1px solid #ddd', 'border-radius': '4px'} - ) - ]), - html.P("Stochastic oscillator periods for 
%K and %D lines", - style={'color': '#7f8c8d', 'font-size': '12px', 'margin-top': '5px'}) -], id='stochastic-parameters', style={'display': 'none', 'margin-bottom': '10px'}) -``` - -### βœ… Step 6: Update Parameter Visibility Callback - -**File**: `app.py` (in `update_parameter_fields` callback) - -```python -@app.callback( - [Output('indicator-parameters-message', 'style'), - Output('sma-parameters', 'style'), - Output('ema-parameters', 'style'), - Output('rsi-parameters', 'style'), - Output('macd-parameters', 'style'), - Output('bb-parameters', 'style'), - Output('stochastic-parameters', 'style')], # Add this output - Input('indicator-type-dropdown', 'value'), - prevent_initial_call=True -) -def update_parameter_fields(indicator_type): - # ... existing code ... - - # Add stochastic style - stochastic_style = hidden_style - - # Show the relevant parameter section - if indicator_type == 'sma': - sma_style = visible_style - elif indicator_type == 'ema': - ema_style = visible_style - elif indicator_type == 'rsi': - rsi_style = visible_style - elif indicator_type == 'macd': - macd_style = visible_style - elif indicator_type == 'bollinger_bands': - bb_style = visible_style - elif indicator_type == 'stochastic': # Add this - stochastic_style = visible_style - - return message_style, sma_style, ema_style, rsi_style, macd_style, bb_style, stochastic_style -``` - -### βœ… Step 7: Update Save Indicator Callback - -**File**: `app.py` (in `save_new_indicator` callback) - -```python -# Add stochastic parameters to State inputs -State('stochastic-k-period-input', 'value'), -State('stochastic-d-period-input', 'value'), - -# Add to parameter collection logic -def save_new_indicator(n_clicks, name, indicator_type, description, color, line_width, - sma_period, ema_period, rsi_period, - macd_fast, macd_slow, macd_signal, - bb_period, bb_stddev, - stochastic_k, stochastic_d, # Add these - edit_data): - - # ... existing code ... 
- - elif indicator_type == 'stochastic': - parameters = { - 'k_period': stochastic_k or 14, - 'd_period': stochastic_d or 3 - } -``` - -### βœ… Step 8: Update Edit Callback Parameters - -**File**: `app.py` (in `edit_indicator` callback) - -```python -# Add output for stochastic parameters -Output('stochastic-k-period-input', 'value'), -Output('stochastic-d-period-input', 'value'), - -# Add parameter loading logic -elif indicator.type == 'stochastic': - stochastic_k = params.get('k_period', 14) - stochastic_d = params.get('d_period', 3) - -# Add to return statement -return ( - "✏️ Edit Indicator", - indicator.name, - indicator.type, - indicator.description, - indicator.styling.color, - edit_data, - sma_period, - ema_period, - rsi_period, - macd_fast, - macd_slow, - macd_signal, - bb_period, - bb_stddev, - stochastic_k, # Add these - stochastic_d -) -``` - -### βœ… Step 9: Update Reset Callback - -**File**: `app.py` (in `reset_modal_form` callback) - -```python -# Add outputs -Output('stochastic-k-period-input', 'value', allow_duplicate=True), -Output('stochastic-d-period-input', 'value', allow_duplicate=True), - -# Add default values to return -return "", None, "", "#007bff", 2, "πŸ“Š Add New Indicator", None, 20, 12, 14, 12, 26, 9, 20, 2.0, 14, 3 -``` - -### βœ… Step 10: Create Default Template - -**File**: `components/charts/indicator_defaults.py` - -```python -def create_stochastic_template() -> UserIndicator: - """Create default Stochastic Oscillator template.""" - return UserIndicator( - id=f"stochastic_{generate_short_id()}", - name="Stochastic 14,3", - description="14-period %K with 3-period %D smoothing", - type="stochastic", - display_type="subplot", - parameters={ - "k_period": 14, - "d_period": 3 - }, - styling=IndicatorStyling( - color="#9c27b0", - line_width=2 - ) - ) - -# Add to DEFAULT_TEMPLATES -DEFAULT_TEMPLATES = { - "sma": create_sma_template, - "ema": create_ema_template, - "rsi": create_rsi_template, - "macd": create_macd_template, - 
"bollinger_bands": create_bollinger_bands_template, - "stochastic": create_stochastic_template, # Add this -} -``` - -### βœ… Step 11: Add Calculation Function (Optional) - -**File**: `data/common/indicators.py` - -```python -def calculate_stochastic(df: pd.DataFrame, k_period: int = 14, d_period: int = 3) -> tuple: - """Calculate Stochastic Oscillator (%K and %D).""" - lowest_low = df['low'].rolling(window=k_period).min() - highest_high = df['high'].rolling(window=k_period).max() - - k_percent = 100 * ((df['close'] - lowest_low) / (highest_high - lowest_low)) - d_percent = k_percent.rolling(window=d_period).mean() - - return k_percent, d_percent -``` - -## Testing Checklist - -- [ ] Indicator appears in dropdown -- [ ] Parameter fields show/hide correctly -- [ ] Default values are set properly -- [ ] Indicator saves and loads correctly -- [ ] Edit functionality works -- [ ] Chart updates with indicator -- [ ] Delete functionality works -- [ ] Error handling works with insufficient data - -## Common Patterns - -### Single Line Overlay -```python -# Simple indicators like SMA, EMA -def create_traces(self, df: pd.DataFrame, values: Dict[str, pd.Series]) -> List[go.Scatter]: - return [go.Scatter( - x=df.index, - y=values['indicator_name'], - mode='lines', - name=self.config.get('name', 'Indicator'), - line=dict(color=self.config.get('color', '#007bff')) - )] -``` - -### Multi-Line Subplot -```python -# Complex indicators like MACD, Stochastic -def create_traces(self, df: pd.DataFrame, values: Dict[str, pd.Series]) -> List[go.Scatter]: - traces = [] - for key, series in values.items(): - traces.append(go.Scatter( - x=df.index, - y=series, - mode='lines', - name=f"{key.title()}" - )) - return traces -``` - -### Band Indicators -```python -# Indicators with bands like Bollinger Bands -def create_traces(self, df: pd.DataFrame, values: Dict[str, pd.Series]) -> List[go.Scatter]: - return [ - # Upper band - go.Scatter(x=df.index, y=values['upper'], name='Upper'), - # Middle 
line
-        go.Scatter(x=df.index, y=values['middle'], name='Middle'),
-        # Lower band with fill
-        go.Scatter(x=df.index, y=values['lower'], name='Lower',
-                   fill='tonexty', fillcolor='rgba(0,123,255,0.1)')
-    ]
-```
-
-## File Change Summary
-
-When adding a new indicator, you'll typically modify these files:
-
-1. **`components/charts/layers/indicators.py`** or **`subplots.py`** - Indicator class
-2. **`components/charts/layers/__init__.py`** - Registry registration
-3. **`app.py`** - UI dropdown, parameter fields, callbacks
-4. **`components/charts/indicator_defaults.py`** - Default template
-5. **`data/common/indicators.py`** - Calculation function (optional)
-
-## Tips
-
-- Start with a simple single-line indicator first
-- Test each step before moving to the next
-- Use existing indicators as templates
-- Check console/logs for errors
-- Test with different parameter values
-- Verify calculations with known data
\ No newline at end of file
diff --git a/docs/components/data_collectors.md b/docs/components/data_collectors.md
deleted file mode 100644
index 0dae42a..0000000
--- a/docs/components/data_collectors.md
+++ /dev/null
@@ -1,1322 +0,0 @@
-# Data Collector System Documentation
-
-## Overview
-
-The Data Collector System provides a robust, scalable framework for collecting real-time market data from cryptocurrency exchanges. It features comprehensive health monitoring, automatic recovery, centralized management, and a modular exchange-based architecture designed for production trading environments.
-
-This documentation covers the **core collector components**. For the high-level service layer that orchestrates these collectors, see [Data Collection Service](../services/data_collection_service.md).
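The system's central data flow — collectors produce typed data points and fan them out to registered callbacks — can be pictured with a self-contained miniature. The `DataType` and `MarketDataPoint` stand-ins below mirror the real structures documented under API Reference later in this page; `CollectorSketch` is a simplified illustration, not the actual `BaseDataCollector`.

```python
import datetime as dt
from dataclasses import dataclass
from enum import Enum
from typing import Any, Callable, Dict, List

class DataType(Enum):            # stand-in for data.base_collector.DataType
    TRADE = "trade"
    TICKER = "ticker"

@dataclass
class MarketDataPoint:           # stand-in mirroring the documented dataclass
    exchange: str
    symbol: str
    timestamp: dt.datetime
    data_type: DataType
    data: Dict[str, Any]

class CollectorSketch:
    """Minimal callback registry: one listener list per data type."""

    def __init__(self) -> None:
        self._callbacks: Dict[DataType, List[Callable[[MarketDataPoint], None]]] = {}

    def add_data_callback(self, data_type: DataType, cb: Callable) -> None:
        self._callbacks.setdefault(data_type, []).append(cb)

    def _dispatch(self, point: MarketDataPoint) -> None:
        # Fan a processed message out to every registered listener of its type
        for cb in self._callbacks.get(point.data_type, []):
            cb(point)

seen: List[str] = []
c = CollectorSketch()
c.add_data_callback(DataType.TRADE, lambda p: seen.append(p.symbol))
c._dispatch(MarketDataPoint("okx", "BTC-USDT",
                            dt.datetime.now(dt.timezone.utc),
                            DataType.TRADE, {"px": "65000"}))
print(seen)  # ['BTC-USDT']
```

In the real system this dispatch happens inside the collector's message loop after `_process_message` normalizes a raw exchange payload.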
-
-## Key Features
-
-### πŸ—οΈ **Modular Exchange Architecture**
-- **Exchange-Based Organization**: Each exchange has its own implementation folder
-- **Factory Pattern**: Easy creation of collectors from any supported exchange
-- **Standardized Interface**: Consistent API across all exchange implementations
-- **Scalable Design**: Easy addition of new exchanges (Binance, Coinbase, etc.)
-
-### πŸ”„ **Auto-Recovery & Health Monitoring**
-- **Heartbeat System**: Continuous health monitoring with configurable intervals
-- **Auto-Restart**: Automatic restart on failures with exponential backoff
-- **Connection Recovery**: Robust reconnection logic for network interruptions
-- **Data Freshness Monitoring**: Detects stale data and triggers recovery
-
-### πŸŽ›οΈ **Centralized Management**
-- **CollectorManager**: Supervises multiple collectors with coordinated lifecycle
-- **Dynamic Control**: Enable/disable collectors at runtime without system restart
-- **Global Health Checks**: System-wide monitoring and alerting
-- **Graceful Shutdown**: Proper cleanup and resource management
-
-### πŸ“Š **Comprehensive Monitoring**
-- **Real-time Status**: Detailed status reporting for all collectors
-- **Performance Metrics**: Message counts, uptime, error rates, restart counts
-- **Health Analytics**: Connection state, data freshness, error tracking
-- **Conditional Logging**: Enhanced logging with configurable verbosity (see [Logging System](logging.md))
-- **Multi-Timeframe Support**: Sub-second to daily candle aggregation (1s, 5s, 10s, 15s, 30s, 1m, 5m, 15m, 1h, 4h, 1d)
-
-### πŸ›’οΈ **Database Integration**
-- **Repository Pattern**: All database operations use the centralized `database/operations.py` module
-- **No Raw SQL**: Clean API through `MarketDataRepository` and `RawTradeRepository` classes
-- **Automatic Transaction Management**: Sessions, commits, and rollbacks handled automatically
-- **Configurable Duplicate Handling**: `force_update_candles` parameter
controls duplicate behavior -- **Real-time Storage**: Completed candles automatically saved to `market_data` table -- **Raw Data Storage**: Optional raw WebSocket data storage via `RawTradeRepository` -- **Custom Error Handling**: Proper exception handling with `DatabaseOperationError` -- **Health Monitoring**: Built-in database health checks and statistics -- **Connection Pooling**: Efficient database connection management through repositories - -## Architecture - -``` -β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” -β”‚ CollectorManager β”‚ -β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ -β”‚ β”‚ Global Health Monitor β”‚ β”‚ -β”‚ β”‚ β€’ System-wide health checks β”‚ β”‚ -β”‚ β”‚ β€’ Auto-restart coordination β”‚ β”‚ -β”‚ β”‚ β€’ Performance analytics β”‚ β”‚ -β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚ -β”‚ β”‚ β”‚ -β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ -β”‚ β”‚ OKX Collector β”‚ β”‚Binance Collectorβ”‚ β”‚ Custom β”‚ β”‚ -β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ Collector β”‚ β”‚ -β”‚ β”‚ β€’ Health Monitorβ”‚ β”‚ β€’ Health Monitorβ”‚ β”‚ β€’ Health Mon β”‚ β”‚ -β”‚ β”‚ β€’ Auto-restart β”‚ β”‚ β€’ Auto-restart β”‚ β”‚ β€’ Auto-resta β”‚ β”‚ -β”‚ β”‚ β€’ Data Validate β”‚ β”‚ β€’ Data Validate β”‚ β”‚ β€’ Data Valid β”‚ β”‚ -β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚ 
-β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ - β”‚ - β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” - β”‚ Data Output β”‚ - β”‚ β”‚ - β”‚ β€’ Callbacks β”‚ - β”‚ β€’ Redis Pub/Sub β”‚ - β”‚ β€’ Database β”‚ - β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ -``` - -### Exchange Module Structure - -The modular architecture organizes exchange implementations: - -``` -data/ -β”œβ”€β”€ base_collector.py # Abstract base classes -β”œβ”€β”€ collector_manager.py # Cross-platform collector manager -β”œβ”€β”€ aggregator.py # Cross-exchange data aggregation -β”œβ”€β”€ exchanges/ # Exchange-specific implementations -β”‚ β”œβ”€β”€ __init__.py # Main exports and factory -β”‚ β”œβ”€β”€ registry.py # Exchange registry and capabilities -β”‚ β”œβ”€β”€ factory.py # Factory pattern for collectors -β”‚ └── okx/ # OKX implementation -β”‚ β”œβ”€β”€ __init__.py # OKX exports -β”‚ β”œβ”€β”€ collector.py # OKXCollector class -β”‚ └── websocket.py # OKXWebSocketClient class -β”‚ └── binance/ # Future: Binance implementation -β”‚ β”œβ”€β”€ __init__.py -β”‚ β”œβ”€β”€ collector.py -β”‚ └── websocket.py -``` - -## Quick Start - -### 1. 
Using Exchange Factory (Recommended)
-
-```python
-import asyncio
-from data.exchanges import ExchangeFactory, ExchangeCollectorConfig, create_okx_collector
-from data.base_collector import DataType
-from utils.logger import get_logger
-
-async def main():
-    # Create logger for the collector
-    logger = get_logger('okx_collector', verbose=True)
-
-    # Method 1: Using factory with configuration
-    config = ExchangeCollectorConfig(
-        exchange='okx',
-        symbol='BTC-USDT',
-        data_types=[DataType.TRADE, DataType.ORDERBOOK],
-        auto_restart=True,
-        health_check_interval=30.0,
-        store_raw_data=True
-    )
-
-    collector = ExchangeFactory.create_collector(config, logger=logger)
-
-    # Method 2: Using convenience function
-    okx_collector = create_okx_collector(
-        symbol='ETH-USDT',
-        data_types=[DataType.TRADE, DataType.ORDERBOOK],
-        logger=logger
-    )
-
-    # Add data callback
-    def on_trade_data(data_point):
-        print(f"Trade: {data_point.symbol} - {data_point.data}")
-
-    collector.add_data_callback(DataType.TRADE, on_trade_data)
-
-    # Start collector
-    await collector.start()
-
-    # Let it run
-    await asyncio.sleep(60)
-
-    # Stop collector
-    await collector.stop()
-
-asyncio.run(main())
-```
-
-### 2. Creating Multiple Collectors with Manager
-
-```python
-import asyncio
-from data.exchanges import ExchangeFactory, ExchangeCollectorConfig
-from data.base_collector import DataType
-from data.collector_manager import CollectorManager
-from utils.logger import get_logger
-
-async def main():
-    # Create manager with logging
-    manager_logger = get_logger('collector_manager', verbose=True)
-    manager = CollectorManager(logger=manager_logger)
-
-    # Create multiple collectors using factory
-    configs = [
-        ExchangeCollectorConfig('okx', 'BTC-USDT', [DataType.TRADE, DataType.ORDERBOOK]),
-        ExchangeCollectorConfig('okx', 'ETH-USDT', [DataType.TRADE]),
-        ExchangeCollectorConfig('okx', 'SOL-USDT', [DataType.ORDERBOOK])
-    ]
-
-    # Create collectors with individual loggers
-    for config in configs:
-        collector_logger = get_logger(f'okx_{config.symbol.lower().replace("-", "_")}')
-        collector = ExchangeFactory.create_collector(config, logger=collector_logger)
-        manager.add_collector(collector)
-
-    print(f"Created {len(manager.list_collectors())} collectors")
-
-    # Start all collectors
-    await manager.start()
-
-    # Monitor
-    await asyncio.sleep(60)
-
-    # Stop all
-    await manager.stop()
-
-asyncio.run(main())
-```
-
-## API Reference
-
-### BaseDataCollector
-
-The abstract base class that all data collectors must inherit from.
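The subclassing contract can be sketched with a runnable, self-contained miniature. `BaseSketch` and `EchoCollector` below are hypothetical stand-ins covering only the connect/process slice of the interface; the full constructor and abstract-method signatures are listed in the sections that follow.

```python
import asyncio
from abc import ABC, abstractmethod

class BaseSketch(ABC):
    """Stand-in for BaseDataCollector: base class drives the lifecycle,
    subclasses supply the exchange-specific transport and parsing."""

    @abstractmethod
    async def connect(self) -> bool: ...

    @abstractmethod
    async def _process_message(self, message): ...

    async def start(self) -> bool:
        # Template method: in the real class this also spawns the message
        # loop and health monitor; here it just establishes the connection
        return await self.connect()

class EchoCollector(BaseSketch):
    async def connect(self) -> bool:
        return True  # a real implementation opens a WebSocket here

    async def _process_message(self, message):
        return {"processed": message}  # a real one returns a MarketDataPoint

result = asyncio.run(EchoCollector().start())
print(result)  # True
```

Because `connect` and `_process_message` are declared abstract, forgetting to implement either in a new exchange collector fails loudly at instantiation rather than silently at runtime.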
- -#### Constructor - -```python -def __init__(self, - exchange_name: str, - symbols: List[str], - data_types: Optional[List[DataType]] = None, - component_name: Optional[str] = None, - auto_restart: bool = True, - health_check_interval: float = 30.0, - logger: Optional[logging.Logger] = None, - log_errors_only: bool = False) -``` - -**Parameters:** -- `exchange_name`: Name of the exchange (e.g., 'okx', 'binance') -- `symbols`: List of trading symbols to collect data for -- `data_types`: Types of data to collect (default: [DataType.CANDLE]) -- `component_name`: Name for logging (default: based on exchange_name) -- `auto_restart`: Enable automatic restart on failures (default: True) -- `health_check_interval`: Seconds between health checks (default: 30.0) -- `logger`: Logger instance for conditional logging (default: None) -- `log_errors_only`: Only log error-level messages (default: False) - -#### Abstract Methods - -Must be implemented by subclasses: - -```python -async def connect(self) -> bool -async def disconnect(self) -> None -async def subscribe_to_data(self, symbols: List[str], data_types: List[DataType]) -> bool -async def unsubscribe_from_data(self, symbols: List[str], data_types: List[DataType]) -> bool -async def _process_message(self, message: Any) -> Optional[MarketDataPoint] -async def _handle_messages(self) -> None -``` - -#### Public Methods - -```python -async def start() -> bool # Start the collector -async def stop(force: bool = False) -> None # Stop the collector -async def restart() -> bool # Restart the collector - -# Callback management -def add_data_callback(self, data_type: DataType, callback: Callable) -> None -def remove_data_callback(self, data_type: DataType, callback: Callable) -> None - -# Symbol management -def add_symbol(self, symbol: str) -> None -def remove_symbol(self, symbol: str) -> None - -# Status and monitoring -def get_status(self) -> Dict[str, Any] -def get_health_status(self) -> Dict[str, Any] - -# Data validation -def 
validate_ohlcv_data(self, data: Dict[str, Any], symbol: str, timeframe: str) -> OHLCVData -``` - -#### Conditional Logging Methods - -All collectors support conditional logging (see [Logging System](logging.md) for details): - -```python -def _log_debug(self, message: str) -> None # Debug messages (if not errors-only) -def _log_info(self, message: str) -> None # Info messages (if not errors-only) -def _log_warning(self, message: str) -> None # Warning messages (if not errors-only) -def _log_error(self, message: str, exc_info: bool = False) -> None # Always logged -def _log_critical(self, message: str, exc_info: bool = False) -> None # Always logged -``` - -#### Status Information - -The `get_status()` method returns comprehensive status information: - -```python -{ - 'exchange': 'okx', - 'status': 'running', # Current status - 'should_be_running': True, # Desired state - 'symbols': ['BTC-USDT', 'ETH-USDT'], # Configured symbols - 'data_types': ['ticker'], # Data types being collected - 'auto_restart': True, # Auto-restart enabled - 'health': { - 'time_since_heartbeat': 5.2, # Seconds since last heartbeat - 'time_since_data': 2.1, # Seconds since last data - 'max_silence_duration': 300.0 # Max allowed silence - }, - 'statistics': { - 'messages_received': 1250, # Total messages received - 'messages_processed': 1248, # Successfully processed - 'errors': 2, # Error count - 'restarts': 1, # Restart count - 'uptime_seconds': 3600.5, # Current uptime - 'reconnect_attempts': 0, # Current reconnect attempts - 'last_message_time': '2023-...', # ISO timestamp - 'connection_uptime': '2023-...', # Connection start time - 'last_error': 'Connection failed', # Last error message - 'last_restart_time': '2023-...' 
# Last restart time - } -} -``` - -#### Health Status - -The `get_health_status()` method provides detailed health information: - -```python -{ - 'is_healthy': True, # Overall health status - 'issues': [], # List of current issues - 'status': 'running', # Current collector status - 'last_heartbeat': '2023-...', # Last heartbeat timestamp - 'last_data_received': '2023-...', # Last data timestamp - 'should_be_running': True, # Expected state - 'is_running': True # Actual running state -} -``` - -### CollectorManager - -Manages multiple data collectors with coordinated lifecycle and health monitoring. - -#### Constructor - -```python -def __init__(self, - manager_name: str = "collector_manager", - global_health_check_interval: float = 60.0, - restart_delay: float = 5.0, - logger: Optional[logging.Logger] = None, - log_errors_only: bool = False) -``` - -**Parameters:** -- `manager_name`: Name for the manager (used in logging) -- `global_health_check_interval`: Seconds between global health checks -- `restart_delay`: Delay between restart attempts -- `logger`: Logger instance for conditional logging (default: None) -- `log_errors_only`: Only log error-level messages (default: False) - -#### Public Methods - -```python -# Collector management -def add_collector(self, collector: BaseDataCollector, config: Optional[CollectorConfig] = None) -> None -def remove_collector(self, collector_name: str) -> bool -def enable_collector(self, collector_name: str) -> bool -def disable_collector(self, collector_name: str) -> bool - -# Lifecycle management -async def start() -> bool -async def stop() -> None -async def restart_collector(self, collector_name: str) -> bool -async def restart_all_collectors(self) -> Dict[str, bool] - -# Status and monitoring -def get_status(self) -> Dict[str, Any] -def get_collector_status(self, collector_name: str) -> Optional[Dict[str, Any]] -def list_collectors(self) -> List[str] -def get_running_collectors(self) -> List[str] -def 
get_failed_collectors(self) -> List[str] -``` - -### CollectorConfig - -Configuration dataclass for collectors: - -```python -@dataclass -class CollectorConfig: - name: str # Unique collector name - exchange: str # Exchange name - symbols: List[str] # Trading symbols - data_types: List[str] # Data types to collect - auto_restart: bool = True # Enable auto-restart - health_check_interval: float = 30.0 # Health check interval - enabled: bool = True # Initially enabled -``` - -### Data Types - -#### DataType Enum - -```python -class DataType(Enum): - TICKER = "ticker" # Price and volume updates - TRADE = "trade" # Individual trade executions - ORDERBOOK = "orderbook" # Order book snapshots - CANDLE = "candle" # OHLCV candle data - BALANCE = "balance" # Account balance updates -``` - -#### MarketDataPoint - -Standardized market data structure: - -```python -@dataclass -class MarketDataPoint: - exchange: str # Exchange name - symbol: str # Trading symbol - timestamp: datetime # Data timestamp (UTC) - data_type: DataType # Type of data - data: Dict[str, Any] # Raw data payload -``` - -#### OHLCVData - -OHLCV (candlestick) data structure with validation: - -```python -@dataclass -class OHLCVData: - symbol: str # Trading symbol - timeframe: str # Timeframe (1m, 5m, 1h, etc.) - timestamp: datetime # Candle timestamp - open: Decimal # Opening price - high: Decimal # Highest price - low: Decimal # Lowest price - close: Decimal # Closing price - volume: Decimal # Trading volume - trades_count: Optional[int] = None # Number of trades -``` - -## Health Monitoring - -### Monitoring Levels - -The system provides multi-level health monitoring: - -1. **Individual Collector Health** - - Heartbeat monitoring (message loop activity) - - Data freshness (time since last data received) - - Connection state monitoring - - Error rate tracking - -2. 
**Manager-Level Health** - - Global health checks across all collectors - - Coordinated restart management - - System-wide performance metrics - - Resource utilization monitoring - -### Health Check Intervals - -- **Individual Collector**: Configurable per collector (default: 30s) -- **Global Manager**: Configurable for manager (default: 60s) -- **Heartbeat Updates**: Updated with each message loop iteration -- **Data Freshness**: Updated when data is received - -### Auto-Restart Triggers - -Collectors are automatically restarted when: - -1. **No Heartbeat**: Message loop becomes unresponsive -2. **Stale Data**: No data received within configured timeout -3. **Connection Failures**: WebSocket or API connection lost -4. **Error Status**: Collector enters ERROR or UNHEALTHY state -5. **Manual Trigger**: Explicit restart request - -### Failure Handling - -```python -# Configure failure handling with conditional logging -from utils.logger import get_logger - -logger = get_logger('my_collector', verbose=True) - -collector = MyCollector( - symbols=["BTC-USDT"], - auto_restart=True, # Enable auto-restart - health_check_interval=30.0, # Check every 30 seconds - logger=logger, # Enable logging - log_errors_only=False # Log all levels -) - -# The collector will automatically: -# 1. Detect failures within 30 seconds -# 2. Attempt reconnection with exponential backoff -# 3. Restart up to 5 times (configurable) -# 4. Log all recovery attempts (if logger provided) -# 5. 
Report status to manager -``` - -## Configuration - -### Environment Variables - -The system respects these environment variables: - -```bash -# Logging configuration (see logging.md for details) -VERBOSE_LOGGING=true # Enable console logging -LOG_TO_CONSOLE=true # Alternative verbose setting - -# Health monitoring -DEFAULT_HEALTH_CHECK_INTERVAL=30 # Default health check interval (seconds) -MAX_SILENCE_DURATION=300 # Max time without data (seconds) -MAX_RECONNECT_ATTEMPTS=5 # Maximum reconnection attempts -RECONNECT_DELAY=5 # Delay between reconnect attempts (seconds) -``` - -### Programmatic Configuration - -```python -from utils.logger import get_logger - -# Configure individual collector with conditional logging -logger = get_logger('custom_collector', verbose=True) - -collector = MyCollector( - exchange_name="custom_exchange", - symbols=["BTC-USDT", "ETH-USDT"], - data_types=[DataType.TICKER, DataType.TRADE], - auto_restart=True, - health_check_interval=15.0, # Check every 15 seconds - logger=logger, # Enable logging - log_errors_only=False # Log all message types -) - -# Configure manager with conditional logging -manager_logger = get_logger('production_manager', verbose=False) -manager = CollectorManager( - manager_name="production_manager", - global_health_check_interval=30.0, # Global checks every 30s - restart_delay=10.0, # 10s delay between restarts - logger=manager_logger, # Manager logging - log_errors_only=True # Only log errors for manager -) - -# Configure specific collector in manager -config = CollectorConfig( - name="primary_okx", - exchange="okx", - symbols=["BTC-USDT", "ETH-USDT", "SOL-USDT"], - data_types=["ticker", "trade", "orderbook"], - auto_restart=True, - health_check_interval=20.0, - enabled=True -) - -manager.add_collector(collector, config) -``` - -## Best Practices - -### 1. 
Collector Implementation with Conditional Logging - -```python -from utils.logger import get_logger -from data.base_collector import BaseDataCollector, DataType - -class ProductionCollector(BaseDataCollector): - def __init__(self, exchange_name: str, symbols: list, logger=None): - super().__init__( - exchange_name=exchange_name, - symbols=symbols, - data_types=[DataType.TICKER, DataType.TRADE], - auto_restart=True, # Always enable auto-restart - health_check_interval=30.0, # Reasonable interval - logger=logger, # Pass logger for conditional logging - log_errors_only=False # Log all levels - ) - - # Connection management - self.connection_pool = None - self.rate_limiter = RateLimiter(100, 60) # 100 requests per minute - - # Data validation - self.data_validator = DataValidator() - - # Performance monitoring - self.metrics = MetricsCollector() - - async def connect(self) -> bool: - """Implement robust connection logic.""" - try: - self._log_info("Establishing connection to exchange") - - # Use connection pooling for reliability - self.connection_pool = await create_connection_pool( - self.exchange_name, - max_connections=5, - retry_attempts=3 - ) - - # Test connection - await self.connection_pool.ping() - self._log_info("Connection established successfully") - return True - - except Exception as e: - self._log_error(f"Connection failed: {e}", exc_info=True) - return False - - async def _process_message(self, message) -> Optional[MarketDataPoint]: - """Implement thorough data processing.""" - try: - # Rate limiting - await self.rate_limiter.acquire() - - # Data validation - if not self.data_validator.validate(message): - self._log_warning(f"Invalid message format received") - return None - - # Metrics collection - self.metrics.increment('messages_processed') - - # Log detailed processing (only if not errors-only) - self._log_debug(f"Processing message for {message.get('symbol', 'unknown')}") - - # Create standardized data point - data_point = MarketDataPoint( - 
exchange=self.exchange_name, - symbol=message['symbol'], - timestamp=self._parse_timestamp(message['timestamp']), - data_type=DataType.TICKER, - data=self._normalize_data(message) - ) - - self._log_debug(f"Successfully processed data point for {data_point.symbol}") - return data_point - - except Exception as e: - self.metrics.increment('processing_errors') - self._log_error(f"Message processing failed: {e}", exc_info=True) - raise # Let health monitor handle it -``` - -### 2. Error Handling - -```python -# Implement proper error handling with conditional logging -class RobustCollector(BaseDataCollector): - async def _handle_messages(self) -> None: - """Handle messages with proper error management.""" - try: - # Check connection health - if not await self._check_connection_health(): - raise ConnectionError("Connection health check failed") - - # Receive message with timeout - message = await asyncio.wait_for( - self.websocket.receive(), - timeout=30.0 # 30 second timeout - ) - - # Process message - data_point = await self._process_message(message) - if data_point: - await self._notify_callbacks(data_point) - - except asyncio.TimeoutError: - # No data received - let health monitor handle - self._log_warning("Message receive timeout") - raise ConnectionError("Message receive timeout") - - except WebSocketError as e: - # WebSocket specific errors - self._log_error(f"WebSocket error: {e}") - raise ConnectionError(f"WebSocket failed: {e}") - - except ValidationError as e: - # Data validation errors - don't restart for these - self._log_warning(f"Data validation failed: {e}") - # Continue without raising - these are data issues, not connection issues - - except Exception as e: - # Unexpected errors - trigger restart - self._log_error(f"Unexpected error in message handling: {e}", exc_info=True) - raise -``` - -### 3. 
Manager Setup with Hierarchical Logging - -```python -from utils.logger import get_logger - -async def setup_production_system(): - """Setup production collector system with conditional logging.""" - - # Create manager with its own logger - manager_logger = get_logger('crypto_trading_system', verbose=True) - manager = CollectorManager( - manager_name="crypto_trading_system", - global_health_check_interval=60.0, # Check every minute - restart_delay=30.0, # 30s between restarts - logger=manager_logger, # Manager logging - log_errors_only=False # Log all levels for manager - ) - - # Add primary data sources with individual loggers - exchanges = ['okx', 'binance', 'coinbase'] - symbols = ['BTC-USDT', 'ETH-USDT', 'SOL-USDT', 'AVAX-USDT'] - - for exchange in exchanges: - # Create individual logger for each exchange - exchange_logger = get_logger(f'{exchange}_collector', verbose=True) - - collector = create_collector( - exchange, - symbols, - logger=exchange_logger # Individual collector logging - ) - - # Configure for production - config = CollectorConfig( - name=f"{exchange}_primary", - exchange=exchange, - symbols=symbols, - data_types=["ticker", "trade"], - auto_restart=True, - health_check_interval=30.0, - enabled=True - ) - - # Add callbacks for data processing - collector.add_data_callback(DataType.TICKER, process_ticker_data) - collector.add_data_callback(DataType.TRADE, process_trade_data) - - manager.add_collector(collector, config) - - # Start system - success = await manager.start() - if not success: - raise RuntimeError("Failed to start collector system") - - return manager - -# Usage -async def main(): - manager = await setup_production_system() - - # Monitor system health - while True: - status = manager.get_status() - - if status['statistics']['failed_collectors'] > 0: - # Alert on failures - await send_alert(f"Collectors failed: {manager.get_failed_collectors()}") - - # Log status every 5 minutes (if manager has logging enabled) - await asyncio.sleep(300) 
-``` - -### 4. Monitoring Integration - -```python -# Integrate with monitoring systems and conditional logging -import prometheus_client -from utils.logger import get_logger - -class MonitoredCollector(BaseDataCollector): - def __init__(self, *args, **kwargs): - # Extract logger before passing to parent - logger = kwargs.get('logger') - super().__init__(*args, **kwargs) - - # Prometheus metrics - self.messages_counter = prometheus_client.Counter( - 'collector_messages_total', - 'Total messages processed', - ['exchange', 'symbol', 'type'] - ) - - self.errors_counter = prometheus_client.Counter( - 'collector_errors_total', - 'Total errors', - ['exchange', 'error_type'] - ) - - self.uptime_gauge = prometheus_client.Gauge( - 'collector_uptime_seconds', - 'Collector uptime', - ['exchange'] - ) - - async def _notify_callbacks(self, data_point: MarketDataPoint): - """Override to add metrics.""" - # Update metrics - self.messages_counter.labels( - exchange=data_point.exchange, - symbol=data_point.symbol, - type=data_point.data_type.value - ).inc() - - # Update uptime - status = self.get_status() - if status['statistics']['uptime_seconds']: - self.uptime_gauge.labels( - exchange=self.exchange_name - ).set(status['statistics']['uptime_seconds']) - - # Log metrics update (only if debug logging enabled) - self._log_debug(f"Updated metrics for {data_point.symbol}") - - # Call parent - await super()._notify_callbacks(data_point) - - async def _handle_connection_error(self) -> bool: - """Override to add error metrics.""" - self.errors_counter.labels( - exchange=self.exchange_name, - error_type='connection' - ).inc() - - # Always log connection errors - self._log_error("Connection error occurred") - - return await super()._handle_connection_error() -``` - -## Troubleshooting - -### Common Issues - -#### 1. 
Collector Won't Start - -**Symptoms**: `start()` returns `False`, status shows `ERROR` - -**Solutions**: -```python -# Check connection details with debugging -from utils.logger import get_logger - -debug_logger = get_logger('debug_collector', verbose=True) -collector = MyCollector(symbols=["BTC-USDT"], logger=debug_logger) - -success = await collector.start() -if not success: - status = collector.get_status() - print(f"Error: {status['statistics']['last_error']}") - -# Common fixes: -# - Verify API credentials -# - Check network connectivity -# - Validate symbol names -# - Review exchange-specific requirements -``` - -#### 2. Frequent Restarts - -**Symptoms**: High restart count, intermittent data - -**Solutions**: -```python -# Adjust health check intervals and enable detailed logging -logger = get_logger('troubleshoot_collector', verbose=True) - -collector = MyCollector( - symbols=["BTC-USDT"], - health_check_interval=60.0, # Increase interval - auto_restart=True, - logger=logger, # Enable detailed logging - log_errors_only=False # Log all message types -) - -# Check for: -# - Network instability -# - Exchange rate limiting -# - Invalid message formats -# - Resource constraints -``` - -#### 3. No Data Received - -**Symptoms**: Collector running but no callbacks triggered - -**Solutions**: -```python -# Check data flow with debug logging -logger = get_logger('data_debug', verbose=True) -collector = MyCollector(symbols=["BTC-USDT"], logger=logger) - -def debug_callback(data_point): - print(f"Received: {data_point}") - -collector.add_data_callback(DataType.TICKER, debug_callback) - -# Verify: -# - Callback registration -# - Symbol subscription -# - Message parsing logic -# - Exchange data availability -``` - -#### 4. 
Memory Leaks - -**Symptoms**: Increasing memory usage over time - -**Solutions**: -```python -# Implement proper cleanup with logging -class CleanCollector(BaseDataCollector): - async def disconnect(self): - """Ensure proper cleanup.""" - self._log_info("Starting cleanup process") - - # Clear buffers - if hasattr(self, 'message_buffer'): - self.message_buffer.clear() - self._log_debug("Cleared message buffer") - - # Close connections - if self.websocket: - await self.websocket.close() - self.websocket = None - self._log_debug("Closed WebSocket connection") - - # Clear callbacks - for callback_list in self._data_callbacks.values(): - callback_list.clear() - self._log_debug("Cleared callbacks") - - await super().disconnect() - self._log_info("Cleanup completed") -``` - -## Exchange Factory System - -### Overview - -The Exchange Factory system provides a standardized way to create data collectors for different exchanges. It implements the factory pattern to abstract the creation logic and provides a consistent interface across all exchanges. 
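
The registry-plus-factory shape can be sketched in a few lines. This is an illustrative, self-contained version only — `BaseCollector`, `OKXCollector`, and `create_collector` below are placeholders, not the project's actual `data.exchanges` API:

```python
# Minimal sketch of the registry + factory idea (placeholder classes,
# not the project's real API).
from typing import Dict, List, Type


class BaseCollector:
    """Stand-in for the project's BaseDataCollector."""

    def __init__(self, symbol: str, data_types: List[str]) -> None:
        self.symbol = symbol
        self.data_types = data_types


class OKXCollector(BaseCollector):
    exchange = "okx"


# The registry maps an exchange name to its collector class.
_REGISTRY: Dict[str, Type[BaseCollector]] = {"okx": OKXCollector}


def create_collector(exchange: str, symbol: str, data_types: List[str]) -> BaseCollector:
    """Look up the collector class and instantiate it with a uniform signature."""
    try:
        cls = _REGISTRY[exchange.lower()]
    except KeyError:
        raise ValueError(f"Unsupported exchange: {exchange!r}")
    return cls(symbol, data_types)


collector = create_collector("okx", "BTC-USDT", ["trade"])
print(type(collector).__name__)  # OKXCollector
```

Adding a new exchange then amounts to registering its collector class; callers never depend on exchange-specific constructors.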
- -### Exchange Registry - -The system maintains a registry of supported exchanges and their capabilities: - -```python -from data.exchanges import get_supported_exchanges, get_exchange_info - -# Get all supported exchanges -exchanges = get_supported_exchanges() -print(f"Supported exchanges: {exchanges}") # ['okx'] - -# Get exchange information -okx_info = get_exchange_info('okx') -print(f"OKX pairs: {okx_info['supported_pairs']}") -print(f"OKX data types: {okx_info['supported_data_types']}") -``` - -### Factory Configuration - -```python -from data.exchanges import ExchangeCollectorConfig, ExchangeFactory -from data.base_collector import DataType -from utils.logger import get_logger - -# Create configuration with conditional logging -logger = get_logger('factory_collector', verbose=True) - -config = ExchangeCollectorConfig( - exchange='okx', # Exchange name - symbol='BTC-USDT', # Trading pair - data_types=[DataType.TRADE, DataType.ORDERBOOK], # Data types - auto_restart=True, # Auto-restart on failures - health_check_interval=30.0, # Health check interval - store_raw_data=True, # Store raw data for debugging - custom_params={ # Exchange-specific parameters - 'ping_interval': 25.0, - 'max_reconnect_attempts': 5 - } -) - -# Validate configuration -is_valid = ExchangeFactory.validate_config(config) -if is_valid: - collector = ExchangeFactory.create_collector(config, logger=logger) -``` - -### Exchange Capabilities - -Query what each exchange supports: - -```python -from data.exchanges import ExchangeFactory - -# Get supported trading pairs -okx_pairs = ExchangeFactory.get_supported_pairs('okx') -print(f"OKX supports: {okx_pairs}") - -# Get supported data types -okx_data_types = ExchangeFactory.get_supported_data_types('okx') -print(f"OKX data types: {okx_data_types}") -``` - -### Convenience Functions - -Each exchange provides convenience functions for easy collector creation: - -```python -from data.exchanges import create_okx_collector -from utils.logger import 
get_logger - -# Quick OKX collector creation with logging -logger = get_logger('okx_btc_usdt', verbose=True) - -collector = create_okx_collector( - symbol='BTC-USDT', - data_types=[DataType.TRADE, DataType.ORDERBOOK], - auto_restart=True, - logger=logger -) -``` - -## OKX Implementation - -### OKX Collector Features - -The OKX collector provides: - -- **Real-time Data**: Live trades, orderbook, and ticker data -- **Single Pair Focus**: Each collector handles one trading pair for better isolation -- **Ping/Pong Management**: OKX-specific keepalive mechanism with proper format -- **Raw Data Storage**: Optional storage of raw OKX messages for debugging -- **Connection Resilience**: Robust reconnection logic for OKX WebSocket -- **Conditional Logging**: Full integration with the logging system - -### OKX Usage Examples - -```python -from utils.logger import get_logger - -# Direct OKX collector usage with conditional logging -logger = get_logger('okx_collector', verbose=True) - -collector = OKXCollector( - symbol='BTC-USDT', - data_types=[DataType.TRADE, DataType.ORDERBOOK], - auto_restart=True, - health_check_interval=30.0, - store_raw_data=True, - logger=logger, # Enable logging - log_errors_only=False # Log all levels -) - -# Factory pattern usage with error-only logging -error_logger = get_logger('okx_critical', verbose=False) - -collector = create_okx_collector( - symbol='BTC-USDT', - data_types=[DataType.TRADE, DataType.ORDERBOOK], - logger=error_logger, - log_errors_only=True # Only log errors -) - -# Multiple collectors with different logging strategies -configs = [ - ExchangeCollectorConfig('okx', 'BTC-USDT', [DataType.TRADE]), - ExchangeCollectorConfig('okx', 'ETH-USDT', [DataType.ORDERBOOK]) -] - -collectors = [] -for config in configs: - # Different logging for each collector - if config.symbol == 'BTC-USDT': - logger = get_logger('okx_btc', verbose=True) # Full logging - else: - logger = get_logger('okx_eth', verbose=False, log_errors_only=True) # Errors 
only - - collector = ExchangeFactory.create_collector(config, logger=logger) - collectors.append(collector) -``` - -### OKX Data Processing - -The OKX collector processes three main data types: - -#### Trade Data -```python -# OKX trade message format -{ - "arg": {"channel": "trades", "instId": "BTC-USDT"}, - "data": [{ - "tradeId": "12345678", - "px": "50000.5", # Price - "sz": "0.001", # Size - "side": "buy", # Side (buy/sell) - "ts": "1697123456789" # Timestamp (ms) - }] -} -``` - -#### Orderbook Data -```python -# OKX orderbook message format (books5) -{ - "arg": {"channel": "books5", "instId": "BTC-USDT"}, - "data": [{ - "asks": [["50001.0", "0.5", "0", "3"]], # [price, size, liquidated, orders] - "bids": [["50000.0", "0.8", "0", "2"]], - "ts": "1697123456789" - }] -} -``` - -#### Ticker Data -```python -# OKX ticker message format -{ - "arg": {"channel": "tickers", "instId": "BTC-USDT"}, - "data": [{ - "last": "50000.5", # Last price - "askPx": "50001.0", # Best ask price - "bidPx": "50000.0", # Best bid price - "open24h": "49500.0", # 24h open - "high24h": "50500.0", # 24h high - "low24h": "49000.0", # 24h low - "vol24h": "1234.567", # 24h volume - "ts": "1697123456789" - }] -} -``` - -For comprehensive OKX documentation, see [OKX Collector Documentation](okx_collector.md). 
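
As an illustration of how a raw message like the trade payload above might be flattened for downstream use, here is a self-contained sketch — the `normalize_trade` helper is hypothetical, not part of the OKX collector itself:

```python
# Hedged sketch: flatten the OKX "trades" message format shown above into
# one record per trade. Field names follow that format; the helper itself
# is illustrative only.
from datetime import datetime, timezone
from decimal import Decimal


def normalize_trade(message: dict) -> list:
    """Flatten an OKX trades message into one record per trade."""
    symbol = message["arg"]["instId"]
    records = []
    for t in message["data"]:
        records.append({
            "symbol": symbol,
            "trade_id": t["tradeId"],
            "price": Decimal(t["px"]),      # strings -> Decimal, no float rounding
            "size": Decimal(t["sz"]),
            "side": t["side"],
            # OKX timestamps are milliseconds since the epoch, UTC
            "timestamp": datetime.fromtimestamp(int(t["ts"]) / 1000, tz=timezone.utc),
        })
    return records


msg = {
    "arg": {"channel": "trades", "instId": "BTC-USDT"},
    "data": [{"tradeId": "12345678", "px": "50000.5", "sz": "0.001",
              "side": "buy", "ts": "1697123456789"}],
}
print(normalize_trade(msg)[0]["price"])  # 50000.5
```

Note the use of `Decimal` for prices and sizes, matching the `OHLCVData` fields defined earlier.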
- -## Integration Examples - -### Django Integration - -```python -# Django management command with conditional logging -from django.core.management.base import BaseCommand -from data import CollectorManager -from utils.logger import get_logger -import asyncio - -class Command(BaseCommand): - help = 'Start crypto data collectors' - - def handle(self, *args, **options): - async def run_collectors(): - # Create manager with logging - manager_logger = get_logger('django_collectors', verbose=True) - manager = CollectorManager("django_collectors", logger=manager_logger) - - # Add collectors with individual loggers - from myapp.collectors import OKXCollector, BinanceCollector - - okx_logger = get_logger('django_okx', verbose=True) - binance_logger = get_logger('django_binance', verbose=True, log_errors_only=True) - - manager.add_collector(OKXCollector(['BTC-USDT'], logger=okx_logger)) - manager.add_collector(BinanceCollector(['ETH-USDT'], logger=binance_logger)) - - # Start system - await manager.start() - - # Keep running - try: - while True: - await asyncio.sleep(60) - status = manager.get_status() - self.stdout.write(f"Status: {status['statistics']}") - except KeyboardInterrupt: - await manager.stop() - - asyncio.run(run_collectors()) -``` - -### FastAPI Integration - -```python -# FastAPI application with conditional logging -from fastapi import FastAPI -from data import CollectorManager -from utils.logger import get_logger -import asyncio - -app = FastAPI() -manager = None - -@app.on_event("startup") -async def startup_event(): - global manager - - # Create manager with logging - manager_logger = get_logger('fastapi_collectors', verbose=True) - manager = CollectorManager("fastapi_collectors", logger=manager_logger) - - # Add collectors with error-only logging for production - from collectors import OKXCollector - - collector_logger = get_logger('fastapi_okx', verbose=False, log_errors_only=True) - collector = OKXCollector(['BTC-USDT', 'ETH-USDT'], 
logger=collector_logger) - manager.add_collector(collector) - - # Start in background - await manager.start() - -@app.on_event("shutdown") -async def shutdown_event(): - global manager - if manager: - await manager.stop() - -@app.get("/collector/status") -async def get_collector_status(): - return manager.get_status() - -@app.post("/collector/{name}/restart") -async def restart_collector(name: str): - success = await manager.restart_collector(name) - return {"success": success} -``` - -## Migration Guide - -### From Manual Connection Management - -**Before** (manual management): -```python -class OldCollector: - def __init__(self): - self.websocket = None - self.running = False - - async def start(self): - while self.running: - try: - self.websocket = await connect() - await self.listen() - except Exception as e: - print(f"Error: {e}") - await asyncio.sleep(5) # Manual retry -``` - -**After** (with BaseDataCollector and conditional logging): -```python -from utils.logger import get_logger - -class NewCollector(BaseDataCollector): - def __init__(self): - logger = get_logger('new_collector', verbose=True) - super().__init__( - "exchange", - ["BTC-USDT"], - logger=logger, - log_errors_only=False - ) - # Auto-restart and health monitoring included - - async def connect(self) -> bool: - self._log_info("Connecting to exchange") - self.websocket = await connect() - self._log_info("Connection established") - return True - - async def _handle_messages(self): - message = await self.websocket.receive() - self._log_debug(f"Received message: {message}") - # Error handling and restart logic automatic -``` - -### From Basic Monitoring - -**Before** (basic monitoring): -```python -# Manual status tracking -status = { - 'connected': False, - 'last_message': None, - 'error_count': 0 -} - -# Manual health checks -async def health_check(): - if time.time() - status['last_message'] > 300: - print("No data for 5 minutes!") -``` - -**After** (comprehensive monitoring with conditional 
logging): -```python -# Automatic health monitoring with logging -logger = get_logger('monitored_collector', verbose=True) -collector = MyCollector(["BTC-USDT"], logger=logger) - -# Rich status information -status = collector.get_status() -health = collector.get_health_status() - -# Automatic alerts and recovery with logging -if not health['is_healthy']: - print(f"Issues: {health['issues']}") - # Auto-restart already triggered and logged -``` - -## Related Documentation - -- [Data Collection Service](../services/data_collection_service.md) - High-level service orchestration -- [Logging System](logging.md) - Conditional logging implementation -- [Database Operations](../database/operations.md) - Database integration patterns -- [Monitoring Guide](../monitoring/README.md) - System monitoring and alerting - ---- - -## Support and Contributing - -### Getting Help - -1. **Check Logs**: Review logs in `./logs/` directory (see [Logging System](logging.md)) -2. **Status Information**: Use `get_status()` and `get_health_status()` methods -3. **Debug Mode**: Enable debug logging with conditional logging system -4. **Test with Demo**: Run `examples/collector_demo.py` to verify setup - -### Contributing - -The data collector system is designed to be extensible. Contributions are welcome for: - -- New exchange implementations -- Enhanced monitoring features -- Performance optimizations -- Additional data types -- Integration examples -- Logging system improvements - -### License - -This documentation and the associated code are part of the Crypto Trading Bot Platform project. 
- ---- - -*For more information, see the main project documentation in `/docs/`.* \ No newline at end of file diff --git a/docs/architecture/crypto-bot-prd.md b/docs/crypto-bot-prd.md similarity index 98% rename from docs/architecture/crypto-bot-prd.md rename to docs/crypto-bot-prd.md index 36a4993..071d7e1 100644 --- a/docs/architecture/crypto-bot-prd.md +++ b/docs/crypto-bot-prd.md @@ -5,10 +5,15 @@ **Author:** Vasily **Status:** Draft +> **Note on Implementation Status:** This document describes the complete vision for the platform. As of the current development phase, many components like the **Strategy Engine**, **Bot Manager**, and **Backtesting Engine** are planned but not yet implemented. For a detailed view of the current status, please refer to the main `CONTEXT.md` file. + ## Executive Summary This PRD outlines the development of a simplified crypto trading bot platform that enables strategy testing, development, and execution without the complexity of microservices and advanced monitoring. The goal is to create a functional system within 1-2 weeks that allows for strategy testing while establishing a foundation that can scale in the future. The platform addresses key requirements including data collection, strategy execution, visualization, and backtesting capabilities in a monolithic architecture optimized for internal use. 
+--- +*Back to [Main Documentation (`../README.md`)]* + ## Current Requirements & Constraints - **Speed to Deployment**: System must be functional within 1-2 weeks diff --git a/docs/architecture/data-processing-refactor.md b/docs/decisions/ADR-001-data-processing-refactor.md similarity index 81% rename from docs/architecture/data-processing-refactor.md rename to docs/decisions/ADR-001-data-processing-refactor.md index abdb1ae..cc435a8 100644 --- a/docs/architecture/data-processing-refactor.md +++ b/docs/decisions/ADR-001-data-processing-refactor.md @@ -1,3 +1,53 @@ +# ADR-001: Data Processing and Aggregation Refactor + +## Status +**Accepted** + +## Context +The initial data collection and processing system was tightly coupled with the OKX exchange implementation. This made it difficult to add new exchanges, maintain the code, and ensure consistent data aggregation across different sources. Key issues included: +- Business logic mixed with data fetching. +- Inconsistent timestamp handling. +- No clear strategy for handling sparse data, leading to potential future data leakage. + +A refactor was necessary to create a modular, extensible, and robust data processing pipeline that aligns with industry standards. + +## Decision +We will refactor the data processing system to adhere to the following principles: + +1. **Modular & Extensible Design**: Separate exchange-specific logic from the core aggregation and storage logic using a factory pattern and base classes. +2. **Right-Aligned Timestamps**: Adopt the industry standard for OHLCV candles where the timestamp represents the closing time of the interval. This ensures compatibility with major exchanges and historical data providers. +3. **Sparse Candle Aggregation**: Emit candles only when trading activity occurs within a time bucket. This accurately reflects market activity and reduces storage. +4. 
**No Future Leakage**: Implement a robust aggregation mechanism that only finalizes candles when their time period has definitively passed, preventing lookahead bias. +5. **Centralized Repository for Database Operations**: Abstract all database interactions into a `Repository` pattern to decouple business logic from data persistence. + +## Consequences + +### Positive +- **Improved Maintainability**: Code is cleaner, more organized, and easier to understand. +- **Enhanced Extensibility**: Adding new exchanges is significantly easier. +- **Data Integrity**: Standardized timestamping and aggregation prevent data inconsistencies and lookahead bias. +- **Efficiency**: The sparse candle approach reduces storage and processing overhead. +- **Testability**: Decoupled components are easier to unit test. + +### Negative +- **Initial Development Overhead**: The refactor required an initial time investment to design and implement the new architecture. +- **Increased Complexity**: The new system has more moving parts (factories, repositories), which may have a slightly steeper learning curve for new developers. + +## Alternatives Considered + +1. **Keep the Monolithic Design**: Continue with the tightly coupled approach. + - **Reason for Rejection**: This was not scalable and would have led to significant technical debt as new exchanges were added. +2. **Use a Third-Party Data Library**: Integrate a library like `ccxt` for data collection. + - **Reason for Rejection**: While powerful, these libraries did not offer the fine-grained control over the real-time aggregation and WebSocket handling that was required. Building a custom solution provides more flexibility. 
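
The three aggregation rules adopted above (right-aligned timestamps, sparse buckets, no future leakage) can be sketched as follows. The names are hypothetical and the real implementation differs; the bucketing arithmetic is the point:

```python
# Illustrative sketch of the aggregation rules: a candle is labelled by its
# close time, exists only if a trade fell in its bucket, and is emitted only
# once its period has definitively passed. Names are hypothetical.
from decimal import Decimal


def bucket_close(ts_ms: int, interval_ms: int) -> int:
    """Right-aligned label: the close time of the bucket containing ts_ms."""
    return (ts_ms // interval_ms + 1) * interval_ms


class SparseAggregator:
    def __init__(self, interval_ms: int = 60_000):
        self.interval_ms = interval_ms
        self.open_candles = {}  # close_ts -> working candle

    def on_trade(self, ts_ms: int, price: Decimal, size: Decimal):
        """Update the working candle; return candles whose period has passed."""
        close_ts = bucket_close(ts_ms, self.interval_ms)
        c = self.open_candles.get(close_ts)
        if c is None:
            # Sparse: a bucket exists only because a trade occurred in it
            c = {"close_ts": close_ts, "open": price, "high": price,
                 "low": price, "close": price, "volume": Decimal(0)}
            self.open_candles[close_ts] = c
        c["high"] = max(c["high"], price)
        c["low"] = min(c["low"], price)
        c["close"] = price
        c["volume"] += size
        # No future leakage: emit only buckets that closed at or before ts_ms
        done = [k for k in self.open_candles if k <= ts_ms]
        return [self.open_candles.pop(k) for k in sorted(done)]
```

With a one-minute interval, a trade at `t = 61s` finalizes the candle labelled `t = 60s`; nothing is ever emitted before its interval has elapsed.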
+
+## Related Documentation
+- **Aggregation Strategy**: [docs/reference/aggregation-strategy.md](../reference/aggregation-strategy.md)
+- **Data Collectors**: [docs/modules/data_collectors.md](../modules/data_collectors.md)
+- **Database Operations**: [docs/modules/database_operations.md](../modules/database_operations.md)
+
+---
+*Back to [All Decisions (`./`)]*
+
 # Refactored Data Processing Architecture
 
 ## Overview
diff --git a/docs/exchanges/README.md b/docs/exchanges/README.md
deleted file mode 100644
index c95aada..0000000
--- a/docs/exchanges/README.md
+++ /dev/null
@@ -1,297 +0,0 @@
-# Exchange Documentation
-
-This section contains detailed documentation for all cryptocurrency exchange integrations in the TCP Dashboard platform.
-
-## 📋 Contents
-
-### Supported Exchanges
-
-#### Production Ready
-
-- **[OKX Collector](okx_collector.md)** - *Complete guide to OKX exchange integration*
-  - Real-time trades, orderbook, and ticker data collection
-  - WebSocket connection management with OKX-specific ping/pong
-  - Factory pattern usage and configuration
-  - Data processing and validation
-  - Monitoring and troubleshooting
-  - Production deployment guide
-
-#### Planned Integrations
-
-- **Binance** - Major global exchange (development planned)
-- **Coinbase Pro** - US-regulated exchange (development planned)
-- **Kraken** - European exchange (development planned)
-- **Bybit** - Derivatives exchange (development planned)
-
-## 🏗️ Exchange Architecture
-
-### Modular Design
-
-Each exchange implementation follows a standardized structure:
-
-```
-data/exchanges/
-├── __init__.py              # Main exports and factory
-├── registry.py              # Exchange registry and capabilities
-├── factory.py               # Factory pattern for collectors
-└── {exchange}/              # Exchange-specific implementation
-    ├── __init__.py          # Exchange exports
-    ├── collector.py         # {Exchange}Collector class
-    └── websocket.py         # {Exchange}WebSocketClient class
-```
-
-### Standardized Interface
-
-All exchange collectors implement the same interface:
-
-```python
-from data.exchanges import ExchangeFactory, ExchangeCollectorConfig
-from data.base_collector import DataType
-
-# Unified configuration across all exchanges
-config = ExchangeCollectorConfig(
-    exchange='okx',          # or 'binance', 'coinbase', etc.
-    symbol='BTC-USDT',
-    data_types=[DataType.TRADE, DataType.ORDERBOOK],
-    auto_restart=True
-)
-
-collector = ExchangeFactory.create_collector(config)
-```
-
-## 🚀 Quick Start
-
-### Using the Factory Pattern
-
-```python
-import asyncio
-from data.exchanges import get_supported_exchanges, create_okx_collector
-from data.base_collector import DataType
-
-async def main():
-    # Check supported exchanges
-    exchanges = get_supported_exchanges()
-    print(f"Supported: {exchanges}")  # ['okx']
-
-    # Create OKX collector
-    collector = create_okx_collector(
-        symbol='BTC-USDT',
-        data_types=[DataType.TRADE, DataType.ORDERBOOK]
-    )
-
-    # Add data callback
-    def on_trade(data_point):
-        print(f"Trade: {data_point.data}")
-
-    collector.add_data_callback(DataType.TRADE, on_trade)
-
-    # Start collection
-    await collector.start()
-    await asyncio.sleep(60)
-    await collector.stop()
-
-asyncio.run(main())
-```
-
-### Multi-Exchange Setup
-
-```python
-from data.exchanges import ExchangeFactory, ExchangeCollectorConfig
-from data.collector_manager import CollectorManager
-from data.base_collector import DataType
-
-async def setup_multi_exchange():
-    manager = CollectorManager("multi_exchange_system")
-
-    # Future: multiple exchanges
-    configs = [
-        ExchangeCollectorConfig('okx', 'BTC-USDT', [DataType.TRADE]),
-        # ExchangeCollectorConfig('binance', 'BTC-USDT', [DataType.TRADE]),
-        # ExchangeCollectorConfig('coinbase', 'BTC-USD', [DataType.TRADE])
-    ]
-
-    for config in configs:
-        collector = ExchangeFactory.create_collector(config)
-        manager.add_collector(collector)
-
-    await manager.start()
-    return manager
-```
-
-## 📊 Exchange Capabilities
-
-### Data Types
-
-Different exchanges support different data types:
-
-| Exchange | Trades | Orderbook | Ticker | Candles | Balance |
-|----------|--------|-----------|--------|---------|---------|
-| OKX      | ✅     | ✅        | ✅     | 🔄      | 🔄      |
-| Binance  | 🔄     | 🔄        | 🔄     | 🔄      | 🔄      |
-| Coinbase | 🔄     | 🔄        | 🔄     | 🔄      | 🔄      |
-
-Legend: ✅ Implemented, 🔄 Planned, ❌ Not supported
-
-### Trading Pairs
-
-Query supported trading pairs for each exchange:
-
-```python
-from data.exchanges import ExchangeFactory
-
-# Get supported pairs
-okx_pairs = ExchangeFactory.get_supported_pairs('okx')
-print(f"OKX pairs: {okx_pairs}")
-
-# Get exchange information
-okx_info = ExchangeFactory.get_exchange_info('okx')
-print(f"OKX capabilities: {okx_info}")
-```
-
-## 🔧 Exchange Configuration
-
-### Common Configuration
-
-All exchanges share common configuration options:
-
-```python
-from data.exchanges import ExchangeCollectorConfig
-from data.base_collector import DataType
-
-config = ExchangeCollectorConfig(
-    exchange='okx',                  # Exchange name
-    symbol='BTC-USDT',               # Trading pair
-    data_types=[DataType.TRADE],     # Data types to collect
-    auto_restart=True,               # Auto-restart on failures
-    health_check_interval=30.0,      # Health check interval
-    store_raw_data=True,             # Store raw exchange data
-    custom_params={                  # Exchange-specific parameters
-        'ping_interval': 25.0,
-        'max_reconnect_attempts': 5
-    }
-)
-```
-
-### Exchange-Specific Configuration
-
-Each exchange has specific configuration files:
-
-- **OKX**: `config/okx_config.json`
-- **Binance**: `config/binance_config.json` (planned)
-- **Coinbase**: `config/coinbase_config.json` (planned)
-
-## 📈 Performance Comparison
-
-### Real-time Data Rates
-
-Approximate message rates for different exchanges:
-
-| Exchange | Trades/sec | Orderbook Updates/sec | Latency |
-|----------|------------|-----------------------|---------|
-| OKX      | 5-50       | 10-100                | ~50ms   |
-| Binance  | TBD        | TBD                   | TBD     |
-| Coinbase | TBD        | TBD                   | TBD     |
-
-*Note: Rates vary by trading pair activity*
-
-### Resource Usage
-
-Memory and CPU usage per collector:
- -| Exchange | Memory (MB) | CPU (%) | Network (KB/s) | -|----------|-------------|---------|----------------| -| OKX | 15-25 | 1-3 | 5-20 | -| Binance | TBD | TBD | TBD | -| Coinbase | TBD | TBD | TBD | - -## πŸ” Monitoring & Debugging - -### Exchange Status - -Monitor exchange-specific metrics: - -```python -# Get exchange status -status = collector.get_status() -print(f"Exchange: {status['exchange']}") -print(f"WebSocket State: {status['websocket_state']}") -print(f"Messages Processed: {status['messages_processed']}") - -# Exchange-specific metrics -if 'websocket_stats' in status: - ws_stats = status['websocket_stats'] - print(f"Reconnections: {ws_stats['reconnections']}") - print(f"Ping/Pong: {ws_stats['pings_sent']}/{ws_stats['pongs_received']}") -``` - -### Debug Mode - -Enable exchange-specific debugging: - -```python -import os -os.environ['LOG_LEVEL'] = 'DEBUG' - -# Detailed exchange logging -collector = create_okx_collector('BTC-USDT', [DataType.TRADE]) -# Check logs: ./logs/okx_collector_btc_usdt_debug.log -``` - -## πŸ› οΈ Adding New Exchanges - -### Implementation Checklist - -To add a new exchange: - -1. **Create Exchange Folder**: `data/exchanges/{exchange}/` -2. **Implement WebSocket Client**: `{exchange}/websocket.py` -3. **Implement Collector**: `{exchange}/collector.py` -4. **Add to Registry**: Update `registry.py` -5. **Create Configuration**: `config/{exchange}_config.json` -6. **Add Documentation**: `docs/exchanges/{exchange}_collector.md` -7. 
**Add Tests**: `tests/test_{exchange}_collector.py` - -### Implementation Template - -```python -# data/exchanges/newexchange/collector.py -from data.base_collector import BaseDataCollector, DataType -from .websocket import NewExchangeWebSocketClient - -class NewExchangeCollector(BaseDataCollector): - def __init__(self, symbol: str, **kwargs): - super().__init__("newexchange", [symbol], **kwargs) - self.ws_client = NewExchangeWebSocketClient() - - async def connect(self) -> bool: - return await self.ws_client.connect() - - # Implement other required methods... -``` - -## πŸ”— Related Documentation - -- **[Components Documentation](../components/)** - Core system components -- **[Architecture Overview](../architecture/)** - System design -- **[Setup Guide](../guides/setup.md)** - Configuration and deployment -- **[API Reference](../reference/)** - Technical specifications - -## πŸ“ž Support - -### Exchange-Specific Issues - -For exchange-specific problems: - -1. **Check Status**: Use `get_status()` and `get_health_status()` -2. **Review Logs**: Check exchange-specific log files -3. **Verify Configuration**: Confirm exchange configuration files -4. 
**Test Connection**: Run exchange-specific test scripts - -### Common Issues - -- **Rate Limiting**: Each exchange has different rate limits -- **Symbol Formats**: Trading pair naming conventions vary -- **WebSocket Protocols**: Each exchange has unique WebSocket requirements -- **Data Formats**: Message structures differ between exchanges - ---- - -*For the complete documentation index, see the [main documentation README](../README.md).* \ No newline at end of file diff --git a/docs/guides/README.md b/docs/guides/README.md index 3171e0f..eaf2391 100644 --- a/docs/guides/README.md +++ b/docs/guides/README.md @@ -261,10 +261,10 @@ CMD ["python", "-m", "scripts.production_start"] ## 🔗 Related Documentation -- **[Components Documentation](../components/)** - Technical component details -- **[Architecture Overview](../architecture/)** - System design -- **[Exchange Documentation](../exchanges/)** - Exchange integrations -- **[API Reference](../reference/)** - Technical specifications +- **[Modules Documentation (`../modules/`)](../modules/)** - Technical component details +- **[Architecture Overview (`../architecture.md`)](../architecture.md)** - System design +- **[Exchange Documentation (`../modules/exchanges/`)](../modules/exchanges/)** - Exchange integrations +- **[Reference (`../reference/`)](../reference/)** - Technical specifications ## 📞 Support & Troubleshooting @@ -306,4 +306,4 @@ tail -f logs/*_debug.log --- -*For the complete documentation index, see the [main documentation README](../README.md).* \ No newline at end of file +*For the complete documentation index, see the [main documentation README (`../README.md`)](../README.md).* \ No newline at end of file diff --git a/docs/components/README.md b/docs/modules/README.md similarity index 89% rename from docs/components/README.md rename to docs/modules/README.md index dea69ef..9e685ad 100644 --- a/docs/components/README.md +++ b/docs/modules/README.md @@ -1,12 +1,12 @@ -# Components Documentation +# Modules Documentation -This section 
contains detailed technical documentation for all system components in the TCP Dashboard platform. +This section contains detailed technical documentation for all system modules in the TCP Dashboard platform. ## 📋 Contents ### User Interface & Visualization -- **[Chart Layers System](charts/)** - *Comprehensive modular chart system* +- **[Chart System (`charts/`)](./charts/)** - *Comprehensive modular chart system* - **Strategy-driven Configuration**: 5 professional trading strategies with JSON persistence - **26+ Indicator Presets**: SMA, EMA, RSI, MACD, Bollinger Bands with customization - **User Indicator Management**: Interactive CRUD system with real-time updates @@ -18,7 +18,7 @@ This section contains detailed technical documentation for all system components ### Data Collection System -- **[Data Collectors](data_collectors.md)** - *Comprehensive guide to the enhanced data collector system* +- **[Data Collectors (`data_collectors.md`)](./data_collectors.md)** - *Comprehensive guide to the enhanced data collector system* - **BaseDataCollector** abstract class with health monitoring - **CollectorManager** for centralized management - **Exchange Factory Pattern** for standardized collector creation @@ -31,7 +31,7 @@ This section contains detailed technical documentation for all system components ### Database Operations -- **[Database Operations](database_operations.md)** - *Repository pattern for clean database interactions* +- **[Database Operations (`database_operations.md`)](./database_operations.md)** - *Repository pattern for clean database interactions* - **Repository Pattern** implementation for data access abstraction - **MarketDataRepository** for candle/OHLCV operations - **RawTradeRepository** for WebSocket data storage @@ -43,7 +43,7 @@ This section contains detailed technical documentation for all system components ### Technical Analysis -- **[Technical Indicators](technical-indicators.md)** - *Comprehensive technical analysis module* +- **[Technical Indicators (`technical-indicators.md`)](./technical-indicators.md)** - 
*Comprehensive technical analysis module* - **Five Core Indicators**: SMA, EMA, RSI, MACD, and Bollinger Bands - **Sparse Data Handling**: Optimized for the platform's aggregation strategy - **Vectorized Calculations**: High-performance pandas and numpy implementation @@ -55,7 +55,7 @@ This section contains detailed technical documentation for all system components ### Logging & Monitoring -- **[Enhanced Logging System](logging.md)** - *Unified logging framework* +- **[Enhanced Logging System (`logging.md`)](./logging.md)** - *Unified logging framework* - Multi-level logging with automatic cleanup - Console and file output with formatting - Performance monitoring integration @@ -189,11 +189,11 @@ Unified logging across all components: ## 🔗 Related Documentation -- **[Dashboard Modular Structure](../dashboard-modular-structure.md)** - Complete dashboard architecture -- **[Exchange Documentation](../exchanges/)** - Exchange-specific implementations -- **[Architecture Overview](../architecture/)** - System design and patterns -- **[Setup Guide](../guides/setup.md)** - Component configuration and deployment -- **[API Reference](../reference/)** - Technical specifications +- **[Dashboard Modular Structure (dashboard-modular-structure.md)](./dashboard-modular-structure.md)** - Complete dashboard architecture +- **[Exchange Documentation (exchanges/)](./exchanges/)** - Exchange-specific implementations +- **[Architecture Overview (`../architecture.md`)](../architecture.md)** - System design and patterns +- **[Setup Guide (`../guides/setup.md`)](../guides/setup.md)** - Component configuration and deployment +- **[API Reference (`../reference/`)](../reference/)** - Technical specifications ## 📈 Future Components @@ -207,4 +207,4 @@ Planned component additions: --- -*For the complete documentation index, see the [main documentation README](../README.md).* \ No newline at end of file +*For the complete documentation index, see the [main documentation README (`../README.md`)](../README.md).* \ No newline at end of file diff --git 
a/docs/components/charts/README.md b/docs/modules/charts/README.md similarity index 96% rename from docs/components/charts/README.md rename to docs/modules/charts/README.md index 327a7e2..423d550 100644 --- a/docs/components/charts/README.md +++ b/docs/modules/charts/README.md @@ -67,7 +67,7 @@ dashboard/ # Modular dashboard integration β”œβ”€β”€ layouts/market_data.py # Chart layout with controls β”œβ”€β”€ callbacks/charts.py # Chart update callbacks β”œβ”€β”€ components/ -β”‚ β”œβ”€β”€ chart_controls.py # Reusable chart controls +β”‚ β”œβ”€β”€ chart_controls.py # Reusable chart configuration panel β”‚ └── indicator_modal.py # Indicator management UI config/indicators/ @@ -113,7 +113,7 @@ overlay_indicators = indicator_manager.get_indicators_by_type('overlay') subplot_indicators = indicator_manager.get_indicators_by_type('subplot') ``` -For complete dashboard documentation, see [Dashboard Modular Structure](../../dashboard-modular-structure.md). +For complete dashboard documentation, see [Dashboard Modular Structure (`../dashboard-modular-structure.md`)](../dashboard-modular-structure.md). 
## User Indicator Management @@ -130,8 +130,8 @@ The system includes a comprehensive user indicator management system that allows ### Quick Access -- **📊 [Complete Indicator Documentation](./indicators.md)** - Comprehensive guide to the indicator system -- **⚡ [Quick Guide: Adding New Indicators](./adding-new-indicators.md)** - Step-by-step checklist for developers +- **📊 [Complete Indicator Documentation (`indicators.md`)](./indicators.md)** - Comprehensive guide to the indicator system +- **⚡ [Quick Guide: Adding New Indicators (`adding-new-indicators.md`)](./adding-new-indicators.md)** - Step-by-step checklist for developers ### Current User Indicators @@ -670,7 +670,7 @@ uv run pytest tests/test_defaults.py -v ## Future Enhancements -- **✅ Signal Layer Integration**: Bot trade signals and alerts - **IMPLEMENTED** - See [Bot Integration Guide](./bot-integration.md) +- **✅ Signal Layer Integration**: Bot trade signals and alerts - **IMPLEMENTED** - See [Bot Integration Guide (`bot-integration.md`)](./bot-integration.md) - **Custom Indicators**: User-defined technical indicators - **Advanced Layouts**: Multi-chart and grid layouts - **Real-time Updates**: Live chart updates with indicator toggling @@ -685,7 +685,7 @@ The chart system now includes comprehensive bot integration capabilities: - **Multi-Bot Support**: Compare strategies across multiple bots - **Performance Analytics**: Built-in bot performance metrics -📊 **[Complete Bot Integration Guide](./bot-integration.md)** - Comprehensive documentation for integrating bot signals with charts +📊 **[Complete Bot Integration Guide (`bot-integration.md`)](./bot-integration.md)** - Comprehensive documentation for integrating bot signals with charts ## Support @@ -696,4 +696,7 @@ For issues, questions, or contributions: 3. Test with comprehensive validation 4. 
Refer to this documentation -The modular chart system is designed to be extensible and maintainable, providing a solid foundation for advanced trading chart functionality. \ No newline at end of file +The modular chart system is designed to be extensible and maintainable, providing a solid foundation for advanced trading chart functionality. +--- + +*Back to [Modules Documentation](../README.md)* \ No newline at end of file diff --git a/docs/modules/charts/adding-new-indicators.md b/docs/modules/charts/adding-new-indicators.md new file mode 100644 index 0000000..25b9e2d --- /dev/null +++ b/docs/modules/charts/adding-new-indicators.md @@ -0,0 +1,249 @@ +# Quick Guide: Adding New Indicators + +## Overview + +This guide provides a step-by-step checklist for adding new technical indicators to the Crypto Trading Bot Dashboard, updated for the new modular dashboard structure. + +## Prerequisites + +- Understanding of Python and technical analysis +- Familiarity with the project structure and Dash callbacks +- Knowledge of the indicator type (overlay vs subplot) + +## Step-by-Step Checklist + +### ✅ Step 1: Plan Your Indicator + +- [ ] Determine indicator type (overlay or subplot) +- [ ] Define required parameters +- [ ] Choose default styling +- [ ] Research calculation formula + +### ✅ Step 2: Create Indicator Class + +**File**: `components/charts/layers/indicators.py` (overlay) or `components/charts/layers/subplots.py` (subplot) + +Create a class for your indicator that inherits from `IndicatorLayer`. 
+
+```python
+class StochasticLayer(IndicatorLayer):
+    def __init__(self, config: Dict[str, Any]):
+        super().__init__(config)
+        self.name = "stochastic"
+        self.display_type = "subplot"
+
+    def calculate_values(self, df: pd.DataFrame) -> Dict[str, pd.Series]:
+        k_period = self.config.get('k_period', 14)
+        d_period = self.config.get('d_period', 3)
+        lowest_low = df['low'].rolling(window=k_period).min()
+        highest_high = df['high'].rolling(window=k_period).max()
+        k_percent = 100 * ((df['close'] - lowest_low) / (highest_high - lowest_low))
+        d_percent = k_percent.rolling(window=d_period).mean()
+        return {'k_percent': k_percent, 'd_percent': d_percent}
+
+    def create_traces(self, df: pd.DataFrame, values: Dict[str, pd.Series]) -> List[go.Scatter]:
+        traces = []
+        traces.append(go.Scatter(
+            x=df.index, y=values['k_percent'], mode='lines',
+            name=f"%K ({self.config.get('k_period', 14)})",
+            line=dict(color=self.config.get('color', '#007bff'),
+                      width=self.config.get('line_width', 2))))
+        traces.append(go.Scatter(
+            x=df.index, y=values['d_percent'], mode='lines',
+            name=f"%D ({self.config.get('d_period', 3)})",
+            line=dict(color=self.config.get('secondary_color', '#ff6b35'),
+                      width=self.config.get('line_width', 2))))
+        return traces
+```
+
+### ✅ Step 3: Register Indicator
+
+**File**: `components/charts/layers/__init__.py`
+
+Register your new indicator class in the appropriate registry.
+
+```python
+from .subplots import StochasticLayer
+
+SUBPLOT_REGISTRY = {
+    'rsi': RSILayer,
+    'macd': MACDLayer,
+    'stochastic': StochasticLayer,
+}
+
+INDICATOR_REGISTRY = {
+    'sma': SMALayer,
+    'ema': EMALayer,
+    'bollinger_bands': BollingerBandsLayer,
+}
+```
+
+### ✅ Step 4: Add UI Dropdown Option
+
+**File**: `dashboard/components/indicator_modal.py`
+
+Add your new indicator to the `indicator-type-dropdown` options.
+
+```python
+dcc.Dropdown(
+    id='indicator-type-dropdown',
+    options=[
+        {'label': 'Simple Moving Average (SMA)', 'value': 'sma'},
+        {'label': 'Exponential Moving Average (EMA)', 'value': 'ema'},
+        {'label': 'Relative Strength Index (RSI)', 'value': 'rsi'},
+        {'label': 'MACD', 'value': 'macd'},
+        {'label': 'Bollinger Bands', 'value': 'bollinger_bands'},
+        {'label': 'Stochastic Oscillator', 'value': 'stochastic'},
+    ],
+    placeholder='Select indicator type',
+)
+```
+
+### ✅ Step 5: Add Parameter Fields to Modal
+
+**File**: `dashboard/components/indicator_modal.py`
+
+In `create_parameter_fields`, add the `dcc.Input` components for your indicator's parameters.
+
+```python
+def create_parameter_fields():
+    return html.Div([
+        # ... existing parameter fields ...
+        html.Div([
+            dbc.Row([
+                dbc.Col([dbc.Label("%K Period:"), dcc.Input(id='stochastic-k-period-input', type='number', value=14)], width=6),
+                dbc.Col([dbc.Label("%D Period:"), dcc.Input(id='stochastic-d-period-input', type='number', value=3)], width=6),
+            ]),
+            dbc.FormText("Stochastic oscillator periods for %K and %D lines")
+        ], id='stochastic-parameters', style={'display': 'none'}, className="mb-3")
+    ])
+```
+
+### ✅ Step 6: Update Parameter Visibility Callback
+
+**File**: `dashboard/callbacks/indicators.py`
+
+In `update_parameter_fields`, add an `Output` and logic to show/hide your new parameter fields.
+
+```python
+@app.callback(
+    [Output('indicator-parameters-message', 'style'),
+     Output('sma-parameters', 'style'),
+     Output('ema-parameters', 'style'),
+     Output('rsi-parameters', 'style'),
+     Output('macd-parameters', 'style'),
+     Output('bb-parameters', 'style'),
+     Output('stochastic-parameters', 'style')],
+    Input('indicator-type-dropdown', 'value'),
+)
+def update_parameter_fields(indicator_type):
+    styles = {
+        'sma': {'display': 'none'}, 'ema': {'display': 'none'},
+        'rsi': {'display': 'none'}, 'macd': {'display': 'none'},
+        'bb': {'display': 'none'}, 'stochastic': {'display': 'none'},
+    }
+    message_style = {'display': 'block'} if not indicator_type else {'display': 'none'}
+    if indicator_type:
+        # Map the dropdown value to its styles key ('bollinger_bands' -> 'bb')
+        key = 'bb' if indicator_type == 'bollinger_bands' else indicator_type
+        styles[key] = {'display': 'block'}
+    return [message_style] + list(styles.values())
+```
+
+### ✅ Step 7: Update Save Indicator Callback
+
+**File**: `dashboard/callbacks/indicators.py`
+
+In `save_new_indicator`, add `State` inputs for your parameters and logic to collect them.
+
+```python
+@app.callback(
+    # ... Outputs ...
+    Input('save-indicator-btn', 'n_clicks'),
+    [# ... States ...
+     State('stochastic-k-period-input', 'value'),
+     State('stochastic-d-period-input', 'value'),
+     State('edit-indicator-store', 'data')],
+)
+def save_new_indicator(n_clicks, name, indicator_type, ..., stochastic_k, stochastic_d, edit_data):
+    # ...
+    elif indicator_type == 'stochastic':
+        parameters = {'k_period': stochastic_k or 14, 'd_period': stochastic_d or 3}
+    # ...
+```
+
+### ✅ Step 8: Update Edit Callback Parameters
+
+**File**: `dashboard/callbacks/indicators.py`
+
+In `edit_indicator`, add `Output`s for your parameter fields and logic to load values.
+
+```python
+@app.callback(
+    [# ... Outputs ...
+     Output('stochastic-k-period-input', 'value'),
+     Output('stochastic-d-period-input', 'value')],
+    Input({'type': 'edit-indicator-btn', 'index': dash.ALL}, 'n_clicks'),
+)
+def edit_indicator(edit_clicks, button_ids):
+    # ...
+    stochastic_k, stochastic_d = 14, 3
+    if indicator:
+        # ...
+        elif indicator.type == 'stochastic':
+            stochastic_k = params.get('k_period', 14)
+            stochastic_d = params.get('d_period', 3)
+    return (..., stochastic_k, stochastic_d)
+```
+
+### ✅ Step 9: Update Reset Callback
+
+**File**: `dashboard/callbacks/indicators.py`
+
+In `reset_modal_form`, add `Output`s for your parameter fields and their default values.
+
+```python
+@app.callback(
+    [# ... Outputs ...
+     Output('stochastic-k-period-input', 'value', allow_duplicate=True),
+     Output('stochastic-d-period-input', 'value', allow_duplicate=True)],
+    Input('cancel-indicator-btn', 'n_clicks'),
+)
+def reset_modal_form(cancel_clicks):
+    # ...
+    return ..., 14, 3
+```
+
+### ✅ Step 10: Create Default Template
+
+**File**: `components/charts/indicator_defaults.py`
+
+Create a default template for your indicator.
+
+```python
+def create_stochastic_template() -> UserIndicator:
+    return UserIndicator(
+        id=f"stochastic_{generate_short_id()}",
+        name="Stochastic 14,3",
+        type="stochastic",
+        display_type="subplot",
+        parameters={"k_period": 14, "d_period": 3},
+        styling=IndicatorStyling(color="#9c27b0", line_width=2)
+    )
+
+DEFAULT_TEMPLATES = {
+    # ...
+    "stochastic": create_stochastic_template,
+}
+```
+
+### ✅ Step 11: Add Calculation Function (Optional)
+
+**File**: `data/common/indicators.py`
+
+Add a standalone calculation function.
+
+```python
+def calculate_stochastic(df: pd.DataFrame, k_period: int = 14, d_period: int = 3) -> tuple:
+    lowest_low = df['low'].rolling(window=k_period).min()
+    highest_high = df['high'].rolling(window=k_period).max()
+    k_percent = 100 * ((df['close'] - lowest_low) / (highest_high - lowest_low))
+    d_percent = k_percent.rolling(window=d_period).mean()
+    return k_percent, d_percent
+```
+
+## File Change Summary
+
+When adding a new indicator, you'll typically modify these files:
+1. **`components/charts/layers/indicators.py`** or **`subplots.py`**
+2. 
**`components/charts/layers/__init__.py`** +3. **`dashboard/components/indicator_modal.py`** +4. **`dashboard/callbacks/indicators.py`** +5. **`components/charts/indicator_defaults.py`** +6. **`data/common/indicators.py`** (optional) \ No newline at end of file diff --git a/docs/components/charts/bot-integration.md b/docs/modules/charts/bot-integration.md similarity index 98% rename from docs/components/charts/bot-integration.md rename to docs/modules/charts/bot-integration.md index b2a5245..2e4109d 100644 --- a/docs/components/charts/bot-integration.md +++ b/docs/modules/charts/bot-integration.md @@ -1,5 +1,9 @@ # Bot Integration with Chart Signal Layers +> **⚠️ Feature Not Yet Implemented** +> +> The functionality described in this document for bot integration with chart layers is **planned for a future release**. It depends on the **Strategy Engine** and **Bot Manager**, which are not yet implemented. This document outlines the intended architecture and usage once these components are available. + The Chart Layers System provides seamless integration with the bot management system, allowing real-time visualization of bot signals, trades, and performance data directly on charts. 
## Table of Contents diff --git a/docs/components/charts/configuration.md b/docs/modules/charts/configuration.md similarity index 95% rename from docs/components/charts/configuration.md rename to docs/modules/charts/configuration.md index 55a44ea..2a878ef 100644 --- a/docs/components/charts/configuration.md +++ b/docs/modules/charts/configuration.md @@ -44,15 +44,16 @@ The main configuration class for individual indicators: ```python @dataclass class ChartIndicatorConfig: - indicator_type: IndicatorType + name: str + indicator_type: str parameters: Dict[str, Any] - display_name: str + display_type: str # 'overlay', 'subplot' color: str - line_style: LineStyle = LineStyle.SOLID + line_style: str = 'solid' # 'solid', 'dash', 'dot' line_width: int = 2 - display_type: DisplayType = DisplayType.OVERLAY opacity: float = 1.0 - show_legend: bool = True + visible: bool = True + subplot_height_ratio: float = 0.3 # For subplot indicators ``` #### Enums @@ -97,11 +98,10 @@ class IndicatorParameterSchema: name: str type: type required: bool = True + default: Any = None min_value: Optional[Union[int, float]] = None max_value: Optional[Union[int, float]] = None - default_value: Any = None description: str = "" - valid_values: Optional[List[Any]] = None ``` #### `IndicatorSchema` @@ -113,10 +113,10 @@ Complete schema for an indicator type: class IndicatorSchema: indicator_type: IndicatorType display_type: DisplayType - parameters: List[IndicatorParameterSchema] - description: str - calculation_description: str - usage_notes: List[str] = field(default_factory=list) + required_parameters: List[IndicatorParameterSchema] + optional_parameters: List[IndicatorParameterSchema] = field(default_factory=list) + min_data_points: int = 1 + description: str = "" ``` ### Schema Definitions @@ -163,20 +163,23 @@ def validate_indicator_configuration(config: ChartIndicatorConfig) -> tuple[bool # Create indicator configuration with validation def create_indicator_config( - indicator_type: 
IndicatorType, + name: str, + indicator_type: str, parameters: Dict[str, Any], - **kwargs + display_type: Optional[str] = None, + color: str = "#007bff", + **display_options ) -> tuple[Optional[ChartIndicatorConfig], List[str]] # Get schema for indicator type -def get_indicator_schema(indicator_type: IndicatorType) -> Optional[IndicatorSchema] +def get_indicator_schema(indicator_type: str) -> Optional[IndicatorSchema] # Get available indicator types -def get_available_indicator_types() -> List[IndicatorType] +def get_available_indicator_types() -> List[str] # Validate parameters for specific type def validate_parameters_for_type( - indicator_type: IndicatorType, + indicator_type: str, parameters: Dict[str, Any] ) -> tuple[bool, List[str]] ``` @@ -522,10 +525,9 @@ from .validation import ( validate_configuration ) -# Example strategies -from .example_strategies import ( - StrategyExample, create_ema_crossover_strategy, - get_all_example_strategies +# Utility functions from indicator_defs +from .indicator_defs import ( + create_indicator_config, get_indicator_schema, get_available_indicator_types ) ``` @@ -540,15 +542,15 @@ from components.charts.config import ( # Create custom EMA configuration config, errors = create_indicator_config( + name="EMA 21", indicator_type=IndicatorType.EMA, parameters={"period": 21, "price_column": "close"}, - display_name="EMA 21", color="#2E86C1", line_width=2 ) if config: - print(f"Created: {config.display_name}") + print(f"Created: {config.name}") else: print(f"Errors: {errors}") ``` diff --git a/docs/components/charts/indicators.md b/docs/modules/charts/indicators.md similarity index 97% rename from docs/components/charts/indicators.md rename to docs/modules/charts/indicators.md index a3a54d9..876f074 100644 --- a/docs/components/charts/indicators.md +++ b/docs/modules/charts/indicators.md @@ -130,7 +130,7 @@ config/indicators/ For developers who want to add new indicator types to the system, please refer to the comprehensive 
step-by-step guide: -**📋 [Quick Guide: Adding New Indicators](./adding-new-indicators.md)** +**📋 [Quick Guide: Adding New Indicators (`adding-new-indicators.md`)](./adding-new-indicators.md)** This guide covers: - ✅ Complete 11-step implementation checklist @@ -307,4 +307,7 @@ subplot_indicators = manager.get_indicators_by_type("subplot") 5. **User Experience** - Provide immediate visual feedback - Use intuitive color schemes - - Group related indicators logically \ No newline at end of file + - Group related indicators logically +--- + +*Back to [Chart System Documentation (`README.md`)](./README.md)* \ No newline at end of file diff --git a/docs/components/charts/quick-reference.md b/docs/modules/charts/quick-reference.md similarity index 96% rename from docs/components/charts/quick-reference.md rename to docs/modules/charts/quick-reference.md index 3137f0e..6887157 100644 --- a/docs/components/charts/quick-reference.md +++ b/docs/modules/charts/quick-reference.md @@ -264,17 +264,17 @@ StrategyChartConfig( ```bash # Test all chart components -uv run pytest tests/test_*_strategies.py -v -uv run pytest tests/test_validation.py -v -uv run pytest tests/test_defaults.py -v +pytest tests/test_*_strategies.py -v +pytest tests/test_validation.py -v +pytest tests/test_defaults.py -v # Test specific component -uv run pytest tests/test_example_strategies.py::TestEMACrossoverStrategy -v +pytest tests/test_example_strategies.py::TestEMACrossoverStrategy -v ``` ## File Locations - **Main config**: `components/charts/config/` -- **Documentation**: `docs/components/charts/` +- **Documentation**: `docs/modules/charts/` - **Tests**: `tests/test_*_strategies.py` - **Examples**: `components/charts/config/example_strategies.py` \ No newline at end of file diff --git a/docs/components/dashboard-modular-structure.md b/docs/modules/dashboard-modular-structure.md similarity index 95% rename from docs/components/dashboard-modular-structure.md rename to 
docs/modules/dashboard-modular-structure.md index 6619286..51fd5c2 100644 --- a/docs/components/dashboard-modular-structure.md +++ b/docs/modules/dashboard-modular-structure.md @@ -293,6 +293,10 @@ The modular dashboard structure migration has been **successfully completed**! A - Real-time data updates - Professional UI with modals and controls +> **Note on UI Components:** While the modular structure is in place, many UI sections, such as the **Bot Management** and **Performance** layouts, are currently placeholders. The controls and visualizations for these features will be implemented once the corresponding backend components (Bot Manager, Strategy Engine) are developed. + This architecture provides a solid foundation for future development while maintaining all existing functionality. The separation of concerns makes the codebase more maintainable and allows for easier collaboration and testing. -**The modular dashboard is now production-ready and fully functional!** 🚀 \ No newline at end of file +**The modular dashboard is now production-ready and fully functional!** 🚀 +--- +*Back to [Modules Documentation (`../README.md`)](../README.md)* \ No newline at end of file diff --git a/docs/modules/data_collectors.md b/docs/modules/data_collectors.md new file mode 100644 index 0000000..d25e1b1 --- /dev/null +++ b/docs/modules/data_collectors.md @@ -0,0 +1,215 @@ +# Enhanced Data Collector System + +This documentation describes the enhanced data collector system, featuring a modular architecture, centralized management, and robust health monitoring. 
+ +## Table of Contents + +- [Overview](#overview) +- [System Architecture](#system-architecture) +- [Core Components](#core-components) +- [Exchange Factory](#exchange-factory) +- [Health Monitoring](#health-monitoring) +- [API Reference](#api-reference) +- [Troubleshooting](#troubleshooting) + +## Overview + +### Key Features + +- **Modular Exchange Integration**: Easily add new exchanges without impacting core logic +- **Centralized Management**: `CollectorManager` for system-wide control +- **Robust Health Monitoring**: Automatic restarts and failure detection +- **Factory Pattern**: Standardized creation of collector instances +- **Asynchronous Operations**: High-performance data collection +- **Comprehensive Logging**: Detailed component-level logging + +### Supported Exchanges + +- **OKX**: Full implementation with WebSocket support +- **Binance (Future)**: Planned support +- **Coinbase (Future)**: Planned support + +For exchange-specific documentation, see [Exchange Implementations (`./exchanges/`)](./exchanges/). 
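The "robust health monitoring" feature above hinges on detecting data silence — a collector that is still connected but no longer receiving messages. A minimal, self-contained sketch of that check (illustrative only; the class and method names here are not the platform's actual API):

```python
import time
from typing import Optional

class SilenceDetector:
    """Sketch of data-silence detection: if no message has arrived within
    `max_silence` seconds, the collector should be flagged for restart."""

    def __init__(self, max_silence: float = 30.0):
        self.max_silence = max_silence
        self.last_message_time = time.monotonic()

    def record_message(self) -> None:
        # Called by the collector on every received WebSocket message
        self.last_message_time = time.monotonic()

    def needs_restart(self, now: Optional[float] = None) -> bool:
        if now is None:
            now = time.monotonic()
        return (now - self.last_message_time) > self.max_silence

detector = SilenceDetector(max_silence=30.0)
detector.record_message()
print(detector.needs_restart())  # False: a message just arrived
print(detector.needs_restart(detector.last_message_time + 31.0))  # True: 31s of silence
```

In the real system this check runs on the health-check interval alongside heartbeat and reconnect logic; the sketch only shows the silence threshold itself.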
+ +## System Architecture + +``` +β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” +β”‚ TCP Dashboard Platform β”‚ +β”‚ β”‚ +β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ +β”‚ β”‚ CollectorManager β”‚ β”‚ +β”‚ β”‚ β€’ Centralized start/stop/status control β”‚ β”‚ +β”‚ β”‚ β”‚ β”‚ +β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”β”‚ β”‚ +β”‚ β”‚ β”‚ Global Health Monitor β”‚β”‚ β”‚ +β”‚ β”‚ β”‚ β€’ System-wide health checks β”‚β”‚ β”‚ +β”‚ β”‚ β”‚ β€’ Auto-restart coordination β”‚β”‚ β”‚ +β”‚ β”‚ β”‚ β€’ Performance analytics β”‚β”‚ β”‚ +β”‚ β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜β”‚ β”‚ +β”‚ β”‚ β”‚ β”‚ β”‚ +β”‚ β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ β”‚ +β”‚ β”‚ β”‚OKX Collectorβ”‚ β”‚Binance Coll.β”‚ β”‚Custom Collectorβ”‚ β”‚ β”‚ +β”‚ β”‚ β”‚β€’ Health Mon β”‚ β”‚β€’ Health Mon β”‚ β”‚β€’ Health Monitorβ”‚ β”‚ β”‚ +β”‚ β”‚ β”‚β€’ Auto-restartβ”‚ β”‚β€’ Auto-restartβ”‚ β”‚β€’ Auto-restart β”‚ β”‚ β”‚ +β”‚ β”‚ β”‚β€’ Data Valid β”‚ β”‚β€’ Data Valid β”‚ β”‚β€’ Data Validate β”‚ β”‚ β”‚ +β”‚ β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚ β”‚ +β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚ 
+β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ +``` + +## Core Components + +### 1. `BaseDataCollector` + +An abstract base class that defines the common interface for all exchange collectors. + +**Key Responsibilities:** +- Standardized `start`, `stop`, `restart` methods +- Built-in health monitoring with heartbeat and data silence detection +- Automatic reconnect and restart logic +- Asynchronous message handling + +### 2. `CollectorManager` + +A singleton class that manages all active data collectors in the system. + +**Key Responsibilities:** +- Centralized `start` and `stop` for all collectors +- System-wide status aggregation +- Global health monitoring +- Coordination of restart policies + +### 3. Exchange-Specific Collectors + +Concrete implementations of `BaseDataCollector` for each exchange (e.g., `OKXCollector`). + +**Key Responsibilities:** +- Handle exchange-specific WebSocket protocols +- Parse and standardize incoming data +- Implement exchange-specific authentication +- Define subscription messages for different data types + +For more details, see [OKX Collector Documentation (`./exchanges/okx.md`)](./exchanges/okx.md). + +## Exchange Factory + +The `ExchangeFactory` provides a standardized way to create data collectors, decoupling the client code from specific implementations. 
+ +### Features + +- **Simplified Creation**: Single function to create any supported collector +- **Configuration Driven**: Uses `ExchangeCollectorConfig` for flexible setup +- **Validation**: Validates configuration before creating a collector +- **Extensible**: Easily register new exchange collectors + +### Usage + +```python +from data.exchanges import ExchangeFactory, ExchangeCollectorConfig +from data.common import DataType + +# Create config for OKX collector +config = ExchangeCollectorConfig( + exchange="okx", + symbol="BTC-USDT", + data_types=[DataType.TRADE, DataType.ORDERBOOK], + auto_restart=True +) + +# Create collector using the factory +try: + collector = ExchangeFactory.create_collector(config) + # Use the collector + await collector.start() +except ValueError as e: + print(f"Error creating collector: {e}") + +# Create multiple collectors +configs = [...] +collectors = ExchangeFactory.create_multiple_collectors(configs) +``` + +## Health Monitoring + +The system includes a robust, two-level health monitoring system. + +### 1. Collector-Level Monitoring + +Each `BaseDataCollector` instance has its own health monitoring. + +**Key Metrics:** +- **Heartbeat**: Regular internal signal to confirm the collector is responsive +- **Data Silence**: Tracks time since last message to detect frozen connections +- **Restart Count**: Number of automatic restarts +- **Connection Status**: Tracks WebSocket connection state + +### 2. Manager-Level Monitoring + +The `CollectorManager` provides a global view of system health. 
+ +**Key Metrics:** +- **Aggregate Status**: Overview of all collectors (running, stopped, failed) +- **System Uptime**: Total uptime for the collector system +- **Failed Collectors**: List of collectors that failed to restart +- **Resource Usage**: (Future) System-level CPU and memory monitoring + +### Health Status API + +```python +# Get status of a single collector +status = collector.get_status() +health = collector.get_health_status() + +# Get status of the entire system +system_status = manager.get_status() +``` + +For detailed status schemas, refer to the [Reference Documentation (`../../reference/README.md`)](../../reference/README.md). + +## API Reference + +### `BaseDataCollector` +- `async start()` +- `async stop()` +- `async restart()` +- `get_status() -> dict` +- `get_health_status() -> dict` + +### `CollectorManager` +- `add_collector(collector)` +- `async start_all()` +- `async stop_all()` +- `get_status() -> dict` +- `list_collectors() -> list` + +### `ExchangeFactory` +- `create_collector(config) -> BaseDataCollector` +- `create_multiple_collectors(configs) -> list` +- `get_supported_exchanges() -> list` + +## Troubleshooting + +### Common Issues + +1. **Collector fails to start** + - **Cause**: Invalid symbol, incorrect API keys, or network issues. + - **Solution**: Check logs for error messages. Verify configuration and network connectivity. + +2. **Collector stops receiving data** + - **Cause**: WebSocket connection dropped, exchange issues. + - **Solution**: Health monitor should automatically restart. If not, check logs for reconnect errors. + +3. **"Exchange not supported" error** + - **Cause**: Trying to create a collector for an exchange not registered in the factory. + - **Solution**: Implement the collector and register it in `data/exchanges/__init__.py`. + +### Best Practices + +- Use the `CollectorManager` for lifecycle management. +- Always validate configurations before creating collectors. 
+- Monitor system status regularly using `manager.get_status()`. +- Refer to logs for detailed error analysis. + +--- +*Back to [Modules Documentation](../README.md)* \ No newline at end of file diff --git a/docs/components/database_operations.md b/docs/modules/database_operations.md similarity index 98% rename from docs/components/database_operations.md rename to docs/modules/database_operations.md index 198be91..7eedd8b 100644 --- a/docs/components/database_operations.md +++ b/docs/modules/database_operations.md @@ -119,7 +119,7 @@ async def main(): # Check statistics stats = db.get_stats() print(f"Total candles: {stats['candle_count']}") - print(f"Total raw trades: {stats['raw_trade_count']}") + print(f"Total raw trades: {stats['trade_count']}") asyncio.run(main()) ``` @@ -149,7 +149,7 @@ Get comprehensive database statistics. ```python stats = db.get_stats() print(f"Candles: {stats['candle_count']:,}") -print(f"Raw trades: {stats['raw_trade_count']:,}") +print(f"Raw trades: {stats['trade_count']:,}") print(f"Health: {stats['healthy']}") ``` @@ -342,7 +342,7 @@ with db_manager.get_session() as session: session.execute(text(""" INSERT INTO market_data (exchange, symbol, timeframe, ...) VALUES (:exchange, :symbol, :timeframe, ...) - """), {...}) + """), {'exchange': 'okx', 'symbol': 'BTC-USDT', ...}) session.commit() ``` @@ -351,8 +351,10 @@ with db_manager.get_session() as session: ```python # NEW WAY - using repository pattern from database.operations import get_database_operations +from data.common.data_types import OHLCVCandle db = get_database_operations() +candle = OHLCVCandle(...)
# Create candle object success = db.market_data.upsert_candle(candle) ``` diff --git a/docs/modules/exchanges/README.md b/docs/modules/exchanges/README.md new file mode 100644 index 0000000..9199c33 --- /dev/null +++ b/docs/modules/exchanges/README.md @@ -0,0 +1,43 @@ +# Exchange Integrations + +This section provides documentation for integrating with different cryptocurrency exchanges. + +## Architecture + +The platform uses a modular architecture for exchange integration, allowing for easy addition of new exchanges without modifying core application logic. + +### Core Components + +- **`BaseDataCollector`**: An abstract base class defining the standard interface for all exchange collectors. +- **`ExchangeFactory`**: A factory for creating exchange-specific collector instances. +- **Exchange-Specific Modules**: Each exchange has its own module containing the collector implementation and any specific data processing logic. + +For a high-level overview of the data collection system, see the [Data Collectors Documentation (`../data_collectors.md`)](../data_collectors.md). + +## Supported Exchanges + +### OKX +- **Status**: Production Ready +- **Features**: Real-time trades, order book, and ticker data. +- **Documentation**: [OKX Collector Guide](okx_collector.md) + +### Binance +- **Status**: Planned +- **Features**: To be determined. + +### Coinbase +- **Status**: Planned +- **Features**: To be determined. + +## Adding a New Exchange + +To add support for a new exchange, you need to: + +1. Create a new module in the `data/exchanges/` directory. +2. Implement a new collector class that inherits from `BaseDataCollector`. +3. Implement the exchange-specific WebSocket connection and data parsing logic. +4. Register the new collector in the `ExchangeFactory`. +5. Add a new documentation file in this directory explaining the implementation details.
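+The steps above can be sketched with a self-contained toy version of the pattern. Note that the real `BaseDataCollector` and `ExchangeFactory` live in `data/` and their exact signatures may differ; the `KrakenCollector` name, the `register` hook, and all method bodies here are hypothetical illustrations only:

```python
import asyncio
from abc import ABC, abstractmethod

class BaseDataCollector(ABC):
    """Simplified stand-in for the real base class in data/ (step 2)."""

    def __init__(self, symbol: str):
        self.symbol = symbol
        self.running = False

    async def start(self):
        self.running = True
        await self.connect()

    async def stop(self):
        self.running = False

    @abstractmethod
    async def connect(self):
        """Open the exchange-specific WebSocket connection (step 3)."""

class ExchangeFactory:
    """Minimal registry-based factory (step 4). The real registration
    mechanism in data/exchanges/__init__.py may look different."""

    _registry = {}

    @classmethod
    def register(cls, name, collector_cls):
        cls._registry[name] = collector_cls

    @classmethod
    def create_collector(cls, exchange, symbol):
        if exchange not in cls._registry:
            raise ValueError(f"Exchange not supported: {exchange}")
        return cls._registry[exchange](symbol)

class KrakenCollector(BaseDataCollector):
    """Hypothetical new exchange collector."""

    async def connect(self):
        # Real code would open the exchange WebSocket and subscribe here
        pass

ExchangeFactory.register("kraken", KrakenCollector)

collector = ExchangeFactory.create_collector("kraken", "BTC-USD")
asyncio.run(collector.start())
print(collector.running)  # True
```

Unregistered exchange names fail fast with the same "Exchange not supported" `ValueError` described in the data collector troubleshooting guide.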
+ +--- +*Back to [Modules Documentation (`../README.md`)]* \ No newline at end of file diff --git a/docs/exchanges/okx_collector.md b/docs/modules/exchanges/okx_collector.md similarity index 88% rename from docs/exchanges/okx_collector.md rename to docs/modules/exchanges/okx_collector.md index f877584..50d33de 100644 --- a/docs/exchanges/okx_collector.md +++ b/docs/modules/exchanges/okx_collector.md @@ -884,4 +884,77 @@ class OKXCollector(BaseDataCollector): health_check_interval: Seconds between health checks store_raw_data: Whether to store raw OKX data """ -``` \ No newline at end of file +``` + +## Key Components + +The OKX collector consists of three main components working together: + +### `OKXCollector` + +- **Main class**: `OKXCollector(BaseDataCollector)` +- **Responsibilities**: + - Manages WebSocket connection state + - Subscribes to required data channels + - Dispatches raw messages to the data processor + - Stores standardized data in the database + - Provides health and status monitoring + +### `OKXWebSocketClient` + +- **Handles WebSocket communication**: `OKXWebSocketClient` +- **Responsibilities**: + - Manages connection, reconnection, and ping/pong + - Decodes incoming messages + - Handles authentication for private channels + +### `OKXDataProcessor` + +- **New in v2.0**: `OKXDataProcessor` +- **Responsibilities**: + - Validates incoming raw data from WebSocket + - Transforms data into standardized `StandardizedTrade` and `OHLCVCandle` formats + - Aggregates trades into OHLCV candles + - Invokes callbacks for processed trades and completed candles + +## Configuration + +### `OKXCollector` Configuration + +Configuration options for the `OKXCollector` class: + +| Parameter | Type | Default | Description | +|-------------------------|---------------------|---------------------------------------|-----------------------------------------------------------------------------| +| `symbol` | `str` | - | Trading symbol (e.g., `BTC-USDT`) | +| `data_types` | 
`List[DataType]` | `[TRADE, ORDERBOOK]` | List of data types to collect | +| `auto_restart` | `bool` | `True` | Automatically restart on failures | +| `health_check_interval` | `float` | `30.0` | Seconds between health checks | +| `store_raw_data` | `bool` | `True` | Store raw WebSocket data for debugging | +| `force_update_candles` | `bool` | `False` | If `True`, update existing candles; if `False`, keep existing ones unchanged | +| `logger` | `Logger` | `None` | Logger instance for conditional logging | +| `log_errors_only` | `bool` | `False` | If `True` and logger provided, only log error-level messages | + +### Health & Status Monitoring + +```python +import json + +status = collector.get_status() +print(json.dumps(status, indent=2)) +``` + +Example output: + +```json +{ + "component_name": "okx_collector_btc_usdt", + "status": "running", + "uptime": "0:10:15.123456", + "symbol": "BTC-USDT", + "data_types": ["trade", "orderbook"], + "connection_state": "connected", + "last_health_check": "2023-11-15T10:30:00Z", + "message_count": 1052, + "processed_trades": 512, + "processed_candles": 10, + "error_count": 2 +} +``` + +## Database Integration \ No newline at end of file diff --git a/docs/components/logging.md b/docs/modules/logging.md similarity index 63% rename from docs/components/logging.md rename to docs/modules/logging.md index b283e44..8a8abec 100644 --- a/docs/components/logging.md +++ b/docs/modules/logging.md @@ -2,6 +2,15 @@ The TCP Dashboard project uses a unified logging system that provides consistent, centralized logging across all components with advanced conditional logging capabilities. +## Key Features + +- **Component-based logging**: Each component (e.g., `bot_manager`, `data_collector`) gets its own dedicated logger and log directory under `logs/`. +- **Centralized control**: `UnifiedLogger` class manages all logger instances, ensuring consistent configuration. +- **Date-based rotation**: Log files are automatically rotated daily (e.g., `2023-11-15.txt`).
+- **Unified format**: All log messages follow `[YYYY-MM-DD HH:MM:SS - LEVEL - message]`. +- **Verbose console logging**: Optional verbose console output for real-time monitoring, controlled by environment variables. +- **Automatic cleanup**: Old log files are automatically removed to save disk space. + ## Features - **Component-specific directories**: Each component gets its own log directory @@ -218,339 +227,61 @@ The following components support conditional logging: - Parameters: `logger=None` - Data processing with conditional logging -## Basic Usage +## Usage -### Import and Initialize +### Getting a Logger ```python from utils.logger import get_logger -# Basic usage - gets logger with default settings -logger = get_logger('bot_manager') - -# With verbose console output +# Get logger for bot manager logger = get_logger('bot_manager', verbose=True) -# With custom cleanup settings -logger = get_logger('bot_manager', clean_old_logs=True, max_log_files=7) - -# All parameters -logger = get_logger( - component_name='bot_manager', - log_level='DEBUG', - verbose=True, - clean_old_logs=True, - max_log_files=14 -) +logger.info("Bot started successfully") +logger.debug("Connecting to database...") +logger.warning("API response time is high") +logger.error("Failed to execute trade", extra={'trade_id': 12345}) ``` -### Log Messages +### Configuration -```python -# Different log levels -logger.debug("Detailed debugging information") -logger.info("General information about program execution") -logger.warning("Something unexpected happened") -logger.error("An error occurred", exc_info=True) # Include stack trace -logger.critical("A critical error occurred") -``` +The `get_logger` function accepts the following parameters: -### Complete Example +| Parameter | Type | Default | Description | +|-------------------|---------------------|---------|-----------------------------------------------------------------------------| +| `component_name` | `str` | - | Name of the component 
(e.g., `bot_manager`, `data_collector`) | +| `log_level` | `str` | `INFO` | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) | +| `verbose` | `Optional[bool]` | `None` | Enable console logging. If `None`, uses `VERBOSE_LOGGING` from `.env` | +| `clean_old_logs` | `bool` | `True` | Automatically clean old log files when creating new ones | +| `max_log_files` | `int` | `30` | Maximum number of log files to keep per component | -```python -from utils.logger import get_logger +## Log Cleanup -class BotManager: - def __init__(self): - # Initialize with verbose output and keep only 7 days of logs - self.logger = get_logger('bot_manager', verbose=True, max_log_files=7) - self.logger.info("BotManager initialized") - - def start_bot(self, bot_id: str): - try: - self.logger.info(f"Starting bot {bot_id}") - # Bot startup logic here - self.logger.info(f"Bot {bot_id} started successfully") - except Exception as e: - self.logger.error(f"Failed to start bot {bot_id}: {e}", exc_info=True) - raise - - def stop_bot(self, bot_id: str): - self.logger.info(f"Stopping bot {bot_id}") - # Bot shutdown logic here - self.logger.info(f"Bot {bot_id} stopped") -``` +Log cleanup is now based on the number of files, not age. +- **Enabled by default**: `clean_old_logs=True` +- **Default retention**: Keeps the most recent 30 log files (`max_log_files=30`) -## Log Format +## Centralized Control -All log messages follow this unified format: -``` -[YYYY-MM-DD HH:MM:SS - LEVEL - message] -``` +For consistent logging behavior across the application, it is recommended to use environment variables in an `.env` file instead of passing parameters to `get_logger`. -Example: -``` -[2024-01-15 14:30:25 - INFO - Bot started successfully] -[2024-01-15 14:30:26 - ERROR - Connection failed: timeout] -``` +- `LOG_LEVEL`: "INFO", "DEBUG", etc. 
+- `VERBOSE_LOGGING`: "true" or "false" +- `CLEAN_OLD_LOGS`: "true" or "false" +- `MAX_LOG_FILES`: e.g., "15" -## File Organization +## File Structure -Logs are organized in a hierarchical structure: ``` logs/ -β”œβ”€β”€ tcp_dashboard/ -β”‚ β”œβ”€β”€ 2024-01-15.txt -β”‚ └── 2024-01-16.txt -β”œβ”€β”€ production_manager/ -β”‚ β”œβ”€β”€ 2024-01-15.txt -β”‚ └── 2024-01-16.txt -β”œβ”€β”€ collector_manager/ -β”‚ └── 2024-01-15.txt -β”œβ”€β”€ okx_collector_btc_usdt/ -β”‚ └── 2024-01-15.txt -└── okx_collector_eth_usdt/ - └── 2024-01-15.txt -``` - -## Configuration - -### Logger Parameters - -The `get_logger()` function accepts several parameters for customization: - -```python -get_logger( - component_name: str, # Required: component name - log_level: str = "INFO", # Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL - verbose: Optional[bool] = None, # Console logging: True, False, or None (use env) - clean_old_logs: bool = True, # Auto-cleanup old logs - max_log_files: int = 30 # Max number of log files to keep -) -``` - -### Log Levels - -Set the log level when getting a logger: -```python -# Available levels: DEBUG, INFO, WARNING, ERROR, CRITICAL -logger = get_logger('component_name', 'DEBUG') # Show all messages -logger = get_logger('component_name', 'ERROR') # Show only errors and critical -``` - -### Verbose Console Logging - -Control console output with the `verbose` parameter: - -```python -# Explicit verbose settings -logger = get_logger('bot_manager', verbose=True) # Always show console logs -logger = get_logger('bot_manager', verbose=False) # Never show console logs - -# Use environment variable (default behavior) -logger = get_logger('bot_manager', verbose=None) # Uses VERBOSE_LOGGING from .env -``` - -Environment variables for console logging: -```bash -# In .env file or environment -VERBOSE_LOGGING=true # Enable verbose console logging -LOG_TO_CONSOLE=true # Alternative environment variable (backward compatibility) -``` - -Console output respects log levels: -- 
**DEBUG level**: Shows all messages (DEBUG, INFO, WARNING, ERROR, CRITICAL) -- **INFO level**: Shows INFO and above (INFO, WARNING, ERROR, CRITICAL) -- **WARNING level**: Shows WARNING and above (WARNING, ERROR, CRITICAL) -- **ERROR level**: Shows ERROR and above (ERROR, CRITICAL) -- **CRITICAL level**: Shows only CRITICAL messages - -### Automatic Log Cleanup - -Control automatic cleanup of old log files: - -```python -# Enable automatic cleanup (default) -logger = get_logger('bot_manager', clean_old_logs=True, max_log_files=7) - -# Disable automatic cleanup -logger = get_logger('bot_manager', clean_old_logs=False) - -# Custom retention (keep 14 most recent log files) -logger = get_logger('bot_manager', max_log_files=14) -``` - -**How automatic cleanup works:** -- Triggered every time a new log file is created (date change) -- Keeps only the most recent `max_log_files` files -- Deletes older files automatically -- Based on file modification time, not filename - -## Best Practices for Conditional Logging - -### 1. Logger Inheritance -```python -# Parent component creates logger -parent_logger = get_logger('parent_system') -parent = ParentComponent(logger=parent_logger) - -# Pass logger to children for consistent hierarchy -child1 = ChildComponent(logger=parent_logger) -child2 = ChildComponent(logger=parent_logger, log_errors_only=True) -child3 = ChildComponent(logger=None) # No logging -``` - -### 2. 
Environment-Based Configuration -```python -import os -from utils.logger import get_logger - -def create_system_logger(): - """Create logger based on environment.""" - env = os.getenv('ENVIRONMENT', 'development') - - if env == 'production': - return get_logger('production_system', log_level='INFO', verbose=False) - elif env == 'testing': - return None # No logging during tests - else: - return get_logger('dev_system', log_level='DEBUG', verbose=True) - -# Use in components -system_logger = create_system_logger() -manager = CollectorManager(logger=system_logger) -``` - -### 3. Conditional Error-Only Mode -```python -def create_collector_with_logging_strategy(symbol, strategy='normal'): - """Create collector with different logging strategies.""" - base_logger = get_logger(f'collector_{symbol.lower().replace("-", "_")}') - - if strategy == 'silent': - return OKXCollector(symbol, logger=None) - elif strategy == 'errors_only': - return OKXCollector(symbol, logger=base_logger, log_errors_only=True) - else: - return OKXCollector(symbol, logger=base_logger) - -# Usage -btc_collector = create_collector_with_logging_strategy('BTC-USDT', 'normal') -eth_collector = create_collector_with_logging_strategy('ETH-USDT', 'errors_only') -ada_collector = create_collector_with_logging_strategy('ADA-USDT', 'silent') -``` - -### 4. Performance Optimization -```python -class OptimizedComponent: - def __init__(self, logger=None, log_errors_only=False): - self.logger = logger - self.log_errors_only = log_errors_only - - # Pre-compute logging capabilities for performance - self.can_log_debug = logger and not log_errors_only - self.can_log_info = logger and not log_errors_only - self.can_log_warning = logger and not log_errors_only - self.can_log_error = logger is not None - self.can_log_critical = logger is not None - - def process_data(self, data): - if self.can_log_debug: - self.logger.debug(f"Processing {len(data)} records") - - # ... processing logic ... 
- - if self.can_log_info: - self.logger.info("Data processing completed") -``` - -## Advanced Features - -### Manual Log Cleanup - -Remove old log files manually based on age: -```python -from utils.logger import cleanup_old_logs - -# Remove logs older than 30 days for a specific component -cleanup_old_logs('bot_manager', days_to_keep=30) - -# Or clean up logs for multiple components -for component in ['bot_manager', 'data_collector', 'strategies']: - cleanup_old_logs(component, days_to_keep=7) -``` - -### Error Handling with Context - -```python -try: - risky_operation() -except Exception as e: - logger.error(f"Operation failed: {e}", exc_info=True) - # exc_info=True includes the full stack trace -``` - -### Structured Logging - -For complex data, use structured messages: -```python -# Good: Structured information -logger.info(f"Trade executed: symbol={symbol}, price={price}, quantity={quantity}") - -# Even better: JSON-like structure for parsing -logger.info(f"Trade executed", extra={ - 'symbol': symbol, - 'price': price, - 'quantity': quantity, - 'timestamp': datetime.now().isoformat() -}) -``` - -## Configuration Examples - -### Development Environment -```python -# Verbose logging with frequent cleanup -logger = get_logger( - 'bot_manager', - log_level='DEBUG', - verbose=True, - max_log_files=3 # Keep only 3 days of logs -) -``` - -### Production Environment -```python -# Minimal console output with longer retention -logger = get_logger( - 'bot_manager', - log_level='INFO', - verbose=False, - max_log_files=30 # Keep 30 days of logs -) -``` - -### Testing Environment -```python -# Disable cleanup for testing -logger = get_logger( - 'test_component', - log_level='DEBUG', - verbose=True, - clean_old_logs=False # Don't delete logs during tests -) -``` - -## Environment Variables - -Create a `.env` file to control default logging behavior: - -```bash -# Enable verbose console logging globally -VERBOSE_LOGGING=true - -# Alternative (backward compatibility) 
-LOG_TO_CONSOLE=true +β”œβ”€β”€ bot_manager/ +β”‚ β”œβ”€β”€ 2023-11-14.txt +β”‚ └── 2023-11-15.txt +β”œβ”€β”€ data_collector/ +β”‚ β”œβ”€β”€ 2023-11-14.txt +β”‚ └── 2023-11-15.txt +└── default_logger/ + └── 2023-11-15.txt ``` ## Best Practices diff --git a/docs/services/data_collection_service.md b/docs/modules/services/data_collection_service.md similarity index 90% rename from docs/services/data_collection_service.md rename to docs/modules/services/data_collection_service.md index 42be79d..203773f 100644 --- a/docs/services/data_collection_service.md +++ b/docs/modules/services/data_collection_service.md @@ -1,10 +1,86 @@ # Data Collection Service -The Data Collection Service is a production-ready service for cryptocurrency market data collection with clean logging and robust error handling. It provides a service layer that manages multiple data collectors for different trading pairs and exchanges. +**Service for collecting and storing real-time market data from multiple exchanges.** -## Overview +## Architecture Overview -The service provides a high-level interface for managing the data collection system, handling configuration, lifecycle management, and monitoring. It acts as a orchestration layer on top of the core data collector components. +The data collection service uses a **manager-worker architecture** to collect data for multiple trading pairs concurrently. + +- **`CollectorManager`**: The central manager responsible for creating, starting, stopping, and monitoring individual data collectors. +- **`OKXCollector`**: A dedicated worker responsible for collecting data for a single trading pair from the OKX exchange. + +This architecture allows for high scalability and fault tolerance. 
+ +## Key Components + +### `CollectorManager` + +- **Location**: `tasks/collector_manager.py` +- **Responsibilities**: + - Manages the lifecycle of multiple collectors + - Provides a unified API for controlling all collectors + - Monitors the health of each collector + - Distributes tasks and aggregates results + +### `OKXCollector` + +- **Location**: `data/exchanges/okx/collector.py` +- **Responsibilities**: + - Connects to the OKX WebSocket API + - Subscribes to real-time data channels + - Processes and standardizes incoming data + - Stores data in the database + +## Configuration + +The service is configured through `config/bot_configs/data_collector_config.json`: + +```json +{ + "service_name": "data_collection_service", + "enabled": true, + "manager_config": { + "component_name": "collector_manager", + "health_check_interval": 60, + "log_level": "INFO", + "verbose": true + }, + "collectors": [ + { + "exchange": "okx", + "symbol": "BTC-USDT", + "data_types": ["trade", "orderbook"], + "enabled": true + }, + { + "exchange": "okx", + "symbol": "ETH-USDT", + "data_types": ["trade"], + "enabled": true + } + ] +} +``` + +## Usage + +Start the service from the main application entry point: + +```python +# main.py +import asyncio + +from tasks.collector_manager import CollectorManager + +async def main(): + manager = CollectorManager() + await manager.start_all_collectors() + +if __name__ == "__main__": + asyncio.run(main()) +``` + +## Health & Monitoring + +The `CollectorManager` provides a `get_status()` method to monitor the health of all collectors.
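+A simple supervision loop can be built on top of `get_status()`. The sketch below is a minimal illustration, assuming the status payload has a `"collectors"` mapping of names to per-collector status dicts and that the manager exposes a restart helper — both the exact schema and the `restart_collector` method are assumptions, not documented API:

```python
import asyncio

def find_failed(status: dict) -> list:
    """Pick out collectors whose reported status is 'failed'.

    Assumes get_status() returns {"collectors": {name: {"status": ...}}};
    the real schema is defined in the reference documentation.
    """
    return [name for name, info in status.get("collectors", {}).items()
            if info.get("status") == "failed"]

async def watchdog(manager, interval: float = 60.0):
    """Periodically poll the manager and restart failed collectors.

    `manager.restart_collector(name)` is a hypothetical helper; the real
    CollectorManager may expose restarts differently.
    """
    while True:
        for name in find_failed(manager.get_status()):
            await manager.restart_collector(name)
        await asyncio.sleep(interval)
```

In practice the built-in health monitor already auto-restarts collectors; an external loop like this is only useful as a last-resort safety net or for exporting health metrics.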
## Features diff --git a/docs/components/technical-indicators.md b/docs/modules/technical-indicators.md similarity index 89% rename from docs/components/technical-indicators.md rename to docs/modules/technical-indicators.md index 2aaadf3..65e0665 100644 --- a/docs/components/technical-indicators.md +++ b/docs/modules/technical-indicators.md @@ -264,4 +264,35 @@ Potential future additions to the indicators module: - [Aggregation Strategy Documentation](aggregation-strategy.md) - [Data Types Documentation](data-types.md) - [Database Schema Documentation](database-schema.md) -- [API Reference](api-reference.md) \ No newline at end of file +- [API Reference](api-reference.md) + +## `TechnicalIndicators` Class + +The main class for calculating technical indicators. + +- **RSI**: `rsi(df, period=14, price_column='close')` +- **MACD**: `macd(df, fast_period=12, slow_period=26, signal_period=9, price_column='close')` +- **Bollinger Bands**: `bollinger_bands(df, period=20, std_dev=2.0, price_column='close')` + +### `calculate_multiple_indicators` + +Calculates multiple indicators in a single pass for efficiency. + +```python +# Configuration for multiple indicators +indicators_config = { + 'sma_20': {'type': 'sma', 'period': 20}, + 'ema_50': {'type': 'ema', 'period': 50}, + 'rsi_14': {'type': 'rsi', 'period': 14} +} + +# Calculate all indicators +all_results = ti.calculate_multiple_indicators(candles, indicators_config) + +print(f"SMA results: {len(all_results['sma_20'])}") +print(f"RSI results: {len(all_results['rsi_14'])}") +``` + +## Sparse Data Handling + +The `TechnicalIndicators` class is designed to handle sparse OHLCV data, which is a common scenario in real-time data collection. 
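+The project's actual `TechnicalIndicators` implementation is not shown above, but the documented signature `rsi(df, period=14, price_column='close')` can be illustrated with a standalone pandas version. This sketch assumes Wilder's smoothing (the conventional RSI formulation); the in-repo implementation may differ in smoothing choice and NaN handling:

```python
import pandas as pd

def rsi(df: pd.DataFrame, period: int = 14, price_column: str = "close") -> pd.Series:
    """Illustrative RSI matching the documented signature."""
    delta = df[price_column].diff()
    gain = delta.clip(lower=0.0)      # upward moves only
    loss = -delta.clip(upper=0.0)     # downward moves, as positive numbers
    # Wilder's smoothing is an EMA with alpha = 1/period
    avg_gain = gain.ewm(alpha=1.0 / period, min_periods=period).mean()
    avg_loss = loss.ewm(alpha=1.0 / period, min_periods=period).mean()
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

candles = pd.DataFrame({"close": [100 + i for i in range(30)]})
print(rsi(candles).iloc[-1])  # 100.0 for a strictly rising series
```

The first `period - 1` values are NaN by construction, which mirrors the warm-up behavior any indicator on sparse OHLCV data must tolerate.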
\ No newline at end of file diff --git a/docs/reference/README.md b/docs/reference/README.md index 7c4e09c..1550e83 100644 --- a/docs/reference/README.md +++ b/docs/reference/README.md @@ -360,10 +360,10 @@ CREATE TABLE raw_trades ( ## πŸ”— Related Documentation -- **[Components Documentation](../components/)** - Implementation details -- **[Architecture Overview](../architecture/)** - System design -- **[Exchange Documentation](../exchanges/)** - Exchange integrations -- **[Setup Guide](../guides/)** - Configuration and deployment +- **[Modules Documentation (`../modules/`)](../modules/)** - Implementation details +- **[Architecture Overview (`../architecture.md`)](../architecture.md)** - System design +- **[Exchange Documentation (`../modules/exchanges/`)](../modules/exchanges/)** - Exchange integrations +- **[Setup Guide (`../guides/`)](../guides/)** - Configuration and deployment ## πŸ“ž Support @@ -394,4 +394,4 @@ def validate_market_data_point(data_point): --- -*For the complete documentation index, see the [main documentation README](../README.md).* \ No newline at end of file +*For the complete documentation index, see the [main documentation README](../README.md).* \ No newline at end of file diff --git a/tasks/PRD-tasks.md b/tasks/MAIN-task-list.md similarity index 100% rename from tasks/PRD-tasks.md rename to tasks/MAIN-task-list.md