Merge branch 'main' of https://dep.sokaris.link/vasily/TCPDashboard

commit 0c08d0a8fe

.cursor/rules/always-global.mdc (new file, 61 lines)
@@ -0,0 +1,61 @@
---
description: Global development standards and AI interaction principles
globs:
alwaysApply: true
---

# Rule: Always Apply - Global Development Standards

## AI Interaction Principles

### Step-by-Step Development
- **NEVER** generate large blocks of code without explanation
- **ALWAYS** ask "provide your plan in a concise bullet list and wait for my confirmation before proceeding"
- Break complex tasks into smaller, manageable pieces (≤250 lines per file, ≤50 lines per function)
- Explain your reasoning step-by-step before writing code
- Wait for explicit approval before moving to the next sub-task

### Context Awareness
- **ALWAYS** reference existing code patterns and data structures before suggesting new approaches
- Ask about existing conventions before implementing new functionality
- Preserve established architectural decisions unless explicitly asked to change them
- Maintain consistency with existing naming conventions and code style

## Code Quality Standards

### File and Function Limits
- **Maximum file size**: 250 lines
- **Maximum function size**: 50 lines
- **Maximum complexity**: If a function does more than one main thing, break it down
- **Naming**: Use clear, descriptive names that explain purpose

### Documentation Requirements
- **Every public function** must have a docstring explaining purpose, parameters, and return value
- **Every class** must have a class-level docstring
- **Complex logic** must have inline comments explaining the "why", not just the "what"
- **API endpoints** must be documented with request/response examples

### Error Handling
- **ALWAYS** include proper error handling for external dependencies
- **NEVER** use bare except clauses
- Provide meaningful error messages that help with debugging
- Log errors appropriately for the application context
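
A minimal sketch of this style of error handling (the `fetch_ticker` function and its client are illustrative, not part of the codebase):

```python
import logging

logger = logging.getLogger(__name__)

def fetch_ticker(client, symbol: str) -> dict:
    """Fetch ticker data from an external exchange client, with explicit error handling."""
    try:
        return client.get_ticker(symbol)
    except TimeoutError as exc:
        # Catch specific exceptions -- never a bare `except:`
        logger.error("Ticker request for %s timed out: %s", symbol, exc)
        raise
    except ConnectionError as exc:
        logger.error("Could not reach exchange while fetching %s: %s", symbol, exc)
        raise
```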

## Security and Best Practices
- **NEVER** hardcode credentials, API keys, or sensitive data
- **ALWAYS** validate user inputs
- Use parameterized queries for database operations
- Follow the principle of least privilege
- Implement proper authentication and authorization
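
As a sketch of the parameterized-query rule (using `sqlite3` as a stand-in driver; the table and columns are hypothetical):

```python
import sqlite3

def get_user(conn: sqlite3.Connection, user_id: str):
    # The placeholder keeps user input out of the SQL string itself
    return conn.execute(
        "SELECT id, name, email FROM users WHERE id = ?",
        (user_id,),
    ).fetchone()

# Never interpolate input into the query:
# conn.execute(f"SELECT ... WHERE id = '{user_id}'")  # SQL injection risk
```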

## Testing Requirements
- **Every implementation** should have corresponding unit tests
- **Every API endpoint** should have integration tests
- Test files should be placed alongside the code they test
- Use descriptive test names that explain what is being tested
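
For example, hypothetical pytest cases whose names state the behavior under test (`validate_email` is assumed to exist in the module under test):

```python
# `validate_email` is an assumed import from the module under test
def test_validate_email_rejects_address_without_at_sign():
    assert validate_email("invalid-address") is False

def test_validate_email_accepts_standard_address():
    assert validate_email("user@example.com") is True
```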

## Response Format
- Be concise and avoid unnecessary repetition
- Focus on actionable information
- Provide examples when explaining complex concepts
- Ask clarifying questions when requirements are ambiguous

.cursor/rules/architecture.mdc (new file, 237 lines)
@@ -0,0 +1,237 @@
---
description: Modular design principles and architecture guidelines for scalable development
globs:
alwaysApply: false
---

# Rule: Architecture and Modular Design

## Goal
Maintain a clean, modular architecture that scales effectively and prevents the complexity issues that arise in AI-assisted development.

## Core Architecture Principles

### 1. Modular Design
- **Single Responsibility**: Each module has one clear purpose
- **Loose Coupling**: Modules depend on interfaces, not implementations
- **High Cohesion**: Related functionality is grouped together
- **Clear Boundaries**: Module interfaces are well-defined and stable

### 2. Size Constraints
- **Files**: Maximum 250 lines per file
- **Functions**: Maximum 50 lines per function
- **Classes**: Maximum 300 lines per class
- **Modules**: Maximum 10 public functions/classes per module

### 3. Dependency Management
- **Layer Dependencies**: Higher layers depend on lower layers only
- **No Circular Dependencies**: Modules cannot depend on each other cyclically
- **Interface Segregation**: Depend on specific interfaces, not broad ones
- **Dependency Injection**: Pass dependencies rather than creating them internally

## Modular Architecture Patterns

### Layer Structure
```
src/
├── presentation/   # UI, API endpoints, CLI interfaces
├── application/    # Business logic, use cases, workflows
├── domain/         # Core business entities and rules
├── infrastructure/ # Database, external APIs, file systems
└── shared/         # Common utilities, constants, types
```

### Module Organization
```
module_name/
├── __init__.py # Public interface exports
├── core.py     # Main module logic
├── types.py    # Type definitions and interfaces
├── utils.py    # Module-specific utilities
├── tests/      # Module tests
└── README.md   # Module documentation
```

## Design Patterns for AI Development

### 1. Repository Pattern
Separate data access from business logic:

```python
# Domain interface
class UserRepository:
    def get_by_id(self, user_id: str) -> User: ...
    def save(self, user: User) -> None: ...

# Infrastructure implementation
class SqlUserRepository(UserRepository):
    def get_by_id(self, user_id: str) -> User:
        # Database-specific implementation
        pass
```

### 2. Service Pattern
Encapsulate business logic in focused services:

```python
class UserService:
    def __init__(self, user_repo: UserRepository):
        self._user_repo = user_repo

    def create_user(self, data: UserData) -> User:
        # Validation and business logic
        # Single responsibility: user creation
        pass
```

### 3. Factory Pattern
Create complex objects with clear interfaces:

```python
class DatabaseFactory:
    @staticmethod
    def create_connection(config: DatabaseConfig) -> Connection:
        # Handle different database types
        # Encapsulate connection complexity
        pass
```

## Architecture Decision Guidelines

### When to Create New Modules
Create a new module when:
- **Functionality** exceeds size constraints (250 lines)
- **Responsibility** is distinct from existing modules
- **Dependencies** would create circular references
- **Reusability** would benefit other parts of the system
- **Testing** requires isolated test environments

### When to Split Existing Modules
Split modules when:
- **File size** exceeds 250 lines
- **Multiple responsibilities** are evident
- **Testing** becomes difficult due to complexity
- **Dependencies** become too numerous
- **Change frequency** differs significantly between parts

### Module Interface Design
```python
# Good: Clear, focused interface
class PaymentProcessor:
    def process_payment(self, amount: Money, method: PaymentMethod) -> PaymentResult:
        """Process a single payment transaction."""
        pass

# Bad: Unfocused, kitchen-sink interface
class PaymentManager:
    def process_payment(self, ...): pass
    def validate_card(self, ...): pass
    def send_receipt(self, ...): pass
    def update_inventory(self, ...): pass  # Wrong responsibility!
```

## Architecture Validation

### Architecture Review Checklist
- [ ] **Dependencies flow in one direction** (no cycles)
- [ ] **Layers are respected** (presentation doesn't call infrastructure directly)
- [ ] **Modules have single responsibility**
- [ ] **Interfaces are stable** and well-defined
- [ ] **Size constraints** are maintained
- [ ] **Testing** is straightforward for each module

### Red Flags
- **God Objects**: Classes/modules that do too many things
- **Circular Dependencies**: Modules that depend on each other
- **Deep Inheritance**: More than 3 levels of inheritance
- **Large Interfaces**: Interfaces with more than 7 methods
- **Tight Coupling**: Modules that know too much about each other's internals

## Refactoring Guidelines

### When to Refactor
- Module exceeds size constraints
- Code duplication across modules
- Difficult to test individual components
- New features require changing multiple unrelated modules
- Performance bottlenecks due to poor separation

### Refactoring Process
1. **Identify** the specific architectural problem
2. **Design** the target architecture
3. **Create tests** to verify current behavior
4. **Implement changes** incrementally
5. **Validate** that tests still pass
6. **Update documentation** to reflect changes

### Safe Refactoring Practices
- **One change at a time**: Don't mix refactoring with new features
- **Tests first**: Ensure comprehensive test coverage before refactoring
- **Incremental changes**: Small steps with verification at each stage
- **Backward compatibility**: Maintain existing interfaces during transition
- **Documentation updates**: Keep architecture documentation current

## Architecture Documentation

### Architecture Decision Records (ADRs)
Document significant decisions in `./docs/decisions/`:

```markdown
# ADR-003: Service Layer Architecture

## Status
Accepted

## Context
As the application grows, business logic is scattered across controllers and models.

## Decision
Implement a service layer to encapsulate business logic.

## Consequences
**Positive:**
- Clear separation of concerns
- Easier testing of business logic
- Better reusability across different interfaces

**Negative:**
- Additional abstraction layer
- More files to maintain
```

### Module Documentation Template
```markdown
# Module: [Name]

## Purpose
What this module does and why it exists.

## Dependencies
- **Imports from**: List of modules this depends on
- **Used by**: List of modules that depend on this one
- **External**: Third-party dependencies

## Public Interface
```python
# Key functions and classes exposed by this module
```

## Architecture Notes
- Design patterns used
- Important architectural decisions
- Known limitations or constraints
```

## Migration Strategies

### Legacy Code Integration
- **Strangler Fig Pattern**: Gradually replace old code with new modules
- **Adapter Pattern**: Create interfaces to integrate old and new code
- **Facade Pattern**: Simplify complex legacy interfaces

### Gradual Modernization
1. **Identify boundaries** in existing code
2. **Extract modules** one at a time
3. **Create interfaces** for each extracted module
4. **Test thoroughly** at each step
5. **Update documentation** continuously

.cursor/rules/code-review.mdc (new file, 123 lines)
@@ -0,0 +1,123 @@
---
description: AI-generated code review checklist and quality assurance guidelines
globs:
alwaysApply: false
---

# Rule: Code Review and Quality Assurance

## Goal
Establish systematic review processes for AI-generated code to maintain quality, security, and maintainability standards.

## AI Code Review Checklist

### Pre-Implementation Review
Before accepting any AI-generated code:

1. **Understand the Code**
   - [ ] Can you explain what the code does in your own words?
   - [ ] Do you understand each function and its purpose?
   - [ ] Are there any "magic" values or unexplained logic?
   - [ ] Does the code solve the actual problem stated?

2. **Architecture Alignment**
   - [ ] Does the code follow established project patterns?
   - [ ] Is it consistent with existing data structures?
   - [ ] Does it integrate cleanly with existing components?
   - [ ] Are new dependencies justified and necessary?

3. **Code Quality**
   - [ ] Are functions smaller than 50 lines?
   - [ ] Are files smaller than 250 lines?
   - [ ] Are variable and function names descriptive?
   - [ ] Is the code DRY (Don't Repeat Yourself)?

### Security Review
- [ ] **Input Validation**: All user inputs are validated and sanitized
- [ ] **Authentication**: Proper authentication checks are in place
- [ ] **Authorization**: Access controls are implemented correctly
- [ ] **Data Protection**: Sensitive data is handled securely
- [ ] **SQL Injection**: Database queries use parameterized statements
- [ ] **XSS Prevention**: Output is properly escaped
- [ ] **Error Handling**: Errors don't leak sensitive information
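
For the input-validation item, a minimal illustration (the payload fields are hypothetical):

```python
def validate_order_request(payload: dict) -> dict:
    """Validate and normalize an incoming order payload before it is used."""
    symbol = str(payload.get("symbol", "")).strip().upper()
    if not symbol:
        raise ValueError("symbol is required")
    try:
        amount = float(payload["amount"])
    except (KeyError, TypeError, ValueError):
        raise ValueError("amount must be a number")
    if amount <= 0:
        raise ValueError("amount must be positive")
    return {"symbol": symbol, "amount": amount}
```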

### Integration Review
- [ ] **Existing Functionality**: New code doesn't break existing features
- [ ] **Data Consistency**: Database changes maintain referential integrity
- [ ] **API Compatibility**: Changes don't break existing API contracts
- [ ] **Performance Impact**: New code doesn't introduce performance bottlenecks
- [ ] **Testing Coverage**: Appropriate tests are included

## Review Process

### Step 1: Initial Code Analysis
1. **Read through the entire generated code** before running it
2. **Identify patterns** that don't match the existing codebase
3. **Check dependencies** - are new packages really needed?
4. **Verify logic flow** - does the algorithm make sense?

### Step 2: Security and Error Handling Review
1. **Trace data flow** from input to output
2. **Identify potential failure points** and verify error handling
3. **Check for security vulnerabilities** using the security checklist
4. **Verify proper logging** and monitoring implementation

### Step 3: Integration Testing
1. **Test with existing code** to ensure compatibility
2. **Run existing test suite** to verify no regressions
3. **Test edge cases** and error conditions
4. **Verify performance** under realistic conditions

## Common AI Code Issues to Watch For

### Overcomplication Patterns
- **Unnecessary abstractions**: AI creating complex patterns for simple tasks
- **Over-engineering**: Solutions that are more complex than needed
- **Redundant code**: AI recreating existing functionality
- **Inappropriate design patterns**: Using patterns that don't fit the use case

### Context Loss Indicators
- **Inconsistent naming**: Different conventions from existing code
- **Wrong data structures**: Using different patterns than established
- **Ignored existing functions**: Reimplementing existing functionality
- **Architectural misalignment**: Code that doesn't fit the overall design

### Technical Debt Indicators
- **Magic numbers**: Hardcoded values without explanation
- **Poor error messages**: Generic or unhelpful error handling
- **Missing documentation**: Code without adequate comments
- **Tight coupling**: Components that are too interdependent
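
The magic-number item, for instance (the constant and names are invented for illustration):

```python
SESSION_TTL_SECONDS = 24 * 60 * 60  # sessions expire after one day

def session_expired(age_seconds: float) -> bool:
    """Compare against a named constant instead of a bare 86400."""
    # Unclear version: `return age_seconds > 86400` -- why 86400?
    return age_seconds > SESSION_TTL_SECONDS
```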

## Quality Gates

### Mandatory Reviews
All AI-generated code must pass these gates before acceptance:

1. **Security Review**: No security vulnerabilities detected
2. **Integration Review**: Integrates cleanly with existing code
3. **Performance Review**: Meets performance requirements
4. **Maintainability Review**: Code can be easily modified by team members
5. **Documentation Review**: Adequate documentation is provided

### Acceptance Criteria
- [ ] Code is understandable by any team member
- [ ] Integration requires minimal changes to existing code
- [ ] Security review passes all checks
- [ ] Performance meets established benchmarks
- [ ] Documentation is complete and accurate

## Rejection Criteria
Reject AI-generated code if:
- Security vulnerabilities are present
- Code is too complex for the problem being solved
- Integration requires major refactoring of existing code
- Code duplicates existing functionality without justification
- Documentation is missing or inadequate

## Review Documentation
For each review, document:
- Issues found and how they were resolved
- Performance impact assessment
- Security concerns and mitigations
- Integration challenges and solutions
- Recommendations for future similar tasks

.cursor/rules/context-management.mdc (new file, 93 lines)
@@ -0,0 +1,93 @@
---
description: Context management for maintaining codebase awareness and preventing context drift
globs:
alwaysApply: false
---

# Rule: Context Management

## Goal
Maintain comprehensive project context to prevent context drift and ensure AI-generated code integrates seamlessly with existing codebase patterns and architecture.

## Context Documentation Requirements

### PRD.md File Documentation
1. **Project Overview**
   - Business objectives and goals
   - Target users and use cases
   - Key success metrics

### CONTEXT.md File Structure
Every project must maintain a `CONTEXT.md` file in the root directory with:

1. **Architecture Overview**
   - High-level system architecture
   - Key design patterns used
   - Database schema overview
   - API structure and conventions

2. **Technology Stack**
   - Programming languages and versions
   - Frameworks and libraries
   - Database systems
   - Development and deployment tools

3. **Coding Conventions**
   - Naming conventions
   - File organization patterns
   - Code structure preferences
   - Import/export patterns

4. **Current Implementation Status**
   - Completed features
   - Work in progress
   - Known technical debt
   - Planned improvements

## Context Maintenance Protocol

### Before Every Coding Session
1. **Review CONTEXT.md and PRD.md** to understand current project state
2. **Scan recent changes** in git history to understand latest patterns
3. **Identify existing patterns** for similar functionality before implementing new features
4. **Ask for clarification** if existing patterns are unclear or conflicting

### During Development
1. **Reference existing code** when explaining implementation approaches
2. **Maintain consistency** with established patterns and conventions
3. **Update CONTEXT.md** when making architectural decisions
4. **Document deviations** from established patterns with reasoning

### Context Preservation Strategies
- **Incremental development**: Build on existing patterns rather than creating new ones
- **Pattern consistency**: Use established data structures and function signatures
- **Integration awareness**: Consider how new code affects existing functionality
- **Dependency management**: Understand existing dependencies before adding new ones

## Context Prompting Best Practices

### Effective Context Sharing
- Include relevant sections of CONTEXT.md in prompts for complex tasks
- Reference specific existing files when asking for similar functionality
- Provide examples of existing patterns when requesting new implementations
- Share recent git commit messages to understand latest changes

### Context Window Optimization
- Prioritize the most relevant context for the current task
- Use @filename references to include specific files
- Break large contexts into focused, task-specific chunks
- Update context references as the project evolves

## Red Flags - Context Loss Indicators
- AI suggests patterns that conflict with existing code
- New implementations ignore established conventions
- Proposed solutions don't integrate with existing architecture
- Code suggestions require significant refactoring of existing functionality

## Recovery Protocol
When context loss is detected:
1. **Stop development** and review CONTEXT.md
2. **Analyze existing codebase** for established patterns
3. **Update context documentation** with missing information
4. **Restart task** with proper context provided
5. **Test integration** with existing code before proceeding

@@ -1,10 +1,10 @@
 ---
-description:
+description: Creating PRD for a project or specific task/function
 globs:
 alwaysApply: false
 ---
 

.cursor/rules/documentation.mdc (new file, 244 lines)
@@ -0,0 +1,244 @@
---
description: Documentation standards for code, architecture, and development decisions
globs:
alwaysApply: false
---

# Rule: Documentation Standards

## Goal
Maintain comprehensive, up-to-date documentation that supports development, onboarding, and long-term maintenance of the codebase.

## Documentation Hierarchy

### 1. Project Level Documentation (in ./docs/)
- **README.md**: Project overview, setup instructions, basic usage
- **CONTEXT.md**: Current project state, architecture decisions, patterns
- **CHANGELOG.md**: Version history and significant changes
- **CONTRIBUTING.md**: Development guidelines and processes
- **API.md**: API endpoints, request/response formats, authentication

### 2. Module Level Documentation (in ./docs/modules/)
- **[module-name].md**: Purpose, public interfaces, usage examples
- **dependencies.md**: External dependencies and their purposes
- **architecture.md**: Module relationships and data flow

### 3. Code Level Documentation
- **Docstrings**: Function and class documentation
- **Inline comments**: Complex logic explanations
- **Type hints**: Clear parameter and return types
- **README files**: Directory-specific instructions

## Documentation Standards

### Code Documentation
```python
def process_user_data(user_id: str, data: dict) -> UserResult:
    """
    Process and validate user data before storage.

    Args:
        user_id: Unique identifier for the user
        data: Dictionary containing user information to process

    Returns:
        UserResult: Processed user data with validation status

    Raises:
        ValidationError: When user data fails validation
        DatabaseError: When storage operation fails

    Example:
        >>> result = process_user_data("123", {"name": "John", "email": "john@example.com"})
        >>> print(result.status)
        'valid'
    """
```

### API Documentation Format
```markdown
### POST /api/users

Create a new user account.

**Request:**
```json
{
  "name": "string (required)",
  "email": "string (required, valid email)",
  "age": "number (optional, min: 13)"
}
```

**Response (201):**
```json
{
  "id": "uuid",
  "name": "string",
  "email": "string",
  "created_at": "iso_datetime"
}
```

**Errors:**
- 400: Invalid input data
- 409: Email already exists
```

### Architecture Decision Records (ADRs)
Document significant architecture decisions in `./docs/decisions/`:

```markdown
# ADR-001: Database Choice - PostgreSQL

## Status
Accepted

## Context
We need to choose a database for storing user data and application state.

## Decision
We will use PostgreSQL as our primary database.

## Consequences
**Positive:**
- ACID compliance ensures data integrity
- Rich query capabilities with SQL
- Good performance for our expected load

**Negative:**
- More complex setup than simpler alternatives
- Requires SQL knowledge from team members

## Alternatives Considered
- MongoDB: Rejected due to consistency requirements
- SQLite: Rejected due to scalability needs
```

## Documentation Maintenance

### When to Update Documentation

#### Always Update:
- **API changes**: Any modification to public interfaces
- **Architecture changes**: New patterns, data structures, or workflows
- **Configuration changes**: Environment variables, deployment settings
- **Dependencies**: Adding, removing, or upgrading packages
- **Business logic changes**: Core functionality modifications

#### Update Weekly:
- **CONTEXT.md**: Current development status and priorities
- **Known issues**: Bug reports and workarounds
- **Performance notes**: Bottlenecks and optimization opportunities

#### Update per Release:
- **CHANGELOG.md**: User-facing changes and improvements
- **Version documentation**: Breaking changes and migration guides
- **Examples and tutorials**: Keep sample code current

### Documentation Quality Checklist

#### Completeness
- [ ] Purpose and scope clearly explained
- [ ] All public interfaces documented
- [ ] Examples provided for complex usage
- [ ] Error conditions and handling described
- [ ] Dependencies and requirements listed

#### Accuracy
- [ ] Code examples are tested and working
- [ ] Links point to correct locations
- [ ] Version numbers are current
- [ ] Screenshots reflect current UI

#### Clarity
- [ ] Written for the intended audience
- [ ] Technical jargon is explained
- [ ] Step-by-step instructions are clear
- [ ] Visual aids used where helpful

## Documentation Automation

### Auto-Generated Documentation
- **API docs**: Generate from code annotations
- **Type documentation**: Extract from type hints
- **Module dependencies**: Auto-update from imports
- **Test coverage**: Include coverage reports

### Documentation Testing
```python
# Test that code examples in documentation work
def test_documentation_examples():
    """Verify code examples in docs actually work."""
    # Test examples from README.md
    # Test API examples from docs/API.md
    # Test configuration examples
```

## Documentation Templates

### New Module Documentation Template
```markdown
# Module: [Name]

## Purpose
Brief description of what this module does and why it exists.

## Public Interface
### Functions
- `function_name(params)`: Description and example

### Classes
- `ClassName`: Purpose and basic usage

## Usage Examples
```python
# Basic usage example
```

## Dependencies
- Internal: List of internal modules this depends on
- External: List of external packages required

## Testing
How to run tests for this module.

## Known Issues
Current limitations or bugs.
```

### API Endpoint Template
```markdown
### [METHOD] /api/endpoint

Brief description of what this endpoint does.

**Authentication:** Required/Optional
**Rate Limiting:** X requests per minute

**Request:**
- Headers required
- Body schema
- Query parameters

**Response:**
- Success response format
- Error response format
- Status codes

**Example:**
Working request/response example
```

## Review and Maintenance Process

### Documentation Review
- Include documentation updates in code reviews
- Verify examples still work with code changes
- Check for broken links and outdated information
- Ensure consistency with current implementation

### Regular Audits
- Monthly review of documentation accuracy
- Quarterly assessment of documentation completeness
- Annual review of documentation structure and organization

.cursor/rules/enhanced-task-list.mdc (new file, 207 lines)
@@ -0,0 +1,207 @@
---
description: Enhanced task list management with quality gates and iterative workflow integration
globs:
alwaysApply: false
---

# Rule: Enhanced Task List Management

## Goal
Manage task lists with integrated quality gates and iterative workflow to prevent context loss and ensure sustainable development.

## Task Implementation Protocol

### Pre-Implementation Check
Before starting any sub-task:
- [ ] **Context Review**: Have you reviewed CONTEXT.md and relevant documentation?
- [ ] **Pattern Identification**: Do you understand existing patterns to follow?
- [ ] **Integration Planning**: Do you know how this will integrate with existing code?
- [ ] **Size Validation**: Is this task small enough (≤50 lines, ≤250 lines per file)?

### Implementation Process
1. **One sub-task at a time**: Do **NOT** start the next sub‑task until you ask the user for permission and they say "yes" or "y"
2. **Step-by-step execution**:
   - Plan the approach in bullet points
   - Wait for approval
   - Implement the specific sub-task
   - Test the implementation
   - Update documentation if needed
3. **Quality validation**: Run through the code review checklist before marking complete

### Completion Protocol
When you finish a **sub‑task**:
1. **Immediate marking**: Change `[ ]` to `[x]`
2. **Quality check**: Verify the implementation meets quality standards
3. **Integration test**: Ensure new code works with existing functionality
4. **Documentation update**: Update relevant files if needed
5. **Parent task check**: If **all** subtasks underneath a parent task are now `[x]`, also mark the **parent task** as completed
6. **Stop and wait**: Get user approval before proceeding to next sub-task

## Enhanced Task List Structure

### Task File Header
```markdown
# Task List: [Feature Name]

**Source PRD**: `prd-[feature-name].md`
**Status**: In Progress / Complete / Blocked
**Context Last Updated**: [Date]
**Architecture Review**: Required / Complete / N/A

## Quick Links
- [Context Documentation](./CONTEXT.md)
- [Architecture Guidelines](./docs/architecture.md)
- [Related Files](#relevant-files)
```

### Task Format with Quality Gates
```markdown
- [ ] 1.0 Parent Task Title
  - **Quality Gate**: Architecture review required
  - **Dependencies**: List any dependencies
  - [ ] 1.1 [Sub-task description 1.1]
    - **Size estimate**: [Small/Medium/Large]
    - **Pattern reference**: [Reference to existing pattern]
    - **Test requirements**: [Unit/Integration/Both]
  - [ ] 1.2 [Sub-task description 1.2]
    - **Integration points**: [List affected components]
    - **Risk level**: [Low/Medium/High]
```

## Relevant Files Management

### Enhanced File Tracking
```markdown
## Relevant Files

### Implementation Files
- `path/to/file1.ts` - Brief description of purpose and role
  - **Status**: Created / Modified / Needs Review
  - **Last Modified**: [Date]
  - **Review Status**: Pending / Approved / Needs Changes

### Test Files
- `path/to/file1.test.ts` - Unit tests for file1.ts
  - **Coverage**: [Percentage or status]
  - **Last Run**: [Date and result]

### Documentation Files
- `docs/module-name.md` - Module documentation
  - **Status**: Up to date / Needs update / Missing
  - **Last Updated**: [Date]

### Configuration Files
- `config/setting.json` - Configuration changes
  - **Environment**: [Dev/Staging/Prod affected]
  - **Backup**: [Location of backup]
```

## Task List Maintenance

### During Development
1. **Regular updates**: Update task status after each significant change
2. **File tracking**: Add new files as they are created or modified
3. **Dependency tracking**: Note when new dependencies between tasks emerge
4. **Risk assessment**: Flag tasks that become more complex than anticipated

### Quality Checkpoints
At 25%, 50%, 75%, and 100% completion:
- [ ] **Architecture alignment**: Code follows established patterns
- [ ] **Performance impact**: No significant performance degradation
- [ ] **Security review**: No security vulnerabilities introduced
- [ ] **Documentation current**: All changes are documented

### Weekly Review Process
1. **Completion assessment**: What percentage of tasks are actually complete?
2. **Quality assessment**: Are completed tasks meeting quality standards?
3. **Process assessment**: Is the iterative workflow being followed?
4. **Risk assessment**: Are there emerging risks or blockers?

## Task Status Indicators

### Status Levels
- `[ ]` **Not Started**: Task not yet begun
- `[~]` **In Progress**: Currently being worked on
- `[?]` **Blocked**: Waiting for dependencies or decisions
- `[!]` **Needs Review**: Implementation complete but needs quality review
- `[x]` **Complete**: Finished and quality approved

### Quality Indicators
- ✅ **Quality Approved**: Passed all quality gates
- ⚠️ **Quality Concerns**: Has issues but functional
- ❌ **Quality Failed**: Needs rework before approval
- 🔄 **Under Review**: Currently being reviewed

### Integration Status
- 🔗 **Integrated**: Successfully integrated with existing code
- 🔧 **Integration Issues**: Problems with existing code integration
- ⏳ **Integration Pending**: Ready for integration testing

## Emergency Procedures

### When Tasks Become Too Complex
If a sub-task grows beyond expected scope:
1. **Stop implementation** immediately
2. **Document current state** and what was discovered
3. **Break down** the task into smaller pieces
4. **Update task list** with new sub-tasks
5. **Get approval** for the new breakdown before proceeding

### When Context is Lost
If AI seems to lose track of project patterns:
1. **Pause development**
2. **Review CONTEXT.md** and recent changes
3. **Update context documentation** with current state
4. **Restart** with explicit pattern references
5. **Reduce task size** until context is re-established

### When Quality Gates Fail
If implementation doesn't meet quality standards:
1. **Mark task** with `[!]` status
2. **Document specific issues** found
3. **Create remediation tasks** if needed
4. **Don't proceed** until quality issues are resolved

## AI Instructions Integration

### Context Awareness Commands
```markdown
**Before starting any task, run these checks:**
1. @CONTEXT.md - Review current project state
2. @architecture.md - Understand design principles
3. @code-review.md - Know quality standards
4. Look at existing similar code for patterns
```

### Quality Validation Commands
```markdown
**After completing any sub-task:**
1. Run code review checklist
2. Test integration with existing code
3. Update documentation if needed
4. Mark task complete only after quality approval
```

### Workflow Commands
```markdown
**For each development session:**
1. Review incomplete tasks and their status
2. Identify next logical sub-task to work on
3. Check dependencies and blockers
4. Follow iterative workflow process
5. Update task list with progress and findings
```

## Success Metrics

### Daily Success Indicators
- Tasks are completed according to quality standards
- No sub-tasks are started without completing previous ones
- File tracking remains accurate and current
- Integration issues are caught early

### Weekly Success Indicators
- Overall task completion rate is sustainable
- Quality issues are decreasing over time
- Context loss incidents are rare
- Team confidence in codebase remains high

@@ -1,5 +1,5 @@
 ---
-description:
+description: Generate a task list or TODO for a user requirement or implementation.
 globs:
 alwaysApply: false
 ---

.cursor/rules/iterative-workflow.mdc (new file, 236 lines)
@@ -0,0 +1,236 @@
---
description: Iterative development workflow for AI-assisted coding
globs:
alwaysApply: false
---

# Rule: Iterative Development Workflow

## Goal
Establish a structured, iterative development process that prevents the chaos and complexity that can arise from uncontrolled AI-assisted development.

## Development Phases

### Phase 1: Planning and Design
**Before writing any code:**

1. **Understand the Requirement**
   - Break down the task into specific, measurable objectives
   - Identify existing code patterns that should be followed
   - List dependencies and integration points
   - Define acceptance criteria

2. **Design Review**
   - Propose approach in bullet points
   - Wait for explicit approval before proceeding
   - Consider how the solution fits existing architecture
   - Identify potential risks and mitigation strategies

### Phase 2: Incremental Implementation
**One small piece at a time:**

1. **Micro-Tasks** (≤ 50 lines each)
   - Implement one function or small class at a time
   - Test immediately after implementation
   - Ensure integration with existing code
   - Document decisions and patterns used

2. **Validation Checkpoints**
   - After each micro-task, verify it works correctly
   - Check that it follows established patterns
   - Confirm it integrates cleanly with existing code
   - Get approval before moving to next micro-task

### Phase 3: Integration and Testing
**Ensuring system coherence:**

1. **Integration Testing**
   - Test new code with existing functionality
   - Verify no regressions in existing features
   - Check performance impact
   - Validate error handling

2. **Documentation Update**
   - Update relevant documentation
   - Record any new patterns or decisions
   - Update context files if architecture changed

## Iterative Prompting Strategy

### Step 1: Context Setting
```
Before implementing [feature], help me understand:
1. What existing patterns should I follow?
2. What existing functions/classes are relevant?
3. How should this integrate with [specific existing component]?
4. What are the potential architectural impacts?
```

### Step 2: Plan Creation
```
Based on the context, create a detailed plan for implementing [feature]:
1. Break it into micro-tasks (≤50 lines each)
2. Identify dependencies and order of implementation
3. Specify integration points with existing code
4. List potential risks and mitigation strategies

Wait for my approval before implementing.
```

### Step 3: Incremental Implementation
```
Implement only the first micro-task: [specific task]
- Use existing patterns from [reference file/function]
- Keep it under 50 lines
- Include error handling
- Add appropriate tests
- Explain your implementation choices

Stop after this task and wait for approval.
```

## Quality Gates

### Before Each Implementation
- [ ] **Purpose is clear**: Can explain what this piece does and why
- [ ] **Pattern is established**: Following existing code patterns
- [ ] **Size is manageable**: Implementation is small enough to understand completely
- [ ] **Integration is planned**: Know how it connects to existing code

### After Each Implementation
- [ ] **Code is understood**: Can explain every line of implemented code
- [ ] **Tests pass**: All existing and new tests are passing
- [ ] **Integration works**: New code works with existing functionality
- [ ] **Documentation updated**: Changes are reflected in relevant documentation

### Before Moving to Next Task
- [ ] **Current task complete**: All acceptance criteria met
- [ ] **No regressions**: Existing functionality still works
- [ ] **Clean state**: No temporary code or debugging artifacts
- [ ] **Approval received**: Explicit go-ahead for next task
- [ ] **Documentation updated**: If relevant changes to the module were made.

## Anti-Patterns to Avoid

### Large Block Implementation
**Don't:**
```
Implement the entire user management system with authentication,
CRUD operations, and email notifications.
```

**Do:**
```
First, implement just the User model with basic fields.
Stop there and let me review before continuing.
```

### Context Loss
**Don't:**
```
Create a new authentication system.
```

**Do:**
```
Looking at the existing auth patterns in auth.py, implement
password validation following the same structure as the
existing email validation function.
```

### Over-Engineering
**Don't:**
```
Build a flexible, extensible user management framework that
can handle any future requirements.
```

**Do:**
```
Implement user creation functionality that matches the existing
pattern in customer.py, focusing only on the current requirements.
```

## Progress Tracking

### Task Status Indicators
- 🔄 **In Planning**: Requirements gathering and design
- ⏳ **In Progress**: Currently implementing
- ✅ **Complete**: Implemented, tested, and integrated
- 🚫 **Blocked**: Waiting for decisions or dependencies
- 🔧 **Needs Refactor**: Working but needs improvement

### Weekly Review Process
1. **Progress Assessment**
   - What was completed this week?
   - What challenges were encountered?
   - How well did the iterative process work?

2. **Process Adjustment**
   - Were task sizes appropriate?
   - Did context management work effectively?
   - What improvements can be made?

3. **Architecture Review**
   - Is the code remaining maintainable?
   - Are patterns staying consistent?
   - Is technical debt accumulating?

## Emergency Procedures

### When Things Go Wrong
If development becomes chaotic or problematic:

1. **Stop Development**
   - Don't continue adding to the problem
   - Take time to assess the situation
   - Don't rush to "fix" with more AI-generated code

2. **Assess the Situation**
   - What specific problems exist?
   - How far has the code diverged from established patterns?
   - What parts are still working correctly?

3. **Recovery Process**
   - Roll back to last known good state
   - Update context documentation with lessons learned
   - Restart with smaller, more focused tasks
   - Get explicit approval for each step of recovery

### Context Recovery
When AI seems to lose track of project patterns:

1. **Context Refresh**
   - Review and update CONTEXT.md
   - Include examples of current code patterns
   - Clarify architectural decisions

2. **Pattern Re-establishment**
   - Show AI examples of existing, working code
   - Explicitly state patterns to follow
   - Start with very small, pattern-matching tasks

3. **Gradual Re-engagement**
   - Begin with simple, low-risk tasks
   - Verify pattern adherence at each step
   - Gradually increase task complexity as consistency returns

## Success Metrics

### Short-term (Daily)
- Code is understandable and well-integrated
- No major regressions introduced
- Development velocity feels sustainable
- Team confidence in codebase remains high

### Medium-term (Weekly)
- Technical debt is not accumulating
- New features integrate cleanly
- Development patterns remain consistent
- Documentation stays current

### Long-term (Monthly)
- Codebase remains maintainable as it grows
- New team members can understand and contribute
- AI assistance enhances rather than hinders development
- Architecture remains clean and purposeful

@@ -3,6 +3,21 @@ description:
 globs:
 alwaysApply: true
 ---
+- use UV for package management
+- ./docs folder is for the documentation and the module descriptions; update related files if logic changed
+
+# Rule: Project specific rules
+
+## Goal
+Unify the project structure and interaction with tools and the console
+
+### System tools
+- **ALWAYS** use UV for package management
+- **ALWAYS** use Windows PowerShell commands for the terminal
+
+### Coding patterns
+- **ALWAYS** check the arguments and methods before use to avoid errors with wrong parameters or names
+- If in doubt, check the [CONTEXT.md](mdc:CONTEXT.md) file and [architecture.md](mdc:docs/architecture.md)
+- **PREFER** the ORM pattern for databases, with SQLAlchemy.
+
+### Testing
+- Use UV for tests, in the format *uv run pytest [filename]*

.cursor/rules/refactoring.mdc (new file, 237 lines)
@@ -0,0 +1,237 @@
---
description: Code refactoring and technical debt management for AI-assisted development
globs:
alwaysApply: false
---

# Rule: Code Refactoring and Technical Debt Management

## Goal
Guide AI in systematic code refactoring to improve maintainability, reduce complexity, and prevent technical debt accumulation in AI-assisted development projects.

## When to Apply This Rule
- Code complexity has increased beyond manageable levels
- Duplicate code patterns are detected
- Performance issues are identified
- New features are difficult to integrate
- Code review reveals maintainability concerns
- Weekly technical debt assessment indicates refactoring needs

## Pre-Refactoring Assessment

Before starting any refactoring, the AI MUST:

1. **Context Analysis:**
   - Review existing `CONTEXT.md` for architectural decisions
   - Analyze current code patterns and conventions
   - Identify all files that will be affected (search the codebase for usages)
   - Check for existing tests that verify current behavior

2. **Scope Definition:**
   - Clearly define what will and will not be changed
   - Identify the specific refactoring pattern to apply
   - Estimate the blast radius of changes
   - Plan rollback strategy if needed

3. **Documentation Review:**
   - Check `./docs/` for relevant module documentation
   - Review any existing architectural diagrams
   - Identify dependencies and integration points
   - Note any known constraints or limitations

## Refactoring Process

### Phase 1: Planning and Safety
1. **Create Refactoring Plan:**
   - Document the current state and desired end state
   - Break refactoring into small, atomic steps
   - Identify tests that must pass throughout the process
   - Plan verification steps for each change

2. **Establish Safety Net:**
   - Ensure comprehensive test coverage exists
   - If tests are missing, create them BEFORE refactoring
   - Document current behavior that must be preserved
   - Create backup of current implementation approach

3. **Get Approval:**
   - Present the refactoring plan to the user
   - Wait for explicit "Go" or "Proceed" confirmation
   - Do NOT start refactoring without approval

### Phase 2: Incremental Implementation
4. **One Change at a Time:**
   - Implement ONE refactoring step per iteration
   - Run tests after each step to ensure nothing breaks
   - Update documentation if interfaces change
   - Mark progress in the refactoring plan

5. **Verification Protocol:**
   - Run all relevant tests after each change
   - Verify functionality works as expected
   - Check performance hasn't degraded
   - Ensure no new linting or type errors

6. **User Checkpoint:**
   - After each significant step, pause for user review
   - Present what was changed and current status
   - Wait for approval before continuing
   - Address any concerns before proceeding

### Phase 3: Completion and Documentation
7. **Final Verification:**
   - Run full test suite to ensure nothing is broken
   - Verify all original functionality is preserved
   - Check that new code follows project conventions
   - Confirm performance is maintained or improved

8. **Documentation Update:**
   - Update `CONTEXT.md` with new patterns/decisions
   - Update module documentation in `./docs/`
   - Document any new conventions established
   - Note lessons learned for future refactoring

## Common Refactoring Patterns

### Extract Method/Function
```
WHEN: Functions/methods exceed 50 lines or have multiple responsibilities
HOW:
1. Identify logical groupings within the function
2. Extract each group into a well-named helper function
3. Ensure each function has a single responsibility
4. Verify tests still pass
```
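
A compressed before/after sketch of the pattern (function names and the comma-separated input format are invented for illustration):

```python
# Before: one function that parses, validates, and assembles an order
def load_order(raw: str) -> dict:
    symbol, amount = raw.split(",")
    order = {"symbol": symbol.strip().upper(), "amount": float(amount)}
    if order["amount"] <= 0:
        raise ValueError("amount must be positive")
    return order

# After: each helper has a single responsibility
def parse_order(raw: str) -> dict:
    symbol, amount = raw.split(",")
    return {"symbol": symbol.strip().upper(), "amount": float(amount)}

def validate_order(order: dict) -> dict:
    if order["amount"] <= 0:
        raise ValueError("amount must be positive")
    return order

def load_order(raw: str) -> dict:
    return validate_order(parse_order(raw))
```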

### Extract Module/Class
```
WHEN: Files exceed 250 lines or handle multiple concerns
HOW:
1. Identify cohesive functionality groups
2. Create new files for each group
3. Move related functions/classes together
4. Update imports and dependencies
5. Verify module boundaries are clean
```

### Eliminate Duplication
```
WHEN: Similar code appears in multiple places
HOW:
1. Identify the common pattern or functionality
2. Extract to a shared utility function or module
3. Update all usage sites to use the shared code
4. Ensure the abstraction is not over-engineered
```

### Improve Data Structures
```
WHEN: Complex nested objects or unclear data flow
HOW:
1. Define clear interfaces/types for data structures
2. Create transformation functions between different representations
3. Ensure data flow is unidirectional where possible
4. Add validation at boundaries
```
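
A minimal dataclass-based sketch of this pattern (the `Ticker` type and payload fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ticker:
    """Explicit shape for ticker data instead of a nested dict."""
    symbol: str
    price: float

def ticker_from_api(payload: dict) -> Ticker:
    # Transformation function with validation at the boundary
    price = float(payload["last"])
    if price <= 0:
        raise ValueError(f"invalid price for {payload.get('symbol')}")
    return Ticker(symbol=payload["symbol"], price=price)
```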

### Reduce Coupling
```
WHEN: Modules are tightly interconnected
HOW:
1. Identify dependencies between modules
2. Extract interfaces for external dependencies
3. Use dependency injection where appropriate
4. Ensure modules can be tested in isolation
```

## Quality Gates

Every refactoring must pass these gates:

### Technical Quality
- [ ] All existing tests pass
- [ ] No new linting errors introduced
- [ ] Code follows established project conventions
- [ ] No performance regression detected
- [ ] File sizes remain under 250 lines
- [ ] Function sizes remain under 50 lines

### Maintainability
- [ ] Code is more readable than before
- [ ] Duplicated code has been reduced
- [ ] Module responsibilities are clearer
- [ ] Dependencies are explicit and minimal
- [ ] Error handling is consistent

### Documentation
- [ ] Public interfaces are documented
- [ ] Complex logic has explanatory comments
- [ ] Architectural decisions are recorded
- [ ] Examples are provided where helpful

## AI Instructions for Refactoring

1. **Always ask for permission** before starting any refactoring work
2. **Start with tests** - ensure comprehensive coverage before changing code
3. **Work incrementally** - make small changes and verify each step
4. **Preserve behavior** - functionality must remain exactly the same
5. **Update documentation** - keep all docs current with changes
6. **Follow conventions** - maintain consistency with existing codebase
7. **Stop and ask** if any step fails or produces unexpected results
8. **Explain changes** - clearly communicate what was changed and why

## Anti-Patterns to Avoid

### Over-Engineering
- Don't create abstractions for code that isn't duplicated
- Avoid complex inheritance hierarchies
- Don't optimize prematurely

### Breaking Changes
- Never change public APIs without explicit approval
- Don't remove functionality, even if it seems unused
- Avoid changing behavior "while we're here"

### Scope Creep
- Stick to the defined refactoring scope
- Don't add new features during refactoring
- Resist the urge to "improve" unrelated code

## Success Metrics

Track these metrics to ensure refactoring effectiveness:

### Code Quality
- Reduced cyclomatic complexity
- Lower code duplication percentage
- Improved test coverage
- Fewer linting violations

### Developer Experience
- Faster time to understand code
- Easier integration of new features
- Reduced bug introduction rate
- Higher developer confidence in changes

### Maintainability
- Clearer module boundaries
- More predictable behavior
- Easier debugging and troubleshooting
- Better performance characteristics

## Output Files

When refactoring is complete, update:
- `refactoring-log-[date].md` - Document what was changed and why
- `CONTEXT.md` - Update with new patterns and decisions
- `./docs/` - Update relevant module documentation
- Task lists - Mark refactoring tasks as complete

## Final Verification

Before marking refactoring complete:
1. Run full test suite and verify all tests pass
2. Check that code follows all project conventions
3. Verify documentation is up to date
4. Confirm user is satisfied with the results
5. Record lessons learned for future refactoring efforts
@ -1,5 +1,5 @@
---
description:
description: TODO list task implementation
globs:
alwaysApply: false
---

23
.env
@ -1,15 +1,15 @@
# Database Configuration
POSTGRES_DB=dashboard
POSTGRES_USER=dashboard
POSTGRES_PASSWORD=dashboard123
POSTGRES_PASSWORD=sdkjfh534^jh
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
DATABASE_URL=postgresql://dashboard:dashboard123@localhost:5432/dashboard
POSTGRES_PORT=5434
DATABASE_URL=postgresql://dashboard:sdkjfh534^jh@localhost:5434/dashboard

# Redis Configuration
REDIS_HOST=localhost
REDIS_PORT=6379
REDIS_PASSWORD=
REDIS_PASSWORD=redis987secure

# OKX API Configuration
OKX_API_KEY=your_okx_api_key_here
@ -29,10 +29,21 @@ DASH_DEBUG=true

# Bot Configuration
MAX_CONCURRENT_BOTS=5
BOT_UPDATE_INTERVAL=2  # seconds
BOT_UPDATE_INTERVAL=2
DEFAULT_VIRTUAL_BALANCE=10000

# Data Configuration
MARKET_DATA_SYMBOLS=BTC-USDT,ETH-USDT,LTC-USDT
HISTORICAL_DATA_DAYS=30
CHART_UPDATE_INTERVAL=2000  # milliseconds
CHART_UPDATE_INTERVAL=2000

# Logging
VERBOSE_LOGGING=true
LOG_CLEANUP=true
LOG_MAX_FILES=30

# Health monitoring
DEFAULT_HEALTH_CHECK_INTERVAL=30
MAX_SILENCE_DURATION=300
MAX_RECONNECT_ATTEMPTS=5
RECONNECT_DELAY=5
8
.gitignore
vendored
@ -1 +1,9 @@
*.pyc
.env
.env.local
.env.*
database/migrations/versions/*

# Exclude log files
logs/
*.log

92
CONTEXT.md
Normal file
@ -0,0 +1,92 @@
# Project Context: Simplified Crypto Trading Bot Platform

This document provides a comprehensive overview of the project's architecture, technology stack, conventions, and current implementation status, following the guidelines in `context-management.md`.

## 1. Architecture Overview

The platform is a **monolithic application** built with Python, designed for rapid development and internal testing of crypto trading strategies. The architecture is modular, with clear separation between components to facilitate a future migration to microservices if needed.

### Core Components
- **Data Collection Service**: A standalone, multi-process service responsible for collecting real-time market data from exchanges (currently OKX). It uses a robust `BaseDataCollector` abstraction and specific exchange implementations (e.g., `OKXCollector`). Data is processed, aggregated into OHLCV candles, and stored.
- **Database**: PostgreSQL with the TimescaleDB extension (though currently using a "clean" schema without hypertables for simplicity). It stores market data, bot configurations, trading signals, and performance metrics. SQLAlchemy is used as the ORM.
- **Real-time Messaging**: Redis is used for pub/sub messaging, intended for real-time data distribution between components (though its use in the dashboard is currently deferred).
- **Dashboard & API**: A Dash application serves as the main user interface for visualization, bot management, and system monitoring. The underlying Flask server can be extended for REST APIs.
- **Strategy Engine & Bot Manager**: (Not yet implemented) This component will be responsible for executing trading logic, managing bot lifecycles, and tracking virtual portfolios.
- **Backtesting Engine**: (Not yet implemented) This will provide capabilities to test strategies against historical data.

### Data Flow
1. The `DataCollectionService` connects to the OKX WebSocket API.
2. Raw trade data is received and processed by `OKXDataProcessor`.
3. Trades are aggregated into OHLCV candles (1m, 5m, etc.); see the sketch after this list.
4. Both raw trade data and processed OHLCV candles are stored in the PostgreSQL database.
5. (Future) The Strategy Engine will consume OHLCV data to generate trading signals.
6. The Dashboard reads data from the database to provide visualizations and system health monitoring.

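As a rough illustration of step 3 (a pandas sketch, not the actual `OKXDataProcessor` implementation):

```python
import pandas as pd


def trades_to_ohlcv(trades: pd.DataFrame, rule: str = "1min") -> pd.DataFrame:
    """Aggregate raw trades (timestamp, price, size) into OHLCV candles."""
    # Assumes 'timestamp' is already a datetime column.
    trades = trades.set_index("timestamp").sort_index()
    ohlcv = trades["price"].resample(rule).ohlc()
    ohlcv["volume"] = trades["size"].resample(rule).sum()
    return ohlcv.dropna()
```
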
## 2. Technology Stack

- **Backend**: Python 3.10+
- **Web Framework**: Dash with Dash Bootstrap Components for the frontend UI.
- **Database**: PostgreSQL 14+. SQLAlchemy for ORM. Alembic for migrations.
- **Messaging**: Redis for pub/sub.
- **Data & Numerics**: pandas for data manipulation (especially in backtesting).
- **Package Management**: `uv`
- **Containerization**: Docker and Docker Compose for setting up the development environment (PostgreSQL, Redis, etc.).

## 3. Coding Conventions

- **Modular Design**: Code is organized into modules with a clear purpose (e.g., `data`, `database`, `dashboard`). See `architecture.md` for more details.
- **Naming Conventions**:
  - **Classes**: `PascalCase` (e.g., `MarketData`, `BaseDataCollector`).
  - **Functions & Methods**: `snake_case` (e.g., `get_system_health_layout`, `connect`).
  - **Variables & Attributes**: `snake_case` (e.g., `exchange_name`, `_ws_client`).
  - **Constants**: `UPPER_SNAKE_CASE` (e.g., `MAX_RECONNECT_ATTEMPTS`).
  - **Modules**: `snake_case.py` (e.g., `collector_manager.py`).
  - **Private Attributes/Methods**: Use a single leading underscore `_` (e.g., `_process_message`). Avoid double underscores unless name mangling is intended.
- **File Organization & Code Structure**:
  - **Directory Structure**: Top-level directories separate major concerns (`data`, `database`, `dashboard`, `strategies`). Sub-packages should be used for further organization (e.g., `data/exchanges/okx`).
  - **Module Structure**: Within a Python module (`.py` file), the preferred order is:
    1. Module-level docstring explaining its purpose.
    2. Imports (see pattern below).
    3. Module-level constants (`ALL_CAPS`).
    4. Custom exception classes.
    5. Data classes or simple data structures.
    6. Helper functions (if any, typically private `_helper()`).
    7. Main business logic classes.
  - **`__init__.py`**: Use `__init__.py` files to define a package's public API and simplify imports for consumers of the package.
- **Import/Export Patterns**:
  - **Grouping**: Imports should be grouped in the following order, with a blank line between each group:
    1. Standard library imports (e.g., `asyncio`, `datetime`).
    2. Third-party library imports (e.g., `dash`, `sqlalchemy`).
    3. Local application imports (e.g., `from utils.logger import get_logger`).
  - **Style**: Use absolute imports (`from data.base_collector import ...`) over relative imports (`from ..base_collector import ...`) for better readability and to avoid ambiguity.
  - **Exports**: To create a clean public API for a package, import the desired classes/functions into the `__init__.py` file. This allows users to import directly from the package (e.g., `from data.exchanges import ExchangeFactory`) instead of from the specific submodule.
- **Abstract Base Classes**: Used to define common interfaces, as seen in `data/base_collector.py`.
- **Configuration**: Bot and strategy parameters are managed via JSON files in `config/`. Centralized application settings are handled by `config/settings.py`.
- **Logging**: A unified logging system is available in `utils/logger.py` and should be used across all components for consistent output.
- **Type Hinting**: Mandatory for all function signatures (parameters and return values) for clarity and static analysis.
- **Error Handling**: Custom, specific exceptions should be defined (e.g., `DataCollectorError`). Use `try...except` blocks to handle potential failures gracefully and provide informative error messages.
- **Database Access**: All database operations must go through the repository layer, accessible via `database.operations.get_database_operations()`. The repositories exclusively use the **SQLAlchemy ORM** for all queries to ensure type safety, maintainability, and consistency. Raw SQL is strictly forbidden in the repository layer to maintain database-agnostic flexibility. A short module skeleton illustrating these conventions follows this list.

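A minimal module skeleton tying these conventions together (`get_database_operations` and `get_latest_candle` are the repository entry points used elsewhere in the codebase; the module itself is illustrative):

```python
"""Illustrative module: fetch the latest close price for a symbol."""

# 1. Standard library imports
from typing import Optional

# 2. Third-party imports (none needed here)

# 3. Local application imports
from database.operations import get_database_operations
from utils.logger import get_logger

DEFAULT_TIMEFRAME = "1m"  # module-level constant

logger = get_logger("example_module")


def get_latest_close(symbol: str, timeframe: str = DEFAULT_TIMEFRAME) -> Optional[float]:
    """Return the latest stored close price, or None if no candle exists."""
    db = get_database_operations(logger)
    candle = db.market_data.get_latest_candle(symbol, timeframe)
    return float(candle['close']) if candle else None
```
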
## 4. Current Implementation Status

### Completed Features
- **Database Foundation**: The database schema is fully defined in `database/models.py` and `database/schema_clean.sql`, with all necessary tables, indexes, and relationships. Database connection management is robust.
- **Data Collection System**: A highly robust and asynchronous data collection service is in place. It supports OKX, handles WebSocket connections, processes data, aggregates OHLCV candles, and stores data reliably. It features health monitoring and automatic restarts.
- **Basic Dashboard**: A functional dashboard exists.
- **System Health Monitoring**: A comprehensive page shows the real-time status of the data collection service, database, Redis, and system performance (CPU/memory).
- **Data Visualization**: Price charts with technical indicator overlays are implemented.

### Work in Progress / To-Do
The core business logic of the application is yet to be implemented. The main remaining tasks are:
- **Strategy Engine and Bot Management (Task Group 4.0)**:
  - Designing the base strategy interface.
  - Implementing bot lifecycle management (create, run, stop).
  - Signal generation and virtual portfolio tracking.
- **Advanced Dashboard Features (Task Group 5.0)**:
  - Building the UI for managing bots and configuring strategies.
- **Backtesting Engine (Task Group 6.0)**:
  - Implementing the engine to test strategies on historical data.
- **Real-Time Trading Simulation (Task Group 7.0)**:
  - Executing virtual trades based on signals.

The project has a solid foundation. The next phase of development should focus on implementing the trading logic and user-facing bot management features.
131
README.md
@ -1,118 +1,53 @@
# Crypto Trading Bot Dashboard
# Crypto Trading Bot Platform

A simple control dashboard for managing and monitoring multiple cryptocurrency trading bots simultaneously. Test different trading strategies in parallel using real OKX market data and virtual trading simulation.
A simplified crypto trading bot platform for strategy testing and development using real OKX market data and virtual trading simulation.

## Features
## Overview

- **Multi-Bot Management**: Run up to 5 trading bots simultaneously with different strategies
- **Real-time Monitoring**: Live price charts with bot buy/sell decision markers
- **Performance Tracking**: Monitor virtual balance, P&L, trade count, and timing for each bot
- **Backtesting**: Test strategies on historical data with accelerated execution
- **Simple Configuration**: JSON-based bot configuration files
- **Hot Reloading**: System remembers active bots and restores state on restart
This platform enables rapid strategy development with a monolithic architecture that supports multiple concurrent trading bots, real-time monitoring, and performance tracking.

## Key Features

- **Multi-Bot Management**: Run multiple trading bots simultaneously with different strategies.
- **Real-time Monitoring**: Live OHLCV charts with bot trading signals overlay.
- **Modular Chart System**: Advanced technical analysis with 26+ indicators and strategy presets.
- **Virtual Trading**: Simulation-first approach with realistic fee modeling.
- **JSON Configuration**: Easy strategy parameter testing without code changes.
- **Backtesting Engine**: Test strategies on historical market data (planned).
- **Crash Recovery**: Automatic bot restart and state restoration.

## Tech Stack

- **Backend**: Python with existing OKX, strategy, and trader modules
- **Frontend**: Plotly Dash for rapid development
- **Database**: PostgreSQL with SQLAlchemy ORM
- **Framework**: Python 3.10+ with Dash
- **Database**: PostgreSQL
- **Real-time Messaging**: Redis
- **Package Management**: UV
- **Development**: Docker for consistent environment
- **Containerization**: Docker

## Quick Start

### Prerequisites
For detailed instructions on setting up and running the project, please refer to the main documentation.

- Python 3.10+
- Docker and Docker Compose
- UV package manager
**➡️ [Go to the Full Documentation](./docs/README.md)**

### Development Setup

*Complete setup workflow*

```
python scripts/dev.py setup           # Setup environment and dependencies
python scripts/dev.py start           # Start Docker services
uv run python tests/test_setup.py     # Verify everything works
```

*Development workflow*
```
python scripts/dev.py dev-server      # Start with hot reload (recommended)
python scripts/dev.py run             # Start without hot reload
python scripts/dev.py status          # Check service status
python scripts/dev.py stop            # Stop services
```

*Dependency management*
```
uv add "new-package>=1.0.0"           # Add new dependency
uv sync --dev                         # Install all dependencies
```

## Project Structure

```
Dashboard/
├── app.py                    # Main Dash application
├── bot_manager.py            # Bot lifecycle management
├── database/
│   ├── models.py             # SQLAlchemy models
│   └── connection.py         # Database connection
├── data/
│   └── okx_integration.py    # OKX API integration
├── components/
│   ├── dashboard.py          # Dashboard components
│   └── charts.py             # Chart components
├── backtesting/
│   └── engine.py             # Backtesting framework
├── config/
│   └── bot_configs/          # Bot configuration files
├── strategies/               # Trading strategy modules
├── trader/                   # Virtual trading logic
└── docs/                     # Project documentation
```

```bash
# Quick setup for development
git clone <repository>
cd TCPDashboard
uv sync
cp env.template .env
docker-compose up -d
uv run python main.py
```

## Documentation

- **[Product Requirements](tasks/prd-crypto-bot-dashboard.md)** - Detailed project requirements and specifications
- **[Implementation Tasks](tasks/tasks-prd-crypto-bot-dashboard.md)** - Step-by-step development task list
- **[API Documentation](docs/)** - Module and API documentation
All project documentation is located in the `docs/` directory. The best place to start is the main documentation index.

## Bot Configuration

Create bot configuration files in `config/bot_configs/`:

```json
{
  "bot_id": "ema_crossover_01",
  "strategy": "EMA_Crossover",
  "parameters": {
    "fast_period": 12,
    "slow_period": 26,
    "symbol": "BTC-USDT"
  },
  "virtual_balance": 10000
}
```

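To illustrate how such a file might be loaded (a sketch; this loader is not part of the current codebase):

```python
import json
from pathlib import Path

REQUIRED_KEYS = {"bot_id", "strategy", "parameters", "virtual_balance"}


def load_bot_config(path: str | Path) -> dict:
    """Load and minimally validate a bot configuration file."""
    config = json.loads(Path(path).read_text())
    missing = REQUIRED_KEYS - config.keys()
    if missing:
        raise ValueError(f"{path}: missing required keys {sorted(missing)}")
    return config


# config = load_bot_config("config/bot_configs/ema_crossover_01.json")
```
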
## Development Status

This project is in active development. See the [task list](tasks/tasks-prd-crypto-bot-dashboard.md) for current implementation progress.

### Current Phase: Setup and Infrastructure
- [ ] Development environment setup
- [ ] Database schema design
- [ ] Basic bot management system
- [ ] OKX integration
- [ ] Dashboard UI implementation
- [ ] Backtesting framework

- **[Main Documentation](./docs/README.md)** - The central hub for all project documentation, including setup guides, architecture, and module details.
- **[Setup Guide](./docs/guides/setup.md)** - Complete setup instructions for new machines.
- **[Project Context](./CONTEXT.md)** - The single source of truth for the project's current state.

## Contributing

1. Check the [task list](tasks/tasks-prd-crypto-bot-dashboard.md) for available tasks
2. Follow the project's coding standards and architectural patterns
3. Use UV for package management
4. Write tests for new functionality
5. Update documentation when adding features

We welcome contributions! Please review the **[Contributing Guidelines](./docs/CONTRIBUTING.md)** and the **[Project Context](./CONTEXT.md)** before getting started.

138
alembic.ini
Normal file
@ -0,0 +1,138 @@
# A generic, single database configuration.

[alembic]
# path to migration scripts.
# this is typically a path given in POSIX (e.g. forward slashes)
# format, relative to the token %(here)s which refers to the location of this
# ini file
script_location = %(here)s/database/migrations

# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
# Uncomment the line below if you want the files to be prepended with date and time
# see https://alembic.sqlalchemy.org/en/latest/tutorial.html#editing-the-ini-file
# for all available tokens
file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s

# sys.path path, will be prepended to sys.path if present.
# defaults to the current working directory. for multiple paths, the path separator
# is defined by "path_separator" below.
prepend_sys_path = .

# timezone to use when rendering the date within the migration file
# as well as the filename.
# If specified, requires the python>=3.9 or backports.zoneinfo library and tzdata library.
# Any required deps can be installed by adding `alembic[tz]` to the pip requirements
# string value is passed to ZoneInfo()
# leave blank for localtime
timezone = UTC

# max length of characters to apply to the "slug" field
truncate_slug_length = 40

# set to 'true' to run the environment during
# the 'revision' command, regardless of autogenerate
# revision_environment = false

# set to 'true' to allow .pyc and .pyo files without
# a source .py file to be detected as revisions in the
# versions/ directory
# sourceless = false

# version location specification; This defaults
# to <script_location>/versions. When using multiple version
# directories, initial revisions must be specified with --version-path.
# The path separator used here should be the separator specified by "path_separator"
# below.
# version_locations = %(here)s/bar:%(here)s/bat:%(here)s/alembic/versions

# path_separator; This indicates what character is used to split lists of file
# paths, including version_locations and prepend_sys_path within configparser
# files such as alembic.ini.
# The default rendered in new alembic.ini files is "os", which uses os.pathsep
# to provide os-dependent path splitting.
#
# Note that in order to support legacy alembic.ini files, this default does NOT
# take place if path_separator is not present in alembic.ini. If this
# option is omitted entirely, fallback logic is as follows:
#
# 1. Parsing of the version_locations option falls back to using the legacy
#    "version_path_separator" key, which if absent then falls back to the legacy
#    behavior of splitting on spaces and/or commas.
# 2. Parsing of the prepend_sys_path option falls back to the legacy
#    behavior of splitting on spaces, commas, or colons.
#
# Valid values for path_separator are:
#
# path_separator = :
# path_separator = ;
# path_separator = space
# path_separator = newline
#
# Use os.pathsep. Default configuration used for new projects.
path_separator = os

# set to 'true' to search source files recursively
# in each "version_locations" directory
# new in Alembic version 1.10
# recursive_version_locations = false

# the output encoding used when revision files
# are written from script.py.mako
# output_encoding = utf-8

# database URL. This will be overridden by env.py to use environment variables
# The actual URL is configured via the DATABASE_URL environment variable
sqlalchemy.url =

[post_write_hooks]
# post_write_hooks defines scripts or Python functions that are run
# on newly generated revision scripts. See the documentation for further
# detail and examples

# format using "black" - use the console_scripts runner, against the "black" entrypoint
# hooks = black
# black.type = console_scripts
# black.entrypoint = black
# black.options = -l 79 REVISION_SCRIPT_FILENAME

# lint with attempts to fix using "ruff" - use the exec runner, execute a binary
# hooks = ruff
# ruff.type = exec
# ruff.executable = %(here)s/.venv/bin/ruff
# ruff.options = check --fix REVISION_SCRIPT_FILENAME

# Logging configuration. This is also consumed by the user-maintained
# env.py script only.
[loggers]
keys = root,sqlalchemy,alembic

[handlers]
keys = console

[formatters]
keys = generic

[logger_root]
level = WARNING
handlers = console
qualname =

[logger_sqlalchemy]
level = WARNING
handlers =
qualname = sqlalchemy.engine

[logger_alembic]
level = INFO
handlers =
qualname = alembic

[handler_console]
class = StreamHandler
args = (sys.stderr,)
level = NOTSET
formatter = generic

[formatter_generic]
format = %(levelname)-5.5s [%(name)s] %(message)s
datefmt = %H:%M:%S
44
app_new.py
Normal file
@ -0,0 +1,44 @@
"""
Crypto Trading Bot Dashboard - Modular Version

This is the main entry point for the dashboard application using the new modular structure.
"""

from dashboard import create_app
from utils.logger import get_logger

logger = get_logger("main")


def main():
    """Main entry point for the dashboard application."""
    try:
        # Create the dashboard app
        app = create_app()

        # Import and register all callbacks after app creation
        from dashboard.callbacks import (
            register_navigation_callbacks,
            register_chart_callbacks,
            register_indicator_callbacks,
            register_system_health_callbacks
        )

        # Register all callback modules
        register_navigation_callbacks(app)
        register_chart_callbacks(app)  # Now includes enhanced market statistics
        register_indicator_callbacks(app)  # Placeholder for now
        register_system_health_callbacks(app)  # Placeholder for now

        logger.info("Dashboard application initialized successfully")

        # Run the app (debug=False for stability, manual restart required for changes)
        app.run(debug=False, host='0.0.0.0', port=8050)

    except Exception as e:
        logger.error(f"Failed to start dashboard application: {e}")
        raise


if __name__ == '__main__':
    main()
29
components/__init__.py
Normal file
@ -0,0 +1,29 @@
"""
Dashboard UI Components Package

This package contains reusable UI components for the Crypto Trading Bot Dashboard.
Components are designed to be modular and can be composed to create complex layouts.
"""

from pathlib import Path

# Package metadata
__version__ = "0.1.0"
__package_name__ = "components"

# Make the components directory available
COMPONENTS_DIR = Path(__file__).parent

# Component registry for future component discovery
AVAILABLE_COMPONENTS = [
    "dashboard",  # Main dashboard layout components
    "charts",     # Chart and visualization components
]


def get_component_path(component_name: str) -> Path:
    """Get the file path for a specific component."""
    return COMPONENTS_DIR / f"{component_name}.py"


def list_components() -> list:
    """List all available components."""
    return AVAILABLE_COMPONENTS.copy()
156
components/charts.py
Normal file
@ -0,0 +1,156 @@
"""
Chart and Visualization Components - Redirect to New System

This module redirects to the new modular chart system in components/charts/.
For new development, use the ChartBuilder class directly from components.charts.
"""

# Import and re-export the new modular chart system for simple migration
from .charts import (
    ChartBuilder,
    create_candlestick_chart,
    create_strategy_chart,
    validate_market_data,
    prepare_chart_data,
    get_indicator_colors
)

from .charts.config import (
    get_available_indicators,
    calculate_indicators,
    get_overlay_indicators,
    get_subplot_indicators,
    get_indicator_display_config
)


# Convenience functions for common operations
def get_supported_symbols():
    """Get list of symbols that have data in the database."""
    builder = ChartBuilder()
    candles = builder.fetch_market_data("BTC-USDT", "1m", days_back=1)  # Test query
    if candles:
        from database.operations import get_database_operations
        from utils.logger import get_logger
        logger = get_logger("default_logger")

        try:
            db = get_database_operations(logger)
            with db.market_data.get_session() as session:
                from sqlalchemy import text
                result = session.execute(text("SELECT DISTINCT symbol FROM market_data ORDER BY symbol"))
                return [row[0] for row in result]
        except Exception:
            pass

    return ['BTC-USDT', 'ETH-USDT']  # Fallback


def get_supported_timeframes():
    """Get list of timeframes that have data in the database."""
    builder = ChartBuilder()
    candles = builder.fetch_market_data("BTC-USDT", "1m", days_back=1)  # Test query
    if candles:
        from database.operations import get_database_operations
        from utils.logger import get_logger
        logger = get_logger("default_logger")

        try:
            db = get_database_operations(logger)
            with db.market_data.get_session() as session:
                from sqlalchemy import text
                result = session.execute(text("SELECT DISTINCT timeframe FROM market_data ORDER BY timeframe"))
                return [row[0] for row in result]
        except Exception:
            pass

    return ['5s', '1m', '15m', '1h']  # Fallback


# Legacy function names for compatibility during transition
get_available_technical_indicators = get_available_indicators
fetch_market_data = lambda symbol, timeframe, days_back=7, exchange="okx": ChartBuilder().fetch_market_data(symbol, timeframe, days_back, exchange)
create_candlestick_with_volume = lambda df, symbol, timeframe: create_candlestick_chart(symbol, timeframe)
create_empty_chart = lambda message="No data available": ChartBuilder()._create_empty_chart(message)
create_error_chart = lambda error_message: ChartBuilder()._create_error_chart(error_message)


def get_market_statistics(symbol: str, timeframe: str = "1h"):
    """Calculate market statistics from recent data."""
    builder = ChartBuilder()
    candles = builder.fetch_market_data(symbol, timeframe, days_back=1)

    if not candles:
        return {'Price': 'N/A', '24h Change': 'N/A', '24h Volume': 'N/A', 'High 24h': 'N/A', 'Low 24h': 'N/A'}

    import pandas as pd
    df = pd.DataFrame(candles)
    latest = df.iloc[-1]
    current_price = float(latest['close'])

    # Calculate 24h change
    if len(df) > 1:
        price_24h_ago = float(df.iloc[0]['open'])
        change_percent = ((current_price - price_24h_ago) / price_24h_ago) * 100
    else:
        change_percent = 0

    from .charts.utils import format_price, format_volume
    return {
        'Price': format_price(current_price, decimals=2),
        '24h Change': f"{'+' if change_percent >= 0 else ''}{change_percent:.2f}%",
        '24h Volume': format_volume(df['volume'].sum()),
        'High 24h': format_price(df['high'].max(), decimals=2),
        'Low 24h': format_price(df['low'].min(), decimals=2)
    }


def check_data_availability(symbol: str, timeframe: str):
    """Check data availability for a symbol and timeframe."""
    from datetime import datetime, timezone, timedelta
    from database.operations import get_database_operations
    from utils.logger import get_logger

    try:
        logger = get_logger("default_logger")
        db = get_database_operations(logger)
        latest_candle = db.market_data.get_latest_candle(symbol, timeframe)

        if latest_candle:
            latest_time = latest_candle['timestamp']
            time_diff = datetime.now(timezone.utc) - latest_time.replace(tzinfo=timezone.utc)

            return {
                'has_data': True,
                'latest_timestamp': latest_time,
                'time_since_last': time_diff,
                'is_recent': time_diff < timedelta(hours=1),
                'message': f"Latest data: {latest_time.strftime('%Y-%m-%d %H:%M:%S UTC')}"
            }
        else:
            return {
                'has_data': False,
                'latest_timestamp': None,
                'time_since_last': None,
                'is_recent': False,
                'message': f"No data available for {symbol} {timeframe}"
            }
    except Exception as e:
        return {
            'has_data': False,
            'latest_timestamp': None,
            'time_since_last': None,
            'is_recent': False,
            'message': f"Error checking data: {str(e)}"
        }


def create_data_status_indicator(symbol: str, timeframe: str):
    """Create a data status indicator for the dashboard."""
    status = check_data_availability(symbol, timeframe)

    if status['has_data']:
        if status['is_recent']:
            icon, color, status_text = "🟢", "#27ae60", "Real-time Data"
        else:
            icon, color, status_text = "🟡", "#f39c12", "Delayed Data"
    else:
        icon, color, status_text = "🔴", "#e74c3c", "No Data"

    return f'<span style="color: {color}; font-weight: bold;">{icon} {status_text}</span><br><small>{status["message"]}</small>'
496
components/charts/__init__.py
Normal file
@ -0,0 +1,496 @@
"""
Modular Chart System for Crypto Trading Bot Dashboard

This package provides a flexible, strategy-driven chart system that supports:
- Technical indicator overlays (SMA, EMA, Bollinger Bands)
- Subplot management (RSI, MACD)
- Strategy-specific configurations
- Future bot signal integration

Main Components:
- ChartBuilder: Main orchestrator for chart creation
- Layer System: Modular rendering components
- Configuration System: Strategy-driven chart configs
"""

import plotly.graph_objects as go
from typing import List

from .builder import ChartBuilder
from .utils import (
    validate_market_data,
    prepare_chart_data,
    get_indicator_colors
)
from .config import (
    get_available_indicators,
    calculate_indicators,
    get_overlay_indicators,
    get_subplot_indicators,
    get_indicator_display_config
)
from .data_integration import (
    MarketDataIntegrator,
    DataIntegrationConfig,
    get_market_data_integrator,
    fetch_indicator_data,
    check_symbol_data_quality
)
from .error_handling import (
    ChartErrorHandler,
    ChartError,
    ErrorSeverity,
    InsufficientDataError,
    DataValidationError,
    IndicatorCalculationError,
    DataConnectionError,
    check_data_sufficiency,
    get_error_message,
    create_error_annotation
)

# Layer imports with error handling
from .layers.base import (
    LayerConfig,
    BaseLayer,
    CandlestickLayer,
    VolumeLayer,
    LayerManager
)
from .layers.indicators import (
    IndicatorLayerConfig,
    BaseIndicatorLayer,
    SMALayer,
    EMALayer,
    BollingerBandsLayer
)
from .layers.subplots import (
    SubplotLayerConfig,
    BaseSubplotLayer,
    RSILayer,
    MACDLayer
)

# Version information
__version__ = "0.1.0"
__package_name__ = "charts"

# Public API exports
__all__ = [
    # Core components
    "ChartBuilder",
    "validate_market_data",
    "prepare_chart_data",
    "get_indicator_colors",

    # Chart creation functions
    "create_candlestick_chart",
    "create_strategy_chart",
    "create_empty_chart",
    "create_error_chart",

    # Data integration
    "MarketDataIntegrator",
    "DataIntegrationConfig",
    "get_market_data_integrator",
    "fetch_indicator_data",
    "check_symbol_data_quality",

    # Error handling
    "ChartErrorHandler",
    "ChartError",
    "ErrorSeverity",
    "InsufficientDataError",
    "DataValidationError",
    "IndicatorCalculationError",
    "DataConnectionError",
    "check_data_sufficiency",
    "get_error_message",
    "create_error_annotation",

    # Utility functions
    "get_supported_symbols",
    "get_supported_timeframes",
    "get_market_statistics",
    "check_data_availability",
    "create_data_status_indicator",

    # Base layers
    "LayerConfig",
    "BaseLayer",
    "CandlestickLayer",
    "VolumeLayer",
    "LayerManager",

    # Indicator layers
    "IndicatorLayerConfig",
    "BaseIndicatorLayer",
    "SMALayer",
    "EMALayer",
    "BollingerBandsLayer",

    # Subplot layers
    "SubplotLayerConfig",
    "BaseSubplotLayer",
    "RSILayer",
    "MACDLayer",

    # Convenience functions
    "create_basic_chart",
    "create_indicator_chart",
    "create_chart_with_indicators"
]

# Initialize logger
from utils.logger import get_logger
logger = get_logger("charts")


def create_candlestick_chart(symbol: str, timeframe: str, days_back: int = 7, **kwargs) -> go.Figure:
    """
    Create a candlestick chart with enhanced data integration.

    Args:
        symbol: Trading pair (e.g., 'BTC-USDT')
        timeframe: Timeframe (e.g., '1h', '1d')
        days_back: Number of days to look back
        **kwargs: Additional chart parameters

    Returns:
        Plotly figure with candlestick chart
    """
    builder = ChartBuilder()

    # Check data quality first
    data_quality = builder.check_data_quality(symbol, timeframe)
    if not data_quality['available']:
        logger.warning(f"Data not available for {symbol} {timeframe}: {data_quality['message']}")
        return builder._create_error_chart(f"No data available: {data_quality['message']}")

    if not data_quality['sufficient_for_indicators']:
        logger.warning(f"Insufficient data for indicators: {symbol} {timeframe}")

    # Use enhanced data fetching
    try:
        candles = builder.fetch_market_data_enhanced(symbol, timeframe, days_back)
        if not candles:
            return builder._create_error_chart(f"No market data found for {symbol} {timeframe}")

        # Prepare data for charting
        df = prepare_chart_data(candles)
        if df.empty:
            return builder._create_error_chart("Failed to prepare chart data")

        # Create chart with data quality info
        # (the builder helper returns a (figure, dataframe) pair)
        fig, _ = builder._create_candlestick_with_volume(df, symbol, timeframe)

        # Add data quality annotation if data is stale
        if not data_quality['is_recent']:
            age_hours = data_quality['data_age_minutes'] / 60
            fig.add_annotation(
                text=f"⚠️ Data is {age_hours:.1f}h old",
                xref="paper", yref="paper",
                x=0.02, y=0.98,
                showarrow=False,
                bgcolor="rgba(255,193,7,0.8)",
                bordercolor="orange",
                borderwidth=1
            )

        logger.debug(f"Created enhanced candlestick chart for {symbol} {timeframe} with {len(candles)} candles")
        return fig

    except Exception as e:
        logger.error(f"Error creating enhanced candlestick chart: {e}")
        return builder._create_error_chart(f"Chart creation failed: {str(e)}")


def create_strategy_chart(symbol: str, timeframe: str, strategy_name: str, **kwargs):
    """
    Convenience function to create a strategy-specific chart.

    Args:
        symbol: Trading pair
        timeframe: Timeframe
        strategy_name: Name of the strategy configuration
        **kwargs: Additional parameters

    Returns:
        Plotly Figure object with strategy indicators
    """
    builder = ChartBuilder()
    return builder.create_strategy_chart(symbol, timeframe, strategy_name, **kwargs)


def get_supported_symbols():
    """Get list of symbols that have data in the database."""
    builder = ChartBuilder()
    candles = builder.fetch_market_data("BTC-USDT", "1m", days_back=1)  # Test query
    if candles:
        from database.operations import get_database_operations
        from utils.logger import get_logger
        logger = get_logger("default_logger")

        try:
            db = get_database_operations(logger)
            with db.market_data.get_session() as session:
                from sqlalchemy import text
                result = session.execute(text("SELECT DISTINCT symbol FROM market_data ORDER BY symbol"))
                return [row[0] for row in result]
        except Exception:
            pass

    return ['BTC-USDT', 'ETH-USDT']  # Fallback


def get_supported_timeframes():
    """Get list of timeframes that have data in the database."""
    builder = ChartBuilder()
    candles = builder.fetch_market_data("BTC-USDT", "1m", days_back=1)  # Test query
    if candles:
        from database.operations import get_database_operations
        from utils.logger import get_logger
        logger = get_logger("default_logger")

        try:
            db = get_database_operations(logger)
            with db.market_data.get_session() as session:
                from sqlalchemy import text
                result = session.execute(text("SELECT DISTINCT timeframe FROM market_data ORDER BY timeframe"))
                return [row[0] for row in result]
        except Exception:
            pass

    return ['5s', '1m', '15m', '1h']  # Fallback


def get_market_statistics(symbol: str, timeframe: str = "1h", days_back: int = 1):
    """Calculate market statistics from recent data over a specified period."""
    builder = ChartBuilder()
    candles = builder.fetch_market_data(symbol, timeframe, days_back=days_back)

    if not candles:
        return {'Price': 'N/A', f'Change ({days_back}d)': 'N/A', f'Volume ({days_back}d)': 'N/A', f'High ({days_back}d)': 'N/A', f'Low ({days_back}d)': 'N/A'}

    import pandas as pd
    df = pd.DataFrame(candles)
    latest = df.iloc[-1]
    current_price = float(latest['close'])

    # Calculate change over the period
    if len(df) > 1:
        price_period_ago = float(df.iloc[0]['open'])
        change_percent = ((current_price - price_period_ago) / price_period_ago) * 100
    else:
        change_percent = 0

    from .utils import format_price, format_volume

    # Determine label for the period (e.g., "24h", "7d", "1h")
    if days_back == 1/24:
        period_label = "1h"
    elif days_back == 4/24:
        period_label = "4h"
    elif days_back == 6/24:
        period_label = "6h"
    elif days_back == 12/24:
        period_label = "12h"
    elif days_back < 1:  # For other fractional days, show as hours
        period_label = f"{int(days_back * 24)}h"
    elif days_back == 1:
        period_label = "24h"  # Keep 24h for 1 day for clarity
    else:
        period_label = f"{days_back}d"

    return {
        'Price': format_price(current_price, decimals=2),
        f'Change ({period_label})': f"{'+' if change_percent >= 0 else ''}{change_percent:.2f}%",
        f'Volume ({period_label})': format_volume(df['volume'].sum()),
        f'High ({period_label})': format_price(df['high'].max(), decimals=2),
        f'Low ({period_label})': format_price(df['low'].min(), decimals=2)
    }


def check_data_availability(symbol: str, timeframe: str):
    """Check data availability for a symbol and timeframe."""
    from datetime import datetime, timezone, timedelta
    from database.operations import get_database_operations
    from utils.logger import get_logger

    try:
        logger = get_logger("charts_data_check")
        db = get_database_operations(logger)
        latest_candle = db.market_data.get_latest_candle(symbol, timeframe)

        if latest_candle:
            latest_time = latest_candle['timestamp']
            time_diff = datetime.now(timezone.utc) - latest_time.replace(tzinfo=timezone.utc)

            return {
                'has_data': True,
                'latest_timestamp': latest_time,
                'time_since_last': time_diff,
                'is_recent': time_diff < timedelta(hours=1),
                'message': f"Latest data: {latest_time.strftime('%Y-%m-%d %H:%M:%S UTC')}"
            }
        else:
            return {
                'has_data': False,
                'latest_timestamp': None,
                'time_since_last': None,
                'is_recent': False,
                'message': f"No data available for {symbol} {timeframe}"
            }
    except Exception as e:
        return {
            'has_data': False,
            'latest_timestamp': None,
            'time_since_last': None,
            'is_recent': False,
            'message': f"Error checking data: {str(e)}"
        }


def create_data_status_indicator(symbol: str, timeframe: str):
    """Create a data status indicator for the dashboard."""
    status = check_data_availability(symbol, timeframe)

    if status['has_data']:
        if status['is_recent']:
            icon, color, status_text = "🟢", "#27ae60", "Real-time Data"
        else:
            icon, color, status_text = "🟡", "#f39c12", "Delayed Data"
    else:
        icon, color, status_text = "🔴", "#e74c3c", "No Data"

    return f'<span style="color: {color}; font-weight: bold;">{icon} {status_text}</span><br><small>{status["message"]}</small>'


def create_error_chart(error_message: str):
    """Create an error chart with error message."""
    builder = ChartBuilder()
    return builder._create_error_chart(error_message)


def create_basic_chart(symbol: str, data: list,
                       indicators: list = None,
                       error_handling: bool = True) -> 'go.Figure':
    """
    Create a basic chart with error handling.

    Args:
        symbol: Trading symbol
        data: OHLCV data as list of dictionaries
        indicators: List of indicator configurations
        error_handling: Whether to use comprehensive error handling

    Returns:
        Plotly figure with chart or error display
    """
    try:
        from plotly import graph_objects as go

        # Initialize chart builder
        builder = ChartBuilder()

        if error_handling:
            # Use error-aware chart creation
            error_handler = ChartErrorHandler()
            is_valid = error_handler.validate_data_sufficiency(data, indicators=indicators or [])

            if not is_valid:
                # Create error chart
                fig = go.Figure()
                error_msg = error_handler.get_user_friendly_message()
                fig.add_annotation(create_error_annotation(error_msg, position='center'))
                fig.update_layout(
                    title=f"Chart Error - {symbol}",
                    xaxis={'visible': False},
                    yaxis={'visible': False},
                    template='plotly_white',
                    height=400
                )
                return fig

        # Create chart normally
        return builder.create_candlestick_chart(data, symbol=symbol, indicators=indicators or [])

    except Exception as e:
        # Fallback error chart
        from plotly import graph_objects as go
        fig = go.Figure()
        fig.add_annotation(create_error_annotation(
            f"Chart creation failed: {str(e)}",
            position='center'
        ))
        fig.update_layout(
            title=f"Chart Error - {symbol}",
            template='plotly_white',
            height=400
        )
        return fig


def create_indicator_chart(symbol: str, data: list,
                           indicator_type: str, **params) -> 'go.Figure':
    """
    Create a chart focused on a specific indicator.

    Args:
        symbol: Trading symbol
        data: OHLCV data
        indicator_type: Type of indicator ('sma', 'ema', 'bollinger_bands', 'rsi', 'macd')
        **params: Indicator parameters

    Returns:
        Plotly figure with indicator chart
    """
    try:
        # Map indicator types to configurations
        indicator_map = {
            'sma': {'type': 'sma', 'parameters': {'period': params.get('period', 20)}},
            'ema': {'type': 'ema', 'parameters': {'period': params.get('period', 20)}},
            'bollinger_bands': {
                'type': 'bollinger_bands',
                'parameters': {
                    'period': params.get('period', 20),
                    'std_dev': params.get('std_dev', 2)
                }
            },
            'rsi': {'type': 'rsi', 'parameters': {'period': params.get('period', 14)}},
            'macd': {
                'type': 'macd',
                'parameters': {
                    'fast_period': params.get('fast_period', 12),
                    'slow_period': params.get('slow_period', 26),
                    'signal_period': params.get('signal_period', 9)
                }
            }
        }

        if indicator_type not in indicator_map:
            raise ValueError(f"Unknown indicator type: {indicator_type}")

        indicator_config = indicator_map[indicator_type]
        return create_basic_chart(symbol, data, indicators=[indicator_config])

    except Exception:
        return create_basic_chart(symbol, data, indicators=[])  # Fallback to basic chart


def create_chart_with_indicators(symbol: str, timeframe: str,
                                 overlay_indicators: List[str] = None,
                                 subplot_indicators: List[str] = None,
                                 days_back: int = 7, **kwargs) -> go.Figure:
    """
    Create a chart with dynamically selected indicators.

    Args:
        symbol: Trading pair (e.g., 'BTC-USDT')
        timeframe: Timeframe (e.g., '1h', '1d')
        overlay_indicators: List of overlay indicator names
        subplot_indicators: List of subplot indicator names
        days_back: Number of days to look back
        **kwargs: Additional chart parameters

    Returns:
        Plotly figure with selected indicators
    """
    builder = ChartBuilder()
    return builder.create_chart_with_indicators(
        symbol, timeframe, overlay_indicators, subplot_indicators, days_back, **kwargs
    )


def initialize_indicator_manager():
    # Implementation of initialize_indicator_manager function
    pass
582
components/charts/builder.py
Normal file
@ -0,0 +1,582 @@
|
||||
"""
|
||||
ChartBuilder - Main orchestrator for chart creation
|
||||
|
||||
This module contains the ChartBuilder class which serves as the main entry point
|
||||
for creating charts with various configurations, indicators, and layers.
|
||||
"""
|
||||
|
||||
import plotly.graph_objects as go
|
||||
from plotly.subplots import make_subplots
|
||||
import pandas as pd
|
||||
from datetime import datetime, timedelta, timezone
|
||||
from typing import List, Dict, Any, Optional, Union
|
||||
from decimal import Decimal
|
||||
|
||||
from database.operations import get_database_operations, DatabaseOperationError
|
||||
from utils.logger import get_logger
|
||||
from .utils import validate_market_data, prepare_chart_data, get_indicator_colors
|
||||
from .indicator_manager import get_indicator_manager
|
||||
from .layers import (
|
||||
LayerManager, CandlestickLayer, VolumeLayer,
|
||||
SMALayer, EMALayer, BollingerBandsLayer,
|
||||
RSILayer, MACDLayer, IndicatorLayerConfig
|
||||
)
|
||||
|
||||
# Initialize logger
|
||||
logger = get_logger("default_logger")
|
||||
|
||||
|
||||
class ChartBuilder:
|
||||
"""
|
||||
Main chart builder class for creating modular, configurable charts.
|
||||
|
||||
This class orchestrates the creation of charts by coordinating between
|
||||
data fetching, layer rendering, and configuration management.
|
||||
"""
|
||||
|
||||
def __init__(self, logger_instance: Optional = None):
|
||||
"""
|
||||
Initialize the ChartBuilder.
|
||||
|
||||
Args:
|
||||
logger_instance: Optional logger instance
|
||||
"""
|
||||
self.logger = logger_instance or logger
|
||||
self.db_ops = get_database_operations(self.logger)
|
||||
|
||||
# Initialize market data integrator
|
||||
from .data_integration import get_market_data_integrator
|
||||
self.data_integrator = get_market_data_integrator()
|
||||
|
||||
# Chart styling defaults
|
||||
self.default_colors = get_indicator_colors()
|
||||
self.default_height = 600
|
||||
self.default_template = "plotly_white"
|
||||
|
||||
def fetch_market_data(self, symbol: str, timeframe: str,
|
||||
days_back: int = 7, exchange: str = "okx") -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Fetch market data from the database.
|
||||
|
||||
Args:
|
||||
symbol: Trading pair (e.g., 'BTC-USDT')
|
||||
timeframe: Timeframe (e.g., '1h', '1d')
|
||||
days_back: Number of days to look back
|
||||
exchange: Exchange name
|
||||
|
||||
Returns:
|
||||
List of candle data dictionaries
|
||||
"""
|
||||
try:
|
||||
# Calculate time range
|
||||
end_time = datetime.now(timezone.utc)
|
||||
start_time = end_time - timedelta(days=days_back)
|
||||
|
||||
# Fetch candles using the database operations API
|
||||
candles = self.db_ops.market_data.get_candles(
|
||||
symbol=symbol,
|
||||
timeframe=timeframe,
|
||||
start_time=start_time,
|
||||
end_time=end_time,
|
||||
exchange=exchange
|
||||
)
|
||||
|
||||
self.logger.debug(f"Chart builder: Fetched {len(candles)} candles for {symbol} {timeframe}")
|
||||
return candles
|
||||
|
||||
except DatabaseOperationError as e:
|
||||
self.logger.error(f"Chart builder: Database error fetching market data: {e}")
|
||||
return []
|
||||
except Exception as e:
|
||||
self.logger.error(f"Chart builder: Unexpected error fetching market data: {e}")
|
||||
return []
|
||||
|
||||
def fetch_market_data_enhanced(self, symbol: str, timeframe: str,
|
||||
days_back: int = 7, exchange: str = "okx") -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Enhanced market data fetching with validation and caching.
|
||||
|
||||
Args:
|
||||
symbol: Trading pair (e.g., 'BTC-USDT')
|
||||
timeframe: Timeframe (e.g., '1h', '1d')
|
||||
days_back: Number of days to look back
|
||||
exchange: Exchange name
|
||||
|
||||
Returns:
|
||||
List of validated candle data dictionaries
|
||||
"""
|
||||
try:
|
||||
# Use the data integrator for enhanced data handling
|
||||
raw_candles, ohlcv_candles = self.data_integrator.get_market_data_for_indicators(
|
||||
symbol, timeframe, days_back, exchange
|
||||
)
|
||||
|
||||
if not raw_candles:
|
||||
self.logger.warning(f"Chart builder: No market data available for {symbol} {timeframe}")
|
||||
return []
|
||||
|
||||
self.logger.debug(f"Chart builder: Enhanced fetch: {len(raw_candles)} candles for {symbol} {timeframe}")
|
||||
return raw_candles
|
||||
|
||||
except Exception as e:
|
||||
self.logger.error(f"Chart builder: Error in enhanced market data fetch: {e}")
|
||||
# Fallback to original method
|
||||
return self.fetch_market_data(symbol, timeframe, days_back, exchange)
|
||||
|
||||
def create_candlestick_chart(self, symbol: str, timeframe: str,
|
||||
days_back: int = 7, **kwargs) -> go.Figure:
|
||||
"""
|
||||
Create a basic candlestick chart.
|
||||
|
||||
Args:
|
||||
symbol: Trading pair
|
||||
timeframe: Timeframe
|
||||
days_back: Number of days to look back
|
||||
**kwargs: Additional chart parameters
|
||||
|
||||
Returns:
|
||||
Plotly Figure object with candlestick chart
|
||||
"""
|
||||
try:
|
||||
# Fetch market data
|
||||
candles = self.fetch_market_data(symbol, timeframe, days_back)
|
||||
|
||||
# Handle empty data
|
||||
if not candles:
|
||||
self.logger.warning(f"Chart builder: No data available for {symbol} {timeframe}")
|
||||
return self._create_empty_chart(f"No data available for {symbol} {timeframe}")
|
||||
|
||||
# Validate and prepare data
|
||||
if not validate_market_data(candles):
|
||||
self.logger.error(f"Chart builder: Invalid market data for {symbol} {timeframe}")
|
||||
return self._create_error_chart("Invalid market data format")
|
||||
|
||||
# Prepare chart data
|
||||
df = prepare_chart_data(candles)
|
||||
|
||||
# Determine if we need volume subplot
|
||||
has_volume = 'volume' in df.columns and df['volume'].sum() > 0
|
||||
include_volume = kwargs.get('include_volume', has_volume)
|
||||
|
||||
if include_volume and has_volume:
|
||||
fig, df_chart = self._create_candlestick_with_volume(df, symbol, timeframe, **kwargs)
|
||||
return fig, df_chart
|
||||
else:
|
||||
fig, df_chart = self._create_basic_candlestick(df, symbol, timeframe, **kwargs)
|
||||
return fig, df_chart
|
||||
|
||||
except Exception as e:
|
||||
self.logger.error(f"Chart builder: Error creating candlestick chart for {symbol} {timeframe}: {e}")
|
||||
error_fig = self._create_error_chart(f"Error loading chart: {str(e)}")
|
||||
return error_fig, pd.DataFrame()
|

    def _create_basic_candlestick(self, df: pd.DataFrame, symbol: str,
                                  timeframe: str, **kwargs) -> Tuple[go.Figure, pd.DataFrame]:
        """Create a basic candlestick chart without volume."""

        # Get custom parameters
        height = kwargs.get('height', self.default_height)
        template = kwargs.get('template', self.default_template)

        # Create candlestick chart
        fig = go.Figure(data=go.Candlestick(
            x=df['timestamp'],
            open=df['open'],
            high=df['high'],
            low=df['low'],
            close=df['close'],
            name=symbol,
            increasing_line_color=self.default_colors['bullish'],
            decreasing_line_color=self.default_colors['bearish']
        ))

        # Update layout
        fig.update_layout(
            title=f"{symbol} - {timeframe} Chart",
            xaxis_title="Time",
            yaxis_title="Price (USDT)",
            template=template,
            showlegend=False,
            height=height,
            xaxis_rangeslider_visible=False,
            hovermode='x unified'
        )

        self.logger.debug(f"Chart builder: Created basic candlestick chart for {symbol} {timeframe} with {len(df)} candles")
        return fig, df

    def _create_candlestick_with_volume(self, df: pd.DataFrame, symbol: str,
                                        timeframe: str, **kwargs) -> Tuple[go.Figure, pd.DataFrame]:
        """Create a candlestick chart with a volume subplot."""

        # Get custom parameters
        height = kwargs.get('height', 700)  # Taller for volume subplot
        template = kwargs.get('template', self.default_template)

        # Create subplots
        fig = make_subplots(
            rows=2, cols=1,
            shared_xaxes=True,
            vertical_spacing=0.03,
            subplot_titles=(f'{symbol} Price', 'Volume'),
            row_heights=[0.7, 0.3]  # 70% for price, 30% for volume
        )

        # Add candlestick chart
        fig.add_trace(
            go.Candlestick(
                x=df['timestamp'],
                open=df['open'],
                high=df['high'],
                low=df['low'],
                close=df['close'],
                name=symbol,
                increasing_line_color=self.default_colors['bullish'],
                decreasing_line_color=self.default_colors['bearish']
            ),
            row=1, col=1
        )

        # Add volume bars, color-coded by candle direction (loop variables renamed
        # so the built-in `open` is not shadowed)
        colors = [self.default_colors['bullish'] if close_price >= open_price else self.default_colors['bearish']
                  for close_price, open_price in zip(df['close'], df['open'])]

        fig.add_trace(
            go.Bar(
                x=df['timestamp'],
                y=df['volume'],
                name='Volume',
                marker_color=colors,
                opacity=0.7
            ),
            row=2, col=1
        )

        # Update layout
        fig.update_layout(
            title=f"{symbol} - {timeframe} Chart with Volume",
            template=template,
            showlegend=False,
            height=height,
            xaxis_rangeslider_visible=False,
            hovermode='x unified',
            dragmode='pan'
        )

        # Update axes
        fig.update_yaxes(title_text="Price (USDT)", row=1, col=1)
        fig.update_yaxes(title_text="Volume", row=2, col=1)
        fig.update_xaxes(title_text="Time", row=2, col=1)

        self.logger.debug(f"Chart builder: Created candlestick chart with volume for {symbol} {timeframe} with {len(df)} candles")
        return fig, df

    def _create_empty_chart(self, message: str = "No data available") -> go.Figure:
        """Create an empty chart with a message."""
        fig = go.Figure()

        fig.add_annotation(
            text=message,
            xref="paper", yref="paper",
            x=0.5, y=0.5,
            xanchor='center', yanchor='middle',
            showarrow=False,
            font=dict(size=16, color="#7f8c8d")
        )

        fig.update_layout(
            template=self.default_template,
            height=self.default_height,
            showlegend=False,
            xaxis=dict(visible=False),
            yaxis=dict(visible=False)
        )

        return fig

    def _create_error_chart(self, error_message: str) -> go.Figure:
        """Create an error chart with an error message."""
        fig = go.Figure()

        fig.add_annotation(
            text=f"⚠️ {error_message}",
            xref="paper", yref="paper",
            x=0.5, y=0.5,
            xanchor='center', yanchor='middle',
            showarrow=False,
            font=dict(size=16, color="#e74c3c")
        )

        fig.update_layout(
            template=self.default_template,
            height=self.default_height,
            showlegend=False,
            xaxis=dict(visible=False),
            yaxis=dict(visible=False)
        )

        return fig

    def create_strategy_chart(self, symbol: str, timeframe: str,
                              strategy_name: str, **kwargs) -> Tuple[go.Figure, pd.DataFrame]:
        """
        Create a strategy-specific chart (placeholder for future implementation).

        Args:
            symbol: Trading pair
            timeframe: Timeframe
            strategy_name: Name of the strategy configuration
            **kwargs: Additional parameters

        Returns:
            Tuple of (Plotly Figure, DataFrame of the plotted data)
        """
        # For now, return a basic candlestick chart.
        # This will be enhanced in later tasks with strategy configurations.
        self.logger.info(f"Chart builder: Creating strategy chart for {strategy_name} (basic implementation)")
        return self.create_candlestick_chart(symbol, timeframe, **kwargs)

    def check_data_quality(self, symbol: str, timeframe: str,
                           exchange: str = "okx") -> Dict[str, Any]:
        """
        Check data quality and availability for chart creation.

        Args:
            symbol: Trading pair
            timeframe: Timeframe
            exchange: Exchange name

        Returns:
            Dictionary with data quality information
        """
        try:
            return self.data_integrator.check_data_availability(symbol, timeframe, exchange)
        except Exception as e:
            self.logger.error(f"Chart builder: Error checking data quality: {e}")
            return {
                'available': False,
                'latest_timestamp': None,
                'data_age_minutes': None,
                'sufficient_for_indicators': False,
                'message': f"Error checking data: {str(e)}"
            }
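
    # Usage sketch (illustrative only): the returned dictionary mirrors the error
    # fallback above, so callers can gate chart creation on data freshness, e.g.:
    #
    #   quality = builder.check_data_quality("BTC-USDT", "1h")
    #   if quality['available'] and quality['sufficient_for_indicators']:
    #       fig, df = builder.create_chart_with_indicators("BTC-USDT", "1h")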

    def create_chart_with_indicators(self, symbol: str, timeframe: str,
                                     overlay_indicators: List[str] = None,
                                     subplot_indicators: List[str] = None,
                                     days_back: int = 7, **kwargs) -> Tuple[go.Figure, pd.DataFrame]:
        """
        Create a candlestick chart with specified technical indicators.

        Args:
            symbol: Trading pair
            timeframe: Timeframe
            overlay_indicators: List of overlay indicator names
            subplot_indicators: List of subplot indicator names
            days_back: Number of days to look back
            **kwargs: Additional chart parameters

        Returns:
            Plotly Figure object and a pandas DataFrame with all chart data.
        """
        overlay_indicators = overlay_indicators or []
        subplot_indicators = subplot_indicators or []
        try:
            # 1. Fetch and prepare base data
            candles = self.fetch_market_data_enhanced(symbol, timeframe, days_back)
            if not candles:
                self.logger.warning(f"No data for {symbol} {timeframe}, creating empty chart.")
                return self._create_empty_chart(f"No data for {symbol} {timeframe}"), pd.DataFrame()

            df = prepare_chart_data(candles)
            if df.empty:
                self.logger.warning(f"DataFrame empty for {symbol} {timeframe}, creating empty chart.")
                return self._create_empty_chart(f"No data for {symbol} {timeframe}"), pd.DataFrame()

            # Initialize final DataFrame for export
            final_df = df.copy()

            # 2. Set up subplots: count subplot indicators to configure rows
            subplot_count = 0
            volume_enabled = 'volume' in df.columns and df['volume'].sum() > 0
            if volume_enabled:
                subplot_count += 1

            if subplot_indicators:
                subplot_count += len(subplot_indicators)

            # Create subplot structure if needed
            if subplot_count > 0:
                # Height ratios: the main chart gets 70%, the remaining 30% is
                # split evenly across subplots
                main_height = 0.7
                subplot_height = 0.3 / subplot_count

                # Create subplot specifications
                subplot_specs = [[{"secondary_y": False}]]  # Main chart
                row_heights = [main_height]

                if volume_enabled:
                    subplot_specs.append([{"secondary_y": False}])
                    row_heights.append(subplot_height)

                if subplot_indicators:
                    for _ in subplot_indicators:
                        subplot_specs.append([{"secondary_y": False}])
                        row_heights.append(subplot_height)

                fig = make_subplots(
                    rows=len(subplot_specs),
                    cols=1,
                    shared_xaxes=True,
                    vertical_spacing=0.02,
                    row_heights=row_heights,
                    specs=subplot_specs,
                    subplot_titles=[f"{symbol} - {timeframe}"] + [""] * (len(subplot_specs) - 1)
                )
            else:
                # Create a simple figure for the main chart only
                fig = go.Figure()

            current_row = 1

            # 3. Add candlestick trace
            fig.add_trace(go.Candlestick(
                x=df['timestamp'],
                open=df['open'],
                high=df['high'],
                low=df['low'],
                close=df['close'],
                name=symbol,
                increasing_line_color=self.default_colors['bullish'],
                decreasing_line_color=self.default_colors['bearish']
            ), row=current_row, col=1)

            # 4. Add volume trace (if applicable)
            if volume_enabled:
                current_row += 1
                volume_colors = [self.default_colors['bullish'] if close_price >= open_price else self.default_colors['bearish']
                                 for close_price, open_price in zip(df['close'], df['open'])]

                volume_trace = go.Bar(
                    x=df['timestamp'],
                    y=df['volume'],
                    name='Volume',
                    marker_color=volume_colors,
                    opacity=0.7
                )
                fig.add_trace(volume_trace, row=current_row, col=1)
                fig.update_yaxes(title_text="Volume", row=current_row, col=1)

            # 5. Add indicator traces
            indicator_manager = get_indicator_manager()
            all_indicator_configs = []

            # Create IndicatorLayerConfig objects from indicator IDs
            indicator_ids = overlay_indicators + subplot_indicators
            for ind_id in indicator_ids:
                indicator = indicator_manager.load_indicator(ind_id)
                if indicator:
                    config = IndicatorLayerConfig(
                        id=indicator.id,
                        name=indicator.name,
                        indicator_type=indicator.type,
                        parameters=indicator.parameters
                    )
                    all_indicator_configs.append(config)

            if all_indicator_configs:
                indicator_data_map = self.data_integrator.get_indicator_data(
                    main_df=df,
                    main_timeframe=timeframe,
                    indicator_configs=all_indicator_configs,
                    indicator_manager=indicator_manager,
                    symbol=symbol,
                    exchange="okx"
                )

                for indicator_id, indicator_df in indicator_data_map.items():
                    indicator = indicator_manager.load_indicator(indicator_id)
                    if not indicator:
                        self.logger.warning(f"Could not load indicator '{indicator_id}' for plotting.")
                        continue

                    if indicator_df is not None and not indicator_df.empty:
                        # Add a suffix to the indicator's columns before joining to prevent overlap
                        # when multiple indicators of the same type are added.
                        final_df = final_df.join(indicator_df, how='left', rsuffix=f'_{indicator.id}')

                        # Determine target row for plotting
                        target_row = 1  # Default to overlay on the main chart
                        if indicator.id in subplot_indicators:
                            current_row += 1
                            target_row = current_row
                            fig.update_yaxes(title_text=indicator.name, row=target_row, col=1)

                        if indicator.type == 'bollinger_bands':
                            if all(c in indicator_df.columns for c in ['upper_band', 'lower_band', 'middle_band']):
                                # Prepare data for the filled area
                                x_vals = indicator_df.index
                                y_upper = indicator_df['upper_band']
                                y_lower = indicator_df['lower_band']

                                # Convert hex color to rgba for the fill
                                hex_color = indicator.styling.color.lstrip('#')
                                rgb = tuple(int(hex_color[i:i+2], 16) for i in (0, 2, 4))
                                fill_color = f'rgba({rgb[0]}, {rgb[1]}, {rgb[2]}, 0.1)'

                                # Add the transparent fill trace
                                fig.add_trace(go.Scatter(
                                    x=pd.concat([x_vals.to_series(), x_vals.to_series()[::-1]]),
                                    y=pd.concat([y_upper, y_lower[::-1]]),
                                    fill='toself',
                                    fillcolor=fill_color,
                                    line={'color': 'rgba(255,255,255,0)'},
                                    hoverinfo='none',
                                    showlegend=False
                                ), row=target_row, col=1)

                                # Add the visible line traces for the bands
                                fig.add_trace(go.Scatter(x=x_vals, y=y_upper, name=f'{indicator.name} Upper', mode='lines', line=dict(color=indicator.styling.color, width=1.5)), row=target_row, col=1)
                                fig.add_trace(go.Scatter(x=x_vals, y=y_lower, name=f'{indicator.name} Lower', mode='lines', line=dict(color=indicator.styling.color, width=1.5)), row=target_row, col=1)
                                fig.add_trace(go.Scatter(x=x_vals, y=indicator_df['middle_band'], name=f'{indicator.name} Middle', mode='lines', line=dict(color=indicator.styling.color, width=1.5, dash='dash')), row=target_row, col=1)
                        else:
                            # Generic plotting for other indicators
                            for col in indicator_df.columns:
                                if col != 'timestamp':
                                    fig.add_trace(go.Scatter(
                                        x=indicator_df.index,
                                        y=indicator_df[col],
                                        mode='lines',
                                        name=f"{indicator.name} ({col})",
                                        line=dict(color=indicator.styling.color)
                                    ), row=target_row, col=1)

            # 6. Final layout updates
            height = kwargs.get('height', self.default_height)
            template = kwargs.get('template', self.default_template)

            fig.update_layout(
                title=f"{symbol} - {timeframe} Chart",
                template=template,
                height=height,
                showlegend=True,
                legend=dict(yanchor="top", y=0.99, xanchor="left", x=0.01),
                xaxis_rangeslider_visible=False,
                hovermode='x unified'
            )

            # Update the bottom subplot's x-axis and the main price axis
            fig.update_xaxes(title_text="Time", row=current_row, col=1)
            fig.update_yaxes(title_text="Price (USDT)", row=1, col=1)

            indicator_count = len(overlay_indicators) + len(subplot_indicators)
            self.logger.info(f"Successfully created chart for {symbol} {timeframe} with {indicator_count} indicators.")
            return fig, final_df

        except Exception as e:
            self.logger.error(f"Error in create_chart_with_indicators for {symbol}: {e}", exc_info=True)
            return self._create_error_chart(f"Error generating indicator chart: {e}"), pd.DataFrame()
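
    # Usage sketch (illustrative only; the indicator IDs are assumed examples):
    #
    #   fig, export_df = builder.create_chart_with_indicators(
    #       "BTC-USDT", "1h",
    #       overlay_indicators=["ema_12", "ema_26"],
    #       subplot_indicators=["rsi_14"],
    #       days_back=14,
    #   )
    #   export_df.to_csv("btc_usdt_1h_with_indicators.csv")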
243
components/charts/config/__init__.py
Normal file
243
components/charts/config/__init__.py
Normal file
@ -0,0 +1,243 @@
"""
Chart Configuration Package

This package provides configuration management for the modular chart system,
including indicator definitions, schema validation, and default configurations.
"""

from .indicator_defs import (
    # Core classes
    IndicatorType,
    DisplayType,
    LineStyle,
    PriceColumn,
    IndicatorParameterSchema,
    IndicatorSchema,
    ChartIndicatorConfig,

    # Schema definitions
    INDICATOR_SCHEMAS,
    INDICATOR_DEFINITIONS,

    # Utility functions
    validate_indicator_configuration,
    create_indicator_config,
    get_indicator_schema,
    get_available_indicator_types,
    get_indicator_parameter_info,
    validate_parameters_for_type,
    create_configuration_from_json,

    # Legacy functions
    get_indicator_display_config,
    get_available_indicators,
    get_overlay_indicators,
    get_subplot_indicators,
    get_default_indicator_params,
    calculate_indicators
)

from .defaults import (
    # Categories and strategies
    IndicatorCategory,
    TradingStrategy,
    IndicatorPreset,

    # Color schemes
    CATEGORY_COLORS,

    # Default indicators
    get_all_default_indicators,
    get_indicators_by_category,
    get_indicators_for_timeframe,

    # Strategy presets
    get_strategy_indicators,
    get_strategy_info,
    get_available_strategies,
    get_available_categories,

    # Custom presets
    create_custom_preset
)

from .strategy_charts import (
    # Chart configuration classes
    ChartLayout,
    SubplotType,
    SubplotConfig,
    ChartStyle,
    StrategyChartConfig,

    # Strategy configuration functions
    create_default_strategy_configurations,
    validate_strategy_configuration,
    create_custom_strategy_config,
    load_strategy_config_from_json,
    export_strategy_config_to_json,
    get_strategy_config,
    get_all_strategy_configs,
    get_available_strategy_names
)

from .validation import (
    # Validation classes
    ValidationLevel,
    ValidationRule,
    ValidationIssue,
    ValidationReport,
    ConfigurationValidator,

    # Validation functions
    validate_configuration,
    get_validation_rules_info
)

from .example_strategies import (
    # Example strategy classes
    StrategyExample,

    # Example strategy functions
    create_ema_crossover_strategy,
    create_momentum_breakout_strategy,
    create_mean_reversion_strategy,
    create_scalping_strategy,
    create_swing_trading_strategy,
    get_all_example_strategies,
    get_example_strategy,
    get_strategies_by_difficulty,
    get_strategies_by_risk_level,
    get_strategies_by_market_condition,
    get_strategy_summary,
    export_example_strategies_to_json
)

from .error_handling import (
    # Error handling classes
    ErrorSeverity,
    ErrorCategory,
    ConfigurationError,
    ErrorReport,
    ConfigurationErrorHandler,

    # Error handling functions
    validate_configuration_strict,
    validate_strategy_name,
    get_indicator_suggestions,
    get_strategy_suggestions,
    check_configuration_health
)

# Package metadata
__version__ = "0.1.0"
__package_name__ = "config"

__all__ = [
    # Core classes from indicator_defs
    'IndicatorType',
    'DisplayType',
    'LineStyle',
    'PriceColumn',
    'IndicatorParameterSchema',
    'IndicatorSchema',
    'ChartIndicatorConfig',

    # Schema and definitions
    'INDICATOR_SCHEMAS',
    'INDICATOR_DEFINITIONS',

    # Validation and creation functions
    'validate_indicator_configuration',
    'create_indicator_config',
    'get_indicator_schema',
    'get_available_indicator_types',
    'get_indicator_parameter_info',
    'validate_parameters_for_type',
    'create_configuration_from_json',

    # Legacy compatibility functions
    'get_indicator_display_config',
    'get_available_indicators',
    'get_overlay_indicators',
    'get_subplot_indicators',
    'get_default_indicator_params',
    'calculate_indicators',

    # Categories and strategies from defaults
    'IndicatorCategory',
    'TradingStrategy',
    'IndicatorPreset',
    'CATEGORY_COLORS',

    # Default configuration functions
    'get_all_default_indicators',
    'get_indicators_by_category',
    'get_indicators_for_timeframe',
    'get_strategy_indicators',
    'get_strategy_info',
    'get_available_strategies',
    'get_available_categories',
    'create_custom_preset',

    # Strategy chart configuration classes
    'ChartLayout',
    'SubplotType',
    'SubplotConfig',
    'ChartStyle',
    'StrategyChartConfig',

    # Strategy configuration functions
    'create_default_strategy_configurations',
    'validate_strategy_configuration',
    'create_custom_strategy_config',
    'load_strategy_config_from_json',
    'export_strategy_config_to_json',
    'get_strategy_config',
    'get_all_strategy_configs',
    'get_available_strategy_names',

    # Validation classes
    'ValidationLevel',
    'ValidationRule',
    'ValidationIssue',
    'ValidationReport',
    'ConfigurationValidator',

    # Validation functions
    'validate_configuration',
    'get_validation_rules_info',

    # Example strategy classes
    'StrategyExample',

    # Example strategy functions
    'create_ema_crossover_strategy',
    'create_momentum_breakout_strategy',
    'create_mean_reversion_strategy',
    'create_scalping_strategy',
    'create_swing_trading_strategy',
    'get_all_example_strategies',
    'get_example_strategy',
    'get_strategies_by_difficulty',
    'get_strategies_by_risk_level',
    'get_strategies_by_market_condition',
    'get_strategy_summary',
    'export_example_strategies_to_json',

    # Error handling classes
    'ErrorSeverity',
    'ErrorCategory',
    'ConfigurationError',
    'ErrorReport',
    'ConfigurationErrorHandler',

    # Error handling functions
    'validate_configuration_strict',
    'validate_strategy_name',
    'get_indicator_suggestions',
    'get_strategy_suggestions',
    'check_configuration_health'
]

# Legacy function name for backward compatibility
validate_indicator_config = get_default_indicator_params  # Will be properly implemented in future tasks
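
# Usage sketch (illustrative only): downstream modules can import everything
# re-exported here from the package root, e.g.:
#
#   from components.charts.config import (
#       get_all_default_indicators,
#       get_strategy_indicators,
#       TradingStrategy,
#   )
#   scalping_ids = get_strategy_indicators(TradingStrategy.SCALPING)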
460
components/charts/config/defaults.py
Normal file
460
components/charts/config/defaults.py
Normal file
@ -0,0 +1,460 @@
"""
Default Indicator Configurations and Parameters

This module provides comprehensive default indicator configurations
organized by categories, trading strategies, and common use cases.
"""

from typing import Dict, List, Any, Optional
from dataclasses import dataclass
from enum import Enum

from .indicator_defs import ChartIndicatorConfig, create_indicator_config, IndicatorType


class IndicatorCategory(str, Enum):
    """Categories for organizing indicators."""
    TREND = "trend"
    MOMENTUM = "momentum"
    VOLATILITY = "volatility"
    VOLUME = "volume"
    SUPPORT_RESISTANCE = "support_resistance"


class TradingStrategy(str, Enum):
    """Common trading strategy types."""
    SCALPING = "scalping"
    DAY_TRADING = "day_trading"
    SWING_TRADING = "swing_trading"
    POSITION_TRADING = "position_trading"
    MOMENTUM = "momentum"
    MEAN_REVERSION = "mean_reversion"


@dataclass
class IndicatorPreset:
    """Predefined indicator configuration preset."""
    name: str
    description: str
    category: IndicatorCategory
    recommended_timeframes: List[str]
    config: ChartIndicatorConfig


# Color schemes for different indicator categories
CATEGORY_COLORS = {
    IndicatorCategory.TREND: {
        'primary': '#007bff',     # Blue
        'secondary': '#28a745',   # Green
        'tertiary': '#17a2b8',    # Cyan
        'quaternary': '#6c757d'   # Gray
    },
    IndicatorCategory.MOMENTUM: {
        'primary': '#dc3545',     # Red
        'secondary': '#fd7e14',   # Orange
        'tertiary': '#e83e8c',    # Pink
        'quaternary': '#6f42c1'   # Purple
    },
    IndicatorCategory.VOLATILITY: {
        'primary': '#6f42c1',     # Purple
        'secondary': '#e83e8c',   # Pink
        'tertiary': '#20c997',    # Teal
        'quaternary': '#ffc107'   # Yellow
    }
}


def create_trend_indicators() -> Dict[str, IndicatorPreset]:
    """Create default trend indicator configurations."""
    trend_indicators = {}

    # Simple Moving Averages
    sma_configs = [
        (5, "Very Short Term", ['1m', '5m']),
        (10, "Short Term", ['5m', '15m']),
        (20, "Short-Medium Term", ['15m', '1h']),
        (50, "Medium Term", ['1h', '4h']),
        (100, "Long Term", ['4h', '1d']),
        (200, "Very Long Term", ['1d', '1w'])
    ]

    for period, desc, timeframes in sma_configs:
        config, _ = create_indicator_config(
            name=f"SMA ({period})",
            indicator_type="sma",
            parameters={"period": period},
            color=CATEGORY_COLORS[IndicatorCategory.TREND]['primary'] if period <= 20 else
                  CATEGORY_COLORS[IndicatorCategory.TREND]['secondary'] if period <= 50 else
                  CATEGORY_COLORS[IndicatorCategory.TREND]['tertiary'],
            line_width=2 if period <= 50 else 3
        )

        trend_indicators[f"sma_{period}"] = IndicatorPreset(
            name=f"SMA {period}",
            description=f"{desc} Simple Moving Average - {period} periods",
            category=IndicatorCategory.TREND,
            recommended_timeframes=timeframes,
            config=config
        )

    # Exponential Moving Averages
    ema_configs = [
        (5, "Very Short Term", ['1m', '5m']),
        (12, "Short Term (MACD Fast)", ['5m', '15m', '1h']),
        (21, "Fibonacci Short Term", ['15m', '1h']),
        (26, "Medium Term (MACD Slow)", ['1h', '4h']),
        (50, "Medium-Long Term", ['4h', '1d']),
        (100, "Long Term", ['1d', '1w']),
        (200, "Very Long Term", ['1d', '1w'])
    ]

    for period, desc, timeframes in ema_configs:
        config, _ = create_indicator_config(
            name=f"EMA ({period})",
            indicator_type="ema",
            parameters={"period": period},
            color=CATEGORY_COLORS[IndicatorCategory.TREND]['secondary'] if period <= 21 else
                  CATEGORY_COLORS[IndicatorCategory.TREND]['tertiary'] if period <= 50 else
                  CATEGORY_COLORS[IndicatorCategory.TREND]['quaternary'],
            line_width=2,
            line_style='dash' if period in [12, 26] else 'solid'
        )

        trend_indicators[f"ema_{period}"] = IndicatorPreset(
            name=f"EMA {period}",
            description=f"{desc} Exponential Moving Average - {period} periods",
            category=IndicatorCategory.TREND,
            recommended_timeframes=timeframes,
            config=config
        )

    return trend_indicators


def create_momentum_indicators() -> Dict[str, IndicatorPreset]:
    """Create default momentum indicator configurations."""
    momentum_indicators = {}

    # RSI configurations
    rsi_configs = [
        (7, "Fast RSI", ['1m', '5m', '15m']),
        (14, "Standard RSI", ['15m', '1h', '4h']),
        (21, "Slow RSI", ['1h', '4h', '1d']),
        (30, "Very Slow RSI", ['4h', '1d', '1w'])
    ]

    for period, desc, timeframes in rsi_configs:
        config, _ = create_indicator_config(
            name=f"RSI ({period})",
            indicator_type="rsi",
            parameters={"period": period},
            color=CATEGORY_COLORS[IndicatorCategory.MOMENTUM]['primary'] if period == 14 else
                  CATEGORY_COLORS[IndicatorCategory.MOMENTUM]['secondary'],
            line_width=2,
            subplot_height_ratio=0.25
        )

        momentum_indicators[f"rsi_{period}"] = IndicatorPreset(
            name=f"RSI {period}",
            description=f"{desc} - Relative Strength Index with {period} periods",
            category=IndicatorCategory.MOMENTUM,
            recommended_timeframes=timeframes,
            config=config
        )

    # MACD configurations
    macd_configs = [
        ((5, 13, 4), "Fast MACD", ['1m', '5m']),
        ((8, 17, 6), "Scalping MACD", ['5m', '15m']),
        ((12, 26, 9), "Standard MACD", ['15m', '1h', '4h']),
        ((19, 39, 13), "Slow MACD", ['1h', '4h', '1d']),
        ((26, 52, 18), "Very Slow MACD", ['4h', '1d', '1w'])
    ]

    for (fast, slow, signal), desc, timeframes in macd_configs:
        config, _ = create_indicator_config(
            name=f"MACD ({fast},{slow},{signal})",
            indicator_type="macd",
            parameters={
                "fast_period": fast,
                "slow_period": slow,
                "signal_period": signal
            },
            color=CATEGORY_COLORS[IndicatorCategory.MOMENTUM]['secondary'] if (fast, slow, signal) == (12, 26, 9) else
                  CATEGORY_COLORS[IndicatorCategory.MOMENTUM]['tertiary'],
            line_width=2,
            subplot_height_ratio=0.3
        )

        momentum_indicators[f"macd_{fast}_{slow}_{signal}"] = IndicatorPreset(
            name=f"MACD {fast}/{slow}/{signal}",
            description=f"{desc} - MACD with {fast}/{slow}/{signal} periods",
            category=IndicatorCategory.MOMENTUM,
            recommended_timeframes=timeframes,
            config=config
        )

    return momentum_indicators


def create_volatility_indicators() -> Dict[str, IndicatorPreset]:
    """Create default volatility indicator configurations."""
    volatility_indicators = {}

    # Bollinger Bands configurations
    bb_configs = [
        ((10, 1.5), "Tight Bollinger Bands", ['1m', '5m']),
        ((20, 2.0), "Standard Bollinger Bands", ['15m', '1h', '4h']),
        ((20, 2.5), "Wide Bollinger Bands", ['1h', '4h']),
        ((50, 2.0), "Long-term Bollinger Bands", ['4h', '1d', '1w'])
    ]

    for (period, std_dev), desc, timeframes in bb_configs:
        config, _ = create_indicator_config(
            name=f"BB ({period}, {std_dev})",
            indicator_type="bollinger_bands",
            parameters={"period": period, "std_dev": std_dev},
            color=CATEGORY_COLORS[IndicatorCategory.VOLATILITY]['primary'] if (period, std_dev) == (20, 2.0) else
                  CATEGORY_COLORS[IndicatorCategory.VOLATILITY]['secondary'],
            line_width=1,
            opacity=0.7
        )

        volatility_indicators[f"bb_{period}_{int(std_dev*10)}"] = IndicatorPreset(
            name=f"Bollinger Bands {period}/{std_dev}",
            description=f"{desc} - {period} period with {std_dev} standard deviations",
            category=IndicatorCategory.VOLATILITY,
            recommended_timeframes=timeframes,
            config=config
        )

    return volatility_indicators


def create_strategy_presets() -> Dict[str, Dict[str, Any]]:
    """Create predefined indicator combinations for common trading strategies."""

    strategy_presets = {
        TradingStrategy.SCALPING.value: {
            "name": "Scalping Strategy",
            "description": "Fast indicators for 1-5 minute scalping",
            "timeframes": ["1m", "5m"],
            "indicators": [
                "ema_5", "ema_12", "ema_21",
                "rsi_7", "macd_5_13_4",
                "bb_10_15"
            ]
        },

        TradingStrategy.DAY_TRADING.value: {
            "name": "Day Trading Strategy",
            "description": "Balanced indicators for intraday trading",
            "timeframes": ["5m", "15m", "1h"],
            "indicators": [
                "sma_20", "ema_12", "ema_26",
                "rsi_14", "macd_12_26_9",
                "bb_20_20"
            ]
        },

        TradingStrategy.SWING_TRADING.value: {
            "name": "Swing Trading Strategy",
            "description": "Medium-term indicators for swing trading",
            "timeframes": ["1h", "4h", "1d"],
            "indicators": [
                "sma_50", "ema_21", "ema_50",
                "rsi_14", "rsi_21", "macd_12_26_9",
                "bb_20_20"
            ]
        },

        TradingStrategy.POSITION_TRADING.value: {
            "name": "Position Trading Strategy",
            "description": "Long-term indicators for position trading",
            "timeframes": ["4h", "1d", "1w"],
            "indicators": [
                "sma_100", "sma_200", "ema_50", "ema_100",
                "rsi_21", "macd_19_39_13",
                "bb_50_20"
            ]
        },

        TradingStrategy.MOMENTUM.value: {
            "name": "Momentum Strategy",
            "description": "Momentum-focused indicators",
            "timeframes": ["15m", "1h", "4h"],
            "indicators": [
                "ema_12", "ema_26",
                "rsi_7", "rsi_14", "macd_8_17_6", "macd_12_26_9"
            ]
        },

        TradingStrategy.MEAN_REVERSION.value: {
            "name": "Mean Reversion Strategy",
            "description": "Indicators for mean reversion trading",
            "timeframes": ["15m", "1h", "4h"],
            "indicators": [
                "sma_20", "sma_50", "bb_20_20", "bb_20_25",
                "rsi_14", "rsi_21"
            ]
        }
    }

    return strategy_presets
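
# Shape sketch (illustrative only): each preset maps a strategy value to a
# metadata dict whose "indicators" list uses the keys built above, i.e.
# f"sma_{period}", f"macd_{fast}_{slow}_{signal}", f"bb_{period}_{int(std_dev*10)}":
#
#   presets = create_strategy_presets()
#   presets["scalping"]["indicators"]  # ['ema_5', 'ema_12', ..., 'bb_10_15']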


def get_all_default_indicators() -> Dict[str, IndicatorPreset]:
    """
    Get all default indicator configurations.

    Returns:
        Dictionary mapping indicator names to their preset configurations
    """
    all_indicators = {}

    # Combine all indicator categories
    all_indicators.update(create_trend_indicators())
    all_indicators.update(create_momentum_indicators())
    all_indicators.update(create_volatility_indicators())

    return all_indicators


def get_indicators_by_category(category: IndicatorCategory) -> Dict[str, IndicatorPreset]:
    """
    Get default indicators filtered by category.

    Args:
        category: Indicator category to filter by

    Returns:
        Dictionary of indicators in the specified category
    """
    all_indicators = get_all_default_indicators()
    return {name: preset for name, preset in all_indicators.items()
            if preset.category == category}


def get_indicators_for_timeframe(timeframe: str) -> Dict[str, IndicatorPreset]:
    """
    Get indicators recommended for a specific timeframe.

    Args:
        timeframe: Timeframe string (e.g., '1m', '5m', '1h', '4h', '1d')

    Returns:
        Dictionary of indicators suitable for the timeframe
    """
    all_indicators = get_all_default_indicators()
    return {name: preset for name, preset in all_indicators.items()
            if timeframe in preset.recommended_timeframes}
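
# Usage sketch (illustrative only): the filter helpers compose naturally,
# e.g. trend presets that are also recommended for the 1h timeframe:
#
#   trend = get_indicators_by_category(IndicatorCategory.TREND)
#   hourly = get_indicators_for_timeframe("1h")
#   hourly_trend = {name: p for name, p in trend.items() if name in hourly}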


def get_strategy_indicators(strategy: TradingStrategy) -> List[str]:
    """
    Get indicator names for a specific trading strategy.

    Args:
        strategy: Trading strategy type

    Returns:
        List of indicator names for the strategy
    """
    presets = create_strategy_presets()
    strategy_config = presets.get(strategy.value, {})
    return strategy_config.get("indicators", [])


def get_strategy_info(strategy: TradingStrategy) -> Dict[str, Any]:
    """
    Get complete information about a trading strategy.

    Args:
        strategy: Trading strategy type

    Returns:
        Dictionary with strategy details including indicators and timeframes
    """
    presets = create_strategy_presets()
    return presets.get(strategy.value, {})
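
# Usage sketch (illustrative only):
#
#   info = get_strategy_info(TradingStrategy.SWING_TRADING)
#   info["timeframes"]                                  # ['1h', '4h', '1d']
#   get_strategy_indicators(TradingStrategy.SCALPING)   # ['ema_5', 'ema_12', ...]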


def create_custom_preset(
    name: str,
    description: str,
    category: IndicatorCategory,
    indicator_configs: List[Dict[str, Any]],
    recommended_timeframes: Optional[List[str]] = None
) -> Dict[str, IndicatorPreset]:
    """
    Create custom indicator presets.

    Args:
        name: Preset name
        description: Preset description
        category: Indicator category
        indicator_configs: List of indicator configuration dictionaries
        recommended_timeframes: Optional list of recommended timeframes

    Returns:
        Dictionary of created indicator presets
    """
    custom_presets = {}

    for i, config_data in enumerate(indicator_configs):
        try:
            config, errors = create_indicator_config(**config_data)
            if errors:
                continue

            preset_name = f"{name.lower().replace(' ', '_')}_{i}"
            custom_presets[preset_name] = IndicatorPreset(
                name=f"{name} {i+1}",
                description=description,
                category=category,
                recommended_timeframes=recommended_timeframes or ["15m", "1h", "4h"],
                config=config
            )

        except Exception:
            # Skip configurations that fail to build; callers can diff the
            # returned dict against their input list to find rejected entries.
            continue

    return custom_presets


def get_available_strategies() -> List[Dict[str, str]]:
    """
    Get list of available trading strategies.

    Returns:
        List of dictionaries with strategy information
    """
    presets = create_strategy_presets()
    return [
        {
            "value": strategy,
            "name": info["name"],
            "description": info["description"],
            "timeframes": ", ".join(info["timeframes"])
        }
        for strategy, info in presets.items()
    ]


def get_available_categories() -> List[Dict[str, str]]:
    """
    Get list of available indicator categories.

    Returns:
        List of dictionaries with category information
    """
    return [
        {
            "value": category.value,
            "name": category.value.replace("_", " ").title(),
            "description": f"Indicators for {category.value.replace('_', ' ')} analysis"
        }
        for category in IndicatorCategory
    ]
605
components/charts/config/error_handling.py
Normal file
605
components/charts/config/error_handling.py
Normal file
@ -0,0 +1,605 @@
"""
Enhanced Error Handling and User Guidance System

This module provides comprehensive error handling for missing strategies and indicators,
with clear error messages, suggestions, and recovery guidance rather than silent fallbacks.
"""

from typing import Dict, List, Optional, Set, Tuple, Any
from dataclasses import dataclass, field
from enum import Enum
import difflib
from datetime import datetime

from .indicator_defs import IndicatorType, ChartIndicatorConfig
from .defaults import get_all_default_indicators, IndicatorCategory, TradingStrategy
from .strategy_charts import StrategyChartConfig
from .example_strategies import get_all_example_strategies
from utils.logger import get_logger

# Initialize logger
logger = get_logger("default_logger")


class ErrorSeverity(str, Enum):
    """Severity levels for configuration errors."""
    CRITICAL = "critical"  # Cannot proceed at all
    HIGH = "high"          # Major functionality missing
    MEDIUM = "medium"      # Some features unavailable
    LOW = "low"            # Minor issues, mostly cosmetic


class ErrorCategory(str, Enum):
    """Categories of configuration errors."""
    MISSING_STRATEGY = "missing_strategy"
    MISSING_INDICATOR = "missing_indicator"
    INVALID_PARAMETER = "invalid_parameter"
    DEPENDENCY_MISSING = "dependency_missing"
    CONFIGURATION_CORRUPT = "configuration_corrupt"


@dataclass
class ConfigurationError:
    """Detailed configuration error with guidance."""
    category: ErrorCategory
    severity: ErrorSeverity
    message: str
    field_path: str = ""
    missing_item: str = ""
    suggestions: List[str] = field(default_factory=list)
    alternatives: List[str] = field(default_factory=list)
    recovery_steps: List[str] = field(default_factory=list)
    context: Dict[str, Any] = field(default_factory=dict)

    def __str__(self) -> str:
        """String representation of the error."""
        severity_emoji = {
            ErrorSeverity.CRITICAL: "🚨",
            ErrorSeverity.HIGH: "❌",
            ErrorSeverity.MEDIUM: "⚠️",
            ErrorSeverity.LOW: "ℹ️"
        }

        result = f"{severity_emoji.get(self.severity, '❓')} {self.message}"

        if self.suggestions:
            result += f"\n  💡 Suggestions: {', '.join(self.suggestions)}"

        if self.alternatives:
            result += f"\n  🔄 Alternatives: {', '.join(self.alternatives)}"

        if self.recovery_steps:
            result += "\n  🔧 Recovery steps:"
            for step in self.recovery_steps:
                result += f"\n    • {step}"

        return result
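
# Output sketch (illustrative only): str(error) for a missing-indicator error
# renders roughly as:
#
#   ❌ Indicator 'rsi_15' not found
#     💡 Suggestions: Did you mean: rsi_14?
#     🔄 Alternatives: rsi_14
#     🔧 Recovery steps:
#       • Try using: rsi_14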


@dataclass
class ErrorReport:
    """Comprehensive error report with categorized issues."""
    is_usable: bool
    errors: List[ConfigurationError] = field(default_factory=list)
    missing_strategies: Set[str] = field(default_factory=set)
    missing_indicators: Set[str] = field(default_factory=set)
    report_time: datetime = field(default_factory=datetime.now)

    def add_error(self, error: ConfigurationError) -> None:
        """Add an error to the report."""
        self.errors.append(error)

        # Track missing items
        if error.category == ErrorCategory.MISSING_STRATEGY:
            self.missing_strategies.add(error.missing_item)
        elif error.category == ErrorCategory.MISSING_INDICATOR:
            self.missing_indicators.add(error.missing_item)

        # Update usability based on severity
        if error.severity in [ErrorSeverity.CRITICAL, ErrorSeverity.HIGH]:
            self.is_usable = False

    def get_critical_errors(self) -> List[ConfigurationError]:
        """Get only critical errors that prevent usage."""
        return [e for e in self.errors if e.severity == ErrorSeverity.CRITICAL]

    def get_high_priority_errors(self) -> List[ConfigurationError]:
        """Get high priority errors that significantly impact functionality."""
        return [e for e in self.errors if e.severity == ErrorSeverity.HIGH]

    def summary(self) -> str:
        """Get a summary of the error report."""
        if not self.errors:
            return "✅ No configuration errors found"

        critical = len(self.get_critical_errors())
        high = len(self.get_high_priority_errors())
        total = len(self.errors)

        status = "❌ Cannot proceed" if not self.is_usable else "⚠️ Has issues but usable"

        return f"{status} - {total} errors ({critical} critical, {high} high priority)"


class ConfigurationErrorHandler:
    """Enhanced error handler for configuration issues."""

    def __init__(self):
        """Initialize the error handler."""
        self.available_indicators = get_all_default_indicators()
        self.available_strategies = get_all_example_strategies()

        # Cache indicator names for fuzzy matching
        self.indicator_names = set(self.available_indicators.keys())
        self.strategy_names = set(self.available_strategies.keys())

        logger.info(f"Error Handling: Error handler initialized with {len(self.indicator_names)} indicators and {len(self.strategy_names)} strategies")

    def validate_strategy_exists(self, strategy_name: str) -> Optional[ConfigurationError]:
        """Check if a strategy exists and provide guidance if not."""
        if strategy_name in self.strategy_names:
            return None

        # Find similar strategy names
        similar = difflib.get_close_matches(
            strategy_name,
            self.strategy_names,
            n=3,
            cutoff=0.6
        )

        suggestions = []
        alternatives = list(similar) if similar else []
        recovery_steps = []

        if similar:
            suggestions.append(f"Did you mean one of: {', '.join(similar)}?")
            recovery_steps.append(f"Try using: {similar[0]}")
        else:
            suggestions.append("Check available strategies with get_all_example_strategies()")
            recovery_steps.append("List available strategies: get_strategy_summary()")

        # Add general recovery steps
        recovery_steps.extend([
            "Create a custom strategy with create_custom_strategy_config()",
            "Use a pre-built strategy like 'ema_crossover' or 'swing_trading'"
        ])

        return ConfigurationError(
            category=ErrorCategory.MISSING_STRATEGY,
            severity=ErrorSeverity.CRITICAL,
            message=f"Strategy '{strategy_name}' not found",
            missing_item=strategy_name,
            suggestions=suggestions,
            alternatives=alternatives,
            recovery_steps=recovery_steps,
            context={"available_count": len(self.strategy_names)}
        )

    def validate_indicator_exists(self, indicator_name: str) -> Optional[ConfigurationError]:
        """Check if an indicator exists and provide guidance if not."""
        if indicator_name in self.indicator_names:
            return None

        # Find similar indicator names
        similar = difflib.get_close_matches(
            indicator_name,
            self.indicator_names,
            n=3,
            cutoff=0.6
        )

        suggestions = []
        alternatives = list(similar) if similar else []
        recovery_steps = []

        if similar:
            suggestions.append(f"Did you mean: {', '.join(similar)}?")
            recovery_steps.append(f"Try using: {similar[0]}")
        else:
            # Suggest by category if there are no close matches
            suggestions.append("Check available indicators with get_all_default_indicators()")

            # Try to guess the category and suggest alternatives
            if "sma" in indicator_name.lower() or "ema" in indicator_name.lower():
                trend_indicators = [name for name in self.indicator_names if name.startswith(("sma_", "ema_"))]
                alternatives.extend(trend_indicators[:3])
                suggestions.append("For trend indicators, try SMA or EMA with different periods")
            elif "rsi" in indicator_name.lower():
                rsi_indicators = [name for name in self.indicator_names if name.startswith("rsi_")]
                alternatives.extend(rsi_indicators)
                suggestions.append("For RSI, try rsi_14, rsi_7, or rsi_21")
            elif "macd" in indicator_name.lower():
                macd_indicators = [name for name in self.indicator_names if name.startswith("macd_")]
                alternatives.extend(macd_indicators)
                suggestions.append("For MACD, try macd_12_26_9 or other period combinations")
            elif "bb" in indicator_name.lower() or "bollinger" in indicator_name.lower():
                bb_indicators = [name for name in self.indicator_names if name.startswith("bb_")]
                alternatives.extend(bb_indicators)
                suggestions.append("For Bollinger Bands, try bb_20_20 or bb_20_25")

        # Add general recovery steps
        recovery_steps.extend([
            "List available indicators by category: get_indicators_by_category()",
            "Create custom indicator with create_indicator_config()",
            "Remove this indicator from your configuration if not essential"
        ])

        # Determine severity based on indicator type
        severity = ErrorSeverity.HIGH
        if indicator_name.startswith(("sma_", "ema_")):
            severity = ErrorSeverity.CRITICAL  # Trend indicators are often essential

        return ConfigurationError(
            category=ErrorCategory.MISSING_INDICATOR,
            severity=severity,
            message=f"Indicator '{indicator_name}' not found",
            missing_item=indicator_name,
            suggestions=suggestions,
            alternatives=alternatives,
            recovery_steps=recovery_steps,
            context={"available_count": len(self.indicator_names)}
        )

    def validate_strategy_configuration(self, config: StrategyChartConfig) -> ErrorReport:
        """Comprehensively validate a strategy configuration."""
        report = ErrorReport(is_usable=True)

        # Validate overlay indicators
        for indicator in config.overlay_indicators:
            error = self.validate_indicator_exists(indicator)
            if error:
                error.field_path = f"overlay_indicators[{indicator}]"
                report.add_error(error)

        # Validate subplot indicators
        for i, subplot in enumerate(config.subplot_configs):
            for indicator in subplot.indicators:
                error = self.validate_indicator_exists(indicator)
                if error:
                    error.field_path = f"subplot_configs[{i}].indicators[{indicator}]"
                    report.add_error(error)

        # Check for empty configuration
        total_indicators = len(config.overlay_indicators) + sum(
            len(subplot.indicators) for subplot in config.subplot_configs
        )

        if total_indicators == 0:
            report.add_error(ConfigurationError(
                category=ErrorCategory.CONFIGURATION_CORRUPT,
                severity=ErrorSeverity.CRITICAL,
                message="Configuration has no indicators defined",
                suggestions=[
                    "Add at least one overlay indicator (e.g., 'ema_12', 'sma_20')",
                    "Add subplot indicators for momentum analysis (e.g., 'rsi_14')"
                ],
                recovery_steps=[
                    "Use a pre-built strategy: create_ema_crossover_strategy()",
                    "Add basic indicators: ['ema_12', 'ema_26'] for trend analysis",
                    "Add RSI subplot for momentum: subplot with 'rsi_14'"
                ]
            ))

        # Validate strategy consistency
        if hasattr(config, 'strategy_type'):
            consistency_error = self._validate_strategy_consistency(config)
            if consistency_error:
                report.add_error(consistency_error)

        return report

    def _validate_strategy_consistency(self, config: StrategyChartConfig) -> Optional[ConfigurationError]:
        """Validate that the strategy configuration is consistent with the strategy type."""
        strategy_type = config.strategy_type
        timeframes = config.timeframes

        # Define expected timeframes for different strategies
        expected_timeframes = {
            TradingStrategy.SCALPING: ["1m", "5m"],
            TradingStrategy.DAY_TRADING: ["5m", "15m", "1h", "4h"],
            TradingStrategy.SWING_TRADING: ["1h", "4h", "1d"],
            TradingStrategy.MOMENTUM: ["5m", "15m", "1h"],
            TradingStrategy.MEAN_REVERSION: ["15m", "1h", "4h"]
        }

        if strategy_type in expected_timeframes:
            expected = expected_timeframes[strategy_type]
            overlap = set(timeframes) & set(expected)

            if not overlap:
                return ConfigurationError(
                    category=ErrorCategory.INVALID_PARAMETER,
                    severity=ErrorSeverity.MEDIUM,
                    message=f"Timeframes {timeframes} may not be optimal for {strategy_type.value} strategy",
                    field_path="timeframes",
                    suggestions=[f"Consider using timeframes: {', '.join(expected)}"],
                    alternatives=expected,
                    recovery_steps=[
                        f"Update timeframes to include: {expected[0]}",
                        "Or change the strategy type to match the timeframes"
                    ]
                )

        return None

    def suggest_alternatives_for_missing_indicators(self, missing_indicators: Set[str]) -> Dict[str, List[str]]:
        """Suggest alternative indicators for missing ones."""
        suggestions = {}

        for indicator in missing_indicators:
            alternatives = []

            # Extract the base type and period if possible
            parts = indicator.split('_')
            if len(parts) >= 2:
                base_type = parts[0]

                # Find similar indicators of the same type
                similar_type = [name for name in self.indicator_names
                                if name.startswith(f"{base_type}_")]
                alternatives.extend(similar_type[:3])

                # If no similar type, suggest by category
                if not similar_type:
                    if base_type in ["sma", "ema"]:
                        alternatives = ["sma_20", "ema_12", "ema_26"]
                    elif base_type == "rsi":
                        alternatives = ["rsi_14", "rsi_7", "rsi_21"]
                    elif base_type == "macd":
                        alternatives = ["macd_12_26_9", "macd_8_17_6"]
                    elif base_type == "bb":
                        alternatives = ["bb_20_20", "bb_20_25"]

            if alternatives:
                suggestions[indicator] = alternatives

        return suggestions

    def generate_recovery_configuration(self, config: StrategyChartConfig, error_report: ErrorReport) -> Tuple[Optional[StrategyChartConfig], List[str]]:
        """Generate a recovery configuration with working alternatives."""
        if not error_report.missing_indicators:
            return config, []

        recovery_notes = []
        recovery_config = StrategyChartConfig(
            strategy_name=f"{config.strategy_name} (Recovery)",
            strategy_type=config.strategy_type,
            description=f"{config.description} (Auto-recovered from missing indicators)",
            timeframes=config.timeframes,
            layout=config.layout,
            main_chart_height=config.main_chart_height,
            overlay_indicators=[],
            subplot_configs=[],
            chart_style=config.chart_style
        )

        # Replace missing overlay indicators
        for indicator in config.overlay_indicators:
            if indicator in error_report.missing_indicators:
                # Find a replacement
                alternatives = self.suggest_alternatives_for_missing_indicators({indicator})
                if indicator in alternatives and alternatives[indicator]:
                    replacement = alternatives[indicator][0]
                    recovery_config.overlay_indicators.append(replacement)
                    recovery_notes.append(f"Replaced '{indicator}' with '{replacement}'")
                else:
                    recovery_notes.append(f"Could not find replacement for '{indicator}' - removed")
            else:
                recovery_config.overlay_indicators.append(indicator)

        # Handle subplot configurations
        for subplot in config.subplot_configs:
            recovered_subplot = subplot.__class__(
                subplot_type=subplot.subplot_type,
                height_ratio=subplot.height_ratio,
                indicators=[],
                title=subplot.title,
                y_axis_label=subplot.y_axis_label,
                show_grid=subplot.show_grid,
                show_legend=subplot.show_legend
            )

            for indicator in subplot.indicators:
                if indicator in error_report.missing_indicators:
                    alternatives = self.suggest_alternatives_for_missing_indicators({indicator})
                    if indicator in alternatives and alternatives[indicator]:
                        replacement = alternatives[indicator][0]
                        recovered_subplot.indicators.append(replacement)
                        recovery_notes.append(f"In subplot: Replaced '{indicator}' with '{replacement}'")
                    else:
                        recovery_notes.append(f"In subplot: Could not find replacement for '{indicator}' - removed")
                else:
                    recovered_subplot.indicators.append(indicator)

            # Only add the subplot if it still has indicators
            if recovered_subplot.indicators:
                recovery_config.subplot_configs.append(recovered_subplot)
            else:
                recovery_notes.append(f"Removed empty subplot: {subplot.subplot_type.value}")

        # Add fallback indicators if the configuration is empty
        if not recovery_config.overlay_indicators and not any(
            subplot.indicators for subplot in recovery_config.subplot_configs
        ):
            recovery_config.overlay_indicators = ["ema_12", "ema_26", "sma_20"]
            recovery_notes.append("Added basic trend indicators: EMA 12, EMA 26, SMA 20")

            # Add a basic RSI subplot
            from .strategy_charts import SubplotConfig, SubplotType
            recovery_config.subplot_configs.append(
                SubplotConfig(
                    subplot_type=SubplotType.RSI,
                    height_ratio=0.2,
                    indicators=["rsi_14"],
                    title="RSI"
                )
            )
            recovery_notes.append("Added basic RSI subplot")

        return recovery_config, recovery_notes
||||
def validate_configuration_strict(config: StrategyChartConfig) -> ErrorReport:
|
||||
"""
|
||||
Strict validation that fails on any missing dependencies.
|
||||
|
||||
Args:
|
||||
config: Strategy configuration to validate
|
||||
|
||||
Returns:
|
||||
ErrorReport with detailed error information
|
||||
"""
|
||||
handler = ConfigurationErrorHandler()
|
||||
return handler.validate_strategy_configuration(config)
|
||||
|
||||
|
||||
def validate_strategy_name(strategy_name: str) -> Optional[ConfigurationError]:
|
||||
"""
|
||||
Validate that a strategy name exists.
|
||||
|
||||
Args:
|
||||
strategy_name: Name of the strategy to validate
|
||||
|
||||
Returns:
|
||||
ConfigurationError if strategy not found, None otherwise
|
||||
"""
|
||||
handler = ConfigurationErrorHandler()
|
||||
return handler.validate_strategy_exists(strategy_name)
|
||||
|
||||
|
||||
def get_indicator_suggestions(partial_name: str, limit: int = 5) -> List[str]:
|
||||
"""
|
||||
Get indicator suggestions based on partial name.
|
||||
|
||||
Args:
|
||||
partial_name: Partial indicator name
|
||||
limit: Maximum number of suggestions
|
||||
|
||||
Returns:
|
||||
List of suggested indicator names
|
||||
"""
|
||||
handler = ConfigurationErrorHandler()
|
||||
|
||||
# Fuzzy match against available indicators
|
||||
matches = difflib.get_close_matches(
|
||||
partial_name,
|
||||
handler.indicator_names,
|
||||
n=limit,
|
||||
cutoff=0.3
|
||||
)
|
||||
|
||||
# If no fuzzy matches, try substring matching
|
||||
if not matches:
|
||||
substring_matches = [
|
||||
name for name in handler.indicator_names
|
||||
if partial_name.lower() in name.lower()
|
||||
]
|
||||
matches = substring_matches[:limit]
|
||||
|
||||
return matches
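
# Usage sketch: a close-but-misspelled name should resolve via the difflib pass,
# while a bare prefix falls through to the substring pass. Actual results depend
# on which indicators the handler has registered.
#
#     get_indicator_suggestions("rsi14")  # fuzzy match -> likely ["rsi_14", ...]
#     get_indicator_suggestions("ema")    # substring pass -> e.g. ["ema_12", "ema_26", ...]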


def get_strategy_suggestions(partial_name: str, limit: int = 5) -> List[str]:
    """
    Get strategy suggestions based on partial name.

    Args:
        partial_name: Partial strategy name
        limit: Maximum number of suggestions

    Returns:
        List of suggested strategy names
    """
    handler = ConfigurationErrorHandler()

    matches = difflib.get_close_matches(
        partial_name,
        handler.strategy_names,
        n=limit,
        cutoff=0.3
    )

    if not matches:
        substring_matches = [
            name for name in handler.strategy_names
            if partial_name.lower() in name.lower()
        ]
        matches = substring_matches[:limit]

    return matches


def check_configuration_health(config: StrategyChartConfig) -> Dict[str, Any]:
    """
    Perform a comprehensive health check on a configuration.

    Args:
        config: Strategy configuration to check

    Returns:
        Dictionary with health check results
    """
    handler = ConfigurationErrorHandler()
    error_report = handler.validate_strategy_configuration(config)

    # Count indicators by category
    indicator_counts = {}
    all_indicators = config.overlay_indicators + [
        indicator for subplot in config.subplot_configs
        for indicator in subplot.indicators
    ]

    for indicator in all_indicators:
        if indicator in handler.available_indicators:
            category = handler.available_indicators[indicator].category.value
            indicator_counts[category] = indicator_counts.get(category, 0) + 1

    return {
        "is_healthy": error_report.is_usable and len(error_report.errors) == 0,
        "error_report": error_report,
        "total_indicators": len(all_indicators),
        "missing_indicators": len(error_report.missing_indicators),
        "indicator_by_category": indicator_counts,
        "has_trend_indicators": "trend" in indicator_counts,
        "has_momentum_indicators": "momentum" in indicator_counts,
        "recommendations": _generate_health_recommendations(config, error_report, indicator_counts)
    }
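
# Usage sketch (assumes `config` is a StrategyChartConfig built elsewhere):
#
#     health = check_configuration_health(config)
#     if not health["is_healthy"]:
#         for tip in health["recommendations"]:
#             logger.info(f"Config health: {tip}")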


def _generate_health_recommendations(
    config: StrategyChartConfig,
    error_report: ErrorReport,
    indicator_counts: Dict[str, int]
) -> List[str]:
    """Generate health recommendations for a configuration."""
    recommendations = []

    # Missing indicators
    if error_report.missing_indicators:
        recommendations.append(f"Fix {len(error_report.missing_indicators)} missing indicators")

    # Category balance
    if not indicator_counts.get("trend", 0):
        recommendations.append("Add trend indicators (SMA, EMA) for direction analysis")

    if not indicator_counts.get("momentum", 0):
        recommendations.append("Add momentum indicators (RSI, MACD) for entry timing")

    # Strategy-specific recommendations
    if config.strategy_type == TradingStrategy.SCALPING:
        if "1m" not in config.timeframes and "5m" not in config.timeframes:
            recommendations.append("Add short timeframes (1m, 5m) for scalping strategy")

    elif config.strategy_type == TradingStrategy.SWING_TRADING:
        if not any(tf in config.timeframes for tf in ["4h", "1d"]):
            recommendations.append("Add longer timeframes (4h, 1d) for swing trading")

    # Performance recommendations
    total_indicators = sum(indicator_counts.values())
    if total_indicators > 10:
        recommendations.append("Consider reducing indicators for better performance")
    elif total_indicators < 3:
        recommendations.append("Add more indicators for comprehensive analysis")

    return recommendations
651
components/charts/config/example_strategies.py
Normal file
@ -0,0 +1,651 @@
"""
|
||||
Example Strategy Configurations
|
||||
|
||||
This module provides real-world trading strategy configurations that demonstrate
|
||||
how to combine indicators for specific trading approaches like EMA crossover,
|
||||
momentum trading, and other popular strategies.
|
||||
"""
|
||||
|
||||
from typing import Dict, List, Optional
|
||||
from dataclasses import dataclass
|
||||
from datetime import datetime
|
||||
|
||||
from .strategy_charts import (
|
||||
StrategyChartConfig,
|
||||
SubplotConfig,
|
||||
ChartStyle,
|
||||
ChartLayout,
|
||||
SubplotType
|
||||
)
|
||||
from .defaults import TradingStrategy
|
||||
from utils.logger import get_logger
|
||||
|
||||
# Initialize logger
|
||||
logger = get_logger("example_strategies")
|
||||
|
||||
|
||||
@dataclass
|
||||
class StrategyExample:
|
||||
"""Represents an example trading strategy with metadata."""
|
||||
config: StrategyChartConfig
|
||||
description: str
|
||||
author: str = "TCPDashboard"
|
||||
difficulty: str = "Beginner" # Beginner, Intermediate, Advanced
|
||||
expected_return: Optional[str] = None
|
||||
risk_level: str = "Medium" # Low, Medium, High
|
||||
market_conditions: List[str] = None # Trending, Sideways, Volatile
|
||||
notes: List[str] = None
|
||||
references: List[str] = None
|
||||
|
||||
def __post_init__(self):
|
||||
if self.market_conditions is None:
|
||||
self.market_conditions = ["Trending"]
|
||||
if self.notes is None:
|
||||
self.notes = []
|
||||
if self.references is None:
|
||||
self.references = []
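
    # Construction sketch: only `config` and `description` are required; the
    # __post_init__ hook above fills the remaining list fields. Here
    # `some_config` stands in for any StrategyChartConfig instance:
    #
    #     example = StrategyExample(config=some_config, description="demo")
    #     example.market_conditions  # -> ["Trending"] (default)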


def create_ema_crossover_strategy() -> StrategyExample:
    """
    Create EMA crossover strategy configuration.

    Classic trend-following strategy using fast and slow EMA crossovers
    for entry and exit signals.
    """
    config = StrategyChartConfig(
        strategy_name="EMA Crossover Strategy",
        strategy_type=TradingStrategy.DAY_TRADING,
        description="Trend-following strategy using EMA crossovers for entry/exit signals",
        timeframes=["15m", "1h", "4h"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.65,
        overlay_indicators=[
            "ema_12",   # Fast EMA
            "ema_26",   # Slow EMA
            "ema_50",   # Trend filter
            "bb_20_20"  # Bollinger Bands for volatility context
        ],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.15,
                indicators=["rsi_14"],
                title="RSI Momentum",
                y_axis_label="RSI",
                show_grid=True
            ),
            SubplotConfig(
                subplot_type=SubplotType.MACD,
                height_ratio=0.2,
                indicators=["macd_12_26_9"],
                title="MACD Confirmation",
                y_axis_label="MACD",
                show_grid=True
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=12,
            candlestick_up_color="#26a69a",
            candlestick_down_color="#ef5350",
            show_volume=True,
            show_grid=True
        ),
        tags=["trend-following", "ema-crossover", "day-trading", "intermediate"]
    )

    return StrategyExample(
        config=config,
        description="""
        EMA Crossover Strategy uses the crossing of fast (12-period) and slow (26-period)
        exponential moving averages to generate buy and sell signals. The 50-period EMA
        acts as a trend filter to avoid false signals in sideways markets.

        Entry Rules:
        - Buy when fast EMA crosses above slow EMA and price is above 50 EMA
        - RSI should be above 30 (not oversold)
        - MACD line should be above signal line

        Exit Rules:
        - Sell when fast EMA crosses below slow EMA
        - Or when RSI reaches overbought levels (>70)
        - Stop loss: 2% below entry or below recent swing low
        """,
        author="TCPDashboard Team",
        difficulty="Intermediate",
        expected_return="8-15% monthly (in trending markets)",
        risk_level="Medium",
        market_conditions=["Trending", "Breakout"],
        notes=[
            "Works best in trending markets",
            "Can produce whipsaws in sideways markets",
            "Use 50 EMA as trend filter to reduce false signals",
            "Consider volume confirmation for stronger signals",
            "Best timeframes: 15m, 1h, 4h for day trading"
        ],
        references=[
            "Moving Average Convergence Divergence - Gerald Appel",
            "Technical Analysis of Financial Markets - John Murphy"
        ]
    )
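
# Quick sanity sketch for the factory above (illustrative):
#
#     example = create_ema_crossover_strategy()
#     assert example.config.main_chart_height == 0.65
#     # subplot ratios 0.15 + 0.2 keep the total height budget at exactly 1.0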


def create_momentum_breakout_strategy() -> StrategyExample:
    """
    Create momentum breakout strategy configuration.

    Strategy focused on capturing momentum moves using RSI,
    MACD, and volume confirmation.
    """
    config = StrategyChartConfig(
        strategy_name="Momentum Breakout Strategy",
        strategy_type=TradingStrategy.MOMENTUM,
        description="Momentum strategy capturing strong directional moves with volume confirmation",
        timeframes=["5m", "15m", "1h"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.6,
        overlay_indicators=[
            "ema_8",    # Fast trend
            "ema_21",   # Medium trend
            "bb_20_25"  # Volatility bands (wider for breakouts)
        ],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.15,
                indicators=["rsi_7", "rsi_14"],  # Fast and standard RSI
                title="RSI Momentum (7 & 14)",
                y_axis_label="RSI",
                show_grid=True
            ),
            SubplotConfig(
                subplot_type=SubplotType.MACD,
                height_ratio=0.15,
                indicators=["macd_8_17_6"],  # Faster MACD for momentum
                title="MACD Fast",
                y_axis_label="MACD",
                show_grid=True
            ),
            SubplotConfig(
                subplot_type=SubplotType.VOLUME,
                height_ratio=0.1,
                indicators=[],
                title="Volume Confirmation",
                y_axis_label="Volume",
                show_grid=True
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=11,
            candlestick_up_color="#00d4aa",
            candlestick_down_color="#fe6a85",
            show_volume=True,
            show_grid=True
        ),
        tags=["momentum", "breakout", "volume", "short-term"]
    )

    return StrategyExample(
        config=config,
        description="""
        Momentum Breakout Strategy captures strong directional moves by identifying
        momentum acceleration with volume confirmation. Uses faster indicators
        to catch moves early while avoiding false breakouts.

        Entry Rules:
        - Price breaks above/below Bollinger Bands with strong volume
        - RSI-7 > 70 for bullish momentum (< 30 for bearish)
        - MACD line crosses above signal line with rising histogram
        - Volume should be 1.5x average volume
        - EMA-8 above EMA-21 for trend confirmation

        Exit Rules:
        - RSI-7 reaches extreme levels (>80 or <20)
        - MACD histogram starts declining
        - Volume drops significantly
        - Price returns inside Bollinger Bands
        - Stop loss: 1.5% or below previous swing point
        """,
        author="TCPDashboard Team",
        difficulty="Advanced",
        expected_return="15-25% monthly (high volatility)",
        risk_level="High",
        market_conditions=["Volatile", "Breakout", "High Volume"],
        notes=[
            "Requires quick execution and tight risk management",
            "Best during high volatility periods",
            "Monitor volume closely for confirmation",
            "Use smaller position sizes due to higher risk",
            "Consider market hours for better volume",
            "Avoid during low liquidity periods"
        ],
        references=[
            "Momentum Stock Selection - Richard Driehaus",
            "High Probability Trading - Marcel Link"
        ]
    )


def create_mean_reversion_strategy() -> StrategyExample:
    """
    Create mean reversion strategy configuration.

    Counter-trend strategy using oversold/overbought conditions
    and support/resistance levels.
    """
    config = StrategyChartConfig(
        strategy_name="Mean Reversion Strategy",
        strategy_type=TradingStrategy.MEAN_REVERSION,
        description="Counter-trend strategy exploiting oversold/overbought conditions",
        timeframes=["15m", "1h", "4h"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.7,
        overlay_indicators=[
            "sma_20",    # Mean reversion line
            "sma_50",    # Trend context
            "bb_20_20",  # Standard Bollinger Bands
            "bb_20_15"   # Tighter bands for entry signals
        ],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.2,
                indicators=["rsi_14", "rsi_21"],  # Standard and slower RSI
                title="RSI Mean Reversion",
                y_axis_label="RSI",
                show_grid=True
            ),
            SubplotConfig(
                subplot_type=SubplotType.VOLUME,
                height_ratio=0.1,
                indicators=[],
                title="Volume Analysis",
                y_axis_label="Volume",
                show_grid=True
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=12,
            candlestick_up_color="#4caf50",
            candlestick_down_color="#f44336",
            show_volume=True,
            show_grid=True
        ),
        tags=["mean-reversion", "counter-trend", "oversold-overbought"]
    )

    return StrategyExample(
        config=config,
        description="""
        Mean Reversion Strategy exploits the tendency of prices to return to their
        average after extreme moves. Uses multiple RSI periods and Bollinger Bands
        to identify oversold/overbought conditions with high probability reversals.

        Entry Rules (Long):
        - Price touches or breaks lower Bollinger Band (20,2.0)
        - RSI-14 < 30 and RSI-21 < 35 (oversold conditions)
        - Price shows bullish divergence with RSI
        - Volume spike on the reversal candle
        - Entry on first green candle after oversold signal

        Entry Rules (Short):
        - Price touches or breaks upper Bollinger Band
        - RSI-14 > 70 and RSI-21 > 65 (overbought conditions)
        - Bearish divergence with RSI
        - High volume on reversal

        Exit Rules:
        - Price returns to SMA-20 (mean)
        - RSI reaches neutral zone (45-55)
        - Stop loss: Beyond recent swing high/low
        - Take profit: Opposite Bollinger Band
        """,
        author="TCPDashboard Team",
        difficulty="Intermediate",
        expected_return="10-18% monthly (ranging markets)",
        risk_level="Medium",
        market_conditions=["Sideways", "Ranging", "Oversold/Overbought"],
        notes=[
            "Works best in ranging/sideways markets",
            "Avoid during strong trending periods",
            "Look for divergences for higher probability setups",
            "Use proper position sizing due to counter-trend nature",
            "Consider market structure and support/resistance levels",
            "Best during regular market hours for better volume"
        ],
        references=[
            "Mean Reversion Trading Systems - Howard Bandy",
            "Contrarian Investment Strategies - David Dreman"
        ]
    )


def create_scalping_strategy() -> StrategyExample:
    """
    Create scalping strategy configuration.

    High-frequency strategy for quick profits using
    very fast indicators and tight risk management.
    """
    config = StrategyChartConfig(
        strategy_name="Scalping Strategy",
        strategy_type=TradingStrategy.SCALPING,
        description="High-frequency scalping strategy for quick profits",
        timeframes=["1m", "5m"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.55,
        overlay_indicators=[
            "ema_5",    # Very fast EMA
            "ema_12",   # Fast EMA
            "ema_21",   # Reference EMA
            "bb_10_15"  # Tight Bollinger Bands
        ],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.2,
                indicators=["rsi_7"],  # Very fast RSI
                title="RSI Fast (7)",
                y_axis_label="RSI",
                show_grid=True
            ),
            SubplotConfig(
                subplot_type=SubplotType.MACD,
                height_ratio=0.15,
                indicators=["macd_5_13_4"],  # Very fast MACD
                title="MACD Ultra Fast",
                y_axis_label="MACD",
                show_grid=True
            ),
            SubplotConfig(
                subplot_type=SubplotType.VOLUME,
                height_ratio=0.1,
                indicators=[],
                title="Volume",
                y_axis_label="Volume",
                show_grid=True
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=10,
            candlestick_up_color="#00e676",
            candlestick_down_color="#ff1744",
            show_volume=True,
            show_grid=True
        ),
        tags=["scalping", "high-frequency", "fast", "1-minute"]
    )

    return StrategyExample(
        config=config,
        description="""
        Scalping Strategy designed for rapid-fire trading with small profits
        and very tight risk management. Uses ultra-fast indicators to capture
        small price movements multiple times per session.

        Entry Rules:
        - EMA-5 crosses above EMA-12 (bullish scalp)
        - Price is above EMA-21 for trend alignment
        - RSI-7 between 40-60 (avoid extremes)
        - MACD line above signal line
        - Strong volume confirmation
        - Enter on pullback to EMA-5 after crossover

        Exit Rules:
        - Target: 5-10 pips profit (0.1-0.2% for stocks)
        - Stop loss: 3-5 pips (0.05-0.1%)
        - Exit if RSI reaches extreme (>75 or <25)
        - Exit if EMA-5 crosses back below EMA-12
        - Maximum hold time: 5-15 minutes
        """,
        author="TCPDashboard Team",
        difficulty="Advanced",
        expected_return="Small profits, high frequency (2-5% daily)",
        risk_level="High",
        market_conditions=["High Liquidity", "Volatile", "Active Sessions"],
        notes=[
            "Requires very fast execution and low latency",
            "Best during active market hours (overlapping sessions)",
            "Use tight spreads and low commission brokers",
            "Requires constant monitoring and quick decisions",
            "Risk management is critical - small stops",
            "Not suitable for beginners",
            "Consider transaction costs carefully",
            "Practice on demo account extensively first"
        ],
        references=[
            "A Complete Guide to Volume Price Analysis - Anna Coulling",
            "The Complete Guide to Day Trading - Markus Heitkoetter"
        ]
    )


def create_swing_trading_strategy() -> StrategyExample:
    """
    Create swing trading strategy configuration.

    Medium-term strategy capturing price swings over
    several days to weeks using trend and momentum indicators.
    """
    config = StrategyChartConfig(
        strategy_name="Swing Trading Strategy",
        strategy_type=TradingStrategy.SWING_TRADING,
        description="Medium-term strategy capturing multi-day price swings",
        timeframes=["4h", "1d"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.7,
        overlay_indicators=[
            "sma_20",   # Short-term trend
            "sma_50",   # Medium-term trend
            "ema_21",   # Dynamic support/resistance
            "bb_20_20"  # Volatility bands
        ],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.15,
                indicators=["rsi_14"],
                title="RSI (14)",
                y_axis_label="RSI",
                show_grid=True
            ),
            SubplotConfig(
                subplot_type=SubplotType.MACD,
                height_ratio=0.15,
                indicators=["macd_12_26_9"],
                title="MACD Standard",
                y_axis_label="MACD",
                show_grid=True
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=13,
            candlestick_up_color="#388e3c",
            candlestick_down_color="#d32f2f",
            show_volume=True,
            show_grid=True
        ),
        tags=["swing-trading", "medium-term", "trend-following"]
    )

    return StrategyExample(
        config=config,
        description="""
        Swing Trading Strategy captures price swings over several days to weeks
        by identifying trend changes and momentum shifts. Suitable for traders
        who cannot monitor markets constantly but want to catch significant moves.

        Entry Rules (Long):
        - Price above SMA-50 (uptrend confirmation)
        - SMA-20 crosses above SMA-50 or price bounces off SMA-20
        - RSI pullback to 35-45 then rises above 50
        - MACD line crosses above signal line
        - Price finds support at EMA-21 or lower Bollinger Band

        Entry Rules (Short):
        - Price below SMA-50 (downtrend confirmation)
        - SMA-20 crosses below SMA-50 or price rejected at SMA-20
        - RSI pullback to 55-65 then falls below 50
        - MACD line crosses below signal line

        Exit Rules:
        - Price reaches opposite Bollinger Band
        - RSI reaches overbought/oversold extremes (>70/<30)
        - MACD shows divergence or histogram weakens
        - Stop loss: Below/above recent swing point (2-4%)
        - Take profit: 1:2 or 1:3 risk-reward ratio
        """,
        author="TCPDashboard Team",
        difficulty="Beginner",
        expected_return="15-25% annually",
        risk_level="Medium",
        market_conditions=["Trending", "Swing Markets"],
        notes=[
            "Suitable for part-time traders",
            "Requires patience for proper setups",
            "Good for building trading discipline",
            "Consider fundamental analysis for direction",
            "Use proper position sizing (1-2% risk per trade)",
            "Best on daily timeframes for trend following",
            "Monitor weekly charts for overall direction"
        ],
        references=[
            "Swing Trading For Beginners - Matthew Maybury",
            "The Master Swing Trader - Alan Farley"
        ]
    )


def get_all_example_strategies() -> Dict[str, StrategyExample]:
    """
    Get all available example strategies.

    Returns:
        Dictionary mapping strategy names to StrategyExample objects
    """
    return {
        "ema_crossover": create_ema_crossover_strategy(),
        "momentum_breakout": create_momentum_breakout_strategy(),
        "mean_reversion": create_mean_reversion_strategy(),
        "scalping": create_scalping_strategy(),
        "swing_trading": create_swing_trading_strategy()
    }


def get_example_strategy(strategy_name: str) -> Optional[StrategyExample]:
    """
    Get a specific example strategy by name.

    Args:
        strategy_name: Name of the strategy to retrieve

    Returns:
        StrategyExample object or None if not found
    """
    strategies = get_all_example_strategies()
    return strategies.get(strategy_name)


def get_strategies_by_difficulty(difficulty: str) -> List[StrategyExample]:
    """
    Get example strategies filtered by difficulty level.

    Args:
        difficulty: Difficulty level ("Beginner", "Intermediate", "Advanced")

    Returns:
        List of StrategyExample objects matching the difficulty
    """
    strategies = get_all_example_strategies()
    return [strategy for strategy in strategies.values()
            if strategy.difficulty == difficulty]
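
# Filtering sketch: the filters in this module all operate on the same
# in-memory dict, so they compose with plain comprehensions, e.g.
#
#     advanced = get_strategies_by_difficulty("Advanced")
#     high_risk_advanced = [s for s in advanced if s.risk_level == "High"]
#     # with the examples above this includes scalping and momentum breakout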


def get_strategies_by_risk_level(risk_level: str) -> List[StrategyExample]:
    """
    Get example strategies filtered by risk level.

    Args:
        risk_level: Risk level ("Low", "Medium", "High")

    Returns:
        List of StrategyExample objects matching the risk level
    """
    strategies = get_all_example_strategies()
    return [strategy for strategy in strategies.values()
            if strategy.risk_level == risk_level]


def get_strategies_by_market_condition(condition: str) -> List[StrategyExample]:
    """
    Get example strategies suitable for specific market conditions.

    Args:
        condition: Market condition ("Trending", "Sideways", "Volatile", etc.)

    Returns:
        List of StrategyExample objects suitable for the condition
    """
    strategies = get_all_example_strategies()
    return [strategy for strategy in strategies.values()
            if condition in strategy.market_conditions]


def get_strategy_summary() -> Dict[str, Dict[str, str]]:
    """
    Get a summary of all example strategies with key information.

    Returns:
        Dictionary with strategy summaries
    """
    strategies = get_all_example_strategies()
    summary = {}

    for name, strategy in strategies.items():
        summary[name] = {
            "name": strategy.config.strategy_name,
            "type": strategy.config.strategy_type.value,
            "difficulty": strategy.difficulty,
            "risk_level": strategy.risk_level,
            "timeframes": ", ".join(strategy.config.timeframes),
            "market_conditions": ", ".join(strategy.market_conditions),
            "expected_return": strategy.expected_return or "N/A"
        }

    return summary
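
# Display sketch (console context assumed; formatting is illustrative):
#
#     for key, info in get_strategy_summary().items():
#         print(f"{key}: {info['name']} [{info['difficulty']}/{info['risk_level']}]")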


def export_example_strategies_to_json() -> str:
    """
    Export all example strategies to JSON format.

    Returns:
        JSON string containing all example strategies
    """
    import json
    from .strategy_charts import export_strategy_config_to_json

    strategies = get_all_example_strategies()
    export_data = {}

    for name, strategy in strategies.items():
        export_data[name] = {
            "config": json.loads(export_strategy_config_to_json(strategy.config)),
            "metadata": {
                "description": strategy.description,
                "author": strategy.author,
                "difficulty": strategy.difficulty,
                "expected_return": strategy.expected_return,
                "risk_level": strategy.risk_level,
                "market_conditions": strategy.market_conditions,
                "notes": strategy.notes,
                "references": strategy.references
            }
        }

    return json.dumps(export_data, indent=2)
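
# Round-trip sketch: the exported JSON nests each strategy's config under
# "config" and its metadata under "metadata", so a consumer can do e.g.
#
#     import json
#     data = json.loads(export_example_strategies_to_json())
#     data["ema_crossover"]["metadata"]["difficulty"]  # -> "Intermediate"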
786
components/charts/config/indicator_defs.py
Normal file
@ -0,0 +1,786 @@
"""
|
||||
Indicator Definitions and Configuration
|
||||
|
||||
This module defines indicator configurations and provides integration
|
||||
with the existing data/common/indicators.py technical indicators module.
|
||||
"""
|
||||
|
||||
from typing import Dict, List, Any, Optional, Union, Literal
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime, timezone
|
||||
from decimal import Decimal
|
||||
import json
|
||||
from enum import Enum
|
||||
|
||||
from data.common.indicators import TechnicalIndicators, IndicatorResult, create_default_indicators_config, validate_indicator_config
|
||||
from data.common.data_types import OHLCVCandle
|
||||
from utils.logger import get_logger
|
||||
|
||||
# Initialize logger
|
||||
logger = get_logger("default_logger")
|
||||
|
||||
|
||||
class IndicatorType(str, Enum):
|
||||
"""Supported indicator types."""
|
||||
SMA = "sma"
|
||||
EMA = "ema"
|
||||
RSI = "rsi"
|
||||
MACD = "macd"
|
||||
BOLLINGER_BANDS = "bollinger_bands"
|
||||
|
||||
|
||||
class DisplayType(str, Enum):
|
||||
"""Chart display types for indicators."""
|
||||
OVERLAY = "overlay"
|
||||
SUBPLOT = "subplot"
|
||||
|
||||
|
||||
class LineStyle(str, Enum):
|
||||
"""Available line styles for chart display."""
|
||||
SOLID = "solid"
|
||||
DASH = "dash"
|
||||
DOT = "dot"
|
||||
DASHDOT = "dashdot"
|
||||
|
||||
|
||||
class PriceColumn(str, Enum):
|
||||
"""Available price columns for calculations."""
|
||||
OPEN = "open"
|
||||
HIGH = "high"
|
||||
LOW = "low"
|
||||
CLOSE = "close"
|
||||
|
||||
|
||||
@dataclass
|
||||
class IndicatorParameterSchema:
|
||||
"""
|
||||
Schema definition for an indicator parameter.
|
||||
"""
|
||||
name: str
|
||||
type: type
|
||||
required: bool = True
|
||||
default: Any = None
|
||||
min_value: Optional[Union[int, float]] = None
|
||||
max_value: Optional[Union[int, float]] = None
|
||||
description: str = ""
|
||||
|
||||
def validate(self, value: Any) -> tuple[bool, str]:
|
||||
"""
|
||||
Validate a parameter value against this schema.
|
||||
|
||||
Returns:
|
||||
Tuple of (is_valid, error_message)
|
||||
"""
|
||||
if value is None:
|
||||
if self.required:
|
||||
return False, f"Parameter '{self.name}' is required"
|
||||
return True, ""
|
||||
|
||||
# Type validation
|
||||
if not isinstance(value, self.type):
|
||||
return False, f"Parameter '{self.name}' must be of type {self.type.__name__}, got {type(value).__name__}"
|
||||
|
||||
# Range validation for numeric types
|
||||
if isinstance(value, (int, float)):
|
||||
if self.min_value is not None and value < self.min_value:
|
||||
return False, f"Parameter '{self.name}' must be >= {self.min_value}, got {value}"
|
||||
if self.max_value is not None and value > self.max_value:
|
||||
return False, f"Parameter '{self.name}' must be <= {self.max_value}, got {value}"
|
||||
|
||||
return True, ""
|
||||
|
||||
|
||||
@dataclass
|
||||
class IndicatorSchema:
|
||||
"""
|
||||
Complete schema definition for an indicator type.
|
||||
"""
|
||||
indicator_type: IndicatorType
|
||||
display_type: DisplayType
|
||||
required_parameters: List[IndicatorParameterSchema]
|
||||
optional_parameters: List[IndicatorParameterSchema] = field(default_factory=list)
|
||||
min_data_points: int = 1
|
||||
description: str = ""
|
||||
|
||||
def get_parameter_schema(self, param_name: str) -> Optional[IndicatorParameterSchema]:
|
||||
"""Get schema for a specific parameter."""
|
||||
for param in self.required_parameters + self.optional_parameters:
|
||||
if param.name == param_name:
|
||||
return param
|
||||
return None
|
||||
|
||||
def validate_parameters(self, parameters: Dict[str, Any]) -> tuple[bool, List[str]]:
|
||||
"""
|
||||
Validate all parameters against this schema.
|
||||
|
||||
Returns:
|
||||
Tuple of (is_valid, list_of_error_messages)
|
||||
"""
|
||||
errors = []
|
||||
|
||||
# Check required parameters
|
||||
for param_schema in self.required_parameters:
|
||||
value = parameters.get(param_schema.name)
|
||||
is_valid, error = param_schema.validate(value)
|
||||
if not is_valid:
|
||||
errors.append(error)
|
||||
|
||||
# Check optional parameters if provided
|
||||
for param_schema in self.optional_parameters:
|
||||
if param_schema.name in parameters:
|
||||
value = parameters[param_schema.name]
|
||||
is_valid, error = param_schema.validate(value)
|
||||
if not is_valid:
|
||||
errors.append(error)
|
||||
|
||||
# Check for unknown parameters
|
||||
known_params = {p.name for p in self.required_parameters + self.optional_parameters}
|
||||
for param_name in parameters:
|
||||
if param_name not in known_params:
|
||||
errors.append(f"Unknown parameter '{param_name}' for {self.indicator_type.value} indicator")
|
||||
|
||||
return len(errors) == 0, errors
|
||||
|
||||
|
||||
# Define schema for each indicator type
|
||||
INDICATOR_SCHEMAS = {
|
||||
IndicatorType.SMA: IndicatorSchema(
|
||||
indicator_type=IndicatorType.SMA,
|
||||
display_type=DisplayType.OVERLAY,
|
||||
required_parameters=[
|
||||
IndicatorParameterSchema(
|
||||
name="period",
|
||||
type=int,
|
||||
min_value=1,
|
||||
max_value=200,
|
||||
description="Number of periods for moving average"
|
||||
)
|
||||
],
|
||||
optional_parameters=[
|
||||
IndicatorParameterSchema(
|
||||
name="price_column",
|
||||
type=str,
|
||||
required=False,
|
||||
default="close",
|
||||
description="Price column to use (open, high, low, close)"
|
||||
)
|
||||
],
|
||||
min_data_points=1,
|
||||
description="Simple Moving Average - arithmetic mean of closing prices over a specified period"
|
||||
),
|
||||
|
||||
IndicatorType.EMA: IndicatorSchema(
|
||||
indicator_type=IndicatorType.EMA,
|
||||
display_type=DisplayType.OVERLAY,
|
||||
required_parameters=[
|
||||
IndicatorParameterSchema(
|
||||
name="period",
|
||||
type=int,
|
||||
min_value=1,
|
||||
max_value=200,
|
||||
description="Number of periods for exponential moving average"
|
||||
)
|
||||
],
|
||||
optional_parameters=[
|
||||
IndicatorParameterSchema(
|
||||
name="price_column",
|
||||
type=str,
|
||||
required=False,
|
||||
default="close",
|
||||
description="Price column to use (open, high, low, close)"
|
||||
)
|
||||
],
|
||||
min_data_points=1,
|
||||
description="Exponential Moving Average - gives more weight to recent prices"
|
||||
),
|
||||
|
||||
IndicatorType.RSI: IndicatorSchema(
|
||||
indicator_type=IndicatorType.RSI,
|
||||
display_type=DisplayType.SUBPLOT,
|
||||
required_parameters=[
|
||||
IndicatorParameterSchema(
|
||||
name="period",
|
||||
type=int,
|
||||
min_value=2,
|
||||
max_value=100,
|
||||
description="Number of periods for RSI calculation"
|
||||
)
|
||||
],
|
||||
optional_parameters=[
|
||||
IndicatorParameterSchema(
|
||||
name="price_column",
|
||||
type=str,
|
||||
required=False,
|
||||
default="close",
|
||||
description="Price column to use (open, high, low, close)"
|
||||
)
|
||||
],
|
||||
min_data_points=2,
|
||||
description="Relative Strength Index - momentum oscillator measuring speed and magnitude of price changes"
|
||||
),
|
||||
|
||||
IndicatorType.MACD: IndicatorSchema(
|
||||
indicator_type=IndicatorType.MACD,
|
||||
display_type=DisplayType.SUBPLOT,
|
||||
required_parameters=[
|
||||
IndicatorParameterSchema(
|
||||
name="fast_period",
|
||||
type=int,
|
||||
min_value=1,
|
||||
max_value=50,
|
||||
description="Fast EMA period"
|
||||
),
|
||||
IndicatorParameterSchema(
|
||||
name="slow_period",
|
||||
type=int,
|
||||
min_value=1,
|
||||
max_value=100,
|
||||
description="Slow EMA period"
|
||||
),
|
||||
IndicatorParameterSchema(
|
||||
name="signal_period",
|
||||
type=int,
|
||||
min_value=1,
|
||||
max_value=50,
|
||||
description="Signal line EMA period"
|
||||
)
|
||||
],
|
||||
optional_parameters=[
|
||||
IndicatorParameterSchema(
|
||||
name="price_column",
|
||||
type=str,
|
||||
required=False,
|
||||
default="close",
|
||||
description="Price column to use (open, high, low, close)"
|
||||
)
|
||||
],
|
||||
min_data_points=3,
|
||||
description="Moving Average Convergence Divergence - trend-following momentum indicator"
|
||||
),
|
||||
|
||||
IndicatorType.BOLLINGER_BANDS: IndicatorSchema(
|
||||
indicator_type=IndicatorType.BOLLINGER_BANDS,
|
||||
display_type=DisplayType.OVERLAY,
|
||||
required_parameters=[
|
||||
IndicatorParameterSchema(
|
||||
name="period",
|
||||
type=int,
|
||||
min_value=2,
|
||||
max_value=100,
|
||||
description="Number of periods for moving average"
|
||||
),
|
||||
IndicatorParameterSchema(
|
||||
name="std_dev",
|
||||
type=float,
|
||||
min_value=0.1,
|
||||
max_value=5.0,
|
||||
description="Number of standard deviations for bands"
|
||||
)
|
||||
],
|
||||
optional_parameters=[
|
||||
IndicatorParameterSchema(
|
||||
name="price_column",
|
||||
type=str,
|
||||
required=False,
|
||||
default="close",
|
||||
description="Price column to use (open, high, low, close)"
|
||||
)
|
||||
],
|
||||
min_data_points=2,
|
||||
description="Bollinger Bands - volatility bands placed above and below a moving average"
|
||||
)
|
||||
}
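
# Validation sketch against the MACD schema above (illustrative values):
#
#     ok, errs = INDICATOR_SCHEMAS[IndicatorType.MACD].validate_parameters(
#         {"fast_period": 12, "slow_period": 26, "signal_period": 9}
#     )
#     # ok is True; dropping "signal_period" or passing an unknown key such
#     # as "period" would put messages in errs instead.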


@dataclass
class ChartIndicatorConfig:
    """
    Configuration for chart indicators with display properties.

    Extends the base indicator config with chart-specific properties
    like colors, line styles, and subplot placement.
    """
    name: str
    indicator_type: str
    parameters: Dict[str, Any]
    display_type: str  # 'overlay', 'subplot'
    color: str
    line_style: str = 'solid'  # 'solid', 'dash', 'dot'
    line_width: int = 2
    opacity: float = 1.0
    visible: bool = True
    subplot_height_ratio: float = 0.3  # For subplot indicators

    def to_indicator_config(self) -> Dict[str, Any]:
        """Convert to format expected by TechnicalIndicators."""
        config = {'type': self.indicator_type}
        config.update(self.parameters)
        return config

    def validate(self) -> tuple[bool, List[str]]:
        """
        Validate this indicator configuration against its schema.

        Returns:
            Tuple of (is_valid, list_of_error_messages)
        """
        errors = []

        # Check if indicator type is supported
        try:
            indicator_type = IndicatorType(self.indicator_type)
        except ValueError:
            return False, [f"Unsupported indicator type: {self.indicator_type}"]

        # Get schema for this indicator type
        schema = INDICATOR_SCHEMAS.get(indicator_type)
        if not schema:
            return False, [f"No schema found for indicator type: {self.indicator_type}"]

        # Validate parameters against schema
        is_valid, param_errors = schema.validate_parameters(self.parameters)
        if not is_valid:
            errors.extend(param_errors)

        # Validate display properties
        if self.display_type not in [DisplayType.OVERLAY.value, DisplayType.SUBPLOT.value]:
            errors.append(f"Invalid display_type: {self.display_type}")

        if self.line_style not in [style.value for style in LineStyle]:
            errors.append(f"Invalid line_style: {self.line_style}")

        if not isinstance(self.line_width, int) or self.line_width < 1:
            errors.append("line_width must be a positive integer")

        if not isinstance(self.opacity, (int, float)) or not (0.0 <= self.opacity <= 1.0):
            errors.append("opacity must be a number between 0.0 and 1.0")

        if self.display_type == DisplayType.SUBPLOT.value:
            if not isinstance(self.subplot_height_ratio, (int, float)) or not (0.1 <= self.subplot_height_ratio <= 1.0):
                errors.append("subplot_height_ratio must be a number between 0.1 and 1.0")

        return len(errors) == 0, errors


# Built-in indicator definitions with chart display properties
INDICATOR_DEFINITIONS = {
    'sma_20': ChartIndicatorConfig(
        name='SMA (20)',
        indicator_type='sma',
        parameters={'period': 20, 'price_column': 'close'},
        display_type='overlay',
        color='#007bff',
        line_width=2
    ),
    'sma_50': ChartIndicatorConfig(
        name='SMA (50)',
        indicator_type='sma',
        parameters={'period': 50, 'price_column': 'close'},
        display_type='overlay',
        color='#28a745',
        line_width=2
    ),
    'ema_12': ChartIndicatorConfig(
        name='EMA (12)',
        indicator_type='ema',
        parameters={'period': 12, 'price_column': 'close'},
        display_type='overlay',
        color='#ff6b35',
        line_width=2
    ),
    'ema_26': ChartIndicatorConfig(
        name='EMA (26)',
        indicator_type='ema',
        parameters={'period': 26, 'price_column': 'close'},
        display_type='overlay',
        color='#dc3545',
        line_width=2
    ),
    'rsi_14': ChartIndicatorConfig(
        name='RSI (14)',
        indicator_type='rsi',
        parameters={'period': 14, 'price_column': 'close'},
        display_type='subplot',
        color='#20c997',
        line_width=2,
        subplot_height_ratio=0.25
    ),
    'macd_default': ChartIndicatorConfig(
        name='MACD',
        indicator_type='macd',
        parameters={'fast_period': 12, 'slow_period': 26, 'signal_period': 9, 'price_column': 'close'},
        display_type='subplot',
        color='#fd7e14',
        line_width=2,
        subplot_height_ratio=0.3
    ),
    'bollinger_bands': ChartIndicatorConfig(
        name='Bollinger Bands',
        indicator_type='bollinger_bands',
        parameters={'period': 20, 'std_dev': 2.0, 'price_column': 'close'},
        display_type='overlay',
        color='#6f42c1',
        line_width=1,
        opacity=0.7
    )
}
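
# Registry sketch: the display_type split above drives chart placement via the
# helper functions defined later in this module, e.g.
#
#     get_overlay_indicators()  # -> ['sma_20', 'sma_50', 'ema_12', 'ema_26', 'bollinger_bands']
#     get_subplot_indicators()  # -> ['rsi_14', 'macd_default']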


def convert_database_candles_to_ohlcv(candles: List[Dict[str, Any]]) -> List[OHLCVCandle]:
    """
    Convert database candle dictionaries to OHLCVCandle objects.

    Args:
        candles: List of candle dictionaries from database operations

    Returns:
        List of OHLCVCandle objects for technical indicators
    """
    ohlcv_candles = []

    for candle in candles:
        try:
            # Handle timestamp conversion
            timestamp = candle['timestamp']
            if isinstance(timestamp, str):
                timestamp = datetime.fromisoformat(timestamp.replace('Z', '+00:00'))
            elif timestamp.tzinfo is None:
                timestamp = timestamp.replace(tzinfo=timezone.utc)

            # For database candles, start_time and end_time are the same
            # as we store right-aligned timestamps
            ohlcv_candle = OHLCVCandle(
                symbol=candle['symbol'],
                timeframe=candle['timeframe'],
                start_time=timestamp,
                end_time=timestamp,
                open=Decimal(str(candle['open'])),
                high=Decimal(str(candle['high'])),
                low=Decimal(str(candle['low'])),
                close=Decimal(str(candle['close'])),
                volume=Decimal(str(candle.get('volume', 0))),
                trade_count=candle.get('trades_count', 0),
                exchange=candle.get('exchange', 'okx'),
                is_complete=True
            )
            ohlcv_candles.append(ohlcv_candle)

        except Exception as e:
            logger.error(f"Indicator Definitions: Error converting candle to OHLCV: {e}")
            continue

    logger.debug(f"Indicator Definitions: Converted {len(ohlcv_candles)} database candles to OHLCV format")
    return ohlcv_candles


def calculate_indicators(candles: List[Dict[str, Any]],
                         indicator_configs: List[str],
                         custom_configs: Optional[Dict[str, ChartIndicatorConfig]] = None) -> Dict[str, List[IndicatorResult]]:
    """
    Calculate technical indicators for chart display.

    Args:
        candles: List of candle dictionaries from database
        indicator_configs: List of indicator names to calculate
        custom_configs: Optional custom indicator configurations

    Returns:
        Dictionary mapping indicator names to their calculation results
    """
    if not candles:
        logger.warning("Indicator Definitions: No candles provided for indicator calculation")
        return {}

    # Convert to OHLCV format
    ohlcv_candles = convert_database_candles_to_ohlcv(candles)
    if not ohlcv_candles:
        logger.error("Indicator Definitions: Failed to convert candles to OHLCV format")
        return {}

    # Initialize technical indicators calculator
    indicators_calc = TechnicalIndicators(logger)

    # Prepare configurations
    configs_to_calculate = {}
    all_configs = {**INDICATOR_DEFINITIONS}
    if custom_configs:
        all_configs.update(custom_configs)

    for indicator_name in indicator_configs:
        if indicator_name in all_configs:
            chart_config = all_configs[indicator_name]
            configs_to_calculate[indicator_name] = chart_config.to_indicator_config()
        else:
            logger.warning(f"Indicator Definitions: Unknown indicator configuration: {indicator_name}")

    if not configs_to_calculate:
        logger.warning("Indicator Definitions: No valid indicator configurations found")
        return {}

    # Calculate indicators
    try:
        results = indicators_calc.calculate_multiple_indicators(ohlcv_candles, configs_to_calculate)
        logger.debug(f"Indicator Definitions: Calculated {len(results)} indicators successfully")
        return results

    except Exception as e:
        logger.error(f"Indicator Definitions: Error calculating indicators: {e}")
        return {}
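
# Usage sketch (`rows` stands in for candle dicts fetched from the database layer):
#
#     results = calculate_indicators(rows, ["sma_20", "rsi_14"])
#     for name, series in results.items():
#         logger.debug(f"{name}: {len(series)} IndicatorResult points")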


def get_indicator_display_config(indicator_name: str) -> Optional[ChartIndicatorConfig]:
    """
    Get display configuration for an indicator.

    Args:
        indicator_name: Name of the indicator

    Returns:
        Chart indicator configuration or None if not found
    """
    return INDICATOR_DEFINITIONS.get(indicator_name)


def get_available_indicators() -> Dict[str, str]:
    """
    Get list of available indicators with descriptions.

    Returns:
        Dictionary mapping indicator names to descriptions
    """
    return {name: config.name for name, config in INDICATOR_DEFINITIONS.items()}


def get_overlay_indicators() -> List[str]:
    """Get list of indicators that display as overlays on the price chart."""
    return [name for name, config in INDICATOR_DEFINITIONS.items()
            if config.display_type == 'overlay']


def get_subplot_indicators() -> List[str]:
    """Get list of indicators that display in separate subplots."""
    return [name for name, config in INDICATOR_DEFINITIONS.items()
            if config.display_type == 'subplot']


def get_default_indicator_params(indicator_type: str) -> Dict[str, Any]:
    """
    Get default parameters for an indicator type.

    Args:
        indicator_type: Type of indicator ('sma', 'ema', 'rsi', etc.)

    Returns:
        Dictionary of default parameters
    """
    defaults = {
        'sma': {'period': 20, 'price_column': 'close'},
        'ema': {'period': 20, 'price_column': 'close'},
        'rsi': {'period': 14, 'price_column': 'close'},
        'macd': {'fast_period': 12, 'slow_period': 26, 'signal_period': 9, 'price_column': 'close'},
        'bollinger_bands': {'period': 20, 'std_dev': 2.0, 'price_column': 'close'}
    }

    return defaults.get(indicator_type, {})


def validate_indicator_configuration(config: ChartIndicatorConfig) -> tuple[bool, List[str]]:
    """
    Validate an indicator configuration against its schema.

    Args:
        config: Chart indicator configuration to validate

    Returns:
        Tuple of (is_valid, list_of_error_messages)
    """
    return config.validate()


def create_indicator_config(
    name: str,
    indicator_type: str,
    parameters: Dict[str, Any],
    display_type: Optional[str] = None,
    color: str = "#007bff",
    **display_options
) -> tuple[Optional[ChartIndicatorConfig], List[str]]:
    """
    Create and validate a new indicator configuration.

    Args:
        name: Display name for the indicator
        indicator_type: Type of indicator (sma, ema, rsi, etc.)
        parameters: Indicator parameters
        display_type: Optional override for display type
        color: Color for chart display
        **display_options: Additional display configuration options

    Returns:
        Tuple of (config_object_or_None, list_of_error_messages)
    """
    # Validate indicator type
    try:
        indicator_enum = IndicatorType(indicator_type)
    except ValueError:
        return None, [f"Unsupported indicator type: {indicator_type}"]

    # Get schema for validation
    schema = INDICATOR_SCHEMAS.get(indicator_enum)
    if not schema:
        return None, [f"No schema found for indicator type: {indicator_type}"]

    # Use schema display type if not overridden
    if display_type is None:
        display_type = schema.display_type.value

    # Fill in default parameters
    final_parameters = {}

    # Add required parameters with defaults if missing
    for param_schema in schema.required_parameters:
        if param_schema.name in parameters:
            final_parameters[param_schema.name] = parameters[param_schema.name]
        elif param_schema.default is not None:
            final_parameters[param_schema.name] = param_schema.default
        # Required parameters without defaults will be caught by validation

    # Add optional parameters
    for param_schema in schema.optional_parameters:
        if param_schema.name in parameters:
            final_parameters[param_schema.name] = parameters[param_schema.name]
        elif param_schema.default is not None:
            final_parameters[param_schema.name] = param_schema.default

    # Create configuration
    config = ChartIndicatorConfig(
        name=name,
        indicator_type=indicator_type,
        parameters=final_parameters,
        display_type=display_type,
        color=color,
        line_style=display_options.get('line_style', 'solid'),
        line_width=display_options.get('line_width', 2),
        opacity=display_options.get('opacity', 1.0),
        visible=display_options.get('visible', True),
        subplot_height_ratio=display_options.get('subplot_height_ratio', 0.3)
    )

    # Validate the configuration
    is_valid, validation_errors = config.validate()
    if not is_valid:
        return None, validation_errors

    return config, []
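
# Construction sketch: a custom overlay EMA; display_type is inferred from the
# schema when omitted, and colors/widths here are arbitrary examples.
#
#     config, errors = create_indicator_config(
#         name="EMA (100)",
#         indicator_type="ema",
#         parameters={"period": 100},
#         color="#6610f2",
#         line_style="dash",
#     )
#     # errors == [] and config.parameters gains price_column="close" by default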


def get_indicator_schema(indicator_type: str) -> Optional[IndicatorSchema]:
    """
    Get the schema for an indicator type.

    Args:
        indicator_type: Type of indicator

    Returns:
        IndicatorSchema object or None if not found
    """
    try:
        indicator_enum = IndicatorType(indicator_type)
        return INDICATOR_SCHEMAS.get(indicator_enum)
    except ValueError:
        return None


def get_available_indicator_types() -> List[str]:
    """
    Get list of available indicator types.

    Returns:
        List of supported indicator type strings
    """
    return [indicator_type.value for indicator_type in IndicatorType]


def get_indicator_parameter_info(indicator_type: str) -> Dict[str, Dict[str, Any]]:
    """
    Get detailed parameter information for an indicator type.

    Args:
        indicator_type: Type of indicator

    Returns:
        Dictionary with parameter information including types, ranges, and descriptions
    """
    schema = get_indicator_schema(indicator_type)
    if not schema:
        return {}

    param_info = {}

    for param in schema.required_parameters + schema.optional_parameters:
        param_info[param.name] = {
            'type': param.type.__name__,
            'required': param.required,
            'default': param.default,
            'min_value': param.min_value,
            'max_value': param.max_value,
            'description': param.description
        }

    return param_info


def validate_parameters_for_type(indicator_type: str, parameters: Dict[str, Any]) -> tuple[bool, List[str]]:
    """
    Validate parameters for a specific indicator type.

    Args:
        indicator_type: Type of indicator
        parameters: Parameters to validate

    Returns:
        Tuple of (is_valid, list_of_error_messages)
    """
    schema = get_indicator_schema(indicator_type)
    if not schema:
        return False, [f"Unknown indicator type: {indicator_type}"]

    return schema.validate_parameters(parameters)


def create_configuration_from_json(json_data: Union[str, Dict[str, Any]]) -> tuple[Optional[ChartIndicatorConfig], List[str]]:
    """
    Create indicator configuration from JSON data.

    Args:
        json_data: JSON string or dictionary with configuration data

    Returns:
        Tuple of (config_object_or_None, list_of_error_messages)
    """
    try:
        if isinstance(json_data, str):
            data = json.loads(json_data)
        else:
            data = json_data

        required_fields = ['name', 'indicator_type', 'parameters']
        missing_fields = [f for f in required_fields if f not in data]
        if missing_fields:
            return None, [f"Missing required fields: {', '.join(missing_fields)}"]

        return create_indicator_config(
            name=data['name'],
            indicator_type=data['indicator_type'],
            parameters=data['parameters'],
            display_type=data.get('display_type'),
            color=data.get('color', '#007bff'),
            **{k: v for k, v in data.items() if k not in ['name', 'indicator_type', 'parameters', 'display_type', 'color']}
        )

    except json.JSONDecodeError as e:
        return None, [f"Invalid JSON: {e}"]
    except Exception as e:
        return None, [f"Error creating configuration: {e}"]
640
components/charts/config/strategy_charts.py
Normal file
@ -0,0 +1,640 @@
"""
|
||||
Strategy-Specific Chart Configuration System
|
||||
|
||||
This module provides complete chart configurations for different trading strategies,
|
||||
including indicator combinations, chart layouts, subplot arrangements, and display settings.
"""

from typing import Dict, List, Any, Optional, Union
from dataclasses import dataclass, field
from enum import Enum
import json
from datetime import datetime

from .indicator_defs import ChartIndicatorConfig, create_indicator_config, validate_indicator_configuration
from .defaults import (
    TradingStrategy,
    IndicatorCategory,
    get_all_default_indicators,
    get_strategy_indicators,
    get_strategy_info
)
from utils.logger import get_logger

# Initialize logger
logger = get_logger("default_logger")


class ChartLayout(str, Enum):
    """Chart layout types."""
    SINGLE_CHART = "single_chart"
    MAIN_WITH_SUBPLOTS = "main_with_subplots"
    MULTI_CHART = "multi_chart"
    GRID_LAYOUT = "grid_layout"


class SubplotType(str, Enum):
    """Types of subplots available."""
    VOLUME = "volume"
    RSI = "rsi"
    MACD = "macd"
    MOMENTUM = "momentum"
    CUSTOM = "custom"


@dataclass
class SubplotConfig:
    """Configuration for a chart subplot."""
    subplot_type: SubplotType
    height_ratio: float = 0.3
    indicators: List[str] = field(default_factory=list)
    title: Optional[str] = None
    y_axis_label: Optional[str] = None
    show_grid: bool = True
    show_legend: bool = True
    background_color: Optional[str] = None


@dataclass
class ChartStyle:
    """Chart styling configuration."""
    theme: str = "plotly_white"
    background_color: str = "#ffffff"
    grid_color: str = "#e6e6e6"
    text_color: str = "#2c3e50"
    font_family: str = "Arial, sans-serif"
    font_size: int = 12
    candlestick_up_color: str = "#26a69a"
    candlestick_down_color: str = "#ef5350"
    volume_color: str = "#78909c"
    show_volume: bool = True
    show_grid: bool = True
    show_legend: bool = True
    show_toolbar: bool = True


@dataclass
class StrategyChartConfig:
    """Complete chart configuration for a trading strategy."""
    strategy_name: str
    strategy_type: TradingStrategy
    description: str
    timeframes: List[str]

    # Chart layout
    layout: ChartLayout = ChartLayout.MAIN_WITH_SUBPLOTS
    main_chart_height: float = 0.7

    # Indicators
    overlay_indicators: List[str] = field(default_factory=list)
    subplot_configs: List[SubplotConfig] = field(default_factory=list)

    # Style
    chart_style: ChartStyle = field(default_factory=ChartStyle)

    # Metadata
    created_at: Optional[datetime] = None
    updated_at: Optional[datetime] = None
    version: str = "1.0"
    tags: List[str] = field(default_factory=list)

    def validate(self) -> tuple[bool, List[str]]:
        """
        Validate the strategy chart configuration.

        Returns:
            Tuple of (is_valid, list_of_error_messages)
        """
        # Import the comprehensive validation system lazily, inside the try,
        # so the ImportError fallback below can actually catch a missing module
        try:
            from .validation import validate_configuration

            report = validate_configuration(self)

            # Convert validation report to simple format for backward compatibility
            error_messages = [str(issue) for issue in report.errors]
            return report.is_valid, error_messages

        except ImportError:
            # Fall back to basic validation if the new system is unavailable
            logger.warning("Strategy Charts: Enhanced validation system unavailable, using basic validation")
            return self._basic_validate()
        except Exception as e:
            logger.error(f"Strategy Charts: Validation error: {e}")
            return False, [f"Strategy Charts: Validation system error: {e}"]

    def validate_comprehensive(self) -> 'ValidationReport':
        """
        Perform comprehensive validation with detailed reporting.

        Returns:
            Detailed validation report with errors, warnings, and suggestions
        """
        from .validation import validate_configuration
        return validate_configuration(self)

    def _basic_validate(self) -> tuple[bool, List[str]]:
        """
        Basic validation method (fallback).

        Returns:
            Tuple of (is_valid, list_of_error_messages)
        """
        errors = []

        # Validate basic fields
        if not self.strategy_name:
            errors.append("Strategy name is required")

        if not isinstance(self.strategy_type, TradingStrategy):
            errors.append("Invalid strategy type")

        if not self.timeframes:
            errors.append("At least one timeframe must be specified")

        # Validate height ratios
        total_subplot_height = sum(config.height_ratio for config in self.subplot_configs)
        if self.main_chart_height + total_subplot_height > 1.0:
            errors.append("Total chart height ratios exceed 1.0")

        if self.main_chart_height <= 0 or self.main_chart_height > 1.0:
            errors.append("Main chart height must be between 0 and 1.0")

        # Validate that referenced indicators exist
        try:
            all_default_indicators = get_all_default_indicators()

            for indicator_name in self.overlay_indicators:
                if indicator_name not in all_default_indicators:
                    errors.append(f"Overlay indicator '{indicator_name}' not found in defaults")

            for subplot_config in self.subplot_configs:
                for indicator_name in subplot_config.indicators:
                    if indicator_name not in all_default_indicators:
                        errors.append(f"Subplot indicator '{indicator_name}' not found in defaults")
        except Exception as e:
            logger.warning(f"Strategy Charts: Could not validate indicator existence: {e}")

        # Validate subplot height ratios
        for i, subplot_config in enumerate(self.subplot_configs):
            if subplot_config.height_ratio <= 0 or subplot_config.height_ratio > 1.0:
                errors.append(f"Subplot {i} height ratio must be between 0 and 1.0")

        return len(errors) == 0, errors

    def get_all_indicators(self) -> List[str]:
        """Get all indicators used in this strategy configuration."""
        all_indicators = list(self.overlay_indicators)
        for subplot_config in self.subplot_configs:
            all_indicators.extend(subplot_config.indicators)
        return list(set(all_indicators))

    def get_indicator_configs(self) -> Dict[str, ChartIndicatorConfig]:
        """
        Get the actual indicator configuration objects for all indicators.

        Returns:
            Dictionary mapping indicator names to their configurations
        """
        all_default_indicators = get_all_default_indicators()
        indicator_configs = {}

        for indicator_name in self.get_all_indicators():
            if indicator_name in all_default_indicators:
                preset = all_default_indicators[indicator_name]
                indicator_configs[indicator_name] = preset.config

        return indicator_configs


def create_default_strategy_configurations() -> Dict[str, StrategyChartConfig]:
    """Create default chart configurations for all trading strategies."""
    strategy_configs = {}

    # Scalping Strategy
    strategy_configs["scalping"] = StrategyChartConfig(
        strategy_name="Scalping Strategy",
        strategy_type=TradingStrategy.SCALPING,
        description="Fast-paced trading with quick entry/exit on 1-5 minute charts",
        timeframes=["1m", "5m"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.6,
        overlay_indicators=["ema_5", "ema_12", "ema_21", "bb_10_15"],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.2,
                indicators=["rsi_7"],
                title="RSI (7)",
                y_axis_label="RSI",
                show_grid=True
            ),
            SubplotConfig(
                subplot_type=SubplotType.MACD,
                height_ratio=0.2,
                indicators=["macd_5_13_4"],
                title="MACD Fast",
                y_axis_label="MACD",
                show_grid=True
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=10,
            show_volume=True,
            candlestick_up_color="#00d4aa",
            candlestick_down_color="#fe6a85"
        ),
        tags=["scalping", "short-term", "fast"]
    )

    # Day Trading Strategy
    strategy_configs["day_trading"] = StrategyChartConfig(
        strategy_name="Day Trading Strategy",
        strategy_type=TradingStrategy.DAY_TRADING,
        description="Intraday trading with balanced indicator mix for 5m-1h charts",
        timeframes=["5m", "15m", "1h"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.65,
        overlay_indicators=["sma_20", "ema_12", "ema_26", "bb_20_20"],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.15,
                indicators=["rsi_14"],
                title="RSI (14)",
                y_axis_label="RSI"
            ),
            SubplotConfig(
                subplot_type=SubplotType.MACD,
                height_ratio=0.2,
                indicators=["macd_12_26_9"],
                title="MACD",
                y_axis_label="MACD"
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=12,
            show_volume=True
        ),
        tags=["day-trading", "intraday", "balanced"]
    )

    # Swing Trading Strategy
    strategy_configs["swing_trading"] = StrategyChartConfig(
        strategy_name="Swing Trading Strategy",
        strategy_type=TradingStrategy.SWING_TRADING,
        description="Medium-term trading for multi-day holds on 1h-1d charts",
        timeframes=["1h", "4h", "1d"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.7,
        overlay_indicators=["sma_50", "ema_21", "ema_50", "bb_20_20"],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.15,
                indicators=["rsi_14", "rsi_21"],
                title="RSI Comparison",
                y_axis_label="RSI"
            ),
            SubplotConfig(
                subplot_type=SubplotType.MACD,
                height_ratio=0.15,
                indicators=["macd_12_26_9"],
                title="MACD",
                y_axis_label="MACD"
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=12,
            show_volume=True
        ),
        tags=["swing-trading", "medium-term", "multi-day"]
    )

    # Position Trading Strategy
    strategy_configs["position_trading"] = StrategyChartConfig(
        strategy_name="Position Trading Strategy",
        strategy_type=TradingStrategy.POSITION_TRADING,
        description="Long-term trading for weeks/months holds on 4h-1w charts",
        timeframes=["4h", "1d", "1w"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.75,
        overlay_indicators=["sma_100", "sma_200", "ema_50", "ema_100", "bb_50_20"],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.12,
                indicators=["rsi_21"],
                title="RSI (21)",
                y_axis_label="RSI"
            ),
            SubplotConfig(
                subplot_type=SubplotType.MACD,
                height_ratio=0.13,
                indicators=["macd_19_39_13"],
                title="MACD Slow",
                y_axis_label="MACD"
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=14,
            show_volume=False  # Less important for long-term
        ),
        tags=["position-trading", "long-term", "weeks-months"]
    )

    # Momentum Strategy
    strategy_configs["momentum"] = StrategyChartConfig(
        strategy_name="Momentum Strategy",
        strategy_type=TradingStrategy.MOMENTUM,
        description="Trend-following momentum strategy for strong directional moves",
        timeframes=["15m", "1h", "4h"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.6,
        overlay_indicators=["ema_12", "ema_26"],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.15,
                indicators=["rsi_7", "rsi_14"],
                title="RSI Momentum",
                y_axis_label="RSI"
            ),
            SubplotConfig(
                subplot_type=SubplotType.MACD,
                height_ratio=0.25,
                indicators=["macd_8_17_6", "macd_12_26_9"],
                title="MACD Momentum",
                y_axis_label="MACD"
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=12,
            candlestick_up_color="#26a69a",
            candlestick_down_color="#ef5350"
        ),
        tags=["momentum", "trend-following", "directional"]
    )

    # Mean Reversion Strategy
    strategy_configs["mean_reversion"] = StrategyChartConfig(
        strategy_name="Mean Reversion Strategy",
        strategy_type=TradingStrategy.MEAN_REVERSION,
        description="Counter-trend strategy for oversold/overbought conditions",
        timeframes=["15m", "1h", "4h"],
        layout=ChartLayout.MAIN_WITH_SUBPLOTS,
        main_chart_height=0.65,
        overlay_indicators=["sma_20", "sma_50", "bb_20_20", "bb_20_25"],
        subplot_configs=[
            SubplotConfig(
                subplot_type=SubplotType.RSI,
                height_ratio=0.2,
                indicators=["rsi_14", "rsi_21"],
                title="RSI Mean Reversion",
                y_axis_label="RSI"
            ),
            SubplotConfig(
                subplot_type=SubplotType.VOLUME,
                height_ratio=0.15,
                indicators=[],
                title="Volume",
                y_axis_label="Volume"
            )
        ],
        chart_style=ChartStyle(
            theme="plotly_white",
            font_size=12,
            show_volume=True
        ),
        tags=["mean-reversion", "counter-trend", "oversold-overbought"]
    )

    return strategy_configs


def validate_strategy_configuration(config: StrategyChartConfig) -> tuple[bool, List[str]]:
    """
    Validate a strategy chart configuration.

    Args:
        config: Strategy chart configuration to validate

    Returns:
        Tuple of (is_valid, list_of_error_messages)
    """
    return config.validate()


def create_custom_strategy_config(
    strategy_name: str,
    strategy_type: TradingStrategy,
    description: str,
    timeframes: List[str],
    overlay_indicators: List[str],
    subplot_configs: List[Dict[str, Any]],
    chart_style: Optional[Dict[str, Any]] = None,
    **kwargs
) -> tuple[Optional[StrategyChartConfig], List[str]]:
    """
    Create a custom strategy chart configuration.

    Args:
        strategy_name: Name of the strategy
        strategy_type: Type of trading strategy
        description: Strategy description
        timeframes: List of recommended timeframes
        overlay_indicators: List of overlay indicator names
        subplot_configs: List of subplot configuration dictionaries
        chart_style: Optional chart style configuration
        **kwargs: Additional configuration options

    Returns:
        Tuple of (config_object_or_None, list_of_error_messages)
    """
    try:
        # Create subplot configurations
        subplots = []
        for subplot_data in subplot_configs:
            subplot_type = SubplotType(subplot_data.get("subplot_type", "custom"))
            subplot = SubplotConfig(
                subplot_type=subplot_type,
                height_ratio=subplot_data.get("height_ratio", 0.2),
                indicators=subplot_data.get("indicators", []),
                title=subplot_data.get("title"),
                y_axis_label=subplot_data.get("y_axis_label"),
                show_grid=subplot_data.get("show_grid", True),
                show_legend=subplot_data.get("show_legend", True),
                background_color=subplot_data.get("background_color")
            )
            subplots.append(subplot)

        # Create chart style
        style = ChartStyle()
        if chart_style:
            for key, value in chart_style.items():
                if hasattr(style, key):
                    setattr(style, key, value)

        # Create configuration
        config = StrategyChartConfig(
            strategy_name=strategy_name,
            strategy_type=strategy_type,
            description=description,
            timeframes=timeframes,
            layout=ChartLayout(kwargs.get("layout", ChartLayout.MAIN_WITH_SUBPLOTS.value)),
            main_chart_height=kwargs.get("main_chart_height", 0.7),
            overlay_indicators=overlay_indicators,
            subplot_configs=subplots,
            chart_style=style,
            created_at=datetime.now(),
            version=kwargs.get("version", "1.0"),
            tags=kwargs.get("tags", [])
        )

        # Validate configuration
        is_valid, errors = config.validate()
        if not is_valid:
            return None, errors

        return config, []

    except Exception as e:
        return None, [f"Error creating strategy configuration: {e}"]


def load_strategy_config_from_json(json_data: Union[str, Dict[str, Any]]) -> tuple[Optional[StrategyChartConfig], List[str]]:
    """
    Load strategy configuration from JSON data.

    Args:
        json_data: JSON string or dictionary with configuration data

    Returns:
        Tuple of (config_object_or_None, list_of_error_messages)
    """
    try:
        if isinstance(json_data, str):
            data = json.loads(json_data)
        else:
            data = json_data

        # Check required fields
        required_fields = ["strategy_name", "strategy_type", "description", "timeframes"]
        missing_fields = [field for field in required_fields if field not in data]
        if missing_fields:
            return None, [f"Missing required fields: {', '.join(missing_fields)}"]

        # Convert strategy type
        try:
            strategy_type = TradingStrategy(data["strategy_type"])
        except ValueError:
            return None, [f"Invalid strategy type: {data['strategy_type']}"]

        return create_custom_strategy_config(
            strategy_name=data["strategy_name"],
            strategy_type=strategy_type,
            description=data["description"],
            timeframes=data["timeframes"],
            overlay_indicators=data.get("overlay_indicators", []),
            subplot_configs=data.get("subplot_configs", []),
            chart_style=data.get("chart_style"),
            **{k: v for k, v in data.items() if k not in required_fields + ["overlay_indicators", "subplot_configs", "chart_style"]}
        )

    except json.JSONDecodeError as e:
        return None, [f"Invalid JSON: {e}"]
    except Exception as e:
        return None, [f"Error loading configuration: {e}"]


def export_strategy_config_to_json(config: StrategyChartConfig) -> str:
    """
    Export strategy configuration to JSON string.

    Args:
        config: Strategy configuration to export

    Returns:
        JSON string representation of the configuration
    """
    # Convert to dictionary
    config_dict = {
        "strategy_name": config.strategy_name,
        "strategy_type": config.strategy_type.value,
        "description": config.description,
        "timeframes": config.timeframes,
        "layout": config.layout.value,
        "main_chart_height": config.main_chart_height,
        "overlay_indicators": config.overlay_indicators,
        "subplot_configs": [
            {
                "subplot_type": subplot.subplot_type.value,
                "height_ratio": subplot.height_ratio,
                "indicators": subplot.indicators,
                "title": subplot.title,
                "y_axis_label": subplot.y_axis_label,
                "show_grid": subplot.show_grid,
                "show_legend": subplot.show_legend,
                "background_color": subplot.background_color
            }
            for subplot in config.subplot_configs
        ],
        "chart_style": {
            "theme": config.chart_style.theme,
            "background_color": config.chart_style.background_color,
            "grid_color": config.chart_style.grid_color,
            "text_color": config.chart_style.text_color,
            "font_family": config.chart_style.font_family,
            "font_size": config.chart_style.font_size,
            "candlestick_up_color": config.chart_style.candlestick_up_color,
            "candlestick_down_color": config.chart_style.candlestick_down_color,
            "volume_color": config.chart_style.volume_color,
            "show_volume": config.chart_style.show_volume,
            "show_grid": config.chart_style.show_grid,
            "show_legend": config.chart_style.show_legend,
            "show_toolbar": config.chart_style.show_toolbar
        },
        "version": config.version,
        "tags": config.tags
    }

    return json.dumps(config_dict, indent=2)


def get_strategy_config(strategy_name: str) -> Optional[StrategyChartConfig]:
    """
    Get a default strategy configuration by name.

    Args:
        strategy_name: Name of the strategy

    Returns:
        Strategy configuration or None if not found
    """
    default_configs = create_default_strategy_configurations()
    return default_configs.get(strategy_name)


def get_all_strategy_configs() -> Dict[str, StrategyChartConfig]:
    """
    Get all default strategy configurations.

    Returns:
        Dictionary mapping strategy names to their configurations
    """
    return create_default_strategy_configurations()


def get_available_strategy_names() -> List[str]:
    """
    Get list of available default strategy names.

    Returns:
        List of strategy names
    """
    return list(create_default_strategy_configurations().keys())
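The module above exposes ready-made strategy configurations plus JSON round-tripping. A minimal usage sketch, not part of the commit itself; the import path components.charts.config.strategy_charts is an assumption inferred from the file headers:

# Hypothetical driver sketch; import path assumed from the file headers above.
from components.charts.config.strategy_charts import (
    get_strategy_config,
    export_strategy_config_to_json,
    load_strategy_config_from_json,
)

config = get_strategy_config("swing_trading")
if config is not None:
    is_valid, errors = config.validate()
    print(f"{config.strategy_name}: valid={is_valid}", errors)

    # Round-trip through JSON (note: created_at/updated_at are not serialized)
    json_text = export_strategy_config_to_json(config)
    rebuilt, load_errors = load_strategy_config_from_json(json_text)
    print("round-trip ok" if rebuilt else f"round-trip failed: {load_errors}")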
676
components/charts/config/validation.py
Normal file
@ -0,0 +1,676 @@
"""
Configuration Validation and Error Handling System

This module provides comprehensive validation for chart configurations with
detailed error reporting, warnings, and configurable validation rules.
"""

from typing import Dict, List, Any, Optional, Union, Tuple, Set
from dataclasses import dataclass, field
from enum import Enum
import re
from datetime import datetime

from .indicator_defs import ChartIndicatorConfig, INDICATOR_SCHEMAS, validate_indicator_configuration
from .defaults import get_all_default_indicators, TradingStrategy, IndicatorCategory
from .strategy_charts import StrategyChartConfig, SubplotConfig, ChartStyle, ChartLayout, SubplotType
from utils.logger import get_logger

# Initialize logger
logger = get_logger("default_logger")


class ValidationLevel(str, Enum):
    """Validation severity levels."""
    ERROR = "error"
    WARNING = "warning"
    INFO = "info"
    DEBUG = "debug"


class ValidationRule(str, Enum):
    """Available validation rules."""
    REQUIRED_FIELDS = "required_fields"
    HEIGHT_RATIOS = "height_ratios"
    INDICATOR_EXISTENCE = "indicator_existence"
    TIMEFRAME_FORMAT = "timeframe_format"
    CHART_STYLE = "chart_style"
    SUBPLOT_CONFIG = "subplot_config"
    STRATEGY_CONSISTENCY = "strategy_consistency"
    PERFORMANCE_IMPACT = "performance_impact"
    INDICATOR_CONFLICTS = "indicator_conflicts"
    RESOURCE_USAGE = "resource_usage"


@dataclass
class ValidationIssue:
    """Represents a validation issue."""
    level: ValidationLevel
    rule: ValidationRule
    message: str
    field_path: str = ""
    suggestion: Optional[str] = None
    auto_fix: Optional[str] = None
    context: Dict[str, Any] = field(default_factory=dict)

    def __str__(self) -> str:
        """String representation of the validation issue."""
        prefix = f"[{self.level.value.upper()}]"
        location = f" at {self.field_path}" if self.field_path else ""
        suggestion = f" Suggestion: {self.suggestion}" if self.suggestion else ""
        return f"{prefix} {self.message}{location}.{suggestion}"


@dataclass
class ValidationReport:
    """Comprehensive validation report."""
    is_valid: bool
    errors: List[ValidationIssue] = field(default_factory=list)
    warnings: List[ValidationIssue] = field(default_factory=list)
    info: List[ValidationIssue] = field(default_factory=list)
    debug: List[ValidationIssue] = field(default_factory=list)
    validation_time: Optional[datetime] = None
    rules_applied: Set[ValidationRule] = field(default_factory=set)

    def add_issue(self, issue: ValidationIssue) -> None:
        """Add a validation issue to the appropriate list."""
        if issue.level == ValidationLevel.ERROR:
            self.errors.append(issue)
            self.is_valid = False
        elif issue.level == ValidationLevel.WARNING:
            self.warnings.append(issue)
        elif issue.level == ValidationLevel.INFO:
            self.info.append(issue)
        elif issue.level == ValidationLevel.DEBUG:
            self.debug.append(issue)

    def get_all_issues(self) -> List[ValidationIssue]:
        """Get all validation issues sorted by severity."""
        return self.errors + self.warnings + self.info + self.debug

    def get_issues_by_rule(self, rule: ValidationRule) -> List[ValidationIssue]:
        """Get all issues for a specific validation rule."""
        return [issue for issue in self.get_all_issues() if issue.rule == rule]

    def has_errors(self) -> bool:
        """Check if there are any errors."""
        return len(self.errors) > 0

    def has_warnings(self) -> bool:
        """Check if there are any warnings."""
        return len(self.warnings) > 0

    def summary(self) -> str:
        """Get a summary of the validation report."""
        total_issues = len(self.get_all_issues())
        status = "INVALID" if not self.is_valid else "VALID"
        return (f"Validation {status}: {len(self.errors)} errors, "
                f"{len(self.warnings)} warnings, {total_issues} total issues")


class ConfigurationValidator:
    """Comprehensive configuration validator."""

    def __init__(self, enabled_rules: Optional[Set[ValidationRule]] = None):
        """
        Initialize validator with optional rule filtering.

        Args:
            enabled_rules: Set of rules to apply. If None, applies all rules.
        """
        self.enabled_rules = enabled_rules or set(ValidationRule)
        self.timeframe_pattern = re.compile(r'^(\d+)(m|h|d|w)$')
        self.color_pattern = re.compile(r'^#[0-9a-fA-F]{6}$')

        # Load indicator information for validation
        self._load_indicator_info()

    def _load_indicator_info(self) -> None:
        """Load indicator information for validation."""
        try:
            self.available_indicators = get_all_default_indicators()
            self.indicator_schemas = INDICATOR_SCHEMAS
        except Exception as e:
            logger.warning(f"Validation: Failed to load indicator information: {e}")
            self.available_indicators = {}
            self.indicator_schemas = {}

    def validate_strategy_config(self, config: StrategyChartConfig) -> ValidationReport:
        """
        Perform comprehensive validation of a strategy configuration.

        Args:
            config: Strategy configuration to validate

        Returns:
            Detailed validation report
        """
        report = ValidationReport(is_valid=True, validation_time=datetime.now())

        # Apply validation rules
        if ValidationRule.REQUIRED_FIELDS in self.enabled_rules:
            self._validate_required_fields(config, report)

        if ValidationRule.HEIGHT_RATIOS in self.enabled_rules:
            self._validate_height_ratios(config, report)

        if ValidationRule.INDICATOR_EXISTENCE in self.enabled_rules:
            self._validate_indicator_existence(config, report)

        if ValidationRule.TIMEFRAME_FORMAT in self.enabled_rules:
            self._validate_timeframe_format(config, report)

        if ValidationRule.CHART_STYLE in self.enabled_rules:
            self._validate_chart_style(config, report)

        if ValidationRule.SUBPLOT_CONFIG in self.enabled_rules:
            self._validate_subplot_configs(config, report)

        if ValidationRule.STRATEGY_CONSISTENCY in self.enabled_rules:
            self._validate_strategy_consistency(config, report)

        if ValidationRule.PERFORMANCE_IMPACT in self.enabled_rules:
            self._validate_performance_impact(config, report)

        if ValidationRule.INDICATOR_CONFLICTS in self.enabled_rules:
            self._validate_indicator_conflicts(config, report)

        if ValidationRule.RESOURCE_USAGE in self.enabled_rules:
            self._validate_resource_usage(config, report)

        # Update applied rules
        report.rules_applied = self.enabled_rules

        return report

    def _validate_required_fields(self, config: StrategyChartConfig, report: ValidationReport) -> None:
        """Validate required fields."""
        # Strategy name
        if not config.strategy_name or not config.strategy_name.strip():
            report.add_issue(ValidationIssue(
                level=ValidationLevel.ERROR,
                rule=ValidationRule.REQUIRED_FIELDS,
                message="Strategy name is required and cannot be empty",
                field_path="strategy_name",
                suggestion="Provide a descriptive name for your strategy"
            ))
        elif len(config.strategy_name.strip()) < 3:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.WARNING,
                rule=ValidationRule.REQUIRED_FIELDS,
                message="Strategy name is very short",
                field_path="strategy_name",
                suggestion="Use a more descriptive name (at least 3 characters)"
            ))

        # Strategy type
        if not isinstance(config.strategy_type, TradingStrategy):
            report.add_issue(ValidationIssue(
                level=ValidationLevel.ERROR,
                rule=ValidationRule.REQUIRED_FIELDS,
                message="Invalid strategy type",
                field_path="strategy_type",
                suggestion=f"Must be one of: {[s.value for s in TradingStrategy]}"
            ))

        # Description
        if not config.description or not config.description.strip():
            report.add_issue(ValidationIssue(
                level=ValidationLevel.WARNING,
                rule=ValidationRule.REQUIRED_FIELDS,
                message="Strategy description is missing",
                field_path="description",
                suggestion="Add a description to help users understand the strategy"
            ))

        # Timeframes
        if not config.timeframes:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.ERROR,
                rule=ValidationRule.REQUIRED_FIELDS,
                message="At least one timeframe must be specified",
                field_path="timeframes",
                suggestion="Add recommended timeframes for this strategy"
            ))

    def _validate_height_ratios(self, config: StrategyChartConfig, report: ValidationReport) -> None:
        """Validate chart height ratios."""
        # Main chart height
        if config.main_chart_height <= 0 or config.main_chart_height > 1.0:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.ERROR,
                rule=ValidationRule.HEIGHT_RATIOS,
                message=f"Main chart height ({config.main_chart_height}) must be between 0 and 1.0",
                field_path="main_chart_height",
                suggestion="Set a value between 0.1 and 0.9",
                auto_fix="0.7"
            ))
        elif config.main_chart_height < 0.3:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.WARNING,
                rule=ValidationRule.HEIGHT_RATIOS,
                message=f"Main chart height ({config.main_chart_height}) is very small",
                field_path="main_chart_height",
                suggestion="Consider using at least 0.3 for better visibility"
            ))

        # Subplot heights
        total_subplot_height = sum(subplot.height_ratio for subplot in config.subplot_configs)
        total_height = config.main_chart_height + total_subplot_height

        if total_height > 1.0:
            excess = total_height - 1.0
            report.add_issue(ValidationIssue(
                level=ValidationLevel.ERROR,
                rule=ValidationRule.HEIGHT_RATIOS,
                message=f"Total chart height ({total_height:.3f}) exceeds 1.0 by {excess:.3f}",
                field_path="height_ratios",
                suggestion="Reduce main chart height or subplot heights",
                context={"total_height": total_height, "excess": excess}
            ))
        elif total_height < 0.8:
            unused = 1.0 - total_height
            report.add_issue(ValidationIssue(
                level=ValidationLevel.INFO,
                rule=ValidationRule.HEIGHT_RATIOS,
                message=f"Chart height ({total_height:.3f}) leaves {unused:.3f} unused space",
                field_path="height_ratios",
                suggestion="Consider increasing chart or subplot heights for better space utilization"
            ))

        # Individual subplot heights
        for i, subplot in enumerate(config.subplot_configs):
            if subplot.height_ratio <= 0 or subplot.height_ratio > 1.0:
                report.add_issue(ValidationIssue(
                    level=ValidationLevel.ERROR,
                    rule=ValidationRule.HEIGHT_RATIOS,
                    message=f"Subplot {i} height ratio ({subplot.height_ratio}) must be between 0 and 1.0",
                    field_path=f"subplot_configs[{i}].height_ratio",
                    suggestion="Set a value between 0.1 and 0.5"
                ))
            elif subplot.height_ratio < 0.1:
                report.add_issue(ValidationIssue(
                    level=ValidationLevel.WARNING,
                    rule=ValidationRule.HEIGHT_RATIOS,
                    message=f"Subplot {i} height ratio ({subplot.height_ratio}) is very small",
                    field_path=f"subplot_configs[{i}].height_ratio",
                    suggestion="Consider using at least 0.1 for better readability"
                ))

    def _validate_indicator_existence(self, config: StrategyChartConfig, report: ValidationReport) -> None:
        """Validate that indicators exist in the available indicators."""
        # Check overlay indicators
        for indicator in config.overlay_indicators:
            if indicator not in self.available_indicators:
                report.add_issue(ValidationIssue(
                    level=ValidationLevel.ERROR,
                    rule=ValidationRule.INDICATOR_EXISTENCE,
                    message=f"Overlay indicator '{indicator}' not found",
                    field_path=f"overlay_indicators.{indicator}",
                    suggestion="Check indicator name or add it to defaults",
                    context={"available_count": len(self.available_indicators)}
                ))

        # Check subplot indicators
        for i, subplot in enumerate(config.subplot_configs):
            for indicator in subplot.indicators:
                if indicator not in self.available_indicators:
                    report.add_issue(ValidationIssue(
                        level=ValidationLevel.ERROR,
                        rule=ValidationRule.INDICATOR_EXISTENCE,
                        message=f"Subplot indicator '{indicator}' not found",
                        field_path=f"subplot_configs[{i}].indicators.{indicator}",
                        suggestion="Check indicator name or add it to defaults"
                    ))

    def _validate_timeframe_format(self, config: StrategyChartConfig, report: ValidationReport) -> None:
        """Validate timeframe format."""
        valid_timeframes = ['1m', '5m', '15m', '30m', '1h', '2h', '4h', '6h', '8h', '12h', '1d', '3d', '1w', '1M']

        for timeframe in config.timeframes:
            if timeframe not in valid_timeframes:
                if self.timeframe_pattern.match(timeframe):
                    report.add_issue(ValidationIssue(
                        level=ValidationLevel.WARNING,
                        rule=ValidationRule.TIMEFRAME_FORMAT,
                        message=f"Timeframe '{timeframe}' format is valid but not in common list",
                        field_path=f"timeframes.{timeframe}",
                        suggestion=f"Consider using standard timeframes: {valid_timeframes[:8]}"
                    ))
                else:
                    report.add_issue(ValidationIssue(
                        level=ValidationLevel.ERROR,
                        rule=ValidationRule.TIMEFRAME_FORMAT,
                        message=f"Invalid timeframe format '{timeframe}'",
                        field_path=f"timeframes.{timeframe}",
                        suggestion="Use format like '1m', '5m', '1h', '4h', '1d', '1w'",
                        context={"valid_timeframes": valid_timeframes}
                    ))

    def _validate_chart_style(self, config: StrategyChartConfig, report: ValidationReport) -> None:
        """Validate chart style configuration."""
        style = config.chart_style

        # Validate colors
        color_fields = [
            ('background_color', style.background_color),
            ('grid_color', style.grid_color),
            ('text_color', style.text_color),
            ('candlestick_up_color', style.candlestick_up_color),
            ('candlestick_down_color', style.candlestick_down_color),
            ('volume_color', style.volume_color)
        ]

        for field_name, color_value in color_fields:
            if color_value and not self.color_pattern.match(color_value):
                report.add_issue(ValidationIssue(
                    level=ValidationLevel.ERROR,
                    rule=ValidationRule.CHART_STYLE,
                    message=f"Invalid color format for {field_name}: '{color_value}'",
                    field_path=f"chart_style.{field_name}",
                    suggestion="Use hex color format like '#ffffff' or '#123456'"
                ))

        # Validate font size
        if style.font_size < 6 or style.font_size > 24:
            level = ValidationLevel.ERROR if style.font_size < 1 or style.font_size > 48 else ValidationLevel.WARNING
            report.add_issue(ValidationIssue(
                level=level,
                rule=ValidationRule.CHART_STYLE,
                message=f"Font size {style.font_size} may cause readability issues",
                field_path="chart_style.font_size",
                suggestion="Use font size between 8 and 18 for optimal readability"
            ))

        # Validate theme
        valid_themes = ['plotly', 'plotly_white', 'plotly_dark', 'ggplot2', 'seaborn', 'simple_white']
        if style.theme not in valid_themes:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.WARNING,
                rule=ValidationRule.CHART_STYLE,
                message=f"Theme '{style.theme}' may not be supported",
                field_path="chart_style.theme",
                suggestion=f"Consider using: {valid_themes[:3]}",
                context={"valid_themes": valid_themes}
            ))

    def _validate_subplot_configs(self, config: StrategyChartConfig, report: ValidationReport) -> None:
        """Validate subplot configurations."""
        # Check for duplicate subplot types
        seen_types = set()
        for i, subplot in enumerate(config.subplot_configs):
            if subplot.subplot_type in seen_types:
                report.add_issue(ValidationIssue(
                    level=ValidationLevel.WARNING,
                    rule=ValidationRule.SUBPLOT_CONFIG,
                    message=f"Duplicate subplot type '{subplot.subplot_type.value}' at position {i}",
                    field_path=f"subplot_configs[{i}].subplot_type",
                    suggestion="Consider using different subplot types or combining indicators"
                ))
            seen_types.add(subplot.subplot_type)

            # Validate subplot-specific indicators
            if subplot.subplot_type == SubplotType.RSI and subplot.indicators:
                for indicator in subplot.indicators:
                    if indicator in self.available_indicators:
                        indicator_config = self.available_indicators[indicator].config
                        indicator_type = indicator_config.indicator_type
                        # Handle both string and enum types
                        if hasattr(indicator_type, 'value'):
                            indicator_type_value = indicator_type.value
                        else:
                            indicator_type_value = str(indicator_type)

                        if indicator_type_value != 'rsi':
                            report.add_issue(ValidationIssue(
                                level=ValidationLevel.WARNING,
                                rule=ValidationRule.SUBPLOT_CONFIG,
                                message=f"Non-RSI indicator '{indicator}' in RSI subplot",
                                field_path=f"subplot_configs[{i}].indicators.{indicator}",
                                suggestion="Use RSI indicators in RSI subplots for consistency"
                            ))

            elif subplot.subplot_type == SubplotType.MACD and subplot.indicators:
                for indicator in subplot.indicators:
                    if indicator in self.available_indicators:
                        indicator_config = self.available_indicators[indicator].config
                        indicator_type = indicator_config.indicator_type
                        # Handle both string and enum types
                        if hasattr(indicator_type, 'value'):
                            indicator_type_value = indicator_type.value
                        else:
                            indicator_type_value = str(indicator_type)

                        if indicator_type_value != 'macd':
                            report.add_issue(ValidationIssue(
                                level=ValidationLevel.WARNING,
                                rule=ValidationRule.SUBPLOT_CONFIG,
                                message=f"Non-MACD indicator '{indicator}' in MACD subplot",
                                field_path=f"subplot_configs[{i}].indicators.{indicator}",
                                suggestion="Use MACD indicators in MACD subplots for consistency"
                            ))

    def _validate_strategy_consistency(self, config: StrategyChartConfig, report: ValidationReport) -> None:
        """Validate strategy consistency with indicator choices."""
        strategy_type = config.strategy_type
        timeframes = config.timeframes

        # Check timeframe consistency with strategy
        strategy_timeframe_recommendations = {
            TradingStrategy.SCALPING: ['1m', '5m'],
            TradingStrategy.DAY_TRADING: ['5m', '15m', '1h'],
            TradingStrategy.SWING_TRADING: ['1h', '4h', '1d'],
            TradingStrategy.POSITION_TRADING: ['4h', '1d', '1w'],
            TradingStrategy.MOMENTUM: ['15m', '1h', '4h'],
            TradingStrategy.MEAN_REVERSION: ['15m', '1h', '4h']
        }

        recommended = strategy_timeframe_recommendations.get(strategy_type, [])
        if recommended:
            mismatched_timeframes = [tf for tf in timeframes if tf not in recommended]
            if mismatched_timeframes:
                report.add_issue(ValidationIssue(
                    level=ValidationLevel.INFO,
                    rule=ValidationRule.STRATEGY_CONSISTENCY,
                    message=f"Timeframes {mismatched_timeframes} may not be optimal for {strategy_type.value}",
                    field_path="timeframes",
                    suggestion=f"Consider using: {recommended}",
                    context={"recommended": recommended, "current": timeframes}
                ))

    def _validate_performance_impact(self, config: StrategyChartConfig, report: ValidationReport) -> None:
        """Validate potential performance impact."""
        total_indicators = len(config.overlay_indicators)
        for subplot in config.subplot_configs:
            total_indicators += len(subplot.indicators)

        if total_indicators > 10:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.WARNING,
                rule=ValidationRule.PERFORMANCE_IMPACT,
                message=f"High indicator count ({total_indicators}) may impact performance",
                field_path="indicators",
                suggestion="Consider reducing the number of indicators for better performance",
                context={"indicator_count": total_indicators}
            ))

        # Check for complex indicators
        complex_indicators = ['bollinger_bands', 'macd']
        complex_count = 0
        all_indicators = config.overlay_indicators.copy()
        for subplot in config.subplot_configs:
            all_indicators.extend(subplot.indicators)

        for indicator in all_indicators:
            if indicator in self.available_indicators:
                indicator_config = self.available_indicators[indicator].config
                indicator_type = indicator_config.indicator_type
                # Handle both string and enum types
                if hasattr(indicator_type, 'value'):
                    indicator_type_value = indicator_type.value
                else:
                    indicator_type_value = str(indicator_type)

                if indicator_type_value in complex_indicators:
                    complex_count += 1

        if complex_count > 3:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.INFO,
                rule=ValidationRule.PERFORMANCE_IMPACT,
                message=f"Multiple complex indicators ({complex_count}) detected",
                field_path="indicators",
                suggestion="Complex indicators may increase calculation time",
                context={"complex_count": complex_count}
            ))

    def _validate_indicator_conflicts(self, config: StrategyChartConfig, report: ValidationReport) -> None:
        """Validate for potential indicator conflicts or redundancy."""
        all_indicators = config.overlay_indicators.copy()
        for subplot in config.subplot_configs:
            all_indicators.extend(subplot.indicators)

        # Check for similar indicators
        sma_indicators = [ind for ind in all_indicators if 'sma_' in ind]
        ema_indicators = [ind for ind in all_indicators if 'ema_' in ind]
        rsi_indicators = [ind for ind in all_indicators if 'rsi_' in ind]

        if len(sma_indicators) > 3:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.INFO,
                rule=ValidationRule.INDICATOR_CONFLICTS,
                message=f"Multiple SMA indicators ({len(sma_indicators)}) may create visual clutter",
                field_path="overlay_indicators",
                suggestion="Consider using fewer SMA periods for cleaner charts"
            ))

        if len(ema_indicators) > 3:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.INFO,
                rule=ValidationRule.INDICATOR_CONFLICTS,
                message=f"Multiple EMA indicators ({len(ema_indicators)}) may create visual clutter",
                field_path="overlay_indicators",
                suggestion="Consider using fewer EMA periods for cleaner charts"
            ))

        if len(rsi_indicators) > 2:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.WARNING,
                rule=ValidationRule.INDICATOR_CONFLICTS,
                message=f"Multiple RSI indicators ({len(rsi_indicators)}) provide redundant information",
                field_path="subplot_indicators",
                suggestion="Usually one or two RSI periods are sufficient"
            ))

    def _validate_resource_usage(self, config: StrategyChartConfig, report: ValidationReport) -> None:
        """Validate estimated resource usage."""
        # Estimate memory usage based on indicators and subplots
        base_memory = 1.0  # Base chart memory in MB
        indicator_memory = len(config.overlay_indicators) * 0.1  # 0.1 MB per overlay indicator
        subplot_memory = len(config.subplot_configs) * 0.5  # 0.5 MB per subplot

        total_memory = base_memory + indicator_memory + subplot_memory

        if total_memory > 5.0:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.WARNING,
                rule=ValidationRule.RESOURCE_USAGE,
                message=f"Estimated memory usage ({total_memory:.1f} MB) is high",
                field_path="configuration",
                suggestion="Consider reducing indicators or subplots for lower memory usage",
                context={"estimated_memory_mb": total_memory}
            ))

        # Check for potential rendering complexity
        rendering_complexity = len(config.overlay_indicators) + (len(config.subplot_configs) * 2)
        if rendering_complexity > 15:
            report.add_issue(ValidationIssue(
                level=ValidationLevel.INFO,
                rule=ValidationRule.RESOURCE_USAGE,
                message=f"High rendering complexity ({rendering_complexity}) detected",
                field_path="configuration",
                suggestion="Complex charts may have slower rendering times"
            ))


def validate_configuration(
    config: StrategyChartConfig,
    rules: Optional[Set[ValidationRule]] = None,
    strict: bool = False
) -> ValidationReport:
    """
    Validate a strategy configuration with comprehensive error checking.

    Args:
        config: Strategy configuration to validate
        rules: Optional set of validation rules to apply
        strict: If True, treats warnings as errors

    Returns:
        Comprehensive validation report
    """
    validator = ConfigurationValidator(enabled_rules=rules)
    report = validator.validate_strategy_config(config)

    # In strict mode, treat warnings as errors
    if strict and report.warnings:
        for warning in report.warnings:
            warning.level = ValidationLevel.ERROR
            report.errors.append(warning)
        report.warnings.clear()
        report.is_valid = False

    return report


def get_validation_rules_info() -> Dict[ValidationRule, Dict[str, str]]:
    """
    Get information about available validation rules.

    Returns:
        Dictionary mapping rules to their descriptions
    """
    return {
        ValidationRule.REQUIRED_FIELDS: {
            "name": "Required Fields",
            "description": "Validates that all required configuration fields are present and valid"
        },
        ValidationRule.HEIGHT_RATIOS: {
            "name": "Height Ratios",
            "description": "Validates chart and subplot height ratios sum correctly"
        },
        ValidationRule.INDICATOR_EXISTENCE: {
            "name": "Indicator Existence",
            "description": "Validates that all referenced indicators exist in the defaults"
        },
        ValidationRule.TIMEFRAME_FORMAT: {
            "name": "Timeframe Format",
            "description": "Validates timeframe format and common usage patterns"
        },
        ValidationRule.CHART_STYLE: {
            "name": "Chart Style",
            "description": "Validates chart styling options like colors, fonts, and themes"
        },
        ValidationRule.SUBPLOT_CONFIG: {
            "name": "Subplot Configuration",
            "description": "Validates subplot configurations and indicator compatibility"
        },
        ValidationRule.STRATEGY_CONSISTENCY: {
            "name": "Strategy Consistency",
            "description": "Validates that configuration matches strategy type recommendations"
        },
        ValidationRule.PERFORMANCE_IMPACT: {
            "name": "Performance Impact",
            "description": "Warns about configurations that may impact performance"
        },
        ValidationRule.INDICATOR_CONFLICTS: {
            "name": "Indicator Conflicts",
            "description": "Detects redundant or conflicting indicator combinations"
        },
        ValidationRule.RESOURCE_USAGE: {
            "name": "Resource Usage",
            "description": "Estimates and warns about high resource usage configurations"
        }
    }
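For reference, a short sketch of how the validator above might be driven with a restricted rule set and strict mode (import paths assumed from the file headers, as before):

# Hypothetical sketch; import paths assumed from the file headers above.
from components.charts.config.strategy_charts import get_strategy_config
from components.charts.config.validation import ValidationRule, validate_configuration

config = get_strategy_config("scalping")
if config is not None:
    # Apply only the structural rules, and escalate warnings to errors
    report = validate_configuration(
        config,
        rules={ValidationRule.REQUIRED_FIELDS, ValidationRule.HEIGHT_RATIOS},
        strict=True,
    )
    print(report.summary())
    for issue in report.get_all_issues():
        print(issue)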
597
components/charts/data_integration.py
Normal file
@ -0,0 +1,597 @@
"""
Market Data Integration for Chart Layers

This module provides seamless integration between database market data and
indicator layer calculations, handling data format conversions, validation,
and optimization for real-time chart updates.
"""

import pandas as pd
from datetime import datetime, timezone, timedelta
from typing import List, Dict, Any, Optional, Union, Tuple
from decimal import Decimal
from dataclasses import dataclass

from database.operations import get_database_operations, DatabaseOperationError
from data.common.data_types import OHLCVCandle
from data.common.indicators import TechnicalIndicators, IndicatorResult
from components.charts.config.indicator_defs import convert_database_candles_to_ohlcv
from utils.logger import get_logger

# Initialize logger
logger = get_logger("default_logger")


@dataclass
class DataIntegrationConfig:
    """Configuration for market data integration."""
    default_days_back: int = 7
    min_candles_required: int = 50
    max_candles_limit: int = 1000
    cache_timeout_minutes: int = 5
    enable_data_validation: bool = True
    enable_sparse_data_handling: bool = True


class MarketDataIntegrator:
    """
    Integrates market data from the database with indicator calculations.

    This class handles:
    - Fetching market data from the database
    - Converting it to indicator-compatible formats
    - Caching for performance
    - Data validation and error handling
    - Sparse data handling (gaps in the time series)
    """

    def __init__(self, config: Optional[DataIntegrationConfig] = None):
        """
        Initialize market data integrator.

        Args:
            config: Integration configuration
        """
        self.config = config or DataIntegrationConfig()
        self.logger = logger
        self.db_ops = get_database_operations(self.logger)
        self.indicators = TechnicalIndicators()

        # Simple in-memory cache for recent data
        self._cache: Dict[str, Dict[str, Any]] = {}

    def get_market_data_for_indicators(
        self,
        symbol: str,
        timeframe: str,
        days_back: Optional[int] = None,
        exchange: str = "okx"
    ) -> Tuple[List[Dict[str, Any]], List[OHLCVCandle]]:
        """
        Fetch and prepare market data for indicator calculations.

        Args:
            symbol: Trading pair (e.g., 'BTC-USDT')
            timeframe: Timeframe (e.g., '1h', '1d')
            days_back: Number of days to look back
            exchange: Exchange name

        Returns:
            Tuple of (raw_candles, ohlcv_candles) for different use cases
        """
        try:
            # Use default or provided days_back
            days_back = days_back or self.config.default_days_back

            # Check cache first
            cache_key = f"{symbol}_{timeframe}_{days_back}_{exchange}"
            cached_data = self._get_cached_data(cache_key)
            if cached_data:
                self.logger.debug(f"Data Integration: Using cached data for {cache_key}")
                return cached_data['raw_candles'], cached_data['ohlcv_candles']

            # Fetch from database
            end_time = datetime.now(timezone.utc)
            start_time = end_time - timedelta(days=days_back)

            raw_candles = self.db_ops.market_data.get_candles(
                symbol=symbol,
                timeframe=timeframe,
                start_time=start_time,
                end_time=end_time,
                exchange=exchange
            )

            if not raw_candles:
                self.logger.warning(f"Data Integration: No market data found for {symbol} {timeframe}")
                return [], []

            # Validate data if enabled
            if self.config.enable_data_validation:
                raw_candles = self._validate_and_clean_data(raw_candles)

            # Handle sparse data if enabled
            if self.config.enable_sparse_data_handling:
                raw_candles = self._handle_sparse_data(raw_candles, timeframe)

            # Convert to OHLCV format for indicators
            ohlcv_candles = convert_database_candles_to_ohlcv(raw_candles)

            # Cache the results
            self._cache_data(cache_key, {
                'raw_candles': raw_candles,
                'ohlcv_candles': ohlcv_candles,
                'timestamp': datetime.now(timezone.utc)
            })

            self.logger.debug(f"Data Integration: Fetched {len(raw_candles)} candles for {symbol} {timeframe}")
            return raw_candles, ohlcv_candles

        except DatabaseOperationError as e:
            self.logger.error(f"Data Integration: Database error fetching market data: {e}")
            return [], []
        except Exception as e:
            self.logger.error(f"Data Integration: Unexpected error fetching market data: {e}")
            return [], []
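    # Usage sketch (hypothetical symbol/timeframe; kept as comments so the
    # class body stays valid):
    #   integrator = MarketDataIntegrator(DataIntegrationConfig(default_days_back=14))
    #   raw, ohlcv = integrator.get_market_data_for_indicators("BTC-USDT", "1h")
    #   # raw: database dicts for tabular views; ohlcv: OHLCVCandle objects for indicators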
    def calculate_indicators_for_symbol(
        self,
        symbol: str,
        timeframe: str,
        indicator_configs: List[Dict[str, Any]],
        days_back: Optional[int] = None,
        exchange: str = "okx"
    ) -> Dict[str, List[IndicatorResult]]:
        """
        Calculate multiple indicators for a symbol.

        Args:
            symbol: Trading pair
            timeframe: Timeframe
            indicator_configs: List of indicator configurations
            days_back: Number of days to look back
            exchange: Exchange name

        Returns:
            Dictionary mapping indicator names to their results
        """
        try:
            # Get market data
            raw_candles, ohlcv_candles = self.get_market_data_for_indicators(
                symbol, timeframe, days_back, exchange
            )

            if not ohlcv_candles:
                self.logger.warning(f"Data Integration: No data available for indicator calculations: {symbol} {timeframe}")
                return {}

            # Check minimum data requirements
            if len(ohlcv_candles) < self.config.min_candles_required:
                self.logger.warning(
                    f"Insufficient data for reliable indicators: {len(ohlcv_candles)} < {self.config.min_candles_required}"
                )

            # Calculate indicators
            results = {}
            for config in indicator_configs:
                indicator_name = config.get('name', 'unknown')
                indicator_type = config.get('type', 'unknown')
                parameters = config.get('parameters', {})

                try:
                    indicator_results = self._calculate_single_indicator(
                        indicator_type, ohlcv_candles, parameters
                    )
                    if indicator_results:
                        results[indicator_name] = indicator_results
                        self.logger.debug(f"Calculated {indicator_name}: {len(indicator_results)} points")
                    else:
                        self.logger.warning(f"Data Integration: No results for indicator {indicator_name}")

                except Exception as e:
                    self.logger.error(f"Data Integration: Error calculating indicator {indicator_name}: {e}")
                    continue

            return results

        except Exception as e:
            self.logger.error(f"Data Integration: Error calculating indicators for {symbol}: {e}")
            return {}
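    # Expected shape of indicator_configs, inferred from _calculate_single_indicator
    # below (names are illustrative):
    #   [
    #       {"name": "sma_20", "type": "sma", "parameters": {"period": 20}},
    #       {"name": "macd_12_26_9", "type": "macd",
    #        "parameters": {"fast_period": 12, "slow_period": 26, "signal_period": 9}},
    #   ]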
    def get_latest_market_data(
        self,
        symbol: str,
        timeframe: str,
        limit: int = 100,
        exchange: str = "okx"
    ) -> Tuple[List[Dict[str, Any]], List[OHLCVCandle]]:
        """
        Get the most recent market data for real-time updates.

        Args:
            symbol: Trading pair
            timeframe: Timeframe
            limit: Maximum number of candles to fetch
            exchange: Exchange name

        Returns:
            Tuple of (raw_candles, ohlcv_candles)
        """
        try:
            # Calculate time range based on limit and timeframe
            end_time = datetime.now(timezone.utc)

            # Estimate time range based on timeframe
            timeframe_minutes = self._parse_timeframe_to_minutes(timeframe)
            start_time = end_time - timedelta(minutes=timeframe_minutes * limit * 2)  # Buffer for sparse data

            raw_candles = self.db_ops.market_data.get_candles(
                symbol=symbol,
                timeframe=timeframe,
                start_time=start_time,
                end_time=end_time,
                exchange=exchange
            )

            # Limit to most recent candles
            if len(raw_candles) > limit:
                raw_candles = raw_candles[-limit:]

            # Convert to OHLCV format
            ohlcv_candles = convert_database_candles_to_ohlcv(raw_candles)

            self.logger.debug(f"Fetched latest {len(raw_candles)} candles for {symbol} {timeframe}")
            return raw_candles, ohlcv_candles

        except Exception as e:
            self.logger.error(f"Data Integration: Error fetching latest market data: {e}")
            return [], []

    def check_data_availability(
        self,
        symbol: str,
        timeframe: str,
        exchange: str = "okx"
    ) -> Dict[str, Any]:
        """
        Check data availability and quality for a symbol/timeframe.

        Args:
            symbol: Trading pair
            timeframe: Timeframe
            exchange: Exchange name

        Returns:
            Dictionary with availability information
        """
        try:
            # Get latest candle
            latest_candle = self.db_ops.market_data.get_latest_candle(symbol, timeframe, exchange)

            if not latest_candle:
                return {
                    'available': False,
                    'latest_timestamp': None,
                    'data_age_minutes': None,
                    'sufficient_for_indicators': False,
                    'message': f"No data available for {symbol} {timeframe}"
                }

            # Calculate data age
            latest_time = latest_candle['timestamp']
            if latest_time.tzinfo is None:
                latest_time = latest_time.replace(tzinfo=timezone.utc)

            data_age = datetime.now(timezone.utc) - latest_time
            data_age_minutes = data_age.total_seconds() / 60

            # Check if we have sufficient data for indicators
            end_time = datetime.now(timezone.utc)
            start_time = end_time - timedelta(days=1)  # Check last day

            recent_candles = self.db_ops.market_data.get_candles(
                symbol=symbol,
                timeframe=timeframe,
                start_time=start_time,
                end_time=end_time,
                exchange=exchange
            )

            sufficient_data = len(recent_candles) >= self.config.min_candles_required

            return {
                'available': True,
                'latest_timestamp': latest_time,
                'data_age_minutes': data_age_minutes,
                'recent_candle_count': len(recent_candles),
                'sufficient_for_indicators': sufficient_data,
                'is_recent': data_age_minutes < 60,  # Less than 1 hour old
                'message': f"Latest: {latest_time.strftime('%Y-%m-%d %H:%M:%S UTC')}, {len(recent_candles)} recent candles"
            }

        except Exception as e:
            self.logger.error(f"Data Integration: Error checking data availability: {e}")
            return {
                'available': False,
                'latest_timestamp': None,
                'data_age_minutes': None,
                'sufficient_for_indicators': False,
                'message': f"Error checking data: {str(e)}"
            }

    def _calculate_single_indicator(
        self,
        indicator_type: str,
        candles: List[OHLCVCandle],
        parameters: Dict[str, Any]
    ) -> List[IndicatorResult]:
        """Calculate a single indicator with given parameters."""
        try:
            if indicator_type == 'sma':
                period = parameters.get('period', 20)
                return self.indicators.sma(candles, period)

            elif indicator_type == 'ema':
                period = parameters.get('period', 20)
                return self.indicators.ema(candles, period)

            elif indicator_type == 'rsi':
                period = parameters.get('period', 14)
                return self.indicators.rsi(candles, period)

            elif indicator_type == 'macd':
                fast = parameters.get('fast_period', 12)
                slow = parameters.get('slow_period', 26)
                signal = parameters.get('signal_period', 9)
                return self.indicators.macd(candles, fast, slow, signal)

            elif indicator_type == 'bollinger_bands':
                period = parameters.get('period', 20)
                std_dev = parameters.get('std_dev', 2)
                return self.indicators.bollinger_bands(candles, period, std_dev)

            else:
                self.logger.warning(f"Unknown indicator type: {indicator_type}")
                return []

        except Exception as e:
            self.logger.error(f"Data Integration: Error calculating {indicator_type}: {e}")
            return []

    def _validate_and_clean_data(self, candles: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Validate and clean market data."""
        cleaned_candles = []

        for i, candle in enumerate(candles):
            try:
                # Check required fields
                required_fields = ['timestamp', 'open', 'high', 'low', 'close', 'volume']
                if not all(field in candle for field in required_fields):
                    self.logger.warning(f"Data Integration: Missing fields in candle {i}")
                    continue

                # Validate OHLC relationships
                o, h, l, c = float(candle['open']), float(candle['high']), float(candle['low']), float(candle['close'])
                if not (h >= max(o, c) and l <= min(o, c)):
                    self.logger.warning(f"Data Integration: Invalid OHLC relationship in candle {i}")
                    continue

                # Validate positive values
                if any(val <= 0 for val in [o, h, l, c]):
                    self.logger.warning(f"Data Integration: Non-positive price in candle {i}")
self.logger.warning(f"Data Integration: Non-positive price in candle {i}")
|
||||
continue
|
||||
|
||||
cleaned_candles.append(candle)
|
||||
|
||||
except (ValueError, TypeError) as e:
|
||||
self.logger.warning(f"Data Integration: Error validating candle {i}: {e}")
|
||||
continue
|
||||
|
||||
removed_count = len(candles) - len(cleaned_candles)
|
||||
if removed_count > 0:
|
||||
self.logger.info(f"Data Integration: Removed {removed_count} invalid candles during validation")
|
||||
|
||||
return cleaned_candles
|
||||
|
||||
    def _handle_sparse_data(self, candles: List[Dict[str, Any]], timeframe: str) -> List[Dict[str, Any]]:
        """Handle sparse data by detecting and logging gaps."""
        if len(candles) < 2:
            return candles

        # Calculate expected interval
        timeframe_minutes = self._parse_timeframe_to_minutes(timeframe)
        expected_interval = timedelta(minutes=timeframe_minutes)

        gaps_detected = 0
        for i in range(1, len(candles)):
            prev_time = candles[i-1]['timestamp']
            curr_time = candles[i]['timestamp']

            if isinstance(prev_time, str):
                prev_time = datetime.fromisoformat(prev_time.replace('Z', '+00:00'))
            if isinstance(curr_time, str):
                curr_time = datetime.fromisoformat(curr_time.replace('Z', '+00:00'))

            actual_interval = curr_time - prev_time
            if actual_interval > expected_interval * 1.5:  # Allow 50% tolerance
                gaps_detected += 1

        if gaps_detected > 0:
            self.logger.warning(f"Data Integration: Detected {gaps_detected} gaps in {timeframe} data")

        return candles

    def _parse_timeframe_to_minutes(self, timeframe: str) -> float:
        """Parse a timeframe string to minutes (fractional for sub-minute timeframes)."""
        timeframe_map = {
            '1s': 1/60, '5s': 5/60, '10s': 10/60, '15s': 15/60, '30s': 30/60,
            '1m': 1, '5m': 5, '15m': 15, '30m': 30,
            '1h': 60, '2h': 120, '4h': 240, '6h': 360, '12h': 720,
            '1d': 1440, '3d': 4320, '1w': 10080
        }
        return timeframe_map.get(timeframe, 60)  # Default to 1 hour
    def _get_cached_data(self, cache_key: str) -> Optional[Dict[str, Any]]:
        """Get data from cache if still valid."""
        if cache_key not in self._cache:
            return None

        cached_item = self._cache[cache_key]
        cache_age = datetime.now(timezone.utc) - cached_item['timestamp']

        if cache_age.total_seconds() > self.config.cache_timeout_minutes * 60:
            del self._cache[cache_key]
            return None

        return cached_item

    def _cache_data(self, cache_key: str, data: Dict[str, Any]) -> None:
        """Cache data with timestamp."""
        # Simple cache size management
        if len(self._cache) > 50:  # Limit cache size
            # Evict the single oldest entry
            oldest_key = min(self._cache.keys(), key=lambda k: self._cache[k]['timestamp'])
            del self._cache[oldest_key]

        self._cache[cache_key] = data

    def clear_cache(self) -> None:
        """Clear the data cache."""
        self._cache.clear()
        self.logger.debug("Data Integration: Data cache cleared")
    def get_indicator_data(
        self,
        main_df: pd.DataFrame,
        main_timeframe: str,
        indicator_configs: List['IndicatorLayerConfig'],
        indicator_manager: 'IndicatorManager',
        symbol: str,
        exchange: str = "okx"
    ) -> Dict[str, pd.DataFrame]:
        """
        Calculate DataFrames for the configured indicators, fetching extra
        data for indicators on a custom timeframe and aligning all results
        to the main chart's index.

        Args:
            main_df: Main chart DataFrame (indexed by timestamp)
            main_timeframe: Timeframe of the main chart
            indicator_configs: Indicator layer configurations to resolve
            indicator_manager: Manager used to load indicator definitions
            symbol: Trading pair
            exchange: Exchange name

        Returns:
            Dictionary mapping indicator IDs to aligned DataFrames
        """
        indicator_data_map = {}
        if main_df.empty:
            return indicator_data_map

        for config in indicator_configs:
            indicator_id = config.id
            indicator = indicator_manager.load_indicator(indicator_id)

            if not indicator:
                self.logger.warning(f"Data Integrator: Could not load indicator with ID: {indicator_id}")
                continue

            try:
                # Determine the timeframe and data to use
                target_timeframe = indicator.timeframe

                if target_timeframe and target_timeframe != main_timeframe:
                    # Custom timeframe: fetch new data
                    days_back = (main_df.index.max() - main_df.index.min()).days + 2  # Add buffer

                    raw_candles, _ = self.get_market_data_for_indicators(
                        symbol=symbol,
                        timeframe=target_timeframe,
                        days_back=days_back,
                        exchange=exchange
                    )

                    if not raw_candles:
                        self.logger.warning(f"No data for indicator '{indicator.name}' on timeframe {target_timeframe}")
                        continue

                    from components.charts.utils import prepare_chart_data
                    indicator_df = prepare_chart_data(raw_candles)
                else:
                    # Use main chart's dataframe
                    indicator_df = main_df

                # Calculate the indicator
                indicator_result_pkg = self.indicators.calculate(
                    indicator.type,
                    indicator_df,
                    **indicator.parameters
                )

                if indicator_result_pkg and indicator_result_pkg.get('data'):
                    indicator_results = indicator_result_pkg['data']

                    if not indicator_results:
                        self.logger.warning(f"Indicator '{indicator.name}' produced no results.")
                        continue

                    result_df = pd.DataFrame([
                        {'timestamp': r.timestamp, **r.values}
                        for r in indicator_results
                    ])
                    result_df['timestamp'] = pd.to_datetime(result_df['timestamp'])
                    result_df.set_index('timestamp', inplace=True)

                    # Ensure timezone consistency before reindexing
                    if result_df.index.tz is None:
                        result_df = result_df.tz_localize('UTC')
                    result_df = result_df.tz_convert(main_df.index.tz)

                    # Align data to main_df's index to handle different timeframes
                    if not result_df.index.equals(main_df.index):
                        aligned_df = result_df.reindex(main_df.index, method='ffill')
                        indicator_data_map[indicator.id] = aligned_df
                    else:
                        indicator_data_map[indicator.id] = result_df
                else:
                    self.logger.warning(f"No data returned for indicator '{indicator.name}'")

            except Exception as e:
                self.logger.error(f"Error calculating indicator '{indicator.name}': {e}", exc_info=True)

        return indicator_data_map
# Convenience functions for common operations
def get_market_data_integrator(config: DataIntegrationConfig = None) -> MarketDataIntegrator:
    """Get a configured market data integrator instance."""
    return MarketDataIntegrator(config)


def fetch_indicator_data(
    symbol: str,
    timeframe: str,
    indicator_configs: List[Dict[str, Any]],
    days_back: int = 7,
    exchange: str = "okx"
) -> Dict[str, List[IndicatorResult]]:
    """
    Convenience function to fetch and calculate indicators.

    Args:
        symbol: Trading pair
        timeframe: Timeframe
        indicator_configs: List of indicator configurations
        days_back: Number of days to look back
        exchange: Exchange name

    Returns:
        Dictionary mapping indicator names to results
    """
    integrator = get_market_data_integrator()
    return integrator.calculate_indicators_for_symbol(
        symbol, timeframe, indicator_configs, days_back, exchange
    )


def check_symbol_data_quality(
    symbol: str,
    timeframe: str,
    exchange: str = "okx"
) -> Dict[str, Any]:
    """
    Convenience function to check data quality for a symbol.

    Args:
        symbol: Trading pair
        timeframe: Timeframe
        exchange: Exchange name

    Returns:
        Data quality information
    """
    integrator = get_market_data_integrator()
    return integrator.check_data_availability(symbol, timeframe, exchange)
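
The convenience wrappers above are the intended entry points for dashboard callers. A minimal usage sketch, assuming each indicator config dict carries 'name', 'type', and 'parameters' keys (which matches what the calculation loop reads); the symbol and values are illustrative:

    # Hypothetical caller: gate indicator calculation on data quality first.
    quality = check_symbol_data_quality("BTC-USDT", "1h")
    if quality['sufficient_for_indicators']:
        results = fetch_indicator_data(
            symbol="BTC-USDT",
            timeframe="1h",
            indicator_configs=[{'name': 'SMA 20', 'type': 'sma', 'parameters': {'period': 20}}],
            days_back=7,
        )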
462
components/charts/error_handling.py
Normal file
@@ -0,0 +1,462 @@
"""
Error Handling Utilities for Chart Layers

This module provides comprehensive error handling for chart creation,
including custom exceptions, error recovery strategies, and user-friendly
error messaging for various insufficient data scenarios.
"""

import pandas as pd
from datetime import datetime, timezone, timedelta
from typing import List, Dict, Any, Optional, Union, Tuple, Callable
from dataclasses import dataclass
from enum import Enum

from utils.logger import get_logger

# Initialize logger
logger = get_logger("default_logger")


class ErrorSeverity(Enum):
    """Error severity levels for chart operations"""
    INFO = "info"          # Informational, chart can proceed
    WARNING = "warning"    # Warning, chart proceeds with limitations
    ERROR = "error"        # Error, chart creation may fail
    CRITICAL = "critical"  # Critical error, chart creation impossible


@dataclass
class ChartError:
    """Container for chart error information"""
    code: str
    message: str
    severity: ErrorSeverity
    context: Dict[str, Any]
    recovery_suggestion: Optional[str] = None

    def to_dict(self) -> Dict[str, Any]:
        """Convert error to dictionary for logging/serialization"""
        return {
            'code': self.code,
            'message': self.message,
            'severity': self.severity.value,
            'context': self.context,
            'recovery_suggestion': self.recovery_suggestion
        }


class ChartDataError(Exception):
    """Base exception for chart data-related errors"""
    def __init__(self, error: ChartError):
        self.error = error
        super().__init__(error.message)


class InsufficientDataError(ChartDataError):
    """Raised when there's insufficient data for chart/indicator calculations"""
    pass


class DataValidationError(ChartDataError):
    """Raised when data validation fails"""
    pass


class IndicatorCalculationError(ChartDataError):
    """Raised when indicator calculations fail"""
    pass


class DataConnectionError(ChartDataError):
    """Raised when database/data source connection fails"""
    pass


class DataRequirements:
    """Data requirements checker for charts and indicators"""

    # Minimum data requirements for different indicators
    INDICATOR_MIN_PERIODS = {
        'sma': lambda period: period + 5,                       # SMA needs period + buffer
        'ema': lambda period: period * 2,                       # EMA needs 2x period for stability
        'rsi': lambda period: period + 10,                      # RSI needs period + warmup
        'macd': lambda fast, slow, signal: slow + signal + 10,  # MACD most demanding
        'bollinger_bands': lambda period: period + 5,           # BB needs period + buffer
        'candlestick': lambda: 10,                              # Basic candlestick minimum
        'volume': lambda: 5                                     # Volume minimum
    }
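    # For example: sma(period=20) requires 25 candles, ema(20) requires 40,
    # rsi(14) requires 24, macd(12, 26, 9) requires 26 + 9 + 10 = 45, and
    # bollinger_bands(20) requires 25.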

    @classmethod
    def check_candlestick_requirements(cls, data_count: int) -> ChartError:
        """Check if we have enough data for basic candlestick chart"""
        min_required = cls.INDICATOR_MIN_PERIODS['candlestick']()

        if data_count == 0:
            return ChartError(
                code='NO_DATA',
                message='No market data available',
                severity=ErrorSeverity.CRITICAL,
                context={'data_count': data_count, 'required': min_required},
                recovery_suggestion='Check data collection service or select different symbol/timeframe'
            )
        elif data_count < min_required:
            return ChartError(
                code='INSUFFICIENT_CANDLESTICK_DATA',
                message=f'Insufficient data for candlestick chart: {data_count} candles (need {min_required})',
                severity=ErrorSeverity.WARNING,
                context={'data_count': data_count, 'required': min_required},
                recovery_suggestion='Chart will display with limited data - consider longer time range'
            )
        else:
            return ChartError(
                code='SUFFICIENT_DATA',
                message='Sufficient data for candlestick chart',
                severity=ErrorSeverity.INFO,
                context={'data_count': data_count, 'required': min_required}
            )

    @classmethod
    def check_indicator_requirements(cls, indicator_type: str, data_count: int,
                                     parameters: Dict[str, Any]) -> ChartError:
        """Check if we have enough data for specific indicator"""
        if indicator_type not in cls.INDICATOR_MIN_PERIODS:
            return ChartError(
                code='UNKNOWN_INDICATOR',
                message=f'Unknown indicator type: {indicator_type}',
                severity=ErrorSeverity.ERROR,
                context={'indicator_type': indicator_type, 'data_count': data_count},
                recovery_suggestion='Check indicator type spelling or implementation'
            )

        # Calculate minimum required data
        try:
            if indicator_type in ['sma', 'ema', 'rsi', 'bollinger_bands']:
                period = parameters.get('period', 20)
                min_required = cls.INDICATOR_MIN_PERIODS[indicator_type](period)
            elif indicator_type == 'macd':
                fast = parameters.get('fast_period', 12)
                slow = parameters.get('slow_period', 26)
                signal = parameters.get('signal_period', 9)
                min_required = cls.INDICATOR_MIN_PERIODS[indicator_type](fast, slow, signal)
            else:
                min_required = cls.INDICATOR_MIN_PERIODS[indicator_type]()
        except Exception as e:
            return ChartError(
                code='PARAMETER_ERROR',
                message=f'Invalid parameters for {indicator_type}: {e}',
                severity=ErrorSeverity.ERROR,
                context={'indicator_type': indicator_type, 'parameters': parameters},
                recovery_suggestion='Check indicator parameters for valid values'
            )

        if data_count < min_required:
            # Determine severity based on how insufficient the data is
            if data_count < min_required // 2:
                # Severely insufficient - less than half the required data
                severity = ErrorSeverity.ERROR
            else:
                # Slightly insufficient - can potentially adjust parameters
                severity = ErrorSeverity.WARNING

            return ChartError(
                code='INSUFFICIENT_INDICATOR_DATA',
                message=f'Insufficient data for {indicator_type}: {data_count} candles (need {min_required})',
                severity=severity,
                context={
                    'indicator_type': indicator_type,
                    'data_count': data_count,
                    'required': min_required,
                    'parameters': parameters
                },
                recovery_suggestion=f'Increase data range to at least {min_required} candles or adjust {indicator_type} parameters'
            )
        else:
            return ChartError(
                code='SUFFICIENT_INDICATOR_DATA',
                message=f'Sufficient data for {indicator_type}',
                severity=ErrorSeverity.INFO,
                context={
                    'indicator_type': indicator_type,
                    'data_count': data_count,
                    'required': min_required
                }
            )


class ErrorRecoveryStrategies:
    """Error recovery strategies for different chart scenarios"""

    @staticmethod
    def handle_insufficient_data(error: ChartError, fallback_options: Dict[str, Any]) -> Dict[str, Any]:
        """Handle insufficient data by providing fallback strategies"""
        strategy = {
            'can_proceed': False,
            'fallback_action': None,
            'modified_config': None,
            'user_message': error.message
        }

        if error.code == 'INSUFFICIENT_CANDLESTICK_DATA':
            # For candlestick, we can proceed with warnings
            strategy.update({
                'can_proceed': True,
                'fallback_action': 'display_with_warning',
                'user_message': f"{error.message}. Chart will display available data."
            })

        elif error.code == 'INSUFFICIENT_INDICATOR_DATA':
            # For indicators, try to adjust parameters or skip
            indicator_type = error.context.get('indicator_type')
            data_count = error.context.get('data_count', 0)

            if indicator_type in ['sma', 'ema', 'bollinger_bands']:
                # Try reducing period to fit available data
                max_period = max(5, data_count // 2)  # Conservative estimate
                strategy.update({
                    'can_proceed': True,
                    'fallback_action': 'adjust_parameters',
                    'modified_config': {'period': max_period},
                    'user_message': f"Adjusted {indicator_type} period to {max_period} due to limited data"
                })

            elif indicator_type == 'rsi':
                # RSI can work with reduced period
                max_period = max(7, data_count // 3)
                strategy.update({
                    'can_proceed': True,
                    'fallback_action': 'adjust_parameters',
                    'modified_config': {'period': max_period},
                    'user_message': f"Adjusted RSI period to {max_period} due to limited data"
                })

            else:
                # Skip the indicator entirely
                strategy.update({
                    'can_proceed': True,
                    'fallback_action': 'skip_indicator',
                    'user_message': f"Skipped {indicator_type} due to insufficient data"
                })

        return strategy

    @staticmethod
    def handle_data_validation_error(error: ChartError) -> Dict[str, Any]:
        """Handle data validation errors"""
        return {
            'can_proceed': False,
            'fallback_action': 'show_error',
            'user_message': f"Data validation failed: {error.message}",
            'recovery_suggestion': error.recovery_suggestion
        }

    @staticmethod
    def handle_connection_error(error: ChartError) -> Dict[str, Any]:
        """Handle database/connection errors"""
        return {
            'can_proceed': False,
            'fallback_action': 'show_error',
            'user_message': "Unable to connect to data source",
            'recovery_suggestion': "Check database connection or try again later"
        }


class ChartErrorHandler:
    """Main error handler for chart operations"""

    def __init__(self):
        self.logger = logger
        self.errors: List[ChartError] = []
        self.warnings: List[ChartError] = []

    def clear_errors(self):
        """Clear accumulated errors and warnings"""
        self.errors.clear()
        self.warnings.clear()

    def validate_data_sufficiency(self, data: Union[pd.DataFrame, List[Dict[str, Any]]],
                                  chart_type: str = 'candlestick',
                                  indicators: List[Dict[str, Any]] = None) -> bool:
        """
        Validate if data is sufficient for chart and indicator requirements.

        Args:
            data: Chart data (DataFrame or list of candle dicts)
            chart_type: Type of chart being created
            indicators: List of indicator configurations

        Returns:
            True if data is sufficient, False otherwise
        """
        self.clear_errors()

        # Get data count
        if isinstance(data, (pd.DataFrame, list)):
            data_count = len(data)
        else:
            self.errors.append(ChartError(
                code='INVALID_DATA_TYPE',
                message=f'Invalid data type: {type(data)}',
                severity=ErrorSeverity.ERROR,
                context={'data_type': str(type(data))}
            ))
            return False

        # Check basic chart requirements
        chart_error = DataRequirements.check_candlestick_requirements(data_count)
        if chart_error.severity == ErrorSeverity.WARNING:
            self.warnings.append(chart_error)
        elif chart_error.severity in [ErrorSeverity.ERROR, ErrorSeverity.CRITICAL]:
            self.errors.append(chart_error)
            return False

        # Check indicator requirements
        if indicators:
            for indicator_config in indicators:
                indicator_type = indicator_config.get('type', 'unknown')
                parameters = indicator_config.get('parameters', {})

                indicator_error = DataRequirements.check_indicator_requirements(
                    indicator_type, data_count, parameters
                )

                if indicator_error.severity == ErrorSeverity.WARNING:
                    self.warnings.append(indicator_error)
                elif indicator_error.severity in [ErrorSeverity.ERROR, ErrorSeverity.CRITICAL]:
                    self.errors.append(indicator_error)

        # Return True if no critical errors
        return len(self.errors) == 0

    def get_error_summary(self) -> Dict[str, Any]:
        """Get summary of all errors and warnings"""
        return {
            'has_errors': len(self.errors) > 0,
            'has_warnings': len(self.warnings) > 0,
            'error_count': len(self.errors),
            'warning_count': len(self.warnings),
            'errors': [error.to_dict() for error in self.errors],
            'warnings': [warning.to_dict() for warning in self.warnings],
            'can_proceed': len(self.errors) == 0
        }

    def get_user_friendly_message(self) -> str:
        """Get a user-friendly message summarizing errors and warnings"""
        if not self.errors and not self.warnings:
            return "Chart data is ready"

        messages = []

        if self.errors:
            messages.append(f"❌ {len(self.errors)} error(s) prevent chart creation")

            # Add most relevant error message
            main_error = self.errors[0]  # Show first error
            messages.append(f"• {main_error.message}")
            if main_error.recovery_suggestion:
                messages.append(f"  💡 {main_error.recovery_suggestion}")

        if self.warnings:
            messages.append(f"⚠️ {len(self.warnings)} warning(s)")

            # Add most relevant warning
            main_warning = self.warnings[0]
            messages.append(f"• {main_warning.message}")

        return "\n".join(messages)

    def apply_error_recovery(self, error: ChartError,
                             fallback_options: Dict[str, Any] = None) -> Dict[str, Any]:
        """Apply error recovery strategy for a specific error"""
        fallback_options = fallback_options or {}

        if error.code.startswith('INSUFFICIENT'):
            return ErrorRecoveryStrategies.handle_insufficient_data(error, fallback_options)
        elif 'VALIDATION' in error.code:
            return ErrorRecoveryStrategies.handle_data_validation_error(error)
        elif 'CONNECTION' in error.code:
            return ErrorRecoveryStrategies.handle_connection_error(error)
        else:
            # Default recovery strategy
            return {
                'can_proceed': False,
                'fallback_action': 'show_error',
                'user_message': error.message,
                'recovery_suggestion': error.recovery_suggestion
            }


# Convenience functions
def check_data_sufficiency(data: Union[pd.DataFrame, List[Dict[str, Any]]],
                           indicators: List[Dict[str, Any]] = None) -> Tuple[bool, Dict[str, Any]]:
    """
    Convenience function to check data sufficiency.

    Args:
        data: Chart data
        indicators: List of indicator configurations

    Returns:
        Tuple of (is_sufficient, error_summary)
    """
    handler = ChartErrorHandler()
    is_sufficient = handler.validate_data_sufficiency(data, indicators=indicators)
    return is_sufficient, handler.get_error_summary()


def get_error_message(data: Union[pd.DataFrame, List[Dict[str, Any]]],
                      indicators: List[Dict[str, Any]] = None) -> str:
    """
    Get user-friendly error message for data issues.

    Args:
        data: Chart data
        indicators: List of indicator configurations

    Returns:
        User-friendly error message
    """
    handler = ChartErrorHandler()
    handler.validate_data_sufficiency(data, indicators=indicators)
    return handler.get_user_friendly_message()


def create_error_annotation(error_message: str, position: str = "top") -> Dict[str, Any]:
    """
    Create a Plotly annotation for error display.

    Args:
        error_message: Error message to display
        position: Position of annotation ('top', 'center', 'bottom')

    Returns:
        Plotly annotation configuration
    """
    positions = {
        'top': {'x': 0.5, 'y': 0.9},
        'center': {'x': 0.5, 'y': 0.5},
        'bottom': {'x': 0.5, 'y': 0.1}
    }

    pos = positions.get(position, positions['center'])

    return {
        'text': error_message,
        'xref': 'paper',
        'yref': 'paper',
        'x': pos['x'],
        'y': pos['y'],
        'xanchor': 'center',
        'yanchor': 'middle',
        'showarrow': False,
        'font': {'size': 14, 'color': '#e74c3c'},
        'bgcolor': 'rgba(255,255,255,0.8)',
        'bordercolor': '#e74c3c',
        'borderwidth': 1
    }
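
Taken together, a chart builder would gate rendering on these helpers. A minimal sketch, assuming a Plotly figure `fig` already exists; the synthetic candles are illustrative, since only the data count matters to the sufficiency check:

    # 30 candles satisfy RSI(14), which needs 14 + 10 = 24 per DataRequirements.
    candles = [{'open': 1.0, 'high': 1.1, 'low': 0.9, 'close': 1.05, 'volume': 10.0}] * 30
    ok, summary = check_data_sufficiency(
        candles, indicators=[{'type': 'rsi', 'parameters': {'period': 14}}]
    )
    if not ok:
        # Surface the failure on the figure instead of raising
        fig.add_annotation(**create_error_annotation(get_error_message(candles)))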
133
components/charts/indicator_defaults.py
Normal file
@@ -0,0 +1,133 @@
"""
Default Indicator Creation

This module creates a set of default indicators that users can start with.
These are common indicator configurations that are immediately useful.
"""

from .indicator_manager import get_indicator_manager, IndicatorType, DisplayType


def create_default_indicators():
    """Create default indicators if they don't exist."""
    manager = get_indicator_manager()

    # Check if we already have indicators
    existing_indicators = manager.list_indicators()
    if existing_indicators:
        manager.logger.info(f"Indicator defaults: Found {len(existing_indicators)} existing indicators, skipping defaults creation")
        return

    # Define default indicators
    default_indicators = [
        # Moving Averages
        {
            "name": "SMA 20",
            "description": "20-period Simple Moving Average for short-term trend",
            "type": IndicatorType.SMA.value,
            "parameters": {"period": 20},
            "color": "#007bff"
        },
        {
            "name": "SMA 50",
            "description": "50-period Simple Moving Average for medium-term trend",
            "type": IndicatorType.SMA.value,
            "parameters": {"period": 50},
            "color": "#6c757d"
        },
        {
            "name": "EMA 12",
            "description": "12-period Exponential Moving Average for fast signals",
            "type": IndicatorType.EMA.value,
            "parameters": {"period": 12},
            "color": "#ff6b35"
        },
        {
            "name": "EMA 26",
            "description": "26-period Exponential Moving Average for slower signals",
            "type": IndicatorType.EMA.value,
            "parameters": {"period": 26},
            "color": "#28a745"
        },

        # Oscillators
        {
            "name": "RSI 14",
            "description": "14-period RSI for momentum analysis",
            "type": IndicatorType.RSI.value,
            "parameters": {"period": 14},
            "color": "#20c997"
        },
        {
            "name": "RSI 21",
            "description": "21-period RSI for less sensitive momentum signals",
            "type": IndicatorType.RSI.value,
            "parameters": {"period": 21},
            "color": "#17a2b8"
        },

        # MACD Variants
        {
            "name": "MACD Standard",
            "description": "Standard MACD (12, 26, 9) for trend changes",
            "type": IndicatorType.MACD.value,
            "parameters": {"fast_period": 12, "slow_period": 26, "signal_period": 9},
            "color": "#fd7e14"
        },
        {
            "name": "MACD Fast",
            "description": "Fast MACD (5, 13, 4) for quick signals",
            "type": IndicatorType.MACD.value,
            "parameters": {"fast_period": 5, "slow_period": 13, "signal_period": 4},
            "color": "#dc3545"
        },

        # Bollinger Bands
        {
            "name": "Bollinger Bands",
            "description": "Standard Bollinger Bands (20, 2) for volatility analysis",
            "type": IndicatorType.BOLLINGER_BANDS.value,
            "parameters": {"period": 20, "std_dev": 2.0},
            "color": "#6f42c1"
        },
        {
            "name": "Bollinger Tight",
            "description": "Tight Bollinger Bands (20, 1.5) for sensitive volatility",
            "type": IndicatorType.BOLLINGER_BANDS.value,
            "parameters": {"period": 20, "std_dev": 1.5},
            "color": "#e83e8c"
        }
    ]

    # Create indicators
    created_count = 0
    for indicator_config in default_indicators:
        indicator = manager.create_indicator(
            name=indicator_config["name"],
            indicator_type=indicator_config["type"],
            parameters=indicator_config["parameters"],
            description=indicator_config["description"],
            color=indicator_config["color"]
        )

        if indicator:
            created_count += 1
            manager.logger.info(f"Indicator defaults: Created default indicator: {indicator.name}")
        else:
            manager.logger.error(f"Indicator defaults: Failed to create indicator: {indicator_config['name']}")

    manager.logger.info(f"Indicator defaults: Created {created_count} default indicators")


def ensure_default_indicators():
    """Ensure default indicators exist (called during app startup)."""
    try:
        create_default_indicators()
    except Exception as e:
        manager = get_indicator_manager()
        manager.logger.error(f"Indicator defaults: Error creating default indicators: {e}")


if __name__ == "__main__":
    # Create defaults when run directly
    create_default_indicators()
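
Because creation is skipped whenever any indicator already exists, the call is idempotent and safe to run on every launch. A plausible startup hook (the entry-point location is an assumption, not part of this commit):

    # In the dashboard entry point, before the first chart is rendered:
    from components.charts.indicator_defaults import ensure_default_indicators

    ensure_default_indicators()  # no-op once config/indicators/user_indicators has files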
454
components/charts/indicator_manager.py
Normal file
@@ -0,0 +1,454 @@
"""
Indicator Management System

This module provides functionality to manage user-defined indicators with
file-based storage. Each indicator is saved as a separate JSON file for
portability and easy sharing.
"""

import json
import os
import uuid
from datetime import datetime, timezone
from pathlib import Path
from typing import Dict, List, Optional, Any, Tuple
from dataclasses import dataclass, asdict
from enum import Enum

from utils.logger import get_logger

# Initialize logger
logger = get_logger("default_logger")

# Base directory for indicators
INDICATORS_DIR = Path("config/indicators")
USER_INDICATORS_DIR = INDICATORS_DIR / "user_indicators"
TEMPLATES_DIR = INDICATORS_DIR / "templates"


class IndicatorType(str, Enum):
    """Supported indicator types."""
    SMA = "sma"
    EMA = "ema"
    RSI = "rsi"
    MACD = "macd"
    BOLLINGER_BANDS = "bollinger_bands"


class DisplayType(str, Enum):
    """Chart display types for indicators."""
    OVERLAY = "overlay"
    SUBPLOT = "subplot"


@dataclass
class IndicatorStyling:
    """Styling configuration for indicators."""
    color: str = "#007bff"
    line_width: int = 2
    opacity: float = 1.0
    line_style: str = "solid"  # solid, dash, dot, dashdot


@dataclass
class UserIndicator:
    """User-defined indicator configuration."""
    id: str
    name: str
    description: str
    type: str  # IndicatorType
    display_type: str  # DisplayType
    parameters: Dict[str, Any]
    styling: IndicatorStyling
    timeframe: Optional[str] = None
    visible: bool = True
    created_date: str = ""
    modified_date: str = ""

    def __post_init__(self):
        """Initialize timestamps if not provided."""
        current_time = datetime.now(timezone.utc).isoformat()
        if not self.created_date:
            self.created_date = current_time
        if not self.modified_date:
            self.modified_date = current_time

    def to_dict(self) -> Dict[str, Any]:
        """Convert to dictionary for JSON serialization."""
        return {
            'id': self.id,
            'name': self.name,
            'description': self.description,
            'type': self.type,
            'display_type': self.display_type,
            'parameters': self.parameters,
            'styling': asdict(self.styling),
            'timeframe': self.timeframe,
            'visible': self.visible,
            'created_date': self.created_date,
            'modified_date': self.modified_date
        }

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> 'UserIndicator':
        """Create UserIndicator from dictionary."""
        styling_data = data.get('styling', {})
        styling = IndicatorStyling(**styling_data)

        return cls(
            id=data['id'],
            name=data['name'],
            description=data.get('description', ''),
            type=data['type'],
            display_type=data['display_type'],
            parameters=data.get('parameters', {}),
            styling=styling,
            timeframe=data.get('timeframe'),
            visible=data.get('visible', True),
            created_date=data.get('created_date', ''),
            modified_date=data.get('modified_date', '')
        )


class IndicatorManager:
    """Manager for user-defined indicators with file-based storage."""

    def __init__(self):
        """Initialize the indicator manager."""
        self.logger = logger
        self._ensure_directories()
        self._create_default_templates()

    def _ensure_directories(self):
        """Ensure indicator directories exist."""
        try:
            USER_INDICATORS_DIR.mkdir(parents=True, exist_ok=True)
            TEMPLATES_DIR.mkdir(parents=True, exist_ok=True)
            self.logger.debug("Indicator manager: Indicator directories created/verified")
        except Exception as e:
            self.logger.error(f"Indicator manager: Error creating indicator directories: {e}")

    def _get_indicator_file_path(self, indicator_id: str) -> Path:
        """Get file path for an indicator."""
        return USER_INDICATORS_DIR / f"{indicator_id}.json"

    def _get_template_file_path(self, indicator_type: str) -> Path:
        """Get file path for an indicator template."""
        return TEMPLATES_DIR / f"{indicator_type}_template.json"

    def save_indicator(self, indicator: UserIndicator) -> bool:
        """
        Save an indicator to file.

        Args:
            indicator: UserIndicator instance to save

        Returns:
            True if saved successfully, False otherwise
        """
        try:
            # Update modified date
            indicator.modified_date = datetime.now(timezone.utc).isoformat()

            file_path = self._get_indicator_file_path(indicator.id)

            with open(file_path, 'w', encoding='utf-8') as f:
                json.dump(indicator.to_dict(), f, indent=2, ensure_ascii=False)

            self.logger.info(f"Indicator manager: Saved indicator: {indicator.name} ({indicator.id})")
            return True

        except Exception as e:
            self.logger.error(f"Indicator manager: Error saving indicator {indicator.id}: {e}")
            return False

    def load_indicator(self, indicator_id: str) -> Optional[UserIndicator]:
        """
        Load an indicator from file.

        Args:
            indicator_id: ID of the indicator to load

        Returns:
            UserIndicator instance or None if not found/error
        """
        try:
            file_path = self._get_indicator_file_path(indicator_id)

            if not file_path.exists():
                self.logger.warning(f"Indicator manager: Indicator file not found: {indicator_id}")
                return None

            with open(file_path, 'r', encoding='utf-8') as f:
                data = json.load(f)

            indicator = UserIndicator.from_dict(data)
            self.logger.debug(f"Indicator manager: Loaded indicator: {indicator.name} ({indicator.id})")
            return indicator

        except Exception as e:
            self.logger.error(f"Indicator manager: Error loading indicator {indicator_id}: {e}")
            return None

    def list_indicators(self, visible_only: bool = False) -> List[UserIndicator]:
        """
        List all user indicators.

        Args:
            visible_only: If True, only return visible indicators

        Returns:
            List of UserIndicator instances
        """
        indicators = []

        try:
            for file_path in USER_INDICATORS_DIR.glob("*.json"):
                indicator_id = file_path.stem
                indicator = self.load_indicator(indicator_id)

                if indicator:
                    if not visible_only or indicator.visible:
                        indicators.append(indicator)

            # Sort by name
            indicators.sort(key=lambda x: x.name.lower())
            self.logger.debug(f"Listed {len(indicators)} indicators")

        except Exception as e:
            self.logger.error(f"Indicator manager: Error listing indicators: {e}")

        return indicators

    def delete_indicator(self, indicator_id: str) -> bool:
        """
        Delete an indicator.

        Args:
            indicator_id: ID of the indicator to delete

        Returns:
            True if deleted successfully, False otherwise
        """
        try:
            file_path = self._get_indicator_file_path(indicator_id)

            if file_path.exists():
                file_path.unlink()
                self.logger.info(f"Indicator manager: Deleted indicator: {indicator_id}")
                return True
            else:
                self.logger.warning(f"Indicator manager: Indicator file not found for deletion: {indicator_id}")
                return False

        except Exception as e:
            self.logger.error(f"Indicator manager: Error deleting indicator {indicator_id}: {e}")
            return False

    def create_indicator(self, name: str, indicator_type: str, parameters: Dict[str, Any],
                         description: str = "", color: str = "#007bff",
                         display_type: str = None, timeframe: Optional[str] = None) -> Optional[UserIndicator]:
        """
        Create a new indicator.

        Args:
            name: Display name for the indicator
            indicator_type: Type of indicator (sma, ema, etc.)
            parameters: Indicator parameters
            description: Optional description
            color: Color for chart display
            display_type: overlay or subplot (auto-detected if None)
            timeframe: Optional timeframe for the indicator

        Returns:
            Created UserIndicator instance or None if error
        """
        try:
            # Generate unique ID
            indicator_id = f"{indicator_type}_{uuid.uuid4().hex[:8]}"

            # Auto-detect display type if not provided
            if display_type is None:
                display_type = self._get_default_display_type(indicator_type)

            # Create styling
            styling = IndicatorStyling(color=color)

            # Create indicator
            indicator = UserIndicator(
                id=indicator_id,
                name=name,
                description=description,
                type=indicator_type,
                display_type=display_type,
                parameters=parameters,
                styling=styling,
                timeframe=timeframe
            )

            # Save to file
            if self.save_indicator(indicator):
                self.logger.info(f"Indicator manager: Created new indicator: {name} ({indicator_id})")
                return indicator
            else:
                return None

        except Exception as e:
            self.logger.error(f"Indicator manager: Error creating indicator: {e}")
            return None

    def update_indicator(self, indicator_id: str, **updates) -> bool:
        """
        Update an existing indicator.

        Args:
            indicator_id: ID of indicator to update
            **updates: Fields to update

        Returns:
            True if updated successfully, False otherwise
        """
        try:
            indicator = self.load_indicator(indicator_id)
            if not indicator:
                return False

            # Update fields
            for key, value in updates.items():
                if hasattr(indicator, key):
                    if key == 'styling' and isinstance(value, dict):
                        # Update nested styling fields
                        for style_key, style_value in value.items():
                            if hasattr(indicator.styling, style_key):
                                setattr(indicator.styling, style_key, style_value)
                    elif key == 'parameters' and isinstance(value, dict):
                        indicator.parameters.update(value)
                    else:
                        setattr(indicator, key, value)

            # Save updated indicator
            return self.save_indicator(indicator)

        except Exception as e:
            self.logger.error(f"Indicator manager: Error updating indicator {indicator_id}: {e}")
            return False

    def get_indicators_by_type(self, display_type: str) -> List[UserIndicator]:
        """Get indicators by display type (overlay/subplot)."""
        indicators = self.list_indicators(visible_only=True)
        return [ind for ind in indicators if ind.display_type == display_type]

    def get_available_indicator_types(self) -> List[str]:
        """Get list of available indicator types."""
        return [t.value for t in IndicatorType]

    def _get_default_display_type(self, indicator_type: str) -> str:
        """Get default display type for an indicator type."""
        overlay_types = {IndicatorType.SMA, IndicatorType.EMA, IndicatorType.BOLLINGER_BANDS}
        subplot_types = {IndicatorType.RSI, IndicatorType.MACD}

        if indicator_type in [t.value for t in overlay_types]:
            return DisplayType.OVERLAY.value
        elif indicator_type in [t.value for t in subplot_types]:
            return DisplayType.SUBPLOT.value
        else:
            return DisplayType.OVERLAY.value  # Default

    def _create_default_templates(self):
        """Create default indicator templates if they don't exist."""
        templates = {
            IndicatorType.SMA.value: {
                "name": "Simple Moving Average",
                "description": "Simple Moving Average indicator",
                "type": IndicatorType.SMA.value,
                "display_type": DisplayType.OVERLAY.value,
                "default_parameters": {"period": 20},
                "parameter_schema": {
                    "period": {"type": "int", "min": 1, "max": 200, "default": 20, "description": "Period for SMA calculation"}
                },
                "default_styling": {"color": "#007bff", "line_width": 2}
            },
            IndicatorType.EMA.value: {
                "name": "Exponential Moving Average",
                "description": "Exponential Moving Average indicator",
                "type": IndicatorType.EMA.value,
                "display_type": DisplayType.OVERLAY.value,
                "default_parameters": {"period": 12},
                "parameter_schema": {
                    "period": {"type": "int", "min": 1, "max": 200, "default": 12, "description": "Period for EMA calculation"}
                },
                "default_styling": {"color": "#ff6b35", "line_width": 2}
            },
            IndicatorType.RSI.value: {
                "name": "Relative Strength Index",
                "description": "RSI oscillator indicator",
                "type": IndicatorType.RSI.value,
                "display_type": DisplayType.SUBPLOT.value,
                "default_parameters": {"period": 14},
                "parameter_schema": {
                    "period": {"type": "int", "min": 2, "max": 50, "default": 14, "description": "Period for RSI calculation"}
                },
                "default_styling": {"color": "#20c997", "line_width": 2}
            },
            IndicatorType.MACD.value: {
                "name": "MACD",
                "description": "Moving Average Convergence Divergence",
                "type": IndicatorType.MACD.value,
                "display_type": DisplayType.SUBPLOT.value,
                "default_parameters": {"fast_period": 12, "slow_period": 26, "signal_period": 9},
                "parameter_schema": {
                    "fast_period": {"type": "int", "min": 2, "max": 50, "default": 12, "description": "Fast EMA period"},
                    "slow_period": {"type": "int", "min": 5, "max": 100, "default": 26, "description": "Slow EMA period"},
                    "signal_period": {"type": "int", "min": 2, "max": 30, "default": 9, "description": "Signal line period"}
                },
                "default_styling": {"color": "#fd7e14", "line_width": 2}
            },
            IndicatorType.BOLLINGER_BANDS.value: {
                "name": "Bollinger Bands",
                "description": "Bollinger Bands volatility indicator",
                "type": IndicatorType.BOLLINGER_BANDS.value,
                "display_type": DisplayType.OVERLAY.value,
                "default_parameters": {"period": 20, "std_dev": 2.0},
                "parameter_schema": {
                    "period": {"type": "int", "min": 5, "max": 100, "default": 20, "description": "Period for middle line (SMA)"},
                    "std_dev": {"type": "float", "min": 0.5, "max": 5.0, "default": 2.0, "description": "Standard deviation multiplier"}
                },
                "default_styling": {"color": "#6f42c1", "line_width": 1}
            }
        }

        for indicator_type, template_data in templates.items():
            template_path = self._get_template_file_path(indicator_type)

            if not template_path.exists():
                try:
                    with open(template_path, 'w', encoding='utf-8') as f:
                        json.dump(template_data, f, indent=2, ensure_ascii=False)
                    self.logger.debug(f"Created template: {indicator_type}")
                except Exception as e:
                    self.logger.error(f"Indicator manager: Error creating template {indicator_type}: {e}")

    def get_template(self, indicator_type: str) -> Optional[Dict[str, Any]]:
        """Get indicator template by type."""
        try:
            template_path = self._get_template_file_path(indicator_type)

            if template_path.exists():
                with open(template_path, 'r', encoding='utf-8') as f:
                    return json.load(f)
            else:
                self.logger.warning(f"Template not found: {indicator_type}")
                return None

        except Exception as e:
            self.logger.error(f"Indicator manager: Error loading template {indicator_type}: {e}")
            return None


# Global instance
indicator_manager = IndicatorManager()


def get_indicator_manager() -> IndicatorManager:
    """Get the global indicator manager instance."""
    return indicator_manager
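
A typical round trip through the manager, as a sketch (names and parameter values are illustrative):

    manager = get_indicator_manager()

    # Persists config/indicators/user_indicators/sma_<hex>.json; moving averages
    # auto-detect the overlay display type.
    sma = manager.create_indicator("SMA 100", IndicatorType.SMA.value, {"period": 100})

    if sma:
        # Dict values for 'styling' and 'parameters' merge into the nested
        # fields rather than replacing them wholesale.
        manager.update_indicator(sma.id, styling={"line_width": 3}, parameters={"period": 120})

    overlays = manager.get_indicators_by_type(DisplayType.OVERLAY.value)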
240
components/charts/layers/__init__.py
Normal file
@@ -0,0 +1,240 @@
"""
Chart Layers Package

This package contains the modular layer system for building complex charts
with multiple indicators, signals, and subplots.

Components:
- BaseChartLayer: Abstract base class for all layers
- CandlestickLayer: OHLC price chart layer
- VolumeLayer: Volume subplot layer
- LayerManager: Orchestrates multiple layers
- SMALayer: Simple Moving Average indicator overlay
- EMALayer: Exponential Moving Average indicator overlay
- BollingerBandsLayer: Bollinger Bands overlay with fill area
- RSILayer: RSI oscillator subplot
- MACDLayer: MACD lines and histogram subplot
- TradingSignalLayer: Buy/sell/hold signal markers
- TradeExecutionLayer: Trade entry/exit point visualization
- Bot Integration: Automated data fetching and bot-integrated layers
"""

from .base import (
    BaseChartLayer,
    CandlestickLayer,
    VolumeLayer,
    LayerManager,
    LayerConfig
)

from .indicators import (
    BaseIndicatorLayer,
    IndicatorLayerConfig,
    SMALayer,
    EMALayer,
    BollingerBandsLayer,
    create_sma_layer,
    create_ema_layer,
    create_bollinger_bands_layer,
    create_common_ma_layers,
    create_common_overlay_indicators
)

from .subplots import (
    BaseSubplotLayer,
    SubplotLayerConfig,
    RSILayer,
    MACDLayer,
    create_rsi_layer,
    create_macd_layer,
    create_common_subplot_indicators
)

from .signals import (
    BaseSignalLayer,
    SignalLayerConfig,
    TradingSignalLayer,
    BaseTradeLayer,
    TradeLayerConfig,
    TradeExecutionLayer,
    BaseSupportResistanceLayer,
    SupportResistanceLayerConfig,
    SupportResistanceLayer,
    CustomStrategySignalInterface,
    BaseCustomStrategyLayer,
    CustomStrategySignalConfig,
    CustomStrategySignalLayer,
    SignalStyleConfig,
    SignalStyleManager,
    EnhancedSignalLayer,
    create_trading_signal_layer,
    create_buy_signals_only_layer,
    create_sell_signals_only_layer,
    create_high_confidence_signals_layer,
    create_trade_execution_layer,
    create_profitable_trades_only_layer,
    create_losing_trades_only_layer,
    create_support_resistance_layer,
    create_support_only_layer,
    create_resistance_only_layer,
    create_trend_lines_layer,
    create_key_levels_layer,
    create_custom_strategy_layer,
    create_pairs_trading_layer,
    create_momentum_strategy_layer,
    create_arbitrage_layer,
    create_mean_reversion_layer,
    create_breakout_strategy_layer,
    create_enhanced_signal_layer,
    create_professional_signal_layer,
    create_colorblind_friendly_signal_layer,
    create_dark_theme_signal_layer,
    create_minimal_signal_layer
)

from .bot_integration import (
    BotFilterConfig,
    BotDataService,
    BotSignalLayerIntegration,
    bot_data_service,
    bot_integration,
    get_active_bot_signals,
    get_active_bot_trades,
    get_bot_signals_by_strategy,
    get_bot_performance_summary
)

from .bot_enhanced_layers import (
    BotSignalLayerConfig,
    BotTradeLayerConfig,
    BotIntegratedSignalLayer,
    BotIntegratedTradeLayer,
    BotMultiLayerIntegration,
    bot_multi_layer,
    create_bot_signal_layer,
    create_bot_trade_layer,
    create_complete_bot_layers
)

__all__ = [
    # Base layers
    'BaseChartLayer',
    'CandlestickLayer',
    'VolumeLayer',
    'LayerManager',
    'LayerConfig',

    # Indicator layers (overlays)
    'BaseIndicatorLayer',
    'IndicatorLayerConfig',
    'SMALayer',
    'EMALayer',
    'BollingerBandsLayer',

    # Subplot layers
    'BaseSubplotLayer',
    'SubplotLayerConfig',
    'RSILayer',
    'MACDLayer',

    # Signal layers
    'BaseSignalLayer',
    'SignalLayerConfig',
    'TradingSignalLayer',

    # Trade layers
    'BaseTradeLayer',
    'TradeLayerConfig',
    'TradeExecutionLayer',

    # Support/Resistance layers
    'BaseSupportResistanceLayer',
    'SupportResistanceLayerConfig',
    'SupportResistanceLayer',

    # Custom Strategy layers
    'CustomStrategySignalInterface',
    'BaseCustomStrategyLayer',
    'CustomStrategySignalConfig',
    'CustomStrategySignalLayer',

    # Signal Styling
    'SignalStyleConfig',
    'SignalStyleManager',
    'EnhancedSignalLayer',

    # Bot Integration
    'BotFilterConfig',
    'BotDataService',
    'BotSignalLayerIntegration',
    'bot_data_service',
    'bot_integration',

    # Bot Enhanced Layers
    'BotSignalLayerConfig',
    'BotTradeLayerConfig',
    'BotIntegratedSignalLayer',
    'BotIntegratedTradeLayer',
    'BotMultiLayerIntegration',
    'bot_multi_layer',

    # Convenience functions
    'create_sma_layer',
    'create_ema_layer',
    'create_bollinger_bands_layer',
    'create_common_ma_layers',
    'create_common_overlay_indicators',
    'create_rsi_layer',
    'create_macd_layer',
    'create_common_subplot_indicators',
    'create_trading_signal_layer',
    'create_buy_signals_only_layer',
    'create_sell_signals_only_layer',
    'create_high_confidence_signals_layer',
    'create_trade_execution_layer',
    'create_profitable_trades_only_layer',
    'create_losing_trades_only_layer',
    'create_support_resistance_layer',
    'create_support_only_layer',
    'create_resistance_only_layer',
    'create_trend_lines_layer',
    'create_key_levels_layer',
    'create_custom_strategy_layer',
    'create_pairs_trading_layer',
    'create_momentum_strategy_layer',
    'create_arbitrage_layer',
    'create_mean_reversion_layer',
    'create_breakout_strategy_layer',
    'create_enhanced_signal_layer',
    'create_professional_signal_layer',
    'create_colorblind_friendly_signal_layer',
    'create_dark_theme_signal_layer',
    'create_minimal_signal_layer',
    'get_active_bot_signals',
    'get_active_bot_trades',
    'get_bot_signals_by_strategy',
    'get_bot_performance_summary',
    'create_bot_signal_layer',
    'create_bot_trade_layer',
    'create_complete_bot_layers'
]

__version__ = "0.1.0"

# Package metadata
# __version__ = "0.1.0"
# __package_name__ = "layers"

# Layers will be imported once they are created
# from .base import BaseCandlestickLayer
# from .indicators import IndicatorLayer
# from .subplots import SubplotManager
# from .signals import SignalLayer

# Public exports (will be populated as layers are implemented)
# __all__ = [
# #     "BaseCandlestickLayer",
# #     "IndicatorLayer",
# #     "SubplotManager",
# #     "SignalLayer"
# ]
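
With these flat re-exports, dashboard code imports from the package root rather than from the individual submodules:

    from components.charts.layers import LayerManager, CandlestickLayer, create_rsi_layer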
952
components/charts/layers/base.py
Normal file
@@ -0,0 +1,952 @@
"""
Base Chart Layer Components

This module contains the foundational layer classes that serve as building blocks
for all chart components including candlestick charts, indicators, and signals.
"""

import plotly.graph_objects as go
from plotly.subplots import make_subplots
import pandas as pd
from abc import ABC, abstractmethod
from typing import Dict, Any, Optional, List, Union
from dataclasses import dataclass

from utils.logger import get_logger
from ..error_handling import (
    ChartErrorHandler, ChartError, ErrorSeverity,
    InsufficientDataError, DataValidationError, IndicatorCalculationError,
    create_error_annotation, get_error_message
)

# Initialize logger
logger = get_logger("default_logger")


@dataclass
class LayerConfig:
    """Configuration for chart layers"""
    name: str
    enabled: bool = True
    color: Optional[str] = None
    style: Optional[Dict[str, Any]] = None
    subplot_row: Optional[int] = None  # None = main chart, 1+ = subplot row

    def __post_init__(self):
        if self.style is None:
            self.style = {}


class BaseLayer:
    """
    Base class for all chart layers providing common functionality
    for data validation, error handling, and trace management.
    """

    def __init__(self, config: LayerConfig):
        self.config = config
        self.logger = get_logger("default_logger")
        self.error_handler = ChartErrorHandler()
        self.traces = []
        self._is_valid = False
        self._error_message = None

    def validate_data(self, data: Union[pd.DataFrame, List[Dict[str, Any]]]) -> bool:
        """
        Validate input data for layer requirements.

        Args:
            data: Input data to validate

        Returns:
            True if data is valid, False otherwise
        """
        try:
            self.error_handler.clear_errors()

            # Check data type
            if not isinstance(data, (pd.DataFrame, list)):
                error = ChartError(
                    code='INVALID_DATA_TYPE',
                    message=f'Invalid data type for {self.__class__.__name__}: {type(data)}',
                    severity=ErrorSeverity.ERROR,
                    context={'layer': self.__class__.__name__, 'data_type': str(type(data))},
                    recovery_suggestion='Provide data as pandas DataFrame or list of dictionaries'
                )
                self.error_handler.errors.append(error)
                return False

            # Check data sufficiency
            is_sufficient = self.error_handler.validate_data_sufficiency(
                data,
                chart_type='candlestick',  # Default chart type since LayerConfig doesn't have layer_type
                indicators=[{'type': 'candlestick', 'parameters': {}}]  # Default indicator type
            )

            self._is_valid = is_sufficient
            if not is_sufficient:
                self._error_message = self.error_handler.get_user_friendly_message()

            return is_sufficient

        except Exception as e:
            self.logger.error(f"Base layer: Data validation error in {self.__class__.__name__}: {e}")
            error = ChartError(
                code='VALIDATION_EXCEPTION',
                message=f'Validation error: {str(e)}',
                severity=ErrorSeverity.ERROR,
                context={'layer': self.__class__.__name__, 'exception': str(e)}
            )
            self.error_handler.errors.append(error)
            self._is_valid = False
            self._error_message = str(e)
            return False

    def get_error_info(self) -> Dict[str, Any]:
        """Get error information for this layer"""
        return {
            'is_valid': self._is_valid,
            'error_message': self._error_message,
            'error_summary': self.error_handler.get_error_summary(),
            'can_proceed': len(self.error_handler.errors) == 0
        }

    def create_error_trace(self, error_message: str) -> go.Scatter:
        """Create an error display trace"""
        return go.Scatter(
            x=[],
            y=[],
            mode='text',
            text=[error_message],
            textposition='middle center',
            textfont={'size': 14, 'color': '#e74c3c'},
            showlegend=False,
            name=f"{self.__class__.__name__} Error"
        )


class BaseChartLayer(ABC):
    """
    Abstract base class for all chart layers.

    This defines the interface that all chart layers must implement,
    whether they are candlestick charts, indicators, or signal overlays.
    """

    def __init__(self, config: LayerConfig):
        """
        Initialize the base layer.

        Args:
            config: Layer configuration
        """
        self.config = config
        self.logger = logger

    @abstractmethod
    def render(self, fig: go.Figure, data: pd.DataFrame, **kwargs) -> go.Figure:
        """
        Render the layer onto the provided figure.

        Args:
            fig: Plotly figure to render onto
            data: Chart data (OHLCV format)
            **kwargs: Additional rendering parameters

        Returns:
            Updated figure with layer rendered
        """
        pass

    @abstractmethod
    def validate_data(self, data: pd.DataFrame) -> bool:
        """
        Validate that the data is suitable for this layer.

        Args:
            data: Chart data to validate

        Returns:
            True if data is valid, False otherwise
        """
        pass

    def is_enabled(self) -> bool:
        """Check if the layer is enabled."""
        return self.config.enabled

    def get_subplot_row(self) -> Optional[int]:
        """Get the subplot row for this layer."""
        return self.config.subplot_row

    def is_overlay(self) -> bool:
        """Check if this layer is an overlay (main chart) or subplot."""
        return self.config.subplot_row is None

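# Illustrative sketch of a minimal concrete layer that satisfies the
# BaseChartLayer interface; "LastCloseLineLayer" is a hypothetical name,
# not defined anywhere in this module:
#
#     class LastCloseLineLayer(BaseChartLayer):
#         def validate_data(self, data: pd.DataFrame) -> bool:
#             return not data.empty and 'close' in data.columns
#
#         def render(self, fig: go.Figure, data: pd.DataFrame, **kwargs) -> go.Figure:
#             # Mark the most recent close with a dotted horizontal line
#             fig.add_hline(y=float(data['close'].iloc[-1]), line_dash="dot")
#             return fig
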
class CandlestickLayer(BaseLayer):
    """
    Candlestick chart layer implementation with enhanced error handling.

    This layer renders OHLC data as candlesticks on the main chart.
    """

    def __init__(self, config: LayerConfig = None):
        """
        Initialize candlestick layer.

        Args:
            config: Layer configuration (optional, uses defaults)
        """
        if config is None:
            config = LayerConfig(
                name="candlestick",
                enabled=True,
                style={
                    'increasing_color': '#00C851',  # Green for bullish
                    'decreasing_color': '#FF4444',  # Red for bearish
                    'line_width': 1
                }
            )

        super().__init__(config)

    def is_enabled(self) -> bool:
        """Check if the layer is enabled."""
        return self.config.enabled

    def is_overlay(self) -> bool:
        """Check if this layer is an overlay (main chart) or subplot."""
        return self.config.subplot_row is None

    def get_subplot_row(self) -> Optional[int]:
        """Get the subplot row for this layer."""
        return self.config.subplot_row

    def validate_data(self, data: Union[pd.DataFrame, List[Dict[str, Any]]]) -> bool:
        """Enhanced validation with comprehensive error handling"""
        try:
            # Use parent class error handling for comprehensive validation
            parent_valid = super().validate_data(data)

            # Convert to DataFrame if needed for local validation
            if isinstance(data, list):
                df = pd.DataFrame(data)
            else:
                df = data.copy()

            # Additional candlestick-specific validation
            required_columns = ['timestamp', 'open', 'high', 'low', 'close']

            if not all(col in df.columns for col in required_columns):
                missing = [col for col in required_columns if col not in df.columns]
                error = ChartError(
                    code='MISSING_OHLC_COLUMNS',
                    message=f'Missing required OHLC columns: {missing}',
                    severity=ErrorSeverity.ERROR,
                    context={'missing_columns': missing, 'available_columns': list(df.columns)},
                    recovery_suggestion='Ensure data contains timestamp, open, high, low, close columns'
                )
                self.error_handler.errors.append(error)
                return False

            if len(df) == 0:
                error = ChartError(
                    code='EMPTY_CANDLESTICK_DATA',
                    message='No candlestick data available',
                    severity=ErrorSeverity.ERROR,
                    context={'data_count': 0},
                    recovery_suggestion='Check data source or time range'
                )
                self.error_handler.errors.append(error)
                return False

            # Check for price data validity
            invalid_prices = df[
                (df['high'] < df['low']) |
                (df['open'] < 0) | (df['close'] < 0) |
                (df['high'] < 0) | (df['low'] < 0) |
                pd.isna(df[['open', 'high', 'low', 'close']]).any(axis=1)
            ]

            if len(invalid_prices) > len(df) * 0.5:  # More than 50% invalid
                error = ChartError(
                    code='EXCESSIVE_INVALID_PRICES',
                    message=f'Too many invalid price records: {len(invalid_prices)}/{len(df)}',
                    severity=ErrorSeverity.ERROR,
                    context={'invalid_count': len(invalid_prices), 'total_count': len(df)},
                    recovery_suggestion='Check data quality and price data sources'
                )
                self.error_handler.errors.append(error)
                return False
            elif len(invalid_prices) > 0:
                # Warning for some invalid data
                error = ChartError(
                    code='SOME_INVALID_PRICES',
                    message=f'Found {len(invalid_prices)} invalid price records (will be filtered)',
                    severity=ErrorSeverity.WARNING,
                    context={'invalid_count': len(invalid_prices), 'total_count': len(df)},
                    recovery_suggestion='Invalid records will be automatically removed'
                )
                self.error_handler.warnings.append(error)

            return parent_valid and len(self.error_handler.errors) == 0

        except Exception as e:
            self.logger.error(f"Candlestick layer: Error validating candlestick data: {e}")
            error = ChartError(
                code='CANDLESTICK_VALIDATION_ERROR',
                message=f'Candlestick validation failed: {str(e)}',
                severity=ErrorSeverity.ERROR,
                context={'exception': str(e)}
            )
            self.error_handler.errors.append(error)
            return False

    def render(self, fig: go.Figure, data: pd.DataFrame, **kwargs) -> go.Figure:
        """
        Render candlestick chart with error handling and recovery.

        Args:
            fig: Target figure
            data: OHLCV data
            **kwargs: Additional parameters (row, col for subplots)

        Returns:
            Figure with candlestick trace added or error display
        """
        try:
            # Validate data
            if not self.validate_data(data):
                self.logger.error("Candlestick layer: Invalid data for candlestick layer")

                # Add error annotation to figure
                if self.error_handler.errors:
                    error_msg = self.error_handler.errors[0].message
                    fig.add_annotation(create_error_annotation(
                        f"Candlestick Error: {error_msg}",
                        position='center'
                    ))
                return fig

            # Clean and prepare data
            clean_data = self._clean_candlestick_data(data)
            if clean_data.empty:
                fig.add_annotation(create_error_annotation(
                    "No valid candlestick data after cleaning",
                    position='center'
                ))
                return fig

            # Extract styling
            style = self.config.style
            increasing_color = style.get('increasing_color', '#00C851')
            decreasing_color = style.get('decreasing_color', '#FF4444')

            # Create candlestick trace
            candlestick = go.Candlestick(
                x=clean_data['timestamp'],
                open=clean_data['open'],
                high=clean_data['high'],
                low=clean_data['low'],
                close=clean_data['close'],
                name=self.config.name,
                increasing_line_color=increasing_color,
                decreasing_line_color=decreasing_color,
                showlegend=False
            )

            # Add to figure
            row = kwargs.get('row', 1)
            col = kwargs.get('col', 1)

            try:
                if hasattr(fig, 'add_trace') and row == 1 and col == 1:
                    # Simple figure without subplots
                    fig.add_trace(candlestick)
                elif hasattr(fig, 'add_trace'):
                    # Subplot figure
                    fig.add_trace(candlestick, row=row, col=col)
                else:
                    # Fallback
                    fig.add_trace(candlestick)
            except Exception as trace_error:
                # If subplot call fails, try simple add_trace
                try:
                    fig.add_trace(candlestick)
                except Exception as fallback_error:
                    self.logger.error(f"Candlestick layer: Failed to add candlestick trace: {fallback_error}")
                    fig.add_annotation(create_error_annotation(
                        f"Failed to add candlestick trace: {str(fallback_error)}",
                        position='center'
                    ))
                    return fig

            # Add warning annotations if needed
            if self.error_handler.warnings:
                warning_msg = f"⚠️ {self.error_handler.warnings[0].message}"
                fig.add_annotation({
                    'text': warning_msg,
                    'xref': 'paper', 'yref': 'paper',
                    'x': 0.02, 'y': 0.98,
                    'xanchor': 'left', 'yanchor': 'top',
                    'showarrow': False,
                    'font': {'size': 10, 'color': '#f39c12'},
                    'bgcolor': 'rgba(255,255,255,0.8)'
                })

            self.logger.debug(f"Rendered candlestick layer with {len(clean_data)} candles")
            return fig

        except Exception as e:
            self.logger.error(f"Candlestick layer: Error rendering candlestick layer: {e}")
            fig.add_annotation(create_error_annotation(
                f"Candlestick render error: {str(e)}",
                position='center'
            ))
            return fig

    def _clean_candlestick_data(self, data: pd.DataFrame) -> pd.DataFrame:
        """Clean and validate candlestick data"""
        try:
            clean_data = data.copy()

            # Remove rows with invalid prices
            invalid_mask = (
                (clean_data['high'] < clean_data['low']) |
                (clean_data['open'] < 0) | (clean_data['close'] < 0) |
                (clean_data['high'] < 0) | (clean_data['low'] < 0) |
                pd.isna(clean_data[['open', 'high', 'low', 'close']]).any(axis=1)
            )

            initial_count = len(clean_data)
            clean_data = clean_data[~invalid_mask]

            if len(clean_data) < initial_count:
                removed_count = initial_count - len(clean_data)
                self.logger.info(f"Removed {removed_count} invalid candlestick records")

            # Ensure timestamp is properly formatted
            if not pd.api.types.is_datetime64_any_dtype(clean_data['timestamp']):
                clean_data['timestamp'] = pd.to_datetime(clean_data['timestamp'])

            # Sort by timestamp
            clean_data = clean_data.sort_values('timestamp')

            return clean_data

        except Exception as e:
            self.logger.error(f"Candlestick layer: Error cleaning candlestick data: {e}")
            return pd.DataFrame()

class VolumeLayer(BaseLayer):
    """
    Volume subplot layer implementation with enhanced error handling.

    This layer renders volume data as a bar chart in a separate subplot,
    with bars colored based on price movement.
    """

    def __init__(self, config: LayerConfig = None):
        """
        Initialize volume layer.

        Args:
            config: Layer configuration (optional, uses defaults)
        """
        if config is None:
            config = LayerConfig(
                name="volume",
                enabled=True,
                subplot_row=2,  # Volume goes in second row by default
                style={
                    'bullish_color': '#00C851',
                    'bearish_color': '#FF4444',
                    'opacity': 0.7
                }
            )

        super().__init__(config)

    def is_enabled(self) -> bool:
        """Check if the layer is enabled."""
        return self.config.enabled

    def is_overlay(self) -> bool:
        """Check if this layer is an overlay (main chart) or subplot."""
        return self.config.subplot_row is None

    def get_subplot_row(self) -> Optional[int]:
        """Get the subplot row for this layer."""
        return self.config.subplot_row

    def validate_data(self, data: Union[pd.DataFrame, List[Dict[str, Any]]]) -> bool:
        """Enhanced validation with comprehensive error handling"""
        try:
            # Use parent class error handling
            parent_valid = super().validate_data(data)

            # Convert to DataFrame if needed
            if isinstance(data, list):
                df = pd.DataFrame(data)
            else:
                df = data.copy()

            # Volume-specific validation
            required_columns = ['timestamp', 'open', 'close', 'volume']

            if not all(col in df.columns for col in required_columns):
                missing = [col for col in required_columns if col not in df.columns]
                error = ChartError(
                    code='MISSING_VOLUME_COLUMNS',
                    message=f'Missing required volume columns: {missing}',
                    severity=ErrorSeverity.ERROR,
                    context={'missing_columns': missing, 'available_columns': list(df.columns)},
                    recovery_suggestion='Ensure data contains timestamp, open, close, volume columns'
                )
                self.error_handler.errors.append(error)
                return False

            if len(df) == 0:
                error = ChartError(
                    code='EMPTY_VOLUME_DATA',
                    message='No volume data available',
                    severity=ErrorSeverity.ERROR,
                    context={'data_count': 0},
                    recovery_suggestion='Check data source or time range'
                )
                self.error_handler.errors.append(error)
                return False

            # Check if volume data exists and is valid
            valid_volume_mask = (df['volume'] >= 0) & pd.notna(df['volume'])
            valid_volume_count = valid_volume_mask.sum()

            if valid_volume_count == 0:
                error = ChartError(
                    code='NO_VALID_VOLUME',
                    message='No valid volume data found',
                    severity=ErrorSeverity.WARNING,
                    context={'total_records': len(df), 'valid_volume': 0},
                    recovery_suggestion='Volume chart will be skipped'
                )
                self.error_handler.warnings.append(error)

            elif valid_volume_count < len(df) * 0.5:  # Less than 50% valid
                error = ChartError(
                    code='MOSTLY_INVALID_VOLUME',
                    message=f'Most volume data is invalid: {valid_volume_count}/{len(df)} valid',
                    severity=ErrorSeverity.WARNING,
                    context={'total_records': len(df), 'valid_volume': valid_volume_count},
                    recovery_suggestion='Invalid volume records will be filtered out'
                )
                self.error_handler.warnings.append(error)

            elif df['volume'].sum() <= 0:
                error = ChartError(
                    code='ZERO_VOLUME_TOTAL',
                    message='Total volume is zero or negative',
                    severity=ErrorSeverity.WARNING,
                    context={'volume_sum': float(df['volume'].sum())},
                    recovery_suggestion='Volume chart may not be meaningful'
                )
                self.error_handler.warnings.append(error)

            return parent_valid and valid_volume_count > 0

        except Exception as e:
            self.logger.error(f"Volume layer: Error validating volume data: {e}")
            error = ChartError(
                code='VOLUME_VALIDATION_ERROR',
                message=f'Volume validation failed: {str(e)}',
                severity=ErrorSeverity.ERROR,
                context={'exception': str(e)}
            )
            self.error_handler.errors.append(error)
            return False

    def render(self, fig: go.Figure, data: pd.DataFrame, **kwargs) -> go.Figure:
        """
        Render volume bars with error handling and recovery.

        Args:
            fig: Target figure (must be subplot figure)
            data: OHLCV data
            **kwargs: Additional parameters (row, col for subplots)

        Returns:
            Figure with volume trace added or error handling
        """
        try:
            # Validate data
            if not self.validate_data(data):
                # Check if we can skip gracefully (warnings only)
                if not self.error_handler.errors and self.error_handler.warnings:
                    self.logger.debug("Skipping volume layer due to warnings")
                    return fig
                else:
                    self.logger.error("Volume layer: Invalid data for volume layer")
                    return fig

            # Clean and prepare data
            clean_data = self._clean_volume_data(data)
            if clean_data.empty:
                self.logger.debug("No valid volume data after cleaning")
                return fig

            # Calculate bar colors based on price movement
            style = self.config.style
            bullish_color = style.get('bullish_color', '#00C851')
            bearish_color = style.get('bearish_color', '#FF4444')
            opacity = style.get('opacity', 0.7)

            colors = [
                bullish_color if close >= open_price else bearish_color
                for close, open_price in zip(clean_data['close'], clean_data['open'])
            ]

            # Create volume bar trace
            volume_bars = go.Bar(
                x=clean_data['timestamp'],
                y=clean_data['volume'],
                name='Volume',
                marker_color=colors,
                opacity=opacity,
                showlegend=False
            )

            # Add to figure
            row = kwargs.get('row', 2)  # Default to row 2 for volume
            col = kwargs.get('col', 1)

            fig.add_trace(volume_bars, row=row, col=col)

            self.logger.debug(f"Volume layer: Rendered volume layer with {len(clean_data)} bars")
            return fig

        except Exception as e:
            self.logger.error(f"Volume layer: Error rendering volume layer: {e}")
            return fig

    def _clean_volume_data(self, data: pd.DataFrame) -> pd.DataFrame:
        """Clean and validate volume data"""
        try:
            clean_data = data.copy()

            # Remove rows with invalid volume
            valid_mask = (clean_data['volume'] >= 0) & pd.notna(clean_data['volume'])
            initial_count = len(clean_data)
            clean_data = clean_data[valid_mask]

            if len(clean_data) < initial_count:
                removed_count = initial_count - len(clean_data)
                self.logger.info(f"Removed {removed_count} invalid volume records")

            # Ensure timestamp is properly formatted
            if not pd.api.types.is_datetime64_any_dtype(clean_data['timestamp']):
                clean_data['timestamp'] = pd.to_datetime(clean_data['timestamp'])

            # Sort by timestamp
            clean_data = clean_data.sort_values('timestamp')

            return clean_data

        except Exception as e:
            self.logger.error(f"Volume layer: Error cleaning volume data: {e}")
            return pd.DataFrame()

class LayerManager:
    """
    Manager class for coordinating multiple chart layers.

    This class handles the orchestration of multiple layers, including
    setting up subplots and rendering layers in the correct order.
    """

    def __init__(self):
        """Initialize the layer manager."""
        self.layers: List[BaseLayer] = []
        self.logger = logger

    def add_layer(self, layer: BaseLayer) -> None:
        """
        Add a layer to the manager.

        Args:
            layer: Chart layer to add
        """
        self.layers.append(layer)
        self.logger.debug(f"Added layer: {layer.config.name}")

    def remove_layer(self, layer_name: str) -> bool:
        """
        Remove a layer by name.

        Args:
            layer_name: Name of layer to remove

        Returns:
            True if layer was removed, False if not found
        """
        for i, layer in enumerate(self.layers):
            if layer.config.name == layer_name:
                self.layers.pop(i)
                self.logger.debug(f"Removed layer: {layer_name}")
                return True

        self.logger.warning(f"Layer not found for removal: {layer_name}")
        return False

    def get_enabled_layers(self) -> List[BaseLayer]:
        """Get list of enabled layers."""
        return [layer for layer in self.layers if layer.is_enabled()]

    def get_overlay_layers(self) -> List[BaseLayer]:
        """Get layers that render on the main chart."""
        return [layer for layer in self.get_enabled_layers() if layer.is_overlay()]

    def get_subplot_layers(self) -> Dict[int, List[BaseLayer]]:
        """Get layers grouped by subplot row."""
        subplot_layers = {}

        for layer in self.get_enabled_layers():
            if not layer.is_overlay():
                row = layer.get_subplot_row()
                if row not in subplot_layers:
                    subplot_layers[row] = []
                subplot_layers[row].append(layer)

        return subplot_layers

    def calculate_subplot_layout(self) -> Dict[str, Any]:
        """
        Calculate subplot configuration based on layers.

        Returns:
            Dict with subplot configuration parameters
        """
        subplot_layers = self.get_subplot_layers()

        if not subplot_layers:
            # No subplots needed
            return {
                'rows': 1,
                'cols': 1,
                'subplot_titles': None,
                'row_heights': None
            }

        # Reassign subplot rows dynamically to ensure proper ordering
        self._reassign_subplot_rows()

        # Recalculate after reassignment
        subplot_layers = self.get_subplot_layers()

        # Calculate number of rows (main chart + subplots)
        max_subplot_row = max(subplot_layers.keys()) if subplot_layers else 0
        total_rows = max(1, max_subplot_row)  # Row numbers are 1-indexed, so max_subplot_row is the total rows needed

        # Create subplot titles
        subplot_titles = ['Price']  # Main chart
        for row in range(2, total_rows + 1):
            if row in subplot_layers:
                # Join the names of the layers in this row as the subtitle
                layer_names = [layer.config.name for layer in subplot_layers[row]]
                subplot_titles.append(' / '.join(layer_names).title())
            else:
                subplot_titles.append(f'Subplot {row}')

        # Calculate row heights based on subplot height ratios
        row_heights = self._calculate_dynamic_row_heights(subplot_layers, total_rows)

        return {
            'rows': total_rows,
            'cols': 1,
            'subplot_titles': subplot_titles,
            'row_heights': row_heights,
            'shared_xaxes': True,
            'vertical_spacing': 0.03
        }

    def _reassign_subplot_rows(self) -> None:
        """
        Reassign subplot rows to ensure proper sequential ordering.

        This method dynamically assigns subplot rows starting from row 2,
        ensuring no gaps in the subplot layout.
        """
        subplot_layers = []

        # Collect all subplot layers
        for layer in self.get_enabled_layers():
            if not layer.is_overlay():
                subplot_layers.append(layer)

        # Sort by priority: volume first, then by current subplot row
        def layer_priority(layer):
            # Volume gets highest priority (0), then by current row
            if hasattr(layer, 'config') and layer.config.name == 'volume':
                return (0, layer.get_subplot_row() or 999)
            else:
                return (1, layer.get_subplot_row() or 999)

        subplot_layers.sort(key=layer_priority)

        # Reassign rows starting from 2
        for i, layer in enumerate(subplot_layers):
            new_row = i + 2  # Start from row 2 (row 1 is main chart)
            layer.config.subplot_row = new_row
            self.logger.debug(f"Assigned {layer.config.name} to subplot row {new_row}")

    def _calculate_dynamic_row_heights(self, subplot_layers: Dict[int, List], total_rows: int) -> List[float]:
        """
        Calculate row heights based on subplot height ratios.

        Args:
            subplot_layers: Dictionary of subplot layers by row
            total_rows: Total number of rows

        Returns:
            List of height ratios for each row
        """
        if total_rows == 1:
            return [1.0]  # Single row gets full height

        # Calculate total requested subplot height
        total_subplot_ratio = 0.0
        subplot_ratios = {}

        for row in range(2, total_rows + 1):
            if row in subplot_layers:
                # Get height ratio from first layer in the row
                layer = subplot_layers[row][0]
                if hasattr(layer, 'get_subplot_height_ratio'):
                    ratio = layer.get_subplot_height_ratio()
                else:
                    ratio = 0.25  # Default ratio
                subplot_ratios[row] = ratio
                total_subplot_ratio += ratio
            else:
                subplot_ratios[row] = 0.25  # Default for empty rows
                total_subplot_ratio += 0.25

        # Ensure total doesn't exceed reasonable limits
        max_subplot_ratio = 0.6  # Maximum 60% for all subplots
        if total_subplot_ratio > max_subplot_ratio:
            # Scale down proportionally
            scale_factor = max_subplot_ratio / total_subplot_ratio
            for row in subplot_ratios:
                subplot_ratios[row] *= scale_factor
            total_subplot_ratio = max_subplot_ratio

        # Main chart gets remaining space
        main_chart_ratio = 1.0 - total_subplot_ratio

        # Build final height list
        row_heights = [main_chart_ratio]  # Main chart
        for row in range(2, total_rows + 1):
            row_heights.append(subplot_ratios.get(row, 0.25))

        return row_heights

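    # Worked example: two subplot rows each requesting the default 0.25 ratio give
    # total_subplot_ratio = 0.5, which is under the 0.6 cap, so no rescaling occurs
    # and the returned heights are [0.5, 0.25, 0.25]; the main chart keeps the
    # remaining 50% of the figure height.
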
    def render_all_layers(self, data: pd.DataFrame, **kwargs) -> go.Figure:
        """
        Render all enabled layers onto a new figure.

        Args:
            data: Chart data (OHLCV format)
            **kwargs: Additional rendering parameters

        Returns:
            Complete figure with all layers rendered
        """
        try:
            # Calculate subplot layout
            layout_config = self.calculate_subplot_layout()

            # Create figure with subplots if needed
            if layout_config['rows'] > 1:
                fig = make_subplots(**layout_config)
            else:
                fig = go.Figure()

            # Render overlay layers (main chart)
            overlay_layers = self.get_overlay_layers()
            for layer in overlay_layers:
                fig = layer.render(fig, data, row=1, col=1, **kwargs)

            # Render subplot layers
            subplot_layers = self.get_subplot_layers()
            for row, layers in subplot_layers.items():
                for layer in layers:
                    fig = layer.render(fig, data, row=row, col=1, **kwargs)

            # Update layout styling
            self._apply_layout_styling(fig, layout_config)

            self.logger.debug(f"Rendered {len(self.get_enabled_layers())} layers")
            return fig

        except Exception as e:
            self.logger.error(f"Layer manager: Error rendering layers: {e}")
            # Return empty figure on error
            return go.Figure()

    def _apply_layout_styling(self, fig: go.Figure, layout_config: Dict[str, Any]) -> None:
        """Apply consistent styling to the figure layout."""
        try:
            # Basic layout settings
            fig.update_layout(
                template="plotly_white",
                showlegend=False,
                hovermode='x unified',
                xaxis_rangeslider_visible=False
            )

            # Update axes for subplots
            if layout_config['rows'] > 1:
                # Update main chart axes
                fig.update_yaxes(title_text="Price (USDT)", row=1, col=1)
                fig.update_xaxes(showticklabels=False, row=1, col=1)

                # Update subplot axes
                subplot_layers = self.get_subplot_layers()
                for row in range(2, layout_config['rows'] + 1):
                    if row in subplot_layers:
                        # Set y-axis title and range based on layer type
                        layers_in_row = subplot_layers[row]
                        layer = layers_in_row[0]  # Use first layer for configuration

                        # Set y-axis title
                        if hasattr(layer, 'config') and hasattr(layer.config, 'indicator_type'):
                            indicator_type = layer.config.indicator_type
                            if indicator_type == 'rsi':
                                fig.update_yaxes(title_text="RSI", row=row, col=1)
                            elif indicator_type == 'macd':
                                fig.update_yaxes(title_text="MACD", row=row, col=1)
                        else:
                            layer_names = [l.config.name for l in layers_in_row]
                            fig.update_yaxes(title_text=' / '.join(layer_names), row=row, col=1)

                        # Set fixed y-axis range if specified
                        if hasattr(layer, 'has_fixed_range') and layer.has_fixed_range():
                            y_range = layer.get_y_axis_range()
                            if y_range:
                                fig.update_yaxes(range=list(y_range), row=row, col=1)

                    # Only show x-axis labels on the bottom subplot
                    if row == layout_config['rows']:
                        fig.update_xaxes(title_text="Time", row=row, col=1)
                    else:
                        fig.update_xaxes(showticklabels=False, row=row, col=1)
            else:
                # Single chart
                fig.update_layout(
                    xaxis_title="Time",
                    yaxis_title="Price (USDT)"
                )

        except Exception as e:
            self.logger.error(f"Layer manager: Error applying layout styling: {e}")
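A typical composition of the classes above, as a hedged sketch (it assumes an OHLCV DataFrame `df` with timestamp/open/high/low/close/volume columns; everything else uses the APIs defined in this file):

    manager = LayerManager()
    manager.add_layer(CandlestickLayer())   # main-chart overlay with default styling
    manager.add_layer(VolumeLayer())        # subplot layer, assigned to row 2 by default
    fig = manager.render_all_layers(df)     # builds the subplot grid and renders each layer
    fig.show()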
694
components/charts/layers/bot_enhanced_layers.py
Normal file
@@ -0,0 +1,694 @@
"""
Bot-Enhanced Signal Layers

This module provides enhanced versions of signal layers that automatically integrate
with the bot management system, making it easier to display bot signals and trades
without manual data fetching.
"""

import pandas as pd
import plotly.graph_objects as go
from typing import Dict, Any, Optional, List, Union, Tuple
from dataclasses import dataclass
from datetime import datetime, timedelta

from .signals import (
    TradingSignalLayer, TradeExecutionLayer, EnhancedSignalLayer,
    SignalLayerConfig, TradeLayerConfig, SignalStyleConfig
)
from .bot_integration import (
    BotFilterConfig, BotSignalLayerIntegration, bot_integration,
    get_active_bot_signals, get_active_bot_trades
)
from utils.logger import get_logger

# Initialize logger
logger = get_logger("default_logger")


@dataclass
class BotSignalLayerConfig(SignalLayerConfig):
    """Extended configuration for bot-integrated signal layers"""
    # Bot filtering options
    bot_filter: Optional[BotFilterConfig] = None
    auto_fetch_data: bool = True  # Automatically fetch bot data
    time_window_days: int = 7  # Time window for data fetching
    active_bots_only: bool = True  # Only show signals from active bots
    include_bot_info: bool = True  # Include bot info in hover text
    group_by_strategy: bool = False  # Group signals by strategy

    def __post_init__(self):
        super().__post_init__()
        if self.bot_filter is None:
            self.bot_filter = BotFilterConfig(active_only=self.active_bots_only)


@dataclass
class BotTradeLayerConfig(TradeLayerConfig):
    """Extended configuration for bot-integrated trade layers"""
    # Bot filtering options
    bot_filter: Optional[BotFilterConfig] = None
    auto_fetch_data: bool = True  # Automatically fetch bot data
    time_window_days: int = 7  # Time window for data fetching
    active_bots_only: bool = True  # Only show trades from active bots
    include_bot_info: bool = True  # Include bot info in hover text
    group_by_strategy: bool = False  # Group trades by strategy

    def __post_init__(self):
        super().__post_init__()
        if self.bot_filter is None:
            self.bot_filter = BotFilterConfig(active_only=self.active_bots_only)

class BotIntegratedSignalLayer(TradingSignalLayer):
    """
    Signal layer that automatically integrates with the bot management system.
    """

    def __init__(self, config: BotSignalLayerConfig = None):
        """
        Initialize bot-integrated signal layer.

        Args:
            config: Bot signal layer configuration (optional)
        """
        if config is None:
            config = BotSignalLayerConfig(
                name="Bot Signals",
                enabled=True,
                signal_types=['buy', 'sell'],
                confidence_threshold=0.3,
                auto_fetch_data=True,
                active_bots_only=True
            )

        # Convert to base config for parent class
        base_config = SignalLayerConfig(
            name=config.name,
            enabled=config.enabled,
            signal_types=config.signal_types,
            confidence_threshold=config.confidence_threshold,
            show_confidence=config.show_confidence,
            marker_size=config.marker_size,
            show_price_labels=config.show_price_labels,
            bot_id=config.bot_id
        )

        super().__init__(base_config)
        self.bot_config = config
        self.integration = BotSignalLayerIntegration()

        self.logger.info(f"Bot Enhanced Signal Layer: Initialized BotIntegratedSignalLayer: {config.name}")

    def render(self, fig: go.Figure, data: pd.DataFrame, signals: pd.DataFrame = None, **kwargs) -> go.Figure:
        """
        Render bot signals on the chart with automatic data fetching.

        Args:
            fig: Plotly figure to render onto
            data: Market data (OHLCV format)
            signals: Optional manual signal data (if not provided, will auto-fetch)
            **kwargs: Additional rendering parameters including 'symbol' and 'timeframe'

        Returns:
            Updated figure with bot signal overlays
        """
        try:
            # Auto-fetch bot signals if not provided and auto_fetch is enabled
            if signals is None and self.bot_config.auto_fetch_data:
                symbol = kwargs.get('symbol')
                timeframe = kwargs.get('timeframe')

                if not symbol:
                    self.logger.warning("No symbol provided and no manual signals - cannot auto-fetch bot signals")
                    return fig

                # Calculate time range
                end_time = datetime.now()
                start_time = end_time - timedelta(days=self.bot_config.time_window_days)
                time_range = (start_time, end_time)

                # Fetch signals from bots
                signals = self.integration.get_signals_for_chart(
                    symbol=symbol,
                    timeframe=timeframe,
                    bot_filter=self.bot_config.bot_filter,
                    time_range=time_range,
                    signal_types=self.bot_config.signal_types,
                    min_confidence=self.bot_config.confidence_threshold
                )

                if signals.empty:
                    self.logger.info(f"No bot signals found for {symbol}")
                    return fig

                self.logger.info(f"Auto-fetched {len(signals)} bot signals for {symbol}")

            # Enhance signals with bot information if available
            if signals is not None and not signals.empty and self.bot_config.include_bot_info:
                signals = self._enhance_signals_with_bot_info(signals)

            # Use parent render method
            return super().render(fig, data, signals, **kwargs)

        except Exception as e:
            self.logger.error(f"Error rendering bot-integrated signals: {e}")
            # Add error annotation
            fig.add_annotation(
                text=f"Bot Signal Error: {str(e)}",
                x=0.5, y=0.95,
                xref="paper", yref="paper",
                showarrow=False,
                font=dict(color="red", size=10)
            )
            return fig

    def _enhance_signals_with_bot_info(self, signals: pd.DataFrame) -> pd.DataFrame:
        """
        Enhance signals with additional bot information for better visualization.

        Args:
            signals: Signal data

        Returns:
            Enhanced signal data
        """
        if 'bot_name' in signals.columns and 'strategy' in signals.columns:
            # Signals already enhanced
            return signals

        # If we have bot info columns, hover-text enhancement is handled in trace creation
        return signals

    def create_signal_traces(self, signals: pd.DataFrame) -> List[go.Scatter]:
        """
        Create enhanced signal traces with bot information.

        Args:
            signals: Filtered signal data

        Returns:
            List of enhanced Plotly traces
        """
        traces = []

        try:
            if signals.empty:
                return traces

            # Group by strategy if enabled
            if self.bot_config.group_by_strategy and 'strategy' in signals.columns:
                for strategy in signals['strategy'].unique():
                    strategy_signals = signals[signals['strategy'] == strategy]
                    strategy_traces = self._create_strategy_traces(strategy_signals, strategy)
                    traces.extend(strategy_traces)
            else:
                # Use parent method for standard signal grouping
                traces = super().create_signal_traces(signals)

            # Enhance traces with bot information
            if self.bot_config.include_bot_info:
                traces = self._enhance_traces_with_bot_info(traces, signals)

            return traces

        except Exception as e:
            self.logger.error(f"Error creating bot signal traces: {e}")
            error_trace = self.create_error_trace(f"Error displaying bot signals: {str(e)}")
            return [error_trace]

    def _create_strategy_traces(self, signals: pd.DataFrame, strategy: str) -> List[go.Scatter]:
        """
        Create traces grouped by strategy.

        Args:
            signals: Signal data for a specific strategy
            strategy: Strategy name

        Returns:
            List of traces for this strategy
        """
        traces = []

        # Group by signal type within strategy
        for signal_type in signals['signal_type'].unique():
            type_signals = signals[signals['signal_type'] == signal_type]

            if type_signals.empty:
                continue

            # Enhanced hover text with bot and strategy info
            hover_text = []
            for _, signal in type_signals.iterrows():
                hover_parts = [
                    f"Signal: {signal['signal_type'].upper()}",
                    f"Price: ${signal['price']:.4f}",
                    f"Time: {signal['timestamp']}",
                    f"Strategy: {strategy}"
                ]

                if 'confidence' in signal and signal['confidence'] is not None:
                    hover_parts.append(f"Confidence: {signal['confidence']:.1%}")

                if 'bot_name' in signal and signal['bot_name']:
                    hover_parts.append(f"Bot: {signal['bot_name']}")

                if 'bot_status' in signal and signal['bot_status']:
                    hover_parts.append(f"Status: {signal['bot_status']}")

                hover_text.append("<br>".join(hover_parts))

            # Create trace for this signal type in strategy
            trace = go.Scatter(
                x=type_signals['timestamp'],
                y=type_signals['price'],
                mode='markers',
                marker=dict(
                    symbol=self.signal_symbols.get(signal_type, 'circle'),
                    size=self.config.marker_size,
                    color=self.signal_colors.get(signal_type, '#666666'),
                    line=dict(width=1, color='white'),
                    opacity=0.8
                ),
                name=f"{strategy} - {signal_type.upper()}",
                text=hover_text,
                hoverinfo='text',
                showlegend=True,
                legendgroup=f"strategy_{strategy}_{signal_type}"
            )

            traces.append(trace)

        return traces

    def _enhance_traces_with_bot_info(self, traces: List[go.Scatter], signals: pd.DataFrame) -> List[go.Scatter]:
        """
        Enhance existing traces with bot information.

        Args:
            traces: Original traces
            signals: Signal data with bot info

        Returns:
            Enhanced traces
        """
        # This would be implemented to modify hover text of existing traces.
        # For now, return traces as-is since bot info enhancement happens in trace creation.
        return traces

class BotIntegratedTradeLayer(TradeExecutionLayer):
    """
    Trade layer that automatically integrates with the bot management system.
    """

    def __init__(self, config: BotTradeLayerConfig = None):
        """
        Initialize bot-integrated trade layer.

        Args:
            config: Bot trade layer configuration (optional)
        """
        if config is None:
            config = BotTradeLayerConfig(
                name="Bot Trades",
                enabled=True,
                show_pnl=True,
                show_trade_lines=True,
                auto_fetch_data=True,
                active_bots_only=True
            )

        # Convert to base config for parent class
        base_config = TradeLayerConfig(
            name=config.name,
            enabled=config.enabled,
            show_pnl=config.show_pnl,
            show_trade_lines=config.show_trade_lines,
            show_quantity=config.show_quantity,
            show_fees=config.show_fees,
            min_pnl_display=config.min_pnl_display,
            bot_id=config.bot_id,
            trade_marker_size=config.trade_marker_size
        )

        super().__init__(base_config)
        self.bot_config = config
        self.integration = BotSignalLayerIntegration()

        self.logger.info(f"Bot Enhanced Trade Layer: Initialized BotIntegratedTradeLayer: {config.name}")

    def render(self, fig: go.Figure, data: pd.DataFrame, trades: pd.DataFrame = None, **kwargs) -> go.Figure:
        """
        Render bot trades on the chart with automatic data fetching.

        Args:
            fig: Plotly figure to render onto
            data: Market data (OHLCV format)
            trades: Optional manual trade data (if not provided, will auto-fetch)
            **kwargs: Additional rendering parameters including 'symbol' and 'timeframe'

        Returns:
            Updated figure with bot trade overlays
        """
        try:
            # Auto-fetch bot trades if not provided and auto_fetch is enabled
            if trades is None and self.bot_config.auto_fetch_data:
                symbol = kwargs.get('symbol')
                timeframe = kwargs.get('timeframe')

                if not symbol:
                    self.logger.warning("Bot Enhanced Trade Layer: No symbol provided and no manual trades - cannot auto-fetch bot trades")
                    return fig

                # Calculate time range
                end_time = datetime.now()
                start_time = end_time - timedelta(days=self.bot_config.time_window_days)
                time_range = (start_time, end_time)

                # Fetch trades from bots
                trades = self.integration.get_trades_for_chart(
                    symbol=symbol,
                    timeframe=timeframe,
                    bot_filter=self.bot_config.bot_filter,
                    time_range=time_range
                )

                if trades.empty:
                    self.logger.info(f"Bot Enhanced Trade Layer: No bot trades found for {symbol}")
                    return fig

                self.logger.info(f"Bot Enhanced Trade Layer: Auto-fetched {len(trades)} bot trades for {symbol}")

            # Use parent render method
            return super().render(fig, data, trades, **kwargs)

        except Exception as e:
            self.logger.error(f"Bot Enhanced Trade Layer: Error rendering bot-integrated trades: {e}")
            # Add error annotation
            fig.add_annotation(
                text=f"Bot Trade Error: {str(e)}",
                x=0.5, y=0.95,
                xref="paper", yref="paper",
                showarrow=False,
                font=dict(color="red", size=10)
            )
            return fig

class BotMultiLayerIntegration:
    """
    Integration utility for managing multiple bot-related chart layers.
    """

    def __init__(self):
        """Initialize multi-layer bot integration."""
        self.integration = BotSignalLayerIntegration()
        self.logger = logger

    def create_bot_layers_for_symbol(self,
                                     symbol: str,
                                     timeframe: str = None,
                                     bot_filter: BotFilterConfig = None,
                                     include_signals: bool = True,
                                     include_trades: bool = True,
                                     time_window_days: int = 7) -> Dict[str, Any]:
        """
        Create a complete set of bot-integrated layers for a symbol.

        Args:
            symbol: Trading symbol
            timeframe: Chart timeframe (optional)
            bot_filter: Bot filtering configuration
            include_signals: Include signal layer
            include_trades: Include trade layer
            time_window_days: Time window for data

        Returns:
            Dictionary with layer instances and metadata
        """
        layers = {}
        metadata = {}

        try:
            if bot_filter is None:
                bot_filter = BotFilterConfig(symbols=[symbol], active_only=True)

            # Create signal layer
            if include_signals:
                signal_config = BotSignalLayerConfig(
                    name=f"{symbol} Bot Signals",
                    enabled=True,
                    bot_filter=bot_filter,
                    time_window_days=time_window_days,
                    signal_types=['buy', 'sell'],
                    confidence_threshold=0.3,
                    include_bot_info=True
                )

                layers['signals'] = BotIntegratedSignalLayer(signal_config)
                metadata['signals'] = {
                    'layer_type': 'bot_signals',
                    'symbol': symbol,
                    'timeframe': timeframe,
                    'time_window_days': time_window_days
                }

            # Create trade layer
            if include_trades:
                trade_config = BotTradeLayerConfig(
                    name=f"{symbol} Bot Trades",
                    enabled=True,
                    bot_filter=bot_filter,
                    time_window_days=time_window_days,
                    show_pnl=True,
                    show_trade_lines=True,
                    include_bot_info=True
                )

                layers['trades'] = BotIntegratedTradeLayer(trade_config)
                metadata['trades'] = {
                    'layer_type': 'bot_trades',
                    'symbol': symbol,
                    'timeframe': timeframe,
                    'time_window_days': time_window_days
                }

            # Get bot summary for metadata
            bot_summary = self.integration.get_bot_summary_stats()
            metadata['bot_summary'] = bot_summary

            self.logger.info(f"Bot Enhanced Multi Layer Integration: Created {len(layers)} bot layers for {symbol}")

            return {
                'layers': layers,
                'metadata': metadata,
                'symbol': symbol,
                'timeframe': timeframe,
                'success': True
            }

        except Exception as e:
            self.logger.error(f"Bot Enhanced Multi Layer Integration: Error creating bot layers for {symbol}: {e}")
            return {
                'layers': {},
                'metadata': {},
                'symbol': symbol,
                'timeframe': timeframe,
                'success': False,
                'error': str(e)
            }

    def create_strategy_comparison_layers(self,
                                          symbol: str,
                                          strategies: List[str],
                                          timeframe: str = None,
                                          time_window_days: int = 7) -> Dict[str, Any]:
        """
        Create layers to compare different strategies for a symbol.

        Args:
            symbol: Trading symbol
            strategies: List of strategy names to compare
            timeframe: Chart timeframe (optional)
            time_window_days: Time window for data

        Returns:
            Dictionary with strategy comparison layers
        """
        layers = {}
        metadata = {}

        try:
            for strategy in strategies:
                bot_filter = BotFilterConfig(
                    symbols=[symbol],
                    strategies=[strategy],
                    active_only=False  # Include all bots for comparison
                )

                # Create signal layer for this strategy
                signal_config = BotSignalLayerConfig(
                    name=f"{strategy} Signals",
                    enabled=True,
                    bot_filter=bot_filter,
                    time_window_days=time_window_days,
                    group_by_strategy=True,
                    include_bot_info=True
                )

                layers[f"{strategy}_signals"] = BotIntegratedSignalLayer(signal_config)

                # Create trade layer for this strategy
                trade_config = BotTradeLayerConfig(
                    name=f"{strategy} Trades",
                    enabled=True,
                    bot_filter=bot_filter,
                    time_window_days=time_window_days,
                    group_by_strategy=True,
                    include_bot_info=True
                )

                layers[f"{strategy}_trades"] = BotIntegratedTradeLayer(trade_config)

                metadata[strategy] = {
                    'strategy': strategy,
                    'symbol': symbol,
                    'timeframe': timeframe,
                    'layer_count': 2
                }

            self.logger.info(f"Bot Enhanced Multi Layer Integration: Created strategy comparison layers for {len(strategies)} strategies on {symbol}")

            return {
                'layers': layers,
                'metadata': metadata,
                'symbol': symbol,
                'strategies': strategies,
                'success': True
            }

        except Exception as e:
            self.logger.error(f"Bot Enhanced Multi Layer Integration: Error creating strategy comparison layers: {e}")
            return {
                'layers': {},
                'metadata': {},
                'symbol': symbol,
                'strategies': strategies,
                'success': False,
                'error': str(e)
            }


# Global instance for easy access
bot_multi_layer = BotMultiLayerIntegration()


# Convenience functions for creating bot-integrated layers

def create_bot_signal_layer(symbol: str,
                            timeframe: str = None,
                            active_only: bool = True,
                            confidence_threshold: float = 0.3,
                            time_window_days: int = 7,
                            **kwargs) -> BotIntegratedSignalLayer:
    """
    Create a bot-integrated signal layer for a symbol.

    Args:
        symbol: Trading symbol
        timeframe: Chart timeframe (optional)
        active_only: Only include active bots
        confidence_threshold: Minimum confidence threshold
        time_window_days: Time window for data fetching
        **kwargs: Additional configuration options

    Returns:
        Configured BotIntegratedSignalLayer
    """
    bot_filter = BotFilterConfig(
        symbols=[symbol],
        active_only=active_only
    )

    config = BotSignalLayerConfig(
        name=f"{symbol} Bot Signals",
        enabled=True,
        bot_filter=bot_filter,
        confidence_threshold=confidence_threshold,
        time_window_days=time_window_days,
        signal_types=kwargs.get('signal_types', ['buy', 'sell']),
        include_bot_info=kwargs.get('include_bot_info', True),
        group_by_strategy=kwargs.get('group_by_strategy', False),
        **{k: v for k, v in kwargs.items() if k not in [
            'signal_types', 'include_bot_info', 'group_by_strategy'
        ]}
    )

    return BotIntegratedSignalLayer(config)


def create_bot_trade_layer(symbol: str,
                           timeframe: str = None,
                           active_only: bool = True,
                           show_pnl: bool = True,
                           time_window_days: int = 7,
                           **kwargs) -> BotIntegratedTradeLayer:
    """
    Create a bot-integrated trade layer for a symbol.

    Args:
        symbol: Trading symbol
        timeframe: Chart timeframe (optional)
        active_only: Only include active bots
        show_pnl: Show profit/loss information
        time_window_days: Time window for data fetching
        **kwargs: Additional configuration options

    Returns:
        Configured BotIntegratedTradeLayer
    """
    bot_filter = BotFilterConfig(
        symbols=[symbol],
        active_only=active_only
    )

    config = BotTradeLayerConfig(
        name=f"{symbol} Bot Trades",
        enabled=True,
        bot_filter=bot_filter,
        show_pnl=show_pnl,
        time_window_days=time_window_days,
        show_trade_lines=kwargs.get('show_trade_lines', True),
        include_bot_info=kwargs.get('include_bot_info', True),
        group_by_strategy=kwargs.get('group_by_strategy', False),
        **{k: v for k, v in kwargs.items() if k not in [
            'show_trade_lines', 'include_bot_info', 'group_by_strategy'
        ]}
    )

    return BotIntegratedTradeLayer(config)


def create_complete_bot_layers(symbol: str,
                               timeframe: str = None,
                               active_only: bool = True,
                               time_window_days: int = 7) -> Dict[str, Any]:
    """
    Create a complete set of bot-integrated layers for a symbol.

    Args:
        symbol: Trading symbol
        timeframe: Chart timeframe (optional)
        active_only: Only include active bots
        time_window_days: Time window for data fetching

    Returns:
        Dictionary with signal and trade layers
    """
    return bot_multi_layer.create_bot_layers_for_symbol(
        symbol=symbol,
        timeframe=timeframe,
        bot_filter=BotFilterConfig(symbols=[symbol], active_only=active_only),
        time_window_days=time_window_days
    )
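For reference, a short usage sketch of the convenience entry points defined above (the symbol, timeframe, and window values are illustrative):

    result = create_complete_bot_layers("BTCUSDT", timeframe="1h", time_window_days=7)
    if result['success']:
        signal_layer = result['layers']['signals']   # BotIntegratedSignalLayer
        trade_layer = result['layers']['trades']     # BotIntegratedTradeLayer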
737
components/charts/layers/bot_integration.py
Normal file
@@ -0,0 +1,737 @@
"""
Bot Management Integration for Chart Signal Layers

This module provides integration points between the signal layer system and the bot management
system, including data fetching utilities, bot filtering, and integration helpers.
"""

import pandas as pd
from typing import Dict, Any, Optional, List, Union, Tuple
from dataclasses import dataclass
from datetime import datetime, timedelta
from decimal import Decimal

from database.connection import get_session
from database.models import Bot, Signal, Trade, BotPerformance
from database.operations import DatabaseOperationError
from utils.logger import get_logger

# Initialize logger
logger = get_logger("default_logger")


@dataclass
class BotFilterConfig:
    """Configuration for filtering bot data for chart layers"""
    bot_ids: Optional[List[int]] = None      # Specific bot IDs to include
    bot_names: Optional[List[str]] = None    # Specific bot names to include
    strategies: Optional[List[str]] = None   # Specific strategies to include
    symbols: Optional[List[str]] = None      # Specific symbols to include
    statuses: Optional[List[str]] = None     # Bot statuses to include
    date_range: Optional[Tuple[datetime, datetime]] = None  # Date range filter
    active_only: bool = False                # Only include active bots

    def __post_init__(self):
        if self.statuses is None:
            self.statuses = ['active', 'inactive', 'paused']  # Exclude 'error' by default
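For illustration, a filter that keeps only active BTCUSDT bots running a made-up 'ema_cross' strategy (field names exactly as defined above):

btc_filter = BotFilterConfig(
    symbols=["BTCUSDT"],       # restrict to one symbol
    strategies=["ema_cross"],  # hypothetical strategy name
    active_only=True,          # implies status == 'active'
)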
class BotDataService:
    """
    Service for fetching bot-related data for chart layers.
    """

    def __init__(self):
        """Initialize bot data service."""
        self.logger = logger

    def get_bots(self, filter_config: BotFilterConfig = None) -> pd.DataFrame:
        """
        Get bot information based on filter configuration.

        Args:
            filter_config: Filter configuration (optional)

        Returns:
            DataFrame with bot information
        """
        try:
            if filter_config is None:
                filter_config = BotFilterConfig()

            with get_session() as session:
                query = session.query(Bot)

                # Apply filters
                if filter_config.bot_ids:
                    query = query.filter(Bot.id.in_(filter_config.bot_ids))

                if filter_config.bot_names:
                    query = query.filter(Bot.name.in_(filter_config.bot_names))

                if filter_config.strategies:
                    query = query.filter(Bot.strategy_name.in_(filter_config.strategies))

                if filter_config.symbols:
                    query = query.filter(Bot.symbol.in_(filter_config.symbols))

                if filter_config.statuses:
                    query = query.filter(Bot.status.in_(filter_config.statuses))

                if filter_config.active_only:
                    query = query.filter(Bot.status == 'active')

                # Execute query
                bots = query.all()

                # Convert to DataFrame
                bot_data = []
                for bot in bots:
                    bot_data.append({
                        'id': bot.id,
                        'name': bot.name,
                        'strategy_name': bot.strategy_name,
                        'symbol': bot.symbol,
                        'timeframe': bot.timeframe,
                        'status': bot.status,
                        'config_file': bot.config_file,
                        'virtual_balance': float(bot.virtual_balance) if bot.virtual_balance else 0.0,
                        'current_balance': float(bot.current_balance) if bot.current_balance else 0.0,
                        'pnl': float(bot.pnl) if bot.pnl else 0.0,
                        'is_active': bot.is_active,
                        'last_heartbeat': bot.last_heartbeat,
                        'created_at': bot.created_at,
                        'updated_at': bot.updated_at
                    })

                df = pd.DataFrame(bot_data)
                self.logger.info(f"Bot Integration: Retrieved {len(df)} bots with filters: {filter_config}")

                return df

        except Exception as e:
            self.logger.error(f"Bot Integration: Error retrieving bots: {e}")
            raise DatabaseOperationError(f"Failed to retrieve bots: {e}")
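A sketch of the intended call pattern, assuming a populated database behind get_session():

service = BotDataService()
active_bots = service.get_bots(BotFilterConfig(active_only=True))
print(active_bots[['id', 'name', 'symbol', 'pnl']].head())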
    def get_signals_for_bots(self,
                             bot_ids: Union[int, List[int]] = None,
                             start_time: datetime = None,
                             end_time: datetime = None,
                             signal_types: List[str] = None,
                             min_confidence: float = 0.0) -> pd.DataFrame:
        """
        Get signals for specific bots or all bots.

        Args:
            bot_ids: Bot ID(s) to fetch signals for (None for all bots)
            start_time: Start time for signal filtering
            end_time: End time for signal filtering
            signal_types: Signal types to include (['buy', 'sell', 'hold'])
            min_confidence: Minimum confidence threshold

        Returns:
            DataFrame with signal data
        """
        try:
            # Default time range if not provided
            if end_time is None:
                end_time = datetime.now()
            if start_time is None:
                start_time = end_time - timedelta(days=7)  # Last 7 days by default

            # Normalize bot_ids to list
            if isinstance(bot_ids, int):
                bot_ids = [bot_ids]

            with get_session() as session:
                query = session.query(Signal)

                # Apply filters
                if bot_ids is not None:
                    query = query.filter(Signal.bot_id.in_(bot_ids))

                query = query.filter(
                    Signal.timestamp >= start_time,
                    Signal.timestamp <= end_time
                )

                if signal_types:
                    query = query.filter(Signal.signal_type.in_(signal_types))

                if min_confidence > 0:
                    query = query.filter(Signal.confidence >= min_confidence)

                # Order by timestamp
                query = query.order_by(Signal.timestamp.asc())

                # Execute query
                signals = query.all()

                # Convert to DataFrame
                signal_data = []
                for signal in signals:
                    signal_data.append({
                        'id': signal.id,
                        'bot_id': signal.bot_id,
                        'timestamp': signal.timestamp,
                        'signal_type': signal.signal_type,
                        'price': float(signal.price) if signal.price else None,
                        'confidence': float(signal.confidence) if signal.confidence else None,
                        'indicators': signal.indicators,  # JSONB data
                        'created_at': signal.created_at
                    })

                df = pd.DataFrame(signal_data)
                self.logger.info(f"Bot Integration: Retrieved {len(df)} signals for bots: {bot_ids}")

                return df

        except Exception as e:
            self.logger.error(f"Bot Integration: Error retrieving signals: {e}")
            raise DatabaseOperationError(f"Failed to retrieve signals: {e}")
    def get_trades_for_bots(self,
                            bot_ids: Union[int, List[int]] = None,
                            start_time: datetime = None,
                            end_time: datetime = None,
                            sides: List[str] = None) -> pd.DataFrame:
        """
        Get trades for specific bots or all bots.

        Args:
            bot_ids: Bot ID(s) to fetch trades for (None for all bots)
            start_time: Start time for trade filtering
            end_time: End time for trade filtering
            sides: Trade sides to include (['buy', 'sell'])

        Returns:
            DataFrame with trade data
        """
        try:
            # Default time range if not provided
            if end_time is None:
                end_time = datetime.now()
            if start_time is None:
                start_time = end_time - timedelta(days=7)  # Last 7 days by default

            # Normalize bot_ids to list
            if isinstance(bot_ids, int):
                bot_ids = [bot_ids]

            with get_session() as session:
                query = session.query(Trade)

                # Apply filters
                if bot_ids is not None:
                    query = query.filter(Trade.bot_id.in_(bot_ids))

                query = query.filter(
                    Trade.timestamp >= start_time,
                    Trade.timestamp <= end_time
                )

                if sides:
                    query = query.filter(Trade.side.in_(sides))

                # Order by timestamp
                query = query.order_by(Trade.timestamp.asc())

                # Execute query
                trades = query.all()

                # Convert to DataFrame
                trade_data = []
                for trade in trades:
                    trade_data.append({
                        'id': trade.id,
                        'bot_id': trade.bot_id,
                        'signal_id': trade.signal_id,
                        'timestamp': trade.timestamp,
                        'side': trade.side,
                        'price': float(trade.price),
                        'quantity': float(trade.quantity),
                        'fees': float(trade.fees),
                        'pnl': float(trade.pnl) if trade.pnl else None,
                        'balance_after': float(trade.balance_after) if trade.balance_after else None,
                        'trade_value': float(trade.trade_value),
                        'net_pnl': float(trade.net_pnl),
                        'created_at': trade.created_at
                    })

                df = pd.DataFrame(trade_data)
                self.logger.info(f"Bot Integration: Retrieved {len(df)} trades for bots: {bot_ids}")

                return df

        except Exception as e:
            self.logger.error(f"Bot Integration: Error retrieving trades: {e}")
            raise DatabaseOperationError(f"Failed to retrieve trades: {e}")
    def get_bot_performance(self,
                            bot_ids: Union[int, List[int]] = None,
                            start_time: datetime = None,
                            end_time: datetime = None) -> pd.DataFrame:
        """
        Get performance data for specific bots.

        Args:
            bot_ids: Bot ID(s) to fetch performance for (None for all bots)
            start_time: Start time for performance filtering
            end_time: End time for performance filtering

        Returns:
            DataFrame with performance data
        """
        try:
            # Default time range if not provided
            if end_time is None:
                end_time = datetime.now()
            if start_time is None:
                start_time = end_time - timedelta(days=30)  # Last 30 days by default

            # Normalize bot_ids to list
            if isinstance(bot_ids, int):
                bot_ids = [bot_ids]

            with get_session() as session:
                query = session.query(BotPerformance)

                # Apply filters
                if bot_ids is not None:
                    query = query.filter(BotPerformance.bot_id.in_(bot_ids))

                query = query.filter(
                    BotPerformance.timestamp >= start_time,
                    BotPerformance.timestamp <= end_time
                )

                # Order by timestamp
                query = query.order_by(BotPerformance.timestamp.asc())

                # Execute query
                performance_records = query.all()

                # Convert to DataFrame
                performance_data = []
                for perf in performance_records:
                    performance_data.append({
                        'id': perf.id,
                        'bot_id': perf.bot_id,
                        'timestamp': perf.timestamp,
                        'total_value': float(perf.total_value),
                        'cash_balance': float(perf.cash_balance),
                        'crypto_balance': float(perf.crypto_balance),
                        'total_trades': perf.total_trades,
                        'winning_trades': perf.winning_trades,
                        'total_fees': float(perf.total_fees),
                        'win_rate': perf.win_rate,
                        'portfolio_allocation': perf.portfolio_allocation,
                        'created_at': perf.created_at
                    })

                df = pd.DataFrame(performance_data)
                self.logger.info(f"Bot Integration: Retrieved {len(df)} performance records for bots: {bot_ids}")

                return df

        except Exception as e:
            self.logger.error(f"Bot Integration: Error retrieving bot performance: {e}")
            raise DatabaseOperationError(f"Failed to retrieve bot performance: {e}")
class BotSignalLayerIntegration:
    """
    Integration utilities for signal layers with bot management system.
    """

    def __init__(self):
        """Initialize bot signal layer integration."""
        self.data_service = BotDataService()
        self.logger = logger

    def get_signals_for_chart(self,
                              symbol: str,
                              timeframe: str = None,
                              bot_filter: BotFilterConfig = None,
                              time_range: Tuple[datetime, datetime] = None,
                              signal_types: List[str] = None,
                              min_confidence: float = 0.0) -> pd.DataFrame:
        """
        Get signals filtered by chart context (symbol, timeframe) and bot criteria.

        Args:
            symbol: Trading symbol for the chart
            timeframe: Chart timeframe (optional)
            bot_filter: Bot filtering configuration
            time_range: (start_time, end_time) tuple
            signal_types: Signal types to include
            min_confidence: Minimum confidence threshold

        Returns:
            DataFrame with signals ready for chart rendering
        """
        try:
            # Get relevant bots for this symbol/timeframe
            if bot_filter is None:
                bot_filter = BotFilterConfig()

            # Add symbol filter
            if bot_filter.symbols is None:
                bot_filter.symbols = [symbol]
            elif symbol not in bot_filter.symbols:
                bot_filter.symbols.append(symbol)

            # Get bots matching criteria
            bots_df = self.data_service.get_bots(bot_filter)

            if bots_df.empty:
                self.logger.info(f"No bots found for symbol {symbol}")
                return pd.DataFrame()

            bot_ids = bots_df['id'].tolist()

            # Get time range
            start_time, end_time = time_range if time_range else (None, None)

            # Get signals for these bots
            signals_df = self.data_service.get_signals_for_bots(
                bot_ids=bot_ids,
                start_time=start_time,
                end_time=end_time,
                signal_types=signal_types,
                min_confidence=min_confidence
            )

            # Enrich signals with bot information
            if not signals_df.empty:
                signals_df = signals_df.merge(
                    bots_df[['id', 'name', 'strategy_name', 'status']],
                    left_on='bot_id',
                    right_on='id',
                    suffixes=('', '_bot')
                )

                # Add metadata fields for chart rendering
                signals_df['bot_name'] = signals_df['name']
                signals_df['strategy'] = signals_df['strategy_name']
                signals_df['bot_status'] = signals_df['status']

                # Clean up duplicate columns
                signals_df = signals_df.drop(['id_bot', 'name', 'strategy_name', 'status'], axis=1)

            self.logger.info(f"Bot Integration: Retrieved {len(signals_df)} signals for chart {symbol} from {len(bot_ids)} bots")
            return signals_df

        except Exception as e:
            self.logger.error(f"Bot Integration: Error getting signals for chart: {e}")
            return pd.DataFrame()
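The bot-info enrichment above is a plain pandas merge; a toy illustration of the same pattern on two in-memory frames:

signals = pd.DataFrame({'bot_id': [1, 1, 2], 'signal_type': ['buy', 'sell', 'buy']})
bots = pd.DataFrame({'id': [1, 2], 'name': ['alpha', 'beta'],
                     'strategy_name': ['ema_cross', 'rsi_rev'], 'status': ['active', 'paused']})
merged = signals.merge(bots, left_on='bot_id', right_on='id', suffixes=('', '_bot'))
merged['bot_name'] = merged['name']  # copy into chart-facing columns, then drop the originals
merged = merged.drop(['id', 'name', 'strategy_name', 'status'], axis=1)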
    def get_trades_for_chart(self,
                             symbol: str,
                             timeframe: str = None,
                             bot_filter: BotFilterConfig = None,
                             time_range: Tuple[datetime, datetime] = None,
                             sides: List[str] = None) -> pd.DataFrame:
        """
        Get trades filtered by chart context (symbol, timeframe) and bot criteria.

        Args:
            symbol: Trading symbol for the chart
            timeframe: Chart timeframe (optional)
            bot_filter: Bot filtering configuration
            time_range: (start_time, end_time) tuple
            sides: Trade sides to include

        Returns:
            DataFrame with trades ready for chart rendering
        """
        try:
            # Get relevant bots for this symbol/timeframe
            if bot_filter is None:
                bot_filter = BotFilterConfig()

            # Add symbol filter
            if bot_filter.symbols is None:
                bot_filter.symbols = [symbol]
            elif symbol not in bot_filter.symbols:
                bot_filter.symbols.append(symbol)

            # Get bots matching criteria
            bots_df = self.data_service.get_bots(bot_filter)

            if bots_df.empty:
                self.logger.info(f"No bots found for symbol {symbol}")
                return pd.DataFrame()

            bot_ids = bots_df['id'].tolist()

            # Get time range
            start_time, end_time = time_range if time_range else (None, None)

            # Get trades for these bots
            trades_df = self.data_service.get_trades_for_bots(
                bot_ids=bot_ids,
                start_time=start_time,
                end_time=end_time,
                sides=sides
            )

            # Enrich trades with bot information
            if not trades_df.empty:
                trades_df = trades_df.merge(
                    bots_df[['id', 'name', 'strategy_name', 'status']],
                    left_on='bot_id',
                    right_on='id',
                    suffixes=('', '_bot')
                )

                # Add metadata fields for chart rendering
                trades_df['bot_name'] = trades_df['name']
                trades_df['strategy'] = trades_df['strategy_name']
                trades_df['bot_status'] = trades_df['status']

                # Clean up duplicate columns
                trades_df = trades_df.drop(['id_bot', 'name', 'strategy_name', 'status'], axis=1)

            self.logger.info(f"Bot Integration: Retrieved {len(trades_df)} trades for chart {symbol} from {len(bot_ids)} bots")
            return trades_df

        except Exception as e:
            self.logger.error(f"Bot Integration: Error getting trades for chart: {e}")
            return pd.DataFrame()
    def get_bot_summary_stats(self, bot_ids: List[int] = None) -> Dict[str, Any]:
        """
        Get summary statistics for bots.

        Args:
            bot_ids: Specific bot IDs (None for all bots)

        Returns:
            Dictionary with summary statistics
        """
        try:
            # Get bots
            bot_filter = BotFilterConfig(bot_ids=bot_ids) if bot_ids else BotFilterConfig()
            bots_df = self.data_service.get_bots(bot_filter)

            if bots_df.empty:
                return {
                    'total_bots': 0,
                    'active_bots': 0,
                    'total_balance': 0.0,
                    'total_pnl': 0.0,
                    'strategies': [],
                    'symbols': []
                }

            # Calculate statistics
            stats = {
                'total_bots': len(bots_df),
                'active_bots': len(bots_df[bots_df['status'] == 'active']),
                'inactive_bots': len(bots_df[bots_df['status'] == 'inactive']),
                'paused_bots': len(bots_df[bots_df['status'] == 'paused']),
                'error_bots': len(bots_df[bots_df['status'] == 'error']),
                'total_virtual_balance': bots_df['virtual_balance'].sum(),
                'total_current_balance': bots_df['current_balance'].sum(),
                'total_pnl': bots_df['pnl'].sum(),
                'average_pnl': bots_df['pnl'].mean(),
                'best_performing_bot': None,
                'worst_performing_bot': None,
                'strategies': bots_df['strategy_name'].unique().tolist(),
                'symbols': bots_df['symbol'].unique().tolist(),
                'timeframes': bots_df['timeframe'].unique().tolist()
            }

            # Get best and worst performing bots
            if not bots_df.empty:
                best_bot = bots_df.loc[bots_df['pnl'].idxmax()]
                worst_bot = bots_df.loc[bots_df['pnl'].idxmin()]

                stats['best_performing_bot'] = {
                    'id': best_bot['id'],
                    'name': best_bot['name'],
                    'pnl': best_bot['pnl']
                }

                stats['worst_performing_bot'] = {
                    'id': worst_bot['id'],
                    'name': worst_bot['name'],
                    'pnl': worst_bot['pnl']
                }

            return stats

        except Exception as e:
            self.logger.error(f"Bot Integration: Error getting bot summary stats: {e}")
            return {}
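Illustrative use of the summary (keys as built above; actual values depend on database contents):

integration = BotSignalLayerIntegration()
stats = integration.get_bot_summary_stats()
print(f"{stats.get('active_bots', 0)}/{stats.get('total_bots', 0)} bots active, "
      f"total PnL {stats.get('total_pnl', 0.0):.2f}")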
# Global instances for easy access
bot_data_service = BotDataService()
bot_integration = BotSignalLayerIntegration()


# Convenience functions for common use cases

def get_active_bot_signals(symbol: str,
                           timeframe: str = None,
                           days_back: int = 7,
                           signal_types: List[str] = None,
                           min_confidence: float = 0.3) -> pd.DataFrame:
    """
    Get signals from active bots for a specific symbol.

    Args:
        symbol: Trading symbol
        timeframe: Chart timeframe (optional)
        days_back: Number of days to look back
        signal_types: Signal types to include
        min_confidence: Minimum confidence threshold

    Returns:
        DataFrame with signals from active bots
    """
    end_time = datetime.now()
    start_time = end_time - timedelta(days=days_back)

    bot_filter = BotFilterConfig(
        symbols=[symbol],
        active_only=True
    )

    return bot_integration.get_signals_for_chart(
        symbol=symbol,
        timeframe=timeframe,
        bot_filter=bot_filter,
        time_range=(start_time, end_time),
        signal_types=signal_types,
        min_confidence=min_confidence
    )
def get_active_bot_trades(symbol: str,
                          timeframe: str = None,
                          days_back: int = 7,
                          sides: List[str] = None) -> pd.DataFrame:
    """
    Get trades from active bots for a specific symbol.

    Args:
        symbol: Trading symbol
        timeframe: Chart timeframe (optional)
        days_back: Number of days to look back
        sides: Trade sides to include

    Returns:
        DataFrame with trades from active bots
    """
    end_time = datetime.now()
    start_time = end_time - timedelta(days=days_back)

    bot_filter = BotFilterConfig(
        symbols=[symbol],
        active_only=True
    )

    return bot_integration.get_trades_for_chart(
        symbol=symbol,
        timeframe=timeframe,
        bot_filter=bot_filter,
        time_range=(start_time, end_time),
        sides=sides
    )
def get_bot_signals_by_strategy(strategy_name: str,
                                symbol: str = None,
                                days_back: int = 7,
                                signal_types: List[str] = None) -> pd.DataFrame:
    """
    Get signals from bots using a specific strategy.

    Args:
        strategy_name: Strategy name to filter by
        symbol: Trading symbol (optional)
        days_back: Number of days to look back
        signal_types: Signal types to include

    Returns:
        DataFrame with signals from strategy bots
    """
    end_time = datetime.now()
    start_time = end_time - timedelta(days=days_back)

    bot_filter = BotFilterConfig(
        strategies=[strategy_name],
        symbols=[symbol] if symbol else None
    )

    # Get bots for this strategy
    bots_df = bot_data_service.get_bots(bot_filter)

    if bots_df.empty:
        return pd.DataFrame()

    bot_ids = bots_df['id'].tolist()

    return bot_data_service.get_signals_for_bots(
        bot_ids=bot_ids,
        start_time=start_time,
        end_time=end_time,
        signal_types=signal_types
    )
def get_bot_performance_summary(bot_id: int = None,
                                days_back: int = 30) -> Dict[str, Any]:
    """
    Get performance summary for a specific bot or all bots.

    Args:
        bot_id: Specific bot ID (None for all bots)
        days_back: Number of days to analyze

    Returns:
        Dictionary with performance summary
    """
    end_time = datetime.now()
    start_time = end_time - timedelta(days=days_back)

    # Get bot summary stats
    bot_ids = [bot_id] if bot_id else None
    bot_stats = bot_integration.get_bot_summary_stats(bot_ids)

    # Get signals and trades for performance analysis
    signals_df = bot_data_service.get_signals_for_bots(
        bot_ids=bot_ids,
        start_time=start_time,
        end_time=end_time
    )

    trades_df = bot_data_service.get_trades_for_bots(
        bot_ids=bot_ids,
        start_time=start_time,
        end_time=end_time
    )

    # Calculate additional performance metrics
    performance = {
        'bot_stats': bot_stats,
        'signal_count': len(signals_df),
        'trade_count': len(trades_df),
        'signals_by_type': signals_df['signal_type'].value_counts().to_dict() if not signals_df.empty else {},
        'trades_by_side': trades_df['side'].value_counts().to_dict() if not trades_df.empty else {},
        'total_trade_volume': trades_df['trade_value'].sum() if not trades_df.empty else 0.0,
        'total_fees': trades_df['fees'].sum() if not trades_df.empty else 0.0,
        'profitable_trades': len(trades_df[trades_df['pnl'] > 0]) if not trades_df.empty else 0,
        'losing_trades': len(trades_df[trades_df['pnl'] < 0]) if not trades_df.empty else 0,
        'win_rate': (len(trades_df[trades_df['pnl'] > 0]) / len(trades_df) * 100) if not trades_df.empty else 0.0,
        'time_range': {
            'start': start_time.isoformat(),
            'end': end_time.isoformat(),
            'days': days_back
        }
    }

    return performance
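For example, assuming a populated database ('win_rate' here is the percentage of trades in the window with positive pnl):

summary = get_bot_performance_summary(days_back=30)
print(summary['trade_count'], summary['win_rate'])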
713
components/charts/layers/indicators.py
Normal file
@ -0,0 +1,713 @@
"""
Technical Indicator Chart Layers

This module implements overlay indicator layers for technical analysis visualization
including SMA, EMA, and Bollinger Bands with comprehensive error handling.
"""

import pandas as pd
import plotly.graph_objects as go
from decimal import Decimal  # required by prepare_indicator_data below
from typing import Dict, Any, Optional, List, Union, Callable
from dataclasses import dataclass

from ..error_handling import (
    ChartErrorHandler, ChartError, ErrorSeverity, DataRequirements,
    InsufficientDataError, DataValidationError, IndicatorCalculationError,
    ErrorRecoveryStrategies, create_error_annotation, get_error_message
)

from .base import BaseLayer, LayerConfig
from data.common.indicators import TechnicalIndicators
from data.common.data_types import OHLCVCandle
from components.charts.utils import get_indicator_colors
from utils.logger import get_logger

# Initialize logger
logger = get_logger("default_logger")
@dataclass
class IndicatorLayerConfig(LayerConfig):
    """Extended configuration for indicator layers"""
    id: str = ""
    indicator_type: str = ""           # e.g., 'sma', 'ema', 'rsi'
    parameters: Dict[str, Any] = None  # Indicator-specific parameters
    line_width: int = 2
    opacity: float = 1.0
    show_middle_line: bool = True      # For indicators like Bollinger Bands

    def __post_init__(self):
        super().__post_init__()
        if self.parameters is None:
            self.parameters = {}
class BaseIndicatorLayer(BaseLayer):
    """
    Enhanced base class for all indicator layers with comprehensive error handling.
    """

    def __init__(self, config: IndicatorLayerConfig):
        """
        Initialize base indicator layer.

        Args:
            config: Indicator layer configuration
        """
        super().__init__(config)
        self.indicators = TechnicalIndicators()
        self.colors = get_indicator_colors()
        self.calculated_data = None
        self.calculation_errors = []

    def prepare_indicator_data(self, data: pd.DataFrame) -> List[OHLCVCandle]:
        """
        Convert DataFrame to OHLCVCandle format for indicator calculations.

        Args:
            data: Chart data (OHLCV format)

        Returns:
            List of OHLCVCandle objects
        """
        try:
            candles = []
            for _, row in data.iterrows():
                # Calculate start_time (assuming 1-minute candles for now)
                start_time = row['timestamp']
                end_time = row['timestamp']

                candle = OHLCVCandle(
                    symbol="BTCUSDT",  # Default symbol for testing
                    timeframe="1m",    # Default timeframe
                    start_time=start_time,
                    end_time=end_time,
                    open=Decimal(str(row['open'])),
                    high=Decimal(str(row['high'])),
                    low=Decimal(str(row['low'])),
                    close=Decimal(str(row['close'])),
                    volume=Decimal(str(row.get('volume', 0))),
                    trade_count=1,     # Default trade count
                    exchange="test",   # Test exchange
                    is_complete=True   # Mark as complete for testing
                )
                candles.append(candle)

            return candles

        except Exception as e:
            self.logger.error(f"Indicators: Error preparing indicator data: {e}")
            return []
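A quick sketch of the expected input shape (a synthetic three-row frame; `sma_layer` stands in for any already-constructed BaseIndicatorLayer subclass):

from datetime import datetime

df = pd.DataFrame({
    'timestamp': [datetime(2024, 1, 1, 0, m) for m in range(3)],
    'open': [100.0, 101.0, 102.0],
    'high': [101.0, 102.0, 103.0],
    'low': [99.5, 100.5, 101.5],
    'close': [100.5, 101.5, 102.5],
    'volume': [10, 12, 9],
})
candles = sma_layer.prepare_indicator_data(df)  # -> list of OHLCVCandle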
    def validate_indicator_data(self, data: Union[pd.DataFrame, List[Dict[str, Any]]],
                                required_columns: List[str] = None) -> bool:
        """
        Validate data specifically for indicator calculations.

        Args:
            data: Input data
            required_columns: Required columns for this indicator

        Returns:
            True if data is valid for indicator calculation
        """
        try:
            # Use parent validation first
            if not super().validate_data(data):
                return False

            # Convert to DataFrame if needed
            if isinstance(data, list):
                df = pd.DataFrame(data)
            else:
                df = data.copy()

            # Check required columns for indicator
            if required_columns:
                missing_columns = [col for col in required_columns if col not in df.columns]
                if missing_columns:
                    error = ChartError(
                        code='MISSING_INDICATOR_COLUMNS',
                        message=f'Missing columns for {self.config.indicator_type}: {missing_columns}',
                        severity=ErrorSeverity.ERROR,
                        context={
                            'indicator_type': self.config.indicator_type,
                            'missing_columns': missing_columns,
                            'available_columns': list(df.columns)
                        },
                        recovery_suggestion=f'Ensure data contains required columns: {required_columns}'
                    )
                    self.error_handler.errors.append(error)
                    return False

            # Check data sufficiency for indicator
            indicator_config = {
                'type': self.config.indicator_type,
                'parameters': self.config.parameters or {}
            }

            indicator_error = DataRequirements.check_indicator_requirements(
                self.config.indicator_type,
                len(df),
                self.config.parameters or {}
            )

            if indicator_error.severity == ErrorSeverity.WARNING:
                self.error_handler.warnings.append(indicator_error)
            elif indicator_error.severity in [ErrorSeverity.ERROR, ErrorSeverity.CRITICAL]:
                self.error_handler.errors.append(indicator_error)
                return False

            return True

        except Exception as e:
            self.logger.error(f"Indicators: Error validating indicator data: {e}")
            error = ChartError(
                code='INDICATOR_VALIDATION_ERROR',
                message=f'Indicator validation failed: {str(e)}',
                severity=ErrorSeverity.ERROR,
                context={'exception': str(e), 'indicator_type': self.config.indicator_type}
            )
            self.error_handler.errors.append(error)
            return False
    def safe_calculate_indicator(self, data: pd.DataFrame,
                                 calculation_func: Callable,
                                 **kwargs) -> Optional[pd.DataFrame]:
        """
        Safely calculate indicator with error handling.

        Args:
            data: Input data
            calculation_func: Function to calculate indicator
            **kwargs: Additional arguments for calculation

        Returns:
            Calculated indicator data or None if failed
        """
        try:
            # Validate data first
            if not self.validate_indicator_data(data):
                return None

            # Try calculation with recovery strategies
            result = calculation_func(data, **kwargs)

            # Validate result
            if result is None or (isinstance(result, pd.DataFrame) and result.empty):
                error = ChartError(
                    code='EMPTY_INDICATOR_RESULT',
                    message=f'Indicator calculation returned no data: {self.config.indicator_type}',
                    severity=ErrorSeverity.WARNING,
                    context={'indicator_type': self.config.indicator_type, 'input_length': len(data)},
                    recovery_suggestion='Check calculation parameters or input data range'
                )
                self.error_handler.warnings.append(error)
                return None

            # Check for sufficient calculated data
            if isinstance(result, pd.DataFrame) and len(result) < len(data) * 0.1:
                error = ChartError(
                    code='INSUFFICIENT_INDICATOR_OUTPUT',
                    message=f'Very few indicator values calculated: {len(result)}/{len(data)}',
                    severity=ErrorSeverity.WARNING,
                    context={
                        'indicator_type': self.config.indicator_type,
                        'output_length': len(result),
                        'input_length': len(data)
                    },
                    recovery_suggestion='Consider adjusting indicator parameters'
                )
                self.error_handler.warnings.append(error)

            self.calculated_data = result
            return result

        except Exception as e:
            self.logger.error(f"Indicators: Error calculating {self.config.indicator_type}: {e}")

            # Try to apply error recovery
            recovery_strategy = ErrorRecoveryStrategies.handle_insufficient_data(
                ChartError(
                    code='INDICATOR_CALCULATION_ERROR',
                    message=f'Calculation failed for {self.config.indicator_type}: {str(e)}',
                    severity=ErrorSeverity.ERROR,
                    context={'exception': str(e), 'indicator_type': self.config.indicator_type}
                ),
                fallback_options={'data_length': len(data)}
            )

            if recovery_strategy['can_proceed'] and recovery_strategy['fallback_action'] == 'adjust_parameters':
                # Try with adjusted parameters
                try:
                    modified_config = recovery_strategy.get('modified_config', {})
                    self.logger.info(f"Indicators: Retrying indicator calculation with adjusted parameters: {modified_config}")

                    # Update parameters temporarily
                    original_params = self.config.parameters.copy() if self.config.parameters else {}
                    self.config.parameters.update(modified_config)

                    # Retry calculation
                    result = calculation_func(data, **kwargs)

                    # Restore original parameters
                    self.config.parameters = original_params

                    if result is not None and not (isinstance(result, pd.DataFrame) and result.empty):
                        # Add warning about parameter adjustment
                        warning = ChartError(
                            code='INDICATOR_PARAMETERS_ADJUSTED',
                            message=recovery_strategy['user_message'],
                            severity=ErrorSeverity.WARNING,
                            context={'original_params': original_params, 'adjusted_params': modified_config}
                        )
                        self.error_handler.warnings.append(warning)
                        self.calculated_data = result
                        return result

                except Exception as retry_error:
                    self.logger.error(f"Indicators: Retry with adjusted parameters also failed: {retry_error}")

            # Final error if all recovery attempts fail
            error = ChartError(
                code='INDICATOR_CALCULATION_FAILED',
                message=f'Failed to calculate {self.config.indicator_type}: {str(e)}',
                severity=ErrorSeverity.ERROR,
                context={'exception': str(e), 'indicator_type': self.config.indicator_type}
            )
            self.error_handler.errors.append(error)
            return None
    def create_indicator_traces(self, data: pd.DataFrame, subplot_row: int = 1) -> List[go.Scatter]:
        """
        Create indicator traces with error handling.
        Must be implemented by subclasses.
        """
        raise NotImplementedError("Subclasses must implement create_indicator_traces")

    def is_enabled(self) -> bool:
        """Check if the layer is enabled."""
        return self.config.enabled

    def is_overlay(self) -> bool:
        """Check if this layer is an overlay (main chart) or subplot."""
        return self.config.subplot_row is None

    def get_subplot_row(self) -> Optional[int]:
        """Get the subplot row for this layer."""
        return self.config.subplot_row
class SMALayer(BaseIndicatorLayer):
    """Simple Moving Average layer with enhanced error handling"""

    def __init__(self, config: IndicatorLayerConfig = None):
        """Initialize SMA layer"""
        if config is None:
            config = IndicatorLayerConfig(
                indicator_type='sma',
                parameters={'period': 20}
            )
        super().__init__(config)

    def create_traces(self, data: List[Dict[str, Any]], subplot_row: int = 1) -> List[go.Scatter]:
        """Create SMA traces with comprehensive error handling"""
        try:
            # Convert to DataFrame
            df = pd.DataFrame(data) if isinstance(data, list) else data.copy()

            # Validate data
            if not self.validate_indicator_data(df, required_columns=['close', 'timestamp']):
                if self.error_handler.errors:
                    return [self.create_error_trace(f"SMA Error: {self.error_handler.errors[-1].message}")]
                return []  # Validation failed without a recorded error; skip layer

            # Calculate SMA with error handling
            period = self.config.parameters.get('period', 20)
            sma_data = self.safe_calculate_indicator(
                df,
                self._calculate_sma,
                period=period
            )

            if sma_data is None:
                if self.error_handler.errors:
                    return [self.create_error_trace("SMA calculation failed")]
                else:
                    return []  # Skip layer gracefully

            # Create trace
            sma_trace = go.Scatter(
                x=sma_data['timestamp'],
                y=sma_data['sma'],
                mode='lines',
                name=f'SMA({period})',
                line=dict(
                    color=self.config.color or '#2196F3',
                    width=self.config.line_width
                )
            )

            self.traces = [sma_trace]
            return self.traces

        except Exception as e:
            error_msg = f"Indicators: Error creating SMA traces: {str(e)}"
            self.logger.error(error_msg)
            return [self.create_error_trace(error_msg)]

    def _calculate_sma(self, data: pd.DataFrame, period: int) -> pd.DataFrame:
        """Calculate SMA with validation"""
        try:
            result_df = data.copy()
            result_df['sma'] = result_df['close'].rolling(window=period, min_periods=period).mean()

            # Remove NaN values
            result_df = result_df.dropna(subset=['sma'])

            if result_df.empty:
                raise IndicatorCalculationError(ChartError(
                    code='SMA_NO_VALUES',
                    message=f'SMA calculation produced no values (period={period}, data_length={len(data)})',
                    severity=ErrorSeverity.ERROR,
                    context={'period': period, 'data_length': len(data)}
                ))

            return result_df[['timestamp', 'sma']]

        except Exception as e:
            raise IndicatorCalculationError(ChartError(
                code='SMA_CALCULATION_ERROR',
                message=f'SMA calculation failed: {str(e)}',
                severity=ErrorSeverity.ERROR,
                context={'period': period, 'data_length': len(data), 'exception': str(e)}
            ))

    def render(self, fig: go.Figure, data: pd.DataFrame, **kwargs) -> go.Figure:
        """Render SMA layer for compatibility with base interface"""
        try:
            traces = self.create_traces(data.to_dict('records'), **kwargs)
            for trace in traces:
                if hasattr(fig, 'add_trace'):
                    fig.add_trace(trace, **kwargs)
                else:
                    fig.add_trace(trace)
            return fig
        except Exception as e:
            self.logger.error(f"Indicators: Error rendering SMA layer: {e}")
            return fig
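Usage sketch for the class above (config fields follow IndicatorLayerConfig as defined in this module; `df` is an OHLCV DataFrame with 'timestamp' and 'close' columns):

sma = SMALayer(IndicatorLayerConfig(
    indicator_type='sma',
    parameters={'period': 50},
))
fig = sma.render(go.Figure(), df)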
class EMALayer(BaseIndicatorLayer):
    """Exponential Moving Average layer with enhanced error handling"""

    def __init__(self, config: IndicatorLayerConfig = None):
        """Initialize EMA layer"""
        if config is None:
            config = IndicatorLayerConfig(
                indicator_type='ema',
                parameters={'period': 20}
            )
        super().__init__(config)

    def create_traces(self, data: List[Dict[str, Any]], subplot_row: int = 1) -> List[go.Scatter]:
        """Create EMA traces with comprehensive error handling"""
        try:
            # Convert to DataFrame
            df = pd.DataFrame(data) if isinstance(data, list) else data.copy()

            # Validate data
            if not self.validate_indicator_data(df, required_columns=['close', 'timestamp']):
                if self.error_handler.errors:
                    return [self.create_error_trace(f"EMA Error: {self.error_handler.errors[-1].message}")]
                return []  # Validation failed without a recorded error; skip layer

            # Calculate EMA with error handling
            period = self.config.parameters.get('period', 20)
            ema_data = self.safe_calculate_indicator(
                df,
                self._calculate_ema,
                period=period
            )

            if ema_data is None:
                if self.error_handler.errors:
                    return [self.create_error_trace("EMA calculation failed")]
                else:
                    return []  # Skip layer gracefully

            # Create trace
            ema_trace = go.Scatter(
                x=ema_data['timestamp'],
                y=ema_data['ema'],
                mode='lines',
                name=f'EMA({period})',
                line=dict(
                    color=self.config.color or '#FF9800',
                    width=self.config.line_width
                )
            )

            self.traces = [ema_trace]
            return self.traces

        except Exception as e:
            error_msg = f"Indicators: Error creating EMA traces: {str(e)}"
            self.logger.error(error_msg)
            return [self.create_error_trace(error_msg)]

    def _calculate_ema(self, data: pd.DataFrame, period: int) -> pd.DataFrame:
        """Calculate EMA with validation"""
        try:
            result_df = data.copy()
            result_df['ema'] = result_df['close'].ewm(span=period, adjust=False).mean()

            # For EMA, we can start from the first value, but remove obvious outliers
            # Skip first few values for stability
            warmup_period = max(1, period // 4)
            result_df = result_df.iloc[warmup_period:]

            if result_df.empty:
                raise IndicatorCalculationError(ChartError(
                    code='EMA_NO_VALUES',
                    message=f'EMA calculation produced no values (period={period}, data_length={len(data)})',
                    severity=ErrorSeverity.ERROR,
                    context={'period': period, 'data_length': len(data)}
                ))

            return result_df[['timestamp', 'ema']]

        except Exception as e:
            raise IndicatorCalculationError(ChartError(
                code='EMA_CALCULATION_ERROR',
                message=f'EMA calculation failed: {str(e)}',
                severity=ErrorSeverity.ERROR,
                context={'period': period, 'data_length': len(data), 'exception': str(e)}
            ))

    def render(self, fig: go.Figure, data: pd.DataFrame, **kwargs) -> go.Figure:
        """Render EMA layer for compatibility with base interface"""
        try:
            traces = self.create_traces(data.to_dict('records'), **kwargs)
            for trace in traces:
                if hasattr(fig, 'add_trace'):
                    fig.add_trace(trace, **kwargs)
                else:
                    fig.add_trace(trace)
            return fig
        except Exception as e:
            self.logger.error(f"Indicators: Error rendering EMA layer: {e}")
            return fig
class BollingerBandsLayer(BaseIndicatorLayer):
    """Bollinger Bands layer with enhanced error handling"""

    def __init__(self, config: IndicatorLayerConfig = None):
        """Initialize Bollinger Bands layer"""
        if config is None:
            config = IndicatorLayerConfig(
                indicator_type='bollinger_bands',
                parameters={'period': 20, 'std_dev': 2},
                show_middle_line=True
            )
        super().__init__(config)

    def create_traces(self, data: List[Dict[str, Any]], subplot_row: int = 1) -> List[go.Scatter]:
        """Create Bollinger Bands traces with comprehensive error handling"""
        try:
            # Convert to DataFrame
            df = pd.DataFrame(data) if isinstance(data, list) else data.copy()

            # Validate data
            if not self.validate_indicator_data(df, required_columns=['close', 'timestamp']):
                if self.error_handler.errors:
                    return [self.create_error_trace(f"Bollinger Bands Error: {self.error_handler.errors[-1].message}")]
                return []  # Validation failed without a recorded error; skip layer

            # Calculate Bollinger Bands with error handling
            period = self.config.parameters.get('period', 20)
            std_dev = self.config.parameters.get('std_dev', 2)

            bb_data = self.safe_calculate_indicator(
                df,
                self._calculate_bollinger_bands,
                period=period,
                std_dev=std_dev
            )

            if bb_data is None:
                if self.error_handler.errors:
                    return [self.create_error_trace("Bollinger Bands calculation failed")]
                else:
                    return []  # Skip layer gracefully

            # Create traces
            traces = []

            # Upper band
            upper_trace = go.Scatter(
                x=bb_data['timestamp'],
                y=bb_data['upper_band'],
                mode='lines',
                name=f'BB Upper({period})',
                line=dict(color=self.config.color or '#9C27B0', width=1),
                showlegend=True
            )
            traces.append(upper_trace)

            # Lower band with fill
            lower_trace = go.Scatter(
                x=bb_data['timestamp'],
                y=bb_data['lower_band'],
                mode='lines',
                name=f'BB Lower({period})',
                line=dict(color=self.config.color or '#9C27B0', width=1),
                fill='tonexty',
                fillcolor='rgba(156, 39, 176, 0.1)',
                showlegend=True
            )
            traces.append(lower_trace)

            # Middle line (SMA)
            if self.config.show_middle_line:
                middle_trace = go.Scatter(
                    x=bb_data['timestamp'],
                    y=bb_data['middle_band'],
                    mode='lines',
                    name=f'BB Middle({period})',
                    line=dict(color=self.config.color or '#9C27B0', width=1, dash='dash'),
                    showlegend=True
                )
                traces.append(middle_trace)

            self.traces = traces
            return self.traces

        except Exception as e:
            error_msg = f"Indicators: Error creating Bollinger Bands traces: {str(e)}"
            self.logger.error(error_msg)
            return [self.create_error_trace(error_msg)]

    def _calculate_bollinger_bands(self, data: pd.DataFrame, period: int, std_dev: float) -> pd.DataFrame:
        """Calculate Bollinger Bands with validation"""
        try:
            result_df = data.copy()

            # Calculate middle band (SMA)
            result_df['middle_band'] = result_df['close'].rolling(window=period, min_periods=period).mean()

            # Calculate standard deviation
            result_df['std'] = result_df['close'].rolling(window=period, min_periods=period).std()

            # Calculate upper and lower bands
            result_df['upper_band'] = result_df['middle_band'] + (result_df['std'] * std_dev)
            result_df['lower_band'] = result_df['middle_band'] - (result_df['std'] * std_dev)

            # Remove NaN values
            result_df = result_df.dropna(subset=['middle_band', 'upper_band', 'lower_band'])

            if result_df.empty:
                raise IndicatorCalculationError(ChartError(
                    code='BB_NO_VALUES',
                    message=f'Bollinger Bands calculation produced no values (period={period}, data_length={len(data)})',
                    severity=ErrorSeverity.ERROR,
                    context={'period': period, 'std_dev': std_dev, 'data_length': len(data)}
                ))

            return result_df[['timestamp', 'upper_band', 'middle_band', 'lower_band']]

        except Exception as e:
            raise IndicatorCalculationError(ChartError(
                code='BB_CALCULATION_ERROR',
                message=f'Bollinger Bands calculation failed: {str(e)}',
                severity=ErrorSeverity.ERROR,
                context={'period': period, 'std_dev': std_dev, 'data_length': len(data), 'exception': str(e)}
            ))

    def render(self, fig: go.Figure, data: pd.DataFrame, **kwargs) -> go.Figure:
        """Render Bollinger Bands layer for compatibility with base interface"""
        try:
            traces = self.create_traces(data.to_dict('records'), **kwargs)
            for trace in traces:
                if hasattr(fig, 'add_trace'):
                    fig.add_trace(trace, **kwargs)
                else:
                    fig.add_trace(trace)
            return fig
        except Exception as e:
            self.logger.error(f"Indicators: Error rendering Bollinger Bands layer: {e}")
            return fig
def create_sma_layer(period: int = 20, **kwargs) -> SMALayer:
    """
    Convenience function to create an SMA layer.

    Args:
        period: SMA period
        **kwargs: Additional configuration options

    Returns:
        Configured SMA layer
    """
    config = IndicatorLayerConfig(
        indicator_type='sma',
        parameters={'period': period},
        **kwargs
    )
    return SMALayer(config)


def create_ema_layer(period: int = 12, **kwargs) -> EMALayer:
    """
    Convenience function to create an EMA layer.

    Args:
        period: EMA period
        **kwargs: Additional configuration options

    Returns:
        Configured EMA layer
    """
    config = IndicatorLayerConfig(
        indicator_type='ema',
        parameters={'period': period},
        **kwargs
    )
    return EMALayer(config)


def create_bollinger_bands_layer(period: int = 20, std_dev: float = 2.0, **kwargs) -> BollingerBandsLayer:
    """
    Convenience function to create a Bollinger Bands layer.

    Args:
        period: BB period (default: 20)
        std_dev: Standard deviation multiplier (default: 2.0)
        **kwargs: Additional configuration options

    Returns:
        Configured Bollinger Bands layer
    """
    config = IndicatorLayerConfig(
        indicator_type='bollinger_bands',
        parameters={'period': period, 'std_dev': std_dev},
        **kwargs
    )
    return BollingerBandsLayer(config)


def create_common_ma_layers() -> List[BaseIndicatorLayer]:
    """
    Create commonly used moving average layers.

    Returns:
        List of configured MA layers (SMA 20, SMA 50, EMA 12, EMA 26)
    """
    colors = get_indicator_colors()

    return [
        create_sma_layer(20, color=colors.get('sma', '#007bff'), name="SMA(20)"),
        create_sma_layer(50, color='#6c757d', name="SMA(50)"),   # Gray for longer SMA
        create_ema_layer(12, color=colors.get('ema', '#ff6b35'), name="EMA(12)"),
        create_ema_layer(26, color='#28a745', name="EMA(26)")    # Green for longer EMA
    ]


def create_common_overlay_indicators() -> List[BaseIndicatorLayer]:
    """
    Create commonly used overlay indicators including moving averages and Bollinger Bands.

    Returns:
        List of configured overlay indicator layers
    """
    colors = get_indicator_colors()

    return [
        create_sma_layer(20, color=colors.get('sma', '#007bff'), name="SMA(20)"),
        create_ema_layer(12, color=colors.get('ema', '#ff6b35'), name="EMA(12)"),
        create_bollinger_bands_layer(20, 2.0, color=colors.get('bb_upper', '#6f42c1'), name="BB(20,2)")
    ]
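Putting the helpers together on one figure (a sketch; `df` is an OHLCV DataFrame as elsewhere in this module):

fig = go.Figure()
for layer in create_common_overlay_indicators():
    fig = layer.render(fig, df)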
2977
components/charts/layers/signals.py
Normal file
File diff suppressed because it is too large
425
components/charts/layers/subplots.py
Normal file
@ -0,0 +1,425 @@
"""
Subplot Chart Layers

This module contains subplot layer implementations for indicators that render
in separate subplots below the main price chart, such as RSI, MACD, and other
oscillators and momentum indicators.
"""

import plotly.graph_objects as go
import pandas as pd
from decimal import Decimal
from typing import Dict, Any, Optional, List, Union, Tuple
from dataclasses import dataclass

from .base import BaseChartLayer, LayerConfig
from .indicators import BaseIndicatorLayer, IndicatorLayerConfig
from data.common.indicators import TechnicalIndicators, IndicatorResult
from data.common.data_types import OHLCVCandle
from components.charts.utils import get_indicator_colors
from utils.logger import get_logger
from ..error_handling import (
    ChartErrorHandler, ChartError, ErrorSeverity, DataRequirements,
    InsufficientDataError, DataValidationError, IndicatorCalculationError,
    ErrorRecoveryStrategies, create_error_annotation, get_error_message
)

# Initialize logger
logger = get_logger("default_logger")
@dataclass
class SubplotLayerConfig(IndicatorLayerConfig):
    """Extended configuration for subplot indicator layers"""
    subplot_height_ratio: float = 0.25   # Height ratio for subplot (0.25 = 25% of total height)
    y_axis_range: Optional[Tuple[float, float]] = None  # Fixed y-axis range (min, max)
    show_zero_line: bool = False         # Show horizontal line at y=0
    reference_lines: List[float] = None  # Additional horizontal reference lines

    def __post_init__(self):
        super().__post_init__()
        if self.reference_lines is None:
            self.reference_lines = []
class BaseSubplotLayer(BaseIndicatorLayer):
    """
    Base class for all subplot indicator layers.

    Provides common functionality for indicators that render in separate subplots
    with their own y-axis scaling and reference lines.
    """

    def __init__(self, config: SubplotLayerConfig):
        """
        Initialize base subplot layer.

        Args:
            config: Subplot layer configuration
        """
        super().__init__(config)
        self.subplot_config = config

    def get_subplot_height_ratio(self) -> float:
        """Get the height ratio for this subplot."""
        return self.subplot_config.subplot_height_ratio

    def has_fixed_range(self) -> bool:
        """Check if this subplot has a fixed y-axis range."""
        return self.subplot_config.y_axis_range is not None

    def get_y_axis_range(self) -> Optional[Tuple[float, float]]:
        """Get the fixed y-axis range if defined."""
        return self.subplot_config.y_axis_range

    def should_show_zero_line(self) -> bool:
        """Check if zero line should be shown."""
        return self.subplot_config.show_zero_line

    def get_reference_lines(self) -> List[float]:
        """Get additional reference lines to draw."""
        return self.subplot_config.reference_lines

    def add_reference_lines(self, fig: go.Figure, row: int, col: int = 1) -> None:
        """
        Add reference lines to the subplot.

        Args:
            fig: Target figure
            row: Subplot row
            col: Subplot column
        """
        try:
            # Add zero line if enabled
            if self.should_show_zero_line():
                fig.add_hline(
                    y=0,
                    line=dict(color='gray', width=1, dash='dash'),
                    row=row,
                    col=col
                )

            # Add additional reference lines
            for ref_value in self.get_reference_lines():
                fig.add_hline(
                    y=ref_value,
                    line=dict(color='lightgray', width=1, dash='dot'),
                    row=row,
                    col=col
                )

        except Exception as e:
            self.logger.warning(f"Subplot layers: Could not add reference lines: {e}")
class RSILayer(BaseSubplotLayer):
    """
    Relative Strength Index (RSI) subplot layer.

    Renders RSI oscillator in a separate subplot with standard overbought (70)
    and oversold (30) reference lines.
    """

    def __init__(self, period: int = 14, color: str = None, name: str = None):
        """
        Initialize RSI layer.

        Args:
            period: RSI period (default: 14)
            color: Line color (optional, uses default)
            name: Layer name (optional, auto-generated)
        """
        # Use default color if not specified
        if color is None:
            colors = get_indicator_colors()
            color = colors.get('rsi', '#20c997')

        # Generate name if not specified
        if name is None:
            name = f"RSI({period})"

        # Find next available subplot row (will be managed by LayerManager)
        subplot_row = 2  # Default to row 2 (first subplot after main chart)

        config = SubplotLayerConfig(
            name=name,
            indicator_type="rsi",
            color=color,
            parameters={'period': period},
            subplot_row=subplot_row,
            subplot_height_ratio=0.25,
            y_axis_range=(0, 100),     # RSI ranges from 0 to 100
            reference_lines=[30, 70],  # Oversold and overbought levels
            style={
                'line_color': color,
                'line_width': 2,
                'opacity': 1.0
            }
        )

        super().__init__(config)
        self.period = period
    def _calculate_rsi(self, data: pd.DataFrame, period: int) -> pd.DataFrame:
        """Calculate RSI with validation and error handling"""
        try:
            result_df = data.copy()

            # Calculate price changes
            result_df['price_change'] = result_df['close'].diff()

            # Separate gains and losses
            result_df['gain'] = result_df['price_change'].clip(lower=0)
            result_df['loss'] = -result_df['price_change'].clip(upper=0)

            # Calculate average gains and losses using Wilder's smoothing
            result_df['avg_gain'] = result_df['gain'].ewm(alpha=1/period, adjust=False).mean()
            result_df['avg_loss'] = result_df['loss'].ewm(alpha=1/period, adjust=False).mean()

            # Calculate RS and RSI
            result_df['rs'] = result_df['avg_gain'] / result_df['avg_loss']
            result_df['rsi'] = 100 - (100 / (1 + result_df['rs']))

            # Remove rows where RSI cannot be calculated
            result_df = result_df.iloc[period:].copy()

            # Remove NaN values and invalid RSI values
            result_df = result_df.dropna(subset=['rsi'])
            result_df = result_df[
                (result_df['rsi'] >= 0) &
                (result_df['rsi'] <= 100) &
                pd.notna(result_df['rsi'])
            ]

            if result_df.empty:
                raise Exception(f'RSI calculation produced no values (period={period}, data_length={len(data)})')

            return result_df[['timestamp', 'rsi']]

        except Exception as e:
            raise Exception(f'RSI calculation failed: {str(e)}')
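A quick self-check of the Wilder-smoothed RSI above on synthetic data (numpy is used only to build the series; RSI is bounded in [0, 100] by construction):

import numpy as np

closes = 100 + np.cumsum(np.random.default_rng(0).normal(0, 1, 60))
demo = pd.DataFrame({
    'timestamp': pd.date_range('2024-01-01', periods=60, freq='1min'),
    'close': closes,
})
rsi = RSILayer(period=14)._calculate_rsi(demo, 14)
assert rsi['rsi'].between(0, 100).all()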
def render(self, fig: go.Figure, data: pd.DataFrame, **kwargs) -> go.Figure:
|
||||
"""Render RSI layer for compatibility with base interface"""
|
||||
try:
|
||||
# Calculate RSI
|
||||
rsi_data = self._calculate_rsi(data, self.period)
|
||||
if rsi_data.empty:
|
||||
return fig
|
||||
|
||||
# Create RSI trace
|
||||
rsi_trace = go.Scatter(
|
||||
x=rsi_data['timestamp'],
|
||||
y=rsi_data['rsi'],
|
||||
mode='lines',
|
||||
name=self.config.name,
|
||||
line=dict(
|
||||
color=self.config.color,
|
||||
width=2
|
||||
),
|
||||
showlegend=True
|
||||
)
|
||||
|
||||
# Add trace
|
||||
row = kwargs.get('row', self.config.subplot_row or 2)
|
||||
col = kwargs.get('col', 1)
|
||||
|
||||
if hasattr(fig, 'add_trace'):
|
||||
fig.add_trace(rsi_trace, row=row, col=col)
|
||||
else:
|
||||
fig.add_trace(rsi_trace)
|
||||
|
||||
# Add reference lines
|
||||
self.add_reference_lines(fig, row, col)
|
||||
|
||||
return fig
|
||||
except Exception as e:
|
||||
self.logger.error(f"Subplot layers: Error rendering RSI layer: {e}")
|
||||
return fig
|
||||
|
||||
|
||||
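# --- Editorial usage sketch (hypothetical; not part of the committed file) ---
# A quick sanity check of RSILayer on synthetic data. It assumes candles in a
# DataFrame with 'timestamp' and 'close' columns, and a figure created with
# plotly.subplots.make_subplots so that row/col are honored.
import numpy as np
from plotly.subplots import make_subplots

_demo_candles = pd.DataFrame({
    'timestamp': pd.date_range('2025-01-01', periods=100, freq='1min'),
    'close': 100 + np.random.randn(100).cumsum(),
})
_demo_layer = RSILayer(period=14)
_demo_rsi = _demo_layer._calculate_rsi(_demo_candles, 14)
assert _demo_rsi['rsi'].between(0, 100).all()  # RSI is bounded by construction

_demo_fig = make_subplots(rows=2, cols=1, shared_xaxes=True)
_demo_fig = _demo_layer.render(_demo_fig, _demo_candles, row=2, col=1)

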
class MACDLayer(BaseSubplotLayer):
    """MACD (Moving Average Convergence Divergence) subplot layer with enhanced error handling"""

    def __init__(self, fast_period: int = 12, slow_period: int = 26, signal_period: int = 9,
                 color: str = None, name: str = None):
        """Initialize MACD layer with custom parameters"""
        # Use default color if not specified
        if color is None:
            colors = get_indicator_colors()
            color = colors.get('macd', '#fd7e14')

        # Generate name if not specified
        if name is None:
            name = f"MACD({fast_period},{slow_period},{signal_period})"

        config = SubplotLayerConfig(
            name=name,
            indicator_type="macd",
            color=color,
            parameters={
                'fast_period': fast_period,
                'slow_period': slow_period,
                'signal_period': signal_period
            },
            subplot_row=3,  # Will be managed by LayerManager
            subplot_height_ratio=0.3,
            show_zero_line=True,
            style={
                'line_color': color,
                'line_width': 2,
                'opacity': 1.0
            }
        )

        super().__init__(config)
        self.fast_period = fast_period
        self.slow_period = slow_period
        self.signal_period = signal_period

    def _calculate_macd(self, data: pd.DataFrame, fast_period: int,
                        slow_period: int, signal_period: int) -> pd.DataFrame:
        """Calculate MACD with validation and error handling"""
        try:
            result_df = data.copy()

            # Validate periods
            if fast_period >= slow_period:
                raise Exception(f'Fast period ({fast_period}) must be less than slow period ({slow_period})')

            # Calculate EMAs
            result_df['ema_fast'] = result_df['close'].ewm(span=fast_period, adjust=False).mean()
            result_df['ema_slow'] = result_df['close'].ewm(span=slow_period, adjust=False).mean()

            # Calculate MACD line
            result_df['macd'] = result_df['ema_fast'] - result_df['ema_slow']

            # Calculate signal line
            result_df['signal'] = result_df['macd'].ewm(span=signal_period, adjust=False).mean()

            # Calculate histogram
            result_df['histogram'] = result_df['macd'] - result_df['signal']

            # Remove rows where MACD cannot be calculated reliably
            warmup_period = slow_period + signal_period
            result_df = result_df.iloc[warmup_period:].copy()

            # Remove NaN values
            result_df = result_df.dropna(subset=['macd', 'signal', 'histogram'])

            if result_df.empty:
                raise Exception(f'MACD calculation produced no values (fast={fast_period}, slow={slow_period}, signal={signal_period})')

            return result_df[['timestamp', 'macd', 'signal', 'histogram']]

        except Exception as e:
            raise Exception(f'MACD calculation failed: {str(e)}')

    def render(self, fig: go.Figure, data: pd.DataFrame, **kwargs) -> go.Figure:
        """Render MACD layer for compatibility with base interface"""
        try:
            # Calculate MACD
            macd_data = self._calculate_macd(data, self.fast_period, self.slow_period, self.signal_period)
            if macd_data.empty:
                return fig

            row = kwargs.get('row', self.config.subplot_row or 3)
            col = kwargs.get('col', 1)

            # Create MACD line trace
            macd_trace = go.Scatter(
                x=macd_data['timestamp'],
                y=macd_data['macd'],
                mode='lines',
                name=f'{self.config.name} Line',
                line=dict(color=self.config.color, width=2),
                showlegend=True
            )

            # Create signal line trace
            signal_trace = go.Scatter(
                x=macd_data['timestamp'],
                y=macd_data['signal'],
                mode='lines',
                name=f'{self.config.name} Signal',
                line=dict(color='#FF9800', width=2),
                showlegend=True
            )

            # Create histogram
            histogram_colors = ['green' if h >= 0 else 'red' for h in macd_data['histogram']]
            histogram_trace = go.Bar(
                x=macd_data['timestamp'],
                y=macd_data['histogram'],
                name=f'{self.config.name} Histogram',
                marker_color=histogram_colors,
                opacity=0.6,
                showlegend=True
            )

            # Add traces
            if hasattr(fig, 'add_trace'):
                fig.add_trace(macd_trace, row=row, col=col)
                fig.add_trace(signal_trace, row=row, col=col)
                fig.add_trace(histogram_trace, row=row, col=col)
            else:
                fig.add_trace(macd_trace)
                fig.add_trace(signal_trace)
                fig.add_trace(histogram_trace)

            # Add zero line
            self.add_reference_lines(fig, row, col)

            return fig
        except Exception as e:
            self.logger.error(f"Subplot layers: Error rendering MACD layer: {e}")
            return fig


def create_rsi_layer(period: int = 14, **kwargs) -> 'RSILayer':
    """
    Convenience function to create an RSI layer.

    Args:
        period: RSI period (default: 14)
        **kwargs: Additional configuration options

    Returns:
        Configured RSI layer
    """
    return RSILayer(period=period, **kwargs)


def create_macd_layer(fast_period: int = 12, slow_period: int = 26,
                      signal_period: int = 9, **kwargs) -> 'MACDLayer':
    """
    Convenience function to create a MACD layer.

    Args:
        fast_period: Fast EMA period (default: 12)
        slow_period: Slow EMA period (default: 26)
        signal_period: Signal line period (default: 9)
        **kwargs: Additional configuration options

    Returns:
        Configured MACD layer
    """
    return MACDLayer(
        fast_period=fast_period,
        slow_period=slow_period,
        signal_period=signal_period,
        **kwargs
    )


def create_common_subplot_indicators() -> List[BaseSubplotLayer]:
    """
    Create commonly used subplot indicators.

    Returns:
        List of configured subplot indicator layers (RSI, MACD)
    """
    return [
        RSILayer(period=14),
        MACDLayer(fast_period=12, slow_period=26, signal_period=9)
    ]
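
# --- Editorial usage sketch (hypothetical; not part of the committed file) ---
# Wiring the convenience functions into a subplot figure. The three-row layout
# and the candles_df frame are assumptions, not part of this commit.
from plotly.subplots import make_subplots

_fig = make_subplots(rows=3, cols=1, shared_xaxes=True,
                     row_heights=[0.5, 0.25, 0.25])
for _layer, _row in zip(create_common_subplot_indicators(), (2, 3)):
    _fig = _layer.render(_fig, candles_df, row=_row, col=1)  # candles_df: assumed OHLCV frame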
306
components/charts/utils.py
Normal file
@ -0,0 +1,306 @@
"""
Chart Utilities and Helper Functions

This module provides utility functions for data processing, validation,
and chart styling used by the ChartBuilder and layer components.
"""

import pandas as pd
from datetime import datetime, timezone
from typing import List, Dict, Any, Optional, Union
from decimal import Decimal
from tzlocal import get_localzone

from utils.logger import get_logger

# Initialize logger
logger = get_logger("default_logger")

# Default color scheme for charts
DEFAULT_CHART_COLORS = {
    'bullish': '#00C851',      # Green for bullish candles
    'bearish': '#FF4444',      # Red for bearish candles
    'sma': '#007bff',          # Blue for SMA
    'ema': '#ff6b35',          # Orange for EMA
    'bb_upper': '#6f42c1',     # Purple for Bollinger upper
    'bb_lower': '#6f42c1',     # Purple for Bollinger lower
    'bb_middle': '#6c757d',    # Gray for Bollinger middle
    'rsi': '#20c997',          # Teal for RSI
    'macd': '#fd7e14',         # Orange for MACD
    'macd_signal': '#e83e8c',  # Pink for MACD signal
    'volume': '#6c757d',       # Gray for volume
    'support': '#17a2b8',      # Light blue for support
    'resistance': '#dc3545'    # Red for resistance
}


def validate_market_data(candles: List[Dict[str, Any]]) -> bool:
    """
    Validate market data structure and content.

    Args:
        candles: List of candle dictionaries from database

    Returns:
        True if data is valid, False otherwise
    """
    if not candles:
        logger.warning("Chart utils: Empty candles data")
        return False

    # Check required fields in first candle
    required_fields = ['timestamp', 'open', 'high', 'low', 'close']
    first_candle = candles[0]

    for field in required_fields:
        if field not in first_candle:
            logger.error(f"Chart utils: Missing required field: {field}")
            return False

    # Validate data types and values
    for i, candle in enumerate(candles[:5]):  # Check first 5 candles
        try:
            # Validate timestamp
            if not isinstance(candle['timestamp'], (datetime, str)):
                logger.error(f"Chart utils: Invalid timestamp type at index {i}")
                return False

            # Validate OHLC values
            for field in ['open', 'high', 'low', 'close']:
                value = candle[field]
                if value is None:
                    logger.error(f"Chart utils: Null value for {field} at index {i}")
                    return False

                # Convert to float for validation
                try:
                    float_val = float(value)
                    if float_val <= 0:
                        logger.error(f"Chart utils: Non-positive value for {field} at index {i}: {float_val}")
                        return False
                except (ValueError, TypeError):
                    logger.error(f"Chart utils: Invalid numeric value for {field} at index {i}: {value}")
                    return False

            # Validate OHLC relationships (high >= low, etc.)
            try:
                o, h, l, c = float(candle['open']), float(candle['high']), float(candle['low']), float(candle['close'])
                if not (h >= max(o, c) and l <= min(o, c)):
                    logger.warning(f"Chart utils: Invalid OHLC relationship at index {i}: O={o}, H={h}, L={l}, C={c}")
                    # Don't fail validation for this, just warn

            except (ValueError, TypeError):
                logger.error(f"Chart utils: Error validating OHLC relationships at index {i}")
                return False

        except Exception as e:
            logger.error(f"Chart utils: Error validating candle at index {i}: {e}")
            return False

    logger.debug(f"Chart utils: Market data validation passed for {len(candles)} candles")
    return True


def prepare_chart_data(candles: List[Dict[str, Any]]) -> pd.DataFrame:
    """
    Convert candle data to pandas DataFrame suitable for charting.

    Args:
        candles: List of candle dictionaries from database

    Returns:
        Prepared pandas DataFrame
    """
    try:
        # Convert to DataFrame
        df = pd.DataFrame(candles)

        # Ensure timestamp is datetime and localized to system time
        if 'timestamp' in df.columns:
            df['timestamp'] = pd.to_datetime(df['timestamp'])
            local_tz = get_localzone()

            # Check if the timestamps are already timezone-aware
            if df['timestamp'].dt.tz is not None:
                # If they are, just convert to the local timezone
                df['timestamp'] = df['timestamp'].dt.tz_convert(local_tz)
                logger.debug(f"Converted timezone-aware timestamps to local timezone: {local_tz}")
            else:
                # If they are naive, localize to UTC first, then convert
                df['timestamp'] = df['timestamp'].dt.tz_localize('UTC').dt.tz_convert(local_tz)
                logger.debug(f"Localized naive timestamps to UTC and converted to local timezone: {local_tz}")

        # Convert OHLCV columns to numeric
        numeric_columns = ['open', 'high', 'low', 'close']
        if 'volume' in df.columns:
            numeric_columns.append('volume')

        for col in numeric_columns:
            if col in df.columns:
                df[col] = pd.to_numeric(df[col], errors='coerce')

        # Sort by timestamp and set it as the index, keeping the column
        df = df.sort_values('timestamp')
        df.index = pd.to_datetime(df['timestamp'])

        # Handle missing volume data
        if 'volume' not in df.columns:
            df['volume'] = 0

        # Fill any NaN values with forward fill, then backward fill
        df = df.ffill().bfill()

        logger.debug(f"Chart utils: Prepared chart data: {len(df)} rows, columns: {list(df.columns)}")
        return df

    except Exception as e:
        logger.error(f"Chart utils: Error preparing chart data: {e}")
        # Return empty DataFrame with expected structure
        return pd.DataFrame(columns=['timestamp', 'open', 'high', 'low', 'close', 'volume'])


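# --- Editorial usage sketch (hypothetical; not part of the committed file) ---
# Raw candles arrive as dictionaries with string prices and (often) naive UTC
# timestamps; prepare_chart_data normalizes both.
_demo_candles = [
    {'timestamp': '2025-06-01T00:00:00', 'open': '100.0', 'high': '101.5',
     'low': '99.5', 'close': '101.0', 'volume': '1250'},
    {'timestamp': '2025-06-01T00:01:00', 'open': '101.0', 'high': '102.0',
     'low': '100.8', 'close': '101.7', 'volume': '980'},
]
_demo_df = prepare_chart_data(_demo_candles)
assert _demo_df['close'].dtype.kind == 'f'      # strings coerced to floats
assert _demo_df['timestamp'].dt.tz is not None  # localized to the system timezone

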
def get_indicator_colors() -> Dict[str, str]:
    """
    Get the default color scheme for chart indicators.

    Returns:
        Dictionary of color mappings
    """
    return DEFAULT_CHART_COLORS.copy()


def format_price(price: Union[float, Decimal, str], decimals: int = 4) -> str:
    """
    Format price value for display.

    Args:
        price: Price value to format
        decimals: Number of decimal places

    Returns:
        Formatted price string
    """
    try:
        return f"{float(price):.{decimals}f}"
    except (ValueError, TypeError):
        return "N/A"


def format_volume(volume: Union[float, int, str]) -> str:
    """
    Format volume value for display with K/M/B suffixes.

    Args:
        volume: Volume value to format

    Returns:
        Formatted volume string
    """
    try:
        vol = float(volume)
        if vol >= 1e9:
            return f"{vol/1e9:.2f}B"
        elif vol >= 1e6:
            return f"{vol/1e6:.2f}M"
        elif vol >= 1e3:
            return f"{vol/1e3:.2f}K"
        else:
            return f"{vol:.0f}"
    except (ValueError, TypeError):
        return "N/A"


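# --- Editorial examples (hypothetical; not part of the committed file) ---
# Expected outputs of the formatting helpers above:
assert format_price('1234.56789', decimals=2) == '1234.57'
assert format_price(None) == 'N/A'
assert format_volume(1_532_000) == '1.53M'
assert format_volume('42') == '42'

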
def calculate_price_change(current: Union[float, Decimal], previous: Union[float, Decimal]) -> Dict[str, Any]:
    """
    Calculate price change and percentage change.

    Args:
        current: Current price
        previous: Previous price

    Returns:
        Dictionary with change, change_percent, and direction
    """
    try:
        curr = float(current)
        prev = float(previous)

        if prev == 0:
            return {'change': 0, 'change_percent': 0, 'direction': 'neutral'}

        change = curr - prev
        change_percent = (change / prev) * 100

        direction = 'up' if change > 0 else 'down' if change < 0 else 'neutral'

        return {
            'change': change,
            'change_percent': change_percent,
            'direction': direction
        }

    except (ValueError, TypeError):
        return {'change': 0, 'change_percent': 0, 'direction': 'neutral'}


def get_chart_height(include_volume: bool = False, num_subplots: int = 0) -> int:
    """
    Calculate appropriate chart height based on components.

    Args:
        include_volume: Whether volume subplot is included
        num_subplots: Number of additional subplots (for indicators)

    Returns:
        Recommended chart height in pixels
    """
    base_height = 500
    volume_height = 150 if include_volume else 0
    subplot_height = num_subplots * 120

    return base_height + volume_height + subplot_height


def validate_timeframe(timeframe: str) -> bool:
    """
    Validate if timeframe string is supported.

    Args:
        timeframe: Timeframe string (e.g., '1m', '5m', '1h', '1d')

    Returns:
        True if valid, False otherwise
    """
    valid_timeframes = [
        '1s', '5s', '15s', '30s',       # Seconds
        '1m', '5m', '15m', '30m',       # Minutes
        '1h', '2h', '4h', '6h', '12h',  # Hours
        '1d', '3d', '1w', '1M'          # Days, weeks, months
    ]

    return timeframe in valid_timeframes


def validate_symbol(symbol: str) -> bool:
    """
    Validate trading symbol format.

    Args:
        symbol: Trading symbol (e.g., 'BTC-USDT')

    Returns:
        True if valid format, False otherwise
    """
    if not symbol or not isinstance(symbol, str):
        return False

    # Basic validation: should contain a dash and have reasonable length
    parts = symbol.split('-')
    if len(parts) != 2:
        return False

    base, quote = parts
    if len(base) < 2 or len(quote) < 3 or len(base) > 10 or len(quote) > 10:
        return False

    return True
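
As a hedged illustration (a standalone sketch, not part of this commit), the remaining helpers behave as follows:

from components.charts.utils import (
    calculate_price_change, get_chart_height, validate_symbol, validate_timeframe
)

assert validate_timeframe('5m') and not validate_timeframe('2d')
assert validate_symbol('BTC-USDT') and not validate_symbol('BTCUSDT')
assert get_chart_height(include_volume=True, num_subplots=2) == 500 + 150 + 240
_pc = calculate_price_change(110.0, 100.0)
assert _pc['direction'] == 'up' and round(_pc['change_percent']) == 10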
323
components/dashboard.py
Normal file
@ -0,0 +1,323 @@
"""
Dashboard Layout Components

This module contains reusable layout components for the main dashboard interface.
These components handle the overall structure and navigation of the dashboard.
"""

from dash import html, dcc
from typing import List, Dict, Any, Optional
from datetime import datetime


def create_header(title: str = "Crypto Trading Bot Dashboard",
                  subtitle: str = "Real-time monitoring and bot management") -> html.Div:
    """
    Create the main dashboard header component.

    Args:
        title: Main title text
        subtitle: Subtitle text

    Returns:
        Dash HTML component for the header
    """
    return html.Div([
        html.H1(f"🚀 {title}",
                style={'margin': '0', 'color': '#2c3e50', 'font-size': '28px'}),
        html.P(subtitle,
               style={'margin': '5px 0 0 0', 'color': '#7f8c8d', 'font-size': '14px'})
    ], style={
        'padding': '20px',
        'background-color': '#ecf0f1',
        'border-bottom': '2px solid #bdc3c7',
        'box-shadow': '0 2px 4px rgba(0,0,0,0.1)'
    })


def create_navigation_tabs(active_tab: str = 'market-data') -> dcc.Tabs:
    """
    Create the main navigation tabs component.

    Args:
        active_tab: Default active tab

    Returns:
        Dash Tabs component
    """
    tab_style = {
        'borderBottom': '1px solid #d6d6d6',
        'padding': '6px',
        'fontWeight': 'bold'
    }

    tab_selected_style = {
        'borderTop': '1px solid #d6d6d6',
        'borderBottom': '1px solid #d6d6d6',
        'backgroundColor': '#119DFF',
        'color': 'white',
        'padding': '6px'
    }

    return dcc.Tabs(
        id="main-tabs",
        value=active_tab,
        children=[
            dcc.Tab(
                label='📊 Market Data',
                value='market-data',
                style=tab_style,
                selected_style=tab_selected_style
            ),
            dcc.Tab(
                label='🤖 Bot Management',
                value='bot-management',
                style=tab_style,
                selected_style=tab_selected_style
            ),
            dcc.Tab(
                label='📈 Performance',
                value='performance',
                style=tab_style,
                selected_style=tab_selected_style
            ),
            dcc.Tab(
                label='⚙️ System Health',
                value='system-health',
                style=tab_style,
                selected_style=tab_selected_style
            ),
        ],
        style={'margin': '10px 20px'}
    )


def create_content_container(content_id: str = 'tab-content') -> html.Div:
    """
    Create the main content container.

    Args:
        content_id: HTML element ID for the content area

    Returns:
        Dash HTML component for content container
    """
    return html.Div(
        id=content_id,
        style={
            'padding': '20px',
            'min-height': '600px',
            'background-color': '#ffffff'
        }
    )


def create_status_indicator(status: str, message: str,
                            timestamp: Optional[datetime] = None) -> html.Div:
    """
    Create a status indicator component.

    Args:
        status: Status type ('connected', 'error', 'warning', 'info')
        message: Status message
        timestamp: Optional timestamp for the status

    Returns:
        Dash HTML component for status indicator
    """
    status_colors = {
        'connected': '#27ae60',
        'error': '#e74c3c',
        'warning': '#f39c12',
        'info': '#3498db'
    }

    status_icons = {
        'connected': '🟢',
        'error': '🔴',
        'warning': '🟡',
        'info': '🔵'
    }

    color = status_colors.get(status, '#7f8c8d')
    icon = status_icons.get(status, '⚪')

    components = [
        html.Span(f"{icon} {message}",
                  style={'color': color, 'font-weight': 'bold'})
    ]

    if timestamp:
        components.append(
            html.P(f"Last updated: {timestamp.strftime('%H:%M:%S')}",
                   style={'margin': '5px 0', 'color': '#7f8c8d', 'font-size': '12px'})
        )

    return html.Div(components)


def create_card(title: str, content: Any,
                card_id: Optional[str] = None) -> html.Div:
    """
    Create a card component for organizing content.

    Args:
        title: Card title
        content: Card content (can be any Dash component)
        card_id: Optional HTML element ID

    Returns:
        Dash HTML component for the card
    """
    return html.Div([
        html.H3(title, style={
            'margin': '0 0 15px 0',
            'color': '#2c3e50',
            'border-bottom': '2px solid #ecf0f1',
            'padding-bottom': '10px'
        }),
        content
    ], style={
        'border': '1px solid #ddd',
        'border-radius': '8px',
        'padding': '20px',
        'margin': '10px 0',
        'background-color': '#ffffff',
        'box-shadow': '0 2px 4px rgba(0,0,0,0.1)'
    }, id=card_id)


def create_metric_display(metrics: Dict[str, str]) -> html.Div:
    """
    Create a metrics display component.

    Args:
        metrics: Dictionary of metric names and values

    Returns:
        Dash HTML component for metrics display
    """
    metric_components = []

    for key, value in metrics.items():
        # Color coding for percentage changes
        color = '#27ae60' if '+' in str(value) else '#e74c3c' if '-' in str(value) else '#2c3e50'

        metric_components.append(
            html.Div([
                html.Strong(f"{key}: ", style={'color': '#2c3e50'}),
                html.Span(str(value), style={'color': color})
            ], style={
                'margin': '8px 0',
                'padding': '5px',
                'background-color': '#f8f9fa',
                'border-radius': '4px'
            })
        )

    return html.Div(metric_components, style={
        'display': 'grid',
        'grid-template-columns': 'repeat(auto-fit, minmax(200px, 1fr))',
        'gap': '10px'
    })


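# --- Editorial usage sketch (hypothetical; not part of the committed file) ---
# Composing the card and metric components into a small summary panel. The
# metric values below are illustrative only.
_demo_panel = create_card(
    "Market Overview",
    create_metric_display({
        'Last Price': '103,250.50',
        '24h Change': '+2.31%',   # rendered green because of the '+' prefix
        '24h Volume': '1.53B',
    }),
    card_id='market-overview-card'
)

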
def create_selector_group(selectors: List[Dict[str, Any]]) -> html.Div:
    """
    Create a group of selector components (dropdowns, etc.).

    Args:
        selectors: List of selector configurations

    Returns:
        Dash HTML component for selector group
    """
    selector_components = []

    for selector in selectors:
        selector_div = html.Div([
            html.Label(
                selector.get('label', ''),
                style={'font-weight': 'bold', 'margin-bottom': '5px', 'display': 'block'}
            ),
            dcc.Dropdown(
                id=selector.get('id'),
                options=selector.get('options', []),
                value=selector.get('value'),
                style={'margin-bottom': '15px'}
            )
        ], style={'width': '250px', 'margin': '10px 20px 10px 0', 'display': 'inline-block'})

        selector_components.append(selector_div)

    return html.Div(selector_components, style={'margin': '20px 0'})


def create_loading_component(component_id: str, message: str = "Loading...") -> html.Div:
    """
    Create a loading component for async operations.

    Args:
        component_id: ID for the component that will replace this loading screen
        message: Loading message

    Returns:
        Dash HTML component for loading screen
    """
    return html.Div([
        html.Div([
            html.Div(className="loading-spinner", style={
                'border': '4px solid #f3f3f3',
                'border-top': '4px solid #3498db',
                'border-radius': '50%',
                'width': '40px',
                'height': '40px',
                'animation': 'spin 2s linear infinite',
                'margin': '0 auto 20px auto'
            }),
            html.P(message, style={'text-align': 'center', 'color': '#7f8c8d'})
        ], style={
            'display': 'flex',
            'flex-direction': 'column',
            'align-items': 'center',
            'justify-content': 'center',
            'height': '200px'
        })
    ], id=component_id)


def create_placeholder_content(title: str, description: str,
                               phase: str = "future implementation") -> html.Div:
    """
    Create placeholder content for features not yet implemented.

    Args:
        title: Section title
        description: Description of what will be implemented
        phase: Implementation phase information

    Returns:
        Dash HTML component for placeholder content
    """
    return html.Div([
        html.H2(title, style={'color': '#2c3e50'}),
        html.Div([
            html.P(description, style={'color': '#7f8c8d', 'font-size': '16px'}),
            html.P(f"🚧 Planned for {phase}",
                   style={'color': '#f39c12', 'font-weight': 'bold', 'font-style': 'italic'})
        ], style={
            'background-color': '#f8f9fa',
            'padding': '20px',
            'border-radius': '8px',
            'border-left': '4px solid #f39c12'
        })
    ])


# CSS Styles for animation (to be included in assets or inline styles)
LOADING_CSS = """
@keyframes spin {
    0% { transform: rotate(0deg); }
    100% { transform: rotate(360deg); }
}
"""
73
config/data_collection.json
Normal file
@ -0,0 +1,73 @@
{
    "exchange": "okx",
    "connection": {
        "public_ws_url": "wss://ws.okx.com:8443/ws/v5/public",
        "private_ws_url": "wss://ws.okx.com:8443/ws/v5/private",
        "ping_interval": 25.0,
        "pong_timeout": 10.0,
        "max_reconnect_attempts": 5,
        "reconnect_delay": 5.0
    },
    "data_collection": {
        "store_raw_data": false,
        "health_check_interval": 120.0,
        "auto_restart": true,
        "buffer_size": 1000
    },
    "trading_pairs": [
        {
            "symbol": "BTC-USDT",
            "enabled": true,
            "data_types": [
                "trade",
                "orderbook"
            ],
            "timeframes": [
                "1s",
                "5s",
                "1m",
                "5m",
                "15m",
                "1h"
            ],
            "channels": {
                "trades": "trades",
                "orderbook": "books5",
                "ticker": "tickers"
            }
        },
        {
            "symbol": "ETH-USDT",
            "enabled": true,
            "data_types": [
                "trade",
                "orderbook"
            ],
            "timeframes": [
                "1s",
                "5s",
                "1m",
                "5m",
                "15m",
                "1h"
            ],
            "channels": {
                "trades": "trades",
                "orderbook": "books5",
                "ticker": "tickers"
            }
        }
    ],
    "logging": {
        "component_name_template": "okx_collector_{symbol}",
        "log_level": "INFO",
        "verbose": false
    },
    "database": {
        "store_processed_data": true,
        "store_raw_data": false,
        "force_update_candles": false,
        "batch_size": 100,
        "flush_interval": 5.0
    }
}
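
A minimal sketch of how a collector process might consume this file (the loader below is an editorial illustration, assuming only the standard library and the structure shown above):

import json
from pathlib import Path

config = json.loads(Path("config/data_collection.json").read_text())
for pair in config["trading_pairs"]:
    if not pair["enabled"]:
        continue
    # Expand the per-symbol logger name from the template
    name = config["logging"]["component_name_template"].format(symbol=pair["symbol"])
    print(name, pair["timeframes"], pair["channels"]["orderbook"])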
36
config/indicators/templates/bollinger_bands_template.json
Normal file
@ -0,0 +1,36 @@
{
    "name": "Bollinger Bands",
    "description": "Bollinger Bands volatility indicator",
    "type": "bollinger_bands",
    "display_type": "overlay",
    "timeframe": null,
    "default_parameters": {
        "period": 20,
        "std_dev": 2.0
    },
    "parameter_schema": {
        "period": {
            "type": "int",
            "min": 5,
            "max": 100,
            "default": 20,
            "description": "Period for middle line (SMA)"
        },
        "std_dev": {
            "type": "float",
            "min": 0.5,
            "max": 5.0,
            "default": 2.0,
            "description": "Standard deviation for Bollinger Bands"
        },
        "timeframe": {
            "type": "string",
            "default": null,
            "description": "Indicator timeframe (e.g., '1h', '4h'). Null for chart timeframe."
        }
    },
    "default_styling": {
        "color": "#6f42c1",
        "line_width": 1
    }
}
28
config/indicators/templates/ema_template.json
Normal file
@ -0,0 +1,28 @@
{
    "name": "Exponential Moving Average",
    "description": "Exponential Moving Average indicator",
    "type": "ema",
    "display_type": "overlay",
    "timeframe": null,
    "default_parameters": {
        "period": 12
    },
    "parameter_schema": {
        "period": {
            "type": "int",
            "min": 1,
            "max": 200,
            "default": 12,
            "description": "Period for EMA calculation"
        },
        "timeframe": {
            "type": "string",
            "default": null,
            "description": "Indicator timeframe (e.g., '1h', '4h'). Null for chart timeframe."
        }
    },
    "default_styling": {
        "color": "#ff6b35",
        "line_width": 2
    }
}
45
config/indicators/templates/macd_template.json
Normal file
@ -0,0 +1,45 @@
{
    "name": "MACD",
    "description": "Moving Average Convergence Divergence",
    "type": "macd",
    "display_type": "subplot",
    "timeframe": null,
    "default_parameters": {
        "fast_period": 12,
        "slow_period": 26,
        "signal_period": 9
    },
    "parameter_schema": {
        "fast_period": {
            "type": "int",
            "min": 2,
            "max": 50,
            "default": 12,
            "description": "Fast EMA period"
        },
        "slow_period": {
            "type": "int",
            "min": 5,
            "max": 100,
            "default": 26,
            "description": "Slow EMA period"
        },
        "signal_period": {
            "type": "int",
            "min": 2,
            "max": 30,
            "default": 9,
            "description": "Signal line period for MACD"
        },
        "timeframe": {
            "type": "string",
            "default": null,
            "description": "Indicator timeframe (e.g., '1h', '4h'). Null for chart timeframe."
        }
    },
    "default_styling": {
        "color": "#fd7e14",
        "line_width": 2,
        "macd_line_color": "#007bff"
    }
}
28
config/indicators/templates/rsi_template.json
Normal file
@ -0,0 +1,28 @@
{
    "name": "Relative Strength Index",
    "description": "RSI oscillator indicator",
    "type": "rsi",
    "display_type": "subplot",
    "timeframe": null,
    "default_parameters": {
        "period": 14
    },
    "parameter_schema": {
        "period": {
            "type": "int",
            "min": 2,
            "max": 50,
            "default": 14,
            "description": "Period for RSI calculation"
        },
        "timeframe": {
            "type": "string",
            "default": null,
            "description": "Indicator timeframe (e.g., '1h', '4h'). Null for chart timeframe."
        }
    },
    "default_styling": {
        "color": "#20c997",
        "line_width": 2
    }
}
28
config/indicators/templates/sma_template.json
Normal file
@ -0,0 +1,28 @@
{
    "name": "Simple Moving Average",
    "description": "Simple Moving Average indicator",
    "type": "sma",
    "display_type": "overlay",
    "timeframe": null,
    "default_parameters": {
        "period": 20
    },
    "parameter_schema": {
        "period": {
            "type": "int",
            "min": 1,
            "max": 200,
            "default": 20,
            "description": "Period for SMA calculation"
        },
        "timeframe": {
            "type": "string",
            "default": null,
            "description": "Indicator timeframe (e.g., '1h', '4h'). Null for chart timeframe."
        }
    },
    "default_styling": {
        "color": "#007bff",
        "line_width": 2
    }
}
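
The templates above share one parameter_schema shape (type/min/max/default), which suggests a generic bounds check. The validator below is a hedged sketch of that idea, not code from this commit:

import json
from pathlib import Path

def validate_parameters(template_path: str, params: dict) -> list:
    """Return a list of violations of a template's parameter_schema."""
    schema = json.loads(Path(template_path).read_text())["parameter_schema"]
    errors = []
    for key, rules in schema.items():
        value = params.get(key, rules.get("default"))
        if value is None:
            continue  # null means "inherit" (e.g., the chart timeframe)
        if "min" in rules and value < rules["min"]:
            errors.append(f"{key}={value} below min {rules['min']}")
        if "max" in rules and value > rules["max"]:
            errors.append(f"{key}={value} above max {rules['max']}")
    return errors

# validate_parameters("config/indicators/templates/sma_template.json", {"period": 500})
# -> ["period=500 above max 200"]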
20
config/indicators/user_indicators/bollinger_bands_08c5ed71.json
Normal file
@ -0,0 +1,20 @@
{
    "id": "bollinger_bands_08c5ed71",
    "name": "Bollinger Tight",
    "description": "Tight Bollinger Bands (20, 1.5) for sensitive volatility",
    "type": "bollinger_bands",
    "display_type": "overlay",
    "parameters": {
        "period": 20,
        "std_dev": 1.5
    },
    "styling": {
        "color": "#e83e8c",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "visible": true,
    "created_date": "2025-06-04T04:16:35.460797+00:00",
    "modified_date": "2025-06-04T04:16:35.460797+00:00"
}
20
config/indicators/user_indicators/bollinger_bands_69b378e2.json
Normal file
@ -0,0 +1,20 @@
{
    "id": "bollinger_bands_69b378e2",
    "name": "Bollinger Bands",
    "description": "Standard Bollinger Bands (20, 2) for volatility analysis",
    "type": "bollinger_bands",
    "display_type": "overlay",
    "parameters": {
        "period": 20,
        "std_dev": 2
    },
    "styling": {
        "color": "#6f42c1",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "visible": true,
    "created_date": "2025-06-04T04:16:35.460105+00:00",
    "modified_date": "2025-06-06T05:32:24.994486+00:00"
}
20
config/indicators/user_indicators/ema_b869638d.json
Normal file
@ -0,0 +1,20 @@
{
    "id": "ema_b869638d",
    "name": "EMA 12 (15 minutes)",
    "description": "",
    "type": "ema",
    "display_type": "overlay",
    "parameters": {
        "period": 12
    },
    "styling": {
        "color": "#007bff",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "timeframe": "15m",
    "visible": true,
    "created_date": "2025-06-06T06:56:54.181578+00:00",
    "modified_date": "2025-06-06T06:56:54.181578+00:00"
}
20
config/indicators/user_indicators/ema_bfbf3a1d.json
Normal file
@ -0,0 +1,20 @@
{
    "id": "ema_bfbf3a1d",
    "name": "EMA 12 (5 minutes)",
    "description": "",
    "type": "ema",
    "display_type": "overlay",
    "parameters": {
        "period": 12
    },
    "styling": {
        "color": "#007bff",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "timeframe": "5m",
    "visible": true,
    "created_date": "2025-06-06T07:02:34.613543+00:00",
    "modified_date": "2025-06-06T07:23:10.757978+00:00"
}
19
config/indicators/user_indicators/ema_ca5fd53d.json
Normal file
@ -0,0 +1,19 @@
{
    "id": "ema_ca5fd53d",
    "name": "EMA 12",
    "description": "12-period Exponential Moving Average for fast signals",
    "type": "ema",
    "display_type": "overlay",
    "parameters": {
        "period": 12
    },
    "styling": {
        "color": "#8880ff",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "visible": true,
    "created_date": "2025-06-04T04:16:35.455729+00:00",
    "modified_date": "2025-06-06T04:14:33.123102+00:00"
}
19
config/indicators/user_indicators/ema_de4fc14c.json
Normal file
@ -0,0 +1,19 @@
{
    "id": "ema_de4fc14c",
    "name": "EMA 26",
    "description": "26-period Exponential Moving Average for slower signals",
    "type": "ema",
    "display_type": "overlay",
    "parameters": {
        "period": 26
    },
    "styling": {
        "color": "#28a745",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "visible": true,
    "created_date": "2025-06-04T04:16:35.456253+00:00",
    "modified_date": "2025-06-04T04:16:35.456253+00:00"
}
22
config/indicators/user_indicators/macd_307935a7.json
Normal file
@ -0,0 +1,22 @@
{
    "id": "macd_307935a7",
    "name": "MACD Fast",
    "description": "Fast MACD (5, 13, 4) for quick signals",
    "type": "macd",
    "display_type": "subplot",
    "parameters": {
        "fast_period": 5,
        "slow_period": 13,
        "signal_period": 4
    },
    "styling": {
        "color": "#dc3545",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "timeframe": "1h",
    "visible": true,
    "created_date": "2025-06-04T04:16:35.459602+00:00",
    "modified_date": "2025-06-06T07:03:58.642238+00:00"
}
21
config/indicators/user_indicators/macd_7335a9bd.json
Normal file
@ -0,0 +1,21 @@
{
    "id": "macd_7335a9bd",
    "name": "MACD Standard",
    "description": "Standard MACD (12, 26, 9) for trend changes",
    "type": "macd",
    "display_type": "subplot",
    "parameters": {
        "fast_period": 12,
        "slow_period": 26,
        "signal_period": 9
    },
    "styling": {
        "color": "#fd7e14",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "visible": true,
    "created_date": "2025-06-04T04:16:35.459030+00:00",
    "modified_date": "2025-06-04T04:16:35.459030+00:00"
}
19
config/indicators/user_indicators/rsi_1a0e1320.json
Normal file
@ -0,0 +1,19 @@
{
    "id": "rsi_1a0e1320",
    "name": "RSI 21",
    "description": "21-period RSI for less sensitive momentum signals",
    "type": "rsi",
    "display_type": "subplot",
    "parameters": {
        "period": 21
    },
    "styling": {
        "color": "#17a2b8",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "visible": true,
    "created_date": "2025-06-04T04:16:35.458018+00:00",
    "modified_date": "2025-06-04T04:16:35.458018+00:00"
}
19
config/indicators/user_indicators/rsi_5d160ff7.json
Normal file
@ -0,0 +1,19 @@
{
    "id": "rsi_5d160ff7",
    "name": "RSI 14",
    "description": "14-period RSI for momentum analysis",
    "type": "rsi",
    "display_type": "subplot",
    "parameters": {
        "period": 14
    },
    "styling": {
        "color": "#20c997",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "visible": true,
    "created_date": "2025-06-04T04:16:35.457515+00:00",
    "modified_date": "2025-06-04T04:16:35.457515+00:00"
}
19
config/indicators/user_indicators/sma_0e235df1.json
Normal file
@ -0,0 +1,19 @@
{
    "id": "sma_0e235df1",
    "name": "SMA 50",
    "description": "50-period Simple Moving Average for medium-term trend",
    "type": "sma",
    "display_type": "overlay",
    "parameters": {
        "period": 50
    },
    "styling": {
        "color": "#6c757d",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "visible": true,
    "created_date": "2025-06-04T04:16:35.454653+00:00",
    "modified_date": "2025-06-04T04:16:35.454653+00:00"
}
19
config/indicators/user_indicators/sma_8c487df2.json
Normal file
@ -0,0 +1,19 @@
{
    "id": "sma_8c487df2",
    "name": "SMA 20",
    "description": "20-period Simple Moving Average for short-term trend",
    "type": "sma",
    "display_type": "overlay",
    "parameters": {
        "period": 20
    },
    "styling": {
        "color": "#007bff",
        "line_width": 2,
        "opacity": 1.0,
        "line_style": "solid"
    },
    "visible": true,
    "created_date": "2025-06-04T04:16:35.453614+00:00",
    "modified_date": "2025-06-04T04:16:35.453614+00:00"
}
@ -21,10 +21,10 @@ class DatabaseSettings(BaseSettings):
     """Database configuration settings."""

     host: str = Field(default="localhost", env="POSTGRES_HOST")
-    port: int = Field(default=5432, env="POSTGRES_PORT")
+    port: int = Field(default=5434, env="POSTGRES_PORT")
     database: str = Field(default="dashboard", env="POSTGRES_DB")
     user: str = Field(default="dashboard", env="POSTGRES_USER")
-    password: str = Field(default="dashboard123", env="POSTGRES_PASSWORD")
+    password: str = Field(default="", env="POSTGRES_PASSWORD")
     url: Optional[str] = Field(default=None, env="DATABASE_URL")

     @property
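
One way to sanity-check this change (a hedged sketch; it assumes pydantic v1-style BaseSettings, as implied by the `env=` kwargs, and an import path that this hunk does not show):

import os

# The import path below is an assumption; the hunk omits the module's filename.
from config.settings import DatabaseSettings

os.environ["POSTGRES_PORT"] = "5432"   # an explicit env var overrides the new default
settings = DatabaseSettings()
assert settings.port == 5432           # read from the environment
assert settings.password == ""         # new default: no hardcoded secret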
12
dashboard/__init__.py
Normal file
@ -0,0 +1,12 @@
"""
Dashboard package for the Crypto Trading Bot Dashboard.

This package contains modular dashboard components:
- layouts: UI layout definitions
- callbacks: Dash callback functions
- components: Reusable UI components
"""

from .app import create_app

__all__ = ['create_app']
78
dashboard/app.py
Normal file
@ -0,0 +1,78 @@
"""
Main dashboard application module.
"""

import dash
from dash import html, dcc
import dash_bootstrap_components as dbc
from utils.logger import get_logger
from dashboard.layouts import (
    get_market_data_layout,
    get_bot_management_layout,
    get_performance_layout,
    get_system_health_layout
)
from dashboard.components import create_indicator_modal

logger = get_logger("dashboard_app")


def create_app():
    """Create and configure the Dash application."""
    # Initialize Dash app
    app = dash.Dash(__name__, suppress_callback_exceptions=True, external_stylesheets=[dbc.themes.LUX])

    # Define the main layout
    app.layout = html.Div([
        html.Div([
            # Page title
            html.H1("🚀 Crypto Trading Bot Dashboard",
                    style={'text-align': 'center', 'color': '#2c3e50', 'margin-bottom': '30px'}),

            # Navigation tabs
            dcc.Tabs(id='main-tabs', value='market-data', children=[
                dcc.Tab(label='📊 Market Data', value='market-data'),
                dcc.Tab(label='🤖 Bot Management', value='bot-management'),
                dcc.Tab(label='📈 Performance', value='performance'),
                dcc.Tab(label='⚙️ System Health', value='system-health'),
            ], style={'margin-bottom': '20px'}),

            # Tab content container
            html.Div(id='tab-content'),

            # Hidden button for callback compatibility (real button is in market data layout)
            html.Button(id='add-indicator-btn', style={'display': 'none'}),

            # Add Indicator Modal
            create_indicator_modal(),

            # Auto-refresh interval
            dcc.Interval(
                id='interval-component',
                interval=30*1000,  # Update every 30 seconds
                n_intervals=0
            )
        ])
    ])

    return app


def register_callbacks(app):
    """Register all dashboard callbacks."""
    from dashboard.callbacks import (
        register_navigation_callbacks,
        register_chart_callbacks,
        register_indicator_callbacks,
        register_system_health_callbacks,
        register_data_analysis_callbacks
    )

    # Register all callback modules
    register_navigation_callbacks(app)
    register_chart_callbacks(app)
    register_indicator_callbacks(app)
    register_system_health_callbacks(app)
    register_data_analysis_callbacks(app)

    logger.info("All dashboard callbacks registered successfully")
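
A hedged sketch of an entry point tying these together (the host and port are assumptions; neither is shown in this commit):

from dashboard.app import create_app, register_callbacks

app = create_app()
register_callbacks(app)

if __name__ == "__main__":
    # Dash >= 2.16 exposes app.run; older releases use app.run_server
    app.run(debug=False, host="0.0.0.0", port=8050)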
15
dashboard/callbacks/__init__.py
Normal file
@ -0,0 +1,15 @@
"""
Callback modules for the dashboard.
"""

from .navigation import register_navigation_callbacks
from .charts import register_chart_callbacks
from .indicators import register_indicator_callbacks
from .system_health import register_system_health_callbacks
from .data_analysis import register_data_analysis_callbacks

__all__ = [
    'register_navigation_callbacks',
    'register_chart_callbacks',
    'register_indicator_callbacks',
    'register_system_health_callbacks',
    'register_data_analysis_callbacks'
]
214
dashboard/callbacks/charts.py
Normal file
@ -0,0 +1,214 @@
"""
Chart-related callbacks for the dashboard.
"""

from dash import Output, Input, State, Patch, ctx, html, no_update, dcc
import dash_bootstrap_components as dbc
from datetime import datetime, timedelta
from utils.logger import get_logger
from components.charts import (
    create_strategy_chart,
    create_chart_with_indicators,
    create_error_chart,
)
from dashboard.components.data_analysis import get_market_statistics
from components.charts.config import get_all_example_strategies
from database.connection import DatabaseManager
from components.charts.builder import ChartBuilder
from components.charts.utils import prepare_chart_data
import pandas as pd
import io

logger = get_logger("default_logger")


def calculate_time_range(time_range_quick, custom_start_date, custom_end_date, analysis_mode, n_intervals):
    """Calculate days_back and status message based on time range controls."""
    try:
        predefined_ranges = ['1h', '4h', '6h', '12h', '1d', '3d', '7d', '30d']

        if time_range_quick in predefined_ranges:
            time_map = {
                '1h': (1/24, '🕐 Last 1 Hour'), '4h': (4/24, '🕐 Last 4 Hours'), '6h': (6/24, '🕐 Last 6 Hours'),
                '12h': (12/24, '🕐 Last 12 Hours'), '1d': (1, '📅 Last 1 Day'), '3d': (3, '📅 Last 3 Days'),
                '7d': (7, '📅 Last 7 Days'), '30d': (30, '📅 Last 30 Days')
            }
            days_back_fractional, label = time_map[time_range_quick]
            mode_text = "🔒 Locked" if analysis_mode == 'locked' else "🔴 Live"
            status = f"{label} | {mode_text}"
            days_back = days_back_fractional if days_back_fractional < 1 else int(days_back_fractional)
            return days_back, status

        if time_range_quick == 'custom' and custom_start_date and custom_end_date:
            start_date = datetime.fromisoformat(custom_start_date.split('T')[0])
            end_date = datetime.fromisoformat(custom_end_date.split('T')[0])
            days_diff = (end_date - start_date).days
            status = f"📅 Custom Range: {start_date.strftime('%Y-%m-%d')} to {end_date.strftime('%Y-%m-%d')} ({days_diff} days)"
            return max(1, days_diff), status

        if time_range_quick == 'realtime':
            mode_text = "🔒 Analysis Mode" if analysis_mode == 'locked' else "🔴 Real-time Updates"
            status = f"📈 Real-time Mode | {mode_text} (Default: Last 7 Days)"
            return 7, status

        mode_text = "🔒 Analysis Mode" if analysis_mode == 'locked' else "🔴 Live"
        default_label = "📅 Default (Last 7 Days)"
        if time_range_quick == 'custom' and not (custom_start_date and custom_end_date):
            default_label = "⏳ Select Custom Dates"
        status = f"{default_label} | {mode_text}"
        return 7, status

    except Exception as e:
        logger.warning(f"Error calculating time range: {e}. Defaulting to 7 days.")
        return 7, "⚠️ Error in time range. Defaulting to 7 days."


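# --- Editorial examples (hypothetical; not part of the committed file) ---
# calculate_time_range keeps sub-day windows fractional and truncates the rest:
assert calculate_time_range('3d', None, None, 'locked', 0) == (3, '📅 Last 3 Days | 🔒 Locked')
assert calculate_time_range('custom', '2025-06-01', '2025-06-05', 'live', 0)[0] == 4
_days, _status = calculate_time_range('4h', None, None, 'live', 0)
assert 0 < _days < 1 and _status.endswith('🔴 Live')

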
def register_chart_callbacks(app):
    """Register chart-related callbacks."""

    @app.callback(
        [Output('price-chart', 'figure'),
         Output('time-range-status', 'children'),
         Output('chart-data-store', 'data')],
        [Input('symbol-dropdown', 'value'),
         Input('timeframe-dropdown', 'value'),
         Input('overlay-indicators-checklist', 'value'),
         Input('subplot-indicators-checklist', 'value'),
         Input('strategy-dropdown', 'value'),
         Input('time-range-quick-select', 'value'),
         Input('custom-date-range', 'start_date'),
         Input('custom-date-range', 'end_date'),
         Input('analysis-mode-toggle', 'value'),
         Input('interval-component', 'n_intervals')],
        [State('price-chart', 'relayoutData'),
         State('price-chart', 'figure')]
    )
    def update_price_chart(symbol, timeframe, overlay_indicators, subplot_indicators, selected_strategy,
                           time_range_quick, custom_start_date, custom_end_date, analysis_mode, n_intervals,
                           relayout_data, current_figure):
        try:
            triggered_id = ctx.triggered_id
            if triggered_id == 'interval-component' and analysis_mode == 'locked':
                return no_update, no_update, no_update

            days_back, status_message = calculate_time_range(
                time_range_quick, custom_start_date, custom_end_date, analysis_mode, n_intervals
            )

            chart_df = pd.DataFrame()
            if selected_strategy and selected_strategy != 'basic':
                fig, chart_df = create_strategy_chart(symbol, timeframe, selected_strategy, days_back=days_back)
            else:
                fig, chart_df = create_chart_with_indicators(
                    symbol=symbol, timeframe=timeframe,
                    overlay_indicators=overlay_indicators or [], subplot_indicators=subplot_indicators or [],
                    days_back=days_back
                )

            stored_data = None
            if chart_df is not None and not chart_df.empty:
                stored_data = chart_df.to_json(orient='split', date_format='iso')

            if relayout_data and 'xaxis.range' in relayout_data:
                fig.update_layout(xaxis=dict(range=relayout_data['xaxis.range']), yaxis=dict(range=relayout_data.get('yaxis.range')))

            return fig, status_message, stored_data

        except Exception as e:
            logger.error(f"Error updating price chart: {e}", exc_info=True)
            error_fig = create_error_chart(f"Error loading chart: {str(e)}")
            return error_fig, f"❌ Error: {str(e)}", None

    @app.callback(
        Output('analysis-mode-toggle', 'value'),
        Input('price-chart', 'relayoutData'),
        State('analysis-mode-toggle', 'value'),
        prevent_initial_call=True
    )
    def auto_lock_chart_on_interaction(relayout_data, current_mode):
        if relayout_data and 'xaxis.range' in relayout_data and current_mode != 'locked':
            return 'locked'
        return no_update

    @app.callback(
        Output('market-stats', 'children'),
        [Input('chart-data-store', 'data')],
        [State('symbol-dropdown', 'value'),
         State('timeframe-dropdown', 'value')]
    )
    def update_market_stats(stored_data, symbol, timeframe):
        if not stored_data:
            return dbc.Alert("Statistics will be available once chart data is loaded.", color="info")
        try:
            df = pd.read_json(io.StringIO(stored_data), orient='split')
            if df.empty:
                return dbc.Alert("Not enough data to calculate statistics.", color="warning")
            return get_market_statistics(df, symbol, timeframe)
        except Exception as e:
            logger.error(f"Error updating market stats from stored data: {e}", exc_info=True)
            return dbc.Alert(f"Error loading statistics: {e}", color="danger")

    @app.callback(
        Output("download-chart-data", "data"),
        [Input("export-csv-btn", "n_clicks"),
         Input("export-json-btn", "n_clicks")],
        [State("chart-data-store", "data"),
         State("symbol-dropdown", "value"),
         State("timeframe-dropdown", "value")],
        prevent_initial_call=True,
    )
    def export_chart_data(csv_clicks, json_clicks, stored_data, symbol, timeframe):
        triggered_id = ctx.triggered_id
        if not triggered_id or not stored_data:
            return no_update
        try:
            df = pd.read_json(io.StringIO(stored_data), orient='split')
            if df.empty:
                return no_update
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            filename_base = f"chart_data_{symbol}_{timeframe}_{timestamp}"
            if triggered_id == "export-csv-btn":
                return dcc.send_data_frame(df.to_csv, f"{filename_base}.csv", index=False)
            elif triggered_id == "export-json-btn":
                return dict(content=df.to_json(orient='records', date_format='iso'), filename=f"{filename_base}.json")
        except Exception as e:
            logger.error(f"Error exporting chart data from store: {e}", exc_info=True)
            return no_update

    @app.callback(
        [Output('overlay-indicators-checklist', 'value'),
         Output('subplot-indicators-checklist', 'value')],
        [Input('strategy-dropdown', 'value')]
    )
    def update_indicators_from_strategy(selected_strategy):
        if not selected_strategy or selected_strategy == 'basic':
            return [], []
        try:
            all_strategies = get_all_example_strategies()
            if selected_strategy in all_strategies:
                strategy_example = all_strategies[selected_strategy]
                config = strategy_example.config
                overlay_indicators = config.overlay_indicators or []
                subplot_indicators = []
                for subplot_config in config.subplot_configs or []:
                    subplot_indicators.extend(subplot_config.indicators or [])
                return overlay_indicators, subplot_indicators
            else:
                return [], []
        except Exception as e:
            logger.error(f"Error loading strategy indicators: {e}", exc_info=True)
            return [], []

    @app.callback(
        [Output('custom-date-range', 'start_date'),
         Output('custom-date-range', 'end_date'),
         Output('time-range-quick-select', 'value')],
        [Input('clear-date-range-btn', 'n_clicks')],
        prevent_initial_call=True
    )
    def clear_custom_date_range(n_clicks):
        if n_clicks and n_clicks > 0:
            return None, None, '7d'
        return no_update, no_update, no_update

    logger.info("Chart callback: Chart callbacks registered successfully")
49
dashboard/callbacks/data_analysis.py
Normal file
@ -0,0 +1,49 @@
"""
Data analysis callbacks for the dashboard.
"""

from dash import Output, Input, html, dcc
import dash_bootstrap_components as dbc
from utils.logger import get_logger
from dashboard.components.data_analysis import (
    VolumeAnalyzer,
    PriceMovementAnalyzer,
    create_volume_analysis_chart,
    create_price_movement_chart,
    create_volume_stats_display,
    create_price_stats_display
)

logger = get_logger("data_analysis_callbacks")


def register_data_analysis_callbacks(app):
    """Register data analysis related callbacks."""

    logger.info("🚀 STARTING to register data analysis callbacks...")

    # Initial callback to populate charts on load
    @app.callback(
        [Output('analysis-chart-container', 'children'),
         Output('analysis-stats-container', 'children')],
        [Input('analysis-type-selector', 'value'),
         Input('analysis-period-selector', 'value')],
        prevent_initial_call=False
    )
    def update_data_analysis(analysis_type, period):
        """Update data analysis with statistical cards only (no duplicate charts)."""
        logger.info(f"🎯 DATA ANALYSIS CALLBACK TRIGGERED! Type: {analysis_type}, Period: {period}")

        # Return placeholder message since we're moving to enhanced market stats
        info_msg = dbc.Alert([
            html.H4("📊 Statistical Analysis", className="alert-heading"),
            html.P("Data analysis has been integrated into the Market Statistics section above."),
            html.P("The enhanced statistics now include volume analysis, price movement analysis, and trend indicators."),
            html.P("Change the symbol and timeframe in the main chart to see updated analysis."),
            html.Hr(),
            html.P("This section will be updated with additional analytical tools in future versions.", className="mb-0")
        ], color="info")

        return info_msg, html.Div()

    logger.info("✅ Data analysis callbacks registered successfully")
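
# Wiring sketch (the app factory below is an assumption, not part of this diff): how a
# register_* module like this one is typically attached to the Dash app.
import dash
import dash_bootstrap_components as dbc

from dashboard.callbacks.data_analysis import register_data_analysis_callbacks

app = dash.Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP])
register_data_analysis_callbacks(app)  # attaches the placeholder analysis callback above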
506
dashboard/callbacks/indicators.py
Normal file
@ -0,0 +1,506 @@
"""
Indicator-related callbacks for the dashboard.
"""

import dash
from dash import Output, Input, State, html, dcc, callback_context, no_update
import dash_bootstrap_components as dbc
import json
from utils.logger import get_logger

logger = get_logger("default_logger")


def register_indicator_callbacks(app):
    """Register indicator-related callbacks."""

    # Modal control callbacks
    @app.callback(
        Output('indicator-modal', 'is_open'),
        [Input('add-indicator-btn-visible', 'n_clicks'),
         Input('cancel-indicator-btn', 'n_clicks'),
         Input('save-indicator-btn', 'n_clicks'),
         Input({'type': 'edit-indicator-btn', 'index': dash.ALL}, 'n_clicks')],
        [State('indicator-modal', 'is_open')],
        prevent_initial_call=True
    )
    def toggle_indicator_modal(add_clicks, cancel_clicks, save_clicks, edit_clicks, is_open):
        """Toggle the visibility of the add indicator modal."""
        ctx = callback_context
        if not ctx.triggered:
            return is_open

        triggered_id = ctx.triggered[0]['prop_id'].split('.')[0]

        # Check for add button click
        if triggered_id == 'add-indicator-btn-visible' and add_clicks:
            return True

        # Check for edit button clicks, ensuring a click actually happened
        if 'edit-indicator-btn' in triggered_id and any(c for c in edit_clicks if c is not None):
            return True

        # Check for cancel or save clicks to close the modal
        if triggered_id in ['cancel-indicator-btn', 'save-indicator-btn']:
            return False

        return is_open
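
# Sketch (standalone, values assumed): for pattern-matching components the triggered
# prop_id is a JSON id followed by the property name, so the originating index is
# recovered with json.loads -- the same trick the delete/edit callbacks below rely on.
import json as _json

_prop_id = '{"index":"sma_20","type":"edit-indicator-btn"}.n_clicks'
_button_info = _json.loads(_prop_id.split('.')[0])
assert _button_info['index'] == 'sma_20'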

    # Update parameter fields based on indicator type
    @app.callback(
        [Output('indicator-parameters-message', 'style'),
         Output('sma-parameters', 'style'),
         Output('ema-parameters', 'style'),
         Output('rsi-parameters', 'style'),
         Output('macd-parameters', 'style'),
         Output('bb-parameters', 'style')],
        Input('indicator-type-dropdown', 'value'),
        prevent_initial_call=True
    )
    def update_parameter_fields(indicator_type):
        """Show/hide parameter input fields based on selected indicator type."""
        # Default styles
        hidden_style = {'display': 'none', 'margin-bottom': '10px'}
        visible_style = {'display': 'block', 'margin-bottom': '10px'}

        # Default message visibility
        message_style = {'display': 'block'} if not indicator_type else {'display': 'none'}

        # Initialize all as hidden
        sma_style = hidden_style
        ema_style = hidden_style
        rsi_style = hidden_style
        macd_style = hidden_style
        bb_style = hidden_style

        # Show the relevant parameter section
        if indicator_type == 'sma':
            sma_style = visible_style
        elif indicator_type == 'ema':
            ema_style = visible_style
        elif indicator_type == 'rsi':
            rsi_style = visible_style
        elif indicator_type == 'macd':
            macd_style = visible_style
        elif indicator_type == 'bollinger_bands':
            bb_style = visible_style

        return message_style, sma_style, ema_style, rsi_style, macd_style, bb_style
    # Save indicator callback
    @app.callback(
        [Output('save-indicator-feedback', 'children'),
         Output('overlay-indicators-checklist', 'options'),
         Output('subplot-indicators-checklist', 'options')],
        Input('save-indicator-btn', 'n_clicks'),
        [State('indicator-name-input', 'value'),
         State('indicator-type-dropdown', 'value'),
         State('indicator-description-input', 'value'),
         State('indicator-timeframe-dropdown', 'value'),
         State('indicator-color-input', 'value'),
         State('indicator-line-width-slider', 'value'),
         # SMA parameters
         State('sma-period-input', 'value'),
         # EMA parameters
         State('ema-period-input', 'value'),
         # RSI parameters
         State('rsi-period-input', 'value'),
         # MACD parameters
         State('macd-fast-period-input', 'value'),
         State('macd-slow-period-input', 'value'),
         State('macd-signal-period-input', 'value'),
         # Bollinger Bands parameters
         State('bb-period-input', 'value'),
         State('bb-stddev-input', 'value'),
         # Edit mode data
         State('edit-indicator-store', 'data')],
        prevent_initial_call=True
    )
    def save_new_indicator(n_clicks, name, indicator_type, description, timeframe, color, line_width,
                           sma_period, ema_period, rsi_period,
                           macd_fast, macd_slow, macd_signal,
                           bb_period, bb_stddev, edit_data):
        """Save a new indicator or update an existing one."""
        if not n_clicks or not name or not indicator_type:
            return "", no_update, no_update

        try:
            # Get indicator manager
            from components.charts.indicator_manager import get_indicator_manager
            manager = get_indicator_manager()

            # Collect parameters based on indicator type and actual input values
            parameters = {}

            if indicator_type == 'sma':
                parameters = {'period': sma_period or 20}
            elif indicator_type == 'ema':
                parameters = {'period': ema_period or 12}
            elif indicator_type == 'rsi':
                parameters = {'period': rsi_period or 14}
            elif indicator_type == 'macd':
                parameters = {
                    'fast_period': macd_fast or 12,
                    'slow_period': macd_slow or 26,
                    'signal_period': macd_signal or 9
                }
            elif indicator_type == 'bollinger_bands':
                parameters = {
                    'period': bb_period or 20,
                    'std_dev': bb_stddev or 2.0
                }

            feedback_msg = None
            # Check if this is an edit operation
            is_edit = edit_data and edit_data.get('mode') == 'edit'

            if is_edit:
                # Update existing indicator
                indicator_id = edit_data.get('indicator_id')
                success = manager.update_indicator(
                    indicator_id,
                    name=name,
                    description=description or "",
                    parameters=parameters,
                    styling={'color': color or "#007bff", 'line_width': line_width or 2},
                    timeframe=timeframe or None
                )

                if success:
                    feedback_msg = dbc.Alert(f"Indicator '{name}' updated successfully!", color="success")
                else:
                    feedback_msg = dbc.Alert("Failed to update indicator.", color="danger")
                    return feedback_msg, no_update, no_update
            else:
                # Create new indicator
                new_indicator = manager.create_indicator(
                    name=name,
                    indicator_type=indicator_type,
                    parameters=parameters,
                    description=description or "",
                    color=color or "#007bff",
                    timeframe=timeframe or None
                )

                if not new_indicator:
                    feedback_msg = dbc.Alert("Failed to save indicator.", color="danger")
                    return feedback_msg, no_update, no_update

                feedback_msg = dbc.Alert(f"Indicator '{name}' saved successfully!", color="success")

            # Refresh the indicator options
            overlay_indicators = manager.get_indicators_by_type('overlay')
            subplot_indicators = manager.get_indicators_by_type('subplot')

            overlay_options = []
            for indicator in overlay_indicators:
                display_name = f"{indicator.name} ({indicator.type.upper()})"
                overlay_options.append({'label': display_name, 'value': indicator.id})

            subplot_options = []
            for indicator in subplot_indicators:
                display_name = f"{indicator.name} ({indicator.type.upper()})"
                subplot_options.append({'label': display_name, 'value': indicator.id})

            return feedback_msg, overlay_options, subplot_options

        except Exception as e:
            logger.error(f"Indicator callback: Error saving indicator: {e}")
            error_msg = dbc.Alert(f"Error: {str(e)}", color="danger")
            return error_msg, no_update, no_update
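
# Illustrative alternative (not part of this commit): the per-type if/elif chain above
# expressed as a lookup table of (field, default) pairs, which keeps defaults in one
# place when new indicator types are added. Names here are assumptions.
PARAM_SPEC = {
    'sma': [('period', 20)],
    'ema': [('period', 12)],
    'rsi': [('period', 14)],
    'macd': [('fast_period', 12), ('slow_period', 26), ('signal_period', 9)],
    'bollinger_bands': [('period', 20), ('std_dev', 2.0)],
}

def collect_parameters(indicator_type, **inputs):
    """Build the parameters dict from raw form inputs, falling back to defaults."""
    return {key: inputs.get(key) or default for key, default in PARAM_SPEC.get(indicator_type, [])}

# collect_parameters('macd', slow_period=30) -> {'fast_period': 12, 'slow_period': 30, 'signal_period': 9}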

    # Update custom indicator lists with edit/delete buttons
    @app.callback(
        [Output('overlay-indicators-list', 'children'),
         Output('subplot-indicators-list', 'children')],
        [Input('overlay-indicators-checklist', 'options'),
         Input('subplot-indicators-checklist', 'options'),
         Input('overlay-indicators-checklist', 'value'),
         Input('subplot-indicators-checklist', 'value')]
    )
    def update_custom_indicator_lists(overlay_options, subplot_options, overlay_values, subplot_values):
        """Create custom indicator lists with edit and delete buttons."""

        def create_indicator_item(option, is_checked):
            """Create a single indicator item with checkbox and buttons."""
            indicator_id = option['value']
            indicator_name = option['label']

            return html.Div([
                # Checkbox and name
                html.Div([
                    dcc.Checklist(
                        options=[{'label': '', 'value': indicator_id}],
                        value=[indicator_id] if is_checked else [],
                        id={'type': 'indicator-checkbox', 'index': indicator_id},
                        style={'display': 'inline-block', 'margin-right': '8px'}
                    ),
                    html.Span(indicator_name, style={'display': 'inline-block', 'vertical-align': 'top'})
                ], style={'display': 'inline-block', 'width': '70%'}),

                # Edit and Delete buttons
                html.Div([
                    html.Button(
                        "✏️",
                        id={'type': 'edit-indicator-btn', 'index': indicator_id},
                        title="Edit indicator",
                        className="btn btn-sm btn-outline-primary",
                        style={'margin-left': '5px'}
                    ),
                    html.Button(
                        "🗑️",
                        id={'type': 'delete-indicator-btn', 'index': indicator_id},
                        title="Delete indicator",
                        className="btn btn-sm btn-outline-danger",
                        style={'margin-left': '5px'}
                    )
                ], style={'display': 'inline-block', 'width': '30%', 'text-align': 'right'})
            ], style={
                'display': 'block',
                'padding': '5px 0',
                'border-bottom': '1px solid #f0f0f0',
                'margin-bottom': '5px'
            })

        # Create overlay indicators list
        overlay_list = []
        for option in overlay_options:
            is_checked = option['value'] in (overlay_values or [])
            overlay_list.append(create_indicator_item(option, is_checked))

        # Create subplot indicators list
        subplot_list = []
        for option in subplot_options:
            is_checked = option['value'] in (subplot_values or [])
            subplot_list.append(create_indicator_item(option, is_checked))

        return overlay_list, subplot_list

    # Sync individual indicator checkboxes with main checklist
    @app.callback(
        Output('overlay-indicators-checklist', 'value', allow_duplicate=True),
        [Input({'type': 'indicator-checkbox', 'index': dash.ALL}, 'value')],
        [State('overlay-indicators-checklist', 'options')],
        prevent_initial_call=True
    )
    def sync_overlay_indicators(checkbox_values, overlay_options):
        """Sync individual indicator checkboxes with main overlay checklist."""
        if not checkbox_values or not overlay_options:
            return []

        selected_indicators = []
        overlay_ids = [opt['value'] for opt in overlay_options]

        # Flatten the checkbox values and filter for overlay indicators
        for values in checkbox_values:
            if values:  # values is a list, check if not empty
                for indicator_id in values:
                    if indicator_id in overlay_ids:
                        selected_indicators.append(indicator_id)

        # Remove duplicates
        return list(set(selected_indicators))

    @app.callback(
        Output('subplot-indicators-checklist', 'value', allow_duplicate=True),
        [Input({'type': 'indicator-checkbox', 'index': dash.ALL}, 'value')],
        [State('subplot-indicators-checklist', 'options')],
        prevent_initial_call=True
    )
    def sync_subplot_indicators(checkbox_values, subplot_options):
        """Sync individual indicator checkboxes with main subplot checklist."""
        if not checkbox_values or not subplot_options:
            return []

        selected_indicators = []
        subplot_ids = [opt['value'] for opt in subplot_options]

        # Flatten the checkbox values and filter for subplot indicators
        for values in checkbox_values:
            if values:  # values is a list, check if not empty
                for indicator_id in values:
                    if indicator_id in subplot_ids:
                        selected_indicators.append(indicator_id)

        # Remove duplicates
        return list(set(selected_indicators))
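
# The two sync callbacks above share their flatten-and-filter logic; a small pure helper
# (illustrative, name assumed) would make it testable in isolation:
def _selected_ids(checkbox_values, known_ids):
    """Flatten pattern-matching checkbox values, keeping only known indicator ids."""
    flat = {i for values in checkbox_values or [] for i in (values or [])}
    return [i for i in flat if i in set(known_ids)]

assert sorted(_selected_ids([['sma_20'], [], ['rsi_14']], ['sma_20', 'rsi_14'])) == ['rsi_14', 'sma_20']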

    # Handle delete indicator
    @app.callback(
        [Output('save-indicator-feedback', 'children', allow_duplicate=True),
         Output('overlay-indicators-checklist', 'options', allow_duplicate=True),
         Output('subplot-indicators-checklist', 'options', allow_duplicate=True)],
        [Input({'type': 'delete-indicator-btn', 'index': dash.ALL}, 'n_clicks')],
        [State({'type': 'delete-indicator-btn', 'index': dash.ALL}, 'id')],
        prevent_initial_call=True
    )
    def delete_indicator(delete_clicks, button_ids):
        """Delete an indicator when delete button is clicked."""
        ctx = callback_context
        if not ctx.triggered or not any(delete_clicks):
            return no_update, no_update, no_update

        # Find which button was clicked
        triggered_id = ctx.triggered[0]['prop_id']
        button_info = json.loads(triggered_id.split('.')[0])
        indicator_id = button_info['index']

        try:
            # Get indicator manager and delete the indicator
            from components.charts.indicator_manager import get_indicator_manager
            manager = get_indicator_manager()

            # Load indicator to get its name before deletion
            indicator = manager.load_indicator(indicator_id)
            indicator_name = indicator.name if indicator else indicator_id

            if manager.delete_indicator(indicator_id):
                # Refresh the indicator options
                overlay_indicators = manager.get_indicators_by_type('overlay')
                subplot_indicators = manager.get_indicators_by_type('subplot')

                overlay_options = []
                for indicator in overlay_indicators:
                    display_name = f"{indicator.name} ({indicator.type.upper()})"
                    overlay_options.append({'label': display_name, 'value': indicator.id})

                subplot_options = []
                for indicator in subplot_indicators:
                    display_name = f"{indicator.name} ({indicator.type.upper()})"
                    subplot_options.append({'label': display_name, 'value': indicator.id})

                success_msg = dbc.Alert(f"Indicator '{indicator_name}' deleted.", color="warning")

                return success_msg, overlay_options, subplot_options
            else:
                error_msg = dbc.Alert("Failed to delete indicator.", color="danger")
                return error_msg, no_update, no_update

        except Exception as e:
            logger.error(f"Indicator callback: Error deleting indicator: {e}")
            error_msg = dbc.Alert(f"Error: {str(e)}", color="danger")
            return error_msg, no_update, no_update

    # Handle edit indicator - open modal with existing data
    @app.callback(
        [Output('modal-title', 'children'),
         Output('indicator-name-input', 'value'),
         Output('indicator-type-dropdown', 'value'),
         Output('indicator-description-input', 'value'),
         Output('indicator-timeframe-dropdown', 'value'),
         Output('indicator-color-input', 'value'),
         Output('edit-indicator-store', 'data'),
         # Add parameter field outputs
         Output('sma-period-input', 'value'),
         Output('ema-period-input', 'value'),
         Output('rsi-period-input', 'value'),
         Output('macd-fast-period-input', 'value'),
         Output('macd-slow-period-input', 'value'),
         Output('macd-signal-period-input', 'value'),
         Output('bb-period-input', 'value'),
         Output('bb-stddev-input', 'value')],
        [Input({'type': 'edit-indicator-btn', 'index': dash.ALL}, 'n_clicks')],
        [State({'type': 'edit-indicator-btn', 'index': dash.ALL}, 'id')],
        prevent_initial_call=True
    )
    def edit_indicator(edit_clicks, button_ids):
        """Load indicator data for editing."""
        ctx = callback_context
        if not ctx.triggered or not any(edit_clicks):
            return [no_update] * 15

        # Find which button was clicked
        triggered_id = ctx.triggered[0]['prop_id']
        button_info = json.loads(triggered_id.split('.')[0])
        indicator_id = button_info['index']

        try:
            # Load the indicator data
            from components.charts.indicator_manager import get_indicator_manager
            manager = get_indicator_manager()
            indicator = manager.load_indicator(indicator_id)

            if indicator:
                # Store indicator ID for update
                edit_data = {'indicator_id': indicator_id, 'mode': 'edit'}

                # Extract parameter values based on indicator type
                params = indicator.parameters

                # Default parameter values
                sma_period = None
                ema_period = None
                rsi_period = None
                macd_fast = None
                macd_slow = None
                macd_signal = None
                bb_period = None
                bb_stddev = None

                # Update with actual saved values
                if indicator.type == 'sma':
                    sma_period = params.get('period')
                elif indicator.type == 'ema':
                    ema_period = params.get('period')
                elif indicator.type == 'rsi':
                    rsi_period = params.get('period')
                elif indicator.type == 'macd':
                    macd_fast = params.get('fast_period')
                    macd_slow = params.get('slow_period')
                    macd_signal = params.get('signal_period')
                elif indicator.type == 'bollinger_bands':
                    bb_period = params.get('period')
                    bb_stddev = params.get('std_dev')

                return (
                    f"✏️ Edit Indicator: {indicator.name}",
                    indicator.name,
                    indicator.type,
                    indicator.description,
                    indicator.timeframe,
                    indicator.styling.color,
                    edit_data,
                    sma_period,
                    ema_period,
                    rsi_period,
                    macd_fast,
                    macd_slow,
                    macd_signal,
                    bb_period,
                    bb_stddev
                )
            else:
                return [no_update] * 15

        except Exception as e:
            logger.error(f"Indicator callback: Error loading indicator for edit: {e}")
            return [no_update] * 15

    # Reset modal form when closed or saved
    @app.callback(
        [Output('indicator-name-input', 'value', allow_duplicate=True),
         Output('indicator-type-dropdown', 'value', allow_duplicate=True),
         Output('indicator-description-input', 'value', allow_duplicate=True),
         Output('indicator-timeframe-dropdown', 'value', allow_duplicate=True),
         Output('indicator-color-input', 'value', allow_duplicate=True),
         Output('indicator-line-width-slider', 'value'),
         Output('modal-title', 'children', allow_duplicate=True),
         Output('edit-indicator-store', 'data', allow_duplicate=True),
         # Add parameter field resets
         Output('sma-period-input', 'value', allow_duplicate=True),
         Output('ema-period-input', 'value', allow_duplicate=True),
         Output('rsi-period-input', 'value', allow_duplicate=True),
         Output('macd-fast-period-input', 'value', allow_duplicate=True),
         Output('macd-slow-period-input', 'value', allow_duplicate=True),
         Output('macd-signal-period-input', 'value', allow_duplicate=True),
         Output('bb-period-input', 'value', allow_duplicate=True),
         Output('bb-stddev-input', 'value', allow_duplicate=True)],
        [Input('cancel-indicator-btn', 'n_clicks'),
         Input('save-indicator-btn', 'n_clicks')],  # Also reset on successful save
        prevent_initial_call=True
    )
    def reset_modal_form(cancel_clicks, save_clicks):
        """Reset the modal form to its default state."""
        return "", "", "", "", "", 2, "📊 Add New Indicator", None, 20, 12, 14, 12, 26, 9, 20, 2.0

    logger.info("Indicator callbacks: registered successfully")
32
dashboard/callbacks/navigation.py
Normal file
@ -0,0 +1,32 @@
"""
Navigation callbacks for tab switching.
"""

from dash import html, Output, Input
from dashboard.layouts import (
    get_market_data_layout,
    get_bot_management_layout,
    get_performance_layout,
    get_system_health_layout
)


def register_navigation_callbacks(app):
    """Register navigation-related callbacks."""

    @app.callback(
        Output('tab-content', 'children'),
        Input('main-tabs', 'value')
    )
    def render_tab_content(active_tab):
        """Render content based on selected tab."""
        if active_tab == 'market-data':
            return get_market_data_layout()
        elif active_tab == 'bot-management':
            return get_bot_management_layout()
        elif active_tab == 'performance':
            return get_performance_layout()
        elif active_tab == 'system-health':
            return get_system_health_layout()
        else:
            return html.Div("Tab not found")
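
# Illustrative alternative (not in this commit): the if/elif chain maps naturally onto a
# dict dispatch, so adding a tab becomes a one-line change. Names mirror the imports above.
TAB_LAYOUTS = {
    'market-data': get_market_data_layout,
    'bot-management': get_bot_management_layout,
    'performance': get_performance_layout,
    'system-health': get_system_health_layout,
}

def render_tab_content_via_dispatch(active_tab):
    layout_fn = TAB_LAYOUTS.get(active_tab)
    return layout_fn() if layout_fn else html.Div("Tab not found")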
567
dashboard/callbacks/system_health.py
Normal file
@ -0,0 +1,567 @@
"""
Enhanced system health callbacks for the dashboard.
"""

import asyncio
import json
import subprocess
import psutil
from datetime import datetime, timedelta
from typing import Dict, Any, Optional, List
from dash import Output, Input, State, html, callback_context, no_update
import dash_bootstrap_components as dbc
from utils.logger import get_logger
from database.connection import DatabaseManager
from database.redis_manager import get_sync_redis_manager

logger = get_logger("system_health_callbacks")


def register_system_health_callbacks(app):
    """Register enhanced system health callbacks with comprehensive monitoring."""

    # Quick Status Updates (Top Cards)
    @app.callback(
        [Output('data-collection-quick-status', 'children'),
         Output('database-quick-status', 'children'),
         Output('redis-quick-status', 'children'),
         Output('performance-quick-status', 'children')],
        Input('interval-component', 'n_intervals')
    )
    def update_quick_status(n_intervals):
        """Update quick status indicators."""
        try:
            # Data Collection Status
            dc_status = _get_data_collection_quick_status()

            # Database Status
            db_status = _get_database_quick_status()

            # Redis Status
            redis_status = _get_redis_quick_status()

            # Performance Status
            perf_status = _get_performance_quick_status()

            return dc_status, db_status, redis_status, perf_status

        except Exception as e:
            logger.error(f"Error updating quick status: {e}")
            error_status = dbc.Badge("🔴 Error", color="danger", className="me-1")
            return error_status, error_status, error_status, error_status

    # Detailed Data Collection Service Status
    @app.callback(
        [Output('data-collection-service-status', 'children'),
         Output('data-collection-metrics', 'children')],
        [Input('interval-component', 'n_intervals'),
         Input('refresh-data-status-btn', 'n_clicks')]
    )
    def update_data_collection_status(n_intervals, refresh_clicks):
        """Update detailed data collection service status and metrics."""
        try:
            service_status = _get_data_collection_service_status()
            metrics = _get_data_collection_metrics()

            return service_status, metrics

        except Exception as e:
            logger.error(f"Error updating data collection status: {e}")
            error_div = dbc.Alert(
                f"Error: {str(e)}",
                color="danger",
                dismissable=True
            )
            return error_div, error_div

    # Individual Collectors Status
    @app.callback(
        Output('individual-collectors-status', 'children'),
        [Input('interval-component', 'n_intervals'),
         Input('refresh-data-status-btn', 'n_clicks')]
    )
    def update_individual_collectors_status(n_intervals, refresh_clicks):
        """Update individual data collector health status."""
        try:
            return _get_individual_collectors_status()
        except Exception as e:
            logger.error(f"Error updating individual collectors status: {e}")
            return dbc.Alert(
                f"Error: {str(e)}",
                color="danger",
                dismissable=True
            )

    # Database Status and Statistics
    @app.callback(
        [Output('database-status', 'children'),
         Output('database-stats', 'children')],
        Input('interval-component', 'n_intervals')
    )
    def update_database_status(n_intervals):
        """Update database connection status and statistics."""
        try:
            db_status = _get_database_status()
            db_stats = _get_database_statistics()

            return db_status, db_stats

        except Exception as e:
            logger.error(f"Error updating database status: {e}")
            error_alert = dbc.Alert(
                f"Error: {str(e)}",
                color="danger",
                dismissable=True
            )
            return error_alert, error_alert

    # Redis Status and Statistics
    @app.callback(
        [Output('redis-status', 'children'),
         Output('redis-stats', 'children')],
        Input('interval-component', 'n_intervals')
    )
    def update_redis_status(n_intervals):
        """Update Redis connection status and statistics."""
        try:
            redis_status = _get_redis_status()
            redis_stats = _get_redis_statistics()

            return redis_status, redis_stats

        except Exception as e:
            logger.error(f"Error updating Redis status: {e}")
            error_alert = dbc.Alert(
                f"Error: {str(e)}",
                color="danger",
                dismissable=True
            )
            return error_alert, error_alert

    # System Performance Metrics
    @app.callback(
        Output('system-performance-metrics', 'children'),
        Input('interval-component', 'n_intervals')
    )
    def update_system_performance(n_intervals):
        """Update system performance metrics."""
        try:
            return _get_system_performance_metrics()
        except Exception as e:
            logger.error(f"Error updating system performance: {e}")
            return dbc.Alert(
                f"Error: {str(e)}",
                color="danger",
                dismissable=True
            )

    # Data Collection Details Modal
    @app.callback(
        [Output("collection-details-modal", "is_open"),
         Output("collection-details-content", "children")],
        [Input("view-collection-details-btn", "n_clicks")],
        [State("collection-details-modal", "is_open")]
    )
    def toggle_collection_details_modal(n_clicks, is_open):
        """Toggle and populate the collection details modal."""
        if n_clicks:
            details_content = _get_collection_details_content()
            return not is_open, details_content
        return is_open, no_update

    # Collection Logs Modal
    @app.callback(
        [Output("collection-logs-modal", "is_open"),
         Output("collection-logs-content", "children")],
        [Input("view-collection-logs-btn", "n_clicks"),
         Input("refresh-logs-btn", "n_clicks")],
        [State("collection-logs-modal", "is_open")],
        prevent_initial_call=True
    )
    def toggle_collection_logs_modal(logs_clicks, refresh_clicks, is_open):
        """Toggle and populate the collection logs modal."""
        ctx = callback_context
        if not ctx.triggered:
            return is_open, no_update

        triggered_id = ctx.triggered_id
        if triggered_id in ["view-collection-logs-btn", "refresh-logs-btn"]:
            logs_content = _get_collection_logs_content()
            return True, logs_content

        return is_open, no_update

    @app.callback(
        Output("collection-logs-modal", "is_open", allow_duplicate=True),
        Input("close-logs-modal", "n_clicks"),
        State("collection-logs-modal", "is_open"),
        prevent_initial_call=True
    )
    def close_logs_modal(n_clicks, is_open):
        """Close the collection logs modal."""
        if n_clicks:
            return not is_open
        return is_open

    logger.info("Enhanced system health callbacks registered successfully")
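
# Every callback above polls on Input('interval-component', 'n_intervals'). A minimal
# sketch of the layout element that drives them -- the 30-second period is an
# assumption, not fixed by this diff:
from dash import dcc

interval_component = dcc.Interval(id='interval-component', interval=30 * 1000, n_intervals=0)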
# Helper Functions

def _get_data_collection_quick_status() -> dbc.Badge:
    """Get quick data collection status."""
    try:
        is_running = _check_data_collection_service_running()
        if is_running:
            return dbc.Badge("Active", color="success", className="me-1")
        else:
            return dbc.Badge("Stopped", color="danger", className="me-1")
    except Exception:
        return dbc.Badge("Unknown", color="warning", className="me-1")


def _get_database_quick_status() -> dbc.Badge:
    """Get quick database status."""
    try:
        db_manager = DatabaseManager()
        db_manager.initialize()
        if db_manager.test_connection():
            return dbc.Badge("Connected", color="success", className="me-1")
        else:
            return dbc.Badge("Error", color="danger", className="me-1")
    except Exception:
        return dbc.Badge("Error", color="danger", className="me-1")


def _get_redis_quick_status() -> dbc.Badge:
    """Get quick Redis status."""
    try:
        redis_manager = get_sync_redis_manager()
        redis_manager.initialize()
        # This check is simplified as initialize() would raise an error on failure.
        # For a more explicit check, a dedicated test_connection could be added to SyncRedisManager.
        if redis_manager.client.ping():
            return dbc.Badge("Connected", color="success", className="me-1")
        else:
            return dbc.Badge("Error", color="danger", className="me-1")
    except Exception as e:
        logger.error(f"Redis quick status check failed: {e}")
        return dbc.Badge("Error", color="danger", className="me-1")


def _get_performance_quick_status() -> dbc.Badge:
    """Get quick performance status."""
    try:
        cpu_percent = psutil.cpu_percent(interval=0.1)
        memory = psutil.virtual_memory()

        if cpu_percent < 80 and memory.percent < 80:
            return dbc.Badge("Good", color="success", className="me-1")
        elif cpu_percent < 90 and memory.percent < 90:
            return dbc.Badge("Warning", color="warning", className="me-1")
        else:
            return dbc.Badge("High", color="danger", className="me-1")
    except Exception:
        return dbc.Badge("Unknown", color="secondary", className="me-1")


def _get_data_collection_service_status() -> html.Div:
    """Get detailed data collection service status."""
    try:
        is_running = _check_data_collection_service_running()
        current_time = datetime.now().strftime('%H:%M:%S')

        if is_running:
            status_badge = dbc.Badge("Service Running", color="success", className="me-2")
            status_text = html.P("Data collection service is actively collecting market data.", className="mb-0")
            details = html.Div()
        else:
            status_badge = dbc.Badge("Service Stopped", color="danger", className="me-2")
            status_text = html.P("Data collection service is not running.", className="text-danger")
            details = html.Div([
                html.P("To start the service, run:", className="mt-2 mb-1"),
                html.Code("python scripts/start_data_collection.py")
            ])

        return html.Div([
            dbc.Row([
                dbc.Col(status_badge, width="auto"),
                dbc.Col(html.P(f"Checked: {current_time}", className="text-muted mb-0"), width="auto")
            ], align="center", className="mb-2"),
            status_text,
            details
        ])
    except Exception as e:
        return dbc.Alert(f"Error checking status: {e}", color="danger")


def _get_data_collection_metrics() -> html.Div:
    """Get data collection metrics."""
    try:
        db_manager = DatabaseManager()
        db_manager.initialize()

        with db_manager.get_session() as session:
            from sqlalchemy import text
            candles_count = session.execute(text("SELECT COUNT(*) FROM market_data")).scalar() or 0
            tickers_count = session.execute(text("SELECT COUNT(*) FROM raw_trades WHERE data_type = 'ticker'")).scalar() or 0
            latest_market_data = session.execute(text("SELECT MAX(timestamp) FROM market_data")).scalar()
            latest_raw_data = session.execute(text("SELECT MAX(timestamp) FROM raw_trades")).scalar()

        latest_data = max(d for d in [latest_market_data, latest_raw_data] if d) if any([latest_market_data, latest_raw_data]) else None

        if latest_data:
            time_diff = datetime.utcnow() - (latest_data.replace(tzinfo=None) if latest_data.tzinfo else latest_data)
            if time_diff < timedelta(minutes=5):
                freshness_badge = dbc.Badge(f"Fresh ({time_diff.seconds // 60}m ago)", color="success")
            elif time_diff < timedelta(hours=1):
                freshness_badge = dbc.Badge(f"Recent ({time_diff.seconds // 60}m ago)", color="warning")
            else:
                freshness_badge = dbc.Badge(f"Stale ({time_diff.total_seconds() // 3600:.1f}h ago)", color="danger")
        else:
            freshness_badge = dbc.Badge("No data", color="secondary")

        return html.Div([
            dbc.Row([
                dbc.Col(html.Strong("Candles:")),
                dbc.Col(f"{candles_count:,}", className="text-end")
            ]),
            dbc.Row([
                dbc.Col(html.Strong("Tickers:")),
                dbc.Col(f"{tickers_count:,}", className="text-end")
            ]),
            dbc.Row([
                dbc.Col(html.Strong("Data Freshness:")),
                dbc.Col(freshness_badge, className="text-end")
            ])
        ])

    except Exception as e:
        return dbc.Alert(f"Error loading metrics: {e}", color="danger")


def _get_individual_collectors_status() -> html.Div:
    """Get individual data collector status."""
    try:
        return dbc.Alert([
            html.P("Individual collector health data will be displayed here when the data collection service is running.", className="mb-2"),
            html.Hr(),
            html.P("To start monitoring, run the following command:", className="mb-1"),
            html.Code("python scripts/start_data_collection.py")
        ], color="info")

    except Exception as e:
        return dbc.Alert(f"Error checking collector status: {e}", color="danger")


def _get_database_status() -> html.Div:
    """Get detailed database status."""
    try:
        db_manager = DatabaseManager()
        db_manager.initialize()

        with db_manager.get_session() as session:
            from sqlalchemy import text
            result = session.execute(text("SELECT version()")).fetchone()
            version = result[0] if result else "Unknown"
            connections = session.execute(text("SELECT count(*) FROM pg_stat_activity")).scalar() or 0

        return html.Div([
            dbc.Row([
                dbc.Col(dbc.Badge("Database Connected", color="success"), width="auto"),
                dbc.Col(f"Checked: {datetime.now().strftime('%H:%M:%S')}", className="text-muted")
            ], align="center", className="mb-2"),
            html.P(f"Version: PostgreSQL {version.split()[1] if 'PostgreSQL' in version else 'Unknown'}", className="mb-1"),
            html.P(f"Active connections: {connections}", className="mb-0")
        ])

    except Exception as e:
        return dbc.Alert(f"Error connecting to database: {e}", color="danger")


def _get_database_statistics() -> html.Div:
    """Get database statistics."""
    try:
        db_manager = DatabaseManager()
        db_manager.initialize()

        with db_manager.get_session() as session:
            from sqlalchemy import text
            table_stats_query = """
                SELECT tablename, pg_size_pretty(pg_total_relation_size('public.'||tablename)) as size
                FROM pg_tables WHERE schemaname = 'public'
                ORDER BY pg_total_relation_size('public.'||tablename) DESC LIMIT 5
            """
            table_stats = session.execute(text(table_stats_query)).fetchall()

            market_data_activity = session.execute(text("SELECT COUNT(*) FROM market_data WHERE timestamp > NOW() - INTERVAL '1 hour'")).scalar() or 0
            raw_data_activity = session.execute(text("SELECT COUNT(*) FROM raw_trades WHERE timestamp > NOW() - INTERVAL '1 hour'")).scalar() or 0
            total_recent_activity = market_data_activity + raw_data_activity

        components = [
            dbc.Row([
                dbc.Col(html.Strong("Recent Activity (1h):")),
                dbc.Col(f"{total_recent_activity:,} records", className="text-end")
            ]),
            html.Hr(className="my-2"),
            html.Strong("Largest Tables:"),
        ]
        if table_stats:
            for table, size in table_stats:
                components.append(dbc.Row([
                    dbc.Col(f"• {table}"),
                    dbc.Col(size, className="text-end text-muted")
                ]))
        else:
            components.append(html.P("No table statistics available.", className="text-muted"))

        return html.Div(components)

    except Exception as e:
        return dbc.Alert(f"Error loading database stats: {e}", color="danger")


def _get_redis_status() -> html.Div:
    """Get detailed Redis server status."""
    try:
        redis_manager = get_sync_redis_manager()
        redis_manager.initialize()

        if not redis_manager.client.ping():
            raise ConnectionError("Redis server is not responding.")

        info = redis_manager.client.info()
        status_badge = dbc.Badge("Connected", color="success", className="me-1")

        return html.Div([
            html.H5("Redis Status"),
            status_badge,
            html.P(f"Version: {info.get('redis_version', 'N/A')}"),
            html.P(f"Mode: {info.get('redis_mode', 'N/A')}")
        ])
    except Exception as e:
        logger.error(f"Failed to get Redis status: {e}")
        return html.Div([
            html.H5("Redis Status"),
            dbc.Badge("Error", color="danger", className="me-1"),
            dbc.Alert(f"Error: {e}", color="danger", dismissable=True)
        ])


def _get_redis_statistics() -> html.Div:
    """Get detailed Redis statistics."""
    try:
        redis_manager = get_sync_redis_manager()
        redis_manager.initialize()

        if not redis_manager.client.ping():
            raise ConnectionError("Redis server is not responding.")

        info = redis_manager.client.info()

        return html.Div([
            html.H5("Redis Statistics"),
            html.P(f"Connected Clients: {info.get('connected_clients', 'N/A')}"),
            html.P(f"Memory Used: {info.get('used_memory_human', 'N/A')}"),
            html.P(f"Total Commands Processed: {info.get('total_commands_processed', 'N/A')}")
        ])
    except Exception as e:
        logger.error(f"Failed to get Redis statistics: {e}")
        return dbc.Alert(f"Error: {e}", color="danger", dismissable=True)


def _get_system_performance_metrics() -> html.Div:
    """Get system performance metrics."""
    try:
        cpu_percent = psutil.cpu_percent(interval=0.1)
        cpu_count = psutil.cpu_count()
        memory = psutil.virtual_memory()
        disk = psutil.disk_usage('/')

        def get_color(percent):
            if percent < 70:
                return "success"
            if percent < 85:
                return "warning"
            return "danger"

        return html.Div([
            html.Div([
                html.Strong("CPU Usage: "),
                dbc.Badge(f"{cpu_percent:.1f}%", color=get_color(cpu_percent)),
                html.Span(f" ({cpu_count} cores)", className="text-muted ms-1")
            ], className="mb-2"),
            dbc.Progress(value=cpu_percent, color=get_color(cpu_percent), style={"height": "10px"}, className="mb-3"),

            html.Div([
                html.Strong("Memory Usage: "),
                dbc.Badge(f"{memory.percent:.1f}%", color=get_color(memory.percent)),
                html.Span(f" ({memory.used / (1024**3):.1f} / {memory.total / (1024**3):.1f} GB)", className="text-muted ms-1")
            ], className="mb-2"),
            dbc.Progress(value=memory.percent, color=get_color(memory.percent), style={"height": "10px"}, className="mb-3"),

            html.Div([
                html.Strong("Disk Usage: "),
                dbc.Badge(f"{disk.percent:.1f}%", color=get_color(disk.percent)),
                html.Span(f" ({disk.used / (1024**3):.1f} / {disk.total / (1024**3):.1f} GB)", className="text-muted ms-1")
            ], className="mb-2"),
            dbc.Progress(value=disk.percent, color=get_color(disk.percent), style={"height": "10px"})
        ])

    except Exception as e:
        return dbc.Alert(f"Error loading performance metrics: {e}", color="danger")


def _get_collection_details_content() -> html.Div:
    """Get detailed collection information for modal."""
    try:
        return html.Div([
            html.H5("Data Collection Service Details"),
            html.P("Comprehensive data collection service information would be displayed here."),
            html.Hr(),
            html.H6("Configuration"),
            html.P("Service configuration details..."),
            html.H6("Performance Metrics"),
            html.P("Detailed performance analytics..."),
            html.H6("Health Status"),
            html.P("Individual collector health information...")
        ])
    except Exception as e:
        return dbc.Alert(f"Error loading details: {e}", color="danger")


def _get_collection_logs_content() -> str:
    """Get recent collection service logs."""
    try:
        # This would read from actual log files
        # For now, return a placeholder
        current_time = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        return f"""[{current_time}] INFO - Data Collection Service Logs

Recent log entries would be displayed here from the data collection service.

This would include:
- Service startup/shutdown events
- Collector connection status changes
- Data collection statistics
- Error messages and warnings
- Performance metrics

To view real logs, check the logs/ directory or configure log file monitoring.
"""
    except Exception as e:
        return f"Error loading logs: {str(e)}"


def _check_data_collection_service_running() -> bool:
    """Check if data collection service is running."""
    try:
        # Check for running processes (simplified)
        for proc in psutil.process_iter(['pid', 'name', 'cmdline']):
            try:
                if proc.info['cmdline']:
                    cmdline = ' '.join(proc.info['cmdline'])
                    if 'start_data_collection.py' in cmdline or 'collection_service' in cmdline:
                        return True
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
        return False
    except Exception:
        return False
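
# Hedged test sketch (pytest and the fake process are assumptions, not part of this
# commit) for the process scan above: psutil.process_iter is monkeypatched so the test
# needs no real service running.
class _FakeProc:
    def __init__(self, cmdline):
        self.info = {'pid': 1, 'name': 'python', 'cmdline': cmdline}

def test_check_data_collection_service_running(monkeypatch):
    monkeypatch.setattr(psutil, 'process_iter',
                        lambda attrs: [_FakeProc(['python', 'scripts/start_data_collection.py'])])
    assert _check_data_collection_service_running() is True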
11
dashboard/components/__init__.py
Normal file
@ -0,0 +1,11 @@
"""
Reusable UI components for the dashboard.
"""

from .chart_controls import create_chart_config_panel
from .indicator_modal import create_indicator_modal

__all__ = [
    'create_chart_config_panel',
    'create_indicator_modal'
]
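
# With these re-exports, callers can import straight from the package, e.g.:
#     from dashboard.components import create_chart_config_panel, create_indicator_modal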
134
dashboard/components/chart_controls.py
Normal file
@ -0,0 +1,134 @@
"""
Chart control components for the market data layout.
"""

from dash import html, dcc
import dash_bootstrap_components as dbc
from utils.logger import get_logger

logger = get_logger("default_logger")


def create_chart_config_panel(strategy_options, overlay_options, subplot_options):
    """Create the chart configuration panel with add/edit UI."""
    return dbc.Card([
        dbc.CardHeader(html.H5("🎯 Chart Configuration")),
        dbc.CardBody([
            dbc.Button("➕ Add New Indicator", id="add-indicator-btn-visible", color="primary", className="mb-3"),

            html.Div([
                html.Label("Strategy Template:", className="form-label"),
                dcc.Dropdown(
                    id='strategy-dropdown',
                    options=strategy_options,
                    value=None,
                    placeholder="Select a strategy template (optional)",
                )
            ], className="mb-3"),

            dbc.Row([
                dbc.Col([
                    html.Label("Overlay Indicators:", className="form-label"),
                    dcc.Checklist(
                        id='overlay-indicators-checklist',
                        options=overlay_options,
                        value=[],
                        style={'display': 'none'}
                    ),
                    html.Div(id='overlay-indicators-list')
                ], width=6),

                dbc.Col([
                    html.Label("Subplot Indicators:", className="form-label"),
                    dcc.Checklist(
                        id='subplot-indicators-checklist',
                        options=subplot_options,
                        value=[],
                        style={'display': 'none'}
                    ),
                    html.Div(id='subplot-indicators-list')
                ], width=6)
            ])
        ])
    ], className="mb-4")


def create_auto_update_control():
    """Create the auto-update control section."""
    return html.Div([
        dbc.Checkbox(
            id='auto-update-checkbox',
            label='Auto-update charts',
            value=True,
        ),
        html.Div(id='update-status', style={'font-size': '12px', 'color': '#7f8c8d'})
    ], className="mb-3")


def create_time_range_controls():
    """Create the time range control panel."""
    return dbc.Card([
        dbc.CardHeader(html.H5("⏰ Time Range Controls")),
        dbc.CardBody([
            html.Div([
                html.Label("Quick Select:", className="form-label"),
                dcc.Dropdown(
                    id='time-range-quick-select',
                    options=[
                        {'label': '🕐 Last 1 Hour', 'value': '1h'},
                        {'label': '🕐 Last 4 Hours', 'value': '4h'},
                        {'label': '🕐 Last 6 Hours', 'value': '6h'},
                        {'label': '🕐 Last 12 Hours', 'value': '12h'},
                        {'label': '📅 Last 1 Day', 'value': '1d'},
                        {'label': '📅 Last 3 Days', 'value': '3d'},
                        {'label': '📅 Last 7 Days', 'value': '7d'},
                        {'label': '📅 Last 30 Days', 'value': '30d'},
                        {'label': '📅 Custom Range', 'value': 'custom'},
                        {'label': '🔴 Real-time', 'value': 'realtime'}
                    ],
                    value='7d',
                    placeholder="Select time range",
                )
            ], className="mb-3"),

            html.Div([
                html.Label("Custom Date Range:", className="form-label"),
                dbc.InputGroup([
                    dcc.DatePickerRange(
                        id='custom-date-range',
                        display_format='YYYY-MM-DD',
                    ),
                    dbc.Button("Clear", id="clear-date-range-btn", color="secondary", outline=True, size="sm")
                ])
            ], className="mb-3"),

            html.Div([
                html.Label("Analysis Mode:", className="form-label"),
                dbc.RadioItems(
                    id='analysis-mode-toggle',
                    options=[
                        {'label': '🔴 Real-time Updates', 'value': 'realtime'},
                        {'label': '🔒 Analysis Mode (Locked)', 'value': 'locked'}
                    ],
                    value='realtime',
                    inline=True,
                )
            ]),

            html.Div(id='time-range-status', className="text-muted fst-italic mt-2")
        ])
    ], className="mb-4")


def create_export_controls():
    """Create the data export control panel."""
    return dbc.Card([
        dbc.CardHeader(html.H5("💾 Data Export")),
        dbc.CardBody([
            dbc.ButtonGroup([
                dbc.Button("Export to CSV", id="export-csv-btn", color="primary"),
                dbc.Button("Export to JSON", id="export-json-btn", color="secondary"),
            ]),
            dcc.Download(id="download-chart-data")
        ])
    ], className="mb-4")
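
# Sketch (the sidebar structure is an assumption, not part of this commit) of how these
# factories compose into a single control column for the market data layout:
def build_controls_sidebar(strategy_options, overlay_options, subplot_options):
    """Stack the chart configuration, update, time range, and export panels."""
    return dbc.Col([
        create_chart_config_panel(strategy_options, overlay_options, subplot_options),
        create_auto_update_control(),
        create_time_range_controls(),
        create_export_controls(),
    ], width=3)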
600
dashboard/components/data_analysis.py
Normal file
@ -0,0 +1,600 @@
"""
Data analysis components for comprehensive market data analysis.
"""

from dash import html, dcc
import dash_bootstrap_components as dbc
import plotly.graph_objects as go
import plotly.express as px
from plotly.subplots import make_subplots
import pandas as pd
import numpy as np
from datetime import datetime, timezone, timedelta
from typing import Dict, Any, List, Optional

from utils.logger import get_logger
from database.connection import DatabaseManager
from database.operations import DatabaseOperationError

logger = get_logger("data_analysis")


class VolumeAnalyzer:
    """Analyze trading volume patterns and trends."""

    def __init__(self):
        self.db_manager = DatabaseManager()
        self.db_manager.initialize()

    def get_volume_statistics(self, df: pd.DataFrame) -> Dict[str, Any]:
        """Calculate comprehensive volume statistics from a DataFrame."""
        try:
            if df.empty or 'volume' not in df.columns:
                return {'error': 'DataFrame is empty or missing volume column'}

            # Convert all relevant columns to float to avoid type errors with Decimal
            df = df.copy()
            numeric_cols = ['open', 'high', 'low', 'close', 'volume']
            for col in numeric_cols:
                if col in df.columns:
                    df[col] = df[col].astype(float)
            if 'trades_count' in df.columns:
                df['trades_count'] = df['trades_count'].astype(float)

            # Calculate volume statistics
            total_volume = df['volume'].sum()
            avg_volume = df['volume'].mean()
            volume_std = df['volume'].std()

            # Volume trend analysis
            recent_volume = df['volume'].tail(10).mean()  # Last 10 periods
            older_volume = df['volume'].head(10).mean()  # First 10 periods
            volume_trend = "Increasing" if recent_volume > older_volume else "Decreasing"

            # High volume periods (above 2 standard deviations)
            high_volume_threshold = avg_volume + (2 * volume_std)
            high_volume_periods = len(df[df['volume'] > high_volume_threshold])

            # Volume-Price correlation
            price_change = df['close'] - df['open']
            volume_price_corr = df['volume'].corr(price_change.abs())

            # Average trade size (volume per trade)
            if 'trades_count' in df.columns:
                df['avg_trade_size'] = df['volume'] / df['trades_count'].replace(0, 1)
                avg_trade_size = df['avg_trade_size'].mean()
            else:
                avg_trade_size = None  # Not available

            return {
                'total_volume': total_volume,
                'avg_volume': avg_volume,
                'volume_std': volume_std,
                'volume_trend': volume_trend,
                'high_volume_periods': high_volume_periods,
                'volume_price_correlation': volume_price_corr,
                'avg_trade_size': avg_trade_size,
                'max_volume': df['volume'].max(),
                'min_volume': df['volume'].min(),
                'volume_percentiles': {
                    '25th': df['volume'].quantile(0.25),
                    '50th': df['volume'].quantile(0.50),
                    '75th': df['volume'].quantile(0.75),
                    '95th': df['volume'].quantile(0.95)
                }
            }

        except Exception as e:
            logger.error(f"Volume analysis error: {e}")
            return {'error': str(e)}
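
# Usage sketch: get_volume_statistics only reads the DataFrame it is given, so it can be
# exercised with synthetic candles (note that __init__ still opens a database connection,
# which is assumed to be configured):
_demo_candles = pd.DataFrame({
    'open': [100.0, 101.0, 102.0],
    'high': [101.5, 102.5, 103.0],
    'low': [99.5, 100.5, 101.5],
    'close': [101.0, 102.0, 101.5],
    'volume': [10.0, 12.0, 8.0],
})
_demo_stats = VolumeAnalyzer().get_volume_statistics(_demo_candles)
# _demo_stats['total_volume'] -> 30.0; _demo_stats['volume_trend'] -> 'Decreasing' here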
class PriceMovementAnalyzer:
|
||||
"""Analyze price movement patterns and statistics."""
|
||||
|
||||
def __init__(self):
|
||||
self.db_manager = DatabaseManager()
|
||||
self.db_manager.initialize()
|
||||
|
||||
def get_price_movement_statistics(self, df: pd.DataFrame) -> Dict[str, Any]:
|
||||
"""Calculate comprehensive price movement statistics from a DataFrame."""
|
||||
try:
|
||||
if df.empty or not all(col in df.columns for col in ['open', 'high', 'low', 'close']):
|
||||
return {'error': 'DataFrame is empty or missing required price columns'}
|
||||
|
||||
# Convert all relevant columns to float to avoid type errors with Decimal
|
||||
df = df.copy()
|
||||
numeric_cols = ['open', 'high', 'low', 'close', 'volume']
|
||||
for col in numeric_cols:
|
||||
if col in df.columns:
|
||||
df[col] = df[col].astype(float)
|
||||
|
||||
# Basic price statistics
|
||||
current_price = df['close'].iloc[-1]
|
||||
period_start_price = df['open'].iloc[0]
|
||||
period_return = ((current_price - period_start_price) / period_start_price) * 100
|
||||
|
||||
# Daily returns (percentage changes)
|
||||
df['returns'] = df['close'].pct_change() * 100
|
||||
df['returns'] = df['returns'].fillna(0)
|
||||
|
||||
# Volatility metrics
|
||||
volatility = df['returns'].std()
|
||||
avg_return = df['returns'].mean()
|
||||
|
||||
# Price range analysis
|
||||
df['range'] = df['high'] - df['low']
|
||||
df['range_pct'] = (df['range'] / df['open']) * 100
|
||||
avg_range_pct = df['range_pct'].mean()
|
||||
|
||||
# Directional analysis
|
||||
bullish_periods = len(df[df['close'] > df['open']])
|
||||
bearish_periods = len(df[df['close'] < df['open']])
|
||||
neutral_periods = len(df[df['close'] == df['open']])
|
||||
|
||||
total_periods = len(df)
|
||||
bullish_ratio = (bullish_periods / total_periods) * 100 if total_periods > 0 else 0
|
||||
|
||||
# Price extremes
|
||||
period_high = df['high'].max()
|
||||
period_low = df['low'].min()
|
||||
|
||||
# Momentum indicators
|
||||
# Simple momentum (current vs N periods ago)
|
||||
momentum_periods = min(10, len(df) - 1)
|
||||
if momentum_periods > 0:
|
||||
momentum = ((current_price - df['close'].iloc[-momentum_periods-1]) / df['close'].iloc[-momentum_periods-1]) * 100
|
||||
else:
|
||||
momentum = 0
|
||||
|
||||
# Trend strength (linear regression slope)
|
||||
if len(df) > 2:
|
||||
x = np.arange(len(df))
|
||||
slope, _ = np.polyfit(x, df['close'], 1)
|
||||
trend_strength = slope / df['close'].mean() * 100 # Normalize by average price
|
||||
else:
|
||||
trend_strength = 0
|
||||
|
||||
return {
|
||||
'current_price': current_price,
|
||||
'period_return': period_return,
|
||||
'volatility': volatility,
|
||||
'avg_return': avg_return,
|
||||
'avg_range_pct': avg_range_pct,
|
||||
'bullish_periods': bullish_periods,
|
||||
'bearish_periods': bearish_periods,
|
||||
'neutral_periods': neutral_periods,
|
||||
'bullish_ratio': bullish_ratio,
|
||||
'period_high': period_high,
|
||||
'period_low': period_low,
|
||||
'momentum': momentum,
|
||||
'trend_strength': trend_strength,
|
||||
'return_percentiles': {
|
||||
'5th': df['returns'].quantile(0.05),
|
||||
'25th': df['returns'].quantile(0.25),
|
||||
'75th': df['returns'].quantile(0.75),
|
||||
'95th': df['returns'].quantile(0.95)
|
||||
},
|
||||
'max_gain': df['returns'].max(),
|
||||
'max_loss': df['returns'].min(),
|
||||
'positive_returns': len(df[df['returns'] > 0]),
|
||||
'negative_returns': len(df[df['returns'] < 0])
|
||||
}
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Price movement analysis error: {e}")
|
||||
return {'error': str(e)}
|
||||
|
||||
|
||||
def create_volume_analysis_chart(symbol: str, timeframe: str = "1h", days_back: int = 7) -> go.Figure:
    """Create a comprehensive volume analysis chart."""
    try:
        analyzer = VolumeAnalyzer()

        # Fetch market data for the chart
        db_manager = DatabaseManager()
        db_manager.initialize()

        end_time = datetime.now(timezone.utc)
        start_time = end_time - timedelta(days=days_back)

        with db_manager.get_session() as session:
            from sqlalchemy import text

            query = text("""
                SELECT timestamp, open, high, low, close, volume, trades_count
                FROM market_data
                WHERE symbol = :symbol
                  AND timeframe = :timeframe
                  AND timestamp >= :start_time
                  AND timestamp <= :end_time
                ORDER BY timestamp ASC
            """)

            result = session.execute(query, {
                'symbol': symbol,
                'timeframe': timeframe,
                'start_time': start_time,
                'end_time': end_time
            })

            candles = []
            for row in result:
                candles.append({
                    'timestamp': row.timestamp,
                    'open': float(row.open),
                    'high': float(row.high),
                    'low': float(row.low),
                    'close': float(row.close),
                    'volume': float(row.volume),
                    'trades_count': int(row.trades_count) if row.trades_count else 0
                })

        if not candles:
            fig = go.Figure()
            fig.add_annotation(text="No data available", xref="paper", yref="paper", x=0.5, y=0.5)
            return fig

        df = pd.DataFrame(candles)

        # Calculate the volume moving average
        df['volume_ma'] = df['volume'].rolling(window=20, min_periods=1).mean()

        # Create subplots
        fig = make_subplots(
            rows=3, cols=1,
            subplot_titles=('Price Action', 'Volume Analysis', 'Volume vs Moving Average'),
            vertical_spacing=0.08,
            row_heights=[0.4, 0.3, 0.3]
        )

        # Price candlestick
        fig.add_trace(
            go.Candlestick(
                x=df['timestamp'],
                open=df['open'],
                high=df['high'],
                low=df['low'],
                close=df['close'],
                name='Price',
                increasing_line_color='#26a69a',
                decreasing_line_color='#ef5350'
            ),
            row=1, col=1
        )

        # Volume bars colour-coded by candle direction (open_ avoids shadowing the builtin)
        colors = ['#26a69a' if close >= open_ else '#ef5350' for close, open_ in zip(df['close'], df['open'])]

        fig.add_trace(
            go.Bar(
                x=df['timestamp'],
                y=df['volume'],
                name='Volume',
                marker_color=colors,
                opacity=0.7
            ),
            row=2, col=1
        )

        # Volume vs moving average
        fig.add_trace(
            go.Scatter(
                x=df['timestamp'],
                y=df['volume'],
                mode='lines',
                name='Volume',
                line=dict(color='#2196f3', width=1)
            ),
            row=3, col=1
        )

        fig.add_trace(
            go.Scatter(
                x=df['timestamp'],
                y=df['volume_ma'],
                mode='lines',
                name='Volume MA(20)',
                line=dict(color='#ff9800', width=2)
            ),
            row=3, col=1
        )

        # Update layout
        fig.update_layout(
            title=f'{symbol} Volume Analysis ({timeframe})',
            xaxis_rangeslider_visible=False,
            height=800,
            showlegend=True,
            template='plotly_white'
        )

        # Update y-axes
        fig.update_yaxes(title_text="Price", row=1, col=1)
        fig.update_yaxes(title_text="Volume", row=2, col=1)
        fig.update_yaxes(title_text="Volume", row=3, col=1)

        return fig

    except Exception as e:
        logger.error(f"Volume chart creation error: {e}")
        fig = go.Figure()
        fig.add_annotation(text=f"Error: {str(e)}", xref="paper", yref="paper", x=0.5, y=0.5)
        return fig

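# Usage sketch (not part of the original file): because this function returns
# a go.Figure even on failure, it can be assigned directly to a dcc.Graph
# `figure` property in a callback without extra error handling.
#   fig = create_volume_analysis_chart('BTC-USDT', timeframe='1h', days_back=7)
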
def create_price_movement_chart(symbol: str, timeframe: str = "1h", days_back: int = 7) -> go.Figure:
    """Create a comprehensive price movement analysis chart."""
    try:
        # Fetch market data for the chart
        db_manager = DatabaseManager()
        db_manager.initialize()

        end_time = datetime.now(timezone.utc)
        start_time = end_time - timedelta(days=days_back)

        with db_manager.get_session() as session:
            from sqlalchemy import text

            query = text("""
                SELECT timestamp, open, high, low, close, volume
                FROM market_data
                WHERE symbol = :symbol
                  AND timeframe = :timeframe
                  AND timestamp >= :start_time
                  AND timestamp <= :end_time
                ORDER BY timestamp ASC
            """)

            result = session.execute(query, {
                'symbol': symbol,
                'timeframe': timeframe,
                'start_time': start_time,
                'end_time': end_time
            })

            candles = []
            for row in result:
                candles.append({
                    'timestamp': row.timestamp,
                    'open': float(row.open),
                    'high': float(row.high),
                    'low': float(row.low),
                    'close': float(row.close),
                    'volume': float(row.volume)
                })

        if not candles:
            fig = go.Figure()
            fig.add_annotation(text="No data available", xref="paper", yref="paper", x=0.5, y=0.5)
            return fig

        df = pd.DataFrame(candles)

        # Calculate returns and statistics
        df['returns'] = df['close'].pct_change() * 100
        df['returns'] = df['returns'].fillna(0)
        df['range_pct'] = ((df['high'] - df['low']) / df['open']) * 100
        df['cumulative_return'] = (1 + df['returns'] / 100).cumprod()

        # Create subplots
        fig = make_subplots(
            rows=3, cols=1,
            subplot_titles=('Cumulative Returns', 'Period Returns (%)', 'Price Range (%)'),
            vertical_spacing=0.08,
            row_heights=[0.4, 0.3, 0.3]
        )

        # Cumulative returns
        fig.add_trace(
            go.Scatter(
                x=df['timestamp'],
                y=df['cumulative_return'],
                mode='lines',
                name='Cumulative Return',
                line=dict(color='#2196f3', width=2)
            ),
            row=1, col=1
        )

        # Period returns colour-coded by sign
        colors = ['#26a69a' if ret >= 0 else '#ef5350' for ret in df['returns']]

        fig.add_trace(
            go.Bar(
                x=df['timestamp'],
                y=df['returns'],
                name='Returns (%)',
                marker_color=colors,
                opacity=0.7
            ),
            row=2, col=1
        )

        # Price range percentage
        fig.add_trace(
            go.Scatter(
                x=df['timestamp'],
                y=df['range_pct'],
                mode='lines+markers',
                name='Range %',
                line=dict(color='#ff9800', width=1),
                marker=dict(size=4)
            ),
            row=3, col=1
        )

        # Add a zero line for returns
        fig.add_hline(y=0, line_dash="dash", line_color="gray", row=2, col=1)

        # Update layout
        fig.update_layout(
            title=f'{symbol} Price Movement Analysis ({timeframe})',
            height=800,
            showlegend=True,
            template='plotly_white'
        )

        # Update y-axes
        fig.update_yaxes(title_text="Cumulative Return", row=1, col=1)
        fig.update_yaxes(title_text="Returns (%)", row=2, col=1)
        fig.update_yaxes(title_text="Range (%)", row=3, col=1)

        return fig

    except Exception as e:
        logger.error(f"Price movement chart creation error: {e}")
        fig = go.Figure()
        fig.add_annotation(text=f"Error: {str(e)}", xref="paper", yref="paper", x=0.5, y=0.5)
        return fig

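# Worked example (not part of the original file) of the cumulative-return
# transform used above: per-period returns of +1%, -2%, +3% compound as
#   (1 + 0.01) * (1 - 0.02) * (1 + 0.03) ≈ 1.0195
# so `df['cumulative_return']` ends at ~1.0195 (a ~1.95% gain), which is not
# the same as the +2% simple sum of the returns.
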
def create_data_analysis_panel():
    """Create the main data analysis panel with tabs for different analyses."""
    return html.Div([
        dcc.Tabs(
            id="data-analysis-tabs",
            value="volume-analysis",
            children=[
                dcc.Tab(label="Volume Analysis", value="volume-analysis", children=[
                    html.Div(id='volume-analysis-content', children=[
                        html.P("Content for Volume Analysis")
                    ]),
                    html.Div(id='volume-stats-container', children=[
                        html.P("Stats container loaded - waiting for callback...")
                    ])
                ]),
                dcc.Tab(label="Price Movement", value="price-movement", children=[
                    html.Div(id='price-movement-content', children=[
                        dbc.Alert("Select a symbol and timeframe to view price movement analysis.", color="primary")
                    ])
                ]),
            ],
        )
    ], id='data-analysis-panel-wrapper')

def format_number(value: float, decimals: int = 2) -> str:
    """Format a number with the given decimals and a B/M/K unit suffix."""
    if pd.isna(value):
        return "N/A"

    if abs(value) >= 1e9:
        return f"{value / 1e9:.{decimals}f}B"
    elif abs(value) >= 1e6:
        return f"{value / 1e6:.{decimals}f}M"
    elif abs(value) >= 1e3:
        return f"{value / 1e3:.{decimals}f}K"
    else:
        return f"{value:.{decimals}f}"

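# Illustrative examples (not part of the original file) of the suffix
# behaviour implied by the thresholds above:
#   format_number(1_234_567)        -> "1.23M"
#   format_number(4_500)            -> "4.50K"
#   format_number(2.1e9)            -> "2.10B"
#   format_number(0.5, decimals=3)  -> "0.500"
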
def create_volume_stats_display(stats: Dict[str, Any]) -> html.Div:
    """Create the volume statistics display."""
    if 'error' in stats:
        return dbc.Alert(
            "Error loading volume statistics",
            color="danger",
            dismissable=True
        )

    def create_stat_card(icon, title, value, color="primary"):
        return dbc.Col(dbc.Card(dbc.CardBody([
            html.Div([
                html.Div(icon, className="display-6"),
                html.Div([
                    html.P(title, className="card-title mb-1 text-muted"),
                    html.H4(value, className=f"card-text fw-bold text-{color}")
                ], className="ms-3")
            ], className="d-flex align-items-center")
        ])), width=4, className="mb-3")

    return dbc.Row([
        create_stat_card("📊", "Total Volume", format_number(stats['total_volume'])),
        create_stat_card("📈", "Average Volume", format_number(stats['avg_volume'])),
        create_stat_card("🎯", "Volume Trend", stats['volume_trend'],
                         "success" if stats['volume_trend'] == "Increasing" else "danger"),
        create_stat_card("⚡", "High Volume Periods", str(stats['high_volume_periods'])),
        create_stat_card("🔗", "Volume-Price Correlation", f"{stats['volume_price_correlation']:.3f}"),
        create_stat_card("💱", "Avg Trade Size", format_number(stats['avg_trade_size']))
    ], className="mt-3")

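# Sketch (not part of the original file) of the dict shape this function
# expects from VolumeAnalyzer.get_volume_statistics(). The keys are taken from
# the lookups above; the values shown here are made-up placeholders.
#   {
#       'total_volume': 1.23e6,
#       'avg_volume': 7.4e3,
#       'volume_trend': 'Increasing',        # anything else renders as "danger"
#       'high_volume_periods': 12,
#       'volume_price_correlation': 0.412,
#       'avg_trade_size': 153.7,
#   }
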
def create_price_stats_display(stats: Dict[str, Any]) -> html.Div:
    """Create the price movement statistics display."""
    if 'error' in stats:
        return dbc.Alert(
            "Error loading price statistics",
            color="danger",
            dismissable=True
        )

    def create_stat_card(icon, title, value, color="primary"):
        # Only "success" and "danger" are mapped; any other color value
        # (e.g. the "warning" passed for Volatility below) falls back to text-dark.
        text_color = "text-dark"
        if color == "success":
            text_color = "text-success"
        elif color == "danger":
            text_color = "text-danger"

        return dbc.Col(dbc.Card(dbc.CardBody([
            html.Div([
                html.Div(icon, className="display-6"),
                html.Div([
                    html.P(title, className="card-title mb-1 text-muted"),
                    html.H4(value, className=f"card-text fw-bold {text_color}")
                ], className="ms-3")
            ], className="d-flex align-items-center")
        ])), width=4, className="mb-3")

    return dbc.Row([
        create_stat_card("💰", "Current Price", f"${stats['current_price']:.2f}"),
        create_stat_card("📈", "Period Return", f"{stats['period_return']:+.2f}%",
                         "success" if stats['period_return'] >= 0 else "danger"),
        create_stat_card("📊", "Volatility", f"{stats['volatility']:.2f}%", color="warning"),
        create_stat_card("🎯", "Bullish Ratio", f"{stats['bullish_ratio']:.1f}%"),
        create_stat_card("⚡", "Momentum", f"{stats['momentum']:+.2f}%",
                         "success" if stats['momentum'] >= 0 else "danger"),
        create_stat_card("📉", "Max Loss", f"{stats['max_loss']:.2f}%", "danger")
    ], className="mt-3")

def get_market_statistics(df: pd.DataFrame, symbol: str, timeframe: str) -> html.Div:
    """Generate a comprehensive market statistics component from a DataFrame."""
    if df.empty:
        return html.Div("No data available for statistics.", className="text-center text-muted")

    try:
        # Run both analyzers
        price_analyzer = PriceMovementAnalyzer()
        volume_analyzer = VolumeAnalyzer()

        price_stats = price_analyzer.get_price_movement_statistics(df)
        volume_stats = volume_analyzer.get_volume_statistics(df)

        # Format the analysis range for display (assumes a DatetimeIndex)
        start_date = df.index.min().strftime('%Y-%m-%d %H:%M')
        end_date = df.index.max().strftime('%Y-%m-%d %H:%M')

        # Check for errors from the analyzers
        if 'error' in price_stats or 'error' in volume_stats:
            error_msg = price_stats.get('error') or volume_stats.get('error')
            return html.Div(f"Error generating statistics: {error_msg}", style={'color': 'red'})

        # Time range for display
        days_back = (df.index.max() - df.index.min()).days
        time_status = f"📅 Analysis Range: {start_date} to {end_date} (~{days_back} days)"

        return html.Div([
            html.H3("📊 Enhanced Market Statistics", className="mb-3"),
            html.P(
                time_status,
                className="lead text-center text-muted mb-4"
            ),
            create_price_stats_display(price_stats),
            create_volume_stats_display(volume_stats)
        ])
    except Exception as e:
        logger.error(f"Error in get_market_statistics: {e}", exc_info=True)
        return dbc.Alert(f"Error generating statistics display: {e}", color="danger")
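A minimal wiring sketch (an assumption, not code from this commit) showing how get_market_statistics could feed the 'market-stats' container declared in the market-data layout further below; the store's record shape and the callback registration are hypothetical:

from dash import callback, Output, Input
import pandas as pd

@callback(
    Output('market-stats', 'children'),
    Input('chart-data-store', 'data'),
    Input('symbol-dropdown', 'value'),
    Input('timeframe-dropdown', 'value'),
)
def update_market_stats(chart_data, symbol, timeframe):
    # Rebuild the DataFrame cached by the chart callback (assumed record shape).
    df = pd.DataFrame(chart_data or [])
    if not df.empty:
        df = df.set_index(pd.to_datetime(df['timestamp']))
    return get_market_statistics(df, symbol, timeframe)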
141
dashboard/components/indicator_modal.py
Normal file
@@ -0,0 +1,141 @@
"""
|
||||
Indicator modal component for creating and editing indicators.
|
||||
"""
|
||||
|
||||
from dash import html, dcc
|
||||
import dash_bootstrap_components as dbc
|
||||
|
||||
|
||||
def create_indicator_modal():
|
||||
"""Create the indicator modal dialog for adding/editing indicators."""
|
||||
return html.Div([
|
||||
dcc.Store(id='edit-indicator-store', data=None),
|
||||
dbc.Modal([
|
||||
dbc.ModalHeader(dbc.ModalTitle("📊 Add New Indicator", id="modal-title")),
|
||||
dbc.ModalBody([
|
||||
# Basic Settings
|
||||
html.H5("Basic Settings"),
|
||||
dbc.Row([
|
||||
dbc.Col(dbc.Label("Indicator Name:"), width=12),
|
||||
dbc.Col(dcc.Input(id='indicator-name-input', type='text', placeholder='e.g., "SMA 30 Custom"', className="w-100"), width=12)
|
||||
], className="mb-3"),
|
||||
dbc.Row([
|
||||
dbc.Col(dbc.Label("Indicator Type:"), width=12),
|
||||
dbc.Col(dcc.Dropdown(
|
||||
id='indicator-type-dropdown',
|
||||
options=[
|
||||
{'label': 'Simple Moving Average (SMA)', 'value': 'sma'},
|
||||
{'label': 'Exponential Moving Average (EMA)', 'value': 'ema'},
|
||||
{'label': 'Relative Strength Index (RSI)', 'value': 'rsi'},
|
||||
{'label': 'MACD', 'value': 'macd'},
|
||||
{'label': 'Bollinger Bands', 'value': 'bollinger_bands'}
|
||||
],
|
||||
placeholder='Select indicator type',
|
||||
), width=12)
|
||||
], className="mb-3"),
|
||||
dbc.Row([
|
||||
dbc.Col(dbc.Label("Timeframe (Optional):"), width=12),
|
||||
dbc.Col(dcc.Dropdown(
|
||||
id='indicator-timeframe-dropdown',
|
||||
options=[
|
||||
{'label': 'Chart Timeframe', 'value': ''},
|
||||
{'label': "1 Second", 'value': '1s'},
|
||||
{'label': "5 Seconds", 'value': '5s'},
|
||||
{'label': "15 Seconds", 'value': '15s'},
|
||||
{'label': "30 Seconds", 'value': '30s'},
|
||||
{'label': '1 Minute', 'value': '1m'},
|
||||
{'label': '5 Minutes', 'value': '5m'},
|
||||
{'label': '15 Minutes', 'value': '15m'},
|
||||
{'label': '1 Hour', 'value': '1h'},
|
||||
{'label': '4 Hours', 'value': '4h'},
|
||||
{'label': '1 Day', 'value': '1d'},
|
||||
],
|
||||
value='',
|
||||
placeholder='Defaults to chart timeframe'
|
||||
), width=12),
|
||||
], className="mb-3"),
|
||||
dbc.Row([
|
||||
dbc.Col(dbc.Label("Description (Optional):"), width=12),
|
||||
dbc.Col(dcc.Textarea(
|
||||
id='indicator-description-input',
|
||||
placeholder='Brief description of this indicator configuration...',
|
||||
style={'width': '100%', 'height': '60px'}
|
||||
), width=12)
|
||||
], className="mb-3"),
|
||||
html.Hr(),
|
||||
|
||||
# Parameters Section
|
||||
html.H5("Parameters"),
|
||||
html.Div(
|
||||
id='indicator-parameters-message',
|
||||
children=[html.P("Select an indicator type to configure parameters", className="text-muted fst-italic")]
|
||||
),
|
||||
|
||||
# Parameter fields (SMA, EMA, etc.)
|
||||
create_parameter_fields(),
|
||||
|
||||
html.Hr(),
|
||||
# Styling Section
|
||||
html.H5("Styling"),
|
||||
dbc.Row([
|
||||
dbc.Col([
|
||||
dbc.Label("Color:"),
|
||||
dcc.Input(id='indicator-color-input', type='text', value='#007bff', className="w-100")
|
||||
], width=6),
|
||||
dbc.Col([
|
||||
dbc.Label("Line Width:"),
|
||||
dcc.Slider(id='indicator-line-width-slider', min=1, max=5, step=1, value=2, marks={i: str(i) for i in range(1, 6)})
|
||||
], width=6)
|
||||
], className="mb-3"),
|
||||
]),
|
||||
dbc.ModalFooter([
|
||||
html.Div(id='save-indicator-feedback', className="me-auto"),
|
||||
dbc.Button("Cancel", id="cancel-indicator-btn", color="secondary"),
|
||||
dbc.Button("Save Indicator", id="save-indicator-btn", color="primary")
|
||||
])
|
||||
], id='indicator-modal', size="lg", is_open=False),
|
||||
])
|
||||
|
||||
def create_parameter_fields():
|
||||
"""Helper function to create parameter input fields for all indicator types."""
|
||||
return html.Div([
|
||||
# SMA Parameters
|
||||
html.Div([
|
||||
dbc.Label("Period:"),
|
||||
dcc.Input(id='sma-period-input', type='number', value=20, min=1, max=200),
|
||||
dbc.FormText("Number of periods for Simple Moving Average calculation")
|
||||
], id='sma-parameters', style={'display': 'none'}, className="mb-3"),
|
||||
|
||||
# EMA Parameters
|
||||
html.Div([
|
||||
dbc.Label("Period:"),
|
||||
dcc.Input(id='ema-period-input', type='number', value=12, min=1, max=200),
|
||||
dbc.FormText("Number of periods for Exponential Moving Average calculation")
|
||||
], id='ema-parameters', style={'display': 'none'}, className="mb-3"),
|
||||
|
||||
# RSI Parameters
|
||||
html.Div([
|
||||
dbc.Label("Period:"),
|
||||
dcc.Input(id='rsi-period-input', type='number', value=14, min=2, max=50),
|
||||
dbc.FormText("Number of periods for RSI calculation (typically 14)")
|
||||
], id='rsi-parameters', style={'display': 'none'}, className="mb-3"),
|
||||
|
||||
# MACD Parameters
|
||||
html.Div([
|
||||
dbc.Row([
|
||||
dbc.Col([dbc.Label("Fast Period:"), dcc.Input(id='macd-fast-period-input', type='number', value=12)], width=4),
|
||||
dbc.Col([dbc.Label("Slow Period:"), dcc.Input(id='macd-slow-period-input', type='number', value=26)], width=4),
|
||||
dbc.Col([dbc.Label("Signal Period:"), dcc.Input(id='macd-signal-period-input', type='number', value=9)], width=4),
|
||||
]),
|
||||
dbc.FormText("MACD periods: Fast EMA, Slow EMA, and Signal line")
|
||||
], id='macd-parameters', style={'display': 'none'}, className="mb-3"),
|
||||
|
||||
# Bollinger Bands Parameters
|
||||
html.Div([
|
||||
dbc.Row([
|
||||
dbc.Col([dbc.Label("Period:"), dcc.Input(id='bb-period-input', type='number', value=20)], width=6),
|
||||
dbc.Col([dbc.Label("Standard Deviation:"), dcc.Input(id='bb-stddev-input', type='number', value=2.0, step=0.1)], width=6),
|
||||
]),
|
||||
dbc.FormText("Period for middle line (SMA) and standard deviation multiplier")
|
||||
], id='bb-parameters', style={'display': 'none'}, className="mb-3")
|
||||
])
|
||||
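Since every parameter group above is created hidden, a callback has to toggle visibility when the type dropdown changes. A minimal sketch under those assumptions (the callback itself is not part of this commit; 'bollinger_bands' is mapped to the 'bb-parameters' div, everything else matches its dropdown value directly):

from dash import callback, Output, Input

_PARAM_DIVS = ['sma', 'ema', 'rsi', 'macd', 'bb']

@callback(
    [Output(f'{p}-parameters', 'style') for p in _PARAM_DIVS],
    Input('indicator-type-dropdown', 'value'),
)
def toggle_parameter_fields(indicator_type):
    selected = 'bb' if indicator_type == 'bollinger_bands' else indicator_type
    return [{'display': 'block'} if p == selected else {'display': 'none'}
            for p in _PARAM_DIVS]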
15
dashboard/layouts/__init__.py
Normal file
@@ -0,0 +1,15 @@
"""
|
||||
Layout modules for the dashboard.
|
||||
"""
|
||||
|
||||
from .market_data import get_market_data_layout
|
||||
from .bot_management import get_bot_management_layout
|
||||
from .performance import get_performance_layout
|
||||
from .system_health import get_system_health_layout
|
||||
|
||||
__all__ = [
|
||||
'get_market_data_layout',
|
||||
'get_bot_management_layout',
|
||||
'get_performance_layout',
|
||||
'get_system_health_layout'
|
||||
]
|
||||
21
dashboard/layouts/bot_management.py
Normal file
@@ -0,0 +1,21 @@
"""
|
||||
Bot management layout for the dashboard.
|
||||
"""
|
||||
|
||||
from dash import html
|
||||
|
||||
|
||||
def get_bot_management_layout():
|
||||
"""Create the bot management layout."""
|
||||
return html.Div([
|
||||
html.H2("🤖 Bot Management", style={'color': '#2c3e50'}),
|
||||
html.P("Bot management interface will be implemented in Phase 4.0"),
|
||||
|
||||
# Placeholder for bot list
|
||||
html.Div([
|
||||
html.H3("Active Bots"),
|
||||
html.Div(id='bot-list', children=[
|
||||
html.P("No bots currently running", style={'color': '#7f8c8d'})
|
||||
])
|
||||
], style={'margin': '20px 0'})
|
||||
])
|
||||
131
dashboard/layouts/market_data.py
Normal file
@@ -0,0 +1,131 @@
"""
|
||||
Market data layout for the dashboard.
|
||||
"""
|
||||
|
||||
from dash import html, dcc
|
||||
from utils.logger import get_logger
|
||||
from components.charts import get_supported_symbols, get_supported_timeframes
|
||||
from components.charts.config import get_available_strategy_names
|
||||
from components.charts.indicator_manager import get_indicator_manager
|
||||
from components.charts.indicator_defaults import ensure_default_indicators
|
||||
from dashboard.components.chart_controls import (
|
||||
create_chart_config_panel,
|
||||
create_time_range_controls,
|
||||
create_export_controls
|
||||
)
|
||||
|
||||
logger = get_logger("default_logger")
|
||||
|
||||
|
||||
def get_market_data_layout():
|
||||
"""Create the market data visualization layout with indicator controls."""
|
||||
# Get available symbols and timeframes from database
|
||||
symbols = get_supported_symbols()
|
||||
timeframes = get_supported_timeframes()
|
||||
|
||||
# Create dropdown options
|
||||
symbol_options = [{'label': symbol, 'value': symbol} for symbol in symbols]
|
||||
timeframe_options = [
|
||||
{'label': "1 Second", 'value': '1s'},
|
||||
{'label': "5 Seconds", 'value': '5s'},
|
||||
{'label': "15 Seconds", 'value': '15s'},
|
||||
{'label': "30 Seconds", 'value': '30s'},
|
||||
{'label': '1 Minute', 'value': '1m'},
|
||||
{'label': '5 Minutes', 'value': '5m'},
|
||||
{'label': '15 Minutes', 'value': '15m'},
|
||||
{'label': '1 Hour', 'value': '1h'},
|
||||
{'label': '4 Hours', 'value': '4h'},
|
||||
{'label': '1 Day', 'value': '1d'},
|
||||
]
|
||||
|
||||
# Filter timeframe options to only show those available in database
|
||||
available_timeframes = [tf for tf in ['1s', '5s', '15s', '30s', '1m', '5m', '15m', '1h', '4h', '1d'] if tf in timeframes]
|
||||
if not available_timeframes:
|
||||
available_timeframes = ['5m'] # Default fallback
|
||||
|
||||
timeframe_options = [opt for opt in timeframe_options if opt['value'] in available_timeframes]
|
||||
|
||||
# Get available strategies and indicators
|
||||
try:
|
||||
strategy_names = get_available_strategy_names()
|
||||
strategy_options = [{'label': name.replace('_', ' ').title(), 'value': name} for name in strategy_names]
|
||||
|
||||
# Get user indicators from the new indicator manager
|
||||
indicator_manager = get_indicator_manager()
|
||||
|
||||
# Ensure default indicators exist
|
||||
ensure_default_indicators()
|
||||
|
||||
# Get indicators by display type
|
||||
overlay_indicators = indicator_manager.get_indicators_by_type('overlay')
|
||||
subplot_indicators = indicator_manager.get_indicators_by_type('subplot')
|
||||
|
||||
# Create checkbox options for overlay indicators
|
||||
overlay_options = []
|
||||
for indicator in overlay_indicators:
|
||||
display_name = f"{indicator.name} ({indicator.type.upper()})"
|
||||
overlay_options.append({'label': display_name, 'value': indicator.id})
|
||||
|
||||
# Create checkbox options for subplot indicators
|
||||
subplot_options = []
|
||||
for indicator in subplot_indicators:
|
||||
display_name = f"{indicator.name} ({indicator.type.upper()})"
|
||||
subplot_options.append({'label': display_name, 'value': indicator.id})
|
||||
|
||||
except Exception as e:
|
||||
logger.warning(f"Market data layout: Error loading indicator options: {e}")
|
||||
strategy_options = [{'label': 'Basic Chart', 'value': 'basic'}]
|
||||
overlay_options = []
|
||||
subplot_options = []
|
||||
|
||||
# Create components using the new modular functions
|
||||
chart_config_panel = create_chart_config_panel(strategy_options, overlay_options, subplot_options)
|
||||
time_range_controls = create_time_range_controls()
|
||||
export_controls = create_export_controls()
|
||||
|
||||
return html.Div([
|
||||
# Title and basic controls
|
||||
html.H3("💹 Market Data Visualization", style={'color': '#2c3e50', 'margin-bottom': '20px'}),
|
||||
|
||||
# Main chart controls
|
||||
html.Div([
|
||||
html.Div([
|
||||
html.Label("Symbol:", style={'font-weight': 'bold'}),
|
||||
dcc.Dropdown(
|
||||
id='symbol-dropdown',
|
||||
options=symbol_options,
|
||||
value=symbols[0] if symbols else 'BTC-USDT',
|
||||
clearable=False,
|
||||
style={'margin-bottom': '10px'}
|
||||
)
|
||||
], style={'width': '48%', 'display': 'inline-block'}),
|
||||
html.Div([
|
||||
html.Label("Timeframe:", style={'font-weight': 'bold'}),
|
||||
dcc.Dropdown(
|
||||
id='timeframe-dropdown',
|
||||
options=timeframe_options,
|
||||
value='1h',
|
||||
clearable=False,
|
||||
style={'margin-bottom': '10px'}
|
||||
)
|
||||
], style={'width': '48%', 'float': 'right', 'display': 'inline-block'})
|
||||
], style={'margin-bottom': '20px'}),
|
||||
|
||||
# Chart Configuration Panel
|
||||
chart_config_panel,
|
||||
|
||||
# Time Range Controls (positioned under indicators, next to chart)
|
||||
time_range_controls,
|
||||
|
||||
# Export Controls
|
||||
export_controls,
|
||||
|
||||
# Chart
|
||||
dcc.Graph(id='price-chart'),
|
||||
|
||||
# Hidden store for chart data
|
||||
dcc.Store(id='chart-data-store'),
|
||||
|
||||
# Enhanced Market statistics with integrated data analysis
|
||||
html.Div(id='market-stats', style={'margin-top': '20px'})
|
||||
])
|
||||
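A worked example of the timeframe filtering above, with a hypothetical database response; note that ordering follows the canonical list, not the order the database reports:

timeframes = ['1h', '1m']  # hypothetical: what get_supported_timeframes() returned
available = [tf for tf in ['1s', '5s', '15s', '30s', '1m', '5m', '15m', '1h', '4h', '1d'] if tf in timeframes]
assert available == ['1m', '1h']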
19
dashboard/layouts/performance.py
Normal file
@@ -0,0 +1,19 @@
"""
|
||||
Performance analytics layout for the dashboard.
|
||||
"""
|
||||
|
||||
from dash import html
|
||||
|
||||
|
||||
def get_performance_layout():
|
||||
"""Create the performance monitoring layout."""
|
||||
return html.Div([
|
||||
html.H2("📈 Performance Analytics", style={'color': '#2c3e50'}),
|
||||
html.P("Performance analytics will be implemented in Phase 6.0"),
|
||||
|
||||
# Placeholder for performance metrics
|
||||
html.Div([
|
||||
html.H3("Portfolio Performance"),
|
||||
html.P("Portfolio tracking coming soon", style={'color': '#7f8c8d'})
|
||||
], style={'margin': '20px 0'})
|
||||
])
|
||||
131
dashboard/layouts/system_health.py
Normal file
@@ -0,0 +1,131 @@
"""
|
||||
System health monitoring layout for the dashboard.
|
||||
"""
|
||||
|
||||
from dash import html
|
||||
import dash_bootstrap_components as dbc
|
||||
|
||||
def get_system_health_layout():
|
||||
"""Create the enhanced system health monitoring layout with Bootstrap components."""
|
||||
|
||||
def create_quick_status_card(title, component_id, icon):
|
||||
return dbc.Card(dbc.CardBody([
|
||||
html.H5(f"{icon} {title}", className="card-title"),
|
||||
html.Div(id=component_id, children=[
|
||||
dbc.Badge("Checking...", color="warning", className="me-1")
|
||||
])
|
||||
]), className="text-center")
|
||||
|
||||
return html.Div([
|
||||
# Header section
|
||||
html.Div([
|
||||
html.H2("⚙️ System Health & Data Monitoring"),
|
||||
html.P("Real-time monitoring of data collection services, database health, and system performance",
|
||||
className="lead")
|
||||
], className="p-5 mb-4 bg-light rounded-3"),
|
||||
|
||||
# Quick Status Overview Row
|
||||
dbc.Row([
|
||||
dbc.Col(create_quick_status_card("Data Collection", "data-collection-quick-status", "📊"), width=3),
|
||||
dbc.Col(create_quick_status_card("Database", "database-quick-status", "🗄️"), width=3),
|
||||
dbc.Col(create_quick_status_card("Redis", "redis-quick-status", "🔗"), width=3),
|
||||
dbc.Col(create_quick_status_card("Performance", "performance-quick-status", "📈"), width=3),
|
||||
], className="mb-4"),
|
||||
|
||||
# Detailed Monitoring Sections
|
||||
dbc.Row([
|
||||
# Left Column - Data Collection Service
|
||||
dbc.Col([
|
||||
# Data Collection Service Status
|
||||
dbc.Card([
|
||||
dbc.CardHeader(html.H4("📡 Data Collection Service")),
|
||||
dbc.CardBody([
|
||||
html.H5("Service Status", className="card-title"),
|
||||
html.Div(id='data-collection-service-status', className="mb-4"),
|
||||
|
||||
html.H5("Collection Metrics", className="card-title"),
|
||||
html.Div(id='data-collection-metrics', className="mb-4"),
|
||||
|
||||
html.H5("Service Controls", className="card-title"),
|
||||
dbc.ButtonGroup([
|
||||
dbc.Button("🔄 Refresh Status", id="refresh-data-status-btn", color="primary", outline=True, size="sm"),
|
||||
dbc.Button("📊 View Details", id="view-collection-details-btn", color="secondary", outline=True, size="sm"),
|
||||
dbc.Button("📋 View Logs", id="view-collection-logs-btn", color="info", outline=True, size="sm")
|
||||
])
|
||||
])
|
||||
], className="mb-4"),
|
||||
|
||||
# Data Collector Health
|
||||
dbc.Card([
|
||||
dbc.CardHeader(html.H4("🔌 Individual Collectors")),
|
||||
dbc.CardBody([
|
||||
html.Div(id='individual-collectors-status'),
|
||||
html.Div([
|
||||
dbc.Alert(
|
||||
"Collector health data will be displayed here when the data collection service is running.",
|
||||
id="collectors-info-alert",
|
||||
color="info",
|
||||
is_open=True,
|
||||
)
|
||||
], id='collectors-placeholder')
|
||||
])
|
||||
], className="mb-4"),
|
||||
], width=6),
|
||||
|
||||
# Right Column - System Health
|
||||
dbc.Col([
|
||||
# Database Status
|
||||
dbc.Card([
|
||||
dbc.CardHeader(html.H4("🗄️ Database Health")),
|
||||
dbc.CardBody([
|
||||
html.H5("Connection Status", className="card-title"),
|
||||
html.Div(id='database-status', className="mb-3"),
|
||||
html.Hr(),
|
||||
html.H5("Database Statistics", className="card-title"),
|
||||
html.Div(id='database-stats')
|
||||
])
|
||||
], className="mb-4"),
|
||||
|
||||
# Redis Status
|
||||
dbc.Card([
|
||||
dbc.CardHeader(html.H4("🔗 Redis Status")),
|
||||
dbc.CardBody([
|
||||
html.H5("Connection Status", className="card-title"),
|
||||
html.Div(id='redis-status', className="mb-3"),
|
||||
html.Hr(),
|
||||
html.H5("Redis Statistics", className="card-title"),
|
||||
html.Div(id='redis-stats')
|
||||
])
|
||||
], className="mb-4"),
|
||||
|
||||
# System Performance
|
||||
dbc.Card([
|
||||
dbc.CardHeader(html.H4("📈 System Performance")),
|
||||
dbc.CardBody([
|
||||
html.Div(id='system-performance-metrics')
|
||||
])
|
||||
], className="mb-4"),
|
||||
], width=6)
|
||||
]),
|
||||
|
||||
# Data Collection Details Modal
|
||||
dbc.Modal([
|
||||
dbc.ModalHeader(dbc.ModalTitle("📊 Data Collection Details")),
|
||||
dbc.ModalBody(id="collection-details-content")
|
||||
], id="collection-details-modal", is_open=False, size="lg"),
|
||||
|
||||
# Collection Logs Modal
|
||||
dbc.Modal([
|
||||
dbc.ModalHeader(dbc.ModalTitle("📋 Collection Service Logs")),
|
||||
dbc.ModalBody(
|
||||
html.Div(
|
||||
html.Pre(id="collection-logs-content", style={'max-height': '400px', 'overflow-y': 'auto'}),
|
||||
style={'white-space': 'pre-wrap', 'background-color': '#f8f9fa', 'padding': '15px', 'border-radius': '5px'}
|
||||
)
|
||||
),
|
||||
dbc.ModalFooter([
|
||||
dbc.Button("Refresh", id="refresh-logs-btn", color="primary"),
|
||||
dbc.Button("Close", id="close-logs-modal", color="secondary", className="ms-auto")
|
||||
])
|
||||
], id="collection-logs-modal", is_open=False, size="xl")
|
||||
])
|
||||
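A sketch (not part of this commit) of how one of the "Checking..." badges above might be refreshed; the dcc.Interval id and the check_database_connection helper are hypothetical:

from dash import callback, Output, Input

@callback(
    Output('database-quick-status', 'children'),
    Input('health-refresh-interval', 'n_intervals'),  # hypothetical dcc.Interval
)
def refresh_database_badge(_):
    ok = check_database_connection()  # hypothetical health-check helper
    return dbc.Badge("Online" if ok else "Offline",
                     color="success" if ok else "danger", className="me-1")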
25
data/__init__.py
Normal file
@@ -0,0 +1,25 @@
"""
|
||||
Data collection and processing package for the Crypto Trading Bot Platform.
|
||||
|
||||
This package contains modules for collecting market data from various exchanges,
|
||||
processing and validating the data, and storing it in the database.
|
||||
"""
|
||||
|
||||
from .base_collector import (
|
||||
BaseDataCollector, DataCollectorError, DataValidationError,
|
||||
DataType, CollectorStatus, MarketDataPoint, OHLCVData
|
||||
)
|
||||
from .collector_manager import CollectorManager, ManagerStatus, CollectorConfig
|
||||
|
||||
__all__ = [
|
||||
'BaseDataCollector',
|
||||
'DataCollectorError',
|
||||
'DataValidationError',
|
||||
'DataType',
|
||||
'CollectorStatus',
|
||||
'MarketDataPoint',
|
||||
'OHLCVData',
|
||||
'CollectorManager',
|
||||
'ManagerStatus',
|
||||
'CollectorConfig'
|
||||
]
|
||||
735
data/base_collector.py
Normal file
@@ -0,0 +1,735 @@
"""
|
||||
Abstract base class for data collectors.
|
||||
|
||||
This module provides a common interface for all data collection implementations,
|
||||
ensuring consistency across different exchange connectors and data sources.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
from abc import ABC, abstractmethod
|
||||
from datetime import datetime, timezone, timedelta
|
||||
from decimal import Decimal
|
||||
from typing import Dict, List, Optional, Any, Callable, Set
|
||||
from dataclasses import dataclass
|
||||
from enum import Enum
|
||||
|
||||
from utils.logger import get_logger
|
||||
|
||||
|
||||
class DataType(Enum):
|
||||
"""Types of data that can be collected."""
|
||||
TICKER = "ticker"
|
||||
TRADE = "trade"
|
||||
ORDERBOOK = "orderbook"
|
||||
CANDLE = "candle"
|
||||
BALANCE = "balance"
|
||||
|
||||
|
||||
class CollectorStatus(Enum):
|
||||
"""Status of the data collector."""
|
||||
STOPPED = "stopped"
|
||||
STARTING = "starting"
|
||||
RUNNING = "running"
|
||||
STOPPING = "stopping"
|
||||
ERROR = "error"
|
||||
RECONNECTING = "reconnecting"
|
||||
UNHEALTHY = "unhealthy" # Added for health monitoring
|
||||
|
||||
|
||||
@dataclass
|
||||
class MarketDataPoint:
|
||||
"""Standardized market data structure."""
|
||||
exchange: str
|
||||
symbol: str
|
||||
timestamp: datetime
|
||||
data_type: DataType
|
||||
data: Dict[str, Any]
|
||||
|
||||
def __post_init__(self):
|
||||
"""Validate data after initialization."""
|
||||
if not self.timestamp.tzinfo:
|
||||
self.timestamp = self.timestamp.replace(tzinfo=timezone.utc)
|
||||
|
||||
|
||||
@dataclass
|
||||
class OHLCVData:
|
||||
"""OHLCV (Open, High, Low, Close, Volume) data structure."""
|
||||
symbol: str
|
||||
timeframe: str
|
||||
timestamp: datetime
|
||||
open: Decimal
|
||||
high: Decimal
|
||||
low: Decimal
|
||||
close: Decimal
|
||||
volume: Decimal
|
||||
trades_count: Optional[int] = None
|
||||
|
||||
def __post_init__(self):
|
||||
"""Validate OHLCV data after initialization."""
|
||||
if not self.timestamp.tzinfo:
|
||||
self.timestamp = self.timestamp.replace(tzinfo=timezone.utc)
|
||||
|
||||
# Validate price data
|
||||
if not all(isinstance(price, (Decimal, float, int)) for price in [self.open, self.high, self.low, self.close]):
|
||||
raise DataValidationError("All OHLCV prices must be numeric")
|
||||
|
||||
if not isinstance(self.volume, (Decimal, float, int)):
|
||||
raise DataValidationError("Volume must be numeric")
|
||||
|
||||
# Convert to Decimal for precision
|
||||
self.open = Decimal(str(self.open))
|
||||
self.high = Decimal(str(self.high))
|
||||
self.low = Decimal(str(self.low))
|
||||
self.close = Decimal(str(self.close))
|
||||
self.volume = Decimal(str(self.volume))
|
||||
|
||||
# Validate price relationships
|
||||
if not (self.low <= self.open <= self.high and self.low <= self.close <= self.high):
|
||||
raise DataValidationError(f"Invalid OHLCV data: prices don't match expected relationships for {self.symbol}")
|
||||
|
||||
|
||||
class DataCollectorError(Exception):
|
||||
"""Base exception for data collector errors."""
|
||||
pass
|
||||
|
||||
|
||||
class DataValidationError(DataCollectorError):
|
||||
"""Exception raised when data validation fails."""
|
||||
pass
|
||||
|
||||
|
||||
class ConnectionError(DataCollectorError):
|
||||
"""Exception raised when connection to data source fails."""
|
||||
pass
|
||||
|
||||
|
||||
class BaseDataCollector(ABC):
|
||||
"""
|
||||
Abstract base class for all data collectors.
|
||||
|
||||
This class defines the interface that all data collection implementations
|
||||
must follow, providing consistency across different exchanges and data sources.
|
||||
"""
|
||||
|
||||
def __init__(self,
|
||||
exchange_name: str,
|
||||
symbols: List[str],
|
||||
data_types: Optional[List[DataType]] = None,
|
||||
timeframes: Optional[List[str]] = None,
|
||||
component_name: Optional[str] = None,
|
||||
auto_restart: bool = True,
|
||||
health_check_interval: float = 30.0,
|
||||
|
||||
logger = None,
|
||||
log_errors_only: bool = False):
|
||||
"""
|
||||
Initialize the base data collector.
|
||||
|
||||
Args:
|
||||
exchange_name: Name of the exchange (e.g., 'okx', 'binance')
|
||||
symbols: List of trading symbols to collect data for
|
||||
data_types: Types of data to collect (default: [DataType.CANDLE])
|
||||
timeframes: List of timeframes to collect (e.g., ['1s', '1m', '5m'])
|
||||
component_name: Name for logging (default: based on exchange_name)
|
||||
auto_restart: Enable automatic restart on failures (default: True)
|
||||
health_check_interval: Seconds between health checks (default: 30.0)
|
||||
logger: Logger instance. If None, no logging will be performed.
|
||||
log_errors_only: If True and logger is provided, only log error-level messages
|
||||
"""
|
||||
self.exchange_name = exchange_name.lower()
|
||||
self.symbols = set(symbols)
|
||||
self.data_types = data_types or [DataType.CANDLE]
|
||||
self.timeframes = timeframes or ['1m', '5m'] # Default timeframes if none provided
|
||||
self.auto_restart = auto_restart
|
||||
self.health_check_interval = health_check_interval
|
||||
self.log_errors_only = log_errors_only
|
||||
|
||||
# Initialize logger based on parameters
|
||||
if logger is not None:
|
||||
self.logger = logger
|
||||
else:
|
||||
self.logger = None
|
||||
|
||||
# Collector state
|
||||
self.status = CollectorStatus.STOPPED
|
||||
self._running = False
|
||||
self._should_be_running = False # Track desired state
|
||||
self._tasks: Set[asyncio.Task] = set()
|
||||
|
||||
# Data callbacks
|
||||
self._data_callbacks: Dict[DataType, List[Callable]] = {
|
||||
data_type: [] for data_type in DataType
|
||||
}
|
||||
|
||||
# Connection management
|
||||
self._connection = None
|
||||
self._reconnect_attempts = 0
|
||||
self._max_reconnect_attempts = 5
|
||||
self._reconnect_delay = 5.0 # seconds
|
||||
|
||||
# Health monitoring
|
||||
self._last_heartbeat = datetime.now(timezone.utc)
|
||||
self._last_data_received = None
|
||||
self._health_check_task = None
|
||||
self._max_silence_duration = timedelta(minutes=5) # Max time without data before unhealthy
|
||||
|
||||
# Statistics
|
||||
self._stats = {
|
||||
'messages_received': 0,
|
||||
'messages_processed': 0,
|
||||
'errors': 0,
|
||||
'restarts': 0,
|
||||
'last_message_time': None,
|
||||
'connection_uptime': None,
|
||||
'last_error': None,
|
||||
'last_restart_time': None
|
||||
}
|
||||
|
||||
# Log initialization if logger is available
|
||||
if self.logger:
|
||||
component = component_name or f"{self.exchange_name}_collector"
|
||||
self.component_name = component
|
||||
if not self.log_errors_only:
|
||||
self.logger.info(f"{self.component_name}: Initialized {self.exchange_name} data collector for symbols: {', '.join(symbols)}")
|
||||
self.logger.info(f"{self.component_name}: Using timeframes: {', '.join(self.timeframes)}")
|
||||
else:
|
||||
self.component_name = component_name or f"{self.exchange_name}_collector"
|
||||
|
||||
def _log_debug(self, message: str) -> None:
|
||||
"""Log debug message if logger is available and not in errors-only mode."""
|
||||
if self.logger and not self.log_errors_only:
|
||||
self.logger.debug(message)
|
||||
|
||||
def _log_info(self, message: str) -> None:
|
||||
"""Log info message if logger is available and not in errors-only mode."""
|
||||
if self.logger and not self.log_errors_only:
|
||||
self.logger.info(message)
|
||||
|
||||
def _log_warning(self, message: str) -> None:
|
||||
"""Log warning message if logger is available and not in errors-only mode."""
|
||||
if self.logger and not self.log_errors_only:
|
||||
self.logger.warning(message)
|
||||
|
||||
def _log_error(self, message: str, exc_info: bool = False) -> None:
|
||||
"""Log error message if logger is available (always logs errors regardless of log_errors_only)."""
|
||||
if self.logger:
|
||||
self.logger.error(message, exc_info=exc_info)
|
||||
|
||||
def _log_critical(self, message: str, exc_info: bool = False) -> None:
|
||||
"""Log critical message if logger is available (always logs critical regardless of log_errors_only)."""
|
||||
if self.logger:
|
||||
self.logger.critical(message, exc_info=exc_info)
|
||||
|
||||
@abstractmethod
|
||||
async def connect(self) -> bool:
|
||||
"""
|
||||
Establish connection to the data source.
|
||||
|
||||
Returns:
|
||||
True if connection successful, False otherwise
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
async def disconnect(self) -> None:
|
||||
"""Disconnect from the data source."""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
async def subscribe_to_data(self, symbols: List[str], data_types: List[DataType]) -> bool:
|
||||
"""
|
||||
Subscribe to data streams for specified symbols and data types.
|
||||
|
||||
Args:
|
||||
symbols: Trading symbols to subscribe to
|
||||
data_types: Types of data to subscribe to
|
||||
|
||||
Returns:
|
||||
True if subscription successful, False otherwise
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
async def unsubscribe_from_data(self, symbols: List[str], data_types: List[DataType]) -> bool:
|
||||
"""
|
||||
Unsubscribe from data streams.
|
||||
|
||||
Args:
|
||||
symbols: Trading symbols to unsubscribe from
|
||||
data_types: Types of data to unsubscribe from
|
||||
|
||||
Returns:
|
||||
True if unsubscription successful, False otherwise
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
async def _process_message(self, message: Any) -> Optional[MarketDataPoint]:
|
||||
"""
|
||||
Process incoming message from the data source.
|
||||
|
||||
Args:
|
||||
message: Raw message from the data source
|
||||
|
||||
Returns:
|
||||
Processed MarketDataPoint or None if message should be ignored
|
||||
"""
|
||||
pass
|
||||
|
||||
async def start(self) -> bool:
|
||||
"""
|
||||
Start the data collector.
|
||||
|
||||
Returns:
|
||||
True if started successfully, False otherwise
|
||||
"""
|
||||
# Check if already running or starting
|
||||
if self.status in [CollectorStatus.RUNNING, CollectorStatus.STARTING]:
|
||||
self._log_warning("Data collector is already running or starting")
|
||||
return True
|
||||
|
||||
self._log_info(f"Starting {self.exchange_name} data collector")
|
||||
self.status = CollectorStatus.STARTING
|
||||
self._should_be_running = True
|
||||
|
||||
try:
|
||||
# Connect to data source
|
||||
if not await self.connect():
|
||||
self._log_error("Failed to connect to data source")
|
||||
self.status = CollectorStatus.ERROR
|
||||
return False
|
||||
|
||||
# Subscribe to data streams
|
||||
if not await self.subscribe_to_data(list(self.symbols), self.data_types):
|
||||
self._log_error("Failed to subscribe to data streams")
|
||||
self.status = CollectorStatus.ERROR
|
||||
await self.disconnect()
|
||||
return False
|
||||
|
||||
# Start background tasks
|
||||
self._running = True
|
||||
self.status = CollectorStatus.RUNNING
|
||||
|
||||
# Start message processing task
|
||||
message_task = asyncio.create_task(self._message_loop())
|
||||
self._tasks.add(message_task)
|
||||
message_task.add_done_callback(self._tasks.discard)
|
||||
|
||||
# Start health monitoring task
|
||||
if self.health_check_interval > 0:
|
||||
health_task = asyncio.create_task(self._health_monitor())
|
||||
self._tasks.add(health_task)
|
||||
health_task.add_done_callback(self._tasks.discard)
|
||||
|
||||
self._log_info(f"{self.exchange_name} data collector started successfully")
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
self._log_error(f"Failed to start data collector: {e}")
|
||||
self.status = CollectorStatus.ERROR
|
||||
self._should_be_running = False
|
||||
return False
|
||||
|
||||
async def stop(self, force: bool = False) -> None:
|
||||
"""
|
||||
Stop the data collector and cleanup resources.
|
||||
|
||||
Args:
|
||||
force: Force stop even if not graceful
|
||||
"""
|
||||
if self.status == CollectorStatus.STOPPED:
|
||||
self._log_warning("Data collector is already stopped")
|
||||
return
|
||||
|
||||
self._log_info(f"Stopping {self.exchange_name} data collector")
|
||||
self.status = CollectorStatus.STOPPING
|
||||
self._should_be_running = False
|
||||
|
||||
try:
|
||||
# Stop background tasks
|
||||
self._running = False
|
||||
|
||||
# Cancel all tasks
|
||||
for task in list(self._tasks):
|
||||
if not task.done():
|
||||
task.cancel()
|
||||
if not force:
|
||||
try:
|
||||
await task
|
||||
except asyncio.CancelledError:
|
||||
pass
|
||||
|
||||
self._tasks.clear()
|
||||
|
||||
# Unsubscribe and disconnect
|
||||
await self.unsubscribe_from_data(list(self.symbols), self.data_types)
|
||||
await self.disconnect()
|
||||
|
||||
self.status = CollectorStatus.STOPPED
|
||||
self._log_info(f"{self.exchange_name} data collector stopped")
|
||||
|
||||
except Exception as e:
|
||||
self._log_error(f"Error stopping data collector: {e}")
|
||||
self.status = CollectorStatus.ERROR
|
||||
|
||||
async def restart(self) -> bool:
|
||||
"""
|
||||
Restart the data collector.
|
||||
|
||||
Returns:
|
||||
True if restarted successfully, False otherwise
|
||||
"""
|
||||
self._log_info(f"Restarting {self.exchange_name} data collector")
|
||||
self._stats['restarts'] += 1
|
||||
self._stats['last_restart_time'] = datetime.now(timezone.utc)
|
||||
|
||||
# Stop first
|
||||
await self.stop()
|
||||
|
||||
# Wait a bit before restarting
|
||||
await asyncio.sleep(self._reconnect_delay)
|
||||
|
||||
# Start again
|
||||
return await self.start()
|
||||
|
||||
async def _message_loop(self) -> None:
|
||||
"""Main message processing loop."""
|
||||
try:
|
||||
self._log_debug("Starting message processing loop")
|
||||
|
||||
while self._running:
|
||||
try:
|
||||
await self._handle_messages()
|
||||
except asyncio.CancelledError:
|
||||
break
|
||||
except Exception as e:
|
||||
self._stats['errors'] += 1
|
||||
self._stats['last_error'] = str(e)
|
||||
self._log_error(f"Error processing messages: {e}")
|
||||
|
||||
# Small delay to prevent tight error loops
|
||||
await asyncio.sleep(0.1)
|
||||
|
||||
except asyncio.CancelledError:
|
||||
self._log_debug("Message loop cancelled")
|
||||
raise
|
||||
except Exception as e:
|
||||
self._log_error(f"Error in message loop: {e}")
|
||||
self.status = CollectorStatus.ERROR
|
||||
|
||||
async def _health_monitor(self) -> None:
|
||||
"""Monitor collector health and restart if needed."""
|
||||
try:
|
||||
self._log_debug("Starting health monitor")
|
||||
|
||||
while self._running:
|
||||
try:
|
||||
await asyncio.sleep(self.health_check_interval)
|
||||
|
||||
current_time = datetime.now(timezone.utc)
|
||||
|
||||
# Check if collector should be running but isn't
|
||||
if self._should_be_running and self.status != CollectorStatus.RUNNING:
|
||||
self._log_warning("Collector should be running but isn't - restarting")
|
||||
if self.auto_restart:
|
||||
asyncio.create_task(self.restart())
|
||||
continue
|
||||
|
||||
# Check heartbeat
|
||||
time_since_heartbeat = current_time - self._last_heartbeat
|
||||
if time_since_heartbeat > timedelta(seconds=self.health_check_interval * 2):
|
||||
self._log_warning(f"No heartbeat for {time_since_heartbeat.total_seconds():.1f}s - restarting")
|
||||
if self.auto_restart:
|
||||
asyncio.create_task(self.restart())
|
||||
continue
|
||||
|
||||
# Check data reception
|
||||
if self._last_data_received:
|
||||
time_since_data = current_time - self._last_data_received
|
||||
if time_since_data > self._max_silence_duration:
|
||||
self._log_warning(f"No data received for {time_since_data.total_seconds():.1f}s - restarting")
|
||||
if self.auto_restart:
|
||||
asyncio.create_task(self.restart())
|
||||
continue
|
||||
|
||||
# Check for error status
|
||||
if self.status == CollectorStatus.ERROR:
|
||||
self._log_warning(f"Collector in {self.status.value} status - restarting")
|
||||
if self.auto_restart:
|
||||
asyncio.create_task(self.restart())
|
||||
|
||||
except asyncio.CancelledError:
|
||||
break
|
||||
|
||||
except asyncio.CancelledError:
|
||||
self._log_debug("Health monitor cancelled")
|
||||
raise
|
||||
except Exception as e:
|
||||
self._log_error(f"Error in health monitor: {e}")
|
||||
|
||||
@abstractmethod
|
||||
async def _handle_messages(self) -> None:
|
||||
"""
|
||||
Handle incoming messages from the data source.
|
||||
This method should be implemented by subclasses to handle their specific message format.
|
||||
"""
|
||||
pass
|
||||
|
||||
async def _handle_connection_error(self) -> bool:
|
||||
"""
|
||||
Handle connection errors and attempt reconnection.
|
||||
|
||||
Returns:
|
||||
True if reconnection successful, False if max attempts exceeded
|
||||
"""
|
||||
self._reconnect_attempts += 1
|
||||
|
||||
if self._reconnect_attempts > self._max_reconnect_attempts:
|
||||
self._log_error(f"Max reconnection attempts ({self._max_reconnect_attempts}) exceeded")
|
||||
self.status = CollectorStatus.ERROR
|
||||
self._should_be_running = False
|
||||
return False
|
||||
|
||||
self.status = CollectorStatus.RECONNECTING
|
||||
self._log_warning(f"Connection lost. Attempting reconnection {self._reconnect_attempts}/{self._max_reconnect_attempts}")
|
||||
|
||||
# Disconnect and wait before retrying
|
||||
await self.disconnect()
|
||||
await asyncio.sleep(self._reconnect_delay)
|
||||
|
||||
# Attempt to reconnect
|
||||
try:
|
||||
if await self.connect():
|
||||
if await self.subscribe_to_data(list(self.symbols), self.data_types):
|
||||
self._log_info("Reconnection successful")
|
||||
self.status = CollectorStatus.RUNNING
|
||||
self._reconnect_attempts = 0
|
||||
return True
|
||||
|
||||
except Exception as e:
|
||||
self._log_error(f"Reconnection attempt failed: {e}")
|
||||
|
||||
return False
|
||||
|
||||
def add_data_callback(self, data_type: DataType, callback: Callable[[MarketDataPoint], None]) -> None:
|
||||
"""
|
||||
Add a callback function for specific data type.
|
||||
|
||||
Args:
|
||||
data_type: Type of data to monitor
|
||||
callback: Function to call when data is received
|
||||
"""
|
||||
if callback not in self._data_callbacks[data_type]:
|
||||
self._data_callbacks[data_type].append(callback)
|
||||
self._log_debug(f"Added callback for {data_type.value} data")
|
||||
|
||||
def remove_data_callback(self, data_type: DataType, callback: Callable[[MarketDataPoint], None]) -> None:
|
||||
"""
|
||||
Remove a callback function for specific data type.
|
||||
|
||||
Args:
|
||||
data_type: Type of data to stop monitoring
|
||||
callback: Function to remove
|
||||
"""
|
||||
if callback in self._data_callbacks[data_type]:
|
||||
self._data_callbacks[data_type].remove(callback)
|
||||
self._log_debug(f"Removed callback for {data_type.value} data")
|
||||
|
||||
async def _notify_callbacks(self, data_point: MarketDataPoint) -> None:
|
||||
"""
|
||||
Notify all registered callbacks for a data point.
|
||||
|
||||
Args:
|
||||
data_point: Market data to distribute
|
||||
"""
|
||||
callbacks = self._data_callbacks.get(data_point.data_type, [])
|
||||
|
||||
for callback in callbacks:
|
||||
try:
|
||||
# Handle both sync and async callbacks
|
||||
if asyncio.iscoroutinefunction(callback):
|
||||
await callback(data_point)
|
||||
else:
|
||||
callback(data_point)
|
||||
|
||||
except Exception as e:
|
||||
self._log_error(f"Error in data callback: {e}")
|
||||
|
||||
# Update statistics
|
||||
self._stats['messages_processed'] += 1
|
||||
self._stats['last_message_time'] = data_point.timestamp
|
||||
self._last_data_received = datetime.now(timezone.utc)
|
||||
self._last_heartbeat = datetime.now(timezone.utc)
|
||||
|
||||
def get_status(self) -> Dict[str, Any]:
|
||||
"""
|
||||
Get current collector status and statistics.
|
||||
|
||||
Returns:
|
||||
Dictionary containing status information
|
||||
"""
|
||||
uptime_seconds = None
|
||||
if self._stats['connection_uptime']:
|
||||
uptime_seconds = (datetime.now(timezone.utc) - self._stats['connection_uptime']).total_seconds()
|
||||
|
||||
time_since_heartbeat = None
|
||||
if self._last_heartbeat:
|
||||
time_since_heartbeat = (datetime.now(timezone.utc) - self._last_heartbeat).total_seconds()
|
||||
|
||||
time_since_data = None
|
||||
if self._last_data_received:
|
||||
time_since_data = (datetime.now(timezone.utc) - self._last_data_received).total_seconds()
|
||||
|
||||
return {
|
||||
'exchange': self.exchange_name,
|
||||
'status': self.status.value,
|
||||
'should_be_running': self._should_be_running,
|
||||
'symbols': list(self.symbols),
|
||||
'data_types': [dt.value for dt in self.data_types],
|
||||
'timeframes': self.timeframes,
|
||||
'auto_restart': self.auto_restart,
|
||||
'health': {
|
||||
'time_since_heartbeat': time_since_heartbeat,
|
||||
'time_since_data': time_since_data,
|
||||
                'max_silence_duration': self._max_silence_duration.total_seconds()
            },
            'statistics': {
                **self._stats,
                'uptime_seconds': uptime_seconds,
                'reconnect_attempts': self._reconnect_attempts
            }
        }

    def get_health_status(self) -> Dict[str, Any]:
        """
        Get detailed health status for monitoring.

        Returns:
            Dictionary containing health information
        """
        now = datetime.now(timezone.utc)

        is_healthy = True
        health_issues = []

        # Check if should be running but isn't
        if self._should_be_running and not self._running:
            is_healthy = False
            health_issues.append("Should be running but is stopped")

        # Check heartbeat
        if self._last_heartbeat:
            time_since_heartbeat = now - self._last_heartbeat
            if time_since_heartbeat > timedelta(seconds=self.health_check_interval * 2):
                is_healthy = False
                health_issues.append(f"No heartbeat for {time_since_heartbeat.total_seconds():.1f}s")

        # Check data freshness
        if self._last_data_received:
            time_since_data = now - self._last_data_received
            if time_since_data > self._max_silence_duration:
                is_healthy = False
                health_issues.append(f"No data for {time_since_data.total_seconds():.1f}s")

        # Check status
        if self.status in [CollectorStatus.ERROR, CollectorStatus.UNHEALTHY]:
            is_healthy = False
            health_issues.append(f"Status: {self.status.value}")

        return {
            'is_healthy': is_healthy,
            'issues': health_issues,
            'status': self.status.value,
            'last_heartbeat': self._last_heartbeat.isoformat() if self._last_heartbeat else None,
            'last_data_received': self._last_data_received.isoformat() if self._last_data_received else None,
            'should_be_running': self._should_be_running,
            'is_running': self._running,
            'timeframes': self.timeframes,
            'data_types': [dt.value for dt in self.data_types]
        }

    def add_symbol(self, symbol: str) -> None:
        """
        Add a new symbol to collect data for.

        Args:
            symbol: Trading symbol to add
        """
        if symbol not in self.symbols:
            self.symbols.add(symbol)
            self._log_info(f"Added symbol: {symbol}")

            # If collector is running, subscribe to new symbol
            if self.status == CollectorStatus.RUNNING:
                # Note: This needs to be called from an async context
                # Users should handle this appropriately
                pass

    def remove_symbol(self, symbol: str) -> None:
        """
        Remove a symbol from data collection.

        Args:
            symbol: Trading symbol to remove
        """
        if symbol in self.symbols:
            self.symbols.remove(symbol)
            self._log_info(f"Removed symbol: {symbol}")

            # If collector is running, unsubscribe from symbol
            if self.status == CollectorStatus.RUNNING:
                # Note: This needs to be called from an async context
                # Users should handle this appropriately
                pass

    def validate_ohlcv_data(self, data: Dict[str, Any], symbol: str, timeframe: str) -> OHLCVData:
        """
        Validate and convert raw OHLCV data to standardized format.

        Args:
            data: Raw OHLCV data dictionary
            symbol: Trading symbol
            timeframe: Timeframe (e.g., '1m', '5m', '1h')

        Returns:
            Validated OHLCVData object

        Raises:
            DataValidationError: If data validation fails
        """
        required_fields = ['timestamp', 'open', 'high', 'low', 'close', 'volume']

        # Check required fields
        for field in required_fields:
            if field not in data:
                raise DataValidationError(f"Missing required field: {field}")

        try:
            # Parse timestamp
            timestamp = data['timestamp']
            if isinstance(timestamp, (int, float)):
                # Assume Unix timestamp in milliseconds
                timestamp = datetime.fromtimestamp(timestamp / 1000, tz=timezone.utc)
            elif isinstance(timestamp, str):
                timestamp = datetime.fromisoformat(timestamp.replace('Z', '+00:00'))
            elif not isinstance(timestamp, datetime):
                raise DataValidationError(f"Invalid timestamp format: {type(timestamp)}")

            return OHLCVData(
                symbol=symbol,
                timeframe=timeframe,
                timestamp=timestamp,
                open=Decimal(str(data['open'])),
                high=Decimal(str(data['high'])),
                low=Decimal(str(data['low'])),
                close=Decimal(str(data['close'])),
                volume=Decimal(str(data['volume'])),
                trades_count=data.get('trades_count')
            )

        except (ValueError, TypeError, KeyError) as e:
            raise DataValidationError(f"Invalid OHLCV data for {symbol}: {e}")

    def __repr__(self) -> str:
        """String representation of the collector."""
        return f"<{self.__class__.__name__}({self.exchange_name}, {len(self.symbols)} symbols, {self.status.value})>"
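For reference, a minimal sketch of how the timestamp branches above behave, assuming a concrete collector subclass instance named collector; the raw dicts are illustrative shapes, not any exchange's actual payload:

# Both input forms normalize to the same timezone-aware UTC datetime:
raw_ms = {'timestamp': 1700000000000, 'open': '42000.1', 'high': '42010.5',
          'low': '41990.0', 'close': '42005.2', 'volume': '12.5'}   # Unix epoch in ms
raw_iso = {**raw_ms, 'timestamp': '2023-11-14T22:13:20Z'}           # ISO-8601 with 'Z'

candle = collector.validate_ohlcv_data(raw_ms, symbol='BTC-USDT', timeframe='1m')
assert candle.timestamp.tzinfo is not None  # always UTC-aware after validation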
451
data/collection_service.py
Normal file
451
data/collection_service.py
Normal file
@ -0,0 +1,451 @@
#!/usr/bin/env python3
"""
Data Collection Service

Production-ready service for cryptocurrency market data collection
with clean logging and robust error handling.

This service manages multiple data collectors for different trading pairs
and exchanges, with proper health monitoring and graceful shutdown.
"""

import asyncio
import signal
import sys
import time
import json
from datetime import datetime
from pathlib import Path
from typing import List, Optional, Dict, Any
import logging

# Add project root to path
project_root = Path(__file__).parent.parent
sys.path.insert(0, str(project_root))

# Set environment for clean production logging
import os
os.environ['DEBUG'] = 'false'

# Suppress verbose SQLAlchemy logging for production
logging.getLogger('sqlalchemy').setLevel(logging.WARNING)
logging.getLogger('sqlalchemy.engine').setLevel(logging.WARNING)
logging.getLogger('sqlalchemy.pool').setLevel(logging.WARNING)
logging.getLogger('sqlalchemy.dialects').setLevel(logging.WARNING)
logging.getLogger('sqlalchemy.orm').setLevel(logging.WARNING)

from data.exchanges.factory import ExchangeFactory
from data.collector_manager import CollectorManager
from data.base_collector import DataType
from database.connection import init_database
from utils.logger import get_logger


class DataCollectionService:
    """
    Production data collection service.

    Manages multiple data collectors with clean logging focused on:
    - Service lifecycle (start/stop/restart)
    - Connection status (connect/disconnect/reconnect)
    - Health status and errors
    - Basic collection statistics

    Excludes verbose logging of individual trades/candles for production clarity.
    """

    def __init__(self, config_path: str = "config/data_collection.json"):
        """Initialize the data collection service."""
        self.config_path = config_path

        # Initialize clean logging first - only essential information
        self.logger = get_logger(
            "data_collection_service",
            log_level="INFO",
            verbose=False  # Clean console output
        )

        # Load configuration after logger is initialized
        self.config = self._load_config()

        # Core components
        self.collector_manager = CollectorManager(
            logger=self.logger,
            log_errors_only=True  # Only log errors and essential events
        )
        self.collectors: List = []

        # Service state
        self.running = False
        self.start_time = None
        self.shutdown_event = asyncio.Event()

        # Statistics for monitoring
        self.stats = {
            'collectors_created': 0,
            'collectors_running': 0,
            'total_uptime_seconds': 0,
            'last_activity': None,
            'errors_count': 0
        }

        self.logger.info("🚀 Data Collection Service initialized")
        self.logger.info(f"📁 Configuration: {config_path}")

    def _load_config(self) -> Dict[str, Any]:
        """Load service configuration from JSON file."""
        try:
            config_file = Path(self.config_path)
            if not config_file.exists():
                # Create default config if it doesn't exist
                self._create_default_config(config_file)

            with open(config_file, 'r') as f:
                config = json.load(f)

            self.logger.info(f"✅ Configuration loaded from {self.config_path}")
            return config

        except Exception as e:
            self.logger.error(f"❌ Failed to load configuration: {e}")
            raise

    def _create_default_config(self, config_file: Path) -> None:
        """Create a default configuration file."""
        default_config = {
            "exchange": "okx",
            "connection": {
                "public_ws_url": "wss://ws.okx.com:8443/ws/v5/public",
                "private_ws_url": "wss://ws.okx.com:8443/ws/v5/private",
                "ping_interval": 25.0,
                "pong_timeout": 10.0,
                "max_reconnect_attempts": 5,
                "reconnect_delay": 5.0
            },
            "data_collection": {
                "store_raw_data": True,
                "health_check_interval": 120.0,
                "auto_restart": True,
                "buffer_size": 1000
            },
            "trading_pairs": [
                {
                    "symbol": "BTC-USDT",
                    "enabled": True,
                    "data_types": ["trade", "orderbook"],
                    "timeframes": ["1m", "5m", "15m", "1h"],
                    "channels": {
                        "trades": "trades",
                        "orderbook": "books5",
                        "ticker": "tickers"
                    }
                },
                {
                    "symbol": "ETH-USDT",
                    "enabled": True,
                    "data_types": ["trade", "orderbook"],
                    "timeframes": ["1m", "5m", "15m", "1h"],
                    "channels": {
                        "trades": "trades",
                        "orderbook": "books5",
                        "ticker": "tickers"
                    }
                }
            ],
            "logging": {
                "component_name_template": "okx_collector_{symbol}",
                "log_level": "INFO",
                "verbose": False
            },
            "database": {
                "store_processed_data": True,
                "store_raw_data": True,
                "force_update_candles": False,
                "batch_size": 100,
                "flush_interval": 5.0
            }
        }

        # Ensure directory exists
        config_file.parent.mkdir(parents=True, exist_ok=True)

        with open(config_file, 'w') as f:
            json.dump(default_config, f, indent=2)

        self.logger.info(f"📄 Created default configuration: {config_file}")

    async def initialize_collectors(self) -> bool:
        """Initialize all data collectors based on configuration."""
        try:
            # Get exchange configuration (now using okx_config.json structure)
            exchange_name = self.config.get('exchange', 'okx')
            trading_pairs = self.config.get('trading_pairs', [])
            data_collection_config = self.config.get('data_collection', {})

            enabled_pairs = [pair for pair in trading_pairs if pair.get('enabled', True)]

            if not enabled_pairs:
                self.logger.warning(f"⚠️ No enabled trading pairs for {exchange_name}")
                return False

            self.logger.info(f"🔧 Initializing {len(enabled_pairs)} collectors for {exchange_name.upper()}")

            total_collectors = 0

            # Create collectors for each trading pair
            for pair_config in enabled_pairs:
                if await self._create_collector(exchange_name, pair_config, data_collection_config):
                    total_collectors += 1
                else:
                    self.logger.error(f"❌ Failed to create collector for {pair_config.get('symbol', 'unknown')}")
                    self.stats['errors_count'] += 1

            self.stats['collectors_created'] = total_collectors

            if total_collectors > 0:
                self.logger.info(f"✅ Successfully initialized {total_collectors} data collectors")
                return True
            else:
                self.logger.error("❌ No collectors were successfully initialized")
                return False

        except Exception as e:
            self.logger.error(f"❌ Failed to initialize collectors: {e}")
            self.stats['errors_count'] += 1
            return False

    async def _create_collector(self, exchange_name: str, pair_config: Dict[str, Any], data_collection_config: Dict[str, Any]) -> bool:
        """Create a single data collector for a trading pair."""
        try:
            from data.exchanges.factory import ExchangeCollectorConfig

            symbol = pair_config['symbol']
            data_types = [DataType(dt) for dt in pair_config.get('data_types', ['trade'])]
            timeframes = pair_config.get('timeframes', ['1m', '5m'])

            # Create collector configuration using the proper structure
            collector_config = ExchangeCollectorConfig(
                exchange=exchange_name,
                symbol=symbol,
                data_types=data_types,
                timeframes=timeframes,  # Pass timeframes to config
                auto_restart=data_collection_config.get('auto_restart', True),
                health_check_interval=data_collection_config.get('health_check_interval', 120.0),
                store_raw_data=data_collection_config.get('store_raw_data', True),
                custom_params={
                    'component_name': f"{exchange_name}_collector_{symbol.replace('-', '_').lower()}",
                    'logger': self.logger,
                    'log_errors_only': True,  # Clean logging - only errors and essential events
                    'force_update_candles': self.config.get('database', {}).get('force_update_candles', False),
                    'timeframes': timeframes  # Pass timeframes to collector
                }
            )

            # Create collector using factory with proper config
            collector = ExchangeFactory.create_collector(collector_config)

            if collector:
                # Add to manager
                self.collector_manager.add_collector(collector)
                self.collectors.append(collector)

                self.logger.info(f"✅ Created collector: {symbol} [{'/'.join(timeframes)}]")
                return True
            else:
                self.logger.error(f"❌ Failed to create collector for {symbol}")
                return False

        except Exception as e:
            self.logger.error(f"❌ Error creating collector for {pair_config.get('symbol', 'unknown')}: {e}")
            return False

    async def start(self) -> bool:
        """Start the data collection service."""
        try:
            self.start_time = time.time()
            self.running = True

            self.logger.info("🚀 Starting Data Collection Service...")

            # Initialize database
            self.logger.info("📊 Initializing database connection...")
            init_database()
            self.logger.info("✅ Database connection established")

            # Start collector manager
            self.logger.info("🔌 Starting data collectors...")
            success = await self.collector_manager.start()

            if success:
                self.stats['collectors_running'] = len(self.collectors)
                self.stats['last_activity'] = datetime.now()

                self.logger.info("✅ Data Collection Service started successfully")
                self.logger.info(f"📈 Active collectors: {self.stats['collectors_running']}")
                return True
            else:
                self.logger.error("❌ Failed to start data collectors")
                self.stats['errors_count'] += 1
                return False

        except Exception as e:
            self.logger.error(f"❌ Failed to start service: {e}")
            self.stats['errors_count'] += 1
            return False

    async def stop(self) -> None:
        """Stop the data collection service gracefully."""
        try:
            self.logger.info("🛑 Stopping Data Collection Service...")
            self.running = False

            # Stop all collectors
            await self.collector_manager.stop()

            # Update statistics
            if self.start_time:
                self.stats['total_uptime_seconds'] = time.time() - self.start_time

            self.stats['collectors_running'] = 0

            self.logger.info("✅ Data Collection Service stopped gracefully")
            self.logger.info(f"📊 Total uptime: {self.stats['total_uptime_seconds']:.1f} seconds")

        except Exception as e:
            self.logger.error(f"❌ Error during service shutdown: {e}")
            self.stats['errors_count'] += 1

    def get_status(self) -> Dict[str, Any]:
        """Get current service status."""
        current_time = time.time()
        uptime = current_time - self.start_time if self.start_time else 0

        return {
            'running': self.running,
            'uptime_seconds': uptime,
            'uptime_hours': uptime / 3600,
            'collectors_total': len(self.collectors),
            'collectors_running': self.stats['collectors_running'],
            'errors_count': self.stats['errors_count'],
            'last_activity': self.stats['last_activity'],
            'start_time': datetime.fromtimestamp(self.start_time) if self.start_time else None
        }

    def setup_signal_handlers(self) -> None:
        """Setup signal handlers for graceful shutdown."""
        def signal_handler(signum, frame):
            self.logger.info(f"📡 Received shutdown signal ({signum}), stopping gracefully...")
            self.shutdown_event.set()

        signal.signal(signal.SIGINT, signal_handler)
        signal.signal(signal.SIGTERM, signal_handler)

    async def run(self, duration_hours: Optional[float] = None) -> bool:
        """
        Run the data collection service.

        Args:
            duration_hours: Optional duration to run (None = indefinite)

        Returns:
            bool: True if successful, False if error occurred
        """
        self.setup_signal_handlers()

        try:
            # Initialize collectors
            if not await self.initialize_collectors():
                return False

            # Start service
            if not await self.start():
                return False

            # Service running notification
            status = self.get_status()
            if duration_hours:
                self.logger.info(f"⏱️ Service will run for {duration_hours} hours")
            else:
                self.logger.info("⏱️ Service running indefinitely (until stopped)")

            self.logger.info(f"📊 Active collectors: {status['collectors_running']}")
            self.logger.info("🔍 Monitor with: python scripts/monitor_clean.py")

            # Main service loop
            update_interval = 600  # Status update every 10 minutes
            last_update = time.time()

            while not self.shutdown_event.is_set():
                # Wait for shutdown signal or timeout
                try:
                    await asyncio.wait_for(self.shutdown_event.wait(), timeout=1.0)
                    break
                except asyncio.TimeoutError:
                    pass

                current_time = time.time()

                # Check duration limit
                if duration_hours:
                    elapsed_hours = (current_time - self.start_time) / 3600
                    if elapsed_hours >= duration_hours:
                        self.logger.info(f"⏰ Completed {duration_hours} hour run")
                        break

                # Periodic status update
                if current_time - last_update >= update_interval:
                    elapsed_hours = (current_time - self.start_time) / 3600
                    self.logger.info(f"⏱️ Service uptime: {elapsed_hours:.1f} hours")
                    last_update = current_time

            return True

        except Exception as e:
            self.logger.error(f"❌ Service error: {e}")
            self.stats['errors_count'] += 1
            return False

        finally:
            await self.stop()


# Service entry point function
async def run_data_collection_service(
    config_path: str = "config/data_collection.json",
    duration_hours: Optional[float] = None
) -> bool:
    """
    Run the data collection service.

    Args:
        config_path: Path to configuration file
        duration_hours: Optional duration in hours (None = indefinite)

    Returns:
        bool: True if successful, False otherwise
    """
    service = DataCollectionService(config_path)
    return await service.run(duration_hours)


if __name__ == "__main__":
    # Simple CLI when run directly
    import argparse

    parser = argparse.ArgumentParser(description="Data Collection Service")
    parser.add_argument('--config', default="config/data_collection.json",
                        help='Configuration file path')
    parser.add_argument('--hours', type=float,
                        help='Run duration in hours (default: indefinite)')

    args = parser.parse_args()

    try:
        success = asyncio.run(run_data_collection_service(args.config, args.hours))
        sys.exit(0 if success else 1)
    except KeyboardInterrupt:
        print("\n👋 Service interrupted by user")
        sys.exit(0)
    except Exception as e:
        print(f"❌ Fatal error: {e}")
        sys.exit(1)
563
data/collector_manager.py
Normal file
563
data/collector_manager.py
Normal file
@ -0,0 +1,563 @@
"""
Data Collector Manager for supervising and managing multiple data collectors.

This module provides centralized management of data collectors with health monitoring,
auto-recovery, and coordinated lifecycle management.
"""

import asyncio
import time
from datetime import datetime, timezone, timedelta
from typing import Dict, List, Optional, Any, Set
from dataclasses import dataclass
from enum import Enum

from utils.logger import get_logger
from .base_collector import BaseDataCollector, CollectorStatus


class ManagerStatus(Enum):
    """Status of the collector manager."""
    STOPPED = "stopped"
    STARTING = "starting"
    RUNNING = "running"
    STOPPING = "stopping"
    ERROR = "error"


@dataclass
class CollectorConfig:
    """Configuration for a data collector."""
    name: str
    exchange: str
    symbols: List[str]
    data_types: List[str]
    auto_restart: bool = True
    health_check_interval: float = 30.0
    enabled: bool = True


class CollectorManager:
    """
    Manages multiple data collectors with health monitoring and auto-recovery.

    The manager is responsible for:
    - Starting and stopping collectors
    - Health monitoring and auto-restart
    - Coordinated lifecycle management
    - Status reporting and metrics
    """

    def __init__(self,
                 manager_name: str = "collector_manager",
                 global_health_check_interval: float = 60.0,
                 restart_delay: float = 5.0,
                 logger=None,
                 log_errors_only: bool = False):
        """
        Initialize the collector manager.

        Args:
            manager_name: Name for logging
            global_health_check_interval: Seconds between global health checks
            restart_delay: Delay between restart attempts
            logger: Logger instance. If None, no logging will be performed.
            log_errors_only: If True and logger is provided, only log error-level messages
        """
        self.manager_name = manager_name
        self.global_health_check_interval = global_health_check_interval
        self.restart_delay = restart_delay
        self.log_errors_only = log_errors_only

        # Initialize logger based on parameters
        if logger is not None:
            self.logger = logger
        else:
            self.logger = None

        # Manager state
        self.status = ManagerStatus.STOPPED
        self._running = False
        self._tasks: Set[asyncio.Task] = set()

        # Collector management
        self._collectors: Dict[str, BaseDataCollector] = {}
        self._collector_configs: Dict[str, CollectorConfig] = {}
        self._enabled_collectors: Set[str] = set()

        # Health monitoring
        self._last_global_check = datetime.now(timezone.utc)
        self._global_health_task = None

        # Statistics
        self._stats = {
            'total_collectors': 0,
            'running_collectors': 0,
            'failed_collectors': 0,
            'restarts_performed': 0,
            'last_global_check': None,
            'uptime_start': None
        }

        if self.logger and not self.log_errors_only:
            self.logger.info(f"Initialized collector manager: {manager_name}")

    def _log_debug(self, message: str) -> None:
        """Log debug message if logger is available and not in errors-only mode."""
        if self.logger and not self.log_errors_only:
            self.logger.debug(message)

    def _log_info(self, message: str) -> None:
        """Log info message if logger is available and not in errors-only mode."""
        if self.logger and not self.log_errors_only:
            self.logger.info(message)

    def _log_warning(self, message: str) -> None:
        """Log warning message if logger is available and not in errors-only mode."""
        if self.logger and not self.log_errors_only:
            self.logger.warning(message)

    def _log_error(self, message: str, exc_info: bool = False) -> None:
        """Log error message if logger is available (always logs errors regardless of log_errors_only)."""
        if self.logger:
            self.logger.error(message, exc_info=exc_info)

    def _log_critical(self, message: str, exc_info: bool = False) -> None:
        """Log critical message if logger is available (always logs critical regardless of log_errors_only)."""
        if self.logger:
            self.logger.critical(message, exc_info=exc_info)

    def add_collector(self,
                      collector: BaseDataCollector,
                      config: Optional[CollectorConfig] = None) -> None:
        """
        Add a collector to be managed.

        Args:
            collector: Data collector instance
            config: Optional configuration (will create default if not provided)
        """
        # Use a more unique name to avoid duplicates
        collector_name = f"{collector.exchange_name}_{int(time.time() * 1000000) % 1000000}"

        # Ensure unique name
        counter = 1
        base_name = collector_name
        while collector_name in self._collectors:
            collector_name = f"{base_name}_{counter}"
            counter += 1

        if config is None:
            config = CollectorConfig(
                name=collector_name,
                exchange=collector.exchange_name,
                symbols=list(collector.symbols),
                data_types=[dt.value for dt in collector.data_types],
                auto_restart=collector.auto_restart,
                health_check_interval=collector.health_check_interval
            )

        self._collectors[collector_name] = collector
        self._collector_configs[collector_name] = config

        if config.enabled:
            self._enabled_collectors.add(collector_name)

        self._stats['total_collectors'] = len(self._collectors)

        self._log_info(f"Added collector: {collector_name} ({collector.exchange_name}) - "
                       f"Symbols: {', '.join(collector.symbols)} - Enabled: {config.enabled}")

    def remove_collector(self, collector_name: str) -> bool:
        """
        Remove a collector from management.

        Args:
            collector_name: Name of the collector to remove

        Returns:
            True if removed successfully, False if not found
        """
        if collector_name not in self._collectors:
            self._log_warning(f"Collector not found: {collector_name}")
            return False

        # Stop the collector first (only if event loop is running)
        collector = self._collectors[collector_name]
        if collector.status != CollectorStatus.STOPPED:
            try:
                # Try to create task only if event loop is running
                asyncio.create_task(collector.stop(force=True))
            except RuntimeError:
                # No event loop running, just log
                self._log_info(f"Collector {collector_name} will be removed without stopping (no event loop)")

        # Remove from management
        del self._collectors[collector_name]
        del self._collector_configs[collector_name]
        self._enabled_collectors.discard(collector_name)

        self._stats['total_collectors'] = len(self._collectors)

        self._log_info(f"Removed collector: {collector_name}")
        return True

    def enable_collector(self, collector_name: str) -> bool:
        """
        Enable a collector (will be started if manager is running).

        Args:
            collector_name: Name of the collector to enable

        Returns:
            True if enabled successfully, False if not found
        """
        if collector_name not in self._collectors:
            self._log_warning(f"Collector not found: {collector_name}")
            return False

        self._enabled_collectors.add(collector_name)
        self._collector_configs[collector_name].enabled = True

        # Start the collector if manager is running (only if event loop is running)
        if self._running:
            try:
                asyncio.create_task(self._start_collector(collector_name))
            except RuntimeError:
                # No event loop running, will be started when manager starts
                self._log_debug(f"Collector {collector_name} enabled but will start when manager starts")

        self._log_info(f"Enabled collector: {collector_name}")
        return True
    def disable_collector(self, collector_name: str) -> bool:
        """
        Disable a collector (will be stopped if running).

        Args:
            collector_name: Name of the collector to disable

        Returns:
            True if disabled successfully, False if not found
        """
        if collector_name not in self._collectors:
            self._log_warning(f"Collector not found: {collector_name}")
            return False

        self._enabled_collectors.discard(collector_name)
        self._collector_configs[collector_name].enabled = False

        # Stop the collector (only if event loop is running)
        collector = self._collectors[collector_name]
        try:
            asyncio.create_task(collector.stop(force=True))
        except RuntimeError:
            # No event loop running, just log
            self._log_debug(f"Collector {collector_name} disabled but cannot stop (no event loop)")

        self._log_info(f"Disabled collector: {collector_name}")
        return True
    async def start(self) -> bool:
        """
        Start the collector manager and all enabled collectors.

        Returns:
            True if started successfully, False otherwise
        """
        if self.status in [ManagerStatus.RUNNING, ManagerStatus.STARTING]:
            self._log_warning("Collector manager is already running or starting")
            return True

        self._log_info("Starting collector manager")
        self.status = ManagerStatus.STARTING

        try:
            self._running = True
            self._stats['uptime_start'] = datetime.now(timezone.utc)

            # Start all enabled collectors
            start_tasks = []
            for collector_name in self._enabled_collectors:
                task = asyncio.create_task(self._start_collector(collector_name))
                start_tasks.append(task)

            # Wait for all collectors to start (with timeout)
            if start_tasks:
                try:
                    await asyncio.wait_for(asyncio.gather(*start_tasks, return_exceptions=True), timeout=30.0)
                except asyncio.TimeoutError:
                    self._log_warning("Some collectors took too long to start")

            # Start global health monitoring
            health_task = asyncio.create_task(self._global_health_monitor())
            self._tasks.add(health_task)
            health_task.add_done_callback(self._tasks.discard)

            self.status = ManagerStatus.RUNNING
            self._log_info(f"Collector manager started - Managing {len(self._enabled_collectors)} collectors")
            return True

        except Exception as e:
            self.status = ManagerStatus.ERROR
            self._log_error(f"Failed to start collector manager: {e}")
            return False

    async def stop(self) -> None:
        """Stop the collector manager and all collectors."""
        if self.status == ManagerStatus.STOPPED:
            self._log_warning("Collector manager is already stopped")
            return

        self._log_info("Stopping collector manager")
        self.status = ManagerStatus.STOPPING
        self._running = False

        try:
            # Cancel manager tasks
            for task in list(self._tasks):
                task.cancel()

            if self._tasks:
                await asyncio.gather(*self._tasks, return_exceptions=True)

            # Stop all collectors
            stop_tasks = []
            for collector in self._collectors.values():
                task = asyncio.create_task(collector.stop(force=True))
                stop_tasks.append(task)

            # Wait for all collectors to stop (with timeout)
            if stop_tasks:
                try:
                    await asyncio.wait_for(asyncio.gather(*stop_tasks, return_exceptions=True), timeout=30.0)
                except asyncio.TimeoutError:
                    self._log_warning("Some collectors took too long to stop")

            self.status = ManagerStatus.STOPPED
            self._log_info("Collector manager stopped")

        except Exception as e:
            self.status = ManagerStatus.ERROR
            self._log_error(f"Error stopping collector manager: {e}")

    async def restart_collector(self, collector_name: str) -> bool:
        """
        Restart a specific collector.

        Args:
            collector_name: Name of the collector to restart

        Returns:
            True if restarted successfully, False otherwise
        """
        if collector_name not in self._collectors:
            self._log_warning(f"Collector not found: {collector_name}")
            return False

        collector = self._collectors[collector_name]
        self._log_info(f"Restarting collector: {collector_name}")

        try:
            success = await collector.restart()
            if success:
                self._stats['restarts_performed'] += 1
                self._log_info(f"Successfully restarted collector: {collector_name}")
            else:
                self._log_error(f"Failed to restart collector: {collector_name}")
            return success

        except Exception as e:
            self._log_error(f"Error restarting collector {collector_name}: {e}")
            return False

    async def _start_collector(self, collector_name: str) -> bool:
        """
        Start a specific collector.

        Args:
            collector_name: Name of the collector to start

        Returns:
            True if started successfully, False otherwise
        """
        if collector_name not in self._collectors:
            self._log_warning(f"Collector not found: {collector_name}")
            return False

        collector = self._collectors[collector_name]

        try:
            success = await collector.start()
            if success:
                self._log_info(f"Started collector: {collector_name}")
            else:
                self._log_error(f"Failed to start collector: {collector_name}")
            return success

        except Exception as e:
            self._log_error(f"Error starting collector {collector_name}: {e}")
            return False

    async def _global_health_monitor(self) -> None:
        """Global health monitoring for all collectors."""
        self._log_debug("Starting global health monitor")

        while self._running:
            try:
                await asyncio.sleep(self.global_health_check_interval)

                self._last_global_check = datetime.now(timezone.utc)
                self._stats['last_global_check'] = self._last_global_check

                # Check each enabled collector
                running_count = 0
                failed_count = 0

                for collector_name in self._enabled_collectors:
                    collector = self._collectors[collector_name]
                    health_status = collector.get_health_status()

                    if health_status['is_healthy'] and collector.status == CollectorStatus.RUNNING:
                        running_count += 1
                    elif not health_status['is_healthy']:
                        failed_count += 1
                        self._log_warning(f"Collector {collector_name} is unhealthy: {health_status['issues']}")

                        # Auto-restart if needed and not already restarting
                        if (collector.auto_restart and
                                collector.status not in [CollectorStatus.STARTING, CollectorStatus.STOPPING]):
                            self._log_info(f"Auto-restarting unhealthy collector: {collector_name}")
                            asyncio.create_task(self.restart_collector(collector_name))

                # Update global statistics
                self._stats['running_collectors'] = running_count
                self._stats['failed_collectors'] = failed_count

                self._log_debug(f"Health check complete - Running: {running_count}, Failed: {failed_count}")

            except asyncio.CancelledError:
                self._log_debug("Global health monitor cancelled")
                break
            except Exception as e:
                self._log_error(f"Error in global health monitor: {e}")
                await asyncio.sleep(self.global_health_check_interval)

    def get_status(self) -> Dict[str, Any]:
        """
        Get manager status and statistics.

        Returns:
            Dictionary containing status information
        """
        uptime_seconds = None
        if self._stats['uptime_start']:
            uptime_seconds = (datetime.now(timezone.utc) - self._stats['uptime_start']).total_seconds()

        # Get individual collector statuses
        collector_statuses = {}
        for name, collector in self._collectors.items():
            collector_statuses[name] = {
                'status': collector.status.value,
                'enabled': name in self._enabled_collectors,
                'health': collector.get_health_status()
            }

        return {
            'manager_status': self.status.value,
            'uptime_seconds': uptime_seconds,
            'statistics': self._stats,
            'collectors': collector_statuses,
            'enabled_collectors': list(self._enabled_collectors),
            'total_collectors': len(self._collectors)
        }

    def get_collector_status(self, collector_name: str) -> Optional[Dict[str, Any]]:
        """
        Get status for a specific collector.

        Args:
            collector_name: Name of the collector

        Returns:
            Collector status dict or None if not found
        """
        if collector_name not in self._collectors:
            return None

        collector = self._collectors[collector_name]
        return {
            'name': collector_name,
            'config': self._collector_configs[collector_name].__dict__,
            'status': collector.get_status(),
            'health': collector.get_health_status()
        }

    def list_collectors(self) -> List[str]:
        """
        List all managed collector names.

        Returns:
            List of collector names
        """
        return list(self._collectors.keys())

    def get_running_collectors(self) -> List[str]:
        """
        Get names of currently running collectors.

        Returns:
            List of running collector names
        """
        running = []
        for name, collector in self._collectors.items():
            if collector.status == CollectorStatus.RUNNING:
                running.append(name)
        return running

    def get_failed_collectors(self) -> List[str]:
        """
        Get names of failed or unhealthy collectors.

        Returns:
            List of failed collector names
        """
        failed = []
        for name, collector in self._collectors.items():
            health_status = collector.get_health_status()
            if not health_status['is_healthy']:
                failed.append(name)
        return failed
    async def restart_all_collectors(self) -> Dict[str, bool]:
        """
        Restart all enabled collectors.

        Returns:
            Dictionary mapping collector names to restart success status
        """
        self._log_info("Restarting all enabled collectors")

        results = {}
        restart_tasks = []

        for collector_name in self._enabled_collectors:
            task = asyncio.create_task(self.restart_collector(collector_name))
            restart_tasks.append((collector_name, task))

        # Wait for all restarts to complete
        for collector_name, task in restart_tasks:
            try:
                results[collector_name] = await task
            except Exception as e:
                self._log_error(f"Error restarting {collector_name}: {e}")
                results[collector_name] = False

        successful_restarts = sum(1 for success in results.values() if success)
        self._log_info(f"Restart complete - {successful_restarts}/{len(results)} collectors restarted successfully")

        return results

    def __repr__(self) -> str:
        """String representation of the manager."""
        return f"<CollectorManager({self.manager_name}, {len(self._collectors)} collectors, {self.status.value})>"
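A minimal usage sketch of the manager lifecycle (MyCollector stands in for a concrete BaseDataCollector subclass; it and its constructor arguments are hypothetical):

import asyncio
from data.collector_manager import CollectorManager

async def main():
    manager = CollectorManager(global_health_check_interval=60.0)
    manager.add_collector(MyCollector())          # hypothetical concrete collector
    await manager.start()                         # starts enabled collectors + health monitor
    print(manager.get_status()['manager_status']) # 'running'
    await asyncio.sleep(300)                      # collect for five minutes
    await manager.stop()                          # cancels tasks, stops collectors

asyncio.run(main())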
67
data/common/__init__.py
Normal file
67
data/common/__init__.py
Normal file
@ -0,0 +1,67 @@
"""
Common utilities and data structures for the application.

This package provides shared functionality across different components
of the system, including data transformation, validation, and aggregation.
"""

from .data_types import (
    StandardizedTrade,
    OHLCVCandle,
    MarketDataPoint,
    DataValidationResult,
    CandleProcessingConfig
)

from .transformation.trade import (
    TradeTransformer,
    create_standardized_trade,
    batch_create_standardized_trades
)

from .transformation.base import BaseDataTransformer
from .transformation.unified import UnifiedDataTransformer

from .transformation.safety import (
    TradeLimits,
    DEFAULT_LIMITS,
    STABLECOIN_LIMITS,
    VOLATILE_LIMITS,
    validate_trade_size,
    validate_trade_price,
    validate_symbol_format
)

from .validation import (
    BaseDataValidator,
    ValidationResult
)

__all__ = [
    # Data types
    'StandardizedTrade',
    'OHLCVCandle',
    'MarketDataPoint',
    'DataValidationResult',
    'CandleProcessingConfig',

    # Trade transformation
    'TradeTransformer',
    'create_standardized_trade',
    'batch_create_standardized_trades',
    'BaseDataTransformer',
    'UnifiedDataTransformer',

    # Safety limits and validation
    'TradeLimits',
    'DEFAULT_LIMITS',
    'STABLECOIN_LIMITS',
    'VOLATILE_LIMITS',
    'validate_trade_size',
    'validate_trade_price',
    'validate_symbol_format',

    # Validation
    'BaseDataValidator',
    'ValidationResult',
]
34
data/common/aggregation/__init__.py
Normal file
34
data/common/aggregation/__init__.py
Normal file
@ -0,0 +1,34 @@
"""
Aggregation package for market data processing.

This package provides functionality for building OHLCV candles from trade data,
with support for both real-time and batch processing. It handles:

- Time-based bucketing of trades
- Real-time candle construction
- Batch processing for historical data
- Multiple timeframe support
"""

from .bucket import TimeframeBucket
from .realtime import RealTimeCandleProcessor
from .batch import BatchCandleProcessor
from .utils import (
    aggregate_trades_to_candles,
    validate_timeframe,
    parse_timeframe
)

__all__ = [
    'TimeframeBucket',
    'RealTimeCandleProcessor',
    'BatchCandleProcessor',
    'aggregate_trades_to_candles',
    'validate_timeframe',
    'parse_timeframe'
]
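A sketch of the real-time path exported here and defined in realtime.py below, assuming CandleProcessingConfig accepts a timeframes list and that StandardizedTrade carries timestamp/price/size as the bucket code expects; the SimpleNamespace trade is a stand-in, not the real type:

from datetime import datetime, timezone
from decimal import Decimal
from types import SimpleNamespace

from data.common import CandleProcessingConfig
from data.common.aggregation import RealTimeCandleProcessor

proc = RealTimeCandleProcessor(
    symbol='BTC-USDT', exchange='okx',
    config=CandleProcessingConfig(timeframes=['1m'])  # assumed constructor kwarg
)
proc.add_candle_callback(lambda c: print(c.end_time, c.close))

# Stand-in trade objects with the attributes the bucket reads
t1 = SimpleNamespace(timestamp=datetime(2024, 1, 1, 9, 0, 30, tzinfo=timezone.utc),
                     price=Decimal('42000'), size=Decimal('0.1'))
proc.process_trade(t1)   # no candle yet: 09:00-09:01 bucket still open
t2 = SimpleNamespace(timestamp=datetime(2024, 1, 1, 9, 1, 5, tzinfo=timezone.utc),
                     price=Decimal('42010'), size=Decimal('0.2'))
done = proc.process_trade(t2)  # boundary crossed: emits the 09:01:00 candle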
153
data/common/aggregation/batch.py
Normal file
153
data/common/aggregation/batch.py
Normal file
@ -0,0 +1,153 @@
"""
Batch candle processor for historical trade data.

This module provides the BatchCandleProcessor class for building OHLCV candles
from historical trade data in batch mode.
"""

from datetime import datetime
from typing import Dict, List, Any, Iterator
from collections import defaultdict

from ..data_types import StandardizedTrade, OHLCVCandle, ProcessingStats
from .bucket import TimeframeBucket


class BatchCandleProcessor:
    """
    Batch candle processor for historical trade data.

    This class processes trades in batch mode, building candles for multiple
    timeframes simultaneously. It's optimized for processing large amounts
    of historical trade data efficiently.
    """

    def __init__(self,
                 symbol: str,
                 exchange: str,
                 timeframes: List[str],
                 component_name: str = "batch_candle_processor",
                 logger=None):
        """
        Initialize batch candle processor.

        Args:
            symbol: Trading symbol (e.g., 'BTC-USDT')
            exchange: Exchange name
            timeframes: List of timeframes to process (e.g., ['1m', '5m'])
            component_name: Name for logging/stats
            logger: Optional logger instance
        """
        self.symbol = symbol
        self.exchange = exchange
        self.timeframes = timeframes
        self.component_name = component_name
        self.logger = logger

        # Stats tracking
        self.stats = ProcessingStats()

    def process_trades_to_candles(self, trades: Iterator[StandardizedTrade]) -> List[OHLCVCandle]:
        """
        Process trades in batch and return completed candles.

        Args:
            trades: Iterator of trades to process

        Returns:
            List of completed candles for all timeframes
        """
        # Track buckets for each timeframe
        buckets: Dict[str, Dict[datetime, TimeframeBucket]] = defaultdict(dict)

        # Process all trades
        for trade in trades:
            self.stats.trades_processed += 1

            # Process trade for each timeframe
            for timeframe in self.timeframes:
                # Get bucket for this trade's timestamp
                bucket_start = self._get_bucket_start_time(trade.timestamp, timeframe)

                # Create bucket if it doesn't exist
                if bucket_start not in buckets[timeframe]:
                    buckets[timeframe][bucket_start] = TimeframeBucket(
                        symbol=self.symbol,
                        timeframe=timeframe,
                        start_time=bucket_start,
                        exchange=self.exchange
                    )

                # Add trade to bucket
                buckets[timeframe][bucket_start].add_trade(trade)

        # Convert all buckets to candles
        candles = []
        for timeframe_buckets in buckets.values():
            for bucket in timeframe_buckets.values():
                candle = bucket.to_candle(is_complete=True)
                candles.append(candle)
                self.stats.candles_emitted += 1

        return sorted(candles, key=lambda x: (x.timeframe, x.end_time))

    def _get_bucket_start_time(self, timestamp: datetime, timeframe: str) -> datetime:
        """
        Calculate the start time for the bucket that this timestamp belongs to.

        IMPORTANT: Uses RIGHT-ALIGNED timestamps
        - For 5m timeframe, buckets start at 00:00, 00:05, 00:10, etc.
        - Trade at 09:03:45 belongs to 09:00-09:05 bucket
        - Trade at 09:07:30 belongs to 09:05-09:10 bucket

        Args:
            timestamp: Trade timestamp
            timeframe: Time period (e.g., '1m', '5m', '1h')

        Returns:
            Start time for the appropriate bucket
        """
        if timeframe == '1s':
            return timestamp.replace(microsecond=0)
        elif timeframe == '5s':
            seconds = (timestamp.second // 5) * 5
            return timestamp.replace(second=seconds, microsecond=0)
        elif timeframe == '10s':
            seconds = (timestamp.second // 10) * 10
            return timestamp.replace(second=seconds, microsecond=0)
        elif timeframe == '15s':
            seconds = (timestamp.second // 15) * 15
            return timestamp.replace(second=seconds, microsecond=0)
        elif timeframe == '30s':
            seconds = (timestamp.second // 30) * 30
            return timestamp.replace(second=seconds, microsecond=0)
        elif timeframe == '1m':
            return timestamp.replace(second=0, microsecond=0)
        elif timeframe == '5m':
            minutes = (timestamp.minute // 5) * 5
            return timestamp.replace(minute=minutes, second=0, microsecond=0)
        elif timeframe == '15m':
            minutes = (timestamp.minute // 15) * 15
            return timestamp.replace(minute=minutes, second=0, microsecond=0)
        elif timeframe == '30m':
            minutes = (timestamp.minute // 30) * 30
            return timestamp.replace(minute=minutes, second=0, microsecond=0)
        elif timeframe == '1h':
            return timestamp.replace(minute=0, second=0, microsecond=0)
        elif timeframe == '4h':
            hours = (timestamp.hour // 4) * 4
            return timestamp.replace(hour=hours, minute=0, second=0, microsecond=0)
        elif timeframe == '1d':
            return timestamp.replace(hour=0, minute=0, second=0, microsecond=0)
        else:
            raise ValueError(f"Unsupported timeframe: {timeframe}")

    def get_stats(self) -> Dict[str, Any]:
        """Get processing statistics."""
        return {
            "component": self.component_name,
            "stats": self.stats.to_dict()
        }


__all__ = ['BatchCandleProcessor']
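The flooring arithmetic in _get_bucket_start_time above is plain truncation toward the bucket boundary; a self-contained check of the docstring's two examples (stdlib only):

from datetime import datetime

ts = datetime(2024, 1, 1, 9, 3, 45)
minutes = (ts.minute // 5) * 5          # 3 // 5 * 5 -> 0
assert ts.replace(minute=minutes, second=0, microsecond=0) == datetime(2024, 1, 1, 9, 0)

ts2 = datetime(2024, 1, 1, 9, 7, 30)    # 7 // 5 * 5 -> 5
assert ts2.replace(minute=(ts2.minute // 5) * 5, second=0, microsecond=0) == datetime(2024, 1, 1, 9, 5)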
144
data/common/aggregation/bucket.py
Normal file
144
data/common/aggregation/bucket.py
Normal file
@ -0,0 +1,144 @@
"""
Time bucket implementation for building OHLCV candles.

This module provides the TimeframeBucket class which accumulates trades
within a specific time period and calculates OHLCV data incrementally.
"""

from datetime import datetime, timezone, timedelta
from decimal import Decimal
from typing import Optional, List

from ..data_types import StandardizedTrade, OHLCVCandle


class TimeframeBucket:
    """
    Time bucket for building OHLCV candles from trades.

    This class accumulates trades within a specific time period
    and calculates OHLCV data incrementally.

    IMPORTANT: Uses RIGHT-ALIGNED timestamps
    - start_time: Beginning of the interval (inclusive)
    - end_time: End of the interval (exclusive) - this becomes the candle timestamp
    - Example: 09:00:00 - 09:05:00 bucket -> candle timestamp = 09:05:00
    """

    def __init__(self, symbol: str, timeframe: str, start_time: datetime, exchange: str = "unknown"):
        """
        Initialize time bucket for candle aggregation.

        Args:
            symbol: Trading symbol (e.g., 'BTC-USDT')
            timeframe: Time period (e.g., '1m', '5m', '1h')
            start_time: Start time for this bucket (inclusive)
            exchange: Exchange name
        """
        self.symbol = symbol
        self.timeframe = timeframe
        self.start_time = start_time
        self.end_time = self._calculate_end_time(start_time, timeframe)
        self.exchange = exchange

        # OHLCV data
        self.open: Optional[Decimal] = None
        self.high: Optional[Decimal] = None
        self.low: Optional[Decimal] = None
        self.close: Optional[Decimal] = None
        self.volume: Decimal = Decimal('0')
        self.trade_count: int = 0

        # Tracking
        self.first_trade_time: Optional[datetime] = None
        self.last_trade_time: Optional[datetime] = None
        self.trades: List[StandardizedTrade] = []

    def add_trade(self, trade: StandardizedTrade) -> bool:
        """
        Add trade to this bucket if it belongs to this time period.

        Args:
            trade: Standardized trade data

        Returns:
            True if trade was added, False if outside time range
        """
        # Check if trade belongs in this bucket (start_time <= trade.timestamp < end_time)
        if not (self.start_time <= trade.timestamp < self.end_time):
            return False

        # First trade sets open price
        if self.open is None:
            self.open = trade.price
            self.high = trade.price
            self.low = trade.price
            self.first_trade_time = trade.timestamp

        # Update OHLCV
        self.high = max(self.high, trade.price)
        self.low = min(self.low, trade.price)
        self.close = trade.price  # Last trade sets close
        self.volume += trade.size
        self.trade_count += 1
        self.last_trade_time = trade.timestamp

        # Store trade for detailed analysis if needed
        self.trades.append(trade)

        return True

    def to_candle(self, is_complete: bool = True) -> OHLCVCandle:
        """
        Convert bucket to OHLCV candle.

        IMPORTANT: Candle timestamp = end_time (right-aligned, industry standard)
        """
        return OHLCVCandle(
            symbol=self.symbol,
            timeframe=self.timeframe,
            start_time=self.start_time,
            end_time=self.end_time,
            open=self.open or Decimal('0'),
            high=self.high or Decimal('0'),
            low=self.low or Decimal('0'),
            close=self.close or Decimal('0'),
            volume=self.volume,
            trade_count=self.trade_count,
            exchange=self.exchange,
            is_complete=is_complete,
            first_trade_time=self.first_trade_time,
            last_trade_time=self.last_trade_time
        )

    def _calculate_end_time(self, start_time: datetime, timeframe: str) -> datetime:
        """Calculate end time for this timeframe (right-aligned timestamp)."""
        if timeframe == '1s':
            return start_time + timedelta(seconds=1)
        elif timeframe == '5s':
            return start_time + timedelta(seconds=5)
        elif timeframe == '10s':
            return start_time + timedelta(seconds=10)
        elif timeframe == '15s':
            return start_time + timedelta(seconds=15)
        elif timeframe == '30s':
            return start_time + timedelta(seconds=30)
        elif timeframe == '1m':
            return start_time + timedelta(minutes=1)
        elif timeframe == '5m':
            return start_time + timedelta(minutes=5)
        elif timeframe == '15m':
            return start_time + timedelta(minutes=15)
        elif timeframe == '30m':
            return start_time + timedelta(minutes=30)
        elif timeframe == '1h':
            return start_time + timedelta(hours=1)
        elif timeframe == '4h':
            return start_time + timedelta(hours=4)
        elif timeframe == '1d':
            return start_time + timedelta(days=1)
        else:
            raise ValueError(f"Unsupported timeframe: {timeframe}")


__all__ = ['TimeframeBucket']
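A minimal sketch of the right-aligned convention, again using a SimpleNamespace as a stand-in for StandardizedTrade (the bucket only reads timestamp, price, and size):

from datetime import datetime, timezone
from decimal import Decimal
from types import SimpleNamespace

from data.common.aggregation import TimeframeBucket

bucket = TimeframeBucket('BTC-USDT', '5m',
                         start_time=datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc),
                         exchange='okx')
trade = SimpleNamespace(timestamp=datetime(2024, 1, 1, 9, 3, 45, tzinfo=timezone.utc),
                        price=Decimal('42000'), size=Decimal('0.5'))
assert bucket.add_trade(trade)               # inside [09:00, 09:05) -> accepted
candle = bucket.to_candle(is_complete=True)
assert candle.end_time == datetime(2024, 1, 1, 9, 5, tzinfo=timezone.utc)  # right-aligned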
249
data/common/aggregation/realtime.py
Normal file
249
data/common/aggregation/realtime.py
Normal file
@ -0,0 +1,249 @@
"""
Real-time candle processor for live trade data.

This module provides the RealTimeCandleProcessor class for building OHLCV candles
from live trade data in real-time.
"""

from datetime import datetime, timezone, timedelta
from decimal import Decimal
from typing import Dict, List, Optional, Any, Callable
from collections import defaultdict

from ..data_types import (
    StandardizedTrade,
    OHLCVCandle,
    CandleProcessingConfig,
    ProcessingStats
)
from .bucket import TimeframeBucket


class RealTimeCandleProcessor:
    """
    Real-time candle processor for live trade data.

    This class processes trades immediately as they arrive from WebSocket,
    building candles incrementally and emitting completed candles when
    time boundaries are crossed.

    AGGREGATION PROCESS (NO FUTURE LEAKAGE):

    1. Trade arrives from WebSocket/API with timestamp T
    2. For each configured timeframe (1m, 5m, etc.):
       a. Calculate which time bucket this trade belongs to
       b. Get current bucket for this timeframe
       c. Check if trade timestamp crosses time boundary
       d. If boundary crossed: complete and emit previous bucket, create new bucket
       e. Add trade to current bucket (updates OHLCV)
    3. Only emit candles when time boundary is definitively crossed
    4. Never emit incomplete/future candles during real-time processing

    TIMESTAMP ALIGNMENT:
    - Uses RIGHT-ALIGNED timestamps (industry standard)
    - 1-minute candle covering 09:00:00-09:01:00 gets timestamp 09:01:00
    - 5-minute candle covering 09:00:00-09:05:00 gets timestamp 09:05:00
    - Candle represents PAST data, never future
    """

    def __init__(self,
                 symbol: str,
                 exchange: str,
                 config: Optional[CandleProcessingConfig] = None,
                 component_name: str = "realtime_candle_processor",
                 logger=None):
        """
        Initialize real-time candle processor.

        Args:
            symbol: Trading symbol (e.g., 'BTC-USDT')
            exchange: Exchange name
            config: Candle processing configuration
            component_name: Name for logging/stats
            logger: Optional logger instance
        """
        self.symbol = symbol
        self.exchange = exchange
        self.config = config or CandleProcessingConfig()
        self.component_name = component_name
        self.logger = logger

        # Current buckets for each timeframe
        self.current_buckets: Dict[str, TimeframeBucket] = {}

        # Callbacks for completed candles
        self.candle_callbacks: List[Callable[[OHLCVCandle], None]] = []

        # Stats tracking
        self.stats = ProcessingStats()
        self.stats.active_timeframes = len(self.config.timeframes)

    def add_candle_callback(self, callback: Callable[[OHLCVCandle], None]) -> None:
        """Add callback to be called when candle is completed."""
        self.candle_callbacks.append(callback)

    def process_trade(self, trade: StandardizedTrade) -> List[OHLCVCandle]:
        """
        Process a single trade and return any completed candles.

        Args:
            trade: Standardized trade data

        Returns:
            List of completed candles (if any time boundaries were crossed)
        """
        self.stats.trades_processed += 1
        self.stats.last_trade_time = trade.timestamp

        completed_candles = []
        for timeframe in self.config.timeframes:
            completed = self._process_trade_for_timeframe(trade, timeframe)
            if completed:
                completed_candles.append(completed)
                self.stats.candles_emitted += 1
                self.stats.last_candle_time = completed.end_time

        return completed_candles

    def _process_trade_for_timeframe(self, trade: StandardizedTrade, timeframe: str) -> Optional[OHLCVCandle]:
        """
        Process trade for a specific timeframe and return completed candle if boundary crossed.

        Args:
            trade: Trade to process
            timeframe: Timeframe to process for (e.g., '1m', '5m')

        Returns:
            Completed candle if time boundary crossed, None otherwise
        """
        # Calculate which bucket this trade belongs to
        bucket_start = self._get_bucket_start_time(trade.timestamp, timeframe)

        # Get current bucket for this timeframe
        current_bucket = self.current_buckets.get(timeframe)
        completed_candle = None

        # If we have a current bucket and trade belongs in a new bucket,
        # complete current bucket and create new one
        if current_bucket and bucket_start >= current_bucket.end_time:
            completed_candle = current_bucket.to_candle(is_complete=True)
            self._emit_candle(completed_candle)
            current_bucket = None

        # Create new bucket if needed
        if not current_bucket:
            current_bucket = TimeframeBucket(
                symbol=self.symbol,
                timeframe=timeframe,
                start_time=bucket_start,
                exchange=self.exchange
            )
            self.current_buckets[timeframe] = current_bucket

        # Add trade to current bucket
        current_bucket.add_trade(trade)

        return completed_candle

    def _get_bucket_start_time(self, timestamp: datetime, timeframe: str) -> datetime:
        """
        Calculate the start time for the bucket that this timestamp belongs to.

        IMPORTANT: Uses RIGHT-ALIGNED timestamps
        - For 5m timeframe, buckets start at 00:00, 00:05, 00:10, etc.
        - Trade at 09:03:45 belongs to 09:00-09:05 bucket
        - Trade at 09:07:30 belongs to 09:05-09:10 bucket

        Args:
            timestamp: Trade timestamp
            timeframe: Time period (e.g., '1m', '5m', '1h')

        Returns:
            Start time for the appropriate bucket
        """
        if timeframe == '1s':
            return timestamp.replace(microsecond=0)
        elif timeframe == '5s':
            seconds = (timestamp.second // 5) * 5
            return timestamp.replace(second=seconds, microsecond=0)
        elif timeframe == '10s':
            seconds = (timestamp.second // 10) * 10
            return timestamp.replace(second=seconds, microsecond=0)
        elif timeframe == '15s':
            seconds = (timestamp.second // 15) * 15
            return timestamp.replace(second=seconds, microsecond=0)
        elif timeframe == '30s':
            seconds = (timestamp.second // 30) * 30
            return timestamp.replace(second=seconds, microsecond=0)
        elif timeframe == '1m':
            return timestamp.replace(second=0, microsecond=0)
        elif timeframe == '5m':
            minutes = (timestamp.minute // 5) * 5
            return timestamp.replace(minute=minutes, second=0, microsecond=0)
        elif timeframe == '15m':
            minutes = (timestamp.minute // 15) * 15
            return timestamp.replace(minute=minutes, second=0, microsecond=0)
        elif timeframe == '30m':
            minutes = (timestamp.minute // 30) * 30
            return timestamp.replace(minute=minutes, second=0, microsecond=0)
        elif timeframe == '1h':
            return timestamp.replace(minute=0, second=0, microsecond=0)
        elif timeframe == '4h':
            hours = (timestamp.hour // 4) * 4
            return timestamp.replace(hour=hours, minute=0, second=0, microsecond=0)
        elif timeframe == '1d':
            return timestamp.replace(hour=0, minute=0, second=0, microsecond=0)
        else:
            raise ValueError(f"Unsupported timeframe: {timeframe}")

    def _emit_candle(self, candle: OHLCVCandle) -> None:
        """Emit completed candle to all registered callbacks."""
        for callback in self.candle_callbacks:
            try:
                callback(candle)
            except Exception as e:
                if self.logger:
                    self.logger.error(f"Error in candle callback: {e}")
                self.stats.errors_count += 1

    def get_current_candles(self, incomplete: bool = True) -> List[OHLCVCandle]:
        """
        Get current (incomplete) candles for all timeframes.

        Args:
            incomplete: Whether to mark candles as incomplete (default True)
        """
        return [
            bucket.to_candle(is_complete=not incomplete)
            for bucket in self.current_buckets.values()
        ]

    def force_complete_all_candles(self) -> List[OHLCVCandle]:
        """
        Force completion of all current candles (e.g., on connection close).

        Returns:
            List of completed candles
        """
        completed = []
        for timeframe, bucket in self.current_buckets.items():
            candle = bucket.to_candle(is_complete=True)
|
||||
completed.append(candle)
|
||||
self._emit_candle(candle)
|
||||
self.stats.candles_emitted += 1
|
||||
self.current_buckets.clear()
|
||||
return completed
|
||||
|
||||
def get_stats(self) -> Dict[str, Any]:
|
||||
"""Get processing statistics."""
|
||||
stats_dict = self.stats.to_dict()
|
||||
stats_dict.update({
|
||||
'component': self.component_name,
|
||||
'symbol': self.symbol,
|
||||
'exchange': self.exchange,
|
||||
'active_timeframes': list(self.current_buckets.keys())
|
||||
})
|
||||
return stats_dict
|
||||
|
||||
|
||||
__all__ = ['RealTimeCandleProcessor']
|
||||
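A minimal wiring sketch for the processor above (not part of the diff): the 'okx' exchange name, trade values, and the print callback are illustrative; StandardizedTrade and CandleProcessingConfig are defined in data/common/data_types.py below.

from decimal import Decimal
from datetime import datetime, timezone

processor = RealTimeCandleProcessor(
    symbol='BTC-USDT',
    exchange='okx',  # illustrative exchange name
    config=CandleProcessingConfig(timeframes=['1m', '5m']),
)
processor.add_candle_callback(lambda candle: print(candle.to_dict()))

trade = StandardizedTrade(
    symbol='BTC-USDT', trade_id='1', price=Decimal('50000'),
    size=Decimal('0.1'), side='buy',
    timestamp=datetime(2024, 1, 1, 9, 0, 30, tzinfo=timezone.utc),
    exchange='okx',
)
# Returns [] until a later trade crosses a bucket boundary for some timeframe
completed = processor.process_trade(trade)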
78
data/common/aggregation/utils.py
Normal file
@ -0,0 +1,78 @@
"""
Utility functions for market data aggregation.

This module provides common utility functions for working with OHLCV candles
and trade data aggregation.
"""

import re
from typing import List, Tuple

from ..data_types import StandardizedTrade, OHLCVCandle
from .batch import BatchCandleProcessor


def aggregate_trades_to_candles(trades: List[StandardizedTrade],
                                timeframes: List[str],
                                symbol: str,
                                exchange: str) -> List[OHLCVCandle]:
    """
    Simple utility function to aggregate a list of trades to candles.

    Args:
        trades: List of standardized trades
        timeframes: List of timeframes to generate
        symbol: Trading symbol
        exchange: Exchange name

    Returns:
        List of completed candles
    """
    processor = BatchCandleProcessor(symbol, exchange, timeframes)
    return processor.process_trades_to_candles(iter(trades))


def validate_timeframe(timeframe: str) -> bool:
    """
    Validate if a timeframe is supported.

    Args:
        timeframe: Timeframe string (e.g., '1s', '5s', '10s', '1m', '5m', '1h')

    Returns:
        True if supported, False otherwise
    """
    supported = ['1s', '5s', '10s', '15s', '30s', '1m', '5m', '15m', '30m', '1h', '4h', '1d']
    return timeframe in supported


def parse_timeframe(timeframe: str) -> Tuple[int, str]:
    """
    Parse a timeframe string into number and unit.

    Args:
        timeframe: Timeframe string (e.g., '1s', '5m', '1h')

    Returns:
        Tuple of (number, unit)

    Examples:
        '1s' -> (1, 's')
        '5m' -> (5, 'm')
        '1h' -> (1, 'h')
        '1d' -> (1, 'd')
    """
    match = re.match(r'^(\d+)([smhd])$', timeframe.lower())
    if not match:
        raise ValueError(f"Invalid timeframe format: {timeframe}")

    number = int(match.group(1))
    unit = match.group(2)
    return number, unit


__all__ = [
    'aggregate_trades_to_candles',
    'validate_timeframe',
    'parse_timeframe'
]
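parse_timeframe pairs naturally with a unit table to get bucket durations; a small sketch (timeframe_to_seconds is hypothetical, not in the diff):

UNIT_SECONDS = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}

def timeframe_to_seconds(timeframe: str) -> int:
    """Length of one bucket of `timeframe`, in seconds."""
    number, unit = parse_timeframe(timeframe)
    return number * UNIT_SECONDS[unit]

assert timeframe_to_seconds('5m') == 300
assert timeframe_to_seconds('4h') == 14400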
183
data/common/data_types.py
Normal file
@ -0,0 +1,183 @@
"""
Common data types for all exchange implementations.

These data structures provide a unified interface for market data
regardless of the source exchange.
"""

from datetime import datetime, timezone
from decimal import Decimal
from typing import Dict, List, Optional, Any
from dataclasses import dataclass, field
from enum import Enum

from ..base_collector import DataType, MarketDataPoint  # Import from base


@dataclass
class DataValidationResult:
    """Result of data validation - common across all exchanges."""
    is_valid: bool
    errors: List[str]
    warnings: List[str]
    sanitized_data: Optional[Dict[str, Any]] = None


@dataclass
class StandardizedTrade:
    """
    Standardized trade format for unified processing across all exchanges.

    This format works for both real-time and historical data processing,
    ensuring consistency across all data sources and scenarios.
    """
    symbol: str
    trade_id: str
    price: Decimal
    size: Decimal
    side: str  # 'buy' or 'sell'
    timestamp: datetime
    exchange: str
    raw_data: Optional[Dict[str, Any]] = None

    def __post_init__(self):
        """Validate and normalize fields after initialization."""
        # Ensure timestamp is timezone-aware
        if self.timestamp.tzinfo is None:
            self.timestamp = self.timestamp.replace(tzinfo=timezone.utc)

        # Normalize side to lowercase
        self.side = self.side.lower()

        # Validate side
        if self.side not in ['buy', 'sell']:
            raise ValueError(f"Invalid trade side: {self.side}")


@dataclass
class OHLCVCandle:
    """
    OHLCV candle data structure for time-based aggregation.

    This represents a complete candle for a specific timeframe,
    built from aggregating multiple trades within the time period.
    """
    symbol: str
    timeframe: str
    start_time: datetime
    end_time: datetime
    open: Decimal
    high: Decimal
    low: Decimal
    close: Decimal
    volume: Decimal
    trade_count: int
    exchange: str = "unknown"
    is_complete: bool = False
    first_trade_time: Optional[datetime] = None
    last_trade_time: Optional[datetime] = None

    def __post_init__(self):
        """Validate and normalize fields after initialization."""
        # Ensure timestamps are timezone-aware
        if self.start_time.tzinfo is None:
            self.start_time = self.start_time.replace(tzinfo=timezone.utc)
        if self.end_time.tzinfo is None:
            self.end_time = self.end_time.replace(tzinfo=timezone.utc)

        # Validate OHLC relationships
        if self.high < self.low:
            raise ValueError("High price cannot be less than low price")
        if self.open < 0 or self.high < 0 or self.low < 0 or self.close < 0:
            raise ValueError("Prices cannot be negative")
        if self.volume < 0:
            raise ValueError("Volume cannot be negative")
        if self.trade_count < 0:
            raise ValueError("Trade count cannot be negative")

    def to_dict(self) -> Dict[str, Any]:
        """Convert candle to dictionary for storage/serialization."""
        return {
            'symbol': self.symbol,
            'timeframe': self.timeframe,
            'start_time': self.start_time.isoformat(),
            'end_time': self.end_time.isoformat(),
            'open': str(self.open),
            'high': str(self.high),
            'low': str(self.low),
            'close': str(self.close),
            'volume': str(self.volume),
            'trade_count': self.trade_count,
            'exchange': self.exchange,
            'is_complete': self.is_complete,
            'first_trade_time': self.first_trade_time.isoformat() if self.first_trade_time else None,
            'last_trade_time': self.last_trade_time.isoformat() if self.last_trade_time else None
        }


@dataclass
class CandleProcessingConfig:
    """Configuration for candle processing - shared across exchanges."""
    timeframes: List[str] = field(default_factory=lambda: ['5s', '1m', '5m', '15m', '1h'])
    auto_save_candles: bool = True
    emit_incomplete_candles: bool = False
    max_trades_per_candle: int = 100000  # Safety limit

    def __post_init__(self):
        """Validate configuration after initialization."""
        supported_timeframes = ['1s', '5s', '10s', '15s', '30s', '1m', '5m', '15m', '30m', '1h', '4h', '1d']
        for tf in self.timeframes:
            if tf not in supported_timeframes:
                raise ValueError(f"Unsupported timeframe: {tf}")


class TradeSide(Enum):
    """Standardized trade side enumeration."""
    BUY = "buy"
    SELL = "sell"


class TimeframeUnit(Enum):
    """Time units for candle timeframes."""
    SECOND = "s"
    MINUTE = "m"
    HOUR = "h"
    DAY = "d"


@dataclass
class ProcessingStats:
    """Common processing statistics structure."""
    trades_processed: int = 0
    candles_emitted: int = 0
    errors_count: int = 0
    warnings_count: int = 0
    last_trade_time: Optional[datetime] = None
    last_candle_time: Optional[datetime] = None
    active_timeframes: int = 0

    def to_dict(self) -> Dict[str, Any]:
        """Convert stats to dictionary."""
        return {
            'trades_processed': self.trades_processed,
            'candles_emitted': self.candles_emitted,
            'errors_count': self.errors_count,
            'warnings_count': self.warnings_count,
            'last_trade_time': self.last_trade_time.isoformat() if self.last_trade_time else None,
            'last_candle_time': self.last_candle_time.isoformat() if self.last_candle_time else None,
            'active_timeframes': self.active_timeframes
        }


# Re-export from base_collector for convenience
__all__ = [
    'DataType',
    'MarketDataPoint',
    'DataValidationResult',
    'StandardizedTrade',
    'OHLCVCandle',
    'CandleProcessingConfig',
    'TradeSide',
    'TimeframeUnit',
    'ProcessingStats'
]
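A quick round-trip sketch for OHLCVCandle (all values illustrative), showing the __post_init__ validation and to_dict serialization:

from decimal import Decimal
from datetime import datetime, timezone

candle = OHLCVCandle(
    symbol='BTC-USDT', timeframe='1m',
    start_time=datetime(2024, 1, 1, 9, 0),  # naive input is coerced to UTC
    end_time=datetime(2024, 1, 1, 9, 1, tzinfo=timezone.utc),
    open=Decimal('50000'), high=Decimal('50100'),
    low=Decimal('49900'), close=Decimal('50050'),
    volume=Decimal('12.5'), trade_count=42,
    exchange='okx', is_complete=True,
)
assert candle.start_time.tzinfo is timezone.utc
assert candle.to_dict()['open'] == '50000'  # Decimals serialize as strings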
26
data/common/indicators/__init__.py
Normal file
@ -0,0 +1,26 @@
"""
Technical Indicators Package

This package provides technical indicator calculations optimized for sparse OHLCV data
as produced by the TCP Trading Platform's aggregation strategy.

IMPORTANT: Handles Sparse Data
- Missing candles (time gaps) are normal in this system
- Indicators properly handle gaps without interpolation
- Uses pandas for efficient vectorized calculations
- Follows the right-aligned timestamp convention
"""

from .technical import TechnicalIndicators
from .result import IndicatorResult
from .utils import (
    create_default_indicators_config,
    validate_indicator_config
)

__all__ = [
    'TechnicalIndicators',
    'IndicatorResult',
    'create_default_indicators_config',
    'validate_indicator_config'
]
106
data/common/indicators/base.py
Normal file
@ -0,0 +1,106 @@
"""
Base classes and interfaces for technical indicators.

This module provides the foundation for all technical indicators
with common functionality and type definitions.
"""

from abc import ABC, abstractmethod
from typing import List, Dict, Any
import pandas as pd
from utils.logger import get_logger

from .result import IndicatorResult
from ..data_types import OHLCVCandle


class BaseIndicator(ABC):
    """
    Abstract base class for all technical indicators.

    Provides common functionality and enforces a consistent interface
    across all indicator implementations.
    """

    def __init__(self, logger=None):
        """
        Initialize base indicator.

        Args:
            logger: Optional logger instance; defaults to a module-level logger
        """
        # Fall back to the module logger instead of unconditionally
        # overwriting it with a possibly-None argument.
        self.logger = logger if logger is not None else get_logger(__name__)

    def prepare_dataframe(self, candles: List[OHLCVCandle]) -> pd.DataFrame:
        """
        Convert OHLCV candles to a pandas DataFrame for calculations.

        Args:
            candles: List of OHLCV candles (can be sparse)

        Returns:
            DataFrame with OHLCV data, sorted by timestamp
        """
        if not candles:
            return pd.DataFrame()

        # Convert to DataFrame
        data = []
        for candle in candles:
            data.append({
                'timestamp': candle.end_time,  # Right-aligned timestamp
                'symbol': candle.symbol,
                'timeframe': candle.timeframe,
                'open': float(candle.open),
                'high': float(candle.high),
                'low': float(candle.low),
                'close': float(candle.close),
                'volume': float(candle.volume),
                'trade_count': candle.trade_count
            })

        df = pd.DataFrame(data)

        # Sort by timestamp to ensure proper order
        df = df.sort_values('timestamp').reset_index(drop=True)

        # Set timestamp as index for time-series operations
        df.set_index('timestamp', inplace=True)

        return df

    @abstractmethod
    def calculate(self, df: pd.DataFrame, **kwargs) -> List[IndicatorResult]:
        """
        Calculate the indicator values.

        Args:
            df: DataFrame with OHLCV data
            **kwargs: Additional parameters specific to each indicator

        Returns:
            List of indicator results
        """
        pass

    def validate_dataframe(self, df: pd.DataFrame, min_periods: int) -> bool:
        """
        Validate that the DataFrame has sufficient data for calculation.

        Args:
            df: DataFrame to validate
            min_periods: Minimum number of periods required

        Returns:
            True if the DataFrame is valid, False otherwise
        """
        if df.empty or len(df) < min_periods:
            if self.logger:
                self.logger.warning(
                    f"Insufficient data: got {len(df)} periods, need {min_periods}"
                )
            return False
        return True
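A minimal subclass sketch showing the BaseIndicator contract (TypicalPriceIndicator is illustrative, not part of the diff):

from typing import List
import pandas as pd

class TypicalPriceIndicator(BaseIndicator):
    """Per-candle typical price: (high + low + close) / 3."""

    def calculate(self, df: pd.DataFrame, **kwargs) -> List[IndicatorResult]:
        if not self.validate_dataframe(df, min_periods=1):
            return []
        results = []
        for timestamp, row in df.iterrows():
            tp = (row['high'] + row['low'] + row['close']) / 3
            results.append(IndicatorResult(
                timestamp=timestamp, symbol=row['symbol'],
                timeframe=row['timeframe'], values={'typical_price': tp},
            ))
        return results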
20
data/common/indicators/implementations/__init__.py
Normal file
@ -0,0 +1,20 @@
"""
Technical indicator implementations package.

This package contains individual implementations of technical indicators,
each in its own module for better maintainability and separation of concerns.
"""

from .sma import SMAIndicator
from .ema import EMAIndicator
from .rsi import RSIIndicator
from .macd import MACDIndicator
from .bollinger import BollingerBandsIndicator

__all__ = [
    'SMAIndicator',
    'EMAIndicator',
    'RSIIndicator',
    'MACDIndicator',
    'BollingerBandsIndicator'
]
81
data/common/indicators/implementations/bollinger.py
Normal file
@ -0,0 +1,81 @@
"""
Bollinger Bands indicator implementation.
"""

from typing import List
import pandas as pd

from ..base import BaseIndicator
from ..result import IndicatorResult


class BollingerBandsIndicator(BaseIndicator):
    """
    Bollinger Bands technical indicator.

    Calculates bands plotted a configurable number of standard deviations
    above and below a simple moving average.
    Handles sparse data appropriately without interpolation.
    """

    def calculate(self, df: pd.DataFrame, period: int = 20,
                  std_dev: float = 2.0, price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate Bollinger Bands.

        Args:
            df: DataFrame with OHLCV data
            period: Number of periods for moving average (default 20)
            std_dev: Number of standard deviations (default 2.0)
            price_column: Price column to use ('open', 'high', 'low', 'close')

        Returns:
            List of indicator results with upper band, middle band (SMA), and lower band
        """
        # Validate input data
        if not self.validate_dataframe(df, period):
            return []

        try:
            # NOTE: intermediate columns are added to the caller's DataFrame in place.
            # Calculate middle band (SMA)
            df['middle_band'] = df[price_column].rolling(window=period, min_periods=period).mean()

            # Calculate standard deviation
            df['std'] = df[price_column].rolling(window=period, min_periods=period).std()

            # Calculate upper and lower bands
            df['upper_band'] = df['middle_band'] + (std_dev * df['std'])
            df['lower_band'] = df['middle_band'] - (std_dev * df['std'])

            # Calculate bandwidth and %B
            df['bandwidth'] = (df['upper_band'] - df['lower_band']) / df['middle_band']
            df['percent_b'] = (df[price_column] - df['lower_band']) / (df['upper_band'] - df['lower_band'])

            # Convert results to IndicatorResult objects
            results = []
            for timestamp, row in df.iterrows():
                if not pd.isna(row['middle_band']):
                    result = IndicatorResult(
                        timestamp=timestamp,
                        symbol=row['symbol'],
                        timeframe=row['timeframe'],
                        values={
                            'upper_band': row['upper_band'],
                            'middle_band': row['middle_band'],
                            'lower_band': row['lower_band'],
                            'bandwidth': row['bandwidth'],
                            'percent_b': row['percent_b']
                        },
                        metadata={
                            'period': period,
                            'std_dev': std_dev,
                            'price_column': price_column
                        }
                    )
                    results.append(result)

            return results

        except Exception as e:
            if self.logger:
                self.logger.error(f"Error calculating Bollinger Bands: {e}")
            return []
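A usage sketch, assuming df was built with BaseIndicator.prepare_dataframe; the %B reading follows from the formula above (%B > 1 means the price closed above the upper band, %B < 0 below the lower band):

indicator = BollingerBandsIndicator()
for r in indicator.calculate(df, period=20, std_dev=2.0):
    if r.values['percent_b'] > 1.0:
        print(f"{r.timestamp} {r.symbol}: close above upper band")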
60
data/common/indicators/implementations/ema.py
Normal file
@ -0,0 +1,60 @@
"""
Exponential Moving Average (EMA) indicator implementation.
"""

from typing import List
import pandas as pd

from ..base import BaseIndicator
from ..result import IndicatorResult


class EMAIndicator(BaseIndicator):
    """
    Exponential Moving Average (EMA) technical indicator.

    Calculates a weighted moving average giving more weight to recent prices.
    Handles sparse data appropriately without interpolation.
    """

    def calculate(self, df: pd.DataFrame, period: int = 20,
                  price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate Exponential Moving Average (EMA).

        Args:
            df: DataFrame with OHLCV data
            period: Number of periods for moving average (default 20)
            price_column: Price column to use ('open', 'high', 'low', 'close')

        Returns:
            List of indicator results with EMA values
        """
        # Validate input data
        if not self.validate_dataframe(df, period):
            return []

        try:
            # Calculate EMA using pandas exponential weighted moving average
            df['ema'] = df[price_column].ewm(span=period, adjust=False).mean()

            # Convert results to IndicatorResult objects
            results = []
            for i, (timestamp, row) in enumerate(df.iterrows()):
                # Only return results once the minimum period has elapsed
                if i >= period - 1 and not pd.isna(row['ema']):
                    result = IndicatorResult(
                        timestamp=timestamp,
                        symbol=row['symbol'],
                        timeframe=row['timeframe'],
                        values={'ema': row['ema']},
                        metadata={'period': period, 'price_column': price_column}
                    )
                    results.append(result)

            return results

        except Exception as e:
            if self.logger:
                self.logger.error(f"Error calculating EMA: {e}")
            return []
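With adjust=False, pandas applies the recurrence EMA_t = a*x_t + (1 - a)*EMA_(t-1) with a = 2/(span + 1), seeded from the first value; a small sanity check with illustrative numbers:

import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
span = 3
a = 2 / (span + 1)  # 0.5

ema = s.ewm(span=span, adjust=False).mean()

# Recompute the same recurrence by hand
manual = [s.iloc[0]]
for x in s.iloc[1:]:
    manual.append(a * x + (1 - a) * manual[-1])

assert all(abs(e - m) < 1e-12 for e, m in zip(ema, manual))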
84
data/common/indicators/implementations/macd.py
Normal file
@ -0,0 +1,84 @@
"""
Moving Average Convergence Divergence (MACD) indicator implementation.
"""

from typing import List
import pandas as pd

from ..base import BaseIndicator
from ..result import IndicatorResult


class MACDIndicator(BaseIndicator):
    """
    Moving Average Convergence Divergence (MACD) technical indicator.

    A trend-following momentum indicator that shows the relationship
    between two moving averages of a security's price.
    Handles sparse data appropriately without interpolation.
    """

    def calculate(self, df: pd.DataFrame, fast_period: int = 12,
                  slow_period: int = 26, signal_period: int = 9,
                  price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate Moving Average Convergence Divergence (MACD).

        Args:
            df: DataFrame with OHLCV data
            fast_period: Fast EMA period (default 12)
            slow_period: Slow EMA period (default 26)
            signal_period: Signal line EMA period (default 9)
            price_column: Price column to use ('open', 'high', 'low', 'close')

        Returns:
            List of indicator results with MACD, signal, and histogram values
        """
        # Validate input data
        if not self.validate_dataframe(df, slow_period):
            return []

        try:
            # Calculate fast and slow EMAs
            df['ema_fast'] = df[price_column].ewm(span=fast_period, adjust=False).mean()
            df['ema_slow'] = df[price_column].ewm(span=slow_period, adjust=False).mean()

            # Calculate MACD line
            df['macd'] = df['ema_fast'] - df['ema_slow']

            # Calculate signal line (EMA of MACD)
            df['signal'] = df['macd'].ewm(span=signal_period, adjust=False).mean()

            # Calculate histogram
            df['histogram'] = df['macd'] - df['signal']

            # Convert results to IndicatorResult objects
            results = []
            for i, (timestamp, row) in enumerate(df.iterrows()):
                # Only return results once the slow period has elapsed
                if i >= slow_period - 1:
                    if not (pd.isna(row['macd']) or pd.isna(row['signal']) or pd.isna(row['histogram'])):
                        result = IndicatorResult(
                            timestamp=timestamp,
                            symbol=row['symbol'],
                            timeframe=row['timeframe'],
                            values={
                                'macd': row['macd'],
                                'signal': row['signal'],
                                'histogram': row['histogram']
                            },
                            metadata={
                                'fast_period': fast_period,
                                'slow_period': slow_period,
                                'signal_period': signal_period,
                                'price_column': price_column
                            }
                        )
                        results.append(result)

            return results

        except Exception as e:
            if self.logger:
                self.logger.error(f"Error calculating MACD: {e}")
            return []
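Since histogram = macd - signal, a sign change in the histogram marks a MACD/signal-line crossover; a detection sketch over the results list (assumes df came from prepare_dataframe):

indicator = MACDIndicator()
results = indicator.calculate(df, fast_period=12, slow_period=26, signal_period=9)
for prev, curr in zip(results, results[1:]):
    if prev.values['histogram'] < 0 <= curr.values['histogram']:
        print(f"{curr.timestamp}: bullish MACD crossover")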
75
data/common/indicators/implementations/rsi.py
Normal file
@ -0,0 +1,75 @@
"""
Relative Strength Index (RSI) indicator implementation.
"""

from typing import List
import pandas as pd

from ..base import BaseIndicator
from ..result import IndicatorResult


class RSIIndicator(BaseIndicator):
    """
    Relative Strength Index (RSI) technical indicator.

    Measures momentum by comparing the magnitude of recent gains to recent losses.
    Handles sparse data appropriately without interpolation.
    """

    def calculate(self, df: pd.DataFrame, period: int = 14,
                  price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate Relative Strength Index (RSI).

        Args:
            df: DataFrame with OHLCV data
            period: Number of periods for RSI calculation (default 14)
            price_column: Price column to use ('open', 'high', 'low', 'close')

        Returns:
            List of indicator results with RSI values
        """
        # Validate input data; one extra period is needed for the diff
        if not self.validate_dataframe(df, period + 1):
            return []

        try:
            # Calculate price changes
            df['price_change'] = df[price_column].diff()

            # Separate gains and losses
            df['gain'] = df['price_change'].where(df['price_change'] > 0, 0)
            df['loss'] = (-df['price_change']).where(df['price_change'] < 0, 0)

            # Smooth average gain and loss with an EMA (span=period);
            # classic Wilder RSI would use alpha = 1/period instead
            df['avg_gain'] = df['gain'].ewm(span=period, adjust=False).mean()
            df['avg_loss'] = df['loss'].ewm(span=period, adjust=False).mean()

            # Calculate RS and RSI
            df['rs'] = df['avg_gain'] / df['avg_loss']
            df['rsi'] = 100 - (100 / (1 + df['rs']))

            # Handle 0/0: flat prices (no gains and no losses) yield NaN,
            # which is mapped to a neutral RSI of 50
            df['rsi'] = df['rsi'].fillna(50)

            # Convert results to IndicatorResult objects
            results = []
            for i, (timestamp, row) in enumerate(df.iterrows()):
                # Only return results after the minimum period
                if i >= period and not pd.isna(row['rsi']):
                    result = IndicatorResult(
                        timestamp=timestamp,
                        symbol=row['symbol'],
                        timeframe=row['timeframe'],
                        values={'rsi': row['rsi']},
                        metadata={'period': period, 'price_column': price_column}
                    )
                    results.append(result)

            return results

        except Exception as e:
            if self.logger:
                self.logger.error(f"Error calculating RSI: {e}")
            return []
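Note that the smoothing above uses span=period (alpha = 2/(period+1)); classic Wilder RSI smooths with alpha = 1/period, which pandas can also express. A comparison sketch (wilder_rsi is illustrative, not part of the diff):

import pandas as pd

def wilder_rsi(prices: pd.Series, period: int = 14) -> pd.Series:
    """RSI with Wilder's smoothing (alpha = 1/period), for comparison."""
    change = prices.diff()
    gain = change.where(change > 0, 0.0)
    loss = (-change).where(change < 0, 0.0)
    avg_gain = gain.ewm(alpha=1 / period, adjust=False).mean()
    avg_loss = loss.ewm(alpha=1 / period, adjust=False).mean()
    rs = avg_gain / avg_loss
    return 100 - (100 / (1 + rs))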
59
data/common/indicators/implementations/sma.py
Normal file
@ -0,0 +1,59 @@
"""
Simple Moving Average (SMA) indicator implementation.
"""

from typing import List
import pandas as pd

from ..base import BaseIndicator
from ..result import IndicatorResult


class SMAIndicator(BaseIndicator):
    """
    Simple Moving Average (SMA) technical indicator.

    Calculates the unweighted mean of the previous n periods.
    Handles sparse data appropriately without interpolation.
    """

    def calculate(self, df: pd.DataFrame, period: int = 20,
                  price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate Simple Moving Average (SMA).

        Args:
            df: DataFrame with OHLCV data
            period: Number of periods for moving average (default 20)
            price_column: Price column to use ('open', 'high', 'low', 'close')

        Returns:
            List of indicator results with SMA values
        """
        # Validate input data
        if not self.validate_dataframe(df, period):
            return []

        try:
            # Calculate SMA using a pandas rolling window
            df['sma'] = df[price_column].rolling(window=period, min_periods=period).mean()

            # Convert results to IndicatorResult objects
            results = []
            for timestamp, row in df.iterrows():
                if not pd.isna(row['sma']):
                    result = IndicatorResult(
                        timestamp=timestamp,
                        symbol=row['symbol'],
                        timeframe=row['timeframe'],
                        values={'sma': row['sma']},
                        metadata={'period': period, 'price_column': price_column}
                    )
                    results.append(result)

            return results

        except Exception as e:
            if self.logger:
                self.logger.error(f"Error calculating SMA: {e}")
            return []
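An end-to-end sketch: sparse candles in, SMA results out (candles stands for any List[OHLCVCandle]):

indicator = SMAIndicator()
df = indicator.prepare_dataframe(candles)  # candles: List[OHLCVCandle]
for r in indicator.calculate(df, period=20):
    print(r.timestamp, round(r.values['sma'], 2))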
29
data/common/indicators/result.py
Normal file
@ -0,0 +1,29 @@
"""
Technical Indicator Result Container

This module provides the IndicatorResult dataclass for storing
technical indicator calculation results in a standardized format.
"""

from dataclasses import dataclass
from datetime import datetime
from typing import Dict, Optional, Any


@dataclass
class IndicatorResult:
    """
    Container for technical indicator calculation results.

    Attributes:
        timestamp: Candle timestamp (right-aligned)
        symbol: Trading symbol
        timeframe: Candle timeframe
        values: Dictionary of indicator values
        metadata: Additional calculation metadata
    """
    timestamp: datetime
    symbol: str
    timeframe: str
    values: Dict[str, float]
    metadata: Optional[Dict[str, Any]] = None
287
data/common/indicators/technical.py
Normal file
@ -0,0 +1,287 @@
"""
Technical Indicators Module for OHLCV Data

This module provides technical indicator calculations optimized for sparse OHLCV data
as produced by the TCP Trading Platform's aggregation strategy.

IMPORTANT: Handles Sparse Data
- Missing candles (time gaps) are normal in this system
- Indicators properly handle gaps without interpolation
- Uses pandas for efficient vectorized calculations
- Follows the right-aligned timestamp convention

Supported Indicators:
- Simple Moving Average (SMA)
- Exponential Moving Average (EMA)
- Relative Strength Index (RSI)
- Moving Average Convergence Divergence (MACD)
- Bollinger Bands
"""

from datetime import datetime
from typing import Dict, List, Optional, Any, Union
import pandas as pd
import numpy as np

from .result import IndicatorResult
from ..data_types import OHLCVCandle
from .base import BaseIndicator
from .implementations import (
    SMAIndicator,
    EMAIndicator,
    RSIIndicator,
    MACDIndicator,
    BollingerBandsIndicator
)


class TechnicalIndicators:
    """
    Technical indicator calculator for OHLCV candle data.

    This class provides vectorized technical indicator calculations
    designed to handle sparse data efficiently. All calculations use
    pandas for performance and handle missing data appropriately.

    SPARSE DATA HANDLING:
    - Gaps in timestamps are preserved (no interpolation)
    - Indicators calculate only on available data points
    - Periods with insufficient data return NaN
    - Results maintain original timestamp alignment
    """

    def __init__(self, logger=None):
        """
        Initialize technical indicators calculator.

        Args:
            logger: Optional logger instance
        """
        self.logger = logger

        # Initialize individual indicator calculators
        self._sma = SMAIndicator(logger)
        self._ema = EMAIndicator(logger)
        self._rsi = RSIIndicator(logger)
        self._macd = MACDIndicator(logger)
        self._bollinger = BollingerBandsIndicator(logger)

        if self.logger:
            self.logger.info("TechnicalIndicators: Initialized indicator calculator")

    def _prepare_dataframe_from_list(self, candles: List[OHLCVCandle]) -> pd.DataFrame:
        """
        Convert OHLCV candles to a pandas DataFrame for efficient calculations.

        Args:
            candles: List of OHLCV candles (can be sparse)

        Returns:
            DataFrame with OHLCV data, sorted by timestamp
        """
        if not candles:
            return pd.DataFrame()

        return self._sma.prepare_dataframe(candles)

    def sma(self, df: pd.DataFrame, period: int,
            price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate Simple Moving Average (SMA).

        Args:
            df: DataFrame with OHLCV data
            period: Number of periods for moving average
            price_column: Price column to use ('open', 'high', 'low', 'close')

        Returns:
            List of indicator results with SMA values
        """
        return self._sma.calculate(df, period=period, price_column=price_column)

    def ema(self, df: pd.DataFrame, period: int,
            price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate Exponential Moving Average (EMA).

        Args:
            df: DataFrame with OHLCV data
            period: Number of periods for moving average
            price_column: Price column to use ('open', 'high', 'low', 'close')

        Returns:
            List of indicator results with EMA values
        """
        return self._ema.calculate(df, period=period, price_column=price_column)

    def rsi(self, df: pd.DataFrame, period: int = 14,
            price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate Relative Strength Index (RSI).

        Args:
            df: DataFrame with OHLCV data
            period: Number of periods for RSI calculation (default 14)
            price_column: Price column to use ('open', 'high', 'low', 'close')

        Returns:
            List of indicator results with RSI values
        """
        return self._rsi.calculate(df, period=period, price_column=price_column)

    def macd(self, df: pd.DataFrame,
             fast_period: int = 12, slow_period: int = 26, signal_period: int = 9,
             price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate Moving Average Convergence Divergence (MACD).

        Args:
            df: DataFrame with OHLCV data
            fast_period: Fast EMA period (default 12)
            slow_period: Slow EMA period (default 26)
            signal_period: Signal line EMA period (default 9)
            price_column: Price column to use ('open', 'high', 'low', 'close')

        Returns:
            List of indicator results with MACD, signal, and histogram values
        """
        return self._macd.calculate(
            df,
            fast_period=fast_period,
            slow_period=slow_period,
            signal_period=signal_period,
            price_column=price_column
        )

    def bollinger_bands(self, df: pd.DataFrame, period: int = 20,
                        std_dev: float = 2.0, price_column: str = 'close') -> List[IndicatorResult]:
        """
        Calculate Bollinger Bands.

        Args:
            df: DataFrame with OHLCV data
            period: Number of periods for moving average (default 20)
            std_dev: Number of standard deviations (default 2.0)
            price_column: Price column to use ('open', 'high', 'low', 'close')

        Returns:
            List of indicator results with upper band, middle band (SMA), and lower band
        """
        return self._bollinger.calculate(
            df,
            period=period,
            std_dev=std_dev,
            price_column=price_column
        )

    def calculate_multiple_indicators(self, df: pd.DataFrame,
                                      indicators_config: Dict[str, Dict[str, Any]]) -> Dict[str, List[IndicatorResult]]:
        """
        Calculate multiple indicators at once for efficiency.

        TODO: make this dispatch data-driven instead of hardcoding each indicator type.

        Args:
            df: DataFrame with OHLCV data
            indicators_config: Configuration for the indicators to calculate
                Example: {
                    'sma_20': {'type': 'sma', 'period': 20},
                    'ema_12': {'type': 'ema', 'period': 12},
                    'rsi_14': {'type': 'rsi', 'period': 14},
                    'macd': {'type': 'macd'},
                    'bb_20': {'type': 'bollinger_bands', 'period': 20}
                }

        Returns:
            Dictionary mapping indicator names to their results
        """
        results = {}

        for indicator_name, config in indicators_config.items():
            indicator_type = config.get('type')

            try:
                if indicator_type == 'sma':
                    period = config.get('period', 20)
                    price_column = config.get('price_column', 'close')
                    results[indicator_name] = self.sma(df, period, price_column)

                elif indicator_type == 'ema':
                    period = config.get('period', 20)
                    price_column = config.get('price_column', 'close')
                    results[indicator_name] = self.ema(df, period, price_column)

                elif indicator_type == 'rsi':
                    period = config.get('period', 14)
                    price_column = config.get('price_column', 'close')
                    results[indicator_name] = self.rsi(df, period, price_column)

                elif indicator_type == 'macd':
                    fast_period = config.get('fast_period', 12)
                    slow_period = config.get('slow_period', 26)
                    signal_period = config.get('signal_period', 9)
                    price_column = config.get('price_column', 'close')
                    results[indicator_name] = self.macd(
                        df, fast_period, slow_period, signal_period, price_column
                    )

                elif indicator_type == 'bollinger_bands':
                    period = config.get('period', 20)
                    std_dev = config.get('std_dev', 2.0)
                    price_column = config.get('price_column', 'close')
                    results[indicator_name] = self.bollinger_bands(
                        df, period, std_dev, price_column
                    )

                else:
                    if self.logger:
                        self.logger.warning(f"Unknown indicator type: {indicator_type}")
                    results[indicator_name] = []

            except Exception as e:
                if self.logger:
                    self.logger.error(f"Error calculating {indicator_name}: {e}")
                results[indicator_name] = []

        return results

    def calculate(self, indicator_type: str, df: pd.DataFrame, **kwargs) -> Optional[Dict[str, Any]]:
        """
        Calculate a single indicator with dynamic dispatch.

        Args:
            indicator_type: Name of the indicator method (e.g., 'sma', 'ema')
            df: DataFrame with OHLCV data
            **kwargs: Indicator-specific parameters (e.g., period=20)

        Returns:
            A dictionary containing the indicator results, or None if the type is unknown.
        """
        # Look up the indicator calculation method by name; require a callable
        # so arbitrary attributes (e.g., 'logger') cannot be dispatched
        indicator_method = getattr(self, indicator_type, None)
        if not callable(indicator_method):
            if self.logger:
                self.logger.error(f"Unknown indicator type '{indicator_type}'")
            return None

        try:
            if df.empty:
                return {'data': [], 'metadata': {}}

            # Call the indicator method; it returns List[IndicatorResult]
            raw_result = indicator_method(df, **kwargs)

            # Extract metadata from the first result if available
            metadata = raw_result[0].metadata if raw_result else {}

            # Package the results list together with its metadata
            if raw_result:
                return {
                    "data": raw_result,
                    "metadata": metadata
                }
            return None

        except Exception as e:
            if self.logger:
                self.logger.error(f"Error calculating {indicator_type}: {e}")
            return None
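A dispatch sketch tying the facade together (the import path assumes the repository root is on sys.path; candles stands for any List[OHLCVCandle]):

from data.common.indicators import TechnicalIndicators, create_default_indicators_config

ti = TechnicalIndicators()
df = ti._prepare_dataframe_from_list(candles)

# Batch calculation driven by a config dict
batch = ti.calculate_multiple_indicators(df, create_default_indicators_config())

# Single-indicator dynamic dispatch
single = ti.calculate('rsi', df, period=14)  # {'data': [...], 'metadata': {...}} or None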
60
data/common/indicators/utils.py
Normal file
@ -0,0 +1,60 @@
"""
Technical Indicator Utilities

This module provides utility functions for managing technical indicator
configurations and validation.
"""

from typing import Dict, Any


def create_default_indicators_config() -> Dict[str, Dict[str, Any]]:
    """
    Create default configuration for common technical indicators.

    Returns:
        Dictionary with default indicator configurations
    """
    return {
        'sma_20': {'type': 'sma', 'period': 20},
        'sma_50': {'type': 'sma', 'period': 50},
        'ema_12': {'type': 'ema', 'period': 12},
        'ema_26': {'type': 'ema', 'period': 26},
        'rsi_14': {'type': 'rsi', 'period': 14},
        'macd_default': {'type': 'macd'},
        'bollinger_bands_20': {'type': 'bollinger_bands', 'period': 20}
    }


def validate_indicator_config(config: Dict[str, Any]) -> bool:
    """
    Validate technical indicator configuration.

    Args:
        config: Indicator configuration dictionary

    Returns:
        True if configuration is valid, False otherwise
    """
    required_fields = ['type']

    # Check required fields
    for field in required_fields:
        if field not in config:
            return False

    # Validate indicator type
    valid_types = ['sma', 'ema', 'rsi', 'macd', 'bollinger_bands']
    if config['type'] not in valid_types:
        return False

    # Validate period fields
    if 'period' in config and (not isinstance(config['period'], int) or config['period'] <= 0):
        return False

    # Validate standard deviation for Bollinger Bands
    if config['type'] == 'bollinger_bands' and 'std_dev' in config:
        if not isinstance(config['std_dev'], (int, float)) or config['std_dev'] <= 0:
            return False

    return True
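The checks above imply the following behavior (expected results for illustrative configs):

assert validate_indicator_config({'type': 'sma', 'period': 20}) is True
assert validate_indicator_config({'period': 20}) is False                      # missing 'type'
assert validate_indicator_config({'type': 'sma', 'period': 0}) is False        # non-positive period
assert validate_indicator_config({'type': 'bollinger_bands', 'std_dev': -1}) is False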