Merge branch 'xgboost'

# Conflicts:
#	.gitignore
#	README.md
#	cycles/backtest.py
#	main.py
#	pyproject.toml
#	uv.lock

commit 267f040fe8

61 .cursor/rules/always-global.mdc Normal file
@@ -0,0 +1,61 @@
|
||||
---
|
||||
description: Global development standards and AI interaction principles
|
||||
globs:
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
# Rule: Always Apply - Global Development Standards
|
||||
|
||||
## AI Interaction Principles
|
||||
|
||||
### Step-by-Step Development
|
||||
- **NEVER** generate large blocks of code without explanation
|
||||
- **ALWAYS** provide your plan as a concise bullet list and wait for confirmation before proceeding
|
||||
- Break complex tasks into smaller, manageable pieces (≤250 lines per file, ≤50 lines per function)
|
||||
- Explain your reasoning step-by-step before writing code
|
||||
- Wait for explicit approval before moving to the next sub-task
|
||||
|
||||
### Context Awareness
|
||||
- **ALWAYS** reference existing code patterns and data structures before suggesting new approaches
|
||||
- Ask about existing conventions before implementing new functionality
|
||||
- Preserve established architectural decisions unless explicitly asked to change them
|
||||
- Maintain consistency with existing naming conventions and code style
|
||||
|
||||
## Code Quality Standards
|
||||
|
||||
### File and Function Limits
|
||||
- **Maximum file size**: 250 lines
|
||||
- **Maximum function size**: 50 lines
|
||||
- **Maximum complexity**: If a function does more than one main thing, break it down
|
||||
- **Naming**: Use clear, descriptive names that explain purpose
|
||||
|
||||
### Documentation Requirements
|
||||
- **Every public function** must have a docstring explaining purpose, parameters, and return value
|
||||
- **Every class** must have a class-level docstring
|
||||
- **Complex logic** must have inline comments explaining the "why", not just the "what"
|
||||
- **API endpoints** must be documented with request/response examples
|
||||
|
||||
### Error Handling
|
||||
- **ALWAYS** include proper error handling for external dependencies
|
||||
- **NEVER** use bare except clauses
|
||||
- Provide meaningful error messages that help with debugging
|
||||
- Log errors appropriately for the application context
|
||||
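
To illustrate the error-handling points above, here is a minimal sketch of calling an external dependency without a bare `except`, assuming the third-party `requests` library and a hypothetical `fetch_user_profile` helper:

```python
import logging

import requests

logger = logging.getLogger(__name__)


def fetch_user_profile(api_url: str, user_id: str) -> dict:
    """Fetch a user profile from an external API, failing with a clear message."""
    try:
        response = requests.get(f"{api_url}/users/{user_id}", timeout=5)
        response.raise_for_status()
    except requests.exceptions.Timeout:
        logger.error("Profile request for user %s timed out", user_id)
        raise
    except requests.exceptions.HTTPError as exc:
        # Specific exception types instead of a bare except, with a debuggable message
        raise RuntimeError(
            f"Profile API returned {exc.response.status_code} for user {user_id}"
        ) from exc
    return response.json()
```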
|
||||
## Security and Best Practices
|
||||
- **NEVER** hardcode credentials, API keys, or sensitive data
|
||||
- **ALWAYS** validate user inputs
|
||||
- Use parameterized queries for database operations
|
||||
- Follow the principle of least privilege
|
||||
- Implement proper authentication and authorization
|
||||
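
A minimal sketch of the input-validation and parameterized-query points, using the standard library `sqlite3` module and a hypothetical `users` table:

```python
import re
import sqlite3

EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")


def find_user_by_email(conn: sqlite3.Connection, email: str) -> tuple | None:
    """Look up a user by email with validated input and a parameterized query."""
    if not EMAIL_PATTERN.match(email):
        raise ValueError("Invalid email address")
    # Placeholders prevent SQL injection; never build the query with string formatting
    cursor = conn.execute("SELECT id, name FROM users WHERE email = ?", (email,))
    return cursor.fetchone()
```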
|
||||
## Testing Requirements
|
||||
- **Every implementation** should have corresponding unit tests
|
||||
- **Every API endpoint** should have integration tests
|
||||
- Test files should be placed alongside the code they test
|
||||
- Use descriptive test names that explain what is being tested
|
||||
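
As a small illustration of descriptive test names, a pytest-style sketch for a hypothetical `calculate_discount` function in a `pricing` module:

```python
# test_pricing.py - placed alongside the pricing module it tests
from pricing import calculate_discount  # hypothetical function under test


def test_calculate_discount_applies_ten_percent_for_orders_over_100():
    assert calculate_discount(order_total=150.0) == 15.0


def test_calculate_discount_returns_zero_for_small_orders():
    assert calculate_discount(order_total=20.0) == 0.0
```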
|
||||
## Response Format
|
||||
- Be concise and avoid unnecessary repetition
|
||||
- Focus on actionable information
|
||||
- Provide examples when explaining complex concepts
|
||||
- Ask clarifying questions when requirements are ambiguous
|
||||
237 .cursor/rules/architecture.mdc Normal file
@@ -0,0 +1,237 @@
|
||||
---
|
||||
description: Modular design principles and architecture guidelines for scalable development
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Architecture and Modular Design
|
||||
|
||||
## Goal
|
||||
Maintain a clean, modular architecture that scales effectively and prevents the complexity issues that arise in AI-assisted development.
|
||||
|
||||
## Core Architecture Principles
|
||||
|
||||
### 1. Modular Design
|
||||
- **Single Responsibility**: Each module has one clear purpose
|
||||
- **Loose Coupling**: Modules depend on interfaces, not implementations
|
||||
- **High Cohesion**: Related functionality is grouped together
|
||||
- **Clear Boundaries**: Module interfaces are well-defined and stable
|
||||
|
||||
### 2. Size Constraints
|
||||
- **Files**: Maximum 250 lines per file
|
||||
- **Functions**: Maximum 50 lines per function
|
||||
- **Classes**: Maximum 300 lines per class
|
||||
- **Modules**: Maximum 10 public functions/classes per module
|
||||
|
||||
### 3. Dependency Management
|
||||
- **Layer Dependencies**: Higher layers depend on lower layers only
|
||||
- **No Circular Dependencies**: Modules cannot depend on each other cyclically
|
||||
- **Interface Segregation**: Depend on specific interfaces, not broad ones
|
||||
- **Dependency Injection**: Pass dependencies rather than creating them internally
|
||||
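
A short sketch of the dependency-injection point; the names are illustrative, echoing the `UserRepository` example later in this file:

```python
from typing import Protocol


class UserRepository(Protocol):
    def get_by_id(self, user_id: str) -> object: ...


class ReportService:
    """Depends on an abstraction passed in from outside, not created internally."""

    def __init__(self, user_repo: UserRepository) -> None:
        self._user_repo = user_repo  # injected, so tests can pass a fake

    def build_report(self, user_id: str) -> str:
        user = self._user_repo.get_by_id(user_id)
        return f"Report for {user}"
```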
|
||||
## Modular Architecture Patterns
|
||||
|
||||
### Layer Structure
|
||||
```
|
||||
src/
|
||||
├── presentation/ # UI, API endpoints, CLI interfaces
|
||||
├── application/ # Business logic, use cases, workflows
|
||||
├── domain/ # Core business entities and rules
|
||||
├── infrastructure/ # Database, external APIs, file systems
|
||||
└── shared/ # Common utilities, constants, types
|
||||
```
|
||||
|
||||
### Module Organization
|
||||
```
|
||||
module_name/
|
||||
├── __init__.py # Public interface exports
|
||||
├── core.py # Main module logic
|
||||
├── types.py # Type definitions and interfaces
|
||||
├── utils.py # Module-specific utilities
|
||||
├── tests/ # Module tests
|
||||
└── README.md # Module documentation
|
||||
```
|
||||
|
||||
## Design Patterns for AI Development
|
||||
|
||||
### 1. Repository Pattern
|
||||
Separate data access from business logic:
|
||||
|
||||
```python
|
||||
# Domain interface
|
||||
class UserRepository:
|
||||
def get_by_id(self, user_id: str) -> User: ...
|
||||
def save(self, user: User) -> None: ...
|
||||
|
||||
# Infrastructure implementation
|
||||
class SqlUserRepository(UserRepository):
|
||||
def get_by_id(self, user_id: str) -> User:
|
||||
# Database-specific implementation
|
||||
pass
|
||||
```
|
||||
|
||||
### 2. Service Pattern
|
||||
Encapsulate business logic in focused services:
|
||||
|
||||
```python
|
||||
class UserService:
|
||||
def __init__(self, user_repo: UserRepository):
|
||||
self._user_repo = user_repo
|
||||
|
||||
def create_user(self, data: UserData) -> User:
|
||||
# Validation and business logic
|
||||
# Single responsibility: user creation
|
||||
pass
|
||||
```
|
||||
|
||||
### 3. Factory Pattern
|
||||
Create complex objects with clear interfaces:
|
||||
|
||||
```python
|
||||
class DatabaseFactory:
|
||||
@staticmethod
|
||||
def create_connection(config: DatabaseConfig) -> Connection:
|
||||
# Handle different database types
|
||||
# Encapsulate connection complexity
|
||||
pass
|
||||
```
|
||||
|
||||
## Architecture Decision Guidelines
|
||||
|
||||
### When to Create New Modules
|
||||
Create a new module when:
|
||||
- **Functionality** exceeds size constraints (250 lines)
|
||||
- **Responsibility** is distinct from existing modules
|
||||
- **Dependencies** would create circular references
|
||||
- **Reusability** would benefit other parts of the system
|
||||
- **Testing** requires isolated test environments
|
||||
|
||||
### When to Split Existing Modules
|
||||
Split modules when:
|
||||
- **File size** exceeds 250 lines
|
||||
- **Multiple responsibilities** are evident
|
||||
- **Testing** becomes difficult due to complexity
|
||||
- **Dependencies** become too numerous
|
||||
- **Change frequency** differs significantly between parts
|
||||
|
||||
### Module Interface Design
|
||||
```python
|
||||
# Good: Clear, focused interface
|
||||
class PaymentProcessor:
|
||||
def process_payment(self, amount: Money, method: PaymentMethod) -> PaymentResult:
|
||||
"""Process a single payment transaction."""
|
||||
pass
|
||||
|
||||
# Bad: Unfocused, kitchen-sink interface
|
||||
class PaymentManager:
|
||||
def process_payment(self, ...): pass
|
||||
def validate_card(self, ...): pass
|
||||
def send_receipt(self, ...): pass
|
||||
def update_inventory(self, ...): pass # Wrong responsibility!
|
||||
```
|
||||
|
||||
## Architecture Validation
|
||||
|
||||
### Architecture Review Checklist
|
||||
- [ ] **Dependencies flow in one direction** (no cycles)
|
||||
- [ ] **Layers are respected** (presentation doesn't call infrastructure directly)
|
||||
- [ ] **Modules have single responsibility**
|
||||
- [ ] **Interfaces are stable** and well-defined
|
||||
- [ ] **Size constraints** are maintained
|
||||
- [ ] **Testing** is straightforward for each module
|
||||
|
||||
### Red Flags
|
||||
- **God Objects**: Classes/modules that do too many things
|
||||
- **Circular Dependencies**: Modules that depend on each other
|
||||
- **Deep Inheritance**: More than 3 levels of inheritance
|
||||
- **Large Interfaces**: Interfaces with more than 7 methods
|
||||
- **Tight Coupling**: Modules that know too much about each other's internals
|
||||
|
||||
## Refactoring Guidelines
|
||||
|
||||
### When to Refactor
|
||||
- Module exceeds size constraints
|
||||
- Code duplication across modules
|
||||
- Difficult to test individual components
|
||||
- New features require changing multiple unrelated modules
|
||||
- Performance bottlenecks due to poor separation
|
||||
|
||||
### Refactoring Process
|
||||
1. **Identify** the specific architectural problem
|
||||
2. **Design** the target architecture
|
||||
3. **Create tests** to verify current behavior
|
||||
4. **Implement changes** incrementally
|
||||
5. **Validate** that tests still pass
|
||||
6. **Update documentation** to reflect changes
|
||||
|
||||
### Safe Refactoring Practices
|
||||
- **One change at a time**: Don't mix refactoring with new features
|
||||
- **Tests first**: Ensure comprehensive test coverage before refactoring
|
||||
- **Incremental changes**: Small steps with verification at each stage
|
||||
- **Backward compatibility**: Maintain existing interfaces during transition
|
||||
- **Documentation updates**: Keep architecture documentation current
|
||||
|
||||
## Architecture Documentation
|
||||
|
||||
### Architecture Decision Records (ADRs)
|
||||
Document significant decisions in `./docs/decisions/`:
|
||||
|
||||
```markdown
|
||||
# ADR-003: Service Layer Architecture
|
||||
|
||||
## Status
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
As the application grows, business logic is scattered across controllers and models.
|
||||
|
||||
## Decision
|
||||
Implement a service layer to encapsulate business logic.
|
||||
|
||||
## Consequences
|
||||
**Positive:**
|
||||
- Clear separation of concerns
|
||||
- Easier testing of business logic
|
||||
- Better reusability across different interfaces
|
||||
|
||||
**Negative:**
|
||||
- Additional abstraction layer
|
||||
- More files to maintain
|
||||
```
|
||||
|
||||
### Module Documentation Template
|
||||
```markdown
|
||||
# Module: [Name]
|
||||
|
||||
## Purpose
|
||||
What this module does and why it exists.
|
||||
|
||||
## Dependencies
|
||||
- **Imports from**: List of modules this depends on
|
||||
- **Used by**: List of modules that depend on this one
|
||||
- **External**: Third-party dependencies
|
||||
|
||||
## Public Interface
|
||||
```python
|
||||
# Key functions and classes exposed by this module
|
||||
```
|
||||
|
||||
## Architecture Notes
|
||||
- Design patterns used
|
||||
- Important architectural decisions
|
||||
- Known limitations or constraints
|
||||
```
|
||||
|
||||
## Migration Strategies
|
||||
|
||||
### Legacy Code Integration
|
||||
- **Strangler Fig Pattern**: Gradually replace old code with new modules
|
||||
- **Adapter Pattern**: Create interfaces to integrate old and new code
|
||||
- **Facade Pattern**: Simplify complex legacy interfaces
|
||||
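
A minimal sketch of the Adapter Pattern idea, with hypothetical `LegacyBilling` and `PaymentGateway` names:

```python
class LegacyBilling:
    """Old code with an interface we do not want to spread further."""

    def make_charge(self, amount_cents: int) -> bool:
        return amount_cents > 0


class PaymentGateway:
    """New interface the rest of the system should depend on."""

    def charge(self, amount: float) -> bool:
        raise NotImplementedError


class LegacyBillingAdapter(PaymentGateway):
    """Wraps the legacy class so new code only sees the new interface."""

    def __init__(self, legacy: LegacyBilling) -> None:
        self._legacy = legacy

    def charge(self, amount: float) -> bool:
        return self._legacy.make_charge(int(amount * 100))
```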
|
||||
### Gradual Modernization
|
||||
1. **Identify boundaries** in existing code
|
||||
2. **Extract modules** one at a time
|
||||
3. **Create interfaces** for each extracted module
|
||||
4. **Test thoroughly** at each step
|
||||
5. **Update documentation** continuously
|
||||
123 .cursor/rules/code-review.mdc Normal file
@@ -0,0 +1,123 @@
|
||||
---
|
||||
description: AI-generated code review checklist and quality assurance guidelines
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Code Review and Quality Assurance
|
||||
|
||||
## Goal
|
||||
Establish systematic review processes for AI-generated code to maintain quality, security, and maintainability standards.
|
||||
|
||||
## AI Code Review Checklist
|
||||
|
||||
### Pre-Implementation Review
|
||||
Before accepting any AI-generated code:
|
||||
|
||||
1. **Understand the Code**
|
||||
- [ ] Can you explain what the code does in your own words?
|
||||
- [ ] Do you understand each function and its purpose?
|
||||
- [ ] Are there any "magic" values or unexplained logic?
|
||||
- [ ] Does the code solve the actual problem stated?
|
||||
|
||||
2. **Architecture Alignment**
|
||||
- [ ] Does the code follow established project patterns?
|
||||
- [ ] Is it consistent with existing data structures?
|
||||
- [ ] Does it integrate cleanly with existing components?
|
||||
- [ ] Are new dependencies justified and necessary?
|
||||
|
||||
3. **Code Quality**
|
||||
- [ ] Are functions smaller than 50 lines?
|
||||
- [ ] Are files smaller than 250 lines?
|
||||
- [ ] Are variable and function names descriptive?
|
||||
- [ ] Is the code DRY (Don't Repeat Yourself)?
|
||||
|
||||
### Security Review
|
||||
- [ ] **Input Validation**: All user inputs are validated and sanitized
|
||||
- [ ] **Authentication**: Proper authentication checks are in place
|
||||
- [ ] **Authorization**: Access controls are implemented correctly
|
||||
- [ ] **Data Protection**: Sensitive data is handled securely
|
||||
- [ ] **SQL Injection**: Database queries use parameterized statements
|
||||
- [ ] **XSS Prevention**: Output is properly escaped
|
||||
- [ ] **Error Handling**: Errors don't leak sensitive information
|
||||
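
Two of the checks above (XSS prevention and non-leaky errors) in a minimal sketch using the standard library `html` and `logging` modules; the function shapes are illustrative:

```python
import html
import logging

logger = logging.getLogger(__name__)


def render_comment(raw_comment: str) -> str:
    """Escape user-supplied text before it is embedded in HTML."""
    return f"<p>{html.escape(raw_comment)}</p>"


def handle_lookup_error(exc: Exception) -> dict:
    """Log full details internally, return a generic message to the client."""
    logger.exception("Lookup failed: %s", exc)
    return {"error": "Unable to process request"}  # no stack trace or internals leaked
```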
|
||||
### Integration Review
|
||||
- [ ] **Existing Functionality**: New code doesn't break existing features
|
||||
- [ ] **Data Consistency**: Database changes maintain referential integrity
|
||||
- [ ] **API Compatibility**: Changes don't break existing API contracts
|
||||
- [ ] **Performance Impact**: New code doesn't introduce performance bottlenecks
|
||||
- [ ] **Testing Coverage**: Appropriate tests are included
|
||||
|
||||
## Review Process
|
||||
|
||||
### Step 1: Initial Code Analysis
|
||||
1. **Read through the entire generated code** before running it
|
||||
2. **Identify patterns** that don't match existing codebase
|
||||
3. **Check dependencies** - are new packages really needed?
|
||||
4. **Verify logic flow** - does the algorithm make sense?
|
||||
|
||||
### Step 2: Security and Error Handling Review
|
||||
1. **Trace data flow** from input to output
|
||||
2. **Identify potential failure points** and verify error handling
|
||||
3. **Check for security vulnerabilities** using the security checklist
|
||||
4. **Verify proper logging** and monitoring implementation
|
||||
|
||||
### Step 3: Integration Testing
|
||||
1. **Test with existing code** to ensure compatibility
|
||||
2. **Run existing test suite** to verify no regressions
|
||||
3. **Test edge cases** and error conditions
|
||||
4. **Verify performance** under realistic conditions
|
||||
|
||||
## Common AI Code Issues to Watch For
|
||||
|
||||
### Overcomplication Patterns
|
||||
- **Unnecessary abstractions**: AI creating complex patterns for simple tasks
|
||||
- **Over-engineering**: Solutions that are more complex than needed
|
||||
- **Redundant code**: AI recreating existing functionality
|
||||
- **Inappropriate design patterns**: Using patterns that don't fit the use case
|
||||
|
||||
### Context Loss Indicators
|
||||
- **Inconsistent naming**: Different conventions from existing code
|
||||
- **Wrong data structures**: Using different patterns than established
|
||||
- **Ignored existing functions**: Reimplementing existing functionality
|
||||
- **Architectural misalignment**: Code that doesn't fit the overall design
|
||||
|
||||
### Technical Debt Indicators
|
||||
- **Magic numbers**: Hardcoded values without explanation
|
||||
- **Poor error messages**: Generic or unhelpful error handling
|
||||
- **Missing documentation**: Code without adequate comments
|
||||
- **Tight coupling**: Components that are too interdependent
|
||||
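
A before/after sketch of the magic-number indicator; the retry logic is illustrative:

```python
# Before: unexplained values scattered through the logic
def should_retry(attempt: int, status_code: int) -> bool:
    return attempt < 3 and status_code in (429, 503)


# After: named constants document intent and live in one place
MAX_RETRY_ATTEMPTS = 3
RETRYABLE_STATUS_CODES = frozenset({429, 503})


def should_retry_clear(attempt: int, status_code: int) -> bool:
    return attempt < MAX_RETRY_ATTEMPTS and status_code in RETRYABLE_STATUS_CODES
```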
|
||||
## Quality Gates
|
||||
|
||||
### Mandatory Reviews
|
||||
All AI-generated code must pass these gates before acceptance:
|
||||
|
||||
1. **Security Review**: No security vulnerabilities detected
|
||||
2. **Integration Review**: Integrates cleanly with existing code
|
||||
3. **Performance Review**: Meets performance requirements
|
||||
4. **Maintainability Review**: Code can be easily modified by team members
|
||||
5. **Documentation Review**: Adequate documentation is provided
|
||||
|
||||
### Acceptance Criteria
|
||||
- [ ] Code is understandable by any team member
|
||||
- [ ] Integration requires minimal changes to existing code
|
||||
- [ ] Security review passes all checks
|
||||
- [ ] Performance meets established benchmarks
|
||||
- [ ] Documentation is complete and accurate
|
||||
|
||||
## Rejection Criteria
|
||||
Reject AI-generated code if:
|
||||
- Security vulnerabilities are present
|
||||
- Code is too complex for the problem being solved
|
||||
- Integration requires major refactoring of existing code
|
||||
- Code duplicates existing functionality without justification
|
||||
- Documentation is missing or inadequate
|
||||
|
||||
## Review Documentation
|
||||
For each review, document:
|
||||
- Issues found and how they were resolved
|
||||
- Performance impact assessment
|
||||
- Security concerns and mitigations
|
||||
- Integration challenges and solutions
|
||||
- Recommendations for future similar tasks
|
||||
93 .cursor/rules/context-management.mdc Normal file
@@ -0,0 +1,93 @@
|
||||
---
|
||||
description: Context management for maintaining codebase awareness and preventing context drift
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Context Management
|
||||
|
||||
## Goal
|
||||
Maintain comprehensive project context to prevent context drift and ensure AI-generated code integrates seamlessly with existing codebase patterns and architecture.
|
||||
|
||||
## Context Documentation Requirements
|
||||
|
||||
### PRD.md File Documentation
|
||||
1. **Project Overview**
|
||||
- Business objectives and goals
|
||||
- Target users and use cases
|
||||
- Key success metrics
|
||||
|
||||
### CONTEXT.md File Structure
|
||||
Every project must maintain a `CONTEXT.md` file in the root directory with:
|
||||
|
||||
1. **Architecture Overview**
|
||||
- High-level system architecture
|
||||
- Key design patterns used
|
||||
- Database schema overview
|
||||
- API structure and conventions
|
||||
|
||||
2. **Technology Stack**
|
||||
- Programming languages and versions
|
||||
- Frameworks and libraries
|
||||
- Database systems
|
||||
- Development and deployment tools
|
||||
|
||||
3. **Coding Conventions**
|
||||
- Naming conventions
|
||||
- File organization patterns
|
||||
- Code structure preferences
|
||||
- Import/export patterns
|
||||
|
||||
4. **Current Implementation Status**
|
||||
- Completed features
|
||||
- Work in progress
|
||||
- Known technical debt
|
||||
- Planned improvements
|
||||
|
||||
## Context Maintenance Protocol
|
||||
|
||||
### Before Every Coding Session
|
||||
1. **Review CONTEXT.md and PRD.md** to understand current project state
|
||||
2. **Scan recent changes** in git history to understand latest patterns
|
||||
3. **Identify existing patterns** for similar functionality before implementing new features
|
||||
4. **Ask for clarification** if existing patterns are unclear or conflicting
|
||||
|
||||
### During Development
|
||||
1. **Reference existing code** when explaining implementation approaches
|
||||
2. **Maintain consistency** with established patterns and conventions
|
||||
3. **Update CONTEXT.md** when making architectural decisions
|
||||
4. **Document deviations** from established patterns with reasoning
|
||||
|
||||
### Context Preservation Strategies
|
||||
- **Incremental development**: Build on existing patterns rather than creating new ones
|
||||
- **Pattern consistency**: Use established data structures and function signatures
|
||||
- **Integration awareness**: Consider how new code affects existing functionality
|
||||
- **Dependency management**: Understand existing dependencies before adding new ones
|
||||
|
||||
## Context Prompting Best Practices
|
||||
|
||||
### Effective Context Sharing
|
||||
- Include relevant sections of CONTEXT.md in prompts for complex tasks
|
||||
- Reference specific existing files when asking for similar functionality
|
||||
- Provide examples of existing patterns when requesting new implementations
|
||||
- Share recent git commit messages to understand latest changes
|
||||
|
||||
### Context Window Optimization
|
||||
- Prioritize most relevant context for current task
|
||||
- Use @filename references to include specific files
|
||||
- Break large contexts into focused, task-specific chunks
|
||||
- Update context references as project evolves
|
||||
|
||||
## Red Flags - Context Loss Indicators
|
||||
- AI suggests patterns that conflict with existing code
|
||||
- New implementations ignore established conventions
|
||||
- Proposed solutions don't integrate with existing architecture
|
||||
- Code suggestions require significant refactoring of existing functionality
|
||||
|
||||
## Recovery Protocol
|
||||
When context loss is detected:
|
||||
1. **Stop development** and review CONTEXT.md
|
||||
2. **Analyze existing codebase** for established patterns
|
||||
3. **Update context documentation** with missing information
|
||||
4. **Restart task** with proper context provided
|
||||
5. **Test integration** with existing code before proceeding
|
||||
67 .cursor/rules/create-prd.mdc Normal file
@@ -0,0 +1,67 @@
|
||||
---
|
||||
description: Creating PRD for a project or specific task/function
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
# Rule: Generating a Product Requirements Document (PRD)
|
||||
|
||||
## Goal
|
||||
|
||||
To guide an AI assistant in creating a detailed Product Requirements Document (PRD) in Markdown format, based on an initial user prompt. The PRD should be clear, actionable, and suitable for a junior developer to understand and implement the feature.
|
||||
|
||||
## Process
|
||||
|
||||
1. **Receive Initial Prompt:** The user provides a brief description or request for a new feature or functionality.
|
||||
2. **Ask Clarifying Questions:** Before writing the PRD, the AI *must* ask clarifying questions to gather sufficient detail. The goal is to understand the "what" and "why" of the feature, not necessarily the "how" (which the developer will figure out).
|
||||
3. **Generate PRD:** Based on the initial prompt and the user's answers to the clarifying questions, generate a PRD using the structure outlined below.
|
||||
4. **Save PRD:** Save the generated document as `prd-[feature-name].md` inside the `/tasks` directory.
|
||||
|
||||
## Clarifying Questions (Examples)
|
||||
|
||||
The AI should adapt its questions based on the prompt, but here are some common areas to explore:
|
||||
|
||||
* **Problem/Goal:** "What problem does this feature solve for the user?" or "What is the main goal we want to achieve with this feature?"
|
||||
* **Target User:** "Who is the primary user of this feature?"
|
||||
* **Core Functionality:** "Can you describe the key actions a user should be able to perform with this feature?"
|
||||
* **User Stories:** "Could you provide a few user stories? (e.g., As a [type of user], I want to [perform an action] so that [benefit].)"
|
||||
* **Acceptance Criteria:** "How will we know when this feature is successfully implemented? What are the key success criteria?"
|
||||
* **Scope/Boundaries:** "Are there any specific things this feature *should not* do (non-goals)?"
|
||||
* **Data Requirements:** "What kind of data does this feature need to display or manipulate?"
|
||||
* **Design/UI:** "Are there any existing design mockups or UI guidelines to follow?" or "Can you describe the desired look and feel?"
|
||||
* **Edge Cases:** "Are there any potential edge cases or error conditions we should consider?"
|
||||
|
||||
## PRD Structure
|
||||
|
||||
The generated PRD should include the following sections:
|
||||
|
||||
1. **Introduction/Overview:** Briefly describe the feature and the problem it solves. State the goal.
|
||||
2. **Goals:** List the specific, measurable objectives for this feature.
|
||||
3. **User Stories:** Detail the user narratives describing feature usage and benefits.
|
||||
4. **Functional Requirements:** List the specific functionalities the feature must have. Use clear, concise language (e.g., "The system must allow users to upload a profile picture."). Number these requirements.
|
||||
5. **Non-Goals (Out of Scope):** Clearly state what this feature will *not* include to manage scope.
|
||||
6. **Design Considerations (Optional):** Link to mockups, describe UI/UX requirements, or mention relevant components/styles if applicable.
|
||||
7. **Technical Considerations (Optional):** Mention any known technical constraints, dependencies, or suggestions (e.g., "Should integrate with the existing Auth module").
|
||||
8. **Success Metrics:** How will the success of this feature be measured? (e.g., "Increase user engagement by 10%", "Reduce support tickets related to X").
|
||||
9. **Open Questions:** List any remaining questions or areas needing further clarification.
|
||||
|
||||
## Target Audience
|
||||
|
||||
Assume the primary reader of the PRD is a **junior developer**. Therefore, requirements should be explicit, unambiguous, and avoid jargon where possible. Provide enough detail for them to understand the feature's purpose and core logic.
|
||||
|
||||
## Output
|
||||
|
||||
* **Format:** Markdown (`.md`)
|
||||
* **Location:** `/tasks/`
|
||||
* **Filename:** `prd-[feature-name].md`
|
||||
|
||||
## Final instructions
|
||||
|
||||
1. Do NOT start implementing the PRD
|
||||
2. Make sure to ask the user clarifying questions
|
||||
|
||||
3. Take the user's answers to the clarifying questions and improve the PRD
|
||||
244 .cursor/rules/documentation.mdc Normal file
@@ -0,0 +1,244 @@
|
||||
---
|
||||
description: Documentation standards for code, architecture, and development decisions
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Documentation Standards
|
||||
|
||||
## Goal
|
||||
Maintain comprehensive, up-to-date documentation that supports development, onboarding, and long-term maintenance of the codebase.
|
||||
|
||||
## Documentation Hierarchy
|
||||
|
||||
### 1. Project Level Documentation (in ./docs/)
|
||||
- **README.md**: Project overview, setup instructions, basic usage
|
||||
- **CONTEXT.md**: Current project state, architecture decisions, patterns
|
||||
- **CHANGELOG.md**: Version history and significant changes
|
||||
- **CONTRIBUTING.md**: Development guidelines and processes
|
||||
- **API.md**: API endpoints, request/response formats, authentication
|
||||
|
||||
### 2. Module Level Documentation (in ./docs/modules/)
|
||||
- **[module-name].md**: Purpose, public interfaces, usage examples
|
||||
- **dependencies.md**: External dependencies and their purposes
|
||||
- **architecture.md**: Module relationships and data flow
|
||||
|
||||
### 3. Code Level Documentation
|
||||
- **Docstrings**: Function and class documentation
|
||||
- **Inline comments**: Complex logic explanations
|
||||
- **Type hints**: Clear parameter and return types
|
||||
- **README files**: Directory-specific instructions
|
||||
|
||||
## Documentation Standards
|
||||
|
||||
### Code Documentation
|
||||
```python
|
||||
def process_user_data(user_id: str, data: dict) -> UserResult:
|
||||
"""
|
||||
Process and validate user data before storage.
|
||||
|
||||
Args:
|
||||
user_id: Unique identifier for the user
|
||||
data: Dictionary containing user information to process
|
||||
|
||||
Returns:
|
||||
UserResult: Processed user data with validation status
|
||||
|
||||
Raises:
|
||||
ValidationError: When user data fails validation
|
||||
DatabaseError: When storage operation fails
|
||||
|
||||
Example:
|
||||
>>> result = process_user_data("123", {"name": "John", "email": "john@example.com"})
|
||||
>>> print(result.status)
|
||||
'valid'
|
||||
"""
|
||||
```
|
||||
|
||||
### API Documentation Format
|
||||
```markdown
|
||||
### POST /api/users
|
||||
|
||||
Create a new user account.
|
||||
|
||||
**Request:**
|
||||
```json
|
||||
{
|
||||
"name": "string (required)",
|
||||
"email": "string (required, valid email)",
|
||||
"age": "number (optional, min: 13)"
|
||||
}
|
||||
```
|
||||
|
||||
**Response (201):**
|
||||
```json
|
||||
{
|
||||
"id": "uuid",
|
||||
"name": "string",
|
||||
"email": "string",
|
||||
"created_at": "iso_datetime"
|
||||
}
|
||||
```
|
||||
|
||||
**Errors:**
|
||||
- 400: Invalid input data
|
||||
- 409: Email already exists
|
||||
```
|
||||
|
||||
### Architecture Decision Records (ADRs)
|
||||
Document significant architecture decisions in `./docs/decisions/`:
|
||||
|
||||
```markdown
|
||||
# ADR-001: Database Choice - PostgreSQL
|
||||
|
||||
## Status
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
We need to choose a database for storing user data and application state.
|
||||
|
||||
## Decision
|
||||
We will use PostgreSQL as our primary database.
|
||||
|
||||
## Consequences
|
||||
**Positive:**
|
||||
- ACID compliance ensures data integrity
|
||||
- Rich query capabilities with SQL
|
||||
- Good performance for our expected load
|
||||
|
||||
**Negative:**
|
||||
- More complex setup than simpler alternatives
|
||||
- Requires SQL knowledge from team members
|
||||
|
||||
## Alternatives Considered
|
||||
- MongoDB: Rejected due to consistency requirements
|
||||
- SQLite: Rejected due to scalability needs
|
||||
```
|
||||
|
||||
## Documentation Maintenance
|
||||
|
||||
### When to Update Documentation
|
||||
|
||||
#### Always Update:
|
||||
- **API changes**: Any modification to public interfaces
|
||||
- **Architecture changes**: New patterns, data structures, or workflows
|
||||
- **Configuration changes**: Environment variables, deployment settings
|
||||
- **Dependencies**: Adding, removing, or upgrading packages
|
||||
- **Business logic changes**: Core functionality modifications
|
||||
|
||||
#### Update Weekly:
|
||||
- **CONTEXT.md**: Current development status and priorities
|
||||
- **Known issues**: Bug reports and workarounds
|
||||
- **Performance notes**: Bottlenecks and optimization opportunities
|
||||
|
||||
#### Update per Release:
|
||||
- **CHANGELOG.md**: User-facing changes and improvements
|
||||
- **Version documentation**: Breaking changes and migration guides
|
||||
- **Examples and tutorials**: Keep sample code current
|
||||
|
||||
### Documentation Quality Checklist
|
||||
|
||||
#### Completeness
|
||||
- [ ] Purpose and scope clearly explained
|
||||
- [ ] All public interfaces documented
|
||||
- [ ] Examples provided for complex usage
|
||||
- [ ] Error conditions and handling described
|
||||
- [ ] Dependencies and requirements listed
|
||||
|
||||
#### Accuracy
|
||||
- [ ] Code examples are tested and working
|
||||
- [ ] Links point to correct locations
|
||||
- [ ] Version numbers are current
|
||||
- [ ] Screenshots reflect current UI
|
||||
|
||||
#### Clarity
|
||||
- [ ] Written for the intended audience
|
||||
- [ ] Technical jargon is explained
|
||||
- [ ] Step-by-step instructions are clear
|
||||
- [ ] Visual aids used where helpful
|
||||
|
||||
## Documentation Automation
|
||||
|
||||
### Auto-Generated Documentation
|
||||
- **API docs**: Generate from code annotations
|
||||
- **Type documentation**: Extract from type hints
|
||||
- **Module dependencies**: Auto-update from imports
|
||||
- **Test coverage**: Include coverage reports
|
||||
|
||||
### Documentation Testing
|
||||
```python
|
||||
# Test that code examples in documentation work
|
||||
def test_documentation_examples():
|
||||
"""Verify code examples in docs actually work."""
|
||||
# Test examples from README.md
|
||||
# Test API examples from docs/API.md
|
||||
# Test configuration examples
|
||||
```
|
||||
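
One concrete way to make the stub above executable is to run doctest-style examples embedded in the documentation; a sketch assuming README.md contains `>>>` examples:

```python
import doctest
from pathlib import Path


def test_readme_examples_run_cleanly():
    """Execute the >>> examples embedded in README.md."""
    readme = Path("README.md")
    results = doctest.testfile(str(readme), module_relative=False)
    assert results.failed == 0
```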
|
||||
## Documentation Templates
|
||||
|
||||
### New Module Documentation Template
|
||||
```markdown
|
||||
# Module: [Name]
|
||||
|
||||
## Purpose
|
||||
Brief description of what this module does and why it exists.
|
||||
|
||||
## Public Interface
|
||||
### Functions
|
||||
- `function_name(params)`: Description and example
|
||||
|
||||
### Classes
|
||||
- `ClassName`: Purpose and basic usage
|
||||
|
||||
## Usage Examples
|
||||
```python
|
||||
# Basic usage example
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
- Internal: List of internal modules this depends on
|
||||
- External: List of external packages required
|
||||
|
||||
## Testing
|
||||
How to run tests for this module.
|
||||
|
||||
## Known Issues
|
||||
Current limitations or bugs.
|
||||
```
|
||||
|
||||
### API Endpoint Template
|
||||
```markdown
|
||||
### [METHOD] /api/endpoint
|
||||
|
||||
Brief description of what this endpoint does.
|
||||
|
||||
**Authentication:** Required/Optional
|
||||
**Rate Limiting:** X requests per minute
|
||||
|
||||
**Request:**
|
||||
- Headers required
|
||||
- Body schema
|
||||
- Query parameters
|
||||
|
||||
**Response:**
|
||||
- Success response format
|
||||
- Error response format
|
||||
- Status codes
|
||||
|
||||
**Example:**
|
||||
Working request/response example
|
||||
```
|
||||
|
||||
## Review and Maintenance Process
|
||||
|
||||
### Documentation Review
|
||||
- Include documentation updates in code reviews
|
||||
- Verify examples still work with code changes
|
||||
- Check for broken links and outdated information
|
||||
- Ensure consistency with current implementation
|
||||
|
||||
### Regular Audits
|
||||
- Monthly review of documentation accuracy
|
||||
- Quarterly assessment of documentation completeness
|
||||
- Annual review of documentation structure and organization
|
||||
207 .cursor/rules/enhanced-task-list.mdc Normal file
@@ -0,0 +1,207 @@
|
||||
---
|
||||
description: Enhanced task list management with quality gates and iterative workflow integration
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Enhanced Task List Management
|
||||
|
||||
## Goal
|
||||
Manage task lists with integrated quality gates and iterative workflow to prevent context loss and ensure sustainable development.
|
||||
|
||||
## Task Implementation Protocol
|
||||
|
||||
### Pre-Implementation Check
|
||||
Before starting any sub-task:
|
||||
- [ ] **Context Review**: Have you reviewed CONTEXT.md and relevant documentation?
|
||||
- [ ] **Pattern Identification**: Do you understand existing patterns to follow?
|
||||
- [ ] **Integration Planning**: Do you know how this will integrate with existing code?
|
||||
- [ ] **Size Validation**: Is this task small enough (≤50 lines, ≤250 lines per file)?
|
||||
|
||||
### Implementation Process
|
||||
1. **One sub-task at a time**: Do **NOT** start the next sub‑task until you ask the user for permission and they say "yes" or "y"
|
||||
2. **Step-by-step execution**:
|
||||
- Plan the approach in bullet points
|
||||
- Wait for approval
|
||||
- Implement the specific sub-task
|
||||
- Test the implementation
|
||||
- Update documentation if needed
|
||||
3. **Quality validation**: Run through the code review checklist before marking complete
|
||||
|
||||
### Completion Protocol
|
||||
When you finish a **sub‑task**:
|
||||
1. **Immediate marking**: Change `[ ]` to `[x]`
|
||||
2. **Quality check**: Verify the implementation meets quality standards
|
||||
3. **Integration test**: Ensure new code works with existing functionality
|
||||
4. **Documentation update**: Update relevant files if needed
|
||||
5. **Parent task check**: If **all** subtasks underneath a parent task are now `[x]`, also mark the **parent task** as completed
|
||||
6. **Stop and wait**: Get user approval before proceeding to next sub-task
|
||||
|
||||
## Enhanced Task List Structure
|
||||
|
||||
### Task File Header
|
||||
```markdown
|
||||
# Task List: [Feature Name]
|
||||
|
||||
**Source PRD**: `prd-[feature-name].md`
|
||||
**Status**: In Progress / Complete / Blocked
|
||||
**Context Last Updated**: [Date]
|
||||
**Architecture Review**: Required / Complete / N/A
|
||||
|
||||
## Quick Links
|
||||
- [Context Documentation](./CONTEXT.md)
|
||||
- [Architecture Guidelines](./docs/architecture.md)
|
||||
- [Related Files](#relevant-files)
|
||||
```
|
||||
|
||||
### Task Format with Quality Gates
|
||||
```markdown
|
||||
- [ ] 1.0 Parent Task Title
|
||||
- **Quality Gate**: Architecture review required
|
||||
- **Dependencies**: List any dependencies
|
||||
- [ ] 1.1 [Sub-task description 1.1]
|
||||
- **Size estimate**: [Small/Medium/Large]
|
||||
- **Pattern reference**: [Reference to existing pattern]
|
||||
- **Test requirements**: [Unit/Integration/Both]
|
||||
- [ ] 1.2 [Sub-task description 1.2]
|
||||
- **Integration points**: [List affected components]
|
||||
- **Risk level**: [Low/Medium/High]
|
||||
```
|
||||
|
||||
## Relevant Files Management
|
||||
|
||||
### Enhanced File Tracking
|
||||
```markdown
|
||||
## Relevant Files
|
||||
|
||||
### Implementation Files
|
||||
- `path/to/file1.ts` - Brief description of purpose and role
|
||||
- **Status**: Created / Modified / Needs Review
|
||||
- **Last Modified**: [Date]
|
||||
- **Review Status**: Pending / Approved / Needs Changes
|
||||
|
||||
### Test Files
|
||||
- `path/to/file1.test.ts` - Unit tests for file1.ts
|
||||
- **Coverage**: [Percentage or status]
|
||||
- **Last Run**: [Date and result]
|
||||
|
||||
### Documentation Files
|
||||
- `docs/module-name.md` - Module documentation
|
||||
- **Status**: Up to date / Needs update / Missing
|
||||
- **Last Updated**: [Date]
|
||||
|
||||
### Configuration Files
|
||||
- `config/setting.json` - Configuration changes
|
||||
- **Environment**: [Dev/Staging/Prod affected]
|
||||
- **Backup**: [Location of backup]
|
||||
```
|
||||
|
||||
## Task List Maintenance
|
||||
|
||||
### During Development
|
||||
1. **Regular updates**: Update task status after each significant change
|
||||
2. **File tracking**: Add new files as they are created or modified
|
||||
3. **Dependency tracking**: Note when new dependencies between tasks emerge
|
||||
4. **Risk assessment**: Flag tasks that become more complex than anticipated
|
||||
|
||||
### Quality Checkpoints
|
||||
At 25%, 50%, 75%, and 100% completion:
|
||||
- [ ] **Architecture alignment**: Code follows established patterns
|
||||
- [ ] **Performance impact**: No significant performance degradation
|
||||
- [ ] **Security review**: No security vulnerabilities introduced
|
||||
- [ ] **Documentation current**: All changes are documented
|
||||
|
||||
### Weekly Review Process
|
||||
1. **Completion assessment**: What percentage of tasks are actually complete?
|
||||
2. **Quality assessment**: Are completed tasks meeting quality standards?
|
||||
3. **Process assessment**: Is the iterative workflow being followed?
|
||||
4. **Risk assessment**: Are there emerging risks or blockers?
|
||||
|
||||
## Task Status Indicators
|
||||
|
||||
### Status Levels
|
||||
- `[ ]` **Not Started**: Task not yet begun
|
||||
- `[~]` **In Progress**: Currently being worked on
|
||||
- `[?]` **Blocked**: Waiting for dependencies or decisions
|
||||
- `[!]` **Needs Review**: Implementation complete but needs quality review
|
||||
- `[x]` **Complete**: Finished and quality approved
|
||||
|
||||
### Quality Indicators
|
||||
- ✅ **Quality Approved**: Passed all quality gates
|
||||
- ⚠️ **Quality Concerns**: Has issues but functional
|
||||
- ❌ **Quality Failed**: Needs rework before approval
|
||||
- 🔄 **Under Review**: Currently being reviewed
|
||||
|
||||
### Integration Status
|
||||
- 🔗 **Integrated**: Successfully integrated with existing code
|
||||
- 🔧 **Integration Issues**: Problems with existing code integration
|
||||
- ⏳ **Integration Pending**: Ready for integration testing
|
||||
|
||||
## Emergency Procedures
|
||||
|
||||
### When Tasks Become Too Complex
|
||||
If a sub-task grows beyond expected scope:
|
||||
1. **Stop implementation** immediately
|
||||
2. **Document current state** and what was discovered
|
||||
3. **Break down** the task into smaller pieces
|
||||
4. **Update task list** with new sub-tasks
|
||||
5. **Get approval** for the new breakdown before proceeding
|
||||
|
||||
### When Context is Lost
|
||||
If AI seems to lose track of project patterns:
|
||||
1. **Pause development**
|
||||
2. **Review CONTEXT.md** and recent changes
|
||||
3. **Update context documentation** with current state
|
||||
4. **Restart** with explicit pattern references
|
||||
5. **Reduce task size** until context is re-established
|
||||
|
||||
### When Quality Gates Fail
|
||||
If implementation doesn't meet quality standards:
|
||||
1. **Mark task** with `[!]` status
|
||||
2. **Document specific issues** found
|
||||
3. **Create remediation tasks** if needed
|
||||
4. **Don't proceed** until quality issues are resolved
|
||||
|
||||
## AI Instructions Integration
|
||||
|
||||
### Context Awareness Commands
|
||||
```markdown
|
||||
**Before starting any task, run these checks:**
|
||||
1. @CONTEXT.md - Review current project state
|
||||
2. @architecture.md - Understand design principles
|
||||
3. @code-review.md - Know quality standards
|
||||
4. Look at existing similar code for patterns
|
||||
```
|
||||
|
||||
### Quality Validation Commands
|
||||
```markdown
|
||||
**After completing any sub-task:**
|
||||
1. Run code review checklist
|
||||
2. Test integration with existing code
|
||||
3. Update documentation if needed
|
||||
4. Mark task complete only after quality approval
|
||||
```
|
||||
|
||||
### Workflow Commands
|
||||
```markdown
|
||||
**For each development session:**
|
||||
1. Review incomplete tasks and their status
|
||||
2. Identify next logical sub-task to work on
|
||||
3. Check dependencies and blockers
|
||||
4. Follow iterative workflow process
|
||||
5. Update task list with progress and findings
|
||||
```
|
||||
|
||||
## Success Metrics
|
||||
|
||||
### Daily Success Indicators
|
||||
- Tasks are completed according to quality standards
|
||||
- No sub-tasks are started without completing previous ones
|
||||
- File tracking remains accurate and current
|
||||
- Integration issues are caught early
|
||||
|
||||
### Weekly Success Indicators
|
||||
- Overall task completion rate is sustainable
|
||||
- Quality issues are decreasing over time
|
||||
- Context loss incidents are rare
|
||||
- Team confidence in codebase remains high
|
||||
70 .cursor/rules/generate-tasks.mdc Normal file
@@ -0,0 +1,70 @@
|
||||
---
|
||||
description: Generate a task list or TODO for a user requirement or implementation.
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
# Rule: Generating a Task List from a PRD
|
||||
|
||||
## Goal
|
||||
|
||||
To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on an existing Product Requirements Document (PRD). The task list should guide a developer through implementation.
|
||||
|
||||
## Output
|
||||
|
||||
- **Format:** Markdown (`.md`)
|
||||
- **Location:** `/tasks/`
|
||||
- **Filename:** `tasks-[prd-file-name].md` (e.g., `tasks-prd-user-profile-editing.md`)
|
||||
|
||||
## Process
|
||||
|
||||
1. **Receive PRD Reference:** The user points the AI to a specific PRD file
|
||||
2. **Analyze PRD:** The AI reads and analyzes the functional requirements, user stories, and other sections of the specified PRD.
|
||||
3. **Phase 1: Generate Parent Tasks:** Based on the PRD analysis, create the file and generate the main, high-level tasks required to implement the feature. Use your judgement on how many high-level tasks to use. It's likely to be about 5. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on the PRD. Ready to generate the sub-tasks? Respond with 'Go' to proceed."
|
||||
4. **Wait for Confirmation:** Pause and wait for the user to respond with "Go".
|
||||
5. **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks necessary to complete the parent task. Ensure sub-tasks logically follow from the parent task and cover the implementation details implied by the PRD.
|
||||
6. **Identify Relevant Files:** Based on the tasks and PRD, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable.
|
||||
7. **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, and notes into the final Markdown structure.
|
||||
8. **Save Task List:** Save the generated document in the `/tasks/` directory with the filename `tasks-[prd-file-name].md`, where `[prd-file-name]` matches the base name of the input PRD file (e.g., if the input was `prd-user-profile-editing.md`, the output is `tasks-prd-user-profile-editing.md`).
|
||||
|
||||
## Output Format
|
||||
|
||||
The generated task list _must_ follow this structure:
|
||||
|
||||
```markdown
|
||||
## Relevant Files
|
||||
|
||||
- `path/to/potential/file1.ts` - Brief description of why this file is relevant (e.g., Contains the main component for this feature).
|
||||
- `path/to/file1.test.ts` - Unit tests for `file1.ts`.
|
||||
- `path/to/another/file.tsx` - Brief description (e.g., API route handler for data submission).
|
||||
- `path/to/another/file.test.tsx` - Unit tests for `another/file.tsx`.
|
||||
- `lib/utils/helpers.ts` - Brief description (e.g., Utility functions needed for calculations).
|
||||
- `lib/utils/helpers.test.ts` - Unit tests for `helpers.ts`.
|
||||
|
||||
### Notes
|
||||
|
||||
- Unit tests should typically be placed alongside the code files they are testing (e.g., `MyComponent.tsx` and `MyComponent.test.tsx` in the same directory).
|
||||
- Use `npx jest [optional/path/to/test/file]` to run tests. Running without a path executes all tests found by the Jest configuration.
|
||||
|
||||
## Tasks
|
||||
|
||||
- [ ] 1.0 Parent Task Title
|
||||
- [ ] 1.1 [Sub-task description 1.1]
|
||||
- [ ] 1.2 [Sub-task description 1.2]
|
||||
- [ ] 2.0 Parent Task Title
|
||||
- [ ] 2.1 [Sub-task description 2.1]
|
||||
- [ ] 3.0 Parent Task Title (may not require sub-tasks if purely structural or configuration)
|
||||
```
|
||||
|
||||
## Interaction Model
|
||||
|
||||
The process explicitly requires a pause after generating parent tasks to get user confirmation ("Go") before proceeding to generate the detailed sub-tasks. This ensures the high-level plan aligns with user expectations before diving into details.
|
||||
|
||||
## Target Audience
|
||||
|
||||
|
||||
Assume the primary reader of the task list is a **junior developer** who will implement the feature.
|
||||
236 .cursor/rules/iterative-workflow.mdc Normal file
@@ -0,0 +1,236 @@
|
||||
---
|
||||
description: Iterative development workflow for AI-assisted coding
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Iterative Development Workflow
|
||||
|
||||
## Goal
|
||||
Establish a structured, iterative development process that prevents the chaos and complexity that can arise from uncontrolled AI-assisted development.
|
||||
|
||||
## Development Phases
|
||||
|
||||
### Phase 1: Planning and Design
|
||||
**Before writing any code:**
|
||||
|
||||
1. **Understand the Requirement**
|
||||
- Break down the task into specific, measurable objectives
|
||||
- Identify existing code patterns that should be followed
|
||||
- List dependencies and integration points
|
||||
- Define acceptance criteria
|
||||
|
||||
2. **Design Review**
|
||||
- Propose approach in bullet points
|
||||
- Wait for explicit approval before proceeding
|
||||
- Consider how the solution fits existing architecture
|
||||
- Identify potential risks and mitigation strategies
|
||||
|
||||
### Phase 2: Incremental Implementation
|
||||
**One small piece at a time:**
|
||||
|
||||
1. **Micro-Tasks** (≤ 50 lines each)
|
||||
- Implement one function or small class at a time
|
||||
- Test immediately after implementation
|
||||
- Ensure integration with existing code
|
||||
- Document decisions and patterns used
|
||||
|
||||
2. **Validation Checkpoints**
|
||||
- After each micro-task, verify it works correctly
|
||||
- Check that it follows established patterns
|
||||
- Confirm it integrates cleanly with existing code
|
||||
- Get approval before moving to next micro-task
|
||||
|
||||
### Phase 3: Integration and Testing
|
||||
**Ensuring system coherence:**
|
||||
|
||||
1. **Integration Testing**
|
||||
- Test new code with existing functionality
|
||||
- Verify no regressions in existing features
|
||||
- Check performance impact
|
||||
- Validate error handling
|
||||
|
||||
2. **Documentation Update**
|
||||
- Update relevant documentation
|
||||
- Record any new patterns or decisions
|
||||
- Update context files if architecture changed
|
||||
|
||||
## Iterative Prompting Strategy
|
||||
|
||||
### Step 1: Context Setting
|
||||
```
|
||||
Before implementing [feature], help me understand:
|
||||
1. What existing patterns should I follow?
|
||||
2. What existing functions/classes are relevant?
|
||||
3. How should this integrate with [specific existing component]?
|
||||
4. What are the potential architectural impacts?
|
||||
```
|
||||
|
||||
### Step 2: Plan Creation
|
||||
```
|
||||
Based on the context, create a detailed plan for implementing [feature]:
|
||||
1. Break it into micro-tasks (≤50 lines each)
|
||||
2. Identify dependencies and order of implementation
|
||||
3. Specify integration points with existing code
|
||||
4. List potential risks and mitigation strategies
|
||||
|
||||
Wait for my approval before implementing.
|
||||
```
|
||||
|
||||
### Step 3: Incremental Implementation
|
||||
```
|
||||
Implement only the first micro-task: [specific task]
|
||||
- Use existing patterns from [reference file/function]
|
||||
- Keep it under 50 lines
|
||||
- Include error handling
|
||||
- Add appropriate tests
|
||||
- Explain your implementation choices
|
||||
|
||||
Stop after this task and wait for approval.
|
||||
```
|
||||
|
||||
## Quality Gates
|
||||
|
||||
### Before Each Implementation
|
||||
- [ ] **Purpose is clear**: Can explain what this piece does and why
|
||||
- [ ] **Pattern is established**: Following existing code patterns
|
||||
- [ ] **Size is manageable**: Implementation is small enough to understand completely
|
||||
- [ ] **Integration is planned**: Know how it connects to existing code
|
||||
|
||||
### After Each Implementation
|
||||
- [ ] **Code is understood**: Can explain every line of implemented code
|
||||
- [ ] **Tests pass**: All existing and new tests are passing
|
||||
- [ ] **Integration works**: New code works with existing functionality
|
||||
- [ ] **Documentation updated**: Changes are reflected in relevant documentation
|
||||
|
||||
### Before Moving to Next Task
|
||||
- [ ] **Current task complete**: All acceptance criteria met
|
||||
- [ ] **No regressions**: Existing functionality still works
|
||||
- [ ] **Clean state**: No temporary code or debugging artifacts
|
||||
- [ ] **Approval received**: Explicit go-ahead for next task
|
||||
- [ ] **Documentation updated**: If relevant changes to the module were made.
|
||||
|
||||
## Anti-Patterns to Avoid
|
||||
|
||||
### Large Block Implementation
|
||||
**Don't:**
|
||||
```
|
||||
Implement the entire user management system with authentication,
|
||||
CRUD operations, and email notifications.
|
||||
```
|
||||
|
||||
**Do:**
|
||||
```
|
||||
First, implement just the User model with basic fields.
|
||||
Stop there and let me review before continuing.
|
||||
```
|
||||
|
||||
### Context Loss
|
||||
**Don't:**
|
||||
```
|
||||
Create a new authentication system.
|
||||
```
|
||||
|
||||
**Do:**
|
||||
```
|
||||
Looking at the existing auth patterns in auth.py, implement
|
||||
password validation following the same structure as the
|
||||
existing email validation function.
|
||||
```
|
||||
|
||||
### Over-Engineering
|
||||
**Don't:**
|
||||
```
|
||||
Build a flexible, extensible user management framework that
|
||||
can handle any future requirements.
|
||||
```
|
||||
|
||||
**Do:**
|
||||
```
|
||||
Implement user creation functionality that matches the existing
|
||||
pattern in customer.py, focusing only on the current requirements.
|
||||
```
|
||||
|
||||
## Progress Tracking
|
||||
|
||||
### Task Status Indicators
|
||||
- 🔄 **In Planning**: Requirements gathering and design
|
||||
- ⏳ **In Progress**: Currently implementing
|
||||
- ✅ **Complete**: Implemented, tested, and integrated
|
||||
- 🚫 **Blocked**: Waiting for decisions or dependencies
|
||||
- 🔧 **Needs Refactor**: Working but needs improvement
|
||||
|
||||
### Weekly Review Process
|
||||
1. **Progress Assessment**
|
||||
- What was completed this week?
|
||||
- What challenges were encountered?
|
||||
- How well did the iterative process work?
|
||||
|
||||
2. **Process Adjustment**
|
||||
- Were task sizes appropriate?
|
||||
- Did context management work effectively?
|
||||
- What improvements can be made?
|
||||
|
||||
3. **Architecture Review**
|
||||
- Is the code remaining maintainable?
|
||||
- Are patterns staying consistent?
|
||||
- Is technical debt accumulating?
|
||||
|
||||
## Emergency Procedures
|
||||
|
||||
### When Things Go Wrong
|
||||
If development becomes chaotic or problematic:
|
||||
|
||||
1. **Stop Development**
|
||||
- Don't continue adding to the problem
|
||||
- Take time to assess the situation
|
||||
- Don't rush to "fix" with more AI-generated code
|
||||
|
||||
2. **Assess the Situation**
|
||||
- What specific problems exist?
|
||||
- How far has the code diverged from established patterns?
|
||||
- What parts are still working correctly?
|
||||
|
||||
3. **Recovery Process**
|
||||
- Roll back to last known good state
|
||||
- Update context documentation with lessons learned
|
||||
- Restart with smaller, more focused tasks
|
||||
- Get explicit approval for each step of recovery
|
||||
|
||||
### Context Recovery
|
||||
When AI seems to lose track of project patterns:
|
||||
|
||||
1. **Context Refresh**
|
||||
- Review and update CONTEXT.md
|
||||
- Include examples of current code patterns
|
||||
- Clarify architectural decisions
|
||||
|
||||
2. **Pattern Re-establishment**
|
||||
- Show AI examples of existing, working code
|
||||
- Explicitly state patterns to follow
|
||||
- Start with very small, pattern-matching tasks
|
||||
|
||||
3. **Gradual Re-engagement**
|
||||
- Begin with simple, low-risk tasks
|
||||
- Verify pattern adherence at each step
|
||||
- Gradually increase task complexity as consistency returns
|
||||
|
||||
## Success Metrics
|
||||
|
||||
### Short-term (Daily)
|
||||
- Code is understandable and well-integrated
|
||||
- No major regressions introduced
|
||||
- Development velocity feels sustainable
|
||||
- Team confidence in codebase remains high
|
||||
|
||||
### Medium-term (Weekly)
|
||||
- Technical debt is not accumulating
|
||||
- New features integrate cleanly
|
||||
- Development patterns remain consistent
|
||||
- Documentation stays current
|
||||
|
||||
### Long-term (Monthly)
|
||||
- Codebase remains maintainable as it grows
|
||||
- New team members can understand and contribute
|
||||
- AI assistance enhances rather than hinders development
|
||||
- Architecture remains clean and purposeful
|
||||
24 .cursor/rules/project.mdc Normal file
@@ -0,0 +1,24 @@
|
||||
---
|
||||
description:
|
||||
globs:
|
||||
alwaysApply: true
|
||||
---
|
||||
# Rule: Project specific rules
|
||||
|
||||
## Goal
|
||||
Unify the project structure and interaction with tools and the console

### System tools
- **ALWAYS** use UV for package management
- **ALWAYS** use Windows PowerShell commands for the terminal
|
||||
|
||||
### Coding patterns
|
||||
- **ALWAYS** check arguments and methods before use to avoid errors from wrong parameter names or values
- If in doubt, check [CONTEXT.md](mdc:CONTEXT.md) and [architecture.md](mdc:docs/architecture.md)
- **PREFER** the ORM pattern for databases with SQLAlchemy.
- **DO NOT USE** emoji in code and comments

### Testing
- Use UV to run tests in the format *uv run pytest [filename]*
|
||||
|
||||
|
||||
237
.cursor/rules/refactoring.mdc
Normal file
237
.cursor/rules/refactoring.mdc
Normal file
@ -0,0 +1,237 @@
|
||||
---
|
||||
description: Code refactoring and technical debt management for AI-assisted development
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Code Refactoring and Technical Debt Management
|
||||
|
||||
## Goal
|
||||
Guide AI in systematic code refactoring to improve maintainability, reduce complexity, and prevent technical debt accumulation in AI-assisted development projects.
|
||||
|
||||
## When to Apply This Rule
|
||||
- Code complexity has increased beyond manageable levels
|
||||
- Duplicate code patterns are detected
|
||||
- Performance issues are identified
|
||||
- New features are difficult to integrate
|
||||
- Code review reveals maintainability concerns
|
||||
- Weekly technical debt assessment indicates refactoring needs
|
||||
|
||||
## Pre-Refactoring Assessment
|
||||
|
||||
Before starting any refactoring, the AI MUST:
|
||||
|
||||
1. **Context Analysis:**
|
||||
- Review existing `CONTEXT.md` for architectural decisions
|
||||
- Analyze current code patterns and conventions
|
||||
- Identify all files that will be affected (search the codebase for use)
|
||||
- Check for existing tests that verify current behavior
|
||||
|
||||
2. **Scope Definition:**
|
||||
- Clearly define what will and will not be changed
|
||||
- Identify the specific refactoring pattern to apply
|
||||
- Estimate the blast radius of changes
|
||||
- Plan rollback strategy if needed
|
||||
|
||||
3. **Documentation Review:**
|
||||
- Check `./docs/` for relevant module documentation
|
||||
- Review any existing architectural diagrams
|
||||
- Identify dependencies and integration points
|
||||
- Note any known constraints or limitations
|
||||
|
||||
## Refactoring Process
|
||||
|
||||
### Phase 1: Planning and Safety
|
||||
1. **Create Refactoring Plan:**
|
||||
- Document the current state and desired end state
|
||||
- Break refactoring into small, atomic steps
|
||||
- Identify tests that must pass throughout the process
|
||||
- Plan verification steps for each change
|
||||
|
||||
2. **Establish Safety Net:**
|
||||
- Ensure comprehensive test coverage exists
|
||||
- If tests are missing, create them BEFORE refactoring
|
||||
- Document current behavior that must be preserved
|
||||
- Create backup of current implementation approach
|
||||
|
||||
3. **Get Approval:**
|
||||
- Present the refactoring plan to the user
|
||||
- Wait for explicit "Go" or "Proceed" confirmation
|
||||
- Do NOT start refactoring without approval
|
||||
|
||||
### Phase 2: Incremental Implementation
|
||||
4. **One Change at a Time:**
|
||||
- Implement ONE refactoring step per iteration
|
||||
- Run tests after each step to ensure nothing breaks
|
||||
- Update documentation if interfaces change
|
||||
- Mark progress in the refactoring plan
|
||||
|
||||
5. **Verification Protocol:**
|
||||
- Run all relevant tests after each change
|
||||
- Verify functionality works as expected
|
||||
- Check performance hasn't degraded
|
||||
- Ensure no new linting or type errors
|
||||
|
||||
6. **User Checkpoint:**
|
||||
- After each significant step, pause for user review
|
||||
- Present what was changed and current status
|
||||
- Wait for approval before continuing
|
||||
- Address any concerns before proceeding
|
||||
|
||||
### Phase 3: Completion and Documentation
|
||||
7. **Final Verification:**
|
||||
- Run full test suite to ensure nothing is broken
|
||||
- Verify all original functionality is preserved
|
||||
- Check that new code follows project conventions
|
||||
- Confirm performance is maintained or improved
|
||||
|
||||
8. **Documentation Update:**
|
||||
- Update `CONTEXT.md` with new patterns/decisions
|
||||
- Update module documentation in `./docs/`
|
||||
- Document any new conventions established
|
||||
- Note lessons learned for future refactoring
|
||||
|
||||
## Common Refactoring Patterns
|
||||
|
||||
### Extract Method/Function
|
||||
```
|
||||
WHEN: Functions/methods exceed 50 lines or have multiple responsibilities
|
||||
HOW:
|
||||
1. Identify logical groupings within the function
|
||||
2. Extract each group into a well-named helper function
|
||||
3. Ensure each function has a single responsibility
|
||||
4. Verify tests still pass
|
||||
```
|
||||
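As a minimal, hypothetical Python sketch of this pattern (the function names and columns below are illustrative, not taken from this codebase), a long data-preparation routine can be split into single-purpose helpers with a thin orchestrating function:

```python
import pandas as pd


def _clean_prices(df: pd.DataFrame) -> pd.DataFrame:
    """Drop rows with missing OHLC values and sort chronologically."""
    return df.dropna(subset=["open", "high", "low", "close"]).sort_index()


def _add_returns(df: pd.DataFrame) -> pd.DataFrame:
    """Add a simple percentage-return column."""
    out = df.copy()
    out["return_pct"] = out["close"].pct_change()
    return out


def prepare_data(df: pd.DataFrame) -> pd.DataFrame:
    """Orchestrate the small steps that previously lived in one long function."""
    return _add_returns(_clean_prices(df))
```

Each helper keeps a single responsibility, so existing tests for the top-level function continue to pin the overall behavior.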
|
||||
### Extract Module/Class
|
||||
```
|
||||
WHEN: Files exceed 250 lines or handle multiple concerns
|
||||
HOW:
|
||||
1. Identify cohesive functionality groups
|
||||
2. Create new files for each group
|
||||
3. Move related functions/classes together
|
||||
4. Update imports and dependencies
|
||||
5. Verify module boundaries are clean
|
||||
```
|
||||
|
||||
### Eliminate Duplication
|
||||
```
|
||||
WHEN: Similar code appears in multiple places
|
||||
HOW:
|
||||
1. Identify the common pattern or functionality
|
||||
2. Extract to a shared utility function or module
|
||||
3. Update all usage sites to use the shared code
|
||||
4. Ensure the abstraction is not over-engineered
|
||||
```
|
||||
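For example, a column check that several call sites might otherwise repeat inline can live in one shared helper. This is a hedged sketch; the required columns mirror the data validation shown later in `backtest_runner.py`:

```python
import pandas as pd

REQUIRED_COLUMNS = ["open", "high", "low", "close", "volume"]


def validate_ohlcv(df: pd.DataFrame) -> None:
    """Shared validation instead of copy-pasted column checks at each call site."""
    missing = [col for col in REQUIRED_COLUMNS if col not in df.columns]
    if missing:
        raise ValueError(f"Missing required columns: {missing}")
```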
|
||||
### Improve Data Structures
|
||||
```
|
||||
WHEN: Complex nested objects or unclear data flow
|
||||
HOW:
|
||||
1. Define clear interfaces/types for data structures
|
||||
2. Create transformation functions between different representations
|
||||
3. Ensure data flow is unidirectional where possible
|
||||
4. Add validation at boundaries
|
||||
```
|
||||
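A hedged illustration: replacing an ad-hoc dict with a small dataclass gives the structure a name and lets validation happen at the boundary (the field names here are hypothetical):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class TradeRecord:
    """Explicit trade structure instead of a loosely typed dict."""

    entry_price: float
    exit_price: float
    size_usd: float

    def __post_init__(self) -> None:
        # Validate at the boundary so downstream code can trust the values.
        if self.entry_price <= 0 or self.exit_price <= 0:
            raise ValueError("Prices must be positive")
        if self.size_usd <= 0:
            raise ValueError("Trade size must be positive")
```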
|
||||
### Reduce Coupling
|
||||
```
|
||||
WHEN: Modules are tightly interconnected
|
||||
HOW:
|
||||
1. Identify dependencies between modules
|
||||
2. Extract interfaces for external dependencies
|
||||
3. Use dependency injection where appropriate
|
||||
4. Ensure modules can be tested in isolation
|
||||
```
|
||||
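As a sketch of dependency injection (the `ReportBuilder` class is hypothetical; the `load_data` signature mirrors the `Storage` API used elsewhere in this repository), a component depends on a narrow protocol rather than a concrete class, so tests can supply a stub:

```python
from typing import Protocol

import pandas as pd


class DataSource(Protocol):
    """The narrow interface the component actually needs."""

    def load_data(self, filename: str, start_date: str, stop_date: str) -> pd.DataFrame: ...


class ReportBuilder:
    def __init__(self, source: DataSource) -> None:
        # Injected dependency: tests can pass a stub that returns a small frame.
        self.source = source

    def row_count(self, filename: str, start_date: str, stop_date: str) -> int:
        return len(self.source.load_data(filename, start_date, stop_date))
```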
|
||||
## Quality Gates
|
||||
|
||||
Every refactoring must pass these gates:
|
||||
|
||||
### Technical Quality
|
||||
- [ ] All existing tests pass
|
||||
- [ ] No new linting errors introduced
|
||||
- [ ] Code follows established project conventions
|
||||
- [ ] No performance regression detected
|
||||
- [ ] File sizes remain under 250 lines
|
||||
- [ ] Function sizes remain under 50 lines
|
||||
|
||||
### Maintainability
|
||||
- [ ] Code is more readable than before
|
||||
- [ ] Duplicated code has been reduced
|
||||
- [ ] Module responsibilities are clearer
|
||||
- [ ] Dependencies are explicit and minimal
|
||||
- [ ] Error handling is consistent
|
||||
|
||||
### Documentation
|
||||
- [ ] Public interfaces are documented
|
||||
- [ ] Complex logic has explanatory comments
|
||||
- [ ] Architectural decisions are recorded
|
||||
- [ ] Examples are provided where helpful
|
||||
|
||||
## AI Instructions for Refactoring
|
||||
|
||||
1. **Always ask for permission** before starting any refactoring work
|
||||
2. **Start with tests** - ensure comprehensive coverage before changing code
|
||||
3. **Work incrementally** - make small changes and verify each step
|
||||
4. **Preserve behavior** - functionality must remain exactly the same
|
||||
5. **Update documentation** - keep all docs current with changes
|
||||
6. **Follow conventions** - maintain consistency with existing codebase
|
||||
7. **Stop and ask** if any step fails or produces unexpected results
|
||||
8. **Explain changes** - clearly communicate what was changed and why
|
||||
|
||||
## Anti-Patterns to Avoid
|
||||
|
||||
### Over-Engineering
|
||||
- Don't create abstractions for code that isn't duplicated
|
||||
- Avoid complex inheritance hierarchies
|
||||
- Don't optimize prematurely
|
||||
|
||||
### Breaking Changes
|
||||
- Never change public APIs without explicit approval
|
||||
- Don't remove functionality, even if it seems unused
|
||||
- Avoid changing behavior "while we're here"
|
||||
|
||||
### Scope Creep
|
||||
- Stick to the defined refactoring scope
|
||||
- Don't add new features during refactoring
|
||||
- Resist the urge to "improve" unrelated code
|
||||
|
||||
## Success Metrics
|
||||
|
||||
Track these metrics to ensure refactoring effectiveness:
|
||||
|
||||
### Code Quality
|
||||
- Reduced cyclomatic complexity
|
||||
- Lower code duplication percentage
|
||||
- Improved test coverage
|
||||
- Fewer linting violations
|
||||
|
||||
### Developer Experience
|
||||
- Faster time to understand code
|
||||
- Easier integration of new features
|
||||
- Reduced bug introduction rate
|
||||
- Higher developer confidence in changes
|
||||
|
||||
### Maintainability
|
||||
- Clearer module boundaries
|
||||
- More predictable behavior
|
||||
- Easier debugging and troubleshooting
|
||||
- Better performance characteristics
|
||||
|
||||
## Output Files
|
||||
|
||||
When refactoring is complete, update:
|
||||
- `refactoring-log-[date].md` - Document what was changed and why
|
||||
- `CONTEXT.md` - Update with new patterns and decisions
|
||||
- `./docs/` - Update relevant module documentation
|
||||
- Task lists - Mark refactoring tasks as complete
|
||||
|
||||
## Final Verification
|
||||
|
||||
Before marking refactoring complete:
|
||||
1. Run full test suite and verify all tests pass
|
||||
2. Check that code follows all project conventions
|
||||
3. Verify documentation is up to date
|
||||
4. Confirm user is satisfied with the results
|
||||
5. Record lessons learned for future refactoring efforts
|
||||
44
.cursor/rules/task-list.mdc
Normal file
44
.cursor/rules/task-list.mdc
Normal file
@ -0,0 +1,44 @@
|
||||
---
|
||||
description: TODO list task implementation
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Task List Management
|
||||
|
||||
Guidelines for managing task lists in markdown files to track progress on completing a PRD
|
||||
|
||||
## Task Implementation
|
||||
- **One sub-task at a time:** Do **NOT** start the next sub‑task until you ask the user for permission and they say “yes” or "y"
|
||||
- **Completion protocol:**
|
||||
1. When you finish a **sub‑task**, immediately mark it as completed by changing `[ ]` to `[x]`.
|
||||
2. If **all** subtasks underneath a parent task are now `[x]`, also mark the **parent task** as completed.
|
||||
- Stop after each sub‑task and wait for the user’s go‑ahead (a minimal scripted sketch of the checkbox update is shown below).
|
||||
|
||||
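A minimal sketch of the checkbox mechanics referenced above, assuming a plain markdown task file; the file path and matching logic are illustrative only:

```python
from pathlib import Path


def mark_subtask_done(path: str, subtask_text: str) -> None:
    """Flip the first matching '[ ]' sub-task to '[x]' in a markdown task list."""
    task_file = Path(path)
    lines = task_file.read_text(encoding="utf-8").splitlines()
    for i, line in enumerate(lines):
        if subtask_text in line and "[ ]" in line:
            lines[i] = line.replace("[ ]", "[x]", 1)
            break
    task_file.write_text("\n".join(lines) + "\n", encoding="utf-8")
```

Marking the parent task once all of its sub-tasks are `[x]` would follow the same read-modify-write pattern.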
## Task List Maintenance
|
||||
|
||||
1. **Update the task list as you work:**
|
||||
- Mark tasks and subtasks as completed (`[x]`) per the protocol above.
|
||||
- Add new tasks as they emerge.
|
||||
|
||||
2. **Maintain the “Relevant Files” section:**
|
||||
- List every file created or modified.
|
||||
- Give each file a one‑line description of its purpose.
|
||||
|
||||
## AI Instructions
|
||||
|
||||
When working with task lists, the AI must:
|
||||
|
||||
1. Regularly update the task list file after finishing any significant work.
|
||||
2. Follow the completion protocol:
|
||||
- Mark each finished **sub‑task** `[x]`.
|
||||
- Mark the **parent task** `[x]` once **all** its subtasks are `[x]`.
|
||||
3. Add newly discovered tasks.
|
||||
4. Keep “Relevant Files” accurate and up to date.
|
||||
5. Before starting work, check which sub‑task is next.
|
||||
|
||||
6. After implementing a sub‑task, update the file and then pause for user approval.
|
||||
8
.gitignore
vendored
8
.gitignore
vendored
@ -1,13 +1,13 @@
|
||||
# ---> Python
|
||||
/data/*.db
|
||||
/credentials/*.json
|
||||
*.csv
|
||||
*.png
|
||||
# Byte-compiled / optimized / DLL files
|
||||
__pycache__/
|
||||
*.py[cod]
|
||||
*$py.class
|
||||
|
||||
# ignore credentials JSON
|
||||
/credentials/*.json
|
||||
/data/*.npy
|
||||
|
||||
# C extensions
|
||||
*.so
|
||||
@ -179,5 +179,3 @@ README.md
|
||||
.vscode/launch.json
|
||||
data/btcusd_1-day_data.csv
|
||||
data/btcusd_1-min_data.csv
|
||||
|
||||
frontend/
|
||||
@ -1 +0,0 @@
|
||||
3.10
|
||||
611
README.md
611
README.md
@ -1,177 +1,512 @@
|
||||
# Cycles - Advanced Trading Strategy Backtesting Framework
|
||||
# Cycles - Cryptocurrency Trading Strategy Backtesting Framework
|
||||
|
||||
A sophisticated Python framework for backtesting cryptocurrency trading strategies with multi-timeframe analysis, strategy combination, and advanced signal processing.
|
||||
A comprehensive Python framework for backtesting cryptocurrency trading strategies using technical indicators, with advanced features like machine learning price prediction to eliminate lookahead bias.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
- [Overview](#overview)
|
||||
- [Features](#features)
|
||||
- [Quick Start](#quick-start)
|
||||
- [Project Structure](#project-structure)
|
||||
- [Core Modules](#core-modules)
|
||||
- [Configuration](#configuration)
|
||||
- [Usage Examples](#usage-examples)
|
||||
- [API Documentation](#api-documentation)
|
||||
- [Testing](#testing)
|
||||
- [Contributing](#contributing)
|
||||
- [License](#license)
|
||||
|
||||
## Overview
|
||||
|
||||
Cycles is a sophisticated backtesting framework designed specifically for cryptocurrency trading strategies. It provides robust tools for:
|
||||
|
||||
- **Strategy Backtesting**: Test trading strategies across multiple timeframes with comprehensive metrics
|
||||
- **Technical Analysis**: Built-in indicators including SuperTrend, RSI, Bollinger Bands, and more
|
||||
- **Machine Learning Integration**: Eliminate lookahead bias using XGBoost price prediction
|
||||
- **Multi-timeframe Analysis**: Support for various timeframes from 1-minute to daily data
|
||||
- **Performance Analytics**: Detailed reporting with profit ratios, drawdowns, win rates, and fee calculations
|
||||
|
||||
### Key Goals
|
||||
|
||||
1. **Realistic Trading Simulation**: Eliminate common backtesting pitfalls like lookahead bias
|
||||
2. **Modular Architecture**: Easy to extend with new indicators and strategies
|
||||
3. **Performance Optimization**: Parallel processing for efficient large-scale backtesting
|
||||
4. **Comprehensive Analysis**: Rich reporting and visualization capabilities
|
||||
|
||||
## Features
|
||||
|
||||
- **Multi-Strategy Architecture**: Combine multiple trading strategies with configurable weights and rules
|
||||
- **Multi-Timeframe Analysis**: Strategies can operate on different timeframes (1min, 5min, 15min, 1h, etc.)
|
||||
- **Advanced Strategies**:
|
||||
- **Default Strategy**: Meta-trend analysis using multiple Supertrend indicators
|
||||
- **BBRS Strategy**: Bollinger Bands + RSI with market regime detection
|
||||
- **Flexible Signal Combination**: Weighted consensus, majority voting, any/all combinations
|
||||
- **Precise Stop-Loss**: 1-minute precision for accurate risk management
|
||||
- **Comprehensive Backtesting**: Detailed performance metrics and trade analysis
|
||||
- **Data Visualization**: Interactive charts and performance plots
|
||||
### 🚀 Core Features
|
||||
|
||||
- **Multi-Strategy Backtesting**: Test multiple trading strategies simultaneously
|
||||
- **Advanced Stop Loss Management**: Precise stop-loss execution using 1-minute data
|
||||
- **Fee Integration**: Realistic trading fee calculations (OKX exchange fees)
|
||||
- **Parallel Processing**: Efficient multi-core backtesting execution
|
||||
- **Rich Analytics**: Comprehensive performance metrics and reporting
|
||||
|
||||
### 📊 Technical Indicators
|
||||
|
||||
- **SuperTrend**: Multi-parameter SuperTrend indicator with meta-trend analysis
|
||||
- **RSI**: Relative Strength Index with customizable periods
|
||||
- **Bollinger Bands**: Configurable period and standard deviation multipliers
|
||||
- **Extensible Framework**: Easy to add new technical indicators
|
||||
|
||||
### 🤖 Machine Learning
|
||||
|
||||
- **Price Prediction**: XGBoost-based closing price prediction
|
||||
- **Lookahead Bias Elimination**: Realistic trading simulations
|
||||
- **Feature Engineering**: Advanced technical feature extraction
|
||||
- **Model Persistence**: Save and load trained models
|
||||
|
||||
### 📈 Data Management
|
||||
|
||||
- **Multiple Data Sources**: Support for various cryptocurrency exchanges
|
||||
- **Flexible Timeframes**: 1-minute to daily data aggregation
|
||||
- **Efficient Storage**: Optimized data loading and caching
|
||||
- **Google Sheets Integration**: External data source connectivity
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Python 3.8+
|
||||
- [uv](https://github.com/astral-sh/uv) package manager (recommended)
|
||||
- Python 3.10 or higher
|
||||
- UV package manager (recommended)
|
||||
- Git
|
||||
|
||||
### Installation
|
||||
|
||||
```bash
|
||||
# Clone the repository
|
||||
git clone <repository-url>
|
||||
cd Cycles
|
||||
1. **Clone the repository**:
|
||||
```bash
|
||||
git clone <repository-url>
|
||||
cd Cycles
|
||||
```
|
||||
|
||||
# Install dependencies with uv
|
||||
uv sync
|
||||
2. **Install dependencies**:
|
||||
```bash
|
||||
uv sync
|
||||
```
|
||||
|
||||
# Or install with pip
|
||||
pip install -r requirements.txt
|
||||
```
|
||||
3. **Activate virtual environment**:
|
||||
```bash
|
||||
source .venv/bin/activate # Linux/Mac
|
||||
# or
|
||||
.venv\Scripts\activate # Windows
|
||||
```
|
||||
|
||||
### Running Backtests
|
||||
### Basic Usage
|
||||
|
||||
Use the `uv run` command to execute backtests with different configurations:
|
||||
1. **Prepare your configuration file** (`config.json`):
|
||||
```json
|
||||
{
|
||||
"start_date": "2023-01-01",
|
||||
"stop_date": "2023-12-31",
|
||||
"initial_usd": 10000,
|
||||
"timeframes": ["5T", "15T", "1H", "4H"],
|
||||
"stop_loss_pcts": [0.02, 0.05, 0.10]
|
||||
}
|
||||
```
|
||||
|
||||
```bash
|
||||
# Run default strategy on 5-minute timeframe
|
||||
uv run .\main.py .\configs\config_default_5min.json
|
||||
2. **Run a backtest**:
|
||||
```bash
|
||||
uv run python main.py --config config.json
|
||||
```
|
||||
|
||||
# Run default strategy on 15-minute timeframe
|
||||
uv run .\main.py .\configs\config_default.json
|
||||
|
||||
# Run BBRS strategy with market regime detection
|
||||
uv run .\main.py .\configs\config_bbrs.json
|
||||
|
||||
# Run combined strategies
|
||||
uv run .\main.py .\configs\config_combined.json
|
||||
```
|
||||
|
||||
### Configuration Examples
|
||||
|
||||
#### Default Strategy (5-minute timeframe)
|
||||
```bash
|
||||
uv run .\main.py .\configs\config_default_5min.json
|
||||
```
|
||||
|
||||
#### BBRS Strategy with Multi-timeframe Analysis
|
||||
```bash
|
||||
uv run .\main.py .\configs\config_bbrs_multi_timeframe.json
|
||||
```
|
||||
|
||||
#### Combined Strategies with Weighted Consensus
|
||||
```bash
|
||||
uv run .\main.py .\configs\config_combined.json
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
Strategies are configured using JSON files in the `configs/` directory:
|
||||
|
||||
```json
|
||||
{
|
||||
"start_date": "2024-01-01",
|
||||
"stop_date": "2024-01-31",
|
||||
"initial_usd": 10000,
|
||||
"timeframes": ["15min"],
|
||||
"stop_loss_pcts": [0.03, 0.05],
|
||||
"strategies": [
|
||||
{
|
||||
"name": "default",
|
||||
"weight": 1.0,
|
||||
"params": {
|
||||
"timeframe": "15min"
|
||||
}
|
||||
}
|
||||
],
|
||||
"combination_rules": {
|
||||
"entry": "any",
|
||||
"exit": "any",
|
||||
"min_confidence": 0.5
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Available Strategies
|
||||
|
||||
1. **Default Strategy**: Meta-trend analysis using Supertrend indicators
|
||||
2. **BBRS Strategy**: Bollinger Bands + RSI with market regime detection
|
||||
|
||||
### Combination Rules
|
||||
|
||||
- **Entry**: `any`, `all`, `majority`, `weighted_consensus`
|
||||
- **Exit**: `any`, `all`, `priority` (prioritizes stop-loss signals)
|
||||
3. **View results**:
|
||||
Results will be saved in timestamped CSV files with comprehensive metrics.
|
||||
|
||||
## Project Structure
|
||||
|
||||
```
|
||||
Cycles/
|
||||
├── configs/ # Configuration files
|
||||
├── cycles/ # Core framework
|
||||
│ ├── strategies/ # Strategy implementation
|
||||
│ │ ├── base.py # Base strategy classes
|
||||
│ │ ├── default_strategy.py
|
||||
│ │ ├── bbrs_strategy.py
|
||||
│ │ └── manager.py # Strategy manager
|
||||
│ ├── Analysis/ # Technical analysis
|
||||
│ ├── utils/ # Utilities
|
||||
│ └── charts.py # Visualization
|
||||
├── docs/ # Documentation
|
||||
├── data/ # Market data
|
||||
├── results/ # Backtest results
|
||||
└── main.py # Main entry point
|
||||
├── cycles/ # Core library modules
|
||||
│ ├── Analysis/ # Technical analysis indicators
|
||||
│ │ ├── boillinger_band.py
|
||||
│ │ ├── rsi.py
|
||||
│ │ └── __init__.py
|
||||
│ ├── utils/ # Utility modules
|
||||
│ │ ├── storage.py # Data storage and management
|
||||
│ │ ├── system.py # System utilities
|
||||
│ │ ├── data_utils.py # Data processing utilities
|
||||
│ │ └── gsheets.py # Google Sheets integration
|
||||
│ ├── backtest.py # Core backtesting engine
|
||||
│ ├── supertrend.py # SuperTrend indicator implementation
|
||||
│ ├── charts.py # Visualization utilities
|
||||
│ ├── market_fees.py # Trading fee calculations
|
||||
│ └── __init__.py
|
||||
├── docs/ # Documentation
|
||||
│ ├── analysis.md # Analysis module documentation
|
||||
│ ├── utils_storage.md # Storage utilities documentation
|
||||
│ └── utils_system.md # System utilities documentation
|
||||
├── data/ # Data directory (not in repo)
|
||||
├── results/ # Backtest results (not in repo)
|
||||
├── xgboost/ # Machine learning components
|
||||
├── OHLCVPredictor/ # Price prediction module
|
||||
├── main.py # Main execution script
|
||||
├── test_bbrsi.py # Example strategy test
|
||||
├── pyproject.toml # Project configuration
|
||||
├── requirements.txt # Dependencies
|
||||
├── uv.lock # UV lock file
|
||||
└── README.md # This file
|
||||
```
|
||||
|
||||
## Documentation
|
||||
## Core Modules
|
||||
|
||||
Detailed documentation is available in the `docs/` directory:
|
||||
### Backtest Engine (`cycles/backtest.py`)
|
||||
|
||||
- **[Strategy Manager](./docs/strategy_manager.md)** - Multi-strategy orchestration and signal combination
|
||||
- **[Strategies](./docs/strategies.md)** - Individual strategy implementations and usage
|
||||
- **[Timeframe System](./docs/timeframe_system.md)** - Advanced timeframe management and multi-timeframe strategies
|
||||
- **[Analysis](./docs/analysis.md)** - Technical analysis components
|
||||
- **[Storage Utils](./docs/utils_storage.md)** - Data storage and retrieval
|
||||
- **[System Utils](./docs/utils_system.md)** - System utilities
|
||||
The heart of the framework, providing comprehensive backtesting capabilities:
|
||||
|
||||
## Examples
|
||||
```python
|
||||
from cycles.backtest import Backtest
|
||||
|
||||
results = Backtest.run(
|
||||
min1_df=minute_data,
|
||||
df=timeframe_data,
|
||||
initial_usd=10000,
|
||||
stop_loss_pct=0.05,
|
||||
debug=False
|
||||
)
|
||||
```
|
||||
|
||||
**Key Features**:
|
||||
- Meta-SuperTrend strategy implementation
|
||||
- Precise stop-loss execution using 1-minute data
|
||||
- Comprehensive trade logging and statistics
|
||||
- Fee-aware profit calculations
|
||||
|
||||
### Technical Analysis (`cycles/Analysis/`)
|
||||
|
||||
Modular technical indicator implementations:
|
||||
|
||||
#### RSI (Relative Strength Index)
|
||||
```python
|
||||
from cycles.Analysis.rsi import RSI
|
||||
|
||||
rsi_calculator = RSI(period=14)
|
||||
data_with_rsi = rsi_calculator.calculate(df, price_column='close')
|
||||
```
|
||||
|
||||
#### Bollinger Bands
|
||||
```python
|
||||
from cycles.Analysis.boillinger_band import BollingerBands
|
||||
|
||||
bb = BollingerBands(period=20, std_dev_multiplier=2.0)
|
||||
data_with_bb = bb.calculate(df)
|
||||
```
|
||||
|
||||
### Data Management (`cycles/utils/storage.py`)
|
||||
|
||||
Efficient data loading, processing, and result storage:
|
||||
|
||||
```python
|
||||
from cycles.utils.storage import Storage
|
||||
|
||||
storage = Storage(data_dir='./data', logging=logging)
|
||||
data = storage.load_data('btcusd_1-min_data.csv', '2023-01-01', '2023-12-31')
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
### Backtest Configuration
|
||||
|
||||
Create a `config.json` file with the following structure:
|
||||
|
||||
```json
|
||||
{
|
||||
"start_date": "2023-01-01",
|
||||
"stop_date": "2023-12-31",
|
||||
"initial_usd": 10000,
|
||||
"timeframes": [
|
||||
"1T", // 1 minute
|
||||
"5T", // 5 minutes
|
||||
"15T", // 15 minutes
|
||||
"1H", // 1 hour
|
||||
"4H", // 4 hours
|
||||
"1D" // 1 day
|
||||
],
|
||||
"stop_loss_pcts": [0.02, 0.05, 0.10, 0.15]
|
||||
}
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
Set the following environment variables for enhanced functionality:
|
||||
|
||||
### Single Strategy Backtest
|
||||
```bash
|
||||
# Test default strategy on different timeframes
|
||||
uv run .\main.py .\configs\config_default.json # 15min
|
||||
uv run .\main.py .\configs\config_default_5min.json # 5min
|
||||
# Google Sheets integration (optional)
|
||||
export GOOGLE_SHEETS_CREDENTIALS_PATH="/path/to/credentials.json"
|
||||
|
||||
# Data directory (optional, defaults to ./data)
|
||||
export DATA_DIR="/path/to/data"
|
||||
|
||||
# Results directory (optional, defaults to ./results)
|
||||
export RESULTS_DIR="/path/to/results"
|
||||
```
|
||||
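A small, hedged sketch of reading these optional variables with the documented defaults (the project code may wire them differently):

```python
import os

# Fall back to the documented defaults when a variable is not set.
data_dir = os.environ.get("DATA_DIR", "./data")
results_dir = os.environ.get("RESULTS_DIR", "./results")
credentials_path = os.environ.get("GOOGLE_SHEETS_CREDENTIALS_PATH")  # optional, may be None
```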
|
||||
### Multi-Strategy Backtest
|
||||
## Usage Examples
|
||||
|
||||
### Basic Backtest
|
||||
|
||||
```python
|
||||
import json
|
||||
from cycles.utils.storage import Storage
|
||||
from cycles.backtest import Backtest
|
||||
|
||||
# Load configuration
|
||||
with open('config.json', 'r') as f:
|
||||
config = json.load(f)
|
||||
|
||||
# Initialize storage
|
||||
storage = Storage(data_dir='./data')
|
||||
|
||||
# Load data
|
||||
data_1min = storage.load_data(
|
||||
'btcusd_1-min_data.csv',
|
||||
config['start_date'],
|
||||
config['stop_date']
|
||||
)
|
||||
|
||||
# Run backtest
|
||||
results = Backtest.run(
|
||||
min1_df=data_1min,
|
||||
df=data_1min, # Same data for 1-minute strategy
|
||||
initial_usd=config['initial_usd'],
|
||||
stop_loss_pct=0.05,
|
||||
debug=True
|
||||
)
|
||||
|
||||
print(f"Final USD: {results['final_usd']:.2f}")
|
||||
print(f"Number of trades: {results['n_trades']}")
|
||||
print(f"Win rate: {results['win_rate']:.2%}")
|
||||
```
|
||||
|
||||
### Multi-Timeframe Analysis
|
||||
|
||||
```python
|
||||
from main import process
|
||||
|
||||
# Define timeframes to test
|
||||
timeframes = ['5T', '15T', '1H', '4H']
|
||||
stop_loss_pcts = [0.02, 0.05, 0.10]
|
||||
|
||||
# Create tasks for parallel processing
|
||||
tasks = [
|
||||
(timeframe, data_1min, stop_loss_pct, 10000)
|
||||
for timeframe in timeframes
|
||||
for stop_loss_pct in stop_loss_pcts
|
||||
]
|
||||
|
||||
# Process each task
|
||||
for task in tasks:
|
||||
results, trades = process(task, debug=False)
|
||||
print(f"Timeframe: {task[0]}, Stop Loss: {task[2]:.1%}")
|
||||
for result in results:
|
||||
print(f" Final USD: {result['final_usd']:.2f}")
|
||||
```
|
||||
|
||||
### Custom Strategy Development
|
||||
|
||||
```python
|
||||
from cycles.Analysis.rsi import RSI
|
||||
from cycles.Analysis.boillinger_band import BollingerBands
|
||||
|
||||
def custom_strategy(df):
|
||||
"""Example custom trading strategy using RSI and Bollinger Bands"""
|
||||
|
||||
# Calculate indicators
|
||||
rsi = RSI(period=14)
|
||||
bb = BollingerBands(period=20, std_dev_multiplier=2.0)
|
||||
|
||||
df_with_rsi = rsi.calculate(df.copy())
|
||||
df_with_bb = bb.calculate(df_with_rsi)
|
||||
|
||||
# Define signals
|
||||
buy_signals = (
|
||||
(df_with_bb['close'] < df_with_bb['LowerBand']) &
|
||||
(df_with_bb['RSI'] < 30)
|
||||
)
|
||||
|
||||
sell_signals = (
|
||||
(df_with_bb['close'] > df_with_bb['UpperBand']) &
|
||||
(df_with_bb['RSI'] > 70)
|
||||
)
|
||||
|
||||
return buy_signals, sell_signals
|
||||
```
|
||||
|
||||
## API Documentation
|
||||
|
||||
### Core Classes
|
||||
|
||||
#### `Backtest`
|
||||
Main backtesting engine with static methods for strategy execution.
|
||||
|
||||
**Methods**:
|
||||
- `run(min1_df, df, initial_usd, stop_loss_pct, debug=False)`: Execute backtest
|
||||
- `check_stop_loss(...)`: Check stop-loss conditions using 1-minute data
|
||||
- `handle_entry(...)`: Process trade entry logic
|
||||
- `handle_exit(...)`: Process trade exit logic
|
||||
|
||||
#### `Storage`
|
||||
Data management and persistence utilities.
|
||||
|
||||
**Methods**:
|
||||
- `load_data(filename, start_date, stop_date)`: Load and filter historical data
|
||||
- `save_data(df, filename)`: Save processed data
|
||||
- `write_backtest_results(...)`: Save backtest results to CSV
|
||||
|
||||
#### `SystemUtils`
|
||||
System optimization and resource management.
|
||||
|
||||
**Methods**:
|
||||
- `get_optimal_workers()`: Determine optimal number of parallel workers
|
||||
- `get_memory_usage()`: Monitor memory consumption
|
||||
|
||||
### Configuration Parameters
|
||||
|
||||
| Parameter | Type | Description | Default |
|
||||
|-----------|------|-------------|---------|
|
||||
| `start_date` | string | Backtest start date (YYYY-MM-DD) | Required |
|
||||
| `stop_date` | string | Backtest end date (YYYY-MM-DD) | Required |
|
||||
| `initial_usd` | float | Starting capital in USD | Required |
|
||||
| `timeframes` | array | List of timeframes to test | Required |
|
||||
| `stop_loss_pcts` | array | Stop-loss percentages to test | Required |
|
||||
|
||||
## Testing
|
||||
|
||||
### Running Tests
|
||||
|
||||
```bash
|
||||
# Combine multiple strategies with different weights
|
||||
uv run .\main.py .\configs\config_combined.json
|
||||
# Run all tests
|
||||
uv run pytest
|
||||
|
||||
# Run specific test file
|
||||
uv run pytest test_bbrsi.py
|
||||
|
||||
# Run with verbose output
|
||||
uv run pytest -v
|
||||
|
||||
# Run with coverage
|
||||
uv run pytest --cov=cycles
|
||||
```
|
||||
|
||||
### Custom Configuration
|
||||
Create your own configuration file and run:
|
||||
```bash
|
||||
uv run .\main.py .\configs\your_config.json
```
|
||||
### Test Structure
|
||||
|
||||
- `test_bbrsi.py`: Example strategy testing with RSI and Bollinger Bands
|
||||
- Unit tests for individual modules (add as needed)
|
||||
- Integration tests for complete workflows
|
||||
|
||||
### Example Test
|
||||
|
||||
```python
|
||||
# test_bbrsi.py demonstrates strategy testing
|
||||
from cycles.Analysis.rsi import RSI
|
||||
from cycles.Analysis.boillinger_band import BollingerBands
|
||||
|
||||
def test_strategy_signals():
|
||||
# Load test data
|
||||
storage = Storage()
|
||||
data = storage.load_data('test_data.csv', '2023-01-01', '2023-02-01')
|
||||
|
||||
# Calculate indicators
|
||||
rsi = RSI(period=14)
|
||||
bb = BollingerBands(period=20)
|
||||
|
||||
data_with_indicators = bb.calculate(rsi.calculate(data))
|
||||
|
||||
# Test signal generation
|
||||
assert 'RSI' in data_with_indicators.columns
|
||||
assert 'UpperBand' in data_with_indicators.columns
|
||||
assert 'LowerBand' in data_with_indicators.columns
|
||||
```
|
||||
|
||||
## Output
|
||||
|
||||
Backtests generate:
|
||||
- **CSV Results**: Detailed performance metrics per timeframe/strategy
|
||||
- **Trade Log**: Individual trade records with entry/exit details
|
||||
- **Performance Charts**: Visual analysis of strategy performance (in debug mode)
|
||||
- **Log Files**: Detailed execution logs
|
||||
|
||||
## License
|
||||
|
||||
[Add your license information here]
|
||||
|
||||
## Contributing
|
||||
|
||||
[Add contributing guidelines here]
|
||||
### Development Setup
|
||||
|
||||
1. Fork the repository
|
||||
2. Create a feature branch: `git checkout -b feature/new-indicator`
|
||||
3. Install development dependencies: `uv sync --dev`
|
||||
4. Make your changes following the coding standards
|
||||
5. Add tests for new functionality
|
||||
6. Run tests: `uv run pytest`
|
||||
7. Submit a pull request
|
||||
|
||||
### Coding Standards
|
||||
|
||||
- **Maximum file size**: 250 lines
|
||||
- **Maximum function size**: 50 lines
|
||||
- **Documentation**: All public functions must have docstrings
|
||||
- **Type hints**: Use type hints for all function parameters and returns
|
||||
- **Error handling**: Include proper error handling and meaningful error messages
|
||||
- **No emoji**: Avoid emoji in code and comments
|
||||
|
||||
### Adding New Indicators
|
||||
|
||||
1. Create a new file in `cycles/Analysis/`
|
||||
2. Follow the existing pattern (see `rsi.py` or `boillinger_band.py`); a minimal skeleton is sketched after this list
|
||||
3. Include comprehensive docstrings and type hints
|
||||
4. Add tests for the new indicator
|
||||
5. Update documentation
|
||||
|
||||
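A minimal skeleton following the calling convention shown earlier for `RSI` and `BollingerBands` (constructor parameters plus a `calculate(df)` method that returns the DataFrame with a new column); the `SMA` class and its column name are hypothetical:

```python
import pandas as pd


class SMA:
    """Hypothetical simple moving average indicator following the existing pattern."""

    def __init__(self, period: int = 20):
        self.period = period

    def calculate(self, df: pd.DataFrame, price_column: str = "close") -> pd.DataFrame:
        """Return a copy of df with an 'SMA' column added."""
        result = df.copy()
        result["SMA"] = result[price_column].rolling(window=self.period).mean()
        return result
```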
## Performance Considerations
|
||||
|
||||
### Optimization Tips
|
||||
|
||||
1. **Parallel Processing**: Use the built-in parallel processing for multiple timeframes
|
||||
2. **Data Caching**: Cache frequently used calculations
|
||||
3. **Memory Management**: Monitor memory usage for large datasets
|
||||
4. **Efficient Data Types**: Use appropriate pandas data types
|
||||
|
||||
### Benchmarks
|
||||
|
||||
Typical performance on modern hardware:
|
||||
- **1-minute data**: ~1M candles processed in 2-3 minutes
|
||||
- **Multiple timeframes**: 4 timeframes × 4 stop-loss values in 5-10 minutes
|
||||
- **Memory usage**: ~2-4GB for 1 year of 1-minute BTC data
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **Memory errors with large datasets**:
|
||||
- Reduce date range or use data chunking
|
||||
- Increase system RAM or use swap space
|
||||
|
||||
2. **Slow performance**:
|
||||
- Enable parallel processing
|
||||
- Reduce number of timeframes/stop-loss values
|
||||
- Use SSD storage for data files
|
||||
|
||||
3. **Missing data errors**:
|
||||
- Verify data file format and column names
|
||||
- Check date range availability in data
|
||||
- Ensure proper data cleaning
|
||||
|
||||
### Debug Mode
|
||||
|
||||
Enable debug mode for detailed logging:
|
||||
|
||||
```python
|
||||
# Set debug=True for detailed output
|
||||
results = Backtest.run(..., debug=True)
|
||||
```
|
||||
|
||||
## License
|
||||
|
||||
This project is licensed under the MIT License. See the LICENSE file for details.
|
||||
|
||||
## Changelog
|
||||
|
||||
### Version 0.1.0 (Current)
|
||||
- Initial release
|
||||
- Core backtesting framework
|
||||
- SuperTrend strategy implementation
|
||||
- Technical indicators (RSI, Bollinger Bands)
|
||||
- Multi-timeframe analysis
|
||||
- Machine learning price prediction
|
||||
- Parallel processing support
|
||||
|
||||
---
|
||||
|
||||
For more detailed documentation, see the `docs/` directory or visit our [documentation website](link-to-docs).
|
||||
|
||||
**Support**: For questions or issues, please create an issue on GitHub or contact the development team.
|
||||
462
backtest_runner.py
Normal file
462
backtest_runner.py
Normal file
@ -0,0 +1,462 @@
|
||||
import pandas as pd
|
||||
import concurrent.futures
|
||||
import logging
|
||||
from typing import List, Tuple, Dict, Any, Optional
|
||||
|
||||
from cycles.utils.storage import Storage
|
||||
from cycles.utils.system import SystemUtils
|
||||
from cycles.utils.progress_manager import ProgressManager
|
||||
from result_processor import ResultProcessor
|
||||
|
||||
|
||||
def _process_single_task_static(task: Tuple[str, str, pd.DataFrame, float, float], progress_callback=None) -> Tuple[List[Dict], List[Dict]]:
|
||||
"""
|
||||
Static version of _process_single_task for use with ProcessPoolExecutor
|
||||
|
||||
Args:
|
||||
task: Tuple of (task_id, timeframe, data_1min, stop_loss_pct, initial_usd)
|
||||
progress_callback: Optional progress callback function
|
||||
|
||||
Returns:
|
||||
Tuple of (results, trades)
|
||||
"""
|
||||
task_id, timeframe, data_1min, stop_loss_pct, initial_usd = task
|
||||
|
||||
try:
|
||||
if timeframe == "1T" or timeframe == "1min":
|
||||
df = data_1min.copy()
|
||||
else:
|
||||
df = _resample_data_static(data_1min, timeframe)
|
||||
|
||||
# Create required components for processing
|
||||
from cycles.utils.storage import Storage
|
||||
from result_processor import ResultProcessor
|
||||
|
||||
# Create storage with default paths (for subprocess)
|
||||
storage = Storage()
|
||||
result_processor = ResultProcessor(storage)
|
||||
|
||||
results, trades = result_processor.process_timeframe_results(
|
||||
data_1min,
|
||||
df,
|
||||
[stop_loss_pct],
|
||||
timeframe,
|
||||
initial_usd,
|
||||
progress_callback=progress_callback
|
||||
)
|
||||
|
||||
return results, trades
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to process {timeframe} with stop loss {stop_loss_pct}: {e}"
|
||||
raise RuntimeError(error_msg) from e
|
||||
|
||||
|
||||
def _resample_data_static(data_1min: pd.DataFrame, timeframe: str) -> pd.DataFrame:
|
||||
"""
|
||||
Static function to resample 1-minute data to specified timeframe
|
||||
|
||||
Args:
|
||||
data_1min: 1-minute data DataFrame
|
||||
timeframe: Target timeframe string
|
||||
|
||||
Returns:
|
||||
Resampled DataFrame
|
||||
"""
|
||||
try:
|
||||
agg_dict = {
|
||||
'open': 'first',
|
||||
'high': 'max',
|
||||
'low': 'min',
|
||||
'close': 'last',
|
||||
'volume': 'sum'
|
||||
}
|
||||
|
||||
if 'predicted_close_price' in data_1min.columns:
|
||||
agg_dict['predicted_close_price'] = 'last'
|
||||
|
||||
resampled = data_1min.resample(timeframe).agg(agg_dict).dropna()
|
||||
|
||||
return resampled.reset_index()
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to resample data to {timeframe}: {e}"
|
||||
raise ValueError(error_msg) from e
|
||||
|
||||
|
||||
class BacktestRunner:
|
||||
"""Handles the execution of backtests across multiple timeframes and parameters"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
storage: Storage,
|
||||
system_utils: SystemUtils,
|
||||
result_processor: ResultProcessor,
|
||||
logging_instance: Optional[logging.Logger] = None,
|
||||
show_progress: bool = True
|
||||
):
|
||||
"""
|
||||
Initialize backtest runner
|
||||
|
||||
Args:
|
||||
storage: Storage instance for data operations
|
||||
system_utils: System utilities for resource management
|
||||
result_processor: Result processor for handling outputs
|
||||
logging_instance: Optional logging instance
|
||||
show_progress: Whether to show visual progress bars
|
||||
"""
|
||||
self.storage = storage
|
||||
self.system_utils = system_utils
|
||||
self.result_processor = result_processor
|
||||
self.logging = logging_instance
|
||||
self.show_progress = show_progress
|
||||
self.progress_manager = ProgressManager() if show_progress else None
|
||||
|
||||
def run_backtests(
|
||||
self,
|
||||
data_1min: pd.DataFrame,
|
||||
timeframes: List[str],
|
||||
stop_loss_pcts: List[float],
|
||||
initial_usd: float,
|
||||
debug: bool = False
|
||||
) -> Tuple[List[Dict], List[Dict]]:
|
||||
"""
|
||||
Run backtests across all timeframe and stop loss combinations
|
||||
|
||||
Args:
|
||||
data_1min: 1-minute data DataFrame
|
||||
timeframes: List of timeframe strings (e.g., ['1D', '6h'])
|
||||
stop_loss_pcts: List of stop loss percentages
|
||||
initial_usd: Initial USD amount
|
||||
debug: Whether to enable debug mode
|
||||
|
||||
Returns:
|
||||
Tuple of (all_results, all_trades)
|
||||
"""
|
||||
# Create tasks for all combinations
|
||||
tasks = self._create_tasks(timeframes, stop_loss_pcts, data_1min, initial_usd)
|
||||
|
||||
if self.logging:
|
||||
self.logging.info(f"Starting {len(tasks)} backtest tasks")
|
||||
|
||||
if debug:
|
||||
return self._run_sequential(tasks)
|
||||
else:
|
||||
return self._run_parallel(tasks)
|
||||
|
||||
def _create_tasks(
|
||||
self,
|
||||
timeframes: List[str],
|
||||
stop_loss_pcts: List[float],
|
||||
data_1min: pd.DataFrame,
|
||||
initial_usd: float
|
||||
) -> List[Tuple]:
|
||||
"""Create task tuples for processing"""
|
||||
tasks = []
|
||||
for timeframe in timeframes:
|
||||
for stop_loss_pct in stop_loss_pcts:
|
||||
task_id = f"{timeframe}_{stop_loss_pct}"
|
||||
task = (task_id, timeframe, data_1min, stop_loss_pct, initial_usd)
|
||||
tasks.append(task)
|
||||
return tasks
|
||||
|
||||
|
||||
|
||||
def _run_sequential(self, tasks: List[Tuple]) -> Tuple[List[Dict], List[Dict]]:
|
||||
"""Run tasks sequentially (for debug mode)"""
|
||||
# Initialize progress tracking if enabled
|
||||
if self.progress_manager:
|
||||
for task in tasks:
|
||||
task_id, timeframe, data_1min, stop_loss_pct, initial_usd = task
|
||||
|
||||
# Calculate actual DataFrame size that will be processed
|
||||
if timeframe == "1T" or timeframe == "1min":
|
||||
actual_df_size = len(data_1min)
|
||||
else:
|
||||
# Get the actual resampled DataFrame size
|
||||
temp_df = self._resample_data(data_1min, timeframe)
|
||||
actual_df_size = len(temp_df)
|
||||
|
||||
task_name = f"{timeframe} SL:{stop_loss_pct:.0%}"
|
||||
self.progress_manager.start_task(task_id, task_name, actual_df_size)
|
||||
|
||||
self.progress_manager.start_display()
|
||||
|
||||
all_results = []
|
||||
all_trades = []
|
||||
|
||||
try:
|
||||
for task in tasks:
|
||||
try:
|
||||
# Get progress callback for this task if available
|
||||
progress_callback = None
|
||||
if self.progress_manager:
|
||||
progress_callback = self.progress_manager.get_task_progress_callback(task[0])
|
||||
|
||||
results, trades = self._process_single_task(task, progress_callback)
|
||||
|
||||
if results:
|
||||
all_results.extend(results)
|
||||
if trades:
|
||||
all_trades.extend(trades)
|
||||
|
||||
# Mark task as completed
|
||||
if self.progress_manager:
|
||||
self.progress_manager.complete_task(task[0])
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Error processing task {task[1]} with stop loss {task[3]}: {e}"
|
||||
|
||||
if self.logging:
|
||||
self.logging.error(error_msg)
|
||||
|
||||
raise RuntimeError(error_msg) from e
|
||||
finally:
|
||||
# Stop progress display
|
||||
if self.progress_manager:
|
||||
self.progress_manager.stop_display()
|
||||
|
||||
return all_results, all_trades
|
||||
|
||||
def _run_parallel(self, tasks: List[Tuple]) -> Tuple[List[Dict], List[Dict]]:
|
||||
"""Run tasks in parallel using ProcessPoolExecutor"""
|
||||
workers = self.system_utils.get_optimal_workers()
|
||||
|
||||
if self.logging:
|
||||
self.logging.info(f"Running {len(tasks)} tasks with {workers} workers")
|
||||
|
||||
# OPTIMIZATION: Disable progress manager for parallel execution to reduce overhead
|
||||
# Progress tracking adds significant overhead in multiprocessing
|
||||
if self.progress_manager and self.logging:
|
||||
self.logging.info("Progress tracking disabled for parallel execution (performance optimization)")
|
||||
|
||||
all_results = []
|
||||
all_trades = []
|
||||
completed_tasks = 0
|
||||
|
||||
try:
|
||||
with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as executor:
|
||||
future_to_task = {
|
||||
executor.submit(_process_single_task_static, task): task
|
||||
for task in tasks
|
||||
}
|
||||
|
||||
for future in concurrent.futures.as_completed(future_to_task):
|
||||
task = future_to_task[future]
|
||||
try:
|
||||
results, trades = future.result()
|
||||
if results:
|
||||
all_results.extend(results)
|
||||
if trades:
|
||||
all_trades.extend(trades)
|
||||
|
||||
completed_tasks += 1
|
||||
|
||||
if self.logging:
|
||||
self.logging.info(f"Completed task {task[0]} ({completed_tasks}/{len(tasks)})")
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Task {task[1]} with stop loss {task[3]} failed: {e}"
|
||||
if self.logging:
|
||||
self.logging.error(error_msg)
|
||||
raise RuntimeError(error_msg) from e
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Parallel execution failed: {e}"
|
||||
if self.logging:
|
||||
self.logging.error(error_msg)
|
||||
raise RuntimeError(error_msg) from e
|
||||
|
||||
finally:
|
||||
# Stop progress display
|
||||
if self.progress_manager:
|
||||
self.progress_manager.stop_display()
|
||||
|
||||
if self.logging:
|
||||
self.logging.info(f"All {len(tasks)} tasks completed successfully")
|
||||
|
||||
return all_results, all_trades
|
||||
|
||||
def _process_single_task(
|
||||
self,
|
||||
task: Tuple[str, str, pd.DataFrame, float, float],
|
||||
progress_callback=None
|
||||
) -> Tuple[List[Dict], List[Dict]]:
|
||||
"""
|
||||
Process a single backtest task
|
||||
|
||||
Args:
|
||||
task: Tuple of (task_id, timeframe, data_1min, stop_loss_pct, initial_usd)
|
||||
progress_callback: Optional progress callback function
|
||||
|
||||
Returns:
|
||||
Tuple of (results, trades)
|
||||
"""
|
||||
task_id, timeframe, data_1min, stop_loss_pct, initial_usd = task
|
||||
|
||||
try:
|
||||
if timeframe == "1T" or timeframe == "1min":
|
||||
df = data_1min.copy()
|
||||
else:
|
||||
df = self._resample_data(data_1min, timeframe)
|
||||
|
||||
results, trades = self.result_processor.process_timeframe_results(
|
||||
data_1min,
|
||||
df,
|
||||
[stop_loss_pct],
|
||||
timeframe,
|
||||
initial_usd,
|
||||
progress_callback=progress_callback
|
||||
)
|
||||
|
||||
# OPTIMIZATION: Skip individual trade file saving during parallel execution
|
||||
# Trade files will be saved in batch at the end
|
||||
# if trades:
|
||||
# self.result_processor.save_trade_file(trades, timeframe, stop_loss_pct)
|
||||
|
||||
if self.logging:
|
||||
self.logging.info(f"Completed task {task_id}: {len(results)} results, {len(trades)} trades")
|
||||
|
||||
return results, trades
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to process {timeframe} with stop loss {stop_loss_pct}: {e}"
|
||||
if self.logging:
|
||||
self.logging.error(error_msg)
|
||||
raise RuntimeError(error_msg) from e
|
||||
|
||||
def _resample_data(self, data_1min: pd.DataFrame, timeframe: str) -> pd.DataFrame:
|
||||
"""
|
||||
Resample 1-minute data to specified timeframe
|
||||
|
||||
Args:
|
||||
data_1min: 1-minute data DataFrame
|
||||
timeframe: Target timeframe string
|
||||
|
||||
Returns:
|
||||
Resampled DataFrame
|
||||
"""
|
||||
try:
|
||||
agg_dict = {
|
||||
'open': 'first',
|
||||
'high': 'max',
|
||||
'low': 'min',
|
||||
'close': 'last',
|
||||
'volume': 'sum'
|
||||
}
|
||||
|
||||
if 'predicted_close_price' in data_1min.columns:
|
||||
agg_dict['predicted_close_price'] = 'last'
|
||||
|
||||
resampled = data_1min.resample(timeframe).agg(agg_dict).dropna()
|
||||
|
||||
return resampled.reset_index()
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to resample data to {timeframe}: {e}"
|
||||
if self.logging:
|
||||
self.logging.error(error_msg)
|
||||
raise ValueError(error_msg) from e
|
||||
|
||||
def _get_timeframe_factor(self, timeframe: str) -> int:
|
||||
"""
|
||||
Get the factor by which data is reduced when resampling to timeframe
|
||||
|
||||
Args:
|
||||
timeframe: Target timeframe string (e.g., '1h', '4h', '1D')
|
||||
|
||||
Returns:
|
||||
Factor for estimating data size after resampling
|
||||
"""
|
||||
timeframe_factors = {
|
||||
'1T': 1, '1min': 1,
|
||||
'5T': 5, '5min': 5,
|
||||
'15T': 15, '15min': 15,
|
||||
'30T': 30, '30min': 30,
|
||||
'1h': 60, '1H': 60,
|
||||
'2h': 120, '2H': 120,
|
||||
'4h': 240, '4H': 240,
|
||||
'6h': 360, '6H': 360,
|
||||
'8h': 480, '8H': 480,
|
||||
'12h': 720, '12H': 720,
|
||||
'1D': 1440, '1d': 1440,
|
||||
'2D': 2880, '2d': 2880,
|
||||
'3D': 4320, '3d': 4320,
|
||||
'1W': 10080, '1w': 10080
|
||||
}
|
||||
return timeframe_factors.get(timeframe, 60) # Default to 1 hour if unknown
|
||||
|
||||
def load_data(self, filename: str, start_date: str, stop_date: str) -> pd.DataFrame:
|
||||
"""
|
||||
Load and validate data for backtesting
|
||||
|
||||
Args:
|
||||
filename: Name of data file
|
||||
start_date: Start date string
|
||||
stop_date: Stop date string
|
||||
|
||||
Returns:
|
||||
Loaded and validated DataFrame
|
||||
|
||||
Raises:
|
||||
ValueError: If data is empty or invalid
|
||||
"""
|
||||
try:
|
||||
data = self.storage.load_data(filename, start_date, stop_date)
|
||||
|
||||
if data.empty:
|
||||
raise ValueError(f"No data loaded for period {start_date} to {stop_date}")
|
||||
|
||||
required_columns = ['open', 'high', 'low', 'close', 'volume']
|
||||
|
||||
if 'predicted_close_price' in data.columns:
|
||||
required_columns.append('predicted_close_price')
|
||||
|
||||
missing_columns = [col for col in required_columns if col not in data.columns]
|
||||
|
||||
if missing_columns:
|
||||
raise ValueError(f"Missing required columns: {missing_columns}")
|
||||
|
||||
if self.logging:
|
||||
self.logging.info(f"Loaded {len(data)} rows of data from {filename}")
|
||||
|
||||
return data
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to load data from {filename}: {e}"
|
||||
if self.logging:
|
||||
self.logging.error(error_msg)
|
||||
raise RuntimeError(error_msg) from e
|
||||
|
||||
def validate_inputs(
|
||||
self,
|
||||
timeframes: List[str],
|
||||
stop_loss_pcts: List[float],
|
||||
initial_usd: float
|
||||
) -> None:
|
||||
"""
|
||||
Validate backtest input parameters
|
||||
|
||||
Args:
|
||||
timeframes: List of timeframe strings
|
||||
stop_loss_pcts: List of stop loss percentages
|
||||
initial_usd: Initial USD amount
|
||||
|
||||
Raises:
|
||||
ValueError: If any input is invalid
|
||||
"""
|
||||
if not timeframes:
|
||||
raise ValueError("At least one timeframe must be specified")
|
||||
|
||||
if not stop_loss_pcts:
|
||||
raise ValueError("At least one stop loss percentage must be specified")
|
||||
|
||||
for pct in stop_loss_pcts:
|
||||
if not 0 < pct < 1:
|
||||
raise ValueError(f"Stop loss percentage must be between 0 and 1, got: {pct}")
|
||||
|
||||
if initial_usd <= 0:
|
||||
raise ValueError("Initial USD must be positive")
|
||||
|
||||
if self.logging:
|
||||
self.logging.info("Input validation completed successfully")
|
||||
175
config_manager.py
Normal file
175
config_manager.py
Normal file
@ -0,0 +1,175 @@
|
||||
import json
|
||||
import datetime
|
||||
import logging
|
||||
from typing import Dict, List, Optional, Any
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
class ConfigManager:
|
||||
"""Manages configuration loading, validation, and default values for backtest operations"""
|
||||
|
||||
DEFAULT_CONFIG = {
|
||||
"start_date": "2025-05-01",
|
||||
"stop_date": datetime.datetime.today().strftime('%Y-%m-%d'),
|
||||
"initial_usd": 10000,
|
||||
"timeframes": ["1D", "6h", "3h", "1h", "30m", "15m", "5m", "1m"],
|
||||
"stop_loss_pcts": [0.01, 0.02, 0.03, 0.05],
|
||||
"data_dir": "../data",
|
||||
"results_dir": "results"
|
||||
}
|
||||
|
||||
def __init__(self, logging_instance: Optional[logging.Logger] = None):
|
||||
"""
|
||||
Initialize configuration manager
|
||||
|
||||
Args:
|
||||
logging_instance: Optional logging instance for output
|
||||
"""
|
||||
self.logging = logging_instance
|
||||
self.config = {}
|
||||
|
||||
def load_config(self, config_path: Optional[str] = None) -> Dict[str, Any]:
|
||||
"""
|
||||
Load configuration from file or interactive input
|
||||
|
||||
Args:
|
||||
config_path: Path to JSON config file, if None prompts for interactive input
|
||||
|
||||
Returns:
|
||||
Dictionary containing validated configuration
|
||||
|
||||
Raises:
|
||||
FileNotFoundError: If config file doesn't exist
|
||||
json.JSONDecodeError: If config file has invalid JSON
|
||||
ValueError: If configuration values are invalid
|
||||
"""
|
||||
if config_path:
|
||||
self.config = self._load_from_file(config_path)
|
||||
else:
|
||||
self.config = self._load_interactive()
|
||||
|
||||
self._validate_config()
|
||||
return self.config
|
||||
|
||||
def _load_from_file(self, config_path: str) -> Dict[str, Any]:
|
||||
"""Load configuration from JSON file"""
|
||||
try:
|
||||
config_file = Path(config_path)
|
||||
if not config_file.exists():
|
||||
raise FileNotFoundError(f"Configuration file not found: {config_path}")
|
||||
|
||||
with open(config_file, 'r') as f:
|
||||
config = json.load(f)
|
||||
|
||||
if self.logging:
|
||||
self.logging.info(f"Configuration loaded from {config_path}")
|
||||
|
||||
return config
|
||||
|
||||
except json.JSONDecodeError as e:
|
||||
error_msg = f"Invalid JSON in configuration file {config_path}: {e}"
|
||||
if self.logging:
|
||||
self.logging.error(error_msg)
|
||||
raise json.JSONDecodeError(error_msg, e.doc, e.pos)
|
||||
|
||||
def _load_interactive(self) -> Dict[str, Any]:
|
||||
"""Load configuration through interactive prompts"""
|
||||
print("No config file provided. Please enter the following values (press Enter to use default):")
|
||||
|
||||
config = {}
|
||||
|
||||
# Start date
|
||||
start_date = input(f"Start date [{self.DEFAULT_CONFIG['start_date']}]: ") or self.DEFAULT_CONFIG['start_date']
|
||||
config['start_date'] = start_date
|
||||
|
||||
# Stop date
|
||||
stop_date = input(f"Stop date [{self.DEFAULT_CONFIG['stop_date']}]: ") or self.DEFAULT_CONFIG['stop_date']
|
||||
config['stop_date'] = stop_date
|
||||
|
||||
# Initial USD
|
||||
initial_usd_str = input(f"Initial USD [{self.DEFAULT_CONFIG['initial_usd']}]: ") or str(self.DEFAULT_CONFIG['initial_usd'])
|
||||
try:
|
||||
config['initial_usd'] = float(initial_usd_str)
|
||||
except ValueError:
|
||||
raise ValueError(f"Invalid initial USD value: {initial_usd_str}")
|
||||
|
||||
# Timeframes
|
||||
timeframes_str = input(f"Timeframes (comma separated) [{', '.join(self.DEFAULT_CONFIG['timeframes'])}]: ") or ','.join(self.DEFAULT_CONFIG['timeframes'])
|
||||
config['timeframes'] = [tf.strip() for tf in timeframes_str.split(',') if tf.strip()]
|
||||
|
||||
# Stop loss percentages
|
||||
stop_loss_pcts_str = input(f"Stop loss pcts (comma separated) [{', '.join(str(x) for x in self.DEFAULT_CONFIG['stop_loss_pcts'])}]: ") or ','.join(str(x) for x in self.DEFAULT_CONFIG['stop_loss_pcts'])
|
||||
try:
|
||||
config['stop_loss_pcts'] = [float(x.strip()) for x in stop_loss_pcts_str.split(',') if x.strip()]
|
||||
except ValueError:
|
||||
raise ValueError(f"Invalid stop loss percentages: {stop_loss_pcts_str}")
|
||||
|
||||
# Add default directories
|
||||
config['data_dir'] = self.DEFAULT_CONFIG['data_dir']
|
||||
config['results_dir'] = self.DEFAULT_CONFIG['results_dir']
|
||||
|
||||
return config
|
||||
|
||||
def _validate_config(self) -> None:
|
||||
"""
|
||||
Validate configuration values
|
||||
|
||||
Raises:
|
||||
ValueError: If any configuration value is invalid
|
||||
"""
|
||||
# Validate initial USD
|
||||
if self.config.get('initial_usd', 0) <= 0:
|
||||
raise ValueError("Initial USD must be positive")
|
||||
|
||||
# Validate stop loss percentages
|
||||
stop_loss_pcts = self.config.get('stop_loss_pcts', [])
|
||||
for pct in stop_loss_pcts:
|
||||
if not 0 < pct < 1:
|
||||
raise ValueError(f"Stop loss percentage must be between 0 and 1, got: {pct}")
|
||||
|
||||
# Validate dates
|
||||
try:
|
||||
datetime.datetime.strptime(self.config['start_date'], '%Y-%m-%d')
|
||||
datetime.datetime.strptime(self.config['stop_date'], '%Y-%m-%d')
|
||||
except ValueError as e:
|
||||
raise ValueError(f"Invalid date format (should be YYYY-MM-DD): {e}")
|
||||
|
||||
# Validate timeframes
|
||||
timeframes = self.config.get('timeframes', [])
|
||||
if not timeframes:
|
||||
raise ValueError("At least one timeframe must be specified")
|
||||
|
||||
# Validate directories exist or can be created
|
||||
for dir_key in ['data_dir', 'results_dir']:
|
||||
dir_path = Path(self.config.get(dir_key, ''))
|
||||
try:
|
||||
dir_path.mkdir(parents=True, exist_ok=True)
|
||||
except Exception as e:
|
||||
raise ValueError(f"Cannot create directory {dir_path}: {e}")
|
||||
|
||||
if self.logging:
|
||||
self.logging.info("Configuration validation completed successfully")
|
||||
|
||||
def get_config(self) -> Dict[str, Any]:
|
||||
"""Return the current configuration"""
|
||||
return self.config.copy()
|
||||
|
||||
def save_config(self, output_path: str) -> None:
|
||||
"""
|
||||
Save current configuration to file
|
||||
|
||||
Args:
|
||||
output_path: Path where to save the configuration
|
||||
"""
|
||||
try:
|
||||
with open(output_path, 'w') as f:
|
||||
json.dump(self.config, f, indent=2)
|
||||
|
||||
if self.logging:
|
||||
self.logging.info(f"Configuration saved to {output_path}")
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to save configuration to {output_path}: {e}"
|
||||
if self.logging:
|
||||
self.logging.error(error_msg)
|
||||
raise
|
||||
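For reference, a minimal standalone sketch (not part of the repo) that mirrors the validation rules implemented above, applied to a config dict loaded from one of the JSON files added in this commit; the file path assumes the repo root as working directory.

# Standalone sketch: re-states the validation rules from _validate_config above.
import datetime
import json
from pathlib import Path

def validate_config(config: dict) -> None:
    if config.get('initial_usd', 0) <= 0:
        raise ValueError("Initial USD must be positive")
    for pct in config.get('stop_loss_pcts', []):
        if not 0 < pct < 1:
            raise ValueError(f"Stop loss percentage must be between 0 and 1, got: {pct}")
    # Dates must be YYYY-MM-DD; strptime raises ValueError otherwise
    datetime.datetime.strptime(config['start_date'], '%Y-%m-%d')
    datetime.datetime.strptime(config['stop_date'], '%Y-%m-%d')
    if not config.get('timeframes'):
        raise ValueError("At least one timeframe must be specified")
    for dir_key in ('data_dir', 'results_dir'):
        Path(config.get(dir_key, '.')).mkdir(parents=True, exist_ok=True)

config = json.loads(Path('configs/sample_config.json').read_text())
validate_config(config)   # raises ValueError on bad values, returns silently on success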
10
configs/flat_2021_2024_config.json
Normal file
@@ -0,0 +1,10 @@
|
||||
{
|
||||
"start_date": "2021-11-01",
|
||||
"stop_date": "2024-04-01",
|
||||
"initial_usd": 10000,
|
||||
"timeframes": ["1min", "2min", "3min", "4min", "5min", "10min", "15min", "30min", "1h", "2h", "4h", "6h", "8h", "12h", "1d"],
|
||||
"stop_loss_pcts": [0.01, 0.02, 0.03, 0.04, 0.05, 0.1],
|
||||
"data_dir": "../data",
|
||||
"results_dir": "../results",
|
||||
"debug": 0
|
||||
}
|
||||
10
configs/full_config.json
Normal file
@@ -0,0 +1,10 @@
|
||||
{
|
||||
"start_date": "2020-01-01",
|
||||
"stop_date": "2025-07-08",
|
||||
"initial_usd": 10000,
|
||||
"timeframes": ["1h", "4h", "15ME", "5ME", "1ME"],
|
||||
"stop_loss_pcts": [0.01, 0.02, 0.03, 0.05],
|
||||
"data_dir": "../data",
|
||||
"results_dir": "../results",
|
||||
"debug": 1
|
||||
}
|
||||
10
configs/sample_config.json
Normal file
@@ -0,0 +1,10 @@
|
||||
{
|
||||
"start_date": "2023-01-01",
|
||||
"stop_date": "2025-01-15",
|
||||
"initial_usd": 10000,
|
||||
"timeframes": ["4h"],
|
||||
"stop_loss_pcts": [0.05],
|
||||
"data_dir": "../data",
|
||||
"results_dir": "../results",
|
||||
"debug": 0
|
||||
}
|
||||
@@ -1,70 +1,30 @@
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
import logging
|
||||
from scipy.signal import find_peaks
|
||||
from matplotlib.patches import Rectangle
|
||||
from scipy import stats
|
||||
import concurrent.futures
|
||||
from functools import partial
|
||||
from functools import lru_cache
|
||||
import matplotlib.pyplot as plt
|
||||
|
||||
# Color configuration
|
||||
# Plot colors
|
||||
DARK_BG_COLOR = '#181C27'
|
||||
LEGEND_BG_COLOR = '#333333'
|
||||
TITLE_COLOR = 'white'
|
||||
AXIS_LABEL_COLOR = 'white'
|
||||
|
||||
# Candlestick colors
|
||||
CANDLE_UP_COLOR = '#089981' # Green
|
||||
CANDLE_DOWN_COLOR = '#F23645' # Red
|
||||
|
||||
# Marker colors
|
||||
MIN_COLOR = 'red'
|
||||
MAX_COLOR = 'green'
|
||||
|
||||
# Line style colors
|
||||
MIN_LINE_STYLE = 'g--' # Green dashed
|
||||
MAX_LINE_STYLE = 'r--' # Red dashed
|
||||
SMA7_LINE_STYLE = 'y-' # Yellow solid
|
||||
SMA15_LINE_STYLE = 'm-' # Magenta solid
|
||||
|
||||
# SuperTrend colors
|
||||
ST_COLOR_UP = 'g-'
|
||||
ST_COLOR_DOWN = 'r-'
|
||||
|
||||
# Cache the calculation results by function parameters
|
||||
@lru_cache(maxsize=32)
|
||||
def cached_supertrend_calculation(period, multiplier, data_tuple):
|
||||
# Convert tuple back to numpy arrays
|
||||
high = np.array(data_tuple[0])
|
||||
low = np.array(data_tuple[1])
|
||||
close = np.array(data_tuple[2])
|
||||
|
||||
# Calculate TR and ATR using vectorized operations
|
||||
tr = np.zeros_like(close)
|
||||
tr[0] = high[0] - low[0]
|
||||
hc_range = np.abs(high[1:] - close[:-1])
|
||||
lc_range = np.abs(low[1:] - close[:-1])
|
||||
hl_range = high[1:] - low[1:]
|
||||
tr[1:] = np.maximum.reduce([hl_range, hc_range, lc_range])
|
||||
|
||||
# Use numpy's exponential moving average
|
||||
atr = np.zeros_like(tr)
|
||||
atr[0] = tr[0]
|
||||
multiplier_ema = 2.0 / (period + 1)
|
||||
for i in range(1, len(tr)):
|
||||
atr[i] = (tr[i] * multiplier_ema) + (atr[i-1] * (1 - multiplier_ema))
|
||||
|
||||
# Calculate bands
|
||||
upper_band = np.zeros_like(close)
|
||||
lower_band = np.zeros_like(close)
|
||||
for i in range(len(close)):
|
||||
hl_avg = (high[i] + low[i]) / 2
|
||||
upper_band[i] = hl_avg + (multiplier * atr[i])
|
||||
lower_band[i] = hl_avg - (multiplier * atr[i])
|
||||
|
||||
final_upper = np.zeros_like(close)
|
||||
final_lower = np.zeros_like(close)
|
||||
supertrend = np.zeros_like(close)
|
||||
@@ -105,232 +65,151 @@ def cached_supertrend_calculation(period, multiplier, data_tuple):
|
||||
'lower_band': final_lower
|
||||
}
|
||||
|
||||
def calculate_supertrend_external(data, period, multiplier, close_column='close'):
|
||||
"""
|
||||
External function to calculate SuperTrend with configurable close column
|
||||
|
||||
Parameters:
|
||||
- data: DataFrame with OHLC data
|
||||
- period: int, period for ATR calculation
|
||||
- multiplier: float, multiplier for ATR
|
||||
- close_column: str, name of the column to use as close price (default: 'close')
|
||||
"""
|
||||
high_tuple = tuple(data['high'])
|
||||
low_tuple = tuple(data['low'])
|
||||
close_tuple = tuple(data[close_column])

# Call the cached function
|
||||
return cached_supertrend_calculation(period, multiplier, (high_tuple, low_tuple, close_tuple))
|
||||
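The tuple conversion above exists because functools.lru_cache only accepts hashable arguments; a small self-contained sketch of the same pattern (independent of this module) is:

# Sketch of the caching pattern used above: array-like inputs are passed as tuples
# so that lru_cache can hash them and reuse previous results.
from functools import lru_cache

import numpy as np
import pandas as pd

@lru_cache(maxsize=32)
def _cached_mean(values_tuple):
    return float(np.mean(np.array(values_tuple)))

def column_mean(df: pd.DataFrame, column: str = 'close') -> float:
    return _cached_mean(tuple(df[column]))

df = pd.DataFrame({'close': [1.0, 2.0, 3.0]})
print(column_mean(df))   # computed and cached
print(column_mean(df))   # served from the cache for identical input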
|
||||
|
||||
class Supertrends:
|
||||
def __init__(self, data, close_column='close', verbose=False, display=False):
|
||||
"""
|
||||
Initialize Supertrends calculator

Parameters:
- data: pandas DataFrame with OHLC data or list of prices
- close_column: str, name of the column to use as close price (default: 'close')
- verbose: bool, enable verbose logging
- display: bool, display mode (currently unused)
|
||||
"""
|
||||
|
||||
self.close_column = close_column
|
||||
self.data = data
|
||||
self.verbose = verbose
|
||||
self.display = display
|
||||
|
||||
# Only define display-related variables if display is True
|
||||
if self.display:
|
||||
# Plot style configuration
|
||||
self.plot_style = 'dark_background'
|
||||
self.bg_color = DARK_BG_COLOR
|
||||
self.plot_size = (12, 8)
|
||||
|
||||
# Candlestick configuration
|
||||
self.candle_width = 0.6
|
||||
self.candle_up_color = CANDLE_UP_COLOR
|
||||
self.candle_down_color = CANDLE_DOWN_COLOR
|
||||
self.candle_alpha = 0.8
|
||||
self.wick_width = 1
|
||||
|
||||
# Marker configuration
|
||||
self.min_marker = '^'
|
||||
self.min_color = MIN_COLOR
|
||||
self.min_size = 100
|
||||
self.max_marker = 'v'
|
||||
self.max_color = MAX_COLOR
|
||||
self.max_size = 100
|
||||
self.marker_zorder = 100
|
||||
|
||||
# Line configuration
|
||||
self.line_width = 1
|
||||
self.min_line_style = MIN_LINE_STYLE
|
||||
self.max_line_style = MAX_LINE_STYLE
|
||||
self.sma7_line_style = SMA7_LINE_STYLE
|
||||
self.sma15_line_style = SMA15_LINE_STYLE
|
||||
|
||||
# Text configuration
|
||||
self.title_size = 14
|
||||
self.title_color = TITLE_COLOR
|
||||
self.axis_label_size = 12
|
||||
self.axis_label_color = AXIS_LABEL_COLOR
|
||||
|
||||
# Legend configuration
|
||||
self.legend_loc = 'best'
|
||||
self.legend_bg_color = LEGEND_BG_COLOR
|
||||
|
||||
# Configure logging
|
||||
logging.basicConfig(level=logging.INFO if verbose else logging.WARNING,
|
||||
format='%(asctime)s - %(levelname)s - %(message)s')
|
||||
self.logger = logging.getLogger('TrendDetectorSimple')
|
||||
|
||||
# Convert data to pandas DataFrame if it's not already
|
||||
if not isinstance(self.data, pd.DataFrame):
|
||||
if isinstance(self.data, list):
|
||||
self.data = pd.DataFrame({self.close_column: self.data})
|
||||
else:
|
||||
raise ValueError("Data must be a pandas DataFrame or a list")
|
||||
|
||||
# Validate that required columns exist
|
||||
required_columns = ['high', 'low', self.close_column]
|
||||
missing_columns = [col for col in required_columns if col not in self.data.columns]
|
||||
if missing_columns:
|
||||
raise ValueError(f"Missing required columns: {missing_columns}")
|
||||
|
||||
def calculate_tr(self):
|
||||
"""Calculate True Range using the configured close column"""
|
||||
df = self.data.copy()
|
||||
high = df['high'].values
|
||||
low = df['low'].values
|
||||
close = df[self.close_column].values
|
||||
tr = np.zeros_like(close)
|
||||
tr[0] = high[0] - low[0]
|
||||
for i in range(1, len(close)):
|
||||
hl_range = high[i] - low[i]
|
||||
hc_range = abs(high[i] - close[i-1])
|
||||
lc_range = abs(low[i] - close[i-1])
|
||||
tr[i] = max(hl_range, hc_range, lc_range)
|
||||
return tr
|
||||
|
||||
def calculate_atr(self, period=14):
|
||||
"""Calculate Average True Range"""
|
||||
tr = self.calculate_tr()
|
||||
atr = np.zeros_like(tr)
|
||||
atr[0] = tr[0]
|
||||
multiplier = 2.0 / (period + 1)
|
||||
for i in range(1, len(tr)):
|
||||
atr[i] = (tr[i] * multiplier) + (atr[i-1] * (1 - multiplier))
|
||||
return atr
|
||||
|
||||
def calculate_supertrend(self, period=10, multiplier=3.0):
|
||||
"""
|
||||
Calculate SuperTrend indicator for the price data using the configured close column.
SuperTrend is a trend-following indicator that uses ATR to determine the trend direction.

Parameters:
- period: int, the period for the ATR calculation (default: 10)
- multiplier: float, the multiplier for the ATR (default: 3.0)

Returns:
- Dictionary containing SuperTrend values, trend direction, and upper/lower bands
|
||||
"""
|
||||
df = self.data.copy()
|
||||
high = df['high'].values
|
||||
low = df['low'].values
|
||||
close = df[self.close_column].values
|
||||
atr = self.calculate_atr(period)
|
||||
upper_band = np.zeros_like(close)
|
||||
lower_band = np.zeros_like(close)
|
||||
for i in range(len(close)):
|
||||
hl_avg = (high[i] + low[i]) / 2
|
||||
upper_band[i] = hl_avg + (multiplier * atr[i])
|
||||
lower_band[i] = hl_avg - (multiplier * atr[i])
|
||||
final_upper = np.zeros_like(close)
|
||||
final_lower = np.zeros_like(close)
|
||||
supertrend = np.zeros_like(close)
|
||||
trend = np.zeros_like(close)
|
||||
final_upper[0] = upper_band[0]
|
||||
final_lower[0] = lower_band[0]
|
||||
if close[0] <= upper_band[0]:
|
||||
supertrend[0] = upper_band[0]
|
||||
trend[0] = -1
|
||||
else:
|
||||
supertrend[0] = lower_band[0]
|
||||
trend[0] = 1
|
||||
for i in range(1, len(close)):
|
||||
# Current high - current low
|
||||
hl_range = high[i] - low[i]
|
||||
# |Current high - previous close|
|
||||
hc_range = abs(high[i] - close[i-1])
|
||||
# |Current low - previous close|
|
||||
lc_range = abs(low[i] - close[i-1])
|
||||
|
||||
# TR is the maximum of these three values
|
||||
tr[i] = max(hl_range, hc_range, lc_range)
|
||||
|
||||
return tr
|
||||
|
||||
def calculate_atr(self, period=14):
|
||||
"""
|
||||
Calculate Average True Range (ATR) for the price data.
|
||||
|
||||
ATR is the exponential moving average of the True Range over a specified period.
|
||||
|
||||
Parameters:
|
||||
- period: int, the period for the ATR calculation (default: 14)
|
||||
|
||||
Returns:
|
||||
- Numpy array of ATR values
|
||||
"""
|
||||
|
||||
tr = self.calculate_tr()
|
||||
atr = np.zeros_like(tr)
|
||||
|
||||
# First ATR value is just the first TR
|
||||
atr[0] = tr[0]
|
||||
|
||||
# Calculate exponential moving average (EMA) of TR
|
||||
multiplier = 2.0 / (period + 1)
|
||||
|
||||
for i in range(1, len(tr)):
|
||||
atr[i] = (tr[i] * multiplier) + (atr[i-1] * (1 - multiplier))
|
||||
|
||||
return atr
|
||||
|
||||
def detect_trends(self):
|
||||
"""
|
||||
Detect trends by identifying local minima and maxima in the price data
|
||||
using scipy.signal.find_peaks.
|
||||
|
||||
Parameters:
|
||||
- prominence: float, required prominence of peaks (relative to the price range)
|
||||
- width: int, required width of peaks in data points
|
||||
|
||||
Returns:
|
||||
- DataFrame with columns for timestamps, prices, and trend indicators
|
||||
- Dictionary containing analysis results including linear regression, SMAs, and SuperTrend indicators
|
||||
"""
|
||||
df = self.data
|
||||
# close_prices = df['close'].values
|
||||
|
||||
# max_peaks, _ = find_peaks(close_prices)
|
||||
# min_peaks, _ = find_peaks(-close_prices)
|
||||
|
||||
# df['is_min'] = False
|
||||
# df['is_max'] = False
|
||||
|
||||
# for peak in max_peaks:
|
||||
# df.at[peak, 'is_max'] = True
|
||||
# for peak in min_peaks:
|
||||
# df.at[peak, 'is_min'] = True
|
||||
|
||||
# result = df[['timestamp', 'close', 'is_min', 'is_max']].copy()
|
||||
|
||||
# Perform linear regression on min_peaks and max_peaks
|
||||
# min_prices = df['close'].iloc[min_peaks].values
|
||||
# max_prices = df['close'].iloc[max_peaks].values
|
||||
|
||||
# Linear regression for min peaks if we have at least 2 points
|
||||
# min_slope, min_intercept, min_r_value, _, _ = stats.linregress(min_peaks, min_prices)
|
||||
# Linear regression for max peaks if we have at least 2 points
|
||||
# max_slope, max_intercept, max_r_value, _, _ = stats.linregress(max_peaks, max_prices)
|
||||
if (upper_band[i] < final_upper[i-1]) or (close[i-1] > final_upper[i-1]):
|
||||
final_upper[i] = upper_band[i]
|
||||
else:
|
||||
final_upper[i] = final_upper[i-1]
|
||||
if (lower_band[i] > final_lower[i-1]) or (close[i-1] < final_lower[i-1]):
|
||||
final_lower[i] = lower_band[i]
|
||||
else:
|
||||
final_lower[i] = final_lower[i-1]
|
||||
if supertrend[i-1] == final_upper[i-1] and close[i] <= final_upper[i]:
|
||||
supertrend[i] = final_upper[i]
|
||||
trend[i] = -1
|
||||
elif supertrend[i-1] == final_upper[i-1] and close[i] > final_upper[i]:
|
||||
supertrend[i] = final_lower[i]
|
||||
trend[i] = 1
|
||||
elif supertrend[i-1] == final_lower[i-1] and close[i] >= final_lower[i]:
|
||||
supertrend[i] = final_lower[i]
|
||||
trend[i] = 1
|
||||
elif supertrend[i-1] == final_lower[i-1] and close[i] < final_lower[i]:
|
||||
supertrend[i] = final_upper[i]
|
||||
trend[i] = -1
|
||||
supertrend_results = {
|
||||
'supertrend': supertrend,
|
||||
'trend': trend,
|
||||
'upper_band': final_upper,
|
||||
'lower_band': final_lower
|
||||
}
|
||||
return supertrend_results
|
||||
|
||||
# Calculate Simple Moving Averages (SMA) for 7 and 15 periods
|
||||
# sma_7 = pd.Series(close_prices).rolling(window=7, min_periods=1).mean().values
|
||||
# sma_15 = pd.Series(close_prices).rolling(window=15, min_periods=1).mean().values
|
||||
|
||||
analysis_results = {}
|
||||
# analysis_results['linear_regression'] = {
|
||||
# 'min': {
|
||||
# 'slope': min_slope,
|
||||
# 'intercept': min_intercept,
|
||||
# 'r_squared': min_r_value ** 2
|
||||
# },
|
||||
# 'max': {
|
||||
# 'slope': max_slope,
|
||||
# 'intercept': max_intercept,
|
||||
# 'r_squared': max_r_value ** 2
|
||||
# }
|
||||
# }
|
||||
# analysis_results['sma'] = {
|
||||
# '7': sma_7,
|
||||
# '15': sma_15
|
||||
# }
|
||||
|
||||
# Calculate SuperTrend indicators
|
||||
supertrend_results_list = self._calculate_supertrend_indicators()
|
||||
analysis_results['supertrend'] = supertrend_results_list
|
||||
|
||||
return analysis_results
|
||||
|
||||
def calculate_supertrend_indicators(self):
|
||||
"""
|
||||
Calculate SuperTrend indicators with different parameter sets in parallel.
|
||||
Returns:
|
||||
- list, the SuperTrend results
|
||||
"""
|
||||
supertrend_params = [
{"period": 12, "multiplier": 3.0},
{"period": 10, "multiplier": 1.0},
{"period": 11, "multiplier": 2.0}
]

# For just 3 calculations, direct calculation is faster than a process pool
results = []
for p in supertrend_params:
result = self.calculate_supertrend(period=p["period"], multiplier=p["multiplier"])
results.append({
"results": result,
"params": p
})
return results
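A hedged usage sketch for the Supertrends class above, run on synthetic OHLC data; the import path follows the one used later in the backtest module, and the random data is only an illustration.

# Usage sketch with synthetic data (assumes the package layout cycles/supertrend.py).
import numpy as np
import pandas as pd
from cycles.supertrend import Supertrends

rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 500))
df = pd.DataFrame({
    'high': close + rng.uniform(0.1, 1.0, 500),
    'low': close - rng.uniform(0.1, 1.0, 500),
    'close': close,
})

st = Supertrends(df, close_column='close', verbose=False)
for item in st.calculate_supertrend_indicators():
    p = item['params']
    trend = item['results']['trend']
    print(p['period'], p['multiplier'], int((trend == 1).sum()), 'bullish bars')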
|
||||
|
||||
@@ -1,167 +1,332 @@
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
import time
|
||||
|
||||
from cycles.supertrend import Supertrends
|
||||
from cycles.market_fees import MarketFees
|
||||
|
||||
class Backtest:
|
||||
def __init__(self, initial_usd, df, min1_df, init_strategy_fields) -> None:
|
||||
self.initial_usd = initial_usd
|
||||
self.usd = initial_usd
|
||||
self.max_balance = initial_usd
|
||||
self.coin = 0
|
||||
self.position = 0
|
||||
self.entry_price = 0
|
||||
self.entry_time = None
|
||||
self.current_trade_min1_start_idx = None
|
||||
self.current_min1_end_idx = None
|
||||
self.price_open = None
|
||||
self.price_close = None
|
||||
self.current_date = None
|
||||
self.strategies = {}
|
||||
self.df = df
|
||||
self.min1_df = min1_df
|
||||
|
||||
self.trade_log = []
|
||||
self.drawdowns = []
|
||||
self.trades = []
|
||||
|
||||
self = init_strategy_fields(self)
|
||||
|
||||
@staticmethod
def run(min1_df, df, initial_usd, stop_loss_pct, progress_callback=None, verbose=False):
"""
Backtest a simple strategy using the meta supertrend (all three supertrends agree).
Buys when meta supertrend is positive, sells when negative, applies a percentage stop loss.

Parameters:
- min1_df: pandas DataFrame, 1-minute timeframe data for more accurate stop loss checking (optional)
- df: pandas DataFrame, main timeframe data for signals
- initial_usd: float, starting USD amount
- stop_loss_pct: float, stop loss as a fraction (e.g. 0.05 for 5%)
- progress_callback: callable, optional callback function to report progress (current_step)
- verbose: bool, enable debug logging for stop loss checks

Returns:
- dict with keys: initial_usd, final_usd, n_trades, n_stop_loss, win_rate, max_drawdown, avg_trade, trade_log, trades, total_fees_usd, and optionally first_trade and last_trade.
"""
|
||||
_df = df.copy().reset_index()
|
||||
|
||||
# Ensure we have a timestamp column regardless of original index name
|
||||
if 'timestamp' not in _df.columns:
|
||||
# If reset_index() created a column with the original index name, rename it
|
||||
if len(_df.columns) > 0 and _df.columns[0] not in ['open', 'high', 'low', 'close', 'volume', 'predicted_close_price']:
|
||||
_df = _df.rename(columns={_df.columns[0]: 'timestamp'})
|
||||
else:
|
||||
raise ValueError("Unable to identify timestamp column in DataFrame")
|
||||
|
||||
_df['timestamp'] = pd.to_datetime(_df['timestamp'])
|
||||
|
||||
supertrends = Supertrends(_df, verbose=False, close_column='predicted_close_price')
|
||||
|
||||
supertrend_results_list = supertrends.calculate_supertrend_indicators()
|
||||
trends = [st['results']['trend'] for st in supertrend_results_list]
|
||||
trends_arr = np.stack(trends, axis=1)
|
||||
meta_trend = np.where((trends_arr[:,0] == trends_arr[:,1]) & (trends_arr[:,1] == trends_arr[:,2]),
|
||||
trends_arr[:,0], 0)
|
||||
# Shift meta_trend by one to avoid lookahead bias
|
||||
meta_trend_signal = np.roll(meta_trend, 1)
|
||||
meta_trend_signal[0] = 0 # or np.nan, but 0 means 'no signal' for first bar
|
||||
|
||||
position = 0 # 0 = no position, 1 = long
|
||||
entry_price = 0
|
||||
usd = initial_usd
|
||||
coin = 0
|
||||
trade_log = []
|
||||
max_balance = initial_usd
|
||||
drawdowns = []
|
||||
trades = []
|
||||
entry_time = None
|
||||
stop_loss_count = 0 # Track number of stop losses
|
||||
|
||||
# Ensure min1_df has proper DatetimeIndex
|
||||
if min1_df is not None and not min1_df.empty:
|
||||
min1_df.index = pd.to_datetime(min1_df.index)
|
||||
|
||||
for i in range(1, len(_df)):
|
||||
# Report progress if callback is provided
|
||||
if progress_callback:
|
||||
# Update more frequently for better responsiveness
|
||||
update_frequency = max(1, len(_df) // 50) # Update every 2% of dataset (50 updates total)
|
||||
if i % update_frequency == 0 or i == len(_df) - 1: # Always update on last iteration
|
||||
if verbose: # Only print in verbose mode to avoid spam
|
||||
print(f"DEBUG: Progress callback called with i={i}, total={len(_df)-1}")
|
||||
progress_callback(i)
|
||||
|
||||
price_open = _df['open'].iloc[i]
|
||||
price_close = _df['close'].iloc[i]
|
||||
date = _df['timestamp'].iloc[i]
|
||||
prev_mt = meta_trend_signal[i-1]
|
||||
curr_mt = meta_trend_signal[i]
|
||||
|
||||
# Check stop loss if in position
|
||||
if position == 1:
|
||||
stop_loss_result = Backtest.check_stop_loss(
|
||||
min1_df,
|
||||
entry_time,
|
||||
date,
|
||||
entry_price,
|
||||
stop_loss_pct,
|
||||
coin,
|
||||
verbose=verbose
|
||||
)
|
||||
if stop_loss_result is not None:
|
||||
trade_log_entry, position, coin, entry_price, usd = stop_loss_result
|
||||
trade_log.append(trade_log_entry)
|
||||
stop_loss_count += 1
|
||||
continue
|
||||
|
||||
# Entry: only if not in position and signal changes to 1
if position == 0 and prev_mt != 1 and curr_mt == 1:
|
||||
entry_result = Backtest.handle_entry(usd, price_open, date)
|
||||
coin, entry_price, entry_time, usd, position, trade_log_entry = entry_result
|
||||
trade_log.append(trade_log_entry)
|
||||
|
||||
# Exit: only if in position and signal changes from 1 to -1
|
||||
elif position == 1 and prev_mt == 1 and curr_mt == -1:
|
||||
exit_result = Backtest.handle_exit(coin, price_open, entry_price, entry_time, date)
|
||||
usd, coin, position, entry_price, trade_log_entry = exit_result
|
||||
trade_log.append(trade_log_entry)
|
||||
|
||||
# Track drawdown
balance = usd if position == 0 else coin * price_close
if balance > max_balance:
max_balance = balance
drawdown = (max_balance - balance) / max_balance
drawdowns.append(drawdown)
|
||||
# Report completion if callback is provided
|
||||
if progress_callback:
|
||||
progress_callback(len(_df) - 1)
|
||||
|
||||
# If still in position at end, sell at last close
if position == 1:
exit_result = Backtest.handle_exit(coin, _df['close'].iloc[-1], entry_price, entry_time, _df['timestamp'].iloc[-1])
usd, coin, position, entry_price, trade_log_entry = exit_result
trade_log.append(trade_log_entry)
|
||||
|
||||
# Calculate statistics
|
||||
final_balance = usd
n_trades = len(trade_log)
wins = [1 for t in trade_log if t['exit'] is not None and t['exit'] > t['entry']]
win_rate = len(wins) / n_trades if n_trades > 0 else 0
max_drawdown = max(drawdowns) if drawdowns else 0
avg_trade = np.mean([t['exit']/t['entry']-1 for t in trade_log if t['exit'] is not None]) if trade_log else 0
|
||||
|
||||
trades = []
|
||||
total_fees_usd = 0.0
|
||||
|
||||
for trade in trade_log:
|
||||
if trade['exit'] is not None:
|
||||
profit_pct = (trade['exit'] - trade['entry']) / trade['entry']
|
||||
else:
|
||||
profit_pct = 0.0
|
||||
|
||||
# Validate fee_usd field
|
||||
if 'fee_usd' not in trade:
|
||||
raise ValueError(f"Trade missing required field 'fee_usd': {trade}")
|
||||
fee_usd = trade['fee_usd']
|
||||
if fee_usd is None:
|
||||
raise ValueError(f"Trade fee_usd is None: {trade}")
|
||||
|
||||
# Validate trade type field
|
||||
if 'type' not in trade:
|
||||
raise ValueError(f"Trade missing required field 'type': {trade}")
|
||||
trade_type = trade['type']
|
||||
if trade_type is None:
|
||||
raise ValueError(f"Trade type is None: {trade}")
|
||||
|
||||
trades.append({
|
||||
'entry_time': trade['entry_time'],
|
||||
'exit_time': trade['exit_time'],
|
||||
'entry': trade['entry'],
|
||||
'exit': trade['exit'],
|
||||
'profit_pct': profit_pct,
|
||||
'type': trade_type,
'fee_usd': fee_usd
})
|
||||
total_fees_usd += fee_usd
|
||||
|
||||
results = {
|
||||
"initial_usd": self.initial_usd,
|
||||
"initial_usd": initial_usd,
|
||||
"final_usd": final_balance,
|
||||
"n_trades": n_trades,
|
||||
"n_stop_loss": stop_loss_count, # Add stop loss count
|
||||
"win_rate": win_rate,
|
||||
"max_drawdown": max_drawdown,
|
||||
"avg_trade": avg_trade,
|
||||
"trade_log": self.trade_log,
|
||||
"trade_log": trade_log,
|
||||
"trades": trades,
|
||||
"total_fees_usd": total_fees_usd,
|
||||
}
|
||||
if n_trades > 0:
|
||||
results["first_trade"] = {
|
||||
"entry_time": self.trade_log[0]['entry_time'],
|
||||
"entry": self.trade_log[0]['entry']
|
||||
"entry_time": trade_log[0]['entry_time'],
|
||||
"entry": trade_log[0]['entry']
|
||||
}
|
||||
results["last_trade"] = {
|
||||
"exit_time": self.trade_log[-1]['exit_time'],
|
||||
"exit": self.trade_log[-1]['exit']
|
||||
"exit_time": trade_log[-1]['exit_time'],
|
||||
"exit": trade_log[-1]['exit']
|
||||
}
|
||||
return results
|
||||
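A hedged usage sketch for the static Backtest.run above; the CSV file names are hypothetical examples, and the import path assumes the module lives at cycles/backtest.py. The signal DataFrame must carry a 'predicted_close_price' column (see the Supertrends call inside run) in addition to OHLC columns, while min1_df supplies 1-minute bars for stop-loss checks.

# Hypothetical usage; data file names and import path are assumptions.
import pandas as pd
from cycles.backtest import Backtest

df = pd.read_csv('data/btc_4h.csv', parse_dates=['timestamp'], index_col='timestamp')        # hypothetical file
min1_df = pd.read_csv('data/btc_1min.csv', parse_dates=['timestamp'], index_col='timestamp')  # hypothetical file
df['predicted_close_price'] = df['close']   # placeholder when no model predictions are available

results = Backtest.run(min1_df, df, initial_usd=10000, stop_loss_pct=0.05, verbose=False)
print(results['n_trades'], results['n_stop_loss'], results['win_rate'], results['final_usd'])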
|
||||
def handle_entry(self):
|
||||
entry_fee = MarketFees.calculate_okx_taker_maker_fee(self.usd, is_maker=False)
|
||||
usd_after_fee = self.usd - entry_fee
|
||||
|
||||
self.coin = usd_after_fee / self.price_open
|
||||
self.entry_price = self.price_open
|
||||
self.entry_time = self.current_date
|
||||
self.usd = 0
|
||||
self.position = 1
|
||||
@staticmethod
|
||||
def check_stop_loss(min1_df, entry_time, current_time, entry_price, stop_loss_pct, coin, verbose=False):
|
||||
"""
|
||||
Check if stop loss should be triggered based on 1-minute data
|
||||
|
||||
Args:
|
||||
min1_df: 1-minute DataFrame with DatetimeIndex
|
||||
entry_time: Entry timestamp
|
||||
current_time: Current timestamp
|
||||
entry_price: Entry price
|
||||
stop_loss_pct: Stop loss percentage (e.g. 0.05 for 5%)
|
||||
coin: Current coin position
|
||||
verbose: Enable debug logging
|
||||
|
||||
Returns:
|
||||
Tuple of (trade_log_entry, position, coin, entry_price, usd) if stop loss triggered, None otherwise
|
||||
"""
|
||||
if min1_df is None or min1_df.empty:
|
||||
if verbose:
|
||||
print("Warning: No 1-minute data available for stop loss checking")
|
||||
return None
|
||||
|
||||
stop_price = entry_price * (1 - stop_loss_pct)
|
||||
|
||||
try:
|
||||
# Ensure min1_df has a DatetimeIndex
|
||||
if not isinstance(min1_df.index, pd.DatetimeIndex):
|
||||
if verbose:
|
||||
print("Warning: min1_df does not have DatetimeIndex")
|
||||
return None
|
||||
|
||||
# Convert entry_time and current_time to pandas Timestamps for comparison
|
||||
entry_ts = pd.to_datetime(entry_time)
|
||||
current_ts = pd.to_datetime(current_time)
|
||||
|
||||
if verbose:
|
||||
print(f"Checking stop loss from {entry_ts} to {current_ts}, stop_price: {stop_price:.2f}")
|
||||
|
||||
# Handle edge case where entry and current time are the same (1-minute timeframe)
|
||||
if entry_ts == current_ts:
|
||||
if verbose:
|
||||
print("Entry and current time are the same, no range to check")
|
||||
return None
|
||||
|
||||
# Find the range of 1-minute data to check (exclusive of entry time, inclusive of current time)
|
||||
# We start from the candle AFTER entry to avoid checking the entry candle itself
|
||||
start_check_time = entry_ts + pd.Timedelta(minutes=1)
|
||||
|
||||
# Get the slice of data to check for stop loss
|
||||
mask = (min1_df.index > entry_ts) & (min1_df.index <= current_ts)
|
||||
min1_slice = min1_df.loc[mask]
|
||||
|
||||
if len(min1_slice) == 0:
|
||||
if verbose:
|
||||
print(f"No 1-minute data found between {start_check_time} and {current_ts}")
|
||||
return None
|
||||
|
||||
if verbose:
|
||||
print(f"Checking {len(min1_slice)} candles for stop loss")
|
||||
|
||||
# Check if any low price in the slice hits the stop loss
|
||||
stop_triggered = (min1_slice['low'] <= stop_price).any()
|
||||
|
||||
if stop_triggered:
|
||||
# Find the exact candle where stop loss was triggered
|
||||
stop_candle = min1_slice[min1_slice['low'] <= stop_price].iloc[0]
|
||||
|
||||
if verbose:
|
||||
print(f"Stop loss triggered at {stop_candle.name}, low: {stop_candle['low']:.2f}")
|
||||
|
||||
# More realistic fill: if open < stop, fill at open, else at stop
|
||||
if stop_candle['open'] < stop_price:
|
||||
sell_price = stop_candle['open']
|
||||
if verbose:
|
||||
print(f"Filled at open price: {sell_price:.2f}")
|
||||
else:
|
||||
sell_price = stop_price
|
||||
if verbose:
|
||||
print(f"Filled at stop price: {sell_price:.2f}")
|
||||
|
||||
btc_to_sell = coin
|
||||
usd_gross = btc_to_sell * sell_price
|
||||
exit_fee = MarketFees.calculate_okx_taker_maker_fee(usd_gross, is_maker=False)
|
||||
usd_after_stop = usd_gross - exit_fee
|
||||
|
||||
trade_log_entry = {
|
||||
'type': 'STOP',
|
||||
'entry': entry_price,
|
||||
'exit': sell_price,
|
||||
'entry_time': entry_time,
|
||||
'exit_time': stop_candle.name,
|
||||
'fee_usd': exit_fee
|
||||
}
|
||||
# After stop loss, reset position and entry, return USD balance
|
||||
return trade_log_entry, 0, 0, 0, usd_after_stop
|
||||
elif verbose:
|
||||
print(f"No stop loss triggered, min low in range: {min1_slice['low'].min():.2f}")
|
||||
|
||||
except Exception as e:
|
||||
# In case of any error, don't trigger stop loss but log the issue
|
||||
error_msg = f"Warning: Stop loss check failed: {e}"
|
||||
print(error_msg)
|
||||
if verbose:
|
||||
import traceback
|
||||
print(traceback.format_exc())
|
||||
return None
|
||||
|
||||
return None
|
||||
|
||||
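To illustrate the fill rule used in check_stop_loss above (fill at the candle open when price already gapped below the stop, otherwise at the stop price), a small self-contained sketch:

# Self-contained illustration of the stop-loss fill rule from check_stop_loss.
def stop_fill_price(candle_open: float, candle_low: float, stop_price: float):
    if candle_low > stop_price:
        return None                       # stop not touched by this candle
    return candle_open if candle_open < stop_price else stop_price

entry_price = 100.0
stop_price = entry_price * (1 - 0.05)     # 5% stop -> 95.0
print(stop_fill_price(96.0, 94.5, stop_price))   # 95.0, filled at the stop
print(stop_fill_price(93.0, 92.0, stop_price))   # 93.0, gapped open below the stop
print(stop_fill_price(97.0, 95.5, stop_price))   # None, stop not hit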
@staticmethod
|
||||
def handle_entry(usd, price_open, date):
|
||||
entry_fee = MarketFees.calculate_okx_taker_maker_fee(usd, is_maker=False)
|
||||
usd_after_fee = usd - entry_fee
|
||||
coin = usd_after_fee / price_open
|
||||
entry_price = price_open
|
||||
entry_time = date
|
||||
usd = 0
|
||||
position = 1
|
||||
trade_log_entry = {
'type': 'BUY',
'entry': entry_price,
'exit': None,
'entry_time': entry_time,
'exit_time': None,
'fee_usd': entry_fee
}
return coin, entry_price, entry_time, usd, position, trade_log_entry
|
||||
|
||||
@staticmethod
def handle_exit(coin, price_open, entry_price, entry_time, date):
btc_to_sell = coin
usd_gross = btc_to_sell * price_open
exit_fee = MarketFees.calculate_okx_taker_maker_fee(usd_gross, is_maker=False)
usd = usd_gross - exit_fee
trade_log_entry = {
'type': 'SELL',
'entry': entry_price,
'exit': price_open,
'entry_time': entry_time,
'exit_time': date,
'fee_usd': exit_fee
}
coin = 0
position = 0
entry_price = 0
return usd, coin, position, entry_price, trade_log_entry
|
||||
152
cycles/utils/data_loader.py
Normal file
@@ -0,0 +1,152 @@
|
||||
import os
|
||||
import json
|
||||
import pandas as pd
|
||||
from typing import Union, Optional
|
||||
import logging
|
||||
|
||||
from .storage_utils import (
|
||||
_parse_timestamp_column,
|
||||
_filter_by_date_range,
|
||||
_normalize_column_names,
|
||||
TimestampParsingError,
|
||||
DataLoadingError
|
||||
)
|
||||
|
||||
|
||||
class DataLoader:
|
||||
"""Handles loading and preprocessing of data from various file formats"""
|
||||
|
||||
def __init__(self, data_dir: str, logging_instance: Optional[logging.Logger] = None):
|
||||
"""Initialize data loader
|
||||
|
||||
Args:
|
||||
data_dir: Directory containing data files
|
||||
logging_instance: Optional logging instance
|
||||
"""
|
||||
self.data_dir = data_dir
|
||||
self.logging = logging_instance
|
||||
|
||||
def load_data(self, file_path: str, start_date: Union[str, pd.Timestamp],
|
||||
stop_date: Union[str, pd.Timestamp]) -> pd.DataFrame:
|
||||
"""Load data with optimized dtypes and filtering, supporting CSV and JSON input
|
||||
|
||||
Args:
|
||||
file_path: path to the data file
|
||||
start_date: start date (string or datetime-like)
|
||||
stop_date: stop date (string or datetime-like)
|
||||
|
||||
Returns:
|
||||
pandas DataFrame with timestamp index
|
||||
|
||||
Raises:
|
||||
DataLoadingError: If data loading fails
|
||||
"""
|
||||
try:
|
||||
# Convert string dates to pandas datetime objects for proper comparison
|
||||
start_date = pd.to_datetime(start_date)
|
||||
stop_date = pd.to_datetime(stop_date)
|
||||
|
||||
# Determine file type
|
||||
_, ext = os.path.splitext(file_path)
|
||||
ext = ext.lower()
|
||||
|
||||
if ext == ".json":
|
||||
return self._load_json_data(file_path, start_date, stop_date)
|
||||
else:
|
||||
return self._load_csv_data(file_path, start_date, stop_date)
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Error loading data from {file_path}: {e}"
|
||||
if self.logging is not None:
|
||||
self.logging.error(error_msg)
|
||||
# Return an empty DataFrame with a DatetimeIndex
|
||||
return pd.DataFrame(index=pd.to_datetime([]))
|
||||
|
||||
def _load_json_data(self, file_path: str, start_date: pd.Timestamp,
|
||||
stop_date: pd.Timestamp) -> pd.DataFrame:
|
||||
"""Load and process JSON data file
|
||||
|
||||
Args:
|
||||
file_path: Path to JSON file
|
||||
start_date: Start date for filtering
|
||||
stop_date: Stop date for filtering
|
||||
|
||||
Returns:
|
||||
Processed DataFrame with timestamp index
|
||||
"""
|
||||
with open(os.path.join(self.data_dir, file_path), 'r') as f:
|
||||
raw = json.load(f)
|
||||
|
||||
data = pd.DataFrame(raw["Data"])
|
||||
data = _normalize_column_names(data)
|
||||
|
||||
# Convert timestamp to datetime
|
||||
data["timestamp"] = pd.to_datetime(data["timestamp"], unit="s")
|
||||
|
||||
# Filter by date range
|
||||
data = _filter_by_date_range(data, "timestamp", start_date, stop_date)
|
||||
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Data loaded from {file_path} for date range {start_date} to {stop_date}")
|
||||
|
||||
return data.set_index("timestamp")
|
||||
|
||||
def _load_csv_data(self, file_path: str, start_date: pd.Timestamp,
|
||||
stop_date: pd.Timestamp) -> pd.DataFrame:
|
||||
"""Load and process CSV data file
|
||||
|
||||
Args:
|
||||
file_path: Path to CSV file
|
||||
start_date: Start date for filtering
|
||||
stop_date: Stop date for filtering
|
||||
|
||||
Returns:
|
||||
Processed DataFrame with timestamp index
|
||||
"""
|
||||
# Define optimized dtypes
|
||||
dtypes = {
|
||||
'Open': 'float32',
|
||||
'High': 'float32',
|
||||
'Low': 'float32',
|
||||
'Close': 'float32',
|
||||
'Volume': 'float32'
|
||||
}
|
||||
|
||||
# Read data with original capitalized column names
|
||||
data = pd.read_csv(os.path.join(self.data_dir, file_path), dtype=dtypes)
|
||||
|
||||
return self._process_csv_timestamps(data, start_date, stop_date, file_path)
|
||||
|
||||
def _process_csv_timestamps(self, data: pd.DataFrame, start_date: pd.Timestamp,
|
||||
stop_date: pd.Timestamp, file_path: str) -> pd.DataFrame:
|
||||
"""Process timestamps in CSV data and filter by date range
|
||||
|
||||
Args:
|
||||
data: DataFrame with CSV data
|
||||
start_date: Start date for filtering
|
||||
stop_date: Stop date for filtering
|
||||
file_path: Original file path for logging
|
||||
|
||||
Returns:
|
||||
Processed DataFrame with timestamp index
|
||||
"""
|
||||
if 'Timestamp' in data.columns:
|
||||
data = _parse_timestamp_column(data, 'Timestamp')
|
||||
data = _filter_by_date_range(data, 'Timestamp', start_date, stop_date)
|
||||
data = _normalize_column_names(data)
|
||||
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Data loaded from {file_path} for date range {start_date} to {stop_date}")
|
||||
|
||||
return data.set_index('timestamp')
|
||||
else:
|
||||
# Attempt to use the first column if 'Timestamp' is not present
|
||||
data.rename(columns={data.columns[0]: 'timestamp'}, inplace=True)
|
||||
data = _parse_timestamp_column(data, 'timestamp')
|
||||
data = _filter_by_date_range(data, 'timestamp', start_date, stop_date)
|
||||
data = _normalize_column_names(data)
|
||||
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Data loaded from {file_path} (using first column as timestamp) for date range {start_date} to {stop_date}")
|
||||
|
||||
return data.set_index('timestamp')
|
||||
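A hedged usage sketch for DataLoader above; the data file name is a hypothetical example and the import path assumes cycles/utils is an importable package.

# Hypothetical usage; file name and import path are assumptions.
from cycles.utils.data_loader import DataLoader

loader = DataLoader(data_dir='../data')
df = loader.load_data('btcusd_1min.csv', start_date='2023-01-01', stop_date='2025-01-15')
print(df.index.min(), df.index.max(), list(df.columns))
# Note: on failure load_data logs the error (if a logger was supplied) and returns
# an empty DataFrame with a DatetimeIndex instead of raising.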
106
cycles/utils/data_saver.py
Normal file
@@ -0,0 +1,106 @@
|
||||
import os
|
||||
import pandas as pd
|
||||
from typing import Optional
|
||||
import logging
|
||||
|
||||
from .storage_utils import DataSavingError
|
||||
|
||||
|
||||
class DataSaver:
|
||||
"""Handles saving data to various file formats"""
|
||||
|
||||
def __init__(self, data_dir: str, logging_instance: Optional[logging.Logger] = None):
|
||||
"""Initialize data saver
|
||||
|
||||
Args:
|
||||
data_dir: Directory for saving data files
|
||||
logging_instance: Optional logging instance
|
||||
"""
|
||||
self.data_dir = data_dir
|
||||
self.logging = logging_instance
|
||||
|
||||
def save_data(self, data: pd.DataFrame, file_path: str) -> None:
|
||||
"""Save processed data to a CSV file.
|
||||
If the DataFrame has a DatetimeIndex, it's converted to float Unix timestamps
|
||||
(seconds since epoch) before saving. The index is saved as a column named 'timestamp'.
|
||||
|
||||
Args:
|
||||
data: DataFrame to save
|
||||
file_path: path to the data file relative to the data_dir
|
||||
|
||||
Raises:
|
||||
DataSavingError: If saving fails
|
||||
"""
|
||||
try:
|
||||
data_to_save = data.copy()
|
||||
data_to_save = self._prepare_data_for_saving(data_to_save)
|
||||
|
||||
# Save to CSV, ensuring the 'timestamp' column (if created) is written
|
||||
full_path = os.path.join(self.data_dir, file_path)
|
||||
data_to_save.to_csv(full_path, index=False)
|
||||
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Data saved to {full_path} with Unix timestamp column.")
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to save data to {file_path}: {e}"
|
||||
if self.logging is not None:
|
||||
self.logging.error(error_msg)
|
||||
raise DataSavingError(error_msg) from e
|
||||
|
||||
def _prepare_data_for_saving(self, data: pd.DataFrame) -> pd.DataFrame:
|
||||
"""Prepare DataFrame for saving by handling different index types
|
||||
|
||||
Args:
|
||||
data: DataFrame to prepare
|
||||
|
||||
Returns:
|
||||
DataFrame ready for saving
|
||||
"""
|
||||
if isinstance(data.index, pd.DatetimeIndex):
|
||||
return self._convert_datetime_index_to_timestamp(data)
|
||||
elif pd.api.types.is_numeric_dtype(data.index.dtype):
|
||||
return self._convert_numeric_index_to_timestamp(data)
|
||||
else:
|
||||
# For other index types, save with the current index
|
||||
return data
|
||||
|
||||
def _convert_datetime_index_to_timestamp(self, data: pd.DataFrame) -> pd.DataFrame:
|
||||
"""Convert DatetimeIndex to Unix timestamp column
|
||||
|
||||
Args:
|
||||
data: DataFrame with DatetimeIndex
|
||||
|
||||
Returns:
|
||||
DataFrame with timestamp column
|
||||
"""
|
||||
# Convert DatetimeIndex to Unix timestamp (float seconds since epoch)
|
||||
data['timestamp'] = data.index.astype('int64') / 1e9
|
||||
data.reset_index(drop=True, inplace=True)
|
||||
|
||||
# Ensure 'timestamp' is the first column if other columns exist
|
||||
if 'timestamp' in data.columns and len(data.columns) > 1:
|
||||
cols = ['timestamp'] + [col for col in data.columns if col != 'timestamp']
|
||||
data = data[cols]
|
||||
|
||||
return data
|
||||
|
||||
def _convert_numeric_index_to_timestamp(self, data: pd.DataFrame) -> pd.DataFrame:
|
||||
"""Convert numeric index to timestamp column
|
||||
|
||||
Args:
|
||||
data: DataFrame with numeric index
|
||||
|
||||
Returns:
|
||||
DataFrame with timestamp column
|
||||
"""
|
||||
# If index is already numeric (e.g. float Unix timestamps from a previous save/load cycle)
|
||||
data['timestamp'] = data.index
|
||||
data.reset_index(drop=True, inplace=True)
|
||||
|
||||
# Ensure 'timestamp' is the first column if other columns exist
|
||||
if 'timestamp' in data.columns and len(data.columns) > 1:
|
||||
cols = ['timestamp'] + [col for col in data.columns if col != 'timestamp']
|
||||
data = data[cols]
|
||||
|
||||
return data
|
||||
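A hedged sketch of the save path in DataSaver above: a DatetimeIndex is written out as a float Unix-seconds 'timestamp' column. The output directory and file name are examples and must already exist.

# Hypothetical usage; '../data' and the file name are assumptions.
import pandas as pd
from cycles.utils.data_saver import DataSaver

df = pd.DataFrame(
    {'close': [100.0, 101.5]},
    index=pd.to_datetime(['2024-01-01 00:00', '2024-01-01 00:01']),
)
saver = DataSaver(data_dir='../data')
saver.save_data(df, 'example_prices.csv')   # writes the index as seconds since epoch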
@@ -1,128 +0,0 @@
|
||||
import threading
|
||||
import time
|
||||
import queue
|
||||
from google.oauth2.service_account import Credentials
|
||||
import gspread
|
||||
import math
|
||||
import numpy as np
|
||||
from collections import defaultdict
|
||||
|
||||
|
||||
class GSheetBatchPusher(threading.Thread):
|
||||
|
||||
def __init__(self, queue, timestamp, spreadsheet_name, interval=60, logging=None):
|
||||
super().__init__(daemon=True)
|
||||
self.queue = queue
|
||||
self.timestamp = timestamp
|
||||
self.spreadsheet_name = spreadsheet_name
|
||||
self.interval = interval
|
||||
self._stop_event = threading.Event()
|
||||
self.logging = logging
|
||||
|
||||
def run(self):
|
||||
while not self._stop_event.is_set():
|
||||
self.push_all()
|
||||
time.sleep(self.interval)
|
||||
# Final push on stop
|
||||
self.push_all()
|
||||
|
||||
def stop(self):
|
||||
self._stop_event.set()
|
||||
|
||||
def push_all(self):
|
||||
batch_results = []
|
||||
batch_trades = []
|
||||
while True:
|
||||
try:
|
||||
results, trades = self.queue.get_nowait()
|
||||
batch_results.extend(results)
|
||||
batch_trades.extend(trades)
|
||||
except queue.Empty:
|
||||
break
|
||||
|
||||
if batch_results or batch_trades:
|
||||
self.write_results_per_combination_gsheet(batch_results, batch_trades, self.timestamp, self.spreadsheet_name)
|
||||
|
||||
|
||||
def write_results_per_combination_gsheet(self, results_rows, trade_rows, timestamp, spreadsheet_name="GlimBit Backtest Results"):
|
||||
scopes = [
|
||||
"https://www.googleapis.com/auth/spreadsheets",
|
||||
"https://www.googleapis.com/auth/drive"
|
||||
]
|
||||
creds = Credentials.from_service_account_file('credentials/service_account.json', scopes=scopes)
|
||||
gc = gspread.authorize(creds)
|
||||
sh = gc.open(spreadsheet_name)
|
||||
|
||||
try:
|
||||
worksheet = sh.worksheet("Results")
|
||||
except gspread.exceptions.WorksheetNotFound:
|
||||
worksheet = sh.add_worksheet(title="Results", rows="1000", cols="20")
|
||||
|
||||
# Clear the worksheet before writing new results
|
||||
worksheet.clear()
|
||||
|
||||
# Updated fieldnames to match your data rows
|
||||
fieldnames = [
|
||||
"timeframe", "stop_loss_pct", "n_trades", "n_stop_loss", "win_rate",
|
||||
"max_drawdown", "avg_trade", "profit_ratio", "initial_usd", "final_usd"
|
||||
]
|
||||
|
||||
def to_native(val):
|
||||
if isinstance(val, (np.generic, np.ndarray)):
|
||||
val = val.item()
|
||||
if hasattr(val, 'isoformat'):
|
||||
return val.isoformat()
|
||||
# Handle inf, -inf, nan
|
||||
if isinstance(val, float):
|
||||
if math.isinf(val):
|
||||
return "∞" if val > 0 else "-∞"
|
||||
if math.isnan(val):
|
||||
return ""
|
||||
return val
|
||||
|
||||
# Write header if sheet is empty
|
||||
if len(worksheet.get_all_values()) == 0:
|
||||
worksheet.append_row(fieldnames)
|
||||
|
||||
for row in results_rows:
|
||||
values = [to_native(row.get(field, "")) for field in fieldnames]
|
||||
worksheet.append_row(values)
|
||||
|
||||
trades_fieldnames = [
|
||||
"entry_time", "exit_time", "entry_price", "exit_price", "profit_pct", "type"
|
||||
]
|
||||
trades_by_combo = defaultdict(list)
|
||||
|
||||
for trade in trade_rows:
|
||||
tf = trade.get("timeframe")
|
||||
sl = trade.get("stop_loss_pct")
|
||||
trades_by_combo[(tf, sl)].append(trade)
|
||||
|
||||
for (tf, sl), trades in trades_by_combo.items():
|
||||
sl_percent = int(round(sl * 100))
|
||||
sheet_name = f"Trades_{tf}_ST{sl_percent}%"
|
||||
|
||||
try:
|
||||
trades_ws = sh.worksheet(sheet_name)
|
||||
except gspread.exceptions.WorksheetNotFound:
|
||||
trades_ws = sh.add_worksheet(title=sheet_name, rows="1000", cols="20")
|
||||
|
||||
# Clear the trades worksheet before writing new trades
|
||||
trades_ws.clear()
|
||||
|
||||
if len(trades_ws.get_all_values()) == 0:
|
||||
trades_ws.append_row(trades_fieldnames)
|
||||
|
||||
for trade in trades:
|
||||
trade_row = [to_native(trade.get(field, "")) for field in trades_fieldnames]
|
||||
try:
|
||||
trades_ws.append_row(trade_row)
|
||||
except gspread.exceptions.APIError as e:
|
||||
if '429' in str(e):
|
||||
if self.logging is not None:
|
||||
self.logging.warning(f"Google Sheets API quota exceeded (429). Please wait one minute. Will retry on next batch push. Sheet: {sheet_name}")
|
||||
# Re-queue the failed batch for retry
|
||||
self.queue.put((results_rows, trade_rows))
|
||||
return # Stop pushing for this batch, will retry next interval
|
||||
else:
|
||||
raise
|
||||
233
cycles/utils/progress_manager.py
Normal file
@@ -0,0 +1,233 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Progress Manager for tracking multiple parallel backtest tasks
|
||||
"""
|
||||
|
||||
import threading
|
||||
import time
|
||||
import sys
|
||||
from typing import Dict, Optional, Callable
|
||||
from dataclasses import dataclass
|
||||
|
||||
|
||||
@dataclass
|
||||
class TaskProgress:
|
||||
"""Represents progress information for a single task"""
|
||||
task_id: str
|
||||
name: str
|
||||
current: int
|
||||
total: int
|
||||
start_time: float
|
||||
last_update: float
|
||||
|
||||
@property
|
||||
def percentage(self) -> float:
|
||||
"""Calculate completion percentage"""
|
||||
if self.total == 0:
|
||||
return 0.0
|
||||
return (self.current / self.total) * 100
|
||||
|
||||
@property
|
||||
def elapsed_time(self) -> float:
|
||||
"""Calculate elapsed time in seconds"""
|
||||
return time.time() - self.start_time
|
||||
|
||||
@property
|
||||
def eta(self) -> Optional[float]:
|
||||
"""Estimate time to completion in seconds"""
|
||||
if self.current == 0 or self.percentage >= 100:
|
||||
return None
|
||||
|
||||
elapsed = self.elapsed_time
|
||||
rate = self.current / elapsed
|
||||
remaining = self.total - self.current
|
||||
return remaining / rate if rate > 0 else None
|
||||
|
||||
|
||||
class ProgressManager:
|
||||
"""Manages progress tracking for multiple parallel tasks"""
|
||||
|
||||
def __init__(self, update_interval: float = 1.0, display_width: int = 50):
|
||||
"""
|
||||
Initialize progress manager
|
||||
|
||||
Args:
|
||||
update_interval: How often to update display (seconds)
|
||||
display_width: Width of progress bar in characters
|
||||
"""
|
||||
self.tasks: Dict[str, TaskProgress] = {}
|
||||
self.update_interval = update_interval
|
||||
self.display_width = display_width
|
||||
self.lock = threading.Lock()
|
||||
self.display_thread: Optional[threading.Thread] = None
|
||||
self.running = False
|
||||
self.last_display_height = 0
|
||||
|
||||
def start_task(self, task_id: str, name: str, total: int) -> None:
|
||||
"""
|
||||
Start tracking a new task
|
||||
|
||||
Args:
|
||||
task_id: Unique identifier for the task
|
||||
name: Human-readable name for the task
|
||||
total: Total number of steps in the task
|
||||
"""
|
||||
with self.lock:
|
||||
self.tasks[task_id] = TaskProgress(
|
||||
task_id=task_id,
|
||||
name=name,
|
||||
current=0,
|
||||
total=total,
|
||||
start_time=time.time(),
|
||||
last_update=time.time()
|
||||
)
|
||||
|
||||
def update_progress(self, task_id: str, current: int) -> None:
|
||||
"""
|
||||
Update progress for a specific task
|
||||
|
||||
Args:
|
||||
task_id: Task identifier
|
||||
current: Current progress value
|
||||
"""
|
||||
with self.lock:
|
||||
if task_id in self.tasks:
|
||||
self.tasks[task_id].current = current
|
||||
self.tasks[task_id].last_update = time.time()
|
||||
|
||||
def complete_task(self, task_id: str) -> None:
|
||||
"""
|
||||
Mark a task as completed
|
||||
|
||||
Args:
|
||||
task_id: Task identifier
|
||||
"""
|
||||
with self.lock:
|
||||
if task_id in self.tasks:
|
||||
task = self.tasks[task_id]
|
||||
task.current = task.total
|
||||
task.last_update = time.time()
|
||||
|
||||
def start_display(self) -> None:
|
||||
"""Start the progress display thread"""
|
||||
if not self.running:
|
||||
self.running = True
|
||||
self.display_thread = threading.Thread(target=self._display_loop, daemon=True)
|
||||
self.display_thread.start()
|
||||
|
||||
def stop_display(self) -> None:
|
||||
"""Stop the progress display thread"""
|
||||
self.running = False
|
||||
if self.display_thread:
|
||||
self.display_thread.join(timeout=1.0)
|
||||
self._clear_display()
|
||||
|
||||
def _display_loop(self) -> None:
|
||||
"""Main loop for updating the progress display"""
|
||||
while self.running:
|
||||
self._update_display()
|
||||
time.sleep(self.update_interval)
|
||||
|
||||
def _update_display(self) -> None:
|
||||
"""Update the console display with current progress"""
|
||||
with self.lock:
|
||||
if not self.tasks:
|
||||
return
|
||||
|
||||
# Clear previous display
|
||||
self._clear_display()
|
||||
|
||||
# Build display lines
|
||||
lines = []
|
||||
for task in sorted(self.tasks.values(), key=lambda t: t.task_id):
|
||||
line = self._format_progress_line(task)
|
||||
lines.append(line)
|
||||
|
||||
# Print all lines
|
||||
for line in lines:
|
||||
print(line, flush=True)
|
||||
|
||||
self.last_display_height = len(lines)
|
||||
|
||||
def _clear_display(self) -> None:
|
||||
"""Clear the previous progress display"""
|
||||
if self.last_display_height > 0:
|
||||
# Move cursor up and clear lines
|
||||
for _ in range(self.last_display_height):
|
||||
sys.stdout.write('\033[F') # Move cursor up one line
|
||||
sys.stdout.write('\033[K') # Clear line
|
||||
sys.stdout.flush()
|
||||
|
||||
def _format_progress_line(self, task: TaskProgress) -> str:
|
||||
"""
|
||||
Format a single progress line for display
|
||||
|
||||
Args:
|
||||
task: TaskProgress instance
|
||||
|
||||
Returns:
|
||||
Formatted progress string
|
||||
"""
|
||||
# Progress bar
|
||||
filled_width = int(task.percentage / 100 * self.display_width)
|
||||
bar = '█' * filled_width + '░' * (self.display_width - filled_width)
|
||||
|
||||
# Time information
|
||||
elapsed_str = self._format_time(task.elapsed_time)
|
||||
eta_str = self._format_time(task.eta) if task.eta else "N/A"
|
||||
|
||||
# Format line
|
||||
line = (f"{task.name:<25} │{bar}│ "
|
||||
f"{task.percentage:5.1f}% "
|
||||
f"({task.current:,}/{task.total:,}) "
|
||||
f"⏱ {elapsed_str} ETA: {eta_str}")
|
||||
|
||||
return line
|
||||
|
||||
def _format_time(self, seconds: float) -> str:
|
||||
"""
|
||||
Format time duration for display
|
||||
|
||||
Args:
|
||||
seconds: Time in seconds
|
||||
|
||||
Returns:
|
||||
Formatted time string
|
||||
"""
|
||||
if seconds < 60:
|
||||
return f"{seconds:.0f}s"
|
||||
elif seconds < 3600:
|
||||
minutes = seconds / 60
|
||||
return f"{minutes:.1f}m"
|
||||
else:
|
||||
hours = seconds / 3600
|
||||
return f"{hours:.1f}h"
|
||||
|
||||
def get_task_progress_callback(self, task_id: str) -> Callable[[int], None]:
|
||||
"""
|
||||
Get a progress callback function for a specific task
|
||||
|
||||
Args:
|
||||
task_id: Task identifier
|
||||
|
||||
Returns:
|
||||
Callback function that updates progress for this task
|
||||
"""
|
||||
def callback(current: int) -> None:
|
||||
self.update_progress(task_id, current)
|
||||
|
||||
return callback
|
||||
|
||||
def all_tasks_completed(self) -> bool:
|
||||
"""Check if all tasks are completed"""
|
||||
with self.lock:
|
||||
return all(task.current >= task.total for task in self.tasks.values())
|
||||
|
||||
def get_summary(self) -> str:
|
||||
"""Get a summary of all tasks"""
|
||||
with self.lock:
|
||||
total_tasks = len(self.tasks)
|
||||
completed_tasks = sum(1 for task in self.tasks.values()
|
||||
if task.current >= task.total)
|
||||
|
||||
return f"Tasks: {completed_tasks}/{total_tasks} completed"
|
||||
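A hedged usage sketch for ProgressManager above, driving one task to completion from a loop; the import path assumes cycles/utils is importable, and the task name is an example.

# Hypothetical usage of the ProgressManager class defined above.
import time
from cycles.utils.progress_manager import ProgressManager

pm = ProgressManager(update_interval=0.5)
pm.start_task('tf_4h_sl5', '4h / SL 5%', total=100)
pm.start_display()

callback = pm.get_task_progress_callback('tf_4h_sl5')
for step in range(1, 101):
    callback(step)          # same signature the backtest's progress_callback expects
    time.sleep(0.01)

pm.complete_task('tf_4h_sl5')
pm.stop_display()
print(pm.get_summary())     # e.g. "Tasks: 1/1 completed"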
179
cycles/utils/result_formatter.py
Normal file
@@ -0,0 +1,179 @@
|
||||
import os
|
||||
import csv
|
||||
from typing import Dict, List, Optional, Any
|
||||
from collections import defaultdict
|
||||
import logging
|
||||
|
||||
from .storage_utils import DataSavingError
|
||||
|
||||
|
||||
class ResultFormatter:
|
||||
"""Handles formatting and writing of backtest results to CSV files"""
|
||||
|
||||
def __init__(self, results_dir: str, logging_instance: Optional[logging.Logger] = None):
|
||||
"""Initialize result formatter
|
||||
|
||||
Args:
|
||||
results_dir: Directory for saving result files
|
||||
logging_instance: Optional logging instance
|
||||
"""
|
||||
self.results_dir = results_dir
|
||||
self.logging = logging_instance
|
||||
|
||||
def format_row(self, row: Dict[str, Any]) -> Dict[str, str]:
|
||||
"""Format a row for a combined results CSV file
|
||||
|
||||
Args:
|
||||
row: Dictionary containing row data
|
||||
|
||||
Returns:
|
||||
Dictionary with formatted values
|
||||
"""
|
||||
return {
|
||||
"timeframe": row["timeframe"],
|
||||
"stop_loss_pct": f"{row['stop_loss_pct']*100:.2f}%",
|
||||
"n_trades": row["n_trades"],
|
||||
"n_stop_loss": row["n_stop_loss"],
|
||||
"win_rate": f"{row['win_rate']*100:.2f}%",
|
||||
"max_drawdown": f"{row['max_drawdown']*100:.2f}%",
|
||||
"avg_trade": f"{row['avg_trade']*100:.2f}%",
|
||||
"profit_ratio": f"{row['profit_ratio']*100:.2f}%",
|
||||
"final_usd": f"{row['final_usd']:.2f}",
|
||||
"total_fees_usd": f"{row['total_fees_usd']:.2f}",
|
||||
}
|
||||
|
||||
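An illustration of format_row above with a hypothetical result row; fractions become percentage strings and USD amounts are rendered with two decimals. The import path is an assumption.

# Hypothetical example row for ResultFormatter.format_row.
from cycles.utils.result_formatter import ResultFormatter

row = {
    "timeframe": "4h", "stop_loss_pct": 0.05, "n_trades": 42, "n_stop_loss": 7,
    "win_rate": 0.55, "max_drawdown": 0.12, "avg_trade": 0.004,
    "profit_ratio": 0.18, "final_usd": 11800.0, "total_fees_usd": 53.2,
}
formatter = ResultFormatter(results_dir='../results')
print(formatter.format_row(row))
# -> stop_loss_pct "5.00%", win_rate "55.00%", max_drawdown "12.00%",
#    avg_trade "0.40%", profit_ratio "18.00%", final_usd "11800.00", total_fees_usd "53.20"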
def write_results_chunk(self, filename: str, fieldnames: List[str],
|
||||
rows: List[Dict], write_header: bool = False,
|
||||
initial_usd: Optional[float] = None) -> None:
|
||||
"""Write a chunk of results to a CSV file
|
||||
|
||||
Args:
|
||||
filename: filename to write to
|
||||
fieldnames: list of fieldnames
|
||||
rows: list of rows
|
||||
write_header: whether to write the header
|
||||
initial_usd: initial USD value for header comment
|
||||
|
||||
Raises:
|
||||
DataSavingError: If writing fails
|
||||
"""
|
||||
try:
|
||||
mode = 'w' if write_header else 'a'
|
||||
|
||||
with open(filename, mode, newline="") as csvfile:
|
||||
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
|
||||
if write_header:
|
||||
if initial_usd is not None:
|
||||
csvfile.write(f"# initial_usd: {initial_usd}\n")
|
||||
writer.writeheader()
|
||||
|
||||
for row in rows:
|
||||
# Only keep keys that are in fieldnames
|
||||
filtered_row = {k: v for k, v in row.items() if k in fieldnames}
|
||||
writer.writerow(filtered_row)
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to write results chunk to {filename}: {e}"
|
||||
if self.logging is not None:
|
||||
self.logging.error(error_msg)
|
||||
raise DataSavingError(error_msg) from e
|
||||
|
||||
def write_backtest_results(self, filename: str, fieldnames: List[str],
|
||||
rows: List[Dict], metadata_lines: Optional[List[str]] = None) -> str:
|
||||
"""Write combined backtest results to a CSV file
|
||||
|
||||
Args:
|
||||
filename: filename to write to
|
||||
fieldnames: list of fieldnames
|
||||
rows: list of result dictionaries
|
||||
metadata_lines: optional list of strings to write as header comments
|
||||
|
||||
Returns:
|
||||
Full path to the written file
|
||||
|
||||
Raises:
|
||||
DataSavingError: If writing fails
|
||||
"""
|
||||
try:
|
||||
fname = os.path.join(self.results_dir, filename)
|
||||
with open(fname, "w", newline="") as csvfile:
|
||||
if metadata_lines:
|
||||
for line in metadata_lines:
|
||||
csvfile.write(f"{line}\n")
|
||||
|
||||
writer = csv.DictWriter(csvfile, fieldnames=fieldnames, delimiter='\t')
|
||||
writer.writeheader()
|
||||
|
||||
for row in rows:
|
||||
writer.writerow(self.format_row(row))
|
||||
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Combined results written to {fname}")
|
||||
|
||||
return fname
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to write backtest results to {filename}: {e}"
|
||||
if self.logging is not None:
|
||||
self.logging.error(error_msg)
|
||||
raise DataSavingError(error_msg) from e
|
||||
|
||||
def write_trades(self, all_trade_rows: List[Dict], trades_fieldnames: List[str]) -> None:
|
||||
"""Write trades to separate CSV files grouped by timeframe and stop loss
|
||||
|
||||
Args:
|
||||
all_trade_rows: list of trade dictionaries
|
||||
trades_fieldnames: list of trade fieldnames
|
||||
|
||||
Raises:
|
||||
DataSavingError: If writing fails
|
||||
"""
|
||||
try:
|
||||
trades_by_combo = self._group_trades_by_combination(all_trade_rows)
|
||||
|
||||
for (tf, sl), trades in trades_by_combo.items():
|
||||
self._write_single_trade_file(tf, sl, trades, trades_fieldnames)
|
||||
|
||||
except Exception as e:
|
||||
error_msg = f"Failed to write trades: {e}"
|
||||
if self.logging is not None:
|
||||
self.logging.error(error_msg)
|
||||
raise DataSavingError(error_msg) from e
|
||||
|
||||
def _group_trades_by_combination(self, all_trade_rows: List[Dict]) -> Dict:
|
||||
"""Group trades by timeframe and stop loss combination
|
||||
|
||||
Args:
|
||||
all_trade_rows: List of trade dictionaries
|
||||
|
||||
Returns:
|
||||
Dictionary grouped by (timeframe, stop_loss_pct) tuples
|
||||
"""
|
||||
trades_by_combo = defaultdict(list)
|
||||
for trade in all_trade_rows:
|
||||
tf = trade.get("timeframe")
|
||||
sl = trade.get("stop_loss_pct")
|
||||
trades_by_combo[(tf, sl)].append(trade)
|
||||
return trades_by_combo
|
||||
|
||||
def _write_single_trade_file(self, timeframe: str, stop_loss_pct: float,
|
||||
trades: List[Dict], trades_fieldnames: List[str]) -> None:
|
||||
"""Write trades for a single timeframe/stop-loss combination
|
||||
|
||||
Args:
|
||||
timeframe: Timeframe identifier
|
||||
stop_loss_pct: Stop loss percentage
|
||||
trades: List of trades for this combination
|
||||
trades_fieldnames: List of field names for trades
|
||||
"""
|
||||
sl_percent = int(round(stop_loss_pct * 100))
|
||||
trades_filename = os.path.join(self.results_dir, f"trades_{timeframe}_ST{sl_percent}pct.csv")
|
||||
|
||||
with open(trades_filename, "w", newline="") as csvfile:
|
||||
writer = csv.DictWriter(csvfile, fieldnames=trades_fieldnames)
|
||||
writer.writeheader()
|
||||
for trade in trades:
|
||||
writer.writerow({k: trade.get(k, "") for k in trades_fieldnames})
|
||||
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Trades written to {trades_filename}")
|
||||
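A minimal end-to-end sketch of how this formatter is meant to be used. File name, fieldnames, and row values are illustrative only; the fieldnames mirror the backtest fieldnames used elsewhere in this commit.

```python
import logging
import os
from cycles.utils.result_formatter import ResultFormatter

os.makedirs("results", exist_ok=True)  # ResultFormatter does not create the directory itself
formatter = ResultFormatter("results", logging.getLogger(__name__))

fieldnames = ["timeframe", "stop_loss_pct", "n_trades", "n_stop_loss", "win_rate",
              "max_drawdown", "avg_trade", "profit_ratio", "final_usd", "total_fees_usd"]
rows = [{
    "timeframe": "1D", "stop_loss_pct": 0.05, "n_trades": 10, "n_stop_loss": 2,
    "win_rate": 0.6, "max_drawdown": 0.08, "avg_trade": 0.004,
    "profit_ratio": 1.4, "final_usd": 10400.0, "total_fees_usd": 12.5,
}]

path = formatter.write_backtest_results(
    "2024_01_01_12_00_backtest.csv", fieldnames, rows,
    metadata_lines=["Initial USD\t10000"],
)
print(path)  # results/2024_01_01_12_00_backtest.csv
```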
@ -1,17 +1,32 @@
|
||||
import os
|
||||
import json
|
||||
import pandas as pd
|
||||
import csv
|
||||
from collections import defaultdict
|
||||
from typing import Optional, Union, Dict, Any, List
|
||||
import logging
|
||||
|
||||
from .data_loader import DataLoader
|
||||
from .data_saver import DataSaver
|
||||
from .result_formatter import ResultFormatter
|
||||
from .storage_utils import DataLoadingError, DataSavingError
|
||||
|
||||
RESULTS_DIR = "../results"
|
||||
DATA_DIR = "../data"
|
||||
|
||||
RESULTS_DIR = "results"
|
||||
DATA_DIR = "data"
|
||||
|
||||
class Storage:
|
||||
|
||||
"""Storage class for storing and loading results and data"""
|
||||
"""Unified storage interface for data and results operations
|
||||
|
||||
Acts as a coordinator for DataLoader, DataSaver, and ResultFormatter components,
|
||||
maintaining backward compatibility while providing a clean separation of concerns.
|
||||
"""
|
||||
|
||||
def __init__(self, logging=None, results_dir=RESULTS_DIR, data_dir=DATA_DIR):
|
||||
|
||||
"""Initialize storage with component instances
|
||||
|
||||
Args:
|
||||
logging: Optional logging instance
|
||||
results_dir: Directory for results files
|
||||
data_dir: Directory for data files
|
||||
"""
|
||||
self.results_dir = results_dir
|
||||
self.data_dir = data_dir
|
||||
self.logging = logging
|
||||
@ -20,196 +35,89 @@ class Storage:
|
||||
os.makedirs(self.results_dir, exist_ok=True)
|
||||
os.makedirs(self.data_dir, exist_ok=True)
|
||||
|
||||
def load_data(self, file_path, start_date, stop_date):
|
||||
# Initialize component instances
|
||||
self.data_loader = DataLoader(data_dir, logging)
|
||||
self.data_saver = DataSaver(data_dir, logging)
|
||||
self.result_formatter = ResultFormatter(results_dir, logging)
|
||||
|
||||
def load_data(self, file_path: str, start_date: Union[str, pd.Timestamp],
|
||||
stop_date: Union[str, pd.Timestamp]) -> pd.DataFrame:
|
||||
"""Load data with optimized dtypes and filtering, supporting CSV and JSON input
|
||||
|
||||
Args:
|
||||
file_path: path to the data file
|
||||
start_date: start date
|
||||
stop_date: stop date
|
||||
start_date: start date (string or datetime-like)
|
||||
stop_date: stop date (string or datetime-like)
|
||||
|
||||
Returns:
|
||||
pandas DataFrame
|
||||
pandas DataFrame with timestamp index
|
||||
|
||||
Raises:
|
||||
DataLoadingError: If data loading fails
|
||||
"""
|
||||
# Determine file type
|
||||
_, ext = os.path.splitext(file_path)
|
||||
ext = ext.lower()
|
||||
try:
|
||||
if ext == ".json":
|
||||
with open(os.path.join(self.data_dir, file_path), 'r') as f:
|
||||
raw = json.load(f)
|
||||
data = pd.DataFrame(raw["Data"])
|
||||
# Convert columns to lowercase
|
||||
data.columns = data.columns.str.lower()
|
||||
# Convert timestamp to datetime
|
||||
data["timestamp"] = pd.to_datetime(data["timestamp"], unit="s")
|
||||
# Filter by date range
|
||||
data = data[(data["timestamp"] >= start_date) & (data["timestamp"] <= stop_date)]
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Data loaded from {file_path} for date range {start_date} to {stop_date}")
|
||||
return data.set_index("timestamp")
|
||||
else:
|
||||
# Define optimized dtypes
|
||||
dtypes = {
|
||||
'Open': 'float32',
|
||||
'High': 'float32',
|
||||
'Low': 'float32',
|
||||
'Close': 'float32',
|
||||
'Volume': 'float32'
|
||||
}
|
||||
# Read data with original capitalized column names
|
||||
data = pd.read_csv(os.path.join(self.data_dir, file_path), dtype=dtypes)
|
||||
|
||||
|
||||
# Convert timestamp to datetime
|
||||
if 'Timestamp' in data.columns:
|
||||
data['Timestamp'] = pd.to_datetime(data['Timestamp'], unit='s')
|
||||
# Filter by date range
|
||||
data = data[(data['Timestamp'] >= start_date) & (data['Timestamp'] <= stop_date)]
|
||||
# Now convert column names to lowercase
|
||||
data.columns = data.columns.str.lower()
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Data loaded from {file_path} for date range {start_date} to {stop_date}")
|
||||
return data.set_index('timestamp')
|
||||
else: # Attempt to use the first column if 'Timestamp' is not present
|
||||
data.rename(columns={data.columns[0]: 'timestamp'}, inplace=True)
|
||||
data['timestamp'] = pd.to_datetime(data['timestamp'], unit='s')
|
||||
data = data[(data['timestamp'] >= start_date) & (data['timestamp'] <= stop_date)]
|
||||
data.columns = data.columns.str.lower() # Ensure all other columns are lower
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Data loaded from {file_path} (using first column as timestamp) for date range {start_date} to {stop_date}")
|
||||
return data.set_index('timestamp')
|
||||
except Exception as e:
|
||||
if self.logging is not None:
|
||||
self.logging.error(f"Error loading data from {file_path}: {e}")
|
||||
# Return an empty DataFrame with a DatetimeIndex
|
||||
return pd.DataFrame(index=pd.to_datetime([]))
|
||||
|
||||
def save_data(self, data: pd.DataFrame, file_path: str):
|
||||
"""Save processed data to a CSV file.
|
||||
If the DataFrame has a DatetimeIndex, it's converted to float Unix timestamps
|
||||
(seconds since epoch) before saving. The index is saved as a column named 'timestamp'.
|
||||
return self.data_loader.load_data(file_path, start_date, stop_date)
|
||||
|
||||
def save_data(self, data: pd.DataFrame, file_path: str) -> None:
|
||||
"""Save processed data to a CSV file
|
||||
|
||||
Args:
|
||||
data (pd.DataFrame): data to save.
|
||||
file_path (str): path to the data file relative to the data_dir.
|
||||
data: DataFrame to save
|
||||
file_path: path to the data file relative to the data_dir
|
||||
|
||||
Raises:
|
||||
DataSavingError: If saving fails
|
||||
"""
|
||||
data_to_save = data.copy()
|
||||
self.data_saver.save_data(data, file_path)
|
||||
|
||||
if isinstance(data_to_save.index, pd.DatetimeIndex):
|
||||
# Convert DatetimeIndex to Unix timestamp (float seconds since epoch)
|
||||
# and make it a column named 'timestamp'.
|
||||
data_to_save['timestamp'] = data_to_save.index.astype('int64') / 1e9
|
||||
# Reset index so 'timestamp' column is saved and old DatetimeIndex is not saved as a column.
|
||||
# We want the 'timestamp' column to be the first one.
|
||||
data_to_save.reset_index(drop=True, inplace=True)
|
||||
# Ensure 'timestamp' is the first column if other columns exist
|
||||
if 'timestamp' in data_to_save.columns and len(data_to_save.columns) > 1:
|
||||
cols = ['timestamp'] + [col for col in data_to_save.columns if col != 'timestamp']
|
||||
data_to_save = data_to_save[cols]
|
||||
elif pd.api.types.is_numeric_dtype(data_to_save.index.dtype):
|
||||
# If index is already numeric (e.g. float Unix timestamps from a previous save/load cycle),
|
||||
# make it a column named 'timestamp'.
|
||||
data_to_save['timestamp'] = data_to_save.index
|
||||
data_to_save.reset_index(drop=True, inplace=True)
|
||||
if 'timestamp' in data_to_save.columns and len(data_to_save.columns) > 1:
|
||||
cols = ['timestamp'] + [col for col in data_to_save.columns if col != 'timestamp']
|
||||
data_to_save = data_to_save[cols]
|
||||
else:
|
||||
# For other index types, or if no index that we want to specifically handle,
|
||||
# save with the current index. pandas to_csv will handle it.
|
||||
# This branch might be removed if we strictly expect either DatetimeIndex or a numeric one from previous save.
|
||||
pass # data_to_save remains as is, to_csv will write its index if index=True
|
||||
|
||||
# Save to CSV, ensuring the 'timestamp' column (if created) is written, and not the DataFrame's active index.
|
||||
full_path = os.path.join(self.data_dir, file_path)
|
||||
data_to_save.to_csv(full_path, index=False) # index=False because timestamp is now a column
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Data saved to {full_path} with Unix timestamp column.")
|
||||
|
||||
|
||||
def format_row(self, row):
|
||||
def format_row(self, row: Dict[str, Any]) -> Dict[str, str]:
|
||||
"""Format a row for a combined results CSV file
|
||||
|
||||
Args:
|
||||
row: row to format
|
||||
row: Dictionary containing row data
|
||||
|
||||
Returns:
|
||||
formatted row
|
||||
Dictionary with formatted values
|
||||
"""
|
||||
return self.result_formatter.format_row(row)
|
||||
|
||||
return {
|
||||
"timeframe": row["timeframe"],
|
||||
"stop_loss_pct": f"{row['stop_loss_pct']*100:.2f}%",
|
||||
"n_trades": row["n_trades"],
|
||||
"n_stop_loss": row["n_stop_loss"],
|
||||
"win_rate": f"{row['win_rate']*100:.2f}%",
|
||||
"max_drawdown": f"{row['max_drawdown']*100:.2f}%",
|
||||
"avg_trade": f"{row['avg_trade']*100:.2f}%",
|
||||
"profit_ratio": f"{row['profit_ratio']*100:.2f}%",
|
||||
"final_usd": f"{row['final_usd']:.2f}",
|
||||
"total_fees_usd": f"{row['total_fees_usd']:.2f}",
|
||||
}
|
||||
|
||||
def write_results_chunk(self, filename, fieldnames, rows, write_header=False, initial_usd=None):
|
||||
def write_results_chunk(self, filename: str, fieldnames: List[str],
|
||||
rows: List[Dict], write_header: bool = False,
|
||||
initial_usd: Optional[float] = None) -> None:
|
||||
"""Write a chunk of results to a CSV file
|
||||
|
||||
Args:
|
||||
filename: filename to write to
|
||||
fieldnames: list of fieldnames
|
||||
rows: list of rows
|
||||
write_header: whether to write the header
|
||||
initial_usd: initial USD
|
||||
initial_usd: initial USD value for header comment
|
||||
"""
|
||||
mode = 'w' if write_header else 'a'
|
||||
self.result_formatter.write_results_chunk(
|
||||
filename, fieldnames, rows, write_header, initial_usd
|
||||
)
|
||||
|
||||
def write_backtest_results(self, filename: str, fieldnames: List[str],
|
||||
rows: List[Dict], metadata_lines: Optional[List[str]] = None) -> str:
|
||||
"""Write combined backtest results to a CSV file
|
||||
|
||||
with open(filename, mode, newline="") as csvfile:
|
||||
writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
|
||||
if write_header:
|
||||
csvfile.write(f"# initial_usd: {initial_usd}\n")
|
||||
writer.writeheader()
|
||||
|
||||
for row in rows:
|
||||
# Only keep keys that are in fieldnames
|
||||
filtered_row = {k: v for k, v in row.items() if k in fieldnames}
|
||||
writer.writerow(filtered_row)
|
||||
|
||||
def write_backtest_results(self, filename, fieldnames, rows, metadata_lines=None):
|
||||
"""Write a combined results to a CSV file
|
||||
Args:
|
||||
filename: filename to write to
|
||||
fieldnames: list of fieldnames
|
||||
rows: list of rows
|
||||
rows: list of result dictionaries
|
||||
metadata_lines: optional list of strings to write as header comments
|
||||
|
||||
Returns:
|
||||
Full path to the written file
|
||||
"""
|
||||
fname = os.path.join(self.results_dir, filename)
|
||||
with open(fname, "w", newline="") as csvfile:
|
||||
if metadata_lines:
|
||||
for line in metadata_lines:
|
||||
csvfile.write(f"{line}\n")
|
||||
writer = csv.DictWriter(csvfile, fieldnames=fieldnames, delimiter='\t')
|
||||
writer.writeheader()
|
||||
for row in rows:
|
||||
writer.writerow(self.format_row(row))
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Combined results written to {fname}")
|
||||
|
||||
def write_trades(self, all_trade_rows, trades_fieldnames):
|
||||
"""Write trades to a CSV file
|
||||
return self.result_formatter.write_backtest_results(
|
||||
filename, fieldnames, rows, metadata_lines
|
||||
)
|
||||
|
||||
def write_trades(self, all_trade_rows: List[Dict], trades_fieldnames: List[str]) -> None:
|
||||
"""Write trades to separate CSV files grouped by timeframe and stop loss
|
||||
|
||||
Args:
|
||||
all_trade_rows: list of trade rows
|
||||
all_trade_rows: list of trade dictionaries
|
||||
trades_fieldnames: list of trade fieldnames
|
||||
logging: logging object
|
||||
"""
|
||||
|
||||
trades_by_combo = defaultdict(list)
|
||||
for trade in all_trade_rows:
|
||||
tf = trade.get("timeframe")
|
||||
sl = trade.get("stop_loss_pct")
|
||||
trades_by_combo[(tf, sl)].append(trade)
|
||||
|
||||
for (tf, sl), trades in trades_by_combo.items():
|
||||
sl_percent = int(round(sl * 100))
|
||||
trades_filename = os.path.join(self.results_dir, f"trades_{tf}_ST{sl_percent}pct.csv")
|
||||
with open(trades_filename, "w", newline="") as csvfile:
|
||||
writer = csv.DictWriter(csvfile, fieldnames=trades_fieldnames)
|
||||
writer.writeheader()
|
||||
for trade in trades:
|
||||
writer.writerow({k: trade.get(k, "") for k in trades_fieldnames})
|
||||
if self.logging is not None:
|
||||
self.logging.info(f"Trades written to {trades_filename}")
|
||||
self.result_formatter.write_trades(all_trade_rows, trades_fieldnames)
|
||||
73
cycles/utils/storage_utils.py
Normal file
@ -0,0 +1,73 @@
import pandas as pd


class TimestampParsingError(Exception):
    """Custom exception for timestamp parsing errors"""
    pass


class DataLoadingError(Exception):
    """Custom exception for data loading errors"""
    pass


class DataSavingError(Exception):
    """Custom exception for data saving errors"""
    pass


def _parse_timestamp_column(data: pd.DataFrame, column_name: str) -> pd.DataFrame:
    """Parse timestamp column handling both Unix timestamps and datetime strings

    Args:
        data: DataFrame containing the timestamp column
        column_name: Name of the timestamp column

    Returns:
        DataFrame with parsed timestamp column

    Raises:
        TimestampParsingError: If timestamp parsing fails
    """
    try:
        sample_timestamp = str(data[column_name].iloc[0])
        try:
            # Check if it's a Unix timestamp (numeric)
            float(sample_timestamp)
            # It's a Unix timestamp, convert using unit='s'
            data[column_name] = pd.to_datetime(data[column_name], unit='s')
        except ValueError:
            # It's already in datetime string format, convert without unit
            data[column_name] = pd.to_datetime(data[column_name])
        return data
    except Exception as e:
        raise TimestampParsingError(f"Failed to parse timestamp column '{column_name}': {e}") from e


def _filter_by_date_range(data: pd.DataFrame, timestamp_col: str,
                          start_date: pd.Timestamp, stop_date: pd.Timestamp) -> pd.DataFrame:
    """Filter DataFrame by date range

    Args:
        data: DataFrame to filter
        timestamp_col: Name of timestamp column
        start_date: Start date for filtering
        stop_date: Stop date for filtering

    Returns:
        Filtered DataFrame
    """
    return data[(data[timestamp_col] >= start_date) & (data[timestamp_col] <= stop_date)]


def _normalize_column_names(data: pd.DataFrame) -> pd.DataFrame:
    """Convert all column names to lowercase

    Args:
        data: DataFrame to normalize

    Returns:
        DataFrame with lowercase column names
    """
    data.columns = data.columns.str.lower()
    return data
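Taken together, the helpers above form a small preprocessing pipeline. A hedged sketch on a toy frame (values are illustrative; the helpers are imported from the module shown above):

```python
import pandas as pd
from cycles.utils.storage_utils import (
    _parse_timestamp_column, _filter_by_date_range, _normalize_column_names,
)

raw = pd.DataFrame({
    "Timestamp": [1704067200, 1704067260, 1706745600],   # Unix seconds
    "Close": [42000.0, 42010.5, 43000.0],
})

df = _normalize_column_names(raw)               # 'Timestamp' -> 'timestamp', 'Close' -> 'close'
df = _parse_timestamp_column(df, "timestamp")   # Unix seconds -> datetime64
df = _filter_by_date_range(df, "timestamp",
                           pd.Timestamp("2024-01-01"), pd.Timestamp("2024-01-31"))
print(len(df))                                  # 2 rows survive the January filter
```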
@ -10,10 +10,12 @@ class SystemUtils:
        """Determine optimal number of worker processes based on system resources"""
        cpu_count = os.cpu_count() or 4
        memory_gb = psutil.virtual_memory().total / (1024**3)
        # Heuristic: Use 75% of cores, but cap based on available memory
        # Assume each worker needs ~2GB for large datasets
        workers_by_memory = max(1, int(memory_gb / 2))
        workers_by_cpu = max(1, int(cpu_count * 0.75))

        # OPTIMIZATION: More aggressive worker allocation for better performance
        workers_by_memory = max(1, int(memory_gb / 2))  # 2GB per worker
        workers_by_cpu = max(1, int(cpu_count * 0.8))   # Use 80% of CPU cores
        optimal_workers = min(workers_by_cpu, workers_by_memory, 8)  # Cap at 8 workers

        if self.logging is not None:
            self.logging.info(f"Using {min(workers_by_cpu, workers_by_memory)} workers for processing")
        return min(workers_by_cpu, workers_by_memory)
            self.logging.info(f"Using {optimal_workers} workers for processing (CPU-based: {workers_by_cpu}, Memory-based: {workers_by_memory})")
        return optimal_workers
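For concreteness, a worked instance of the allocation heuristic above; the hardware numbers are illustrative only.

```python
# Illustrative numbers: a 16-core machine with 32 GB of RAM.
cpu_count = 16
memory_gb = 32.0
workers_by_cpu = max(1, int(cpu_count * 0.8))       # 12
workers_by_memory = max(1, int(memory_gb / 2))      # 16
optimal_workers = min(workers_by_cpu, workers_by_memory, 8)
print(optimal_workers)                              # 8 -> the hard cap wins here
```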
@ -1,73 +1,207 @@
|
||||
# Storage Utilities
|
||||
|
||||
This document describes the storage utility functions found in `cycles/utils/storage.py`.
|
||||
This document describes the refactored storage utilities found in `cycles/utils/` that provide modular, maintainable data and results management.
|
||||
|
||||
## Overview
|
||||
|
||||
The `storage.py` module provides a `Storage` class designed for handling the loading and saving of data and results. It supports operations with CSV and JSON files and integrates with pandas DataFrames for data manipulation. The class also manages the creation of necessary `results` and `data` directories.
|
||||
The storage utilities have been refactored into a modular architecture with clear separation of concerns:
|
||||
|
||||
- **`Storage`** - Main coordinator class providing unified interface (backward compatible)
|
||||
- **`DataLoader`** - Handles loading data from various file formats
|
||||
- **`DataSaver`** - Manages saving data with proper format handling
|
||||
- **`ResultFormatter`** - Formats and writes backtest results to CSV files
|
||||
- **`storage_utils`** - Shared utilities and custom exceptions
|
||||
|
||||
This design improves maintainability, testability, and follows the single responsibility principle.
|
||||
|
||||
## Constants
|
||||
|
||||
- `RESULTS_DIR`: Defines the default directory name for storing results (default: "results").
|
||||
- `DATA_DIR`: Defines the default directory name for storing input data (default: "data").
|
||||
- `RESULTS_DIR`: Default directory for storing results (default: "../results")
|
||||
- `DATA_DIR`: Default directory for storing input data (default: "../data")
|
||||
|
||||
## Class: `Storage`
|
||||
## Main Classes
|
||||
|
||||
Handles storage operations for data and results.
|
||||
### `Storage` (Coordinator Class)
|
||||
|
||||
### `__init__(self, logging=None, results_dir=RESULTS_DIR, data_dir=DATA_DIR)`
|
||||
The main interface that coordinates all storage operations while maintaining backward compatibility.
|
||||
|
||||
- **Description**: Initializes the `Storage` class. It creates the results and data directories if they don't already exist.
|
||||
- **Parameters**:
|
||||
- `logging` (optional): A logging instance for outputting information. Defaults to `None`.
|
||||
- `results_dir` (str, optional): Path to the directory for storing results. Defaults to `RESULTS_DIR`.
|
||||
- `data_dir` (str, optional): Path to the directory for storing data. Defaults to `DATA_DIR`.
|
||||
#### `__init__(self, logging=None, results_dir=RESULTS_DIR, data_dir=DATA_DIR)`
|
||||
|
||||
### `load_data(self, file_path, start_date, stop_date)`
|
||||
**Description**: Initializes the Storage coordinator with component instances.
|
||||
|
||||
- **Description**: Loads data from a specified file (CSV or JSON), performs type optimization, filters by date range, and converts column names to lowercase. The timestamp column is set as the DataFrame index.
|
||||
- **Parameters**:
|
||||
- `file_path` (str): Path to the data file (relative to `data_dir`).
|
||||
- `start_date` (datetime-like): The start date for filtering data.
|
||||
- `stop_date` (datetime-like): The end date for filtering data.
|
||||
- **Returns**: `pandas.DataFrame` - The loaded and processed data, with a `timestamp` index. Returns an empty DataFrame on error.
|
||||
**Parameters**:
|
||||
- `logging` (optional): A logging instance for outputting information
|
||||
- `results_dir` (str, optional): Path to the directory for storing results
|
||||
- `data_dir` (str, optional): Path to the directory for storing data
|
||||
|
||||
### `save_data(self, data: pd.DataFrame, file_path: str)`
|
||||
**Creates**: Component instances for DataLoader, DataSaver, and ResultFormatter
|
||||
|
||||
- **Description**: Saves a pandas DataFrame to a CSV file within the `data_dir`. If the DataFrame has a DatetimeIndex, it's converted to a Unix timestamp (seconds since epoch) and stored in a column named 'timestamp', which becomes the first column in the CSV. The DataFrame's active index is not saved if a 'timestamp' column is created.
|
||||
- **Parameters**:
|
||||
- `data` (pd.DataFrame): The DataFrame to save.
|
||||
- `file_path` (str): Path to the data file (relative to `data_dir`).
|
||||
#### `load_data(self, file_path: str, start_date: Union[str, pd.Timestamp], stop_date: Union[str, pd.Timestamp]) -> pd.DataFrame`
|
||||
|
||||
### `format_row(self, row)`
|
||||
**Description**: Loads data with optimized dtypes and filtering, supporting CSV and JSON input.
|
||||
|
||||
- **Description**: Formats a dictionary row for output to a combined results CSV file, applying specific string formatting for percentages and float values.
|
||||
- **Parameters**:
|
||||
- `row` (dict): The row of data to format.
|
||||
- **Returns**: `dict` - The formatted row.
|
||||
**Parameters**:
|
||||
- `file_path` (str): Path to the data file (relative to `data_dir`)
|
||||
- `start_date` (datetime-like): The start date for filtering data
|
||||
- `stop_date` (datetime-like): The end date for filtering data
|
||||
|
||||
### `write_results_chunk(self, filename, fieldnames, rows, write_header=False, initial_usd=None)`
|
||||
**Returns**: `pandas.DataFrame` with timestamp index
|
||||
|
||||
- **Description**: Writes a chunk of results (list of dictionaries) to a CSV file. Can append to an existing file or write a new one with a header. An optional `initial_usd` can be written as a comment in the header.
|
||||
- **Parameters**:
|
||||
- `filename` (str): The name of the file to write to (path is absolute or relative to current working dir).
|
||||
- `fieldnames` (list): A list of strings representing the CSV header/column names.
|
||||
- `rows` (list): A list of dictionaries, where each dictionary is a row.
|
||||
- `write_header` (bool, optional): If `True`, writes the header. Defaults to `False`.
|
||||
- `initial_usd` (numeric, optional): If provided and `write_header` is `True`, this value is written as a comment in the CSV header. Defaults to `None`.
|
||||
**Raises**: `DataLoadingError` if loading fails
|
||||
|
||||
### `write_results_combined(self, filename, fieldnames, rows)`
|
||||
#### `save_data(self, data: pd.DataFrame, file_path: str) -> None`
|
||||
|
||||
- **Description**: Writes combined results to a CSV file in the `results_dir`. Uses tab as a delimiter and formats rows using `format_row`.
|
||||
- **Parameters**:
|
||||
- `filename` (str): The name of the file to write to (relative to `results_dir`).
|
||||
- `fieldnames` (list): A list of strings representing the CSV header/column names.
|
||||
- `rows` (list): A list of dictionaries, where each dictionary is a row.
|
||||
**Description**: Saves processed data to a CSV file with proper timestamp handling.
|
||||
|
||||
### `write_trades(self, all_trade_rows, trades_fieldnames)`
|
||||
**Parameters**:
|
||||
- `data` (pd.DataFrame): The DataFrame to save
|
||||
- `file_path` (str): Path to the data file (relative to `data_dir`)
|
||||
|
||||
- **Description**: Writes trade data to separate CSV files based on timeframe and stop-loss percentage. Files are named `trades_{tf}_ST{sl_percent}pct.csv` and stored in `results_dir`.
|
||||
- **Parameters**:
|
||||
- `all_trade_rows` (list): A list of dictionaries, where each dictionary represents a trade.
|
||||
- `trades_fieldnames` (list): A list of strings for the CSV header of trade files.
|
||||
**Raises**: `DataSavingError` if saving fails
|
||||
|
||||
#### `format_row(self, row: Dict[str, Any]) -> Dict[str, str]`
|
||||
|
||||
**Description**: Formats a dictionary row for output to results CSV files.
|
||||
|
||||
**Parameters**:
|
||||
- `row` (dict): The row of data to format
|
||||
|
||||
**Returns**: `dict` with formatted values (percentages, currency, etc.)
|
||||
|
||||
#### `write_results_chunk(self, filename: str, fieldnames: List[str], rows: List[Dict], write_header: bool = False, initial_usd: Optional[float] = None) -> None`
|
||||
|
||||
**Description**: Writes a chunk of results to a CSV file with optional header.
|
||||
|
||||
**Parameters**:
|
||||
- `filename` (str): The name of the file to write to
|
||||
- `fieldnames` (list): CSV header/column names
|
||||
- `rows` (list): List of dictionaries representing rows
|
||||
- `write_header` (bool, optional): Whether to write the header
|
||||
- `initial_usd` (float, optional): Initial USD value for header comment
|
||||
|
||||
#### `write_backtest_results(self, filename: str, fieldnames: List[str], rows: List[Dict], metadata_lines: Optional[List[str]] = None) -> str`
|
||||
|
||||
**Description**: Writes combined backtest results to a CSV file with metadata.
|
||||
|
||||
**Parameters**:
|
||||
- `filename` (str): Name of the file to write to (relative to `results_dir`)
|
||||
- `fieldnames` (list): CSV header/column names
|
||||
- `rows` (list): List of result dictionaries
|
||||
- `metadata_lines` (list, optional): Header comment lines
|
||||
|
||||
**Returns**: Full path to the written file
|
||||
|
||||
#### `write_trades(self, all_trade_rows: List[Dict], trades_fieldnames: List[str]) -> None`
|
||||
|
||||
**Description**: Writes trade data to separate CSV files grouped by timeframe and stop-loss.
|
||||
|
||||
**Parameters**:
|
||||
- `all_trade_rows` (list): List of trade dictionaries
|
||||
- `trades_fieldnames` (list): CSV header for trade files
|
||||
|
||||
**Files Created**: `trades_{timeframe}_ST{sl_percent}pct.csv` in `results_dir`

### `DataLoader`

Handles loading and preprocessing of data from various file formats.

#### Key Features:
- Supports CSV and JSON formats
- Optimized pandas dtypes for financial data
- Intelligent timestamp parsing (Unix timestamps and datetime strings)
- Date range filtering
- Column name normalization (lowercase)
- Comprehensive error handling

#### Methods:
- `load_data()` - Main loading interface
- `_load_json_data()` - JSON-specific loading logic
- `_load_csv_data()` - CSV-specific loading logic
- `_process_csv_timestamps()` - Timestamp parsing for CSV data
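A minimal `DataLoader` usage sketch, assuming the constructor and `load_data` signature documented above (the file name and dates are illustrative):

```python
import logging
from cycles.utils.data_loader import DataLoader

logger = logging.getLogger(__name__)
loader = DataLoader("data", logger)

# Returns a DataFrame with lowercase columns and a timestamp index
df = loader.load_data("btcusd_1-min_data.csv", "2024-01-01", "2024-03-01")
print(df.columns.tolist())  # e.g. ['open', 'high', 'low', 'close', 'volume']
```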
### `DataSaver`

Manages saving data with proper format handling and index conversion.

#### Key Features:
- Converts DatetimeIndex to Unix timestamps for CSV compatibility
- Handles numeric indexes appropriately
- Ensures 'timestamp' column is first in output
- Comprehensive error handling and logging

#### Methods:
- `save_data()` - Main saving interface
- `_prepare_data_for_saving()` - Data preparation logic
- `_convert_datetime_index_to_timestamp()` - DatetimeIndex conversion
- `_convert_numeric_index_to_timestamp()` - Numeric index conversion
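The core of the DatetimeIndex conversion can be sketched in a few lines of pandas. This mirrors the behaviour described above rather than quoting `DataSaver` itself, and the frame and output file are illustrative:

```python
import pandas as pd

df = pd.DataFrame(
    {"close": [100.0, 101.5]},
    index=pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:01"]),
)

out = df.copy()
out["timestamp"] = out.index.astype("int64") / 1e9   # Unix seconds as float
out.reset_index(drop=True, inplace=True)
out = out[["timestamp"] + [c for c in out.columns if c != "timestamp"]]
out.to_csv("sample.csv", index=False)                 # 'timestamp' ends up as the first column
```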
### `ResultFormatter`

Handles formatting and writing of backtest results to CSV files.

#### Key Features:
- Consistent formatting for percentages and currency
- Grouped trade file writing by timeframe/stop-loss
- Metadata header support
- Tab-delimited output for results
- Error handling for all write operations

#### Methods:
- `format_row()` - Format individual result rows
- `write_results_chunk()` - Write result chunks with headers
- `write_backtest_results()` - Write combined results with metadata
- `write_trades()` - Write grouped trade files
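`format_row` turns raw ratios into display strings; a short sketch with made-up values:

```python
from cycles.utils.result_formatter import ResultFormatter

formatter = ResultFormatter("results")
row = {
    "timeframe": "1D", "stop_loss_pct": 0.05, "n_trades": 42, "n_stop_loss": 7,
    "win_rate": 0.5714, "max_drawdown": 0.1234, "avg_trade": 0.0021,
    "profit_ratio": 1.85, "final_usd": 11234.56, "total_fees_usd": 87.65,
}
formatted = formatter.format_row(row)
print(formatted["stop_loss_pct"], formatted["win_rate"], formatted["final_usd"])
# 5.00% 57.14% 11234.56
```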
## Utility Functions and Exceptions

### Custom Exceptions

- **`TimestampParsingError`** - Raised when timestamp parsing fails
- **`DataLoadingError`** - Raised when data loading operations fail
- **`DataSavingError`** - Raised when data saving operations fail

### Utility Functions

- **`_parse_timestamp_column()`** - Parse timestamp columns with format detection
- **`_filter_by_date_range()`** - Filter DataFrames by date range
- **`_normalize_column_names()`** - Convert column names to lowercase
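The format detection in `_parse_timestamp_column` can be seen on two toy frames (illustrative values; both resolve to the same datetime):

```python
import pandas as pd
from cycles.utils.storage_utils import _parse_timestamp_column

unix_df = pd.DataFrame({"timestamp": [1704067200, 1704067260]})
text_df = pd.DataFrame({"timestamp": ["2024-01-01 00:00:00", "2024-01-01 00:01:00"]})

print(_parse_timestamp_column(unix_df, "timestamp")["timestamp"].iloc[0])
print(_parse_timestamp_column(text_df, "timestamp")["timestamp"].iloc[0])
# Both print: 2024-01-01 00:00:00
```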
## Architecture Benefits

### Separation of Concerns
- Each class has a single, well-defined responsibility
- Data loading, saving, and result formatting are cleanly separated
- Shared utilities are extracted to prevent code duplication

### Maintainability
- All files are under 250 lines (quality gate)
- All methods are under 50 lines (quality gate)
- Clear interfaces and comprehensive documentation
- Type hints for better IDE support and clarity

### Error Handling
- Custom exceptions for different error types
- Consistent error logging patterns
- Graceful degradation (empty DataFrames on load failure)

### Backward Compatibility
- The `Storage` class maintains exactly the same public interface
- All existing code continues to work unchanged
- Component classes are available for advanced usage
## Migration Notes

The refactoring maintains full backward compatibility. Existing code using `Storage` will continue to work unchanged. For new code, consider using the component classes directly for more focused functionality:

```python
# Existing pattern (still works)
from cycles.utils.storage import Storage
storage = Storage(logging=logger)
data = storage.load_data('file.csv', start, end)

# New pattern for focused usage
from cycles.utils.data_loader import DataLoader
loader = DataLoader(data_dir, logger)
data = loader.load_data('file.csv', start, end)
```
466
main.py
@ -1,337 +1,175 @@
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Backtest execution script for cryptocurrency trading strategies
|
||||
Refactored for improved maintainability and error handling
|
||||
"""
|
||||
|
||||
import logging
|
||||
import concurrent.futures
|
||||
import os
|
||||
import datetime
|
||||
import argparse
|
||||
import json
|
||||
import sys
|
||||
from pathlib import Path
|
||||
|
||||
# Import custom modules
|
||||
from config_manager import ConfigManager
|
||||
from backtest_runner import BacktestRunner
|
||||
from result_processor import ResultProcessor
|
||||
from cycles.utils.storage import Storage
|
||||
from cycles.utils.system import SystemUtils
|
||||
from cycles.backtest import Backtest
|
||||
from cycles.charts import BacktestCharts
|
||||
from cycles.strategies import create_strategy_manager
|
||||
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(asctime)s [%(levelname)s] %(message)s",
|
||||
handlers=[
|
||||
logging.FileHandler("backtest.log"),
|
||||
logging.StreamHandler()
|
||||
]
|
||||
)
|
||||
|
||||
def strategy_manager_init(backtester: Backtest):
|
||||
"""Strategy Manager initialization function"""
|
||||
# This will be called by Backtest.__init__, but actual initialization
|
||||
# happens in strategy_manager.initialize()
|
||||
pass
|
||||
|
||||
def strategy_manager_entry(backtester: Backtest, df_index: int):
|
||||
"""Strategy Manager entry function"""
|
||||
return backtester.strategy_manager.get_entry_signal(backtester, df_index)
|
||||
|
||||
def strategy_manager_exit(backtester: Backtest, df_index: int):
|
||||
"""Strategy Manager exit function"""
|
||||
return backtester.strategy_manager.get_exit_signal(backtester, df_index)
|
||||
|
||||
def process_timeframe_data(data_1min, timeframe, config, debug=False):
|
||||
"""Process a timeframe using Strategy Manager with configuration"""
|
||||
|
||||
results_rows = []
|
||||
trade_rows = []
|
||||
|
||||
# Extract values from config
|
||||
initial_usd = config['initial_usd']
|
||||
strategy_config = {
|
||||
"strategies": config['strategies'],
|
||||
"combination_rules": config['combination_rules']
|
||||
}
|
||||
|
||||
# Create and initialize strategy manager
|
||||
if not strategy_config:
|
||||
logging.error("No strategy configuration provided")
|
||||
return results_rows, trade_rows
|
||||
def setup_logging() -> logging.Logger:
|
||||
"""Configure and return logging instance"""
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
strategy_manager = create_strategy_manager(strategy_config)
|
||||
|
||||
# Get the primary timeframe from the first strategy for backtester setup
|
||||
primary_strategy = strategy_manager.strategies[0]
|
||||
primary_timeframe = primary_strategy.get_timeframes()[0]
|
||||
|
||||
# For BBRS strategy, it works with 1-minute data directly and handles internal resampling
|
||||
# For other strategies, use their preferred timeframe
|
||||
if primary_strategy.name == "bbrs":
|
||||
# BBRS strategy processes 1-minute data and outputs signals on its internal timeframes
|
||||
# Use 1-minute data for backtester working dataframe
|
||||
working_df = data_1min.copy()
|
||||
else:
|
||||
# Other strategies specify their preferred timeframe
|
||||
# Let the primary strategy resample the data to get the working dataframe
|
||||
primary_strategy._resample_data(data_1min)
|
||||
working_df = primary_strategy.get_primary_timeframe_data()
|
||||
|
||||
# Prepare working dataframe for backtester (ensure timestamp column)
|
||||
working_df_for_backtest = working_df.copy().reset_index()
|
||||
if 'index' in working_df_for_backtest.columns:
|
||||
working_df_for_backtest = working_df_for_backtest.rename(columns={'index': 'timestamp'})
|
||||
|
||||
# Initialize backtest with strategy manager initialization
|
||||
backtester = Backtest(initial_usd, working_df_for_backtest, working_df_for_backtest, strategy_manager_init)
|
||||
|
||||
# Store original min1_df for strategy processing
|
||||
backtester.original_df = data_1min
|
||||
|
||||
# Attach strategy manager to backtester and initialize
|
||||
backtester.strategy_manager = strategy_manager
|
||||
strategy_manager.initialize(backtester)
|
||||
|
||||
# Run backtest with strategy manager functions
|
||||
results = backtester.run(
|
||||
strategy_manager_entry,
|
||||
strategy_manager_exit,
|
||||
debug
|
||||
logging.basicConfig(
|
||||
level=logging.INFO,
|
||||
format="%(asctime)s [%(levelname)s] %(message)s",
|
||||
handlers=[
|
||||
logging.FileHandler("backtest.log"),
|
||||
logging.StreamHandler()
|
||||
]
|
||||
)
|
||||
|
||||
n_trades = results["n_trades"]
|
||||
trades = results.get('trades', [])
|
||||
wins = [1 for t in trades if t['exit'] is not None and t['exit'] > t['entry']]
|
||||
n_winning_trades = len(wins)
|
||||
total_profit = sum(trade['profit_pct'] for trade in trades)
|
||||
total_loss = sum(-trade['profit_pct'] for trade in trades if trade['profit_pct'] < 0)
|
||||
win_rate = n_winning_trades / n_trades if n_trades > 0 else 0
|
||||
avg_trade = total_profit / n_trades if n_trades > 0 else 0
|
||||
profit_ratio = total_profit / total_loss if total_loss > 0 else float('inf')
|
||||
cumulative_profit = 0
|
||||
max_drawdown = 0
|
||||
peak = 0
|
||||
|
||||
for trade in trades:
|
||||
cumulative_profit += trade['profit_pct']
|
||||
|
||||
if cumulative_profit > peak:
|
||||
peak = cumulative_profit
|
||||
drawdown = peak - cumulative_profit
|
||||
|
||||
if drawdown > max_drawdown:
|
||||
max_drawdown = drawdown
|
||||
|
||||
final_usd = initial_usd
|
||||
|
||||
for trade in trades:
|
||||
final_usd *= (1 + trade['profit_pct'])
|
||||
|
||||
total_fees_usd = sum(trade.get('fee_usd', 0.0) for trade in trades)
|
||||
|
||||
# Get stop_loss_pct from the first strategy for reporting
|
||||
# In multi-strategy setups, strategies can have different stop_loss_pct values
|
||||
stop_loss_pct = primary_strategy.params.get("stop_loss_pct", "N/A")
|
||||
|
||||
# Update row to include timeframe information
|
||||
row = {
|
||||
"timeframe": f"{timeframe}({primary_timeframe})", # Show actual timeframe used
|
||||
"stop_loss_pct": stop_loss_pct,
|
||||
"n_trades": n_trades,
|
||||
"n_stop_loss": sum(1 for trade in trades if 'type' in trade and trade['type'] == 'STOP_LOSS'),
|
||||
"win_rate": win_rate,
|
||||
"max_drawdown": max_drawdown,
|
||||
"avg_trade": avg_trade,
|
||||
"total_profit": total_profit,
|
||||
"total_loss": total_loss,
|
||||
"profit_ratio": profit_ratio,
|
||||
"initial_usd": initial_usd,
|
||||
"final_usd": final_usd,
|
||||
"total_fees_usd": total_fees_usd,
|
||||
}
|
||||
results_rows.append(row)
|
||||
|
||||
for trade in trades:
|
||||
trade_rows.append({
|
||||
"timeframe": f"{timeframe}({primary_timeframe})",
|
||||
"stop_loss_pct": stop_loss_pct,
|
||||
"entry_time": trade.get("entry_time"),
|
||||
"exit_time": trade.get("exit_time"),
|
||||
"entry_price": trade.get("entry"),
|
||||
"exit_price": trade.get("exit"),
|
||||
"profit_pct": trade.get("profit_pct"),
|
||||
"type": trade.get("type"),
|
||||
"fee_usd": trade.get("fee_usd"),
|
||||
})
|
||||
|
||||
# Log strategy summary
|
||||
strategy_summary = strategy_manager.get_strategy_summary()
|
||||
logging.info(f"Timeframe: {timeframe}({primary_timeframe}), Stop Loss: {stop_loss_pct}, "
|
||||
f"Trades: {n_trades}, Strategies: {[s['name'] for s in strategy_summary['strategies']]}")
|
||||
|
||||
if debug:
|
||||
# Plot after each backtest run
|
||||
try:
|
||||
# Check if any strategy has processed_data for universal plotting
|
||||
processed_data = None
|
||||
for strategy in strategy_manager.strategies:
|
||||
if hasattr(backtester, 'processed_data') and backtester.processed_data is not None:
|
||||
processed_data = backtester.processed_data
|
||||
break
|
||||
|
||||
if processed_data is not None and not processed_data.empty:
|
||||
# Format strategy data with actual executed trades for universal plotting
|
||||
formatted_data = BacktestCharts.format_strategy_data_with_trades(processed_data, results)
|
||||
# Plot using universal function
|
||||
BacktestCharts.plot_data(formatted_data)
|
||||
else:
|
||||
# Fallback to meta_trend plot if available
|
||||
if "meta_trend" in backtester.strategies:
|
||||
meta_trend = backtester.strategies["meta_trend"]
|
||||
# Use the working dataframe for plotting
|
||||
BacktestCharts.plot(working_df, meta_trend)
|
||||
else:
|
||||
print("No plotting data available")
|
||||
except Exception as e:
|
||||
print(f"Plotting failed: {e}")
|
||||
return logger
|
||||
|
||||
return results_rows, trade_rows
|
||||
|
||||
def process(timeframe_info, debug=False):
|
||||
"""Process a single timeframe with strategy config"""
|
||||
timeframe, data_1min, config = timeframe_info
|
||||
|
||||
# Pass the essential data and full config
|
||||
results_rows, all_trade_rows = process_timeframe_data(
|
||||
data_1min, timeframe, config, debug=debug
|
||||
)
|
||||
return results_rows, all_trade_rows
|
||||
|
||||
def aggregate_results(all_rows):
|
||||
"""Aggregate results per stop_loss_pct and per rule (timeframe)"""
|
||||
from collections import defaultdict
|
||||
|
||||
grouped = defaultdict(list)
|
||||
for row in all_rows:
|
||||
key = (row['timeframe'], row['stop_loss_pct'])
|
||||
grouped[key].append(row)
|
||||
|
||||
summary_rows = []
|
||||
for (rule, stop_loss_pct), rows in grouped.items():
|
||||
n_months = len(rows)
|
||||
total_trades = sum(r['n_trades'] for r in rows)
|
||||
total_stop_loss = sum(r['n_stop_loss'] for r in rows)
|
||||
avg_win_rate = np.mean([r['win_rate'] for r in rows])
|
||||
avg_max_drawdown = np.mean([r['max_drawdown'] for r in rows])
|
||||
avg_avg_trade = np.mean([r['avg_trade'] for r in rows])
|
||||
avg_profit_ratio = np.mean([r['profit_ratio'] for r in rows])
|
||||
|
||||
# Calculate final USD
|
||||
final_usd = np.mean([r.get('final_usd', initial_usd) for r in rows])
|
||||
total_fees_usd = np.mean([r.get('total_fees_usd') for r in rows])
|
||||
|
||||
summary_rows.append({
|
||||
"timeframe": rule,
|
||||
"stop_loss_pct": stop_loss_pct,
|
||||
"n_trades": total_trades,
|
||||
"n_stop_loss": total_stop_loss,
|
||||
"win_rate": avg_win_rate,
|
||||
"max_drawdown": avg_max_drawdown,
|
||||
"avg_trade": avg_avg_trade,
|
||||
"profit_ratio": avg_profit_ratio,
|
||||
"initial_usd": initial_usd,
|
||||
"final_usd": final_usd,
|
||||
"total_fees_usd": total_fees_usd,
|
||||
})
|
||||
return summary_rows
|
||||
|
||||
def get_nearest_price(df, target_date):
|
||||
if len(df) == 0:
|
||||
return None, None
|
||||
target_ts = pd.to_datetime(target_date)
|
||||
nearest_idx = df.index.get_indexer([target_ts], method='nearest')[0]
|
||||
nearest_time = df.index[nearest_idx]
|
||||
price = df.iloc[nearest_idx]['close']
|
||||
return nearest_time, price
|
||||
|
||||
if __name__ == "__main__":
|
||||
debug = True
|
||||
|
||||
parser = argparse.ArgumentParser(description="Run backtest with config file.")
|
||||
parser.add_argument("config", type=str, nargs="?", help="Path to config JSON file.")
|
||||
args = parser.parse_args()
|
||||
|
||||
# Use config_default.json as fallback if no config provided
|
||||
config_file = args.config or "configs/config_default.json"
|
||||
|
||||
try:
|
||||
with open(config_file, 'r') as f:
|
||||
config = json.load(f)
|
||||
print(f"Using config: {config_file}")
|
||||
except FileNotFoundError:
|
||||
print(f"Error: Config file '{config_file}' not found.")
|
||||
print("Available configs: configs/config_default.json, configs/config_bbrs.json, configs/config_combined.json")
|
||||
exit(1)
|
||||
except json.JSONDecodeError as e:
|
||||
print(f"Error: Invalid JSON in config file '{config_file}': {e}")
|
||||
exit(1)
|
||||
|
||||
def create_metadata_lines(config: dict, data_df, result_processor: ResultProcessor) -> list:
|
||||
"""Create metadata lines for results file"""
|
||||
start_date = config['start_date']
|
||||
if config['stop_date'] is None:
|
||||
stop_date = datetime.datetime.now().strftime("%Y-%m-%d")
|
||||
else:
|
||||
stop_date = config['stop_date']
|
||||
stop_date = config['stop_date']
|
||||
initial_usd = config['initial_usd']
|
||||
timeframes = config['timeframes']
|
||||
|
||||
timestamp = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M")
|
||||
|
||||
storage = Storage(logging=logging)
|
||||
system_utils = SystemUtils(logging=logging)
|
||||
|
||||
data_1min = storage.load_data('btcusd_1-min_data.csv', start_date, stop_date)
|
||||
|
||||
nearest_start_time, start_price = get_nearest_price(data_1min, start_date)
|
||||
nearest_stop_time, stop_price = get_nearest_price(data_1min, stop_date)
|
||||
|
||||
|
||||
# Get price information
|
||||
start_time, start_price = result_processor.get_price_info(data_df, start_date)
|
||||
stop_time, stop_price = result_processor.get_price_info(data_df, stop_date)
|
||||
|
||||
metadata_lines = [
|
||||
f"Start date\t{start_date}\tPrice\t{start_price}",
|
||||
f"Stop date\t{stop_date}\tPrice\t{stop_price}",
|
||||
f"Start date\t{start_date}\tPrice\t{start_price or 'N/A'}",
|
||||
f"Stop date\t{stop_date}\tPrice\t{stop_price or 'N/A'}",
|
||||
f"Initial USD\t{initial_usd}"
|
||||
]
|
||||
|
||||
# Create tasks for each timeframe
|
||||
tasks = [
|
||||
(name, data_1min, config)
|
||||
for name in timeframes
|
||||
]
|
||||
return metadata_lines
|
||||
|
||||
|
||||
def main():
|
||||
"""Main execution function"""
|
||||
logger = setup_logging()
|
||||
|
||||
if debug:
|
||||
all_results_rows = []
|
||||
all_trade_rows = []
|
||||
for task in tasks:
|
||||
results, trades = process(task, debug)
|
||||
if results or trades:
|
||||
all_results_rows.extend(results)
|
||||
all_trade_rows.extend(trades)
|
||||
else:
|
||||
workers = system_utils.get_optimal_workers()
|
||||
try:
|
||||
# Parse command line arguments
|
||||
parser = argparse.ArgumentParser(description="Run backtest with config file.")
|
||||
parser.add_argument("config", type=str, nargs="?", help="Path to config JSON file.")
|
||||
args = parser.parse_args()
|
||||
|
||||
# Initialize configuration manager
|
||||
config_manager = ConfigManager(logging_instance=logger)
|
||||
|
||||
# Load configuration
|
||||
logger.info("Loading configuration...")
|
||||
config = config_manager.load_config(args.config)
|
||||
|
||||
# Initialize components
|
||||
logger.info("Initializing components...")
|
||||
storage = Storage(
|
||||
data_dir=config['data_dir'],
|
||||
results_dir=config['results_dir'],
|
||||
logging=logger
|
||||
)
|
||||
system_utils = SystemUtils(logging=logger)
|
||||
result_processor = ResultProcessor(storage, logging_instance=logger)
|
||||
|
||||
# OPTIMIZATION: Disable progress for parallel execution to improve performance
|
||||
show_progress = config.get('show_progress', True)
|
||||
debug_mode = config.get('debug', 0) == 1
|
||||
|
||||
# Only show progress in debug (sequential) mode
|
||||
if not debug_mode:
|
||||
show_progress = False
|
||||
logger.info("Progress tracking disabled for parallel execution (performance optimization)")
|
||||
|
||||
runner = BacktestRunner(
|
||||
storage,
|
||||
system_utils,
|
||||
result_processor,
|
||||
logging_instance=logger,
|
||||
show_progress=show_progress
|
||||
)
|
||||
|
||||
# Validate inputs
|
||||
logger.info("Validating inputs...")
|
||||
runner.validate_inputs(
|
||||
config['timeframes'],
|
||||
config['stop_loss_pcts'],
|
||||
config['initial_usd']
|
||||
)
|
||||
|
||||
# Load data
|
||||
logger.info("Loading market data...")
|
||||
# data_filename = 'btcusd_1-min_data.csv'
|
||||
data_filename = 'btcusd_1-min_data_with_price_predictions.csv'
|
||||
data_1min = runner.load_data(
|
||||
data_filename,
|
||||
config['start_date'],
|
||||
config['stop_date']
|
||||
)
|
||||
|
||||
# Run backtests
|
||||
logger.info("Starting backtest execution...")
|
||||
|
||||
all_results, all_trades = runner.run_backtests(
|
||||
data_1min,
|
||||
config['timeframes'],
|
||||
config['stop_loss_pcts'],
|
||||
config['initial_usd'],
|
||||
debug=debug_mode
|
||||
)
|
||||
|
||||
# Process and save results
|
||||
logger.info("Processing and saving results...")
|
||||
timestamp = datetime.datetime.now().strftime("%Y_%m_%d_%H_%M")
|
||||
|
||||
# OPTIMIZATION: Save trade files in batch after parallel execution
|
||||
if all_trades and not debug_mode:
|
||||
logger.info("Saving trade files in batch...")
|
||||
result_processor.save_all_trade_files(all_trades)
|
||||
|
||||
# Create metadata
|
||||
metadata_lines = create_metadata_lines(config, data_1min, result_processor)
|
||||
|
||||
# Save aggregated results
|
||||
result_file = result_processor.save_backtest_results(
|
||||
all_results,
|
||||
metadata_lines,
|
||||
timestamp
|
||||
)
|
||||
|
||||
logger.info(f"Backtest completed successfully. Results saved to {result_file}")
|
||||
logger.info(f"Processed {len(all_results)} result combinations")
|
||||
logger.info(f"Generated {len(all_trades)} total trades")
|
||||
|
||||
except KeyboardInterrupt:
|
||||
logger.warning("Backtest interrupted by user")
|
||||
sys.exit(130) # Standard exit code for Ctrl+C
|
||||
|
||||
except FileNotFoundError as e:
|
||||
logger.error(f"File not found: {e}")
|
||||
sys.exit(1)
|
||||
|
||||
except ValueError as e:
|
||||
logger.error(f"Invalid configuration or data: {e}")
|
||||
sys.exit(1)
|
||||
|
||||
except RuntimeError as e:
|
||||
logger.error(f"Runtime error during backtest: {e}")
|
||||
sys.exit(1)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Unexpected error: {e}", exc_info=True)
|
||||
sys.exit(1)
|
||||
|
||||
with concurrent.futures.ProcessPoolExecutor(max_workers=workers) as executor:
|
||||
futures = {executor.submit(process, task, debug): task for task in tasks}
|
||||
all_results_rows = []
|
||||
all_trade_rows = []
|
||||
|
||||
for future in concurrent.futures.as_completed(futures):
|
||||
results, trades = future.result()
|
||||
|
||||
if results or trades:
|
||||
all_results_rows.extend(results)
|
||||
all_trade_rows.extend(trades)
|
||||
|
||||
backtest_filename = os.path.join(f"{timestamp}_backtest.csv")
|
||||
backtest_fieldnames = [
|
||||
"timeframe", "stop_loss_pct", "n_trades", "n_stop_loss", "win_rate",
|
||||
"max_drawdown", "avg_trade", "profit_ratio", "final_usd", "total_fees_usd"
|
||||
]
|
||||
storage.write_backtest_results(backtest_filename, backtest_fieldnames, all_results_rows, metadata_lines)
|
||||
|
||||
trades_fieldnames = ["entry_time", "exit_time", "entry_price", "exit_price", "profit_pct", "type", "fee_usd"]
|
||||
storage.write_trades(all_trade_rows, trades_fieldnames)
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
|
||||
@ -5,12 +5,15 @@ description = "Add your description here"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "dash>=3.0.4",
    "gspread>=6.2.1",
    "matplotlib>=3.10.3",
    "numba>=0.61.2",
    "pandas>=2.2.3",
    "plotly>=6.1.1",
    "psutil>=7.0.0",
    "scikit-learn>=1.6.1",
    "scipy>=1.15.3",
    "seaborn>=0.13.2",
    "websocket>=0.2.1",
    "ta>=0.11.0",
    "xgboost>=3.0.2",
]
BIN
requirements.txt
Binary file not shown.
446
result_processor.py
Normal file
@ -0,0 +1,446 @@
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
import os
|
||||
import csv
|
||||
import logging
|
||||
from typing import List, Dict, Any, Optional, Tuple
|
||||
from collections import defaultdict
|
||||
|
||||
from cycles.utils.storage import Storage
|
||||
|
||||
|
||||
class ResultProcessor:
|
||||
"""Handles processing, aggregation, and saving of backtest results"""
|
||||
|
||||
def __init__(self, storage: Storage, logging_instance: Optional[logging.Logger] = None):
|
||||
"""
|
||||
Initialize result processor
|
||||
|
||||
Args:
|
||||
storage: Storage instance for file operations
|
||||
logging_instance: Optional logging instance
|
||||
"""
|
||||
self.storage = storage
|
||||
self.logging = logging_instance
|
||||
|
||||
    def process_timeframe_results(
        self,
        min1_df: pd.DataFrame,
        df: pd.DataFrame,
        stop_loss_pcts: List[float],
        timeframe_name: str,
        initial_usd: float,
        progress_callback=None
    ) -> Tuple[List[Dict], List[Dict]]:
        """
        Process results for a single timeframe with multiple stop loss values

        Args:
            min1_df: 1-minute data DataFrame
            df: Resampled timeframe DataFrame
            stop_loss_pcts: List of stop loss percentages to test
            timeframe_name: Name of the timeframe (e.g., '1D', '6h')
            initial_usd: Initial USD amount
            progress_callback: Optional progress callback function

        Returns:
            Tuple of (results_rows, trade_rows)
        """
        from cycles.backtest import Backtest

        df = df.copy().reset_index(drop=True)
        results_rows = []
        trade_rows = []

        for stop_loss_pct in stop_loss_pcts:
            try:
                results = Backtest.run(
                    min1_df,
                    df,
                    initial_usd=initial_usd,
                    stop_loss_pct=stop_loss_pct,
                    progress_callback=progress_callback,
                    verbose=False  # Default to False for production runs
                )

                # Calculate metrics
                metrics = self._calculate_metrics(results, initial_usd, stop_loss_pct, timeframe_name)
                results_rows.append(metrics)

                # Process trades
                if 'trades' not in results:
                    raise ValueError(f"Backtest results missing 'trades' field for {timeframe_name} with {stop_loss_pct} stop loss")
                trades = self._process_trades(results['trades'], timeframe_name, stop_loss_pct)
                trade_rows.extend(trades)

                if self.logging:
                    self.logging.info(f"Timeframe: {timeframe_name}, Stop Loss: {stop_loss_pct}, Trades: {results['n_trades']}")

            except Exception as e:
                error_msg = f"Error processing {timeframe_name} with stop loss {stop_loss_pct}: {e}"
                if self.logging:
                    self.logging.error(error_msg)
                raise RuntimeError(error_msg) from e

        return results_rows, trade_rows

    def _calculate_metrics(
        self,
        results: Dict[str, Any],
        initial_usd: float,
        stop_loss_pct: float,
        timeframe_name: str
    ) -> Dict[str, Any]:
        """Calculate performance metrics from backtest results"""
        if 'trades' not in results:
            raise ValueError(f"Backtest results missing 'trades' field for {timeframe_name} with {stop_loss_pct} stop loss")
        trades = results['trades']
        n_trades = results["n_trades"]

        # Validate that all required fields are present
        required_fields = ['final_usd', 'max_drawdown', 'total_fees_usd', 'n_trades', 'n_stop_loss', 'win_rate', 'avg_trade']
        missing_fields = [field for field in required_fields if field not in results]
        if missing_fields:
            raise ValueError(f"Backtest results missing required fields: {missing_fields}")

        # Calculate win metrics - validate trade fields
        winning_trades = []
        for t in trades:
            if 'exit' not in t:
                raise ValueError(f"Trade missing 'exit' field: {t}")
            if 'entry' not in t:
                raise ValueError(f"Trade missing 'entry' field: {t}")
            if t['exit'] is not None and t['exit'] > t['entry']:
                winning_trades.append(t)
        n_winning_trades = len(winning_trades)
        win_rate = n_winning_trades / n_trades if n_trades > 0 else 0

        # Calculate profit metrics
        total_profit = sum(trade['profit_pct'] for trade in trades if trade['profit_pct'] > 0)
        total_loss = abs(sum(trade['profit_pct'] for trade in trades if trade['profit_pct'] < 0))
        avg_trade = sum(trade['profit_pct'] for trade in trades) / n_trades if n_trades > 0 else 0
        profit_ratio = total_profit / total_loss if total_loss > 0 else (float('inf') if total_profit > 0 else 0)

        # Get values directly from backtest results (no defaults)
        max_drawdown = results['max_drawdown']
        final_usd = results['final_usd']
        total_fees_usd = results['total_fees_usd']
        n_stop_loss = results['n_stop_loss']  # Get stop loss count directly from backtest

        # Validate no None values
        if max_drawdown is None:
            raise ValueError(f"max_drawdown is None for {timeframe_name} with {stop_loss_pct} stop loss")
        if final_usd is None:
            raise ValueError(f"final_usd is None for {timeframe_name} with {stop_loss_pct} stop loss")
        if total_fees_usd is None:
            raise ValueError(f"total_fees_usd is None for {timeframe_name} with {stop_loss_pct} stop loss")
        if n_stop_loss is None:
            raise ValueError(f"n_stop_loss is None for {timeframe_name} with {stop_loss_pct} stop loss")

        return {
            "timeframe": timeframe_name,
            "stop_loss_pct": stop_loss_pct,
            "n_trades": n_trades,
            "n_stop_loss": n_stop_loss,
            "win_rate": win_rate,
            "max_drawdown": max_drawdown,
            "avg_trade": avg_trade,
            "total_profit": total_profit,
            "total_loss": total_loss,
            "profit_ratio": profit_ratio,
            "initial_usd": initial_usd,
            "final_usd": final_usd,
            "total_fees_usd": total_fees_usd,
        }

    def _calculate_max_drawdown(self, trades: List[Dict]) -> float:
        """Calculate maximum drawdown from trade sequence"""
        cumulative_profit = 0
        max_drawdown = 0
        peak = 0

        for trade in trades:
            cumulative_profit += trade['profit_pct']
            if cumulative_profit > peak:
                peak = cumulative_profit
            drawdown = peak - cumulative_profit
            if drawdown > max_drawdown:
                max_drawdown = drawdown

        return max_drawdown

    def _process_trades(
        self,
        trades: List[Dict],
        timeframe_name: str,
        stop_loss_pct: float
    ) -> List[Dict]:
        """Process individual trades with metadata"""
        processed_trades = []

        for trade in trades:
            # Validate all required trade fields
            required_fields = ["entry_time", "exit_time", "entry", "exit", "profit_pct", "type", "fee_usd"]
            missing_fields = [field for field in required_fields if field not in trade]
            if missing_fields:
                raise ValueError(f"Trade missing required fields: {missing_fields} in trade: {trade}")

            processed_trade = {
                "timeframe": timeframe_name,
                "stop_loss_pct": stop_loss_pct,
                "entry_time": trade["entry_time"],
                "exit_time": trade["exit_time"],
                "entry_price": trade["entry"],
                "exit_price": trade["exit"],
                "profit_pct": trade["profit_pct"],
                "type": trade["type"],
                "fee_usd": trade["fee_usd"],
            }
            processed_trades.append(processed_trade)

        return processed_trades

    def _debug_output(self, results: Dict[str, Any]) -> None:
        """Output debug information for backtest results"""
        if 'trades' not in results:
            raise ValueError("Backtest results missing 'trades' field for debug output")
        trades = results['trades']

        # Print stop loss trades
        stop_loss_trades = []
        for t in trades:
            if 'type' not in t:
                raise ValueError(f"Trade missing 'type' field: {t}")
            if t['type'] == 'STOP':
                stop_loss_trades.append(t)

        if stop_loss_trades:
            print("Stop Loss Trades:")
            for trade in stop_loss_trades:
                print(trade)

        # Print large loss trades
        large_loss_trades = []
        for t in trades:
            if 'profit_pct' not in t:
                raise ValueError(f"Trade missing 'profit_pct' field: {t}")
            if t['profit_pct'] < -0.09:
                large_loss_trades.append(t)

        if large_loss_trades:
            print("Large Loss Trades:")
            for trade in large_loss_trades:
                print("Large loss trade:", trade)

    def aggregate_results(self, all_results: List[Dict]) -> List[Dict]:
        """
        Aggregate results per stop_loss_pct and timeframe

        Args:
            all_results: List of result dictionaries from all timeframes

        Returns:
            List of aggregated summary rows
        """
        grouped = defaultdict(list)
        for row in all_results:
            key = (row['timeframe'], row['stop_loss_pct'])
            grouped[key].append(row)

        summary_rows = []
        for (timeframe, stop_loss_pct), rows in grouped.items():
            summary = self._aggregate_group(rows, timeframe, stop_loss_pct)
            summary_rows.append(summary)

        return summary_rows

    def _aggregate_group(self, rows: List[Dict], timeframe: str, stop_loss_pct: float) -> Dict:
        """Aggregate a group of rows with the same timeframe and stop loss"""
        if not rows:
            raise ValueError(f"No rows to aggregate for {timeframe} with {stop_loss_pct} stop loss")

        # Validate all rows have required fields
        required_fields = ['n_trades', 'n_stop_loss', 'win_rate', 'max_drawdown', 'avg_trade', 'profit_ratio', 'final_usd', 'total_fees_usd', 'initial_usd']
        for i, row in enumerate(rows):
            missing_fields = [field for field in required_fields if field not in row]
            if missing_fields:
                raise ValueError(f"Row {i} missing required fields: {missing_fields}")

        total_trades = sum(r['n_trades'] for r in rows)
        total_stop_loss = sum(r['n_stop_loss'] for r in rows)

        # Calculate averages (no defaults, expect all values to be present)
        avg_win_rate = np.mean([r['win_rate'] for r in rows])
        avg_max_drawdown = np.mean([r['max_drawdown'] for r in rows])
        avg_avg_trade = np.mean([r['avg_trade'] for r in rows])

        # Handle infinite profit ratios properly
        finite_profit_ratios = [r['profit_ratio'] for r in rows if not np.isinf(r['profit_ratio'])]
        avg_profit_ratio = np.mean(finite_profit_ratios) if finite_profit_ratios else 0

        # Calculate final USD and fees (no defaults)
        final_usd = np.mean([r['final_usd'] for r in rows])
        total_fees_usd = np.mean([r['total_fees_usd'] for r in rows])
        initial_usd = rows[0]['initial_usd']

        return {
            "timeframe": timeframe,
            "stop_loss_pct": stop_loss_pct,
            "n_trades": total_trades,
            "n_stop_loss": total_stop_loss,
            "win_rate": avg_win_rate,
            "max_drawdown": avg_max_drawdown,
            "avg_trade": avg_avg_trade,
            "profit_ratio": avg_profit_ratio,
            "initial_usd": initial_usd,
            "final_usd": final_usd,
            "total_fees_usd": total_fees_usd,
        }

    def save_trade_file(self, trades: List[Dict], timeframe: str, stop_loss_pct: float) -> None:
        """
        Save individual trade file with summary header

        Args:
            trades: List of trades for this combination
            timeframe: Timeframe name
            stop_loss_pct: Stop loss percentage
        """
        if not trades:
            return

        try:
            # Generate filename
            sl_percent = int(round(stop_loss_pct * 100))
            trades_filename = os.path.join(self.storage.results_dir, f"trades_{timeframe}_ST{sl_percent}pct.csv")

            # Prepare summary from first trade
            sample_trade = trades[0]
            summary_fields = ["timeframe", "stop_loss_pct", "n_trades", "win_rate"]
            summary_values = [timeframe, stop_loss_pct, len(trades), "calculated_elsewhere"]

            # Write file with header and trades
            trades_fieldnames = ["entry_time", "exit_time", "entry_price", "exit_price", "profit_pct", "type", "fee_usd"]

            with open(trades_filename, "w", newline="") as f:
                # Write summary header
                f.write("\t".join(summary_fields) + "\n")
                f.write("\t".join(str(v) for v in summary_values) + "\n")

                # Write trades
                writer = csv.DictWriter(f, fieldnames=trades_fieldnames)
                writer.writeheader()
                for trade in trades:
                    # Validate all required fields are present
                    missing_fields = [k for k in trades_fieldnames if k not in trade]
                    if missing_fields:
                        raise ValueError(f"Trade missing required fields for CSV: {missing_fields} in trade: {trade}")
                    writer.writerow({k: trade[k] for k in trades_fieldnames})

            if self.logging:
                self.logging.info(f"Trades saved to {trades_filename}")

        except Exception as e:
            error_msg = f"Failed to save trades file for {timeframe}_ST{int(round(stop_loss_pct * 100))}pct: {e}"
            if self.logging:
                self.logging.error(error_msg)
            raise RuntimeError(error_msg) from e

    def save_backtest_results(
        self,
        results: List[Dict],
        metadata_lines: List[str],
        timestamp: str
    ) -> str:
        """
        Save aggregated backtest results to CSV file

        Args:
            results: List of aggregated result dictionaries
            metadata_lines: List of metadata strings
            timestamp: Timestamp for filename

        Returns:
            Path to saved file
        """
        try:
            filename = f"{timestamp}_backtest.csv"
            fieldnames = [
                "timeframe", "stop_loss_pct", "n_trades", "n_stop_loss", "win_rate",
                "max_drawdown", "avg_trade", "profit_ratio", "final_usd", "total_fees_usd"
            ]

            filepath = self.storage.write_backtest_results(filename, fieldnames, results, metadata_lines)

            if self.logging:
                self.logging.info(f"Backtest results saved to {filepath}")

            return filepath

        except Exception as e:
            error_msg = f"Failed to save backtest results: {e}"
            if self.logging:
                self.logging.error(error_msg)
            raise RuntimeError(error_msg) from e

    def get_price_info(self, data_df: pd.DataFrame, date: str) -> Tuple[Optional[str], Optional[float]]:
        """
        Get nearest price information for a given date

        Args:
            data_df: DataFrame with price data
            date: Target date string

        Returns:
            Tuple of (nearest_time, price) or (None, None) if no data
        """
        try:
            if len(data_df) == 0:
                return None, None

            target_ts = pd.to_datetime(date)
            nearest_idx = data_df.index.get_indexer([target_ts], method='nearest')[0]
            nearest_time = data_df.index[nearest_idx]
            price = data_df.iloc[nearest_idx]['close']

            return str(nearest_time), float(price)

        except Exception as e:
            if self.logging:
                self.logging.warning(f"Could not get price info for {date}: {e}")
            return None, None

    def save_all_trade_files(self, all_trades: List[Dict]) -> None:
        """
        Save all trade files in batch after parallel execution completes

        Args:
            all_trades: List of all trades from all tasks
        """
        if not all_trades:
            return

        try:
            # Group trades by timeframe and stop loss
            trade_groups = {}
            for trade in all_trades:
                timeframe = trade.get('timeframe')
                stop_loss_pct = trade.get('stop_loss_pct')
                if timeframe and stop_loss_pct is not None:
                    key = (timeframe, stop_loss_pct)
                    if key not in trade_groups:
                        trade_groups[key] = []
                    trade_groups[key].append(trade)

            # Save each group
            for (timeframe, stop_loss_pct), trades in trade_groups.items():
                self.save_trade_file(trades, timeframe, stop_loss_pct)

            if self.logging:
                self.logging.info(f"Saved {len(trade_groups)} trade files in batch")

        except Exception as e:
            error_msg = f"Failed to save trade files in batch: {e}"
            if self.logging:
                self.logging.error(error_msg)
            raise RuntimeError(error_msg) from e
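For orientation, here is a minimal usage sketch of the ResultProcessor added above (not part of the commit). It assumes a configured cycles.utils.storage.Storage instance and pre-loaded 1-minute and resampled DataFrames; the names storage, min1_df and df_1d are illustrative placeholders.

# Hedged sketch: wiring ResultProcessor into a backtest run.
# `storage`, `min1_df` and `df_1d` are assumed to exist already.
import logging

from result_processor import ResultProcessor

logger = logging.getLogger("backtest")
processor = ResultProcessor(storage, logging_instance=logger)

# Run one timeframe across several stop-loss settings.
results_rows, trade_rows = processor.process_timeframe_results(
    min1_df, df_1d,
    stop_loss_pcts=[0.02, 0.05],
    timeframe_name="1D",
    initial_usd=10_000,
)

# Aggregate per (timeframe, stop_loss_pct) and persist everything.
summary_rows = processor.aggregate_results(results_rows)
processor.save_backtest_results(summary_rows, metadata_lines=["example run"], timestamp="20250101_000000")
processor.save_all_trade_files(trade_rows)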
552 uv.lock generated
@@ -7,6 +7,15 @@ resolution-markers = [
|
||||
"python_full_version < '3.11'",
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "blinker"
|
||||
version = "1.9.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/21/28/9b3f50ce0e048515135495f198351908d99540d69bfdc8c1d15b73dc55ce/blinker-1.9.0.tar.gz", hash = "sha256:b4ce2265a7abece45e7cc896e98dbebe6cead56bcf805a3d23136d145f5445bf", size = 22460, upload-time = "2024-11-08T17:25:47.436Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/10/cb/f2ad4230dc2eb1a74edf38f1a38b9b52277f75bef262d8908e60d957e13c/blinker-1.9.0-py3-none-any.whl", hash = "sha256:ba0efaa9080b619ff2f3459d1d500c57bddea4a6b424b60a91141db6fd2f08bc", size = 8458, upload-time = "2024-11-08T17:25:46.184Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "cachetools"
|
||||
version = "5.5.2"
|
||||
@@ -25,25 +34,6 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/4a/7e/3db2bd1b1f9e95f7cddca6d6e75e2f2bd9f51b1246e546d88addca0106bd/certifi-2025.4.26-py3-none-any.whl", hash = "sha256:30350364dfe371162649852c63336a15c70c6510c2ad5015b21c2345311805f3", size = 159618, upload-time = "2025-04-26T02:12:27.662Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "cffi"
|
||||
version = "1.17.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "pycparser" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/fc/97/c783634659c2920c3fc70419e3af40972dbaf758daa229a7d6ea6135c90d/cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824", size = 516621, upload-time = "2024-09-04T20:45:21.852Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/f8/fe/4d41c2f200c4a457933dbd98d3cf4e911870877bd94d9656cc0fcb390681/cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c", size = 171804, upload-time = "2024-09-04T20:43:48.186Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d1/b6/0b0f5ab93b0df4acc49cae758c81fe4e5ef26c3ae2e10cc69249dfd8b3ab/cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15", size = 181299, upload-time = "2024-09-04T20:43:49.812Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/34/33/e1b8a1ba29025adbdcda5fb3a36f94c03d771c1b7b12f726ff7fef2ebe36/cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655", size = 171727, upload-time = "2024-09-04T20:44:09.481Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/3d/97/50228be003bb2802627d28ec0627837ac0bf35c90cf769812056f235b2d1/cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0", size = 181400, upload-time = "2024-09-04T20:44:10.873Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/86/c5/28b2d6f799ec0bdecf44dced2ec5ed43e0eb63097b0f58c293583b406582/cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65", size = 172448, upload-time = "2024-09-04T20:44:26.208Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/50/b9/db34c4755a7bd1cb2d1603ac3863f22bcecbd1ba29e5ee841a4bc510b294/cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903", size = 181976, upload-time = "2024-09-04T20:44:27.578Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/bf/ee/f94057fa6426481d663b88637a9a10e859e492c73d0384514a17d78ee205/cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d", size = 172475, upload-time = "2024-09-04T20:44:43.733Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7c/fc/6a8cb64e5f0324877d503c854da15d76c1e50eb722e320b15345c4d0c6de/cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a", size = 182009, upload-time = "2024-09-04T20:44:45.309Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "charset-normalizer"
|
||||
version = "3.4.2"
|
||||
@@ -105,6 +95,27 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/20/94/c5790835a017658cbfabd07f3bfb549140c3ac458cfc196323996b10095a/charset_normalizer-3.4.2-py3-none-any.whl", hash = "sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0", size = 52626, upload-time = "2025-05-02T08:34:40.053Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "click"
|
||||
version = "8.2.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "colorama", marker = "sys_platform == 'win32'" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/60/6c/8ca2efa64cf75a977a0d7fac081354553ebe483345c734fb6b6515d96bbc/click-8.2.1.tar.gz", hash = "sha256:27c491cc05d968d271d5a1db13e3b5a184636d9d930f148c50b038f0d0646202", size = 286342, upload-time = "2025-05-20T23:19:49.832Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/85/32/10bb5764d90a8eee674e9dc6f4db6a0ab47c8c4d0d83c27f7c39ac415a4d/click-8.2.1-py3-none-any.whl", hash = "sha256:61a3265b914e850b85317d0b3109c7f8cd35a670f963866005d6ef1d5175a12b", size = 102215, upload-time = "2025-05-20T23:19:47.796Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "colorama"
|
||||
version = "0.4.6"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "contourpy"
|
||||
version = "1.3.2"
|
||||
@@ -186,26 +197,68 @@ name = "cycles"
|
||||
version = "0.1.0"
|
||||
source = { virtual = "." }
|
||||
dependencies = [
|
||||
{ name = "dash" },
|
||||
{ name = "gspread" },
|
||||
{ name = "matplotlib" },
|
||||
{ name = "numba" },
|
||||
{ name = "pandas" },
|
||||
{ name = "plotly" },
|
||||
{ name = "psutil" },
|
||||
{ name = "scikit-learn" },
|
||||
{ name = "scipy" },
|
||||
{ name = "seaborn" },
|
||||
{ name = "websocket" },
|
||||
{ name = "ta" },
|
||||
{ name = "xgboost" },
|
||||
]
|
||||
|
||||
[package.metadata]
|
||||
requires-dist = [
|
||||
{ name = "dash", specifier = ">=3.0.4" },
|
||||
{ name = "gspread", specifier = ">=6.2.1" },
|
||||
{ name = "matplotlib", specifier = ">=3.10.3" },
|
||||
{ name = "numba", specifier = ">=0.61.2" },
|
||||
{ name = "pandas", specifier = ">=2.2.3" },
|
||||
{ name = "plotly", specifier = ">=6.1.1" },
|
||||
{ name = "psutil", specifier = ">=7.0.0" },
|
||||
{ name = "scikit-learn", specifier = ">=1.6.1" },
|
||||
{ name = "scipy", specifier = ">=1.15.3" },
|
||||
{ name = "seaborn", specifier = ">=0.13.2" },
|
||||
{ name = "websocket", specifier = ">=0.2.1" },
|
||||
{ name = "ta", specifier = ">=0.11.0" },
|
||||
{ name = "xgboost", specifier = ">=3.0.2" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "dash"
|
||||
version = "3.0.4"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "flask" },
|
||||
{ name = "importlib-metadata" },
|
||||
{ name = "nest-asyncio" },
|
||||
{ name = "plotly" },
|
||||
{ name = "requests" },
|
||||
{ name = "retrying" },
|
||||
{ name = "setuptools" },
|
||||
{ name = "typing-extensions" },
|
||||
{ name = "werkzeug" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/88/6d/90f113317d41266e20190185cf1b5121efbab79ff79b2ecdf8316a91be40/dash-3.0.4.tar.gz", hash = "sha256:4f9e62e9d8c5cd1b42dc6d6dcf211fe9498195f73ef0edb62a26e2a1b952a368", size = 7592060, upload-time = "2025-04-24T19:06:49.287Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/0d/20/2e7ab37ea2ef1f8b2592a2615c8b3fb041ad51f32101061d8bc6465b8b40/dash-3.0.4-py3-none-any.whl", hash = "sha256:177f8c3d1fa45555b18f2f670808eba7803c72a6b1cd6fd172fd538aca18eb1d", size = 7935680, upload-time = "2025-04-24T19:06:41.751Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "flask"
|
||||
version = "3.0.3"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "blinker" },
|
||||
{ name = "click" },
|
||||
{ name = "itsdangerous" },
|
||||
{ name = "jinja2" },
|
||||
{ name = "werkzeug" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/41/e1/d104c83026f8d35dfd2c261df7d64738341067526406b40190bc063e829a/flask-3.0.3.tar.gz", hash = "sha256:ceb27b0af3823ea2737928a4d99d125a06175b8512c445cbd9a9ce200ef76842", size = 676315, upload-time = "2024-04-07T19:26:11.035Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/61/80/ffe1da13ad9300f87c93af113edd0638c75138c42a0994becfacac078c06/flask-3.0.3-py3-none-any.whl", hash = "sha256:34e815dfaa43340d1d15a5c3a02b8476004037eb4840b34910c6e21679d288f3", size = 101735, upload-time = "2024-04-07T19:26:08.569Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
@@ -249,54 +302,6 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/9b/1f/4417c26e26a1feab85a27e927f7a73d8aabc84544be8ba108ce4aa90eb1e/fonttools-4.58.0-py3-none-any.whl", hash = "sha256:c96c36880be2268be409df7b08c5b5dacac1827083461a6bc2cb07b8cbcec1d7", size = 1111440, upload-time = "2025-05-10T17:36:33.607Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "gevent"
|
||||
version = "25.5.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "cffi", marker = "platform_python_implementation == 'CPython' and sys_platform == 'win32'" },
|
||||
{ name = "greenlet", marker = "platform_python_implementation == 'CPython'" },
|
||||
{ name = "zope-event" },
|
||||
{ name = "zope-interface" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/f1/58/267e8160aea00ab00acd2de97197eecfe307064a376fb5c892870a8a6159/gevent-25.5.1.tar.gz", hash = "sha256:582c948fa9a23188b890d0bc130734a506d039a2e5ad87dae276a456cc683e61", size = 6388207, upload-time = "2025-05-12T12:57:59.833Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/44/a7/438568c37fb255f80e710318bfcad04731b92ce764bc16adee278fdc6b4d/gevent-25.5.1-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:8e5a0fab5e245b15ec1005b3666b0a2e867c26f411c8fe66ae1afe07174a30e9", size = 2922800, upload-time = "2025-05-12T11:11:46.728Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/5d/b3/b44d8b1c4a4d01097a7f82ffbc582d054007365c27b28867f0b2d4241d73/gevent-25.5.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c7b80a37f2fb45ee4a8f7e64b77dd8a842d364384046e394227b974a4e9c9a52", size = 1812954, upload-time = "2025-05-12T11:52:27.059Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/1e/c6/935b4c973ad827c9ec49c354d68d047da1d23e3018bda63d3723cce43178/gevent-25.5.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:29ab729d50ae85077a68e0385f129f5b01052d01a0ae6d7fdc1824f5337905e4", size = 1900169, upload-time = "2025-05-12T11:54:17.797Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/38/8a/b745bddfec35fb723cafb036f191e5e0a0013f1698bf0ba4fa2cb8e01879/gevent-25.5.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:80d20592aeabcc4e294fd441fd43d45cb537437fd642c374ea9d964622fad229", size = 1849786, upload-time = "2025-05-12T12:00:01.962Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7c/b3/7aa7b09d91207bebe7608699558bbadd34f63e32904351867c29f8be25de/gevent-25.5.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a8ba0257542ccbb72a8229dc34d00844ccdfba110417e4b7b34599548d0e20e9", size = 2139021, upload-time = "2025-05-12T11:32:58.961Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/74/da/cf52ae0c84361f4164a04f3338508b1234331ce79719db103e50dbc5598c/gevent-25.5.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cad0821dff998c7c60dd238f92cd61380342c47fb9e92e1a8705d9b5ac7c16e8", size = 1830758, upload-time = "2025-05-12T11:59:55.666Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/93/93/73a49b896d78eec27f0895ce3008f9825db748a5aacbca47404d1014da4b/gevent-25.5.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:017a7384c0cd1a5907751c991535a0699596e89725468a7fc39228312e10efa1", size = 2199993, upload-time = "2025-05-12T11:40:50.845Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/df/c7/34680b7d2a75492fa032fa8ecaacc03c1940767a35125f6740954a0132a3/gevent-25.5.1-cp310-cp310-win_amd64.whl", hash = "sha256:469c86d02fccad7e2a3d82fe22237e47ecb376fbf4710bc18747b49c50716817", size = 1652665, upload-time = "2025-05-12T12:35:58.105Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c6/eb/015e93f16a718e2f836ecebecae9bcd7b4d2a5695d1c8bd5bba2d5d91548/gevent-25.5.1-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:12380aba5c316e9ff53cc21d8ab80f4a91c0df3ada58f65d4f5eb2cf693db00e", size = 2877441, upload-time = "2025-05-12T11:14:57.735Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7b/86/42d191a6f6672ca59d6d79b4cd9b89d4a15f59c843fbbad42f2b749f8ea9/gevent-25.5.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7f0694daab1a041b69a53f53c2141c12994892b2503870515cabe6a5dbd2a928", size = 1774873, upload-time = "2025-05-12T11:52:29.015Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f5/9f/42dd255849c9ca2e814f5cbe180980594007ba19044a132cf674069e38bf/gevent-25.5.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:2797885e9aeffdc98e1846723e5aa212e7ce53007dbef40d6fd2add264235c41", size = 1857911, upload-time = "2025-05-12T11:54:19.523Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/3e/fc/8e799a733be48f6114bfc531b94e28812741664d8af89872dd90e117f8a4/gevent-25.5.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:cde6aaac36b54332e10ea2a5bc0de6a8aba6c205c92603fe4396e3777c88e05d", size = 1812751, upload-time = "2025-05-12T12:00:03.719Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/52/4f/a3f3acd961887da10cb0b49c3d915201973d59ce6bf49e2922eaf2058d5f/gevent-25.5.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:24484f80f14befb8822bf29554cfb3a26a26cb69cd1e5a8be9e23b4bd7a96e25", size = 2087115, upload-time = "2025-05-12T11:33:01.128Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b6/27/bb38e005106a53787c13ad1f9f73ed990e403e462108acae6320ab11d442/gevent-25.5.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8fdc7446895fa184890d8ca5ea61e502691114f9db55c9b76adc33f3086c4368", size = 1793549, upload-time = "2025-05-12T11:59:57.854Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ee/56/da817bc69e1f0ae8438f12f2cd150656b09a8c3576c6d12f992dc9ca64ef/gevent-25.5.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:5b6106e2414b1797133786258fa1962a5e836480e4d5e861577f9fc63b673a5a", size = 2145899, upload-time = "2025-05-12T11:40:53.275Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b8/42/989403abbdbb1346a1507083c02018bee3fedaef3f9648940c767d8c0958/gevent-25.5.1-cp311-cp311-win_amd64.whl", hash = "sha256:bc899212d90f311784c58938a9c09c59802fb6dc287a35fabdc36d180f57f575", size = 1635771, upload-time = "2025-05-12T12:26:47.644Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/58/c5/cf71423666a0b83db3d7e3f85788bc47d573fca5fe62b798fe2c4273de7c/gevent-25.5.1-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:d87c0a1bd809d8f70f96b9b229779ec6647339830b8888a192beed33ac8d129f", size = 2909333, upload-time = "2025-05-12T11:11:34.883Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/26/7e/d2f174ee8bec6eb85d961ca203bc599d059c857b8412e367b8fa206603a5/gevent-25.5.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b87a4b66edb3808d4d07bbdb0deed5a710cf3d3c531e082759afd283758bb649", size = 1788420, upload-time = "2025-05-12T11:52:30.306Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/fe/f3/3aba8c147b9108e62ba348c726fe38ae69735a233db425565227336e8ce6/gevent-25.5.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f076779050029a82feb0cb1462021d3404d22f80fa76a181b1a7889cd4d6b519", size = 1868854, upload-time = "2025-05-12T11:54:21.564Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c6/b1/11a5453f8fcebe90a456471fad48bd154c6a62fcb96e3475a5e408d05fc8/gevent-25.5.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:bb673eb291c19370f69295f7a881a536451408481e2e3deec3f41dedb7c281ec", size = 1833946, upload-time = "2025-05-12T12:00:05.514Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/70/1c/37d4a62303f86e6af67660a8df38c1171b7290df61b358e618c6fea79567/gevent-25.5.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c1325ed44225c8309c0dd188bdbbbee79e1df8c11ceccac226b861c7d52e4837", size = 2070583, upload-time = "2025-05-12T11:33:02.803Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4b/8f/3b14929ff28263aba1d268ea97bcf104be1a86ba6f6bb4633838e7a1905e/gevent-25.5.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:fcd5bcad3102bde686d0adcc341fade6245186050ce14386d547ccab4bd54310", size = 1808341, upload-time = "2025-05-12T11:59:59.154Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2f/fc/674ec819fb8a96e482e4d21f8baa43d34602dba09dfce7bbdc8700899d1b/gevent-25.5.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:1a93062609e8fa67ec97cd5fb9206886774b2a09b24887f40148c9c37e6fb71c", size = 2137974, upload-time = "2025-05-12T11:40:54.78Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/05/9a/048b7f5e28c54e4595ad4a8ad3c338fa89560e558db2bbe8273f44f030de/gevent-25.5.1-cp312-cp312-win_amd64.whl", hash = "sha256:2534c23dc32bed62b659ed4fd9e198906179e68b26c9276a897e04163bdde806", size = 1638344, upload-time = "2025-05-12T12:08:31.776Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/10/25/2162b38d7b48e08865db6772d632bd1648136ce2bb50e340565e45607cad/gevent-25.5.1-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:a022a9de9275ce0b390b7315595454258c525dc8287a03f1a6cacc5878ab7cbc", size = 2928044, upload-time = "2025-05-12T11:11:36.33Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/1b/e0/dbd597a964ed00176da122ea759bf2a6c1504f1e9f08e185379f92dc355f/gevent-25.5.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3fae8533f9d0ef3348a1f503edcfb531ef7a0236b57da1e24339aceb0ce52922", size = 1788751, upload-time = "2025-05-12T11:52:32.643Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f1/74/960cc4cf4c9c90eafbe0efc238cdf588862e8e278d0b8c0d15a0da4ed480/gevent-25.5.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c7b32d9c3b5294b39ea9060e20c582e49e1ec81edbfeae6cf05f8ad0829cb13d", size = 1869766, upload-time = "2025-05-12T11:54:23.903Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/56/78/fa84b1c7db79b156929685db09a7c18c3127361dca18a09e998e98118506/gevent-25.5.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7b95815fe44f318ebbfd733b6428b4cb18cc5e68f1c40e8501dd69cc1f42a83d", size = 1835358, upload-time = "2025-05-12T12:00:06.794Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/00/5c/bfefe3822bbca5b83bfad256c82251b3f5be13d52d14e17a786847b9b625/gevent-25.5.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2d316529b70d325b183b2f3f5cde958911ff7be12eb2b532b5c301f915dbbf1e", size = 2073071, upload-time = "2025-05-12T11:33:04.2Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/20/e4/08a77a3839a37db96393dea952e992d5846a881b887986dde62ead6b48a1/gevent-25.5.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f6ba33c13db91ffdbb489a4f3d177a261ea1843923e1d68a5636c53fe98fa5ce", size = 1809805, upload-time = "2025-05-12T12:00:00.537Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2b/ac/28848348f790c1283df74b0fc0a554271d0606676470f848eccf84eae42a/gevent-25.5.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:37ee34b77c7553777c0b8379915f75934c3f9c8cd32f7cd098ea43c9323c2276", size = 2138305, upload-time = "2025-05-12T11:40:56.566Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/52/9e/0e9e40facd2d714bfb00f71fc6dacaacc82c24c1c2e097bf6461e00dec9f/gevent-25.5.1-cp313-cp313-win_amd64.whl", hash = "sha256:9fa6aa0da224ed807d3b76cdb4ee8b54d4d4d5e018aed2478098e685baae7896", size = 1637444, upload-time = "2025-05-12T12:17:45.995Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/60/16/b71171e97ec7b4ded8669542f4369d88d5a289e2704efbbde51e858e062a/gevent-25.5.1-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:0bacf89a65489d26c7087669af89938d5bfd9f7afb12a07b57855b9fad6ccbd0", size = 2937113, upload-time = "2025-05-12T11:12:03.191Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/11/81/834da3c1ea5e71e4dc1a78a034a15f2813d9760d135464aae5d1f058a8c6/gevent-25.5.1-pp310-pypy310_pp73-macosx_11_0_universal2.whl", hash = "sha256:60ad4ca9ca2c4cc8201b607c229cd17af749831e371d006d8a91303bb5568eb1", size = 1291540, upload-time = "2025-05-12T11:11:55.456Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "google-auth"
|
||||
version = "2.40.1"
|
||||
@@ -324,58 +329,6 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/ac/84/40ee070be95771acd2f4418981edb834979424565c3eec3cd88b6aa09d24/google_auth_oauthlib-1.2.2-py3-none-any.whl", hash = "sha256:fd619506f4b3908b5df17b65f39ca8d66ea56986e5472eb5978fd8f3786f00a2", size = 19072, upload-time = "2025-04-22T16:40:28.174Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "greenlet"
|
||||
version = "3.2.2"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/34/c1/a82edae11d46c0d83481aacaa1e578fea21d94a1ef400afd734d47ad95ad/greenlet-3.2.2.tar.gz", hash = "sha256:ad053d34421a2debba45aa3cc39acf454acbcd025b3fc1a9f8a0dee237abd485", size = 185797, upload-time = "2025-05-09T19:47:35.066Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/05/66/910217271189cc3f32f670040235f4bf026ded8ca07270667d69c06e7324/greenlet-3.2.2-cp310-cp310-macosx_11_0_universal2.whl", hash = "sha256:c49e9f7c6f625507ed83a7485366b46cbe325717c60837f7244fc99ba16ba9d6", size = 267395, upload-time = "2025-05-09T14:50:45.357Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a8/36/8d812402ca21017c82880f399309afadb78a0aa300a9b45d741e4df5d954/greenlet-3.2.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c3cc1a3ed00ecfea8932477f729a9f616ad7347a5e55d50929efa50a86cb7be7", size = 625742, upload-time = "2025-05-09T15:23:58.293Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7b/77/66d7b59dfb7cc1102b2f880bc61cb165ee8998c9ec13c96606ba37e54c77/greenlet-3.2.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:7c9896249fbef2c615853b890ee854f22c671560226c9221cfd27c995db97e5c", size = 637014, upload-time = "2025-05-09T15:24:47.025Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/36/a7/ff0d408f8086a0d9a5aac47fa1b33a040a9fca89bd5a3f7b54d1cd6e2793/greenlet-3.2.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:7409796591d879425997a518138889d8d17e63ada7c99edc0d7a1c22007d4907", size = 632874, upload-time = "2025-05-09T15:29:20.014Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a1/75/1dc2603bf8184da9ebe69200849c53c3c1dca5b3a3d44d9f5ca06a930550/greenlet-3.2.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7791dcb496ec53d60c7f1c78eaa156c21f402dda38542a00afc3e20cae0f480f", size = 631652, upload-time = "2025-05-09T14:53:30.961Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7b/74/ddc8c3bd4c2c20548e5bf2b1d2e312a717d44e2eca3eadcfc207b5f5ad80/greenlet-3.2.2-cp310-cp310-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:d8009ae46259e31bc73dc183e402f548e980c96f33a6ef58cc2e7865db012e13", size = 580619, upload-time = "2025-05-09T14:53:42.049Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7e/f2/40f26d7b3077b1c7ae7318a4de1f8ffc1d8ccbad8f1d8979bf5080250fd6/greenlet-3.2.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:fd9fb7c941280e2c837b603850efc93c999ae58aae2b40765ed682a6907ebbc5", size = 1109809, upload-time = "2025-05-09T15:26:59.063Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c5/21/9329e8c276746b0d2318b696606753f5e7b72d478adcf4ad9a975521ea5f/greenlet-3.2.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:00cd814b8959b95a546e47e8d589610534cfb71f19802ea8a2ad99d95d702057", size = 1133455, upload-time = "2025-05-09T14:53:55.823Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/bb/1e/0dca9619dbd736d6981f12f946a497ec21a0ea27262f563bca5729662d4d/greenlet-3.2.2-cp310-cp310-win_amd64.whl", hash = "sha256:d0cb7d47199001de7658c213419358aa8937df767936506db0db7ce1a71f4a2f", size = 294991, upload-time = "2025-05-09T15:05:56.847Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a3/9f/a47e19261747b562ce88219e5ed8c859d42c6e01e73da6fbfa3f08a7be13/greenlet-3.2.2-cp311-cp311-macosx_11_0_universal2.whl", hash = "sha256:dcb9cebbf3f62cb1e5afacae90761ccce0effb3adaa32339a0670fe7805d8068", size = 268635, upload-time = "2025-05-09T14:50:39.007Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/11/80/a0042b91b66975f82a914d515e81c1944a3023f2ce1ed7a9b22e10b46919/greenlet-3.2.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bf3fc9145141250907730886b031681dfcc0de1c158f3cc51c092223c0f381ce", size = 628786, upload-time = "2025-05-09T15:24:00.692Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/38/a2/8336bf1e691013f72a6ebab55da04db81a11f68e82bb691f434909fa1327/greenlet-3.2.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:efcdfb9df109e8a3b475c016f60438fcd4be68cd13a365d42b35914cdab4bb2b", size = 640866, upload-time = "2025-05-09T15:24:48.153Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f8/7e/f2a3a13e424670a5d08826dab7468fa5e403e0fbe0b5f951ff1bc4425b45/greenlet-3.2.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4bd139e4943547ce3a56ef4b8b1b9479f9e40bb47e72cc906f0f66b9d0d5cab3", size = 636752, upload-time = "2025-05-09T15:29:23.182Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/fd/5d/ce4a03a36d956dcc29b761283f084eb4a3863401c7cb505f113f73af8774/greenlet-3.2.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:71566302219b17ca354eb274dfd29b8da3c268e41b646f330e324e3967546a74", size = 636028, upload-time = "2025-05-09T14:53:32.854Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4b/29/b130946b57e3ceb039238413790dd3793c5e7b8e14a54968de1fe449a7cf/greenlet-3.2.2-cp311-cp311-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3091bc45e6b0c73f225374fefa1536cd91b1e987377b12ef5b19129b07d93ebe", size = 583869, upload-time = "2025-05-09T14:53:43.614Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ac/30/9f538dfe7f87b90ecc75e589d20cbd71635531a617a336c386d775725a8b/greenlet-3.2.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:44671c29da26539a5f142257eaba5110f71887c24d40df3ac87f1117df589e0e", size = 1112886, upload-time = "2025-05-09T15:27:01.304Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/be/92/4b7deeb1a1e9c32c1b59fdca1cac3175731c23311ddca2ea28a8b6ada91c/greenlet-3.2.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:c23ea227847c9dbe0b3910f5c0dd95658b607137614eb821e6cbaecd60d81cc6", size = 1138355, upload-time = "2025-05-09T14:53:58.011Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c5/eb/7551c751a2ea6498907b2fcbe31d7a54b602ba5e8eb9550a9695ca25d25c/greenlet-3.2.2-cp311-cp311-win_amd64.whl", hash = "sha256:0a16fb934fcabfdfacf21d79e6fed81809d8cd97bc1be9d9c89f0e4567143d7b", size = 295437, upload-time = "2025-05-09T15:00:57.733Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2c/a1/88fdc6ce0df6ad361a30ed78d24c86ea32acb2b563f33e39e927b1da9ea0/greenlet-3.2.2-cp312-cp312-macosx_11_0_universal2.whl", hash = "sha256:df4d1509efd4977e6a844ac96d8be0b9e5aa5d5c77aa27ca9f4d3f92d3fcf330", size = 270413, upload-time = "2025-05-09T14:51:32.455Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a6/2e/6c1caffd65490c68cd9bcec8cb7feb8ac7b27d38ba1fea121fdc1f2331dc/greenlet-3.2.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da956d534a6d1b9841f95ad0f18ace637668f680b1339ca4dcfb2c1837880a0b", size = 637242, upload-time = "2025-05-09T15:24:02.63Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/98/28/088af2cedf8823b6b7ab029a5626302af4ca1037cf8b998bed3a8d3cb9e2/greenlet-3.2.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9c7b15fb9b88d9ee07e076f5a683027bc3befd5bb5d25954bb633c385d8b737e", size = 651444, upload-time = "2025-05-09T15:24:49.856Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4a/9f/0116ab876bb0bc7a81eadc21c3f02cd6100dcd25a1cf2a085a130a63a26a/greenlet-3.2.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:752f0e79785e11180ebd2e726c8a88109ded3e2301d40abced2543aa5d164275", size = 646067, upload-time = "2025-05-09T15:29:24.989Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/35/17/bb8f9c9580e28a94a9575da847c257953d5eb6e39ca888239183320c1c28/greenlet-3.2.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9ae572c996ae4b5e122331e12bbb971ea49c08cc7c232d1bd43150800a2d6c65", size = 648153, upload-time = "2025-05-09T14:53:34.716Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2c/ee/7f31b6f7021b8df6f7203b53b9cc741b939a2591dcc6d899d8042fcf66f2/greenlet-3.2.2-cp312-cp312-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:02f5972ff02c9cf615357c17ab713737cccfd0eaf69b951084a9fd43f39833d3", size = 603865, upload-time = "2025-05-09T14:53:45.738Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b5/2d/759fa59323b521c6f223276a4fc3d3719475dc9ae4c44c2fe7fc750f8de0/greenlet-3.2.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:4fefc7aa68b34b9224490dfda2e70ccf2131368493add64b4ef2d372955c207e", size = 1119575, upload-time = "2025-05-09T15:27:04.248Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/30/05/356813470060bce0e81c3df63ab8cd1967c1ff6f5189760c1a4734d405ba/greenlet-3.2.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:a31ead8411a027c2c4759113cf2bd473690517494f3d6e4bf67064589afcd3c5", size = 1147460, upload-time = "2025-05-09T14:54:00.315Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/07/f4/b2a26a309a04fb844c7406a4501331b9400e1dd7dd64d3450472fd47d2e1/greenlet-3.2.2-cp312-cp312-win_amd64.whl", hash = "sha256:b24c7844c0a0afc3ccbeb0b807adeefb7eff2b5599229ecedddcfeb0ef333bec", size = 296239, upload-time = "2025-05-09T14:57:17.633Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/89/30/97b49779fff8601af20972a62cc4af0c497c1504dfbb3e93be218e093f21/greenlet-3.2.2-cp313-cp313-macosx_11_0_universal2.whl", hash = "sha256:3ab7194ee290302ca15449f601036007873028712e92ca15fc76597a0aeb4c59", size = 269150, upload-time = "2025-05-09T14:50:30.784Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/21/30/877245def4220f684bc2e01df1c2e782c164e84b32e07373992f14a2d107/greenlet-3.2.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2dc5c43bb65ec3669452af0ab10729e8fdc17f87a1f2ad7ec65d4aaaefabf6bf", size = 637381, upload-time = "2025-05-09T15:24:12.893Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/8e/16/adf937908e1f913856b5371c1d8bdaef5f58f251d714085abeea73ecc471/greenlet-3.2.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:decb0658ec19e5c1f519faa9a160c0fc85a41a7e6654b3ce1b44b939f8bf1325", size = 651427, upload-time = "2025-05-09T15:24:51.074Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ad/49/6d79f58fa695b618654adac64e56aff2eeb13344dc28259af8f505662bb1/greenlet-3.2.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6fadd183186db360b61cb34e81117a096bff91c072929cd1b529eb20dd46e6c5", size = 645795, upload-time = "2025-05-09T15:29:26.673Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/5a/e6/28ed5cb929c6b2f001e96b1d0698c622976cd8f1e41fe7ebc047fa7c6dd4/greenlet-3.2.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1919cbdc1c53ef739c94cf2985056bcc0838c1f217b57647cbf4578576c63825", size = 648398, upload-time = "2025-05-09T14:53:36.61Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9d/70/b200194e25ae86bc57077f695b6cc47ee3118becf54130c5514456cf8dac/greenlet-3.2.2-cp313-cp313-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3885f85b61798f4192d544aac7b25a04ece5fe2704670b4ab73c2d2c14ab740d", size = 606795, upload-time = "2025-05-09T14:53:47.039Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/f8/c8/ba1def67513a941154ed8f9477ae6e5a03f645be6b507d3930f72ed508d3/greenlet-3.2.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:85f3e248507125bf4af607a26fd6cb8578776197bd4b66e35229cdf5acf1dfbf", size = 1117976, upload-time = "2025-05-09T15:27:06.542Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/c3/30/d0e88c1cfcc1b3331d63c2b54a0a3a4a950ef202fb8b92e772ca714a9221/greenlet-3.2.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:1e76106b6fc55fa3d6fe1c527f95ee65e324a13b62e243f77b48317346559708", size = 1145509, upload-time = "2025-05-09T14:54:02.223Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/90/2e/59d6491834b6e289051b252cf4776d16da51c7c6ca6a87ff97e3a50aa0cd/greenlet-3.2.2-cp313-cp313-win_amd64.whl", hash = "sha256:fe46d4f8e94e637634d54477b0cfabcf93c53f29eedcbdeecaf2af32029b4421", size = 296023, upload-time = "2025-05-09T14:53:24.157Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/65/66/8a73aace5a5335a1cba56d0da71b7bd93e450f17d372c5b7c5fa547557e9/greenlet-3.2.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ba30e88607fb6990544d84caf3c706c4b48f629e18853fc6a646f82db9629418", size = 629911, upload-time = "2025-05-09T15:24:22.376Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/48/08/c8b8ebac4e0c95dcc68ec99198842e7db53eda4ab3fb0a4e785690883991/greenlet-3.2.2-cp313-cp313t-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:055916fafad3e3388d27dd68517478933a97edc2fc54ae79d3bec827de2c64c4", size = 635251, upload-time = "2025-05-09T15:24:52.205Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/37/26/7db30868f73e86b9125264d2959acabea132b444b88185ba5c462cb8e571/greenlet-3.2.2-cp313-cp313t-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2593283bf81ca37d27d110956b79e8723f9aa50c4bcdc29d3c0543d4743d2763", size = 632620, upload-time = "2025-05-09T15:29:28.051Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/10/ec/718a3bd56249e729016b0b69bee4adea0dfccf6ca43d147ef3b21edbca16/greenlet-3.2.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:89c69e9a10670eb7a66b8cef6354c24671ba241f46152dd3eed447f79c29fb5b", size = 628851, upload-time = "2025-05-09T14:53:38.472Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9b/9d/d1c79286a76bc62ccdc1387291464af16a4204ea717f24e77b0acd623b99/greenlet-3.2.2-cp313-cp313t-manylinux_2_24_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:02a98600899ca1ca5d3a2590974c9e3ec259503b2d6ba6527605fcd74e08e207", size = 593718, upload-time = "2025-05-09T14:53:48.313Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/cd/41/96ba2bf948f67b245784cd294b84e3d17933597dffd3acdb367a210d1949/greenlet-3.2.2-cp313-cp313t-musllinux_1_1_aarch64.whl", hash = "sha256:b50a8c5c162469c3209e5ec92ee4f95c8231b11db6a04db09bbe338176723bb8", size = 1105752, upload-time = "2025-05-09T15:27:08.217Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/68/3b/3b97f9d33c1f2eb081759da62bd6162159db260f602f048bc2f36b4c453e/greenlet-3.2.2-cp313-cp313t-musllinux_1_1_x86_64.whl", hash = "sha256:45f9f4853fb4cc46783085261c9ec4706628f3b57de3e68bae03e8f8b3c0de51", size = 1125170, upload-time = "2025-05-09T14:54:04.082Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/31/df/b7d17d66c8d0f578d2885a3d8f565e9e4725eacc9d3fdc946d0031c055c4/greenlet-3.2.2-cp314-cp314-macosx_11_0_universal2.whl", hash = "sha256:9ea5231428af34226c05f927e16fc7f6fa5e39e3ad3cd24ffa48ba53a47f4240", size = 269899, upload-time = "2025-05-09T14:54:01.581Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "gspread"
|
||||
version = "6.2.1"
|
||||
@@ -398,6 +351,48 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl", hash = "sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3", size = 70442, upload-time = "2024-09-15T18:07:37.964Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "importlib-metadata"
|
||||
version = "8.7.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "zipp" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/76/66/650a33bd90f786193e4de4b3ad86ea60b53c89b669a5c7be931fac31cdb0/importlib_metadata-8.7.0.tar.gz", hash = "sha256:d13b81ad223b890aa16c5471f2ac3056cf76c5f10f82d6f9292f0b415f389000", size = 56641, upload-time = "2025-04-27T15:29:01.736Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/20/b0/36bd937216ec521246249be3bf9855081de4c5e06a0c9b4219dbeda50373/importlib_metadata-8.7.0-py3-none-any.whl", hash = "sha256:e5dd1551894c77868a30651cef00984d50e1002d06942a7101d34870c5f02afd", size = 27656, upload-time = "2025-04-27T15:29:00.214Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "itsdangerous"
|
||||
version = "2.2.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/9c/cb/8ac0172223afbccb63986cc25049b154ecfb5e85932587206f42317be31d/itsdangerous-2.2.0.tar.gz", hash = "sha256:e0050c0b7da1eea53ffaf149c0cfbb5c6e2e2b69c4bef22c81fa6eb73e5f6173", size = 54410, upload-time = "2024-04-16T21:28:15.614Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/04/96/92447566d16df59b2a776c0fb82dbc4d9e07cd95062562af01e408583fc4/itsdangerous-2.2.0-py3-none-any.whl", hash = "sha256:c6242fc49e35958c8b15141343aa660db5fc54d4f13a1db01a3f5891b98700ef", size = 16234, upload-time = "2024-04-16T21:28:14.499Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "jinja2"
|
||||
version = "3.1.6"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
dependencies = [
|
||||
{ name = "markupsafe" },
|
||||
]
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/df/bf/f7da0350254c0ed7c72f3e33cef02e048281fec7ecec5f032d4aac52226b/jinja2-3.1.6.tar.gz", hash = "sha256:0137fb05990d35f1275a587e9aee6d56da821fc83491a0fb838183be43f66d6d", size = 245115, upload-time = "2025-03-05T20:05:02.478Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/62/a1/3d680cbfd5f4b8f15abc1d571870c5fc3e594bb582bc3b64ea099db13e56/jinja2-3.1.6-py3-none-any.whl", hash = "sha256:85ece4451f492d0c13c5dd7c13a64681a86afae63a5f347908daf103ce6d2f67", size = 134899, upload-time = "2025-03-05T20:05:00.369Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "joblib"
|
||||
version = "1.5.1"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/dc/fe/0f5a938c54105553436dbff7a61dc4fed4b1b2c98852f8833beaf4d5968f/joblib-1.5.1.tar.gz", hash = "sha256:f4f86e351f39fe3d0d32a9f2c3d8af1ee4cec285aafcb27003dda5205576b444", size = 330475, upload-time = "2025-05-23T12:04:37.097Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/7d/4f/1195bbac8e0c2acc5f740661631d8d750dc38d4a32b23ee5df3cde6f4e0d/joblib-1.5.1-py3-none-any.whl", hash = "sha256:4719a31f054c7d766948dcd83e9613686b27114f190f717cec7eaa2084f8a74a", size = 307746, upload-time = "2025-05-23T12:04:35.124Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "kiwisolver"
|
||||
version = "1.4.8"
|
||||
@@ -485,6 +480,92 @@ wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/3a/1d/50ad811d1c5dae091e4cf046beba925bcae0a610e79ae4c538f996f63ed5/kiwisolver-1.4.8-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:65ea09a5a3faadd59c2ce96dc7bf0f364986a315949dc6374f04396b0d60e09b", size = 71762, upload-time = "2024-12-24T18:30:48.903Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "llvmlite"
|
||||
version = "0.44.0"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/89/6a/95a3d3610d5c75293d5dbbb2a76480d5d4eeba641557b69fe90af6c5b84e/llvmlite-0.44.0.tar.gz", hash = "sha256:07667d66a5d150abed9157ab6c0b9393c9356f229784a4385c02f99e94fc94d4", size = 171880, upload-time = "2025-01-20T11:14:41.342Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/41/75/d4863ddfd8ab5f6e70f4504cf8cc37f4e986ec6910f4ef8502bb7d3c1c71/llvmlite-0.44.0-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:9fbadbfba8422123bab5535b293da1cf72f9f478a65645ecd73e781f962ca614", size = 28132306, upload-time = "2025-01-20T11:12:18.634Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/37/d9/6e8943e1515d2f1003e8278819ec03e4e653e2eeb71e4d00de6cfe59424e/llvmlite-0.44.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:cccf8eb28f24840f2689fb1a45f9c0f7e582dd24e088dcf96e424834af11f791", size = 26201096, upload-time = "2025-01-20T11:12:24.544Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/aa/46/8ffbc114def88cc698906bf5acab54ca9fdf9214fe04aed0e71731fb3688/llvmlite-0.44.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7202b678cdf904823c764ee0fe2dfe38a76981f4c1e51715b4cb5abb6cf1d9e8", size = 42361859, upload-time = "2025-01-20T11:12:31.839Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/30/1c/9366b29ab050a726af13ebaae8d0dff00c3c58562261c79c635ad4f5eb71/llvmlite-0.44.0-cp310-cp310-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:40526fb5e313d7b96bda4cbb2c85cd5374e04d80732dd36a282d72a560bb6408", size = 41184199, upload-time = "2025-01-20T11:12:40.049Z" },
{ url = "https://files.pythonhosted.org/packages/69/07/35e7c594b021ecb1938540f5bce543ddd8713cff97f71d81f021221edc1b/llvmlite-0.44.0-cp310-cp310-win_amd64.whl", hash = "sha256:41e3839150db4330e1b2716c0be3b5c4672525b4c9005e17c7597f835f351ce2", size = 30332381, upload-time = "2025-01-20T11:12:47.054Z" },
{ url = "https://files.pythonhosted.org/packages/b5/e2/86b245397052386595ad726f9742e5223d7aea999b18c518a50e96c3aca4/llvmlite-0.44.0-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:eed7d5f29136bda63b6d7804c279e2b72e08c952b7c5df61f45db408e0ee52f3", size = 28132305, upload-time = "2025-01-20T11:12:53.936Z" },
{ url = "https://files.pythonhosted.org/packages/ff/ec/506902dc6870249fbe2466d9cf66d531265d0f3a1157213c8f986250c033/llvmlite-0.44.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:ace564d9fa44bb91eb6e6d8e7754977783c68e90a471ea7ce913bff30bd62427", size = 26201090, upload-time = "2025-01-20T11:12:59.847Z" },
{ url = "https://files.pythonhosted.org/packages/99/fe/d030f1849ebb1f394bb3f7adad5e729b634fb100515594aca25c354ffc62/llvmlite-0.44.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c5d22c3bfc842668168a786af4205ec8e3ad29fb1bc03fd11fd48460d0df64c1", size = 42361858, upload-time = "2025-01-20T11:13:07.623Z" },
{ url = "https://files.pythonhosted.org/packages/d7/7a/ce6174664b9077fc673d172e4c888cb0b128e707e306bc33fff8c2035f0d/llvmlite-0.44.0-cp311-cp311-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f01a394e9c9b7b1d4e63c327b096d10f6f0ed149ef53d38a09b3749dcf8c9610", size = 41184200, upload-time = "2025-01-20T11:13:20.058Z" },
{ url = "https://files.pythonhosted.org/packages/5f/c6/258801143975a6d09a373f2641237992496e15567b907a4d401839d671b8/llvmlite-0.44.0-cp311-cp311-win_amd64.whl", hash = "sha256:d8489634d43c20cd0ad71330dde1d5bc7b9966937a263ff1ec1cebb90dc50955", size = 30331193, upload-time = "2025-01-20T11:13:26.976Z" },
{ url = "https://files.pythonhosted.org/packages/15/86/e3c3195b92e6e492458f16d233e58a1a812aa2bfbef9bdd0fbafcec85c60/llvmlite-0.44.0-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:1d671a56acf725bf1b531d5ef76b86660a5ab8ef19bb6a46064a705c6ca80aad", size = 28132297, upload-time = "2025-01-20T11:13:32.57Z" },
{ url = "https://files.pythonhosted.org/packages/d6/53/373b6b8be67b9221d12b24125fd0ec56b1078b660eeae266ec388a6ac9a0/llvmlite-0.44.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5f79a728e0435493611c9f405168682bb75ffd1fbe6fc360733b850c80a026db", size = 26201105, upload-time = "2025-01-20T11:13:38.744Z" },
{ url = "https://files.pythonhosted.org/packages/cb/da/8341fd3056419441286c8e26bf436923021005ece0bff5f41906476ae514/llvmlite-0.44.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0143a5ef336da14deaa8ec26c5449ad5b6a2b564df82fcef4be040b9cacfea9", size = 42361901, upload-time = "2025-01-20T11:13:46.711Z" },
{ url = "https://files.pythonhosted.org/packages/53/ad/d79349dc07b8a395a99153d7ce8b01d6fcdc9f8231355a5df55ded649b61/llvmlite-0.44.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d752f89e31b66db6f8da06df8b39f9b91e78c5feea1bf9e8c1fba1d1c24c065d", size = 41184247, upload-time = "2025-01-20T11:13:56.159Z" },
{ url = "https://files.pythonhosted.org/packages/e2/3b/a9a17366af80127bd09decbe2a54d8974b6d8b274b39bf47fbaedeec6307/llvmlite-0.44.0-cp312-cp312-win_amd64.whl", hash = "sha256:eae7e2d4ca8f88f89d315b48c6b741dcb925d6a1042da694aa16ab3dd4cbd3a1", size = 30332380, upload-time = "2025-01-20T11:14:02.442Z" },
{ url = "https://files.pythonhosted.org/packages/89/24/4c0ca705a717514c2092b18476e7a12c74d34d875e05e4d742618ebbf449/llvmlite-0.44.0-cp313-cp313-macosx_10_14_x86_64.whl", hash = "sha256:319bddd44e5f71ae2689859b7203080716448a3cd1128fb144fe5c055219d516", size = 28132306, upload-time = "2025-01-20T11:14:09.035Z" },
{ url = "https://files.pythonhosted.org/packages/01/cf/1dd5a60ba6aee7122ab9243fd614abcf22f36b0437cbbe1ccf1e3391461c/llvmlite-0.44.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:9c58867118bad04a0bb22a2e0068c693719658105e40009ffe95c7000fcde88e", size = 26201090, upload-time = "2025-01-20T11:14:15.401Z" },
{ url = "https://files.pythonhosted.org/packages/d2/1b/656f5a357de7135a3777bd735cc7c9b8f23b4d37465505bd0eaf4be9befe/llvmlite-0.44.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:46224058b13c96af1365290bdfebe9a6264ae62fb79b2b55693deed11657a8bf", size = 42361904, upload-time = "2025-01-20T11:14:22.949Z" },
{ url = "https://files.pythonhosted.org/packages/d8/e1/12c5f20cb9168fb3464a34310411d5ad86e4163c8ff2d14a2b57e5cc6bac/llvmlite-0.44.0-cp313-cp313-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:aa0097052c32bf721a4efc03bd109d335dfa57d9bffb3d4c24cc680711b8b4fc", size = 41184245, upload-time = "2025-01-20T11:14:31.731Z" },
{ url = "https://files.pythonhosted.org/packages/d0/81/e66fc86539293282fd9cb7c9417438e897f369e79ffb62e1ae5e5154d4dd/llvmlite-0.44.0-cp313-cp313-win_amd64.whl", hash = "sha256:2fb7c4f2fb86cbae6dca3db9ab203eeea0e22d73b99bc2341cdf9de93612e930", size = 30331193, upload-time = "2025-01-20T11:14:38.578Z" },
]
[[package]]
name = "markupsafe"
version = "3.0.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b2/97/5d42485e71dfc078108a86d6de8fa46db44a1a9295e89c5d6d4a06e23a62/markupsafe-3.0.2.tar.gz", hash = "sha256:ee55d3edf80167e48ea11a923c7386f4669df67d7994554387f84e7d8b0a2bf0", size = 20537, upload-time = "2024-10-18T15:21:54.129Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/04/90/d08277ce111dd22f77149fd1a5d4653eeb3b3eaacbdfcbae5afb2600eebd/MarkupSafe-3.0.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:7e94c425039cde14257288fd61dcfb01963e658efbc0ff54f5306b06054700f8", size = 14357, upload-time = "2024-10-18T15:20:51.44Z" },
{ url = "https://files.pythonhosted.org/packages/04/e1/6e2194baeae0bca1fae6629dc0cbbb968d4d941469cbab11a3872edff374/MarkupSafe-3.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9e2d922824181480953426608b81967de705c3cef4d1af983af849d7bd619158", size = 12393, upload-time = "2024-10-18T15:20:52.426Z" },
{ url = "https://files.pythonhosted.org/packages/1d/69/35fa85a8ece0a437493dc61ce0bb6d459dcba482c34197e3efc829aa357f/MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:38a9ef736c01fccdd6600705b09dc574584b89bea478200c5fbf112a6b0d5579", size = 21732, upload-time = "2024-10-18T15:20:53.578Z" },
{ url = "https://files.pythonhosted.org/packages/22/35/137da042dfb4720b638d2937c38a9c2df83fe32d20e8c8f3185dbfef05f7/MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bbcb445fa71794da8f178f0f6d66789a28d7319071af7a496d4d507ed566270d", size = 20866, upload-time = "2024-10-18T15:20:55.06Z" },
{ url = "https://files.pythonhosted.org/packages/29/28/6d029a903727a1b62edb51863232152fd335d602def598dade38996887f0/MarkupSafe-3.0.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:57cb5a3cf367aeb1d316576250f65edec5bb3be939e9247ae594b4bcbc317dfb", size = 20964, upload-time = "2024-10-18T15:20:55.906Z" },
{ url = "https://files.pythonhosted.org/packages/cc/cd/07438f95f83e8bc028279909d9c9bd39e24149b0d60053a97b2bc4f8aa51/MarkupSafe-3.0.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:3809ede931876f5b2ec92eef964286840ed3540dadf803dd570c3b7e13141a3b", size = 21977, upload-time = "2024-10-18T15:20:57.189Z" },
{ url = "https://files.pythonhosted.org/packages/29/01/84b57395b4cc062f9c4c55ce0df7d3108ca32397299d9df00fedd9117d3d/MarkupSafe-3.0.2-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:e07c3764494e3776c602c1e78e298937c3315ccc9043ead7e685b7f2b8d47b3c", size = 21366, upload-time = "2024-10-18T15:20:58.235Z" },
{ url = "https://files.pythonhosted.org/packages/bd/6e/61ebf08d8940553afff20d1fb1ba7294b6f8d279df9fd0c0db911b4bbcfd/MarkupSafe-3.0.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:b424c77b206d63d500bcb69fa55ed8d0e6a3774056bdc4839fc9298a7edca171", size = 21091, upload-time = "2024-10-18T15:20:59.235Z" },
{ url = "https://files.pythonhosted.org/packages/11/23/ffbf53694e8c94ebd1e7e491de185124277964344733c45481f32ede2499/MarkupSafe-3.0.2-cp310-cp310-win32.whl", hash = "sha256:fcabf5ff6eea076f859677f5f0b6b5c1a51e70a376b0579e0eadef8db48c6b50", size = 15065, upload-time = "2024-10-18T15:21:00.307Z" },
{ url = "https://files.pythonhosted.org/packages/44/06/e7175d06dd6e9172d4a69a72592cb3f7a996a9c396eee29082826449bbc3/MarkupSafe-3.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:6af100e168aa82a50e186c82875a5893c5597a0c1ccdb0d8b40240b1f28b969a", size = 15514, upload-time = "2024-10-18T15:21:01.122Z" },
{ url = "https://files.pythonhosted.org/packages/6b/28/bbf83e3f76936960b850435576dd5e67034e200469571be53f69174a2dfd/MarkupSafe-3.0.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9025b4018f3a1314059769c7bf15441064b2207cb3f065e6ea1e7359cb46db9d", size = 14353, upload-time = "2024-10-18T15:21:02.187Z" },
{ url = "https://files.pythonhosted.org/packages/6c/30/316d194b093cde57d448a4c3209f22e3046c5bb2fb0820b118292b334be7/MarkupSafe-3.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:93335ca3812df2f366e80509ae119189886b0f3c2b81325d39efdb84a1e2ae93", size = 12392, upload-time = "2024-10-18T15:21:02.941Z" },
{ url = "https://files.pythonhosted.org/packages/f2/96/9cdafba8445d3a53cae530aaf83c38ec64c4d5427d975c974084af5bc5d2/MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cb8438c3cbb25e220c2ab33bb226559e7afb3baec11c4f218ffa7308603c832", size = 23984, upload-time = "2024-10-18T15:21:03.953Z" },
{ url = "https://files.pythonhosted.org/packages/f1/a4/aefb044a2cd8d7334c8a47d3fb2c9f328ac48cb349468cc31c20b539305f/MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a123e330ef0853c6e822384873bef7507557d8e4a082961e1defa947aa59ba84", size = 23120, upload-time = "2024-10-18T15:21:06.495Z" },
{ url = "https://files.pythonhosted.org/packages/8d/21/5e4851379f88f3fad1de30361db501300d4f07bcad047d3cb0449fc51f8c/MarkupSafe-3.0.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e084f686b92e5b83186b07e8a17fc09e38fff551f3602b249881fec658d3eca", size = 23032, upload-time = "2024-10-18T15:21:07.295Z" },
{ url = "https://files.pythonhosted.org/packages/00/7b/e92c64e079b2d0d7ddf69899c98842f3f9a60a1ae72657c89ce2655c999d/MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d8213e09c917a951de9d09ecee036d5c7d36cb6cb7dbaece4c71a60d79fb9798", size = 24057, upload-time = "2024-10-18T15:21:08.073Z" },
{ url = "https://files.pythonhosted.org/packages/f9/ac/46f960ca323037caa0a10662ef97d0a4728e890334fc156b9f9e52bcc4ca/MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:5b02fb34468b6aaa40dfc198d813a641e3a63b98c2b05a16b9f80b7ec314185e", size = 23359, upload-time = "2024-10-18T15:21:09.318Z" },
{ url = "https://files.pythonhosted.org/packages/69/84/83439e16197337b8b14b6a5b9c2105fff81d42c2a7c5b58ac7b62ee2c3b1/MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:0bff5e0ae4ef2e1ae4fdf2dfd5b76c75e5c2fa4132d05fc1b0dabcd20c7e28c4", size = 23306, upload-time = "2024-10-18T15:21:10.185Z" },
{ url = "https://files.pythonhosted.org/packages/9a/34/a15aa69f01e2181ed8d2b685c0d2f6655d5cca2c4db0ddea775e631918cd/MarkupSafe-3.0.2-cp311-cp311-win32.whl", hash = "sha256:6c89876f41da747c8d3677a2b540fb32ef5715f97b66eeb0c6b66f5e3ef6f59d", size = 15094, upload-time = "2024-10-18T15:21:11.005Z" },
{ url = "https://files.pythonhosted.org/packages/da/b8/3a3bd761922d416f3dc5d00bfbed11f66b1ab89a0c2b6e887240a30b0f6b/MarkupSafe-3.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:70a87b411535ccad5ef2f1df5136506a10775d267e197e4cf531ced10537bd6b", size = 15521, upload-time = "2024-10-18T15:21:12.911Z" },
{ url = "https://files.pythonhosted.org/packages/22/09/d1f21434c97fc42f09d290cbb6350d44eb12f09cc62c9476effdb33a18aa/MarkupSafe-3.0.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:9778bd8ab0a994ebf6f84c2b949e65736d5575320a17ae8984a77fab08db94cf", size = 14274, upload-time = "2024-10-18T15:21:13.777Z" },
{ url = "https://files.pythonhosted.org/packages/6b/b0/18f76bba336fa5aecf79d45dcd6c806c280ec44538b3c13671d49099fdd0/MarkupSafe-3.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:846ade7b71e3536c4e56b386c2a47adf5741d2d8b94ec9dc3e92e5e1ee1e2225", size = 12348, upload-time = "2024-10-18T15:21:14.822Z" },
{ url = "https://files.pythonhosted.org/packages/e0/25/dd5c0f6ac1311e9b40f4af06c78efde0f3b5cbf02502f8ef9501294c425b/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c99d261bd2d5f6b59325c92c73df481e05e57f19837bdca8413b9eac4bd8028", size = 24149, upload-time = "2024-10-18T15:21:15.642Z" },
{ url = "https://files.pythonhosted.org/packages/f3/f0/89e7aadfb3749d0f52234a0c8c7867877876e0a20b60e2188e9850794c17/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e17c96c14e19278594aa4841ec148115f9c7615a47382ecb6b82bd8fea3ab0c8", size = 23118, upload-time = "2024-10-18T15:21:17.133Z" },
{ url = "https://files.pythonhosted.org/packages/d5/da/f2eeb64c723f5e3777bc081da884b414671982008c47dcc1873d81f625b6/MarkupSafe-3.0.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88416bd1e65dcea10bc7569faacb2c20ce071dd1f87539ca2ab364bf6231393c", size = 22993, upload-time = "2024-10-18T15:21:18.064Z" },
{ url = "https://files.pythonhosted.org/packages/da/0e/1f32af846df486dce7c227fe0f2398dc7e2e51d4a370508281f3c1c5cddc/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2181e67807fc2fa785d0592dc2d6206c019b9502410671cc905d132a92866557", size = 24178, upload-time = "2024-10-18T15:21:18.859Z" },
{ url = "https://files.pythonhosted.org/packages/c4/f6/bb3ca0532de8086cbff5f06d137064c8410d10779c4c127e0e47d17c0b71/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:52305740fe773d09cffb16f8ed0427942901f00adedac82ec8b67752f58a1b22", size = 23319, upload-time = "2024-10-18T15:21:19.671Z" },
{ url = "https://files.pythonhosted.org/packages/a2/82/8be4c96ffee03c5b4a034e60a31294daf481e12c7c43ab8e34a1453ee48b/MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ad10d3ded218f1039f11a75f8091880239651b52e9bb592ca27de44eed242a48", size = 23352, upload-time = "2024-10-18T15:21:20.971Z" },
{ url = "https://files.pythonhosted.org/packages/51/ae/97827349d3fcffee7e184bdf7f41cd6b88d9919c80f0263ba7acd1bbcb18/MarkupSafe-3.0.2-cp312-cp312-win32.whl", hash = "sha256:0f4ca02bea9a23221c0182836703cbf8930c5e9454bacce27e767509fa286a30", size = 15097, upload-time = "2024-10-18T15:21:22.646Z" },
{ url = "https://files.pythonhosted.org/packages/c1/80/a61f99dc3a936413c3ee4e1eecac96c0da5ed07ad56fd975f1a9da5bc630/MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:8e06879fc22a25ca47312fbe7c8264eb0b662f6db27cb2d3bbbc74b1df4b9b87", size = 15601, upload-time = "2024-10-18T15:21:23.499Z" },
{ url = "https://files.pythonhosted.org/packages/83/0e/67eb10a7ecc77a0c2bbe2b0235765b98d164d81600746914bebada795e97/MarkupSafe-3.0.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ba9527cdd4c926ed0760bc301f6728ef34d841f405abf9d4f959c478421e4efd", size = 14274, upload-time = "2024-10-18T15:21:24.577Z" },
{ url = "https://files.pythonhosted.org/packages/2b/6d/9409f3684d3335375d04e5f05744dfe7e9f120062c9857df4ab490a1031a/MarkupSafe-3.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430", size = 12352, upload-time = "2024-10-18T15:21:25.382Z" },
{ url = "https://files.pythonhosted.org/packages/d2/f5/6eadfcd3885ea85fe2a7c128315cc1bb7241e1987443d78c8fe712d03091/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:569511d3b58c8791ab4c2e1285575265991e6d8f8700c7be0e88f86cb0672094", size = 24122, upload-time = "2024-10-18T15:21:26.199Z" },
{ url = "https://files.pythonhosted.org/packages/0c/91/96cf928db8236f1bfab6ce15ad070dfdd02ed88261c2afafd4b43575e9e9/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:15ab75ef81add55874e7ab7055e9c397312385bd9ced94920f2802310c930396", size = 23085, upload-time = "2024-10-18T15:21:27.029Z" },
{ url = "https://files.pythonhosted.org/packages/c2/cf/c9d56af24d56ea04daae7ac0940232d31d5a8354f2b457c6d856b2057d69/MarkupSafe-3.0.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f3818cb119498c0678015754eba762e0d61e5b52d34c8b13d770f0719f7b1d79", size = 22978, upload-time = "2024-10-18T15:21:27.846Z" },
{ url = "https://files.pythonhosted.org/packages/2a/9f/8619835cd6a711d6272d62abb78c033bda638fdc54c4e7f4272cf1c0962b/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:cdb82a876c47801bb54a690c5ae105a46b392ac6099881cdfb9f6e95e4014c6a", size = 24208, upload-time = "2024-10-18T15:21:28.744Z" },
{ url = "https://files.pythonhosted.org/packages/f9/bf/176950a1792b2cd2102b8ffeb5133e1ed984547b75db47c25a67d3359f77/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:cabc348d87e913db6ab4aa100f01b08f481097838bdddf7c7a84b7575b7309ca", size = 23357, upload-time = "2024-10-18T15:21:29.545Z" },
{ url = "https://files.pythonhosted.org/packages/ce/4f/9a02c1d335caabe5c4efb90e1b6e8ee944aa245c1aaaab8e8a618987d816/MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:444dcda765c8a838eaae23112db52f1efaf750daddb2d9ca300bcae1039adc5c", size = 23344, upload-time = "2024-10-18T15:21:30.366Z" },
{ url = "https://files.pythonhosted.org/packages/ee/55/c271b57db36f748f0e04a759ace9f8f759ccf22b4960c270c78a394f58be/MarkupSafe-3.0.2-cp313-cp313-win32.whl", hash = "sha256:bcf3e58998965654fdaff38e58584d8937aa3096ab5354d493c77d1fdd66d7a1", size = 15101, upload-time = "2024-10-18T15:21:31.207Z" },
{ url = "https://files.pythonhosted.org/packages/29/88/07df22d2dd4df40aba9f3e402e6dc1b8ee86297dddbad4872bd5e7b0094f/MarkupSafe-3.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:e6a2a455bd412959b57a172ce6328d2dd1f01cb2135efda2e4576e8a23fa3b0f", size = 15603, upload-time = "2024-10-18T15:21:32.032Z" },
{ url = "https://files.pythonhosted.org/packages/62/6a/8b89d24db2d32d433dffcd6a8779159da109842434f1dd2f6e71f32f738c/MarkupSafe-3.0.2-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:b5a6b3ada725cea8a5e634536b1b01c30bcdcd7f9c6fff4151548d5bf6b3a36c", size = 14510, upload-time = "2024-10-18T15:21:33.625Z" },
{ url = "https://files.pythonhosted.org/packages/7a/06/a10f955f70a2e5a9bf78d11a161029d278eeacbd35ef806c3fd17b13060d/MarkupSafe-3.0.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:a904af0a6162c73e3edcb969eeeb53a63ceeb5d8cf642fade7d39e7963a22ddb", size = 12486, upload-time = "2024-10-18T15:21:34.611Z" },
{ url = "https://files.pythonhosted.org/packages/34/cf/65d4a571869a1a9078198ca28f39fba5fbb910f952f9dbc5220afff9f5e6/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa4e5faecf353ed117801a068ebab7b7e09ffb6e1d5e412dc852e0da018126c", size = 25480, upload-time = "2024-10-18T15:21:35.398Z" },
{ url = "https://files.pythonhosted.org/packages/0c/e3/90e9651924c430b885468b56b3d597cabf6d72be4b24a0acd1fa0e12af67/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0ef13eaeee5b615fb07c9a7dadb38eac06a0608b41570d8ade51c56539e509d", size = 23914, upload-time = "2024-10-18T15:21:36.231Z" },
{ url = "https://files.pythonhosted.org/packages/66/8c/6c7cf61f95d63bb866db39085150df1f2a5bd3335298f14a66b48e92659c/MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d16a81a06776313e817c951135cf7340a3e91e8c1ff2fac444cfd75fffa04afe", size = 23796, upload-time = "2024-10-18T15:21:37.073Z" },
{ url = "https://files.pythonhosted.org/packages/bb/35/cbe9238ec3f47ac9a7c8b3df7a808e7cb50fe149dc7039f5f454b3fba218/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:6381026f158fdb7c72a168278597a5e3a5222e83ea18f543112b2662a9b699c5", size = 25473, upload-time = "2024-10-18T15:21:37.932Z" },
{ url = "https://files.pythonhosted.org/packages/e6/32/7621a4382488aa283cc05e8984a9c219abad3bca087be9ec77e89939ded9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:3d79d162e7be8f996986c064d1c7c817f6df3a77fe3d6859f6f9e7be4b8c213a", size = 24114, upload-time = "2024-10-18T15:21:39.799Z" },
{ url = "https://files.pythonhosted.org/packages/0d/80/0985960e4b89922cb5a0bac0ed39c5b96cbc1a536a99f30e8c220a996ed9/MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:131a3c7689c85f5ad20f9f6fb1b866f402c445b220c19fe4308c0b147ccd2ad9", size = 24098, upload-time = "2024-10-18T15:21:40.813Z" },
{ url = "https://files.pythonhosted.org/packages/82/78/fedb03c7d5380df2427038ec8d973587e90561b2d90cd472ce9254cf348b/MarkupSafe-3.0.2-cp313-cp313t-win32.whl", hash = "sha256:ba8062ed2cf21c07a9e295d5b8a2a5ce678b913b45fdf68c32d95d6c1291e0b6", size = 15208, upload-time = "2024-10-18T15:21:41.814Z" },
{ url = "https://files.pythonhosted.org/packages/4f/65/6079a46068dfceaeabb5dcad6d674f5f5c61a6fa5673746f42a9f4c233b3/MarkupSafe-3.0.2-cp313-cp313t-win_amd64.whl", hash = "sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f", size = 15739, upload-time = "2024-10-18T15:21:42.784Z" },
]
[[package]]
name = "matplotlib"
version = "3.10.3"
@@ -539,11 +620,52 @@ wheels = [
[[package]]
name = "narwhals"
version = "1.40.0"
version = "1.41.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f0/57/283881d06788c2fddd05eb7f0d6c82c5116d2827e83b845c796c74417c56/narwhals-1.40.0.tar.gz", hash = "sha256:17064abffd264ea1cfe6aefc8a0080f3a4ffb3659a98bcad5456ca80b88f2a0a", size = 487625, upload-time = "2025-05-19T07:44:12.103Z" }
sdist = { url = "https://files.pythonhosted.org/packages/32/fc/7b9a3689911662be59889b1b0b40e17d5dba6f98080994d86ca1f3154d41/narwhals-1.41.0.tar.gz", hash = "sha256:0ab2e5a1757a19b071e37ca74b53b0b5426789321d68939738337dfddea629b5", size = 488446, upload-time = "2025-05-26T12:46:07.43Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2c/e6/4d16dfa26f40230593c216bf695da01682fdbdf6af4e79abef572ab26bce/narwhals-1.40.0-py3-none-any.whl", hash = "sha256:1e6c731811d01c61147c52433b4d4edfb6511aaf2c859aa01c2e8ca6ff4d27e5", size = 357340, upload-time = "2025-05-19T07:44:10.11Z" },
{ url = "https://files.pythonhosted.org/packages/c9/e0/ade8619846645461c012498f02b93a659e50f07d9d9a6ffefdf5ea2c02a0/narwhals-1.41.0-py3-none-any.whl", hash = "sha256:d958336b40952e4c4b7aeef259a7074851da0800cf902186a58f2faeff97be02", size = 357968, upload-time = "2025-05-26T12:46:05.207Z" },
]
[[package]]
name = "nest-asyncio"
version = "1.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/83/f8/51569ac65d696c8ecbee95938f89d4abf00f47d58d48f6fbabfe8f0baefe/nest_asyncio-1.6.0.tar.gz", hash = "sha256:6f172d5449aca15afd6c646851f4e31e02c598d553a667e38cafa997cfec55fe", size = 7418, upload-time = "2024-01-21T14:25:19.227Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/a0/c4/c2971a3ba4c6103a3d10c4b0f24f461ddc027f0f09763220cf35ca1401b3/nest_asyncio-1.6.0-py3-none-any.whl", hash = "sha256:87af6efd6b5e897c81050477ef65c62e2b2f35d51703cae01aff2905b1852e1c", size = 5195, upload-time = "2024-01-21T14:25:17.223Z" },
]
[[package]]
name = "numba"
version = "0.61.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "llvmlite" },
{ name = "numpy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/1c/a0/e21f57604304aa03ebb8e098429222722ad99176a4f979d34af1d1ee80da/numba-0.61.2.tar.gz", hash = "sha256:8750ee147940a6637b80ecf7f95062185ad8726c8c28a2295b8ec1160a196f7d", size = 2820615, upload-time = "2025-04-09T02:58:07.659Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/eb/ca/f470be59552ccbf9531d2d383b67ae0b9b524d435fb4a0d229fef135116e/numba-0.61.2-cp310-cp310-macosx_10_14_x86_64.whl", hash = "sha256:cf9f9fc00d6eca0c23fc840817ce9f439b9f03c8f03d6246c0e7f0cb15b7162a", size = 2775663, upload-time = "2025-04-09T02:57:34.143Z" },
{ url = "https://files.pythonhosted.org/packages/f5/13/3bdf52609c80d460a3b4acfb9fdb3817e392875c0d6270cf3fd9546f138b/numba-0.61.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:ea0247617edcb5dd61f6106a56255baab031acc4257bddaeddb3a1003b4ca3fd", size = 2778344, upload-time = "2025-04-09T02:57:36.609Z" },
{ url = "https://files.pythonhosted.org/packages/e2/7d/bfb2805bcfbd479f04f835241ecf28519f6e3609912e3a985aed45e21370/numba-0.61.2-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ae8c7a522c26215d5f62ebec436e3d341f7f590079245a2f1008dfd498cc1642", size = 3824054, upload-time = "2025-04-09T02:57:38.162Z" },
{ url = "https://files.pythonhosted.org/packages/e3/27/797b2004745c92955470c73c82f0e300cf033c791f45bdecb4b33b12bdea/numba-0.61.2-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:bd1e74609855aa43661edffca37346e4e8462f6903889917e9f41db40907daa2", size = 3518531, upload-time = "2025-04-09T02:57:39.709Z" },
{ url = "https://files.pythonhosted.org/packages/b1/c6/c2fb11e50482cb310afae87a997707f6c7d8a48967b9696271347441f650/numba-0.61.2-cp310-cp310-win_amd64.whl", hash = "sha256:ae45830b129c6137294093b269ef0a22998ccc27bf7cf096ab8dcf7bca8946f9", size = 2831612, upload-time = "2025-04-09T02:57:41.559Z" },
{ url = "https://files.pythonhosted.org/packages/3f/97/c99d1056aed767503c228f7099dc11c402906b42a4757fec2819329abb98/numba-0.61.2-cp311-cp311-macosx_10_14_x86_64.whl", hash = "sha256:efd3db391df53aaa5cfbee189b6c910a5b471488749fd6606c3f33fc984c2ae2", size = 2775825, upload-time = "2025-04-09T02:57:43.442Z" },
{ url = "https://files.pythonhosted.org/packages/95/9e/63c549f37136e892f006260c3e2613d09d5120672378191f2dc387ba65a2/numba-0.61.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:49c980e4171948ffebf6b9a2520ea81feed113c1f4890747ba7f59e74be84b1b", size = 2778695, upload-time = "2025-04-09T02:57:44.968Z" },
{ url = "https://files.pythonhosted.org/packages/97/c8/8740616c8436c86c1b9a62e72cb891177d2c34c2d24ddcde4c390371bf4c/numba-0.61.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3945615cd73c2c7eba2a85ccc9c1730c21cd3958bfcf5a44302abae0fb07bb60", size = 3829227, upload-time = "2025-04-09T02:57:46.63Z" },
{ url = "https://files.pythonhosted.org/packages/fc/06/66e99ae06507c31d15ff3ecd1f108f2f59e18b6e08662cd5f8a5853fbd18/numba-0.61.2-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:bbfdf4eca202cebade0b7d43896978e146f39398909a42941c9303f82f403a18", size = 3523422, upload-time = "2025-04-09T02:57:48.222Z" },
{ url = "https://files.pythonhosted.org/packages/0f/a4/2b309a6a9f6d4d8cfba583401c7c2f9ff887adb5d54d8e2e130274c0973f/numba-0.61.2-cp311-cp311-win_amd64.whl", hash = "sha256:76bcec9f46259cedf888041b9886e257ae101c6268261b19fda8cfbc52bec9d1", size = 2831505, upload-time = "2025-04-09T02:57:50.108Z" },
{ url = "https://files.pythonhosted.org/packages/b4/a0/c6b7b9c615cfa3b98c4c63f4316e3f6b3bbe2387740277006551784218cd/numba-0.61.2-cp312-cp312-macosx_10_14_x86_64.whl", hash = "sha256:34fba9406078bac7ab052efbf0d13939426c753ad72946baaa5bf9ae0ebb8dd2", size = 2776626, upload-time = "2025-04-09T02:57:51.857Z" },
{ url = "https://files.pythonhosted.org/packages/92/4a/fe4e3c2ecad72d88f5f8cd04e7f7cff49e718398a2fac02d2947480a00ca/numba-0.61.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:4ddce10009bc097b080fc96876d14c051cc0c7679e99de3e0af59014dab7dfe8", size = 2779287, upload-time = "2025-04-09T02:57:53.658Z" },
{ url = "https://files.pythonhosted.org/packages/9a/2d/e518df036feab381c23a624dac47f8445ac55686ec7f11083655eb707da3/numba-0.61.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:5b1bb509d01f23d70325d3a5a0e237cbc9544dd50e50588bc581ba860c213546", size = 3885928, upload-time = "2025-04-09T02:57:55.206Z" },
{ url = "https://files.pythonhosted.org/packages/10/0f/23cced68ead67b75d77cfcca3df4991d1855c897ee0ff3fe25a56ed82108/numba-0.61.2-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:48a53a3de8f8793526cbe330f2a39fe9a6638efcbf11bd63f3d2f9757ae345cd", size = 3577115, upload-time = "2025-04-09T02:57:56.818Z" },
{ url = "https://files.pythonhosted.org/packages/68/1d/ddb3e704c5a8fb90142bf9dc195c27db02a08a99f037395503bfbc1d14b3/numba-0.61.2-cp312-cp312-win_amd64.whl", hash = "sha256:97cf4f12c728cf77c9c1d7c23707e4d8fb4632b46275f8f3397de33e5877af18", size = 2831929, upload-time = "2025-04-09T02:57:58.45Z" },
{ url = "https://files.pythonhosted.org/packages/0b/f3/0fe4c1b1f2569e8a18ad90c159298d862f96c3964392a20d74fc628aee44/numba-0.61.2-cp313-cp313-macosx_10_14_x86_64.whl", hash = "sha256:3a10a8fc9afac40b1eac55717cece1b8b1ac0b946f5065c89e00bde646b5b154", size = 2771785, upload-time = "2025-04-09T02:57:59.96Z" },
{ url = "https://files.pythonhosted.org/packages/e9/71/91b277d712e46bd5059f8a5866862ed1116091a7cb03bd2704ba8ebe015f/numba-0.61.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:7d3bcada3c9afba3bed413fba45845f2fb9cd0d2b27dd58a1be90257e293d140", size = 2773289, upload-time = "2025-04-09T02:58:01.435Z" },
{ url = "https://files.pythonhosted.org/packages/0d/e0/5ea04e7ad2c39288c0f0f9e8d47638ad70f28e275d092733b5817cf243c9/numba-0.61.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:bdbca73ad81fa196bd53dc12e3aaf1564ae036e0c125f237c7644fe64a4928ab", size = 3893918, upload-time = "2025-04-09T02:58:02.933Z" },
{ url = "https://files.pythonhosted.org/packages/17/58/064f4dcb7d7e9412f16ecf80ed753f92297e39f399c905389688cf950b81/numba-0.61.2-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:5f154aaea625fb32cfbe3b80c5456d514d416fcdf79733dd69c0df3a11348e9e", size = 3584056, upload-time = "2025-04-09T02:58:04.538Z" },
{ url = "https://files.pythonhosted.org/packages/af/a4/6d3a0f2d3989e62a18749e1e9913d5fa4910bbb3e3311a035baea6caf26d/numba-0.61.2-cp313-cp313-win_amd64.whl", hash = "sha256:59321215e2e0ac5fa928a8020ab00b8e57cda8a97384963ac0dfa4d4e6aa54e7", size = 2831846, upload-time = "2025-04-09T02:58:06.125Z" },
]
[[package]]
@@ -608,6 +730,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/37/48/ac2a9584402fb6c0cd5b5d1a91dcf176b15760130dd386bbafdbfe3640bf/numpy-2.2.6-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:d042d24c90c41b54fd506da306759e06e568864df8ec17ccc17e9e884634fd00", size = 12812666, upload-time = "2025-05-17T21:45:31.426Z" },
]
[[package]]
name = "nvidia-nccl-cu12"
version = "2.26.5"
source = { registry = "https://pypi.org/simple" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/55/66/ed9d28946ead0fe1322df2f4fc6ea042340c0fe73b79a1419dc1fdbdd211/nvidia_nccl_cu12-2.26.5-py3-none-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:adb1bf4adcc5a47f597738a0700da6aef61f8ea4251b375540ae138c7d239588", size = 318058262, upload-time = "2025-05-02T23:32:43.197Z" },
{ url = "https://files.pythonhosted.org/packages/48/fb/ec4ac065d9b0d56f72eaf1d9b0df601e33da28197b32ca351dc05b342611/nvidia_nccl_cu12-2.26.5-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:ea5ed3e053c735f16809bee7111deac62ac35b10128a8c102960a0462ce16cbe", size = 318069637, upload-time = "2025-05-02T23:33:18.306Z" },
]
[[package]]
name = "oauthlib"
version = "3.2.2"
@@ -753,15 +884,15 @@ wheels = [
[[package]]
name = "plotly"
version = "6.1.1"
version = "6.1.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "narwhals" },
{ name = "packaging" },
]
sdist = { url = "https://files.pythonhosted.org/packages/8a/7c/f396bc817975252afbe7af102ce09cd12ac40a8e90b8699a857d1b15c8a3/plotly-6.1.1.tar.gz", hash = "sha256:84a4f3d36655f1328fa3155377c7e8a9533196697d5b79a4bc5e905bdd09a433", size = 7543694, upload-time = "2025-05-20T20:09:31.935Z" }
sdist = { url = "https://files.pythonhosted.org/packages/ae/77/431447616eda6a432dc3ce541b3f808ecb8803ea3d4ab2573b67f8eb4208/plotly-6.1.2.tar.gz", hash = "sha256:4fdaa228926ba3e3a213f4d1713287e69dcad1a7e66cf2025bd7d7026d5014b4", size = 7662971, upload-time = "2025-05-27T20:21:52.56Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/75/f3/f8cb7066f761e2530e1280889e3413769891e349fca35ee7290e4ace35f5/plotly-6.1.1-py3-none-any.whl", hash = "sha256:9cca7167406ebf7ff541422738402159ec3621a608ff7b3e2f025573a1c76225", size = 16118469, upload-time = "2025-05-20T20:09:26.196Z" },
{ url = "https://files.pythonhosted.org/packages/bf/6f/759d5da0517547a5d38aabf05d04d9f8adf83391d2c7fc33f904417d3ba2/plotly-6.1.2-py3-none-any.whl", hash = "sha256:f1548a8ed9158d59e03d7fed548c7db5549f3130d9ae19293c8638c202648f6d", size = 16265530, upload-time = "2025-05-27T20:21:46.6Z" },
]
[[package]]
@@ -800,15 +931,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/47/8d/d529b5d697919ba8c11ad626e835d4039be708a35b0d22de83a269a6682c/pyasn1_modules-0.4.2-py3-none-any.whl", hash = "sha256:29253a9207ce32b64c3ac6600edc75368f98473906e8fd1043bd6b5b1de2c14a", size = 181259, upload-time = "2025-03-28T02:41:19.028Z" },
]
[[package]]
name = "pycparser"
version = "2.22"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/1d/b2/31537cf4b1ca988837256c910a668b553fceb8f069bedc4b1c826024b52c/pycparser-2.22.tar.gz", hash = "sha256:491c8be9c040f5390f5bf44a5b07752bd07f56edf992381b05c701439eec10f6", size = 172736, upload-time = "2024-03-30T13:22:22.564Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/13/a3/a812df4e2dd5696d1f351d58b8fe16a405b234ad2886a0dab9183fb78109/pycparser-2.22-py3-none-any.whl", hash = "sha256:c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc", size = 117552, upload-time = "2024-03-30T13:22:20.476Z" },
]
[[package]]
name = "pyparsing"
version = "3.2.3"
@@ -867,6 +989,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/3b/5d/63d4ae3b9daea098d5d6f5da83984853c1bbacd5dc826764b249fe119d24/requests_oauthlib-2.0.0-py2.py3-none-any.whl", hash = "sha256:7dd8a5c40426b779b0868c404bdef9768deccf22749cde15852df527e6269b36", size = 24179, upload-time = "2024-03-22T20:32:28.055Z" },
]
[[package]]
name = "retrying"
version = "1.3.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "six" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ce/70/15ce8551d65b324e18c5aa6ef6998880f21ead51ebe5ed743c0950d7d9dd/retrying-1.3.4.tar.gz", hash = "sha256:345da8c5765bd982b1d1915deb9102fd3d1f7ad16bd84a9700b85f64d24e8f3e", size = 10929, upload-time = "2022-11-25T09:57:49.43Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/8f/04/9e36f28be4c0532c0e9207ff9dc01fb13a2b0eb036476a213b0000837d0e/retrying-1.3.4-py3-none-any.whl", hash = "sha256:8cc4d43cb8e1125e0ff3344e9de678fefd85db3b750b81b2240dc0183af37b35", size = 11602, upload-time = "2022-11-25T09:57:47.494Z" },
]
[[package]]
name = "rsa"
version = "4.9.1"
@@ -879,6 +1013,44 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/64/8d/0133e4eb4beed9e425d9a98ed6e081a55d195481b7632472be1af08d2f6b/rsa-4.9.1-py3-none-any.whl", hash = "sha256:68635866661c6836b8d39430f97a996acbd61bfa49406748ea243539fe239762", size = 34696, upload-time = "2025-04-16T09:51:17.142Z" },
]
[[package]]
name = "scikit-learn"
version = "1.6.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "joblib" },
{ name = "numpy" },
{ name = "scipy" },
{ name = "threadpoolctl" },
]
sdist = { url = "https://files.pythonhosted.org/packages/9e/a5/4ae3b3a0755f7b35a280ac90b28817d1f380318973cff14075ab41ef50d9/scikit_learn-1.6.1.tar.gz", hash = "sha256:b4fc2525eca2c69a59260f583c56a7557c6ccdf8deafdba6e060f94c1c59738e", size = 7068312, upload-time = "2025-01-10T08:07:55.348Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2e/3a/f4597eb41049110b21ebcbb0bcb43e4035017545daa5eedcfeb45c08b9c5/scikit_learn-1.6.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:d056391530ccd1e501056160e3c9673b4da4805eb67eb2bdf4e983e1f9c9204e", size = 12067702, upload-time = "2025-01-10T08:05:56.515Z" },
{ url = "https://files.pythonhosted.org/packages/37/19/0423e5e1fd1c6ec5be2352ba05a537a473c1677f8188b9306097d684b327/scikit_learn-1.6.1-cp310-cp310-macosx_12_0_arm64.whl", hash = "sha256:0c8d036eb937dbb568c6242fa598d551d88fb4399c0344d95c001980ec1c7d36", size = 11112765, upload-time = "2025-01-10T08:06:00.272Z" },
{ url = "https://files.pythonhosted.org/packages/70/95/d5cb2297a835b0f5fc9a77042b0a2d029866379091ab8b3f52cc62277808/scikit_learn-1.6.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8634c4bd21a2a813e0a7e3900464e6d593162a29dd35d25bdf0103b3fce60ed5", size = 12643991, upload-time = "2025-01-10T08:06:04.813Z" },
{ url = "https://files.pythonhosted.org/packages/b7/91/ab3c697188f224d658969f678be86b0968ccc52774c8ab4a86a07be13c25/scikit_learn-1.6.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:775da975a471c4f6f467725dff0ced5c7ac7bda5e9316b260225b48475279a1b", size = 13497182, upload-time = "2025-01-10T08:06:08.42Z" },
{ url = "https://files.pythonhosted.org/packages/17/04/d5d556b6c88886c092cc989433b2bab62488e0f0dafe616a1d5c9cb0efb1/scikit_learn-1.6.1-cp310-cp310-win_amd64.whl", hash = "sha256:8a600c31592bd7dab31e1c61b9bbd6dea1b3433e67d264d17ce1017dbdce8002", size = 11125517, upload-time = "2025-01-10T08:06:12.783Z" },
{ url = "https://files.pythonhosted.org/packages/6c/2a/e291c29670795406a824567d1dfc91db7b699799a002fdaa452bceea8f6e/scikit_learn-1.6.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:72abc587c75234935e97d09aa4913a82f7b03ee0b74111dcc2881cba3c5a7b33", size = 12102620, upload-time = "2025-01-10T08:06:16.675Z" },
{ url = "https://files.pythonhosted.org/packages/25/92/ee1d7a00bb6b8c55755d4984fd82608603a3cc59959245068ce32e7fb808/scikit_learn-1.6.1-cp311-cp311-macosx_12_0_arm64.whl", hash = "sha256:b3b00cdc8f1317b5f33191df1386c0befd16625f49d979fe77a8d44cae82410d", size = 11116234, upload-time = "2025-01-10T08:06:21.83Z" },
{ url = "https://files.pythonhosted.org/packages/30/cd/ed4399485ef364bb25f388ab438e3724e60dc218c547a407b6e90ccccaef/scikit_learn-1.6.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc4765af3386811c3ca21638f63b9cf5ecf66261cc4815c1db3f1e7dc7b79db2", size = 12592155, upload-time = "2025-01-10T08:06:27.309Z" },
{ url = "https://files.pythonhosted.org/packages/a8/f3/62fc9a5a659bb58a03cdd7e258956a5824bdc9b4bb3c5d932f55880be569/scikit_learn-1.6.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:25fc636bdaf1cc2f4a124a116312d837148b5e10872147bdaf4887926b8c03d8", size = 13497069, upload-time = "2025-01-10T08:06:32.515Z" },
{ url = "https://files.pythonhosted.org/packages/a1/a6/c5b78606743a1f28eae8f11973de6613a5ee87366796583fb74c67d54939/scikit_learn-1.6.1-cp311-cp311-win_amd64.whl", hash = "sha256:fa909b1a36e000a03c382aade0bd2063fd5680ff8b8e501660c0f59f021a6415", size = 11139809, upload-time = "2025-01-10T08:06:35.514Z" },
{ url = "https://files.pythonhosted.org/packages/0a/18/c797c9b8c10380d05616db3bfb48e2a3358c767affd0857d56c2eb501caa/scikit_learn-1.6.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:926f207c804104677af4857b2c609940b743d04c4c35ce0ddc8ff4f053cddc1b", size = 12104516, upload-time = "2025-01-10T08:06:40.009Z" },
{ url = "https://files.pythonhosted.org/packages/c4/b7/2e35f8e289ab70108f8cbb2e7a2208f0575dc704749721286519dcf35f6f/scikit_learn-1.6.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:2c2cae262064e6a9b77eee1c8e768fc46aa0b8338c6a8297b9b6759720ec0ff2", size = 11167837, upload-time = "2025-01-10T08:06:43.305Z" },
{ url = "https://files.pythonhosted.org/packages/a4/f6/ff7beaeb644bcad72bcfd5a03ff36d32ee4e53a8b29a639f11bcb65d06cd/scikit_learn-1.6.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1061b7c028a8663fb9a1a1baf9317b64a257fcb036dae5c8752b2abef31d136f", size = 12253728, upload-time = "2025-01-10T08:06:47.618Z" },
{ url = "https://files.pythonhosted.org/packages/29/7a/8bce8968883e9465de20be15542f4c7e221952441727c4dad24d534c6d99/scikit_learn-1.6.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2e69fab4ebfc9c9b580a7a80111b43d214ab06250f8a7ef590a4edf72464dd86", size = 13147700, upload-time = "2025-01-10T08:06:50.888Z" },
{ url = "https://files.pythonhosted.org/packages/62/27/585859e72e117fe861c2079bcba35591a84f801e21bc1ab85bce6ce60305/scikit_learn-1.6.1-cp312-cp312-win_amd64.whl", hash = "sha256:70b1d7e85b1c96383f872a519b3375f92f14731e279a7b4c6cfd650cf5dffc52", size = 11110613, upload-time = "2025-01-10T08:06:54.115Z" },
{ url = "https://files.pythonhosted.org/packages/2e/59/8eb1872ca87009bdcdb7f3cdc679ad557b992c12f4b61f9250659e592c63/scikit_learn-1.6.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:2ffa1e9e25b3d93990e74a4be2c2fc61ee5af85811562f1288d5d055880c4322", size = 12010001, upload-time = "2025-01-10T08:06:58.613Z" },
{ url = "https://files.pythonhosted.org/packages/9d/05/f2fc4effc5b32e525408524c982c468c29d22f828834f0625c5ef3d601be/scikit_learn-1.6.1-cp313-cp313-macosx_12_0_arm64.whl", hash = "sha256:dc5cf3d68c5a20ad6d571584c0750ec641cc46aeef1c1507be51300e6003a7e1", size = 11096360, upload-time = "2025-01-10T08:07:01.556Z" },
{ url = "https://files.pythonhosted.org/packages/c8/e4/4195d52cf4f113573fb8ebc44ed5a81bd511a92c0228889125fac2f4c3d1/scikit_learn-1.6.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c06beb2e839ecc641366000ca84f3cf6fa9faa1777e29cf0c04be6e4d096a348", size = 12209004, upload-time = "2025-01-10T08:07:06.931Z" },
{ url = "https://files.pythonhosted.org/packages/94/be/47e16cdd1e7fcf97d95b3cb08bde1abb13e627861af427a3651fcb80b517/scikit_learn-1.6.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e8ca8cb270fee8f1f76fa9bfd5c3507d60c6438bbee5687f81042e2bb98e5a97", size = 13171776, upload-time = "2025-01-10T08:07:11.715Z" },
{ url = "https://files.pythonhosted.org/packages/34/b0/ca92b90859070a1487827dbc672f998da95ce83edce1270fc23f96f1f61a/scikit_learn-1.6.1-cp313-cp313-win_amd64.whl", hash = "sha256:7a1c43c8ec9fde528d664d947dc4c0789be4077a3647f232869f41d9bf50e0fb", size = 11071865, upload-time = "2025-01-10T08:07:16.088Z" },
{ url = "https://files.pythonhosted.org/packages/12/ae/993b0fb24a356e71e9a894e42b8a9eec528d4c70217353a1cd7a48bc25d4/scikit_learn-1.6.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:a17c1dea1d56dcda2fac315712f3651a1fea86565b64b48fa1bc090249cbf236", size = 11955804, upload-time = "2025-01-10T08:07:20.385Z" },
{ url = "https://files.pythonhosted.org/packages/d6/54/32fa2ee591af44507eac86406fa6bba968d1eb22831494470d0a2e4a1eb1/scikit_learn-1.6.1-cp313-cp313t-macosx_12_0_arm64.whl", hash = "sha256:6a7aa5f9908f0f28f4edaa6963c0a6183f1911e63a69aa03782f0d924c830a35", size = 11100530, upload-time = "2025-01-10T08:07:23.675Z" },
{ url = "https://files.pythonhosted.org/packages/3f/58/55856da1adec655bdce77b502e94a267bf40a8c0b89f8622837f89503b5a/scikit_learn-1.6.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0650e730afb87402baa88afbf31c07b84c98272622aaba002559b614600ca691", size = 12433852, upload-time = "2025-01-10T08:07:26.817Z" },
{ url = "https://files.pythonhosted.org/packages/ff/4f/c83853af13901a574f8f13b645467285a48940f185b690936bb700a50863/scikit_learn-1.6.1-cp313-cp313t-win_amd64.whl", hash = "sha256:3f59fe08dc03ea158605170eb52b22a105f238a5d512c4470ddeca71feae8e5f", size = 11337256, upload-time = "2025-01-10T08:07:31.084Z" },
]
[[package]]
name = "scipy"
version = "1.15.3"
@@ -951,11 +1123,11 @@ wheels = [
[[package]]
name = "setuptools"
version = "80.8.0"
version = "80.9.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/8d/d2/ec1acaaff45caed5c2dedb33b67055ba9d4e96b091094df90762e60135fe/setuptools-80.8.0.tar.gz", hash = "sha256:49f7af965996f26d43c8ae34539c8d99c5042fbff34302ea151eaa9c207cd257", size = 1319720, upload-time = "2025-05-20T14:02:53.503Z" }
sdist = { url = "https://files.pythonhosted.org/packages/18/5d/3bf57dcd21979b887f014ea83c24ae194cfcd12b9e0fda66b957c69d1fca/setuptools-80.9.0.tar.gz", hash = "sha256:f36b47402ecde768dbfafc46e8e4207b4360c654f1f3bb84475f0a28628fb19c", size = 1319958, upload-time = "2025-05-27T00:56:51.443Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/58/29/93c53c098d301132196c3238c312825324740851d77a8500a2462c0fd888/setuptools-80.8.0-py3-none-any.whl", hash = "sha256:95a60484590d24103af13b686121328cc2736bee85de8936383111e421b9edc0", size = 1201470, upload-time = "2025-05-20T14:02:51.348Z" },
{ url = "https://files.pythonhosted.org/packages/a3/dc/17031897dae0efacfea57dfd3a82fdd2a2aeb58e0ff71b77b87e44edc772/setuptools-80.9.0-py3-none-any.whl", hash = "sha256:062d34222ad13e0cc312a4c02d73f059e86a4acbfbdea8f8f76b28c99f306922", size = 1201486, upload-time = "2025-05-27T00:56:49.664Z" },
]
[[package]]
@@ -967,6 +1139,34 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050, upload-time = "2024-12-04T17:35:26.475Z" },
]
[[package]]
name = "ta"
version = "0.11.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "numpy" },
{ name = "pandas" },
]
sdist = { url = "https://files.pythonhosted.org/packages/e0/9a/37d92a6b470dc9088612c2399a68f1a9ac22872d4e1eff416818e22ab11b/ta-0.11.0.tar.gz", hash = "sha256:de86af43418420bd6b088a2ea9b95483071bf453c522a8441bc2f12bcf8493fd", size = 25308, upload-time = "2023-11-02T13:53:35.434Z" }
[[package]]
name = "threadpoolctl"
version = "3.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/b7/4d/08c89e34946fce2aec4fbb45c9016efd5f4d7f24af8e5d93296e935631d8/threadpoolctl-3.6.0.tar.gz", hash = "sha256:8ab8b4aa3491d812b623328249fab5302a68d2d71745c8a4c719a2fcaba9f44e", size = 21274, upload-time = "2025-03-13T13:49:23.031Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/32/d5/f9a850d79b0851d1d4ef6456097579a9005b31fea68726a4ae5f2d82ddd9/threadpoolctl-3.6.0-py3-none-any.whl", hash = "sha256:43a0b8fd5a2928500110039e43a5eed8480b918967083ea48dc3ab9f13c4a7fb", size = 18638, upload-time = "2025-03-13T13:49:21.846Z" },
]
[[package]]
name = "typing-extensions"
version = "4.13.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f6/37/23083fcd6e35492953e8d2aaaa68b860eb422b34627b13f2ce3eb6106061/typing_extensions-4.13.2.tar.gz", hash = "sha256:e6c81219bd689f51865d9e372991c540bda33a0379d5573cddb9a3a23f7caaef", size = 106967, upload-time = "2025-04-10T14:19:05.416Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/8b/54/b1ae86c0973cc6f0210b53d508ca3641fb6d0c56823f288d108bc7ab3cc8/typing_extensions-4.13.2-py3-none-any.whl", hash = "sha256:a439e7c04b49fec3e5d3e2beaa21755cadbbdc391694e28ccdd36ca4a1408f8c", size = 45806, upload-time = "2025-04-10T14:19:03.967Z" },
]
[[package]]
name = "tzdata"
version = "2025.2"
@@ -986,58 +1186,42 @@ wheels = [
]
[[package]]
name = "websocket"
version = "0.2.1"
name = "werkzeug"
version = "3.0.6"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "gevent" },
{ name = "greenlet" },
{ name = "markupsafe" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f2/6d/a60d620ea575c885510c574909d2e3ed62129b121fa2df00ca1c81024c87/websocket-0.2.1.tar.gz", hash = "sha256:42b506fae914ac5ed654e23ba9742e6a342b1a1c3eb92632b6166c65256469a4", size = 195339, upload-time = "2010-12-03T11:51:30.867Z" }
[[package]]
name = "zope-event"
version = "5.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "setuptools" },
]
sdist = { url = "https://files.pythonhosted.org/packages/46/c2/427f1867bb96555d1d34342f1dd97f8c420966ab564d58d18469a1db8736/zope.event-5.0.tar.gz", hash = "sha256:bac440d8d9891b4068e2b5a2c5e2c9765a9df762944bda6955f96bb9b91e67cd", size = 17350, upload-time = "2023-06-23T06:28:35.709Z" }
sdist = { url = "https://files.pythonhosted.org/packages/d4/f9/0ba83eaa0df9b9e9d1efeb2ea351d0677c37d41ee5d0f91e98423c7281c9/werkzeug-3.0.6.tar.gz", hash = "sha256:a8dd59d4de28ca70471a34cba79bed5f7ef2e036a76b3ab0835474246eb41f8d", size = 805170, upload-time = "2024-10-25T18:52:31.688Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fe/42/f8dbc2b9ad59e927940325a22d6d3931d630c3644dae7e2369ef5d9ba230/zope.event-5.0-py3-none-any.whl", hash = "sha256:2832e95014f4db26c47a13fdaef84cef2f4df37e66b59d8f1f4a8f319a632c26", size = 6824, upload-time = "2023-06-23T06:28:32.652Z" },
{ url = "https://files.pythonhosted.org/packages/6c/69/05837f91dfe42109203ffa3e488214ff86a6d68b2ed6c167da6cdc42349b/werkzeug-3.0.6-py3-none-any.whl", hash = "sha256:1bc0c2310d2fbb07b1dd1105eba2f7af72f322e1e455f2f93c993bee8c8a5f17", size = 227979, upload-time = "2024-10-25T18:52:30.129Z" },
]
[[package]]
name = "zope-interface"
version = "7.2"
name = "xgboost"
version = "3.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "setuptools" },
{ name = "numpy" },
{ name = "nvidia-nccl-cu12", marker = "platform_machine != 'aarch64' and sys_platform == 'linux'" },
{ name = "scipy" },
]
sdist = { url = "https://files.pythonhosted.org/packages/30/93/9210e7606be57a2dfc6277ac97dcc864fd8d39f142ca194fdc186d596fda/zope.interface-7.2.tar.gz", hash = "sha256:8b49f1a3d1ee4cdaf5b32d2e738362c7f5e40ac8b46dd7d1a65e82a4872728fe", size = 252960, upload-time = "2024-11-28T08:45:39.224Z" }
sdist = { url = "https://files.pythonhosted.org/packages/29/42/e6abc9e8c65033e5ff4117efc765e3d670c81c64ebd40ca6283bf4536994/xgboost-3.0.2.tar.gz", hash = "sha256:0ea95fef12313f8563458bbf49458db434d620af27b1991ddb8f46806cb305a5", size = 1159083, upload-time = "2025-05-25T09:09:11.291Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/76/71/e6177f390e8daa7e75378505c5ab974e0bf59c1d3b19155638c7afbf4b2d/zope.interface-7.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:ce290e62229964715f1011c3dbeab7a4a1e4971fd6f31324c4519464473ef9f2", size = 208243, upload-time = "2024-11-28T08:47:29.781Z" },
{ url = "https://files.pythonhosted.org/packages/52/db/7e5f4226bef540f6d55acfd95cd105782bc6ee044d9b5587ce2c95558a5e/zope.interface-7.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:05b910a5afe03256b58ab2ba6288960a2892dfeef01336dc4be6f1b9ed02ab0a", size = 208759, upload-time = "2024-11-28T08:47:31.908Z" },
{ url = "https://files.pythonhosted.org/packages/28/ea/fdd9813c1eafd333ad92464d57a4e3a82b37ae57c19497bcffa42df673e4/zope.interface-7.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:550f1c6588ecc368c9ce13c44a49b8d6b6f3ca7588873c679bd8fd88a1b557b6", size = 254922, upload-time = "2024-11-28T09:18:11.795Z" },
{ url = "https://files.pythonhosted.org/packages/3b/d3/0000a4d497ef9fbf4f66bb6828b8d0a235e690d57c333be877bec763722f/zope.interface-7.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:0ef9e2f865721553c6f22a9ff97da0f0216c074bd02b25cf0d3af60ea4d6931d", size = 249367, upload-time = "2024-11-28T08:48:24.238Z" },
{ url = "https://files.pythonhosted.org/packages/3e/e5/0b359e99084f033d413419eff23ee9c2bd33bca2ca9f4e83d11856f22d10/zope.interface-7.2-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:27f926f0dcb058211a3bb3e0e501c69759613b17a553788b2caeb991bed3b61d", size = 254488, upload-time = "2024-11-28T08:48:28.816Z" },
{ url = "https://files.pythonhosted.org/packages/7b/90/12d50b95f40e3b2fc0ba7f7782104093b9fd62806b13b98ef4e580f2ca61/zope.interface-7.2-cp310-cp310-win_amd64.whl", hash = "sha256:144964649eba4c5e4410bb0ee290d338e78f179cdbfd15813de1a664e7649b3b", size = 211947, upload-time = "2024-11-28T08:48:18.831Z" },
{ url = "https://files.pythonhosted.org/packages/98/7d/2e8daf0abea7798d16a58f2f3a2bf7588872eee54ac119f99393fdd47b65/zope.interface-7.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:1909f52a00c8c3dcab6c4fad5d13de2285a4b3c7be063b239b8dc15ddfb73bd2", size = 208776, upload-time = "2024-11-28T08:47:53.009Z" },
{ url = "https://files.pythonhosted.org/packages/a0/2a/0c03c7170fe61d0d371e4c7ea5b62b8cb79b095b3d630ca16719bf8b7b18/zope.interface-7.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:80ecf2451596f19fd607bb09953f426588fc1e79e93f5968ecf3367550396b22", size = 209296, upload-time = "2024-11-28T08:47:57.993Z" },
{ url = "https://files.pythonhosted.org/packages/49/b4/451f19448772b4a1159519033a5f72672221e623b0a1bd2b896b653943d8/zope.interface-7.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:033b3923b63474800b04cba480b70f6e6243a62208071fc148354f3f89cc01b7", size = 260997, upload-time = "2024-11-28T09:18:13.935Z" },
{ url = "https://files.pythonhosted.org/packages/65/94/5aa4461c10718062c8f8711161faf3249d6d3679c24a0b81dd6fc8ba1dd3/zope.interface-7.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a102424e28c6b47c67923a1f337ede4a4c2bba3965b01cf707978a801fc7442c", size = 255038, upload-time = "2024-11-28T08:48:26.381Z" },
{ url = "https://files.pythonhosted.org/packages/9f/aa/1a28c02815fe1ca282b54f6705b9ddba20328fabdc37b8cf73fc06b172f0/zope.interface-7.2-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:25e6a61dcb184453bb00eafa733169ab6d903e46f5c2ace4ad275386f9ab327a", size = 259806, upload-time = "2024-11-28T08:48:30.78Z" },
{ url = "https://files.pythonhosted.org/packages/a7/2c/82028f121d27c7e68632347fe04f4a6e0466e77bb36e104c8b074f3d7d7b/zope.interface-7.2-cp311-cp311-win_amd64.whl", hash = "sha256:3f6771d1647b1fc543d37640b45c06b34832a943c80d1db214a37c31161a93f1", size = 212305, upload-time = "2024-11-28T08:49:14.525Z" },
{ url = "https://files.pythonhosted.org/packages/68/0b/c7516bc3bad144c2496f355e35bd699443b82e9437aa02d9867653203b4a/zope.interface-7.2-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:086ee2f51eaef1e4a52bd7d3111a0404081dadae87f84c0ad4ce2649d4f708b7", size = 208959, upload-time = "2024-11-28T08:47:47.788Z" },
{ url = "https://files.pythonhosted.org/packages/a2/e9/1463036df1f78ff8c45a02642a7bf6931ae4a38a4acd6a8e07c128e387a7/zope.interface-7.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:21328fcc9d5b80768bf051faa35ab98fb979080c18e6f84ab3f27ce703bce465", size = 209357, upload-time = "2024-11-28T08:47:50.897Z" },
{ url = "https://files.pythonhosted.org/packages/07/a8/106ca4c2add440728e382f1b16c7d886563602487bdd90004788d45eb310/zope.interface-7.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f6dd02ec01f4468da0f234da9d9c8545c5412fef80bc590cc51d8dd084138a89", size = 264235, upload-time = "2024-11-28T09:18:15.56Z" },
{ url = "https://files.pythonhosted.org/packages/fc/ca/57286866285f4b8a4634c12ca1957c24bdac06eae28fd4a3a578e30cf906/zope.interface-7.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:8e7da17f53e25d1a3bde5da4601e026adc9e8071f9f6f936d0fe3fe84ace6d54", size = 259253, upload-time = "2024-11-28T08:48:29.025Z" },
{ url = "https://files.pythonhosted.org/packages/96/08/2103587ebc989b455cf05e858e7fbdfeedfc3373358320e9c513428290b1/zope.interface-7.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cab15ff4832580aa440dc9790b8a6128abd0b88b7ee4dd56abacbc52f212209d", size = 264702, upload-time = "2024-11-28T08:48:37.363Z" },
{ url = "https://files.pythonhosted.org/packages/5f/c7/3c67562e03b3752ba4ab6b23355f15a58ac2d023a6ef763caaca430f91f2/zope.interface-7.2-cp312-cp312-win_amd64.whl", hash = "sha256:29caad142a2355ce7cfea48725aa8bcf0067e2b5cc63fcf5cd9f97ad12d6afb5", size = 212466, upload-time = "2024-11-28T08:49:14.397Z" },
{ url = "https://files.pythonhosted.org/packages/c6/3b/e309d731712c1a1866d61b5356a069dd44e5b01e394b6cb49848fa2efbff/zope.interface-7.2-cp313-cp313-macosx_10_9_x86_64.whl", hash = "sha256:3e0350b51e88658d5ad126c6a57502b19d5f559f6cb0a628e3dc90442b53dd98", size = 208961, upload-time = "2024-11-28T08:48:29.865Z" },
{ url = "https://files.pythonhosted.org/packages/49/65/78e7cebca6be07c8fc4032bfbb123e500d60efdf7b86727bb8a071992108/zope.interface-7.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:15398c000c094b8855d7d74f4fdc9e73aa02d4d0d5c775acdef98cdb1119768d", size = 209356, upload-time = "2024-11-28T08:48:33.297Z" },
{ url = "https://files.pythonhosted.org/packages/11/b1/627384b745310d082d29e3695db5f5a9188186676912c14b61a78bbc6afe/zope.interface-7.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:802176a9f99bd8cc276dcd3b8512808716492f6f557c11196d42e26c01a69a4c", size = 264196, upload-time = "2024-11-28T09:18:17.584Z" },
{ url = "https://files.pythonhosted.org/packages/b8/f6/54548df6dc73e30ac6c8a7ff1da73ac9007ba38f866397091d5a82237bd3/zope.interface-7.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:eb23f58a446a7f09db85eda09521a498e109f137b85fb278edb2e34841055398", size = 259237, upload-time = "2024-11-28T08:48:31.71Z" },
{ url = "https://files.pythonhosted.org/packages/b6/66/ac05b741c2129fdf668b85631d2268421c5cd1a9ff99be1674371139d665/zope.interface-7.2-cp313-cp313-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a71a5b541078d0ebe373a81a3b7e71432c61d12e660f1d67896ca62d9628045b", size = 264696, upload-time = "2024-11-28T08:48:41.161Z" },
{ url = "https://files.pythonhosted.org/packages/0a/2f/1bccc6f4cc882662162a1158cda1a7f616add2ffe322b28c99cb031b4ffc/zope.interface-7.2-cp313-cp313-win_amd64.whl", hash = "sha256:4893395d5dd2ba655c38ceb13014fd65667740f09fa5bb01caa1e6284e48c0cd", size = 212472, upload-time = "2024-11-28T08:49:56.587Z" },
{ url = "https://files.pythonhosted.org/packages/0b/6b/f47143ecab6313272497f324ffe2eafaf2851c0781a9022040adf80f9aab/xgboost-3.0.2-py3-none-macosx_10_15_x86_64.whl", hash = "sha256:923f46cd1b25c0a39fc98e969fa0a72a1a84feb7f55797cb3385962cd8d3b2d4", size = 2246653, upload-time = "2025-05-25T09:09:35.431Z" },
{ url = "https://files.pythonhosted.org/packages/09/c9/5f0be8e51d55df60a1bd7d09e7b05380e04c38de9554105f6cacffac3886/xgboost-3.0.2-py3-none-macosx_12_0_arm64.whl", hash = "sha256:5c4e377c86df815669939646b3abe7a20559e4d4c0f5c2ab10c31252e7a9d7d9", size = 2025769, upload-time = "2025-05-25T09:09:37.22Z" },
{ url = "https://files.pythonhosted.org/packages/c0/eb/4b5036a16628dc375544ba5375768ddc3653a3372af6f947d73d11d1c3f2/xgboost-3.0.2-py3-none-manylinux2014_aarch64.whl", hash = "sha256:e9acf97b3b2a628b33f1dc80ee3f16a658e1f9f43c4ed2aa85b0a824c87dbde5", size = 4841549, upload-time = "2025-05-25T09:09:41.172Z" },
{ url = "https://files.pythonhosted.org/packages/db/71/347f78ac21eb9221231bebf7d7a3eaea20b09377d9d602cee15fe9c7aeba/xgboost-3.0.2-py3-none-manylinux2014_x86_64.whl", hash = "sha256:7d1ad8c5ae361161ce5288a04916c89d13d247b9a98e25c4b3983783cfad0377", size = 4904451, upload-time = "2025-05-25T09:09:44.273Z" },
{ url = "https://files.pythonhosted.org/packages/47/a4/949c50325c6417bfae2b846c43f4a8ad6557278d26b6a526c5c22f2204aa/xgboost-3.0.2-py3-none-manylinux_2_28_aarch64.whl", hash = "sha256:a112df38f2faaae31f1c00d373ff35fb5965a65e74de2eea9081dbef7a9ddffe", size = 4603350, upload-time = "2025-05-25T09:09:46.497Z" },
{ url = "https://files.pythonhosted.org/packages/cc/f5/1b5d88e5a65168b435e8339b53d027e3e7adecb0c7d157bc86d18f78471b/xgboost-3.0.2-py3-none-manylinux_2_28_x86_64.whl", hash = "sha256:d534242489265621397ff403bb1c6235d2e6c66938639239fdf2d6b39d27e339", size = 253887220, upload-time = "2025-05-25T09:10:24.541Z" },
{ url = "https://files.pythonhosted.org/packages/29/22/e3ff2dfafe862a91733dfa0aecdb4794aa1d9a18e09a14e118bde0cbc2db/xgboost-3.0.2-py3-none-win_amd64.whl", hash = "sha256:b4c89b71d134da9fa6318e3c9f5459317d1013b4d57059d10ed2840750e2f7e1", size = 149974575, upload-time = "2025-05-25T09:11:23.554Z" },
]
[[package]]
name = "zipp"
version = "3.22.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/12/b6/7b3d16792fdf94f146bed92be90b4eb4563569eca91513c8609aebf0c167/zipp-3.22.0.tar.gz", hash = "sha256:dd2f28c3ce4bc67507bfd3781d21b7bb2be31103b51a4553ad7d90b84e57ace5", size = 25257, upload-time = "2025-05-26T14:46:32.217Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ad/da/f64669af4cae46f17b90798a827519ce3737d31dbafad65d391e49643dc4/zipp-3.22.0-py3-none-any.whl", hash = "sha256:fe208f65f2aca48b81f9e6fd8cf7b8b32c26375266b009b413d45306b6148343", size = 9796, upload-time = "2025-05-26T14:46:30.775Z" },
]
39
xgboost/custom_xgboost.py
Normal file
@@ -0,0 +1,39 @@
|
||||
import xgboost as xgb
import numpy as np


class CustomXGBoostGPU:
    """Thin wrapper around xgboost.train for GPU-accelerated regression."""

    def __init__(self, X_train, X_test, y_train, y_test):
        """Store train/test splits as float32 arrays and leave the booster untrained."""
        self.X_train = X_train.astype(np.float32)
        self.X_test = X_test.astype(np.float32)
        self.y_train = y_train.astype(np.float32)
        self.y_test = y_test.astype(np.float32)
        self.model = None
        self.params = None  # Will be set during training

    def train(self, **xgb_params):
        """Train a squared-error booster on the GPU; keyword arguments override the defaults."""
        params = {
            'tree_method': 'hist',
            'device': 'cuda',
            'objective': 'reg:squarederror',
            'eval_metric': 'rmse',
            'verbosity': 1,
        }
        params.update(xgb_params)
        self.params = params  # Store params for later access
        dtrain = xgb.DMatrix(self.X_train, label=self.y_train)
        dtest = xgb.DMatrix(self.X_test, label=self.y_test)
        evals = [(dtrain, 'train'), (dtest, 'eval')]
        self.model = xgb.train(params, dtrain, num_boost_round=100, evals=evals, early_stopping_rounds=10)
        return self.model

    def predict(self, X):
        """Return predictions for X; raises if train() has not been called."""
        if self.model is None:
            raise ValueError('Model not trained yet.')
        dmatrix = xgb.DMatrix(X.astype(np.float32))
        return self.model.predict(dmatrix)

    def save_model(self, file_path):
        """Save the trained XGBoost model to the specified file path."""
        if self.model is None:
            raise ValueError('Model not trained yet.')
        self.model.save_model(file_path)
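For quick reference, a minimal usage sketch of CustomXGBoostGPU, assuming it is run from the xgboost/ directory with a CUDA-capable GPU available (pass device='cpu' to train() otherwise). The synthetic arrays and output path below are placeholders for illustration, not part of the commit:

import numpy as np
from custom_xgboost import CustomXGBoostGPU

# Synthetic stand-in data; main.py builds X/y from engineered BTC features instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8)).astype(np.float32)
y = rng.normal(size=1000).astype(np.float32)
split = int(len(X) * 0.8)

model = CustomXGBoostGPU(X[:split], X[split:], y[:split], y[split:])
model.train(max_depth=6, eta=0.1)         # keyword args override the GPU defaults
preds = model.predict(X[split:])          # float32 predictions from the booster
model.save_model('./example_model.json')  # hypothetical output path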
806
xgboost/main.py
Normal file
@@ -0,0 +1,806 @@
|
||||
import sys
|
||||
import os
|
||||
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
|
||||
import pandas as pd
|
||||
import numpy as np
|
||||
from custom_xgboost import CustomXGBoostGPU
|
||||
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
|
||||
from plot_results import plot_prediction_error_distribution, plot_direction_transition_heatmap
|
||||
from cycles.supertrend import Supertrends
|
||||
import time
|
||||
from numba import njit
|
||||
import itertools
|
||||
import csv
|
||||
import pandas_ta as ta
|
||||
|
||||
def run_indicator(func, *args):
|
||||
return func(*args)
|
||||
|
||||
def run_indicator_job(job):
|
||||
import time
|
||||
func, *args = job
|
||||
indicator_name = func.__name__
|
||||
start = time.time()
|
||||
result = func(*args)
|
||||
elapsed = time.time() - start
|
||||
print(f'Indicator {indicator_name} computed in {elapsed:.4f} seconds')
|
||||
return result
|
||||
|
||||
def calc_rsi(close):
|
||||
from ta.momentum import RSIIndicator
|
||||
return ('rsi', RSIIndicator(close, window=14).rsi())
|
||||
|
||||
def calc_macd(close):
|
||||
from ta.trend import MACD
|
||||
return ('macd', MACD(close).macd())
|
||||
|
||||
def calc_bollinger(close):
|
||||
from ta.volatility import BollingerBands
|
||||
bb = BollingerBands(close=close, window=20, window_dev=2)
|
||||
return [
|
||||
('bb_bbm', bb.bollinger_mavg()),
|
||||
('bb_bbh', bb.bollinger_hband()),
|
||||
('bb_bbl', bb.bollinger_lband()),
|
||||
('bb_bb_width', bb.bollinger_hband() - bb.bollinger_lband())
|
||||
]
|
||||
|
||||
def calc_stochastic(high, low, close):
|
||||
from ta.momentum import StochasticOscillator
|
||||
stoch = StochasticOscillator(high=high, low=low, close=close, window=14, smooth_window=3)
|
||||
return [
|
||||
('stoch_k', stoch.stoch()),
|
||||
('stoch_d', stoch.stoch_signal())
|
||||
]
|
||||
|
||||
def calc_atr(high, low, close):
|
||||
from ta.volatility import AverageTrueRange
|
||||
atr = AverageTrueRange(high=high, low=low, close=close, window=14)
|
||||
return ('atr', atr.average_true_range())
|
||||
|
||||
def calc_cci(high, low, close):
|
||||
from ta.trend import CCIIndicator
|
||||
cci = CCIIndicator(high=high, low=low, close=close, window=20)
|
||||
return ('cci', cci.cci())
|
||||
|
||||
def calc_williamsr(high, low, close):
|
||||
from ta.momentum import WilliamsRIndicator
|
||||
willr = WilliamsRIndicator(high=high, low=low, close=close, lbp=14)
|
||||
return ('williams_r', willr.williams_r())
|
||||
|
||||
def calc_ema(close):
|
||||
from ta.trend import EMAIndicator
|
||||
ema = EMAIndicator(close=close, window=14)
|
||||
return ('ema_14', ema.ema_indicator())
|
||||
|
||||
def calc_obv(close, volume):
|
||||
from ta.volume import OnBalanceVolumeIndicator
|
||||
obv = OnBalanceVolumeIndicator(close=close, volume=volume)
|
||||
return ('obv', obv.on_balance_volume())
|
||||
|
||||
def calc_cmf(high, low, close, volume):
|
||||
from ta.volume import ChaikinMoneyFlowIndicator
|
||||
cmf = ChaikinMoneyFlowIndicator(high=high, low=low, close=close, volume=volume, window=20)
|
||||
return ('cmf', cmf.chaikin_money_flow())
|
||||
|
||||
def calc_sma(close):
|
||||
from ta.trend import SMAIndicator
|
||||
return [
|
||||
('sma_50', SMAIndicator(close, window=50).sma_indicator()),
|
||||
('sma_200', SMAIndicator(close, window=200).sma_indicator())
|
||||
]
|
||||
|
||||
def calc_roc(close):
|
||||
from ta.momentum import ROCIndicator
|
||||
return ('roc_10', ROCIndicator(close, window=10).roc())
|
||||
|
||||
def calc_momentum(close):
|
||||
return ('momentum_10', close - close.shift(10))
|
||||
|
||||
def calc_psar(high, low, close):
|
||||
# Use the Numba-accelerated fast_psar function for speed
|
||||
psar_values = fast_psar(np.array(high), np.array(low), np.array(close))
|
||||
return [('psar', pd.Series(psar_values, index=close.index))]
|
||||
|
||||
def calc_donchian(high, low, close):
|
||||
from ta.volatility import DonchianChannel
|
||||
donchian = DonchianChannel(high, low, close, window=20)
|
||||
return [
|
||||
('donchian_hband', donchian.donchian_channel_hband()),
|
||||
('donchian_lband', donchian.donchian_channel_lband()),
|
||||
('donchian_mband', donchian.donchian_channel_mband())
|
||||
]
|
||||
|
||||
def calc_keltner(high, low, close):
|
||||
from ta.volatility import KeltnerChannel
|
||||
keltner = KeltnerChannel(high, low, close, window=20)
|
||||
return [
|
||||
('keltner_hband', keltner.keltner_channel_hband()),
|
||||
('keltner_lband', keltner.keltner_channel_lband()),
|
||||
('keltner_mband', keltner.keltner_channel_mband())
|
||||
]
|
||||
|
||||
def calc_dpo(close):
|
||||
from ta.trend import DPOIndicator
|
||||
return ('dpo_20', DPOIndicator(close, window=20).dpo())
|
||||
|
||||
def calc_ultimate(high, low, close):
|
||||
from ta.momentum import UltimateOscillator
|
||||
return ('ultimate_osc', UltimateOscillator(high, low, close).ultimate_oscillator())
|
||||
|
||||
def calc_ichimoku(high, low):
|
||||
from ta.trend import IchimokuIndicator
|
||||
ichimoku = IchimokuIndicator(high, low, window1=9, window2=26, window3=52)
|
||||
return [
|
||||
('ichimoku_a', ichimoku.ichimoku_a()),
|
||||
('ichimoku_b', ichimoku.ichimoku_b()),
|
||||
('ichimoku_base_line', ichimoku.ichimoku_base_line()),
|
||||
('ichimoku_conversion_line', ichimoku.ichimoku_conversion_line())
|
||||
]
|
||||
|
||||
def calc_elder_ray(close, low, high):
|
||||
from ta.trend import EMAIndicator
|
||||
ema = EMAIndicator(close, window=13).ema_indicator()
|
||||
return [
|
||||
('elder_ray_bull', ema - low),
|
||||
('elder_ray_bear', ema - high)
|
||||
]
|
||||
|
||||
def calc_daily_return(close):
|
||||
from ta.others import DailyReturnIndicator
|
||||
return ('daily_return', DailyReturnIndicator(close).daily_return())
|
||||
|
||||
@njit
|
||||
def fast_psar(high, low, close, af=0.02, max_af=0.2):
|
||||
length = len(close)
|
||||
psar = np.zeros(length)
|
||||
bull = True
|
||||
af_step = af
|
||||
ep = low[0]
|
||||
psar[0] = low[0]
|
||||
for i in range(1, length):
|
||||
prev_psar = psar[i-1]
|
||||
if bull:
|
||||
psar[i] = prev_psar + af_step * (ep - prev_psar)
|
||||
if low[i] < psar[i]:
|
||||
bull = False
|
||||
psar[i] = ep
|
||||
af_step = af
|
||||
ep = low[i]
|
||||
else:
|
||||
if high[i] > ep:
|
||||
ep = high[i]
|
||||
af_step = min(af_step + af, max_af)
|
||||
else:
|
||||
psar[i] = prev_psar + af_step * (ep - prev_psar)
|
||||
if high[i] > psar[i]:
|
||||
bull = True
|
||||
psar[i] = ep
|
||||
af_step = af
|
||||
ep = high[i]
|
||||
else:
|
||||
if low[i] < ep:
|
||||
ep = low[i]
|
||||
af_step = min(af_step + af, max_af)
|
||||
return psar
|
||||
|
||||
def compute_lag(df, col, lag):
|
||||
return df[col].shift(lag)
|
||||
|
||||
def compute_rolling(df, col, stat, window):
|
||||
if stat == 'mean':
|
||||
return df[col].rolling(window).mean()
|
||||
elif stat == 'std':
|
||||
return df[col].rolling(window).std()
|
||||
elif stat == 'min':
|
||||
return df[col].rolling(window).min()
|
||||
elif stat == 'max':
|
||||
return df[col].rolling(window).max()
|
||||
|
||||
def compute_log_return(df, horizon):
|
||||
return np.log(df['Close'] / df['Close'].shift(horizon))
|
||||
|
||||
def compute_volatility(df, window):
|
||||
return df['log_return'].rolling(window).std()
|
||||
|
||||
def run_feature_job(job, df):
|
||||
feature_name, func, *args = job
|
||||
print(f'Computing feature: {feature_name}')
|
||||
result = func(df, *args)
|
||||
return feature_name, result
|
||||
|
||||
def calc_adx(high, low, close):
|
||||
from ta.trend import ADXIndicator
|
||||
adx = ADXIndicator(high=high, low=low, close=close, window=14)
|
||||
return [
|
||||
('adx', adx.adx()),
|
||||
('adx_pos', adx.adx_pos()),
|
||||
('adx_neg', adx.adx_neg())
|
||||
]
|
||||
|
||||
def calc_trix(close):
|
||||
from ta.trend import TRIXIndicator
|
||||
trix = TRIXIndicator(close=close, window=15)
|
||||
return ('trix', trix.trix())
|
||||
|
||||
def calc_vortex(high, low, close):
|
||||
from ta.trend import VortexIndicator
|
||||
vortex = VortexIndicator(high=high, low=low, close=close, window=14)
|
||||
return [
|
||||
('vortex_pos', vortex.vortex_indicator_pos()),
|
||||
('vortex_neg', vortex.vortex_indicator_neg())
|
||||
]
|
||||
|
||||
def calc_kama(close):
|
||||
import pandas_ta as ta
|
||||
kama = ta.kama(close, length=10)
|
||||
return ('kama', kama)
|
||||
|
||||
def calc_force_index(close, volume):
|
||||
from ta.volume import ForceIndexIndicator
|
||||
fi = ForceIndexIndicator(close=close, volume=volume, window=13)
|
||||
return ('force_index', fi.force_index())
|
||||
|
||||
def calc_eom(high, low, volume):
|
||||
from ta.volume import EaseOfMovementIndicator
|
||||
eom = EaseOfMovementIndicator(high=high, low=low, volume=volume, window=14)
|
||||
return ('eom', eom.ease_of_movement())
|
||||
|
||||
def calc_mfi(high, low, close, volume):
|
||||
from ta.volume import MFIIndicator
|
||||
mfi = MFIIndicator(high=high, low=low, close=close, volume=volume, window=14)
|
||||
return ('mfi', mfi.money_flow_index())
|
||||
|
||||
def calc_adi(high, low, close, volume):
|
||||
from ta.volume import AccDistIndexIndicator
|
||||
adi = AccDistIndexIndicator(high=high, low=low, close=close, volume=volume)
|
||||
return ('adi', adi.acc_dist_index())
|
||||
|
||||
def calc_tema(close):
|
||||
import pandas_ta as ta
|
||||
tema = ta.tema(close, length=10)
|
||||
return ('tema', tema)
|
||||
|
||||
def calc_stochrsi(close):
|
||||
from ta.momentum import StochRSIIndicator
|
||||
stochrsi = StochRSIIndicator(close=close, window=14, smooth1=3, smooth2=3)
|
||||
return [
|
||||
('stochrsi', stochrsi.stochrsi()),
|
||||
('stochrsi_k', stochrsi.stochrsi_k()),
|
||||
('stochrsi_d', stochrsi.stochrsi_d())
|
||||
]
|
||||
|
||||
def calc_awesome_oscillator(high, low):
|
||||
from ta.momentum import AwesomeOscillatorIndicator
|
||||
ao = AwesomeOscillatorIndicator(high=high, low=low, window1=5, window2=34)
|
||||
return ('awesome_osc', ao.awesome_oscillator())
|
||||
|
||||
if __name__ == '__main__':
|
||||
IMPUTE_NANS = True # Set to True to impute NaNs, False to drop rows with NaNs
|
||||
csv_path = './data/btcusd_1-min_data.csv'
|
||||
csv_prefix = os.path.splitext(os.path.basename(csv_path))[0]
|
||||
|
||||
print('Reading CSV and filtering data...')
|
||||
df = pd.read_csv(csv_path)
|
||||
df = df[df['Volume'] != 0]
|
||||
|
||||
min_date = '2017-06-01'
|
||||
print('Converting Timestamp and filtering by date...')
|
||||
df['Timestamp'] = pd.to_datetime(df['Timestamp'], unit='s')
|
||||
df = df[df['Timestamp'] >= min_date]
|
||||
|
||||
lags = 3
|
||||
|
||||
print('Calculating log returns as the new target...')
|
||||
df['log_return'] = np.log(df['Close'] / df['Close'].shift(1))
|
||||
|
||||
ohlcv_cols = ['Open', 'High', 'Low', 'Close', 'Volume']
|
||||
window_sizes = [5, 15, 30] # in minutes, adjust as needed
|
||||
|
||||
features_dict = {}
|
||||
|
||||
print('Starting feature computation...')
|
||||
feature_start_time = time.time()
|
||||
|
||||
# --- Technical Indicator Features: Calculate or Load from Cache ---
|
||||
print('Calculating or loading technical indicator features...')
|
||||
# RSI
|
||||
feature_file = f'./data/{csv_prefix}_rsi.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['rsi'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: rsi')
|
||||
_, values = calc_rsi(df['Close'])
|
||||
features_dict['rsi'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# MACD
|
||||
feature_file = f'./data/{csv_prefix}_macd.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['macd'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: macd')
|
||||
_, values = calc_macd(df['Close'])
|
||||
features_dict['macd'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# ATR
|
||||
feature_file = f'./data/{csv_prefix}_atr.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['atr'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: atr')
|
||||
_, values = calc_atr(df['High'], df['Low'], df['Close'])
|
||||
features_dict['atr'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# CCI
|
||||
feature_file = f'./data/{csv_prefix}_cci.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['cci'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: cci')
|
||||
_, values = calc_cci(df['High'], df['Low'], df['Close'])
|
||||
features_dict['cci'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# Williams %R
|
||||
feature_file = f'./data/{csv_prefix}_williams_r.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['williams_r'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: williams_r')
|
||||
_, values = calc_williamsr(df['High'], df['Low'], df['Close'])
|
||||
features_dict['williams_r'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# EMA 14
|
||||
feature_file = f'./data/{csv_prefix}_ema_14.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['ema_14'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: ema_14')
|
||||
_, values = calc_ema(df['Close'])
|
||||
features_dict['ema_14'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# OBV
|
||||
feature_file = f'./data/{csv_prefix}_obv.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['obv'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: obv')
|
||||
_, values = calc_obv(df['Close'], df['Volume'])
|
||||
features_dict['obv'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# CMF
|
||||
feature_file = f'./data/{csv_prefix}_cmf.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['cmf'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: cmf')
|
||||
_, values = calc_cmf(df['High'], df['Low'], df['Close'], df['Volume'])
|
||||
features_dict['cmf'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# ROC 10
|
||||
feature_file = f'./data/{csv_prefix}_roc_10.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['roc_10'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: roc_10')
|
||||
_, values = calc_roc(df['Close'])
|
||||
features_dict['roc_10'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# DPO 20
|
||||
feature_file = f'./data/{csv_prefix}_dpo_20.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['dpo_20'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: dpo_20')
|
||||
_, values = calc_dpo(df['Close'])
|
||||
features_dict['dpo_20'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# Ultimate Oscillator
|
||||
feature_file = f'./data/{csv_prefix}_ultimate_osc.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['ultimate_osc'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: ultimate_osc')
|
||||
_, values = calc_ultimate(df['High'], df['Low'], df['Close'])
|
||||
features_dict['ultimate_osc'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# Daily Return
|
||||
feature_file = f'./data/{csv_prefix}_daily_return.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'A Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['daily_return'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: daily_return')
|
||||
_, values = calc_daily_return(df['Close'])
|
||||
features_dict['daily_return'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# Multi-column indicators
|
||||
# Bollinger Bands
|
||||
print('Calculating multi-column indicator: bollinger')
|
||||
result = calc_bollinger(df['Close'])
|
||||
for subname, values in result:
|
||||
print(f"Adding subfeature: {subname}")
|
||||
sub_feature_file = f'./data/{csv_prefix}_{subname}.npy'
|
||||
if os.path.exists(sub_feature_file):
|
||||
print(f'B Loading cached feature: {sub_feature_file}')
|
||||
arr = np.load(sub_feature_file)
|
||||
features_dict[subname] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
features_dict[subname] = values
|
||||
np.save(sub_feature_file, values.values)
|
||||
print(f'Saved feature: {sub_feature_file}')
|
||||
|
||||
# Stochastic Oscillator
|
||||
print('Calculating multi-column indicator: stochastic')
|
||||
result = calc_stochastic(df['High'], df['Low'], df['Close'])
|
||||
for subname, values in result:
|
||||
print(f"Adding subfeature: {subname}")
|
||||
sub_feature_file = f'./data/{csv_prefix}_{subname}.npy'
|
||||
if os.path.exists(sub_feature_file):
|
||||
print(f'B Loading cached feature: {sub_feature_file}')
|
||||
arr = np.load(sub_feature_file)
|
||||
features_dict[subname] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
features_dict[subname] = values
|
||||
np.save(sub_feature_file, values.values)
|
||||
print(f'Saved feature: {sub_feature_file}')
|
||||
|
||||
# SMA
|
||||
print('Calculating multi-column indicator: sma')
|
||||
result = calc_sma(df['Close'])
|
||||
for subname, values in result:
|
||||
print(f"Adding subfeature: {subname}")
|
||||
sub_feature_file = f'./data/{csv_prefix}_{subname}.npy'
|
||||
if os.path.exists(sub_feature_file):
|
||||
print(f'B Loading cached feature: {sub_feature_file}')
|
||||
arr = np.load(sub_feature_file)
|
||||
features_dict[subname] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
features_dict[subname] = values
|
||||
np.save(sub_feature_file, values.values)
|
||||
print(f'Saved feature: {sub_feature_file}')
|
||||
|
||||
# PSAR
|
||||
print('Calculating multi-column indicator: psar')
|
||||
result = calc_psar(df['High'], df['Low'], df['Close'])
|
||||
for subname, values in result:
|
||||
print(f"Adding subfeature: {subname}")
|
||||
sub_feature_file = f'./data/{csv_prefix}_{subname}.npy'
|
||||
if os.path.exists(sub_feature_file):
|
||||
print(f'B Loading cached feature: {sub_feature_file}')
|
||||
arr = np.load(sub_feature_file)
|
||||
features_dict[subname] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
features_dict[subname] = values
|
||||
np.save(sub_feature_file, values.values)
|
||||
print(f'Saved feature: {sub_feature_file}')
|
||||
|
||||
# Donchian Channel
|
||||
print('Calculating multi-column indicator: donchian')
|
||||
result = calc_donchian(df['High'], df['Low'], df['Close'])
|
||||
for subname, values in result:
|
||||
print(f"Adding subfeature: {subname}")
|
||||
sub_feature_file = f'./data/{csv_prefix}_{subname}.npy'
|
||||
if os.path.exists(sub_feature_file):
|
||||
print(f'B Loading cached feature: {sub_feature_file}')
|
||||
arr = np.load(sub_feature_file)
|
||||
features_dict[subname] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
features_dict[subname] = values
|
||||
np.save(sub_feature_file, values.values)
|
||||
print(f'Saved feature: {sub_feature_file}')
|
||||
|
||||
# Keltner Channel
|
||||
print('Calculating multi-column indicator: keltner')
|
||||
result = calc_keltner(df['High'], df['Low'], df['Close'])
|
||||
for subname, values in result:
|
||||
print(f"Adding subfeature: {subname}")
|
||||
sub_feature_file = f'./data/{csv_prefix}_{subname}.npy'
|
||||
if os.path.exists(sub_feature_file):
|
||||
print(f'B Loading cached feature: {sub_feature_file}')
|
||||
arr = np.load(sub_feature_file)
|
||||
features_dict[subname] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
features_dict[subname] = values
|
||||
np.save(sub_feature_file, values.values)
|
||||
print(f'Saved feature: {sub_feature_file}')
|
||||
|
||||
# Ichimoku
|
||||
print('Calculating multi-column indicator: ichimoku')
|
||||
result = calc_ichimoku(df['High'], df['Low'])
|
||||
for subname, values in result:
|
||||
print(f"Adding subfeature: {subname}")
|
||||
sub_feature_file = f'./data/{csv_prefix}_{subname}.npy'
|
||||
if os.path.exists(sub_feature_file):
|
||||
print(f'B Loading cached feature: {sub_feature_file}')
|
||||
arr = np.load(sub_feature_file)
|
||||
features_dict[subname] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
features_dict[subname] = values
|
||||
np.save(sub_feature_file, values.values)
|
||||
print(f'Saved feature: {sub_feature_file}')
|
||||
|
||||
# Elder Ray
|
||||
print('Calculating multi-column indicator: elder_ray')
|
||||
result = calc_elder_ray(df['Close'], df['Low'], df['High'])
|
||||
for subname, values in result:
|
||||
print(f"Adding subfeature: {subname}")
|
||||
sub_feature_file = f'./data/{csv_prefix}_{subname}.npy'
|
||||
if os.path.exists(sub_feature_file):
|
||||
print(f'B Loading cached feature: {sub_feature_file}')
|
||||
arr = np.load(sub_feature_file)
|
||||
features_dict[subname] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
features_dict[subname] = values
|
||||
np.save(sub_feature_file, values.values)
|
||||
print(f'Saved feature: {sub_feature_file}')
|
||||
|
||||
# Prepare lags, rolling stats, log returns, and volatility features sequentially
|
||||
# Lags
|
||||
for col in ohlcv_cols:
|
||||
for lag in range(1, lags + 1):
|
||||
feature_name = f'{col}_lag{lag}'
|
||||
feature_file = f'./data/{csv_prefix}_{feature_name}.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'C Loading cached feature: {feature_file}')
|
||||
features_dict[feature_name] = np.load(feature_file)
|
||||
else:
|
||||
print(f'Computing lag feature: {feature_name}')
|
||||
result = compute_lag(df, col, lag)
|
||||
features_dict[feature_name] = result
|
||||
np.save(feature_file, result.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
# Rolling statistics
|
||||
for col in ohlcv_cols:
|
||||
for window in window_sizes:
|
||||
if (col == 'Open' and window == 5):
|
||||
continue
|
||||
if (col == 'High' and window == 5):
|
||||
continue
|
||||
if (col == 'High' and window == 30):
|
||||
continue
|
||||
if (col == 'Low' and window == 15):
|
||||
continue
|
||||
for stat in ['mean', 'std', 'min', 'max']:
|
||||
feature_name = f'{col}_roll_{stat}_{window}'
|
||||
feature_file = f'./data/{csv_prefix}_{feature_name}.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'D Loading cached feature: {feature_file}')
|
||||
features_dict[feature_name] = np.load(feature_file)
|
||||
else:
|
||||
print(f'Computing rolling stat feature: {feature_name}')
|
||||
result = compute_rolling(df, col, stat, window)
|
||||
features_dict[feature_name] = result
|
||||
np.save(feature_file, result.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
# Log returns for different horizons
|
||||
for horizon in [5, 15, 30]:
|
||||
feature_name = f'log_return_{horizon}'
|
||||
feature_file = f'./data/{csv_prefix}_{feature_name}.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'E Loading cached feature: {feature_file}')
|
||||
features_dict[feature_name] = np.load(feature_file)
|
||||
else:
|
||||
print(f'Computing log return feature: {feature_name}')
|
||||
result = compute_log_return(df, horizon)
|
||||
features_dict[feature_name] = result
|
||||
np.save(feature_file, result.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
# Volatility
|
||||
for window in window_sizes:
|
||||
feature_name = f'volatility_{window}'
|
||||
feature_file = f'./data/{csv_prefix}_{feature_name}.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'F Loading cached feature: {feature_file}')
|
||||
features_dict[feature_name] = np.load(feature_file)
|
||||
else:
|
||||
print(f'Computing volatility feature: {feature_name}')
|
||||
result = compute_volatility(df, window)
|
||||
features_dict[feature_name] = result
|
||||
np.save(feature_file, result.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# --- Additional Technical Indicator Features ---
|
||||
# ADX
|
||||
adx_names = ['adx', 'adx_pos', 'adx_neg']
|
||||
adx_files = [f'./data/{csv_prefix}_{name}.npy' for name in adx_names]
|
||||
if all(os.path.exists(f) for f in adx_files):
|
||||
print('G Loading cached features: ADX')
|
||||
for name, f in zip(adx_names, adx_files):
|
||||
arr = np.load(f)
|
||||
features_dict[name] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating multi-column indicator: adx')
|
||||
result = calc_adx(df['High'], df['Low'], df['Close'])
|
||||
for subname, values in result:
|
||||
sub_feature_file = f'./data/{csv_prefix}_{subname}.npy'
|
||||
features_dict[subname] = values
|
||||
np.save(sub_feature_file, values.values)
|
||||
print(f'Saved feature: {sub_feature_file}')
|
||||
|
||||
# Force Index
|
||||
feature_file = f'./data/{csv_prefix}_force_index.npy'
|
||||
if os.path.exists(feature_file):
|
||||
print(f'K Loading cached feature: {feature_file}')
|
||||
arr = np.load(feature_file)
|
||||
features_dict['force_index'] = pd.Series(arr, index=df.index)
|
||||
else:
|
||||
print('Calculating feature: force_index')
|
||||
_, values = calc_force_index(df['Close'], df['Volume'])
|
||||
features_dict['force_index'] = values
|
||||
np.save(feature_file, values.values)
|
||||
print(f'Saved feature: {feature_file}')
|
||||
|
||||
# Supertrend indicators
|
||||
for period, multiplier in [(12, 3.0), (10, 1.0), (11, 2.0)]:
|
||||
st_name = f'supertrend_{period}_{multiplier}'
|
||||
st_trend_name = f'supertrend_trend_{period}_{multiplier}'
|
||||
st_file = f'./data/{csv_prefix}_{st_name}.npy'
|
||||
st_trend_file = f'./data/{csv_prefix}_{st_trend_name}.npy'
|
||||
if os.path.exists(st_file) and os.path.exists(st_trend_file):
|
||||
print(f'L Loading cached features: {st_file}, {st_trend_file}')
|
||||
features_dict[st_name] = pd.Series(np.load(st_file), index=df.index)
|
||||
features_dict[st_trend_name] = pd.Series(np.load(st_trend_file), index=df.index)
|
||||
else:
|
||||
print(f'Calculating Supertrend indicator: {st_name}')
|
||||
st = ta.supertrend(df['High'], df['Low'], df['Close'], length=period, multiplier=multiplier)
|
||||
features_dict[st_name] = st[f'SUPERT_{period}_{multiplier}']
|
||||
features_dict[st_trend_name] = st[f'SUPERTd_{period}_{multiplier}']
|
||||
np.save(st_file, features_dict[st_name].values)
|
||||
np.save(st_trend_file, features_dict[st_trend_name].values)
|
||||
print(f'Saved features: {st_file}, {st_trend_file}')
|
||||
|
||||
# Concatenate all new features at once
|
||||
print('Concatenating all new features to DataFrame...')
|
||||
features_df = pd.DataFrame(features_dict)
|
||||
print("Columns in features_df:", features_df.columns.tolist())
|
||||
print("All-NaN columns in features_df:", features_df.columns[features_df.isna().all()].tolist())
|
||||
df = pd.concat([df, features_df], axis=1)
|
||||
|
||||
# Print all columns after concatenation
|
||||
print("All columns in df after concat:", df.columns.tolist())
|
||||
|
||||
# Downcast all float columns to save memory
|
||||
print('Downcasting float columns to save memory...')
|
||||
for col in df.columns:
|
||||
try:
|
||||
df[col] = pd.to_numeric(df[col], downcast='float')
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Add time features (exclude 'dayofweek')
|
||||
print('Adding hour feature...')
|
||||
df['Timestamp'] = pd.to_datetime(df['Timestamp'], errors='coerce')
|
||||
df['hour'] = df['Timestamp'].dt.hour
|
||||
|
||||
# Handle NaNs after all feature engineering
|
||||
if IMPUTE_NANS:
|
||||
print('Imputing NaNs after feature engineering (using mean imputation)...')
|
||||
numeric_cols = df.select_dtypes(include=[np.number]).columns
|
||||
for col in numeric_cols:
|
||||
df[col] = df[col].fillna(df[col].mean())
|
||||
# If you want to impute non-numeric columns differently, add logic here
|
||||
else:
|
||||
print('Dropping NaNs after feature engineering...')
|
||||
df = df.dropna().reset_index(drop=True)
|
||||
|
||||
# Exclude 'Timestamp', 'Close', 'log_return', and any future target columns from features
|
||||
print('Selecting feature columns...')
|
||||
exclude_cols = ['Timestamp', 'Close', 'log_return', 'log_return_5', 'log_return_15', 'log_return_30']
|
||||
feature_cols = [col for col in df.columns if col not in exclude_cols]
|
||||
print('Features used for training:', feature_cols)
|
||||
|
||||
# Prepare CSV for results
|
||||
results_csv = './data/leave_one_out_results.csv'
|
||||
if not os.path.exists(results_csv):
|
||||
with open(results_csv, 'w', newline='') as f:
|
||||
writer = csv.writer(f)
|
||||
writer.writerow(['left_out_feature', 'used_features', 'rmse', 'mae', 'r2', 'mape', 'directional_accuracy'])
|
||||
|
||||
total_features = len(feature_cols)
|
||||
never_leave_out = {'Open', 'High', 'Low', 'Close', 'Volume'}
|
||||
for idx, left_out in enumerate(feature_cols):
|
||||
if left_out in never_leave_out:
|
||||
continue
|
||||
used = [f for f in feature_cols if f != left_out]
|
||||
print(f'\n=== Leave-one-out {idx+1}/{total_features}: left out {left_out} ===')
|
||||
try:
|
||||
# Prepare X and y for this combination
|
||||
X = df[used].values.astype(np.float32)
|
||||
y = df["log_return"].values.astype(np.float32)
|
||||
split_idx = int(len(X) * 0.8)
|
||||
X_train, X_test = X[:split_idx], X[split_idx:]
|
||||
y_train, y_test = y[:split_idx], y[split_idx:]
|
||||
test_timestamps = df['Timestamp'].values[split_idx:]
|
||||
|
||||
model = CustomXGBoostGPU(X_train, X_test, y_train, y_test)
|
||||
booster = model.train()
|
||||
model.save_model(f'./data/xgboost_model_wo_{left_out}.json')
|
||||
|
||||
test_preds = model.predict(X_test)
|
||||
rmse = np.sqrt(mean_squared_error(y_test, test_preds))
|
||||
|
||||
# Reconstruct price series from log returns
|
||||
if 'Close' in df.columns:
|
||||
close_prices = df['Close'].values
|
||||
else:
|
||||
close_prices = pd.read_csv(csv_path)['Close'].values
|
||||
start_price = close_prices[split_idx]
|
||||
actual_prices = [start_price]
|
||||
for r_ in y_test:
|
||||
actual_prices.append(actual_prices[-1] * np.exp(r_))
|
||||
actual_prices = np.array(actual_prices[1:])
|
||||
predicted_prices = [start_price]
|
||||
for r_ in test_preds:
|
||||
predicted_prices.append(predicted_prices[-1] * np.exp(r_))
|
||||
predicted_prices = np.array(predicted_prices[1:])
|
||||
|
||||
mae = mean_absolute_error(actual_prices, predicted_prices)
|
||||
r2 = r2_score(actual_prices, predicted_prices)
|
||||
direction_actual = np.sign(np.diff(actual_prices))
|
||||
direction_pred = np.sign(np.diff(predicted_prices))
|
||||
directional_accuracy = (direction_actual == direction_pred).mean()
|
||||
mape = np.mean(np.abs((actual_prices - predicted_prices) / actual_prices)) * 100
|
||||
|
||||
# Save results to CSV
|
||||
with open(results_csv, 'a', newline='') as f:
|
||||
writer = csv.writer(f)
|
||||
writer.writerow([left_out, "|".join(used), rmse, mae, r2, mape, directional_accuracy])
|
||||
print(f'Left out {left_out}: RMSE={rmse:.4f}, MAE={mae:.4f}, R2={r2:.4f}, MAPE={mape:.2f}%, DirAcc={directional_accuracy*100:.2f}%')
|
||||
|
||||
# Plotting for this run
|
||||
plot_prefix = f'loo_{left_out}'
|
||||
print('Plotting distribution of absolute prediction errors...')
|
||||
plot_prediction_error_distribution(predicted_prices, actual_prices, prefix=plot_prefix)
|
||||
|
||||
print('Plotting directional accuracy...')
|
||||
plot_direction_transition_heatmap(actual_prices, predicted_prices, prefix=plot_prefix)
|
||||
except Exception as e:
|
||||
print(f'Leave-one-out failed for {left_out}: {e}')
|
||||
print(f'All leave-one-out runs completed. Results saved to {results_csv}')
|
||||
sys.exit(0)
|
||||
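Because main.py predicts one-minute log returns, the leave-one-out evaluation above first rebuilds a price path before computing MAE, R², MAPE and directional accuracy: each reconstructed price is the previous price multiplied by exp(predicted log return), seeded with the actual close at the train/test split. A minimal sketch of that reconstruction and of the directional-accuracy metric, using placeholder arrays in place of the values computed above:

import numpy as np

def reconstruct_prices(start_price, log_returns):
    # P_t = P_{t-1} * exp(r_t); equivalent to the append loop used in main.py.
    return start_price * np.exp(np.cumsum(log_returns))

def directional_accuracy(actual_prices, predicted_prices):
    # Share of steps where the predicted move has the same sign as the actual move.
    return float(np.mean(np.sign(np.diff(actual_prices)) == np.sign(np.diff(predicted_prices))))

# Placeholder values; main.py derives these from the 80/20 time split.
start_price = 30_000.0
y_test = np.array([0.001, -0.002, 0.0005], dtype=np.float32)
test_preds = np.array([0.0008, -0.001, -0.0002], dtype=np.float32)
actual = reconstruct_prices(start_price, y_test)
predicted = reconstruct_prices(start_price, test_preds)
print(f'directional accuracy: {directional_accuracy(actual, predicted):.2f}')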
318
xgboost/plot_results.py
Normal file
@@ -0,0 +1,318 @@
|
||||
import numpy as np
|
||||
import dash
|
||||
from dash import dcc, html
|
||||
import plotly.graph_objs as go
|
||||
import threading
|
||||
|
||||
|
||||
def display_actual_vs_predicted(y_test, test_preds, timestamps, n_plot=200):
|
||||
import plotly.offline as pyo
|
||||
n_plot = min(n_plot, len(y_test))
|
||||
plot_indices = timestamps[:n_plot]
|
||||
actual = y_test[:n_plot]
|
||||
predicted = test_preds[:n_plot]
|
||||
|
||||
trace_actual = go.Scatter(x=plot_indices, y=actual, mode='lines', name='Actual')
|
||||
trace_predicted = go.Scatter(x=plot_indices, y=predicted, mode='lines', name='Predicted')
|
||||
data = [trace_actual, trace_predicted]
|
||||
layout = go.Layout(
|
||||
title='Actual vs. Predicted BTC Close Prices (Test Set)',
|
||||
xaxis={'title': 'Timestamp'},
|
||||
yaxis={'title': 'BTC Close Price'},
|
||||
legend={'x': 0, 'y': 1},
|
||||
margin={'l': 40, 'b': 40, 't': 40, 'r': 10},
|
||||
hovermode='closest'
|
||||
)
|
||||
fig = go.Figure(data=data, layout=layout)
|
||||
pyo.plot(fig, auto_open=False)
|
||||
|
||||
def plot_target_distribution(y_train, y_test):
|
||||
import plotly.offline as pyo
|
||||
trace_train = go.Histogram(
|
||||
x=y_train,
|
||||
nbinsx=100,
|
||||
opacity=0.5,
|
||||
name='Train',
|
||||
marker=dict(color='blue')
|
||||
)
|
||||
trace_test = go.Histogram(
|
||||
x=y_test,
|
||||
nbinsx=100,
|
||||
opacity=0.5,
|
||||
name='Test',
|
||||
marker=dict(color='orange')
|
||||
)
|
||||
data = [trace_train, trace_test]
|
||||
layout = go.Layout(
|
||||
title='Distribution of Target Variable (Close Price)',
|
||||
xaxis=dict(title='BTC Close Price'),
|
||||
yaxis=dict(title='Frequency'),
|
||||
barmode='overlay'
|
||||
)
|
||||
fig = go.Figure(data=data, layout=layout)
|
||||
pyo.plot(fig, auto_open=False)
|
||||
|
||||
def plot_predicted_vs_actual_log_returns(y_test, test_preds, timestamps=None, n_plot=200):
|
||||
import plotly.offline as pyo
|
||||
import plotly.graph_objs as go
|
||||
n_plot = min(n_plot, len(y_test))
|
||||
actual = y_test[:n_plot]
|
||||
predicted = test_preds[:n_plot]
|
||||
if timestamps is not None:
|
||||
x_axis = timestamps[:n_plot]
|
||||
x_label = 'Timestamp'
|
||||
else:
|
||||
x_axis = list(range(n_plot))
|
||||
x_label = 'Index'
|
||||
|
||||
# Line plot: Actual vs Predicted over time
|
||||
trace_actual = go.Scatter(x=x_axis, y=actual, mode='lines', name='Actual')
|
||||
trace_predicted = go.Scatter(x=x_axis, y=predicted, mode='lines', name='Predicted')
|
||||
data_line = [trace_actual, trace_predicted]
|
||||
layout_line = go.Layout(
|
||||
title='Actual vs. Predicted Log Returns (Test Set)',
|
||||
xaxis={'title': x_label},
|
||||
yaxis={'title': 'Log Return'},
|
||||
legend={'x': 0, 'y': 1},
|
||||
margin={'l': 40, 'b': 40, 't': 40, 'r': 10},
|
||||
hovermode='closest'
|
||||
)
|
||||
fig_line = go.Figure(data=data_line, layout=layout_line)
|
||||
pyo.plot(fig_line, filename='charts/log_return_line_plot.html', auto_open=False)
|
||||
|
||||
# Scatter plot: Predicted vs Actual
|
||||
trace_scatter = go.Scatter(
|
||||
x=actual,
|
||||
y=predicted,
|
||||
mode='markers',
|
||||
name='Predicted vs Actual',
|
||||
opacity=0.5
|
||||
)
|
||||
# Diagonal reference line
|
||||
min_val = min(np.min(actual), np.min(predicted))
|
||||
max_val = max(np.max(actual), np.max(predicted))
|
||||
trace_diag = go.Scatter(
|
||||
x=[min_val, max_val],
|
||||
y=[min_val, max_val],
|
||||
mode='lines',
|
||||
name='Ideal',
|
||||
line=dict(dash='dash', color='red')
|
||||
)
|
||||
data_scatter = [trace_scatter, trace_diag]
|
||||
layout_scatter = go.Layout(
|
||||
title='Predicted vs Actual Log Returns (Scatter)',
|
||||
xaxis={'title': 'Actual Log Return'},
|
||||
yaxis={'title': 'Predicted Log Return'},
|
||||
showlegend=True,
|
||||
margin={'l': 40, 'b': 40, 't': 40, 'r': 10},
|
||||
hovermode='closest'
|
||||
)
|
||||
fig_scatter = go.Figure(data=data_scatter, layout=layout_scatter)
|
||||
pyo.plot(fig_scatter, filename='charts/log_return_scatter_plot.html', auto_open=False)
|
||||
|
||||
def plot_predicted_vs_actual_prices(actual_prices, predicted_prices, timestamps=None, n_plot=200):
|
||||
import plotly.offline as pyo
|
||||
import plotly.graph_objs as go
|
||||
n_plot = min(n_plot, len(actual_prices))
|
||||
actual = actual_prices[:n_plot]
|
||||
predicted = predicted_prices[:n_plot]
|
||||
if timestamps is not None:
|
||||
x_axis = timestamps[:n_plot]
|
||||
x_label = 'Timestamp'
|
||||
else:
|
||||
x_axis = list(range(n_plot))
|
||||
x_label = 'Index'
|
||||
|
||||
# Line plot: Actual vs Predicted over time
|
||||
trace_actual = go.Scatter(x=x_axis, y=actual, mode='lines', name='Actual Price')
|
||||
trace_predicted = go.Scatter(x=x_axis, y=predicted, mode='lines', name='Predicted Price')
|
||||
data_line = [trace_actual, trace_predicted]
|
||||
layout_line = go.Layout(
|
||||
title='Actual vs. Predicted BTC Prices (Test Set)',
|
||||
xaxis={'title': x_label},
|
||||
yaxis={'title': 'BTC Price'},
|
||||
legend={'x': 0, 'y': 1},
|
||||
margin={'l': 40, 'b': 40, 't': 40, 'r': 10},
|
||||
hovermode='closest'
|
||||
)
|
||||
fig_line = go.Figure(data=data_line, layout=layout_line)
|
||||
pyo.plot(fig_line, filename='charts/price_line_plot.html', auto_open=False)
|
||||
|
||||
# Scatter plot: Predicted vs Actual
|
||||
trace_scatter = go.Scatter(
|
||||
x=actual,
|
||||
y=predicted,
|
||||
mode='markers',
|
||||
name='Predicted vs Actual',
|
||||
opacity=0.5
|
||||
)
|
||||
# Diagonal reference line
|
||||
min_val = min(np.min(actual), np.min(predicted))
|
||||
max_val = max(np.max(actual), np.max(predicted))
|
||||
trace_diag = go.Scatter(
|
||||
x=[min_val, max_val],
|
||||
y=[min_val, max_val],
|
||||
mode='lines',
|
||||
name='Ideal',
|
||||
line=dict(dash='dash', color='red')
|
||||
)
|
||||
data_scatter = [trace_scatter, trace_diag]
|
||||
layout_scatter = go.Layout(
|
||||
title='Predicted vs Actual Prices (Scatter)',
|
||||
xaxis={'title': 'Actual Price'},
|
||||
yaxis={'title': 'Predicted Price'},
|
||||
showlegend=True,
|
||||
margin={'l': 40, 'b': 40, 't': 40, 'r': 10},
|
||||
hovermode='closest'
|
||||
)
|
||||
fig_scatter = go.Figure(data=data_scatter, layout=layout_scatter)
|
||||
pyo.plot(fig_scatter, filename='charts/price_scatter_plot.html', auto_open=False)
|
||||
|
||||
def plot_prediction_error_distribution(predicted_prices, actual_prices, nbins=100, prefix=""):
|
||||
"""
|
||||
Plots the distribution of signed prediction errors between predicted and actual prices,
|
||||
coloring negative errors (under-prediction) and positive errors (over-prediction) differently.
|
||||
"""
|
||||
import plotly.offline as pyo
|
||||
import plotly.graph_objs as go
|
||||
errors = np.array(predicted_prices) - np.array(actual_prices)
|
||||
|
||||
# Separate negative and positive errors
|
||||
neg_errors = errors[errors < 0]
|
||||
pos_errors = errors[errors >= 0]
|
||||
|
||||
# Calculate common bin edges
|
||||
min_error = np.min(errors)
|
||||
max_error = np.max(errors)
|
||||
bin_edges = np.linspace(min_error, max_error, nbins + 1)
|
||||
xbins = dict(start=min_error, end=max_error, size=(max_error - min_error) / nbins)
|
||||
|
||||
trace_neg = go.Histogram(
|
||||
x=neg_errors,
|
||||
opacity=0.75,
|
||||
marker=dict(color='blue'),
|
||||
name='Negative Error (Under-prediction)',
|
||||
xbins=xbins
|
||||
)
|
||||
trace_pos = go.Histogram(
|
||||
x=pos_errors,
|
||||
opacity=0.75,
|
||||
marker=dict(color='orange'),
|
||||
name='Positive Error (Over-prediction)',
|
||||
xbins=xbins
|
||||
)
|
||||
layout = go.Layout(
|
||||
title='Distribution of Prediction Errors (Signed)',
|
||||
xaxis=dict(title='Prediction Error (Predicted - Actual)'),
|
||||
yaxis=dict(title='Frequency'),
|
||||
barmode='overlay',
|
||||
bargap=0.05
|
||||
)
|
||||
fig = go.Figure(data=[trace_neg, trace_pos], layout=layout)
|
||||
filename = f'charts/{prefix}_prediction_error_distribution.html'
|
||||
pyo.plot(fig, filename=filename, auto_open=False)
|
||||
|
||||
def plot_directional_accuracy(actual_prices, predicted_prices, timestamps=None, n_plot=200):
|
||||
"""
|
||||
Plots the directional accuracy of predictions compared to actual price movements.
|
||||
Shows whether the predicted direction matches the actual direction of price movement.
|
||||
|
||||
Args:
|
||||
actual_prices: Array of actual price values
|
||||
predicted_prices: Array of predicted price values
|
||||
timestamps: Optional array of timestamps for x-axis
|
||||
n_plot: Number of points to plot (default 200, plots last n_plot points)
|
||||
"""
|
||||
import plotly.graph_objs as go
|
||||
import plotly.offline as pyo
|
||||
import numpy as np
|
||||
|
||||
# Calculate price changes
|
||||
actual_changes = np.diff(actual_prices)
|
||||
predicted_changes = np.diff(predicted_prices)
|
||||
|
||||
# Determine if directions match
|
||||
actual_direction = np.sign(actual_changes)
|
||||
predicted_direction = np.sign(predicted_changes)
|
||||
correct_direction = actual_direction == predicted_direction
|
||||
|
||||
# Get last n_plot points
|
||||
actual_changes = actual_changes[-n_plot:]
|
||||
predicted_changes = predicted_changes[-n_plot:]
|
||||
correct_direction = correct_direction[-n_plot:]
|
||||
|
||||
if timestamps is not None:
|
||||
x_values = timestamps[1:] # Skip first since we took diff
|
||||
x_values = x_values[-n_plot:] # Get last n_plot points
|
||||
else:
|
||||
x_values = list(range(len(actual_changes)))
|
||||
|
||||
# Create traces for correct and incorrect predictions
|
||||
correct_trace = go.Scatter(
|
||||
x=np.array(x_values)[correct_direction],
|
||||
y=actual_changes[correct_direction],
|
||||
mode='markers',
|
||||
name='Correct Direction',
|
||||
marker=dict(color='green', size=8)
|
||||
)
|
||||
|
||||
incorrect_trace = go.Scatter(
|
||||
x=np.array(x_values)[~correct_direction],
|
||||
y=actual_changes[~correct_direction],
|
||||
mode='markers',
|
||||
name='Incorrect Direction',
|
||||
marker=dict(color='red', size=8)
|
||||
)
|
||||
|
||||
# Calculate accuracy percentage
|
||||
accuracy = np.mean(correct_direction) * 100
|
||||
|
||||
layout = go.Layout(
|
||||
title=f'Directional Accuracy (Overall: {accuracy:.1f}%)',
|
||||
xaxis=dict(title='Time' if timestamps is not None else 'Sample'),
|
||||
yaxis=dict(title='Price Change'),
|
||||
showlegend=True
|
||||
)
|
||||
|
||||
fig = go.Figure(data=[correct_trace, incorrect_trace], layout=layout)
|
||||
pyo.plot(fig, filename='charts/directional_accuracy.html', auto_open=False)
|
||||
|
||||
def plot_direction_transition_heatmap(actual_prices, predicted_prices, prefix=""):
|
||||
"""
|
||||
Plots a heatmap showing the frequency of each (actual, predicted) direction pair.
|
||||
"""
|
||||
import numpy as np
|
||||
import plotly.graph_objs as go
|
||||
import plotly.offline as pyo
|
||||
|
||||
# Calculate directions
|
||||
actual_direction = np.sign(np.diff(actual_prices))
|
||||
predicted_direction = np.sign(np.diff(predicted_prices))
|
||||
|
||||
# Build 3x3 matrix: rows=actual, cols=predicted, values=counts
|
||||
# Map -1 -> 0, 0 -> 1, 1 -> 2 for indexing
|
||||
mapping = {-1: 0, 0: 1, 1: 2}
|
||||
matrix = np.zeros((3, 3), dtype=int)
|
||||
for a, p in zip(actual_direction, predicted_direction):
|
||||
matrix[mapping[a], mapping[p]] += 1
|
||||
|
||||
# Axis labels
|
||||
directions = ['Down (-1)', 'No Change (0)', 'Up (+1)']
|
||||
|
||||
# Plot heatmap
|
||||
heatmap = go.Heatmap(
|
||||
z=matrix,
|
||||
x=directions, # predicted
|
||||
y=directions, # actual
|
||||
colorscale='Viridis',
|
||||
colorbar=dict(title='Count')
|
||||
)
|
||||
layout = go.Layout(
|
||||
title='Direction Prediction Transition Matrix',
|
||||
xaxis=dict(title='Predicted Direction'),
|
||||
yaxis=dict(title='Actual Direction')
|
||||
)
|
||||
fig = go.Figure(data=[heatmap], layout=layout)
|
||||
filename = f'charts/{prefix}_direction_transition_heatmap.html'
|
||||
pyo.plot(fig, filename=filename, auto_open=False)
|
||||
|
||||
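A short usage sketch for the two plotting helpers that the leave-one-out loop calls, assuming it is run from the xgboost/ directory; the synthetic price series and the charts/ directory creation are assumptions of this sketch, not part of the commit:

import os
import numpy as np
from plot_results import plot_prediction_error_distribution, plot_direction_transition_heatmap

os.makedirs('charts', exist_ok=True)  # both helpers write HTML files under charts/

# Synthetic random walk standing in for the reconstructed test-set prices.
rng = np.random.default_rng(1)
actual = 30_000 * np.exp(np.cumsum(rng.normal(0.0, 0.001, size=500)))
predicted = actual * (1.0 + rng.normal(0.0, 0.0005, size=500))

plot_prediction_error_distribution(predicted, actual, prefix='example')
plot_direction_transition_heatmap(actual, predicted, prefix='example')
# Writes charts/example_prediction_error_distribution.html and
# charts/example_direction_transition_heatmap.html for manual inspection in a browser.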