Initial project setup with Python 3.12, a main script for backtesting trading strategies, and configuration files for project management. Added the necessary dependencies and the documentation structure.

Parent: cac0629dcd
Commit: 56dca05a3e

61 .cursor/rules/always-global.mdc Normal file
@@ -0,0 +1,61 @@
---
|
||||
description: Global development standards and AI interaction principles
|
||||
globs:
|
||||
alwaysApply: true
|
||||
---
|
||||
|
||||
# Rule: Always Apply - Global Development Standards
|
||||
|
||||
## AI Interaction Principles
|
||||
|
||||
### Step-by-Step Development
|
||||
- **NEVER** generate large blocks of code without explanation
|
||||
- **ALWAYS** present your plan as a concise bullet list and wait for explicit confirmation before proceeding
|
||||
- Break complex tasks into smaller, manageable pieces (≤250 lines per file, ≤50 lines per function)
|
||||
- Explain your reasoning step-by-step before writing code
|
||||
- Wait for explicit approval before moving to the next sub-task
|
||||
|
||||
### Context Awareness
|
||||
- **ALWAYS** reference existing code patterns and data structures before suggesting new approaches
|
||||
- Ask about existing conventions before implementing new functionality
|
||||
- Preserve established architectural decisions unless explicitly asked to change them
|
||||
- Maintain consistency with existing naming conventions and code style
|
||||
|
||||
## Code Quality Standards
|
||||
|
||||
### File and Function Limits
|
||||
- **Maximum file size**: 250 lines
|
||||
- **Maximum function size**: 50 lines
|
||||
- **Maximum complexity**: If a function does more than one main thing, break it down
|
||||
- **Naming**: Use clear, descriptive names that explain purpose
|
||||
|
||||
### Documentation Requirements
|
||||
- **Every public function** must have a docstring explaining purpose, parameters, and return value
|
||||
- **Every class** must have a class-level docstring
|
||||
- **Complex logic** must have inline comments explaining the "why", not just the "what"
|
||||
- **API endpoints** must be documented with request/response examples
|
||||
|
||||
### Error Handling
|
||||
- **ALWAYS** include proper error handling for external dependencies
|
||||
- **NEVER** use bare except clauses
|
||||
- Provide meaningful error messages that help with debugging
|
||||
- Log errors appropriately for the application context
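
A minimal sketch of these rules applied to an external dependency (the API endpoint, the `requests` usage, and the function name are illustrative assumptions, not project requirements):

```python
import logging

import requests  # assumed external dependency, for illustration only

logger = logging.getLogger(__name__)


def fetch_price_history(symbol: str) -> dict:
    """Fetch price history for a symbol from a hypothetical external API."""
    url = f"https://api.example.com/prices/{symbol}"
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.json()
    except requests.Timeout:
        # Specific exception type instead of a bare `except`, logged with context
        logger.error("Timed out fetching price history for %s", symbol)
        raise
    except requests.HTTPError as exc:
        # Meaningful, debuggable message that names the failing symbol and status
        raise RuntimeError(
            f"Price API returned {exc.response.status_code} for {symbol}"
        ) from exc
```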
|
||||
|
||||
## Security and Best Practices
|
||||
- **NEVER** hardcode credentials, API keys, or sensitive data
|
||||
- **ALWAYS** validate user inputs
|
||||
- Use parameterized queries for database operations
|
||||
- Follow the principle of least privilege
|
||||
- Implement proper authentication and authorization
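
A short sketch of these practices (the table, column names, and environment variable are assumptions for illustration):

```python
import os
import sqlite3


def get_user_by_email(conn: sqlite3.Connection, email: str):
    """Look up a user row by email using a parameterized query."""
    # Validate user input before it reaches the database
    if "@" not in email or len(email) > 254:
        raise ValueError(f"Invalid email address: {email!r}")
    # Placeholder binding prevents SQL injection; never format user input into SQL
    cursor = conn.execute("SELECT id, name FROM users WHERE email = ?", (email,))
    return cursor.fetchone()


# Credentials come from the environment, never from source code
API_KEY = os.environ.get("TRADING_API_KEY")  # hypothetical variable name
```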
|
||||
|
||||
## Testing Requirements
|
||||
- **Every implementation** should have corresponding unit tests
|
||||
- **Every API endpoint** should have integration tests
|
||||
- Test files should be placed alongside the code they test
|
||||
- Use descriptive test names that explain what is being tested
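
For example, a unit test following these conventions might look like the sketch below (the `pricing` module and its function are hypothetical; the test file sits next to `pricing.py`):

```python
# test_pricing.py, placed alongside pricing.py
import pytest

from pricing import calculate_position_size  # hypothetical module under test


def test_calculate_position_size_rejects_negative_account_balance():
    """The test name states exactly which behaviour is being verified."""
    with pytest.raises(ValueError):
        calculate_position_size(account_balance=-1000, risk_per_trade=0.01)
```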
|
||||
|
||||
## Response Format
|
||||
- Be concise and avoid unnecessary repetition
|
||||
- Focus on actionable information
|
||||
- Provide examples when explaining complex concepts
|
||||
- Ask clarifying questions when requirements are ambiguous
|
||||

237 .cursor/rules/architecture.mdc Normal file
@@ -0,0 +1,237 @@
---
|
||||
description: Modular design principles and architecture guidelines for scalable development
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Architecture and Modular Design
|
||||
|
||||
## Goal
|
||||
Maintain a clean, modular architecture that scales effectively and prevents the complexity issues that arise in AI-assisted development.
|
||||
|
||||
## Core Architecture Principles
|
||||
|
||||
### 1. Modular Design
|
||||
- **Single Responsibility**: Each module has one clear purpose
|
||||
- **Loose Coupling**: Modules depend on interfaces, not implementations
|
||||
- **High Cohesion**: Related functionality is grouped together
|
||||
- **Clear Boundaries**: Module interfaces are well-defined and stable
|
||||
|
||||
### 2. Size Constraints
|
||||
- **Files**: Maximum 250 lines per file
|
||||
- **Functions**: Maximum 50 lines per function
|
||||
- **Classes**: Maximum 300 lines per class
|
||||
- **Modules**: Maximum 10 public functions/classes per module
|
||||
|
||||
### 3. Dependency Management
|
||||
- **Layer Dependencies**: Higher layers depend on lower layers only
|
||||
- **No Circular Dependencies**: Modules cannot depend on each other cyclically
|
||||
- **Interface Segregation**: Depend on specific interfaces, not broad ones
|
||||
- **Dependency Injection**: Pass dependencies rather than creating them internally
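
A small sketch of the injection point (the repository classes are hypothetical; the concrete implementation is chosen at the composition root, not inside the service):

```python
class ReportService:
    """Depends on an abstraction passed in from outside, not a concrete class."""

    def __init__(self, trade_repo: "TradeRepository"):
        self._trade_repo = trade_repo

    def daily_summary(self, day: str) -> dict:
        trades = self._trade_repo.list_for_day(day)
        return {"day": day, "trade_count": len(trades)}


# Wiring happens once, at the application's entry point:
# service = ReportService(SqlTradeRepository(connection))
```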
|
||||
|
||||
## Modular Architecture Patterns
|
||||
|
||||
### Layer Structure
|
||||
```
|
||||
src/
|
||||
├── presentation/ # UI, API endpoints, CLI interfaces
|
||||
├── application/ # Business logic, use cases, workflows
|
||||
├── domain/ # Core business entities and rules
|
||||
├── infrastructure/ # Database, external APIs, file systems
|
||||
└── shared/ # Common utilities, constants, types
|
||||
```
|
||||
|
||||
### Module Organization
|
||||
```
|
||||
module_name/
|
||||
├── __init__.py # Public interface exports
|
||||
├── core.py # Main module logic
|
||||
├── types.py # Type definitions and interfaces
|
||||
├── utils.py # Module-specific utilities
|
||||
├── tests/ # Module tests
|
||||
└── README.md # Module documentation
|
||||
```
|
||||
|
||||
## Design Patterns for AI Development
|
||||
|
||||
### 1. Repository Pattern
|
||||
Separate data access from business logic:
|
||||
|
||||
```python
|
||||
# Domain interface
|
||||
class UserRepository:
|
||||
def get_by_id(self, user_id: str) -> User: ...
|
||||
def save(self, user: User) -> None: ...
|
||||
|
||||
# Infrastructure implementation
|
||||
class SqlUserRepository(UserRepository):
|
||||
def get_by_id(self, user_id: str) -> User:
|
||||
# Database-specific implementation
|
||||
pass
|
||||
```
|
||||
|
||||
### 2. Service Pattern
|
||||
Encapsulate business logic in focused services:
|
||||
|
||||
```python
|
||||
class UserService:
|
||||
def __init__(self, user_repo: UserRepository):
|
||||
self._user_repo = user_repo
|
||||
|
||||
def create_user(self, data: UserData) -> User:
|
||||
# Validation and business logic
|
||||
# Single responsibility: user creation
|
||||
pass
|
||||
```
|
||||
|
||||
### 3. Factory Pattern
|
||||
Create complex objects with clear interfaces:
|
||||
|
||||
```python
|
||||
class DatabaseFactory:
|
||||
@staticmethod
|
||||
def create_connection(config: DatabaseConfig) -> Connection:
|
||||
# Handle different database types
|
||||
# Encapsulate connection complexity
|
||||
pass
|
||||
```
|
||||
|
||||
## Architecture Decision Guidelines
|
||||
|
||||
### When to Create New Modules
|
||||
Create a new module when:
|
||||
- **Functionality** exceeds size constraints (250 lines)
|
||||
- **Responsibility** is distinct from existing modules
|
||||
- **Dependencies** would create circular references
|
||||
- **Reusability** would benefit other parts of the system
|
||||
- **Testing** requires isolated test environments
|
||||
|
||||
### When to Split Existing Modules
|
||||
Split modules when:
|
||||
- **File size** exceeds 250 lines
|
||||
- **Multiple responsibilities** are evident
|
||||
- **Testing** becomes difficult due to complexity
|
||||
- **Dependencies** become too numerous
|
||||
- **Change frequency** differs significantly between parts
|
||||
|
||||
### Module Interface Design
|
||||
```python
|
||||
# Good: Clear, focused interface
|
||||
class PaymentProcessor:
|
||||
def process_payment(self, amount: Money, method: PaymentMethod) -> PaymentResult:
|
||||
"""Process a single payment transaction."""
|
||||
pass
|
||||
|
||||
# Bad: Unfocused, kitchen-sink interface
|
||||
class PaymentManager:
|
||||
def process_payment(self, ...): pass
|
||||
def validate_card(self, ...): pass
|
||||
def send_receipt(self, ...): pass
|
||||
def update_inventory(self, ...): pass # Wrong responsibility!
|
||||
```
|
||||
|
||||
## Architecture Validation
|
||||
|
||||
### Architecture Review Checklist
|
||||
- [ ] **Dependencies flow in one direction** (no cycles)
|
||||
- [ ] **Layers are respected** (presentation doesn't call infrastructure directly)
|
||||
- [ ] **Modules have single responsibility**
|
||||
- [ ] **Interfaces are stable** and well-defined
|
||||
- [ ] **Size constraints** are maintained
|
||||
- [ ] **Testing** is straightforward for each module
|
||||
|
||||
### Red Flags
|
||||
- **God Objects**: Classes/modules that do too many things
|
||||
- **Circular Dependencies**: Modules that depend on each other
|
||||
- **Deep Inheritance**: More than 3 levels of inheritance
|
||||
- **Large Interfaces**: Interfaces with more than 7 methods
|
||||
- **Tight Coupling**: Modules that know too much about each other's internals
|
||||
|
||||
## Refactoring Guidelines
|
||||
|
||||
### When to Refactor
|
||||
- Module exceeds size constraints
|
||||
- Code duplication across modules
|
||||
- Difficult to test individual components
|
||||
- New features require changing multiple unrelated modules
|
||||
- Performance bottlenecks due to poor separation
|
||||
|
||||
### Refactoring Process
|
||||
1. **Identify** the specific architectural problem
|
||||
2. **Design** the target architecture
|
||||
3. **Create tests** to verify current behavior
|
||||
4. **Implement changes** incrementally
|
||||
5. **Validate** that tests still pass
|
||||
6. **Update documentation** to reflect changes
|
||||
|
||||
### Safe Refactoring Practices
|
||||
- **One change at a time**: Don't mix refactoring with new features
|
||||
- **Tests first**: Ensure comprehensive test coverage before refactoring
|
||||
- **Incremental changes**: Small steps with verification at each stage
|
||||
- **Backward compatibility**: Maintain existing interfaces during transition
|
||||
- **Documentation updates**: Keep architecture documentation current
|
||||
|
||||
## Architecture Documentation
|
||||
|
||||
### Architecture Decision Records (ADRs)
|
||||
Document significant decisions in `./docs/decisions/`:
|
||||
|
||||
```markdown
|
||||
# ADR-003: Service Layer Architecture
|
||||
|
||||
## Status
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
As the application grows, business logic is scattered across controllers and models.
|
||||
|
||||
## Decision
|
||||
Implement a service layer to encapsulate business logic.
|
||||
|
||||
## Consequences
|
||||
**Positive:**
|
||||
- Clear separation of concerns
|
||||
- Easier testing of business logic
|
||||
- Better reusability across different interfaces
|
||||
|
||||
**Negative:**
|
||||
- Additional abstraction layer
|
||||
- More files to maintain
|
||||
```
|
||||
|
||||
### Module Documentation Template
|
||||
```markdown
|
||||
# Module: [Name]
|
||||
|
||||
## Purpose
|
||||
What this module does and why it exists.
|
||||
|
||||
## Dependencies
|
||||
- **Imports from**: List of modules this depends on
|
||||
- **Used by**: List of modules that depend on this one
|
||||
- **External**: Third-party dependencies
|
||||
|
||||
## Public Interface
|
||||
```python
|
||||
# Key functions and classes exposed by this module
|
||||
```
|
||||
|
||||
## Architecture Notes
|
||||
- Design patterns used
|
||||
- Important architectural decisions
|
||||
- Known limitations or constraints
|
||||
```
|
||||
|
||||
## Migration Strategies
|
||||
|
||||
### Legacy Code Integration
|
||||
- **Strangler Fig Pattern**: Gradually replace old code with new modules
|
||||
- **Adapter Pattern**: Create interfaces to integrate old and new code
|
||||
- **Facade Pattern**: Simplify complex legacy interfaces
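
For instance, the Adapter pattern can expose the new interface on top of a legacy component while it is gradually strangled out (both classes below are illustrative):

```python
class LegacyCsvTradeStore:
    """Existing legacy class with an awkward, file-based interface."""

    def read_rows(self, path: str) -> list[list[str]]:
        ...


class TradeRepositoryAdapter:
    """Adapter: presents the new repository interface over the legacy store."""

    def __init__(self, legacy_store: LegacyCsvTradeStore, path: str):
        self._store = legacy_store
        self._path = path

    def list_all(self) -> list[dict]:
        rows = self._store.read_rows(self._path)
        return [{"symbol": row[0], "price": float(row[1])} for row in rows]
```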
|
||||
|
||||
### Gradual Modernization
|
||||
1. **Identify boundaries** in existing code
|
||||
2. **Extract modules** one at a time
|
||||
3. **Create interfaces** for each extracted module
|
||||
4. **Test thoroughly** at each step
|
||||
5. **Update documentation** continuously
|
||||

123 .cursor/rules/code-review.mdc Normal file
@@ -0,0 +1,123 @@
---
|
||||
description: AI-generated code review checklist and quality assurance guidelines
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Code Review and Quality Assurance
|
||||
|
||||
## Goal
|
||||
Establish systematic review processes for AI-generated code to maintain quality, security, and maintainability standards.
|
||||
|
||||
## AI Code Review Checklist
|
||||
|
||||
### Pre-Implementation Review
|
||||
Before accepting any AI-generated code:
|
||||
|
||||
1. **Understand the Code**
|
||||
- [ ] Can you explain what the code does in your own words?
|
||||
- [ ] Do you understand each function and its purpose?
|
||||
- [ ] Are there any "magic" values or unexplained logic?
|
||||
- [ ] Does the code solve the actual problem stated?
|
||||
|
||||
2. **Architecture Alignment**
|
||||
- [ ] Does the code follow established project patterns?
|
||||
- [ ] Is it consistent with existing data structures?
|
||||
- [ ] Does it integrate cleanly with existing components?
|
||||
- [ ] Are new dependencies justified and necessary?
|
||||
|
||||
3. **Code Quality**
|
||||
- [ ] Are functions smaller than 50 lines?
|
||||
- [ ] Are files smaller than 250 lines?
|
||||
- [ ] Are variable and function names descriptive?
|
||||
- [ ] Is the code DRY (Don't Repeat Yourself)?
|
||||
|
||||
### Security Review
|
||||
- [ ] **Input Validation**: All user inputs are validated and sanitized
|
||||
- [ ] **Authentication**: Proper authentication checks are in place
|
||||
- [ ] **Authorization**: Access controls are implemented correctly
|
||||
- [ ] **Data Protection**: Sensitive data is handled securely
|
||||
- [ ] **SQL Injection**: Database queries use parameterized statements
|
||||
- [ ] **XSS Prevention**: Output is properly escaped
|
||||
- [ ] **Error Handling**: Errors don't leak sensitive information
|
||||
|
||||
### Integration Review
|
||||
- [ ] **Existing Functionality**: New code doesn't break existing features
|
||||
- [ ] **Data Consistency**: Database changes maintain referential integrity
|
||||
- [ ] **API Compatibility**: Changes don't break existing API contracts
|
||||
- [ ] **Performance Impact**: New code doesn't introduce performance bottlenecks
|
||||
- [ ] **Testing Coverage**: Appropriate tests are included
|
||||
|
||||
## Review Process
|
||||
|
||||
### Step 1: Initial Code Analysis
|
||||
1. **Read through the entire generated code** before running it
|
||||
2. **Identify patterns** that don't match existing codebase
|
||||
3. **Check dependencies** - are new packages really needed?
|
||||
4. **Verify logic flow** - does the algorithm make sense?
|
||||
|
||||
### Step 2: Security and Error Handling Review
|
||||
1. **Trace data flow** from input to output
|
||||
2. **Identify potential failure points** and verify error handling
|
||||
3. **Check for security vulnerabilities** using the security checklist
|
||||
4. **Verify proper logging** and monitoring implementation
|
||||
|
||||
### Step 3: Integration Testing
|
||||
1. **Test with existing code** to ensure compatibility
|
||||
2. **Run existing test suite** to verify no regressions
|
||||
3. **Test edge cases** and error conditions
|
||||
4. **Verify performance** under realistic conditions
|
||||
|
||||
## Common AI Code Issues to Watch For
|
||||
|
||||
### Overcomplication Patterns
|
||||
- **Unnecessary abstractions**: AI creating complex patterns for simple tasks
|
||||
- **Over-engineering**: Solutions that are more complex than needed
|
||||
- **Redundant code**: AI recreating existing functionality
|
||||
- **Inappropriate design patterns**: Using patterns that don't fit the use case
|
||||
|
||||
### Context Loss Indicators
|
||||
- **Inconsistent naming**: Different conventions from existing code
|
||||
- **Wrong data structures**: Using different patterns than established
|
||||
- **Ignored existing functions**: Reimplementing existing functionality
|
||||
- **Architectural misalignment**: Code that doesn't fit the overall design
|
||||
|
||||
### Technical Debt Indicators
|
||||
- **Magic numbers**: Hardcoded values without explanation
|
||||
- **Poor error messages**: Generic or unhelpful error handling
|
||||
- **Missing documentation**: Code without adequate comments
|
||||
- **Tight coupling**: Components that are too interdependent
|
||||
|
||||
## Quality Gates
|
||||
|
||||
### Mandatory Reviews
|
||||
All AI-generated code must pass these gates before acceptance:
|
||||
|
||||
1. **Security Review**: No security vulnerabilities detected
|
||||
2. **Integration Review**: Integrates cleanly with existing code
|
||||
3. **Performance Review**: Meets performance requirements
|
||||
4. **Maintainability Review**: Code can be easily modified by team members
|
||||
5. **Documentation Review**: Adequate documentation is provided
|
||||
|
||||
### Acceptance Criteria
|
||||
- [ ] Code is understandable by any team member
|
||||
- [ ] Integration requires minimal changes to existing code
|
||||
- [ ] Security review passes all checks
|
||||
- [ ] Performance meets established benchmarks
|
||||
- [ ] Documentation is complete and accurate
|
||||
|
||||
## Rejection Criteria
|
||||
Reject AI-generated code if:
|
||||
- Security vulnerabilities are present
|
||||
- Code is too complex for the problem being solved
|
||||
- Integration requires major refactoring of existing code
|
||||
- Code duplicates existing functionality without justification
|
||||
- Documentation is missing or inadequate
|
||||
|
||||
## Review Documentation
|
||||
For each review, document:
|
||||
- Issues found and how they were resolved
|
||||
- Performance impact assessment
|
||||
- Security concerns and mitigations
|
||||
- Integration challenges and solutions
|
||||
- Recommendations for future similar tasks
|
||||

93 .cursor/rules/context-management.mdc Normal file
@@ -0,0 +1,93 @@
---
|
||||
description: Context management for maintaining codebase awareness and preventing context drift
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Context Management
|
||||
|
||||
## Goal
|
||||
Maintain comprehensive project context to prevent context drift and ensure AI-generated code integrates seamlessly with existing codebase patterns and architecture.
|
||||
|
||||
## Context Documentation Requirements
|
||||
|
||||
### PRD.md file documentation
|
||||
1. **Project Overview**
|
||||
- Business objectives and goals
|
||||
- Target users and use cases
|
||||
- Key success metrics
|
||||
|
||||
### CONTEXT.md File Structure
|
||||
Every project must maintain a `CONTEXT.md` file in the root directory with:
|
||||
|
||||
1. **Architecture Overview**
|
||||
- High-level system architecture
|
||||
- Key design patterns used
|
||||
- Database schema overview
|
||||
- API structure and conventions
|
||||
|
||||
2. **Technology Stack**
|
||||
- Programming languages and versions
|
||||
- Frameworks and libraries
|
||||
- Database systems
|
||||
- Development and deployment tools
|
||||
|
||||
3. **Coding Conventions**
|
||||
- Naming conventions
|
||||
- File organization patterns
|
||||
- Code structure preferences
|
||||
- Import/export patterns
|
||||
|
||||
4. **Current Implementation Status**
|
||||
- Completed features
|
||||
- Work in progress
|
||||
- Known technical debt
|
||||
- Planned improvements
|
||||
|
||||
## Context Maintenance Protocol
|
||||
|
||||
### Before Every Coding Session
|
||||
1. **Review CONTEXT.md and PRD.md** to understand current project state
|
||||
2. **Scan recent changes** in git history to understand latest patterns
|
||||
3. **Identify existing patterns** for similar functionality before implementing new features
|
||||
4. **Ask for clarification** if existing patterns are unclear or conflicting
|
||||
|
||||
### During Development
|
||||
1. **Reference existing code** when explaining implementation approaches
|
||||
2. **Maintain consistency** with established patterns and conventions
|
||||
3. **Update CONTEXT.md** when making architectural decisions
|
||||
4. **Document deviations** from established patterns with reasoning
|
||||
|
||||
### Context Preservation Strategies
|
||||
- **Incremental development**: Build on existing patterns rather than creating new ones
|
||||
- **Pattern consistency**: Use established data structures and function signatures
|
||||
- **Integration awareness**: Consider how new code affects existing functionality
|
||||
- **Dependency management**: Understand existing dependencies before adding new ones
|
||||
|
||||
## Context Prompting Best Practices
|
||||
|
||||
### Effective Context Sharing
|
||||
- Include relevant sections of CONTEXT.md in prompts for complex tasks
|
||||
- Reference specific existing files when asking for similar functionality
|
||||
- Provide examples of existing patterns when requesting new implementations
|
||||
- Share recent git commit messages to understand latest changes
|
||||
|
||||
### Context Window Optimization
|
||||
- Prioritize most relevant context for current task
|
||||
- Use @filename references to include specific files
|
||||
- Break large contexts into focused, task-specific chunks
|
||||
- Update context references as project evolves
|
||||
|
||||
## Red Flags - Context Loss Indicators
|
||||
- AI suggests patterns that conflict with existing code
|
||||
- New implementations ignore established conventions
|
||||
- Proposed solutions don't integrate with existing architecture
|
||||
- Code suggestions require significant refactoring of existing functionality
|
||||
|
||||
## Recovery Protocol
|
||||
When context loss is detected:
|
||||
1. **Stop development** and review CONTEXT.md
|
||||
2. **Analyze existing codebase** for established patterns
|
||||
3. **Update context documentation** with missing information
|
||||
4. **Restart task** with proper context provided
|
||||
5. **Test integration** with existing code before proceeding
|
||||

67 .cursor/rules/create-prd.mdc Normal file
@@ -0,0 +1,67 @@
---
description: Creating PRD for a project or specific task/function
globs:
alwaysApply: false
---
# Rule: Generating a Product Requirements Document (PRD)
|
||||
|
||||
## Goal
|
||||
|
||||
To guide an AI assistant in creating a detailed Product Requirements Document (PRD) in Markdown format, based on an initial user prompt. The PRD should be clear, actionable, and suitable for a junior developer to understand and implement the feature.
|
||||
|
||||
## Process
|
||||
|
||||
1. **Receive Initial Prompt:** The user provides a brief description or request for a new feature or functionality.
|
||||
2. **Ask Clarifying Questions:** Before writing the PRD, the AI *must* ask clarifying questions to gather sufficient detail. The goal is to understand the "what" and "why" of the feature, not necessarily the "how" (which the developer will figure out).
|
||||
3. **Generate PRD:** Based on the initial prompt and the user's answers to the clarifying questions, generate a PRD using the structure outlined below.
|
||||
4. **Save PRD:** Save the generated document as `prd-[feature-name].md` inside the `/tasks` directory.
|
||||
|
||||
## Clarifying Questions (Examples)
|
||||
|
||||
The AI should adapt its questions based on the prompt, but here are some common areas to explore:
|
||||
|
||||
* **Problem/Goal:** "What problem does this feature solve for the user?" or "What is the main goal we want to achieve with this feature?"
|
||||
* **Target User:** "Who is the primary user of this feature?"
|
||||
* **Core Functionality:** "Can you describe the key actions a user should be able to perform with this feature?"
|
||||
* **User Stories:** "Could you provide a few user stories? (e.g., As a [type of user], I want to [perform an action] so that [benefit].)"
|
||||
* **Acceptance Criteria:** "How will we know when this feature is successfully implemented? What are the key success criteria?"
|
||||
* **Scope/Boundaries:** "Are there any specific things this feature *should not* do (non-goals)?"
|
||||
* **Data Requirements:** "What kind of data does this feature need to display or manipulate?"
|
||||
* **Design/UI:** "Are there any existing design mockups or UI guidelines to follow?" or "Can you describe the desired look and feel?"
|
||||
* **Edge Cases:** "Are there any potential edge cases or error conditions we should consider?"
|
||||
|
||||
## PRD Structure
|
||||
|
||||
The generated PRD should include the following sections:
|
||||
|
||||
1. **Introduction/Overview:** Briefly describe the feature and the problem it solves. State the goal.
|
||||
2. **Goals:** List the specific, measurable objectives for this feature.
|
||||
3. **User Stories:** Detail the user narratives describing feature usage and benefits.
|
||||
4. **Functional Requirements:** List the specific functionalities the feature must have. Use clear, concise language (e.g., "The system must allow users to upload a profile picture."). Number these requirements.
|
||||
5. **Non-Goals (Out of Scope):** Clearly state what this feature will *not* include to manage scope.
|
||||
6. **Design Considerations (Optional):** Link to mockups, describe UI/UX requirements, or mention relevant components/styles if applicable.
|
||||
7. **Technical Considerations (Optional):** Mention any known technical constraints, dependencies, or suggestions (e.g., "Should integrate with the existing Auth module").
|
||||
8. **Success Metrics:** How will the success of this feature be measured? (e.g., "Increase user engagement by 10%", "Reduce support tickets related to X").
|
||||
9. **Open Questions:** List any remaining questions or areas needing further clarification.
|
||||
|
||||
## Target Audience
|
||||
|
||||
Assume the primary reader of the PRD is a **junior developer**. Therefore, requirements should be explicit, unambiguous, and avoid jargon where possible. Provide enough detail for them to understand the feature's purpose and core logic.
|
||||
|
||||
## Output
|
||||
|
||||
* **Format:** Markdown (`.md`)
|
||||
* **Location:** `/tasks/`
|
||||
* **Filename:** `prd-[feature-name].md`
|
||||
|
||||
## Final instructions
|
||||
|
||||
1. Do NOT start implementing the PRD.
2. Make sure to ask the user clarifying questions.
3. Take the user's answers to the clarifying questions and improve the PRD.
|
||||

244 .cursor/rules/documentation.mdc Normal file
@@ -0,0 +1,244 @@
---
|
||||
description: Documentation standards for code, architecture, and development decisions
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Documentation Standards
|
||||
|
||||
## Goal
|
||||
Maintain comprehensive, up-to-date documentation that supports development, onboarding, and long-term maintenance of the codebase.
|
||||
|
||||
## Documentation Hierarchy
|
||||
|
||||
### 1. Project Level Documentation (in ./docs/)
|
||||
- **README.md**: Project overview, setup instructions, basic usage
|
||||
- **CONTEXT.md**: Current project state, architecture decisions, patterns
|
||||
- **CHANGELOG.md**: Version history and significant changes
|
||||
- **CONTRIBUTING.md**: Development guidelines and processes
|
||||
- **API.md**: API endpoints, request/response formats, authentication
|
||||
|
||||
### 2. Module Level Documentation (in ./docs/modules/)
|
||||
- **[module-name].md**: Purpose, public interfaces, usage examples
|
||||
- **dependencies.md**: External dependencies and their purposes
|
||||
- **architecture.md**: Module relationships and data flow
|
||||
|
||||
### 3. Code Level Documentation
|
||||
- **Docstrings**: Function and class documentation
|
||||
- **Inline comments**: Complex logic explanations
|
||||
- **Type hints**: Clear parameter and return types
|
||||
- **README files**: Directory-specific instructions
|
||||
|
||||
## Documentation Standards
|
||||
|
||||
### Code Documentation
|
||||
```python
|
||||
def process_user_data(user_id: str, data: dict) -> UserResult:
|
||||
"""
|
||||
Process and validate user data before storage.
|
||||
|
||||
Args:
|
||||
user_id: Unique identifier for the user
|
||||
data: Dictionary containing user information to process
|
||||
|
||||
Returns:
|
||||
UserResult: Processed user data with validation status
|
||||
|
||||
Raises:
|
||||
ValidationError: When user data fails validation
|
||||
DatabaseError: When storage operation fails
|
||||
|
||||
Example:
|
||||
>>> result = process_user_data("123", {"name": "John", "email": "john@example.com"})
|
||||
>>> print(result.status)
|
||||
'valid'
|
||||
"""
|
||||
```
|
||||
|
||||
### API Documentation Format
|
||||
```markdown
|
||||
### POST /api/users
|
||||
|
||||
Create a new user account.
|
||||
|
||||
**Request:**
|
||||
```json
|
||||
{
|
||||
"name": "string (required)",
|
||||
"email": "string (required, valid email)",
|
||||
"age": "number (optional, min: 13)"
|
||||
}
|
||||
```
|
||||
|
||||
**Response (201):**
|
||||
```json
|
||||
{
|
||||
"id": "uuid",
|
||||
"name": "string",
|
||||
"email": "string",
|
||||
"created_at": "iso_datetime"
|
||||
}
|
||||
```
|
||||
|
||||
**Errors:**
|
||||
- 400: Invalid input data
|
||||
- 409: Email already exists
|
||||
```
|
||||
|
||||
### Architecture Decision Records (ADRs)
|
||||
Document significant architecture decisions in `./docs/decisions/`:
|
||||
|
||||
```markdown
|
||||
# ADR-001: Database Choice - PostgreSQL
|
||||
|
||||
## Status
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
We need to choose a database for storing user data and application state.
|
||||
|
||||
## Decision
|
||||
We will use PostgreSQL as our primary database.
|
||||
|
||||
## Consequences
|
||||
**Positive:**
|
||||
- ACID compliance ensures data integrity
|
||||
- Rich query capabilities with SQL
|
||||
- Good performance for our expected load
|
||||
|
||||
**Negative:**
|
||||
- More complex setup than simpler alternatives
|
||||
- Requires SQL knowledge from team members
|
||||
|
||||
## Alternatives Considered
|
||||
- MongoDB: Rejected due to consistency requirements
|
||||
- SQLite: Rejected due to scalability needs
|
||||
```
|
||||
|
||||
## Documentation Maintenance
|
||||
|
||||
### When to Update Documentation
|
||||
|
||||
#### Always Update:
|
||||
- **API changes**: Any modification to public interfaces
|
||||
- **Architecture changes**: New patterns, data structures, or workflows
|
||||
- **Configuration changes**: Environment variables, deployment settings
|
||||
- **Dependencies**: Adding, removing, or upgrading packages
|
||||
- **Business logic changes**: Core functionality modifications
|
||||
|
||||
#### Update Weekly:
|
||||
- **CONTEXT.md**: Current development status and priorities
|
||||
- **Known issues**: Bug reports and workarounds
|
||||
- **Performance notes**: Bottlenecks and optimization opportunities
|
||||
|
||||
#### Update per Release:
|
||||
- **CHANGELOG.md**: User-facing changes and improvements
|
||||
- **Version documentation**: Breaking changes and migration guides
|
||||
- **Examples and tutorials**: Keep sample code current
|
||||
|
||||
### Documentation Quality Checklist
|
||||
|
||||
#### Completeness
|
||||
- [ ] Purpose and scope clearly explained
|
||||
- [ ] All public interfaces documented
|
||||
- [ ] Examples provided for complex usage
|
||||
- [ ] Error conditions and handling described
|
||||
- [ ] Dependencies and requirements listed
|
||||
|
||||
#### Accuracy
|
||||
- [ ] Code examples are tested and working
|
||||
- [ ] Links point to correct locations
|
||||
- [ ] Version numbers are current
|
||||
- [ ] Screenshots reflect current UI
|
||||
|
||||
#### Clarity
|
||||
- [ ] Written for the intended audience
|
||||
- [ ] Technical jargon is explained
|
||||
- [ ] Step-by-step instructions are clear
|
||||
- [ ] Visual aids used where helpful
|
||||
|
||||
## Documentation Automation
|
||||
|
||||
### Auto-Generated Documentation
|
||||
- **API docs**: Generate from code annotations
|
||||
- **Type documentation**: Extract from type hints
|
||||
- **Module dependencies**: Auto-update from imports
|
||||
- **Test coverage**: Include coverage reports
|
||||
|
||||
### Documentation Testing
|
||||
```python
|
||||
# Test that code examples in documentation work
|
||||
def test_documentation_examples():
|
||||
"""Verify code examples in docs actually work."""
|
||||
# Test examples from README.md
|
||||
# Test API examples from docs/API.md
|
||||
# Test configuration examples
|
||||
```
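
One concrete way to do this is with `doctest`, which re-runs the `>>>` examples embedded in docstrings (the `pricing` module name is an assumption):

```python
import doctest

import pricing  # hypothetical module whose docstrings contain >>> examples


def test_docstring_examples_still_pass():
    """Fail if any documented >>> example in the module no longer works."""
    results = doctest.testmod(pricing, verbose=False)
    assert results.failed == 0
```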
|
||||
|
||||
## Documentation Templates
|
||||
|
||||
### New Module Documentation Template
|
||||
```markdown
|
||||
# Module: [Name]
|
||||
|
||||
## Purpose
|
||||
Brief description of what this module does and why it exists.
|
||||
|
||||
## Public Interface
|
||||
### Functions
|
||||
- `function_name(params)`: Description and example
|
||||
|
||||
### Classes
|
||||
- `ClassName`: Purpose and basic usage
|
||||
|
||||
## Usage Examples
|
||||
```python
|
||||
# Basic usage example
|
||||
```
|
||||
|
||||
## Dependencies
|
||||
- Internal: List of internal modules this depends on
|
||||
- External: List of external packages required
|
||||
|
||||
## Testing
|
||||
How to run tests for this module.
|
||||
|
||||
## Known Issues
|
||||
Current limitations or bugs.
|
||||
```
|
||||
|
||||
### API Endpoint Template
|
||||
```markdown
|
||||
### [METHOD] /api/endpoint
|
||||
|
||||
Brief description of what this endpoint does.
|
||||
|
||||
**Authentication:** Required/Optional
|
||||
**Rate Limiting:** X requests per minute
|
||||
|
||||
**Request:**
|
||||
- Headers required
|
||||
- Body schema
|
||||
- Query parameters
|
||||
|
||||
**Response:**
|
||||
- Success response format
|
||||
- Error response format
|
||||
- Status codes
|
||||
|
||||
**Example:**
|
||||
Working request/response example
|
||||
```
|
||||
|
||||
## Review and Maintenance Process
|
||||
|
||||
### Documentation Review
|
||||
- Include documentation updates in code reviews
|
||||
- Verify examples still work with code changes
|
||||
- Check for broken links and outdated information
|
||||
- Ensure consistency with current implementation
|
||||
|
||||
### Regular Audits
|
||||
- Monthly review of documentation accuracy
|
||||
- Quarterly assessment of documentation completeness
|
||||
- Annual review of documentation structure and organization
|
||||

207 .cursor/rules/enhanced-task-list.mdc Normal file
@@ -0,0 +1,207 @@
---
|
||||
description: Enhanced task list management with quality gates and iterative workflow integration
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Enhanced Task List Management
|
||||
|
||||
## Goal
|
||||
Manage task lists with integrated quality gates and iterative workflow to prevent context loss and ensure sustainable development.
|
||||
|
||||
## Task Implementation Protocol
|
||||
|
||||
### Pre-Implementation Check
|
||||
Before starting any sub-task:
|
||||
- [ ] **Context Review**: Have you reviewed CONTEXT.md and relevant documentation?
|
||||
- [ ] **Pattern Identification**: Do you understand existing patterns to follow?
|
||||
- [ ] **Integration Planning**: Do you know how this will integrate with existing code?
|
||||
- [ ] **Size Validation**: Is this task small enough (≤50 lines, ≤250 lines per file)?
|
||||
|
||||
### Implementation Process
|
||||
1. **One sub-task at a time**: Do **NOT** start the next sub‑task until you ask the user for permission and they say "yes" or "y"
|
||||
2. **Step-by-step execution**:
|
||||
- Plan the approach in bullet points
|
||||
- Wait for approval
|
||||
- Implement the specific sub-task
|
||||
- Test the implementation
|
||||
- Update documentation if needed
|
||||
3. **Quality validation**: Run through the code review checklist before marking complete
|
||||
|
||||
### Completion Protocol
|
||||
When you finish a **sub‑task**:
|
||||
1. **Immediate marking**: Change `[ ]` to `[x]`
|
||||
2. **Quality check**: Verify the implementation meets quality standards
|
||||
3. **Integration test**: Ensure new code works with existing functionality
|
||||
4. **Documentation update**: Update relevant files if needed
|
||||
5. **Parent task check**: If **all** subtasks underneath a parent task are now `[x]`, also mark the **parent task** as completed
|
||||
6. **Stop and wait**: Get user approval before proceeding to next sub-task
|
||||
|
||||
## Enhanced Task List Structure
|
||||
|
||||
### Task File Header
|
||||
```markdown
|
||||
# Task List: [Feature Name]
|
||||
|
||||
**Source PRD**: `prd-[feature-name].md`
|
||||
**Status**: In Progress / Complete / Blocked
|
||||
**Context Last Updated**: [Date]
|
||||
**Architecture Review**: Required / Complete / N/A
|
||||
|
||||
## Quick Links
|
||||
- [Context Documentation](./CONTEXT.md)
|
||||
- [Architecture Guidelines](./docs/architecture.md)
|
||||
- [Related Files](#relevant-files)
|
||||
```
|
||||
|
||||
### Task Format with Quality Gates
|
||||
```markdown
|
||||
- [ ] 1.0 Parent Task Title
|
||||
- **Quality Gate**: Architecture review required
|
||||
- **Dependencies**: List any dependencies
|
||||
- [ ] 1.1 [Sub-task description 1.1]
|
||||
- **Size estimate**: [Small/Medium/Large]
|
||||
- **Pattern reference**: [Reference to existing pattern]
|
||||
- **Test requirements**: [Unit/Integration/Both]
|
||||
- [ ] 1.2 [Sub-task description 1.2]
|
||||
- **Integration points**: [List affected components]
|
||||
- **Risk level**: [Low/Medium/High]
|
||||
```
|
||||
|
||||
## Relevant Files Management
|
||||
|
||||
### Enhanced File Tracking
|
||||
```markdown
|
||||
## Relevant Files
|
||||
|
||||
### Implementation Files
|
||||
- `path/to/file1.ts` - Brief description of purpose and role
|
||||
- **Status**: Created / Modified / Needs Review
|
||||
- **Last Modified**: [Date]
|
||||
- **Review Status**: Pending / Approved / Needs Changes
|
||||
|
||||
### Test Files
|
||||
- `path/to/file1.test.ts` - Unit tests for file1.ts
|
||||
- **Coverage**: [Percentage or status]
|
||||
- **Last Run**: [Date and result]
|
||||
|
||||
### Documentation Files
|
||||
- `docs/module-name.md` - Module documentation
|
||||
- **Status**: Up to date / Needs update / Missing
|
||||
- **Last Updated**: [Date]
|
||||
|
||||
### Configuration Files
|
||||
- `config/setting.json` - Configuration changes
|
||||
- **Environment**: [Dev/Staging/Prod affected]
|
||||
- **Backup**: [Location of backup]
|
||||
```
|
||||
|
||||
## Task List Maintenance
|
||||
|
||||
### During Development
|
||||
1. **Regular updates**: Update task status after each significant change
|
||||
2. **File tracking**: Add new files as they are created or modified
|
||||
3. **Dependency tracking**: Note when new dependencies between tasks emerge
|
||||
4. **Risk assessment**: Flag tasks that become more complex than anticipated
|
||||
|
||||
### Quality Checkpoints
|
||||
At 25%, 50%, 75%, and 100% completion:
|
||||
- [ ] **Architecture alignment**: Code follows established patterns
|
||||
- [ ] **Performance impact**: No significant performance degradation
|
||||
- [ ] **Security review**: No security vulnerabilities introduced
|
||||
- [ ] **Documentation current**: All changes are documented
|
||||
|
||||
### Weekly Review Process
|
||||
1. **Completion assessment**: What percentage of tasks are actually complete?
|
||||
2. **Quality assessment**: Are completed tasks meeting quality standards?
|
||||
3. **Process assessment**: Is the iterative workflow being followed?
|
||||
4. **Risk assessment**: Are there emerging risks or blockers?
|
||||
|
||||
## Task Status Indicators
|
||||
|
||||
### Status Levels
|
||||
- `[ ]` **Not Started**: Task not yet begun
|
||||
- `[~]` **In Progress**: Currently being worked on
|
||||
- `[?]` **Blocked**: Waiting for dependencies or decisions
|
||||
- `[!]` **Needs Review**: Implementation complete but needs quality review
|
||||
- `[x]` **Complete**: Finished and quality approved
|
||||
|
||||
### Quality Indicators
|
||||
- ✅ **Quality Approved**: Passed all quality gates
|
||||
- ⚠️ **Quality Concerns**: Has issues but functional
|
||||
- ❌ **Quality Failed**: Needs rework before approval
|
||||
- 🔄 **Under Review**: Currently being reviewed
|
||||
|
||||
### Integration Status
|
||||
- 🔗 **Integrated**: Successfully integrated with existing code
|
||||
- 🔧 **Integration Issues**: Problems with existing code integration
|
||||
- ⏳ **Integration Pending**: Ready for integration testing
|
||||
|
||||
## Emergency Procedures
|
||||
|
||||
### When Tasks Become Too Complex
|
||||
If a sub-task grows beyond expected scope:
|
||||
1. **Stop implementation** immediately
|
||||
2. **Document current state** and what was discovered
|
||||
3. **Break down** the task into smaller pieces
|
||||
4. **Update task list** with new sub-tasks
|
||||
5. **Get approval** for the new breakdown before proceeding
|
||||
|
||||
### When Context is Lost
|
||||
If AI seems to lose track of project patterns:
|
||||
1. **Pause development**
|
||||
2. **Review CONTEXT.md** and recent changes
|
||||
3. **Update context documentation** with current state
|
||||
4. **Restart** with explicit pattern references
|
||||
5. **Reduce task size** until context is re-established
|
||||
|
||||
### When Quality Gates Fail
|
||||
If implementation doesn't meet quality standards:
|
||||
1. **Mark task** with `[!]` status
|
||||
2. **Document specific issues** found
|
||||
3. **Create remediation tasks** if needed
|
||||
4. **Don't proceed** until quality issues are resolved
|
||||
|
||||
## AI Instructions Integration
|
||||
|
||||
### Context Awareness Commands
|
||||
```markdown
|
||||
**Before starting any task, run these checks:**
|
||||
1. @CONTEXT.md - Review current project state
|
||||
2. @architecture.md - Understand design principles
|
||||
3. @code-review.md - Know quality standards
|
||||
4. Look at existing similar code for patterns
|
||||
```
|
||||
|
||||
### Quality Validation Commands
|
||||
```markdown
|
||||
**After completing any sub-task:**
|
||||
1. Run code review checklist
|
||||
2. Test integration with existing code
|
||||
3. Update documentation if needed
|
||||
4. Mark task complete only after quality approval
|
||||
```
|
||||
|
||||
### Workflow Commands
|
||||
```markdown
|
||||
**For each development session:**
|
||||
1. Review incomplete tasks and their status
|
||||
2. Identify next logical sub-task to work on
|
||||
3. Check dependencies and blockers
|
||||
4. Follow iterative workflow process
|
||||
5. Update task list with progress and findings
|
||||
```
|
||||
|
||||
## Success Metrics
|
||||
|
||||
### Daily Success Indicators
|
||||
- Tasks are completed according to quality standards
|
||||
- No sub-tasks are started without completing previous ones
|
||||
- File tracking remains accurate and current
|
||||
- Integration issues are caught early
|
||||
|
||||
### Weekly Success Indicators
|
||||
- Overall task completion rate is sustainable
|
||||
- Quality issues are decreasing over time
|
||||
- Context loss incidents are rare
|
||||
- Team confidence in codebase remains high
|
||||

70 .cursor/rules/generate-tasks.mdc Normal file
@@ -0,0 +1,70 @@
---
description: Generate a task list or TODO for a user requirement or implementation.
globs:
alwaysApply: false
---
# Rule: Generating a Task List from a PRD
|
||||
|
||||
## Goal
|
||||
|
||||
To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on an existing Product Requirements Document (PRD). The task list should guide a developer through implementation.
|
||||
|
||||
## Output
|
||||
|
||||
- **Format:** Markdown (`.md`)
|
||||
- **Location:** `/tasks/`
|
||||
- **Filename:** `tasks-[prd-file-name].md` (e.g., `tasks-prd-user-profile-editing.md`)
|
||||
|
||||
## Process
|
||||
|
||||
1. **Receive PRD Reference:** The user points the AI to a specific PRD file
|
||||
2. **Analyze PRD:** The AI reads and analyzes the functional requirements, user stories, and other sections of the specified PRD.
|
||||
3. **Phase 1: Generate Parent Tasks:** Based on the PRD analysis, create the file and generate the main, high-level tasks required to implement the feature. Use your judgement on how many high-level tasks to use. It's likely to be about 5. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on the PRD. Ready to generate the sub-tasks? Respond with 'Go' to proceed."
|
||||
4. **Wait for Confirmation:** Pause and wait for the user to respond with "Go".
|
||||
5. **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks necessary to complete the parent task. Ensure sub-tasks logically follow from the parent task and cover the implementation details implied by the PRD.
|
||||
6. **Identify Relevant Files:** Based on the tasks and PRD, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable.
|
||||
7. **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, and notes into the final Markdown structure.
|
||||
8. **Save Task List:** Save the generated document in the `/tasks/` directory with the filename `tasks-[prd-file-name].md`, where `[prd-file-name]` matches the base name of the input PRD file (e.g., if the input was `prd-user-profile-editing.md`, the output is `tasks-prd-user-profile-editing.md`).
|
||||
|
||||
## Output Format
|
||||
|
||||
The generated task list _must_ follow this structure:
|
||||
|
||||
```markdown
|
||||
## Relevant Files
|
||||
|
||||
- `path/to/potential/file1.ts` - Brief description of why this file is relevant (e.g., Contains the main component for this feature).
|
||||
- `path/to/file1.test.ts` - Unit tests for `file1.ts`.
|
||||
- `path/to/another/file.tsx` - Brief description (e.g., API route handler for data submission).
|
||||
- `path/to/another/file.test.tsx` - Unit tests for `another/file.tsx`.
|
||||
- `lib/utils/helpers.ts` - Brief description (e.g., Utility functions needed for calculations).
|
||||
- `lib/utils/helpers.test.ts` - Unit tests for `helpers.ts`.
|
||||
|
||||
### Notes
|
||||
|
||||
- Unit tests should typically be placed alongside the code files they are testing (e.g., `MyComponent.tsx` and `MyComponent.test.tsx` in the same directory).
|
||||
- Use `npx jest [optional/path/to/test/file]` to run tests. Running without a path executes all tests found by the Jest configuration.
|
||||
|
||||
## Tasks
|
||||
|
||||
- [ ] 1.0 Parent Task Title
|
||||
- [ ] 1.1 [Sub-task description 1.1]
|
||||
- [ ] 1.2 [Sub-task description 1.2]
|
||||
- [ ] 2.0 Parent Task Title
|
||||
- [ ] 2.1 [Sub-task description 2.1]
|
||||
- [ ] 3.0 Parent Task Title (may not require sub-tasks if purely structural or configuration)
|
||||
```
|
||||
|
||||
## Interaction Model
|
||||
|
||||
The process explicitly requires a pause after generating parent tasks to get user confirmation ("Go") before proceeding to generate the detailed sub-tasks. This ensures the high-level plan aligns with user expectations before diving into details.
|
||||
|
||||
## Target Audience
|
||||
|
||||
|
||||
Assume the primary reader of the task list is a **junior developer** who will implement the feature.
|
||||

236 .cursor/rules/iterative-workflow.mdc Normal file
@@ -0,0 +1,236 @@
---
|
||||
description: Iterative development workflow for AI-assisted coding
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Iterative Development Workflow
|
||||
|
||||
## Goal
|
||||
Establish a structured, iterative development process that prevents the chaos and complexity that can arise from uncontrolled AI-assisted development.
|
||||
|
||||
## Development Phases
|
||||
|
||||
### Phase 1: Planning and Design
|
||||
**Before writing any code:**
|
||||
|
||||
1. **Understand the Requirement**
|
||||
- Break down the task into specific, measurable objectives
|
||||
- Identify existing code patterns that should be followed
|
||||
- List dependencies and integration points
|
||||
- Define acceptance criteria
|
||||
|
||||
2. **Design Review**
|
||||
- Propose approach in bullet points
|
||||
- Wait for explicit approval before proceeding
|
||||
- Consider how the solution fits existing architecture
|
||||
- Identify potential risks and mitigation strategies
|
||||
|
||||
### Phase 2: Incremental Implementation
|
||||
**One small piece at a time:**
|
||||
|
||||
1. **Micro-Tasks** (≤ 50 lines each)
|
||||
- Implement one function or small class at a time
|
||||
- Test immediately after implementation
|
||||
- Ensure integration with existing code
|
||||
- Document decisions and patterns used
|
||||
|
||||
2. **Validation Checkpoints**
|
||||
- After each micro-task, verify it works correctly
|
||||
- Check that it follows established patterns
|
||||
- Confirm it integrates cleanly with existing code
|
||||
- Get approval before moving to next micro-task
|
||||
|
||||
### Phase 3: Integration and Testing
|
||||
**Ensuring system coherence:**
|
||||
|
||||
1. **Integration Testing**
|
||||
- Test new code with existing functionality
|
||||
- Verify no regressions in existing features
|
||||
- Check performance impact
|
||||
- Validate error handling
|
||||
|
||||
2. **Documentation Update**
|
||||
- Update relevant documentation
|
||||
- Record any new patterns or decisions
|
||||
- Update context files if architecture changed
|
||||
|
||||
## Iterative Prompting Strategy
|
||||
|
||||
### Step 1: Context Setting
|
||||
```
|
||||
Before implementing [feature], help me understand:
|
||||
1. What existing patterns should I follow?
|
||||
2. What existing functions/classes are relevant?
|
||||
3. How should this integrate with [specific existing component]?
|
||||
4. What are the potential architectural impacts?
|
||||
```
|
||||
|
||||
### Step 2: Plan Creation
|
||||
```
|
||||
Based on the context, create a detailed plan for implementing [feature]:
|
||||
1. Break it into micro-tasks (≤50 lines each)
|
||||
2. Identify dependencies and order of implementation
|
||||
3. Specify integration points with existing code
|
||||
4. List potential risks and mitigation strategies
|
||||
|
||||
Wait for my approval before implementing.
|
||||
```
|
||||
|
||||
### Step 3: Incremental Implementation
|
||||
```
|
||||
Implement only the first micro-task: [specific task]
|
||||
- Use existing patterns from [reference file/function]
|
||||
- Keep it under 50 lines
|
||||
- Include error handling
|
||||
- Add appropriate tests
|
||||
- Explain your implementation choices
|
||||
|
||||
Stop after this task and wait for approval.
|
||||
```
|
||||
|
||||
## Quality Gates
|
||||
|
||||
### Before Each Implementation
|
||||
- [ ] **Purpose is clear**: Can explain what this piece does and why
|
||||
- [ ] **Pattern is established**: Following existing code patterns
|
||||
- [ ] **Size is manageable**: Implementation is small enough to understand completely
|
||||
- [ ] **Integration is planned**: Know how it connects to existing code
|
||||
|
||||
### After Each Implementation
|
||||
- [ ] **Code is understood**: Can explain every line of implemented code
|
||||
- [ ] **Tests pass**: All existing and new tests are passing
|
||||
- [ ] **Integration works**: New code works with existing functionality
|
||||
- [ ] **Documentation updated**: Changes are reflected in relevant documentation
|
||||
|
||||
### Before Moving to Next Task
|
||||
- [ ] **Current task complete**: All acceptance criteria met
|
||||
- [ ] **No regressions**: Existing functionality still works
|
||||
- [ ] **Clean state**: No temporary code or debugging artifacts
|
||||
- [ ] **Approval received**: Explicit go-ahead for next task
|
||||
- [ ] **Documentation updated**: Relevant module documentation reflects any changes made.
|
||||
|
||||
## Anti-Patterns to Avoid
|
||||
|
||||
### Large Block Implementation
|
||||
**Don't:**
|
||||
```
|
||||
Implement the entire user management system with authentication,
|
||||
CRUD operations, and email notifications.
|
||||
```
|
||||
|
||||
**Do:**
|
||||
```
|
||||
First, implement just the User model with basic fields.
|
||||
Stop there and let me review before continuing.
|
||||
```
|
||||
|
||||
### Context Loss
|
||||
**Don't:**
|
||||
```
|
||||
Create a new authentication system.
|
||||
```
|
||||
|
||||
**Do:**
|
||||
```
|
||||
Looking at the existing auth patterns in auth.py, implement
password validation following the same structure as the
existing email validation function.
```

### Over-Engineering
**Don't:**
```
Build a flexible, extensible user management framework that
can handle any future requirements.
```

**Do:**
```
Implement user creation functionality that matches the existing
pattern in customer.py, focusing only on the current requirements.
```

## Progress Tracking

### Task Status Indicators
- 🔄 **In Planning**: Requirements gathering and design
- ⏳ **In Progress**: Currently implementing
- ✅ **Complete**: Implemented, tested, and integrated
- 🚫 **Blocked**: Waiting for decisions or dependencies
- 🔧 **Needs Refactor**: Working but needs improvement

### Weekly Review Process
1. **Progress Assessment**
   - What was completed this week?
   - What challenges were encountered?
   - How well did the iterative process work?

2. **Process Adjustment**
   - Were task sizes appropriate?
   - Did context management work effectively?
   - What improvements can be made?

3. **Architecture Review**
   - Is the code still maintainable?
   - Are patterns staying consistent?
   - Is technical debt accumulating?

## Emergency Procedures

### When Things Go Wrong
If development becomes chaotic or problematic:

1. **Stop Development**
   - Don't continue adding to the problem
   - Take time to assess the situation
   - Don't rush to "fix" with more AI-generated code

2. **Assess the Situation**
   - What specific problems exist?
   - How far has the code diverged from established patterns?
   - What parts are still working correctly?

3. **Recovery Process**
   - Roll back to the last known good state
   - Update context documentation with lessons learned
   - Restart with smaller, more focused tasks
   - Get explicit approval for each step of recovery

### Context Recovery
When the AI seems to lose track of project patterns:

1. **Context Refresh**
   - Review and update CONTEXT.md
   - Include examples of current code patterns
   - Clarify architectural decisions

2. **Pattern Re-establishment**
   - Show the AI examples of existing, working code
   - Explicitly state the patterns to follow
   - Start with very small, pattern-matching tasks

3. **Gradual Re-engagement**
   - Begin with simple, low-risk tasks
   - Verify pattern adherence at each step
   - Gradually increase task complexity as consistency returns

## Success Metrics

### Short-term (Daily)
- Code is understandable and well-integrated
- No major regressions introduced
- Development velocity feels sustainable
- Team confidence in the codebase remains high

### Medium-term (Weekly)
- Technical debt is not accumulating
- New features integrate cleanly
- Development patterns remain consistent
- Documentation stays current

### Long-term (Monthly)
- Codebase remains maintainable as it grows
- New team members can understand and contribute
- AI assistance enhances rather than hinders development
- Architecture remains clean and purposeful
24
.cursor/rules/project.mdc
Normal file
@ -0,0 +1,24 @@
---
description:
globs:
alwaysApply: true
---
# Rule: Project specific rules

## Goal
Unify the project structure and interaction with tools and the console.

### System tools
- **ALWAYS** use UV for package management
- **ALWAYS** use Windows PowerShell commands for the terminal

### Coding patterns
- **ALWAYS** check arguments and method signatures before use to avoid errors from wrong parameters or names
- If in doubt, check the [CONTEXT.md](mdc:CONTEXT.md) file and [architecture.md](mdc:docs/architecture.md)
- **PREFER** the ORM pattern for database access with SQLAlchemy (see the minimal sketch below).
- **DO NOT USE** emoji in code and comments

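A minimal sketch of the preferred ORM style, assuming SQLAlchemy 2.0 is available; the `Trade` model, its columns, and the in-memory SQLite URL are illustrative placeholders, not part of this project:

```python
# Hypothetical model for illustration only; field names are not from this project.
from sqlalchemy import String, create_engine, select
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column


class Base(DeclarativeBase):
    """Shared declarative base for all ORM models."""


class Trade(Base):
    """Example table mapped through the ORM instead of raw SQL."""

    __tablename__ = "trades"

    id: Mapped[int] = mapped_column(primary_key=True)
    symbol: Mapped[str] = mapped_column(String(20))
    price: Mapped[float] = mapped_column()


engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Trade(symbol="BTCUSD", price=65000.0))
    session.commit()
    # The ORM emits a parameterized query; no hand-built SQL strings.
    trades = session.scalars(select(Trade).where(Trade.symbol == "BTCUSD")).all()
```
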
### Testing
- Run tests with UV in the form *uv run pytest [filename]*

237
.cursor/rules/refactoring.mdc
Normal file
@ -0,0 +1,237 @@
---
description: Code refactoring and technical debt management for AI-assisted development
globs:
alwaysApply: false
---

# Rule: Code Refactoring and Technical Debt Management

## Goal
Guide AI in systematic code refactoring to improve maintainability, reduce complexity, and prevent technical debt accumulation in AI-assisted development projects.

## When to Apply This Rule
- Code complexity has increased beyond manageable levels
- Duplicate code patterns are detected
- Performance issues are identified
- New features are difficult to integrate
- Code review reveals maintainability concerns
- Weekly technical debt assessment indicates refactoring needs

## Pre-Refactoring Assessment

Before starting any refactoring, the AI MUST:

1. **Context Analysis:**
   - Review the existing `CONTEXT.md` for architectural decisions
   - Analyze current code patterns and conventions
   - Identify all files that will be affected (search the codebase for usages)
   - Check for existing tests that verify current behavior

2. **Scope Definition:**
   - Clearly define what will and will not be changed
   - Identify the specific refactoring pattern to apply
   - Estimate the blast radius of changes
   - Plan a rollback strategy if needed

3. **Documentation Review:**
   - Check `./docs/` for relevant module documentation
   - Review any existing architectural diagrams
   - Identify dependencies and integration points
   - Note any known constraints or limitations

## Refactoring Process

### Phase 1: Planning and Safety
1. **Create Refactoring Plan:**
   - Document the current state and desired end state
   - Break the refactoring into small, atomic steps
   - Identify tests that must pass throughout the process
   - Plan verification steps for each change

2. **Establish Safety Net:**
   - Ensure comprehensive test coverage exists
   - If tests are missing, create them BEFORE refactoring (a pytest sketch follows Phase 1)
   - Document current behavior that must be preserved
   - Create a backup of the current implementation approach

3. **Get Approval:**
   - Present the refactoring plan to the user
   - Wait for explicit "Go" or "Proceed" confirmation
   - Do NOT start refactoring without approval

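As an illustration of the safety-net step, a characterization test can pin down current behavior before anything is moved. The sketch below assumes the project's dependencies are installed and that the test file sits next to `main.py`; `aggregate_data` is the function from this repository, while the sample frame and expected values are invented for the example:

```python
# Characterization test sketch: pins down current behavior before refactoring.
# Intended to be run with `uv run pytest` from the project root.
import pandas as pd

from main import aggregate_data


def test_aggregate_data_preserves_ohlcv_semantics():
    one_min = pd.DataFrame({
        "Timestamp": pd.date_range("2024-01-01", periods=10, freq="min"),
        "Open": range(10),
        "High": range(1, 11),
        "Low": range(10),
        "Close": range(10),
        "Volume": [1] * 10,
    })

    five_min = aggregate_data(one_min, "5min")

    # Two 5-minute bars are produced and volume is summed per bar.
    assert len(five_min) == 2
    assert five_min["Volume"].tolist() == [5, 5]
    assert five_min["High"].iloc[0] == 5  # max of the first five 1-minute highs
```
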
### Phase 2: Incremental Implementation
4. **One Change at a Time:**
   - Implement ONE refactoring step per iteration
   - Run tests after each step to ensure nothing breaks
   - Update documentation if interfaces change
   - Mark progress in the refactoring plan

5. **Verification Protocol:**
   - Run all relevant tests after each change
   - Verify functionality works as expected
   - Check performance hasn't degraded
   - Ensure no new linting or type errors

6. **User Checkpoint:**
   - After each significant step, pause for user review
   - Present what was changed and the current status
   - Wait for approval before continuing
   - Address any concerns before proceeding

### Phase 3: Completion and Documentation
7. **Final Verification:**
   - Run the full test suite to ensure nothing is broken
   - Verify all original functionality is preserved
   - Check that new code follows project conventions
   - Confirm performance is maintained or improved

8. **Documentation Update:**
   - Update `CONTEXT.md` with new patterns/decisions
   - Update module documentation in `./docs/`
   - Document any new conventions established
   - Note lessons learned for future refactoring

## Common Refactoring Patterns

### Extract Method/Function
```
WHEN: Functions/methods exceed 50 lines or have multiple responsibilities
HOW:
1. Identify logical groupings within the function
2. Extract each group into a well-named helper function
3. Ensure each function has a single responsibility
4. Verify tests still pass
```

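As a sketch of this pattern, a hypothetical report function that loads, transforms, and summarizes in one place can be split into single-purpose helpers (all names below are illustrative, not from this codebase):

```python
# Before: one function loads, filters, and summarizes (multiple responsibilities).
# After: each step is a small, well-named helper; behavior is unchanged.
import pandas as pd


def load_closes(path: str) -> pd.Series:
    """Load the CSV and return the close prices."""
    return pd.read_csv(path)["Close"]


def daily_returns(closes: pd.Series) -> pd.Series:
    """Compute simple percentage returns between consecutive closes."""
    return closes.pct_change().dropna()


def summarize_returns(returns: pd.Series) -> dict:
    """Reduce the return series to the few numbers the report needs."""
    return {"mean": returns.mean(), "std": returns.std(), "worst": returns.min()}


def build_report(path: str) -> dict:
    """Orchestrate the helpers; each one keeps a single responsibility."""
    return summarize_returns(daily_returns(load_closes(path)))
```
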
### Extract Module/Class
```
WHEN: Files exceed 250 lines or handle multiple concerns
HOW:
1. Identify cohesive functionality groups
2. Create new files for each group
3. Move related functions/classes together
4. Update imports and dependencies
5. Verify module boundaries are clean
```

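A sketch of how an oversized file might be split along concern boundaries; the target module names (`indicators.py`, `data_io.py`, `strategy.py`) are hypothetical:

```python
# indicators.py -- pure indicator math, no I/O (hypothetical module).
import pandas as pd


def simple_moving_average(close: pd.Series, window: int) -> pd.Series:
    """Rolling mean of the close prices."""
    return close.rolling(window).mean()


# data_io.py -- loading and resampling only (hypothetical module).
def load_closes(path: str) -> pd.Series:
    """Read a CSV of OHLCV rows and return the close column."""
    return pd.read_csv(path)["Close"]


# strategy.py -- would import from the two modules above instead of one large file:
#   from data_io import load_closes
#   from indicators import simple_moving_average
```
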
### Eliminate Duplication
```
WHEN: Similar code appears in multiple places
HOW:
1. Identify the common pattern or functionality
2. Extract to a shared utility function or module
3. Update all usage sites to use the shared code
4. Ensure the abstraction is not over-engineered
```

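For example, if several call sites format the same progress message, the shared piece can move into one helper (a sketch with illustrative names; the message format mirrors the one printed in `main.py`):

```python
# Shared helper replaces copies of the same formatting logic at each call site.
def progress_message(step: int, total: int) -> str:
    """Return a uniform progress line such as 'Progress: 50.0% (5/10)'."""
    percent = (step / total) * 100
    return f"Progress: {percent:.1f}% ({step}/{total})"


# Usage at two previously duplicated call sites:
print(progress_message(5, 10))
print(progress_message(999, 1000))
```
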
### Improve Data Structures
```
WHEN: Complex nested objects or unclear data flow
HOW:
1. Define clear interfaces/types for data structures
2. Create transformation functions between different representations
3. Ensure data flow is unidirectional where possible
4. Add validation at boundaries
```

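As an illustration, a small typed structure with validation at the boundary can replace a loosely shaped dict; the `Bar` type and `from_row` helper below are hypothetical:

```python
# A typed structure with validation at the boundary where raw data enters.
from dataclasses import dataclass


@dataclass(frozen=True)
class Bar:
    """One OHLC bar; invariants are checked once, at construction time."""

    open: float
    high: float
    low: float
    close: float

    def __post_init__(self) -> None:
        if self.low > self.high:
            raise ValueError("low cannot exceed high")


def from_row(row: dict) -> Bar:
    """Transformation function between the raw dict and the typed representation."""
    return Bar(float(row["Open"]), float(row["High"]), float(row["Low"]), float(row["Close"]))
```
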
### Reduce Coupling
```
WHEN: Modules are tightly interconnected
HOW:
1. Identify dependencies between modules
2. Extract interfaces for external dependencies
3. Use dependency injection where appropriate
4. Ensure modules can be tested in isolation
```

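A minimal dependency-injection sketch, assuming a narrow `PriceSource` interface so consumers can be tested with an in-memory stub instead of real data files (all names are hypothetical):

```python
# The consumer depends on a narrow interface, not on a concrete CSV loader,
# so tests can inject an in-memory stub.
from typing import Protocol

import pandas as pd


class PriceSource(Protocol):
    def closes(self) -> pd.Series: ...


class CsvPriceSource:
    """Production implementation backed by a CSV file."""

    def __init__(self, path: str) -> None:
        self._path = path

    def closes(self) -> pd.Series:
        return pd.read_csv(self._path)["Close"]


class StubPriceSource:
    """Test double used to exercise consumers in isolation."""

    def closes(self) -> pd.Series:
        return pd.Series([100.0, 101.0, 99.5])


def max_drawup(source: PriceSource) -> float:
    """Example consumer that only knows about the injected interface."""
    closes = source.closes()
    return float(closes.max() - closes.iloc[0])


assert max_drawup(StubPriceSource()) == 1.0
```
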
## Quality Gates

Every refactoring must pass these gates:

### Technical Quality
- [ ] All existing tests pass
- [ ] No new linting errors introduced
- [ ] Code follows established project conventions
- [ ] No performance regression detected
- [ ] File sizes remain under 250 lines (a checker sketch follows this list)
- [ ] Function sizes remain under 50 lines

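A small helper one could run locally to check the two size gates above; the 250/50 limits are the ones this rule set already uses, while the script itself is an illustrative sketch:

```python
# Report files over 250 lines and functions over 50 lines (sketch, stdlib only).
import ast
import pathlib

MAX_FILE_LINES = 250
MAX_FUNC_LINES = 50


def check_limits(root: str = ".") -> list[str]:
    """Return human-readable violations of the file and function size gates."""
    problems = []
    for path in pathlib.Path(root).rglob("*.py"):
        source = path.read_text(encoding="utf-8")
        lines = source.splitlines()
        if len(lines) > MAX_FILE_LINES:
            problems.append(f"{path}: {len(lines)} lines (> {MAX_FILE_LINES})")
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                length = node.end_lineno - node.lineno + 1
                if length > MAX_FUNC_LINES:
                    problems.append(f"{path}:{node.lineno} {node.name}: {length} lines")
    return problems


if __name__ == "__main__":
    for problem in check_limits():
        print(problem)
```
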
### Maintainability
- [ ] Code is more readable than before
- [ ] Duplicated code has been reduced
- [ ] Module responsibilities are clearer
- [ ] Dependencies are explicit and minimal
- [ ] Error handling is consistent

### Documentation
- [ ] Public interfaces are documented
- [ ] Complex logic has explanatory comments
- [ ] Architectural decisions are recorded
- [ ] Examples are provided where helpful

## AI Instructions for Refactoring

1. **Always ask for permission** before starting any refactoring work
2. **Start with tests** - ensure comprehensive coverage before changing code
3. **Work incrementally** - make small changes and verify each step
4. **Preserve behavior** - functionality must remain exactly the same
5. **Update documentation** - keep all docs current with changes
6. **Follow conventions** - maintain consistency with the existing codebase
7. **Stop and ask** if any step fails or produces unexpected results
8. **Explain changes** - clearly communicate what was changed and why

## Anti-Patterns to Avoid

### Over-Engineering
- Don't create abstractions for code that isn't duplicated
- Avoid complex inheritance hierarchies
- Don't optimize prematurely

### Breaking Changes
- Never change public APIs without explicit approval
- Don't remove functionality, even if it seems unused
- Avoid changing behavior "while we're here"

### Scope Creep
- Stick to the defined refactoring scope
- Don't add new features during refactoring
- Resist the urge to "improve" unrelated code

## Success Metrics

Track these metrics to ensure refactoring effectiveness:

### Code Quality
- Reduced cyclomatic complexity
- Lower code duplication percentage
- Improved test coverage
- Fewer linting violations

### Developer Experience
- Faster time to understand code
- Easier integration of new features
- Reduced bug introduction rate
- Higher developer confidence in changes

### Maintainability
- Clearer module boundaries
- More predictable behavior
- Easier debugging and troubleshooting
- Better performance characteristics

## Output Files

When refactoring is complete, update:
- `refactoring-log-[date].md` - Document what was changed and why
- `CONTEXT.md` - Update with new patterns and decisions
- `./docs/` - Update relevant module documentation
- Task lists - Mark refactoring tasks as complete

## Final Verification

Before marking refactoring complete:
1. Run the full test suite and verify all tests pass
2. Check that code follows all project conventions
3. Verify documentation is up to date
4. Confirm the user is satisfied with the results
5. Record lessons learned for future refactoring efforts
44
.cursor/rules/task-list.mdc
Normal file
@ -0,0 +1,44 @@
---
description: TODO list task implementation
globs:
alwaysApply: false
---
# Task List Management

Guidelines for managing task lists in markdown files to track progress on completing a PRD.

## Task Implementation
- **One sub-task at a time:** Do **NOT** start the next sub‑task until you ask the user for permission and they say “yes” or "y"
- **Completion protocol:**
  1. When you finish a **sub‑task**, immediately mark it as completed by changing `[ ]` to `[x]`.
  2. If **all** subtasks underneath a parent task are now `[x]`, also mark the **parent task** as completed.
- Stop after each sub‑task and wait for the user’s go‑ahead.

## Task List Maintenance

1. **Update the task list as you work:**
   - Mark tasks and subtasks as completed (`[x]`) per the protocol above.
   - Add new tasks as they emerge.

2. **Maintain the “Relevant Files” section:**
   - List every file created or modified.
   - Give each file a one‑line description of its purpose.

## AI Instructions

When working with task lists, the AI must:

1. Regularly update the task list file after finishing any significant work.
2. Follow the completion protocol:
   - Mark each finished **sub‑task** `[x]`.
   - Mark the **parent task** `[x]` once **all** its subtasks are `[x]`.
3. Add newly discovered tasks.
4. Keep “Relevant Files” accurate and up to date.
5. Before starting work, check which sub‑task is next.
6. After implementing a sub‑task, update the file and then pause for user approval.
1
.python-version
Normal file
@ -0,0 +1 @@
3.12
16
.vscode/launch.json
vendored
Normal file
@ -0,0 +1,16 @@
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [

        {
            "name": "Python Debugger: main.py",
            "type": "debugpy",
            "request": "launch",
            "program": "main.py",
            "console": "integratedTerminal"
        }
    ]
}
1
OHLCVPredictor
Symbolic link
@ -0,0 +1 @@
../OHLCVPredictor
203
main.py
Normal file
@ -0,0 +1,203 @@
import pandas as pd
import numpy as np
from ta.volatility import AverageTrueRange


def load_data(since):
    """Load 1-minute BTC/USD candles from CSV and keep rows from `since` onward."""
    df = pd.read_csv('../data/btcusd_1-min_data.csv')
    df['Timestamp'] = pd.to_datetime(df['Timestamp'], unit='s')
    df = df[df['Timestamp'] >= pd.Timestamp(since)]
    return df


def aggregate_data(df, timeframe):
    """Resample 1-minute OHLCV data to the given timeframe (e.g. '5min')."""
    df = df.set_index('Timestamp')
    df = df.resample(timeframe).agg({
        'Open': 'first',
        'High': 'max',
        'Low': 'min',
        'Close': 'last',
        'Volume': 'sum'
    })
    df = df.reset_index()
    return df


def calculate_okx_taker_maker_fee(amount, is_maker=False):
    """Return the OKX trading fee for a notional amount (maker 0.08%, taker 0.10%)."""
    fee_rate = 0.0008 if is_maker else 0.0010
    return amount * fee_rate


def calculate_supertrend(df, period, multiplier):
    """
    Calculate the Supertrend indicator for a given period and multiplier.

    Args:
        df (pd.DataFrame): DataFrame with 'High', 'Low', 'Close' columns.
        period (int): ATR period.
        multiplier (float): Multiplier for ATR.

    Returns:
        pd.Series: Supertrend values.
    """
    high = df['High'].values
    low = df['Low'].values
    close = df['Close'].values
    atr = AverageTrueRange(df['High'], df['Low'], df['Close'], window=period).average_true_range().values

    # Basic bands: median price shifted up/down by a multiple of the ATR
    hl2 = (high + low) / 2
    upperband = hl2 + (multiplier * atr)
    lowerband = hl2 - (multiplier * atr)

    supertrend = np.full_like(close, np.nan)
    in_uptrend = True

    supertrend[0] = upperband[0]

    for i in range(1, len(close)):
        if close[i] > upperband[i-1]:
            in_uptrend = True
        elif close[i] < lowerband[i-1]:
            in_uptrend = False
        # else, keep previous trend

        if in_uptrend:
            supertrend[i] = max(lowerband[i], supertrend[i-1] if not np.isnan(supertrend[i-1]) else lowerband[i])
        else:
            supertrend[i] = min(upperband[i], supertrend[i-1] if not np.isnan(supertrend[i-1]) else upperband[i])

    return pd.Series(supertrend, index=df.index)


def add_supertrend_indicators(df):
    """
    Adds Supertrend indicators to the dataframe for the specified (period, multiplier) pairs.

    Args:
        df (pd.DataFrame): DataFrame with columns 'High', 'Low', 'Close'.

    Returns:
        pd.DataFrame: DataFrame with new Supertrend columns added.
    """
    supertrend_params = [(12, 3.0), (10, 1.0), (11, 2.0)]
    for period, multiplier in supertrend_params:
        try:
            st_col = f'supertrend_{period}_{multiplier}'
            df[st_col] = calculate_supertrend(df, period, multiplier)
        except Exception as e:
            print(f"Error calculating Supertrend {period}, {multiplier}: {e}")
            df[f'supertrend_{period}_{multiplier}'] = np.nan
    return df


def precompute_1min_slice_indices(df_aggregated, df_1min):
    """
    Precompute start and end indices for each aggregated bar using searchsorted.

    Returns a list of (start_idx, end_idx) tuples for fast iloc slicing, together
    with the argsort order that was used to sort the 1-minute timestamps.
    """
    timestamps = df_aggregated['Timestamp'].values
    one_min_timestamps = df_1min['Timestamp'].values
    # Ensure the 1-minute timestamps are sorted
    sorted_1min = np.argsort(one_min_timestamps)
    one_min_timestamps = one_min_timestamps[sorted_1min]
    indices = []
    for i in range(1, len(timestamps)):
        start, end = timestamps[i-1], timestamps[i]
        # Find slice bounds with searchsorted (side='right' for both start and end)
        start_idx = np.searchsorted(one_min_timestamps, start, side='right')
        end_idx = np.searchsorted(one_min_timestamps, end, side='right')
        indices.append((start_idx, end_idx))
    return indices, sorted_1min


def backtest(df_aggregated, df_1min, stop_loss_pct, progress_step=1000):
    """
    Backtest trading strategy based on Supertrend indicators with trailing stop loss.

    Buys when all three Supertrend columns are positive (>0),
    sells when any is negative (<0), or when trailing stop loss is hit.

    Args:
        df_aggregated (pd.DataFrame): Aggregated OHLCV data with Supertrend columns.
        df_1min (pd.DataFrame): 1-minute OHLCV data.
        stop_loss_pct (float): Trailing stop loss percentage (e.g., 0.02 for 2%).
        progress_step (int): Step interval for progress display.
    """
    required_st_cols = ["supertrend_12_3.0", "supertrend_10_1.0", "supertrend_11_2.0"]
    for col in required_st_cols:
        if col not in df_aggregated.columns:
            raise ValueError(f"Missing required Supertrend column: {col}")

    # Precompute 1-min slice indices for each aggregated bar
    slice_indices, sorted_1min = precompute_1min_slice_indices(df_aggregated, df_1min)
    df_1min_sorted = df_1min.iloc[sorted_1min].reset_index(drop=True)

    in_position = False
    init_usd = 1000
    usd = init_usd
    coin = 0
    highest_price = None
    nb_stop_loss = 0

    total_steps = len(df_aggregated) - 1
    for i in range(1, len(df_aggregated)):
        st_vals = [df_aggregated[col][i] for col in required_st_cols]
        all_positive = all(val > 0 for val in st_vals)
        any_negative = any(val < 0 for val in st_vals)
        close_price = df_aggregated['Close'][i]

        # Buy condition: all Supertrend values positive
        if not in_position and all_positive:
            in_position = True
            coin = usd / close_price
            usd = 0
            highest_price = close_price
        # If in position, update highest price and check stop loss on 1-min data
        elif in_position:
            # Update highest price if new high on aggregated bar
            if close_price > highest_price:
                highest_price = close_price

            # Use precomputed indices for this bar
            start_idx, end_idx = slice_indices[i-1]
            df_1min_slice = df_1min_sorted.iloc[start_idx:end_idx]

            stop_triggered = False
            for _, row in df_1min_slice.iterrows():
                # Update highest price if new high in 1-min bar
                if row['Close'] > highest_price:
                    highest_price = row['Close']
                # Trailing stop loss condition on 1-min close
                if row['Close'] < highest_price * (1 - stop_loss_pct):
                    in_position = False
                    usd = coin * row['Close']
                    coin = 0
                    # print(f"Stop loss triggered at {row['Close']:.2f} on {row['Timestamp']}")
                    nb_stop_loss += 1
                    highest_price = None
                    stop_triggered = True
                    break

            # If stop loss was triggered, skip further checks for this bar
            if stop_triggered:
                continue

            # Sell condition: any Supertrend value negative (on aggregated bar close)
            if any_negative:
                in_position = False
                usd = coin * close_price
                coin = 0
                highest_price = None

        if i % progress_step == 0 or i == total_steps:
            percent = (i / total_steps) * 100
            print(f"Progress: {percent:.1f}% ({i}/{total_steps})")

    # Mark any position still open at the end to market so profit reflects total equity
    if in_position:
        usd = coin * df_aggregated['Close'].iloc[-1]
        coin = 0

    print(f"Total profit: {usd - init_usd}")
    print(f"Number of stop losses: {nb_stop_loss}")


if __name__ == "__main__":
    df_1min = load_data('2020-01-01')
    df_aggregated = aggregate_data(df_1min, '5min')

    # Add Supertrend indicators
    df_aggregated = add_supertrend_indicators(df_aggregated)

    df_aggregated['log_return'] = np.log(df_aggregated['Close'] / df_aggregated['Close'].shift(1))

    # Example: 2% trailing stop loss
    backtest(df_aggregated, df_1min, stop_loss_pct=0.02)
7
pyproject.toml
Normal file
@ -0,0 +1,7 @@
[project]
name = "lowkey-backtest"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = []