Initialize granular-backtest package with core modules and CLI functionality
This commit is contained in:
parent b80a8f881b
commit 69c5cd6236

61  .cursor/rules/always-global.mdc  Normal file
@@ -0,0 +1,61 @@
---
description: Global development standards and AI interaction principles
globs:
alwaysApply: true
---

# Rule: Always Apply - Global Development Standards

## AI Interaction Principles

### Step-by-Step Development
- **NEVER** generate large blocks of code without explanation
- **ALWAYS** present your plan as a concise bullet list and wait for explicit confirmation before proceeding
- Break complex tasks into smaller, manageable pieces (≤250 lines per file, ≤50 lines per function)
- Explain your reasoning step-by-step before writing code
- Wait for explicit approval before moving to the next sub-task

### Context Awareness
- **ALWAYS** reference existing code patterns and data structures before suggesting new approaches
- Ask about existing conventions before implementing new functionality
- Preserve established architectural decisions unless explicitly asked to change them
- Maintain consistency with existing naming conventions and code style

## Code Quality Standards

### File and Function Limits
- **Maximum file size**: 250 lines
- **Maximum function size**: 50 lines
- **Maximum complexity**: If a function does more than one main thing, break it down
- **Naming**: Use clear, descriptive names that explain purpose

### Documentation Requirements
- **Every public function** must have a docstring explaining purpose, parameters, and return value
- **Every class** must have a class-level docstring
- **Complex logic** must have inline comments explaining the "why", not just the "what"
- **API endpoints** must be documented with request/response examples

### Error Handling
- **ALWAYS** include proper error handling for external dependencies
- **NEVER** use bare except clauses
- Provide meaningful error messages that help with debugging
- Log errors appropriately for the application context
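
Applied to a hypothetical external dependency, these rules look roughly like this (`fetch_config`, its exception mapping, and the message wording are illustrative, not project code):

```python
import json
import logging
import urllib.error
import urllib.request

logger = logging.getLogger(__name__)

def fetch_config(url: str) -> dict:
    """Load JSON configuration from an external source, failing loudly and clearly."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)
    except urllib.error.URLError as exc:  # specific exceptions, never a bare `except:`
        logger.error("Config fetch failed for %s: %s", url, exc)
        raise RuntimeError(f"Could not reach config source at {url}") from exc
    except json.JSONDecodeError as exc:
        logger.error("Config at %s is not valid JSON: %s", url, exc)
        raise ValueError(f"Malformed config payload from {url}") from exc
```

Each failure mode gets its own exception type, a logged message with context, and a chained cause (`from exc`) so debugging information is never swallowed.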

## Security and Best Practices
- **NEVER** hardcode credentials, API keys, or sensitive data
- **ALWAYS** validate user inputs
- Use parameterized queries for database operations
- Follow the principle of least privilege
- Implement proper authentication and authorization
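
The parameterized-query rule, sketched with Python's built-in `sqlite3` (table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("john@example.com",))

# Good: user input is bound as a parameter -- the driver escapes it
email = "john@example.com' OR '1'='1"
row = conn.execute("SELECT id FROM users WHERE email = ?", (email,)).fetchone()
assert row is None  # the injection attempt matches nothing

# Bad (injectable): f"SELECT id FROM users WHERE email = '{email}'"
```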

## Testing Requirements
- **Every implementation** should have corresponding unit tests
- **Every API endpoint** should have integration tests
- Test files should be placed alongside the code they test
- Use descriptive test names that explain what is being tested
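
A sketch of the naming convention, assuming a hypothetical `apply_discount` function under test:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Good: the name states the scenario and the expected outcome
def test_apply_discount_rejects_percent_above_100():
    try:
        apply_discount(50.0, 150.0)
        assert False, "expected ValueError"
    except ValueError:
        pass

# Bad: `def test_discount()` says nothing about what is being verified
```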

## Response Format
- Be concise and avoid unnecessary repetition
- Focus on actionable information
- Provide examples when explaining complex concepts
- Ask clarifying questions when requirements are ambiguous
237  .cursor/rules/architecture.mdc  Normal file
@@ -0,0 +1,237 @@
---
description: Modular design principles and architecture guidelines for scalable development
globs:
alwaysApply: false
---

# Rule: Architecture and Modular Design

## Goal
Maintain a clean, modular architecture that scales effectively and prevents the complexity issues that arise in AI-assisted development.

## Core Architecture Principles

### 1. Modular Design
- **Single Responsibility**: Each module has one clear purpose
- **Loose Coupling**: Modules depend on interfaces, not implementations
- **High Cohesion**: Related functionality is grouped together
- **Clear Boundaries**: Module interfaces are well-defined and stable

### 2. Size Constraints
- **Files**: Maximum 250 lines per file
- **Functions**: Maximum 50 lines per function
- **Classes**: Maximum 300 lines per class
- **Modules**: Maximum 10 public functions/classes per module

### 3. Dependency Management
- **Layer Dependencies**: Higher layers depend on lower layers only
- **No Circular Dependencies**: Modules cannot depend on each other cyclically
- **Interface Segregation**: Depend on specific interfaces, not broad ones
- **Dependency Injection**: Pass dependencies rather than creating them internally
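
The injection rule in one contrast (class names are illustrative; the clock pair stands in for any external dependency):

```python
import datetime

class SystemClock:
    def now(self) -> datetime.datetime:
        return datetime.datetime.now()

# Bad: creates its dependency internally -- tightly coupled, hard to test
class ReportBad:
    def __init__(self):
        self._clock = SystemClock()

# Good: receives the dependency -- a fake clock can be injected in tests
class Report:
    def __init__(self, clock):
        self._clock = clock

    def header(self) -> str:
        return f"Report generated {self._clock.now():%Y-%m-%d}"

class FixedClock:
    def now(self) -> datetime.datetime:
        return datetime.datetime(2024, 1, 2)
```

In production code, `Report(SystemClock())`; in tests, `Report(FixedClock())` makes the output deterministic.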

## Modular Architecture Patterns

### Layer Structure
```
src/
├── presentation/    # UI, API endpoints, CLI interfaces
├── application/     # Business logic, use cases, workflows
├── domain/          # Core business entities and rules
├── infrastructure/  # Database, external APIs, file systems
└── shared/          # Common utilities, constants, types
```

### Module Organization
```
module_name/
├── __init__.py   # Public interface exports
├── core.py       # Main module logic
├── types.py      # Type definitions and interfaces
├── utils.py      # Module-specific utilities
├── tests/        # Module tests
└── README.md     # Module documentation
```

## Design Patterns for AI Development

### 1. Repository Pattern
Separate data access from business logic:

```python
# Domain interface
class UserRepository:
    def get_by_id(self, user_id: str) -> User: ...
    def save(self, user: User) -> None: ...

# Infrastructure implementation
class SqlUserRepository(UserRepository):
    def get_by_id(self, user_id: str) -> User:
        # Database-specific implementation
        pass
```

### 2. Service Pattern
Encapsulate business logic in focused services:

```python
class UserService:
    def __init__(self, user_repo: UserRepository):
        self._user_repo = user_repo

    def create_user(self, data: UserData) -> User:
        # Validation and business logic
        # Single responsibility: user creation
        pass
```

### 3. Factory Pattern
Create complex objects with clear interfaces:

```python
class DatabaseFactory:
    @staticmethod
    def create_connection(config: DatabaseConfig) -> Connection:
        # Handle different database types
        # Encapsulate connection complexity
        pass
```

## Architecture Decision Guidelines

### When to Create New Modules
Create a new module when:
- **Functionality** exceeds size constraints (250 lines)
- **Responsibility** is distinct from existing modules
- **Dependencies** would create circular references
- **Reusability** would benefit other parts of the system
- **Testing** requires isolated test environments

### When to Split Existing Modules
Split modules when:
- **File size** exceeds 250 lines
- **Multiple responsibilities** are evident
- **Testing** becomes difficult due to complexity
- **Dependencies** become too numerous
- **Change frequency** differs significantly between parts

### Module Interface Design
```python
# Good: Clear, focused interface
class PaymentProcessor:
    def process_payment(self, amount: Money, method: PaymentMethod) -> PaymentResult:
        """Process a single payment transaction."""
        pass

# Bad: Unfocused, kitchen-sink interface
class PaymentManager:
    def process_payment(self, ...): pass
    def validate_card(self, ...): pass
    def send_receipt(self, ...): pass
    def update_inventory(self, ...): pass  # Wrong responsibility!
```

## Architecture Validation

### Architecture Review Checklist
- [ ] **Dependencies flow in one direction** (no cycles)
- [ ] **Layers are respected** (presentation doesn't call infrastructure directly)
- [ ] **Modules have single responsibility**
- [ ] **Interfaces are stable** and well-defined
- [ ] **Size constraints** are maintained
- [ ] **Testing** is straightforward for each module

### Red Flags
- **God Objects**: Classes/modules that do too many things
- **Circular Dependencies**: Modules that depend on each other
- **Deep Inheritance**: More than 3 levels of inheritance
- **Large Interfaces**: Interfaces with more than 7 methods
- **Tight Coupling**: Modules that know too much about each other's internals

## Refactoring Guidelines

### When to Refactor
- Module exceeds size constraints
- Code duplication across modules
- Difficult to test individual components
- New features require changing multiple unrelated modules
- Performance bottlenecks due to poor separation

### Refactoring Process
1. **Identify** the specific architectural problem
2. **Design** the target architecture
3. **Create tests** to verify current behavior
4. **Implement changes** incrementally
5. **Validate** that tests still pass
6. **Update documentation** to reflect changes

### Safe Refactoring Practices
- **One change at a time**: Don't mix refactoring with new features
- **Tests first**: Ensure comprehensive test coverage before refactoring
- **Incremental changes**: Small steps with verification at each stage
- **Backward compatibility**: Maintain existing interfaces during transition
- **Documentation updates**: Keep architecture documentation current

## Architecture Documentation

### Architecture Decision Records (ADRs)
Document significant decisions in `./docs/decisions/`:

```markdown
# ADR-003: Service Layer Architecture

## Status
Accepted

## Context
As the application grows, business logic is scattered across controllers and models.

## Decision
Implement a service layer to encapsulate business logic.

## Consequences
**Positive:**
- Clear separation of concerns
- Easier testing of business logic
- Better reusability across different interfaces

**Negative:**
- Additional abstraction layer
- More files to maintain
```

### Module Documentation Template
````markdown
# Module: [Name]

## Purpose
What this module does and why it exists.

## Dependencies
- **Imports from**: List of modules this depends on
- **Used by**: List of modules that depend on this one
- **External**: Third-party dependencies

## Public Interface
```python
# Key functions and classes exposed by this module
```

## Architecture Notes
- Design patterns used
- Important architectural decisions
- Known limitations or constraints
````

## Migration Strategies

### Legacy Code Integration
- **Strangler Fig Pattern**: Gradually replace old code with new modules
- **Adapter Pattern**: Create interfaces to integrate old and new code
- **Facade Pattern**: Simplify complex legacy interfaces

### Gradual Modernization
1. **Identify boundaries** in existing code
2. **Extract modules** one at a time
3. **Create interfaces** for each extracted module
4. **Test thoroughly** at each step
5. **Update documentation** continuously
123  .cursor/rules/code-review.mdc  Normal file
@@ -0,0 +1,123 @@
---
description: AI-generated code review checklist and quality assurance guidelines
globs:
alwaysApply: false
---

# Rule: Code Review and Quality Assurance

## Goal
Establish systematic review processes for AI-generated code to maintain quality, security, and maintainability standards.

## AI Code Review Checklist

### Pre-Implementation Review
Before accepting any AI-generated code:

1. **Understand the Code**
   - [ ] Can you explain what the code does in your own words?
   - [ ] Do you understand each function and its purpose?
   - [ ] Are there any "magic" values or unexplained logic?
   - [ ] Does the code solve the actual problem stated?

2. **Architecture Alignment**
   - [ ] Does the code follow established project patterns?
   - [ ] Is it consistent with existing data structures?
   - [ ] Does it integrate cleanly with existing components?
   - [ ] Are new dependencies justified and necessary?

3. **Code Quality**
   - [ ] Are functions smaller than 50 lines?
   - [ ] Are files smaller than 250 lines?
   - [ ] Are variable and function names descriptive?
   - [ ] Is the code DRY (Don't Repeat Yourself)?

### Security Review
- [ ] **Input Validation**: All user inputs are validated and sanitized
- [ ] **Authentication**: Proper authentication checks are in place
- [ ] **Authorization**: Access controls are implemented correctly
- [ ] **Data Protection**: Sensitive data is handled securely
- [ ] **SQL Injection**: Database queries use parameterized statements
- [ ] **XSS Prevention**: Output is properly escaped
- [ ] **Error Handling**: Errors don't leak sensitive information

### Integration Review
- [ ] **Existing Functionality**: New code doesn't break existing features
- [ ] **Data Consistency**: Database changes maintain referential integrity
- [ ] **API Compatibility**: Changes don't break existing API contracts
- [ ] **Performance Impact**: New code doesn't introduce performance bottlenecks
- [ ] **Testing Coverage**: Appropriate tests are included

## Review Process

### Step 1: Initial Code Analysis
1. **Read through the entire generated code** before running it
2. **Identify patterns** that don't match existing codebase
3. **Check dependencies** - are new packages really needed?
4. **Verify logic flow** - does the algorithm make sense?

### Step 2: Security and Error Handling Review
1. **Trace data flow** from input to output
2. **Identify potential failure points** and verify error handling
3. **Check for security vulnerabilities** using the security checklist
4. **Verify proper logging** and monitoring implementation

### Step 3: Integration Testing
1. **Test with existing code** to ensure compatibility
2. **Run existing test suite** to verify no regressions
3. **Test edge cases** and error conditions
4. **Verify performance** under realistic conditions

## Common AI Code Issues to Watch For

### Overcomplication Patterns
- **Unnecessary abstractions**: AI creating complex patterns for simple tasks
- **Over-engineering**: Solutions that are more complex than needed
- **Redundant code**: AI recreating existing functionality
- **Inappropriate design patterns**: Using patterns that don't fit the use case

### Context Loss Indicators
- **Inconsistent naming**: Different conventions from existing code
- **Wrong data structures**: Using different patterns than established
- **Ignored existing functions**: Reimplementing existing functionality
- **Architectural misalignment**: Code that doesn't fit the overall design

### Technical Debt Indicators
- **Magic numbers**: Hardcoded values without explanation
- **Poor error messages**: Generic or unhelpful error handling
- **Missing documentation**: Code without adequate comments
- **Tight coupling**: Components that are too interdependent
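
The magic-number indicator, for instance, in a hypothetical retry helper (the specific values and names are illustrative):

```python
# Bad: unexplained literals
def wait_bad(attempt: int) -> float:
    return min(0.5 * 2 ** attempt, 30)

# Good: each value is named and its role is documented
BASE_DELAY_SECONDS = 0.5   # first retry waits half a second
BACKOFF_FACTOR = 2         # each retry doubles the wait
MAX_DELAY_SECONDS = 30     # cap so retries never stall for minutes

def retry_delay(attempt: int) -> float:
    """Exponential backoff delay for the given retry attempt (0-based)."""
    return min(BASE_DELAY_SECONDS * BACKOFF_FACTOR ** attempt, MAX_DELAY_SECONDS)
```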

## Quality Gates

### Mandatory Reviews
All AI-generated code must pass these gates before acceptance:

1. **Security Review**: No security vulnerabilities detected
2. **Integration Review**: Integrates cleanly with existing code
3. **Performance Review**: Meets performance requirements
4. **Maintainability Review**: Code can be easily modified by team members
5. **Documentation Review**: Adequate documentation is provided

### Acceptance Criteria
- [ ] Code is understandable by any team member
- [ ] Integration requires minimal changes to existing code
- [ ] Security review passes all checks
- [ ] Performance meets established benchmarks
- [ ] Documentation is complete and accurate

## Rejection Criteria
Reject AI-generated code if:
- Security vulnerabilities are present
- Code is too complex for the problem being solved
- Integration requires major refactoring of existing code
- Code duplicates existing functionality without justification
- Documentation is missing or inadequate

## Review Documentation
For each review, document:
- Issues found and how they were resolved
- Performance impact assessment
- Security concerns and mitigations
- Integration challenges and solutions
- Recommendations for future similar tasks
93  .cursor/rules/context-management.mdc  Normal file
@@ -0,0 +1,93 @@
---
description: Context management for maintaining codebase awareness and preventing context drift
globs:
alwaysApply: false
---

# Rule: Context Management

## Goal
Maintain comprehensive project context to prevent context drift and ensure AI-generated code integrates seamlessly with existing codebase patterns and architecture.

## Context Documentation Requirements

### PRD.md File Documentation
1. **Project Overview**
   - Business objectives and goals
   - Target users and use cases
   - Key success metrics

### CONTEXT.md File Structure
Every project must maintain a `CONTEXT.md` file in the root directory with:

1. **Architecture Overview**
   - High-level system architecture
   - Key design patterns used
   - Database schema overview
   - API structure and conventions

2. **Technology Stack**
   - Programming languages and versions
   - Frameworks and libraries
   - Database systems
   - Development and deployment tools

3. **Coding Conventions**
   - Naming conventions
   - File organization patterns
   - Code structure preferences
   - Import/export patterns

4. **Current Implementation Status**
   - Completed features
   - Work in progress
   - Known technical debt
   - Planned improvements
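
A minimal `CONTEXT.md` skeleton following the structure above (all section contents are illustrative placeholders):

```markdown
# Project Context

## Architecture Overview
Layered design: CLI -> services -> repositories. See docs for details.

## Technology Stack
- Language and version, key frameworks, database, deployment tooling

## Coding Conventions
- Naming scheme, file layout, import style

## Current Implementation Status
- Done: ... / In progress: ... / Known debt: ... / Planned: ...
```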

## Context Maintenance Protocol

### Before Every Coding Session
1. **Review CONTEXT.md and PRD.md** to understand current project state
2. **Scan recent changes** in git history to understand latest patterns
3. **Identify existing patterns** for similar functionality before implementing new features
4. **Ask for clarification** if existing patterns are unclear or conflicting

### During Development
1. **Reference existing code** when explaining implementation approaches
2. **Maintain consistency** with established patterns and conventions
3. **Update CONTEXT.md** when making architectural decisions
4. **Document deviations** from established patterns with reasoning

### Context Preservation Strategies
- **Incremental development**: Build on existing patterns rather than creating new ones
- **Pattern consistency**: Use established data structures and function signatures
- **Integration awareness**: Consider how new code affects existing functionality
- **Dependency management**: Understand existing dependencies before adding new ones

## Context Prompting Best Practices

### Effective Context Sharing
- Include relevant sections of CONTEXT.md in prompts for complex tasks
- Reference specific existing files when asking for similar functionality
- Provide examples of existing patterns when requesting new implementations
- Share recent git commit messages to understand latest changes

### Context Window Optimization
- Prioritize most relevant context for current task
- Use @filename references to include specific files
- Break large contexts into focused, task-specific chunks
- Update context references as project evolves

## Red Flags - Context Loss Indicators
- AI suggests patterns that conflict with existing code
- New implementations ignore established conventions
- Proposed solutions don't integrate with existing architecture
- Code suggestions require significant refactoring of existing functionality

## Recovery Protocol
When context loss is detected:
1. **Stop development** and review CONTEXT.md
2. **Analyze existing codebase** for established patterns
3. **Update context documentation** with missing information
4. **Restart task** with proper context provided
5. **Test integration** with existing code before proceeding
67  .cursor/rules/create-prd.mdc  Normal file
@@ -0,0 +1,67 @@
---
description: Creating PRD for a project or specific task/function
globs:
alwaysApply: false
---

# Rule: Generating a Product Requirements Document (PRD)

## Goal

To guide an AI assistant in creating a detailed Product Requirements Document (PRD) in Markdown format, based on an initial user prompt. The PRD should be clear, actionable, and suitable for a junior developer to understand and implement the feature.

## Process

1. **Receive Initial Prompt:** The user provides a brief description or request for a new feature or functionality.
2. **Ask Clarifying Questions:** Before writing the PRD, the AI *must* ask clarifying questions to gather sufficient detail. The goal is to understand the "what" and "why" of the feature, not necessarily the "how" (which the developer will figure out).
3. **Generate PRD:** Based on the initial prompt and the user's answers to the clarifying questions, generate a PRD using the structure outlined below.
4. **Save PRD:** Save the generated document as `prd-[feature-name].md` inside the `/tasks` directory.

## Clarifying Questions (Examples)

The AI should adapt its questions based on the prompt, but here are some common areas to explore:

* **Problem/Goal:** "What problem does this feature solve for the user?" or "What is the main goal we want to achieve with this feature?"
* **Target User:** "Who is the primary user of this feature?"
* **Core Functionality:** "Can you describe the key actions a user should be able to perform with this feature?"
* **User Stories:** "Could you provide a few user stories? (e.g., As a [type of user], I want to [perform an action] so that [benefit].)"
* **Acceptance Criteria:** "How will we know when this feature is successfully implemented? What are the key success criteria?"
* **Scope/Boundaries:** "Are there any specific things this feature *should not* do (non-goals)?"
* **Data Requirements:** "What kind of data does this feature need to display or manipulate?"
* **Design/UI:** "Are there any existing design mockups or UI guidelines to follow?" or "Can you describe the desired look and feel?"
* **Edge Cases:** "Are there any potential edge cases or error conditions we should consider?"

## PRD Structure

The generated PRD should include the following sections:

1. **Introduction/Overview:** Briefly describe the feature and the problem it solves. State the goal.
2. **Goals:** List the specific, measurable objectives for this feature.
3. **User Stories:** Detail the user narratives describing feature usage and benefits.
4. **Functional Requirements:** List the specific functionalities the feature must have. Use clear, concise language (e.g., "The system must allow users to upload a profile picture."). Number these requirements.
5. **Non-Goals (Out of Scope):** Clearly state what this feature will *not* include to manage scope.
6. **Design Considerations (Optional):** Link to mockups, describe UI/UX requirements, or mention relevant components/styles if applicable.
7. **Technical Considerations (Optional):** Mention any known technical constraints, dependencies, or suggestions (e.g., "Should integrate with the existing Auth module").
8. **Success Metrics:** How will the success of this feature be measured? (e.g., "Increase user engagement by 10%", "Reduce support tickets related to X").
9. **Open Questions:** List any remaining questions or areas needing further clarification.
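
A bare `prd-[feature-name].md` skeleton with these sections in order (headings only; the body of each section is filled in per the process above):

```markdown
# PRD: [Feature Name]

## 1. Introduction/Overview
## 2. Goals
## 3. User Stories
## 4. Functional Requirements
## 5. Non-Goals (Out of Scope)
## 6. Design Considerations (Optional)
## 7. Technical Considerations (Optional)
## 8. Success Metrics
## 9. Open Questions
```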

## Target Audience

Assume the primary reader of the PRD is a **junior developer**. Therefore, requirements should be explicit, unambiguous, and avoid jargon where possible. Provide enough detail for them to understand the feature's purpose and core logic.

## Output

* **Format:** Markdown (`.md`)
* **Location:** `/tasks/`
* **Filename:** `prd-[feature-name].md`

## Final instructions

1. Do NOT start implementing the PRD
2. Make sure to ask the user clarifying questions
3. Take the user's answers to the clarifying questions and improve the PRD
244  .cursor/rules/documentation.mdc  Normal file
@@ -0,0 +1,244 @@
|
||||
---
|
||||
description: Documentation standards for code, architecture, and development decisions
|
||||
globs:
|
||||
alwaysApply: false
|
||||
---
|
||||
|
||||
# Rule: Documentation Standards
|
||||
|
||||
## Goal
|
||||
Maintain comprehensive, up-to-date documentation that supports development, onboarding, and long-term maintenance of the codebase.
|
||||
|
||||
## Documentation Hierarchy
|
||||
|
||||
### 1. Project Level Documentation (in ./docs/)
|
||||
- **README.md**: Project overview, setup instructions, basic usage
|
||||
- **CONTEXT.md**: Current project state, architecture decisions, patterns
|
||||
- **CHANGELOG.md**: Version history and significant changes
|
||||
- **CONTRIBUTING.md**: Development guidelines and processes
|
||||
- **API.md**: API endpoints, request/response formats, authentication
|
||||
|
||||
### 2. Module Level Documentation (in ./docs/modules/)
|
||||
- **[module-name].md**: Purpose, public interfaces, usage examples
|
||||
- **dependencies.md**: External dependencies and their purposes
|
||||
- **architecture.md**: Module relationships and data flow
|
||||
|
||||
### 3. Code Level Documentation
|
||||
- **Docstrings**: Function and class documentation
|
||||
- **Inline comments**: Complex logic explanations
|
||||
- **Type hints**: Clear parameter and return types
|
||||
- **README files**: Directory-specific instructions
|
||||
|
||||
## Documentation Standards
|
||||
|
||||
### Code Documentation
|
||||
```python
|
||||
def process_user_data(user_id: str, data: dict) -> UserResult:
|
||||
"""
|
||||
Process and validate user data before storage.
|
||||
|
||||
Args:
|
||||
user_id: Unique identifier for the user
|
||||
data: Dictionary containing user information to process
|
||||
|
||||
Returns:
|
||||
UserResult: Processed user data with validation status
|
||||
|
||||
Raises:
|
||||
ValidationError: When user data fails validation
|
||||
DatabaseError: When storage operation fails
|
||||
|
||||
Example:
|
||||
>>> result = process_user_data("123", {"name": "John", "email": "john@example.com"})
|
||||
>>> print(result.status)
|
||||
'valid'
|
||||
"""
|
||||
```
|
||||
|
||||
### API Documentation Format
|
||||
```markdown
|
||||
### POST /api/users
|
||||
|
||||
Create a new user account.
|
||||
|
||||
**Request:**
|
||||
```json
|
||||
{
|
||||
"name": "string (required)",
|
||||
"email": "string (required, valid email)",
|
||||
"age": "number (optional, min: 13)"
|
||||
}
|
||||
```
|
||||
|
||||
**Response (201):**
|
||||
```json
|
||||
{
|
||||
"id": "uuid",
|
||||
"name": "string",
|
||||
"email": "string",
|
||||
"created_at": "iso_datetime"
|
||||
}
|
||||
```
|
||||
|
||||
**Errors:**
|
||||
- 400: Invalid input data
|
||||
- 409: Email already exists
|
||||
```
|
||||
|
||||
### Architecture Decision Records (ADRs)
|
||||
Document significant architecture decisions in `./docs/decisions/`:
|
||||
|
||||
```markdown
|
||||
# ADR-001: Database Choice - PostgreSQL
|
||||
|
||||
## Status
|
||||
Accepted
|
||||
|
||||
## Context
|
||||
We need to choose a database for storing user data and application state.
|
||||
|
||||
## Decision
|
||||
We will use PostgreSQL as our primary database.
|
||||
|
||||
## Consequences
|
||||
**Positive:**
|
||||
- ACID compliance ensures data integrity
|
||||
- Rich query capabilities with SQL
|
||||
- Good performance for our expected load
|
||||
|
||||
**Negative:**
|
||||
- More complex setup than simpler alternatives
|
||||
- Requires SQL knowledge from team members
|
||||
|
||||
## Alternatives Considered
|
||||
- MongoDB: Rejected due to consistency requirements
|
||||
- SQLite: Rejected due to scalability needs
|
||||
```
|
||||
|
||||
## Documentation Maintenance
|
||||
|
||||
### When to Update Documentation
|
||||
|
||||
#### Always Update:
|
||||
- **API changes**: Any modification to public interfaces
|
||||
- **Architecture changes**: New patterns, data structures, or workflows
|
||||
- **Configuration changes**: Environment variables, deployment settings
|
||||
- **Dependencies**: Adding, removing, or upgrading packages
|
||||
- **Business logic changes**: Core functionality modifications
|
||||
|
||||
#### Update Weekly:
|
||||
- **CONTEXT.md**: Current development status and priorities
|
||||
- **Known issues**: Bug reports and workarounds
|
||||
- **Performance notes**: Bottlenecks and optimization opportunities
|
||||
|
||||
#### Update per Release:
|
||||
- **CHANGELOG.md**: User-facing changes and improvements
|
||||
- **Version documentation**: Breaking changes and migration guides
|
||||
- **Examples and tutorials**: Keep sample code current
|
||||
|
||||
### Documentation Quality Checklist

#### Completeness
- [ ] Purpose and scope clearly explained
- [ ] All public interfaces documented
- [ ] Examples provided for complex usage
- [ ] Error conditions and handling described
- [ ] Dependencies and requirements listed

#### Accuracy
- [ ] Code examples are tested and working
- [ ] Links point to correct locations
- [ ] Version numbers are current
- [ ] Screenshots reflect current UI

#### Clarity
- [ ] Written for the intended audience
- [ ] Technical jargon is explained
- [ ] Step-by-step instructions are clear
- [ ] Visual aids used where helpful

## Documentation Automation

### Auto-Generated Documentation
- **API docs**: Generate from code annotations
- **Type documentation**: Extract from type hints
- **Module dependencies**: Auto-update from imports
- **Test coverage**: Include coverage reports
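
The annotation-driven generation described above can be sketched with the standard `inspect` module; this is a minimal illustration, and `render_api_docs` is a hypothetical helper name, not part of these rules:

```python
import inspect

def render_api_docs(module) -> str:
    """Render a minimal Markdown API reference from a module's public functions."""
    lines = [f"# API: {module.__name__}"]
    for name, obj in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue  # skip private helpers
        sig = inspect.signature(obj)  # includes any type hints
        doc = inspect.getdoc(obj) or "(undocumented)"
        lines.append(f"\n## `{name}{sig}`\n{doc}")
    return "\n".join(lines)
```

Running this against any module (for example `render_api_docs(json)`) yields a Markdown page that stays in sync with the code's signatures and docstrings.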

### Documentation Testing
```python
import doctest

# Test that code examples in documentation work
def test_documentation_examples():
    """Verify code examples in docs actually work."""
    # Run doctest-formatted examples embedded in README.md
    results = doctest.testfile("README.md", verbose=False)
    assert results.failed == 0
    # Test API examples from docs/API.md
    # Test configuration examples
```

## Documentation Templates

### New Module Documentation Template
```markdown
# Module: [Name]

## Purpose
Brief description of what this module does and why it exists.

## Public Interface
### Functions
- `function_name(params)`: Description and example

### Classes
- `ClassName`: Purpose and basic usage

## Usage Examples
```python
# Basic usage example
```

## Dependencies
- Internal: List of internal modules this depends on
- External: List of external packages required

## Testing
How to run tests for this module.

## Known Issues
Current limitations or bugs.
```

### API Endpoint Template
```markdown
### [METHOD] /api/endpoint

Brief description of what this endpoint does.

**Authentication:** Required/Optional
**Rate Limiting:** X requests per minute

**Request:**
- Headers required
- Body schema
- Query parameters

**Response:**
- Success response format
- Error response format
- Status codes

**Example:**
Working request/response example
```

## Review and Maintenance Process

### Documentation Review
- Include documentation updates in code reviews
- Verify examples still work with code changes
- Check for broken links and outdated information
- Ensure consistency with current implementation

### Regular Audits
- Monthly review of documentation accuracy
- Quarterly assessment of documentation completeness
- Annual review of documentation structure and organization

207
.cursor/rules/enhanced-task-list.mdc
Normal file
@@ -0,0 +1,207 @@
---
description: Enhanced task list management with quality gates and iterative workflow integration
globs:
alwaysApply: false
---

# Rule: Enhanced Task List Management

## Goal
Manage task lists with integrated quality gates and iterative workflow to prevent context loss and ensure sustainable development.

## Task Implementation Protocol

### Pre-Implementation Check
Before starting any sub-task:
- [ ] **Context Review**: Have you reviewed CONTEXT.md and relevant documentation?
- [ ] **Pattern Identification**: Do you understand existing patterns to follow?
- [ ] **Integration Planning**: Do you know how this will integrate with existing code?
- [ ] **Size Validation**: Is this task small enough (≤50 lines per function, ≤250 lines per file)?

### Implementation Process
1. **One sub-task at a time**: Do **NOT** start the next sub-task until you ask the user for permission and they say "yes" or "y"
2. **Step-by-step execution**:
   - Plan the approach in bullet points
   - Wait for approval
   - Implement the specific sub-task
   - Test the implementation
   - Update documentation if needed
3. **Quality validation**: Run through the code review checklist before marking complete

### Completion Protocol
When you finish a **sub-task**:
1. **Immediate marking**: Change `[ ]` to `[x]`
2. **Quality check**: Verify the implementation meets quality standards
3. **Integration test**: Ensure new code works with existing functionality
4. **Documentation update**: Update relevant files if needed
5. **Parent task check**: If **all** subtasks underneath a parent task are now `[x]`, also mark the **parent task** as completed
6. **Stop and wait**: Get user approval before proceeding to the next sub-task
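
Step 5 of the protocol is mechanical enough to script; a minimal sketch, where `mark_parents_complete` is a hypothetical helper operating on task-list lines:

```python
import re

def mark_parents_complete(lines: list[str]) -> list[str]:
    """Mark a parent task [x] once every sub-task indented under it is [x]."""
    out = list(lines)
    for i, line in enumerate(out):
        if not re.match(r"^- \[( |x)\] ", line):
            continue  # not a top-level parent task
        # Collect the status markers of the indented sub-tasks below it.
        subs = []
        for nxt in out[i + 1:]:
            if not nxt.startswith("  "):
                break  # next parent task reached
            m = re.match(r"^\s+- \[( |x)\]", nxt)
            if m:
                subs.append(m.group(1))
        if subs and all(s == "x" for s in subs):
            out[i] = out[i].replace("[ ]", "[x]", 1)
    return out
```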

## Enhanced Task List Structure

### Task File Header
```markdown
# Task List: [Feature Name]

**Source PRD**: `prd-[feature-name].md`
**Status**: In Progress / Complete / Blocked
**Context Last Updated**: [Date]
**Architecture Review**: Required / Complete / N/A

## Quick Links
- [Context Documentation](./CONTEXT.md)
- [Architecture Guidelines](./docs/architecture.md)
- [Related Files](#relevant-files)
```

### Task Format with Quality Gates
```markdown
- [ ] 1.0 Parent Task Title
  - **Quality Gate**: Architecture review required
  - **Dependencies**: List any dependencies
  - [ ] 1.1 [Sub-task description 1.1]
    - **Size estimate**: [Small/Medium/Large]
    - **Pattern reference**: [Reference to existing pattern]
    - **Test requirements**: [Unit/Integration/Both]
  - [ ] 1.2 [Sub-task description 1.2]
    - **Integration points**: [List affected components]
    - **Risk level**: [Low/Medium/High]
```

## Relevant Files Management

### Enhanced File Tracking
```markdown
## Relevant Files

### Implementation Files
- `path/to/file1.ts` - Brief description of purpose and role
  - **Status**: Created / Modified / Needs Review
  - **Last Modified**: [Date]
  - **Review Status**: Pending / Approved / Needs Changes

### Test Files
- `path/to/file1.test.ts` - Unit tests for file1.ts
  - **Coverage**: [Percentage or status]
  - **Last Run**: [Date and result]

### Documentation Files
- `docs/module-name.md` - Module documentation
  - **Status**: Up to date / Needs update / Missing
  - **Last Updated**: [Date]

### Configuration Files
- `config/setting.json` - Configuration changes
  - **Environment**: [Dev/Staging/Prod affected]
  - **Backup**: [Location of backup]
```

## Task List Maintenance

### During Development
1. **Regular updates**: Update task status after each significant change
2. **File tracking**: Add new files as they are created or modified
3. **Dependency tracking**: Note when new dependencies between tasks emerge
4. **Risk assessment**: Flag tasks that become more complex than anticipated

### Quality Checkpoints
At 25%, 50%, 75%, and 100% completion:
- [ ] **Architecture alignment**: Code follows established patterns
- [ ] **Performance impact**: No significant performance degradation
- [ ] **Security review**: No security vulnerabilities introduced
- [ ] **Documentation current**: All changes are documented

### Weekly Review Process
1. **Completion assessment**: What percentage of tasks are actually complete?
2. **Quality assessment**: Are completed tasks meeting quality standards?
3. **Process assessment**: Is the iterative workflow being followed?
4. **Risk assessment**: Are there emerging risks or blockers?

## Task Status Indicators

### Status Levels
- `[ ]` **Not Started**: Task not yet begun
- `[~]` **In Progress**: Currently being worked on
- `[?]` **Blocked**: Waiting for dependencies or decisions
- `[!]` **Needs Review**: Implementation complete but needs quality review
- `[x]` **Complete**: Finished and quality approved
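
These markers are easy to tally across a task file; a hedged sketch, where `summarize_statuses` is a hypothetical helper:

```python
import re
from collections import Counter

# The status markers defined above, mapped to readable names.
STATUS = {" ": "not started", "~": "in progress", "?": "blocked",
          "!": "needs review", "x": "complete"}

def summarize_statuses(markdown: str) -> Counter:
    """Count tasks per status level in a task-list document."""
    markers = re.findall(r"^\s*- \[([ ~?!x])\]", markdown, flags=re.M)
    return Counter(STATUS[m] for m in markers)
```

A weekly review could run this over every file in `/tasks/` to get the completion percentages asked for below.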

### Quality Indicators
- ✅ **Quality Approved**: Passed all quality gates
- ⚠️ **Quality Concerns**: Has issues but functional
- ❌ **Quality Failed**: Needs rework before approval
- 🔄 **Under Review**: Currently being reviewed

### Integration Status
- 🔗 **Integrated**: Successfully integrated with existing code
- 🔧 **Integration Issues**: Problems with existing code integration
- ⏳ **Integration Pending**: Ready for integration testing

## Emergency Procedures

### When Tasks Become Too Complex
If a sub-task grows beyond expected scope:
1. **Stop implementation** immediately
2. **Document current state** and what was discovered
3. **Break down** the task into smaller pieces
4. **Update task list** with new sub-tasks
5. **Get approval** for the new breakdown before proceeding

### When Context is Lost
If the AI seems to lose track of project patterns:
1. **Pause development**
2. **Review CONTEXT.md** and recent changes
3. **Update context documentation** with current state
4. **Restart** with explicit pattern references
5. **Reduce task size** until context is re-established

### When Quality Gates Fail
If an implementation doesn't meet quality standards:
1. **Mark the task** with `[!]` status
2. **Document specific issues** found
3. **Create remediation tasks** if needed
4. **Don't proceed** until quality issues are resolved

## AI Instructions Integration

### Context Awareness Commands
```markdown
**Before starting any task, run these checks:**
1. @CONTEXT.md - Review current project state
2. @architecture.md - Understand design principles
3. @code-review.md - Know quality standards
4. Look at existing similar code for patterns
```

### Quality Validation Commands
```markdown
**After completing any sub-task:**
1. Run code review checklist
2. Test integration with existing code
3. Update documentation if needed
4. Mark task complete only after quality approval
```

### Workflow Commands
```markdown
**For each development session:**
1. Review incomplete tasks and their status
2. Identify next logical sub-task to work on
3. Check dependencies and blockers
4. Follow iterative workflow process
5. Update task list with progress and findings
```

## Success Metrics

### Daily Success Indicators
- Tasks are completed according to quality standards
- No sub-tasks are started without completing previous ones
- File tracking remains accurate and current
- Integration issues are caught early

### Weekly Success Indicators
- Overall task completion rate is sustainable
- Quality issues are decreasing over time
- Context loss incidents are rare
- Team confidence in codebase remains high

70
.cursor/rules/generate-tasks.mdc
Normal file
@@ -0,0 +1,70 @@
---
description: Generate a task list or TODO for a user requirement or implementation.
globs:
alwaysApply: false
---

# Rule: Generating a Task List from a PRD

## Goal

To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on an existing Product Requirements Document (PRD). The task list should guide a developer through implementation.

## Output

- **Format:** Markdown (`.md`)
- **Location:** `/tasks/`
- **Filename:** `tasks-[prd-file-name].md` (e.g., `tasks-prd-user-profile-editing.md`)

## Process

1. **Receive PRD Reference:** The user points the AI to a specific PRD file.
2. **Analyze PRD:** The AI reads and analyzes the functional requirements, user stories, and other sections of the specified PRD.
3. **Phase 1: Generate Parent Tasks:** Based on the PRD analysis, create the file and generate the main, high-level tasks required to implement the feature. Use your judgement on how many high-level tasks to use; about five is typical. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on the PRD. Ready to generate the sub-tasks? Respond with 'Go' to proceed."
4. **Wait for Confirmation:** Pause and wait for the user to respond with "Go".
5. **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks necessary to complete the parent task. Ensure sub-tasks logically follow from the parent task and cover the implementation details implied by the PRD.
6. **Identify Relevant Files:** Based on the tasks and PRD, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable.
7. **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, and notes into the final Markdown structure.
8. **Save Task List:** Save the generated document in the `/tasks/` directory with the filename `tasks-[prd-file-name].md`, where `[prd-file-name]` matches the base name of the input PRD file (e.g., if the input was `prd-user-profile-editing.md`, the output is `tasks-prd-user-profile-editing.md`).
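
The naming rule in step 8 reduces to a one-line transform; a minimal sketch, where `task_list_filename` is a hypothetical helper:

```python
from pathlib import Path

def task_list_filename(prd_path: str) -> str:
    """Derive the task-list path from a PRD filename, per step 8."""
    stem = Path(prd_path).stem  # e.g. "prd-user-profile-editing"
    return f"/tasks/tasks-{stem}.md"
```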

## Output Format

The generated task list _must_ follow this structure:

```markdown
## Relevant Files

- `path/to/potential/file1.ts` - Brief description of why this file is relevant (e.g., Contains the main component for this feature).
- `path/to/file1.test.ts` - Unit tests for `file1.ts`.
- `path/to/another/file.tsx` - Brief description (e.g., API route handler for data submission).
- `path/to/another/file.test.tsx` - Unit tests for `another/file.tsx`.
- `lib/utils/helpers.ts` - Brief description (e.g., Utility functions needed for calculations).
- `lib/utils/helpers.test.ts` - Unit tests for `helpers.ts`.

### Notes

- Unit tests should typically be placed alongside the code files they are testing (e.g., `MyComponent.tsx` and `MyComponent.test.tsx` in the same directory).
- Use `npx jest [optional/path/to/test/file]` to run tests. Running without a path executes all tests found by the Jest configuration.

## Tasks

- [ ] 1.0 Parent Task Title
  - [ ] 1.1 [Sub-task description 1.1]
  - [ ] 1.2 [Sub-task description 1.2]
- [ ] 2.0 Parent Task Title
  - [ ] 2.1 [Sub-task description 2.1]
- [ ] 3.0 Parent Task Title (may not require sub-tasks if purely structural or configuration)
```

## Interaction Model

The process explicitly requires a pause after generating parent tasks to get user confirmation ("Go") before proceeding to generate the detailed sub-tasks. This ensures the high-level plan aligns with user expectations before diving into details.

## Target Audience

Assume the primary reader of the task list is a **junior developer** who will implement the feature.

236
.cursor/rules/iterative-workflow.mdc
Normal file
@@ -0,0 +1,236 @@
---
description: Iterative development workflow for AI-assisted coding
globs:
alwaysApply: false
---

# Rule: Iterative Development Workflow

## Goal
Establish a structured, iterative development process that prevents the chaos and complexity that can arise from uncontrolled AI-assisted development.

## Development Phases

### Phase 1: Planning and Design
**Before writing any code:**

1. **Understand the Requirement**
   - Break down the task into specific, measurable objectives
   - Identify existing code patterns that should be followed
   - List dependencies and integration points
   - Define acceptance criteria

2. **Design Review**
   - Propose approach in bullet points
   - Wait for explicit approval before proceeding
   - Consider how the solution fits existing architecture
   - Identify potential risks and mitigation strategies

### Phase 2: Incremental Implementation
**One small piece at a time:**

1. **Micro-Tasks** (≤50 lines each)
   - Implement one function or small class at a time
   - Test immediately after implementation
   - Ensure integration with existing code
   - Document decisions and patterns used

2. **Validation Checkpoints**
   - After each micro-task, verify it works correctly
   - Check that it follows established patterns
   - Confirm it integrates cleanly with existing code
   - Get approval before moving to next micro-task
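
The ≤50-line micro-task limit can be checked mechanically with the standard `ast` module; a hedged sketch, where `oversized_functions` is a hypothetical helper and "length" means the source span of the whole definition:

```python
import ast

def oversized_functions(source: str, limit: int = 50) -> list[str]:
    """Report functions whose definitions span more than `limit` source lines."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > limit:
                offenders.append(f"{node.name}: {length} lines")
    return offenders
```

Such a check could run in CI or a pre-commit hook to flag micro-tasks that have quietly outgrown the limit.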

### Phase 3: Integration and Testing
**Ensuring system coherence:**

1. **Integration Testing**
   - Test new code with existing functionality
   - Verify no regressions in existing features
   - Check performance impact
   - Validate error handling

2. **Documentation Update**
   - Update relevant documentation
   - Record any new patterns or decisions
   - Update context files if architecture changed

## Iterative Prompting Strategy

### Step 1: Context Setting
```
Before implementing [feature], help me understand:
1. What existing patterns should I follow?
2. What existing functions/classes are relevant?
3. How should this integrate with [specific existing component]?
4. What are the potential architectural impacts?
```

### Step 2: Plan Creation
```
Based on the context, create a detailed plan for implementing [feature]:
1. Break it into micro-tasks (≤50 lines each)
2. Identify dependencies and order of implementation
3. Specify integration points with existing code
4. List potential risks and mitigation strategies

Wait for my approval before implementing.
```

### Step 3: Incremental Implementation
```
Implement only the first micro-task: [specific task]
- Use existing patterns from [reference file/function]
- Keep it under 50 lines
- Include error handling
- Add appropriate tests
- Explain your implementation choices

Stop after this task and wait for approval.
```

## Quality Gates

### Before Each Implementation
- [ ] **Purpose is clear**: Can explain what this piece does and why
- [ ] **Pattern is established**: Following existing code patterns
- [ ] **Size is manageable**: Implementation is small enough to understand completely
- [ ] **Integration is planned**: Know how it connects to existing code

### After Each Implementation
- [ ] **Code is understood**: Can explain every line of implemented code
- [ ] **Tests pass**: All existing and new tests are passing
- [ ] **Integration works**: New code works with existing functionality
- [ ] **Documentation updated**: Changes are reflected in relevant documentation

### Before Moving to Next Task
- [ ] **Current task complete**: All acceptance criteria met
- [ ] **No regressions**: Existing functionality still works
- [ ] **Clean state**: No temporary code or debugging artifacts
- [ ] **Approval received**: Explicit go-ahead for next task
- [ ] **Documentation updated**: If relevant changes to the module were made

## Anti-Patterns to Avoid

### Large Block Implementation
**Don't:**
```
Implement the entire user management system with authentication,
CRUD operations, and email notifications.
```

**Do:**
```
First, implement just the User model with basic fields.
Stop there and let me review before continuing.
```

### Context Loss
**Don't:**
```
Create a new authentication system.
```

**Do:**
```
Looking at the existing auth patterns in auth.py, implement
password validation following the same structure as the
existing email validation function.
```

### Over-Engineering
**Don't:**
```
Build a flexible, extensible user management framework that
can handle any future requirements.
```

**Do:**
```
Implement user creation functionality that matches the existing
pattern in customer.py, focusing only on the current requirements.
```

## Progress Tracking

### Task Status Indicators
- 🔄 **In Planning**: Requirements gathering and design
- ⏳ **In Progress**: Currently implementing
- ✅ **Complete**: Implemented, tested, and integrated
- 🚫 **Blocked**: Waiting for decisions or dependencies
- 🔧 **Needs Refactor**: Working but needs improvement

### Weekly Review Process
1. **Progress Assessment**
   - What was completed this week?
   - What challenges were encountered?
   - How well did the iterative process work?

2. **Process Adjustment**
   - Were task sizes appropriate?
   - Did context management work effectively?
   - What improvements can be made?

3. **Architecture Review**
   - Is the code remaining maintainable?
   - Are patterns staying consistent?
   - Is technical debt accumulating?

## Emergency Procedures

### When Things Go Wrong
If development becomes chaotic or problematic:

1. **Stop Development**
   - Don't continue adding to the problem
   - Take time to assess the situation
   - Don't rush to "fix" with more AI-generated code

2. **Assess the Situation**
   - What specific problems exist?
   - How far has the code diverged from established patterns?
   - What parts are still working correctly?

3. **Recovery Process**
   - Roll back to last known good state
   - Update context documentation with lessons learned
   - Restart with smaller, more focused tasks
   - Get explicit approval for each step of recovery

### Context Recovery
When the AI seems to lose track of project patterns:

1. **Context Refresh**
   - Review and update CONTEXT.md
   - Include examples of current code patterns
   - Clarify architectural decisions

2. **Pattern Re-establishment**
   - Show the AI examples of existing, working code
   - Explicitly state patterns to follow
   - Start with very small, pattern-matching tasks

3. **Gradual Re-engagement**
   - Begin with simple, low-risk tasks
   - Verify pattern adherence at each step
   - Gradually increase task complexity as consistency returns

## Success Metrics

### Short-term (Daily)
- Code is understandable and well-integrated
- No major regressions introduced
- Development velocity feels sustainable
- Team confidence in codebase remains high

### Medium-term (Weekly)
- Technical debt is not accumulating
- New features integrate cleanly
- Development patterns remain consistent
- Documentation stays current

### Long-term (Monthly)
- Codebase remains maintainable as it grows
- New team members can understand and contribute
- AI assistance enhances rather than hinders development
- Architecture remains clean and purposeful

24
.cursor/rules/project.mdc
Normal file
@@ -0,0 +1,24 @@
---
description:
globs:
alwaysApply: true
---

# Rule: Project specific rules

## Goal
Unify the project structure and interaction with tools and the console

### System tools
- **ALWAYS** use UV for package management
- **ALWAYS** use Arch Linux compatible commands in the terminal

### Coding patterns
- **ALWAYS** check arguments and method signatures before use to avoid errors from wrong parameters or names
- If in doubt, check the [CONTEXT.md](mdc:CONTEXT.md) file and [architecture.md](mdc:docs/architecture.md)
- **PREFER** the ORM pattern for databases, using SQLAlchemy
- **DO NOT USE** emoji in code and comments

### Testing
- Use UV to run tests, in the format `uv run pytest [filename]`

237
.cursor/rules/refactoring.mdc
Normal file
@@ -0,0 +1,237 @@
---
description: Code refactoring and technical debt management for AI-assisted development
globs:
alwaysApply: false
---

# Rule: Code Refactoring and Technical Debt Management

## Goal
Guide AI in systematic code refactoring to improve maintainability, reduce complexity, and prevent technical debt accumulation in AI-assisted development projects.

## When to Apply This Rule
- Code complexity has increased beyond manageable levels
- Duplicate code patterns are detected
- Performance issues are identified
- New features are difficult to integrate
- Code review reveals maintainability concerns
- Weekly technical debt assessment indicates refactoring needs

## Pre-Refactoring Assessment

Before starting any refactoring, the AI MUST:

1. **Context Analysis:**
   - Review the existing `CONTEXT.md` for architectural decisions
   - Analyze current code patterns and conventions
   - Identify all files that will be affected (search the codebase for usages)
   - Check for existing tests that verify current behavior

2. **Scope Definition:**
   - Clearly define what will and will not be changed
   - Identify the specific refactoring pattern to apply
   - Estimate the blast radius of changes
   - Plan a rollback strategy if needed

3. **Documentation Review:**
   - Check `./docs/` for relevant module documentation
   - Review any existing architectural diagrams
   - Identify dependencies and integration points
   - Note any known constraints or limitations

## Refactoring Process

### Phase 1: Planning and Safety
1. **Create Refactoring Plan:**
   - Document the current state and desired end state
   - Break refactoring into small, atomic steps
   - Identify tests that must pass throughout the process
   - Plan verification steps for each change

2. **Establish Safety Net:**
   - Ensure comprehensive test coverage exists
   - If tests are missing, create them BEFORE refactoring
   - Document current behavior that must be preserved
   - Create a backup of the current implementation approach

3. **Get Approval:**
   - Present the refactoring plan to the user
   - Wait for explicit "Go" or "Proceed" confirmation
   - Do NOT start refactoring without approval

### Phase 2: Incremental Implementation
4. **One Change at a Time:**
   - Implement ONE refactoring step per iteration
   - Run tests after each step to ensure nothing breaks
   - Update documentation if interfaces change
   - Mark progress in the refactoring plan

5. **Verification Protocol:**
   - Run all relevant tests after each change
   - Verify functionality works as expected
   - Check that performance hasn't degraded
   - Ensure no new linting or type errors

6. **User Checkpoint:**
   - After each significant step, pause for user review
   - Present what was changed and the current status
   - Wait for approval before continuing
   - Address any concerns before proceeding

### Phase 3: Completion and Documentation
7. **Final Verification:**
   - Run the full test suite to ensure nothing is broken
   - Verify all original functionality is preserved
   - Check that new code follows project conventions
   - Confirm performance is maintained or improved

8. **Documentation Update:**
   - Update `CONTEXT.md` with new patterns/decisions
   - Update module documentation in `./docs/`
   - Document any new conventions established
   - Note lessons learned for future refactoring

## Common Refactoring Patterns

### Extract Method/Function
```
WHEN: Functions/methods exceed 50 lines or have multiple responsibilities
HOW:
1. Identify logical groupings within the function
2. Extract each group into a well-named helper function
3. Ensure each function has a single responsibility
4. Verify tests still pass
```
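
As an illustration of these steps, a before/after sketch around a hypothetical `process_order` function (not taken from any real codebase):

```python
# Before: one function mixed validation, pricing, and notification.
# After: each concern lives in a single-responsibility helper.

def _validate(order: dict) -> None:
    """Reject orders with no line items."""
    if not order.get("items"):
        raise ValueError("order has no items")

def _total(order: dict) -> float:
    """Sum price * quantity over the order's line items."""
    return sum(item["price"] * item["qty"] for item in order["items"])

def process_order(order: dict) -> float:
    """Validate the order and return its total (notification step omitted)."""
    _validate(order)
    return _total(order)
```

Each helper can now be tested in isolation, and `process_order` reads as a summary of the steps.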

### Extract Module/Class
```
WHEN: Files exceed 250 lines or handle multiple concerns
HOW:
1. Identify cohesive functionality groups
2. Create new files for each group
3. Move related functions/classes together
4. Update imports and dependencies
5. Verify module boundaries are clean
```

### Eliminate Duplication
```
WHEN: Similar code appears in multiple places
HOW:
1. Identify the common pattern or functionality
2. Extract to a shared utility function or module
3. Update all usage sites to use the shared code
4. Ensure the abstraction is not over-engineered
```
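
A sketch of the same steps applied to a retry loop that had been copy-pasted around several call sites; the `with_retries` helper is hypothetical:

```python
# Before: the same try/except retry loop repeated at each call site.
# After: one shared helper that every call site reuses.

def with_retries(fn, attempts: int = 3):
    """Call fn(), retrying up to `attempts` times; re-raise the last error."""
    last_error = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_error = exc
    raise last_error
```

Per step 4, the helper stays deliberately small: no backoff or logging until a call site actually needs them.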

### Improve Data Structures
```
WHEN: Complex nested objects or unclear data flow
HOW:
1. Define clear interfaces/types for data structures
2. Create transformation functions between different representations
3. Ensure data flow is unidirectional where possible
4. Add validation at boundaries
```

### Reduce Coupling
```
WHEN: Modules are tightly interconnected
HOW:
1. Identify dependencies between modules
2. Extract interfaces for external dependencies
3. Use dependency injection where appropriate
4. Ensure modules can be tested in isolation
```

## Quality Gates

Every refactoring must pass these gates:

### Technical Quality
- [ ] All existing tests pass
- [ ] No new linting errors introduced
- [ ] Code follows established project conventions
- [ ] No performance regression detected
- [ ] File sizes remain under 250 lines
- [ ] Function sizes remain under 50 lines

### Maintainability
- [ ] Code is more readable than before
- [ ] Duplicated code has been reduced
- [ ] Module responsibilities are clearer
- [ ] Dependencies are explicit and minimal
- [ ] Error handling is consistent
|
||||
|
||||
### Documentation
|
||||
- [ ] Public interfaces are documented
|
||||
- [ ] Complex logic has explanatory comments
|
||||
- [ ] Architectural decisions are recorded
|
||||
- [ ] Examples are provided where helpful
|
||||
|
||||
## AI Instructions for Refactoring
|
||||
|
||||
1. **Always ask for permission** before starting any refactoring work
|
||||
2. **Start with tests** - ensure comprehensive coverage before changing code
|
||||
3. **Work incrementally** - make small changes and verify each step
|
||||
4. **Preserve behavior** - functionality must remain exactly the same
|
||||
5. **Update documentation** - keep all docs current with changes
|
||||
6. **Follow conventions** - maintain consistency with existing codebase
|
||||
7. **Stop and ask** if any step fails or produces unexpected results
|
||||
8. **Explain changes** - clearly communicate what was changed and why
|
||||
|
||||
## Anti-Patterns to Avoid
|
||||
|
||||
### Over-Engineering
|
||||
- Don't create abstractions for code that isn't duplicated
|
||||
- Avoid complex inheritance hierarchies
|
||||
- Don't optimize prematurely
|
||||
|
||||
### Breaking Changes
|
||||
- Never change public APIs without explicit approval
|
||||
- Don't remove functionality, even if it seems unused
|
||||
- Avoid changing behavior "while we're here"
|
||||
|
||||
### Scope Creep
|
||||
- Stick to the defined refactoring scope
|
||||
- Don't add new features during refactoring
|
||||
- Resist the urge to "improve" unrelated code
|
||||
|
||||
## Success Metrics
|
||||
|
||||
Track these metrics to ensure refactoring effectiveness:
|
||||
|
||||
### Code Quality
|
||||
- Reduced cyclomatic complexity
|
||||
- Lower code duplication percentage
|
||||
- Improved test coverage
|
||||
- Fewer linting violations
|
||||
|
||||
### Developer Experience
|
||||
- Faster time to understand code
|
||||
- Easier integration of new features
|
||||
- Reduced bug introduction rate
|
||||
- Higher developer confidence in changes
|
||||
|
||||
### Maintainability
|
||||
- Clearer module boundaries
|
||||
- More predictable behavior
|
||||
- Easier debugging and troubleshooting
|
||||
- Better performance characteristics
|
||||
|
||||
## Output Files
|
||||
|
||||
When refactoring is complete, update:
|
||||
- `refactoring-log-[date].md` - Document what was changed and why
|
||||
- `CONTEXT.md` - Update with new patterns and decisions
|
||||
- `./docs/` - Update relevant module documentation
|
||||
- Task lists - Mark refactoring tasks as complete
|
||||
|
||||
## Final Verification
|
||||
|
||||
Before marking refactoring complete:
|
||||
1. Run full test suite and verify all tests pass
|
||||
2. Check that code follows all project conventions
|
||||
3. Verify documentation is up to date
|
||||
4. Confirm user is satisfied with the results
|
||||
5. Record lessons learned for future refactoring efforts
|
||||
44
.cursor/rules/task-list.mdc
Normal file
@@ -0,0 +1,44 @@
---
description: TODO list task implementation
globs:
alwaysApply: false
---

# Task List Management

Guidelines for managing task lists in markdown files to track progress on completing a PRD.

## Task Implementation
- **One sub-task at a time:** Do **NOT** start the next sub‑task until you ask the user for permission and they say “yes” or “y”
- **Completion protocol:**
  1. When you finish a **sub‑task**, immediately mark it as completed by changing `[ ]` to `[x]`.
  2. If **all** sub‑tasks underneath a parent task are now `[x]`, also mark the **parent task** as completed.
- Stop after each sub‑task and wait for the user’s go‑ahead.
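
An illustrative snippet of the completion protocol (the task names are hypothetical):

```
- [x] 1.0 Implement ingestion          <- marked once all sub-tasks are [x]
  - [x] 1.1 Create aggregated schema
  - [x] 1.2 Stream source rows         <- just changed from [ ] to [x]
```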

## Task List Maintenance

1. **Update the task list as you work:**
   - Mark tasks and sub‑tasks as completed (`[x]`) per the protocol above.
   - Add new tasks as they emerge.

2. **Maintain the “Relevant Files” section:**
   - List every file created or modified.
   - Give each file a one‑line description of its purpose.

## AI Instructions

When working with task lists, the AI must:

1. Regularly update the task list file after finishing any significant work.
2. Follow the completion protocol:
   - Mark each finished **sub‑task** `[x]`.
   - Mark the **parent task** `[x]` once **all** its sub‑tasks are `[x]`.
3. Add newly discovered tasks.
4. Keep “Relevant Files” accurate and up to date.
5. Before starting work, check which sub‑task is next.
6. After implementing a sub‑task, update the file and then pause for user approval.
24
book_backtest/__init__.py
Normal file
@@ -0,0 +1,24 @@
"""Book-only granular backtester package.

This package provides an event-driven backtesting engine operating solely on
order book snapshots streamed from an aggregated SQLite database. It is scoped
to long-only strategies for the initial implementation.

Modules are intentionally small and composable. The CLI entrypoint is
`book_backtest.cli` with subcommands for initializing the aggregated database,
ingesting source OKX book databases, and running a backtest.
"""

__all__ = [
    "cli",
    "db",
    "events",
    "engine",
    "strategy",
    "orders",
    "fees",
    "metrics",
    "io",
]
218
book_backtest/cli.py
Normal file
@@ -0,0 +1,218 @@
import argparse
import datetime as _dt
import time as _time
from pathlib import Path

from .db import init_aggregated_db, ingest_source_db, get_instrument_time_bounds
from .events import stream_book_events
from .engine import PortfolioState, process_event
from .strategy import DemoMomentumConfig, DemoMomentumStrategy
from .metrics import compute_metrics
from .io import write_trade_log, append_summary_row


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(
        description="Granular order-book backtester (book-only)"
    )
    sub = parser.add_subparsers(dest="command", required=True)

    # init-db
    init_p = sub.add_parser("init-db", help="Create aggregated SQLite database for book snapshots")
    init_p.add_argument("--agg-db", required=True, help="Path to aggregated database (e.g., ./okx_agg.db)")

    # ingest
    ingest_p = sub.add_parser("ingest", help="Ingest one or more OKX book SQLite file(s) into the aggregated DB")
    ingest_p.add_argument("--agg-db", required=True, help="Path to aggregated database")
    ingest_p.add_argument(
        "--db",
        action="append",
        required=True,
        help="Path to a source OKX SQLite file containing a book table. Repeat flag to add multiple files.",
    )

    # run
    run_p = sub.add_parser("run", help="Run a backtest over the aggregated DB (book-only)")
    run_p.add_argument("--agg-db", required=True, help="Path to aggregated database")
    run_p.add_argument("--instrument", required=True, help="Instrument symbol, e.g., BTC-USDT")
    run_p.add_argument("--since", required=False, help="ISO8601 or epoch milliseconds start time (defaults to earliest)")
    run_p.add_argument("--until", required=False, help="ISO8601 or epoch milliseconds end time (defaults to latest)")
    run_p.add_argument("--stoploss", type=float, default=0.05, help="Stop-loss percentage (e.g., 0.05)")
    run_p.add_argument("--maker", action="store_true", help="Use maker fees (default taker)")
    run_p.add_argument("--init-usd", type=float, default=1000.0, help="Initial USD balance")
    run_p.add_argument("--strategy", default="demo", help="Strategy to run (demo for now)")

    return parser


def cmd_init_db(agg_db: Path) -> None:
    init_aggregated_db(agg_db)
    print(f"[init-db] Aggregated DB ready at: {agg_db}")


def cmd_ingest(agg_db: Path, src_dbs: list[Path]) -> None:
    total_inserted = 0
    for src in src_dbs:
        inserted = ingest_source_db(agg_db, src)
        total_inserted += inserted
        print(f"[ingest] {inserted} rows ingested from {src}")
    print(f"[ingest] Total inserted: {total_inserted} into {agg_db}")


def cmd_run(
    agg_db: Path,
    instrument: str,
    since: str | None,
    until: str | None,
    stoploss: float,
    is_maker: bool,
    init_usd: float,
    strategy_name: str,
) -> None:
    def _iso(ms: int) -> str:
        return _dt.datetime.fromtimestamp(ms / 1000.0, tz=_dt.timezone.utc).isoformat().replace("+00:00", "Z")

    def _parse_time(s: str) -> int:
        # Epoch milliseconds
        if s.isdigit():
            return int(s)
        # Basic ISO8601; accept formats like 2024-06-09T12:00:00Z or 2024-06-09 12:00:00
        s2 = s.replace(" ", "T")
        if s2.endswith("Z"):
            s2 = s2[:-1]
        try:
            dt = _dt.datetime.fromisoformat(s2)
        except ValueError:
            raise SystemExit(f"Cannot parse time: {s}")
        # Treat naive datetimes as UTC (attach the tz explicitly so
        # timestamp() does not fall back to the local timezone)
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=_dt.timezone.utc)
        return int(dt.timestamp() * 1000)

    if since is None or until is None:
        bounds = get_instrument_time_bounds(agg_db, instrument)
        if not bounds:
            raise SystemExit("No data for instrument in aggregated DB")
        min_ms, max_ms = bounds
        if since is None:
            since_ms = min_ms
        else:
            since_ms = _parse_time(since)
        if until is None:
            until_ms = max_ms
        else:
            until_ms = _parse_time(until)
        print(f"[run] Using time range: {_iso(since_ms)} to {_iso(until_ms)}")
    else:
        since_ms = _parse_time(since)
        until_ms = _parse_time(until)

    state = PortfolioState(cash_usd=init_usd)
    if strategy_name == "demo":
        strat = DemoMomentumStrategy(DemoMomentumConfig())
    else:
        raise SystemExit(f"Unknown strategy: {strategy_name}")

    logs = []
    trade_results = []
    last_event = None
    total_span = max(1, until_ms - since_ms)
    started_at = _time.time()
    count = 0
    next_report_pct = 1
    for ev in stream_book_events(agg_db, instrument, since_ms, until_ms):
        last_event = ev
        orders = strat.on_book_event(ev)
        exec_logs = process_event(state, ev, orders, stoploss_pct=stoploss, is_maker=is_maker)
        for le in exec_logs:
            logs.append(le)
            if le.get("type") in {"sell", "stop_loss"} and "pnl" in le:
                try:
                    trade_results.append(float(le["pnl"]))
                except (TypeError, ValueError):
                    pass
        count += 1
        pct = int(min(100.0, max(0.0, (ev.timestamp_ms - since_ms) * 100.0 / total_span)))
        if pct >= next_report_pct:
            elapsed = max(1e-6, _time.time() - started_at)
            speed_k = (count / elapsed) / 1000.0
            eq = state.equity_curve[-1][1] if state.equity_curve else state.cash_usd
            print(f"\r[run] {pct:3d}% events={count} speed={speed_k:6.1f}k ev/s equity=${eq:,.2f}", end="", flush=True)
            next_report_pct = pct + 1
    if count:
        print()

    # Force-close any open position at the final event's best bid
    if last_event and state.coin > 0:
        price = last_event.best_bid
        gross = state.coin * price
        from .fees import calculate_okx_fee
        fee = calculate_okx_fee(gross, is_maker=is_maker)
        state.cash_usd += (gross - fee)
        state.fees_paid += fee
        pnl = 0.0
        if state.entry_price:
            pnl = (price - state.entry_price) / state.entry_price
        logs.append({
            "type": "forced_close",
            "time": last_event.timestamp_ms,
            "price": price,
            "usd": state.cash_usd,
            "coin": 0.0,
            "pnl": pnl,
            "fee": fee,
        })
        trade_results.append(pnl)
        state.coin = 0.0
        state.entry_price = None
        state.entry_time_ms = None

    # Outputs
    stop_str = f"{stoploss:.2f}".replace(".", "p")
    logfile = f"trade_log_book_sl{stop_str}.csv"
    write_trade_log(logs, logfile)

    metrics = compute_metrics(state.equity_curve, trade_results)
    summary_row = {
        "timeframe": "book",
        "stop_loss": stoploss,
        "total_return": f"{metrics['total_return']*100:.2f}%",
        "max_drawdown": f"{metrics['max_drawdown']*100:.2f}%",
        "sharpe_ratio": f"{metrics['sharpe_ratio']:.2f}",
        "win_rate": f"{metrics['win_rate']*100:.2f}%",
        "num_trades": int(metrics["num_trades"]),
        "final_equity": f"${state.equity_curve[-1][1]:.2f}" if state.equity_curve else "$0.00",
        "initial_equity": f"${state.equity_curve[0][1]:.2f}" if state.equity_curve else "$0.00",
        "num_stop_losses": sum(1 for rec in logs if rec.get("type") == "stop_loss"),
        "total_fees": f"${state.fees_paid:.4f}",
    }
    append_summary_row(summary_row)


def main() -> None:
    parser = build_parser()
    args = parser.parse_args()

    cmd = args.command
    if cmd == "init-db":
        cmd_init_db(Path(args.agg_db))
    elif cmd == "ingest":
        srcs = [Path(p) for p in (args.db or [])]
        cmd_ingest(Path(args.agg_db), srcs)
    elif cmd == "run":
        cmd_run(
            Path(args.agg_db),
            args.instrument,
            getattr(args, "since", None),
            getattr(args, "until", None),
            args.stoploss,
            args.maker,
            args.init_usd,
            args.strategy,
        )
    else:
        parser.error("Unknown command")


if __name__ == "__main__":
    main()
89
book_backtest/db.py
Normal file
@@ -0,0 +1,89 @@
from __future__ import annotations

import sqlite3
from pathlib import Path
from typing import Iterator, Optional, Tuple


AGG_SCHEMA = """
CREATE TABLE IF NOT EXISTS book (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    instrument TEXT NOT NULL,
    bids TEXT NOT NULL,
    asks TEXT NOT NULL,
    timestamp TEXT NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_book_instrument_ts ON book (instrument, timestamp);
CREATE UNIQUE INDEX IF NOT EXISTS ux_book_inst_ts ON book (instrument, timestamp);
"""


def init_aggregated_db(db_path: Path) -> None:
    """Create the aggregated book table and indexes if they do not exist."""
    conn = sqlite3.connect(str(db_path))
    try:
        cur = conn.cursor()
        cur.executescript(AGG_SCHEMA)
        conn.commit()
    finally:
        conn.close()


def stream_source_book_rows(src_db: Path) -> Iterator[Tuple[str, str, str, str]]:
    """Yield (instrument, bids, asks, timestamp) rows from a source DB in time order."""
    conn = sqlite3.connect(str(src_db))
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT instrument, bids, asks, timestamp FROM book ORDER BY timestamp ASC"
        )
        for row in cur:
            yield row  # (instrument, bids, asks, timestamp)
    finally:
        conn.close()


def ingest_source_db(agg_db: Path, src_db: Path) -> int:
    """Copy book rows from src_db into the aggregated DB; return the insert count."""
    conn = sqlite3.connect(str(agg_db))
    inserted = 0
    try:
        cur = conn.cursor()
        cur.execute("PRAGMA journal_mode = WAL;")
        cur.execute("PRAGMA synchronous = NORMAL;")
        cur.execute("PRAGMA temp_store = MEMORY;")
        cur.execute("PRAGMA cache_size = -20000;")  # ~20MB page cache

        for instrument, bids, asks, ts in stream_source_book_rows(src_db):
            try:
                cur.execute(
                    "INSERT OR IGNORE INTO book (instrument, bids, asks, timestamp) VALUES (?, ?, ?, ?)",
                    (instrument, bids, asks, ts),
                )
                if cur.rowcount:
                    inserted += 1
            except sqlite3.Error:
                # Skip malformed rows silently during ingestion
                continue
            # Commit in batches; guard against inserted == 0, which would
            # otherwise trigger a commit on every ignored row
            if inserted and inserted % 10000 == 0:
                conn.commit()
        conn.commit()
    finally:
        conn.close()
    return inserted


def get_instrument_time_bounds(agg_db: Path, instrument: str) -> Optional[Tuple[int, int]]:
    """Return (min_timestamp_ms, max_timestamp_ms) for an instrument, or None if empty."""
    conn = sqlite3.connect(str(agg_db))
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT MIN(CAST(timestamp AS INTEGER)), MAX(CAST(timestamp AS INTEGER)) FROM book WHERE instrument = ?",
            (instrument,),
        )
        row = cur.fetchone()
        if not row or row[0] is None or row[1] is None:
            return None
        return int(row[0]), int(row[1])
    finally:
        conn.close()
114
book_backtest/engine.py
Normal file
@@ -0,0 +1,114 @@
from __future__ import annotations

from dataclasses import dataclass, field
from typing import Any, Dict

from .events import BookEvent
from .orders import MarketBuy, MarketSell
from .fees import calculate_okx_fee


@dataclass
class PortfolioState:
    """Mutable backtest portfolio: cash, position, entry info, and equity curve."""

    cash_usd: float
    coin: float = 0.0
    entry_price: float | None = None
    entry_time_ms: int | None = None
    fees_paid: float = 0.0
    equity_curve: list[tuple[int, float]] = field(default_factory=list)


def process_event(
    state: PortfolioState,
    event: BookEvent,
    orders: list,
    stoploss_pct: float,
    is_maker: bool,
) -> list[Dict[str, Any]]:
    """Apply orders at top-of-book and update stop-loss logic.

    Returns log entries for any executions that occurred.
    """
    logs: list[Dict[str, Any]] = []

    # Evaluate stop-loss for an open long position
    if state.coin > 0 and state.entry_price is not None:
        threshold = state.entry_price * (1.0 - stoploss_pct)
        if event.best_bid <= threshold:
            # Exit at the worse of best_bid and threshold to emulate slippage
            exit_price = min(event.best_bid, threshold)
            gross = state.coin * exit_price
            fee = calculate_okx_fee(gross, is_maker=is_maker)
            # Add proceeds to existing cash (assignment here would discard
            # any cash left over after the entry)
            state.cash_usd += (gross - fee)
            state.fees_paid += fee
            pnl = (exit_price - state.entry_price) / state.entry_price
            logs.append({
                "type": "stop_loss",
                "time": event.timestamp_ms,
                "price": exit_price,
                "usd": state.cash_usd,
                "coin": 0.0,
                "pnl": pnl,
                "fee": fee,
            })
            state.coin = 0.0
            state.entry_price = None
            state.entry_time_ms = None

    # Apply incoming orders
    for order in orders:
        if isinstance(order, MarketBuy):
            if state.cash_usd <= 0:
                continue
            # Execute at best ask
            price = event.best_ask
            notional = min(order.usd_notional, state.cash_usd)
            qty = notional / price if price > 0 else 0.0
            fee = calculate_okx_fee(notional, is_maker=is_maker)
            # Deduct both notional and fee from cash
            state.cash_usd -= (notional + fee)
            state.fees_paid += fee
            state.coin += qty
            state.entry_price = price
            state.entry_time_ms = event.timestamp_ms
            logs.append({
                "type": "buy",
                "time": event.timestamp_ms,
                "price": price,
                "usd": state.cash_usd,
                "coin": state.coin,
                "fee": fee,
            })
        elif isinstance(order, MarketSell):
            if state.coin <= 0:
                continue
            price = event.best_bid
            qty = min(order.amount, state.coin)
            gross = qty * price
            fee = calculate_okx_fee(gross, is_maker=is_maker)
            state.cash_usd += (gross - fee)
            state.fees_paid += fee
            state.coin -= qty
            pnl = 0.0
            if state.entry_price:
                pnl = (price - state.entry_price) / state.entry_price
            logs.append({
                "type": "sell",
                "time": event.timestamp_ms,
                "price": price,
                "usd": state.cash_usd,
                "coin": state.coin,
                "pnl": pnl,
                "fee": fee,
            })
            if state.coin <= 0:
                state.entry_price = None
                state.entry_time_ms = None

    # Track equity at each event using best bid for mark-to-market
    mark_price = event.best_bid
    equity = state.cash_usd + state.coin * mark_price
    state.equity_curve.append((event.timestamp_ms, equity))
    return logs
21
pyproject.toml
Normal file
@@ -0,0 +1,21 @@
[project]
name = "granular-backtest"
version = "0.1.0"
description = "Granular order-book backtesting tools"
readme = "README.md"
requires-python = ">=3.11"
dependencies = []

[project.scripts]
book-backtest = "book_backtest.cli:main"

[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

[tool.setuptools.packages.find]
where = ["."]
include = ["book_backtest*"]
exclude = ["lowkey_backtest*"]