diff --git a/.cursor/rules/always-global.mdc b/.cursor/rules/always-global.mdc new file mode 100644 index 0000000..1997270 --- /dev/null +++ b/.cursor/rules/always-global.mdc @@ -0,0 +1,61 @@ +--- +description: Global development standards and AI interaction principles +globs: +alwaysApply: true +--- + +# Rule: Always Apply - Global Development Standards + +## AI Interaction Principles + +### Step-by-Step Development +- **NEVER** generate large blocks of code without explanation +- **ALWAYS** ask "provide your plan in a concise bullet list and wait for my confirmation before proceeding" +- Break complex tasks into smaller, manageable pieces (≤250 lines per file, ≤50 lines per function) +- Explain your reasoning step-by-step before writing code +- Wait for explicit approval before moving to the next sub-task + +### Context Awareness +- **ALWAYS** reference existing code patterns and data structures before suggesting new approaches +- Ask about existing conventions before implementing new functionality +- Preserve established architectural decisions unless explicitly asked to change them +- Maintain consistency with existing naming conventions and code style + +## Code Quality Standards + +### File and Function Limits +- **Maximum file size**: 250 lines +- **Maximum function size**: 50 lines +- **Maximum complexity**: If a function does more than one main thing, break it down +- **Naming**: Use clear, descriptive names that explain purpose + +### Documentation Requirements +- **Every public function** must have a docstring explaining purpose, parameters, and return value +- **Every class** must have a class-level docstring +- **Complex logic** must have inline comments explaining the "why", not just the "what" +- **API endpoints** must be documented with request/response examples + +### Error Handling +- **ALWAYS** include proper error handling for external dependencies +- **NEVER** use bare except clauses +- Provide meaningful error messages that help with debugging +- Log errors appropriately for the application context + +## Security and Best Practices +- **NEVER** hardcode credentials, API keys, or sensitive data +- **ALWAYS** validate user inputs +- Use parameterized queries for database operations +- Follow the principle of least privilege +- Implement proper authentication and authorization + +## Testing Requirements +- **Every implementation** should have corresponding unit tests +- **Every API endpoint** should have integration tests +- Test files should be placed alongside the code they test +- Use descriptive test names that explain what is being tested + +## Response Format +- Be concise and avoid unnecessary repetition +- Focus on actionable information +- Provide examples when explaining complex concepts +- Ask clarifying questions when requirements are ambiguous \ No newline at end of file diff --git a/.cursor/rules/architecture.mdc b/.cursor/rules/architecture.mdc new file mode 100644 index 0000000..9fbc494 --- /dev/null +++ b/.cursor/rules/architecture.mdc @@ -0,0 +1,237 @@ +--- +description: Modular design principles and architecture guidelines for scalable development +globs: +alwaysApply: false +--- + +# Rule: Architecture and Modular Design + +## Goal +Maintain a clean, modular architecture that scales effectively and prevents the complexity issues that arise in AI-assisted development. + +## Core Architecture Principles + +### 1. 
Modular Design +- **Single Responsibility**: Each module has one clear purpose +- **Loose Coupling**: Modules depend on interfaces, not implementations +- **High Cohesion**: Related functionality is grouped together +- **Clear Boundaries**: Module interfaces are well-defined and stable + +### 2. Size Constraints +- **Files**: Maximum 250 lines per file +- **Functions**: Maximum 50 lines per function +- **Classes**: Maximum 300 lines per class +- **Modules**: Maximum 10 public functions/classes per module + +### 3. Dependency Management +- **Layer Dependencies**: Higher layers depend on lower layers only +- **No Circular Dependencies**: Modules cannot depend on each other cyclically +- **Interface Segregation**: Depend on specific interfaces, not broad ones +- **Dependency Injection**: Pass dependencies rather than creating them internally + +## Modular Architecture Patterns + +### Layer Structure +``` +src/ +├── presentation/ # UI, API endpoints, CLI interfaces +├── application/ # Business logic, use cases, workflows +├── domain/ # Core business entities and rules +├── infrastructure/ # Database, external APIs, file systems +└── shared/ # Common utilities, constants, types +``` + +### Module Organization +``` +module_name/ +├── __init__.py # Public interface exports +├── core.py # Main module logic +├── types.py # Type definitions and interfaces +├── utils.py # Module-specific utilities +├── tests/ # Module tests +└── README.md # Module documentation +``` + +## Design Patterns for AI Development + +### 1. Repository Pattern +Separate data access from business logic: + +```python +# Domain interface +class UserRepository: + def get_by_id(self, user_id: str) -> User: ... + def save(self, user: User) -> None: ... + +# Infrastructure implementation +class SqlUserRepository(UserRepository): + def get_by_id(self, user_id: str) -> User: + # Database-specific implementation + pass +``` + +### 2. Service Pattern +Encapsulate business logic in focused services: + +```python +class UserService: + def __init__(self, user_repo: UserRepository): + self._user_repo = user_repo + + def create_user(self, data: UserData) -> User: + # Validation and business logic + # Single responsibility: user creation + pass +``` + +### 3. 
Factory Pattern +Create complex objects with clear interfaces: + +```python +class DatabaseFactory: + @staticmethod + def create_connection(config: DatabaseConfig) -> Connection: + # Handle different database types + # Encapsulate connection complexity + pass +``` + +## Architecture Decision Guidelines + +### When to Create New Modules +Create a new module when: +- **Functionality** exceeds size constraints (250 lines) +- **Responsibility** is distinct from existing modules +- **Dependencies** would create circular references +- **Reusability** would benefit other parts of the system +- **Testing** requires isolated test environments + +### When to Split Existing Modules +Split modules when: +- **File size** exceeds 250 lines +- **Multiple responsibilities** are evident +- **Testing** becomes difficult due to complexity +- **Dependencies** become too numerous +- **Change frequency** differs significantly between parts + +### Module Interface Design +```python +# Good: Clear, focused interface +class PaymentProcessor: + def process_payment(self, amount: Money, method: PaymentMethod) -> PaymentResult: + """Process a single payment transaction.""" + pass + +# Bad: Unfocused, kitchen-sink interface +class PaymentManager: + def process_payment(self, ...): pass + def validate_card(self, ...): pass + def send_receipt(self, ...): pass + def update_inventory(self, ...): pass # Wrong responsibility! +``` + +## Architecture Validation + +### Architecture Review Checklist +- [ ] **Dependencies flow in one direction** (no cycles) +- [ ] **Layers are respected** (presentation doesn't call infrastructure directly) +- [ ] **Modules have single responsibility** +- [ ] **Interfaces are stable** and well-defined +- [ ] **Size constraints** are maintained +- [ ] **Testing** is straightforward for each module + +### Red Flags +- **God Objects**: Classes/modules that do too many things +- **Circular Dependencies**: Modules that depend on each other +- **Deep Inheritance**: More than 3 levels of inheritance +- **Large Interfaces**: Interfaces with more than 7 methods +- **Tight Coupling**: Modules that know too much about each other's internals + +## Refactoring Guidelines + +### When to Refactor +- Module exceeds size constraints +- Code duplication across modules +- Difficult to test individual components +- New features require changing multiple unrelated modules +- Performance bottlenecks due to poor separation + +### Refactoring Process +1. **Identify** the specific architectural problem +2. **Design** the target architecture +3. **Create tests** to verify current behavior +4. **Implement changes** incrementally +5. **Validate** that tests still pass +6. **Update documentation** to reflect changes + +### Safe Refactoring Practices +- **One change at a time**: Don't mix refactoring with new features +- **Tests first**: Ensure comprehensive test coverage before refactoring +- **Incremental changes**: Small steps with verification at each stage +- **Backward compatibility**: Maintain existing interfaces during transition +- **Documentation updates**: Keep architecture documentation current + +## Architecture Documentation + +### Architecture Decision Records (ADRs) +Document significant decisions in `./docs/decisions/`: + +```markdown +# ADR-003: Service Layer Architecture + +## Status +Accepted + +## Context +As the application grows, business logic is scattered across controllers and models. + +## Decision +Implement a service layer to encapsulate business logic. 
+ +## Consequences +**Positive:** +- Clear separation of concerns +- Easier testing of business logic +- Better reusability across different interfaces + +**Negative:** +- Additional abstraction layer +- More files to maintain +``` + +### Module Documentation Template +```markdown +# Module: [Name] + +## Purpose +What this module does and why it exists. + +## Dependencies +- **Imports from**: List of modules this depends on +- **Used by**: List of modules that depend on this one +- **External**: Third-party dependencies + +## Public Interface +```python +# Key functions and classes exposed by this module +``` + +## Architecture Notes +- Design patterns used +- Important architectural decisions +- Known limitations or constraints +``` + +## Migration Strategies + +### Legacy Code Integration +- **Strangler Fig Pattern**: Gradually replace old code with new modules +- **Adapter Pattern**: Create interfaces to integrate old and new code +- **Facade Pattern**: Simplify complex legacy interfaces + +### Gradual Modernization +1. **Identify boundaries** in existing code +2. **Extract modules** one at a time +3. **Create interfaces** for each extracted module +4. **Test thoroughly** at each step +5. **Update documentation** continuously \ No newline at end of file diff --git a/.cursor/rules/code-review.mdc b/.cursor/rules/code-review.mdc new file mode 100644 index 0000000..8b0808c --- /dev/null +++ b/.cursor/rules/code-review.mdc @@ -0,0 +1,123 @@ +--- +description: AI-generated code review checklist and quality assurance guidelines +globs: +alwaysApply: false +--- + +# Rule: Code Review and Quality Assurance + +## Goal +Establish systematic review processes for AI-generated code to maintain quality, security, and maintainability standards. + +## AI Code Review Checklist + +### Pre-Implementation Review +Before accepting any AI-generated code: + +1. **Understand the Code** + - [ ] Can you explain what the code does in your own words? + - [ ] Do you understand each function and its purpose? + - [ ] Are there any "magic" values or unexplained logic? + - [ ] Does the code solve the actual problem stated? + +2. **Architecture Alignment** + - [ ] Does the code follow established project patterns? + - [ ] Is it consistent with existing data structures? + - [ ] Does it integrate cleanly with existing components? + - [ ] Are new dependencies justified and necessary? + +3. **Code Quality** + - [ ] Are functions smaller than 50 lines? + - [ ] Are files smaller than 250 lines? + - [ ] Are variable and function names descriptive? + - [ ] Is the code DRY (Don't Repeat Yourself)? + +### Security Review +- [ ] **Input Validation**: All user inputs are validated and sanitized +- [ ] **Authentication**: Proper authentication checks are in place +- [ ] **Authorization**: Access controls are implemented correctly +- [ ] **Data Protection**: Sensitive data is handled securely +- [ ] **SQL Injection**: Database queries use parameterized statements +- [ ] **XSS Prevention**: Output is properly escaped +- [ ] **Error Handling**: Errors don't leak sensitive information + +### Integration Review +- [ ] **Existing Functionality**: New code doesn't break existing features +- [ ] **Data Consistency**: Database changes maintain referential integrity +- [ ] **API Compatibility**: Changes don't break existing API contracts +- [ ] **Performance Impact**: New code doesn't introduce performance bottlenecks +- [ ] **Testing Coverage**: Appropriate tests are included + +## Review Process + +### Step 1: Initial Code Analysis +1. 
**Read through the entire generated code** before running it +2. **Identify patterns** that don't match existing codebase +3. **Check dependencies** - are new packages really needed? +4. **Verify logic flow** - does the algorithm make sense? + +### Step 2: Security and Error Handling Review +1. **Trace data flow** from input to output +2. **Identify potential failure points** and verify error handling +3. **Check for security vulnerabilities** using the security checklist +4. **Verify proper logging** and monitoring implementation + +### Step 3: Integration Testing +1. **Test with existing code** to ensure compatibility +2. **Run existing test suite** to verify no regressions +3. **Test edge cases** and error conditions +4. **Verify performance** under realistic conditions + +## Common AI Code Issues to Watch For + +### Overcomplication Patterns +- **Unnecessary abstractions**: AI creating complex patterns for simple tasks +- **Over-engineering**: Solutions that are more complex than needed +- **Redundant code**: AI recreating existing functionality +- **Inappropriate design patterns**: Using patterns that don't fit the use case + +### Context Loss Indicators +- **Inconsistent naming**: Different conventions from existing code +- **Wrong data structures**: Using different patterns than established +- **Ignored existing functions**: Reimplementing existing functionality +- **Architectural misalignment**: Code that doesn't fit the overall design + +### Technical Debt Indicators +- **Magic numbers**: Hardcoded values without explanation +- **Poor error messages**: Generic or unhelpful error handling +- **Missing documentation**: Code without adequate comments +- **Tight coupling**: Components that are too interdependent + +## Quality Gates + +### Mandatory Reviews +All AI-generated code must pass these gates before acceptance: + +1. **Security Review**: No security vulnerabilities detected +2. **Integration Review**: Integrates cleanly with existing code +3. **Performance Review**: Meets performance requirements +4. **Maintainability Review**: Code can be easily modified by team members +5. 
**Documentation Review**: Adequate documentation is provided + +### Acceptance Criteria +- [ ] Code is understandable by any team member +- [ ] Integration requires minimal changes to existing code +- [ ] Security review passes all checks +- [ ] Performance meets established benchmarks +- [ ] Documentation is complete and accurate + +## Rejection Criteria +Reject AI-generated code if: +- Security vulnerabilities are present +- Code is too complex for the problem being solved +- Integration requires major refactoring of existing code +- Code duplicates existing functionality without justification +- Documentation is missing or inadequate + +## Review Documentation +For each review, document: +- Issues found and how they were resolved +- Performance impact assessment +- Security concerns and mitigations +- Integration challenges and solutions +- Recommendations for future similar tasks \ No newline at end of file diff --git a/.cursor/rules/context-management.mdc b/.cursor/rules/context-management.mdc new file mode 100644 index 0000000..399658a --- /dev/null +++ b/.cursor/rules/context-management.mdc @@ -0,0 +1,93 @@ +--- +description: Context management for maintaining codebase awareness and preventing context drift +globs: +alwaysApply: false +--- + +# Rule: Context Management + +## Goal +Maintain comprehensive project context to prevent context drift and ensure AI-generated code integrates seamlessly with existing codebase patterns and architecture. + +## Context Documentation Requirements + +### PRD.md file documentation +1. **Project Overview** + - Business objectives and goals + - Target users and use cases + - Key success metrics + +### CONTEXT.md File Structure +Every project must maintain a `CONTEXT.md` file in the root directory with: + +1. **Architecture Overview** + - High-level system architecture + - Key design patterns used + - Database schema overview + - API structure and conventions + +2. **Technology Stack** + - Programming languages and versions + - Frameworks and libraries + - Database systems + - Development and deployment tools + +3. **Coding Conventions** + - Naming conventions + - File organization patterns + - Code structure preferences + - Import/export patterns + +4. **Current Implementation Status** + - Completed features + - Work in progress + - Known technical debt + - Planned improvements + +## Context Maintenance Protocol + +### Before Every Coding Session +1. **Review CONTEXT.md and PRD.md** to understand current project state +2. **Scan recent changes** in git history to understand latest patterns +3. **Identify existing patterns** for similar functionality before implementing new features +4. **Ask for clarification** if existing patterns are unclear or conflicting + +### During Development +1. **Reference existing code** when explaining implementation approaches +2. **Maintain consistency** with established patterns and conventions +3. **Update CONTEXT.md** when making architectural decisions +4. 
**Document deviations** from established patterns with reasoning + +### Context Preservation Strategies +- **Incremental development**: Build on existing patterns rather than creating new ones +- **Pattern consistency**: Use established data structures and function signatures +- **Integration awareness**: Consider how new code affects existing functionality +- **Dependency management**: Understand existing dependencies before adding new ones + +## Context Prompting Best Practices + +### Effective Context Sharing +- Include relevant sections of CONTEXT.md in prompts for complex tasks +- Reference specific existing files when asking for similar functionality +- Provide examples of existing patterns when requesting new implementations +- Share recent git commit messages to understand latest changes + +### Context Window Optimization +- Prioritize most relevant context for current task +- Use @filename references to include specific files +- Break large contexts into focused, task-specific chunks +- Update context references as project evolves + +## Red Flags - Context Loss Indicators +- AI suggests patterns that conflict with existing code +- New implementations ignore established conventions +- Proposed solutions don't integrate with existing architecture +- Code suggestions require significant refactoring of existing functionality + +## Recovery Protocol +When context loss is detected: +1. **Stop development** and review CONTEXT.md +2. **Analyze existing codebase** for established patterns +3. **Update context documentation** with missing information +4. **Restart task** with proper context provided +5. **Test integration** with existing code before proceeding \ No newline at end of file diff --git a/.cursor/rules/create-prd.mdc b/.cursor/rules/create-prd.mdc new file mode 100644 index 0000000..046dfa6 --- /dev/null +++ b/.cursor/rules/create-prd.mdc @@ -0,0 +1,67 @@ +--- +description: Creating PRD for a project or specific task/function +globs: +alwaysApply: false +--- +--- +description: Creating PRD for a project or specific task/function +globs: +alwaysApply: false +--- +# Rule: Generating a Product Requirements Document (PRD) + +## Goal + +To guide an AI assistant in creating a detailed Product Requirements Document (PRD) in Markdown format, based on an initial user prompt. The PRD should be clear, actionable, and suitable for a junior developer to understand and implement the feature. + +## Process + +1. **Receive Initial Prompt:** The user provides a brief description or request for a new feature or functionality. +2. **Ask Clarifying Questions:** Before writing the PRD, the AI *must* ask clarifying questions to gather sufficient detail. The goal is to understand the "what" and "why" of the feature, not necessarily the "how" (which the developer will figure out). +3. **Generate PRD:** Based on the initial prompt and the user's answers to the clarifying questions, generate a PRD using the structure outlined below. +4. **Save PRD:** Save the generated document as `prd-[feature-name].md` inside the `/tasks` directory. + +## Clarifying Questions (Examples) + +The AI should adapt its questions based on the prompt, but here are some common areas to explore: + +* **Problem/Goal:** "What problem does this feature solve for the user?" or "What is the main goal we want to achieve with this feature?" +* **Target User:** "Who is the primary user of this feature?" +* **Core Functionality:** "Can you describe the key actions a user should be able to perform with this feature?" 
+* **User Stories:** "Could you provide a few user stories? (e.g., As a [type of user], I want to [perform an action] so that [benefit].)" +* **Acceptance Criteria:** "How will we know when this feature is successfully implemented? What are the key success criteria?" +* **Scope/Boundaries:** "Are there any specific things this feature *should not* do (non-goals)?" +* **Data Requirements:** "What kind of data does this feature need to display or manipulate?" +* **Design/UI:** "Are there any existing design mockups or UI guidelines to follow?" or "Can you describe the desired look and feel?" +* **Edge Cases:** "Are there any potential edge cases or error conditions we should consider?" + +## PRD Structure + +The generated PRD should include the following sections: + +1. **Introduction/Overview:** Briefly describe the feature and the problem it solves. State the goal. +2. **Goals:** List the specific, measurable objectives for this feature. +3. **User Stories:** Detail the user narratives describing feature usage and benefits. +4. **Functional Requirements:** List the specific functionalities the feature must have. Use clear, concise language (e.g., "The system must allow users to upload a profile picture."). Number these requirements. +5. **Non-Goals (Out of Scope):** Clearly state what this feature will *not* include to manage scope. +6. **Design Considerations (Optional):** Link to mockups, describe UI/UX requirements, or mention relevant components/styles if applicable. +7. **Technical Considerations (Optional):** Mention any known technical constraints, dependencies, or suggestions (e.g., "Should integrate with the existing Auth module"). +8. **Success Metrics:** How will the success of this feature be measured? (e.g., "Increase user engagement by 10%", "Reduce support tickets related to X"). +9. **Open Questions:** List any remaining questions or areas needing further clarification. + +## Target Audience + +Assume the primary reader of the PRD is a **junior developer**. Therefore, requirements should be explicit, unambiguous, and avoid jargon where possible. Provide enough detail for them to understand the feature's purpose and core logic. + +## Output + +* **Format:** Markdown (`.md`) +* **Location:** `/tasks/` +* **Filename:** `prd-[feature-name].md` + +## Final instructions + +1. Do NOT start implementing the PRD +2. Make sure to ask the user clarifying questions + +3. Take the user's answers to the clarifying questions and improve the PRD \ No newline at end of file diff --git a/.cursor/rules/documentation.mdc b/.cursor/rules/documentation.mdc new file mode 100644 index 0000000..4388350 --- /dev/null +++ b/.cursor/rules/documentation.mdc @@ -0,0 +1,244 @@ +--- +description: Documentation standards for code, architecture, and development decisions +globs: +alwaysApply: false +--- + +# Rule: Documentation Standards + +## Goal +Maintain comprehensive, up-to-date documentation that supports development, onboarding, and long-term maintenance of the codebase. + +## Documentation Hierarchy + +### 1. Project Level Documentation (in ./docs/) +- **README.md**: Project overview, setup instructions, basic usage +- **CONTEXT.md**: Current project state, architecture decisions, patterns +- **CHANGELOG.md**: Version history and significant changes +- **CONTRIBUTING.md**: Development guidelines and processes +- **API.md**: API endpoints, request/response formats, authentication + +### 2. 
Module Level Documentation (in ./docs/modules/) +- **[module-name].md**: Purpose, public interfaces, usage examples +- **dependencies.md**: External dependencies and their purposes +- **architecture.md**: Module relationships and data flow + +### 3. Code Level Documentation +- **Docstrings**: Function and class documentation +- **Inline comments**: Complex logic explanations +- **Type hints**: Clear parameter and return types +- **README files**: Directory-specific instructions + +## Documentation Standards + +### Code Documentation +```python +def process_user_data(user_id: str, data: dict) -> UserResult: + """ + Process and validate user data before storage. + + Args: + user_id: Unique identifier for the user + data: Dictionary containing user information to process + + Returns: + UserResult: Processed user data with validation status + + Raises: + ValidationError: When user data fails validation + DatabaseError: When storage operation fails + + Example: + >>> result = process_user_data("123", {"name": "John", "email": "john@example.com"}) + >>> print(result.status) + 'valid' + """ +``` + +### API Documentation Format +```markdown +### POST /api/users + +Create a new user account. + +**Request:** +```json +{ + "name": "string (required)", + "email": "string (required, valid email)", + "age": "number (optional, min: 13)" +} +``` + +**Response (201):** +```json +{ + "id": "uuid", + "name": "string", + "email": "string", + "created_at": "iso_datetime" +} +``` + +**Errors:** +- 400: Invalid input data +- 409: Email already exists +``` + +### Architecture Decision Records (ADRs) +Document significant architecture decisions in `./docs/decisions/`: + +```markdown +# ADR-001: Database Choice - PostgreSQL + +## Status +Accepted + +## Context +We need to choose a database for storing user data and application state. + +## Decision +We will use PostgreSQL as our primary database. 
+ +## Consequences +**Positive:** +- ACID compliance ensures data integrity +- Rich query capabilities with SQL +- Good performance for our expected load + +**Negative:** +- More complex setup than simpler alternatives +- Requires SQL knowledge from team members + +## Alternatives Considered +- MongoDB: Rejected due to consistency requirements +- SQLite: Rejected due to scalability needs +``` + +## Documentation Maintenance + +### When to Update Documentation + +#### Always Update: +- **API changes**: Any modification to public interfaces +- **Architecture changes**: New patterns, data structures, or workflows +- **Configuration changes**: Environment variables, deployment settings +- **Dependencies**: Adding, removing, or upgrading packages +- **Business logic changes**: Core functionality modifications + +#### Update Weekly: +- **CONTEXT.md**: Current development status and priorities +- **Known issues**: Bug reports and workarounds +- **Performance notes**: Bottlenecks and optimization opportunities + +#### Update per Release: +- **CHANGELOG.md**: User-facing changes and improvements +- **Version documentation**: Breaking changes and migration guides +- **Examples and tutorials**: Keep sample code current + +### Documentation Quality Checklist + +#### Completeness +- [ ] Purpose and scope clearly explained +- [ ] All public interfaces documented +- [ ] Examples provided for complex usage +- [ ] Error conditions and handling described +- [ ] Dependencies and requirements listed + +#### Accuracy +- [ ] Code examples are tested and working +- [ ] Links point to correct locations +- [ ] Version numbers are current +- [ ] Screenshots reflect current UI + +#### Clarity +- [ ] Written for the intended audience +- [ ] Technical jargon is explained +- [ ] Step-by-step instructions are clear +- [ ] Visual aids used where helpful + +## Documentation Automation + +### Auto-Generated Documentation +- **API docs**: Generate from code annotations +- **Type documentation**: Extract from type hints +- **Module dependencies**: Auto-update from imports +- **Test coverage**: Include coverage reports + +### Documentation Testing +```python +# Test that code examples in documentation work +def test_documentation_examples(): + """Verify code examples in docs actually work.""" + # Test examples from README.md + # Test API examples from docs/API.md + # Test configuration examples +``` + +## Documentation Templates + +### New Module Documentation Template +```markdown +# Module: [Name] + +## Purpose +Brief description of what this module does and why it exists. + +## Public Interface +### Functions +- `function_name(params)`: Description and example + +### Classes +- `ClassName`: Purpose and basic usage + +## Usage Examples +```python +# Basic usage example +``` + +## Dependencies +- Internal: List of internal modules this depends on +- External: List of external packages required + +## Testing +How to run tests for this module. + +## Known Issues +Current limitations or bugs. +``` + +### API Endpoint Template +```markdown +### [METHOD] /api/endpoint + +Brief description of what this endpoint does. 
+ +**Authentication:** Required/Optional +**Rate Limiting:** X requests per minute + +**Request:** +- Headers required +- Body schema +- Query parameters + +**Response:** +- Success response format +- Error response format +- Status codes + +**Example:** +Working request/response example +``` + +## Review and Maintenance Process + +### Documentation Review +- Include documentation updates in code reviews +- Verify examples still work with code changes +- Check for broken links and outdated information +- Ensure consistency with current implementation + +### Regular Audits +- Monthly review of documentation accuracy +- Quarterly assessment of documentation completeness +- Annual review of documentation structure and organization \ No newline at end of file diff --git a/.cursor/rules/enhanced-task-list.mdc b/.cursor/rules/enhanced-task-list.mdc new file mode 100644 index 0000000..b2272e8 --- /dev/null +++ b/.cursor/rules/enhanced-task-list.mdc @@ -0,0 +1,207 @@ +--- +description: Enhanced task list management with quality gates and iterative workflow integration +globs: +alwaysApply: false +--- + +# Rule: Enhanced Task List Management + +## Goal +Manage task lists with integrated quality gates and iterative workflow to prevent context loss and ensure sustainable development. + +## Task Implementation Protocol + +### Pre-Implementation Check +Before starting any sub-task: +- [ ] **Context Review**: Have you reviewed CONTEXT.md and relevant documentation? +- [ ] **Pattern Identification**: Do you understand existing patterns to follow? +- [ ] **Integration Planning**: Do you know how this will integrate with existing code? +- [ ] **Size Validation**: Is this task small enough (≤50 lines, ≤250 lines per file)? + +### Implementation Process +1. **One sub-task at a time**: Do **NOT** start the next sub‑task until you ask the user for permission and they say "yes" or "y" +2. **Step-by-step execution**: + - Plan the approach in bullet points + - Wait for approval + - Implement the specific sub-task + - Test the implementation + - Update documentation if needed +3. **Quality validation**: Run through the code review checklist before marking complete + +### Completion Protocol +When you finish a **sub‑task**: +1. **Immediate marking**: Change `[ ]` to `[x]` +2. **Quality check**: Verify the implementation meets quality standards +3. **Integration test**: Ensure new code works with existing functionality +4. **Documentation update**: Update relevant files if needed +5. **Parent task check**: If **all** subtasks underneath a parent task are now `[x]`, also mark the **parent task** as completed +6. 
**Stop and wait**: Get user approval before proceeding to next sub-task + +## Enhanced Task List Structure + +### Task File Header +```markdown +# Task List: [Feature Name] + +**Source PRD**: `prd-[feature-name].md` +**Status**: In Progress / Complete / Blocked +**Context Last Updated**: [Date] +**Architecture Review**: Required / Complete / N/A + +## Quick Links +- [Context Documentation](./CONTEXT.md) +- [Architecture Guidelines](./docs/architecture.md) +- [Related Files](#relevant-files) +``` + +### Task Format with Quality Gates +```markdown +- [ ] 1.0 Parent Task Title + - **Quality Gate**: Architecture review required + - **Dependencies**: List any dependencies + - [ ] 1.1 [Sub-task description 1.1] + - **Size estimate**: [Small/Medium/Large] + - **Pattern reference**: [Reference to existing pattern] + - **Test requirements**: [Unit/Integration/Both] + - [ ] 1.2 [Sub-task description 1.2] + - **Integration points**: [List affected components] + - **Risk level**: [Low/Medium/High] +``` + +## Relevant Files Management + +### Enhanced File Tracking +```markdown +## Relevant Files + +### Implementation Files +- `path/to/file1.ts` - Brief description of purpose and role + - **Status**: Created / Modified / Needs Review + - **Last Modified**: [Date] + - **Review Status**: Pending / Approved / Needs Changes + +### Test Files +- `path/to/file1.test.ts` - Unit tests for file1.ts + - **Coverage**: [Percentage or status] + - **Last Run**: [Date and result] + +### Documentation Files +- `docs/module-name.md` - Module documentation + - **Status**: Up to date / Needs update / Missing + - **Last Updated**: [Date] + +### Configuration Files +- `config/setting.json` - Configuration changes + - **Environment**: [Dev/Staging/Prod affected] + - **Backup**: [Location of backup] +``` + +## Task List Maintenance + +### During Development +1. **Regular updates**: Update task status after each significant change +2. **File tracking**: Add new files as they are created or modified +3. **Dependency tracking**: Note when new dependencies between tasks emerge +4. **Risk assessment**: Flag tasks that become more complex than anticipated + +### Quality Checkpoints +At 25%, 50%, 75%, and 100% completion: +- [ ] **Architecture alignment**: Code follows established patterns +- [ ] **Performance impact**: No significant performance degradation +- [ ] **Security review**: No security vulnerabilities introduced +- [ ] **Documentation current**: All changes are documented + +### Weekly Review Process +1. **Completion assessment**: What percentage of tasks are actually complete? +2. **Quality assessment**: Are completed tasks meeting quality standards? +3. **Process assessment**: Is the iterative workflow being followed? +4. **Risk assessment**: Are there emerging risks or blockers? 
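+
+### Automating Size Checks (Optional Sketch)
+The size checkpoints above (≤250 lines per file, ≤50 lines per function) can be spot-checked mechanically rather than by eye. The sketch below is illustrative, not part of the required workflow: it assumes a Python project with sources under `src/`, and the script name and layout are placeholders; the limits mirror the size constraints used throughout these rules.
+
+```python
+# check_size_limits.py -- illustrative sketch; adjust paths and limits to your project.
+import ast
+from pathlib import Path
+
+MAX_FILE_LINES = 250
+MAX_FUNC_LINES = 50
+
+def check_file(path: Path) -> list[str]:
+    """Return size-limit violations found in a single Python file."""
+    source = path.read_text(encoding="utf-8")
+    problems = []
+    line_count = len(source.splitlines())
+    if line_count > MAX_FILE_LINES:
+        problems.append(f"{path}: {line_count} lines (limit {MAX_FILE_LINES})")
+    # Walk the AST and measure the span of each (async) function definition.
+    for node in ast.walk(ast.parse(source)):
+        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
+            length = node.end_lineno - node.lineno + 1
+            if length > MAX_FUNC_LINES:
+                problems.append(f"{path}:{node.lineno} {node.name}() is {length} lines (limit {MAX_FUNC_LINES})")
+    return problems
+
+if __name__ == "__main__":
+    for py_file in sorted(Path("src").rglob("*.py")):
+        for problem in check_file(py_file):
+            print(problem)
+```
+
+Where this fits the project, run it during the weekly review (for example with `uv run check_size_limits.py`) and turn any violations into new refactoring tasks.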
+ +## Task Status Indicators + +### Status Levels +- `[ ]` **Not Started**: Task not yet begun +- `[~]` **In Progress**: Currently being worked on +- `[?]` **Blocked**: Waiting for dependencies or decisions +- `[!]` **Needs Review**: Implementation complete but needs quality review +- `[x]` **Complete**: Finished and quality approved + +### Quality Indicators +- ✅ **Quality Approved**: Passed all quality gates +- ⚠️ **Quality Concerns**: Has issues but functional +- ❌ **Quality Failed**: Needs rework before approval +- 🔄 **Under Review**: Currently being reviewed + +### Integration Status +- 🔗 **Integrated**: Successfully integrated with existing code +- 🔧 **Integration Issues**: Problems with existing code integration +- ⏳ **Integration Pending**: Ready for integration testing + +## Emergency Procedures + +### When Tasks Become Too Complex +If a sub-task grows beyond expected scope: +1. **Stop implementation** immediately +2. **Document current state** and what was discovered +3. **Break down** the task into smaller pieces +4. **Update task list** with new sub-tasks +5. **Get approval** for the new breakdown before proceeding + +### When Context is Lost +If AI seems to lose track of project patterns: +1. **Pause development** +2. **Review CONTEXT.md** and recent changes +3. **Update context documentation** with current state +4. **Restart** with explicit pattern references +5. **Reduce task size** until context is re-established + +### When Quality Gates Fail +If implementation doesn't meet quality standards: +1. **Mark task** with `[!]` status +2. **Document specific issues** found +3. **Create remediation tasks** if needed +4. **Don't proceed** until quality issues are resolved + +## AI Instructions Integration + +### Context Awareness Commands +```markdown +**Before starting any task, run these checks:** +1. @CONTEXT.md - Review current project state +2. @architecture.md - Understand design principles +3. @code-review.md - Know quality standards +4. Look at existing similar code for patterns +``` + +### Quality Validation Commands +```markdown +**After completing any sub-task:** +1. Run code review checklist +2. Test integration with existing code +3. Update documentation if needed +4. Mark task complete only after quality approval +``` + +### Workflow Commands +```markdown +**For each development session:** +1. Review incomplete tasks and their status +2. Identify next logical sub-task to work on +3. Check dependencies and blockers +4. Follow iterative workflow process +5. Update task list with progress and findings +``` + +## Success Metrics + +### Daily Success Indicators +- Tasks are completed according to quality standards +- No sub-tasks are started without completing previous ones +- File tracking remains accurate and current +- Integration issues are caught early + +### Weekly Success Indicators +- Overall task completion rate is sustainable +- Quality issues are decreasing over time +- Context loss incidents are rare +- Team confidence in codebase remains high \ No newline at end of file diff --git a/.cursor/rules/generate-tasks.mdc b/.cursor/rules/generate-tasks.mdc new file mode 100644 index 0000000..ef2f83b --- /dev/null +++ b/.cursor/rules/generate-tasks.mdc @@ -0,0 +1,70 @@ +--- +description: Generate a task list or TODO for a user requirement or implementation. 
+globs: +alwaysApply: false +--- +--- +description: +globs: +alwaysApply: false +--- +# Rule: Generating a Task List from a PRD + +## Goal + +To guide an AI assistant in creating a detailed, step-by-step task list in Markdown format based on an existing Product Requirements Document (PRD). The task list should guide a developer through implementation. + +## Output + +- **Format:** Markdown (`.md`) +- **Location:** `/tasks/` +- **Filename:** `tasks-[prd-file-name].md` (e.g., `tasks-prd-user-profile-editing.md`) + +## Process + +1. **Receive PRD Reference:** The user points the AI to a specific PRD file +2. **Analyze PRD:** The AI reads and analyzes the functional requirements, user stories, and other sections of the specified PRD. +3. **Phase 1: Generate Parent Tasks:** Based on the PRD analysis, create the file and generate the main, high-level tasks required to implement the feature. Use your judgement on how many high-level tasks to use. It's likely to be about 5. Present these tasks to the user in the specified format (without sub-tasks yet). Inform the user: "I have generated the high-level tasks based on the PRD. Ready to generate the sub-tasks? Respond with 'Go' to proceed." +4. **Wait for Confirmation:** Pause and wait for the user to respond with "Go". +5. **Phase 2: Generate Sub-Tasks:** Once the user confirms, break down each parent task into smaller, actionable sub-tasks necessary to complete the parent task. Ensure sub-tasks logically follow from the parent task and cover the implementation details implied by the PRD. +6. **Identify Relevant Files:** Based on the tasks and PRD, identify potential files that will need to be created or modified. List these under the `Relevant Files` section, including corresponding test files if applicable. +7. **Generate Final Output:** Combine the parent tasks, sub-tasks, relevant files, and notes into the final Markdown structure. +8. **Save Task List:** Save the generated document in the `/tasks/` directory with the filename `tasks-[prd-file-name].md`, where `[prd-file-name]` matches the base name of the input PRD file (e.g., if the input was `prd-user-profile-editing.md`, the output is `tasks-prd-user-profile-editing.md`). + +## Output Format + +The generated task list _must_ follow this structure: + +```markdown +## Relevant Files + +- `path/to/potential/file1.ts` - Brief description of why this file is relevant (e.g., Contains the main component for this feature). +- `path/to/file1.test.ts` - Unit tests for `file1.ts`. +- `path/to/another/file.tsx` - Brief description (e.g., API route handler for data submission). +- `path/to/another/file.test.tsx` - Unit tests for `another/file.tsx`. +- `lib/utils/helpers.ts` - Brief description (e.g., Utility functions needed for calculations). +- `lib/utils/helpers.test.ts` - Unit tests for `helpers.ts`. + +### Notes + +- Unit tests should typically be placed alongside the code files they are testing (e.g., `MyComponent.tsx` and `MyComponent.test.tsx` in the same directory). +- Use `npx jest [optional/path/to/test/file]` to run tests. Running without a path executes all tests found by the Jest configuration. 
+ +## Tasks + +- [ ] 1.0 Parent Task Title + - [ ] 1.1 [Sub-task description 1.1] + - [ ] 1.2 [Sub-task description 1.2] +- [ ] 2.0 Parent Task Title + - [ ] 2.1 [Sub-task description 2.1] +- [ ] 3.0 Parent Task Title (may not require sub-tasks if purely structural or configuration) +``` + +## Interaction Model + +The process explicitly requires a pause after generating parent tasks to get user confirmation ("Go") before proceeding to generate the detailed sub-tasks. This ensures the high-level plan aligns with user expectations before diving into details. + +## Target Audience + + +Assume the primary reader of the task list is a **junior developer** who will implement the feature. \ No newline at end of file diff --git a/.cursor/rules/iterative-workflow.mdc b/.cursor/rules/iterative-workflow.mdc new file mode 100644 index 0000000..65681ca --- /dev/null +++ b/.cursor/rules/iterative-workflow.mdc @@ -0,0 +1,236 @@ +--- +description: Iterative development workflow for AI-assisted coding +globs: +alwaysApply: false +--- + +# Rule: Iterative Development Workflow + +## Goal +Establish a structured, iterative development process that prevents the chaos and complexity that can arise from uncontrolled AI-assisted development. + +## Development Phases + +### Phase 1: Planning and Design +**Before writing any code:** + +1. **Understand the Requirement** + - Break down the task into specific, measurable objectives + - Identify existing code patterns that should be followed + - List dependencies and integration points + - Define acceptance criteria + +2. **Design Review** + - Propose approach in bullet points + - Wait for explicit approval before proceeding + - Consider how the solution fits existing architecture + - Identify potential risks and mitigation strategies + +### Phase 2: Incremental Implementation +**One small piece at a time:** + +1. **Micro-Tasks** (≤ 50 lines each) + - Implement one function or small class at a time + - Test immediately after implementation + - Ensure integration with existing code + - Document decisions and patterns used + +2. **Validation Checkpoints** + - After each micro-task, verify it works correctly + - Check that it follows established patterns + - Confirm it integrates cleanly with existing code + - Get approval before moving to next micro-task + +### Phase 3: Integration and Testing +**Ensuring system coherence:** + +1. **Integration Testing** + - Test new code with existing functionality + - Verify no regressions in existing features + - Check performance impact + - Validate error handling + +2. **Documentation Update** + - Update relevant documentation + - Record any new patterns or decisions + - Update context files if architecture changed + +## Iterative Prompting Strategy + +### Step 1: Context Setting +``` +Before implementing [feature], help me understand: +1. What existing patterns should I follow? +2. What existing functions/classes are relevant? +3. How should this integrate with [specific existing component]? +4. What are the potential architectural impacts? +``` + +### Step 2: Plan Creation +``` +Based on the context, create a detailed plan for implementing [feature]: +1. Break it into micro-tasks (≤50 lines each) +2. Identify dependencies and order of implementation +3. Specify integration points with existing code +4. List potential risks and mitigation strategies + +Wait for my approval before implementing. 
+``` + +### Step 3: Incremental Implementation +``` +Implement only the first micro-task: [specific task] +- Use existing patterns from [reference file/function] +- Keep it under 50 lines +- Include error handling +- Add appropriate tests +- Explain your implementation choices + +Stop after this task and wait for approval. +``` + +## Quality Gates + +### Before Each Implementation +- [ ] **Purpose is clear**: Can explain what this piece does and why +- [ ] **Pattern is established**: Following existing code patterns +- [ ] **Size is manageable**: Implementation is small enough to understand completely +- [ ] **Integration is planned**: Know how it connects to existing code + +### After Each Implementation +- [ ] **Code is understood**: Can explain every line of implemented code +- [ ] **Tests pass**: All existing and new tests are passing +- [ ] **Integration works**: New code works with existing functionality +- [ ] **Documentation updated**: Changes are reflected in relevant documentation + +### Before Moving to Next Task
- [ ] **Current task complete**: All acceptance criteria met +- [ ] **No regressions**: Existing functionality still works +- [ ] **Clean state**: No temporary code or debugging artifacts +- [ ] **Approval received**: Explicit go-ahead for next task +- [ ] **Documentation updated**: If relevant changes to the module were made. + +## Anti-Patterns to Avoid + +### Large Block Implementation +**Don't:** +``` +Implement the entire user management system with authentication, +CRUD operations, and email notifications. +``` + +**Do:** +``` +First, implement just the User model with basic fields. +Stop there and let me review before continuing. +``` + +### Context Loss +**Don't:** +``` +Create a new authentication system. +``` + +**Do:** +``` +Looking at the existing auth patterns in auth.py, implement +password validation following the same structure as the +existing email validation function. +``` + +### Over-Engineering +**Don't:** +``` +Build a flexible, extensible user management framework that +can handle any future requirements. +``` + +**Do:** +``` +Implement user creation functionality that matches the existing +pattern in customer.py, focusing only on the current requirements. +``` + +## Progress Tracking + +### Task Status Indicators +- 🔄 **In Planning**: Requirements gathering and design +- ⏳ **In Progress**: Currently implementing +- ✅ **Complete**: Implemented, tested, and integrated +- 🚫 **Blocked**: Waiting for decisions or dependencies +- 🔧 **Needs Refactor**: Working but needs improvement + +### Weekly Review Process +1. **Progress Assessment** + - What was completed this week? + - What challenges were encountered? + - How well did the iterative process work? + +2. **Process Adjustment** + - Were task sizes appropriate? + - Did context management work effectively? + - What improvements can be made? + +3. **Architecture Review** + - Is the code remaining maintainable? + - Are patterns staying consistent? + - Is technical debt accumulating? + +## Emergency Procedures + +### When Things Go Wrong +If development becomes chaotic or problematic: + +1. **Stop Development** + - Don't continue adding to the problem + - Take time to assess the situation + - Don't rush to "fix" with more AI-generated code + +2. **Assess the Situation** + - What specific problems exist? + - How far has the code diverged from established patterns? + - What parts are still working correctly? + +3. 
**Recovery Process** + - Roll back to last known good state + - Update context documentation with lessons learned + - Restart with smaller, more focused tasks + - Get explicit approval for each step of recovery + +### Context Recovery +When AI seems to lose track of project patterns: + +1. **Context Refresh** + - Review and update CONTEXT.md + - Include examples of current code patterns + - Clarify architectural decisions + +2. **Pattern Re-establishment** + - Show AI examples of existing, working code + - Explicitly state patterns to follow + - Start with very small, pattern-matching tasks + +3. **Gradual Re-engagement** + - Begin with simple, low-risk tasks + - Verify pattern adherence at each step + - Gradually increase task complexity as consistency returns + +## Success Metrics + +### Short-term (Daily) +- Code is understandable and well-integrated +- No major regressions introduced +- Development velocity feels sustainable +- Team confidence in codebase remains high + +### Medium-term (Weekly) +- Technical debt is not accumulating +- New features integrate cleanly +- Development patterns remain consistent +- Documentation stays current + +### Long-term (Monthly) +- Codebase remains maintainable as it grows +- New team members can understand and contribute +- AI assistance enhances rather than hinders development +- Architecture remains clean and purposeful \ No newline at end of file diff --git a/.cursor/rules/project.mdc b/.cursor/rules/project.mdc new file mode 100644 index 0000000..28ce870 --- /dev/null +++ b/.cursor/rules/project.mdc @@ -0,0 +1,24 @@ +--- +description: +globs: +alwaysApply: true +--- +# Rule: Project specific rules + +## Goal +Unify the project structure and interaction with tools and the console + +### System tools +- **ALWAYS** use UV for package management +- **ALWAYS** use Windows PowerShell commands for the terminal + +### Coding patterns +- **ALWAYS** check arguments and method signatures before use to avoid errors from wrong parameters or names +- If in doubt, check [CONTEXT.md](mdc:CONTEXT.md) file and [architecture.md](mdc:docs/architecture.md) +- **PREFER** the ORM pattern for databases, using SQLAlchemy. +- **DO NOT USE** emoji in code and comments + +### Testing +- Use UV for tests in the format *uv run pytest [filename]* + + diff --git a/.cursor/rules/refactoring.mdc b/.cursor/rules/refactoring.mdc new file mode 100644 index 0000000..c141666 --- /dev/null +++ b/.cursor/rules/refactoring.mdc @@ -0,0 +1,237 @@ +--- +description: Code refactoring and technical debt management for AI-assisted development +globs: +alwaysApply: false +--- + +# Rule: Code Refactoring and Technical Debt Management + +## Goal +Guide AI in systematic code refactoring to improve maintainability, reduce complexity, and prevent technical debt accumulation in AI-assisted development projects. + +## When to Apply This Rule +- Code complexity has increased beyond manageable levels +- Duplicate code patterns are detected +- Performance issues are identified +- New features are difficult to integrate +- Code review reveals maintainability concerns +- Weekly technical debt assessment indicates refactoring needs + +## Pre-Refactoring Assessment + +Before starting any refactoring, the AI MUST: + +1. **Context Analysis:** + - Review existing `CONTEXT.md` for architectural decisions + - Analyze current code patterns and conventions + - Identify all files that will be affected (search the codebase for usages) + - Check for existing tests that verify current behavior + +2. 
**Scope Definition:** + - Clearly define what will and will not be changed + - Identify the specific refactoring pattern to apply + - Estimate the blast radius of changes + - Plan rollback strategy if needed + +3. **Documentation Review:** + - Check `./docs/` for relevant module documentation + - Review any existing architectural diagrams + - Identify dependencies and integration points + - Note any known constraints or limitations + +## Refactoring Process + +### Phase 1: Planning and Safety +1. **Create Refactoring Plan:** + - Document the current state and desired end state + - Break refactoring into small, atomic steps + - Identify tests that must pass throughout the process + - Plan verification steps for each change + +2. **Establish Safety Net:** + - Ensure comprehensive test coverage exists + - If tests are missing, create them BEFORE refactoring + - Document current behavior that must be preserved + - Create backup of current implementation approach + +3. **Get Approval:** + - Present the refactoring plan to the user + - Wait for explicit "Go" or "Proceed" confirmation + - Do NOT start refactoring without approval + +### Phase 2: Incremental Implementation +4. **One Change at a Time:** + - Implement ONE refactoring step per iteration + - Run tests after each step to ensure nothing breaks + - Update documentation if interfaces change + - Mark progress in the refactoring plan + +5. **Verification Protocol:** + - Run all relevant tests after each change + - Verify functionality works as expected + - Check performance hasn't degraded + - Ensure no new linting or type errors + +6. **User Checkpoint:** + - After each significant step, pause for user review + - Present what was changed and current status + - Wait for approval before continuing + - Address any concerns before proceeding + +### Phase 3: Completion and Documentation +7. **Final Verification:** + - Run full test suite to ensure nothing is broken + - Verify all original functionality is preserved + - Check that new code follows project conventions + - Confirm performance is maintained or improved + +8. **Documentation Update:** + - Update `CONTEXT.md` with new patterns/decisions + - Update module documentation in `./docs/` + - Document any new conventions established + - Note lessons learned for future refactoring + +## Common Refactoring Patterns + +### Extract Method/Function +``` +WHEN: Functions/methods exceed 50 lines or have multiple responsibilities +HOW: +1. Identify logical groupings within the function +2. Extract each group into a well-named helper function +3. Ensure each function has a single responsibility +4. Verify tests still pass +``` + +### Extract Module/Class +``` +WHEN: Files exceed 250 lines or handle multiple concerns +HOW: +1. Identify cohesive functionality groups +2. Create new files for each group +3. Move related functions/classes together +4. Update imports and dependencies +5. Verify module boundaries are clean +``` + +### Eliminate Duplication +``` +WHEN: Similar code appears in multiple places +HOW: +1. Identify the common pattern or functionality +2. Extract to a shared utility function or module +3. Update all usage sites to use the shared code +4. Ensure the abstraction is not over-engineered +``` + +### Improve Data Structures +``` +WHEN: Complex nested objects or unclear data flow +HOW: +1. Define clear interfaces/types for data structures +2. Create transformation functions between different representations +3. Ensure data flow is unidirectional where possible +4. 
Add validation at boundaries +``` + +### Reduce Coupling +``` +WHEN: Modules are tightly interconnected +HOW: +1. Identify dependencies between modules +2. Extract interfaces for external dependencies +3. Use dependency injection where appropriate +4. Ensure modules can be tested in isolation +``` + +## Quality Gates + +Every refactoring must pass these gates: + +### Technical Quality +- [ ] All existing tests pass +- [ ] No new linting errors introduced +- [ ] Code follows established project conventions +- [ ] No performance regression detected +- [ ] File sizes remain under 250 lines +- [ ] Function sizes remain under 50 lines + +### Maintainability +- [ ] Code is more readable than before +- [ ] Duplicated code has been reduced +- [ ] Module responsibilities are clearer +- [ ] Dependencies are explicit and minimal +- [ ] Error handling is consistent + +### Documentation +- [ ] Public interfaces are documented +- [ ] Complex logic has explanatory comments +- [ ] Architectural decisions are recorded +- [ ] Examples are provided where helpful + +## AI Instructions for Refactoring + +1. **Always ask for permission** before starting any refactoring work +2. **Start with tests** - ensure comprehensive coverage before changing code +3. **Work incrementally** - make small changes and verify each step +4. **Preserve behavior** - functionality must remain exactly the same +5. **Update documentation** - keep all docs current with changes +6. **Follow conventions** - maintain consistency with existing codebase +7. **Stop and ask** if any step fails or produces unexpected results +8. **Explain changes** - clearly communicate what was changed and why + +## Anti-Patterns to Avoid + +### Over-Engineering +- Don't create abstractions for code that isn't duplicated +- Avoid complex inheritance hierarchies +- Don't optimize prematurely + +### Breaking Changes +- Never change public APIs without explicit approval +- Don't remove functionality, even if it seems unused +- Avoid changing behavior "while we're here" + +### Scope Creep +- Stick to the defined refactoring scope +- Don't add new features during refactoring +- Resist the urge to "improve" unrelated code + +## Success Metrics + +Track these metrics to ensure refactoring effectiveness: + +### Code Quality +- Reduced cyclomatic complexity +- Lower code duplication percentage +- Improved test coverage +- Fewer linting violations + +### Developer Experience +- Faster time to understand code +- Easier integration of new features +- Reduced bug introduction rate +- Higher developer confidence in changes + +### Maintainability +- Clearer module boundaries +- More predictable behavior +- Easier debugging and troubleshooting +- Better performance characteristics + +## Output Files + +When refactoring is complete, update: +- `refactoring-log-[date].md` - Document what was changed and why +- `CONTEXT.md` - Update with new patterns and decisions +- `./docs/` - Update relevant module documentation +- Task lists - Mark refactoring tasks as complete + +## Final Verification + +Before marking refactoring complete: +1. Run full test suite and verify all tests pass +2. Check that code follows all project conventions +3. Verify documentation is up to date +4. Confirm user is satisfied with the results +5. 
Record lessons learned for future refactoring efforts diff --git a/.cursor/rules/task-list.mdc b/.cursor/rules/task-list.mdc new file mode 100644 index 0000000..939a9f1 --- /dev/null +++ b/.cursor/rules/task-list.mdc @@ -0,0 +1,39 @@ +--- +description: TODO list task implementation +globs: +alwaysApply: false +--- +# Task List Management + +Guidelines for managing task lists in markdown files to track progress on completing a PRD. + +## Task Implementation +- **One sub-task at a time:** Do **NOT** start the next sub‑task until you ask the user for permission and they say “yes” or "y" +- **Completion protocol:** + 1. When you finish a **sub‑task**, immediately mark it as completed by changing `[ ]` to `[x]`. + 2. If **all** subtasks underneath a parent task are now `[x]`, also mark the **parent task** as completed. +- Stop after each sub‑task and wait for the user’s go‑ahead. + +## Task List Maintenance + +1. **Update the task list as you work:** + - Mark tasks and subtasks as completed (`[x]`) per the protocol above. + - Add new tasks as they emerge. + +2. **Maintain the “Relevant Files” section:** + - List every file created or modified. + - Give each file a one‑line description of its purpose. + +## AI Instructions + +When working with task lists, the AI must: + +1. Regularly update the task list file after finishing any significant work. +2. Follow the completion protocol: + - Mark each finished **sub‑task** `[x]`. + - Mark the **parent task** `[x]` once **all** its subtasks are `[x]`. +3. Add newly discovered tasks. +4. Keep “Relevant Files” accurate and up to date. +5. Before starting work, check which sub‑task is next. + +6. After implementing a sub‑task, update the file and then pause for user approval. \ No newline at end of file